CN111189454A - Unmanned vehicle SLAM navigation method based on rank Kalman filtering - Google Patents

Unmanned vehicle SLAM navigation method based on rank Kalman filtering Download PDF

Info

Publication number
CN111189454A
CN111189454A CN202010020887.6A CN202010020887A CN111189454A CN 111189454 A CN111189454 A CN 111189454A CN 202010020887 A CN202010020887 A CN 202010020887A CN 111189454 A CN111189454 A CN 111189454A
Authority
CN
China
Prior art keywords
unmanned vehicle
state
prediction
rank
formula
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010020887.6A
Other languages
Chinese (zh)
Inventor
娄泰山
班鸿业
赵素娜
贺振东
丁国强
焦玉召
王晓雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou University of Light Industry
Original Assignee
Zhengzhou University of Light Industry
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou University of Light Industry filed Critical Zhengzhou University of Light Industry
Priority to CN202010020887.6A priority Critical patent/CN111189454A/en
Publication of CN111189454A publication Critical patent/CN111189454A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations

Abstract

The invention provides an unmanned vehicle SLAM navigation method based on rank Kalman filtering to solve the technical problems of low positioning precision and insufficient stability of the conventional SLAM navigation. The invention comprises the following steps: 1. modeling the mobile robot, establishing a dynamic model and an observation model of the mobile robot, and initializing; 2. prediction and update of SLAM; 3. and updating and outputting the pose information of the mobile robot by adopting rank Kalman filtering. The invention has the beneficial effects that: the method has no requirement on whether the dynamic model meets Gaussian distribution or not, has small error and has higher positioning and mapping accuracy in unknown environment.

Description

Unmanned vehicle SLAM navigation method based on rank Kalman filtering
Technical Field
The invention relates to the field of unmanned vehicle autonomous navigation, in particular to an unmanned vehicle SLAM navigation method based on rank Kalman filtering.
Background
SLAM is a technique that solves the problem of navigation of unmanned vehicles in unknown environments. The technology is that the unmanned vehicle uses a sensor carried by the unmanned vehicle to obtain data, calculates and determines the environment information of the unmanned vehicle, and updates the position of the unmanned vehicle by using the environment information obtained by the unmanned vehicle. And meanwhile, the method also becomes a hot algorithm in a robot navigation algorithm.
SLAM algorithms have been studied with a focus on using filter theory, typically using odometer inputs as the prediction inputs for the filter and lidar or vision sensor outputs as the inputs for the filter update phase. The most traditional method is to introduce Extended Kalman Filtering (EKF) into SLAM, which is actually to process the nonlinear model by using extended kalman filtering. The EKF expands the nonlinear function into Taylor series and omits second-order and above terms, can solve a part of nonlinear problems, but when the system nonlinearity is serious, the truncation error generated by the approximate linearization can reduce the filtering precision and stability, so in the actual engineering, the EKF can only carry out local linearization processing on a weak nonlinear system.
Disclosure of Invention
The invention provides an unmanned vehicle SLAM navigation method based on rank Kalman filtering to solve the technical problems of low positioning precision and insufficient stability of the conventional SLAM navigation.
In order to solve the technical problems, the invention adopts the following technical scheme:
an unmanned vehicle SLAM navigation method based on rank Kalman filtering is designed, and comprises the following steps:
the method comprises the following steps: the dynamic model and the observation model of the unmanned vehicle are established as follows:
Figure BDA0002360696990000011
wherein the content of the first and second substances,
Figure BDA0002360696990000012
for the pose information of the mobile robot at time k, ukIs nuDimension control vector, zkIs nzObservation vector of dimension, wkAnd vkRespectively a system noise vector and a measurement noise vector;
step two: establishing a SLAM model, predicting and updating, wherein the SLAM model is as follows:
Figure BDA0002360696990000021
in the formula
Figure BDA0002360696990000022
A feature map at time k;
the prediction is through a model of the unmanned vehicle's state
Figure BDA0002360696990000023
And the posterior probability distribution of the k-1 moment to obtain the prior probability distribution of the k moment:
Figure BDA0002360696990000024
this probability distribution corresponds to the kinetic model of the unmanned vehicle
Figure BDA0002360696990000025
The updating is that the measurement equation obtains the measured value z at the moment kkThen, a posterior probability distribution is obtained:
Figure BDA0002360696990000026
this probability distribution corresponds to the observation model z of the above-mentioned unmanned vehiclek=h(xk)+vk. Therefore, the whole unmanned vehicle SLAM navigation method can be solved by Kalman filtering.
Step three: the pose information of the unmanned vehicle is sampled, and a rank sampling point set is obtained as follows:
Figure BDA0002360696990000027
in the formula, xk-1,iIs composed of
Figure BDA0002360696990000028
4n sample points in total; n is
Figure BDA0002360696990000029
Dimension (d) of (a).
Figure BDA00023606969900000210
Is represented by Pk-1The ith column vector of square roots; u. ofpIs a standard normal offset;
Figure BDA00023606969900000211
the state of the unmanned vehicle at the k-1 moment is shown;
step four: obtaining the state one-step prediction of the k step of the unmanned vehicle:
xk/k-1,i=f(χk-1,i),i=1,2,…,4n (3)
Figure BDA00023606969900000212
in the formula xk/k-1,iThe state prediction value of the ith sampling point is shown;
Figure BDA00023606969900000213
the state prediction value of the mobile robot is shown;
step five: obtaining a one-step prediction of covariance:
Figure BDA00023606969900000214
in the formula, Pk/k-1Representing the one-step prediction error covariance of the state of the mobile robot; qk-1Represents the covariance of the process noise;
step six: covariance Q due to process noise in one-step prediction of covariance obtained in step fivek-1Thus, one step prediction of the state will be re-rank sampled:
Figure BDA0002360696990000031
in the formula, xk/k-1,iIs composed of
Figure BDA0002360696990000032
The ith sample point of (a);
Figure BDA0002360696990000033
is a one-step prediction of state; pk/k-1A one-step prediction of covariance;
step seven: obtaining a measurement prediction value zk/k-1,i
zk/k-1,i=h(χk/k-1,i),i=1,2,…,4n (7)
Figure BDA0002360696990000034
In the formula, zk/k-1,iThe measured predicted value of the ith sampling point is shown;
Figure BDA0002360696990000035
the measurement predicted value of the mobile robot is shown;
step eight: obtaining a filter gain matrix Kk
Figure BDA0002360696990000036
Figure BDA0002360696990000037
In the formula, PzzRepresenting the covariance of the measurement prediction value of the mobile robot; pxzRepresenting the cross covariance of the state predicted value and the measurement predicted value; kkRepresenting the gain of rank Kalman filtering;
step nine: and (3) acquiring state updating:
Figure BDA0002360696990000038
step ten: obtaining a state estimation error variance Pk
Figure BDA0002360696990000039
Step eleven: circularly iterating the steps from three to ten to obtain the state estimation value of the unmanned vehicle
Figure BDA0002360696990000041
Further, w in step 1kAnd vkAre respectively QkAnd RkAnd satisfies:
Figure BDA0002360696990000042
further, QkAnd RkRespectively as follows:
Figure BDA0002360696990000043
and
Figure BDA0002360696990000044
further, the initial state of the unmanned vehicle in step 1 is as follows:
Figure BDA0002360696990000045
Figure BDA0002360696990000046
further, the initial state value of the unmanned vehicle is
Figure BDA0002360696990000047
Initial error variance matrix
Figure BDA0002360696990000048
Compared with the prior art, the invention has the beneficial technical effects that:
the SLAM navigation method based on the rank Kalman filtering has no requirement on whether a dynamics model of the unmanned vehicle is in Gaussian distribution or not, and the method based on the rank filtering has the advantages of smaller error, better effect, higher positioning and mapping accuracy of the unmanned vehicle in an unknown environment and stronger adaptability to the unknown environment.
Drawings
FIG. 1 is a graph of the results of a simulation experiment of the present invention;
FIG. 2 is a plot of x-axis root mean square error versus EKF-SLAM and RKF-SLAM algorithms used in the present invention during unmanned vehicle navigation;
FIG. 3 is a plot of the root mean square error of the y-axis of the invention using EKF-SLAM and RKF-SLAM algorithms while the drone is navigating.
Detailed Description
The following examples are intended to illustrate the present invention in detail and should not be construed as limiting the scope of the present invention in any way.
The programs referred to or relied on in the following embodiments are all conventional programs or simple programs in the art, and those skilled in the art can make routine selection or adaptation according to specific application scenarios.
Example 1: an unmanned vehicle SLAM navigation method based on rank Kalman filtering comprises the following steps:
the method comprises the following steps: modeling the unmanned vehicle, establishing a dynamic model and an observation model of the unmanned vehicle, and initializing;
selecting the state information vector of the system as
Figure BDA0002360696990000051
The dynamics and observation models of the system are then:
Figure BDA0002360696990000052
wherein the content of the first and second substances,
Figure BDA0002360696990000053
the pose information of the unmanned vehicle at the moment k,
Figure BDA0002360696990000054
characteristic map of time k, ukIs nuDimension control vector, zkIs nzObservation vector of dimension, wkAnd vkRespectively, the system noise vector and the measured noise vector, and the variance matrix is QkAnd RkAnd satisfy
Figure BDA0002360696990000055
Wherein the content of the first and second substances,
Figure BDA0002360696990000056
Figure BDA0002360696990000057
initializing a state model of the system
Figure BDA0002360696990000058
Figure BDA0002360696990000059
Wherein, the initial state
Figure BDA00023606969900000510
Initial error variance matrix
Figure BDA00023606969900000511
P0、QkAnd RkAre all unrelated;
step two: SLAM prediction and update
The SLAM model was:
Figure BDA0002360696990000061
wherein z isk,ukCan be formed by zk=h(xk)+vk,uk(v, γ) (including forward velocity and angular velocity) is calculated. The Bayes filtering principle is adopted to calculate the posterior probability distribution, so that the optimal state estimation can be obtained, namely after the path is planned, the unmanned vehicle can drive along the track closest to the planned path in the navigation process through calculation. The whole calculation step is divided into two steps:
the first step of prediction is through a model of the state of the vehicle
Figure BDA0002360696990000062
And the posterior probability distribution of the k-1 moment to obtain the prior probability distribution of the k moment:
Figure BDA0002360696990000063
this probability distribution corresponds to the kinetic model of the unmanned vehicle
Figure BDA0002360696990000064
The second step of updating is to obtain the measured value z at the time k from the measurement equationkThen, a posterior probability distribution is calculated:
Figure BDA0002360696990000065
this probability distribution corresponds to the observation model z of the above-mentioned unmanned vehiclek=h(xk)+vk. Therefore, the whole unmanned vehicle SLAM navigation method can be solved by Kalman filtering.
Step three: updating pose information of the unmanned vehicle by adopting Rank Kalman (RKF) filtering and outputting the pose information
1. Calculating rank sampling points:
rank sample point set { χ for m-2 sampling strategy and symmetric distribution casek-1,iIs as
Figure BDA0002360696990000066
In the formula, xk-1,iIs composed of
Figure BDA0002360696990000067
4n sample points in total; n is
Figure BDA0002360696990000068
Dimension (d) of (a).
Figure BDA0002360696990000069
Is represented by Pk-1The ith column vector of square roots;
Figure BDA00023606969900000610
the state of the unmanned vehicle at the k-1 moment is shown; u. ofpFor standard normal bias, p is calculated using the median rankj=(j+2.7)/5.4,p1=0.6852,
Figure BDA00023606969900000612
p2=0.8704,
Figure BDA00023606969900000613
ω denotes a covariance weight coefficient, and
Figure BDA00023606969900000611
2. calculating a state one-step prediction:
state estimation at time k-1
Figure BDA0002360696990000071
And its error variance matrix Pk-1And predicting the state of the k step as follows:
xk/k-1,i=f(χk-1,i),i=1,2,…,4n (11)
Figure BDA0002360696990000072
in the formula xk/k-1,iThe state prediction value of the ith sampling point is shown;
Figure BDA0002360696990000073
the predicted value of the state of the unmanned vehicle is shown.
3. One-step prediction for covariance calculationk/k-1
One-step prediction based on the state of the k-th step
Figure BDA0002360696990000074
Obtaining a state estimation error variance matrix one-step prediction:
Figure BDA0002360696990000075
in the formula, Pk/k-1The method comprises the steps of representing the one-step prediction error covariance of the state of the unmanned vehicle; qk-1The covariance of the process noise is shown.
4. Covariance Q due to process noise in one-step prediction of covariance obtained in step fivek-1So that a re-rank sampling of the state one-step prediction will be performed;
Figure BDA0002360696990000076
in the formula, xk/k-1,iIs composed of
Figure BDA0002360696990000077
The ith sample point of (a);
Figure BDA0002360696990000078
is a one-step prediction of state; pk/k-1Is a one-step prediction of covariance.
5. Calculating a measurement prediction value zk/k-1,i
zk/k-1,i=h(χk/k-1,i),i=1,2,…,4n (15)
Figure BDA0002360696990000079
In the formula, zk/k-1,iThe measured predicted value of the ith sampling point is shown;
Figure BDA00023606969900000710
the measured predicted value of the unmanned vehicle is shown.
6. Calculating a filter gain matrix Kk
Figure BDA0002360696990000081
Figure BDA0002360696990000082
Figure BDA0002360696990000083
In the formula, PzzRepresenting the covariance of the unmanned vehicle measurement predicted value; pxzRepresenting the cross covariance of the state predicted value and the measurement predicted value; kkRepresenting the gain of the rank kalman filter.
7. Computing state updates
Figure BDA0002360696990000084
In the formula (I), the compound is shown in the specification,
Figure BDA0002360696990000085
the estimated value of the state of the unmanned vehicle at the time k is shown.
8. Calculating a state estimation error variance Pk
Figure BDA0002360696990000086
In the formula, PkRepresenting the covariance estimation value of the unmanned vehicle at the time k;
9. the above steps are iterated circularly to obtain the state estimation value of the unmanned vehicle
Figure BDA0002360696990000087
In this embodiment, the sampling time interval h is 0.025[ s ], the car speed v is 3[ m/s ], the angular speed γ is pi/6 [ rad/s ] and the car axle distance d is 4[ m ], and the landmark detection time interval t is 0.3[ s ]; as shown in fig. 2 and 3, when the proposed rank kalman filtering SLAM navigation method is adopted, the root mean square error of the position is small, and the errors in the x direction and the y direction are both within the range of 0.12m, that is, the rank kalman filtering SLAM navigation method has small error, good estimation effect, higher positioning and mapping accuracy for unmanned vehicles in an unknown environment, and stronger adaptability to the unknown environment.
While the present invention has been described in detail with reference to the drawings and the embodiments, those skilled in the art will understand that various specific parameters in the above embodiments can be changed without departing from the spirit of the present invention, and a plurality of specific embodiments are formed, which are common variation ranges of the present invention, and will not be described in detail herein.

Claims (5)

1. An unmanned vehicle SLAM navigation method based on rank Kalman filtering is characterized by comprising the following steps:
the method comprises the following steps: the dynamic model and the observation model of the unmanned vehicle are established as follows:
Figure FDA0002360696980000011
wherein the content of the first and second substances,
Figure FDA0002360696980000012
for the pose information of the mobile robot at time k, ukIs nuDimension control vector, zkIs nzObservation vector of dimension, wkAnd vkRespectively a system noise vector and a measurement noise vector;
step two: establishing a SLAM model, predicting and updating, wherein the SLAM model is as follows:
Figure FDA0002360696980000013
in the formula
Figure FDA0002360696980000014
A feature map at time k;
the prediction is through a model of the unmanned vehicle's state
Figure FDA0002360696980000015
And the posterior probability distribution of the k-1 moment to obtain the prior probability distribution of the k moment:
Figure FDA0002360696980000016
this probability distribution corresponds to the kinetic model of the unmanned vehicle
Figure FDA0002360696980000017
The updating is that the measurement equation obtains the measured value z at the moment kkThen, a posterior probability distribution is obtained:
Figure FDA0002360696980000018
this probability distribution corresponds to the observation model z of the above-mentioned unmanned vehiclek=h(xk)+vk
Step three: the pose information of the unmanned vehicle is sampled, and a rank sampling point set is obtained as follows:
Figure FDA0002360696980000019
in the formula, xk-1,iIs composed of
Figure FDA00023606969800000110
I th of (1)4n sampling points in total; n is
Figure FDA00023606969800000111
Dimension (d) of (a).
Figure FDA00023606969800000112
Is represented by Pk-1The ith column vector of square roots; u. ofpIs a standard normal offset;
Figure FDA00023606969800000113
the state of the unmanned vehicle at the k-1 moment is shown;
step four: obtaining the state one-step prediction of the k step of the unmanned vehicle:
xk/k-1,i=f(χk-1,i),i=1,2,…,4n (3)
Figure FDA00023606969800000114
in the formula xk/k-1,iThe state prediction value of the ith sampling point is shown;
Figure FDA0002360696980000021
the state prediction value of the mobile robot is shown;
step five: obtaining a one-step prediction of covariance:
Figure FDA0002360696980000022
in the formula, Pk/k-1Representing the one-step prediction error covariance of the state of the mobile robot; qk-1Represents the covariance of the process noise;
step six: re-rank sampling the state one-step prediction:
Figure FDA0002360696980000023
in the formula, xk/k-1,iIs composed of
Figure FDA0002360696980000024
The ith sample point of (a);
Figure FDA0002360696980000025
is a one-step prediction of state; pk/k-1A one-step prediction of covariance;
step seven: obtaining a measurement prediction value zk/k-1,i
zk/k-1,i=h(χk/k-1,i),i=1,2,…,4n (7)
Figure FDA0002360696980000026
In the formula, zk/k-1,iThe measured predicted value of the ith sampling point is shown;
Figure FDA0002360696980000027
the measurement predicted value of the mobile robot is shown;
step eight: obtaining a filter gain matrix Kk
Figure FDA0002360696980000028
Figure FDA0002360696980000029
In the formula, PzzRepresenting the covariance of the measurement prediction value of the mobile robot; pxzRepresenting the cross covariance of the state predicted value and the measurement predicted value; kkRepresenting the gain of rank Kalman filtering;
step nine: and (3) acquiring state updating:
Figure FDA0002360696980000031
step ten: obtaining a state estimation error squareDifference Pk
Figure FDA0002360696980000032
Step eleven: circularly iterating the steps from three to ten to obtain the state estimation value of the unmanned vehicle
Figure FDA00023606969800000310
2. The rank kalman filter-based unmanned vehicle SLAM navigation method of claim 1, wherein: w in said step 1kAnd vkAre respectively QkAnd RkAnd satisfies:
Figure FDA0002360696980000033
3. the rank kalman filter-based unmanned vehicle SLAM navigation method of claim 2, characterized in that: said QkAnd RkRespectively as follows:
Figure FDA0002360696980000034
and
Figure FDA0002360696980000035
4. the rank kalman filter-based unmanned vehicle SLAM navigation method of claim 1, wherein:
the initial state of the unmanned vehicle in the step 1 is as follows:
Figure FDA0002360696980000036
Figure FDA0002360696980000037
5. the rank Kalman filtering based unmanned vehicle SLAM navigation method of claim 4, characterized in that: the initial state has a value of
Figure FDA0002360696980000038
Initial error variance matrix
Figure FDA0002360696980000039
CN202010020887.6A 2020-01-09 2020-01-09 Unmanned vehicle SLAM navigation method based on rank Kalman filtering Pending CN111189454A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010020887.6A CN111189454A (en) 2020-01-09 2020-01-09 Unmanned vehicle SLAM navigation method based on rank Kalman filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010020887.6A CN111189454A (en) 2020-01-09 2020-01-09 Unmanned vehicle SLAM navigation method based on rank Kalman filtering

Publications (1)

Publication Number Publication Date
CN111189454A true CN111189454A (en) 2020-05-22

Family

ID=70708522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010020887.6A Pending CN111189454A (en) 2020-01-09 2020-01-09 Unmanned vehicle SLAM navigation method based on rank Kalman filtering

Country Status (1)

Country Link
CN (1) CN111189454A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113532416A (en) * 2021-05-28 2021-10-22 河南应用技术职业学院 Wheeled robot state estimation method and autonomous navigation method based on robust rank Kalman filtering
CN113703443A (en) * 2021-08-12 2021-11-26 北京科技大学 Unmanned vehicle autonomous positioning and environment exploration method independent of GNSS
CN113884098A (en) * 2021-10-15 2022-01-04 上海师范大学 Iterative Kalman filtering positioning method based on specific model
CN114370879A (en) * 2022-01-14 2022-04-19 东南大学 AUV robust VBHIAKF-SLAM navigation method based on underwater environment characteristics

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006098126A (en) * 2004-09-28 2006-04-13 Toyota Motor Corp Work surface flaw inspection device
US20080040037A1 (en) * 2006-06-29 2008-02-14 De Lamare Rodrigo Caiado System and Method for Adaptive Reduced-Rank Parameter Estimation Using an Adaptive Decimation and Interpolation Scheme
CN105300387A (en) * 2015-11-03 2016-02-03 北京航空航天大学 Nonlinear non-Gaussian ranking filtering method for Martian atmosphere entering section
CN105467838A (en) * 2015-11-10 2016-04-06 山西大学 Synchronous positioning and map constructing method under random finite set framework
CN107246873A (en) * 2017-07-03 2017-10-13 哈尔滨工程大学 A kind of method of the mobile robot simultaneous localization and mapping based on improved particle filter
CN107807906A (en) * 2017-09-20 2018-03-16 北京航空航天大学 A kind of multi-model self calibration order filtering method
CN108759838A (en) * 2018-05-23 2018-11-06 安徽科技学院 Mobile robot multiple sensor information amalgamation method based on order Kalman filter
CN109870167A (en) * 2018-12-25 2019-06-11 四川嘉垭汽车科技有限公司 Positioning and map creating method while the pilotless automobile of view-based access control model

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006098126A (en) * 2004-09-28 2006-04-13 Toyota Motor Corp Work surface flaw inspection device
US20080040037A1 (en) * 2006-06-29 2008-02-14 De Lamare Rodrigo Caiado System and Method for Adaptive Reduced-Rank Parameter Estimation Using an Adaptive Decimation and Interpolation Scheme
CN105300387A (en) * 2015-11-03 2016-02-03 北京航空航天大学 Nonlinear non-Gaussian ranking filtering method for Martian atmosphere entering section
CN105467838A (en) * 2015-11-10 2016-04-06 山西大学 Synchronous positioning and map constructing method under random finite set framework
CN107246873A (en) * 2017-07-03 2017-10-13 哈尔滨工程大学 A kind of method of the mobile robot simultaneous localization and mapping based on improved particle filter
CN107807906A (en) * 2017-09-20 2018-03-16 北京航空航天大学 A kind of multi-model self calibration order filtering method
CN108759838A (en) * 2018-05-23 2018-11-06 安徽科技学院 Mobile robot multiple sensor information amalgamation method based on order Kalman filter
CN109870167A (en) * 2018-12-25 2019-06-11 四川嘉垭汽车科技有限公司 Positioning and map creating method while the pilotless automobile of view-based access control model

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113532416A (en) * 2021-05-28 2021-10-22 河南应用技术职业学院 Wheeled robot state estimation method and autonomous navigation method based on robust rank Kalman filtering
CN113703443A (en) * 2021-08-12 2021-11-26 北京科技大学 Unmanned vehicle autonomous positioning and environment exploration method independent of GNSS
CN113703443B (en) * 2021-08-12 2023-10-13 北京科技大学 GNSS independent unmanned vehicle autonomous positioning and environment exploration method
CN113884098A (en) * 2021-10-15 2022-01-04 上海师范大学 Iterative Kalman filtering positioning method based on specific model
CN113884098B (en) * 2021-10-15 2024-01-23 上海师范大学 Iterative Kalman filtering positioning method based on materialization model
CN114370879A (en) * 2022-01-14 2022-04-19 东南大学 AUV robust VBHIAKF-SLAM navigation method based on underwater environment characteristics
CN114370879B (en) * 2022-01-14 2023-03-10 东南大学 AUV robust VBHIAKF-SLAM navigation method based on underwater environment characteristics

Similar Documents

Publication Publication Date Title
CN109211276B (en) SINS initial alignment method based on GPR and improved SRCKF
CN111189454A (en) Unmanned vehicle SLAM navigation method based on rank Kalman filtering
CN109885883B (en) Unmanned vehicle transverse motion control method based on GK clustering algorithm model prediction
CN108759833B (en) Intelligent vehicle positioning method based on prior map
CN108362288B (en) Polarized light SLAM method based on unscented Kalman filtering
CN105136145A (en) Kalman filtering based quadrotor unmanned aerial vehicle attitude data fusion method
CN108387236B (en) Polarized light SLAM method based on extended Kalman filtering
CN111751857A (en) Vehicle pose estimation method, device, storage medium and system
CN113325452A (en) Method for tracking maneuvering target by using three-star passive fusion positioning system
CN116552550A (en) Vehicle track tracking control system based on parameter uncertainty and yaw stability
Ge et al. Robust adaptive sliding mode control for path tracking of unmanned agricultural vehicles
CN110637209A (en) Method, apparatus, and computer-readable storage medium having instructions for estimating a pose of a motor vehicle
CN113311845B (en) Pure tracking control error compensation method and device based on path curvature
CN112034445B (en) Vehicle motion trail tracking method and system based on millimeter wave radar
CN110912535B (en) Novel non-pilot Kalman filtering method
Azizi et al. Mobile robot position determination using data from gyro and odometry
CN115143954B (en) Unmanned vehicle navigation method based on multi-source information fusion
JP7206883B2 (en) Yaw rate corrector
CN116338719A (en) Laser radar-inertia-vehicle fusion positioning method based on B spline function
CN113306573B (en) Learning type path tracking prediction control method for automatic driving vehicle
CN113341997B (en) Transverse control method and system based on multi-state parameter collaborative estimation
Zarei et al. Performance improvement for mobile robot position determination using cubature Kalman filter
CN113911123A (en) Road model updating method and device
CN114935345A (en) Vehicle-mounted inertial navigation mounting angle error compensation method based on pattern recognition
CN114323007A (en) Carrier motion state estimation method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination