CN114690229A - GPS-fused mobile robot visual inertial navigation method - Google Patents
- Publication number: CN114690229A
- Application number: CN202210480378.0A
- Authority
- CN
- China
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/48—Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
- G01S19/485—Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system whereby the further system is an optical system or imaging system
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
- G01C21/1656—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/48—Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
- G01S19/49—Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system whereby the further system is an inertial position system, e.g. loosely-coupled
Abstract
The invention discloses a GPS-fused visual inertial navigation method for mobile robots, specifically a navigation method that combines visual inertial SLAM with a GPS sensor. Its purpose is to eliminate the accumulated error of visual inertial SLAM and improve the navigation accuracy of the mobile robot. The method is realized by the following steps. Step one: the vehicle-mounted sensors collect images, positions, accelerations and angular velocities, and the data are then aligned. Step two: IMU pre-integration, so that re-integration is not required after each optimization update, reducing the computational load. Step three: initialization and calculation of each parameter. Step four: tightly coupled optimization of the marginalized prior information, the IMU measurement residuals and the visual observation residuals. Step five: joint optimization of the GPS measurements and visual inertial SLAM. The invention is applicable to visual inertial navigation systems.
Description
Technical Field
The invention relates to the technical field of mobile robot navigation, and in particular to a GPS-fused visual inertial navigation method for mobile robots.
Background
Applying visual inertial SLAM to mobile robots still faces many problems and challenges, and several key technical issues remain unsolved. In particular, visual inertial SLAM accumulates drift, so the accuracy of the navigation system degrades sharply in large-scale environments. The large accumulation of positioning error over time also limits the applicability of visual inertial SLAM. To mitigate accumulated error in large-scale outdoor environments, loop detection and relocalization are commonly used, but these methods increase computational complexity and memory requirements. To provide accurate and consistent localization, a reliable and efficient approach is to fuse global pose information with local pose estimates. Integrating GPS into visual inertial SLAM improves positioning accuracy and resolves the drift problem of the mobile robot. Therefore, to improve positioning accuracy and robustness, the invention fuses the three-degree-of-freedom GPS position in the global reference frame with the six-degree-of-freedom pose of visual inertial SLAM in the local reference frame. To improve the localization and state-estimation accuracy of mobile robots in large-scale environments, the invention provides a visual inertial navigation system fused with a GPS sensor, offering theoretical guidance and a solution for applying visual inertial SLAM to mobile robots. The invention addresses the reduced pose-estimation accuracy, trajectory drift and even navigation failure of visual inertial SLAM in large-scale environments.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a GPS-fused visual inertial navigation method that improves the navigation accuracy of a mobile robot.
The invention is realized by adopting the following technical scheme:
the visual inertial navigation method integrated with the GPS is realized by adopting the following steps:
firstly, a vehicle-mounted binocular camera acquires a gray image of a road in front, an IMU sensor acquires acceleration and angular velocity of a vehicle, and a master control aligns data.
And step two, eliminating the noise and deviation of the IMU, performing pre-integration on the IMU data, calculating an observed value of the IMU data, a covariance matrix and a Jacobian matrix of residual errors, and then correcting the deviation.
And step three, initializing: calculating the absolute scale, the gyroscope bias, the accelerometer bias, the gravitational acceleration, and the velocity at each IMU timestamp.
And step four, after the initialization is completed, executing a tight coupling optimization based on a sliding window to obtain a state estimation value with higher precision and better robustness.
And fifthly, jointly optimizing the GPS measured value and the visual inertia SLAM.
The invention has the following beneficial effects. First, the collected data are time-calibrated, improving pose-estimation accuracy, and the IMU data are pre-integrated, reducing the computation consumed by repeated integration. Second, a GPS sensor acquires the global position; it provides drift-free measurements in a global frame, eliminating the accumulated error of visual inertial SLAM. Third, the GPS measurements are modeled as a general factor, and the GPS factors together with the related states form part of a global pose graph; a global pose-graph optimization module is adopted to eliminate drift and accumulated error.
The effect of the invention is verified by the following experiment:
As with VINS-Fusion, the method supports loop closure, relocalization, high-precision map reuse, and similar functions. The proposed method was evaluated on the public KITTI dataset and in the real world. To align the estimated trajectory with the ground truth, the invention uses Horn's method. Navigation accuracy is evaluated through the absolute pose error and the absolute trajectory error, both computed with the EVO tool, and quantitative evaluation uses the root mean square of the absolute trajectory error. The absolute trajectory error compares the absolute distance between the translation components of the estimated and ground-truth trajectories. The absolute trajectory error at time step i is calculated as follows:

ATE_i = || trans(T · T_{e,i}) − trans(T_{g,i}) ||

where T represents the transformation that aligns the two trajectories; the subscript g denotes the ground truth; and e denotes the estimated trajectory.

For a sequence of N poses, the root mean square error of the absolute trajectory error is calculated as follows:

ATE_RMSE = sqrt( (1/N) · Σ_{i=1..N} ATE_i² )
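The two formulas above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the EVO tool itself; it assumes the two trajectories have already been aligned (e.g. by Horn's method) into a common frame.

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """Root mean square of the absolute trajectory error.

    est_xyz, gt_xyz: (N, 3) arrays of translation components, assumed
    already aligned into a common frame.
    """
    err = est_xyz - gt_xyz                  # per-pose translation error
    dists = np.linalg.norm(err, axis=1)     # ATE_i at each time step
    return float(np.sqrt(np.mean(dists ** 2)))
```

For instance, an estimate offset from the ground truth by (1, 1, 1) at every pose yields an RMSE of sqrt(3) meters.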
All values were obtained from five runs, in view of system uncertainty, and the median trajectory-accuracy result is reported. The invention uses the KITTI dataset to evaluate the proposed method; it is well suited to evaluating visual inertial SLAM performance on autonomous vehicles. In this dataset, ground truth is provided for sequences 00-10. To provide accurate ground truth, the lidar, camera and GPS are calibrated and synchronized. The KITTI dataset is collected from various real-world scenes, such as rural roads, highways and urban streets.
In real-world experiments, the invention uses a self-built sensor suite to demonstrate the proposed method. A car is driven in an outdoor large-scale environment, and four different sensor combinations are used for state estimation. For error evaluation, control points are set with an RTK-GPS sensor and serve as ground-truth points. To obtain the absolute error, the estimated trajectory is aligned with the control points.
Experiments on KITTI data sets and the real world indicate that the method provided by the invention improves the positioning accuracy by about 38% in a large-scale environment compared with VINS-Fusion. The comparison experiment result shows that the method improves the accuracy and robustness of the navigation system in a large-scale environment.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of steps one to five in the present invention.
Fig. 2 is a schematic diagram of the coordinate transformation relationship in step five of the present invention.
FIG. 3 is a schematic diagram of the joint optimization of GPS/VIO in step five of the present invention.
FIG. 4 is the motion trajectory and dense point cloud of the present invention in a KITTI data set during the experiment.
FIG. 5 is a real world experimental set-up of the present invention in an experiment.
FIG. 6 is a plot of the root mean square error of the absolute track versus the absolute track error for the present invention and conventional methods.
Detailed Description
The present invention will be described in detail with reference to specific examples.
The visual inertial navigation method integrated with the GPS is realized by adopting the following steps, specifically shown in the flow of FIG. 1:
firstly, a vehicle-mounted binocular camera acquires a gray image of a road in front, an IMU sensor acquires acceleration and angular velocity of a vehicle, and a master control aligns data.
And step two, eliminating the noise and deviation of the IMU, performing pre-integration on the IMU data, calculating an observed value of the IMU data, a covariance matrix and a Jacobian matrix of residual errors, and then correcting the deviation.
And step three, initializing: calculating the absolute scale, the gyroscope bias, the accelerometer bias, the gravitational acceleration, and the velocity at each IMU timestamp.
And step four, after the initialization is completed, executing a tight coupling optimization based on a sliding window to obtain a state estimation value with higher precision and better robustness.
And fifthly, jointly optimizing the GPS measured value and the visual inertia SLAM.
In the first step, the specific steps include:
(1) a binocular camera carried by the mobile robot collects left and right target gray level images at the frequency of 10 Hz;
(2) an IMU sensor carried by the mobile robot acquires the three-axis acceleration and the angular velocity of a vehicle body at the frequency of 100 Hz;
(3) a GPS sensor carried by the mobile robot collects position information at the frequency of 10 Hz;
(4) the master control aligns data; due to trigger delay, transmission delay and an unsynchronized clock, the generated timestamp is not equal to the actual sampling time, resulting in time misalignment between the camera and the IMU; the alignment data is calculated as:
t_IMU = t_cam + t_d;
where t_IMU and t_cam represent the timestamps of the IMU and the camera, respectively, and t_d represents an unknown constant time offset.
Because the position information collected by the GPS sensor is unstable, there is no need to align the GPS data.
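The alignment in step one amounts to shifting each camera timestamp by the constant offset t_d and matching it to the nearest IMU sample. A minimal sketch (the function name and data layout are illustrative, not from the patent):

```python
import bisect

def align_camera_to_imu(cam_stamps, imu_stamps, t_d):
    """Apply t_imu = t_cam + t_d to each camera frame and return, for each,
    the shifted time and the index of the nearest IMU sample.
    imu_stamps must be sorted in ascending order."""
    pairs = []
    for t_cam in cam_stamps:
        t = t_cam + t_d
        i = bisect.bisect_left(imu_stamps, t)
        # pick the closer of the two neighbouring IMU samples
        if i > 0 and (i == len(imu_stamps) or t - imu_stamps[i - 1] <= imu_stamps[i] - t):
            i -= 1
        pairs.append((t, i))
    return pairs
```

With a 100 Hz IMU and 10 Hz camera, each camera frame maps to roughly every tenth IMU sample after the offset is applied.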
In the second step, the concrete steps include:
(1) Eliminate the noise and bias of the IMU. The measured values are affected by accelerometer bias, gyroscope bias and noise. The measurement model used to eliminate noise and bias is as follows:

â_t = a_t + b_{a_t} + R_w^{b_t} · g^w + n_a
ω̂_t = ω_t + b_{ω_t} + n_ω

where b_{a_t} and b_{ω_t} represent the accelerometer and gyroscope biases, modeled as random walks driven by white noise; a_t represents the ideal accelerometer value; ω_t represents the ideal gyroscope value; R_w^{b_t} represents the rotation matrix from the world frame to the IMU body frame; g^w represents the gravity vector; and the accelerometer noise n_a and gyroscope noise n_ω are Gaussian white noise, expressed as:

n_a ~ N(0, σ_a²), n_ω ~ N(0, σ_ω²).
(2) Pre-integrate the IMU measurements. For two consecutive frames b_k and b_{k+1}, the inertial measurements over the time interval [t_k, t_{k+1}] are integrated in the local frame b_k, calculated as follows:

α_{b_{k+1}}^{b_k} = ∬_{t∈[t_k,t_{k+1}]} R_t^{b_k} (â_t − b_{a_t}) dt²
β_{b_{k+1}}^{b_k} = ∫_{t∈[t_k,t_{k+1}]} R_t^{b_k} (â_t − b_{a_t}) dt
γ_{b_{k+1}}^{b_k} = ∫_{t∈[t_k,t_{k+1}]} (1/2) γ_t^{b_k} ⊗ [0, ω̂_t − b_{ω_t}]ᵀ dt

where α_{b_{k+1}}^{b_k} represents the position pre-integration; β_{b_{k+1}}^{b_k} represents the velocity pre-integration; and γ_{b_{k+1}}^{b_k} represents the rotation pre-integration.
(3) Correct the bias. If the change in the estimated bias is small, the pre-integration terms are adjusted by a first-order approximation with respect to the bias, calculated as follows:

α̂_{b_{k+1}}^{b_k} ≈ α_{b_{k+1}}^{b_k} + J_{b_a}^α δb_a + J_{b_ω}^α δb_ω
β̂_{b_{k+1}}^{b_k} ≈ β_{b_{k+1}}^{b_k} + J_{b_a}^β δb_a + J_{b_ω}^β δb_ω
γ̂_{b_{k+1}}^{b_k} ≈ γ_{b_{k+1}}^{b_k} ⊗ [1, (1/2) J_{b_ω}^γ δb_ω]ᵀ

where J denotes the Jacobian of the corresponding pre-integration term with respect to the bias; otherwise the pre-integration must be re-propagated.
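The pre-integration of step two can be sketched as a discrete Euler integration loop. This is a simplified illustration (Euler updates, rotation matrix in place of the quaternion γ; the function name is not from the patent) rather than the patent's exact propagation:

```python
import numpy as np

def preintegrate(acc, gyr, dt, ba, bg):
    """Euler pre-integration of bias-corrected IMU samples in frame b_k.
    acc, gyr: sequences of 3-vectors; dt: sample period; ba, bg: biases.
    Returns alpha (position term), beta (velocity term) and R
    (rotation b_k -> b_t, standing in for the quaternion gamma)."""
    alpha, beta, R = np.zeros(3), np.zeros(3), np.eye(3)
    for a, w in zip(acc, gyr):
        a = np.asarray(a) - ba
        w = np.asarray(w) - bg
        alpha = alpha + beta * dt + 0.5 * (R @ a) * dt ** 2
        beta = beta + (R @ a) * dt
        # rotation update via the Rodrigues formula for the angle increment
        th = np.linalg.norm(w * dt)
        if th > 1e-12:
            k = (w * dt) / th
            K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
            R = R @ (np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K))
    return alpha, beta, R
```

Because the integration is carried out in the local frame b_k, an updated bias only requires the first-order correction above rather than re-running this loop.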
in the third step, the concrete steps include:
(1) Convert the pose from the camera frame into the IMU frame, calculated as follows:

q_{b_k}^{c_0} = q_{c_k}^{c_0} ⊗ (q_c^b)^{-1}
s·p̄_{b_k}^{c_0} = s·p̄_{c_k}^{c_0} − R_{b_k}^{c_0} p_c^b

where s is the scale parameter to be solved, and q_c^b and p_c^b represent the extrinsic parameters between the camera and the IMU.
(2) Calibrate the gyroscope bias. An initial calibration of the gyroscope bias is obtained by linearizing the IMU pre-integration with respect to the gyroscope bias and minimizing the cost function, calculated as follows:

min_{δb_ω} Σ_{k∈B} || (q_{b_{k+1}}^{c_0})^{-1} ⊗ q_{b_k}^{c_0} ⊗ γ_{b_{k+1}}^{b_k} ||²

where B denotes the index set of all frames in the window.
(3) Initialize the velocity, gravity vector and metric scale, collected in the following state vector:

χ_I = [ v_{b_0}^{b_0}, v_{b_1}^{b_1}, …, v_{b_n}^{b_n}, g^{c_0}, s ]ᵀ

where v_{b_k}^{b_k} represents the velocity of the IMU while taking the k-th image, and g^{c_0} represents the gravity vector in the frame of the first camera.
Two consecutive frames b_k and b_{k+1} in the window are related through the pre-integration terms α_{b_{k+1}}^{b_k} and β_{b_{k+1}}^{b_k}, which yields a linear model of the form

H_{b_{k+1}}^{b_k} χ_I = ẑ_{b_{k+1}}^{b_k} + n

where H_{b_{k+1}}^{b_k} is the measurement matrix, ẑ_{b_{k+1}}^{b_k} is the stacked pre-integration measurement, and Δt_k is the time interval between the two consecutive frames.
By solving the linear least-squares problem

min_{χ_I} Σ_{k∈B} || ẑ_{b_{k+1}}^{b_k} − H_{b_{k+1}}^{b_k} χ_I ||²

(with H the stacked measurement matrix and ẑ the stacked pre-integration measurements), the velocity of each frame in the window is obtained, together with the gravity vector and the scale.
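The initialization step thus reduces to an ordinary linear least-squares solve. A toy stand-in with a small H and b (the actual blocks built from the pre-integration terms are omitted here):

```python
import numpy as np

# Stack the per-frame linear constraints into H x = b and solve
# min ||H x - b||^2 for the unknowns (velocities, gravity, scale)
# collected in x. Values are illustrative only.
H = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
b = np.array([1.0, 4.0, 3.0])
x, residuals, rank, sv = np.linalg.lstsq(H, b, rcond=None)
```

Here the system is consistent, so the least-squares solution reproduces it exactly; in the real initialization the residual is nonzero and measures how well the visual and inertial data agree.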
in the fourth step, the concrete steps include:
(1) Cost function. For the cameras and the IMU, the overall state to be estimated is defined as:

χ = [ x_cam, x_imu, x_gps ]

where x_cam represents the camera-related state vector, containing the estimated position p, the estimated orientation R and the feature depths λ; x_imu represents the IMU-related state vector, containing p, R, the velocity vector v, the accelerometer bias b_a and the gyroscope bias b_g; and x_gps represents the GPS-related state vector, whose components x_gps, y_gps and z_gps respectively represent longitude, latitude and altitude.
After defining the state to be estimated, state estimation is performed as a maximum likelihood estimation problem, defined as:

χ* = argmax_χ ∏_{k∈S} p(z_k | χ)

where S represents the set of measurements; χ = {x_0, x_1, …, x_n}, with each x_k = [p_k, q_k] consisting of a position vector and an orientation quaternion; and the uncertainty of each measurement is the Gaussian distribution z_k ~ N(h_k(χ), Ω_k).

Taking the negative log-likelihood of the above equation gives:

χ* = argmin_χ Σ_{k∈S} || z_k − h_k(χ) ||²_{Ω_k}

where Ω_k represents the information matrix of the Gaussian distribution and h_k(·) represents the sensor model.
The state estimation is thus converted into a nonlinear least-squares problem; solving it requires constructing a factor for each sensor.
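Solvers such as Ceres handle exactly this problem class. As a self-contained illustration (not the patent's solver), a minimal Gauss-Newton loop for min ||r(x)||² applied to a toy one-dimensional factor:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Minimal Gauss-Newton loop for the nonlinear least-squares
    problem min ||r(x)||^2: repeatedly solve the normal equations
    (J^T J) dx = -J^T r and update x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x

# Toy factors: r(x) = [x - 1, 2(x - 3)]; the minimizer balances the two.
res = lambda x: np.array([x[0] - 1.0, 2.0 * (x[0] - 3.0)])
jac = lambda x: np.array([[1.0], [2.0]])
x_opt = gauss_newton(res, jac, [0.0])
```

Setting the gradient of (x−1)² + 4(x−3)² to zero gives x = 2.6, which the loop reaches in one step since the toy factors are linear.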
(2) Sensor factors. The camera factor is built from the features of each image frame. The residual of a feature first observed in image s and re-observed in image t is:

r_C = z_t − π_c( T_b^c (T_{b_t}^w)^{-1} T_{b_s}^w T_c^b · π_c^{-1}(z_s, λ) )

where π_c and π_c^{-1} are the projection and back-projection functions of the camera model; T represents a 4×4 homogeneous transformation matrix; λ is the feature depth; and T_c^b represents the offline-calibrated transformation from the IMU center to the camera center.
The IMU residual is calculated as follows:

r_B = [ δα_{b_{k+1}}^{b_k}, δβ_{b_{k+1}}^{b_k}, δθ_{b_{k+1}}^{b_k}, δb_a, δb_g ]ᵀ

where α represents the position term; β represents the velocity term; γ represents the rotation term (entering through δθ); ⊖ denotes the minus operation dedicated to nonlinear rotation; and g represents the gravity vector, with magnitude 9.81 m/s².
The GPS measurements contain no accumulated error, so GPS constraints added as global constraints effectively mitigate the accumulated error. The GPS residual is defined as:

r_GPS = ẑ_k^{GPS} − p_k

where ẑ_k^{GPS} represents the measurement from the GPS signal, and p_k, the predicted global position, is computed from the state at the previous time and the position change since the previous time.
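The GPS factor is the simplest of the three: a direct difference between the estimated world-frame position and the GPS fix, optionally whitened by a square-root information matrix so that noisier fixes carry less weight. A minimal sketch (names are illustrative):

```python
import numpy as np

def gps_residual(p_world, z_gps, sqrt_info=None):
    """GPS factor residual: estimated position in the world (ENU) frame
    minus the GPS-measured position, optionally whitened by the
    square-root information matrix of the fix."""
    r = np.asarray(p_world, dtype=float) - np.asarray(z_gps, dtype=float)
    return r if sqrt_info is None else sqrt_info @ r
```

Because this residual ties the state to a drift-free global measurement, adding it to the cost function anchors the otherwise drifting visual inertial estimate.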
(3) Optimization. After the camera factor and the IMU factor are obtained in step (2), the nonlinear least-squares problem is optimized. The cost function stacks the marginalized prior, the IMU measurement residuals and the visual observation residuals.
After linearization, the above equation yields a closed-form solution for the increment δχ via the normal equations H δχ = b, derived as follows:
(4) Marginalization. To reduce computation without losing useful information, marginalization is introduced. Summarizing all marginalization factors and reordering the states yields a new H and b with the block structure

[ H_mm  H_mr ; H_rm  H_rr ] · [ δχ_m ; δχ_r ] = [ b_m ; b_r ]

where χ_m represents the set of states being marginalized, and χ_r represents the set of remaining states.
The marginalization is carried out with the Schur complement:

H_p = H_rr − H_rm H_mm^{-1} H_mr
b_p = b_r − H_rm H_mm^{-1} b_m

yielding a new prior H_p and b_p on the remaining states. After the prior information about the current state is obtained, Bayes' rule is used to express the posterior as the product of the likelihood and the prior: p(χ|z) ∝ p(z|χ) p(χ). The state estimation then becomes a maximum a posteriori problem, calculated as follows:

χ* = argmax_χ p(χ|z) = argmax_χ p(z|χ) p(χ).
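The Schur-complement formulas above translate directly into NumPy. A minimal sketch, assuming the states have already been reordered so the first m entries are the ones to marginalize:

```python
import numpy as np

def marginalize(H, b, m):
    """Schur-complement marginalization of the first m states:
    H_p = H_rr - H_rm H_mm^{-1} H_mr,  b_p = b_r - H_rm H_mm^{-1} b_m."""
    Hmm, Hmr = H[:m, :m], H[:m, m:]
    Hrm, Hrr = H[m:, :m], H[m:, m:]
    bm, br = b[:m], b[m:]
    Hmm_inv = np.linalg.inv(Hmm)        # in practice a safe pseudo-inverse
    Hp = Hrr - Hrm @ Hmm_inv @ Hmr
    bp = br - Hrm @ Hmm_inv @ bm
    return Hp, bp
```

The returned (H_p, b_p) acts as the prior factor on the states kept in the sliding window, so old states drop out without discarding the constraints they contributed.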
in the fifth step, the concrete steps include:
(1) The GPS/VIO optimization module first processes the VIO data. The pose of the visual odometry (VO) is converted into the GPS coordinate system, i.e., an observation equation from the local pose to the global pose. The conversion expression is as follows:
T_GPS = T_{GPS←VO} · T_VO;
where T_{GPS←VO} represents the extrinsic transformation between the GPS coordinate system and the VO coordinate system; this extrinsic matrix is optimized by the Ceres solver.
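The conversion above is a composition of 4×4 homogeneous transforms. A small sketch with illustrative numbers (the extrinsic here is a placeholder, not a calibrated value):

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# T_GPS = T_{GPS<-VO} @ T_VO : re-express a local VO pose in the global frame.
T_gps_vo = se3(np.eye(3), np.array([10.0, 0.0, 0.0]))  # assumed extrinsic
T_vo = se3(np.eye(3), np.array([1.0, 2.0, 0.0]))       # local VO pose
T_gps = T_gps_vo @ T_vo
```

In the actual system the rotation part of T_{GPS←VO} is non-trivial and is refined jointly with the states during optimization.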
(2) As shown in FIG. 2, the GPS pre-processing module first processes the raw GPS data. The module converts the longitude, latitude and altitude of the GPS into the world coordinate system and stores them in a global variable. Longitude and latitude are coordinates on the Earth, but the Earth is approximately an ellipsoid, so they must be converted into a planar coordinate system: the longitude-latitude-altitude (LLA) coordinates are converted into East-North-Up (ENU) coordinates. The module first converts the LLA coordinates into the Earth-Centered, Earth-Fixed (ECEF) rectangular coordinate system and then into the ENU coordinate system. For a point with geodetic latitude φ, longitude λ and altitude h, the conversion from the LLA coordinate system to the ECEF coordinate system is as follows:

x = (N + h) cos φ cos λ
y = (N + h) cos φ sin λ
z = ( N (1 − e²) + h ) sin φ

where x, y and z are the ECEF coordinates; N = a / √(1 − e² sin²φ) represents the radius of curvature in the prime vertical at the point; e² = (a² − b²)/a²; and a, b and e are respectively the semi-major axis, semi-minor axis and first eccentricity of the reference ellipsoid.
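The LLA-to-ECEF conversion can be sketched directly from the formulas above, using the WGS-84 ellipsoid constants (the constants are standard WGS-84 values, not taken from the patent):

```python
import math

# WGS-84 reference ellipsoid
A = 6378137.0                 # semi-major axis a [m]
F = 1.0 / 298.257223563       # flattening
E2 = F * (2.0 - F)            # first eccentricity squared, e^2 = (a^2 - b^2)/a^2

def lla_to_ecef(lat_deg, lon_deg, h):
    """Convert geodetic latitude/longitude [deg] and altitude [m] to ECEF [m]."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime-vertical radius N
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z
```

Sanity checks: a point on the equator at zero longitude maps to (a, 0, 0), and the pole maps to z equal to the semi-minor axis b.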
The conversion between the ECEF and ENU coordinate systems about a reference point (x₀, y₀, z₀) with geodetic latitude φ₀ and longitude λ₀ is calculated as follows:

[ e ; n ; u ] = R(φ₀, λ₀) · [ x − x₀ ; y − y₀ ; z − z₀ ]

where

R(φ₀, λ₀) = [ −sin λ₀, cos λ₀, 0 ;
              −sin φ₀ cos λ₀, −sin φ₀ sin λ₀, cos φ₀ ;
               cos φ₀ cos λ₀,  cos φ₀ sin λ₀, sin φ₀ ].

Expanding the above equation yields the east, north and up components:

e = −sin λ₀ (x − x₀) + cos λ₀ (y − y₀)
n = −sin φ₀ cos λ₀ (x − x₀) − sin φ₀ sin λ₀ (y − y₀) + cos φ₀ (z − z₀)
u = cos φ₀ cos λ₀ (x − x₀) + cos φ₀ sin λ₀ (y − y₀) + sin φ₀ (z − z₀)
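The expanded rotation can be applied in one matrix-vector product. A minimal sketch (the function name is illustrative):

```python
import math
import numpy as np

def ecef_to_enu(p_ecef, ref_ecef, lat0_deg, lon0_deg):
    """Rotate the ECEF offset from a reference origin into local ENU axes."""
    lat = math.radians(lat0_deg)
    lon = math.radians(lon0_deg)
    sl, cl = math.sin(lat), math.cos(lat)
    so, co = math.sin(lon), math.cos(lon)
    R = np.array([[-so,       co,      0.0],
                  [-sl * co, -sl * so, cl],
                  [ cl * co,  cl * so, sl]])
    return R @ (np.asarray(p_ecef, dtype=float) - np.asarray(ref_ecef, dtype=float))
```

At latitude 0 and longitude 0, the ECEF y axis points east, so an offset of 1 m in y yields (e, n, u) = (1, 0, 0).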
(3) As shown in FIG. 3, the invention takes the first frame of GPS data as the origin of the world coordinate system and then converts longitude and latitude into that frame. The converted GPS data and their covariance are stored as global observations. Two sets of observations are thus obtained: the global observations of the GPS and the local observations of the VIO. The GPS provides the initial position, and the observed pose changes of the VIO serve as observations in the overall cost function. The pose increments optimized by GPS/VIO are projections in the VIO frame, and the residual of the VIO is obtained from the change in pose increment.
FIG. 4 shows the sparse map and motion trajectory constructed by the proposed method. The device for collecting the real-world dataset is shown in FIG. 5, which comprises a binocular depth camera (MYNTEYE-S1030), an inertial measurement unit (IMU) and a GPS sensor (GPS-U-Blox). After the dataset was collected, the accuracy of the algorithm was verified; the accuracy comparison is shown in FIG. 6, in which the conventional algorithm is VINS-Fusion.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (6)
1. A GPS-fused mobile robot visual inertial navigation method is characterized by comprising the following steps:
firstly, a binocular camera collects a gray image of a road ahead, an IMU sensor collects acceleration and angular velocity, a GPS sensor collects position information, and a master control aligns data;
eliminating noise and deviation of the IMU, performing pre-integration on IMU data, calculating an observed value of the IMU data, a covariance matrix and a Jacobian matrix of residual errors, and then correcting the deviation;
initializing: calculating the absolute scale, the gyroscope bias, the accelerometer bias, the gravitational acceleration, and the velocity at each IMU timestamp;
and step four, after initialization is completed, performing tight coupling optimization based on a sliding window to obtain a state estimation value with higher precision and better robustness.
And fifthly, jointly optimizing the GPS measured value and the visual inertia SLAM.
2. The visual inertial navigation method of a mobile robot with integrated GPS according to claim 1, characterized in that: in the first step, the specific steps include:
(1) a binocular camera carried by the mobile robot collects left and right target gray level images at the frequency of 10 Hz;
(2) an IMU sensor carried by the mobile robot acquires the three-axis acceleration and the angular speed of the robot at the frequency of 100 Hz;
(3) a GPS sensor carried by the mobile robot collects position information at the frequency of 10 Hz;
(4) the master control aligns data; due to trigger delay, transmission delay and an unsynchronized clock, the generated timestamp is not equal to the actual sampling time, resulting in time misalignment between the camera and the IMU; the alignment data is calculated as:
t_IMU = t_cam + t_d;
wherein t_IMU and t_cam represent the timestamps of the IMU and the camera, respectively, and t_d represents an unknown constant time offset;
because the position information collected by the GPS sensor is unstable, there is no need to align the GPS data.
3. The visual inertial navigation method of a mobile robot with integrated GPS according to claim 1, characterized in that: in the second step, the concrete steps include:
(1) eliminating noise and bias of the IMU; the measured value is influenced by acceleration bias, gyroscope bias and noise, and the calculation formula for eliminating the noise and the bias is as follows:
wherein the biases b_a and b_ω are modeled as random walks driven by white noise; a_t represents the ideal accelerometer value; ω_t represents the ideal gyroscope value; R represents the rotation matrix from the world frame to the IMU body frame; g^w represents the gravity vector; and the accelerometer noise n_a and gyroscope noise n_ω are Gaussian white noise, expressed as:
(2) pre-integrating IMU measurement values; for two consecutive frames bkAnd bk+1The time interval t needs to be setk,tk+1]Integrating the inertial measurement into a local frame bkIn (1), the calculation formula is as follows:
wherein α_{b_{k+1}}^{b_k} represents the position pre-integration; β_{b_{k+1}}^{b_k} represents the velocity pre-integration; and γ_{b_{k+1}}^{b_k} represents the rotation pre-integration;
(3) correcting the bias: if the estimated bias changes little, the pre-integration terms α, β and γ are adjusted by a first-order approximation with respect to the bias, with the following calculation formula:
4. the visual inertial navigation method of a mobile robot with integrated GPS according to claim 1, characterized in that: in the third step, the concrete steps include:
(1) converting the pose from the camera frame into an IMU frame; the calculation formula is as follows:
wherein s is the scale parameter to be solved, and q_c^b and p_c^b represent the extrinsic parameters between the camera and the IMU;
(2) calibrating a gyroscope bias; obtaining an initial calibration of the gyroscope bias by linearizing the IMU pre-integration with respect to the bias of the gyroscope and minimizing a cost function; the cost function is calculated by the formula:
wherein, B represents the index of all frames in the window;
(3) initializing a speed, a gravity vector and a measurement proportion; the velocity, gravity vector and metric vector are shown in the following equations:
wherein v_{b_k}^{b_k} represents the velocity of the IMU while taking the k-th image, and g represents the gravity vector;
two consecutive frames b in the windowkAnd bk+1The conversion formula of (1) is as follows:
the following linear model was obtained:
wherein Δt_k is the time interval between two consecutive frames;
by solving the linear least square problem, the velocity of each frame in the window is obtained, and the calculation formula is as follows:
5. the visual inertial navigation method of a mobile robot with integrated GPS according to claim 1, characterized in that: in the fourth step, the concrete steps include:
(1) a cost function; for cameras and IMUs, the overall state that needs to be estimated is defined as:
wherein p represents the estimated position; R represents the estimated orientation; x_cam represents the camera-related state vector; x_imu represents the IMU-related state vector; x_gps represents the GPS-related state vector; λ represents the depth of the feature; v represents the velocity vector; b_a represents the accelerometer bias; b_g represents the gyroscope bias; and x_gps, y_gps and z_gps respectively represent longitude, latitude and altitude;
after defining the state to be estimated, performing state estimation, wherein the state estimation is a maximum likelihood estimation problem and is defined as the following steps:
wherein S represents the set of measurements; χ = {x_0, x_1, …, x_n}, with each x_k consisting of a position vector p_k and an orientation quaternion q_k; and the uncertainty of each measurement is the Gaussian distribution h(·);
the negative log-likelihood function of the above equation is:
wherein Ω represents the information matrix of the Gaussian distribution, and h(·) represents the sensor model;
the state estimation is then converted into a nonlinear least-squares problem; solving the nonlinear least-squares problem requires the factor of each sensor;
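The resulting objective can be sketched as the sum of squared Mahalanobis norms: each sensor residual r = z - h(x) is weighted by the information matrix Ω (inverse covariance) of its Gaussian noise model. All names below are illustrative assumptions:

```python
import numpy as np

# Each sensor contributes r_i^T * Omega_i * r_i to the nonlinear
# least-squares cost, where Omega_i is that sensor's information matrix.

def mahalanobis_cost(residuals, infos):
    """Sum of squared Mahalanobis norms: sum_i r_i^T * Omega_i * r_i."""
    return sum(float(r @ Om @ r) for r, Om in zip(residuals, infos))

r1 = np.array([0.1, -0.2])      # e.g. a 2-D camera residual
Om1 = np.diag([100.0, 25.0])    # more confident in the first component
r2 = np.array([0.05])           # e.g. a 1-D range residual
Om2 = np.array([[400.0]])
cost = mahalanobis_cost([r1, r2], [Om1, Om2])
```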
(2) sensor factors; the camera factor consists of the features of each image frame; the residual of an observation in image t is computed as:
wherein π_c and π_c^-1 are the projection and back-projection functions of the camera model, respectively; T represents a 4×4 homogeneous transformation matrix; T_b^c represents the offline-calibrated transformation matrix from the IMU center to the camera center;
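Since the patent's residual equation is not reproduced in this text, the following is a hedged sketch of a generic reprojection residual: a landmark is lifted with the back-projection π_c^-1, moved between frames with homogeneous transforms, re-projected with π_c, and compared with the observed pixel. The pinhole model and all names are assumptions:

```python
import numpy as np

def project(p_cam, fx, fy, cx, cy):
    """pi_c: pinhole projection of a 3-D camera-frame point to pixels."""
    return np.array([fx * p_cam[0] / p_cam[2] + cx,
                     fy * p_cam[1] / p_cam[2] + cy])

def back_project(uv, depth, fx, fy, cx, cy):
    """pi_c^-1: lift a pixel at a given depth to a 3-D camera-frame point."""
    return depth * np.array([(uv[0] - cx) / fx, (uv[1] - cy) / fy, 1.0])

def reprojection_residual(uv_obs, p_world, T_w_c, fx, fy, cx, cy):
    """Observed pixel minus the projection of the landmark into the camera."""
    p_cam = (np.linalg.inv(T_w_c) @ np.append(p_world, 1.0))[:3]
    return uv_obs - project(p_cam, fx, fy, cx, cy)

# A landmark straight ahead of an identity camera projects to the principal
# point, so the residual against that observation is zero.
res = reprojection_residual(np.array([320.0, 240.0]),
                            np.array([0.0, 0.0, 5.0]), np.eye(4),
                            500.0, 500.0, 320.0, 240.0)
```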
the IMU residual error calculation formula is as follows:
wherein α represents position; β represents velocity; γ represents rotation; ⊖ denotes the subtraction operation defined for rotations, which do not form a linear space; g represents the gravity vector, whose magnitude equals 9.81 m/s²;
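A common realization of the rotation "subtraction" ⊖, offered here only as a hedged sketch (quaternion convention [w, x, y, z]; the patent's exact operator is not shown), is twice the vector part of the relative quaternion:

```python
import numpy as np

# gamma residual between two rotations: 2 * vec(q_a^-1 (x) q_b),
# i.e. twice the vector part of the relative quaternion (Hamilton product).

def quat_mul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def quat_inv(q):
    """Inverse of a unit quaternion is its conjugate."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def rotation_residual(q_pred, q_meas):
    return 2.0 * quat_mul(quat_inv(q_pred), q_meas)[1:]

# Identical rotations give a zero residual.
q = np.array([1.0, 0.0, 0.0, 0.0])
res = rotation_residual(q, q)
```

For small relative rotations this residual approximates the rotation-vector error, which is why it is a convenient on-manifold "difference".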
GPS measurements contain no accumulated error, so GPS constraints added as global constraints can effectively mitigate accumulated drift; the GPS residual is defined as:
wherein z_gps represents the measurement from the GPS signal; the predicted position is computed from the state at the previous moment and the position transformation of the previous moment;
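A hedged sketch of such a residual (the constant-velocity prediction and all names are assumptions, since the patent's formula is not reproduced here): the measured global position minus the position predicted from the previous state.

```python
import numpy as np

# GPS residual: measured global position z_gps minus the position predicted
# by propagating the previous state (here a simple constant-velocity model).

def gps_residual(z_gps, p_prev, v_prev, dt):
    p_pred = p_prev + v_prev * dt
    return z_gps - p_pred

res = gps_residual(np.array([10.0, 0.0, 0.0]),   # GPS fix
                   np.array([9.5, 0.0, 0.0]),    # previous position
                   np.array([5.0, 0.0, 0.0]),    # previous velocity
                   0.1)                          # elapsed time
```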
(3) optimization; after the camera factor and the IMU factor are obtained in step (2), the nonlinear least-squares problem is optimized; the cost function is computed as:
after linearization, the above formula admits a closed-form solution for δχ; the solution is derived as follows:
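The closed-form update can be sketched as a Gauss-Newton step on the normal equations (illustrative names; H = JᵀΩJ and b = JᵀΩr are the standard forms, not copied from the patent):

```python
import numpy as np

# After linearizing the residuals r about the current state, the step
# delta_chi solves H * delta = -b with H = J^T Omega J and b = J^T Omega r.

def gauss_newton_step(J, Omega, r):
    H = J.T @ Omega @ J
    b = J.T @ Omega @ r
    return np.linalg.solve(H, -b)

# For a linear problem r(x) = J x - z, one step lands on the minimizer.
J = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
Omega = np.eye(3)
z = np.array([1.0, 4.0, 3.0])
x = np.zeros(2)
x = x + gauss_newton_step(J, Omega, J @ x - z)
```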
(4) marginalization; to reduce the amount of computation without losing useful information, marginalization is introduced; collecting all marginalization factors via the above equation yields a new H and b ordered according to the rearranged states, and the following relationship is obtained:
wherein χ_m represents the set of marginalized states; χ_r represents the set of remaining states;
the formula for marginalization is as follows:
the above formula yields a new prior H_p and b_p on the remaining states; after this prior information about the current state is obtained, Bayes' rule is used to compute the posterior as the product of the likelihood and the prior: p(χ|z) ∝ p(z|χ)p(χ); the state estimation then becomes a maximum a posteriori problem, whose calculation formula is as follows:
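The marginalization step is conventionally the Schur complement; a minimal sketch under that assumption (names illustrative), with the system ordered as [χ_m, χ_r]:

```python
import numpy as np

# Eliminate the first m states of H * delta = -b by Schur complement,
# leaving a prior H_p, b_p on the remaining states chi_r.

def schur_marginalize(H, b, m):
    H_mm, H_mr = H[:m, :m], H[:m, m:]
    H_rm, H_rr = H[m:, :m], H[m:, m:]
    b_m, b_r = b[:m], b[m:]
    H_mm_inv = np.linalg.inv(H_mm)
    H_p = H_rr - H_rm @ H_mm_inv @ H_mr
    b_p = b_r - H_rm @ H_mm_inv @ b_m
    return H_p, b_p

# 2x2 example: marginalize the first state.
H = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([2.0, 1.0])
H_p, b_p = schur_marginalize(H, b, 1)
```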
6. the GPS-fused mobile robot visual inertial navigation method according to claim 1, characterized in that: in the fifth step, the concrete steps include:
(1) the GPS/VIO optimization module first needs to process the VIO data; the pose of the visual odometry (VO) is transformed into the GPS coordinate system, i.e., an observation equation from the local pose to the global pose; the transformation is expressed as:
T_GPS = T_GPS->VO · T_VO;
wherein T_GPS->VO represents the extrinsic transformation between the GPS coordinate system and the VO coordinate system; the extrinsic matrix is optimized by the Ceres solver; the extrinsic parameters are computed as follows:
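The composition T_GPS = T_GPS->VO · T_VO can be sketched with 4×4 homogeneous transforms (the 90-degree yaw extrinsic and all names are illustrative assumptions; in the method the extrinsic is refined by the Ceres solver rather than fixed):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

yaw = np.pi / 2  # illustrative GPS<-VO extrinsic: 90-degree yaw + offset
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
T_gps_vo = make_T(R, np.array([10.0, 0.0, 0.0]))
T_vo = make_T(np.eye(3), np.array([1.0, 0.0, 0.0]))  # VO pose 1 m ahead

T_gps = T_gps_vo @ T_vo  # the VO pose expressed in the GPS frame
```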
(2) the GPS preprocessing module first processes the raw GPS data; the module converts the longitude, latitude and altitude of the GPS into the world coordinate system and stores them in a global variable; longitude and latitude are Earth coordinates, but the Earth is spherical, so they must be converted into a planar coordinate system, i.e., longitude-latitude-altitude (LLA) coordinates are converted into east-north-up (ENU) coordinates; the module first converts the LLA coordinates into the Earth-centered, Earth-fixed (ECEF) rectangular coordinate system, and then into the ENU coordinate system; for a given point in space, the transformation from the LLA coordinate system to the ECEF coordinate system is as follows:
wherein x, y and z represent longitude, latitude and altitude, respectively; N denotes the radius of curvature at the point; e^2 = (a^2 - b^2)/a^2; a, b and e are the semi-major axis, semi-minor axis and first eccentricity of the reference ellipsoid, respectively;
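The patent's equation is not reproduced in this text; the standard LLA-to-ECEF expressions, assuming the WGS-84 ellipsoid constants, are sketched below:

```python
import numpy as np

# Standard LLA -> ECEF conversion:
#   N = a / sqrt(1 - e^2 sin^2(lat))        (prime-vertical radius)
#   x = (N + h) cos(lat) cos(lon)
#   y = (N + h) cos(lat) sin(lon)
#   z = (N (1 - e^2) + h) sin(lat)

A = 6378137.0              # WGS-84 semi-major axis a (m)
B = 6356752.314245         # WGS-84 semi-minor axis b (m)
E2 = (A**2 - B**2) / A**2  # first eccentricity squared, e^2 = (a^2 - b^2)/a^2

def lla_to_ecef(lat_deg, lon_deg, h):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    N = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)
    x = (N + h) * np.cos(lat) * np.cos(lon)
    y = (N + h) * np.cos(lat) * np.sin(lon)
    z = (N * (1.0 - E2) + h) * np.sin(lat)
    return np.array([x, y, z])

# On the equator at zero longitude, the ECEF point sits at (a, 0, 0).
p = lla_to_ecef(0.0, 0.0, 0.0)
```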
the conversion from the ECEF coordinate system to the ENU coordinate system is calculated as follows:
expanding the above equation yields:
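Since the expanded equation is not reproduced in this text, the following sketch shows the standard ECEF-to-ENU conversion about a reference origin (all names are assumptions): subtract the reference ECEF position and rotate into the local east-north-up axes at the reference latitude/longitude.

```python
import numpy as np

def ecef_to_enu(p_ecef, p_ref_ecef, lat_deg, lon_deg):
    """Express an ECEF point in the ENU frame anchored at the reference."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    sl, cl = np.sin(lat), np.cos(lat)
    so, co = np.sin(lon), np.cos(lon)
    # Rows are the local east, north and up unit vectors in ECEF.
    R = np.array([[-so,       co,       0.0],
                  [-sl * co, -sl * so,  cl],
                  [ cl * co,  cl * so,  sl]])
    return R @ (np.asarray(p_ecef) - np.asarray(p_ref_ecef))

# A point 100 m farther from Earth's centre than the equatorial reference
# origin is straight "up" in ENU.
ref = np.array([6378137.0, 0.0, 0.0])
enu = ecef_to_enu(ref + np.array([100.0, 0.0, 0.0]), ref, 0.0, 0.0)
```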
(3) the invention takes the first frame of GPS data as the origin of the world coordinate system, and then converts longitude and latitude into this world coordinate system; the converted GPS data and covariance are stored as global observations; two sets of observations are thus obtained: the global observations of the GPS and the local observations of the VIO; the GPS provides the initial position, and the pose changes observed by the VIO are trusted and used as observations in the overall cost function; the pose increment optimized by GPS/VIO is the projection under the VIO; the VIO residual is obtained from the change of the pose increment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210480378.0A CN114690229A (en) | 2022-05-05 | 2022-05-05 | GPS-fused mobile robot visual inertial navigation method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114690229A true CN114690229A (en) | 2022-07-01 |
Family
ID=82144585
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210480378.0A Pending CN114690229A (en) | 2022-05-05 | 2022-05-05 | GPS-fused mobile robot visual inertial navigation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114690229A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115468569A (en) * | 2022-09-16 | 2022-12-13 | 海南大学 | Voice control vehicle navigation method based on double positioning |
CN115877811A (en) * | 2023-03-08 | 2023-03-31 | 北京东方国信科技股份有限公司 | Process technology treatment method, device and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||