CN114719843A - High-precision positioning method in complex environment - Google Patents
- Publication number
- CN114719843A CN114719843A CN202210643924.8A CN202210643924A CN114719843A CN 114719843 A CN114719843 A CN 114719843A CN 202210643924 A CN202210643924 A CN 202210643924A CN 114719843 A CN114719843 A CN 114719843A
- Authority
- CN
- China
- Prior art keywords
- coordinate system
- imu
- frame
- local world
- rotation
- Prior art date
- Legal status
- Granted
Classifications
- G01C21/005—Navigation; navigational instruments not provided for in groups G01C1/00-G01C19/00, with correlation of navigation data from several sources, e.g. map or contour matching
- G01C21/165—Navigation by dead reckoning, integrating acceleration or speed (inertial navigation), combined with non-inertial navigation instruments
- G01C21/1656—Inertial navigation combined with non-inertial navigation instruments, with passive imaging devices, e.g. cameras
- G01C21/20—Instruments for performing navigational calculations
- G01S19/48—Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
- G01S19/49—Determining position by combining position solutions, whereby the further system is an inertial position system, e.g. loosely-coupled
- Y02T10/40—Engine management systems (climate change mitigation technologies related to transportation)
Abstract
The invention discloses a high-precision positioning method for complex environments. The method acquires raw observation data from a GNSS sensor, a VISION (camera) sensor, an SINS inertial sensor and a WHEEL odometer, then preprocesses and time-synchronizes the data; performs joint VISION/SINS/WHEEL dynamic initialization; aligns the vision, inertial and wheel-odometer data with the GNSS data; constructs pre-integration models for the inertial sensor and the wheel odometer; aggregates the GNSS raw-observation residual, the visual residual, the inertial pre-integration model, the wheel-odometer model and a plane constraint into a nonlinear residual model; and solves this model in real time to achieve high-precision positioning in complex environments. The method is highly reliable, accurate and robust.
Description
Technical Field
The invention belongs to the technical field of navigation, and particularly relates to a high-precision positioning method in a complex environment.
Background
With economic and technological development and rising living standards, positioning and navigation services have become widely used in production and daily life, bringing great convenience. Ensuring the accuracy of positioning and navigation has therefore become a research focus.
Currently, many areas in real environments have weak or no GNSS (Global Navigation Satellite System) signals, such as tunnels, overpasses and garages; in these areas, pure GNSS positioning alone cannot provide high-accuracy results in real time. A comprehensive positioning mode combining multiple sensors is therefore generally adopted to ensure the robustness and reliability of real-time positioning.
However, current multi-source fusion positioning methods have shortcomings. In the widely used approach of fusing VIO (Visual-Inertial Odometry) with GNSS, the positioning system degenerates to pure VIO in GNSS-denied areas; because a ground vehicle moves approximately in a plane, a monocular VIO system then acquires additional unobservable degrees of freedom, causing positioning drift. Moreover, existing VIO/GNSS fusion mainly uses loose coupling or semi-tight coupling and cannot provide highly robust positioning in complex environments.
Disclosure of Invention
The invention aims to provide a high-precision positioning method under a complex environment, which has high reliability, good precision and good robustness.
The high-precision positioning method under the complex environment comprises the following steps:
s1, acquiring raw observation data from the GNSS sensor, the VISION sensor, the SINS inertial sensor and the WHEEL odometer, and preprocessing and time-synchronizing the sensor data;
s2, performing VISION/SINS/WHEEL combined dynamic initialization according to the acquired sensor data;
s3, aligning the vision sensor data, the inertia sensor data, the wheel type odometer data and the GNSS sensor data according to the initialization result of the step S2;
s4, constructing a pre-integration model of the inertial sensor and the wheel type odometer;
s5, based on the constructed pre-integration model, aggregating the GNSS original observation residual error, the visual residual error, the inertia pre-integration model, the wheel type odometer model and the plane constraint to construct a nonlinear residual error model;
and S6, solving the constructed nonlinear residual error model in real time to complete high-precision positioning in a complex environment.
The step S1 specifically includes the following steps:
the original observation data of the GNSS sensor comprises satellite ephemeris data, carrier phase original observation data, GNSS base station differential data, real-time ambiguity resolution data and Doppler original observation data; the original observation data of the VISION vision sensor is gray image data transmitted by a camera in real time; the original observation data of the SINS inertial sensor is IMU output data; the original observation data of the WHEEL type odometer is real-time left and right WHEEL rotating speed data of the vehicle; preprocessing GNSS original observation data, and removing abnormal data; and then, based on a PPS time service system, time synchronization is carried out on the data of the four sensors.
The step S2 specifically includes the following steps:
A. within the sliding window, computing up-to-scale camera poses and feature point coordinates using a pure-visual Structure From Motion (SfM) method;
B. calibrating a null shift value of the gyroscope according to the calculation result obtained in the step A;
C. constructing an initialization vector according to the initialization speed, the gravity acceleration and the scale factor of vision, the IMU and the wheel type odometer;
D. and C, adjusting the initialization vector constructed in the step C by using the local actual gravity acceleration, restoring to a real scale and aligning to finally obtain an accurate initialization result under the local world coordinate system.
The step C specifically comprises the following steps:
constructing the quantity to be initialized from the velocities, the gravity acceleration and the scale factor of vision, the IMU and the wheel odometer as

$\mathcal{X} = [\,v_{b_0},\ v_{b_1},\ \dots,\ v_{b_n},\ g^{c_0},\ s\,]^T$

where $v_{b_k}$ is the velocity in the IMU coordinate system of the $k$-th frame, $g^{c_0}$ is the gravity acceleration expressed in the $c_0$ frame, and $s$ is the scale factor;
then, two consecutive frames $k$ and $k+1$ in the sliding window satisfy the following Equation 1:

$\alpha_{b_{k+1}}^{b_k} = R_{c_0}^{b_k}\big( s\,p_{b_{k+1}}^{c_0} - s\,p_{b_k}^{c_0} + \tfrac{1}{2} g^{c_0} \Delta t_k^2 \big) - v_{b_k}\,\Delta t_k$
$\beta_{b_{k+1}}^{b_k} = R_{c_0}^{b_k}\big( g^{c_0}\,\Delta t_k + R_{b_{k+1}}^{c_0} v_{b_{k+1}} \big) - v_{b_k}$
$\eta_{o_{k+1}}^{o_k} = R_{c_0}^{o_k}\big( s\,p_{o_{k+1}}^{c_0} - s\,p_{o_k}^{c_0} \big)$

where $\alpha_{b_{k+1}}^{b_k}$ is the IMU position pre-integration; $R_{c_0}^{b_k}$ is the rotation of $c_0$ relative to the frame-$k$ IMU; $p_{b_k}^{c_0}$ is the position of the frame-$k$ IMU relative to the $c_0$ frame; $\Delta t_k$ is the time interval between the two frames; $R_{b_k}^{c_0}$ is the rotation of the frame-$k$ IMU relative to the $c_0$ frame; $\beta_{b_{k+1}}^{b_k}$ is the IMU velocity pre-integration; $\eta_{o_{k+1}}^{o_k}$ is the wheel-odometer pre-integration; $R_{o_k}^{c_0}$ is the rotation of the frame-$k$ odom relative to the $c_0$ frame; $p_{o_k}^{c_0}$ is the position of the frame-$k$ odom relative to the $c_0$ frame; $v_{o_k}$ is the velocity of the frame-$k$ odom;
rewriting Equation 1 as a linear expression in the quantities to be optimized, then solving it as a linear least-squares problem to obtain the optimal initialization values and complete the initialization of velocity, gravity acceleration and scale factor.
The step S3 specifically includes the following steps:
a. calculating to obtain a rough anchor point position based on a carrier phase double-difference model;
b. calculating a yaw angle between an ENU coordinate system and a local world coordinate system according to a VISION/SINS/WHEEL combined dynamic initialization result, a Doppler observation value and the speed of a satellite;
c. and aligning a local world coordinate system and a global ECEF coordinate system according to the result of the VISION/SINS/WHEEL combined dynamic initialization to finish fine adjustment of the position of the anchor point.
The step a specifically comprises the following steps:
solving the following model to obtain the rough anchor position in the global ECEF coordinate system:

$\min_{b_{ur}} \sum_{k=1}^{n} \sum_{i,j=1}^{m} \big\| \lambda\,(\Delta\Phi^{ij} - \Delta N^{ij}) - (e_i - e_j)^T b_{ur} \big\|_P^2$

where $\Delta\Phi^{ij}$ is the double difference, between the $i$-th and $j$-th satellites, of the carrier-phase observations of the receiver and the base station; $\lambda$ is the carrier wavelength; $e_i$ is the observation (line-of-sight) vector from the base station to satellite $i$; $b_{ur}$ is the relative deviation between the current receiver position and the base-station position, the quantity to be solved; $\Delta N^{ij}$ is the difference of the integer ambiguities observed for the $i$-th and $j$-th satellites; $n$ is the sliding-window size; $k$ is the sliding-window index; $m$ is the number of raw carrier-phase observations in the $k$-th window; $\|\cdot\|_P$ is the Mahalanobis norm.
The step b specifically comprises the following steps:
solving the following model to obtain the yaw angle:

$\min_{yaw,\ \delta\dot{t}_r} \sum_{k=1}^{n} \sum_{i=1}^{m} \big\| \lambda f_d^i - e_i^T\big( v_s^i - R_n^e\, R_w^n(yaw)\, v_w \big) - \delta\dot{t}_r + \delta\dot{t}^i \big\|_P^2$

where $\delta\dot{t}_r$ is the receiver frequency drift; $\lambda$ is the carrier wavelength; $f_d^i$ is the raw Doppler observation; $e_i$ is the observation vector from the receiver to satellite $i$; $v_s^i$ is the velocity of the $i$-th satellite; $R_w^n(yaw)$ is the transformation from the local world coordinate system to the ENU coordinate system, parameterized by the yaw angle; $R_n^e$ is the transformation from the ENU coordinate system to the ECEF coordinate system; $v_w$ is the carrier velocity in the local world coordinate system; $\delta\dot{t}^i$ is the frequency drift of the $i$-th satellite; $n$ is the sliding-window size; $k$ is the sliding-window index; $m$ is the number of raw carrier-phase observations in the $k$-th window; $\|\cdot\|_P$ is the Mahalanobis norm; $yaw$ is the yaw angle.
The step c specifically comprises the following steps:
solving the following model to obtain the accurate anchor position and complete the alignment of the local world coordinate system and the global ECEF coordinate system:

$\min_{p_{anc}} \sum_{k=1}^{n} \sum_{i,j=1}^{m} \big\| \lambda\,(\Delta\Phi^{ij} - \Delta N^{ij}) - (e_i - e_j)^T \big( p_{anc} + R_w^e P_w - p_{base} \big) \big\|_P^2$

where $p_{anc}$ is initialized with the rough anchor coordinates; $e_i$ is the observation vector from the base station to satellite $i$; $\lambda$ is the carrier wavelength; $R_w^e$ is the transformation from the local world coordinate system to the ECEF coordinate system; $P_w$ is the carrier position in the local world coordinate system; $p_{base}$ is the base-station coordinate; $\Delta N^{ij}$ is the difference of the integer ambiguities observed for the $i$-th and $j$-th satellites; $\Delta\Phi^{ij}$ is the difference between the carrier-phase observations of the receiver and the base station for the $i$-th and $j$-th satellites; $n$ is the sliding-window size; $k$ is the sliding-window index; $m$ is the number of raw carrier-phase observations in the $k$-th window; $\|\cdot\|_P$ is the Mahalanobis norm.
The step S4 specifically includes the following steps:
according to the IMU pre-integration model, the IMU pre-integration quantities between two frames are computed as:

$\alpha_{b_{k+1}}^{b_k} = \iint_{t \in [t_k, t_{k+1}]} R_t^{b_k} (\hat{a}_t - b_{a_t} - n_a)\, dt^2$
$\beta_{b_{k+1}}^{b_k} = \int_{t \in [t_k, t_{k+1}]} R_t^{b_k} (\hat{a}_t - b_{a_t} - n_a)\, dt$
$\gamma_{b_{k+1}}^{b_k} = \int_{t \in [t_k, t_{k+1}]} \tfrac{1}{2}\, \gamma_t^{b_k} \otimes \begin{bmatrix} 0 \\ \hat{\omega}_t - b_{\omega_t} - n_\omega \end{bmatrix} dt$

where $\alpha$ is the position pre-integration quantity; $b_k$ is the IMU frame at time $k$; $R_t^{b_k}$ is the rotation at time $t$ relative to $b_k$; $\hat{a}_t$ is the accelerometer output; $b_{a_t}$ is the accelerometer bias (zero drift); $n_a$ is the accelerometer Gaussian white noise; $\hat{\omega}_t$ is the gyroscope output; $b_{\omega_t}$ is the gyroscope bias; $n_\omega$ is the gyroscope Gaussian white noise; $\gamma_t^{b_k}$ is the rotation (quaternion) at time $t$ relative to $b_k$; $t_k$ is time $k$; $\beta$ is the velocity pre-integration quantity; $\gamma$ is the rotation pre-integration quantity; $\otimes$ denotes quaternion multiplication.
Modeling the linear velocity and the angular velocity gives the wheel-odometer pre-integration model:

$\eta_{o_{k+1}}^{o_k} = \int_{t \in [t_k, t_{k+1}]} R_t^{o_k} (\hat{v}_t - n_v)\, dt$
$\gamma_{o_{k+1}}^{o_k} = \int_{t \in [t_k, t_{k+1}]} \tfrac{1}{2}\, \gamma_t^{o_k} \otimes \begin{bmatrix} 0 \\ \hat{\omega}_t - n_\omega \end{bmatrix} dt$

where $\eta$ is the position pre-integration quantity; $o_k$ is the odom frame at time $k$; $R_t^{o_k}$ is the rotation at time $t$ relative to $o_k$; $\hat{v}_t$ is the linear velocity; $n_v$ is the linear-velocity Gaussian white noise; $\gamma$ is the rotation pre-integration quantity; $\hat{\omega}_t$ is the yaw rate; $n_\omega$ is the yaw-rate Gaussian white noise; $\gamma_t^{o_k}$ is the rotation at time $t$ relative to $o_k$.
The step S5 specifically includes the following steps:
the following equation is used as the nonlinear residual model:

$\min_x \Big\{ \| r_p - H_p x \|^2 + \sum_{i=1}^{n} \| r_i(\hat{z}_i, x) \|_{P_i}^2 \Big\}$

where $x$ is the estimate to be optimized; the first term is the marginalized prior information; $r_p$ is the marginalization residual; $H_p$ is the marginalization update matrix; $\|\cdot\|_{P_i}$ is the Mahalanobis distance; $n$ is the number of residual constraints; $r_i(\hat{z}_i, x)$ is the residual function of the $i$-th sensor measurement;
The IMU residual is

$r_B = \begin{bmatrix} \delta\alpha \\ \delta\beta \\ \delta\theta \\ \delta b_a \\ \delta b_g \end{bmatrix} = \begin{bmatrix} R_w^{b_k}\big( p_{b_{k+1}}^{w} - p_{b_k}^{w} - v_{b_k}^{w} \Delta t_k + \tfrac{1}{2} g^w \Delta t_k^2 \big) - \hat{\alpha}_{b_{k+1}}^{b_k} \\ R_w^{b_k}\big( v_{b_{k+1}}^{w} - v_{b_k}^{w} + g^w \Delta t_k \big) - \hat{\beta}_{b_{k+1}}^{b_k} \\ 2\big[ (\hat{\gamma}_{b_{k+1}}^{b_k})^{-1} \otimes (q_{b_k}^{w})^{-1} \otimes q_{b_{k+1}}^{w} \big]_{xyz} \\ b_{a_{k+1}} - b_{a_k} \\ b_{g_{k+1}} - b_{g_k} \end{bmatrix}$

where $\delta\alpha$ is the position increment; $\delta\beta$ is the velocity increment; $\delta\theta$ is the angle increment; $\delta b_a$ is the accelerometer-bias increment; $\delta b_g$ is the gyroscope-bias increment; $R_w^{b_k}$ is the rotation from the local world frame to the frame-$k$ IMU; $p_{b_k}^{w}$ is the position of the frame-$k$ IMU in local world coordinates; $v_{b_k}^{w}$ is the velocity of the frame-$k$ IMU in the local world coordinate system; $\Delta t_k$ is the time interval between the two frames; $g^w$ is the gravity acceleration; $\hat{\alpha}$ is the position pre-integration quantity; $\hat{\beta}$ is the velocity pre-integration quantity; $q_{b_k}^{w}$ is the rotation from the frame-$k$ IMU to the local world frame; $\hat{\gamma}$ is the rotation pre-integration quantity; $[\cdot]_{xyz}$ takes the imaginary part of a quaternion; $b_{a_k}$ and $b_{g_k}$ are the frame-$k$ accelerometer and gyroscope biases (zero drift); $\otimes$ is quaternion multiplication;
The wheel-odometer residual is

$r_O = \begin{bmatrix} \delta\eta \\ \delta\theta \end{bmatrix} = \begin{bmatrix} R_w^{o_k}\big( p_{o_{k+1}}^{w} - p_{o_k}^{w} \big) - \hat{\eta}_{o_{k+1}}^{o_k} \\ 2\big[ (\hat{\gamma}_{o_{k+1}}^{o_k})^{-1} \otimes (q_{o_k}^{w})^{-1} \otimes q_{o_{k+1}}^{w} \big]_{xyz} \end{bmatrix}$

where $\delta\eta$ is the position increment; $\delta\theta$ is the angle increment; $R_w^{o_k}$ is the rotation from the local world frame to the frame-$k$ odom; $p_{o_k}^{w}$ is the position of the frame-$k$ wheel-speed odometer in local world coordinates; $\hat{\eta}$ is the position pre-integration quantity; $q_{o_k}^{w}$ is the rotation from the frame-$k$ wheel-speed odometer coordinates to the local world frame; $\hat{\gamma}$ is the rotation pre-integration quantity;
The visual reprojection residual is

$r_C = \begin{bmatrix} x_{c_j}/z_{c_j} - \hat{u}_{c_j} \\ y_{c_j}/z_{c_j} - \hat{v}_{c_j} \end{bmatrix}$

where $[\hat{u}_{c_j}, \hat{v}_{c_j}]^T$ are the normalized camera coordinates of the landmark in the $j$-th camera frame and $[x_{c_j}, y_{c_j}, z_{c_j}]^T$ are its three-dimensional coordinates in the $j$-th camera frame, computed as

$\begin{bmatrix} x_{c_j} \\ y_{c_j} \\ z_{c_j} \end{bmatrix} = R_b^c \Big( R_w^{b_j} \Big( R_{b_i}^{w} \Big( R_c^b \frac{1}{\lambda_l} \begin{bmatrix} \hat{u}_{c_i} \\ \hat{v}_{c_i} \\ 1 \end{bmatrix} + p_c^b \Big) + p_{b_i}^{w} - p_{b_j}^{w} \Big) - p_c^b \Big)$

in which $R_b^c$ is the rotation from the IMU to the camera coordinate system; $R_w^{b_j}$ is the rotation from the local world frame to the frame-$j$ IMU; $R_{b_i}^{w}$ is the rotation from the frame-$i$ IMU to the local world frame; $R_c^b$ is the rotation from the camera to the IMU; $\lambda_l$ is the inverse depth; $p_c^b$ is the camera-to-IMU translation; $p_{b_i}^{w}$ is the frame-$i$ IMU position in the local world coordinate system; $p_{b_j}^{w}$ is the frame-$j$ IMU position in the local world coordinate system;
The GNSS carrier-phase residual is

$r_G = \lambda\,(\Delta\Phi^{ij} - \Delta N^{ij}) - (e_i - e_j)^T \big( P_{ecef} - p_{base} \big), \qquad P_{ecef} = p_{anc} + R_n^e R_w^n\, p_{b_k}^{w}$

where $\lambda$ is the carrier wavelength; $e_i$ is the observation vector of the $i$-th satellite; $P_{ecef}$ is the receiver's ECEF coordinate; $\Delta N^{ij}$ is the difference of the integer ambiguities observed for the $i$-th and $j$-th satellites; $\Delta\Phi^{ij}$ is the difference between the carrier-phase observations of the receiver and the base station for the $i$-th and $j$-th satellites; $p_{base}$ is the ECEF coordinate of the reference station; $R_w^n$ is the transformation from the local world coordinate system to the ENU coordinate system; $R_n^e$ is the transformation from the ENU coordinate system to the ECEF coordinate system; $p_{b_k}^{w}$ is the position of the frame-$k$ IMU in the local world coordinate system; $p_{anc}$ is the anchor coordinate;
The plane-constraint residual is

$r_P = \begin{bmatrix} S\,\mathrm{Log}\big( R_w^{\pi} R_b^{w} R_o^{b} \big) \\ e_3^T\, R_w^{\pi} \big( p_b^w + R_b^w\, p_o^b \big) - \hat{h}^{\pi} \end{bmatrix}$

where $S = [e_1\ e_2]^T$ is a set (selection) matrix picking the roll angle $\phi$ and pitch angle $\theta$; $R_o^b$ is the rotation from the odom to the IMU coordinate system; $R_b^w$ is the attitude of the IMU in the local world coordinate system; $R_w^{\pi}$ is the rotation from the local world coordinate system to the virtual plane $\pi$; $e_3 = [0, 0, 1]^T$ is a set (selection) vector; $p_b^w$ is the IMU position in the local world coordinate system; $p_o^b$ is the translation from the odom to the IMU coordinate system; $\hat{h}^{\pi}$ is the height of the carrier above the virtual plane $\pi$, subject to Gaussian noise $n_h$.
Through its novel processing pipeline and multi-sensor fusion computation, the high-precision positioning method described above achieves high-precision positioning in complex environments with high reliability, good accuracy and good robustness.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
Fig. 2 is a schematic diagram illustrating a comparison between a conventional high-precision positioning track and a true value track in the embodiment of the present invention.
FIG. 3 is a diagram illustrating a comparison between a location track and a truth track according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of the error comparison in the X direction of the conventional high-precision positioning method and the method of the present invention in the embodiment of the present invention.
Fig. 5 is a schematic diagram illustrating an error comparison between the conventional high-precision positioning method and the method of the present invention in the Y direction in the embodiment of the present invention.
Fig. 6 is a schematic diagram illustrating comparison of errors in the Z direction between the conventional high-precision positioning method and the method of the present invention in the embodiment of the present invention.
Detailed Description
FIG. 1 is a schematic flow chart of the method of the present invention: the high-precision positioning method under the complex environment comprises the following steps:
s1, acquiring raw observation data from the GNSS sensor, the VISION sensor, the SINS inertial sensor and the WHEEL odometer, and preprocessing and time-synchronizing the sensor data; the method specifically comprises the following steps:
the original observation data of the GNSS sensor comprises satellite ephemeris data, carrier phase original observation data, GNSS base station differential data, real-time ambiguity resolution data and Doppler original observation data; the original observation data of the VISION vision sensor is gray image data transmitted by a camera in real time; the original observation data of the SINS inertial sensor is IMU output data; the original observation data of the WHEEL type odometer is real-time left and right WHEEL rotating speed data of the vehicle; preprocessing GNSS original observation data, and removing abnormal data; then, based on a PPS time service system, time synchronization is carried out on the data of the four sensors;
s2, performing VISION/SINS/WHEEL combined dynamic initialization according to the acquired sensor data; the method specifically comprises the following steps:
A. within the sliding window, computing up-to-scale camera poses and feature point coordinates using a pure-visual Structure From Motion (SfM) method;
B. calibrating a null shift value of the gyroscope according to the calculation result obtained in the step A;
C. constructing an initialization vector according to the initialization speed, the gravity acceleration and the scale factor of the vision, the IMU and the wheel type odometer; the method specifically comprises the following steps:
constructing the quantity to be initialized from the velocities, the gravity acceleration and the scale factor of vision, the IMU and the wheel odometer as

$\mathcal{X} = [\,v_{b_0},\ v_{b_1},\ \dots,\ v_{b_n},\ g^{c_0},\ s\,]^T$

where $v_{b_k}$ is the velocity in the IMU coordinate system of the $k$-th frame, $g^{c_0}$ is the gravity acceleration expressed in the $c_0$ frame, and $s$ is the scale factor;
then, two consecutive frames $k$ and $k+1$ in the sliding window satisfy the following Equation 1:

$\alpha_{b_{k+1}}^{b_k} = R_{c_0}^{b_k}\big( s\,p_{b_{k+1}}^{c_0} - s\,p_{b_k}^{c_0} + \tfrac{1}{2} g^{c_0} \Delta t_k^2 \big) - v_{b_k}\,\Delta t_k$
$\beta_{b_{k+1}}^{b_k} = R_{c_0}^{b_k}\big( g^{c_0}\,\Delta t_k + R_{b_{k+1}}^{c_0} v_{b_{k+1}} \big) - v_{b_k}$
$\eta_{o_{k+1}}^{o_k} = R_{c_0}^{o_k}\big( s\,p_{o_{k+1}}^{c_0} - s\,p_{o_k}^{c_0} \big)$

where $\alpha_{b_{k+1}}^{b_k}$ is the IMU position pre-integration; $R_{c_0}^{b_k}$ is the rotation of $c_0$ relative to the frame-$k$ IMU; $p_{b_k}^{c_0}$ is the position of the frame-$k$ IMU relative to the $c_0$ frame; $\Delta t_k$ is the time interval between the two frames; $R_{b_k}^{c_0}$ is the rotation of the frame-$k$ IMU relative to the $c_0$ frame; $\beta_{b_{k+1}}^{b_k}$ is the IMU velocity pre-integration; $\eta_{o_{k+1}}^{o_k}$ is the wheel-odometer pre-integration; $R_{o_k}^{c_0}$ is the rotation of the frame-$k$ odom relative to the $c_0$ frame; $p_{o_k}^{c_0}$ is the position of the frame-$k$ odom relative to the $c_0$ frame; $v_{o_k}$ is the velocity of the frame-$k$ odom;
rewriting Equation 1 as a linear expression in the quantities to be optimized, then solving it as a linear least-squares problem to obtain the optimal initialization values and complete the initialization of velocity, gravity acceleration and scale factor;
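The linear least-squares solve described above can be sketched as follows, assuming the per-frame-pair constraint matrices and measurement vectors have already been assembled from Equation 1 (the function and argument names are hypothetical):

```python
import numpy as np

def solve_initialization(A_blocks, b_blocks):
    """Stack per-frame-pair linear constraints A_k x = b_k and solve for the
    initialization vector x = [v_b0 .. v_bn, g_c0, s] by linear least squares."""
    A = np.vstack([np.asarray(a, dtype=float) for a in A_blocks])
    b = np.concatenate([np.asarray(v, dtype=float) for v in b_blocks])
    # minimize ||A x - b||^2
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

The recovered gravity vector would then be refined against the known local gravity magnitude (step D) before restoring metric scale.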
D. c, adjusting the initialization vector constructed in the step C by using the local actual gravity acceleration, restoring to a real scale and aligning to finally obtain an accurate initialization result under a local world coordinate system;
s3, aligning the vision sensor data, the inertia sensor data, the wheel type odometer data and the GNSS sensor data according to the initialization result of the step S2; the method specifically comprises the following steps:
a. calculating to obtain a rough anchor point position based on a carrier phase double difference model; the method specifically comprises the following steps:
solving the following model to obtain the rough anchor position in the global ECEF coordinate system:

$\min_{b_{ur}} \sum_{k=1}^{n} \sum_{i,j=1}^{m} \big\| \lambda\,(\Delta\Phi^{ij} - \Delta N^{ij}) - (e_i - e_j)^T b_{ur} \big\|_P^2$

where $\Delta\Phi^{ij}$ is the double difference, between the $i$-th and $j$-th satellites, of the carrier-phase observations of the receiver and the base station; $\lambda$ is the carrier wavelength; $e_i$ is the observation (line-of-sight) vector from the base station to satellite $i$; $b_{ur}$ is the relative deviation between the current receiver position and the base-station position, the quantity to be solved; $\Delta N^{ij}$ is the difference of the integer ambiguities observed for the $i$-th and $j$-th satellites; $n$ is the sliding-window size; $k$ is the sliding-window index; $m$ is the number of raw carrier-phase observations in the $k$-th window; $\|\cdot\|_P$ is the Mahalanobis norm;
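A single double-difference carrier-phase residual of the kind summed in this model can be sketched as below; this is a generic textbook formulation under the stated symbol definitions, not code from the patent, and the function name is hypothetical.

```python
import numpy as np

def dd_phase_residual(dphi_ij, lam, e_i, e_j, b_ur, dN_ij):
    """Double-difference carrier-phase residual for satellites i and j.

    dphi_ij : double-differenced carrier-phase observation (cycles)
    lam     : carrier wavelength (m)
    e_i,e_j : line-of-sight unit vectors from the base station to satellites i, j
    b_ur    : receiver-minus-base baseline (m), the quantity being estimated
    dN_ij   : double-differenced integer ambiguity (cycles)
    """
    predicted = (np.asarray(e_i) - np.asarray(e_j)) @ np.asarray(b_ur)
    return lam * (dphi_ij - dN_ij) - predicted
```

Stacking these residuals over all satellite pairs and window frames and minimizing their squared Mahalanobis norm yields the rough anchor baseline.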
b. calculating a yaw angle between an ENU coordinate system and a local world coordinate system according to a VISION/SINS/WHEEL combined dynamic initialization result, a Doppler observation value and the speed of a satellite; the method specifically comprises the following steps:
solving the following model to obtain the yaw angle:

$\min_{yaw,\ \delta\dot{t}_r} \sum_{k=1}^{n} \sum_{i=1}^{m} \big\| \lambda f_d^i - e_i^T\big( v_s^i - R_n^e\, R_w^n(yaw)\, v_w \big) - \delta\dot{t}_r + \delta\dot{t}^i \big\|_P^2$

where $\delta\dot{t}_r$ is the receiver frequency drift; $\lambda$ is the carrier wavelength; $f_d^i$ is the raw Doppler observation; $e_i$ is the observation vector from the receiver to satellite $i$; $v_s^i$ is the velocity of the $i$-th satellite; $R_w^n(yaw)$ is the transformation from the local world coordinate system to the ENU coordinate system, parameterized by the yaw angle; $R_n^e$ is the transformation from the ENU coordinate system to the ECEF coordinate system; $v_w$ is the carrier velocity in the local world coordinate system; $\delta\dot{t}^i$ is the frequency drift of the $i$-th satellite; $n$ is the sliding-window size; $k$ is the sliding-window index; $m$ is the number of raw carrier-phase observations in the $k$-th window; $\|\cdot\|_P$ is the Mahalanobis norm; $yaw$ is the yaw angle;
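The geometric core of this step can be illustrated with a simplified closed form: if the Doppler observations have already been converted into an ENU velocity of the carrier, and the local world and ENU frames differ only by a rotation about the vertical axis, the yaw follows directly from the horizontal velocity components. This is a deliberately reduced sketch of the alignment idea, not the patent's full Doppler least-squares model.

```python
import numpy as np

def yaw_from_velocities(v_w, v_enu):
    """Closed-form yaw between the local world frame and ENU, assuming
    v_enu = R_z(yaw) @ v_w (rotation about the vertical axis only)."""
    a = np.arctan2(v_enu[1], v_enu[0]) - np.arctan2(v_w[1], v_w[0])
    return (a + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
```

In the full model the yaw is instead estimated jointly with the receiver frequency drift over the whole sliding window.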
c. aligning a local world coordinate system and a global ECEF coordinate system according to a VISION/SINS/WHEEL combined dynamic initialization result to finish fine adjustment of the position of an anchor point; the method specifically comprises the following steps:
solving the following model to obtain the accurate anchor position and complete the alignment of the local world coordinate system and the global ECEF coordinate system:

$\min_{p_{anc}} \sum_{k=1}^{n} \sum_{i,j=1}^{m} \big\| \lambda\,(\Delta\Phi^{ij} - \Delta N^{ij}) - (e_i - e_j)^T \big( p_{anc} + R_w^e P_w - p_{base} \big) \big\|_P^2$

where $p_{anc}$ is initialized with the rough anchor coordinates; $e_i$ is the observation vector from the base station to satellite $i$; $\lambda$ is the carrier wavelength; $R_w^e$ is the transformation from the local world coordinate system to the ECEF coordinate system; $P_w$ is the carrier position in the local world coordinate system; $p_{base}$ is the base-station coordinate; $\Delta N^{ij}$ is the difference of the integer ambiguities observed for the $i$-th and $j$-th satellites; $\Delta\Phi^{ij}$ is the difference between the carrier-phase observations of the receiver and the base station for the $i$-th and $j$-th satellites; $n$ is the sliding-window size; $k$ is the sliding-window index; $m$ is the number of raw carrier-phase observations in the $k$-th window; $\|\cdot\|_P$ is the Mahalanobis norm;
s4, constructing a pre-integration model of the inertial sensor and the wheel type odometer; the method specifically comprises the following steps:
according to the IMU pre-integration model, the IMU pre-integration quantities between two frames are computed as:

$\alpha_{b_{k+1}}^{b_k} = \iint_{t \in [t_k, t_{k+1}]} R_t^{b_k} (\hat{a}_t - b_{a_t} - n_a)\, dt^2$
$\beta_{b_{k+1}}^{b_k} = \int_{t \in [t_k, t_{k+1}]} R_t^{b_k} (\hat{a}_t - b_{a_t} - n_a)\, dt$
$\gamma_{b_{k+1}}^{b_k} = \int_{t \in [t_k, t_{k+1}]} \tfrac{1}{2}\, \gamma_t^{b_k} \otimes \begin{bmatrix} 0 \\ \hat{\omega}_t - b_{\omega_t} - n_\omega \end{bmatrix} dt$

where $\alpha$ is the position pre-integration quantity; $b_k$ is the IMU frame at time $k$; $R_t^{b_k}$ is the rotation at time $t$ relative to $b_k$; $\hat{a}_t$ is the accelerometer output; $b_{a_t}$ is the accelerometer bias (zero drift); $n_a$ is the accelerometer Gaussian white noise; $\hat{\omega}_t$ is the gyroscope output; $b_{\omega_t}$ is the gyroscope bias; $n_\omega$ is the gyroscope Gaussian white noise; $\gamma_t^{b_k}$ is the rotation (quaternion) at time $t$ relative to $b_k$; $t_k$ is time $k$; $\beta$ is the velocity pre-integration quantity; $\gamma$ is the rotation pre-integration quantity; $\otimes$ denotes quaternion multiplication;
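In discrete form, the pre-integration quantities are propagated one IMU sample at a time. The sketch below shows a single Euler step (noise terms dropped, rotation kept as a matrix rather than a quaternion); it is an illustrative assumption about the discretization, and the function name is hypothetical.

```python
import numpy as np

def propagate_preintegration(alpha, beta, R, acc, gyro, ba, bg, dt):
    """One Euler step of IMU pre-integration expressed in the first frame b_k.

    alpha, beta : position / velocity pre-integration terms (3-vectors)
    R           : rotation matrix from the current IMU frame to b_k
    acc, gyro   : raw accelerometer / gyroscope samples
    ba, bg      : accelerometer / gyroscope biases
    """
    a = R @ (np.asarray(acc, dtype=float) - np.asarray(ba, dtype=float))
    alpha = alpha + beta * dt + 0.5 * a * dt**2   # double integration of acceleration
    beta = beta + a * dt                          # single integration
    w = np.asarray(gyro, dtype=float) - np.asarray(bg, dtype=float)
    th = np.linalg.norm(w) * dt
    if th > 1e-12:
        k = w / np.linalg.norm(w)
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        dR = np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)  # Rodrigues
    else:
        dR = np.eye(3)
    return alpha, beta, R @ dR
```

Iterating this step over all IMU samples between frames $k$ and $k+1$ yields the $\alpha$, $\beta$, $\gamma$ quantities used in the residual model.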
modeling the linear velocity and the angular velocity gives the wheel-odometer pre-integration model:

$\eta_{o_{k+1}}^{o_k} = \int_{t \in [t_k, t_{k+1}]} R_t^{o_k} (\hat{v}_t - n_v)\, dt$
$\gamma_{o_{k+1}}^{o_k} = \int_{t \in [t_k, t_{k+1}]} \tfrac{1}{2}\, \gamma_t^{o_k} \otimes \begin{bmatrix} 0 \\ \hat{\omega}_t - n_\omega \end{bmatrix} dt$

where $\eta$ is the position pre-integration quantity; $o_k$ is the odom frame at time $k$; $R_t^{o_k}$ is the rotation at time $t$ relative to $o_k$; $\hat{v}_t$ is the linear velocity; $n_v$ is the linear-velocity Gaussian white noise; $\gamma$ is the rotation pre-integration quantity; $\hat{\omega}_t$ is the yaw rate; $n_\omega$ is the yaw-rate Gaussian white noise; $\gamma_t^{o_k}$ is the rotation at time $t$ relative to $o_k$;
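Since the wheel odometer constrains planar motion, its pre-integration can be sketched in 2-D: forward speed and yaw rate come from the left/right wheel speeds, and one Euler step accumulates position and heading relative to frame $o_k$. This is an assumed simplification for illustration (noise dropped, planar motion), with hypothetical names.

```python
import numpy as np

def propagate_odom(eta, theta, v, omega, dt):
    """One Euler step of planar wheel-odometer pre-integration in frame o_k.

    eta   : 2-D position pre-integration term
    theta : accumulated yaw relative to o_k
    v     : forward speed from the encoders, e.g. (v_left + v_right) / 2
    omega : yaw rate from the encoders, e.g. (v_right - v_left) / track_width
    """
    c, s = np.cos(theta), np.sin(theta)
    eta = eta + dt * np.array([c * v, s * v])  # rotate body velocity into o_k
    theta = theta + omega * dt
    return eta, theta
```

The accumulated $\eta$ and $\theta$ correspond to the position and rotation pre-integration quantities of the model above.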
s5, based on the constructed pre-integration model, aggregating the GNSS original observation residual error, the visual residual error, the inertia pre-integration model, the wheel type odometer model and the plane constraint to construct a nonlinear residual error model; the method specifically comprises the following steps:
the following equation is used as a nonlinear residual model:
In the formula: x is the estimate to be optimized; the marginalized prior information; r p, the marginalization residual; H p, the marginalization update matrix; the Mahalanobis distance; n, the number of residual constraints; and the residual function of a sensor measurement;
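The aggregated cost can be sketched as a marginalization prior plus Mahalanobis-weighted sensor residuals. The interface below (residual callables paired with square-root information matrices) is an illustrative assumption, not the patent's implementation.

```python
import numpy as np

def total_cost(x, r_p, H_p, residual_fns, sqrt_infos):
    """Nonlinear least-squares cost of the form
       ||r_p + H_p x||^2 + sum_i ||W_i r_i(x)||^2.

    r_p/H_p: marginalization prior residual and update matrix;
    residual_fns: per-sensor residual callables (IMU, odometer, visual,
    GNSS, plane constraint); sqrt_infos: matching square-root information
    matrices implementing the Mahalanobis weighting.
    """
    cost = float(np.sum((r_p + H_p @ x) ** 2))  # marginalized prior term
    for r_fn, W in zip(residual_fns, sqrt_infos):
        r = W @ r_fn(x)                         # whitened sensor residual
        cost += float(r @ r)
    return cost
```

In practice such a cost is handed to an iterative solver (e.g. Gauss-Newton or Levenberg-Marquardt) rather than evaluated directly.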
In the formula: the position increment; the velocity increment; the angular increment; the accelerometer-bias increment; the gyroscope-bias increment; the rotation from the local world frame to the k-th-frame IMU; the position of the k-th-frame IMU in the local world frame; the velocity of the k-th-frame IMU in the local world frame; the time interval between the two frames; g w, the gravitational acceleration; the position pre-integration term; the velocity pre-integration term; the rotation from the k-th-frame IMU coordinates to the local world coordinates; the rotation pre-integration term; the imaginary part of a quaternion; the k-th-frame accelerometer bias; the k-th-frame gyroscope bias; and quaternion multiplication;
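Under the definitions above, the position, velocity and bias blocks of the IMU residual can be sketched as follows; the quaternion rotation block is omitted for brevity, and the frame conventions mirror common visual-inertial practice rather than the patent's exact notation.

```python
import numpy as np

def imu_residual(p_k, v_k, R_k, p_k1, v_k1, ba_k, bg_k, ba_k1, bg_k1,
                 alpha, beta, dt, g_w):
    """Position, velocity and bias blocks of the IMU pre-integration
    residual between frames k and k+1 (rotation block omitted).

    R_k rotates world-frame vectors into the k-th IMU frame; alpha/beta
    are the position/velocity pre-integration terms; g_w is gravity
    expressed in the local world frame.
    """
    r_p = R_k @ (p_k1 - p_k - v_k * dt + 0.5 * g_w * dt * dt) - alpha
    r_v = R_k @ (v_k1 - v_k + g_w * dt) - beta
    r_ba = ba_k1 - ba_k   # accelerometer-bias random-walk residual
    r_bg = bg_k1 - bg_k   # gyroscope-bias random-walk residual
    return np.concatenate([r_p, r_v, r_ba, r_bg])
```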
In the formula: the position increment; the angular increment; the rotation from the local world frame to the k-th-frame odometer; the position of the k-th-frame wheel-speed odometer in the local world frame; the velocity of the k-th-frame wheel-speed odometer in the local world frame; the position pre-integration term; the rotation from the k-th-frame wheel-speed odometer coordinates to the local world coordinates; and the rotation pre-integration term;
In the formula: the normalized camera coordinates of a landmark in the j-th camera frame; and the three-dimensional coordinates of the landmark in the j-th camera frame. x cj, y cj and z cj are computed from the accompanying expression, in which: the rotation extrinsic between the camera and IMU frames; the rotation from the local world frame to the j-th-frame IMU; the rotation from the i-th-frame IMU to the local world frame; the camera-to-IMU rotation; the inverse depth; the camera-IMU translation extrinsic; the position of the i-th-frame IMU in the local world frame; the translation between the local world frame and the j-th-frame IMU; and the IMU-camera translation extrinsic;
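The landmark transfer from camera i to camera j underlying the visual reprojection residual can be sketched as below; the inverse-depth parameterization on the normalized image plane and the argument names are illustrative assumptions.

```python
import numpy as np

def reproj_residual(p_obs_i, p_obs_j, inv_depth,
                    R_wbi, p_wbi, R_wbj, p_wbj, R_bc, p_bc):
    """Reprojection residual of one landmark observed in frames i and j.

    The landmark is parameterized by inverse depth along its normalized
    observation in camera i, transferred through IMU i -> world -> IMU j,
    and compared with its normalized observation in camera j.
    R_bc/p_bc: camera-to-IMU extrinsic rotation and translation.
    """
    pc_i = np.append(p_obs_i, 1.0) / inv_depth  # 3-D point in camera i
    pb_i = R_bc @ pc_i + p_bc                   # camera i -> IMU i
    pw = R_wbi @ pb_i + p_wbi                   # IMU i -> local world
    pb_j = R_wbj.T @ (pw - p_wbj)               # local world -> IMU j
    pc_j = R_bc.T @ (pb_j - p_bc)               # IMU j -> camera j
    return pc_j[:2] / pc_j[2] - p_obs_j         # normalized-plane error
```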
In the formula: the carrier wavelength; the observation vector of the i-th satellite; the receiver's ECEF coordinates; the difference of the integer ambiguities observed for the i-th and j-th satellites; the difference between the receiver's and the base station's carrier-phase observations for the i-th and j-th satellites; the ECEF coordinates of the reference station; the transformation from the local world coordinate system to the ENU coordinate system; the position of the k-th-frame IMU coordinates in the local world frame; and the anchor-point coordinates;
In the formula: a selection matrix (defined in the accompanying expression); the rotation from the odometer to the IMU coordinate system; the attitude of the IMU in the local world frame; the rotation from the local world frame to the virtual plane; e 3, a selection vector (defined in the accompanying expression); the position of the IMU in the local world frame; the translation from the odometer to the IMU coordinate system; the roll angle; the pitch angle; the Gaussian noise term; and the height from the local world frame to the virtual plane;
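A minimal sketch of the plane-constraint residual, penalizing roll/pitch and height relative to a virtual plane; the odom-to-IMU extrinsic of the full model is omitted here, and the selector matrices are assumed as shown.

```python
import numpy as np

def plane_residual(R_pw, R_wb, p_wb, z_pw):
    """Residual keeping the IMU near a virtual horizontal plane pi:
    two small roll/pitch components plus one height component.

    R_pw: rotation from local world frame to plane frame;
    R_wb/p_wb: IMU attitude and position in the local world frame;
    z_pw: nominal height of the plane.
    """
    S = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])   # selects the two tilt components
    e3 = np.array([0.0, 0.0, 1.0])    # selects the height component
    R_pb = R_pw @ R_wb                # IMU attitude w.r.t. the plane
    ang_res = S @ R_pb[:, 2]          # ~[tilt about x, tilt about y] for small angles
    h_res = e3 @ (R_pw @ p_wb) - z_pw # height above the plane
    return np.concatenate([ang_res, [h_res]])
```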
In a specific implementation, the value of n is 5; for n = 1 to 5, the residual functions correspond respectively to the IMU residual factor, the wheel-odometer residual factor, the visual reprojection residual factor, the carrier-phase double-difference measurement factor, and the stochastic plane-constraint factor;
and S6, solving the constructed nonlinear residual error model in real time to complete high-precision positioning in a complex environment.
The high-precision positioning method in a complex environment of the present invention is further described below with reference to an embodiment:
Images, IMU data and GNSS raw observations are collected with a MYNT EYE D1000-IR-120 camera and a self-developed satellite-signal receiver; GNSS differential data are received over a TCP network connection, and vehicle wheel-speed odometer data are received over OBD. The image data arrive at 30 Hz (only the left-eye images are used), the IMU data at 200 Hz, the GNSS raw observations and differential data at 1 Hz, and the wheel-speed odometer data at 30 Hz; all data are synchronized and published in ROS format through the relevant tools. After the hardware is installed and calibrated, about 30 minutes of data are recorded around a complex environment of densely built-up areas and a garage, and the method of the present invention is compared with an existing high-precision positioning method. FIG. 2 compares the existing high-precision positioning track with the ground-truth track, in which track No. 1 is the ground truth and track No. 2 is the existing high-precision positioning result; FIG. 3 compares the positioning track of the present invention with the ground-truth track, in which track No. 1 is the ground truth and track No. 3 is the positioning result of the present invention. As is apparent from FIG. 2, when the existing high-precision positioning method is used, tracks 1 and 2 still separate visibly in some sections, indicating an unsatisfactory positioning result; in FIG. 3, when the method of the present invention is used, tracks 1 and 3 remain almost entirely overlapped, indicating the high accuracy of the method of the present invention.
Subtracting the ground-truth values from the coordinates produced by the existing high-precision positioning method and by the method of the present invention yields error values in the X, Y and Z directions. FIG. 4 compares the errors of the two methods in the X direction, FIG. 5 in the Y direction, and FIG. 6 in the Z direction. As the figures show, the errors of the positioning method of the present invention are far smaller than those of the existing high-precision positioning method in all of the X, Y and Z directions, demonstrating that positioning accuracy in a complex environment can be greatly improved on the basis of the present invention.
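The per-axis error curves of FIGS. 4-6 can be reproduced from two time-aligned trajectories with a computation of this shape; the per-axis RMSE summary is an addition for illustration, not a metric reported in the source.

```python
import numpy as np

def per_axis_errors(estimated, truth):
    """Per-axis error series and RMSE between an estimated trajectory
    and ground truth, both given as N x 3 arrays of X/Y/Z coordinates."""
    err = np.asarray(estimated) - np.asarray(truth)  # N x 3 error curves
    rmse = np.sqrt(np.mean(err ** 2, axis=0))        # per-axis RMSE summary
    return err, rmse
```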
Claims (10)
1. A high-precision positioning method under a complex environment is characterized by comprising the following steps:
s1, acquiring original observation data of the GNSS sensor, the VISION vision sensor, the SINS inertial sensor and the WHEEL wheel-type odometer, and preprocessing and synchronizing the sensor data;
s2, performing VISION/SINS/WHEEL combined dynamic initialization according to the acquired sensor data;
s3, aligning the vision sensor data, the inertia sensor data, the wheel type odometer data and the GNSS sensor data according to the initialization result of the step S2;
s4, constructing a pre-integration model of the inertial sensor and the wheel type odometer;
s5, based on the constructed pre-integration model, aggregating the GNSS original observation residual error, the visual residual error, the inertia pre-integration model, the wheel type odometer model and the plane constraint to construct a nonlinear residual error model;
and S6, solving the constructed nonlinear residual error model in real time to complete high-precision positioning in a complex environment.
2. The method for high-precision positioning under complex environment according to claim 1, wherein the step S1 specifically includes the following steps:
the original observation data of the GNSS sensor comprises satellite ephemeris data, carrier phase original observation data, GNSS base station differential data, real-time ambiguity resolution data and Doppler original observation data; the original observation data of the VISION vision sensor is gray image data transmitted by a camera in real time; the original observation data of the SINS inertial sensor is IMU output data; the original observation data of the WHEEL type odometer is real-time left and right WHEEL rotating speed data of the vehicle; preprocessing GNSS original observation data, and removing abnormal data; and then, based on a PPS time service system, time synchronization is carried out on the data of the four sensors.
3. The method for high-precision positioning under complex environment according to claim 2, wherein the step S2 specifically includes the following steps:
A. in the sliding window, calculating to obtain the relative scale camera pose and the feature point coordinate by using a pure vision SFM method;
B. calibrating a null shift value of the gyroscope according to the calculation result obtained in the step A;
C. constructing an initialization vector according to the initialization speed, the gravity acceleration and the scale factor of the vision, the IMU and the wheel type odometer;
D. and C, adjusting the initialization vector constructed in the step C by using the local actual gravity acceleration, restoring to a real scale and aligning to finally obtain an accurate initialization result under the local world coordinate system.
4. The method for high-precision positioning under complex environment according to claim 3, wherein the step C specifically comprises the following steps:
constructing the quantity to be initialized from the initial velocity, the gravitational acceleration and the scale factor of the vision, IMU and wheel-type odometer, wherein: the velocity in the IMU coordinate system of the k-th frame; the gravitational acceleration in the c 0 frame; and s, the scale factor;
then, two consecutive frames in the sliding window satisfy the following equation 1:
In the formula: the IMU position pre-integration term; the rotation of c 0 relative to the k-th-frame IMU; the position of the k-th-frame IMU relative to the c 0 frame; the time interval between the two frames; the rotation of the k-th-frame IMU relative to the c 0 frame; the IMU velocity pre-integration term; the wheel-type odometer pre-integration term; the rotation of the k-th-frame odometer relative to the c 0 frame; the position of the k-th-frame odometer relative to the c 0 frame; and the velocity of the k-th-frame odometer;
writing the equation 1 into a linear expression of the quantity to be optimized, and then converting the linear expression into a linear least square solving method to obtain the optimal initialization quantity and finish the initialization of the speed, the gravity acceleration and the scale factor.
5. The method for high-precision positioning under complex environment according to claim 4, wherein the step S3 specifically comprises the following steps:
a. calculating to obtain a rough anchor point position based on a carrier phase double difference model;
b. calculating a yaw angle between an ENU coordinate system and a local world coordinate system according to a VISION/SINS/WHEEL combined dynamic initialization result, a Doppler observation value and the speed of a satellite;
c. and aligning a local world coordinate system and a global ECEF coordinate system according to the result of the VISION/SINS/WHEEL combined dynamic initialization to finish fine adjustment of the position of the anchor point.
6. The method for high-precision positioning under complex environment according to claim 5, wherein the step a specifically comprises the following steps:
solving the following model, and calculating to obtain the position of the rough anchor point in the global ECEF coordinate system:
In the formula: the difference between the receiver's and the base station's carrier-phase observations for the i-th and j-th satellites; the carrier wavelength; the observation vector from the base station to satellite i; b ur, the relative deviation between the current receiver position and the base-station position, which is the quantity to be solved; the difference of the integer ambiguities observed for the i-th and j-th satellites; n, the sliding-window size; k, the sliding-window index; m, the number of raw carrier-phase observations in the k-th sliding window; and the Mahalanobis norm.
7. The method for high-precision positioning under complex environment according to claim 6, wherein the step b specifically comprises the following steps:
solving the following model, and calculating to obtain a yaw angle:
In the formula: the receiver frequency drift; the carrier wavelength; f d, the raw Doppler observation; the observation vector from the receiver to satellite i; the velocity of the i-th satellite; the transformation from the local world coordinate system to the ENU coordinate system; v w, the carrier velocity in the local world coordinate system; the frequency drift of the i-th satellite; n, the sliding-window size; k, the sliding-window index; m, the number of raw carrier-phase observations in the k-th sliding window; the Mahalanobis norm; and yaw, the yaw angle.
8. The method for high-precision positioning under complex environment according to claim 7, wherein the step c specifically comprises the following steps:
solving the following model, calculating to obtain an accurate anchor point position, and completing the alignment of a local world coordinate system and a global ECEF coordinate system:
In the formula: the rough anchor-point coordinates; the observation vector from the base station to satellite i; the carrier wavelength; the transformation from the local world coordinate system to the ECEF coordinate system; P w, the carrier position in the local world coordinate system; the base-station coordinates; the difference of the integer ambiguities observed for the i-th and j-th satellites; the difference between the receiver's and the base station's carrier-phase observations for the i-th and j-th satellites; n, the sliding-window size; k, the sliding-window index; m, the number of raw carrier-phase observations in the k-th sliding window; and the Mahalanobis norm.
9. The method for high-precision positioning under complex environment according to claim 8, wherein the step S4 specifically includes the following steps:
according to the IMU pre-integration model, the IMU pre-integration quantity model between two frames is obtained through calculation as follows:
In the formula: the position pre-integration term; b k, the IMU body frame at time k; the rotation at time t relative to b k; the accelerometer output; b at, the accelerometer bias; n a, the accelerometer Gaussian white noise; the gyroscope output; the gyroscope bias; the gyroscope Gaussian white noise; the rotation at time t relative to b k (quaternion form); t k, the time instant k; the velocity pre-integration term; the rotation pre-integration term; and the cross-product operator;
modeling according to the linear velocity and the angular velocity to obtain a pre-integration model of the wheel type odometer as follows:
In the formula: the position pre-integration term; o k, the odometer frame at time k; the rotation at time t relative to o k; the linear velocity; n v, the linear-velocity Gaussian white noise; the rotation pre-integration term; the yaw rate; the yaw-rate Gaussian white noise; and the rotation at time t relative to o k.
10. The method for high-precision positioning under complex environment according to claim 9, wherein the step S5 specifically includes the following steps:
the following equation is used as a nonlinear residual model:
In the formula: x is the estimate to be optimized; the marginalized prior information; r p, the marginalization residual; H p, the marginalization update matrix; the Mahalanobis distance; n, the number of residual constraints; and the residual function of a sensor measurement;
In the formula: the position increment; the velocity increment; the angular increment; the accelerometer-bias increment; the gyroscope-bias increment; the rotation from the local world frame to the k-th-frame IMU; the position of the k-th-frame IMU in the local world frame; the velocity of the k-th-frame IMU in the local world frame; the time interval between the two frames; g w, the gravitational acceleration; the position pre-integration term; the velocity pre-integration term; the rotation from the k-th-frame IMU coordinates to the local world coordinates; the rotation pre-integration term; the imaginary part of a quaternion; the k-th-frame accelerometer bias; the k-th-frame gyroscope bias; and quaternion multiplication;
In the formula: the position increment; the angular increment; the rotation from the local world frame to the k-th-frame odometer; the position of the k-th-frame wheel-speed odometer in the local world frame; the velocity of the k-th-frame wheel-speed odometer in the local world frame; the position pre-integration term; the rotation from the k-th-frame wheel-speed odometer coordinates to the local world coordinates; and the rotation pre-integration term;
In the formula: the normalized camera coordinates of a landmark in the j-th camera frame; and the three-dimensional coordinates of the landmark in the j-th camera frame. x cj, y cj and z cj are computed from the accompanying expression, in which: the rotation extrinsic between the camera and IMU frames; the rotation from the local world frame to the j-th-frame IMU; the rotation from the i-th-frame IMU to the local world frame; the camera-to-IMU rotation; the inverse depth; the camera-IMU translation extrinsic; the position of the i-th-frame IMU in the local world frame; the translation between the local world frame and the j-th-frame IMU; and the IMU-camera translation extrinsic;
In the formula: the carrier wavelength; the observation vector of the i-th satellite; the receiver's ECEF coordinates; the difference of the integer ambiguities observed for the i-th and j-th satellites; the difference between the receiver's and the base station's carrier-phase observations for the i-th and j-th satellites; the ECEF coordinates of the reference station; the transformation from the local world coordinate system to the ENU coordinate system; the position of the k-th-frame IMU coordinates in the local world frame; and the anchor-point coordinates;
In the formula: a selection matrix (defined in the accompanying expression); the rotation from the odometer to the IMU coordinate system; the attitude of the IMU in the local world frame; the rotation from the local world frame to the virtual plane; e 3, a selection vector (defined in the accompanying expression); the position of the IMU in the local world frame; the translation from the odometer to the IMU coordinate system; the roll angle; the pitch angle; the Gaussian noise term; and the height from the local world frame to the virtual plane.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210643924.8A CN114719843B (en) | 2022-06-09 | 2022-06-09 | High-precision positioning method in complex environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210643924.8A CN114719843B (en) | 2022-06-09 | 2022-06-09 | High-precision positioning method in complex environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114719843A true CN114719843A (en) | 2022-07-08 |
CN114719843B CN114719843B (en) | 2022-09-30 |
Family
ID=82232706
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210643924.8A Active CN114719843B (en) | 2022-06-09 | 2022-06-09 | High-precision positioning method in complex environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114719843B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115201873A (en) * | 2022-09-06 | 2022-10-18 | 中冶智诚(武汉)工程技术有限公司 | Multi-system collaborative indoor and outdoor precise positioning system architecture and operation method thereof |
CN115523920A (en) * | 2022-11-30 | 2022-12-27 | 西北工业大学 | Seamless positioning method based on visual inertial GNSS tight coupling |
CN117288187A (en) * | 2023-11-23 | 2023-12-26 | 北京小米机器人技术有限公司 | Robot pose determining method and device, electronic equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160209846A1 (en) * | 2015-01-19 | 2016-07-21 | The Regents Of The University Of Michigan | Visual Localization Within LIDAR Maps |
US20190331496A1 (en) * | 2016-12-14 | 2019-10-31 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | Locating a vehicle |
US20190368879A1 (en) * | 2018-05-29 | 2019-12-05 | Regents Of The University Of Minnesota | Vision-aided inertial navigation system for ground vehicle localization |
CN111121767A (en) * | 2019-12-18 | 2020-05-08 | 南京理工大学 | GPS-fused robot vision inertial navigation combined positioning method |
CN113223161A (en) * | 2021-04-07 | 2021-08-06 | 武汉大学 | Robust panoramic SLAM system and method based on IMU and wheel speed meter tight coupling |
CN113252033A (en) * | 2021-06-29 | 2021-08-13 | 长沙海格北斗信息技术有限公司 | Positioning method, positioning system and robot based on multi-sensor fusion |
CN114019552A (en) * | 2021-10-21 | 2022-02-08 | 中国电子科技集团公司第五十四研究所 | Bayesian multi-sensor error constraint-based location reliability optimization method |
-
2022
- 2022-06-09 CN CN202210643924.8A patent/CN114719843B/en active Active
Non-Patent Citations (2)
Title |
---|
DEQIANG TIAN 等: "A Design of Odometer-Aided Visual Inertial Integrated Navigation Algorithm Based on Multiple View Geometry Constraints", 《2017 9TH INTERNATIONAL CONFERENCE ON INTELLIGENT HUMAN-MACHINE SYSTEMS AND CYBERNETICS (IHMSC)》 * |
黄鑫: "基于多源传感器信息融合的智能汽车定位技术研究", 《中国优秀硕士学位论文全文数据库 (工程科技Ⅱ辑)》 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115201873A (en) * | 2022-09-06 | 2022-10-18 | 中冶智诚(武汉)工程技术有限公司 | Multi-system collaborative indoor and outdoor precise positioning system architecture and operation method thereof |
CN115201873B (en) * | 2022-09-06 | 2023-07-28 | 中冶智诚(武汉)工程技术有限公司 | Multi-system cooperative indoor and outdoor precise positioning system architecture and operation method thereof |
CN115523920A (en) * | 2022-11-30 | 2022-12-27 | 西北工业大学 | Seamless positioning method based on visual inertial GNSS tight coupling |
CN115523920B (en) * | 2022-11-30 | 2023-03-10 | 西北工业大学 | Seamless positioning method based on visual inertial GNSS tight coupling |
CN117288187A (en) * | 2023-11-23 | 2023-12-26 | 北京小米机器人技术有限公司 | Robot pose determining method and device, electronic equipment and storage medium |
CN117288187B (en) * | 2023-11-23 | 2024-02-23 | 北京小米机器人技术有限公司 | Robot pose determining method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114719843B (en) | 2022-09-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114719843B (en) | High-precision positioning method in complex environment | |
CN111426318B (en) | Low-cost AHRS course angle compensation method based on quaternion-extended Kalman filtering | |
CN110243358B (en) | Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system | |
CN110307836B (en) | Accurate positioning method for welt cleaning of unmanned cleaning vehicle | |
CN114199259B (en) | Multi-source fusion navigation positioning method based on motion state and environment perception | |
CN112987065B (en) | Multi-sensor-integrated handheld SLAM device and control method thereof | |
CN111426320B (en) | Vehicle autonomous navigation method based on image matching/inertial navigation/milemeter | |
CN113252033B (en) | Positioning method, positioning system and robot based on multi-sensor fusion | |
CN110940344B (en) | Low-cost sensor combination positioning method for automatic driving | |
CN114526745A (en) | Drawing establishing method and system for tightly-coupled laser radar and inertial odometer | |
CN110412596A (en) | A kind of robot localization method based on image information and laser point cloud | |
CN113175933A (en) | Factor graph combined navigation method based on high-precision inertia pre-integration | |
CN115388884A (en) | Joint initialization method for intelligent body pose estimator | |
CN115371665A (en) | Mobile robot positioning method based on depth camera and inertia fusion | |
CN115435779A (en) | Intelligent body pose estimation method based on GNSS/IMU/optical flow information fusion | |
CN111595332A (en) | Full-environment positioning method integrating inertial technology and visual modeling | |
CN114690229A (en) | GPS-fused mobile robot visual inertial navigation method | |
Park et al. | Implementation of vehicle navigation system using GNSS, INS, odometer and barometer | |
CN113008229A (en) | Distributed autonomous integrated navigation method based on low-cost vehicle-mounted sensor | |
CN110926483B (en) | Low-cost sensor combination positioning system and method for automatic driving | |
CN116576849A (en) | Vehicle fusion positioning method and system based on GMM assistance | |
CN112923934A (en) | Laser SLAM technology suitable for combining inertial navigation in unstructured scene | |
CN115542363A (en) | Attitude measurement method suitable for vertical downward-looking aviation pod | |
CN114674345A (en) | Online combined calibration method for inertial navigation/camera/laser velocimeter | |
CN114646993A (en) | Data fusion algorithm for accurate positioning based on GNSS, vision and IMU |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||