CN114719843A - High-precision positioning method in complex environment - Google Patents

High-precision positioning method in complex environment

Info

Publication number
CN114719843A
CN114719843A
Authority
CN
China
Prior art keywords
coordinate system
imu
frame
local world
rotation
Prior art date
Legal status
Granted
Application number
CN202210643924.8A
Other languages
Chinese (zh)
Other versions
CN114719843B (en)
Inventor
孙德安
王锦山
蒋云翔
易炯
于泠汰
王宇雨
方锐涌
冼志怀
李俊璋
Current Assignee
Changsha Jinwei Information Technology Co ltd
Original Assignee
Changsha Jinwei Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Changsha Jinwei Information Technology Co ltd filed Critical Changsha Jinwei Information Technology Co ltd
Priority to CN202210643924.8A
Publication of CN114719843A
Application granted
Publication of CN114719843B
Status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 - Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 - Determining position
    • G01S19/48 - Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 - Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 - Determining position
    • G01S19/48 - Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G01S19/49 - Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system whereby the further system is an inertial position system, e.g. loosely-coupled
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Abstract

The invention discloses a high-precision positioning method in a complex environment, which comprises the steps of acquiring raw observation data from a GNSS sensor, a VISION visual sensor, a SINS inertial sensor and a WHEEL wheel odometer, then preprocessing and synchronizing the data; performing VISION/SINS/WHEEL combined dynamic initialization; aligning the visual sensor data, the inertial sensor data and the wheel odometer data with the GNSS sensor data; constructing pre-integration models for the inertial sensor and the wheel odometer; aggregating the GNSS raw observation residuals, the visual residuals, the inertial pre-integration model, the wheel odometer model and a plane constraint to construct a nonlinear residual model; and solving the constructed nonlinear residual model in real time to complete high-precision positioning in a complex environment. The method not only realizes high-precision positioning in complex environments, but also offers high reliability, good accuracy and good robustness.

Description

High-precision positioning method in complex environment
Technical Field
The invention belongs to the technical field of navigation, and particularly relates to a high-precision positioning method in a complex environment.
Background
With economic and technological development and rising living standards, positioning and navigation services are widely used in production and daily life and bring great convenience. Ensuring the accuracy of positioning and navigation has therefore become a research focus.
In practical applications, many areas of the real environment have weak or no GNSS (Global Navigation Satellite System) signals, such as tunnels, overpasses and garages. In these areas, a positioning system relying on pure GNSS alone cannot provide high-precision positioning results in real time. Therefore, integrated positioning that combines multiple sensors is now generally adopted to ensure the robustness and reliability of real-time positioning.
However, current multi-source fusion positioning methods still have shortcomings. In the commonly used integration of VIO (Visual-Inertial Odometry) and GNSS, the positioning system degenerates into a pure VIO system in regions without GNSS signals; because a ground vehicle moves approximately in a plane, additional unobservable degrees of freedom arise in a monocular VIO system, causing positioning drift. In addition, existing VIO/GNSS fusion mainly adopts loose coupling or semi-tight coupling, which cannot provide highly robust positioning results in complex environments.
Disclosure of Invention
The invention aims to provide a high-precision positioning method for complex environments that has high reliability, good accuracy and good robustness.
The high-precision positioning method under the complex environment comprises the following steps:
s1, acquiring raw observation data from the GNSS sensor, the VISION visual sensor, the SINS inertial sensor and the WHEEL wheel odometer, and preprocessing and synchronizing the sensor data;
s2, performing VISION/SINS/WHEEL combined dynamic initialization according to the acquired sensor data;
s3, aligning the vision sensor data, the inertia sensor data, the wheel type odometer data and the GNSS sensor data according to the initialization result of the step S2;
s4, constructing a pre-integration model of the inertial sensor and the wheel type odometer;
s5, based on the constructed pre-integration model, aggregating the GNSS original observation residual error, the visual residual error, the inertia pre-integration model, the wheel type odometer model and the plane constraint to construct a nonlinear residual error model;
and S6, solving the constructed nonlinear residual error model in real time to complete high-precision positioning in a complex environment.
The step S1 specifically includes the following steps:
the raw observation data of the GNSS sensor comprise satellite ephemeris data, raw carrier phase observations, GNSS base station differential data, real-time ambiguity resolution data and raw Doppler observations; the raw observation data of the VISION visual sensor are grayscale images transmitted by the camera in real time; the raw observation data of the SINS inertial sensor are the IMU output; the raw observation data of the WHEEL wheel odometer are the real-time left and right wheel speeds of the vehicle; the raw GNSS observations are preprocessed to remove abnormal data; then, based on a PPS timing system, the data of the four sensors are time-synchronized.
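The synchronization step can be illustrated with a short sketch. The helper below re-stamps every sample of every sensor stream onto the GNSS time base using the most recent PPS pulse; all names and the stream layout are hypothetical, since the patent does not give an implementation.

```python
# Minimal sketch of PPS-based time alignment (names hypothetical).
# Each sensor sample carries a local device timestamp; the PPS pulse
# gives the offset between the device clock and GNSS time.
import bisect

def to_gnss_time(device_stamp, pps_device_stamps, pps_gnss_times):
    """Map a device timestamp to GNSS time using the latest PPS pulse."""
    i = max(bisect.bisect_right(pps_device_stamps, device_stamp) - 1, 0)
    offset = pps_gnss_times[i] - pps_device_stamps[i]
    return device_stamp + offset

def synchronize(streams, pps_device_stamps, pps_gnss_times):
    """Re-stamp every sample of every sensor stream onto the GNSS time base."""
    synced = {}
    for name, samples in streams.items():  # samples: list of (stamp, data)
        synced[name] = [(to_gnss_time(t, pps_device_stamps, pps_gnss_times), d)
                        for t, d in samples]
    return synced
```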
The step S2 specifically includes the following steps:
A. within the sliding window, calculating the up-to-scale camera poses and feature point coordinates using a pure visual Structure from Motion (SFM) method;
B. calibrating the zero drift (bias) of the gyroscope according to the calculation result of step A;
C. constructing an initialization vector according to the initialization speed, the gravity acceleration and the scale factor of vision, the IMU and the wheel type odometer;
D. and C, adjusting the initialization vector constructed in the step C by using the local actual gravity acceleration, restoring to a real scale and aligning to finally obtain an accurate initialization result under the local world coordinate system.
The step C specifically comprises the following steps:

constructing the quantity to be initialized, from the initialization velocities, the gravitational acceleration and the scale factor of the vision, the IMU and the wheel odometer, as

$$\mathcal{X}_I = \left[v^{b_0}, v^{b_1}, \ldots, v^{b_n}, g^{c_0}, s\right]$$

where $v^{b_k}$ is the velocity of the $k$-th frame in the IMU coordinate system, $g^{c_0}$ is the gravitational acceleration in the $c_0$ frame, and $s$ is the scale factor;

then two consecutive frames in the sliding window satisfy the following equation 1:

$$\alpha^{b_k}_{b_{k+1}} = R^{b_k}_{c_0}\left(s\left(\bar{p}^{c_0}_{b_{k+1}} - \bar{p}^{c_0}_{b_k}\right) + \tfrac{1}{2}\, g^{c_0} \Delta t_k^2 - R^{c_0}_{b_k} v^{b_k} \Delta t_k\right)$$

$$\beta^{b_k}_{b_{k+1}} = R^{b_k}_{c_0}\left(R^{c_0}_{b_{k+1}} v^{b_{k+1}} + g^{c_0} \Delta t_k - R^{c_0}_{b_k} v^{b_k}\right)$$

together with an analogous relation that links the wheel odometer pre-integration $\eta^{o_k}_{o_{k+1}}$ to $R^{o_k}_{c_0}$, $\bar{p}^{c_0}_{o_k}$ and $v^{o_k}$;

where $\alpha^{b_k}_{b_{k+1}}$ is the IMU position pre-integration; $R^{b_k}_{c_0}$ is the rotation of $c_0$ relative to the $k$-th frame IMU; $\bar{p}^{c_0}_{b_k}$ is the position of the $k$-th frame IMU relative to the $c_0$ frame; $\Delta t_k$ is the time interval between the two frames; $R^{c_0}_{b_k}$ is the rotation of the $k$-th frame IMU relative to the $c_0$ frame; $\beta^{b_k}_{b_{k+1}}$ is the IMU velocity pre-integration; $\eta^{o_k}_{o_{k+1}}$ is the wheel odometer pre-integration; $R^{o_k}_{c_0}$ is the rotation of the $k$-th frame odom relative to the $c_0$ frame; $\bar{p}^{c_0}_{o_k}$ is the position of the $k$-th frame odom relative to the $c_0$ frame; and $v^{o_k}$ is the velocity of the $k$-th frame odom;

writing equation 1 as a linear expression in the quantities to be optimized and solving it by linear least squares yields the optimal initialization values, completing the initialization of velocity, gravitational acceleration and scale factor.
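A minimal sketch of this final solve, assuming the per-frame-pair constraints of equation 1 have already been written as linear blocks $A_k x = b_k$; the block construction itself is not shown and the layout is illustrative, not the patent's exact Jacobians.

```python
# Hedged sketch: stack the per-frame-pair linear constraints into A x = b
# and solve for the initialization vector x = [v_b0, ..., v_bn, g_c0, s].
import numpy as np

def solve_initialization(blocks):
    """blocks: list of (A_k, b_k) pairs from consecutive frames."""
    A = np.vstack([A_k for A_k, _ in blocks])
    b = np.concatenate([b_k for _, b_k in blocks])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x  # stacked velocities, gravity in c0, scale factor
```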
The step S3 specifically includes the following steps:
a. calculating to obtain a rough anchor point position based on a carrier phase double-difference model;
b. calculating the yaw angle between the ENU coordinate system and the local world coordinate system according to the VISION/SINS/WHEEL combined dynamic initialization result, the Doppler observations and the satellite velocities;
c. aligning the local world coordinate system and the global ECEF coordinate system according to the VISION/SINS/WHEEL combined dynamic initialization result, to complete fine adjustment of the anchor point position.
The step a specifically comprises the following steps:

solving the following model to obtain the rough anchor point position in the global ECEF coordinate system:

$$\min_{b_{ur}} \sum_{k=1}^{n} \sum_{i,j}^{m} \left\| \lambda\,\nabla\Delta\varphi^{ij}_{ur} - \left(\mathbf{e}_i - \mathbf{e}_j\right)^{\top} b_{ur} - \lambda\,\nabla\Delta N^{ij} \right\|_{\Sigma}$$

where $\nabla\Delta\varphi^{ij}_{ur}$ is the difference between the receiver-base-station differences of the carrier phase observations for the $i$-th and $j$-th satellites; $\lambda$ is the carrier wavelength; $\mathbf{e}_i$ is the observation vector from the base station to satellite $i$; $b_{ur}$, the quantity to be solved, is the relative offset between the current receiver position and the base station position; $\nabla\Delta N^{ij}$ is the difference of the integer ambiguities observed for the $i$-th and $j$-th satellites; $n$ is the sliding window size; $k$ is the sliding window index; $m$ is the number of raw carrier phase observations in the $k$-th sliding window; and $\|\cdot\|_{\Sigma}$ is the Mahalanobis norm.
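For illustration, a single double-differenced carrier-phase residual of this model can be evaluated as below; variable names mirror the definitions above and are not from the patent.

```python
# Hedged sketch of one double-difference carrier-phase residual:
# lam * ddphi minus the modeled geometric term plus ambiguity term.
import numpy as np

def dd_residual(ddphi, e_i, e_j, b_ur, ddN, lam):
    """e_i, e_j: line-of-sight vectors; b_ur: receiver-base offset."""
    predicted = (np.asarray(e_i) - np.asarray(e_j)) @ np.asarray(b_ur) + lam * ddN
    return lam * ddphi - predicted
```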
The step b specifically comprises the following steps:

solving the following model to obtain the yaw angle:

$$\min_{yaw,\ \dot{t}_r} \sum_{k=1}^{n} \sum_{i}^{m} \left\| \lambda f_d - \left[ \mathbf{e}_i^{\top} \left( v^i - R^{enu}_{w}(yaw)\, v_w \right) + \dot{t}_r - \dot{t}^i \right] \right\|_{\Sigma}$$

where $\dot{t}_r$ is the receiver frequency drift; $\lambda$ is the carrier wavelength; $f_d$ is the raw Doppler observation; $\mathbf{e}_i$ is the observation vector from the receiver to satellite $i$; $v^i$ is the velocity of the $i$-th satellite; $R^{enu}_{w}(yaw)$ is the conversion from the local world coordinate system to the ENU coordinate system, parameterized by the yaw angle; $v_w$ is the carrier velocity in the local world coordinate system; $\dot{t}^i$ is the frequency drift of the $i$-th satellite; $n$ is the sliding window size; $k$ is the sliding window index; $m$ is the number of raw carrier phase observations in the $k$-th sliding window; $\|\cdot\|_{\Sigma}$ is the Mahalanobis norm; and $yaw$ is the yaw angle.
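A hedged sketch of the yaw estimation: instead of the patent's nonlinear solve, a simple 1-D grid search over the yaw angle minimizes the squared Doppler residuals (the receiver clock drift is taken as a known constant here for brevity).

```python
# Illustrative 1-D search for the yaw between local world and ENU.
import numpy as np

def yaw_rotation(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def estimate_yaw(dopplers, los_vecs, sat_vels, v_w, lam, clock_drift):
    yaws = np.linspace(-np.pi, np.pi, 3600)
    costs = []
    for yaw in yaws:
        v_enu = yaw_rotation(yaw) @ v_w          # carrier velocity in ENU
        r = [lam * fd - (e @ (vs - v_enu) + clock_drift)
             for fd, e, vs in zip(dopplers, los_vecs, sat_vels)]
        costs.append(np.sum(np.square(r)))
    return yaws[int(np.argmin(costs))]
```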
The step c specifically comprises the following steps:

solving the following model to obtain the accurate anchor point position and complete the alignment of the local world coordinate system and the global ECEF coordinate system:

$$\min_{p_{anc}} \sum_{k=1}^{n} \sum_{i,j}^{m} \left\| \lambda\,\nabla\Delta\varphi^{ij}_{ur} - \left(\mathbf{e}_i - \mathbf{e}_j\right)^{\top} \left( \hat{p}_{anc} + R^{e}_{w} P_w - p_b \right) - \lambda\,\nabla\Delta N^{ij} \right\|_{\Sigma}$$

where $\hat{p}_{anc}$ is the rough anchor point coordinate; $\mathbf{e}_i$ is the observation vector from the base station to satellite $i$; $\lambda$ is the carrier wavelength; $R^{e}_{w}$ is the conversion from the local world coordinate system to the ECEF coordinate system; $P_w$ is the carrier position in the local world coordinate system; $p_b$ is the base station coordinate; $\nabla\Delta N^{ij}$ is the difference of the integer ambiguities observed for the $i$-th and $j$-th satellites; $\nabla\Delta\varphi^{ij}_{ur}$ is the difference between the receiver-base-station differences of the carrier phase observations for the $i$-th and $j$-th satellites; $n$ is the sliding window size; $k$ is the sliding window index; $m$ is the number of raw carrier phase observations in the $k$-th sliding window; and $\|\cdot\|_{\Sigma}$ is the Mahalanobis norm.
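Once the yaw and anchor point are available, a local-world position maps into ECEF as sketched below; the ENU-to-ECEF rotation is the standard one built from the anchor's latitude and longitude (radians), and yaw_R can come from the previous sketch. This is an illustration of the alignment, not the patent's code.

```python
import numpy as np

def enu_to_ecef_rotation(lat, lon):
    """Rotation taking ENU vectors at (lat, lon), in radians, into ECEF."""
    sl, cl = np.sin(lat), np.cos(lat)
    so, co = np.sin(lon), np.cos(lon)
    return np.array([[-so, -sl * co, cl * co],
                     [ co, -sl * so, cl * so],
                     [0.0,       cl,      sl]])

def local_to_ecef(p_w, yaw_R, lat, lon, p_anchor):
    """Local world -> ENU (via yaw) -> ECEF (via the anchor point)."""
    return p_anchor + enu_to_ecef_rotation(lat, lon) @ (yaw_R @ np.asarray(p_w))
```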
The step S4 specifically comprises the following steps:

according to the IMU pre-integration model, the IMU pre-integration quantities between two frames are computed, in continuous form, as:

$$\alpha^{b_k}_{b_{k+1}} = \iint_{t \in [t_k, t_{k+1}]} R^{b_k}_{t} \left( \hat{a}_t - b_{a_t} - n_a \right) dt^2$$

$$\beta^{b_k}_{b_{k+1}} = \int_{t \in [t_k, t_{k+1}]} R^{b_k}_{t} \left( \hat{a}_t - b_{a_t} - n_a \right) dt$$

$$\gamma^{b_k}_{b_{k+1}} = \int_{t \in [t_k, t_{k+1}]} \tfrac{1}{2}\, \gamma^{b_k}_{t} \otimes \begin{bmatrix} 0 \\ \hat{\omega}_t - b_{w_t} - n_w \end{bmatrix} dt$$

where $\alpha^{b_k}_{b_{k+1}}$ is the position pre-integration; $b_k$ is the IMU frame at time $k$; $R^{b_k}_{t}$ is the rotation at time $t$ relative to $b_k$; $\hat{a}_t$ is the accelerometer output; $b_{a_t}$ is the accelerometer zero drift; $n_a$ is the accelerometer Gaussian white noise; $\hat{\omega}_t$ is the gyroscope output; $b_{w_t}$ is the gyroscope zero drift; $n_w$ is the gyroscope Gaussian white noise; $\gamma^{b_k}_{t}$ is the rotation (as a quaternion) at time $t$ relative to $b_k$; $t_k$ is time $k$; $\beta^{b_k}_{b_{k+1}}$ is the velocity pre-integration; $\gamma^{b_k}_{b_{k+1}}$ is the rotation pre-integration; and $\otimes$ denotes quaternion multiplication.
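A discrete counterpart of these pre-integration integrals, using simple forward-Euler accumulation and the Rodrigues formula; the patent does not specify its discretization, so this is only a sketch.

```python
# Hedged sketch of discrete IMU pre-integration between frames k and k+1.
import numpy as np

def integrate_imu(samples, dt, b_a, b_w):
    """samples: list of (acc, gyro) in the body frame; returns alpha, beta, R."""
    alpha = np.zeros(3)   # position pre-integration
    beta = np.zeros(3)    # velocity pre-integration
    R = np.eye(3)         # rotation from current body frame to b_k
    for acc, gyro in samples:
        a = R @ (np.asarray(acc) - b_a)       # bias-corrected accel in b_k
        alpha += beta * dt + 0.5 * a * dt * dt
        beta += a * dt
        w = (np.asarray(gyro) - b_w) * dt     # incremental rotation vector
        angle = np.linalg.norm(w)
        if angle > 1e-12:
            k = w / angle
            K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
            R = R @ (np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K)
    return alpha, beta, R
```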
Modeling the linear velocity and the angular velocity yields the wheel odometer pre-integration model:

$$\alpha^{o_k}_{o_{k+1}} = \int_{t \in [t_k, t_{k+1}]} R^{o_k}_{t} \left( \hat{v}_t - n_v \right) dt$$

$$\gamma^{o_k}_{o_{k+1}} = \int_{t \in [t_k, t_{k+1}]} \tfrac{1}{2}\, \gamma^{o_k}_{t} \otimes \begin{bmatrix} 0 \\ \hat{\omega}_t - n_\omega \end{bmatrix} dt$$

where $\alpha^{o_k}_{o_{k+1}}$ is the position pre-integration; $o_k$ is the odom frame at time $k$; $R^{o_k}_{t}$ is the rotation at time $t$ relative to $o_k$; $\hat{v}_t$ is the linear velocity; $n_v$ is the Gaussian white noise of the linear velocity; $\gamma^{o_k}_{o_{k+1}}$ is the rotation pre-integration; $\hat{\omega}_t$ is the yaw rate; $n_\omega$ is the Gaussian white noise of the yaw rate; and $\gamma^{o_k}_{t}$ is the rotation at time $t$ relative to $o_k$.
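For a differential-drive vehicle, the linear velocity and yaw rate in this model come directly from the left and right wheel speeds; the planar sketch below shows the idea (the track width is an assumed calibration parameter).

```python
# Hedged sketch: planar pre-integration of a differential-drive odometer.
import numpy as np

def integrate_wheel_odom(wheel_samples, dt, track_width):
    """wheel_samples: list of (v_left, v_right); returns 2-D position, heading."""
    pos = np.zeros(2)
    theta = 0.0
    for v_l, v_r in wheel_samples:
        v = 0.5 * (v_l + v_r)                 # forward linear velocity
        omega = (v_r - v_l) / track_width     # yaw rate
        pos += v * dt * np.array([np.cos(theta), np.sin(theta)])
        theta += omega * dt
    return pos, theta
```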
The step S5 specifically comprises the following steps:

the following equation is used as the nonlinear residual model:

$$\min_{x} \left\{ \left\| r_p - H_p x \right\|^2 + \sum_{i=1}^{n} \left\| r_i\left(\hat{z}_i, x\right) \right\|_{\Sigma_i} \right\}$$

where $x$ is the estimate to be optimized; $\left\| r_p - H_p x \right\|^2$ is the marginalization prior; $r_p$ is the marginalized residual; $H_p$ is the marginalization update matrix; $\|\cdot\|_{\Sigma_i}$ is the Mahalanobis distance; $n$ is the number of residual constraints; and $r_i(\hat{z}_i, x)$ is the residual function of a sensor measurement.
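Evaluating a Mahalanobis norm in practice amounts to whitening the residual by the Cholesky factor of its covariance, as in this generic helper (not code from the patent):

```python
import numpy as np

def whiten(residual, covariance):
    """||whiten(r)||^2 equals r^T Cov^-1 r (the squared Mahalanobis norm)."""
    L = np.linalg.cholesky(np.asarray(covariance))
    return np.linalg.solve(L, np.asarray(residual))
```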
when the sensor is the SINS inertial sensor, $r_i(\hat{z}_i, x)$ takes the value

$$r_B = \begin{bmatrix} \delta\alpha^{b_k}_{b_{k+1}} \\ \delta\beta^{b_k}_{b_{k+1}} \\ \delta\theta^{b_k}_{b_{k+1}} \\ \delta b_a \\ \delta b_g \end{bmatrix} = \begin{bmatrix} R^{b_k}_{w} \left( p^{w}_{b_{k+1}} - p^{w}_{b_k} - v^{w}_{b_k} \Delta t_k + \tfrac{1}{2}\, g^{w} \Delta t_k^2 \right) - \hat{\alpha}^{b_k}_{b_{k+1}} \\ R^{b_k}_{w} \left( v^{w}_{b_{k+1}} + g^{w} \Delta t_k - v^{w}_{b_k} \right) - \hat{\beta}^{b_k}_{b_{k+1}} \\ 2 \left[ \left( \hat{\gamma}^{b_k}_{b_{k+1}} \right)^{-1} \otimes \left( q^{w}_{b_k} \right)^{-1} \otimes q^{w}_{b_{k+1}} \right]_{xyz} \\ b_{a_{k+1}} - b_{a_k} \\ b_{w_{k+1}} - b_{w_k} \end{bmatrix}$$

where $\delta\alpha^{b_k}_{b_{k+1}}$ is the position increment; $\delta\beta^{b_k}_{b_{k+1}}$ is the velocity increment; $\delta\theta^{b_k}_{b_{k+1}}$ is the angular increment; $\delta b_a$ is the accelerometer bias increment; $\delta b_g$ is the gyroscope bias increment; $R^{b_k}_{w}$ is the rotation from local world coordinates to the $k$-th frame IMU; $p^{w}_{b_k}$ is the position of the $k$-th frame IMU in local world coordinates; $v^{w}_{b_k}$ is the velocity of the $k$-th frame IMU in the local world coordinate system; $\Delta t_k$ is the time interval between the two frames; $g^{w}$ is the gravitational acceleration; $\hat{\alpha}^{b_k}_{b_{k+1}}$ is the position pre-integration; $\hat{\beta}^{b_k}_{b_{k+1}}$ is the velocity pre-integration; $q^{w}_{b_k}$ is the rotation from the $k$-th frame IMU coordinates to local world coordinates; $\hat{\gamma}^{b_k}_{b_{k+1}}$ is the rotation pre-integration; $[\cdot]_{xyz}$ is the imaginary part of the quaternion; $b_{a_k}$ is the accelerometer zero drift of the $k$-th frame; $b_{w_k}$ is the gyroscope zero drift of the $k$-th frame; and $\otimes$ is quaternion multiplication;
when the sensor is the WHEEL wheel odometer, $r_i(\hat{z}_i, x)$ takes the value

$$r_O = \begin{bmatrix} \delta\alpha^{o_k}_{o_{k+1}} \\ \delta\theta^{o_k}_{o_{k+1}} \end{bmatrix} = \begin{bmatrix} R^{o_k}_{w} \left( p^{w}_{o_{k+1}} - p^{w}_{o_k} \right) - \hat{\alpha}^{o_k}_{o_{k+1}} \\ 2 \left[ \left( \hat{\gamma}^{o_k}_{o_{k+1}} \right)^{-1} \otimes \left( q^{w}_{o_k} \right)^{-1} \otimes q^{w}_{o_{k+1}} \right]_{xyz} \end{bmatrix}$$

where $\delta\alpha^{o_k}_{o_{k+1}}$ is the position increment; $\delta\theta^{o_k}_{o_{k+1}}$ is the angular increment; $R^{o_k}_{w}$ is the rotation from local world coordinates to the $k$-th frame odom; $p^{w}_{o_k}$ is the position of the wheel speed odometer of the $k$-th frame in local world coordinates; $q^{w}_{o_k}$ is the attitude of the wheel speed odometer of the $k$-th frame in the local world coordinate system, i.e. the rotation from the $k$-th frame wheel speed odometer coordinates to local world coordinates; $\hat{\alpha}^{o_k}_{o_{k+1}}$ is the position pre-integration; and $\hat{\gamma}^{o_k}_{o_{k+1}}$ is the rotation pre-integration;
when the sensor is the VISION sensor, $r_i(\hat{z}_i, x)$ takes the value

$$r_C = \begin{bmatrix} x_{c_j} / z_{c_j} \\ y_{c_j} / z_{c_j} \end{bmatrix} - \hat{\bar{P}}^{c_j}$$

where $\hat{\bar{P}}^{c_j}$ is the normalized camera coordinate of the landmark point in the $j$-th camera coordinate system; $P^{c_j} = \left[ x_{c_j}, y_{c_j}, z_{c_j} \right]^{\top}$ is the three-dimensional coordinate of the landmark point in the $j$-th camera coordinate system; and $x_{c_j}$, $y_{c_j}$ and $z_{c_j}$ are computed by

$$P^{c_j} = R^{c}_{b} \left( R^{b_j}_{w} \left( R^{w}_{b_i} \left( R^{b}_{c}\, \frac{1}{\lambda}\, \hat{\bar{P}}^{c_i} + p^{b}_{c} \right) + p^{w}_{b_i} - p^{w}_{b_j} \right) - p^{b}_{c} \right)$$

where $R^{c}_{b}$ is the rotation parameter between the camera and IMU coordinate systems; $R^{b_j}_{w}$ is the rotation from local world coordinates to the $j$-th frame IMU; $R^{w}_{b_i}$ is the rotation from the $i$-th frame IMU to the local world coordinate system; $R^{b}_{c}$ is the rotation from the camera to the IMU; $\lambda$ is the inverse depth value; $p^{b}_{c}$ is the translation parameter between the camera and the IMU; $p^{w}_{b_i}$ is the coordinate of the $i$-th frame IMU in the local world coordinate system; and $p^{w}_{b_j}$ is the translation between the local world coordinate system and the $j$-th frame IMU;
when the sensor is the GNSS sensor, $r_i(\hat{z}_i, x)$ takes the value

$$r_G = \lambda\,\nabla\Delta\varphi^{ij}_{ur} - \left( \mathbf{e}_i - \mathbf{e}_j \right)^{\top} \left( P_r - P_b \right) - \lambda\,\nabla\Delta N^{ij}$$

where $\lambda$ is the carrier wavelength; $\mathbf{e}_i$ is the observation vector of the $i$-th satellite; $P_r$ is the ECEF coordinate of the receiver; $\nabla\Delta N^{ij}$ is the difference of the integer ambiguities observed for the $i$-th and $j$-th satellites; $\nabla\Delta\varphi^{ij}_{ur}$ is the difference between the receiver-base-station differences of the carrier phase observations for the $i$-th and $j$-th satellites; $P_b$ is the ECEF coordinate of the reference station; and

$$P_r = p_{anc} + R^{e}_{enu}\, R^{enu}_{w}\, P^{w}_{b_k}$$

where $R^{e}_{enu}$ is the conversion from the ENU coordinate system to the ECEF coordinate system; $R^{enu}_{w}$ is the conversion from the local world coordinate system to the ENU coordinate system; $P^{w}_{b_k}$ is the position of the $k$-th frame IMU coordinates in the local world coordinate system; and $p_{anc}$ is the anchor point coordinate;
when the random plane constraint factor is required, $r_i(\hat{z}_i, x)$ takes the value

$$r_P = \begin{bmatrix} M\,\mathrm{Log}\!\left( R^{\pi}_{w}\, R^{w}_{b}\, R^{b}_{o} \right) \\ e_3^{\top}\, R^{\pi}_{w} \left( p^{w}_{b} + R^{w}_{b}\, p^{b}_{o} \right) - z^{\pi}_{w} \end{bmatrix}$$

where the first block extracts the roll angle $\phi$ and pitch angle $\theta$ of the odometer frame relative to the virtual plane; $M$ is a set matrix, with

$$M = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$$

$R^{b}_{o}$ is the rotation from the odom to the IMU coordinate system; $R^{w}_{b}$ is the attitude of the IMU in the local world coordinate system; $R^{\pi}_{w}$ is the rotation from the local world coordinate system to the virtual plane $\pi$; $e_3$ is a set matrix, with $e_3 = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}^{\top}$; $p^{w}_{b}$ is the position of the IMU in the local world coordinate system; $p^{b}_{o}$ is the translation from the odom to the IMU coordinate system; $\phi$ is the roll angle; $\theta$ is the pitch angle; $n_z$ is the Gaussian noise of $z^{\pi}_{w}$; and $z^{\pi}_{w}$ is the height value from the local world coordinates to the virtual plane $\pi$.
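A hedged sketch of this plane-constraint residual as interpreted above: roll, pitch and height of the odometer frame relative to the virtual plane should stay near zero for a ground vehicle. The formulation is approximated from the definitions, not taken from the patent.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def plane_residual(R_w_b, p_w_b, R_b_o, p_b_o, R_pi_w, z_pi):
    """All rotations are 3x3 matrices; p_* are 3-vectors; z_pi is the plane height."""
    R_pi_o = R_pi_w @ R_w_b @ R_b_o               # odom frame seen from the plane
    roll, pitch, _ = Rotation.from_matrix(R_pi_o).as_euler("xyz")
    e3 = np.array([0.0, 0.0, 1.0])
    height = float(e3 @ (R_pi_w @ (p_w_b + R_w_b @ p_b_o))) - z_pi
    return np.array([roll, pitch, height])
```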
According to the high-precision positioning method in complex environments of the present invention, a high-precision positioning process in complex environments is realized through the novel positioning workflow and the novel multi-sensor fusion calculation process; the method has high reliability, good accuracy and good robustness.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
Fig. 2 is a schematic diagram illustrating a comparison between a conventional high-precision positioning track and a true value track in the embodiment of the present invention.
FIG. 3 is a diagram illustrating a comparison between the positioning track of the method of the present invention and the truth track in the embodiment of the present invention.
Fig. 4 is a schematic diagram of the error comparison in the X direction of the conventional high-precision positioning method and the method of the present invention in the embodiment of the present invention.
Fig. 5 is a schematic diagram illustrating an error comparison between the conventional high-precision positioning method and the method of the present invention in the Y direction in the embodiment of the present invention.
Fig. 6 is a schematic diagram illustrating comparison of errors in the Z direction between the conventional high-precision positioning method and the method of the present invention in the embodiment of the present invention.
Detailed Description
FIG. 1 is a schematic flow chart of the method of the present invention: the high-precision positioning method under the complex environment comprises the following steps:
s1, acquiring raw observation data from the GNSS sensor, the VISION visual sensor, the SINS inertial sensor and the WHEEL wheel odometer, and preprocessing and synchronizing the sensor data; specifically:
the raw observation data of the GNSS sensor comprise satellite ephemeris data, raw carrier phase observations, GNSS base station differential data, real-time ambiguity resolution data and raw Doppler observations; the raw observation data of the VISION visual sensor are grayscale images transmitted by the camera in real time; the raw observation data of the SINS inertial sensor are the IMU output; the raw observation data of the WHEEL wheel odometer are the real-time left and right wheel speeds of the vehicle; the raw GNSS observations are preprocessed to remove abnormal data; then, based on a PPS timing system, the data of the four sensors are time-synchronized;
s2, performing VISION/SINS/WHEEL combined dynamic initialization according to the acquired sensor data; the method specifically comprises the following steps:
A. within the sliding window, calculating the up-to-scale camera poses and feature point coordinates using a pure visual Structure from Motion (SFM) method;
B. calibrating the zero drift (bias) of the gyroscope according to the calculation result of step A;
C. constructing an initialization vector from the initialization velocities, the gravitational acceleration and the scale factor of the vision, the IMU and the wheel odometer; specifically:

constructing the quantity to be initialized as

$$\mathcal{X}_I = \left[v^{b_0}, v^{b_1}, \ldots, v^{b_n}, g^{c_0}, s\right]$$

where $v^{b_k}$ is the velocity of the $k$-th frame in the IMU coordinate system, $g^{c_0}$ is the gravitational acceleration in the $c_0$ frame, and $s$ is the scale factor;

then two consecutive frames in the sliding window satisfy the following equation 1:

$$\alpha^{b_k}_{b_{k+1}} = R^{b_k}_{c_0}\left(s\left(\bar{p}^{c_0}_{b_{k+1}} - \bar{p}^{c_0}_{b_k}\right) + \tfrac{1}{2}\, g^{c_0} \Delta t_k^2 - R^{c_0}_{b_k} v^{b_k} \Delta t_k\right)$$

$$\beta^{b_k}_{b_{k+1}} = R^{b_k}_{c_0}\left(R^{c_0}_{b_{k+1}} v^{b_{k+1}} + g^{c_0} \Delta t_k - R^{c_0}_{b_k} v^{b_k}\right)$$

together with an analogous relation that links the wheel odometer pre-integration $\eta^{o_k}_{o_{k+1}}$ to $R^{o_k}_{c_0}$, $\bar{p}^{c_0}_{o_k}$ and $v^{o_k}$;

where $\alpha^{b_k}_{b_{k+1}}$ is the IMU position pre-integration; $R^{b_k}_{c_0}$ is the rotation of $c_0$ relative to the $k$-th frame IMU; $\bar{p}^{c_0}_{b_k}$ is the position of the $k$-th frame IMU relative to the $c_0$ frame; $\Delta t_k$ is the time interval between the two frames; $R^{c_0}_{b_k}$ is the rotation of the $k$-th frame IMU relative to the $c_0$ frame; $\beta^{b_k}_{b_{k+1}}$ is the IMU velocity pre-integration; $\eta^{o_k}_{o_{k+1}}$ is the wheel odometer pre-integration; $R^{o_k}_{c_0}$ is the rotation of the $k$-th frame odom relative to the $c_0$ frame; $\bar{p}^{c_0}_{o_k}$ is the position of the $k$-th frame odom relative to the $c_0$ frame; and $v^{o_k}$ is the velocity of the $k$-th frame odom;

writing equation 1 as a linear expression in the quantities to be optimized and solving it by linear least squares yields the optimal initialization values, completing the initialization of velocity, gravitational acceleration and scale factor;
D. adjusting the initialization vector constructed in step C using the local actual gravitational acceleration, restoring the true scale and aligning, to finally obtain an accurate initialization result in the local world coordinate system;
s3, aligning the vision sensor data, the inertia sensor data, the wheel type odometer data and the GNSS sensor data according to the initialization result of the step S2; the method specifically comprises the following steps:
a. calculating a rough anchor point position based on the carrier phase double-difference model; specifically:

solving the following model to obtain the rough anchor point position in the global ECEF coordinate system:

$$\min_{b_{ur}} \sum_{k=1}^{n} \sum_{i,j}^{m} \left\| \lambda\,\nabla\Delta\varphi^{ij}_{ur} - \left(\mathbf{e}_i - \mathbf{e}_j\right)^{\top} b_{ur} - \lambda\,\nabla\Delta N^{ij} \right\|_{\Sigma}$$

where $\nabla\Delta\varphi^{ij}_{ur}$ is the difference between the receiver-base-station differences of the carrier phase observations for the $i$-th and $j$-th satellites; $\lambda$ is the carrier wavelength; $\mathbf{e}_i$ is the observation vector from the base station to satellite $i$; $b_{ur}$, the quantity to be solved, is the relative offset between the current receiver position and the base station position; $\nabla\Delta N^{ij}$ is the difference of the integer ambiguities observed for the $i$-th and $j$-th satellites; $n$ is the sliding window size; $k$ is the sliding window index; $m$ is the number of raw carrier phase observations in the $k$-th sliding window; and $\|\cdot\|_{\Sigma}$ is the Mahalanobis norm;
b. calculating the yaw angle between the ENU coordinate system and the local world coordinate system according to the VISION/SINS/WHEEL combined dynamic initialization result, the Doppler observations and the satellite velocities; specifically:

solving the following model to obtain the yaw angle:

$$\min_{yaw,\ \dot{t}_r} \sum_{k=1}^{n} \sum_{i}^{m} \left\| \lambda f_d - \left[ \mathbf{e}_i^{\top} \left( v^i - R^{enu}_{w}(yaw)\, v_w \right) + \dot{t}_r - \dot{t}^i \right] \right\|_{\Sigma}$$

where $\dot{t}_r$ is the receiver frequency drift; $\lambda$ is the carrier wavelength; $f_d$ is the raw Doppler observation; $\mathbf{e}_i$ is the observation vector from the receiver to satellite $i$; $v^i$ is the velocity of the $i$-th satellite; $R^{enu}_{w}(yaw)$ is the conversion from the local world coordinate system to the ENU coordinate system, parameterized by the yaw angle; $v_w$ is the carrier velocity in the local world coordinate system; $\dot{t}^i$ is the frequency drift of the $i$-th satellite; $n$ is the sliding window size; $k$ is the sliding window index; $m$ is the number of raw carrier phase observations in the $k$-th sliding window; $\|\cdot\|_{\Sigma}$ is the Mahalanobis norm; and $yaw$ is the yaw angle;
c. aligning the local world coordinate system and the global ECEF coordinate system according to the VISION/SINS/WHEEL combined dynamic initialization result, to complete fine adjustment of the anchor point position; specifically:

solving the following model to obtain the accurate anchor point position and complete the alignment of the local world coordinate system and the global ECEF coordinate system:

$$\min_{p_{anc}} \sum_{k=1}^{n} \sum_{i,j}^{m} \left\| \lambda\,\nabla\Delta\varphi^{ij}_{ur} - \left(\mathbf{e}_i - \mathbf{e}_j\right)^{\top} \left( \hat{p}_{anc} + R^{e}_{w} P_w - p_b \right) - \lambda\,\nabla\Delta N^{ij} \right\|_{\Sigma}$$

where $\hat{p}_{anc}$ is the rough anchor point coordinate; $\mathbf{e}_i$ is the observation vector from the base station to satellite $i$; $\lambda$ is the carrier wavelength; $R^{e}_{w}$ is the conversion from the local world coordinate system to the ECEF coordinate system; $P_w$ is the carrier position in the local world coordinate system; $p_b$ is the base station coordinate; $\nabla\Delta N^{ij}$ is the difference of the integer ambiguities observed for the $i$-th and $j$-th satellites; $\nabla\Delta\varphi^{ij}_{ur}$ is the difference between the receiver-base-station differences of the carrier phase observations for the $i$-th and $j$-th satellites; $n$ is the sliding window size; $k$ is the sliding window index; $m$ is the number of raw carrier phase observations in the $k$-th sliding window; and $\|\cdot\|_{\Sigma}$ is the Mahalanobis norm;
s4, constructing pre-integration models of the inertial sensor and the wheel odometer; specifically:

according to the IMU pre-integration model, the IMU pre-integration quantities between two frames are computed, in continuous form, as:

$$\alpha^{b_k}_{b_{k+1}} = \iint_{t \in [t_k, t_{k+1}]} R^{b_k}_{t} \left( \hat{a}_t - b_{a_t} - n_a \right) dt^2$$

$$\beta^{b_k}_{b_{k+1}} = \int_{t \in [t_k, t_{k+1}]} R^{b_k}_{t} \left( \hat{a}_t - b_{a_t} - n_a \right) dt$$

$$\gamma^{b_k}_{b_{k+1}} = \int_{t \in [t_k, t_{k+1}]} \tfrac{1}{2}\, \gamma^{b_k}_{t} \otimes \begin{bmatrix} 0 \\ \hat{\omega}_t - b_{w_t} - n_w \end{bmatrix} dt$$

where $\alpha^{b_k}_{b_{k+1}}$ is the position pre-integration; $b_k$ is the IMU frame at time $k$; $R^{b_k}_{t}$ is the rotation at time $t$ relative to $b_k$; $\hat{a}_t$ is the accelerometer output; $b_{a_t}$ is the accelerometer zero drift; $n_a$ is the accelerometer Gaussian white noise; $\hat{\omega}_t$ is the gyroscope output; $b_{w_t}$ is the gyroscope zero drift; $n_w$ is the gyroscope Gaussian white noise; $\gamma^{b_k}_{t}$ is the rotation (as a quaternion) at time $t$ relative to $b_k$; $t_k$ is time $k$; $\beta^{b_k}_{b_{k+1}}$ is the velocity pre-integration; $\gamma^{b_k}_{b_{k+1}}$ is the rotation pre-integration; and $\otimes$ denotes quaternion multiplication;
modeling the linear velocity and the angular velocity yields the wheel odometer pre-integration model:

$$\alpha^{o_k}_{o_{k+1}} = \int_{t \in [t_k, t_{k+1}]} R^{o_k}_{t} \left( \hat{v}_t - n_v \right) dt$$

$$\gamma^{o_k}_{o_{k+1}} = \int_{t \in [t_k, t_{k+1}]} \tfrac{1}{2}\, \gamma^{o_k}_{t} \otimes \begin{bmatrix} 0 \\ \hat{\omega}_t - n_\omega \end{bmatrix} dt$$

where $\alpha^{o_k}_{o_{k+1}}$ is the position pre-integration; $o_k$ is the odom frame at time $k$; $R^{o_k}_{t}$ is the rotation at time $t$ relative to $o_k$; $\hat{v}_t$ is the linear velocity; $n_v$ is the Gaussian white noise of the linear velocity; $\gamma^{o_k}_{o_{k+1}}$ is the rotation pre-integration; $\hat{\omega}_t$ is the yaw rate; $n_\omega$ is the Gaussian white noise of the yaw rate; and $\gamma^{o_k}_{t}$ is the rotation at time $t$ relative to $o_k$;
s5, based on the constructed pre-integration models, aggregating the GNSS raw observation residuals, the visual residuals, the inertial pre-integration model, the wheel odometer model and the plane constraint to construct a nonlinear residual model; specifically:

the following equation is used as the nonlinear residual model:

$$\min_{x} \left\{ \left\| r_p - H_p x \right\|^2 + \sum_{i=1}^{n} \left\| r_i\left(\hat{z}_i, x\right) \right\|_{\Sigma_i} \right\}$$

where $x$ is the estimate to be optimized; $\left\| r_p - H_p x \right\|^2$ is the marginalization prior; $r_p$ is the marginalized residual; $H_p$ is the marginalization update matrix; $\|\cdot\|_{\Sigma_i}$ is the Mahalanobis distance; $n$ is the number of residual constraints; and $r_i(\hat{z}_i, x)$ is the residual function of a sensor measurement;
when the sensor is the SINS inertial sensor, $r_i(\hat{z}_i, x)$ takes the value

$$r_B = \begin{bmatrix} \delta\alpha^{b_k}_{b_{k+1}} \\ \delta\beta^{b_k}_{b_{k+1}} \\ \delta\theta^{b_k}_{b_{k+1}} \\ \delta b_a \\ \delta b_g \end{bmatrix} = \begin{bmatrix} R^{b_k}_{w} \left( p^{w}_{b_{k+1}} - p^{w}_{b_k} - v^{w}_{b_k} \Delta t_k + \tfrac{1}{2}\, g^{w} \Delta t_k^2 \right) - \hat{\alpha}^{b_k}_{b_{k+1}} \\ R^{b_k}_{w} \left( v^{w}_{b_{k+1}} + g^{w} \Delta t_k - v^{w}_{b_k} \right) - \hat{\beta}^{b_k}_{b_{k+1}} \\ 2 \left[ \left( \hat{\gamma}^{b_k}_{b_{k+1}} \right)^{-1} \otimes \left( q^{w}_{b_k} \right)^{-1} \otimes q^{w}_{b_{k+1}} \right]_{xyz} \\ b_{a_{k+1}} - b_{a_k} \\ b_{w_{k+1}} - b_{w_k} \end{bmatrix}$$

where $\delta\alpha^{b_k}_{b_{k+1}}$ is the position increment; $\delta\beta^{b_k}_{b_{k+1}}$ is the velocity increment; $\delta\theta^{b_k}_{b_{k+1}}$ is the angular increment; $\delta b_a$ is the accelerometer bias increment; $\delta b_g$ is the gyroscope bias increment; $R^{b_k}_{w}$ is the rotation from local world coordinates to the $k$-th frame IMU; $p^{w}_{b_k}$ is the position of the $k$-th frame IMU in local world coordinates; $v^{w}_{b_k}$ is the velocity of the $k$-th frame IMU in the local world coordinate system; $\Delta t_k$ is the time interval between the two frames; $g^{w}$ is the gravitational acceleration; $\hat{\alpha}^{b_k}_{b_{k+1}}$ is the position pre-integration; $\hat{\beta}^{b_k}_{b_{k+1}}$ is the velocity pre-integration; $q^{w}_{b_k}$ is the rotation from the $k$-th frame IMU coordinates to local world coordinates; $\hat{\gamma}^{b_k}_{b_{k+1}}$ is the rotation pre-integration; $[\cdot]_{xyz}$ is the imaginary part of the quaternion; $b_{a_k}$ is the accelerometer zero drift of the $k$-th frame; $b_{w_k}$ is the gyroscope zero drift of the $k$-th frame; and $\otimes$ is quaternion multiplication;
when the sensor is the WHEEL wheel odometer, $r_i(\hat{z}_i, x)$ takes the value

$$r_O = \begin{bmatrix} \delta\alpha^{o_k}_{o_{k+1}} \\ \delta\theta^{o_k}_{o_{k+1}} \end{bmatrix} = \begin{bmatrix} R^{o_k}_{w} \left( p^{w}_{o_{k+1}} - p^{w}_{o_k} \right) - \hat{\alpha}^{o_k}_{o_{k+1}} \\ 2 \left[ \left( \hat{\gamma}^{o_k}_{o_{k+1}} \right)^{-1} \otimes \left( q^{w}_{o_k} \right)^{-1} \otimes q^{w}_{o_{k+1}} \right]_{xyz} \end{bmatrix}$$

where $\delta\alpha^{o_k}_{o_{k+1}}$ is the position increment; $\delta\theta^{o_k}_{o_{k+1}}$ is the angular increment; $R^{o_k}_{w}$ is the rotation from local world coordinates to the $k$-th frame odom; $p^{w}_{o_k}$ is the position of the wheel speed odometer of the $k$-th frame in local world coordinates; $q^{w}_{o_k}$ is the attitude of the wheel speed odometer of the $k$-th frame in the local world coordinate system, i.e. the rotation from the $k$-th frame wheel speed odometer coordinates to local world coordinates; $\hat{\alpha}^{o_k}_{o_{k+1}}$ is the position pre-integration; and $\hat{\gamma}^{o_k}_{o_{k+1}}$ is the rotation pre-integration;
when the sensor is the VISION sensor, $r_i(\hat{z}_i, x)$ takes the value

$$r_C = \begin{bmatrix} x_{c_j} / z_{c_j} \\ y_{c_j} / z_{c_j} \end{bmatrix} - \hat{\bar{P}}^{c_j}$$

where $\hat{\bar{P}}^{c_j}$ is the normalized camera coordinate of the landmark point in the $j$-th camera coordinate system; $P^{c_j} = \left[ x_{c_j}, y_{c_j}, z_{c_j} \right]^{\top}$ is the three-dimensional coordinate of the landmark point in the $j$-th camera coordinate system; and $x_{c_j}$, $y_{c_j}$ and $z_{c_j}$ are computed by

$$P^{c_j} = R^{c}_{b} \left( R^{b_j}_{w} \left( R^{w}_{b_i} \left( R^{b}_{c}\, \frac{1}{\lambda}\, \hat{\bar{P}}^{c_i} + p^{b}_{c} \right) + p^{w}_{b_i} - p^{w}_{b_j} \right) - p^{b}_{c} \right)$$

where $R^{c}_{b}$ is the rotation parameter between the camera and IMU coordinate systems; $R^{b_j}_{w}$ is the rotation from local world coordinates to the $j$-th frame IMU; $R^{w}_{b_i}$ is the rotation from the $i$-th frame IMU to the local world coordinate system; $R^{b}_{c}$ is the rotation from the camera to the IMU; $\lambda$ is the inverse depth value; $p^{b}_{c}$ is the translation parameter between the camera and the IMU; $p^{w}_{b_i}$ is the coordinate of the $i$-th frame IMU in the local world coordinate system; and $p^{w}_{b_j}$ is the translation between the local world coordinate system and the $j$-th frame IMU;
when the sensor is the GNSS sensor, $r_i(\hat{z}_i, x)$ takes the value

$$r_G = \lambda\,\nabla\Delta\varphi^{ij}_{ur} - \left( \mathbf{e}_i - \mathbf{e}_j \right)^{\top} \left( P_r - P_b \right) - \lambda\,\nabla\Delta N^{ij}$$

where $\lambda$ is the carrier wavelength; $\mathbf{e}_i$ is the observation vector of the $i$-th satellite; $P_r$ is the ECEF coordinate of the receiver; $\nabla\Delta N^{ij}$ is the difference of the integer ambiguities observed for the $i$-th and $j$-th satellites; $\nabla\Delta\varphi^{ij}_{ur}$ is the difference between the receiver-base-station differences of the carrier phase observations for the $i$-th and $j$-th satellites; $P_b$ is the ECEF coordinate of the reference station; and

$$P_r = p_{anc} + R^{e}_{enu}\, R^{enu}_{w}\, P^{w}_{b_k}$$

where $R^{e}_{enu}$ is the conversion from the ENU coordinate system to the ECEF coordinate system; $R^{enu}_{w}$ is the conversion from the local world coordinate system to the ENU coordinate system; $P^{w}_{b_k}$ is the position of the $k$-th frame IMU coordinates in the local world coordinate system; and $p_{anc}$ is the anchor point coordinate;
when the random plane constraint factor is required, $r_i(\hat{z}_i, x)$ takes the value

$$r_P = \begin{bmatrix} M\,\mathrm{Log}\!\left( R^{\pi}_{w}\, R^{w}_{b}\, R^{b}_{o} \right) \\ e_3^{\top}\, R^{\pi}_{w} \left( p^{w}_{b} + R^{w}_{b}\, p^{b}_{o} \right) - z^{\pi}_{w} \end{bmatrix}$$

where the first block extracts the roll angle $\phi$ and pitch angle $\theta$ of the odometer frame relative to the virtual plane; $M$ is a set matrix, with

$$M = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$$

$R^{b}_{o}$ is the rotation from the odom to the IMU coordinate system; $R^{w}_{b}$ is the attitude of the IMU in the local world coordinate system; $R^{\pi}_{w}$ is the rotation from the local world coordinate system to the virtual plane $\pi$; $e_3$ is a set matrix, with $e_3 = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}^{\top}$; $p^{w}_{b}$ is the position of the IMU in the local world coordinate system; $p^{b}_{o}$ is the translation from the odom to the IMU coordinate system; $\phi$ is the roll angle; $\theta$ is the pitch angle; $n_z$ is the Gaussian noise of $z^{\pi}_{w}$; and $z^{\pi}_{w}$ is the height value from the local world coordinates to the virtual plane $\pi$;
in a specific implementation, $n$ takes the value 5, and for $n = 1$ to $5$ the residual functions $r_i(\hat{z}_i, x)$ correspond respectively to the IMU residual factor, the wheel odometer residual factor, the visual reprojection residual factor, the carrier phase double-difference measurement factor and the random plane constraint factor;
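Putting the five factors and the marginalization prior together, the total cost of the nonlinear residual model can be assembled as in this sketch; the factor interfaces are illustrative, not the patent's implementation.

```python
# Hedged sketch: total cost of the nonlinear residual model, combining the
# marginalization prior with the IMU, wheel odometer, visual, carrier-phase
# double-difference and plane-constraint factors.
import numpy as np

def total_cost(x, r_p, H_p, factors):
    """factors: list of (residual_fn, covariance) pairs evaluated at state x."""
    cost = float(np.sum(np.square(r_p - H_p @ x)))   # marginalization prior
    for residual_fn, cov in factors:
        r = residual_fn(x)
        cost += float(r @ np.linalg.solve(cov, r))   # squared Mahalanobis norm
    return cost
```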
and S6, solving the constructed nonlinear residual error model in real time to complete high-precision positioning in a complex environment.
The high-precision positioning method in complex environments of the present invention is further described below with reference to an embodiment:
the method comprises the steps of collecting images, IMU and GNSS original observed quantities based on a small foraging D1000-IR-120 camera and a self-developed satellite signal receiver, receiving GNSS differential data based on a network TCP mode, receiving vehicle wheel speed odometer data based on an ODB mode, wherein the image data receiving frequency is 30hz, only left-eye data are used, the IMU data receiving frequency is 200hz, the GNSS original observed quantities and the differential data receiving frequency are 1hz, the wheel speed odometer data receiving frequency is 30hz, and all data are synchronously processed and issued into a ros format through related tools. After the hardware is well installed and calibrated, data are recorded for about 30 minutes around the complex environment and garage of the building dense area, and the method is compared with the existing high-precision positioning method based on the method provided by the invention. FIG. 2 is a comparison between a current high-precision positioning track and a true value track, in which the track No. 1 is a true value and the track No. 2 is a high-precision positioning result; fig. 3 is a comparison between the positioning track and the true track, in which the track No. 1 is the true value and the track No. 3 is the positioning result of the present invention. As is apparent from fig. 2, when the existing high-precision positioning method is used for positioning, the track 1 and the track 2 still have obvious separation traces in partial sections, which indicates that the positioning result is not satisfactory; in fig. 3, when the method of the present invention is used for positioning, the track 1 and the track 3 are almost always in an overlapped state, thus indicating that the method of the present invention has high accuracy.
Subtracting the true values from the coordinate values of the existing high-precision positioning method and of the method of the present invention gives the error values in the X/Y/Z directions. FIG. 4 shows the error comparison in the X direction, FIG. 5 in the Y direction and FIG. 6 in the Z direction. As the figures show, the error of the positioning method of the present invention is far smaller than that of the existing high-precision positioning method in the X, Y and Z directions, demonstrating that the positioning accuracy in complex environments can be greatly improved by the present invention.
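The per-axis error curves of FIGS. 4-6 amount to subtracting the truth trajectory from each estimated trajectory; a sketch, assuming the trajectories are already time-aligned:

```python
import numpy as np

def per_axis_errors(estimate_xyz, truth_xyz):
    """estimate_xyz, truth_xyz: N x 3 arrays; returns signed errors and RMSE."""
    err = np.asarray(estimate_xyz) - np.asarray(truth_xyz)
    rmse = np.sqrt(np.mean(np.square(err), axis=0))
    return err, rmse  # signed errors per epoch and RMSE per axis
```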

Claims (10)

1. A high-precision positioning method under a complex environment is characterized by comprising the following steps:
s1, acquiring raw observation data from the GNSS sensor, the VISION visual sensor, the SINS inertial sensor and the WHEEL wheel odometer, and preprocessing and synchronizing the sensor data;
s2, performing VISION/SINS/WHEEL combined dynamic initialization according to the acquired sensor data;
s3, aligning the vision sensor data, the inertia sensor data, the wheel type odometer data and the GNSS sensor data according to the initialization result of the step S2;
s4, constructing a pre-integration model of the inertial sensor and the wheel type odometer;
s5, based on the constructed pre-integration model, aggregating the GNSS original observation residual error, the visual residual error, the inertia pre-integration model, the wheel type odometer model and the plane constraint to construct a nonlinear residual error model;
and S6, solving the constructed nonlinear residual error model in real time to complete high-precision positioning in a complex environment.
2. The method for high-precision positioning under complex environment according to claim 1, wherein the step S1 specifically includes the following steps:
the raw observation data of the GNSS sensor comprise satellite ephemeris data, raw carrier phase observations, GNSS base station differential data, real-time ambiguity resolution data and raw Doppler observations; the raw observation data of the VISION visual sensor are grayscale images transmitted by the camera in real time; the raw observation data of the SINS inertial sensor are the IMU output; the raw observation data of the WHEEL wheel odometer are the real-time left and right wheel speeds of the vehicle; the raw GNSS observations are preprocessed to remove abnormal data; then, based on a PPS timing system, the data of the four sensors are time-synchronized.
3. The method for high-precision positioning under complex environment according to claim 2, wherein the step S2 specifically includes the following steps:
A. within the sliding window, calculating the up-to-scale camera poses and feature point coordinates using a pure visual SFM method;
B. calibrating the zero drift of the gyroscope according to the calculation result of step A;
C. constructing an initialization vector according to the initialization speed, the gravity acceleration and the scale factor of the vision, the IMU and the wheel type odometer;
D. and C, adjusting the initialization vector constructed in the step C by using the local actual gravity acceleration, restoring to a real scale and aligning to finally obtain an accurate initialization result under the local world coordinate system.
4. The method for high-precision positioning under complex environment according to claim 3, wherein the step C specifically comprises the following steps:
constructing the quantity to be initialized, from the initialization velocities, the gravitational acceleration and the scale factor of the vision, the IMU and the wheel odometer, as

$$\mathcal{X}_I = \left[v^{b_0}, v^{b_1}, \ldots, v^{b_n}, g^{c_0}, s\right]$$

where $v^{b_k}$ is the velocity of the $k$-th frame in the IMU coordinate system, $g^{c_0}$ is the gravitational acceleration in the $c_0$ frame, and $s$ is the scale factor;

then two consecutive frames in the sliding window satisfy the following equation 1:

$$\alpha^{b_k}_{b_{k+1}} = R^{b_k}_{c_0}\left(s\left(\bar{p}^{c_0}_{b_{k+1}} - \bar{p}^{c_0}_{b_k}\right) + \tfrac{1}{2}\, g^{c_0} \Delta t_k^2 - R^{c_0}_{b_k} v^{b_k} \Delta t_k\right)$$

$$\beta^{b_k}_{b_{k+1}} = R^{b_k}_{c_0}\left(R^{c_0}_{b_{k+1}} v^{b_{k+1}} + g^{c_0} \Delta t_k - R^{c_0}_{b_k} v^{b_k}\right)$$

together with an analogous relation that links the wheel odometer pre-integration $\eta^{o_k}_{o_{k+1}}$ to $R^{o_k}_{c_0}$, $\bar{p}^{c_0}_{o_k}$ and $v^{o_k}$;

where $\alpha^{b_k}_{b_{k+1}}$ is the IMU position pre-integration; $R^{b_k}_{c_0}$ is the rotation of $c_0$ relative to the $k$-th frame IMU; $\bar{p}^{c_0}_{b_k}$ is the position of the $k$-th frame IMU relative to the $c_0$ frame; $\Delta t_k$ is the time interval between the two frames; $R^{c_0}_{b_k}$ is the rotation of the $k$-th frame IMU relative to the $c_0$ frame; $\beta^{b_k}_{b_{k+1}}$ is the IMU velocity pre-integration; $\eta^{o_k}_{o_{k+1}}$ is the wheel odometer pre-integration; $R^{o_k}_{c_0}$ is the rotation of the $k$-th frame odom relative to the $c_0$ frame; $\bar{p}^{c_0}_{o_k}$ is the position of the $k$-th frame odom relative to the $c_0$ frame; and $v^{o_k}$ is the velocity of the $k$-th frame odom;

writing equation 1 as a linear expression in the quantities to be optimized and solving it by linear least squares yields the optimal initialization values, completing the initialization of velocity, gravitational acceleration and scale factor.
5. The method for high-precision positioning under complex environment according to claim 4, wherein the step S3 specifically comprises the following steps:
a. calculating to obtain a rough anchor point position based on a carrier phase double difference model;
b. calculating the yaw angle between the ENU coordinate system and the local world coordinate system according to the VISION/SINS/WHEEL combined dynamic initialization result, the Doppler observations and the satellite velocities;
c. aligning the local world coordinate system and the global ECEF coordinate system according to the VISION/SINS/WHEEL combined dynamic initialization result, to complete fine adjustment of the anchor point position.
6. The method for high-precision positioning in a complex environment according to claim 5, wherein the step a specifically comprises the following steps:
solving the following model to obtain the rough anchor point position in the global ECEF coordinate system:

$$\min_{b_{ur}} \sum_{k=1}^{n} \sum_{i,j}^{m} \left\| \lambda\, \nabla\Delta\phi^{ij}_{ur,k} - \left(e^{i}_{b} - e^{j}_{b}\right) \cdot b_{ur} - \lambda\, \nabla\Delta N^{ij} \right\|_{\Sigma}^{2}$$

where $\nabla\Delta\phi^{ij}_{ur}$ is the difference between the carrier-phase observation differences of the receiver and the base station for the $i$-th and $j$-th satellites; $\lambda$ is the carrier wavelength; $e^{i}_{b}$ is the observation vector from the base station to satellite $i$; $b_{ur}$ is the relative offset between the current receiver position and the base station position, taken as the quantity to be solved; $\nabla\Delta N^{ij}$ is the difference of the integer ambiguities observed for the $i$-th and $j$-th satellites; $n$ is the sliding window size; $k$ is the sliding window index; $m$ is the number of raw carrier-phase observations in the $k$-th sliding window; $\left\|\cdot\right\|_{\Sigma}$ is the Mahalanobis norm.
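A minimal sketch of one epoch of this double-difference solve, assuming the double-differenced phases, ambiguities and unit line-of-sight vectors have already been formed (names such as `rough_anchor` are illustrative):

```python
import numpy as np

def rough_anchor(P_base, lam, dd_phase, dd_amb, e_base, sat_pairs):
    """Solve lam*dd_phase_ij ≈ (e_i - e_j)·b_ur + lam*dd_amb_ij for the
    receiver-to-base offset b_ur, one row per satellite pair (i, j)."""
    A = np.array([e_base[i] - e_base[j] for i, j in sat_pairs])
    y = np.array([lam * (phi - N) for phi, N in zip(dd_phase, dd_amb)])
    b_ur, *_ = np.linalg.lstsq(A, y, rcond=None)
    return P_base + b_ur  # rough anchor position in ECEF
```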
7. The method for high-precision positioning in a complex environment according to claim 6, wherein the step b specifically comprises the following steps:
solving the following model to obtain the yaw angle:

$$\min_{yaw,\;\dot{t}_{r}} \sum_{k=1}^{n} \sum_{i=1}^{m} \left\| \lambda f^{i}_{d} + e^{i}_{r} \cdot \left( R^{e}_{enu}\, R^{enu}_{w}(yaw)\, v_{w} - v^{i}_{sv} \right) + \dot{t}_{r} - \dot{t}^{i} \right\|_{\Sigma}^{2}$$

$$R^{enu}_{w}(yaw) = \begin{bmatrix} \cos(yaw) & -\sin(yaw) & 0 \\ \sin(yaw) & \cos(yaw) & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

where $\dot{t}_{r}$ is the receiver frequency drift; $\lambda$ is the carrier wavelength; $f_{d}$ is the raw Doppler observation; $e^{i}_{r}$ is the observation vector from the receiver to satellite $i$; $v^{i}_{sv}$ is the velocity of the $i$-th satellite; $R^{enu}_{w}$ is the conversion from the local world coordinate system to the ENU coordinate system; $R^{e}_{enu}$ is the conversion from the ENU coordinate system to the ECEF coordinate system; $v_{w}$ is the carrier velocity in the local world coordinate system; $\dot{t}^{i}$ is the frequency drift of the $i$-th satellite; $n$ is the sliding window size; $k$ is the sliding window index; $m$ is the number of raw carrier-phase observations in the $k$-th sliding window; $\left\|\cdot\right\|_{\Sigma}$ is the Mahalanobis norm; $yaw$ is the yaw angle.
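As an illustrative sketch of this step, the model can be evaluated over a grid of candidate yaw angles, with the receiver clock drift absorbed by mean-subtraction; the grid search and all names here are assumptions, not the patent's solver:

```python
import numpy as np

def doppler_residuals(yaw, lam, f_d, e_rec, v_sv, v_w, R_e_enu):
    """Per-satellite Doppler residuals for a trial yaw:
    lam*f_d[i] + e_i·(v_ecef - v_sv[i]) + drift ≈ 0,
    with v_ecef = R_e_enu @ Rz(yaw) @ v_w."""
    c, s = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    v_ecef = R_e_enu @ (Rz @ v_w)
    r = np.array([lam * f_d[i] + e_rec[i] @ (v_ecef - v_sv[i])
                  for i in range(len(f_d))])
    return r - r.mean()   # mean-subtraction is the LSQ clock-drift estimate

def estimate_yaw(lam, f_d, e_rec, v_sv, v_w, R_e_enu, steps=3600):
    yaws = np.linspace(-np.pi, np.pi, steps, endpoint=False)
    costs = []
    for y in yaws:
        r = doppler_residuals(y, lam, f_d, e_rec, v_sv, v_w, R_e_enu)
        costs.append(r @ r)
    return yaws[int(np.argmin(costs))]
```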
8. The method for high-precision positioning in a complex environment according to claim 7, wherein the step c specifically comprises the following steps:
solving the following model, with the rough anchor coordinates $\hat{P}_{anc}$ as the initial value, to obtain the accurate anchor point position and complete the alignment of the local world coordinate system and the global ECEF coordinate system:

$$\min_{P_{anc}} \sum_{k=1}^{n} \sum_{i,j}^{m} \left\| \lambda\, \nabla\Delta\phi^{ij}_{ur,k} - \left(e^{i}_{b} - e^{j}_{b}\right) \cdot \left( P_{anc} + R^{e}_{w}\, P_{w} - P_{b} \right) - \lambda\, \nabla\Delta N^{ij} \right\|_{\Sigma}^{2}$$

where $\hat{P}_{anc}$ is the rough coordinate of the anchor point; $e^{i}_{b}$ is the observation vector from the base station to satellite $i$; $\lambda$ is the carrier wavelength; $R^{e}_{w}$ is the conversion from the local world coordinate system to the ECEF coordinate system; $P_{w}$ is the position of the carrier in the local world coordinate system; $P_{b}$ is the coordinate of the base station; $\nabla\Delta N^{ij}$ is the difference of the integer ambiguities observed for the $i$-th and $j$-th satellites; $\nabla\Delta\phi^{ij}_{ur}$ is the difference between the carrier-phase observations of the receiver and the base station for the $i$-th and $j$-th satellites; $n$ is the sliding window size; $k$ is the sliding window index; $m$ is the number of raw carrier-phase observations in the $k$-th sliding window; $\left\|\cdot\right\|_{\Sigma}$ is the Mahalanobis norm.
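The alignment itself reduces to mapping local-world points through the anchor; a minimal sketch, assuming the anchor's geodetic latitude/longitude are available to build the ENU basis (the function name and argument layout are illustrative):

```python
import numpy as np

def ecef_from_local(P_w, P_anc, lat, lon, yaw):
    """Map a local-world point to ECEF:
    P_e = P_anc + R_e_enu(lat, lon) @ Rz(yaw) @ P_w."""
    sl, cl = np.sin(lat), np.cos(lat)
    so, co = np.sin(lon), np.cos(lon)
    # Columns of R_e_enu are the East, North and Up directions in ECEF.
    R_e_enu = np.array([[-so, -sl * co, cl * co],
                        [ co, -sl * so, cl * so],
                        [0.0,       cl,      sl]])
    c, s = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return P_anc + R_e_enu @ (Rz @ P_w)
```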
9. The method for high-precision positioning in a complex environment according to claim 8, wherein the step S4 specifically comprises the following steps:
according to the IMU pre-integration model, the IMU pre-integration quantities between two frames are calculated as follows:

$$\hat{\alpha}^{b_k}_{b_{k+1}} = \iint_{t \in [t_k, t_{k+1}]} R^{b_k}_{t} \left( \hat{a}_{t} - b_{a_t} - n_{a} \right) dt^{2}$$

$$\hat{\beta}^{b_k}_{b_{k+1}} = \int_{t \in [t_k, t_{k+1}]} R^{b_k}_{t} \left( \hat{a}_{t} - b_{a_t} - n_{a} \right) dt$$

$$\hat{\gamma}^{b_k}_{b_{k+1}} = \int_{t \in [t_k, t_{k+1}]} \tfrac{1}{2}\, \Omega\left( \hat{\omega}_{t} - b_{w_t} - n_{w} \right) \gamma^{b_k}_{t}\, dt$$

$$\Omega(\omega) = \begin{bmatrix} -\lfloor \omega \rfloor_{\times} & \omega \\ -\omega^{T} & 0 \end{bmatrix}$$

where $\hat{\alpha}^{b_k}_{b_{k+1}}$ is the position pre-integration; $b_k$ is the IMU at time $k$; $R^{b_k}_{t}$ is the rotation at time $t$ relative to $b_k$; $\hat{a}_{t}$ is the output value of the accelerometer; $b_{a_t}$ is the zero bias (null shift) of the accelerometer; $n_{a}$ is the Gaussian white noise of the accelerometer; $\hat{\omega}_{t}$ is the output value of the gyroscope; $b_{w_t}$ is the zero bias of the gyroscope; $n_{w}$ is the Gaussian white noise of the gyroscope; $\gamma^{b_k}_{t}$ is the rotation quaternion at time $t$ relative to $b_k$; $t_k$ is the time of frame $k$; $\hat{\beta}^{b_k}_{b_{k+1}}$ is the velocity pre-integration; $\hat{\gamma}^{b_k}_{b_{k+1}}$ is the rotation pre-integration; $\lfloor \cdot \rfloor_{\times}$ denotes the cross-product (skew-symmetric) matrix;
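A discrete sketch of these integrals, using rotation matrices in place of the quaternion form above (Euler integration and all names are illustrative assumptions):

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def preintegrate_imu(acc, gyro, dts, ba, bg):
    """Integrate alpha (position), beta (velocity) and the rotation between
    frames k and k+1 from raw samples, with fixed bias estimates ba, bg."""
    alpha, beta, R = np.zeros(3), np.zeros(3), np.eye(3)
    for a, w, dt in zip(acc, gyro, dts):
        a_k = R @ (a - ba)                  # de-biased accel in the b_k frame
        alpha += beta * dt + 0.5 * a_k * dt * dt
        beta += a_k * dt
        th = (w - bg) * dt                  # incremental rotation vector
        ang = np.linalg.norm(th)
        if ang > 1e-12:                     # Rodrigues formula for exp(th^)
            K = skew(th / ang)
            R = R @ (np.eye(3) + np.sin(ang) * K + (1 - np.cos(ang)) * K @ K)
    return alpha, beta, R                   # alpha, beta, gamma (as a DCM)
```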
modeling the linear velocity and the angular velocity gives the wheel odometer pre-integration model as follows:

$$\hat{\alpha}^{o_k}_{o_{k+1}} = \int_{t \in [t_k, t_{k+1}]} R^{o_k}_{t} \left( \hat{v}_{t} - n_{v} \right) dt$$

$$\hat{\gamma}^{o_k}_{o_{k+1}} = \int_{t \in [t_k, t_{k+1}]} \tfrac{1}{2}\, \Omega\!\left( \begin{bmatrix} 0 \\ 0 \\ \hat{\omega}_{z,t} - n_{\omega_z} \end{bmatrix} \right) \gamma^{o_k}_{t}\, dt$$

where $\hat{\alpha}^{o_k}_{o_{k+1}}$ is the position pre-integration; $o_k$ is the odom at time $k$; $R^{o_k}_{t}$ is the rotation at time $t$ relative to $o_k$; $\hat{v}_{t}$ is the linear velocity; $n_{v}$ is the Gaussian white noise of the linear velocity; $\hat{\gamma}^{o_k}_{o_{k+1}}$ is the rotation pre-integration; $\hat{\omega}_{z,t}$ is the yaw rate; $n_{\omega_z}$ is the Gaussian white noise of the yaw rate; $\gamma^{o_k}_{t}$ is the rotation at time $t$ relative to $o_k$.
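A planar sketch of the odometer pre-integration (the 2-D reduction and the names are assumptions; the patent's form is the quaternion integral above):

```python
import numpy as np

def preintegrate_odom(v_body, yaw_rate, dts):
    """Integrate body-frame linear velocity under the accumulated yaw,
    expressed in the o_k frame at the start of the interval."""
    alpha, yaw = np.zeros(2), 0.0
    for v, w, dt in zip(v_body, yaw_rate, dts):
        c, s = np.cos(yaw), np.sin(yaw)
        alpha += np.array([c * v[0] - s * v[1],
                           s * v[0] + c * v[1]]) * dt
        yaw += w * dt
    return alpha, yaw   # position and rotation pre-integration
```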
10. The method for high-precision positioning in a complex environment according to claim 9, wherein the step S5 specifically comprises the following steps:
the following equation is used as the nonlinear residual model:

$$\min_{x} \left\{ \left\| r_{p} - H_{p}\, x \right\|^{2} + \sum_{i=1}^{n} \left\| r\!\left(\hat{z}_{i}, x\right) \right\|_{\Sigma_i}^{2} \right\}$$

where $x$ is the estimate to be optimized; $\left\| r_{p} - H_{p} x \right\|^{2}$ is the marginalized prior information; $r_{p}$ is the marginalized residual; $H_{p}$ is the marginalization update matrix; $\left\|\cdot\right\|_{\Sigma_i}$ is the Mahalanobis distance; $n$ is the number of residual constraints; $r(\hat{z}_{i}, x)$ is the residual function of a sensor measurement;

when the sensor is the SINS inertial sensor, $r(\hat{z}_{i}, x)$ takes the value

$$r_{SINS} = \begin{bmatrix} \delta\alpha^{b_k}_{b_{k+1}} \\ \delta\beta^{b_k}_{b_{k+1}} \\ \delta\theta^{b_k}_{b_{k+1}} \\ \delta b_{a} \\ \delta b_{g} \end{bmatrix} = \begin{bmatrix} R^{b_k}_{w} \left( p^{w}_{b_{k+1}} - p^{w}_{b_k} - v^{w}_{b_k}\, \Delta t_k + \tfrac{1}{2}\, g^{w}\, \Delta t_k^{2} \right) - \hat{\alpha}^{b_k}_{b_{k+1}} \\ R^{b_k}_{w} \left( v^{w}_{b_{k+1}} - v^{w}_{b_k} + g^{w}\, \Delta t_k \right) - \hat{\beta}^{b_k}_{b_{k+1}} \\ 2 \left[ \left( \hat{\gamma}^{b_k}_{b_{k+1}} \right)^{-1} \otimes \left( q^{w}_{b_k} \right)^{-1} \otimes q^{w}_{b_{k+1}} \right]_{xyz} \\ b_{a_{k+1}} - b_{a_k} \\ b_{g_{k+1}} - b_{g_k} \end{bmatrix}$$

where $\delta\alpha^{b_k}_{b_{k+1}}$ is the position increment; $\delta\beta^{b_k}_{b_{k+1}}$ is the velocity increment; $\delta\theta^{b_k}_{b_{k+1}}$ is the angular increment; $\delta b_{a}$ is the accelerometer bias increment; $\delta b_{g}$ is the gyroscope bias increment; $R^{b_k}_{w}$ is the rotation from local world coordinates to the $k$-th frame IMU; $p^{w}_{b_k}$ is the position of the $k$-th frame IMU in local world coordinates; $v^{w}_{b_k}$ is the velocity of the $k$-th frame IMU in the local world coordinate system; $\Delta t_k$ is the time interval between the two frames; $g^{w}$ is the gravity acceleration; $\hat{\alpha}^{b_k}_{b_{k+1}}$ is the position pre-integration; $\hat{\beta}^{b_k}_{b_{k+1}}$ is the velocity pre-integration; $q^{w}_{b_k}$ is the rotation from the $k$-th frame IMU coordinates to local world coordinates; $\hat{\gamma}^{b_k}_{b_{k+1}}$ is the rotation pre-integration; $[\cdot]_{xyz}$ is the imaginary part of the quaternion; $b_{a_k}$ is the accelerometer zero bias of the $k$-th frame; $b_{g_k}$ is the gyroscope zero bias of the $k$-th frame; $\otimes$ is quaternion multiplication;
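A sketch of the position, velocity and rotation rows of this SINS residual, using rotation matrices in place of quaternions (bias rows omitted; names are illustrative):

```python
import numpy as np

def imu_residual(p_k, v_k, R_k, p_k1, v_k1, R_k1, g_w, dt,
                 alpha, beta, R_gamma):
    """R_k, R_k1: body-to-world DCMs of frames k, k+1; alpha/beta/R_gamma:
    pre-integrated position/velocity/rotation between them."""
    r_p = R_k.T @ (p_k1 - p_k - v_k * dt + 0.5 * g_w * dt * dt) - alpha
    r_v = R_k.T @ (v_k1 - v_k + g_w * dt) - beta
    # Rotation error: gamma^-1 * R_k^-1 * R_k1, mapped to an angle vector
    # via the SO(3) logarithm.
    E = R_gamma.T @ (R_k.T @ R_k1)
    ang = np.arccos(np.clip((np.trace(E) - 1) / 2, -1.0, 1.0))
    v = np.array([E[2, 1] - E[1, 2], E[0, 2] - E[2, 0], E[1, 0] - E[0, 1]])
    r_th = np.zeros(3) if ang < 1e-12 else ang / (2 * np.sin(ang)) * v
    return np.concatenate([r_p, r_v, r_th])
```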
when the sensor is the WHEEL odometer, $r(\hat{z}_{i}, x)$ takes the value

$$r_{WHEEL} = \begin{bmatrix} \delta\alpha^{o_k}_{o_{k+1}} \\ \delta\theta^{o_k}_{o_{k+1}} \end{bmatrix} = \begin{bmatrix} R^{o_k}_{w} \left( p^{w}_{o_{k+1}} - p^{w}_{o_k} \right) - \hat{\alpha}^{o_k}_{o_{k+1}} \\ 2 \left[ \left( \hat{\gamma}^{o_k}_{o_{k+1}} \right)^{-1} \otimes \left( q^{w}_{o_k} \right)^{-1} \otimes q^{w}_{o_{k+1}} \right]_{xyz} \end{bmatrix}$$

where $\delta\alpha^{o_k}_{o_{k+1}}$ is the position increment; $\delta\theta^{o_k}_{o_{k+1}}$ is the angular increment; $R^{o_k}_{w}$ is the rotation from local world coordinates to the $k$-th frame odom; $p^{w}_{o_k}$ is the position of the $k$-th frame wheel-speed odometer in local world coordinates; $q^{w}_{o_{k+1}}$ is the rotation of the $(k+1)$-th frame wheel-speed odometer in the local world coordinate system; $\hat{\alpha}^{o_k}_{o_{k+1}}$ is the position pre-integration; $q^{w}_{o_k}$ is the rotation from the $k$-th frame wheel-speed odometer coordinates to local world coordinates; $\hat{\gamma}^{o_k}_{o_{k+1}}$ is the rotation pre-integration;
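The wheel residual admits the same DCM-based sketch as the SINS case above, with a shorter state (illustrative names, not the patent's code):

```python
import numpy as np

def wheel_residual(p_k, R_k, p_k1, R_k1, alpha_o, R_gamma_o):
    """Position row: frame-to-frame displacement rotated into o_k minus the
    odometer position pre-integration; rotation row: SO(3) log mismatch."""
    r_p = R_k.T @ (p_k1 - p_k) - alpha_o
    E = R_gamma_o.T @ (R_k.T @ R_k1)
    ang = np.arccos(np.clip((np.trace(E) - 1) / 2, -1.0, 1.0))
    v = np.array([E[2, 1] - E[1, 2], E[0, 2] - E[2, 0], E[1, 0] - E[0, 1]])
    r_th = np.zeros(3) if ang < 1e-12 else ang / (2 * np.sin(ang)) * v
    return np.concatenate([r_p, r_th])
```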
when the sensor is the VISION sensor, $r(\hat{z}_{i}, x)$ takes the value

$$r_{VISION} = \hat{\bar{P}}^{c_j}_{l} - \begin{bmatrix} x_{c_j} / z_{c_j} \\ y_{c_j} / z_{c_j} \end{bmatrix}$$

where $\hat{\bar{P}}^{c_j}_{l}$ is the normalized camera coordinate of the landmark point in the $j$-th camera coordinate system; $(x_{c_j}, y_{c_j}, z_{c_j})$ are the three-dimensional coordinates of the landmark point in the $j$-th camera coordinate system, calculated by the formula

$$\begin{bmatrix} x_{c_j} \\ y_{c_j} \\ z_{c_j} \end{bmatrix} = R^{c}_{b} \left( R^{b_j}_{w} \left( R^{w}_{b_i} \left( R^{b}_{c}\, \frac{\hat{\bar{P}}^{c_i}_{l}}{\lambda_{l}} + p^{b}_{c} \right) + p^{w}_{b_i} - p^{w}_{b_j} \right) - p^{b}_{c} \right)$$

where $R^{c}_{b}$ is the rotation parameter between the camera and the IMU coordinate system; $R^{b_j}_{w}$ is the rotation from local world coordinates to the $j$-th frame IMU; $R^{w}_{b_i}$ is the rotation from the $i$-th frame IMU to the local world coordinate system; $R^{b}_{c}$ is the rotation from the camera to the IMU; $\lambda_{l}$ is the inverse depth value; $\hat{\bar{P}}^{c_i}_{l}$ is the normalized observation of the landmark in the $i$-th camera frame; $p^{b}_{c}$ is the translation parameter between the camera and the IMU; $p^{w}_{b_i}$ is the coordinate of the $i$-th frame IMU in the local world coordinate system; $p^{w}_{b_j}$ is the translation between the local world coordinate system and the $j$-th frame IMU;
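A sketch of this reprojection residual in matrix form (argument names are assumptions; `R_bc`, `p_bc` denote the camera-to-IMU extrinsics):

```python
import numpy as np

def vision_residual(obs_i, obs_j, inv_depth, R_wi, p_wi, R_wj, p_wj,
                    R_bc, p_bc):
    """Lift the frame-i observation by its inverse depth, transfer it
    camera_i -> IMU_i -> world -> IMU_j -> camera_j, then compare the
    normalized coordinates with the frame-j observation."""
    pc_i = np.array([obs_i[0], obs_i[1], 1.0]) / inv_depth   # 3D in camera i
    pw = R_wi @ (R_bc @ pc_i + p_bc) + p_wi                  # to world
    pb_j = R_wj.T @ (pw - p_wj)                              # to IMU j
    pc_j = R_bc.T @ (pb_j - p_bc)                            # to camera j
    return pc_j[:2] / pc_j[2] - np.asarray(obs_j)[:2]
```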
when the sensor is the GNSS sensor, $r(\hat{z}_{i}, x)$ takes the value

$$r_{GNSS} = \lambda\, \nabla\Delta\phi^{ij}_{ur} - \left( e^{i} - e^{j} \right) \cdot \left( P_{r} - P_{b} \right) - \lambda\, \nabla\Delta N^{ij}$$

where $\lambda$ is the carrier wavelength; $e^{i}$ is the observation vector of the $i$-th satellite; $P_{r}$ is the ECEF coordinate of the receiver; $\nabla\Delta N^{ij}$ is the difference of the integer ambiguities observed for the $i$-th and $j$-th satellites; $\nabla\Delta\phi^{ij}_{ur}$ is the difference between the carrier-phase observations of the receiver and the base station for the $i$-th and $j$-th satellites; $P_{b}$ is the ECEF coordinate of the reference station; and

$$P_{r} = P_{anc} + R^{e}_{enu}\, R^{enu}_{w}\, p^{w}_{b_k}$$

where $R^{enu}_{w}$ is the conversion from the local world coordinate system to the ENU coordinate system; $R^{e}_{enu}$ is the conversion from the ENU coordinate system to the ECEF coordinate system; $p^{w}_{b_k}$ is the position of the $k$-th frame IMU coordinates in the local world coordinate system; $P_{anc}$ is the anchor point coordinate;
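Unlike the least-squares solve in step a, this factor evaluates the double-difference residual at the current state estimate; a minimal sketch (names illustrative):

```python
import numpy as np

def gnss_residual(lam, dd_phase, dd_amb, e_i, e_j,
                  P_anc, R_e_enu, R_enu_w, p_w, P_base):
    """Double-difference carrier-phase residual for one satellite pair,
    with the receiver position chained from the anchor and local pose."""
    P_r = P_anc + R_e_enu @ (R_enu_w @ p_w)
    return lam * dd_phase - (e_i - e_j) @ (P_r - P_base) - lam * dd_amb
```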
when a random planar constraint factor is required, $r(\hat{z}_{i}, x)$ takes the value

$$r_{PLANE} = \begin{bmatrix} e_{12}^{T}\, \mathrm{Log}\!\left( R^{\pi}_{w}\, R\!\left(q^{w}_{b}\right) R^{b}_{o} \right) \\ e_{3}^{T}\, R^{\pi}_{w} \left( R\!\left(q^{w}_{b}\right) p^{b}_{o} + p^{w}_{b} \right) + z^{\pi}_{w} + n_{z} \end{bmatrix}$$

where $e_{12}$ is a fixed selection matrix, with $e_{12} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}^{T}$; $R^{b}_{o}$ is the rotation from the odom to the IMU coordinate system; $q^{w}_{b}$ is the attitude of the IMU in the local world coordinate system; $R^{\pi}_{w}$ is the rotation from the local world coordinate system to the virtual plane $\pi$; the first row constrains the roll angle $\phi$ and the pitch angle $\theta$ relative to the plane; $e_{3}$ is a fixed selection vector, with $e_{3} = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}^{T}$; $p^{w}_{b}$ is the position of the IMU in the local world coordinate system; $p^{b}_{o}$ is the translation from the odom to the IMU coordinate system; $n_{z}$ is the Gaussian noise of $z^{\pi}_{w}$; $z^{\pi}_{w}$ is the height value from the local world coordinate system to the virtual plane $\pi$.
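A sketch of one way such a plane factor can be evaluated, extracting roll, pitch and height of the odom frame relative to the virtual plane (the Euler extraction and all names are assumptions rather than the patent's exact form):

```python
import numpy as np

def plane_residual(R_wb, p_wb, R_pw, z_p, R_bo, p_bo):
    """R_wb/p_wb: IMU pose in local world; R_pw: world-to-plane rotation;
    z_p: world-to-plane height; R_bo/p_bo: odom-to-IMU extrinsics."""
    R_po = R_pw @ R_wb @ R_bo               # odom attitude seen from the plane
    roll = np.arctan2(R_po[2, 1], R_po[2, 2])
    pitch = -np.arcsin(np.clip(R_po[2, 0], -1.0, 1.0))
    p_po = R_pw @ (R_wb @ p_bo + p_wb)      # odom origin in the plane frame
    return np.array([roll, pitch, p_po[2] + z_p])
```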
CN202210643924.8A 2022-06-09 2022-06-09 High-precision positioning method in complex environment Active CN114719843B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210643924.8A CN114719843B (en) 2022-06-09 2022-06-09 High-precision positioning method in complex environment

Publications (2)

Publication Number Publication Date
CN114719843A true CN114719843A (en) 2022-07-08
CN114719843B CN114719843B (en) 2022-09-30

Family

ID=82232706

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210643924.8A Active CN114719843B (en) 2022-06-09 2022-06-09 High-precision positioning method in complex environment

Country Status (1)

Country Link
CN (1) CN114719843B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160209846A1 (en) * 2015-01-19 2016-07-21 The Regents Of The University Of Michigan Visual Localization Within LIDAR Maps
US20190331496A1 (en) * 2016-12-14 2019-10-31 Commissariat A L'energie Atomique Et Aux Energies Alternatives Locating a vehicle
US20190368879A1 (en) * 2018-05-29 2019-12-05 Regents Of The University Of Minnesota Vision-aided inertial navigation system for ground vehicle localization
CN111121767A (en) * 2019-12-18 2020-05-08 南京理工大学 GPS-fused robot vision inertial navigation combined positioning method
CN113223161A (en) * 2021-04-07 2021-08-06 武汉大学 Robust panoramic SLAM system and method based on IMU and wheel speed meter tight coupling
CN113252033A (en) * 2021-06-29 2021-08-13 长沙海格北斗信息技术有限公司 Positioning method, positioning system and robot based on multi-sensor fusion
CN114019552A (en) * 2021-10-21 2022-02-08 中国电子科技集团公司第五十四研究所 Bayesian multi-sensor error constraint-based location reliability optimization method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DEQIANG TIAN et al.: "A Design of Odometer-Aided Visual Inertial Integrated Navigation Algorithm Based on Multiple View Geometry Constraints", 2017 9th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC) *
HUANG Xin: "Research on Intelligent Vehicle Positioning Technology Based on Multi-source Sensor Information Fusion", China Master's Theses Full-text Database (Engineering Science and Technology II) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115201873A (en) * 2022-09-06 2022-10-18 中冶智诚(武汉)工程技术有限公司 Multi-system collaborative indoor and outdoor precise positioning system architecture and operation method thereof
CN115201873B (en) * 2022-09-06 2023-07-28 中冶智诚(武汉)工程技术有限公司 Multi-system cooperative indoor and outdoor precise positioning system architecture and operation method thereof
CN115523920A (en) * 2022-11-30 2022-12-27 西北工业大学 Seamless positioning method based on visual inertial GNSS tight coupling
CN115523920B (en) * 2022-11-30 2023-03-10 西北工业大学 Seamless positioning method based on visual inertial GNSS tight coupling
CN117288187A (en) * 2023-11-23 2023-12-26 北京小米机器人技术有限公司 Robot pose determining method and device, electronic equipment and storage medium
CN117288187B (en) * 2023-11-23 2024-02-23 北京小米机器人技术有限公司 Robot pose determining method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114719843B (en) 2022-09-30

Similar Documents

Publication Publication Date Title
CN114719843B (en) High-precision positioning method in complex environment
CN111426318B (en) Low-cost AHRS course angle compensation method based on quaternion-extended Kalman filtering
CN110243358B (en) Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system
CN110307836B (en) Accurate positioning method for welt cleaning of unmanned cleaning vehicle
CN114199259B (en) Multi-source fusion navigation positioning method based on motion state and environment perception
CN112987065B (en) Multi-sensor-integrated handheld SLAM device and control method thereof
CN111426320B (en) Vehicle autonomous navigation method based on image matching/inertial navigation/milemeter
CN113252033B (en) Positioning method, positioning system and robot based on multi-sensor fusion
CN110940344B (en) Low-cost sensor combination positioning method for automatic driving
CN114526745A (en) Drawing establishing method and system for tightly-coupled laser radar and inertial odometer
CN110412596A (en) A kind of robot localization method based on image information and laser point cloud
CN113175933A (en) Factor graph combined navigation method based on high-precision inertia pre-integration
CN115388884A (en) Joint initialization method for intelligent body pose estimator
CN115371665A (en) Mobile robot positioning method based on depth camera and inertia fusion
CN115435779A (en) Intelligent body pose estimation method based on GNSS/IMU/optical flow information fusion
CN111595332A (en) Full-environment positioning method integrating inertial technology and visual modeling
CN114690229A (en) GPS-fused mobile robot visual inertial navigation method
Park et al. Implementation of vehicle navigation system using GNSS, INS, odometer and barometer
CN113008229A (en) Distributed autonomous integrated navigation method based on low-cost vehicle-mounted sensor
CN110926483B (en) Low-cost sensor combination positioning system and method for automatic driving
CN116576849A (en) Vehicle fusion positioning method and system based on GMM assistance
CN112923934A (en) Laser SLAM technology suitable for combining inertial navigation in unstructured scene
CN115542363A (en) Attitude measurement method suitable for vertical downward-looking aviation pod
CN114674345A (en) Online combined calibration method for inertial navigation/camera/laser velocimeter
CN114646993A (en) Data fusion algorithm for accurate positioning based on GNSS, vision and IMU

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant