CN111121767B - GPS-fused robot vision inertial navigation combined positioning method - Google Patents


Info

Publication number
CN111121767B
Authority
CN
China
Prior art keywords
image
pose
robot
sliding window
gps
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911312943.7A
Other languages
Chinese (zh)
Other versions
CN111121767A (en)
Inventor
郭健
黄迪
李胜
吴益飞
钱抒婷
吕思聪
朱佳森
朱文宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201911312943.7A priority Critical patent/CN111121767B/en
Publication of CN111121767A publication Critical patent/CN111121767A/en
Application granted granted Critical
Publication of CN111121767B publication Critical patent/CN111121767B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/48 Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/48 Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G01S19/49 Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system whereby the further system is an inertial position system, e.g. loosely-coupled
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention discloses a GPS-fused robot vision inertial navigation combined positioning method, which comprises: extracting and matching feature points between the left and right images of a binocular camera and between front and rear images, and calculating the three-dimensional coordinates of the feature points and the relative pose of the image frames; selecting key frames in the image stream, creating a sliding window, and adding the key frames to the sliding window; calculating the visual re-projection error and combining it with the IMU pre-integration residual and the zero-bias residual into a joint pose estimation residual; performing nonlinear optimization on the joint pose estimation residual with the L-M method to obtain the optimized visual inertial navigation (VIO) robot pose; if GPS data are present at the current moment, performing adaptive robust Kalman filtering on the GPS position data and the VIO pose estimate to obtain the final robot pose; if no GPS data are available, using the VIO pose data as the final pose data. The invention improves the positioning accuracy of the robot, reduces computational cost, and meets the requirements of large-scale, long-duration inspection.

Description

GPS-fused robot vision inertial navigation combined positioning method
Technical Field
The invention belongs to the technical field of automatic inspection, and particularly relates to a GPS (Global Positioning System)-fused robot vision inertial navigation combined positioning method.
Background
For an autonomous inspection robot, pose estimation is essential: it is the basis on which the robot completes its inspection tasks. A traditional approach is differential GPS pose estimation, in which satellite signals are received through two GPS antennas and the position, attitude and velocity of the inspection robot are computed, with accuracy reaching the centimeter level. However, existing high-precision differential GPS solutions are expensive and must avoid occlusion by tall obstacles, so they are only suitable for robot positioning in open areas. Simultaneous localization and mapping (SLAM) is currently popular: the surrounding environment is sensed with laser or vision sensors to build a local map, and the robot is matched against this local map to obtain its position. The main implementations are filtering and graph optimization. Filtering methods only consider the optimization of the current and nearby poses of the inspection robot and cannot achieve global optimization because of the large computational load, so graph-optimization-based SLAM is currently the mainstream. However, for the long-duration, wide-area inspection required at airports, SLAM is limited by the computing capability of the industrial PC and cannot achieve high-precision mapping and positioning over large scenes and long time spans.
Disclosure of Invention
The invention aims to provide a GPS-fused robot vision inertial navigation combined positioning method.
The technical scheme for realizing the purpose of the invention is as follows: a GPS-fused robot vision inertial navigation combined positioning method comprises the following steps:
step 1, extracting and matching feature points between the left and right images of a binocular camera and between front and rear images, and calculating the three-dimensional coordinates of the feature points and the relative pose of the image frames;
step 2, selecting a key frame in the image stream, creating a sliding window, and adding the key frame into the sliding window;
step 3, calculating the visual re-projection error, the IMU pre-integration residual and the zero-bias residual, and combining them into a joint pose estimation residual;
step 4, performing nonlinear optimization on the joint pose estimation residual error by using an L-M method to obtain the optimized pose of the visual inertial navigation (VIO) robot;
step 5, if GPS data exist at the current moment, performing adaptive robust Kalman filtering on the GPS position data and the VIO pose estimation data to obtain the final robot pose; if no GPS data are available, using the VIO pose data as the final pose data.
Compared with the prior art, the invention has the remarkable advantages that:
1) The invention estimates the local relative pose of the robot with a visual-inertial tightly coupled method, which reduces the estimation error caused by a single sensor such as binocular vision or the IMU and yields an accurate local relative pose of the robot.
2) The invention fuses the GPS data and the pose estimated by binocular visual-inertial navigation through filtering; because GPS data have no accumulated error, they provide a global constraint on the robot pose estimate, which avoids the computational cost of global parameter optimization and enables application over large-scale scenes.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a flow chart of binocular vision estimating robot pose and space point three-dimensional coordinates.
FIG. 3 is a flow chart of key frame determination and sliding window establishment.
Fig. 4 is a schematic diagram of sliding window optimization.
Detailed Description
As shown in fig. 1, a GPS-fused robot vision inertial navigation combined positioning method comprises the following steps:
step 1, extracting feature points of the left camera image at the current moment according to the feature points of the left camera image at the previous moment and performing feature point matching; extracting feature points of the right camera image at the current moment according to the feature points of the left camera image at the current moment and performing feature point matching; and calculating the three-dimensional coordinates of the feature points and the relative pose of the image frames using the matched feature points, the specific steps being as shown in fig. 2:
step 1-1, extracting Shi-Tomasi corner points from the first frame image of the left camera using the goodFeaturesToTrack() function in OpenCV, and tracking them with the LK optical flow method to obtain the feature points of subsequent left camera images and of the right camera image at the corresponding moment;
step 1-2, judging whether the number of feature points obtained by LK optical-flow tracking in a subsequent left camera image is smaller than a threshold number; if so, extracting additional Shi-Tomasi corner points from the current left camera image to make up the difference;
step 1-3, triangulating matched feature points of left and right camera images, and solving three-dimensional coordinates of space points corresponding to the feature points through Singular Value Decomposition (SVD);
and step 1-4, calculating the relative pose of the image frame according to the characteristic point coordinates in the left camera image and the corresponding three-dimensional coordinates, wherein the relative pose is rotation R and displacement t respectively.
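As a non-authoritative illustration of steps 1-1 to 1-4, the following Python/OpenCV sketch shows Shi-Tomasi corner extraction, LK optical-flow tracking, SVD-based triangulation and pose recovery. The helper names (track_stereo_frame, triangulate_and_solve_pose), the corner budget MAX_FEATURES and the camera matrices K, P_left, P_right are assumptions introduced for illustration and are not specified in the patent.

```python
import cv2
import numpy as np

MAX_FEATURES = 150   # assumed threshold for the number of tracked corners

def track_stereo_frame(prev_left, cur_left, cur_right, prev_pts):
    """Steps 1-1/1-2: LK optical-flow tracking plus Shi-Tomasi top-up, then left-right matching."""
    p0 = prev_pts.reshape(-1, 1, 2).astype(np.float32)
    # Track previous left-image corners into the current left image
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_left, cur_left, p0, None)
    cur_pts = p1[st.ravel() == 1].reshape(-1, 2)
    # Top up with new Shi-Tomasi corners when too few features survive tracking
    if len(cur_pts) < MAX_FEATURES:
        extra = cv2.goodFeaturesToTrack(cur_left, MAX_FEATURES - len(cur_pts),
                                        qualityLevel=0.01, minDistance=20)
        if extra is not None:
            cur_pts = np.vstack([cur_pts, extra.reshape(-1, 2)])
    # Match current left-image corners into the right image at the same moment
    p2, st_r, _ = cv2.calcOpticalFlowPyrLK(cur_left, cur_right,
                                           cur_pts.reshape(-1, 1, 2).astype(np.float32), None)
    ok = st_r.ravel() == 1
    return cur_pts[ok], p2.reshape(-1, 2)[ok]

def triangulate_and_solve_pose(left_pts, right_pts, P_left, P_right, K):
    """Steps 1-3/1-4: SVD-based triangulation and the frame pose (R, t)."""
    pts4d = cv2.triangulatePoints(P_left, P_right, left_pts.T, right_pts.T)  # 4xN homogeneous
    pts3d = (pts4d[:3] / pts4d[3]).T                                         # Nx3 space points
    _, rvec, tvec = cv2.solvePnP(pts3d.astype(np.float32),
                                 left_pts.astype(np.float32), K, None)
    R, _ = cv2.Rodrigues(rvec)                   # rotation R; displacement t is tvec
    return R, tvec, pts3d
```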
Step 2, selecting a key frame in an image stream, creating a sliding window, and adding the key frame into the sliding window, as shown in fig. 3, specifically comprising the following steps:
step 2-1, if the current image is the first frame image, taking the current image as a key frame, creating a sliding window, adding the key frame into the sliding window, and carrying out step 3; otherwise, executing the step 2-2;
step 2-2, calculating the parallax of the current image and the first frame image in the sliding window by using the formula (1), if the parallax is larger than a threshold value, taking the image as a key frame, re-establishing a new sliding window, adding the key frame into the sliding window, and performing step 3; otherwise, executing the step 2-3;
Parallax = (1/n) Σ_{i=1}^{n} √((u_{1i} - u_{2i})² + (v_{1i} - v_{2i})²)   (1)
where Parallax is the parallax between the two images, n is the number of matched feature points between the two frames of images, u_{1i}, v_{1i} are the pixel coordinates of the i-th feature point in the previous frame image, and u_{2i}, v_{2i} are the pixel coordinates of the matched feature point in the current image.
Step 2-3, calculating the number N_track of feature points in the current image tracked from the previous frame image and the number N_longtrack of feature points in the current image that have been tracked through 4 consecutive key frames; if equation (2) is satisfied, the current image is added to the sliding window as a key frame, otherwise it is discarded:
Figure BDA0002325019130000032
wherein N is the number of the image feature points.
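As a non-authoritative illustration of the key-frame selection in step 2, the sketch below computes the average parallax of equation (1) and applies the step 2-1/2-2/2-3 decisions. The parallax threshold and the formula_2 stand-in condition are assumptions: the patent text shown here does not reproduce the exact inequality of formula (2).

```python
import numpy as np

PARALLAX_THRESHOLD = 10.0          # pixels (assumed value)

def average_parallax(prev_pts, cur_pts):
    """Equation (1): mean pixel displacement of the n matched feature points."""
    return float(np.mean(np.linalg.norm(prev_pts - cur_pts, axis=1)))

def formula_2(n_track, n_longtrack, n_total):
    """Placeholder for formula (2), which relates N_track, N_longtrack and N."""
    return n_track < 0.5 * n_total and n_longtrack > 0.25 * n_total   # assumed condition

def is_keyframe(window, prev_pts, cur_pts, n_track, n_longtrack, n_total):
    """Return True if the current image should enter the sliding window as a key frame."""
    if not window:                                                     # step 2-1: first image
        return True
    if average_parallax(prev_pts, cur_pts) > PARALLAX_THRESHOLD:       # step 2-2
        return True
    return formula_2(n_track, n_longtrack, n_total)                    # step 2-3
```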
Step 3, calculating the visual re-projection error, the IMU pre-integration residual and the zero-bias residual, and combining them into a joint pose estimation residual, which specifically comprises the following steps:
According to the IMU observations between key frames, the IMU pre-integration formula gives the pre-integration residuals, namely the rotation residual, the velocity residual and the position residual, together with the IMU zero-bias residuals, namely the gyroscope bias residual and the accelerometer bias residual.
Using the relative pose of the image frames obtained in step 1-4, the space point coordinates calculated in step 1-3 are re-projected onto the image plane with a pinhole camera model, giving the visual re-projection error:
r_{k,i} = z_{k,i} - h(X_k, P_i)   (3)
where z_{k,i} is the coordinate of the feature point corresponding to the i-th space point in the k-th frame image, h(·) is the pinhole camera projection function, X_k is the projection matrix of the k-th frame image, and P_i is the coordinate of the i-th space point.
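The residual above can be written compactly in code. The sketch below is only an illustration of the pinhole model h(·) and the residual z_{k,i} - h(X_k, P_i); the intrinsic matrix K and the representation of X_k as a rotation-translation pair (R, t) are assumptions for this example.

```python
import numpy as np

def pinhole_project(X_k, P_i, K):
    """h(.): project the i-th space point P_i with the k-th frame pose X_k = (R, t)."""
    R, t = X_k
    p_cam = R @ P_i + t                 # transform the space point into the camera frame
    p_img = K @ (p_cam / p_cam[2])      # perspective division followed by intrinsics
    return p_img[:2]

def reprojection_error(z_ki, X_k, P_i, K):
    """r_{k,i} = z_{k,i} - h(X_k, P_i)."""
    return z_ki - pinhole_project(X_k, P_i, K)
```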
The invention designs the joint pose estimation residual and solves approximately for the increment Δx as follows:
min_x (1/2)||f(x)||²   (4)
where f(x) is the stacked residual vector formed from the IMU pre-integration residuals, the zero-bias residuals and the visual re-projection residuals over the sliding window, N is the number of key frames in the sliding window, and M is the number of three-dimensional space points of the first key frame in the sliding window.
Step 4, performing nonlinear optimization on the joint pose estimation residual error by using an L-M method to obtain the optimized pose of the visual inertial navigation (VIO) robot, wherein the method specifically comprises the following steps:
setting a trust-region matrix D to constrain the range of Δx, and performing a first-order expansion of equation (4), expressed as
min_{Δx} (1/2)||f(x) + JΔx||²,  subject to ||DΔx||² ≤ u   (5)
where J is the Jacobian of f(x) and u is the radius of the trust region. The constrained optimization problem is converted into an unconstrained one using a Lagrange multiplier λ, expressed as
min_{Δx} (1/2)||f(x) + JΔx||² + (λ/2)||DΔx||²   (6)
Expanding equation (6) and setting the derivative to zero gives:
Δx = -(H + λI)^{-1} J^T f(x)   (7)
where H = J^T J and I is the identity matrix. The number of iterations is fixed; the Δx obtained from equation (7) is repeatedly added to x and equation (7) is re-evaluated until the iteration count is reached or Δx is smaller than a set threshold, yielding the optimized relative pose of each image in the sliding window and the three-dimensional coordinates of the space points.
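The iteration of equation (7) can be sketched as follows. The helper residual_and_jacobian, assumed to return the stacked residual f(x) and its Jacobian J, is not defined in the patent, and the damping λ is kept fixed here, whereas a full L-M implementation would adapt it together with the trust-region matrix D.

```python
import numpy as np

def levenberg_marquardt(x, residual_and_jacobian, lam=1e-3, max_iter=20, tol=1e-6):
    """Iterate delta_x = -(H + lam*I)^-1 J^T f(x) until convergence (equation (7))."""
    for _ in range(max_iter):
        f, J = residual_and_jacobian(x)
        H = J.T @ J
        delta_x = -np.linalg.solve(H + lam * np.eye(H.shape[0]), J.T @ f)
        x = x + delta_x
        if np.linalg.norm(delta_x) < tol:    # stop when the update is small enough
            break
    return x
```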
Step 5, if GPS data are present at the current moment, performing adaptive robust Kalman filtering on the GPS position data and the visual inertial navigation (VIO) robot pose data to obtain the final robot pose; if no GPS data are available, using the VIO pose data as the final pose data, the steps being as follows.
Step 5-1, judging whether GPS data exist, if so, turning to step 5-2; otherwise, turning to the step 5-3;
step 5-2, fusing the robot pose data of the GPS and visual inertial navigation (VIO) system, namely estimating the observation error by using adaptive robust Kalman filtering, wherein the specific steps are as follows:
step 5-2-1, the state prediction and the covariance matrix prediction of the system are obtained from the state equation and the prediction equation of the system:
x_{k,k-1} = A x_{k-1}   (8)
P_{k,k-1} = A P_{k-1} A^T + Q   (9)
The system state equation and prediction equation in the invention are:
x_k = A x_{k-1} + w_{k-1}   (10)
z_k = H x_k + v_k   (11)
where x_k = [x_k  y_k  z_k  v_x  v_y  v_z]^T is the system state; the velocity output by the VIO visual-inertial odometer and the GPS position data are used as the observations. A is the state transition matrix, I_{3×3} is the 3×3 identity matrix, H is the identity observation matrix, and the system noise w_{k-1} and the measurement noise v_k are treated as Gaussian white noise with covariance matrices Q and R, respectively.
Step 5-2-2, the innovation statistic is evaluated using equation (12):
T_k = ε_k^T (H P_{k,k-1} H^T + R)^{-1} ε_k   (12)
where ε_k = z_k - H x_{k,k-1} is the innovation. When the statistic T_k is smaller than the threshold, step 5-2-3 is executed; when T_k is greater than the threshold, the system is abnormal, the robot state estimation at the current moment is cancelled, and the method goes to step 5-3.
Step 5-2-3, the statistical characteristics of the measurement noise are updated as:
R_k = (1 - d_k) R_{k-1} + d_k (ε_k ε_k^T - H P_{k,k-1} H^T)   (13)
where d_k = (1 - b)/(1 - b^{k+1}) and b is a forgetting factor, generally taken as 0.95-0.99. The Kalman gain matrix is:
K_k = P_{k,k-1} H^T (H P_{k,k-1} H^T + R_k)^{-1}   (14)
The state estimate and covariance matrix of the inspection robot at the current moment are, respectively:
x_k = x_{k,k-1} + K_k (z_k - H x_{k,k-1})   (15)
P_k = (I - K_k H_k) P_{k,k-1}   (16)
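Steps 5-2-1 to 5-2-3 can be summarized in the following sketch. The constant-velocity transition matrix in build_A, the chi-square threshold and the forgetting-factor value are illustrative assumptions; the patent only fixes the general form of equations (8)-(16).

```python
import numpy as np

def build_A(dt):
    """Assumed constant-velocity state transition for x = [p(3), v(3)]."""
    A = np.eye(6)
    A[:3, 3:] = dt * np.eye(3)
    return A

def adaptive_kalman_step(x, P, z, A, H, Q, R, b=0.98, k=1, threshold=50.0):
    """One prediction/update cycle with innovation test and adaptive R update."""
    # Step 5-2-1: prediction of the state and covariance
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Step 5-2-2: innovation statistic T_k; reject the update if the system is abnormal
    eps = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    T_k = float(eps.T @ np.linalg.inv(S) @ eps)
    if T_k > threshold:
        return None                               # fall back to the VIO pose (step 5-3)
    # Step 5-2-3: adaptive update of the measurement-noise statistics
    d_k = (1.0 - b) / (1.0 - b ** (k + 1))
    R = (1.0 - d_k) * R + d_k * (np.outer(eps, eps) - H @ P_pred @ H.T)
    # Kalman gain, state estimate and covariance of the inspection robot
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ eps
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, R
```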
and 5-3, using the current visual inertial navigation (VIO) robot pose data as a final robot pose.

Claims (4)

1. A combined positioning method of the visual inertial navigation of a robot fused with a GPS is characterized by comprising the following specific steps:
step 1, extracting feature points of the left camera and the right camera at the current moment, performing feature point matching, and calculating the three-dimensional coordinates of the feature points and the relative pose of the image frames using the matched feature points, the specific steps being as follows:
step 1-1, extracting corner points of a first frame image of a left camera, and tracking by using an LK optical flow method to obtain subsequent image characteristic points of the left camera and right camera image characteristic points at corresponding moments;
step 1-2, judging whether the number of feature points obtained in a subsequent left camera image is smaller than a threshold number, and if so, extracting additional corner points from the current left camera image to make up the difference;
step 1-3, triangulating the matched feature points of the left and right camera images, and solving the three-dimensional coordinates of the space points corresponding to the feature points through singular value decomposition;
step 1-4, calculating the relative pose of the image frame according to the feature point coordinates in the left camera image and the corresponding three-dimensional coordinates, wherein the relative pose is rotation R and displacement t respectively;
step 2, selecting a key frame in an image stream, creating a sliding window, adding the key frame into the sliding window, and selecting the key frame in the image stream specifically comprises the following steps:
step 2-1, if the current image is the first frame image, taking the current image as a key frame, creating a sliding window, adding the key frame into the sliding window, and carrying out step 3; otherwise, executing the step 2-2;
step 2-2, calculating the parallax of the current image and the first frame image in the sliding window, if the parallax is larger than a threshold value, taking the image as a key frame, re-establishing a new sliding window, adding the key frame into the sliding window, and performing step 3; otherwise, executing the step 2-3;
the calculation formula of the parallax between the current image and the first frame image in the sliding window is as follows:
Parallax = (1/n) Σ_{i=1}^{n} √((u_{1i} - u_{2i})² + (v_{1i} - v_{2i})²)
where Parallax is the parallax between the two images, n is the number of matched feature points between the two frames of images, u_{1i}, v_{1i} are the pixel coordinates of the i-th feature point in the previous frame image, and u_{2i}, v_{2i} are the pixel coordinates of the matched feature point in the current image;
step 2-3, calculating the number N_track of feature points in the current image tracked from the previous frame image and the number N_longtrack of feature points in the current image that have been tracked through 4 consecutive key frames; if the following formula is satisfied, the current image is added to the sliding window as a key frame, otherwise it is discarded:
Figure FDA0004210086810000012
wherein N is the number of the image feature points;
step 3, calculating a visual re-projection error, an IMU pre-integration residual and a zero-bias residual, and combining them into a joint pose estimation residual;
the space point coordinates are re-projected onto the image plane with a pinhole camera model according to the relative pose of the image frames, giving the visual re-projection error:
r_{k,i} = z_{k,i} - h(X_k, P_i)
where z_{k,i} is the coordinate of the feature point corresponding to the i-th space point in the k-th frame image, h(·) is the pinhole camera projection function, X_k is the projection matrix of the k-th frame image, and P_i is the coordinate of the i-th space point;
the joint pose estimation residual is:
min_x (1/2)||f(x)||²
where f(x) is the stacked residual vector formed from the IMU pre-integration residuals, the zero-bias residuals and the visual re-projection residuals over the sliding window, N is the number of key frames in the sliding window, and M is the number of three-dimensional space points of the first key frame in the sliding window;
and 4, performing nonlinear optimization on the joint pose estimation residual error by using an L-M method to obtain the optimized pose of the visual inertial navigation robot, wherein the specific method comprises the following steps:
setting a trust-region matrix D to constrain the range of Δx, and performing a first-order expansion of the joint pose estimation residual, expressed as:
min_{Δx} (1/2)||f(x) + JΔx||²,  subject to ||DΔx||² ≤ u
where J is the Jacobian matrix of f(x) and u is the trust-region radius;
the constrained optimization problem is converted into an unconstrained one using a Lagrange multiplier λ, expressed as:
min_{Δx} (1/2)||f(x) + JΔx||² + (λ/2)||DΔx||²
expanding the above equation and setting the derivative to zero gives:
Δx = -(H + λI)^{-1} J^T f(x)
where H = J^T J and I is the identity matrix;
Δx is added to x and then Δx is re-computed until the iteration count is reached or Δx is smaller than a set threshold, yielding the optimized pose of the visual inertial navigation robot;
step 5, if GPS data exist at the current moment, performing self-adaptive robust Kalman filtering on the GPS position data and the VIO pose estimation data to obtain the final robot pose; if the GPS data is not available, the VIO pose data is used for replacing the final pose data.
2. The combined positioning method of the visual inertial navigation of the robot fused with the GPS according to claim 1, wherein the specific method for obtaining the final pose of the robot by performing adaptive robust Kalman filtering on GPS position data and visual inertial navigation (VIO) robot pose data is as follows:
the state prediction and the covariance matrix prediction of the system are obtained from the state equation and the prediction equation of the system:
x_{k,k-1} = A x_{k-1}
P_{k,k-1} = A P_{k-1} A^T + Q
where A is the state transition matrix, I_{3×3} is the 3×3 identity matrix, H is the identity observation matrix, and the system noise w_{k-1} and the measurement noise v_k are treated as Gaussian white noise with covariance matrices Q and R, respectively;
the innovation statistic is calculated; when the statistic T_k is smaller than the threshold, the next step is executed; when T_k is greater than the threshold, the system is abnormal, the robot state estimation flow at the current moment is cancelled, and the VIO pose data is used as the final pose data;
the statistical characteristics of the measurement noise are updated as:
R_k = (1 - d_k) R_{k-1} + d_k (ε_k ε_k^T - H P_{k,k-1} H^T)
where d_k = (1 - b)/(1 - b^{k+1}) and b is a forgetting factor;
the Kalman gain matrix is:
K_k = P_{k,k-1} H^T (H P_{k,k-1} H^T + R_k)^{-1}
the state estimate and covariance matrix of the inspection robot at the current moment are, respectively:
x_k = x_{k,k-1} + K_k (z_k - H x_{k,k-1})
P_k = (I - K_k H_k) P_{k,k-1}
3. the combined positioning method of the robot vision inertial navigation fused with the GPS according to claim 2, wherein the system state equation and the prediction equation are as follows:
x_k = A x_{k-1} + w_{k-1}
z_k = H x_k + v_k
where x_k = [x_k  y_k  z_k  v_x  v_y  v_z]^T, and the velocity output by the VIO visual-inertial odometer and the GPS position data are used as the observations.
4. The GPS-fused robot vision inertial navigation combined positioning method according to claim 2, wherein the innovation statistic is
T_k = ε_k^T (H P_{k,k-1} H^T + R)^{-1} ε_k
where ε_k = z_k - H x_{k,k-1} is the innovation.
CN201911312943.7A 2019-12-18 2019-12-18 GPS-fused robot vision inertial navigation combined positioning method Active CN111121767B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911312943.7A CN111121767B (en) 2019-12-18 2019-12-18 GPS-fused robot vision inertial navigation combined positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911312943.7A CN111121767B (en) 2019-12-18 2019-12-18 GPS-fused robot vision inertial navigation combined positioning method

Publications (2)

Publication Number Publication Date
CN111121767A CN111121767A (en) 2020-05-08
CN111121767B true CN111121767B (en) 2023-06-30

Family

ID=70499734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911312943.7A Active CN111121767B (en) 2019-12-18 2019-12-18 GPS-fused robot vision inertial navigation combined positioning method

Country Status (1)

Country Link
CN (1) CN111121767B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111739063B (en) * 2020-06-23 2023-08-18 郑州大学 Positioning method of power inspection robot based on multi-sensor fusion
CN111895989A (en) * 2020-06-24 2020-11-06 浙江大华技术股份有限公司 Robot positioning method and device and electronic equipment
CN111880207B (en) * 2020-07-09 2022-12-23 南京航空航天大学 Visual inertial satellite tight coupling positioning method based on wavelet neural network
CN111750855B (en) * 2020-08-03 2022-02-15 长安大学 Intelligent vibratory roller of independent operation of vision leading
CN112240768A (en) * 2020-09-10 2021-01-19 西安电子科技大学 Visual inertial navigation fusion SLAM method based on Runge-Kutta4 improved pre-integration
CN112233177B (en) * 2020-10-10 2021-07-30 中国安全生产科学研究院 Unmanned aerial vehicle pose estimation method and system
CN112305576A (en) * 2020-10-31 2021-02-02 中环曼普科技(南京)有限公司 Multi-sensor fusion SLAM algorithm and system thereof
CN112525197B (en) * 2020-11-23 2022-10-28 中国科学院空天信息创新研究院 Ultra-wideband inertial navigation fusion pose estimation method based on graph optimization algorithm
CN112712107B (en) * 2020-12-10 2022-06-28 浙江大学 Optimization-based vision and laser SLAM fusion positioning method
CN113031040A (en) * 2021-03-01 2021-06-25 宁夏大学 Positioning method and system for airport ground clothes vehicle
CN113110446A (en) * 2021-04-13 2021-07-13 深圳市千乘机器人有限公司 Dynamic inspection method for autonomous mobile robot
CN113203418B (en) * 2021-04-20 2022-09-16 同济大学 GNSSINS visual fusion positioning method and system based on sequential Kalman filtering
CN113432602B (en) * 2021-06-23 2022-12-02 西安电子科技大学 Unmanned aerial vehicle pose estimation method based on multi-sensor fusion
CN113465596B (en) * 2021-06-25 2023-05-30 电子科技大学 Four-rotor unmanned aerial vehicle positioning method based on multi-sensor fusion
CN113516714A (en) * 2021-07-15 2021-10-19 北京理工大学 Visual SLAM method based on IMU pre-integration information acceleration feature matching
CN113838129B (en) * 2021-08-12 2024-03-15 高德软件有限公司 Method, device and system for obtaining pose information
CN114719843B (en) * 2022-06-09 2022-09-30 长沙金维信息技术有限公司 High-precision positioning method in complex environment
CN115793001B (en) * 2023-02-07 2023-05-16 立得空间信息技术股份有限公司 Vision, inertial navigation and defending fusion positioning method based on inertial navigation multiplexing

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9607401B2 (en) * 2013-05-08 2017-03-28 Regents Of The University Of Minnesota Constrained key frame localization and mapping for vision-aided inertial navigation
CN103983263A (en) * 2014-05-30 2014-08-13 东南大学 Inertia/visual integrated navigation method adopting iterated extended Kalman filter and neural network
US9709404B2 (en) * 2015-04-17 2017-07-18 Regents Of The University Of Minnesota Iterative Kalman Smoother for robust 3D localization for vision-aided inertial navigation
CN105865452B (en) * 2016-04-29 2018-10-02 浙江国自机器人技术有限公司 A kind of mobile platform position and orientation estimation method based on indirect Kalman filtering
CN107909614B (en) * 2017-11-13 2021-02-26 中国矿业大学 Positioning method of inspection robot in GPS failure environment
CN108489482B (en) * 2018-02-13 2019-02-26 视辰信息科技(上海)有限公司 The realization method and system of vision inertia odometer
CN109993113B (en) * 2019-03-29 2023-05-02 东北大学 Pose estimation method based on RGB-D and IMU information fusion
CN110296702A (en) * 2019-07-30 2019-10-01 清华大学 Visual sensor and the tightly coupled position and orientation estimation method of inertial navigation and device
CN110472585B (en) * 2019-08-16 2020-08-04 中南大学 VI-S L AM closed-loop detection method based on inertial navigation attitude track information assistance

Also Published As

Publication number Publication date
CN111121767A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
CN111121767B (en) GPS-fused robot vision inertial navigation combined positioning method
Qin et al. A general optimization-based framework for global pose estimation with multiple sensors
CN109887057B (en) Method and device for generating high-precision map
US9071829B2 (en) Method and system for fusing data arising from image sensors and from motion or position sensors
Tardif et al. A new approach to vision-aided inertial navigation
US8213706B2 (en) Method and system for real-time visual odometry
US10907971B2 (en) Square root inverse Schmidt-Kalman filters for vision-aided inertial navigation and mapping
CN113376669B (en) Monocular VIO-GNSS fusion positioning algorithm based on dotted line characteristics
CN115479602A (en) Visual inertial odometer method fusing event and distance
Liu A robust and efficient lidar-inertial-visual fused simultaneous localization and mapping system with loop closure
Caruso et al. An inverse square root filter for robust indoor/outdoor magneto-visual-inertial odometry
Khoshelham et al. Vehicle positioning in the absence of GNSS signals: Potential of visual-inertial odometry
Pan et al. Tightly-coupled multi-sensor fusion for localization with LiDAR feature maps
CN112945233B (en) Global drift-free autonomous robot simultaneous positioning and map construction method
Hong et al. Visual inertial odometry using coupled nonlinear optimization
CN112444245A (en) Insect-imitated vision integrated navigation method based on polarized light, optical flow vector and binocular vision sensor
Liu et al. Semi-dense visual-inertial odometry and mapping for computationally constrained platforms
CN114646993A (en) Data fusion algorithm for accurate positioning based on GNSS, vision and IMU
Conway et al. Vision-based Velocimetry over Unknown Terrain with a Low-Noise IMU
Hsu et al. New integrated navigation scheme for the level 4 autonomous vehicles in dense urban areas
Wang et al. A visual integrated navigation for precise position estimation
Ronen et al. Development challenges and performance analysis of drone visual/inertial slam in a global reference system
Solodar et al. VIO-DualProNet: Visual-inertial odometry with learning based process noise covariance
CN117058430B (en) Method, apparatus, electronic device and storage medium for field of view matching
Nguyen et al. Likelihood-based iterated cubature multi-state-constraint Kalman filter for visual inertial navigation system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant