CN110345944A - Robot localization method fusing visual features and IMU information - Google Patents

Robot localization method fusing visual features and IMU information Download PDF

Info

Publication number
CN110345944A
Authority
CN
China
Prior art keywords
imu
coordinate system
indicate
acceleration
relative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910448746.1A
Other languages
Chinese (zh)
Inventor
禹鑫燚
来磊
欧林林
金燕芳
吴加鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201910448746.1A priority Critical patent/CN110345944A/en
Publication of CN110345944A publication Critical patent/CN110345944A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/18 Stabilised platforms, e.g. by gyroscope
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Navigation (AREA)

Abstract

A robot localization method fusing visual features and IMU information is provided; the invention proposes a method for fusing monocular vision with an IMU. The vision front end first tracks the camera pose using point features to estimate the robot pose, and estimates the IMU bias model, the absolute scale, and the direction of gravitational acceleration from purely visual information. Meanwhile, the high-precision pose solved by the IMU provides an initial reference for the optimization search and, as state variables, participates in the optimization together with the visual navigation information. The back end uses a tightly coupled nonlinear optimization based on a sliding window to optimize the pose and the map in real time; the computational complexity of the sliding-window odometry calculation remains fixed, which improves the robustness of the algorithm.

Description

Robot localization method fusing visual features and IMU information
Technical field
The present invention relates to indoor visual localization methods for robots, and specifically to a robot localization method that fuses visual features and IMU information.
Background technique
The simultaneous localization and mapping (SLAM) problem for mobile robots is to let a robot, placed in an unknown or uncertain environment, perceive external information with its onboard sensors, localize itself, and build a map of the environment on that basis.
Visual localization is currently a hot topic in robot SLAM research. Visual localization methods use acquired image sequences: image features are extracted and matched across different frames, and the motion parameters of the mobile robot are derived from the motion of the feature points. Monocular SLAM suffers from scale ambiguity at initialization and scale drift during tracking, which are inherent defects of monocular vision in complex environments; to compensate for them and improve the robustness of a single-camera system, other sensors usually need to be used together. Inertial navigation data is stable in the short term but accumulates error severely; fusing the two technologies combines the high precision of monocular visual SLAM with the stability of inertial data, letting each compensate for the other's weaknesses and meeting the required positioning accuracy. Chen Xiyuan proposed a method that fuses camera and IMU data with an extended Kalman filter for camera attitude tracking (Chen Xiyuan. An inertial/visual integrated navigation method using iterated extended Kalman filtering and neural networks [P]. Jiangsu: CN103983263A, 2014-08-13.). However, it does not use the IMU prior during tracking, so the system easily fails to converge and the positioning accuracy is poor. Wang Qiang et al. proposed an implementation of visual-inertial odometry that computes the pose of the device for each image frame from the spatial constraints between current-image feature points and the 3D map points maintained by a map-expansion module, the feature-matching constraints between image frames, and the IMU constraints between frames (Wang Qiang. Realization method and system of visual-inertial odometry [P]. Shanghai: CN108489482A, 2018-09-04.). However, this method must continually determine the camera pose with a vision algorithm, so the computational load is large. Qin Tong et al. proposed a six-degree-of-freedom state-estimation algorithm based on a monocular camera and a low-cost IMU; its visual part tracks with sparse optical flow, and the paper proposes a robust visual-inertial joint initialization and failure-recovery procedure (Tong Q, Peiliang L, Shaojie S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator [J]. IEEE Transactions on Robotics, 2018: 1-17.). However, because the whole system uses the most basic reference-frame tracking model, it cannot cope with complex motion environments.
Summary of the invention
To overcome the shortcomings of existing methods, the present invention proposes a robot localization method that fuses monocular vision and an IMU.
In the present invention, the vision front end first tracks the pose using point features to estimate the robot pose, and estimates the IMU bias model, the absolute scale, and the direction of gravitational acceleration from purely visual information. Meanwhile, the high-precision pose solved by the IMU provides an initial reference for the optimization search and, as state variables, participates in the optimization together with the visual navigation information. The back end uses a tightly coupled nonlinear optimization based on a sliding window to optimize the pose and the map in real time; the computational complexity of the sliding-window odometry calculation remains fixed, which improves the robustness of the algorithm. The invention realizes robot SLAM with fused monocular vision and inertial navigation information and can accurately estimate the robot motion and the surrounding environment.
The robot localization method fusing visual features and IMU information comprises the following specific steps:
Step 1: pose estimation with a monocular camera;
First, ORB features are extracted from the image frames captured by the monocular camera using the ORB feature-extraction algorithm; from the pixel positions of matched feature points between two frames, the motion of the monocular camera is recovered using multi-view geometry, and the robot pose is estimated.
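To make this step concrete, the following minimal sketch (assuming OpenCV; the function name and parameter values such as nfeatures=2000 are illustrative, not taken from the patent) extracts ORB features from two frames, matches them, and recovers the relative camera motion via the essential matrix:

```python
import numpy as np
import cv2

def estimate_relative_pose(img1, img2, K):
    """ORB extraction + matching + essential-matrix decomposition between two frames."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    # brute-force Hamming matching with cross-check, since ORB descriptors are binary
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # essential matrix with RANSAC, then cheirality check to recover R, t
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # t has unit norm: the monocular scale is unobservable here
```

The recovered translation is known only up to scale; resolving that absolute scale is precisely the task of the joint initialization in step 3.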
Assume a spatial point has coordinates $P_i = [X_i, Y_i, Z_i]^T$ and its projected pixel coordinates are $U_i = [u_i, v_i]^T$. The relationship between the pixel position and the spatial-point position is:

$$s_i U_i = K \exp(\xi^\wedge) P_i \qquad (1)$$

In the above formula, $s_i$ is the depth scale factor, $K$ is the camera intrinsic matrix, $\xi$ is the Lie-algebra representation of the camera pose, and $\exp$ denotes the exponential map of the Lie algebra. Finally, a least-squares problem is constructed and the optimal camera pose is computed by minimizing the reprojection error. The calculation formula is:

$$\xi^* = \arg\min_{\xi} \frac{1}{2} \sum_{i=1}^{n} \left\| U_i - \frac{1}{s_i} K \exp(\xi^\wedge) P_i \right\|_2^2 \qquad (2)$$
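The least-squares pose estimation of formulas (1)-(2) can be sketched as follows; this is a simplified stand-in for the patent's optimization, with an assumed rotation-vector parameterization of $\xi$, synthetic data, and illustrative intrinsics:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(pose6, P, K):
    """Project 3D points P (N,3) under pose [rotvec|t] -- formula (1): s*U = K*exp(xi^)*P."""
    R = Rotation.from_rotvec(pose6[:3]).as_matrix()
    Pc = P @ R.T + pose6[3:]           # points in the camera frame
    uvw = Pc @ K.T                     # homogeneous pixels; the depth is s_i
    return uvw[:, :2] / uvw[:, 2:3]    # divide by the scale factor s_i

def reproj_residual(pose6, P, U, K):
    """Stacked residuals U_i - (1/s_i) K exp(xi^) P_i -- the terms of formula (2)."""
    return (U - project(pose6, P, K)).ravel()

# toy example: recover a known pose from synthetic correspondences
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
true_pose = np.array([0.05, -0.02, 0.1, 0.3, -0.1, 1.5])   # rotvec + translation
P = np.random.default_rng(0).uniform([-1, -1, 4], [1, 1, 8], (50, 3))
U = project(true_pose, P, K)

est = least_squares(reproj_residual, x0=np.zeros(6), args=(P, U, K), method="lm")
print("estimated pose:", est.x)        # converges to true_pose
```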
Step 2: carrying out pose estimation using IMU;
Based on the data provided by the IMU, a camera motion model is established using the pre-integration-based IMU kinematic model, and the robot pose is preliminarily estimated in real time from this motion model.
The IMU outputs an acceleration $a_B$ and an angular velocity $\omega_B$. Define the world frame as $W$ and the IMU body frame as $B$. Through IMU pre-integration, the state at time $k+1$ is extrapolated from the state at time $k$. After pre-integration, the orientation, velocity, and position of the IMU are updated respectively as:

$$R_{WB}^{k+1} = R_{WB}^{k}\,\exp\big((\omega_B - b_g)\,\Delta t\big)$$
$$v_{WB}^{k+1} = v_{WB}^{k} + g_w\,\Delta t + R_{WB}^{k}\,(a_B - b_a)\,\Delta t$$
$$p_{WB}^{k+1} = p_{WB}^{k} + v_{WB}^{k}\,\Delta t + \tfrac{1}{2}\,g_w\,\Delta t^2 + \tfrac{1}{2}\,R_{WB}^{k}\,(a_B - b_a)\,\Delta t^2$$

where $R_{WB}^{k+1}$ and $R_{WB}^{k}$ denote the rotation matrices of the IMU frame B relative to the world frame at times $k+1$ and $k$; $v_{WB}^{k+1}$ and $v_{WB}^{k}$ denote the corresponding velocities; $\exp$ denotes the exponential map of the Lie algebra; $\Delta t$ denotes the IMU sampling interval; $g_w$ denotes the current gravitational acceleration; $p_{WB}^{k+1}$ and $p_{WB}^{k}$ denote the corresponding displacements; and $b_a$ and $b_g$ denote the accelerometer and gyroscope biases of the IMU, respectively.
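A minimal sketch of one propagation step of the above model follows; the function and variable names and the gravity convention $g_w = [0, 0, -9.81]^T$ are assumptions, not the patent's code:

```python
import numpy as np
from scipy.spatial.transform import Rotation

GRAVITY_W = np.array([0.0, 0.0, -9.81])  # g_w, gravity expressed in the world frame

def imu_propagate(R_wb, v_wb, p_wb, a_b, w_b, b_a, b_g, dt):
    """One IMU step: R_{k+1}, v_{k+1}, p_{k+1} from the k-state and the
    bias-corrected measurements, following the update equations above."""
    a = a_b - b_a                       # bias-corrected acceleration (body frame)
    w = w_b - b_g                       # bias-corrected angular velocity
    R_next = R_wb @ Rotation.from_rotvec(w * dt).as_matrix()   # exponential map
    v_next = v_wb + GRAVITY_W * dt + R_wb @ a * dt
    p_next = p_wb + v_wb * dt + 0.5 * GRAVITY_W * dt**2 + 0.5 * R_wb @ (a * dt**2)
    return R_next, v_next, p_next

# usage at the 100 Hz IMU rate mentioned in the embodiment (dt = 0.01 s)
R, v, p = np.eye(3), np.zeros(3), np.zeros(3)
R, v, p = imu_propagate(R, v, p,
                        a_b=np.array([0.0, 0.0, 9.81]),  # stationary: accel cancels gravity
                        w_b=np.zeros(3), b_a=np.zeros(3), b_g=np.zeros(3), dt=0.01)
```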
Step 3: the joint initialization of vision inertia;
The purely visual data obtained in step 1 and the IMU data obtained in step 2 are combined, and the IMU bias model, the absolute scale, and the gravity direction are estimated.
First, the gyroscope bias is estimated: the rotation between keyframes obtained from visual information is compared with the rotation obtained from the IMU pre-integration model; taking the bias as the variable and minimizing the difference between the two yields the gyroscope bias:

$$b_g^{*} = \arg\min_{b_g} \sum_{i=1}^{N-1} \left\| \operatorname{Log}\!\Big( \big(\Delta R_{i,i+1}\,\exp(J_{\Delta R}\,b_g)\big)^{T} \big(R_{WB}^{i}\big)^{T} R_{WB}^{i+1} \Big) \right\|^2$$

where $N$ is the number of keyframes; $R_{WC}^{i}$ is the rotation of the camera frame relative to the world frame computed by the visual SLAM algorithm and $R_{CB}$ is the calibration matrix, so that $R_{WB}^{i} = R_{WC}^{i} R_{CB}$; $\Delta R_{i,i+1}$ is the rotation matrix obtained by integrating the gyroscope between two consecutive keyframes; $J_{\Delta R}$ is the first-order Jacobian of the pre-integrated rotation with respect to the gyroscope bias; finally, $b_g$ is solved by Gauss-Newton.
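The gyroscope-bias estimation can be sketched as below: the gyro is re-integrated with a candidate bias and the residual rotation against the vision-derived relative rotations is minimized. This least-squares call is a stand-in for the Gauss-Newton solution described above, and the data layout (gyro_segments, R_wb_vis) is an assumption:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def integrate_gyro(omegas, dt, b_g):
    """Delta R_{i,i+1}: integrate bias-corrected gyro samples between two keyframes."""
    R = np.eye(3)
    for w in omegas:
        R = R @ Rotation.from_rotvec((w - b_g) * dt).as_matrix()
    return R

def bias_residuals(b_g, gyro_segments, R_wb_vis, dt):
    """Per keyframe pair: Log((Delta R_pre)^T (R_WB^i)^T R_WB^{i+1}), the rotation error."""
    res = []
    for omegas, R_i, R_j in zip(gyro_segments, R_wb_vis[:-1], R_wb_vis[1:]):
        dR_pre = integrate_gyro(omegas, dt, b_g)
        dR_vis = R_i.T @ R_j            # relative rotation from vision (R_WB = R_WC R_CB)
        res.append(Rotation.from_matrix(dR_pre.T @ dR_vis).as_rotvec())
    return np.concatenate(res)

# gyro_segments: list of (M,3) gyro sample arrays between consecutive keyframes;
# R_wb_vis: list of visual R_WB matrices; dt = 0.01 for the 100 Hz IMU.
# b_g = least_squares(bias_residuals, np.zeros(3),
#                     args=(gyro_segments, R_wb_vis, 0.01)).x
```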
Then, using the result of the gyroscope-bias estimation, the scale and the gravitational acceleration are estimated. For the pairwise relationships among every three consecutive keyframes, a linear equation is obtained from the pairwise pre-integration measurements.
Denote the keyframes at times $i$, $i+1$, and $i+2$ as 1, 2, and 3; the terms of the above equation are then given accordingly.
In the above formula, $p_{WC}$ denotes the position of the camera center C in the world frame; $\Delta t$ is the inter-frame time difference; $I$ is the identity matrix; $R_{WC}$ denotes the rotation of the camera frame C relative to the world frame; $R_{WB}$ denotes the rotation of the IMU frame B relative to the world frame; $\Delta v$ denotes the inter-frame velocity change; and $s$ and $g_w$ are the scale and gravitational-acceleration estimates being sought.
Finally, using the scale and gravity estimates, the IMU accelerometer bias is estimated by considering the linear relationship among three consecutive keyframes.
Denote the keyframes at times $i$, $i+1$, and $i+2$ as 1, 2, and 3; the quantities $\varphi(i)$, $\zeta(i)$, and $\psi(i)$ are then solved accordingly.
In the above formula, $[\,\cdot\,]_{(:,1:2)}$ denotes taking the first two columns of a matrix; $R_{WI}$ denotes the orientation of the inertial frame in the world frame; $\hat g_I$ denotes the gravity direction in the inertial frame; $R_{WC}$ denotes the rotation of the camera frame C relative to the world frame; $R_{WB}$ denotes the rotation of the IMU frame B relative to the world frame; $J_{\Delta p}$ and $J_{\Delta v}$ are the first-order Jacobians of $\Delta p_{2,3}$ and $\Delta v_{2,3}$ of keyframes 2 and 3 with respect to the accelerometer bias; $\Delta t$ is the inter-frame time difference; $I$ is the identity matrix; $s$ and $\delta\theta_{xy}$ are the scale factor and the gravity-direction correction angles to be estimated; and $b_a$ is the accelerometer bias.
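Since the initialization reduces to stacked linear equations in the scale $s$ and gravity $g_w$, the solving step can be sketched as below; the per-triple coefficient blocks lambda_i (3x4) and beta_i (3-vectors) are assumed to be assembled from the pre-integration terms described above:

```python
import numpy as np

def solve_scale_gravity(lambdas, betas):
    """Stack per-triple 3x4 blocks lambda_i and 3-vectors beta_i, then solve
    A [s, g_w^T]^T = b for the scale s and gravity g_w by least squares."""
    A = np.vstack(lambdas)        # shape (3T, 4) for T keyframe triples
    b = np.concatenate(betas)     # shape (3T,)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    s, g_w = x[0], x[1:4]
    return s, g_w

# after a short initialization trajectory (e.g. the straight drive of the embodiment):
# s, g_w = solve_scale_gravity(lambdas, betas)
# g_w can then be refined to magnitude 9.8 m/s^2 while estimating b_a
```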
Step 4: pose being optimized using slip window sampling;
Through the reprojection error and the IMU pre-integration residual, constraint relationships between adjacent keyframes are established; a least-squares problem over the sensor measurements and the system states is constructed, and the optimal value of the current-frame pose is solved iteratively with an optimization method. The system state vector is $X = [x_0, x_1, \dots, x_n]$, where $x_0, x_1, \dots, x_n$ denote the states of all $n+1$ cameras in the sliding window, each containing the rotation, velocity, and displacement of the IMU frame relative to the world frame at time $i$ together with the accelerometer bias $b_a$ and the gyroscope bias $b_g$. The objective function is constructed as:

$$\min_{X} \left\{ \sum_{k \in B} \big\| r_B\big(\hat z_{b_k b_{k+1}}, X\big) \big\|^2_{P^{b_k}_{b_{k+1}}} + \sum_{(l,j) \in C} \big\| r_C\big(\hat z^{c_j}_{l}, X\big) \big\|^2_{P^{c_j}_{l}} \right\}$$

where $r_B$ is the IMU measurement residual and $r_C$ is the monocular-camera measurement residual; $B$ denotes the set of IMU measurements and $C$ the set of camera measurements; $P^{b_k}_{b_{k+1}}$ is the covariance of the pre-integration noise between consecutive keyframes of the sliding window in the IMU frame; and $P^{c_j}_{l}$ is the noise covariance of the $l$-th feature point in the $j$-th keyframe of the sliding window in the camera frame.
The IMU measurement residual between the $i$-th and $j$-th frames comprises displacement, velocity, rotation, and IMU-bias terms.
In the above formula, $\delta p$, $\delta v$, $\delta\theta$, and $\delta b$ denote the error terms contributed by translation, velocity, rotation, and IMU bias, respectively; $J^{g}_{\Delta R}$, $J^{g}_{\Delta v}$, and $J^{g}_{\Delta p}$ are the first-order Jacobians of the inter-frame rotation change $\Delta R$, velocity change $\Delta v$, and displacement change $\Delta p$ with respect to the gyroscope bias; and $J^{a}_{\Delta v}$ and $J^{a}_{\Delta p}$ are the first-order Jacobians of $\Delta v$ and $\Delta p$ with respect to the accelerometer bias.
The visual residual is the reprojection error: the $l$-th landmark $P$ is transformed from the coordinate frame of the $i$-th camera, which observed it for the first time, into pixel coordinates under the current $j$-th camera frame, yielding the visual error term.
In the above formula, $b_1$ and $b_2$ are any two orthogonal basis vectors of the tangent plane; $\hat{\bar P}^{c_j}_{l}$ is the estimated (predicted) coordinate of the $l$-th landmark in the normalized camera frame of the $j$-th camera; $\bar P^{c_j}_{l}$ is the coordinate observed in the normalized camera frame of the $j$-th camera; $u^{c_j}_{l}$ and $v^{c_j}_{l}$ denote the pixel coordinates of the $l$-th landmark in the $j$-th camera; $T_{bc}$ denotes the pose transform of the IMU frame relative to the camera frame; and $T_{wb}$ denotes the pose transform from the world frame to the IMU frame.
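Putting the two residual types together, one sliding-window iteration has the following shape; the residual callables and covariances stand in for the full pre-integration and tangent-plane reprojection terms, and every name here is an assumption rather than the patent's implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def whiten(r, P):
    """Mahalanobis-whiten residual r with covariance P: ||r||_P^2 = ||L^{-1} r||^2."""
    return np.linalg.solve(np.linalg.cholesky(P), r)

def window_residuals(X, imu_factors, vis_factors):
    """Stack all whitened residuals of the sliding window.
    imu_factors: tuples (i, j, r_B, P_B) between keyframes i and j;
    vis_factors: tuples (l, j, r_C, P_C) for landmark l seen in keyframe j."""
    res = []
    for i, j, r_B, P_B in imu_factors:
        res.append(whiten(r_B(X, i, j), P_B))   # pre-integration residual
    for l, j, r_C, P_C in vis_factors:
        res.append(whiten(r_C(X, l, j), P_C))   # reprojection residual
    return np.concatenate(res)

# X0: stacked window states [p, v, q, b_a, b_g] per keyframe; the optimum is
# found iteratively, e.g.:
# X_opt = least_squares(window_residuals, X0,
#                       args=(imu_factors, vis_factors), method="trf").x
```

Because the window keeps a fixed number of keyframes, the size of this stacked problem, and hence the per-iteration cost, stays constant as the trajectory grows, which is the fixed-complexity property claimed above.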
Preferably, in step 3, the purely visual data obtained in step 1 is combined with the IMU data obtained in step 2, and the IMU bias model, the absolute scale, and the gravity direction are estimated.
Preferably, in step 4, the pose is optimized with the sliding-window technique: through the reprojection error and the IMU pre-integration residual, constraint relationships between adjacent keyframes are established; a least-squares problem is constructed over the sensor measurements and the system states; and the optimal value of the current-frame pose is solved iteratively with an optimization method.
The advantages of the invention are as follows. On the one hand, during tracking the algorithm establishes constraint relationships between adjacent keyframes through the reprojection error and the IMU pre-integration; compared with extended-Kalman-filter algorithms that do not use the IMU prior, the algorithm is more robust and more accurate. On the other hand, the fusion process uses a back-end optimization algorithm based on the sliding-window technique to optimize the pose and the map in real time, and the computational complexity of the sliding-window odometry calculation remains fixed, which makes the whole system more stable and improves its robustness.
Description of the drawings
Fig. 1 is the flow chart of the method of the present invention.
Fig. 2 shows the experimental platform of the invention.
Fig. 3 shows the positioning results of the invention.
Specific embodiment
The technical solution of the present invention is further described below with reference to the accompanying drawings.
The robot localization method fusing visual features and IMU information is shown in Fig. 1. The platform mainly comprises a Lenovo ThinkPad computer 1, an EAIBOT mobile trolley 2, and a MYNT EYE visual-inertial camera 3. The MYNT EYE camera 3 is mounted on top of the EAIBOT trolley 2, the two being connected via USB; the ThinkPad computer 1 remotely controls the EAIBOT trolley 2 through a terminal.
With reference to the platform in Fig. 2, the specific implementation of the method of the present invention is as follows.
After the MYNT EYE visual-inertial camera is connected to the EAIBOT trolley via USB, the Lenovo ThinkPad is powered on, the camera start command is entered, and the algorithm is launched. In this example the camera sampling rate is 30 Hz, the keyframe rate is lower, and the IMU sampling rate is 100 Hz.
Step 1: First, ORB features are extracted from the image frames captured by the MYNT EYE camera using the ORB feature-extraction algorithm; from the pixel positions of matched feature points between two frames, the motion of the monocular camera is recovered using multi-view geometry, and the robot pose is estimated.
Assume a spatial point has coordinates $P_i = [X_i, Y_i, Z_i]^T$ and its projected pixel coordinates are $U_i = [u_i, v_i]^T$; the relationship between the pixel position and the spatial-point position is given by formula (1), where $s_i$ is the depth scale factor, $K$ is the camera intrinsic matrix, and $\xi$ is the Lie-algebra representation of the camera pose. Finally, a least-squares problem is constructed and the optimal camera pose is computed by minimizing the reprojection error, as in formula (2).
Step 2: Based on the data provided by the IMU of the MYNT EYE, a camera motion model is established using the pre-integration-based IMU kinematic model, and the robot pose is preliminarily estimated in real time from this motion model.
Define the world frame as $W$ and the IMU body frame as $B$. Through IMU pre-integration, the state at time $k+1$ is extrapolated from the state at time $k$; after pre-integration, the orientation, velocity, and position of the IMU are updated according to the update equations of step 2 above, where $a_B$ and $\omega_B$ are the acceleration and angular velocity output by the IMU and the remaining symbols are as defined there, with $b_a$ and $b_g$ the accelerometer and gyroscope biases of the IMU.
Step 3: The trolley is driven a certain distance in one direction via keyboard control so that the camera enters the initialization procedure. The purely visual data obtained in step 1 and the IMU data obtained in step 2 are combined, and the IMU bias model, the absolute scale, and the gravity direction are estimated.
First, the gyroscope bias is estimated: the rotation between keyframes obtained from visual information is compared with the rotation obtained from the IMU pre-integration model; taking the bias as the variable and minimizing the difference between the two yields the gyroscope bias, as in step 3 above, where $N$ is the number of keyframes, $R_{WC}$ is the camera rotation computed by the visual SLAM algorithm, $R_{CB}$ is the calibration matrix, $\Delta R_{i,i+1}$ is the gyroscope-integrated rotation between two consecutive keyframes, and $J_{\Delta R}$ is the first-order Jacobian of the pre-integrated rotation with respect to the gyroscope bias; $b_g$ is finally solved by Gauss-Newton.
Then, using the result of the gyroscope-bias estimation, the scale and the gravitational acceleration are estimated: for every three consecutive keyframes, a linear equation is obtained from the pairwise pre-integration measurements. Denoting the keyframes at times $i$, $i+1$, and $i+2$ as 1, 2, and 3, the terms of that equation are as given in step 3 above, with $p_{WC}$ the position of the camera center C in the world frame, $\Delta t$ the inter-frame time difference, $I$ the identity matrix, $\Delta v$ the inter-frame velocity change, and $s$ and $g_w$ the scale and gravitational acceleration being estimated.
Finally, using the scale and gravity estimates, the IMU accelerometer bias is estimated from the linear relationship among three consecutive keyframes, solving $\varphi(i)$, $\zeta(i)$, and $\psi(i)$ as in step 3 above; $s$ and $\delta\theta_{xy}$ are the scale factor and the gravity-direction correction angles to be estimated, and $b_a$ is the accelerometer bias.
Step 4: Through the reprojection error and the IMU pre-integration residual, constraint relationships between adjacent keyframes are established; a least-squares problem over the sensor measurements and the system states is constructed, and the optimal value of the current-frame pose is solved iteratively with an optimization method. The system state vector is $X = [x_0, x_1, \dots, x_n]$, where each $x_i$ contains the rotation, velocity, and displacement of the IMU frame relative to the world frame at time $i$ together with the accelerometer bias $b_a$ and the gyroscope bias $b_g$. The objective function is constructed as in step 4 above, with $r_B$ the IMU measurement residual, $r_C$ the monocular-camera measurement residual, $B$ and $C$ the sets of IMU and camera measurements, and the pre-integration and feature-noise covariances as weights.
The IMU measurement residual between the $i$-th and $j$-th frames comprises displacement, velocity, rotation, and IMU-bias terms, with the first-order Jacobians of $\Delta R$, $\Delta v$, and $\Delta p$ with respect to the gyroscope bias and of $\Delta v$ and $\Delta p$ with respect to the accelerometer bias, as in step 4 above.
The visual residual is the reprojection error: the $l$-th landmark $P$ is transformed from the coordinate frame of the $i$-th camera, which observed it for the first time, into pixel coordinates under the current $j$-th camera frame, yielding the visual error term defined in step 4 above.
Finally, the present invention outputs in real time the pose transform of the trolley at the current moment and a map of the surrounding environment, as shown in Fig. 3.
The content described in the embodiments of this specification merely enumerates forms of realization of the inventive concept; the protection scope of the present invention should not be construed as limited to the specific forms stated in the embodiments, but also covers equivalent technical means that those skilled in the art can conceive according to the inventive concept.

Claims (3)

1. A robot localization method fusing visual features and IMU information, comprising the following specific steps:
Step 1: performing pose estimation with a monocular camera;
first, ORB features are extracted from the image frames captured by the monocular camera using the ORB feature-extraction algorithm; from the pixel positions of matched feature points between two frames, the motion of the monocular camera is recovered using multi-view geometry, and the robot pose is estimated;
assume a spatial point has coordinates $P_i = [X_i, Y_i, Z_i]^T$ and its projected pixel coordinates are $U_i = [u_i, v_i]^T$; the relationship between the pixel position and the spatial-point position is:

$$s_i U_i = K \exp(\xi^\wedge) P_i \qquad (1)$$

in the above formula, $s_i$ is the depth scale factor, $K$ is the camera intrinsic matrix, $\xi$ is the Lie-algebra representation of the camera pose, and $\exp$ denotes the exponential map of the Lie algebra; finally, a least-squares problem is constructed and the optimal camera pose is computed by minimizing its error; the calculation formula is:

$$\xi^* = \arg\min_{\xi} \frac{1}{2} \sum_{i=1}^{n} \left\| U_i - \frac{1}{s_i} K \exp(\xi^\wedge) P_i \right\|_2^2 \qquad (2)$$

Step 2: performing pose estimation with the IMU;
based on the data provided by the IMU, a camera motion model is established using the pre-integration-based IMU kinematic model, and the robot pose is preliminarily estimated in real time from this motion model;
the IMU outputs an acceleration $a_B$ and an angular velocity $\omega_B$; define the world frame as $W$ and the IMU body frame as $B$; through IMU pre-integration, the state at time $k+1$ is extrapolated from the state at time $k$; after pre-integration, the orientation, velocity, and position of the IMU are updated respectively as:

$$R_{WB}^{k+1} = R_{WB}^{k}\,\exp\big((\omega_B - b_g)\,\Delta t\big)$$
$$v_{WB}^{k+1} = v_{WB}^{k} + g_w\,\Delta t + R_{WB}^{k}\,(a_B - b_a)\,\Delta t$$
$$p_{WB}^{k+1} = p_{WB}^{k} + v_{WB}^{k}\,\Delta t + \tfrac{1}{2}\,g_w\,\Delta t^2 + \tfrac{1}{2}\,R_{WB}^{k}\,(a_B - b_a)\,\Delta t^2$$

where $R_{WB}^{k+1}$ and $R_{WB}^{k}$ denote the rotation matrices of the IMU frame B relative to the world frame at times $k+1$ and $k$; $v_{WB}^{k+1}$ and $v_{WB}^{k}$ denote the corresponding velocities; $\exp$ denotes the exponential map of the Lie algebra; $\Delta t$ denotes the IMU sampling interval; $g_w$ denotes the current gravitational acceleration; $p_{WB}^{k+1}$ and $p_{WB}^{k}$ denote the corresponding displacements; and $b_a$ and $b_g$ denote the accelerometer and gyroscope biases of the IMU, respectively;
Step 3: joint visual-inertial initialization;
the purely visual data obtained in step 1 and the IMU data obtained in step 2 are combined, and the IMU bias model, the absolute scale, and the gravity direction are estimated;
first, the gyroscope bias is estimated: the rotation between keyframes obtained from visual information is compared with the rotation obtained from the IMU pre-integration model; taking the bias as the variable and minimizing the difference between the two yields the gyroscope bias:

$$b_g^{*} = \arg\min_{b_g} \sum_{i=1}^{N-1} \left\| \operatorname{Log}\!\Big( \big(\Delta R_{i,i+1}\,\exp(J_{\Delta R}\,b_g)\big)^{T} \big(R_{WB}^{i}\big)^{T} R_{WB}^{i+1} \Big) \right\|^2$$

where $N$ is the number of keyframes; $R_{WC}^{i}$ is the rotation of the camera frame relative to the world frame computed by the visual SLAM algorithm and $R_{CB}$ is the calibration matrix, so that $R_{WB}^{i} = R_{WC}^{i} R_{CB}$; $\Delta R_{i,i+1}$ is the rotation matrix obtained by integrating the gyroscope between two consecutive keyframes; $J_{\Delta R}$ is the first-order Jacobian of the pre-integrated rotation with respect to the gyroscope bias; finally, $b_g$ is solved by Gauss-Newton;
then, using the result of the gyroscope-bias estimation, the scale and the gravitational acceleration are estimated: for the pairwise relationships among every three consecutive keyframes, a linear equation is obtained from the pairwise pre-integration measurements;
denote the keyframes at times $i$, $i+1$, and $i+2$ as 1, 2, and 3; the terms of the above equation are then given accordingly, where $p_{WC}$ denotes the position of the camera center C in the world frame, $\Delta t$ is the inter-frame time difference, $I$ is the identity matrix, $R_{WC}$ denotes the rotation of the camera frame C relative to the world frame, $R_{WB}$ denotes the rotation of the IMU frame B relative to the world frame, $\Delta v$ denotes the inter-frame velocity change, and $s$ and $g_w$ are the scale and gravitational-acceleration estimates being sought;
finally, using the scale and gravity estimates, the IMU accelerometer bias is estimated by considering the linear relationship among three consecutive keyframes;
denote the keyframes at times $i$, $i+1$, and $i+2$ as 1, 2, and 3; the quantities $\varphi(i)$, $\zeta(i)$, and $\psi(i)$ are then solved accordingly, where $[\,\cdot\,]_{(:,1:2)}$ denotes taking the first two columns of a matrix; $R_{WI}$ denotes the orientation of the inertial frame in the world frame; $\hat g_I$ denotes the gravity direction in the inertial frame; $J_{\Delta p}$ and $J_{\Delta v}$ are the first-order Jacobians of $\Delta p_{2,3}$ and $\Delta v_{2,3}$ of keyframes 2 and 3 with respect to the accelerometer bias; $\Delta t$ is the inter-frame time difference; $I$ is the identity matrix; $s$ and $\delta\theta_{xy}$ are the scale factor and the gravity-direction correction angles to be estimated; and $b_a$ is the accelerometer bias;
Step 4: optimizing the pose with the sliding-window technique;
through the reprojection error and the IMU pre-integration residual, constraint relationships between adjacent keyframes are established; a least-squares problem over the sensor measurements and the system states is constructed, and the optimal value of the current-frame pose is solved iteratively with an optimization method; the system state vector is $X = [x_0, x_1, \dots, x_n]$, where each $x_i$ contains the rotation, velocity, and displacement of the IMU frame relative to the world frame at time $i$ together with the accelerometer bias $b_a$ and the gyroscope bias $b_g$; the objective function is constructed as:

$$\min_{X} \left\{ \sum_{k \in B} \big\| r_B\big(\hat z_{b_k b_{k+1}}, X\big) \big\|^2_{P^{b_k}_{b_{k+1}}} + \sum_{(l,j) \in C} \big\| r_C\big(\hat z^{c_j}_{l}, X\big) \big\|^2_{P^{c_j}_{l}} \right\}$$

where $r_B$ is the IMU measurement residual, $r_C$ is the monocular-camera measurement residual, $B$ denotes the set of IMU measurements, $C$ denotes the set of camera measurements, $P^{b_k}_{b_{k+1}}$ is the covariance of the pre-integration noise between consecutive keyframes of the sliding window in the IMU frame, and $P^{c_j}_{l}$ is the noise covariance of the $l$-th feature point in the $j$-th keyframe of the sliding window in the camera frame;
the IMU measurement residual between the $i$-th and $j$-th frames comprises displacement, velocity, rotation, and IMU-bias terms, where $\delta p$, $\delta v$, $\delta\theta$, and $\delta b$ denote the error terms contributed by translation, velocity, rotation, and IMU bias, respectively; $J^{g}_{\Delta R}$, $J^{g}_{\Delta v}$, and $J^{g}_{\Delta p}$ are the first-order Jacobians of the inter-frame rotation change $\Delta R$, velocity change $\Delta v$, and displacement change $\Delta p$ with respect to the gyroscope bias; and $J^{a}_{\Delta v}$ and $J^{a}_{\Delta p}$ are the first-order Jacobians of $\Delta v$ and $\Delta p$ with respect to the accelerometer bias;
the visual residual is the reprojection error: the $l$-th landmark $P$ is transformed from the coordinate frame of the $i$-th camera, which observed it for the first time, into pixel coordinates under the current $j$-th camera frame; in the visual error term, $b_1$ and $b_2$ are any two orthogonal basis vectors of the tangent plane; $\hat{\bar P}^{c_j}_{l}$ is the estimated coordinate of the $l$-th landmark in the normalized camera frame of the $j$-th camera; $\bar P^{c_j}_{l}$ is the coordinate observed in the normalized camera frame of the $j$-th camera; $u^{c_j}_{l}$ and $v^{c_j}_{l}$ denote the pixel coordinates of the $l$-th landmark in the $j$-th camera; $T_{bc}$ denotes the pose transform of the IMU frame relative to the camera frame; and $T_{wb}$ denotes the pose transform from the world frame to the IMU frame.
2. The robot localization method fusing visual features and IMU information according to claim 1, characterized in that: in step 3, the purely visual data obtained in step 1 is combined with the IMU data obtained in step 2, and the IMU bias model, the absolute scale, and the gravity direction are estimated.
3. The robot localization method fusing visual features and IMU information according to claim 1, characterized in that: in step 4, the pose is optimized with the sliding-window technique: through the reprojection error and the IMU pre-integration residual, constraint relationships between adjacent keyframes are established; a least-squares problem is constructed over the sensor measurements and the system states; and the optimal value of the current-frame pose is solved iteratively with an optimization method.
CN201910448746.1A 2019-05-27 2019-05-27 Robot localization method fusing visual features and IMU information Pending CN110345944A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910448746.1A CN110345944A (en) 2019-05-27 2019-05-27 Robot localization method fusing visual features and IMU information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910448746.1A CN110345944A (en) 2019-05-27 2019-05-27 Robot localization method fusing visual features and IMU information

Publications (1)

Publication Number Publication Date
CN110345944A true CN110345944A (en) 2019-10-18

Family

ID=68174118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910448746.1A Pending CN110345944A (en) 2019-05-27 2019-05-27 Robot localization method fusing visual features and IMU information

Country Status (1)

Country Link
CN (1) CN110345944A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107193279A (en) * 2017-05-09 2017-09-22 复旦大学 Robot localization and map structuring system based on monocular vision and IMU information
CN107869989A (en) * 2017-11-06 2018-04-03 东北大学 A kind of localization method and system of the fusion of view-based access control model inertial navigation information
CN109242887A (en) * 2018-07-27 2019-01-18 浙江工业大学 A kind of real-time body's upper limks movements method for catching based on multiple-camera and IMU
CN109631894A (en) * 2018-12-11 2019-04-16 智灵飞(北京)科技有限公司 A kind of monocular vision inertia close coupling method based on sliding window

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
禹鑫燚 et al.: "Research on robot pose estimation optimization algorithms in the SLAM process", High Technology Letters *

Cited By (88)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112710308A (en) * 2019-10-25 2021-04-27 阿里巴巴集团控股有限公司 Positioning method, device and system of robot
CN112710308B (en) * 2019-10-25 2024-05-31 阿里巴巴集团控股有限公司 Positioning method, device and system of robot
CN111308415A (en) * 2019-11-01 2020-06-19 华为技术有限公司 Online pose estimation method and device based on time delay
CN111047620A (en) * 2019-11-15 2020-04-21 广东工业大学 Unmanned aerial vehicle visual odometer method based on depth point-line characteristics
CN111156998A (en) * 2019-12-26 2020-05-15 华南理工大学 Mobile robot positioning method based on RGB-D camera and IMU information fusion
CN111156998B (en) * 2019-12-26 2022-04-15 华南理工大学 Mobile robot positioning method based on RGB-D camera and IMU information fusion
CN111257853A (en) * 2020-01-10 2020-06-09 清华大学 Automatic driving system laser radar online calibration method based on IMU pre-integration
CN111207774A (en) * 2020-01-17 2020-05-29 山东大学 Method and system for laser-IMU external reference calibration
CN111156997A (en) * 2020-03-02 2020-05-15 南京航空航天大学 Vision/inertia combined navigation method based on camera internal parameter online calibration
CN111156997B (en) * 2020-03-02 2021-11-30 南京航空航天大学 Vision/inertia combined navigation method based on camera internal parameter online calibration
CN111462231A (en) * 2020-03-11 2020-07-28 华南理工大学 Positioning method based on RGBD sensor and IMU sensor
GB2597632A (en) * 2020-03-11 2022-02-02 Univ South China Tech RGBD sensor and IMU sensor-based positioning method
WO2021180128A1 (en) * 2020-03-11 2021-09-16 华南理工大学 Rgbd sensor and imu sensor-based positioning method
CN111462231B (en) * 2020-03-11 2023-03-31 华南理工大学 Positioning method based on RGBD sensor and IMU sensor
CN111553933B (en) * 2020-04-17 2022-11-18 东南大学 Optimization-based visual inertia combined measurement method applied to real estate measurement
CN111553933A (en) * 2020-04-17 2020-08-18 东南大学 Optimization-based visual inertia combined measurement method applied to real estate measurement
CN113608523B (en) * 2020-04-20 2023-03-14 中国科学院沈阳自动化研究所 Monocular vision and inertia fusion based vehicle scene dynamic analysis method
CN113608523A (en) * 2020-04-20 2021-11-05 中国科学院沈阳自动化研究所 Monocular vision and inertia fusion based vehicle scene dynamic analysis method
CN111583387A (en) * 2020-04-21 2020-08-25 北京鼎路科技有限公司 Method and system for three-dimensional reconstruction of outdoor scene of unmanned aerial vehicle
CN111538029A (en) * 2020-04-24 2020-08-14 江苏盛海智能科技有限公司 Vision and radar fusion measuring method and terminal
CN111595333A (en) * 2020-04-26 2020-08-28 武汉理工大学 Modularized unmanned vehicle positioning method and system based on visual inertial laser data fusion
CN111693047A (en) * 2020-05-08 2020-09-22 中国航空工业集团公司西安航空计算技术研究所 Visual navigation method for micro unmanned aerial vehicle in high-dynamic scene
CN111340851A (en) * 2020-05-19 2020-06-26 北京数字绿土科技有限公司 SLAM method based on binocular vision and IMU fusion
CN111932611A (en) * 2020-05-26 2020-11-13 北京百度网讯科技有限公司 Object position acquisition method and device
CN111932611B (en) * 2020-05-26 2024-05-10 阿波罗智联(北京)科技有限公司 Object position acquisition method and device
CN111578937B (en) * 2020-05-29 2024-01-09 上海新天策数字科技有限公司 Visual inertial odometer system capable of simultaneously optimizing external parameters
CN111578937A (en) * 2020-05-29 2020-08-25 天津工业大学 Visual inertial odometer system capable of optimizing external parameters simultaneously
CN111795686B (en) * 2020-06-08 2024-02-02 南京大学 Mobile robot positioning and mapping method
CN111795686A (en) * 2020-06-08 2020-10-20 南京大学 Method for positioning and mapping mobile robot
CN111805535A (en) * 2020-06-11 2020-10-23 浙江大华技术股份有限公司 Positioning navigation method, device and computer storage medium
CN111735445A (en) * 2020-06-23 2020-10-02 煤炭科学研究总院 Monocular vision and IMU (inertial measurement Unit) integrated coal mine tunnel inspection robot system and navigation method
CN111780754A (en) * 2020-06-23 2020-10-16 南京航空航天大学 Visual inertial odometer pose estimation method based on sparse direct method
CN111739063B (en) * 2020-06-23 2023-08-18 郑州大学 Positioning method of power inspection robot based on multi-sensor fusion
CN111735445B (en) * 2020-06-23 2022-02-11 煤炭科学研究总院 Monocular vision and IMU (inertial measurement Unit) integrated coal mine tunnel inspection robot system and navigation method
CN111739063A (en) * 2020-06-23 2020-10-02 郑州大学 Electric power inspection robot positioning method based on multi-sensor fusion
CN111932616B (en) * 2020-07-13 2022-10-14 清华大学 Binocular vision inertial odometer method accelerated by utilizing parallel computation
CN111932616A (en) * 2020-07-13 2020-11-13 清华大学 Binocular vision inertial odometer method for accelerating by utilizing parallel computing
CN111623773A (en) * 2020-07-17 2020-09-04 国汽(北京)智能网联汽车研究院有限公司 Target positioning method and device based on fisheye vision and inertial measurement
CN112102403A (en) * 2020-08-11 2020-12-18 国网安徽省电力有限公司淮南供电公司 High-precision positioning method and system for autonomous inspection unmanned aerial vehicle in power transmission tower scene
CN112102403B (en) * 2020-08-11 2022-11-25 国网安徽省电力有限公司淮南供电公司 High-precision positioning method and system for autonomous inspection unmanned aerial vehicle in power transmission tower scene
CN112304307A (en) * 2020-09-15 2021-02-02 浙江大华技术股份有限公司 Positioning method and device based on multi-sensor fusion and storage medium
CN111928847A (en) * 2020-09-22 2020-11-13 蘑菇车联信息科技有限公司 Inertial measurement unit pose data optimization method and device and electronic equipment
CN112230242B (en) * 2020-09-30 2023-04-25 深兰人工智能(深圳)有限公司 Pose estimation system and method
CN112129282B (en) * 2020-09-30 2021-06-18 杭州海康机器人技术有限公司 Method and device for converting positioning results among different navigation modes
CN112230242A (en) * 2020-09-30 2021-01-15 深兰人工智能(深圳)有限公司 Pose estimation system and method
CN112129282A (en) * 2020-09-30 2020-12-25 杭州海康机器人技术有限公司 Method and device for converting positioning results among different navigation modes
CN112378396A (en) * 2020-10-29 2021-02-19 江苏集萃未来城市应用技术研究所有限公司 Hybrid high-precision indoor positioning method based on robust LM visual inertial odometer and UWB
CN112652001A (en) * 2020-11-13 2021-04-13 山东交通学院 Underwater robot multi-sensor fusion positioning system based on extended Kalman filtering
CN112097768B (en) * 2020-11-17 2021-03-02 深圳市优必选科技股份有限公司 Robot posture determining method and device, robot and storage medium
CN112097768A (en) * 2020-11-17 2020-12-18 深圳市优必选科技股份有限公司 Robot posture determining method and device, robot and storage medium
CN112484725A (en) * 2020-11-23 2021-03-12 吉林大学 Intelligent automobile high-precision positioning and space-time situation safety method based on multi-sensor fusion
CN112734765B (en) * 2020-12-03 2023-08-22 华南理工大学 Mobile robot positioning method, system and medium based on fusion of instance segmentation and multiple sensors
CN112734765A (en) * 2020-12-03 2021-04-30 华南理工大学 Mobile robot positioning method, system and medium based on instance segmentation and multi-sensor fusion
CN112648994B (en) * 2020-12-14 2023-12-05 首都信息发展股份有限公司 Depth vision odometer and IMU-based camera pose estimation method and device
CN112648994A (en) * 2020-12-14 2021-04-13 首都信息发展股份有限公司 Camera pose estimation method and device based on depth vision odometer and IMU
WO2022134060A1 (en) * 2020-12-25 2022-06-30 Intel Corporation Camera registration via robot
CN112747750A (en) * 2020-12-30 2021-05-04 电子科技大学 Positioning method based on fusion of monocular vision odometer and IMU (Inertial Measurement Unit)
CN112923919A (en) * 2021-01-21 2021-06-08 湖南格纳微信息科技有限公司 Pedestrian positioning method and system based on graph optimization
CN113052855B (en) * 2021-02-26 2021-11-02 苏州迈思捷智能科技有限公司 Semantic SLAM method based on visual-IMU-wheel speed meter fusion
CN113052855A (en) * 2021-02-26 2021-06-29 苏州迈思捷智能科技有限公司 Semantic SLAM method based on visual-IMU-wheel speed meter fusion
CN113091738A (en) * 2021-04-09 2021-07-09 安徽工程大学 Mobile robot map construction method based on visual inertial navigation fusion and related equipment
CN113240597A (en) * 2021-05-08 2021-08-10 西北工业大学 Three-dimensional software image stabilization method based on visual inertial information fusion
CN113240597B (en) * 2021-05-08 2024-04-26 西北工业大学 Three-dimensional software image stabilizing method based on visual inertial information fusion
CN113077515B (en) * 2021-06-07 2021-09-21 之江实验室 Tightly coupled initialization method for underwater visual-inertial-pressure positioning
CN113077515A (en) * 2021-06-07 2021-07-06 之江实验室 Tightly coupled initialization method for underwater visual-inertial-pressure positioning
CN113375665B (en) * 2021-06-18 2022-12-02 西安电子科技大学 Unmanned aerial vehicle pose estimation method based on multi-sensor elastic coupling
CN113375665A (en) * 2021-06-18 2021-09-10 西安电子科技大学 Unmanned aerial vehicle pose estimation method based on multi-sensor elastic coupling
CN113436261A (en) * 2021-06-24 2021-09-24 湖南大学 Monocular visual-inertial positioning method for autonomous driving in closed parks
CN113436261B (en) * 2021-06-24 2022-04-29 湖南大学 Monocular visual-inertial positioning method for autonomous driving in closed parks
CN113450411B (en) * 2021-06-30 2023-02-28 电子科技大学 Real-time self-pose calculation method based on variance component estimation theory
CN113450411A (en) * 2021-06-30 2021-09-28 电子科技大学 Real-time self-pose calculation method based on variance component estimation theory
CN113483755B (en) * 2021-07-09 2023-11-07 北京易航远智科技有限公司 Multi-sensor combination positioning method and system based on non-global consistent map
CN113483755A (en) * 2021-07-09 2021-10-08 北京易航远智科技有限公司 Multi-sensor combined positioning method and system based on non-global consistent map
CN113566833A (en) * 2021-07-28 2021-10-29 上海工程技术大学 Multi-sensor fusion vehicle positioning method and system
CN113587934B (en) * 2021-07-30 2024-03-19 深圳市普渡科技有限公司 Robot, indoor positioning method and device and readable storage medium
CN113587934A (en) * 2021-07-30 2021-11-02 深圳市普渡科技有限公司 Robot, indoor positioning method and device and readable storage medium
WO2023051019A1 (en) * 2021-09-30 2023-04-06 达闼科技(北京)有限公司 Visual-inertial odometry method and apparatus, electronic device, storage medium and computer program
CN114001733A (en) * 2021-10-28 2022-02-01 浙江大学 Map-based consistent and efficient visual inertial positioning algorithm
CN114001733B (en) * 2021-10-28 2024-03-15 浙江大学 Map-based consistent and efficient visual inertial positioning algorithm
CN114543786B (en) * 2022-03-31 2024-02-02 华中科技大学 Wall climbing robot positioning method based on visual inertial odometer
CN114543786A (en) * 2022-03-31 2022-05-27 华中科技大学 Wall-climbing robot positioning method based on visual inertial odometer
CN114964276A (en) * 2022-05-26 2022-08-30 哈尔滨工业大学 Dynamic vision SLAM method fusing inertial navigation
CN114693754A (en) * 2022-05-30 2022-07-01 湖南大学 Unmanned aerial vehicle autonomous positioning method and system based on monocular vision inertial navigation fusion
CN115371665A (en) * 2022-09-13 2022-11-22 哈尔滨工业大学 Mobile robot positioning method based on depth camera and inertia fusion
CN115930971B (en) * 2023-02-01 2023-09-19 七腾机器人有限公司 Data fusion processing method for robot positioning and map building
CN115930971A (en) * 2023-02-01 2023-04-07 七腾机器人有限公司 Data fusion processing method for robot positioning and mapping
CN116026316B (en) * 2023-03-30 2023-08-29 山东科技大学 Unmanned ship dead reckoning method coupling visual inertial odometer and GNSS
CN116026316A (en) * 2023-03-30 2023-04-28 山东科技大学 Unmanned ship dead reckoning method coupling visual inertial odometer and GNSS

Similar Documents

Publication | Publication Date | Title
CN110345944A (en) Merge the robot localization method of visual signature and IMU information
CN110125928B (en) Binocular inertial navigation SLAM system performing feature matching between consecutive frames
CN109376785B (en) Navigation method based on iterative extended Kalman filtering fusing inertial and monocular vision data
CN106017463B (en) Aerial vehicle positioning method based on orientation sensing devices
CN108489482B (en) Implementation method and system of a visual-inertial odometer
CN107193279A (en) Robot localization and map construction system based on monocular vision and IMU information
CN109307508B (en) Panoramic inertial navigation SLAM method based on multiple key frames
CN109029433B (en) Method for calibrating extrinsic parameters and time offset based on visual-inertial fusion SLAM on a mobile platform
CN109166149A (en) Positioning and three-dimensional wireframe reconstruction method and system fusing a binocular camera and IMU
CN108665540A (en) Robot localization and map construction system based on binocular visual features and IMU information
CN111462231B (en) Positioning method based on RGBD sensor and IMU sensor
CN110030994B (en) Monocular-based robust visual-inertial tightly coupled positioning method
CN110095116A (en) Localization method combining LIFT-based visual positioning and inertial navigation
CN110702107A (en) Monocular visual-inertial combined positioning and navigation method
CN110044354A (en) Binocular vision indoor positioning and mapping method and device
CN109631887A (en) High-precision inertial navigation positioning method based on binocular vision, accelerometer and gyroscope
CN110553648A (en) Method and system for indoor navigation
CN110675453B (en) Self-positioning method for moving target in known scene
CN112556719B (en) Visual inertial odometer implementation method based on CNN-EKF
CN110533719B (en) Augmented reality positioning method and device based on environment visual feature point identification technology
CN109579825A (en) Robot positioning system and method based on binocular vision and convolutional neural networks
CN103994765A (en) Positioning method of inertial sensor
CN114529576A (en) RGBD and IMU hybrid tracking registration method based on sliding window optimization
CN114485640A (en) Monocular visual-inertial simultaneous localization and mapping method and system based on point and line features
Ling et al. RGB-D inertial odometry for indoor robot via keyframe-based nonlinear optimization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20191018)