CN108648215A - SLAM motion blur posture tracking algorithms based on IMU - Google Patents

SLAM motion blur posture tracking algorithms based on IMU

Info

Publication number
CN108648215A
Authority
CN
China
Prior art keywords
pose
imu
camera
motion
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810651885.XA
Other languages
Chinese (zh)
Other versions
CN108648215B (en)
Inventor
霍智勇
陈钊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Post and Telecommunication University
Original Assignee
Nanjing Post and Telecommunication University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University filed Critical Nanjing Post and Telecommunication University
Priority to CN201810651885.XA priority Critical patent/CN108648215B/en
Publication of CN108648215A publication Critical patent/CN108648215A/en
Application granted granted Critical
Publication of CN108648215B publication Critical patent/CN108648215B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/277Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20076Probabilistic image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses an IMU-based SLAM motion blur pose tracking algorithm, comprising the following steps: S1, performing ORB feature extraction on the input image sequence, judging the image type according to the number of extracted feature points, and selecting step S2 or S3 according to the judgment result; S2, if the image is a normal image, estimating the initial pose of the camera using the uniform motion model and then performing bundle adjustment of the motion parameters; S3, if the image is a motion-blurred image, obtaining an estimated pose using the IMU equations of motion, obtaining an optimized pose using an extended Kalman filter, and fusing the estimated pose with the optimized pose to obtain the final pose of the camera. Aiming at the problem that SLAM cannot localize the camera and loses tracking over motion-blurred segments of an image sequence, the invention combines the inertial measurement unit kinematic equations with an extended Kalman filter to compute and optimize the camera pose, so that SLAM obtains continuous and reliable camera pose localization and tracking.

Description

SLAM motion blur pose tracking algorithm based on IMU
Technical Field
The invention relates to a tracking algorithm, in particular to an IMU-based SLAM motion blur pose tracking algorithm, and belongs to the fields of computer vision and robotics.
Background
Simultaneous localization and mapping (SLAM) has been a research hotspot in computer vision and robotics in recent years. SLAM technology enables maps of unknown environments to be constructed and updated while the position within them is tracked in real time. Early SLAM methods solved the problem by filtering: Davison et al. proposed MonoSLAM, a real-time single-camera method that uses an extended Kalman filter as the back end to track very sparse feature points at the front end. Eade et al. proposed a scalable monocular SLAM method that uses particle filtering and top-down search to map a large number of landmark points in real time. Most of these methods process image frames with a filter to jointly estimate the positions of map points and the camera pose; because consecutive image frames contain only a small amount of new information, such methods are computationally expensive, accumulate linearization errors, and produce results of low accuracy.
Strasdat et al. later demonstrated that keyframe-based SLAM methods are more accurate than filtering methods at the same computational cost. Klein et al. proposed PTAM, based on the keyframe technique, which parallelized the tracking and mapping processes and was the first solution to use nonlinear optimization rather than a traditional filter as the back end. Mur-Artal et al. proposed ORB-SLAM2, which supports monocular, binocular, and RGB-D modes; the whole system computes around ORB features, including a visual odometry front end and an ORB dictionary for loop detection; its ORB-based loop closure detection is a highlight; SLAM is innovatively accomplished with three threads, and numerous optimizations are made around the feature points.
Following this trend, related techniques that track the camera pose using keyframe-based ORB-SLAM2 are also emerging. However, some problems remain to be researched, such as how to better perform the camera pose tracking task by combining visual features with IMU information.
In summary, how to provide an IMU-based SLAM motion blur pose tracking algorithm that solves the problem of SLAM losing camera localization and tracking over motion-blurred segments of an image sequence is one of the problems that practitioners in the field urgently need to solve.
Disclosure of Invention
In view of the above defects in the prior art, the present invention aims to provide an IMU-based SLAM motion blur pose tracking algorithm.
Specifically, an IMU-based SLAM motion blur pose tracking algorithm comprises the following steps:
S1, performing ORB feature extraction on the input image sequence, judging whether the input image is a normal image or a motion-blurred image according to the number of extracted feature points, and selecting step S2 or S3 according to the judgment result;
S2, if the judgment result is a normal image, estimating the initial pose of the camera using the uniform motion model, and then performing bundle adjustment of the motion parameters;
S3, if the judgment result is a motion-blurred image, obtaining an estimated pose using the IMU equations of motion, obtaining an optimized pose using an extended Kalman filter, and finally combining the estimated pose and the optimized pose to obtain the final pose of the camera.
Preferably, the ORB feature extraction on the input image sequence in S1 includes the following steps:
S11, FAST corner detection: a pixel p is selected in the image and its brightness is denoted I_p; a threshold T is set; the 16 pixels on a circle of radius 3 centered on p are selected; if N consecutive points on the selected circle have brightness greater than I_p + T or less than I_p − T, the pixel p is regarded as a feature point; the above steps are repeated, performing the same operation on each pixel;
S12, calculating the BRIEF descriptor: rotation invariance is added to the BRIEF descriptor by using the main direction of the feature point, which is calculated according to the formula

θ = arctan(m_01 / m_10),

where m_01 and m_10 are moments of the image block.
The BRIEF descriptor is a vector of 0s and 1s, where each bit encodes the brightness relationship between two pixels p and q near the key point: if p is brighter than q, the bit is 1; otherwise it is 0.
Preferably, estimating the initial pose of the camera using the uniform motion model and then performing the bundle adjustment of the motion parameters in S2 includes the following steps:
S21, updating the velocity matrix in the uniform motion model, and obtaining the initial pose of the current frame from the uniform motion model as T_c = V T_l, where V is the velocity matrix and T_l is the camera pose of the previous frame;
S22, matching the 3-D points in the world coordinate system with the key points in the image, and performing bundle adjustment of the motion parameters to optimize the camera pose: the orientation R ∈ SO(3) and the position t ∈ R^3 are found by minimizing the reprojection error between the matched 3-D world points X_i ∈ R^3 and the key points x_i,

{R, t} = argmin_{R,t} Σ_{i∈X} ρ( ||x_i − π_s(R X_i + t)||²_Σ ),

where X is the set of all matching point pairs, ρ is the robust Huber cost function, and Σ is the covariance matrix associated with the key point scale. The projection function π_s is defined as

π_s([X, Y, Z]^T) = [ f_x X/Z + c_x, f_y Y/Z + c_y, f_x (X − b)/Z + c_x ]^T,

where (f_x, f_y) is the focal length, (c_x, c_y) is the origin offset, and b is the baseline.
Preferably, the estimated pose comprises estimated position parameters and estimated attitude parameters, the optimized pose comprises optimized position parameters and optimized attitude parameters, and the final pose comprises final position parameters and final attitude parameters.
Preferably, obtaining the estimated pose using the IMU equations of motion and then obtaining the optimized pose using the extended Kalman filter in S3 includes the following steps:
S31, constructing a rotation matrix between the inertial world coordinate system and the IMU coordinate system, differentiating the matrix to obtain the differential equation of the rotation matrix, solving the differential equation to obtain the matrix variable, and finally converting the matrix variable into the attitude quaternion of the IMU;
S32, obtaining the estimated pose of the camera from the position and attitude of the IMU through the spatial relation between the coordinate systems;
S33, after the estimated pose of the camera is obtained, optimizing the estimated pose with an extended Kalman filter to obtain the optimized pose.
Preferably, the S31 specifically includes the following steps:
The rotation matrix C is composed of the direction cosines between the two bases,

C = [ [i·i′, i·j′, i·k′], [j·i′, j·j′, j·k′], [k·i′, k·j′, k·k′] ],

where i, j, k are the unit vectors of the inertial world coordinate system and i′, j′, k′ are the unit vectors of the IMU coordinate system, so that C rotates vectors from the IMU coordinate system into the world coordinate system. Differentiating the rotation matrix C gives the differential equation

dC/dt = C [ω]_× ,

where ω is the angular velocity measured by the gyroscope and [ω]_× is its skew-symmetric matrix. Solving the differential equation yields the matrix variable, which is converted into the attitude quaternion of the IMU. The IMU position is then estimated by removing the gravity component from the acceleration and integrating twice to obtain the displacement value, following

dv/dt = C a − g,   dp/dt = v,

where p is the position of the IMU in the inertial world coordinate system, v is its velocity, a is the acceleration measured by the IMU, and g is the gravity vector in the world coordinate system.
Preferably, the S32 specifically includes the following steps:
The estimated camera pose is computed from the position and attitude of the IMU through the spatial relation between the coordinate systems: the estimated position parameter is obtained from the IMU position via the rotation from the visual coordinate system to the inertial world coordinate system, subject to the camera position measurement noise n_p, and the estimated attitude parameter is obtained by composing the IMU attitude with the rotation from the IMU coordinate system to the camera coordinate system.
Preferably, the S33 specifically includes the following steps:
The measurement matrix H is composed of the linearized error measurement matrices. The estimate is updated according to the measurement matrix H and the Kalman filtering process, and the optimized pose is obtained from the correction value.
Preferably, updating the estimated value according to the measurement matrix H and the Kalman filtering process specifically includes the following steps:
S331, calculating the measurement residual between the camera pose measurement and the pose predicted from the current estimate;
S332, calculating the innovation covariance S according to the formula S = H P H^T + R, where P is the state covariance matrix and R is the covariance matrix of the translational and rotational measurement noise of the camera;
S333, calculating the Kalman gain K according to the formula K = P H^T S^(-1);
S334, calculating the correction value by applying the Kalman gain K to the residual and using it to update the estimate.
preferably, the S3 further includes determining the visual scale factor using the binocular image.
Compared with the prior art, the invention has the advantages that:
aiming at the problem that the SLAM cannot perform camera positioning and tracking loss on a motion blur section in an image sequence, the method combines and utilizes an inertia measurement unit kinematic equation and an extended Kalman filter to calculate and optimize the camera pose, so that the SLAM can obtain continuous and reliable camera pose positioning and tracking. The method not only effectively improves the accuracy of the calculation result, but also optimizes the calculation process, reduces the calculation cost, improves the algorithm efficiency, and lays a foundation for subsequent popularization and use. In addition, the invention also provides reference for other related problems in the same field, can be expanded and extended on the basis of the reference, is applied to technical schemes of other algorithms in the field, and has very wide application prospect.
In conclusion, the invention provides an IMU-based SLAM motion blur pose tracking algorithm, a method of high value for practical use and popularization.
The following detailed description of the embodiments of the present invention is provided in conjunction with the accompanying drawings to facilitate understanding of the technical solutions of the present invention.
Drawings
Fig. 1 is a schematic diagram of the principle of the present invention.
Detailed Description
As shown in FIG. 1, the invention discloses an IMU-based SLAM motion blur pose tracking algorithm, which is improved upon the ORB-SLAM2 framework and combines IMU measurements to track the camera pose.
Specifically, the method comprises the following steps:
S1, performing ORB feature extraction on the input image sequence, judging whether the input image is a normal image or a motion-blurred image according to the number of extracted feature points, and selecting step S2 or S3 according to the judgment result;
S2, if the judgment result is a normal image, estimating the initial pose of the camera using the uniform motion model, and then performing bundle adjustment of the motion parameters;
S3, if the judgment result is a motion-blurred image, obtaining an estimated pose using the IMU equations of motion, obtaining an optimized pose using an extended Kalman filter, and finally combining the estimated pose and the optimized pose to obtain the final pose of the camera.
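By way of illustration, the branching logic of steps S1 to S3 can be sketched in Python as follows; the threshold MIN_FEATURES and the helper callables track_normal and track_blurred are hypothetical names introduced here for exposition only, not part of the claimed method.

```python
import cv2

MIN_FEATURES = 100  # assumed threshold separating normal frames from motion-blurred ones

orb = cv2.ORB_create(nfeatures=1000)

def track_frame(frame, track_normal, track_blurred):
    """Dispatch one frame to the visual branch (S2) or the IMU/EKF branch (S3)."""
    keypoints = orb.detect(frame, None)        # S1: ORB feature extraction
    if len(keypoints) >= MIN_FEATURES:         # enough features: treat as a normal image
        return track_normal(frame, keypoints)  # S2: uniform motion model + bundle adjustment
    return track_blurred(frame)                # S3: IMU propagation + extended Kalman filter
```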
The ORB feature extraction of the input image sequence in S1 includes the following steps:
S11, FAST corner detection: a pixel p is selected in the image and its brightness is denoted I_p; a threshold T is set; the 16 pixels on a circle of radius 3 centered on p are selected; if N consecutive points on the selected circle have brightness greater than I_p + T or less than I_p − T, the pixel p is regarded as a feature point; the above steps are repeated, performing the same operation on each pixel;
S12, calculating the BRIEF descriptor: rotation invariance is added to the BRIEF descriptor by using the main direction of the feature point, which is calculated according to the formula

θ = arctan(m_01 / m_10),

where m_01 and m_10 are moments of the image block.
The BRIEF descriptor is a vector of 0s and 1s, where each bit encodes the brightness relationship between two pixels p and q near the key point: if p is brighter than q, the bit is 1; otherwise it is 0.
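As a minimal sketch of the main-direction computation in S12, assuming a square grayscale patch centered on the feature point and the standard image-block moments m_pq = Σ x^p y^q I(x, y):

```python
import numpy as np

def patch_orientation(patch: np.ndarray) -> float:
    """Main direction of a feature point from the image-block moments m01 and m10.

    `patch` is a grayscale block centered on the key point; coordinates are
    taken relative to the patch center so the angle is measured about it.
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs -= (w - 1) / 2.0          # x offsets from the patch center
    ys -= (h - 1) / 2.0          # y offsets from the patch center
    m10 = np.sum(xs * patch)     # first-order moment in x
    m01 = np.sum(ys * patch)     # first-order moment in y
    return float(np.arctan2(m01, m10))  # theta = arctan(m01 / m10)
```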
In S2, estimating the initial pose of the camera using the uniform motion model and then performing the bundle adjustment of the motion parameters includes the following steps:
S21, updating the velocity matrix in the uniform motion model, and obtaining the initial pose of the current frame from the uniform motion model as T_c = V T_l, where V is the velocity matrix and T_l is the camera pose of the previous frame;
S22, matching the 3-D points in the world coordinate system with the key points in the image, and performing bundle adjustment of the motion parameters to optimize the camera pose: the orientation R ∈ SO(3) and the position t ∈ R^3 are found by minimizing the reprojection error between the matched 3-D world points X_i ∈ R^3 and the key points x_i,

{R, t} = argmin_{R,t} Σ_{i∈X} ρ( ||x_i − π_s(R X_i + t)||²_Σ ),

where X is the set of all matching point pairs, ρ is the robust Huber cost function, and Σ is the covariance matrix associated with the key point scale. The projection function π_s is defined as

π_s([X, Y, Z]^T) = [ f_x X/Z + c_x, f_y Y/Z + c_y, f_x (X − b)/Z + c_x ]^T,

where (f_x, f_y) is the focal length, (c_x, c_y) is the origin offset, and b is the baseline. All of these can be obtained from calibration.
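The projection function and the robust reprojection cost of S22 can be sketched as follows; a full bundle adjustment would minimize this cost over R and t with a nonlinear least-squares solver, and the Huber threshold and the omission of the scale covariance Σ are simplifying assumptions of this sketch:

```python
import numpy as np

def project_stereo(Xc, fx, fy, cx, cy, b):
    """Stereo projection pi_s of a point Xc given in the camera frame:
    returns (u_left, v, u_right) from focal length, origin offset and baseline."""
    X, Y, Z = Xc
    return np.array([fx * X / Z + cx,
                     fy * Y / Z + cy,
                     fx * (X - b) / Z + cx])  # right view shifted by the baseline

def huber(s, delta=5.991):
    """Robust Huber cost on a squared error s (the threshold is an assumed value)."""
    return s if s <= delta else 2.0 * np.sqrt(delta * s) - delta

def reprojection_cost(R, t, points_world, keypoints, fx, fy, cx, cy, b):
    """Sum of Huber-weighted squared reprojection errors over matched pairs."""
    cost = 0.0
    for Xw, x_obs in zip(points_world, keypoints):
        e = x_obs - project_stereo(R @ Xw + t, fx, fy, cx, cy, b)
        cost += huber(float(e @ e))
    return cost
```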
The estimated pose comprises estimated position parameters and estimated attitude parameters, the optimized pose comprises optimized position parameters and optimized attitude parameters, and the final pose comprises final position parameters and final attitude parameters.
In S3, obtaining the estimated pose using the IMU equations of motion and then obtaining the optimized pose using the extended Kalman filter includes the following steps:
S31, constructing a rotation matrix between the inertial world coordinate system and the IMU coordinate system, differentiating the matrix to obtain the differential equation of the rotation matrix, solving the differential equation to obtain the matrix variable, and finally converting the matrix variable into the attitude quaternion of the IMU;
S32, obtaining the estimated pose of the camera from the position and attitude of the IMU through the spatial relation between the coordinate systems;
S33, after the estimated pose of the camera is obtained, optimizing the estimated pose with an extended Kalman filter to obtain the optimized pose.
The S31 specifically includes the following steps:
The rotation matrix C is composed of the direction cosines between the two bases,

C = [ [i·i′, i·j′, i·k′], [j·i′, j·j′, j·k′], [k·i′, k·j′, k·k′] ],

where i, j, k are the unit vectors of the inertial world coordinate system and i′, j′, k′ are the unit vectors of the IMU coordinate system, so that C rotates vectors from the IMU coordinate system into the world coordinate system. Differentiating the rotation matrix C gives the differential equation

dC/dt = C [ω]_× ,

where ω is the angular velocity measured by the gyroscope and [ω]_× is its skew-symmetric matrix. Solving the differential equation yields the matrix variable, which is converted into the attitude quaternion of the IMU. The IMU position is then estimated by removing the gravity component from the acceleration and integrating twice to obtain the displacement value, following

dv/dt = C a − g,   dp/dt = v,

where p is the position of the IMU in the inertial world coordinate system, v is its velocity, a is the acceleration measured by the IMU, and g is the gravity vector in the world coordinate system.
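A sketch of the propagation described in S31, assuming gyroscope readings omega (rad/s) and accelerometer readings a_m (m/s²) expressed in the IMU frame at a fixed time step dt, using a first-order Euler step of the rotation-matrix differential equation; the gravity sign convention is an assumption of this sketch:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]_x of an angular-velocity vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def propagate_imu(C, v, p, omega, a_m, dt, g=np.array([0.0, 0.0, 9.81])):
    """One Euler step of the IMU equations of motion in S31.

    C: rotation from the IMU frame to the inertial world frame
    v, p: IMU velocity and position in the world frame
    omega, a_m: gyroscope and accelerometer measurements in the IMU frame
    """
    C = C @ (np.eye(3) + skew(omega) * dt)  # dC/dt = C [omega]_x, to first order
    a_w = C @ a_m - g                       # remove the gravity component
    v = v + a_w * dt                        # first integration: velocity
    p = p + v * dt                          # second integration: displacement
    return C, v, p
```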
The S32 specifically includes the following steps:
The estimated camera pose is computed from the position and attitude of the IMU through the spatial relation between the coordinate systems: the estimated position parameter is obtained from the IMU position via the rotation from the visual coordinate system to the inertial world coordinate system, subject to the camera position measurement noise n_p, and the estimated attitude parameter is obtained by composing the IMU attitude with the rotation from the IMU coordinate system to the camera coordinate system.
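The spatial relation used in S32 amounts to composing the IMU pose with the IMU-to-camera extrinsic transform known from calibration; a minimal sketch under that assumption (measurement noise omitted):

```python
import numpy as np

def camera_pose_from_imu(C_wi, p_wi, R_ic, p_ic):
    """Estimated camera pose (S32) from the IMU pose and the IMU-camera extrinsics.

    C_wi, p_wi: IMU orientation and position in the inertial world frame
    R_ic: rotation from the IMU frame to the camera frame
    p_ic: position of the camera origin expressed in the IMU frame
    """
    C_wc = C_wi @ R_ic.T        # camera orientation in the world frame
    p_wc = p_wi + C_wi @ p_ic   # camera position: IMU position plus the lever arm
    return C_wc, p_wc
```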
The S33 specifically includes the following steps:
The measurement matrix H is composed of the linearized error measurement matrices. The estimate is updated according to the measurement matrix H and the Kalman filtering process, and the optimized pose is obtained from the correction value.
Updating the estimated value according to the measurement matrix H and the Kalman filtering process specifically comprises the following steps:
S331, calculating the measurement residual between the camera pose measurement and the pose predicted from the current estimate;
S332, calculating the innovation covariance S according to the formula S = H P H^T + R, where P is the state covariance matrix and R is the covariance matrix of the translational and rotational measurement noise of the camera;
S333, calculating the Kalman gain K according to the formula K = P H^T S^(-1);
S334, calculating the correction value by applying the Kalman gain K to the residual and using it to update the estimate.
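Steps S331 to S334 follow the standard extended-Kalman-filter measurement update; a sketch assuming the residual, state covariance P, measurement matrix H, and measurement noise covariance R are already available (the covariance update form is the standard one and is not stated in the text above):

```python
import numpy as np

def ekf_update(x, P, residual, H, R):
    """Standard EKF measurement update implementing steps S331-S334.

    x: state estimate; P: state covariance; residual: measurement residual
    (S331); H: linearized measurement matrix; R: measurement noise covariance.
    """
    S = H @ P @ H.T + R                    # S332: innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # S333: Kalman gain
    x = x + K @ residual                   # S334: apply the correction value
    P = (np.eye(P.shape[0]) - K @ H) @ P   # propagate the corrected covariance
    return x, P
```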
the S3 further includes determining a visual scale factor using the binocular image.
Aiming at the problem that SLAM cannot localize the camera and loses tracking over motion-blurred segments of an image sequence, the method combines the inertial measurement unit kinematic equations with an extended Kalman filter to compute and optimize the camera pose, so that SLAM obtains continuous and reliable camera pose localization and tracking. The method not only effectively improves the accuracy of the calculated result, but also streamlines the calculation process, reducing the computational cost, improving the efficiency of the algorithm, and laying a foundation for subsequent popularization and use. In addition, the invention provides a reference for other related problems in the same field, can be expanded and extended on this basis to the technical schemes of other algorithms in the field, and has a very wide application prospect.
In conclusion, the invention provides an IMU-based SLAM motion blur pose tracking algorithm, a method of high value for practical use and popularization.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein, and any reference signs in the claims are not intended to be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment contains only a single independent technical solution; this manner of description is adopted for clarity only, and those skilled in the art should take the description as a whole, combining the embodiments as appropriate to form other embodiments that can be understood by those skilled in the art.

Claims (10)

1. An IMU-based SLAM motion blur pose tracking algorithm is characterized by comprising the following steps:
S1, performing ORB feature extraction on the input image sequence, judging whether the input image is a normal image or a motion-blurred image according to the number of extracted feature points, and selecting step S2 or S3 according to the judgment result;
S2, if the judgment result is a normal image, estimating the initial pose of the camera using the uniform motion model, and then performing bundle adjustment of the motion parameters;
S3, if the judgment result is a motion-blurred image, obtaining an estimated pose using the IMU equations of motion, obtaining an optimized pose using an extended Kalman filter, and finally combining the estimated pose and the optimized pose to obtain the final pose of the camera.
2. The IMU-based SLAM motion-blurred pose tracking algorithm of claim 1, wherein the ORB feature extraction on the input image sequence in S1 comprises the following steps:
S11, FAST corner detection: a pixel p is selected in the image and its brightness is denoted I_p; a threshold T is set; the 16 pixels on a circle of radius 3 centered on p are selected; if N consecutive points on the selected circle have brightness greater than I_p + T or less than I_p − T, the pixel p is regarded as a feature point; the above steps are repeated, performing the same operation on each pixel;
S12, calculating the BRIEF descriptor: rotation invariance is added to the BRIEF descriptor by using the main direction of the feature point, which is calculated according to the formula

θ = arctan(m_01 / m_10),

where m_01 and m_10 are moments of the image block.
The BRIEF descriptor is a vector of 0s and 1s, where each bit encodes the brightness relationship between two pixels p and q near the key point: if p is brighter than q, the bit is 1; otherwise it is 0.
3. The IMU-based SLAM motion blur pose tracking algorithm of claim 1, wherein estimating the initial pose of the camera using the uniform motion model and then performing the bundle adjustment of the motion parameters in S2 comprises the following steps:
S21, updating the velocity matrix in the uniform motion model, and obtaining the initial pose of the current frame from the uniform motion model as T_c = V T_l, where V is the velocity matrix and T_l is the camera pose of the previous frame;
S22, matching the 3-D points in the world coordinate system with the key points in the image, and performing bundle adjustment of the motion parameters to optimize the camera pose: the orientation R ∈ SO(3) and the position t ∈ R^3 are found by minimizing the reprojection error between the matched 3-D world points X_i ∈ R^3 and the key points x_i,

{R, t} = argmin_{R,t} Σ_{i∈X} ρ( ||x_i − π_s(R X_i + t)||²_Σ ),

where X is the set of all matching point pairs, ρ is the robust Huber cost function, and Σ is the covariance matrix associated with the key point scale. The projection function π_s is defined as

π_s([X, Y, Z]^T) = [ f_x X/Z + c_x, f_y Y/Z + c_y, f_x (X − b)/Z + c_x ]^T,

where (f_x, f_y) is the focal length, (c_x, c_y) is the origin offset, and b is the baseline.
4. The IMU-based SLAM motion blur pose tracking algorithm of claim 1, wherein the estimated pose comprises estimated position parameters and estimated attitude parameters, the optimized pose comprises optimized position parameters and optimized attitude parameters, and the final pose comprises final position parameters and final attitude parameters.
5. The IMU based SLAM motion blur pose tracking algorithm of claim 1, wherein the obtaining of the estimated pose using IMU equations of motion and then the optimized pose using an extended Kalman filter in S3 comprises the steps of:
S31, constructing a rotation matrix between the inertial world coordinate system and the IMU coordinate system, differentiating the matrix to obtain the differential equation of the rotation matrix, solving the differential equation to obtain the matrix variable, and finally converting the matrix variable into the attitude quaternion of the IMU;
S32, obtaining the estimated pose of the camera from the position and attitude of the IMU through the spatial relation between the coordinate systems;
S33, after the estimated pose of the camera is obtained, optimizing the estimated pose with an extended Kalman filter to obtain the optimized pose.
6. The IMU-based SLAM motion-blurred pose tracking algorithm of claim 5, wherein the S31 specifically comprises the steps of:
The rotation matrix C is composed of the direction cosines between the two bases,

C = [ [i·i′, i·j′, i·k′], [j·i′, j·j′, j·k′], [k·i′, k·j′, k·k′] ],

where i, j, k are the unit vectors of the inertial world coordinate system and i′, j′, k′ are the unit vectors of the IMU coordinate system, so that C rotates vectors from the IMU coordinate system into the world coordinate system. Differentiating the rotation matrix C gives the differential equation

dC/dt = C [ω]_× ,

where ω is the angular velocity measured by the gyroscope and [ω]_× is its skew-symmetric matrix. Solving the differential equation yields the matrix variable, which is converted into the attitude quaternion of the IMU. The IMU position is then estimated by removing the gravity component from the acceleration and integrating twice to obtain the displacement value, following

dv/dt = C a − g,   dp/dt = v,

where p is the position of the IMU in the inertial world coordinate system, v is its velocity, a is the acceleration measured by the IMU, and g is the gravity vector in the world coordinate system.
7. The IMU-based SLAM motion-blurred pose tracking algorithm of claim 5, wherein the S32 specifically comprises the steps of:
The estimated camera pose is computed from the position and attitude of the IMU through the spatial relation between the coordinate systems: the estimated position parameter is obtained from the IMU position via the rotation from the visual coordinate system to the inertial world coordinate system, subject to the camera position measurement noise n_p, and the estimated attitude parameter is obtained by composing the IMU attitude with the rotation from the IMU coordinate system to the camera coordinate system.
8. The IMU-based SLAM motion-blurred pose tracking algorithm of claim 5, wherein the S33 specifically comprises the steps of:
The measurement matrix H is composed of the linearized error measurement matrices. The estimate is updated according to the measurement matrix H and the Kalman filtering process, and the optimized pose is obtained from the correction value.
9. The IMU-based SLAM motion blur pose tracking algorithm of claim 8, wherein the updating of the estimate from the measurement matrix H and from the Kalman filtering process comprises the steps of:
S331, calculating the measurement residual between the camera pose measurement and the pose predicted from the current estimate;
S332, calculating the innovation covariance S according to the formula S = H P H^T + R, where P is the state covariance matrix and R is the covariance matrix of the translational and rotational measurement noise of the camera;
S333, calculating the Kalman gain K according to the formula K = P H^T S^(-1);
S334, calculating the correction value by applying the Kalman gain K to the residual and using it to update the estimate.
10. the IMU-based SLAM motion-blurred pose tracking algorithm of claim 1, wherein: the S3 further includes determining a visual scale factor using the binocular image.
CN201810651885.XA 2018-06-22 2018-06-22 SLAM motion blur pose tracking algorithm based on IMU Active CN108648215B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810651885.XA CN108648215B (en) 2018-06-22 2018-06-22 SLAM motion blur pose tracking algorithm based on IMU

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810651885.XA CN108648215B (en) 2018-06-22 2018-06-22 SLAM motion blur pose tracking algorithm based on IMU

Publications (2)

Publication Number Publication Date
CN108648215A true CN108648215A (en) 2018-10-12
CN108648215B CN108648215B (en) 2022-04-15

Family

ID=63752985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810651885.XA Active CN108648215B (en) 2018-06-22 2018-06-22 SLAM motion blur pose tracking algorithm based on IMU

Country Status (1)

Country Link
CN (1) CN108648215B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109682385A (en) * 2018-11-05 2019-04-26 天津大学 A method of instant positioning and map structuring based on ORB feature
CN110070577A (en) * 2019-04-30 2019-07-30 电子科技大学 Vision SLAM key frame and feature point selection method based on characteristic point distribution
CN110118556A (en) * 2019-04-12 2019-08-13 浙江工业大学 A kind of robot localization method and device based on covariance mixing together SLAM
CN110232662A (en) * 2019-05-10 2019-09-13 昆明理工大学 A kind of how new breath anti-interference filtering method under face pose cooperative system
CN110782492A (en) * 2019-10-08 2020-02-11 三星(中国)半导体有限公司 Pose tracking method and device
CN111044054A (en) * 2020-01-06 2020-04-21 哈尔滨工业大学 Monocular SLAM algorithm-based pose optimization method by Kalman filtering
CN111161348A (en) * 2018-11-08 2020-05-15 深圳市优必选科技有限公司 Monocular camera-based object pose estimation method, device and equipment
CN111540016A (en) * 2020-04-27 2020-08-14 深圳南方德尔汽车电子有限公司 Pose calculation method and device based on image feature matching, computer equipment and storage medium
CN113237484A (en) * 2021-04-21 2021-08-10 四川轻化工大学 SLAM-based camera and IMU external rotation parameter solving method
CN113658260A (en) * 2021-07-12 2021-11-16 南方科技大学 Robot pose calculation method and system, robot and storage medium
US11468599B1 (en) * 2021-06-17 2022-10-11 Shenzhen Reolink Technology Co., Ltd. Monocular visual simultaneous localization and mapping data processing method apparatus, terminal, and readable storage medium
US11610330B2 (en) 2019-10-08 2023-03-21 Samsung Electronics Co., Ltd. Method and apparatus with pose tracking
WO2023130842A1 (en) * 2022-01-06 2023-07-13 广东汇天航空航天科技有限公司 Camera pose determining method and apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931275A (en) * 2016-05-23 2016-09-07 北京暴风魔镜科技有限公司 Monocular and IMU fused stable motion tracking method and device based on mobile terminal
CN106446815A (en) * 2016-09-14 2017-02-22 浙江大学 Simultaneous positioning and map building method
CN106940704A (en) * 2016-11-25 2017-07-11 北京智能管家科技有限公司 A kind of localization method and device based on grating map
CN107437258A (en) * 2016-05-27 2017-12-05 株式会社理光 Feature extracting method, estimation method of motion state and state estimation device
CN108036785A (en) * 2017-11-24 2018-05-15 浙江大学 A kind of aircraft position and orientation estimation method based on direct method and inertial navigation fusion

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931275A (en) * 2016-05-23 2016-09-07 北京暴风魔镜科技有限公司 Monocular and IMU fused stable motion tracking method and device based on mobile terminal
CN107437258A (en) * 2016-05-27 2017-12-05 株式会社理光 Feature extracting method, estimation method of motion state and state estimation device
CN106446815A (en) * 2016-09-14 2017-02-22 浙江大学 Simultaneous positioning and map building method
CN106940704A (en) * 2016-11-25 2017-07-11 北京智能管家科技有限公司 A kind of localization method and device based on grating map
CN108036785A (en) * 2017-11-24 2018-05-15 浙江大学 A kind of aircraft position and orientation estimation method based on direct method and inertial navigation fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MUR-ARTAL R et al.: "ORB-SLAM: a Versatile and Accurate Monocular SLAM", IEEE Transactions on Robotics *
WANG Lijia: "Research on Monocular Visual SLAM Technology Based on Graph Optimization", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109682385A (en) * 2018-11-05 2019-04-26 天津大学 A method of instant positioning and map structuring based on ORB feature
CN111161348B (en) * 2018-11-08 2023-12-05 深圳市优必选科技有限公司 Object pose estimation method, device and equipment based on monocular camera
CN111161348A (en) * 2018-11-08 2020-05-15 深圳市优必选科技有限公司 Monocular camera-based object pose estimation method, device and equipment
CN110118556A (en) * 2019-04-12 2019-08-13 浙江工业大学 A kind of robot localization method and device based on covariance mixing together SLAM
CN110070577A (en) * 2019-04-30 2019-07-30 电子科技大学 Vision SLAM key frame and feature point selection method based on characteristic point distribution
CN110070577B (en) * 2019-04-30 2023-04-28 电子科技大学 Visual SLAM key frame and feature point selection method based on feature point distribution
CN110232662B (en) * 2019-05-10 2022-06-10 昆明理工大学 Multi-information anti-interference filtering method under face pose collaborative system
CN110232662A (en) * 2019-05-10 2019-09-13 昆明理工大学 A kind of how new breath anti-interference filtering method under face pose cooperative system
US11610330B2 (en) 2019-10-08 2023-03-21 Samsung Electronics Co., Ltd. Method and apparatus with pose tracking
CN110782492A (en) * 2019-10-08 2020-02-11 三星(中国)半导体有限公司 Pose tracking method and device
CN111044054A (en) * 2020-01-06 2020-04-21 哈尔滨工业大学 Monocular SLAM algorithm-based pose optimization method by Kalman filtering
CN111540016A (en) * 2020-04-27 2020-08-14 深圳南方德尔汽车电子有限公司 Pose calculation method and device based on image feature matching, computer equipment and storage medium
CN111540016B (en) * 2020-04-27 2023-11-10 深圳南方德尔汽车电子有限公司 Pose calculation method and device based on image feature matching, computer equipment and storage medium
CN113237484A (en) * 2021-04-21 2021-08-10 四川轻化工大学 SLAM-based camera and IMU external rotation parameter solving method
US11468599B1 (en) * 2021-06-17 2022-10-11 Shenzhen Reolink Technology Co., Ltd. Monocular visual simultaneous localization and mapping data processing method apparatus, terminal, and readable storage medium
CN113658260A (en) * 2021-07-12 2021-11-16 南方科技大学 Robot pose calculation method and system, robot and storage medium
WO2023130842A1 (en) * 2022-01-06 2023-07-13 广东汇天航空航天科技有限公司 Camera pose determining method and apparatus

Also Published As

Publication number Publication date
CN108648215B (en) 2022-04-15

Similar Documents

Publication Publication Date Title
CN108648215B (en) SLAM motion blur pose tracking algorithm based on IMU
CN110009681B (en) IMU (inertial measurement unit) assistance-based monocular vision odometer pose processing method
CN106446815B (en) A kind of simultaneous localization and mapping method
CN107564061B (en) Binocular vision mileage calculation method based on image gradient joint optimization
CN109676604B (en) Robot curved surface motion positioning method and motion positioning system thereof
CN112634451B (en) Outdoor large-scene three-dimensional mapping method integrating multiple sensors
CN110223348B (en) Robot scene self-adaptive pose estimation method based on RGB-D camera
CN111210477B (en) Method and system for positioning moving object
CN109165680B (en) Single-target object dictionary model improvement method in indoor scene based on visual SLAM
CN108682027A (en) VSLAM realization method and systems based on point, line Fusion Features
CN109993113A (en) A kind of position and orientation estimation method based on the fusion of RGB-D and IMU information
CN109579825B (en) Robot positioning system and method based on binocular vision and convolutional neural network
CN111609868A (en) Visual inertial odometer method based on improved optical flow method
CN111932674A (en) Optimization method of line laser vision inertial system
CN110751123B (en) Monocular vision inertial odometer system and method
CN112419497A (en) Monocular vision-based SLAM method combining feature method and direct method
WO2023005457A1 (en) Pose calculation method and apparatus, electronic device, and readable storage medium
CN111307146A (en) Virtual reality wears display device positioning system based on binocular camera and IMU
CN112767546B (en) Binocular image-based visual map generation method for mobile robot
CN113532420B (en) Visual inertial odometer method integrating dotted line characteristics
CN112541423A (en) Synchronous positioning and map construction method and system
Li et al. A binocular MSCKF-based visual inertial odometry system using LK optical flow
CN112284381B (en) Visual inertia real-time initialization alignment method and system
CN112767481A (en) High-precision positioning and mapping method based on visual edge features
CN112419411A (en) Method for realizing visual odometer based on convolutional neural network and optical flow characteristics

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant