CN108648215B - SLAM motion blur pose tracking algorithm based on IMU - Google Patents
Info
- Publication number
- CN108648215B (application CN201810651885.XA)
- Authority
- CN
- China
- Prior art keywords
- pose
- imu
- camera
- matrix
- motion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20076—Probabilistic image processing
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an IMU-based SLAM motion blur pose tracking algorithm comprising the following steps: S1, ORB features are extracted from the input image sequence, each image is classified as a normal image or a motion-blurred image according to the number of feature points, and step S2 or S3 is selected accordingly; S2, if the image is normal, the initial camera pose is estimated with the uniform motion model and bundle adjustment of the motion parameters is performed; S3, if the image is motion-blurred, an estimated pose is obtained from the IMU motion equations, an optimized pose is obtained with an extended Kalman filter, and the two are combined into the final camera pose. To address the problem that SLAM loses camera localization and tracking on motion-blurred segments of an image sequence, the method combines the kinematic equations of an inertial measurement unit with an extended Kalman filter to compute and refine the camera pose, so that SLAM obtains continuous and reliable camera pose localization and tracking.
Description
Technical Field
The invention relates to a tracking algorithm, in particular to an IMU-based SLAM motion blur pose tracking algorithm, and belongs to the field of computer vision and robotics.
Background
Simultaneous localization and mapping (SLAM) has been a research hotspot in computer vision and robotics in recent years. SLAM technology builds and updates a map of an unknown environment while tracking the camera's position within it in real time. Early SLAM methods solved the problem by filtering: Davison et al. proposed MonoSLAM, a real-time single-camera method that uses an extended Kalman filter as the back end to track very sparse feature points at the front end. Eade et al. proposed a scalable monocular SLAM method that uses particle filtering and top-down search to handle a large number of landmarks in real time. Most of these methods process every image frame with a filter to jointly estimate the map-point positions and the camera pose; because consecutive frames contain only a small amount of new information, they incur a high computational cost, accumulate linearization errors, and yield results of limited accuracy.
Strasdat et al. later showed that keyframe-based SLAM is more accurate than filtering methods at the same computational cost. Klein et al. proposed PTAM, a keyframe-based system that parallelizes the tracking and mapping processes and was the first to use nonlinear optimization, rather than a traditional filter, as the back end. Mur-Artal et al. proposed ORB-SLAM2, which supports monocular, stereo and RGB-D modes; the whole system is built around ORB features, including the visual odometry and the ORB vocabulary used for loop detection; its loop-closure detection is a highlight; SLAM is carried out with three threads, with numerous optimizations made around the feature points.
Following this trend, techniques that track the camera pose with keyframe-based ORB-SLAM2 have also emerged. However, some problems remain to be addressed, such as how to better perform camera pose tracking by combining visual features with IMU information.
In summary, how to provide an IMU-based SLAM motion blur pose tracking algorithm that overcomes the loss of camera localization and tracking on motion-blurred segments of an image sequence is one of the problems that practitioners in the field urgently wish to solve.
Disclosure of Invention
In view of the above defects in the prior art, the present invention aims to provide an IMU-based SLAM motion blur pose tracking algorithm.
Specifically, an IMU-based SLAM motion blur pose tracking algorithm comprises the following steps:
s1, ORB features are extracted from the input image sequence, each image is judged to be a normal image or a motion-blurred image according to the number of extracted feature points, and step S2 or S3 is selected according to the judgment result;
s2, if the judgment result is a normal image, the initial pose of the camera is estimated with the uniform motion model, and bundle adjustment of the motion parameters is then performed;
and S3, if the judgment result is a motion-blurred image, an estimated pose is obtained from the IMU motion equations, an optimized pose is then obtained with an extended Kalman filter, and the estimated pose and the optimized pose are finally combined to obtain the final pose of the camera.
Preferably, the ORB feature extraction on the input image sequence in S1 includes the following steps:
s11, FAST corner detection: a pixel p is selected in the image and its brightness is denoted I_p; a threshold T is set; 16 pixels are taken on a circle of radius 3 centred on p; if N consecutive pixels on this circle are all brighter than I_p + T or all darker than I_p − T, the pixel p is regarded as a feature point; the above test is then repeated, performing the same operation on every pixel;
s12, computation of the BRIEF descriptor: rotation invariance is added to the BRIEF descriptor by using the main direction of the feature point; the main direction of the feature point is computed from the moments of the image patch as θ = arctan(m01 / m10),
where m01 and m10 are moments of the image patch.
The BRIEF descriptor is a vector of 0s and 1s, where each 0 or 1 encodes the brightness relation of two pixels p and q near the keypoint: the bit is 1 if p is brighter than q and 0 otherwise.
Preferably, estimating the initial pose of the camera with the uniform motion model and then performing bundle adjustment of the motion parameters in S2 comprises the following steps:
s21, the velocity matrix of the uniform motion model is updated, and the initial pose of the current frame is obtained from the model as T_c = V·T_l, where V is the velocity matrix and T_l is the camera pose of the previous frame;
s22, the 3-D points in the world coordinate system are matched with the keypoints in the image, and bundle adjustment of the motion parameters is performed to optimize the camera pose: the camera orientation R ∈ SO(3) and position t ∈ R³ are found by minimizing the reprojection error between the matched world 3-D points X_i ∈ R³ and keypoints x_i,
{R, t} = argmin_{R, t} Σ_{i∈X} ρ( ‖ x_i − π_s(R·X_i + t) ‖²_Σ ),
where X is the set of all matched point pairs, ρ is the robust Huber cost function, and Σ is the covariance matrix associated with the keypoint scale.
The projection function π_s is defined as
π_s([X, Y, Z]ᵀ) = [ f_x·X/Z + c_x,  f_y·Y/Z + c_y,  f_x·(X − b)/Z + c_x ]ᵀ,
where (f_x, f_y) is the focal length, (c_x, c_y) is the principal-point offset, and b is the baseline.
Preferably, the estimated pose comprises estimated position parameters and estimated attitude parameters, the optimized pose comprises optimized position parameters and optimized attitude parameters, and the final pose comprises final position parameters and final attitude parameters.
Preferably, obtaining the estimated pose from the IMU motion equations and then obtaining the optimized pose with the extended Kalman filter in S3 comprises the following steps:
s31, a rotation matrix between the inertial world coordinate system and the IMU coordinate system is constructed, the matrix is differentiated to obtain its differential equation, the matrix variable is obtained by solving the differential equation, and this variable is finally converted into the attitude quaternion of the IMU;
s32, obtaining an estimated pose of the camera by using the position and the posture of the IMU through the space relation between the coordinate systems;
and S33, after the estimated pose of the camera is obtained, optimizing the estimated pose by using an extended Kalman filter to obtain an optimized pose.
Preferably, the S31 specifically includes the following steps:
the rotation matrix C is expressed through the direction cosines between the two frames, i.e., its entries are the pairwise dot products of the unit vectors i, j, k of the inertial world coordinate system and the unit vectors i′, j′, k′ of the IMU coordinate system; differentiating the rotation matrix C with respect to time yields its differential equation.
Solving the differential equation gives the matrix variable, which is converted into the IMU attitude quaternion. The IMU position is then estimated by integrating twice to obtain the displacement,
where the integrated quantities are the position and velocity of the IMU in the inertial world coordinate system, and g is the gravity vector in the world coordinate system.
Preferably, the S32 specifically includes the following steps:
the calculation formula is as follows,
where the quantities in the formula are, in order: the estimated position parameters; the rotation from the visual coordinate system to the inertial world coordinate system; the IMU attitude; the camera position measurement noise n_p; the estimated attitude parameters; and the rotation from the IMU coordinate system to the camera coordinate system.
Preferably, the S33 specifically includes the following steps:
the measurement matrix H is constructed, the estimate is then updated according to the measurement matrix H and the Kalman filtering process, and the optimized pose is obtained by computation with the corrected value.
Preferably, updating the estimate according to the measurement matrix H and the Kalman filtering process specifically comprises the following steps:
s332, the innovation covariance S is computed by the formula S = H·P·Hᵀ + R,
where R is the covariance matrix of the translational and rotational measurement noise of the camera;
s333, the Kalman gain K is computed by the formula K = P·Hᵀ·S⁻¹;
Preferably, the S3 further comprises determining the visual scale factor using the binocular image.
Compared with the prior art, the invention has the advantages that:
aiming at the problem that the SLAM cannot perform camera positioning and tracking loss on a motion blur section in an image sequence, the method combines and utilizes an inertia measurement unit kinematic equation and an extended Kalman filter to calculate and optimize the camera pose, so that the SLAM can obtain continuous and reliable camera pose positioning and tracking. The method not only effectively improves the accuracy of the calculation result, but also optimizes the calculation process, reduces the calculation cost, improves the algorithm efficiency, and lays a foundation for subsequent popularization and use. In addition, the invention also provides reference for other related problems in the same field, can be expanded and extended on the basis of the reference, is applied to technical schemes of other algorithms in the field, and has very wide application prospect.
In conclusion, the invention provides an IMU-based SLAM motion blur pose tracking algorithm, and the method has high use and popularization values.
The following detailed description of the embodiments of the invention, taken in conjunction with the accompanying drawings, is provided to facilitate understanding of the technical solutions of the invention.
Drawings
Fig. 1 is a schematic diagram of the principle of the present invention.
Detailed Description
As shown in FIG. 1, the invention discloses an IMU-based SLAM motion blur pose tracking algorithm that builds on the ORB-SLAM2 framework and incorporates IMU measurements to track the camera pose.
Specifically, the method comprises the following steps:
s1, ORB features are extracted from the input image sequence, each image is judged to be a normal image or a motion-blurred image according to the number of extracted feature points, and step S2 or S3 is selected according to the judgment result;
s2, if the judgment result is a normal image, the initial pose of the camera is estimated with the uniform motion model, and bundle adjustment of the motion parameters is then performed;
and S3, if the judgment result is a motion-blurred image, an estimated pose is obtained from the IMU motion equations, an optimized pose is then obtained with an extended Kalman filter, and the estimated pose and the optimized pose are finally combined to obtain the final pose of the camera.
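As an illustration of this dispatch step, the following Python sketch classifies a frame by its ORB feature count and produces the matching initial estimate; the feature-count threshold and the function and variable names are illustrative assumptions, not values fixed by the patent.

```python
import cv2

FEATURE_THRESHOLD = 200  # assumed cutoff; the patent does not fix a number

def classify_and_predict(image, prev_pose, velocity):
    """Classify a frame by ORB feature count and return the matching branch.

    prev_pose and velocity are 4x4 homogeneous transforms (T_l and V of S21).
    """
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints = orb.detect(image, None)
    if len(keypoints) >= FEATURE_THRESHOLD:
        # Normal image (S2): constant-velocity prediction T_c = V * T_l,
        # later refined by bundle adjustment of the motion parameters.
        return "normal", velocity @ prev_pose
    # Motion-blurred image (S3): too few features, so fall back to IMU
    # propagation followed by the EKF correction sketched further below.
    return "blurred", None
```

In ORB-SLAM2 the velocity matrix would typically be the relative transform between the two most recent frames; here it is simply taken as an input.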
The ORB feature extraction of the input image sequence in S1 includes the following steps:
s11, FAST corner detection: a pixel p is selected in the image and its brightness is denoted I_p; a threshold T is set; 16 pixels are taken on a circle of radius 3 centred on p; if N consecutive pixels on this circle are all brighter than I_p + T or all darker than I_p − T, the pixel p is regarded as a feature point; the above test is then repeated, performing the same operation on every pixel;
s12, computation of the BRIEF descriptor: rotation invariance is added to the BRIEF descriptor by using the main direction of the feature point; the main direction of the feature point is computed from the moments of the image patch as θ = arctan(m01 / m10),
where m01 and m10 are moments of the image patch.
The BRIEF descriptor is a vector of 0s and 1s, where each 0 or 1 encodes the brightness relation of two pixels p and q near the keypoint: the bit is 1 if p is brighter than q and 0 otherwise.
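The FAST test of S11 and the intensity-centroid orientation of S12 can be sketched as follows; the contiguous-arc length N = 12 and the centring of the patch are assumptions made for illustration, since the patent does not fix them.

```python
import numpy as np

# Offsets of the 16 pixels on the circle of radius 3 around the candidate pixel.
CIRCLE_OFFSETS = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2),
                  (1, 3), (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1),
                  (-2, -2), (-1, -3)]

def is_fast_corner(gray, x, y, threshold, n_contiguous=12):
    """FAST test of step S11 at pixel (x, y) of a grayscale image (2-D array)."""
    i_p = int(gray[y, x])
    ring = [int(gray[y + dy, x + dx]) for dx, dy in CIRCLE_OFFSETS]
    for flags in ([v > i_p + threshold for v in ring],
                  [v < i_p - threshold for v in ring]):
        run, doubled = 0, flags + flags  # doubling lets a run wrap around the circle
        for f in doubled:
            run = run + 1 if f else 0
            if run >= n_contiguous:
                return True
    return False

def keypoint_orientation(patch):
    """Main direction of step S12: theta = arctan2(m01, m10) from patch moments."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs -= (w - 1) / 2.0          # coordinates relative to the patch centre
    ys -= (h - 1) / 2.0
    m10 = np.sum(xs * patch)     # m_pq = sum of x^p * y^q * I(x, y)
    m01 = np.sum(ys * patch)
    return np.arctan2(m01, m10)
```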
In S2, estimating the initial pose of the camera with the uniform motion model and then performing bundle adjustment of the motion parameters comprises the following steps:
s21, the velocity matrix of the uniform motion model is updated, and the initial pose of the current frame is obtained from the model as T_c = V·T_l, where V is the velocity matrix and T_l is the camera pose of the previous frame;
s22, the 3-D points in the world coordinate system are matched with the keypoints in the image, and bundle adjustment of the motion parameters is performed to optimize the camera pose: the camera orientation R ∈ SO(3) and position t ∈ R³ are found by minimizing the reprojection error between the matched world 3-D points X_i ∈ R³ and keypoints x_i,
{R, t} = argmin_{R, t} Σ_{i∈X} ρ( ‖ x_i − π_s(R·X_i + t) ‖²_Σ ),
where X is the set of all matched point pairs, ρ is the robust Huber cost function, and Σ is the covariance matrix associated with the keypoint scale.
The projection function π_s is defined as
π_s([X, Y, Z]ᵀ) = [ f_x·X/Z + c_x,  f_y·Y/Z + c_y,  f_x·(X − b)/Z + c_x ]ᵀ,
where (f_x, f_y) is the focal length, (c_x, c_y) is the principal-point offset, and b is the baseline. These are all obtained from camera calibration.
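A small sketch of the stereo projection function π_s and the per-match reprojection residual follows; the stereo form with a right-image coordinate is an assumption suggested by the baseline parameter b, and in a full implementation the robust Huber cost of these residuals would be minimised over R and t by a nonlinear least-squares solver (ORB-SLAM2 uses g2o).

```python
import numpy as np

def project_stereo(X_world, R, t, fx, fy, cx, cy, b):
    """Stereo projection pi_s of step S22: world point -> (u_left, v, u_right)."""
    X, Y, Z = R @ X_world + t          # point expressed in the camera frame
    u_l = fx * X / Z + cx
    v = fy * Y / Z + cy
    u_r = fx * (X - b) / Z + cx        # right image shares the same row v
    return np.array([u_l, v, u_r])

def reprojection_residual(x_obs, X_world, R, t, intrinsics):
    """Residual whose robust (Huber) norm is summed over all matched pairs."""
    return x_obs - project_stereo(X_world, R, t, *intrinsics)
```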
The estimated pose comprises estimated position parameters and estimated attitude parameters, the optimized pose comprises optimized position parameters and optimized attitude parameters, and the final pose comprises final position parameters and final attitude parameters.
In step S3, obtaining the estimated pose from the IMU equations of motion and then obtaining the optimized pose with the extended Kalman filter comprises the following steps:
s31, a rotation matrix between the inertial world coordinate system and the IMU coordinate system is constructed, the matrix is differentiated to obtain its differential equation, the matrix variable is obtained by solving the differential equation, and this variable is finally converted into the attitude quaternion of the IMU;
s32, obtaining an estimated pose of the camera by using the position and the posture of the IMU through the space relation between the coordinate systems;
and S33, after the estimated pose of the camera is obtained, optimizing the estimated pose by using an extended Kalman filter to obtain an optimized pose.
The S31 specifically includes the following steps:
the rotation matrix C is expressed through the direction cosines between the two frames, i.e., its entries are the pairwise dot products of the unit vectors i, j, k of the inertial world coordinate system and the unit vectors i′, j′, k′ of the IMU coordinate system; differentiating the rotation matrix C with respect to time yields its differential equation.
Solving the differential equation gives the matrix variable, which is converted into the IMU attitude quaternion. The IMU position is then estimated by removing the gravity component from the acceleration and integrating twice to obtain the displacement,
where the integrated quantities are the position and velocity of the IMU in the inertial world coordinate system, and g is the gravity vector in the world coordinate system.
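The IMU propagation of S31 can be sketched as a single Euler integration step; the quaternion convention, the bias-free measurements, and the simple first-order integration are simplifying assumptions made for illustration.

```python
import numpy as np

def quat_to_rotmat(q):
    """Rotation matrix of a unit quaternion given as [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def integrate_imu(q, p, v, gyro, accel, dt, g=np.array([0.0, 0.0, 9.81])):
    """One Euler step of the IMU motion equations of S31 (bias-free sketch).

    q = [w, x, y, z] is the IMU attitude quaternion (IMU frame -> world frame),
    p and v are IMU position and velocity in the inertial world frame, and
    gyro / accel are the angular rate and specific force measured in the IMU frame.
    """
    # Attitude: quaternion kinematics q_dot = 0.5 * q (x) [0, omega].
    w, x, y, z = q
    ox, oy, oz = gyro
    q_dot = 0.5 * np.array([-x*ox - y*oy - z*oz,
                             w*ox + y*oz - z*oy,
                             w*oy - x*oz + z*ox,
                             w*oz + x*oy - y*ox])
    q = q + q_dot * dt
    q = q / np.linalg.norm(q)

    # Position: rotate the specific force to the world frame, remove gravity,
    # then integrate twice (velocity, then displacement).
    a_world = quat_to_rotmat(q) @ np.asarray(accel) - g
    p = p + v * dt + 0.5 * a_world * dt**2
    v = v + a_world * dt
    return q, p, v
```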
The S32 specifically includes the following steps:
the calculation formula is as follows,
where the quantities in the formula are, in order: the estimated position parameters; the rotation from the visual coordinate system to the inertial world coordinate system; the IMU attitude; the camera position measurement noise n_p; the estimated attitude parameters; and the rotation from the IMU coordinate system to the camera coordinate system.
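A sketch of S32, composing the camera pose from the IMU pose and the IMU-camera extrinsics; the extrinsic parameters and the use of SciPy's Rotation class (which takes quaternions in [x, y, z, w] order) are assumptions made for illustration, not part of the patent.

```python
from scipy.spatial.transform import Rotation

def camera_pose_from_imu(p_i_w, q_i_w, p_c_i, q_c_i):
    """Estimated camera pose in the world frame from the IMU pose (step S32).

    p_i_w, q_i_w : IMU position and attitude in the inertial world frame.
    p_c_i, q_c_i : camera position and orientation in the IMU frame,
                   assumed known from extrinsic calibration.
    """
    R_i_w = Rotation.from_quat(q_i_w)                       # IMU -> world rotation
    p_c_w = p_i_w + R_i_w.apply(p_c_i)                      # estimated position parameters
    q_c_w = (R_i_w * Rotation.from_quat(q_c_i)).as_quat()   # estimated attitude parameters
    return p_c_w, q_c_w
```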
The S33 specifically includes the following steps:
the measurement matrix H is constructed, the estimate is then updated according to the measurement matrix H and the Kalman filtering process, and the optimized pose is obtained by computation with the corrected value.
Updating the estimate according to the measurement matrix H and the Kalman filtering process specifically comprises the following steps:
s332, the innovation covariance S is computed by the formula S = H·P·Hᵀ + R,
where R is the covariance matrix of the translational and rotational measurement noise of the camera;
s333, the Kalman gain K is computed by the formula K = P·Hᵀ·S⁻¹;
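Under the assumption of a standard EKF correction step (the state layout and the measurement Jacobian are left abstract, since the patent only gives the innovation-covariance and gain formulas), the update of S33 can be sketched as:

```python
import numpy as np

def ekf_update(x_pred, P_pred, z, h_pred, H, R):
    """EKF correction of step S33 applied to the IMU-predicted pose estimate.

    x_pred, P_pred : predicted state vector and covariance from IMU propagation.
    z              : pose measurement (e.g. the visually estimated pose).
    h_pred         : predicted measurement h(x_pred).
    H              : measurement matrix (Jacobian of h).
    R              : covariance of the camera translation/rotation measurement noise.
    """
    y = z - h_pred                           # innovation (measurement residual)
    S = H @ P_pred @ H.T + R                 # innovation covariance, S = H P H^T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain, K = P H^T S^-1
    x_upd = x_pred + K @ y                   # corrected (optimized) state
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_upd, P_upd
```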
the S3 further includes determining a visual scale factor using the binocular image.
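For the binocular determination of the visual scale factor, one common approach (an assumption here, since the patent does not spell out the computation) is to recover metric depth from stereo disparity and relate it to the up-to-scale visual estimate.

```python
import numpy as np

def depth_from_disparity(u_left, u_right, fx, b):
    """Metric depth of one stereo match: z = fx * b / d, with disparity d = u_left - u_right."""
    return fx * b / (u_left - u_right)

def visual_scale_factor(metric_depths, visual_depths):
    """Scale relating up-to-scale visual depth estimates to metric stereo depths."""
    return float(np.median(np.asarray(metric_depths) / np.asarray(visual_depths)))
```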
To address the problem that SLAM loses camera localization and tracking on motion-blurred segments of an image sequence, the method combines the kinematic equations of the inertial measurement unit with an extended Kalman filter to compute and refine the camera pose, so that SLAM obtains continuous and reliable camera pose localization and tracking. The method not only effectively improves the accuracy of the computed result, but also streamlines the computation, reduces its cost and improves the efficiency of the algorithm, laying a foundation for subsequent adoption and use. In addition, the invention provides a reference for other related problems in the same field, can be extended on that basis and applied to the technical schemes of other algorithms in the field, and therefore has a very broad application prospect.
In conclusion, the invention provides an IMU-based SLAM motion blur pose tracking algorithm, and the method has high use and popularization values.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein, and any reference signs in the claims are not intended to be construed as limiting the claim concerned.
Furthermore, it should be understood that, although the description is set out in terms of embodiments, not every embodiment contains only a single independent technical solution; this manner of description is adopted merely for clarity, and those skilled in the art should take the description as a whole, the embodiments being combinable as appropriate to form other embodiments understandable to those skilled in the art.
Claims (1)
1. An IMU-based SLAM motion blur pose tracking algorithm is characterized by comprising the following steps:
s1, ORB features are extracted from the input image sequence, each image is judged to be a normal image or a motion-blurred image according to the number of extracted feature points, and step S2 or S3 is selected according to the judgment result;
s2, if the judgment result is a normal image, the initial pose of the camera is estimated with the uniform motion model, and bundle adjustment of the motion parameters is then performed;
s3, if the judgment result is a motion-blurred image, an estimated pose is obtained from the IMU motion equations, an optimized pose is then obtained with an extended Kalman filter, and the estimated pose and the optimized pose are finally combined to obtain the final pose of the camera;
the ORB feature extraction of the input image sequence in S1 includes the following steps:
s11, FAST corner detection: a pixel p is selected in the image and its brightness is denoted I_p; a threshold T is set; 16 pixels are taken on a circle of radius 3 centred on p; if N consecutive pixels on this circle are all brighter than I_p + T or all darker than I_p − T, the pixel p is regarded as a feature point; the above test is then repeated, performing the same operation on every pixel;
s12, computation of the BRIEF descriptor: rotation invariance is added to the BRIEF descriptor by using the main direction of the feature point; the main direction of the feature point is computed from the moments of the image patch as θ = arctan(m01 / m10),
where m01 and m10 are moments of the image patch.
The BRIEF descriptor is a vector of 0s and 1s, where each 0 or 1 encodes the brightness relation of two pixels p and q near the keypoint: the bit is 1 if p is brighter than q and 0 otherwise;
in S2, estimating the initial pose of the camera with the uniform motion model and then performing bundle adjustment of the motion parameters comprises the following steps:
s21, the velocity matrix of the uniform motion model is updated, and the initial pose of the current frame is obtained from the model as T_c = V·T_l, where V is the velocity matrix and T_l is the camera pose of the previous frame;
s22, the 3-D points in the world coordinate system are matched with the keypoints in the image, and bundle adjustment of the motion parameters is performed to optimize the camera pose: the camera orientation R ∈ SO(3) and position t ∈ R³ are found by minimizing the reprojection error between the matched world 3-D points X_i ∈ R³ and keypoints x_i,
{R, t} = argmin_{R, t} Σ_{i∈X} ρ( ‖ x_i − π_s(R·X_i + t) ‖²_Σ ),
where X is the set of all matched point pairs, ρ is the robust Huber cost function, and Σ is the covariance matrix associated with the keypoint scale.
The projection function π_s is defined as
π_s([X, Y, Z]ᵀ) = [ f_x·X/Z + c_x,  f_y·Y/Z + c_y,  f_x·(X − b)/Z + c_x ]ᵀ,
where (f_x, f_y) is the focal length, (c_x, c_y) is the principal-point offset, and b is the baseline;
the estimated pose comprises estimated position parameters and estimated attitude parameters, the optimized pose comprises optimized position parameters and optimized attitude parameters, and the final pose comprises final position parameters and final attitude parameters;
in step S3, obtaining the estimated pose from the IMU equations of motion and then obtaining the optimized pose with the extended Kalman filter comprises the following steps:
s31, a rotation matrix between the inertial world coordinate system and the IMU coordinate system is constructed, the matrix is differentiated to obtain its differential equation, the matrix variable is obtained by solving the differential equation, and this variable is finally converted into the attitude quaternion of the IMU;
s32, obtaining an estimated pose of the camera by using the position and the posture of the IMU through the space relation between the coordinate systems;
s33, after the estimated pose of the camera is obtained, optimizing the estimated pose by using an extended Kalman filter to obtain an optimized pose;
the S31 specifically includes the following steps:
the rotation matrix C is expressed through the direction cosines between the two frames, i.e., its entries are the pairwise dot products of the unit vectors i, j, k of the inertial world coordinate system and the unit vectors i′, j′, k′ of the IMU coordinate system; differentiating the rotation matrix C with respect to time yields its differential equation.
Solving the differential equation gives the matrix variable, which is converted into the IMU attitude quaternion. The IMU position is then estimated by integrating twice to obtain the displacement,
where the integrated quantities are the position and velocity of the IMU in the inertial world coordinate system, and g is the gravity vector in the world coordinate system;
the S32 specifically includes the following steps:
the calculation formula is as follows,
where the quantities in the formula are, in order: the estimated position parameters; the rotation from the visual coordinate system to the inertial world coordinate system; the IMU attitude; the camera position measurement noise n_p; the estimated attitude parameters; and the rotation from the IMU coordinate system to the camera coordinate system;
the S33 specifically includes the following steps:
the measurement matrix H is constructed, the estimate is then updated according to the measurement matrix H and the Kalman filtering process, and the optimized pose is obtained by computation with the corrected value;
updating the estimate according to the measurement matrix H and the Kalman filtering process specifically comprises the following steps:
s332, the innovation covariance S is computed by the formula S = H·P·Hᵀ + R,
where R is the covariance matrix of the translational and rotational measurement noise of the camera;
s333, the Kalman gain K is computed by the formula K = P·Hᵀ·S⁻¹;
the S3 further includes determining a visual scale factor using the binocular image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810651885.XA CN108648215B (en) | 2018-06-22 | 2018-06-22 | SLAM motion blur pose tracking algorithm based on IMU |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810651885.XA CN108648215B (en) | 2018-06-22 | 2018-06-22 | SLAM motion blur pose tracking algorithm based on IMU |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108648215A CN108648215A (en) | 2018-10-12 |
CN108648215B true CN108648215B (en) | 2022-04-15 |
Family
ID=63752985
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810651885.XA Active CN108648215B (en) | 2018-06-22 | 2018-06-22 | SLAM motion blur pose tracking algorithm based on IMU |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108648215B (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109682385A (en) * | 2018-11-05 | 2019-04-26 | 天津大学 | A method of instant positioning and map structuring based on ORB feature |
CN111161348B (en) * | 2018-11-08 | 2023-12-05 | 深圳市优必选科技有限公司 | Object pose estimation method, device and equipment based on monocular camera |
CN110118556A (en) * | 2019-04-12 | 2019-08-13 | 浙江工业大学 | A kind of robot localization method and device based on covariance mixing together SLAM |
CN110070577B (en) * | 2019-04-30 | 2023-04-28 | 电子科技大学 | Visual SLAM key frame and feature point selection method based on feature point distribution |
CN110232662B (en) * | 2019-05-10 | 2022-06-10 | 昆明理工大学 | Multi-information anti-interference filtering method under face pose collaborative system |
CN110782492B (en) * | 2019-10-08 | 2023-03-28 | 三星(中国)半导体有限公司 | Pose tracking method and device |
US11610330B2 (en) | 2019-10-08 | 2023-03-21 | Samsung Electronics Co., Ltd. | Method and apparatus with pose tracking |
CN111044054B (en) * | 2020-01-06 | 2023-02-03 | 哈尔滨工业大学 | Method for optimizing pose by utilizing Kalman filtering based on monocular SLAM algorithm |
CN111540016B (en) * | 2020-04-27 | 2023-11-10 | 深圳南方德尔汽车电子有限公司 | Pose calculation method and device based on image feature matching, computer equipment and storage medium |
CN113237484A (en) * | 2021-04-21 | 2021-08-10 | 四川轻化工大学 | SLAM-based camera and IMU external rotation parameter solving method |
CN113392909B (en) * | 2021-06-17 | 2022-12-27 | 深圳市睿联技术股份有限公司 | Data processing method, data processing device, terminal and readable storage medium |
CN113658260B (en) * | 2021-07-12 | 2024-07-23 | 南方科技大学 | Robot pose calculation method, system, robot and storage medium |
CN114399532A (en) * | 2022-01-06 | 2022-04-26 | 广东汇天航空航天科技有限公司 | Camera position and posture determining method and device |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105931275A (en) * | 2016-05-23 | 2016-09-07 | 北京暴风魔镜科技有限公司 | Monocular and IMU fused stable motion tracking method and device based on mobile terminal |
CN107437258A (en) * | 2016-05-27 | 2017-12-05 | 株式会社理光 | Feature extracting method, estimation method of motion state and state estimation device |
CN106446815A (en) * | 2016-09-14 | 2017-02-22 | 浙江大学 | Simultaneous positioning and map building method |
CN106940704A (en) * | 2016-11-25 | 2017-07-11 | 北京智能管家科技有限公司 | A kind of localization method and device based on grating map |
CN108036785A (en) * | 2017-11-24 | 2018-05-15 | 浙江大学 | A kind of aircraft position and orientation estimation method based on direct method and inertial navigation fusion |
Non-Patent Citations (2)
Title |
---|
ORB-SLAM: a Versatile and Accurate Monocular SLAM; Mur-Artal R et al.; IEEE Transactions on Robotics; 2015-12-31; full text *
Research on Monocular Visual SLAM Technology Based on Graph Optimization; Wang Lijia; China Master's Theses Full-text Database, Information Science and Technology; 2017-11-15; full text *
Also Published As
Publication number | Publication date |
---|---|
CN108648215A (en) | 2018-10-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108648215B (en) | SLAM motion blur pose tracking algorithm based on IMU | |
CN110125928B (en) | Binocular inertial navigation SLAM system for performing feature matching based on front and rear frames | |
CN107564061B (en) | Binocular vision mileage calculation method based on image gradient joint optimization | |
CN109307508B (en) | Panoramic inertial navigation SLAM method based on multiple key frames | |
CN109676604B (en) | Robot curved surface motion positioning method and motion positioning system thereof | |
CN109166149B (en) | Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU | |
CN111024066B (en) | Unmanned aerial vehicle vision-inertia fusion indoor positioning method | |
CN106446815B (en) | A kind of simultaneous localization and mapping method | |
CN110084832B (en) | Method, device, system, equipment and storage medium for correcting camera pose | |
CN109165680B (en) | Single-target object dictionary model improvement method in indoor scene based on visual SLAM | |
CN112304307A (en) | Positioning method and device based on multi-sensor fusion and storage medium | |
CN109993113A (en) | A kind of position and orientation estimation method based on the fusion of RGB-D and IMU information | |
CN108682027A (en) | VSLAM realization method and systems based on point, line Fusion Features | |
CN112634451A (en) | Outdoor large-scene three-dimensional mapping method integrating multiple sensors | |
CN109579825B (en) | Robot positioning system and method based on binocular vision and convolutional neural network | |
CN111307146B (en) | Virtual reality wears display device positioning system based on binocular camera and IMU | |
CN111609868A (en) | Visual inertial odometer method based on improved optical flow method | |
CN104281148A (en) | Mobile robot autonomous navigation method based on binocular stereoscopic vision | |
CN110751123B (en) | Monocular vision inertial odometer system and method | |
WO2023005457A1 (en) | Pose calculation method and apparatus, electronic device, and readable storage medium | |
CN111882602A (en) | Visual odometer implementation method based on ORB feature points and GMS matching filter | |
CN112541423A (en) | Synchronous positioning and map construction method and system | |
CN112767546A (en) | Binocular image-based visual map generation method for mobile robot | |
CN116007609A (en) | Positioning method and computing system for fusion of multispectral image and inertial navigation | |
CN112284381B (en) | Visual inertia real-time initialization alignment method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||