CN111539982B - Visual inertial navigation initialization method based on nonlinear optimization in mobile platform - Google Patents

Visual inertial navigation initialization method based on nonlinear optimization in mobile platform

Info

Publication number
CN111539982B
CN111539982B (application CN202010302564.6A)
Authority
CN
China
Prior art date
Legal status: Active
Application number
CN202010302564.6A
Other languages
Chinese (zh)
Other versions
CN111539982A (en)
Inventors
康健 (Kang Jian)
王志军 (Wang Zhijun)
冯辰 (Feng Chen)
于艺晗 (Yu Yihan)
Current Assignee
Beijing Weisheng Tyco Technology Co ltd
Original Assignee
Beijing Weisheng Tyco Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Weisheng Tyco Technology Co ltd filed Critical Beijing Weisheng Tyco Technology Co ltd
Priority to CN202010302564.6A
Publication of CN111539982A
Application granted
Publication of CN111539982B

Classifications

    • G06T 7/70 (G Physics; G06 Computing; calculating or counting; G06T Image data processing or generation, in general; G06T 7/00 Image analysis): Determining position or orientation of objects or cameras
    • G01C 21/16 (G Physics; G01 Measuring; testing; G01C Measuring distances, levels or bearings; surveying; navigation; gyroscopic instruments; photogrammetry or videogrammetry; G01C 21/00 Navigation; G01C 21/10 by using measurements of speed or acceleration; G01C 21/12 executed aboard the object being navigated; dead reckoning): by integrating acceleration or speed, i.e. inertial navigation
    • G06F 17/16 (G06F Electric digital data processing; G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions; G06F 17/10 Complex mathematical operations): Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06T 7/207 (G06T 7/20 Analysis of motion): Analysis of motion for motion estimation over a hierarchy of resolutions
    • G06T 2207/10016 (G06T 2207/00 Indexing scheme for image analysis or image enhancement; G06T 2207/10 Image acquisition modality): Video; image sequence


Abstract

The invention provides a visual inertial navigation initialization method based on nonlinear optimization in a mobile platform, which comprises the following steps: acquire a video stream with the camera of the mobile platform to obtain image frames and their common-view points, acquire an IMU data stream with the IMU of the mobile platform, and pre-integrate it to obtain the IMU pose; judge whether the IMU has moved sufficiently, and if so, start initialization; if the IMU motion is insufficient, continue acquiring the video stream and the IMU data stream; using the vector-subtraction triangle rule formed by the common-view points and a linear equation set built from N image frames and their common-view points, calculate initial values of gravity, of the distance parameter d from each point to the optical centers of the camera frames in which it is visible, and of the camera velocity; and obtain the optimized gravity, camera pose, velocity and 3D positions of the common-view points with a nonlinear tightly coupled optimization algorithm. Addressing the defects of the prior art, the method improves the accuracy of the initial estimates on the mobile platform and achieves initialization with higher positioning accuracy and higher robustness.

Description

Visual inertial navigation initialization method based on nonlinear optimization in mobile platform
Technical Field
The invention relates to the technical field of positioning, in particular to a visual inertial navigation initialization method based on nonlinear optimization in a mobile platform.
Background
In recent years, positioning technology has been a research hot spot for mobile platforms such as robots and unmanned aerial vehicles. Among the various sensor combinations, the fusion of vision and inertial navigation is the most promising and economical solution. First, the camera of the mobile platform provides information about the environment, which allows the positioning system to localize the camera and recognize places it has already visited. However, purely visual initialization fails easily under motion blur and suffers from scale uncertainty. Second, the IMU (inertial measurement unit) of the mobile platform provides information about the platform's own motion, which allows the positioning system to recover the scale information that monocular visual initialization cannot and to estimate the direction of gravity. However, low-precision inertial navigation alone yields limited initialization performance, while high-precision inertial navigation is too costly for civilian use. Combining the two yields more accurate parameter estimates, so that the positioning system can compute the camera pose and the incremental motion of the mobile platform with high precision and robustness.
Disclosure of Invention
Aiming at the defects of the prior art, the invention adopts the vector triangle rule formed by common view points to carry out initial estimation on parameters such as gravity and the like, adopts a nonlinear tight coupling optimization algorithm to realize the initialization of visual inertial navigation, and provides a visual inertial navigation initialization method based on nonlinear optimization in a mobile platform with higher precision and higher robustness.
The technical scheme adopted for solving the technical problems is as follows:
step one, acquiring a video stream by using a camera in the mobile platform to obtain image frames and their common-view points, and acquiring an IMU data stream by using an IMU in the mobile platform;
step two, pre-integrating the IMU data stream to obtain the IMU pose aligned with the image frame;
step three, judging whether the IMU moves sufficiently, and if so, starting initialization; if the IMU movement is insufficient, continuing to acquire the video stream and the IMU data stream;
step four, selecting the common-view points of any three continuous image frames and calculating the visual displacement change by using the vector-subtraction triangle rule; carrying out a least-squares solution of the linear equation set formed by N image frames and their common-view points to obtain the initial value of the gravity of the first camera frame under the world coordinate system and the initial value of the distance parameter d from each common-view point to the optical center of each camera frame in which it is visible, wherein the world coordinate system takes the first camera frame as the reference coordinate system; and calculating the initial value of the velocity of the first camera frame under the world coordinate system from the initial value of gravity and the initial value of the parameter d;
step five, deriving from the IMU pre-integration the error influence of the gyroscope zero drift on the attitude change, and estimating the initial value of the IMU gyroscope zero drift by least-squares solution of the linear equation set in step four;
step six, averaging the visual displacement changes optimized in step four to obtain the initial value of the visual displacement change; calculating the optimized IMU attitude change by using the gyroscope zero drift obtained in step five, and, since the IMU is aligned with the visual frames, thereby obtaining the initial value of the visual attitude change; calculating the initial position of each common-view point from the distance parameter d calculated in step four; obtaining the gravity direction vector by using the property that the modulus of gravity is constant; selecting two continuous images, and at the same time selecting two discontinuous images whose number of common-view points is larger than a threshold mu and whose translation lies in a set threshold interval; performing re-projection error calculation on the selected images and IMU residual calculation on the IMUs corresponding to the image frames, and obtaining the dimensionless errors corresponding to the re-projection error and the IMU residual respectively by removing their dimensions; and summing the two dimensionless errors and minimizing the sum of errors by using the LM (Levenberg-Marquardt) algorithm to obtain the optimized gravity of the first camera frame under the world coordinate system, the pose and velocity of the camera under the world coordinate system, the 3D position of each common-view point, and the gyroscope zero drift and accelerometer zero drift of the IMU.
Judging whether the IMU moves sufficiently or not, specifically whether the standard deviation of the acceleration of the acquired IMU is larger than a threshold value alpha or not; if the standard deviation of the acceleration is greater than the threshold value alpha, the IMU is considered to be fully moved; if the standard deviation of the acceleration is less than or equal to the threshold value α, the IMU is deemed to be insufficiently moved.
The fourth step specifically comprises:
4.1) From the 20 image frames to be initialized, select any three continuous image frames c1, c2, c3 and take one of their common-view points y_i. Let d1, d2, d3 be the distance parameters from this point to the optical centers of c1, c2, c3, and let u1, u2, u3 be the unit direction vectors pointing from the optical centers of c1, c2, c3 toward the point. By the vector-subtraction triangle rule, the visual displacement change between c1 and c2 is
s_c1c2 = d1*u1 - R_c1c2*(d2*u2),
where R_c1c2 is the rotation matrix from image frame c2 to c1. Equating this visual displacement change with the IMU displacement change over the same period of time yields a linear equation (the source gives the formula only as an image), wherein R_IC, P_IC are respectively the extrinsic rotation and translation between the IMU and the camera, R_b1b2 is the rotation matrix from IMU frame b2 (aligned with image frame c2) to IMU frame b1 (aligned with image frame c1), Δt23 is the time difference between image frames c2 and c3, g_w is the gravity of the first camera frame under the world coordinate system, and Δp12, Δv12 are respectively the displacement change and velocity change between b1 and b2 obtained by IMU pre-integration;
4.2) Combining the linear equations formed by the 20 image frames and their common-view points and carrying out a least-squares solution yields the initial value of the gravity of the first camera frame under the world coordinate system and the initial value of the distance parameter d from each common-view point to the optical center of each camera frame in which it is visible, the world coordinate system taking the first camera frame as the reference coordinate system; the initial value of the velocity of the first camera frame under the world coordinate system is then calculated from the initial value of gravity and the initial value of the parameter d.
The sixth step specifically comprises the following steps:
6.1) Averaging the visual displacement changes optimized in step four to obtain the initial value of the visual displacement change, and calculating the optimized IMU attitude change by using the gyroscope zero drift obtained in step five; since the IMU is aligned with the visual frames, the initial value of the visual attitude change is thereby obtained;
6.2) Calculating the initial position of each common-view point from the distance parameter d calculated in step four, and obtaining the gravity direction vector by using the property that the modulus of gravity is constant;
6.3) Selecting two continuous images, and at the same time selecting two discontinuous images whose number of common-view points is larger than a threshold mu and whose translation lies in a set threshold interval; performing re-projection error calculation on the selected images and IMU residual calculation on the IMUs corresponding to the image frames; obtaining the dimensionless errors corresponding to the re-projection error and the IMU residual respectively by removing their dimensions; and summing the two dimensionless errors and minimizing the sum of errors by using the LM (Levenberg-Marquardt) algorithm to obtain the optimized gravity of the first camera frame under the world coordinate system, the pose and velocity of the camera under the world coordinate system, the 3D position of each common-view point, and the gyroscope zero drift and accelerometer zero drift of the IMU.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the invention adopts the vector triangle rule formed by common view points to carry out initial estimation on parameters such as gravity, and compared with the visual inertial navigation initialization method in the existing mobile platform, the initial estimation accuracy of the invention is higher; meanwhile, the initialization of visual inertial navigation is realized by adopting a nonlinear tight coupling optimization algorithm, and the initialization effect of the mobile platform with higher positioning precision and higher robustness is achieved.
Drawings
FIG. 1 is a flow chart of a method of visual inertial navigation initialization based on nonlinear optimization in a mobile platform.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below in connection with specific embodiments of the present invention. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In a specific embodiment of the present invention, as shown in fig. 1, the present invention provides a visual inertial navigation initialization method based on nonlinear optimization in a mobile platform, which includes steps one to six.
And firstly, acquiring video streams by using a camera in the mobile platform to obtain image frames and common viewpoints thereof, and acquiring IMU data streams by using an IMU in the mobile platform.
And step two, pre-integrating the IMU data stream to obtain the IMU pose aligned with the image frame.
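The pre-integration of step two can be sketched as follows. This is a minimal Euler-integration sketch assuming bias-free body-frame measurements; the function names and sample values are illustrative, not the patent's exact pre-integration.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(phi):
    """Rodrigues formula: rotation matrix from a rotation vector."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    a = phi / theta
    K = skew(a)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dt):
    """Euler pre-integration of IMU samples between two image frames.
    gyro, accel: (N, 3) body-frame measurements (biases assumed removed);
    returns (dR, dv, dp) relative to the first sample's body frame."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt ** 2
        dv = dv + (dR @ a) * dt
        dR = dR @ so3_exp(w * dt)
    return dR, dv, dp
```

For a constant body-frame acceleration and no rotation over one second, the integrated velocity and displacement reduce to the familiar a*t and a*t^2/2.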
Judging whether the IMU moves sufficiently, and if so, starting initialization; if the IMU movement is insufficient, continuing to collect the video stream and the IMU data stream.
Judging whether the IMU moves sufficiently or not, specifically whether the standard deviation of the acceleration of the acquired IMU is larger than a threshold value alpha or not; if the standard deviation of the acceleration is greater than the threshold value alpha, the IMU is considered to be fully moved; if the standard deviation of the acceleration is less than or equal to the threshold value α, the IMU is deemed to be insufficiently moved.
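The sufficient-motion test can be sketched as follows; the numerical value of the threshold alpha and the use of acceleration magnitudes are illustrative assumptions, since the patent does not fix them.

```python
import numpy as np

ALPHA = 0.25  # threshold on acceleration standard deviation (m/s^2); illustrative value

def imu_moved_sufficiently(accel, alpha=ALPHA):
    """accel: (N, 3) accelerometer samples. The IMU is considered to have
    moved sufficiently when the standard deviation of the acceleration
    magnitude exceeds the threshold alpha."""
    mags = np.linalg.norm(accel, axis=1)
    return bool(np.std(mags) > alpha)
```

A stationary IMU reads a nearly constant magnitude near gravity, so its standard deviation stays below any reasonable alpha and initialization is postponed.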
Step four, selecting the common-view points of any three continuous image frames and calculating the visual displacement change by using the vector-subtraction triangle rule; by equating the visual displacement change with the IMU displacement change over the same period of time, carrying out a least-squares solution of the linear equation set formed by N image frames and their common-view points to obtain the initial value of the gravity of the first camera frame under the world coordinate system and the initial value of the distance parameter d from each common-view point to the optical center of each camera frame in which it is visible, wherein the world coordinate system takes the first camera frame as the reference coordinate system; and calculating the initial value of the velocity of the first camera frame under the world coordinate system from the initial value of gravity and the initial value of the parameter d.
The fourth step specifically comprises:
4.1) From the 20 image frames to be initialized, select any three continuous image frames c1, c2, c3 and take one of their common-view points y_i. Let d1, d2, d3 be the distance parameters from this point to the optical centers of c1, c2, c3, and let u1, u2, u3 be the unit direction vectors pointing from the optical centers of c1, c2, c3 toward the point. By the vector-subtraction triangle rule, the visual displacement change between c1 and c2 is
s_c1c2 = d1*u1 - R_c1c2*(d2*u2),
where R_c1c2 is the rotation matrix from image frame c2 to c1. Equating this visual displacement change with the IMU displacement change over the same period of time yields a linear equation (the source gives the formula only as an image), wherein R_IC, P_IC are respectively the extrinsic rotation and translation between the IMU and the camera, R_b1b2 is the rotation matrix from IMU frame b2 (aligned with image frame c2) to IMU frame b1 (aligned with image frame c1), Δt23 is the time difference between image frames c2 and c3, g_w is the gravity of the first camera frame under the world coordinate system, and Δp12, Δv12 are respectively the displacement change and velocity change between b1 and b2 obtained by IMU pre-integration;
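The vector-subtraction triangle rule of step 4.1 can be sketched as follows, assuming the displacement is expressed in the c1 frame; the helper name is illustrative.

```python
import numpy as np

def visual_displacement(d1, u1, d2, u2, R_c1c2):
    """Triangle rule: the common-view point lies at depth d1 along unit
    ray u1 in frame c1 and at depth d2 along unit ray u2 in frame c2;
    subtracting the two vectors (with u2 rotated into c1 by R_c1c2)
    leaves the displacement s_c1c2 of the two optical centers,
    expressed in the c1 frame."""
    return d1 * np.asarray(u1) - R_c1c2 @ (d2 * np.asarray(u2))
```

With a known 3D point and an identity relative rotation, the recovered displacement equals the true translation between the two optical centers.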
4.2) Combining the linear equations formed by the 20 image frames and their common-view points and carrying out a least-squares solution yields the initial value of the gravity of the first camera frame under the world coordinate system and the initial value of the distance parameter d from each common-view point to the optical center of each camera frame in which it is visible, the world coordinate system taking the first camera frame as the reference coordinate system; the initial value of the velocity of the first camera frame under the world coordinate system is then calculated from the initial value of gravity and the initial value of the parameter d.
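The least-squares solve of step 4.2 can be sketched as follows. The coefficient matrix here is random and merely stands in for the stacked constraints (which the source gives only as images), so only the solve step itself is shown; the partition into gravity and depth unknowns is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stacked system: each common-view observation contributes
# rows linear in the unknowns [g_w (3 components), d_1 ... d_M].
M = 6                                        # number of depth parameters d_i
x_true = np.concatenate(([0.0, 0.0, -9.81],  # gravity in the world frame
                         rng.uniform(1.0, 10.0, M)))
A = rng.normal(size=(40, 3 + M))             # 40 stacked constraint rows (illustrative)
b = A @ x_true

# Least-squares solution of the overdetermined linear system A x = b.
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
g_init, d_init = x_hat[:3], x_hat[3:]
```

Because the system is overdetermined and consistent in this toy setup, the solve recovers gravity and the depths exactly; with real, noisy constraints it returns the minimum-residual estimate instead.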
Step five, estimating the initial value of the IMU gyroscope zero drift by carrying out a least-squares solution of the linear equation set in step four.
Step six, averaging the visual displacement changes optimized in step four to obtain the initial value of the visual displacement change; calculating the optimized IMU attitude change by using the gyroscope zero drift obtained in step five, and, since the IMU is aligned with the visual frames, thereby obtaining the initial value of the visual attitude change; calculating the initial position of each common-view point from the distance parameter d calculated in step four; obtaining the gravity direction vector by using the property that the modulus of gravity is constant; selecting two continuous images, and at the same time selecting two discontinuous images whose number of common-view points is larger than a threshold mu and whose translation lies in a set threshold interval; performing re-projection error calculation on the selected images and IMU residual calculation on the IMUs corresponding to the image frames, and obtaining the dimensionless errors corresponding to the re-projection error and the IMU residual respectively by removing their dimensions; and summing the two dimensionless errors and minimizing the sum of errors by using the LM (Levenberg-Marquardt) algorithm to obtain the optimized gravity of the first camera frame under the world coordinate system, the pose and velocity of the camera under the world coordinate system, the 3D position of each common-view point, and the gyroscope zero drift and accelerometer zero drift of the IMU.
The sixth step specifically comprises the following steps:
6.1) Averaging the visual displacement changes optimized in step four to obtain the initial value of the visual displacement change, and calculating the optimized IMU attitude change by using the gyroscope zero drift obtained in step five; since the IMU is aligned with the visual frames, the initial value of the visual attitude change is thereby obtained;
6.2) Calculating the initial position of each common-view point from the distance parameter d calculated in step four, and obtaining the gravity direction vector by using the property that the modulus of gravity is constant;
6.3) Selecting two continuous images, and at the same time selecting two discontinuous images whose number of common-view points is larger than a threshold mu and whose translation lies in a set threshold interval; performing re-projection error calculation on the selected images and IMU residual calculation on the IMUs corresponding to the image frames; obtaining the dimensionless errors corresponding to the re-projection error and the IMU residual respectively by removing their dimensions; and summing the two dimensionless errors and minimizing the sum of errors by using the LM (Levenberg-Marquardt) algorithm to obtain the gravity of the first camera frame under the world coordinate system, the pose and velocity of the camera under the world coordinate system, the 3D position of each common-view point, and the gyroscope zero drift and accelerometer zero drift of the IMU.
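The use of the fixed gravity modulus in step 6.2 can be sketched as follows; the constant 9.81 m/s^2 and the function name are illustrative assumptions.

```python
import numpy as np

G_NORM = 9.81  # fixed modulus of gravity (m/s^2); illustrative value

def refine_gravity(g_est):
    """The modulus of gravity is known and constant, so only its direction
    remains free: rescale the linear-solve estimate onto the sphere of
    radius G_NORM, which yields the gravity direction vector."""
    g_est = np.asarray(g_est, dtype=float)
    return G_NORM * g_est / np.linalg.norm(g_est)
```

This reduces the three gravity unknowns to two direction parameters, which is what makes the subsequent tightly coupled optimization better conditioned.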
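The LM minimization of the summed dimensionless errors in step 6.3 can be sketched with a toy two-parameter problem. The residual model below is purely illustrative: it only demonstrates dividing two residual groups of very different physical scales by their noise scales (removing their dimensions) before a minimal Levenberg-Marquardt solve written with NumPy.

```python
import numpy as np

SIGMA_PIX, SIGMA_IMU = 1.0, 0.05   # illustrative noise scales (pixels, m/s)

def residuals(x, obs_pix, obs_imu):
    """Stack two de-dimensionalized residual groups into one vector."""
    r_pix = (obs_pix - x[0]) / SIGMA_PIX   # dimensionless reprojection-like term
    r_imu = (obs_imu - x[1]) / SIGMA_IMU   # dimensionless IMU-like term
    return np.concatenate([r_pix, r_imu])

def numeric_jacobian(f, x, eps=1e-6):
    """Forward-difference Jacobian of residual vector f at x."""
    r0 = f(x)
    J = np.zeros((r0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - r0) / eps
    return J

def levenberg_marquardt(f, x0, iters=50, lam=1e-3):
    """Minimal LM loop: damped normal equations with adaptive damping."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = f(x)
        J = numeric_jacobian(f, x)
        H = J.T @ J + lam * np.eye(x.size)
        step = np.linalg.solve(H, -J.T @ r)
        x_new = x + step
        if np.sum(f(x_new) ** 2) < np.sum(r ** 2):
            x, lam = x_new, lam * 0.5      # accept step, relax damping
        else:
            lam *= 10.0                    # reject step, increase damping
        if np.linalg.norm(step) < 1e-10:
            break
    return x

obs_pix = np.array([240.1, 239.9, 240.0])  # toy "reprojection" observations
obs_imu = np.array([0.52, 0.48])           # toy "IMU" observations
x_opt = levenberg_marquardt(lambda x: residuals(x, obs_pix, obs_imu), [0.0, 0.0])
```

De-dimensionalizing first keeps one residual group from dominating the sum purely because of its units, which is the point of the patent's dimension-removal step.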
The technical scheme of the present invention is explained in detail below with reference to the accompanying drawings, for the convenience of those skilled in the art.
Fig. 1 is a flow chart of the visual inertial navigation initialization method based on nonlinear optimization in a mobile platform. First, a video stream is acquired with a camera in the mobile platform to obtain image frames and their common-view points, an IMU data stream is acquired with the IMU in the mobile platform, and pre-integration is performed to obtain the IMU pose aligned with the image frames. It is then judged whether the IMU has moved sufficiently: if so, initialization starts; if not, acquisition of the video stream and the IMU data stream continues. Next, the visual displacement change is calculated with the vector-subtraction triangle rule formed by the common-view points, and the initial values of the gravity of the first camera frame under the world coordinate system, of the distance parameter d from each common-view point to the optical centers of the camera frames in which it is visible, and of the velocity of the first camera frame under the world coordinate system are calculated, where the world coordinate system takes the first camera frame as the reference coordinate system. The gyroscope zero drift is then calculated. Finally, a nonlinear tightly coupled optimization algorithm yields the optimized gravity of the first camera frame under the world coordinate system, the pose and velocity of the camera under the world coordinate system, the 3D position of each common-view point, and the gyroscope zero drift and accelerometer zero drift of the IMU.
The pure vision positioning methods in existing mobile platforms suffer from scale uncertainty and are easily affected by environmental conditions, while the low-precision inertial navigation positioning methods in existing mobile platforms have limited performance. Against these defects, the invention realizes visual inertial navigation initialization with a nonlinear tightly coupled optimization algorithm, achieving mobile-platform initialization with higher positioning precision and higher robustness. Against the defect of inaccurate initial estimation in existing mobile-platform methods, the invention performs initial estimation of parameters such as gravity with the vector triangle rule formed by the common-view points, achieving higher initial estimation accuracy on the mobile platform.

Claims (4)

1. A visual inertial navigation initialization method based on nonlinear optimization in a mobile platform, comprising the steps of:
step one, acquiring a video stream by using a camera in a mobile platform to obtain image frames and their common-view points, and acquiring an IMU data stream by using an IMU in the mobile platform;
step two, pre-integrating the IMU data stream to obtain the IMU pose aligned with the image frame;
step three, judging whether the IMU moves sufficiently, and if so, starting initialization; if the IMU movement is insufficient, continuing to acquire the video stream and the IMU data stream;
step four, selecting the common-view points of any three continuous image frames and calculating the visual displacement change by using the vector-subtraction triangle rule; by equating the visual displacement change with the IMU displacement change over the same period of time, carrying out a least-squares solution of the linear equation set formed by N image frames and their common-view points to obtain the initial value of the gravity of the first camera frame under the world coordinate system and the initial value of the distance parameter d from each common-view point to the optical center of each camera frame in which it is visible, wherein the world coordinate system takes the first camera frame as the reference coordinate system; and calculating the initial value of the velocity of the first camera frame under the world coordinate system from the initial value of gravity and the initial value of the parameter d;
step five, deriving from the IMU pre-integration the error influence of the gyroscope zero drift on the attitude change, and estimating the initial value of the IMU gyroscope zero drift by least-squares solution of the linear equation set in step four;
step six, averaging the visual displacement changes optimized in step four to obtain the initial value of the visual displacement change; calculating the optimized IMU attitude change by using the gyroscope zero drift obtained in step five, and, since the IMU is aligned with the visual frames, thereby obtaining the initial value of the visual attitude change; calculating the initial position of each common-view point from the distance parameter d calculated in step four; obtaining the gravity direction vector by using the property that the modulus of gravity is constant; selecting two continuous images, and at the same time selecting two discontinuous images whose number of common-view points is larger than a threshold mu and whose translation lies in a set threshold interval; performing re-projection error calculation on the selected images and IMU residual calculation on the IMUs corresponding to the image frames, and obtaining the dimensionless errors corresponding to the re-projection error and the IMU residual respectively by removing their dimensions; and summing the two dimensionless errors and minimizing the sum of errors by using the LM algorithm to obtain the optimized gravity of the first camera frame under the world coordinate system, the pose and velocity of the camera under the world coordinate system, the 3D position of each common-view point, and the gyroscope zero drift and accelerometer zero drift of the IMU.
2. The method for initializing visual inertial navigation based on nonlinear optimization according to claim 1, wherein in the third step, it is determined whether the IMU is sufficiently moved, specifically, whether the standard deviation of the acceleration of the acquired IMU is greater than a threshold α; if the standard deviation of the acceleration is greater than the threshold value alpha, the IMU is considered to be fully moved; if the standard deviation of the acceleration is less than or equal to the threshold value α, the IMU is deemed to be insufficiently moved.
3. The method for initializing visual inertial navigation based on nonlinear optimization in a mobile platform according to claim 1, wherein the step four specifically comprises: 4.1) from the 20 image frames to be initialized, selecting any three continuous image frames c1, c2, c3 and taking one of their common-view points y_i, with distance parameters d1, d2, d3 from this point to the optical centers of c1, c2, c3 and unit direction vectors u1, u2, u3 pointing from the optical centers of c1, c2, c3 toward the point; obtaining by the vector-subtraction triangle rule the visual displacement change between c1 and c2, s_c1c2 = d1*u1 - R_c1c2*(d2*u2), where R_c1c2 is the rotation matrix from image frame c2 to c1; equating this visual displacement change with the IMU displacement change over the same period of time to obtain a linear equation (the source gives the formula only as an image), wherein R_IC, P_IC are respectively the extrinsic rotation and translation between the IMU and the camera, R_b1b2 is the rotation matrix from IMU frame b2 (aligned with image frame c2) to IMU frame b1 (aligned with image frame c1), Δt23 is the time difference between image frames c2 and c3, g_w is the gravity of the first camera frame under the world coordinate system, and Δp12, Δv12 are respectively the displacement change and velocity change between b1 and b2 obtained by IMU pre-integration; 4.2) combining the linear equations formed by the 20 image frames and their common-view points and carrying out a least-squares solution to obtain the initial value of the gravity of the first camera frame under the world coordinate system and the initial value of the distance parameter d from each common-view point to the optical center of each camera frame in which it is visible, the world coordinate system taking the first camera frame as the reference coordinate system; and calculating the initial value of the velocity of the first camera frame under the world coordinate system from the initial value of gravity and the initial value of the parameter d.
4. The method for initializing visual inertial navigation based on nonlinear optimization in a mobile platform according to claim 1, wherein said step six specifically comprises:
6.1 Averaging the optimized visual displacement changes in the fourth step to obtain an initial value of the visual displacement changes, and calculating to obtain the optimized IMU posture changes by using the gyroscope zero drift obtained in the fifth step, wherein the initial value of the visual posture changes can be obtained due to the alignment of the IMU and the visual frame;
6.2 Calculating the initial position of each common view point through the distance parameter d calculated in the step four, and obtaining a gravity direction vector by utilizing the characteristic that the gravity modulus is unchanged;
6.3 Selecting two continuous images, simultaneously selecting two discontinuous images with the common view point number larger than a threshold mu and translating the two discontinuous images belonging to a set threshold interval, carrying out re-projection error calculation on the selected images, carrying out IMU residual error calculation on IMU corresponding to the image frames, respectively obtaining dimensionless errors corresponding to the re-projection errors and the IMU residual errors through dimensionality removal, summing the obtained two dimensionless errors, and minimizing the sum of the errors by using an LM algorithm to obtain the optimized gravity of the first frame of the camera under the world coordinate system, the pose and the speed of the camera under the world coordinate system, the 3D position of each common view point, the gyroscope null shift and the accelerometer null shift of the IMU.
CN202010302564.6A 2020-04-17 2020-04-17 Visual inertial navigation initialization method based on nonlinear optimization in mobile platform Active CN111539982B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010302564.6A CN111539982B (en) 2020-04-17 2020-04-17 Visual inertial navigation initialization method based on nonlinear optimization in mobile platform


Publications (2)

Publication Number Publication Date
CN111539982A CN111539982A (en) 2020-08-14
CN111539982B true CN111539982B (en) 2023-09-15

Family

ID=71979963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010302564.6A Active CN111539982B (en) 2020-04-17 2020-04-17 Visual inertial navigation initialization method based on nonlinear optimization in mobile platform

Country Status (1)

Country Link
CN (1) CN111539982B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114199275A (en) * 2020-09-18 2022-03-18 阿里巴巴集团控股有限公司 Parameter determination method and device for sensor
WO2022061799A1 (en) * 2020-09-27 2022-03-31 深圳市大疆创新科技有限公司 Pose estimation method and device

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016187758A1 (en) * 2015-05-23 2016-12-01 SZ DJI Technology Co., Ltd. Sensor fusion using inertial and image sensors
CN107392964A (en) * 2017-07-07 2017-11-24 武汉大学 The indoor SLAM methods combined based on indoor characteristic point and structure lines
CN107869989A (en) * 2017-11-06 2018-04-03 东北大学 A kind of localization method and system of the fusion of view-based access control model inertial navigation information
CN109029433A (en) * 2018-06-28 2018-12-18 东南大学 Join outside the calibration of view-based access control model and inertial navigation fusion SLAM on a kind of mobile platform and the method for timing
CN109147058A (en) * 2018-08-31 2019-01-04 腾讯科技(深圳)有限公司 Initial method and device and storage medium for the fusion of vision inertial navigation information
CN109307508A (en) * 2018-08-29 2019-02-05 中国科学院合肥物质科学研究院 A kind of panorama inertial navigation SLAM method based on more key frames
CN109993113A (en) * 2019-03-29 2019-07-09 东北大学 A kind of position and orientation estimation method based on the fusion of RGB-D and IMU information
CN110044354A (en) * 2019-03-28 2019-07-23 东南大学 A kind of binocular vision indoor positioning and build drawing method and device
CN110084832A (en) * 2019-04-25 2019-08-02 亮风台(上海)信息科技有限公司 Correcting method, device, system, equipment and the storage medium of camera pose
CN110125928A (en) * 2019-03-27 2019-08-16 浙江工业大学 A kind of binocular inertial navigation SLAM system carrying out characteristic matching based on before and after frames
WO2019157925A1 (en) * 2018-02-13 2019-08-22 视辰信息科技(上海)有限公司 Visual-inertial odometry implementation method and system
WO2019169540A1 (en) * 2018-03-06 2019-09-12 斯坦德机器人(深圳)有限公司 Method for tightly-coupling visual slam, terminal and computer readable storage medium
CN110599545A (en) * 2019-09-06 2019-12-20 电子科技大学中山学院 Feature-based dense map construction system
CN110702107A (en) * 2019-10-22 2020-01-17 北京维盛泰科科技有限公司 Monocular vision inertial combination positioning navigation method
CN110726406A (en) * 2019-06-03 2020-01-24 北京建筑大学 Improved nonlinear optimization monocular inertial navigation SLAM method
CN113011231A (en) * 2019-12-20 2021-06-22 舜宇光学(浙江)研究院有限公司 Classified sliding window method, SLAM positioning method, system and electronic equipment thereof

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140323148A1 (en) * 2013-04-30 2014-10-30 Qualcomm Incorporated Wide area localization from slam maps
WO2014179745A1 (en) * 2013-05-02 2014-11-06 Qualcomm Incorporated Methods for facilitating computer vision application initialization
SG10202110833PA (en) * 2017-03-29 2021-11-29 Agency Science Tech & Res Real time robust localization via visual inertial odometry
CN110657803B (en) * 2018-06-28 2021-10-29 深圳市优必选科技有限公司 Robot positioning method, device and storage device


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
T. Liu et al. High altitude monocular visual-inertial state estimation: Initialization and sensor fusion. IEEE International Conference on Robotics and Automation, 2017, full text. *
Sheng Miao. Research on SLAM algorithms based on stereo visual-inertial navigation. China Master's Theses Full-text Database (Electronic Journals), Information Science and Technology, 2020, I140-446. *
Chen Chang et al. Research progress of vision-based simultaneous localization and mapping. Application Research of Computers, 2017, (03), pp. 7-13. *

Also Published As

Publication number Publication date
CN111539982A (en) 2020-08-14

Similar Documents

Publication Publication Date Title
CN111024066B (en) Unmanned aerial vehicle vision-inertia fusion indoor positioning method
CN109540126B (en) Inertial vision integrated navigation method based on optical flow method
CN111811506B (en) Visual/inertial odometer combined navigation method, electronic equipment and storage medium
US8761439B1 (en) Method and apparatus for generating three-dimensional pose using monocular visual sensor and inertial measurement unit
CN111795686B (en) Mobile robot positioning and mapping method
Heo et al. EKF-based visual inertial navigation using sliding window nonlinear optimization
CN110044354A (en) A kind of binocular vision indoor positioning and build drawing method and device
CN107909614B (en) Positioning method of inspection robot in GPS failure environment
CN110207693B (en) Robust stereoscopic vision inertial pre-integration SLAM method
CN110533719B (en) Augmented reality positioning method and device based on environment visual feature point identification technology
CN108981693A (en) VIO fast joint initial method based on monocular camera
CN111539982B (en) Visual inertial navigation initialization method based on nonlinear optimization in mobile platform
CN111609868A (en) Visual inertial odometer method based on improved optical flow method
CN115371665B (en) Mobile robot positioning method based on depth camera and inertial fusion
CN103900473A (en) Intelligent mobile device six-degree-of-freedom fused pose estimation method based on camera and gravity inductor
Tang et al. LE-VINS: A robust solid-state-LiDAR-enhanced visual-inertial navigation system for low-speed robots
CN114690229A (en) GPS-fused mobile robot visual inertial navigation method
CN113345032B (en) Initialization map building method and system based on wide-angle camera large distortion map
CN112284381A (en) Visual inertia real-time initialization alignment method and system
CN114440877B (en) Asynchronous multi-camera visual inertial odometer positioning method
CN114543786B (en) Wall climbing robot positioning method based on visual inertial odometer
CN114964276A (en) Dynamic vision SLAM method fusing inertial navigation
CN109737950B (en) Underwater robot positioning system and method based on VIO and dynamic model
Chen et al. The performance analysis of stereo visual odometry assisted low-cost INS/GPS integration system
CN110864685A (en) Vehicle monocular vision wheeled type odometer positioning method based on loose coupling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant