CN111340851A - SLAM method based on binocular vision and IMU fusion - Google Patents

SLAM method based on binocular vision and IMU fusion

Info

Publication number
CN111340851A
CN111340851A (application CN202010422434.6A)
Authority
CN
China
Prior art keywords: formula, IMU, coordinate system, frame, camera
Legal status: Pending
Application number
CN202010422434.6A
Other languages
Chinese (zh)
Inventor
刘继廷
Other inventors have requested that their names not be disclosed
Current Assignee
Shenzhen Lyushi Intelligent Technology Co ltd
Beijing Greenvalley Technology Co ltd
Original Assignee
Shenzhen Lyushi Intelligent Technology Co ltd
Beijing Greenvalley Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Lyushi Intelligent Technology Co ltd, Beijing Greenvalley Technology Co ltd filed Critical Shenzhen Lyushi Intelligent Technology Co ltd
Priority to CN202010422434.6A
Publication of CN111340851A

Classifications

    • G06T 7/254 Image analysis: analysis of motion involving subtraction of images
    • G06F 18/22 Pattern recognition: matching criteria, e.g. proximity measures
    • G06T 7/285 Image analysis: analysis of motion using a sequence of stereo image pairs
    • G06T 7/73 Image analysis: determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75 Image analysis: determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T 7/85 Image analysis: stereo camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the invention discloses a SLAM method based on the fusion of binocular vision and an IMU, comprising the following steps: initializing the IMU according to the rigid constraint between the binocular camera and the IMU, specifically initializing the gyroscope (angular-velocity) bias of the IMU and initializing the velocity and gravity of the IMU; and constructing an objective function containing the IMU residual and the camera reprojection error. Compared with the prior art, the technical scheme of the invention realizes the fusion of binocular vision and the IMU while increasing the robustness of the system.

Description

SLAM method based on binocular vision and IMU fusion
Technical Field
The invention relates to the technical field of navigation and surveying, and in particular to a SLAM method based on the fusion of binocular vision and an IMU.
Background
Cameras provide rich image information and are widely used in mobile-robot SLAM, i.e. simultaneous localization and mapping.

When a mobile robot moves too fast, the camera suffers from motion blur, or the overlapping area between frames becomes too small for feature matching. Compared with a camera, an inertial measurement unit (IMU) can measure angular velocity and acceleration and, for short-term fast motion, provides good motion estimates and measurements. On the other hand, an IMU drifts over time, whereas camera data is essentially drift-free.

To solve these problems, researchers have proposed an effective solution in which visual and IMU information are fused: the IMU provides a better solution for fast motion, while the camera can correct the drift of the IMU under slow motion. How to realize the fusion of vision and the IMU is therefore an important question.

The prior art mainly has the following defects. 1. Fusion by a loose-coupling method: the IMU and the camera each perform their own motion estimation, and the pose-estimation results are then fused. This method can effectively improve the robustness and accuracy of the system with a small amount of calculation, but it does not consider the internal relation between the information of the two sensors (i.e. the IMU sensor and the visual image sensor) and lacks effective constraint and compensation. 2. Fusion by a tight-coupling method: the states of the IMU and the camera are merged together, a motion equation and a state equation are jointly constructed, and state estimation is then performed. This method makes full use of the sensor information and improves the robustness and accuracy of the system, but the amount of calculation is large.
Disclosure of Invention
Aiming at the above problems, the invention provides a SLAM method based on the fusion of a binocular camera and an IMU.

The main technical purpose of the invention is to solve the pose of the robot by constructing the IMU residual and visual reprojection-error equations and applying an optimization method, and to construct a map of the surrounding environment at the same time. Specifically, the IMU is initialized and the IMU residual and visual reprojection-error equations are constructed; the final purpose is to solve the three-dimensional information of the feature points while obtaining the pose information and the extrinsic parameters between the camera and the IMU.

In view of this, the embodiment of the present invention provides a SLAM method based on the fusion of binocular vision and an IMU.
The invention provides a SLAM method based on binocular vision and IMU fusion, which comprises the following operation steps:
initializing the IMU according to rigid constraint between the binocular camera and the IMU, specifically comprising initializing the bias of a gyroscope of the IMU and initializing the speed and gravity of the IMU;
an objective function is constructed that contains the IMU residual and the camera reprojection error.
Preferably, as one possible embodiment, initializing the bias of the gyroscope of the IMU based on the rigid constraint condition between the binocular camera and the IMU specifically comprises the following operation steps:

The initial calculation formula is as follows:

$$\left(q^{c_0}_{j}\right)^{-1} \otimes q^{c_0}_{i} \otimes \gamma_{ij} \approx \begin{bmatrix} 1 \\ \mathbf{0} \end{bmatrix}; \quad \text{Formula 1}$$

wherein $q^{c_0}_{j}$ denotes the relative rotation of the j-th frame picture in the $C_0$ coordinate system, and $q^{c_0}_{i}$ the relative rotation of the i-th frame picture in the $C_0$ coordinate system; the generalized multiplication of the second term $q^{c_0}_{i}$ by the (inverted) first term $\left(q^{c_0}_{j}\right)^{-1}$ yields the relative rotation of the camera from the i-th frame to the j-th frame; $\gamma_{ij}$ is the relative rotation of the gyroscope from the j-th time to the i-th time; $C_0$ is the reference coordinate system.

The calculation result obtained from this initial calculation formula is the initial bias of the gyroscope; the specific implementation further comprises the following calculation steps.

A first-order approximate calculation is performed on $\gamma_{ij}$, the relative rotation of the gyroscope from the j-th time to the i-th time, to obtain Formula 2:

$$\gamma_{ij} \approx \hat{\gamma}_{ij} \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2}\, J^{\gamma}_{b_\omega}\, \delta b_\omega \end{bmatrix}; \quad \text{Formula 2}$$

Formula 2 is the first-order linear approximation of $\gamma_{ij}$. Substituting Formula 2 into Formula 1, and imposing the constraint condition that the difference between the second term $q^{c_0}_{i}$ and the first term $q^{c_0}_{j}$ of Formula 1 is approximately zero, the initial bias of the gyroscope is obtained by solving. Formula 2 mainly explains $\gamma_{ij}$: $\gamma_{ij}$ is the relative rotation of the IMU from the j-th time to the i-th time; $J^{\gamma}_{b_\omega}$ is the derivative of the IMU angle-change function with respect to the angular-velocity bias; $\hat{\gamma}_{ij}$ is the angle change obtained from the measured angular velocity of the IMU; the initial bias of the angular velocity of the IMU is obtained by minimizing a least-squares function.
Preferably, as one possible embodiment, initializing the velocity and gravity of the IMU based on the rigid constraint condition between the binocular camera and the IMU specifically comprises the following operation steps:

Set the target solution values for the initialized velocity and gravitational acceleration, i.e. set

$$\mathcal{X}_I = \left[\, v_0,\; v_1,\; \ldots,\; v_n,\; g^{c_0} \,\right]; \quad \text{Formula 3}$$

as the target solution value, expressed as above. In Formula 3, $\mathcal{X}_I$ is the target solution value, where $v_i$ is the velocity of the i-th frame picture and $v_k$ the velocity of the k-th frame picture; since a binocular camera is used, the scale factor is 1; $g^{c_0}$ denotes the gravitational acceleration in the $C_0$ coordinate system, $C_0$ being the reference coordinate system. The factors inside the brackets of $\mathcal{X}_I$ are the initialized velocities and the gravitational acceleration.

List the displacement-change formula $\alpha_{ij}$ from the j-th frame to the i-th frame:

$$\alpha_{ij} = R^{b_i}_{c_0}\left( p^{c_0}_{b_j} - p^{c_0}_{b_i} - v_i\,\Delta t_{ij} + \tfrac{1}{2}\, g^{c_0}\, \Delta t_{ij}^{2} \right); \quad \text{Formula 4}$$

and list the velocity-change formula $\beta_{ij}$ from the j-th frame to the i-th frame:

$$\beta_{ij} = R^{b_i}_{c_0}\left( v_j + g^{c_0}\,\Delta t_{ij} - v_i \right); \quad \text{Formula 5}$$

In Formula 4, $\alpha_{ij}$ is the displacement change from the j-th frame to the i-th frame; $p^{c_0}_{b_j}$ denotes the position of the j-th time in the $C_0$ coordinate system; $p^{c_0}_{b_i}$ denotes the position of the i-th time in the $C_0$ coordinate system; $R^{b_i}_{c_0}$ is the rotation transform from the $C_0$ coordinate system to the i-th time; $v_i$ is the velocity of the i-th frame picture; $\Delta t_{ij}$ is the time difference from the i-th frame to the j-th frame.

In Formula 5, $\beta_{ij}$ is the velocity change from the j-th frame to the i-th frame; $R^{b_i}_{c_0}$ is the rotation transform from the $C_0$ coordinate system to the i-th time; $v_j$ is the velocity of the j-th frame picture; $\Delta t_{ij}$ is the time difference from the i-th frame to the j-th frame; $g^{c_0}$ denotes the gravitational acceleration in the $C_0$ coordinate system; $v_i$ is the velocity of the i-th frame picture.

Enumerate the conversion of translation in the camera coordinate system into the translation of the IMU with respect to the initial coordinate system, with the calculation formula:

$$p^{c_0}_{b_i} = p^{c_0}_{c_i} - R^{c_0}_{c_i}\; p^{c}_{b}; \quad \text{Formula 6}$$

In Formula 6, $p^{c_0}_{b_i}$ is the position of the IMU at the i-th time in the $C_0$ coordinate system; $p^{c_0}_{c_i}$ is the position of the $c_i$ frame in the $C_0$ coordinate system; $R^{c_0}_{c_i}$ is the rotation transform from the $c_i$ time to the $C_0$ coordinate system; $p^{c}_{b}$ is the translation from the camera to the IMU. Formula 6 converts the translation in the camera coordinate system into the translation of the IMU with respect to the initial coordinate system; it is mainly used for the coordinate transformation between the camera and the IMU.

Then Formula 4, Formula 5 and Formula 6 are combined into the target equation, namely Formula 7, and finally the initial velocity and the gravitational acceleration of the IMU are obtained by solving the target equation, which is obtained in this way:

$$\min_{\mathcal{X}_I} \sum_{(i,j)} \left\| \hat{z}_{ij} - H_{ij}\, \mathcal{X}_I \right\|^{2}; \quad \text{Formula 7}$$

By solving this target equation for $\mathcal{X}_I$, the numerical value of the target solution value is obtained; that is, the initialized velocity and the gravitational acceleration of the IMU are obtained from the target equation of Formula 7.
Preferably, as one possible embodiment, the construction of the objective function containing the IMU residual and the camera reprojection error specifically comprises the following operation steps:

Determine the state quantity to be optimized of the IMU, and set the target solution value related to the state quantity to be optimized, i.e. set

$$\mathcal{X} = \left[\, p,\; v,\; q,\; b_a,\; b_\omega \,\right]; \quad \text{Formula 8}$$

as the target solution value. Formula 8 is only a target value to be solved; in other words, it is only a label formula. In Formula 8, $p$ is the translation information, $v$ the velocity information, $q$ the pose information, $b_a$ the bias of the acceleration of the IMU, and $b_\omega$ the bias of the gyroscope of the IMU.

Enumerate the objective-function equation according to the state quantity to be optimized as follows:

$$\min_{\mathcal{X}} \left\{ \sum_{k} \left\| r_{\mathcal{B}}\left(\hat{z}^{b_k}_{b_{k+1}},\, \mathcal{X}\right) \right\|^{2}_{P_{\mathcal{B}}} + \sum_{(l,j)} \left\| r_{\mathcal{C}}\left(\hat{z}^{c_j}_{l},\, \mathcal{X}\right) \right\|^{2}_{P_{\mathcal{C}}} \right\}; \quad \text{Formula 9}$$

wherein the first term in the objective-function equation is the residual of the IMU. Expanding and converting the first term of Formula 9 gives Formula 10:

$$r_{\mathcal{B}} = \begin{bmatrix} \delta\alpha_{ij} \\ \delta\beta_{ij} \\ \delta\theta_{ij} \\ \delta b_a \\ \delta b_\omega \end{bmatrix} = \begin{bmatrix} R^{b_i}_{w}\left( p^{w}_{j} - p^{w}_{i} - v^{w}_{i}\,\Delta t + \tfrac{1}{2}\, g^{w}\,\Delta t^{2} \right) - \hat{\alpha}_{ij} \\ R^{b_i}_{w}\left( v^{w}_{j} + g^{w}\,\Delta t - v^{w}_{i} \right) - \hat{\beta}_{ij} \\ 2\left[ \left(q^{w}_{i}\right)^{-1} \otimes q^{w}_{j} \otimes \hat{\gamma}_{ij}^{-1} \right]_{xyz} \\ b^{a}_{j} - b^{a}_{i} \\ b^{\omega}_{j} - b^{\omega}_{i} \end{bmatrix}; \quad \text{Formula 10}$$

In Formula 10, $\delta\alpha_{ij}$ is the displacement difference, $\delta\beta_{ij}$ the velocity difference, $\delta\theta_{ij}$ the angle difference, $\delta b_a$ the bias difference of the accelerometer, and $\delta b_\omega$ the bias difference of the gyroscope; $R^{b_i}_{w}$ is the transpose of the rotation matrix of the i-th frame time in the reference coordinate system; $v^{w}_{j}$ is the velocity of the j-th frame picture in the reference coordinate system; $g^{w}$ is the gravitational acceleration in the reference coordinate system; $v^{w}_{i}$ is the velocity of the i-th frame picture in the reference coordinate system; $p^{w}_{j}$ is the translation information at the j-th frame time in the reference coordinate system; $p^{w}_{i}$ is the translation information of the i-th frame in the reference coordinate system; $\Delta t$ is the time difference; $\hat{\alpha}_{ij}$ is the displacement obtained by IMU pre-integration; $\hat{\beta}_{ij}$ is the velocity obtained by IMU pre-integration; $\hat{\gamma}_{ij}$ is the angle obtained by IMU pre-integration; $\left(q^{w}_{i}\right)^{-1}$ represents the inverse of the attitude at the i-th frame time in the reference coordinate system; $q^{w}_{j}$ is the attitude at the j-th frame time in the reference coordinate system; $b^{a}_{j}$ is the acceleration bias of the j-th frame picture in the reference coordinate system; $b^{a}_{i}$ is the acceleration bias of the i-th frame picture in the reference coordinate system; $b^{\omega}_{j}$ is the gyroscope bias of the j-th frame picture in the reference coordinate system; $b^{\omega}_{i}$ is the gyroscope bias of the i-th frame picture in the reference coordinate system.

In addition, in Formula 10, the first row of the matrix can be substituted and converted through the $\alpha_{ij}$ of Formula 4, and the second row through the $\beta_{ij}$ of Formula 5; the third row is the multiplication between two quaternions, where $\left(q^{w}_{i}\right)^{-1}$ represents the inverse of the quaternion at the i-th time in the world coordinate system, the world coordinate system here being the $C_0$ coordinate system above. Through the operation of Formula 10, the first term related to the state quantity to be optimized is obtained.

The second term in the objective-function equation is the reprojection error of the camera: expanding and converting the second term of Formula 9 gives Formula 11:

$$r_{\mathcal{C}} = \hat{\bar{P}}^{c_j}_{l} - \frac{P^{c_j}_{l}}{\left\| P^{c_j}_{l} \right\|}; \quad \text{Formula 11}$$
The embodiment of the invention realizes the following technical effects:

The invention provides a SLAM method based on the fusion of binocular vision and an IMU which, compared with traditional methods of fusing visual SLAM and an IMU, increases the robustness of the system. It constructs an objective function containing the IMU residual and the camera reprojection error, namely Formula 9, to output the state quantity to be optimized; after this optimization step, the final values to be optimized can be obtained by minimizing the objective function with an optimization method, and the pose and the feature points in the environment are displayed, providing a technical basis for subsequent positioning and mapping.

The invention provides a SLAM method based on the fusion of binocular vision and an IMU which fuses the angle information of the IMU into the visual front end, improving the accuracy of feature matching, and which adds binocular baseline constraints and stereo matching to screen out wrong matches.

The invention provides a SLAM method based on the fusion of binocular vision and an IMU which uses a binocular camera and adds 3D-point-to-3D-point matching (the method considers not only the IMU residual but also the reprojection error of the feature points) to improve the precision of the system.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings required to be used in the embodiments will be briefly described below, and it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope of the present invention. Like components are numbered similarly in the various figures.
Fig. 1 shows a first flowchart of a SLAM method based on binocular vision and IMU fusion according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Hereinafter, the terms "including", "having" and their derivatives, as used in the various embodiments of the present invention, are only intended to indicate specific features, numbers, steps, operations, elements, components, or combinations of the foregoing, and should not be construed as excluding the existence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations of the foregoing.
Example one
The present embodiment provides a method for implementing SLAM based on the fusion of binocular vision and an IMU, which can be applied in situations requiring information fusion from multiple sensors, such as simultaneous localization and mapping (SLAM) and autonomous navigation and positioning. It should be noted that an inertial measurement unit is a device for measuring the three-axis attitude angles (or angular velocities) and the acceleration of an object. An IMU contains three single-axis accelerometers and three single-axis gyroscopes (angular-velocity sensors): the accelerometers detect the acceleration signals of the object along three independent axes of the carrier coordinate system, while the gyroscopes detect the angular-velocity signals of the carrier relative to the navigation coordinate system; from the measured angular velocity and acceleration of the object in three-dimensional space, the attitude of the object is calculated.
The following describes a SLAM method based on binocular vision and IMU fusion in an embodiment of the present invention in detail.
Referring to fig. 1, an embodiment of the present invention provides a method for SLAM based on binocular vision and IMU fusion, including the following steps:
step S102, initializing the IMU according to rigid constraint between the binocular camera and the IMU, specifically including initializing the bias of the angular velocity of the IMU and initializing the velocity and gravity of the IMU;
and step S103, constructing an objective function containing the IMU residual error and the camera reprojection error.
Before step S102, the method further includes step S101: acquiring the image information of the binocular camera and the data of the IMU (inertial measurement unit), and extracting feature points; for example, the feature point P mentioned in the following description. This feature-point extraction provides the technical basis for the subsequent feature-point matching; its details are not described in this embodiment.
In addition, research shows that IMU initialization (visual-inertial joint initialization), the first step of fusing visual and inertial data, aligns the visual data with the high-rate IMU data and includes bias calibration (zero offset) of the gyroscope, bias calibration of the accelerometer, and so on. The bias of the gyroscope is initialized as described in detail below.
Preferably, as one possible embodiment, in step S102, initializing the bias of the gyroscope of the IMU based on the rigid constraint condition between the binocular camera and the IMU specifically comprises the following operation steps:

Step S1021: the initial calculation formula is as follows:

$$\left(q^{c_0}_{j}\right)^{-1} \otimes q^{c_0}_{i} \otimes \gamma_{ij} \approx \begin{bmatrix} 1 \\ \mathbf{0} \end{bmatrix}; \quad \text{Formula 1}$$

wherein $q^{c_0}_{j}$ is, in effect, the rotation R of the camera calibration extrinsic matrix expressed as a quaternion: it represents the relative rotation of the camera for the j-th frame picture in the reference coordinate system (the reference coordinate system here being $C_0$), and the superscript $-1$ denotes the inverse, i.e. the first term $\left(q^{c_0}_{j}\right)^{-1}$ is the relative rotation from the $C_0$ coordinate system to the j-th frame picture. The second term $q^{c_0}_{i}$ is the relative rotation of the i-th frame picture in the $C_0$ coordinate system. The generalized multiplication of the second term by the first term yields the relative rotation of the camera from the i-th frame to the j-th frame; $\gamma_{ij}$ is the relative rotation of the gyroscope from the j-th time to the i-th time; $C_0$ is the reference coordinate system. Formula 1 concerns the initialization of the gyroscope: its input factors are the first, second and third terms of the equation, and its output calculation result is the initialized gyroscope bias.

Regarding the meaning of Formula 1, it should be noted that, since the camera and the IMU are on the same rigid body, their angle changes are the same over the same time interval, so the gyroscope bias (i.e. the bias of the gyroscope) can be solved from Formula 1. The input factors of Formula 1 are the three main elements $\left(q^{c_0}_{j}\right)^{-1}$, $q^{c_0}_{i}$ and $\gamma_{ij}$, from which the following result is calculated: the initial bias of the gyroscope.

In Formula 1, $\left(q^{c_0}_{j}\right)^{-1} \otimes q^{c_0}_{i}$ is the angle change of the camera between the i-th and j-th frames, and $\gamma_{ij}$ the angle change of the IMU between the i-th and j-th frames; $q^{c_0}_{j}$ is the relative rotation from the $C_0$ coordinate system to the j-th frame picture; $q^{c_0}_{i}$ is the relative rotation of the i-th frame picture in the $C_0$ coordinate system; $\gamma_{ij}$ is the relative rotation of the gyroscope from the j-th time to the i-th time; $C_0$ is the reference coordinate system. The calculation result obtained from this initialization formula is the initialized bias of the gyroscope (or initial gyroscope bias); the specific implementation further comprises the following calculation steps.

Step S1022: a first-order approximate calculation is performed on $\gamma_{ij}$, the relative rotation of the gyroscope from the j-th time to the i-th time, to obtain Formula 2:

$$\gamma_{ij} \approx \hat{\gamma}_{ij} \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2}\, J^{\gamma}_{b_\omega}\, \delta b_\omega \end{bmatrix}; \quad \text{Formula 2}$$

Formula 2 mainly explains $\gamma_{ij}$: $\hat{\gamma}_{ij}$ is the relative rotation of the IMU in the ideal case (noise and bias not taken into account), $\delta b_\omega$ is the bias difference of the gyroscope, and $J^{\gamma}_{b_\omega}$ is the Jacobian matrix, i.e. the derivative of the IMU angle-change function with respect to the angular-velocity bias; Formula 2 is in fact a first-order approximation.

In Formula 2, $\gamma_{ij}$ is the relative rotation of the IMU from the j-th time to the i-th time; $J^{\gamma}_{b_\omega}$ is the derivative of the IMU angle-change function with respect to the angular-velocity bias; $\hat{\gamma}_{ij}$ is the angle change obtained from the measured angular velocity of the IMU; the initial bias of the angular velocity of the IMU can be obtained by minimizing a least-squares function.

Step S1023: Formula 2 is the first-order linear approximation of $\gamma_{ij}$. Substituting Formula 2 into Formula 1 (the initial calculation formula), and then, according to the constraint condition that the difference between the second term $q^{c_0}_{i}$ and the first term $q^{c_0}_{j}$ of Formula 1 is approximately zero, the initial bias of the gyroscope is obtained by solving. It should be briefly noted that, since the binocular camera and the IMU are on the same rigid body (i.e. both are mounted on the same onboard platform of the drone), their angle changes are consistent within the same time interval, and the angular-velocity bias of the IMU (i.e. the gyroscope bias) is initialized according to this constraint.
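To make step S1023 concrete, the following is a minimal numpy sketch of one common way to carry out this least-squares solve (stacked normal equations, in the style of open-source visual-inertial initializers); the function names, data layout and normal-equation formulation are illustrative assumptions, not prescribed by the patent.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions stored as [w, x, y, z]."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0 * w1 - x0 * x1 - y0 * y1 - z0 * z1,
        w0 * x1 + x0 * w1 + y0 * z1 - z0 * y1,
        w0 * y1 - x0 * z1 + y0 * w1 + z0 * x1,
        w0 * z1 + x0 * y1 - y0 * x1 + z0 * w1,
    ])

def quat_inv(q):
    """Inverse of a unit quaternion (its conjugate)."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def estimate_gyro_bias(cam_rotations, preintegrated):
    """Least-squares gyroscope bias correction from Formulas 1 and 2.

    cam_rotations: list of (q_c0_i, q_c0_j) unit quaternions, the camera
                   rotations of consecutive frames in the C0 frame.
    preintegrated: list of (gamma_hat_ij, J_gamma_bw) pairs: the gyroscope
                   pre-integrated rotation and its 3x3 Jacobian with
                   respect to the gyroscope bias.
    Returns the 3-vector bias correction delta_bw.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for (q_i, q_j), (gamma_hat, J) in zip(cam_rotations, preintegrated):
        # Camera relative rotation from frame i to frame j (Formula 1).
        q_ij = quat_mul(quat_inv(q_i), q_j)
        # Linearized mismatch: J @ delta_bw ~ 2 * vec(gamma_hat^-1 * q_ij).
        r = 2.0 * quat_mul(quat_inv(gamma_hat), q_ij)[1:]
        A += J.T @ J
        b += J.T @ r
    return np.linalg.solve(A, b)
```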
Preferably, as one possible embodiment, in step S102, initializing the velocity and gravity of the IMU based on the rigid constraint condition between the binocular camera and the IMU specifically comprises the following operation steps. It should be noted that the velocity and gravity of the IMU are initialized based on the same rigid constraint condition, and the initialized velocity and gravitational acceleration are then obtained by solving.

Step S1024: set the target solution values for the initialized velocity and gravitational acceleration, i.e. set

$$\mathcal{X}_I = \left[\, v_0,\; v_1,\; \ldots,\; v_n,\; g^{c_0} \,\right]; \quad \text{Formula 3}$$

as the target solution value, expressed as above. In Formula 3, $\mathcal{X}_I$ is the target solution value, where $v_i$ is the velocity of the i-th frame picture and $v_k$ the velocity of the k-th frame picture; since a binocular camera is used, the scale factor is 1; $g^{c_0}$ denotes the gravitational acceleration in the $C_0$ coordinate system, $C_0$ being the reference coordinate system. It should be noted that Formula 3 is only a set label used as the target solution value; clearly, the factors inside the brackets of $\mathcal{X}_I$ are the initial velocities and the gravitational acceleration to be solved.

Step S1025: list the displacement-change formula $\alpha_{ij}$ from the i-th frame to the j-th frame:

$$\alpha_{ij} = R^{b_i}_{c_0}\left( p^{c_0}_{b_j} - p^{c_0}_{b_i} - v_i\,\Delta t_{ij} + \tfrac{1}{2}\, g^{c_0}\, \Delta t_{ij}^{2} \right); \quad \text{Formula 4}$$

and list the velocity-change formula $\beta_{ij}$ from the j-th frame to the i-th frame:

$$\beta_{ij} = R^{b_i}_{c_0}\left( v_j + g^{c_0}\,\Delta t_{ij} - v_i \right); \quad \text{Formula 5}$$

In Formula 4, $\alpha_{ij}$ is the displacement change from the j-th frame to the i-th frame; $p^{c_0}_{b_j}$ denotes the position of the j-th time in the $C_0$ coordinate system; $p^{c_0}_{b_i}$ denotes the position of the i-th time in the $C_0$ coordinate system; $R^{b_i}_{c_0}$ is the rotation transform from the $C_0$ coordinate system to the i-th time; $v_i$ is the velocity of the i-th frame picture; $\Delta t_{ij}$ is the time difference from the i-th frame to the j-th frame.

In Formula 5, $\beta_{ij}$ is the velocity change from the j-th frame to the i-th frame; $R^{b_i}_{c_0}$ is the rotation transform from the $C_0$ coordinate system to the i-th time; $v_j$ is the velocity of the j-th frame picture; $\Delta t_{ij}$ is the time difference from the i-th frame to the j-th frame; $g^{c_0}$ denotes the gravitational acceleration in the $C_0$ coordinate system; $v_i$ is the velocity of the i-th frame picture. Formulas 4 and 5 mainly obtain the displacement and velocity changes from the IMU information, i.e. a displacement formula and a velocity formula respectively.

Step S1026: enumerate the conversion of translation in the camera coordinate system into the translation of the IMU with respect to the initial coordinate system, with the calculation formula:

$$p^{c_0}_{b_i} = p^{c_0}_{c_i} - R^{c_0}_{c_i}\; p^{c}_{b}; \quad \text{Formula 6}$$

In Formula 6, $p^{c_0}_{b_i}$ is the position of the IMU at the i-th time in the $C_0$ coordinate system; $p^{c_0}_{c_i}$ is the position of the $c_i$ frame in the $C_0$ coordinate system; $R^{c_0}_{c_i}$ is the rotation transform from the $c_i$ time to the $C_0$ coordinate system; $p^{c}_{b}$ is the translation from the camera to the IMU. Formula 6 converts the translation in the camera coordinate system into the translation of the IMU with respect to the initial coordinate system; it is mainly used for the coordinate transformation between the camera and the IMU.

Step S1027: combine Formula 4, Formula 5 and Formula 6 into the target equation, namely Formula 7, and solve it to obtain the initial velocity and the gravitational acceleration of the IMU, where the target equation is obtained in this way:

$$\min_{\mathcal{X}_I} \sum_{(i,j)} \left\| \hat{z}_{ij} - H_{ij}\, \mathcal{X}_I \right\|^{2}; \quad \text{Formula 7}$$

In Formula 7, solving the target equation yields the numerical value of the target solution value $\mathcal{X}_I$, whose bracketed factors are the initial velocities and the gravitational acceleration; that is, the initialized velocity and the gravitational acceleration of the IMU can be obtained from the target equation of Formula 7 (the target equation is overdetermined when the input parameters are sufficient).

It should be noted that Formula 7 is in fact formed by combining Formula 4, Formula 5 and Formula 6. The specific merging process is as follows: substituting the $p^{c_0}_{b_i}$ and $p^{c_0}_{b_j}$ of Formula 6 into Formula 4 replaces the IMU positions by the camera positions and the camera-to-IMU translation, so that Formula 4 can be rearranged into a block of equations that is linear in $v_i$ and $g^{c_0}$; substituting Formula 6 into Formula 5 and expanding likewise yields a similar linear block; finally, writing all these blocks together in matrix form gives Formula 7.
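The following is a minimal numpy sketch of how Formulas 4 to 6 can be stacked into the overdetermined linear system of Formula 7 and solved by least squares; the dictionary layout, the choice of reference-frame ($C_0$) velocities as unknowns, and the per-pair block structure are assumptions made for illustration.

```python
import numpy as np

def init_velocity_gravity(pairs, n_frames):
    """Stack Formulas 4-6 into the system of Formula 7 and solve it.

    pairs: list of dicts, one per consecutive frame pair, with keys
        i, j        : frame indices (j = i + 1)
        alpha, beta : 3-vector IMU pre-integration terms (Formulas 4 and 5)
        R_i         : 3x3 rotation C0 -> body at frame i
        p_i, p_j    : IMU positions in C0 (camera positions already
                      converted through Formula 6)
        dt          : time between the two frames
    Returns (velocities, gravity) in the C0 frame.
    """
    n_unknowns = 3 * n_frames + 3          # all frame velocities plus gravity
    H = np.zeros((6 * len(pairs), n_unknowns))
    z = np.zeros(6 * len(pairs))
    for r, m in enumerate(pairs):
        i, j, R, dt = m["i"], m["j"], m["R_i"], m["dt"]
        row = 6 * r
        # Displacement rows (rearranged Formula 4).
        H[row:row + 3, 3 * i:3 * i + 3] = -R * dt
        H[row:row + 3, -3:] = 0.5 * R * dt**2
        z[row:row + 3] = m["alpha"] - R @ (m["p_j"] - m["p_i"])
        # Velocity rows (rearranged Formula 5).
        H[row + 3:row + 6, 3 * i:3 * i + 3] = -R
        H[row + 3:row + 6, 3 * j:3 * j + 3] = R
        H[row + 3:row + 6, -3:] = R * dt
        z[row + 3:row + 6] = m["beta"]
    x, *_ = np.linalg.lstsq(H, z, rcond=None)
    return x[:-3].reshape(-1, 3), x[-3:]
```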
in step S103, an objective function is constructed that includes the IMU residual and the camera reprojection error.
Preferably, as one possible embodiment, the construction of the objective function containing the IMU residual and the camera reprojection error specifically comprises the following operation steps:

Step S1031: determine the state quantity to be optimized of the IMU as follows:

$$\mathcal{X} = \left[\, p,\; v,\; q,\; b_a,\; b_\omega \,\right]; \quad \text{Formula 8}$$

Set the target solution value related to the state quantity to be optimized, i.e. set $\mathcal{X}$ as the target solution value; note that Formula 8 is only a target value to be solved, in other words, only a label formula. In Formula 8, $p$ is the translation information, $v$ the velocity information, $q$ the pose information, $b_a$ the bias of the acceleration of the IMU, and $b_\omega$ the bias of the gyroscope of the IMU.

Step S1032: enumerate the objective-function equation according to the state quantity to be optimized as follows:

$$\min_{\mathcal{X}} \left\{ \sum_{k} \left\| r_{\mathcal{B}}\left(\hat{z}^{b_k}_{b_{k+1}},\, \mathcal{X}\right) \right\|^{2}_{P_{\mathcal{B}}} + \sum_{(l,j)} \left\| r_{\mathcal{C}}\left(\hat{z}^{c_j}_{l},\, \mathcal{X}\right) \right\|^{2}_{P_{\mathcal{C}}} \right\}; \quad \text{Formula 9}$$

It should be noted that the meaning of Formula 9 is to construct an objective function from which the state quantity is solved; the input factors of Formula 9 are the residual of the IMU and the reprojection error of the camera respectively, and the calculation result output by Formula 9 is the state quantity to be optimized, namely $\mathcal{X}$.

In the objective-function equation of Formula 9, the first term in the braces, $r_{\mathcal{B}}$, is the residual of the IMU, and the second term, $r_{\mathcal{C}}$, is the reprojection error of the camera, wherein $\hat{z}^{c_j}_{l}$ denotes the observed value at the $c_j$ time and $\mathcal{X}$ denotes the state quantity to be obtained. In Formula 9 (the objective-function equation), all factors are merely symbols that facilitate writing and the subsequent solution: $r_{\mathcal{B}}$ denotes the residual of the IMU; $\hat{z}^{b_k}_{b_{k+1}}$ denotes the observed value between the k-th and (k+1)-th times; $r_{\mathcal{C}}$ denotes the reprojection error of the camera; $P_{\mathcal{B}}$ and $P_{\mathcal{C}}$ denote the covariance matrices of the IMU and the camera. Regarding the objective-function equation, the final values to be optimized can be obtained by minimizing the objective function with an optimization method, and the pose and the feature points in the environment are displayed, providing a technical basis for subsequent positioning and mapping.
Wherein the first term in the objective-function equation (i.e. Formula 9) is the residual of the IMU. Step S1033: expanding and converting the first term of the objective-function equation (i.e. according to Formula 9) related to the state quantity to be optimized gives Formula 10. In Formula 10, the symbol $r_{\mathcal{B}}$ represents the residual of the IMU, which comprises the information representative of displacement, velocity, rotation, accelerometer bias and gyroscope bias, namely, in turn, $\delta\alpha_{ij}$, $\delta\beta_{ij}$, $\delta\theta_{ij}$, $\delta b_a$ and $\delta b_\omega$:

$$r_{\mathcal{B}} = \begin{bmatrix} \delta\alpha_{ij} \\ \delta\beta_{ij} \\ \delta\theta_{ij} \\ \delta b_a \\ \delta b_\omega \end{bmatrix} = \begin{bmatrix} R^{b_i}_{w}\left( p^{w}_{j} - p^{w}_{i} - v^{w}_{i}\,\Delta t + \tfrac{1}{2}\, g^{w}\,\Delta t^{2} \right) - \hat{\alpha}_{ij} \\ R^{b_i}_{w}\left( v^{w}_{j} + g^{w}\,\Delta t - v^{w}_{i} \right) - \hat{\beta}_{ij} \\ 2\left[ \left(q^{w}_{i}\right)^{-1} \otimes q^{w}_{j} \otimes \hat{\gamma}_{ij}^{-1} \right]_{xyz} \\ b^{a}_{j} - b^{a}_{i} \\ b^{\omega}_{j} - b^{\omega}_{i} \end{bmatrix}; \quad \text{Formula 10}$$

In Formula 10, $\delta\alpha_{ij}$ is the displacement difference, $\delta\beta_{ij}$ the velocity difference, $\delta\theta_{ij}$ the angle difference, $\delta b_a$ the bias difference of the accelerometer, and $\delta b_\omega$ the bias difference of the gyroscope; $R^{b_i}_{w}$ is the transpose of the rotation matrix of the i-th frame time in the reference coordinate system; $v^{w}_{j}$ is the velocity of the j-th frame picture in the reference coordinate system; $g^{w}$ is the gravitational acceleration in the reference coordinate system; $v^{w}_{i}$ is the velocity of the i-th frame picture in the reference coordinate system; $p^{w}_{j}$ is the translation information at the j-th frame time in the reference coordinate system; $p^{w}_{i}$ is the translation information of the i-th frame in the reference coordinate system; $\Delta t$ is the time difference; $\hat{\alpha}_{ij}$ is the displacement obtained by IMU pre-integration; $\hat{\beta}_{ij}$ is the velocity obtained by IMU pre-integration; $\hat{\gamma}_{ij}$ is the angle obtained by IMU pre-integration; $\left(q^{w}_{i}\right)^{-1}$ represents the inverse of the attitude (expressed as a quaternion) at the i-th frame time in the reference coordinate system; $q^{w}_{j}$ is the attitude at the j-th frame time in the reference coordinate system; $b^{a}_{j}$ is the acceleration bias of the j-th frame picture in the reference coordinate system; $b^{a}_{i}$ is the acceleration bias of the i-th frame picture in the reference coordinate system; $b^{\omega}_{j}$ is the gyroscope bias of the j-th frame picture in the reference coordinate system; $b^{\omega}_{i}$ is the gyroscope bias of the i-th frame picture in the reference coordinate system.

In addition, in Formula 10, the first row of the matrix can be substituted and converted through the $\alpha_{ij}$ of Formula 4, and the second row through the $\beta_{ij}$ of Formula 5; the third row is the multiplication between two quaternions, where $\left(q^{w}_{i}\right)^{-1}$ represents the inverse of the quaternion at the i-th time in the world coordinate system, the world coordinate system here being the $C_0$ coordinate system above. Through the operation of Formula 10, the first term related to the state quantity to be optimized is obtained.
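A minimal sketch of evaluating the 15-dimensional IMU residual of Formula 10 follows, reusing quat_mul and quat_inv from the gyroscope sketch above; the state layout is an assumed convention for illustration.

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix of a unit quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ])

def imu_residual(si, sj, pre, g, dt):
    """IMU residual r_B of Formula 10 between frames i and j.

    si, sj: frame states, dicts with p, v (3,), q (4,) as [w, x, y, z],
            ba, bw (3,); pre: pre-integrated alpha, beta (3,), gamma (4,);
    g: gravity in the reference (world / C0) frame; dt: time difference.
    """
    R_iw = quat_to_rot(si["q"]).T  # transpose: reference frame -> body i
    r_p = R_iw @ (sj["p"] - si["p"] - si["v"] * dt
                  + 0.5 * g * dt**2) - pre["alpha"]
    r_v = R_iw @ (sj["v"] + g * dt - si["v"]) - pre["beta"]
    # Third row of Formula 10: vector part of the quaternion mismatch.
    q_err = quat_mul(quat_mul(quat_inv(si["q"]), sj["q"]),
                     quat_inv(pre["gamma"]))
    r_q = 2.0 * q_err[1:]
    return np.concatenate([r_p, r_v, r_q,
                           sj["ba"] - si["ba"], sj["bw"] - si["bw"]])
```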
step S1034: wherein the second term in the objective function equation is a re-projection error of the camera: expanding and converting the second term in the objective function equation (namely according to the 9 th formula) related to the state quantity to be optimized to obtain an 11 th formula;
$$r_{\mathcal{C}} = \hat{\bar{P}}^{c_j}_{l} - \frac{P^{c_j}_{l}}{\left\| P^{c_j}_{l} \right\|}; \quad \text{Formula 11}$$

Expanding and decomposing Formula 11 gives

$$\bar{P}^{c_i}_{l} = \pi_c^{-1}\left(\hat{u}^{c_i}_{l}\right); \quad \text{Formula 12}$$

and

$$P^{c_j}_{l} = T^{c}_{b}\,\left(T^{w}_{b_j}\right)^{-1} T^{w}_{b_i}\, T^{b}_{c}\; \frac{\bar{P}^{c_i}_{l}}{\lambda_l}; \quad \text{Formula 13}$$

In Formula 11, the reprojection error arises because the l-th feature in the i-th frame picture is also observed in the j-th frame picture.

In Formula 12, $\hat{u}^{c_i}_{l}$ is the pixel coordinate of the l-th feature in the i-th frame picture, and $\pi_c^{-1}$ is the inverse projection using the intrinsic parameters of the camera, which converts pixel coordinates into normalized coordinates.

In Formula 13, $T^{b}_{c}$ is the extrinsic transform between the camera and the IMU (its $R$ denoting rotation information and its $p$ mainly denoting translation information), and $T^{c}_{b}$ its inverse, the transposed extrinsic; $T^{w}_{b_i}$ represents the pose of the i-th frame in the reference coordinate system, and $T^{w}_{b_j}$ the pose at the j-th frame time in the reference coordinate system; $\lambda_l$ is the depth factor, equal to 1 after normalization; the translation parts of $T^{w}_{b_i}$ and $T^{w}_{b_j}$ are the translation information at the i-th and j-th frame times in the reference coordinate system. Formulas 12 and 13 respectively explain the factors in Formula 11.

It should be further explained that Formula 11 is the comprehensive formula and Formula 12 is an explanation of it. In Formula 11, $r_{\mathcal{C}}$ represents the reprojection error from the k-th frame to the (k+1)-th frame, i.e. feature matching. Generally there are errors when performing feature matching; optimizing the state quantities (poses and feature points) by calculating the reprojection error serves to minimize this feature-matching error.

By way of example, consider a certain feature point P. Substituting the feature point P into Formula 12 gives the following result: suppose the feature point P is observed in both the i-th and j-th frames; from its pixel coordinate $\hat{u}^{c_j}_{l}$ in the j-th frame and the intrinsic information of the camera (i.e. the focal length, distortion parameters and so on), the three-dimensional information of the feature point can be obtained. $\hat{\bar{P}}^{c_j}_{l}$ indicates that the l-th feature point is observed in both the i-th and j-th frames; it is the three-dimensional spatial coordinate observed in the j-th frame, normalized by dividing the three-dimensional coordinate by its module value.

Substituting the feature point P into Formula 13 gives the following result: in Formula 13, $P^{c_j}_{l}$ represents the three-dimensional spatial coordinate found through the rotation and translation between the two frames.

Decomposing Formula 13 one step further: innermost is the pixel coordinate $\hat{u}^{c_i}_{l}$ of the l-th feature in the i-th frame, first converted into three-dimensional space coordinates through the intrinsic parameters of the camera; from the depth information $\lambda_l$, three-dimensional information with depth 1 is obtained in the camera coordinate system. According to the camera-to-IMU extrinsics $T^{b}_{c}$, the point is transferred to the IMU coordinate system and marked as A; then, according to $T^{w}_{b_i}$, it is transferred to the world coordinate system and marked as B; then, according to $\left(T^{w}_{b_j}\right)^{-1}$, which takes the world coordinate system into the coordinate system at the j-th time, C is obtained in the IMU coordinate system; finally, according to the IMU-to-camera extrinsics $T^{c}_{b}$, C is transferred to the camera coordinate system of the j-th frame, obtaining $P^{c_j}_{l}$ (namely the $P^{c_j}_{l}$ in Formula 13), which is then normalized. The result calculated by Formula 12 and the result calculated by Formula 13 are then substituted into Formula 11 to obtain the reprojection error of this embodiment (i.e. the normalized coordinate from Formula 12 and the converted result from Formula 13 are differenced to give the reprojection error). This operation is the calculation of the reprojection error for the feature point P; combining the reprojection errors of all feature points in this way gives $\sum r_{\mathcal{C}}$, the reprojection error of the camera involved in this embodiment. It should be noted that a kernel (robust loss) function is used throughout, so as to prevent values with large errors from interfering excessively with the optimal solution. After comprehensive calculation, the first term related to the state quantity to be optimized is obtained through Formula 10, and the second term through Formula 11, completing the solution of the enumerated objective-function equation of this embodiment, namely

$$\min_{\mathcal{X}} \left\{ \sum_{k} \left\| r_{\mathcal{B}}\left(\hat{z}^{b_k}_{b_{k+1}},\, \mathcal{X}\right) \right\|^{2}_{P_{\mathcal{B}}} + \sum_{(l,j)} \left\| r_{\mathcal{C}}\left(\hat{z}^{c_j}_{l},\, \mathcal{X}\right) \right\|^{2}_{P_{\mathcal{C}}} \right\}; \quad \text{Formula 9}$$
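A minimal sketch of the reprojection chain of Formulas 11 to 13 follows, assuming a pinhole camera with intrinsic matrix K and using the unit-sphere normalization (division by the module value) described above; the function names and the (R, p) pose layout are illustrative assumptions.

```python
import numpy as np

def reprojection_residual(u_i, u_j, depth_i, T_wbi, T_wbj, T_bc, K):
    """Reprojection residual for one feature observed in frames i and j.

    u_i, u_j: pixel coordinates in frames i and j; depth_i: feature depth
    in camera i; T_wbi, T_wbj: (R, p) body poses in the reference frame;
    T_bc: (R, p) camera-to-IMU extrinsics; K: 3x3 intrinsic matrix.
    """
    R_bc, p_bc = T_bc
    R_wbi, p_wbi = T_wbi
    R_wbj, p_wbj = T_wbj
    # Formula 12: pixel -> normalized coordinates -> 3D point in camera i.
    x_i = np.linalg.solve(K, np.array([u_i[0], u_i[1], 1.0]))
    P_ci = depth_i * x_i
    # Formula 13: camera_i -> IMU_i (A) -> world (B) -> IMU_j (C) -> camera_j.
    P_bi = R_bc @ P_ci + p_bc
    P_w = R_wbi @ P_bi + p_wbi
    P_bj = R_wbj.T @ (P_w - p_wbj)
    P_cj = R_bc.T @ (P_bj - p_bc)
    # Formula 11: difference of the unit-sphere normalized observations.
    x_j = np.linalg.solve(K, np.array([u_j[0], u_j[1], 1.0]))
    return x_j / np.linalg.norm(x_j) - P_cj / np.linalg.norm(P_cj)
```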
In the technical scheme of the embodiment of the invention, the rotation information of the IMU is fused into the feature extraction at the visual front end, the extrinsic parameters of the IMU and the camera as well as the drift of the IMU are estimated online, and the projection error between three-dimensional points is added to the optimization function. According to the embodiment of the invention, the pose of the robot is solved by constructing the IMU residual and visual reprojection-error equations and applying an optimization method, while a map of the surrounding environment is constructed. Specifically, the IMU is initialized and the IMU residual and visual reprojection-error equations are constructed; the final purpose is to solve the three-dimensional information of the feature points, the pose information, and the extrinsic parameters of the camera and the IMU.
The embodiment of the invention realizes the following technical effects:

The invention provides a SLAM method based on the fusion of binocular vision and an IMU which, compared with traditional methods of fusing visual SLAM and an IMU, increases the robustness of the system. It constructs an objective function containing the IMU residual and the camera reprojection error, namely Formula 9, to output the state quantity to be optimized; after this optimization step, the final values to be optimized can be obtained by minimizing the objective function with an optimization method, and the pose and the feature points in the environment are displayed, providing a technical basis for subsequent positioning and mapping.

The invention provides a SLAM method based on the fusion of binocular vision and an IMU which fuses the angle information of the IMU into the visual front end, improving the accuracy of feature matching, and which adds binocular baseline constraints and stereo matching to screen out wrong matches.

The invention provides a SLAM method based on the fusion of binocular vision and an IMU which uses a binocular camera and adds 3D-point-to-3D-point matching (the method considers not only the IMU residual but also the reprojection error of the feature points) to improve the precision of the system.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention.

Claims (3)

1. A SLAM method based on binocular vision and IMU fusion is characterized by comprising the following steps:
initializing the IMU according to rigid constraint between the binocular camera and the IMU, specifically comprising initializing the bias of a gyroscope of the IMU and initializing the speed and gravity of the IMU;
constructing an objective function containing an IMU residual error and a camera reprojection error;
initializing the bias of the gyroscope of the IMU based on the rigid constraint condition between the binocular camera and the IMU, specifically comprising the following operation steps:

the initial calculation formula is as follows:

$$\left(q^{c_0}_{j}\right)^{-1} \otimes q^{c_0}_{i} \otimes \gamma_{ij} \approx \begin{bmatrix} 1 \\ \mathbf{0} \end{bmatrix}; \quad \text{Formula 1}$$

wherein $q^{c_0}_{j}$ denotes the relative rotation of the j-th frame picture in the $C_0$ coordinate system; $q^{c_0}_{i}$ is the relative rotation of the i-th frame picture in the $C_0$ coordinate system; the generalized multiplication of the second term $q^{c_0}_{i}$ by the (inverted) first term $\left(q^{c_0}_{j}\right)^{-1}$ yields the relative rotation of the camera from the i-th frame to the j-th frame; $\gamma_{ij}$ is the relative rotation of the gyroscope from the j-th time to the i-th time; $C_0$ is the reference coordinate system;

the calculation result obtained by the initial calculation formula is the initial bias of the gyroscope, and the specific implementation further comprises the following calculation steps:

performing a first-order approximate calculation on $\gamma_{ij}$, the relative rotation of the gyroscope from the j-th time to the i-th time, to obtain Formula 2:

$$\gamma_{ij} \approx \hat{\gamma}_{ij} \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2}\, J^{\gamma}_{b_\omega}\, \delta b_\omega \end{bmatrix}; \quad \text{Formula 2}$$

Formula 2 being the first-order linear approximation of $\gamma_{ij}$; substituting Formula 2 into Formula 1, and then, according to the constraint condition that the difference between the second term $q^{c_0}_{i}$ and the first term $q^{c_0}_{j}$ in Formula 1 is approximately zero, solving to obtain the initial bias of the gyroscope; wherein Formula 2 mainly explains $\gamma_{ij}$: $\gamma_{ij}$ is the relative rotation of the IMU from the j-th time to the i-th time; $J^{\gamma}_{b_\omega}$ is the derivative of the IMU angle-change function with respect to the angular-velocity bias; $\hat{\gamma}_{ij}$ is the angle change obtained from the measured angular velocity of the IMU, and the initial bias of the angular velocity of the IMU is obtained by minimizing a least-squares function.
2. The SLAM method based on binocular vision and IMU fusion of claim 1,
initializing the velocity and gravity of the IMU based on the rigid constraint condition between the binocular camera and the IMU, and specifically comprising the following operation steps:
setting target solution values for initialized speed and gravitational acceleration, i.e. setting
Figure DEST_PATH_IMAGE011
Solving for a value for the target, which is expressed by the formulaThe following steps of (1);
Figure DEST_PATH_IMAGE012
(ii) a Formula 3;
in the formula 3, in the formula,
Figure 735516DEST_PATH_IMAGE011
solving a value for the target; wherein
Figure DEST_PATH_IMAGE013
The speed of the ith frame picture;
Figure DEST_PATH_IMAGE014
the speed of the kth frame picture; since a binocular camera is used, the scale factor is 1;
Figure DEST_PATH_IMAGE015
is shown in C0Acceleration of gravity under a coordinate system, C0Is the reference coordinate system; as described above
Figure 184821DEST_PATH_IMAGE012
The factors in the middle brackets are the initialized speed and the gravity acceleration;
the displacement change formula $\hat{\alpha}^{b_i}_{b_j}$ from the j-th frame to the i-th frame is listed as:

$$\hat{\alpha}^{b_i}_{b_j} = R^{b_i}_{c_0} \left( p^{c_0}_{b_j} - p^{c_0}_{b_i} + \frac{1}{2}\, g^{c_0}\, \Delta t^2 - R^{c_0}_{b_i}\, v^{b_i}_{b_i}\, \Delta t \right) ; \quad \text{Formula 4}$$

the velocity change formula $\hat{\beta}^{b_i}_{b_j}$ from the j-th frame to the i-th frame is listed as:

$$\hat{\beta}^{b_i}_{b_j} = R^{b_i}_{c_0} \left( R^{c_0}_{b_j}\, v^{b_j}_{b_j} + g^{c_0}\, \Delta t - R^{c_0}_{b_i}\, v^{b_i}_{b_i} \right) ; \quad \text{Formula 5}$$
In Formula 4, $\hat{\alpha}^{b_i}_{b_j}$ is the displacement change from the j-th frame to the i-th frame; $p^{c_0}_{b_j}$ indicates the position transformation from the j-th time to the $C_0$ coordinate system; $p^{c_0}_{b_i}$ indicates the position transformation from the i-th time to the $C_0$ coordinate system; $R^{c_0}_{b_i}$ is the rotation transformation from the i-th time to the $C_0$ coordinates, $R^{b_i}_{c_0}$ being its transpose; $v^{b_i}_{b_i}$ is the velocity of the i-th frame picture; $\Delta t$ is the time difference from the i-th frame to the j-th frame;
In Formula 5, $\hat{\beta}^{b_i}_{b_j}$ is the velocity change from the j-th frame to the i-th frame; $R^{b_i}_{c_0}$ is the relative rotation from the $C_0$ coordinate system to the i-th frame; $R^{c_0}_{b_j}$ is the rotation transformation from the j-th time to the $C_0$ coordinates; $R^{c_0}_{b_i}$ is the rotation transformation from the i-th time to the $C_0$ coordinates; $v^{b_j}_{b_j}$ is the velocity of the j-th frame picture; $\Delta t$ is the time difference from the i-th frame to the j-th frame; $g^{c_0}$ denotes the gravitational acceleration in the $C_0$ coordinate system; and $v^{b_i}_{b_i}$ is the velocity of the i-th frame picture;
the formula converting the translation information in the camera coordinate system into the translation information of the IMU relative to the initial coordinate system is listed as follows:

$$p^{c_0}_{b_i} = p^{c_0}_{c_i} - R^{c_0}_{b_i}\, p^{b}_{c} ; \quad \text{Formula 6}$$

In Formula 6, $p^{c_0}_{b_i}$ is the position transformation from the i-th time to the $C_0$ coordinate system; $p^{c_0}_{c_i}$ is the position transformation from the $C_i$ frame to the $C_0$ coordinate system; $R^{c_0}_{b_i}$ is the rotation transformation from the $C_i$ time to the $C_0$ coordinates; $p^{b}_{c}$ is the translation information from the camera to the IMU. Formula 6 converts the translation information in the camera coordinate system into the translation information of the IMU relative to the initial coordinate system, and is mainly used for the coordinate transformation between the camera and the IMU;
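As a minimal illustration of Formula 6 (with the stereo scale fixed to 1, as stated above), the following Python/NumPy one-liner may be used; the argument names are illustrative assumptions:

    import numpy as np

    def camera_to_imu_translation(p_c0_ci, R_c0_bi, p_b_c):
        # Formula 6 with scale s = 1 (binocular camera):
        #   p_c0_bi = p_c0_ci - R_c0_bi @ p_b_c
        return np.asarray(p_c0_ci) - np.asarray(R_c0_bi) @ np.asarray(p_b_c)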
then Formula 4, Formula 5 and Formula 6 are combined to form a target equation, namely Formula 7, and the initialization velocity and gravitational acceleration of the IMU are finally obtained by solving according to the target equation; wherein the target equation is obtained in this way:

$$\min_{\mathcal{X}_I} \sum_{(i,j)} \left\| \hat{z}^{b_i}_{b_j} - H^{b_i}_{b_j}\, \mathcal{X}_I \right\|^2 ; \quad \text{Formula 7}$$

In Formula 7, $\hat{z}^{b_i}_{b_j}$ stacks the pre-integrated measurements $\hat{\alpha}^{b_i}_{b_j}$ and $\hat{\beta}^{b_i}_{b_j}$, and $H^{b_i}_{b_j}$ is the corresponding coefficient matrix in the unknowns; by solving this target equation, the target solution value $\mathcal{X}_I$ is obtained; that is, the initialization velocity and the gravitational acceleration of the IMU are obtained according to the target equation of Formula 7.
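Since Formula 7 is linear in $\mathcal{X}_I$, the velocities and gravity can be recovered with an ordinary least-squares solve. The sketch below assembles the system for a window of consecutive frame pairs; the block layout of $H$ (three unknowns per frame velocity plus one shared three-vector for gravity), the function name init_velocity_gravity, and the input tuple format are assumptions patterned on Formulas 4 and 5, i.e. one possible reading rather than the patent's reference implementation.

    import numpy as np

    def init_velocity_gravity(pairs, n_frames):
        # Solve Formula 7 for X_I = [v_0 ... v_{n-1}, g^{C0}].
        # pairs: list of (i, j, R_bi_c0, R_c0_bj, p_c0_bi, p_c0_bj,
        #                 alpha_hat, beta_hat, dt) for consecutive frames.
        n = 3 * n_frames + 3          # per-frame velocities plus gravity
        H = np.zeros((0, n))
        z = np.zeros(0)
        I3 = np.eye(3)
        for i, j, R_bi_c0, R_c0_bj, p_i, p_j, a_hat, b_hat, dt in pairs:
            row = np.zeros((6, n))
            rhs = np.zeros(6)
            # Displacement rows: Formula 4 rearranged in the unknowns.
            row[:3, 3*i:3*i+3] = -dt * I3
            row[:3, -3:] = 0.5 * dt * dt * R_bi_c0
            rhs[:3] = a_hat - R_bi_c0 @ (p_j - p_i)
            # Velocity rows: Formula 5 rearranged in the unknowns.
            row[3:, 3*i:3*i+3] = -I3
            row[3:, 3*j:3*j+3] = R_bi_c0 @ R_c0_bj
            row[3:, -3:] = dt * R_bi_c0
            rhs[3:] = b_hat
            H = np.vstack([H, row])
            z = np.concatenate([z, rhs])
        x, *_ = np.linalg.lstsq(H, z, rcond=None)
        return x[:-3].reshape(n_frames, 3), x[-3:]   # velocities, gravity

A magnitude constraint on the recovered gravity vector (|g| near 9.8 m/s^2) is a common follow-up refinement in comparable systems, but it is not stated in the claim and is omitted here.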
3. The SLAM method based on binocular vision and IMU fusion of claim 2, wherein constructing the objective function containing the IMU residual and the camera re-projection error specifically comprises the following operation steps:
determining the state quantities of the IMU to be optimized, and setting the target solution value with respect to the state quantities to be optimized, i.e. setting $x_k$ as the target solution value:

$$x_k = \left[ p^{w}_{b_k},\, v^{w}_{b_k},\, q^{w}_{b_k},\, b_a,\, b_g \right] ; \quad \text{Formula 8}$$

The above Formula 8 is only a target value to be solved; in other words, it is only a labeling formula. In Formula 8, $p^{w}_{b_k}$ is the translation information, $v^{w}_{b_k}$ is the velocity information, $q^{w}_{b_k}$ is the pose information, $b_a$ is the bias of the acceleration of the IMU, and $b_g$ is the bias of the gyroscope of the IMU;
enumerating the objective function equation according to the state quantities to be optimized as follows:

$$\min_{\mathcal{X}} \left\{ \sum_{k \in \mathcal{B}} \left\| r_{\mathcal{B}}\!\left( \hat{z}^{b_k}_{b_{k+1}}, \mathcal{X} \right) \right\|^2 + \sum_{(l,j) \in \mathcal{C}} \left\| r_{\mathcal{C}}\!\left( \hat{z}^{c_j}_{l}, \mathcal{X} \right) \right\|^2 \right\} ; \quad \text{Formula 9}$$

wherein the first term in the objective function equation is the residual of the IMU; performing expansion conversion on the first term of Formula 9 yields Formula 10:

$$r_{\mathcal{B}} = \begin{bmatrix} \delta \alpha^{b_i}_{b_j} \\ \delta \beta^{b_i}_{b_j} \\ \delta \theta^{b_i}_{b_j} \\ \delta b_a \\ \delta b_g \end{bmatrix} = \begin{bmatrix} R^{b_i}_{w} \left( p^{w}_{b_j} - p^{w}_{b_i} + \frac{1}{2}\, g^{w}\, \Delta t^2 - v^{w}_{b_i}\, \Delta t \right) - \hat{\alpha}^{b_i}_{b_j} \\ R^{b_i}_{w} \left( v^{w}_{b_j} + g^{w}\, \Delta t - v^{w}_{b_i} \right) - \hat{\beta}^{b_i}_{b_j} \\ 2 \left[ \left( q^{w}_{b_i} \right)^{-1} \otimes q^{w}_{b_j} \otimes \left( \hat{\gamma}^{b_i}_{b_j} \right)^{-1} \right]_{xyz} \\ b_{a_j} - b_{a_i} \\ b_{g_j} - b_{g_i} \end{bmatrix} ; \quad \text{Formula 10}$$
In Formula 10, $\delta \alpha^{b_i}_{b_j}$ is the displacement difference, $\delta \beta^{b_i}_{b_j}$ is the velocity difference, $\delta \theta^{b_i}_{b_j}$ is the angle difference, $\delta b_a$ is the bias difference of the accelerometer, and $\delta b_g$ is the bias difference of the gyroscope; $R^{b_i}_{w}$ is the transpose of the rotation matrix of the i-th frame time in the reference coordinate system; $v^{w}_{b_j}$ is the velocity of the j-th frame picture in the reference coordinate system; $g^{w}$ is the gravitational acceleration in the reference coordinate system; $v^{w}_{b_i}$ is the velocity of the i-th frame picture in the reference coordinate system; $p^{w}_{b_j}$ is the translation information of the j-th frame time in the reference coordinate system; $p^{w}_{b_i}$ is the translation information of the i-th frame in the reference coordinate system; $\Delta t$ is the time difference; $\hat{\alpha}^{b_i}_{b_j}$ is the displacement obtained by IMU pre-integration; $\hat{\beta}^{b_i}_{b_j}$ is the velocity obtained by IMU pre-integration; $\hat{\gamma}^{b_i}_{b_j}$ is the angle obtained by IMU pre-integration; $\left( q^{w}_{b_i} \right)^{-1}$ represents the transposed (inverse) attitude of the i-th frame time in the reference coordinate system; $q^{w}_{b_j}$ is the attitude of the j-th frame time in the reference coordinate system; $b_{a_j}$ is the acceleration bias of the j-th frame picture in the reference coordinate system; $b_{a_i}$ is the acceleration bias of the i-th frame picture in the reference coordinate system; $b_{g_j}$ is the gyroscope bias of the j-th frame picture in the reference coordinate system; and $b_{g_i}$ is the gyroscope bias of the i-th frame picture in the reference coordinate system;
In addition, in Formula 10, the first row of the matrix can be substituted and converted through Formula 4 by $\hat{\alpha}^{b_i}_{b_j}$, and the second row can be substituted and converted through Formula 5 by $\hat{\beta}^{b_i}_{b_j}$; the third row is a multiplication between quaternions, in which $\left( q^{w}_{b_i} \right)^{-1}$ represents the inverse of the quaternion at the i-th time in the world coordinate system, the world coordinate system here referring to the above $C_0$ coordinate system. Through the operation of Formula 10, the first term with respect to the state quantities to be optimized is obtained. The second term in the objective function equation is the re-projection error of the camera: performing expansion conversion on the second term of Formula 9 yields Formula 11;
[Formula 11: expansion of the camera re-projection error term; the equation image is not recoverable from the source.]
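To illustrate the structure of Formula 10, the following Python/NumPy sketch evaluates the 15-dimensional IMU residual for one frame pair. The dictionary-based state layout and function names are illustrative assumptions; the w-first quaternion helpers from the earlier sketch are repeated so the block is self-contained.

    import numpy as np

    def quat_mul(a, b):
        # Hamilton product of quaternions stored as (w, x, y, z).
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def quat_inv(q):
        # For a unit quaternion the inverse is the conjugate.
        return np.array([q[0], -q[1], -q[2], -q[3]])

    def imu_residual(si, sj, pre, g_w, dt):
        # si, sj: states of frames i and j with keys
        #   'p' position, 'v' velocity, 'R' 3x3 rotation (body -> world),
        #   'q' unit quaternion (w-first), 'ba'/'bg' accel/gyro biases.
        # pre: pre-integrated 'alpha', 'beta' (3-vectors), 'gamma' (quaternion).
        R_bi_w = si['R'].T                       # world -> body_i
        # Row 1 of Formula 10: displacement difference.
        d_alpha = R_bi_w @ (sj['p'] - si['p'] + 0.5 * g_w * dt * dt
                            - si['v'] * dt) - pre['alpha']
        # Row 2: velocity difference.
        d_beta = R_bi_w @ (sj['v'] + g_w * dt - si['v']) - pre['beta']
        # Row 3: angle difference, 2 * vec(q_i^{-1} * q_j * gamma^{-1}).
        q_err = quat_mul(quat_mul(quat_inv(si['q']), sj['q']),
                         quat_inv(pre['gamma']))
        d_theta = 2.0 * q_err[1:]
        # Rows 4 and 5: accelerometer and gyroscope bias differences.
        return np.concatenate([d_alpha, d_beta, d_theta,
                               sj['ba'] - si['ba'], sj['bg'] - si['bg']])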
CN202010422434.6A 2020-05-19 2020-05-19 SLAM method based on binocular vision and IMU fusion Pending CN111340851A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010422434.6A CN111340851A (en) 2020-05-19 2020-05-19 SLAM method based on binocular vision and IMU fusion


Publications (1)

Publication Number Publication Date
CN111340851A true CN111340851A (en) 2020-06-26

Family

ID=71187572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010422434.6A Pending CN111340851A (en) 2020-05-19 2020-05-19 SLAM method based on binocular vision and IMU fusion

Country Status (1)

Country Link
CN (1) CN111340851A (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107747941A (en) * 2017-09-29 2018-03-02 歌尔股份有限公司 A kind of binocular visual positioning method, apparatus and system
CN110345944A (en) * 2019-05-27 2019-10-18 浙江工业大学 Merge the robot localization method of visual signature and IMU information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
盛淼: "Research on SLAM Algorithm Based on Binocular Visual-Inertial Navigation", China Master's Theses Full-text Database, Information Science and Technology Series *
陶阳: "Research on UAV Positioning Technology Based on Binocular-IMU Fusion and System Design", China Master's Theses Full-text Database, Engineering Science and Technology II Series *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111899276A (en) * 2020-07-07 2020-11-06 武汉大学 SLAM method and system based on binocular event camera
CN111811502A (en) * 2020-07-10 2020-10-23 北京航空航天大学 Motion carrier multi-source information fusion navigation method and system
CN111623773A (en) * 2020-07-17 2020-09-04 国汽(北京)智能网联汽车研究院有限公司 Target positioning method and device based on fisheye vision and inertial measurement
CN114119885A (en) * 2020-08-11 2022-03-01 中国电信股份有限公司 Image feature point matching method, device and system and map construction method and system
CN113324542A (en) * 2021-06-07 2021-08-31 北京京东乾石科技有限公司 Positioning method, device, equipment and storage medium
CN113324542B (en) * 2021-06-07 2024-04-12 北京京东乾石科技有限公司 Positioning method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111340851A (en) SLAM method based on binocular vision and IMU fusion
Huang Visual-inertial navigation: A concise review
CN110030994B (en) Monocular-based robust visual inertia tight coupling positioning method
US20180031387A1 (en) State estimation for aerial vehicles using multi-sensor fusion
CN110702107A (en) Monocular vision inertial combination positioning navigation method
Nützi et al. Fusion of IMU and vision for absolute scale estimation in monocular SLAM
Weiss et al. Real-time onboard visual-inertial state estimation and self-calibration of MAVs in unknown environments
CN110986939B (en) Visual inertia odometer method based on IMU (inertial measurement Unit) pre-integration
Panahandeh et al. Vision-aided inertial navigation based on ground plane feature detection
CN112649016A (en) Visual inertial odometer method based on point-line initialization
CN107909614B (en) Positioning method of inspection robot in GPS failure environment
US20130162785A1 (en) Method and system for fusing data arising from image sensors and from motion or position sensors
Kneip et al. Closed-form solution for absolute scale velocity determination combining inertial measurements and a single feature correspondence
CN110207693B (en) Robust stereoscopic vision inertial pre-integration SLAM method
CN112577493B (en) Unmanned aerial vehicle autonomous positioning method and system based on remote sensing map assistance
CN111161337A (en) Accompanying robot synchronous positioning and composition method in dynamic environment
Batista et al. Low-cost attitude and heading reference system: Filter design and experimental evaluation
CN112985450B (en) Binocular vision inertial odometer method with synchronous time error estimation
CN110954102A (en) Magnetometer-assisted inertial navigation system and method for robot positioning
CN109685852B (en) Calibration method, system, equipment and storage medium for camera and inertial sensor
CN115371665A (en) Mobile robot positioning method based on depth camera and inertia fusion
Zheng et al. SE (2)-constrained visual inertial fusion for ground vehicles
CN112902956A (en) Course initial value acquisition method for handheld GNSS/MEMS-INS receiver, electronic equipment and storage medium
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
CN112731503A (en) Pose estimation method and system based on front-end tight coupling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200626