CN117705107B - Visual inertial positioning method based on two-stage sparse Schur complement - Google Patents

Visual inertial positioning method based on two-stage sparse Schur complement

Info

Publication number
CN117705107B
CN117705107B CN202410170461.7A
Authority
CN
China
Prior art keywords
matrix
variable
inertial
block
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410170461.7A
Other languages
Chinese (zh)
Other versions
CN117705107A (en)
Inventor
Zhou Jun (周军)
Huang Kun (黄坤)
Liu Ye (刘野)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202410170461.7A priority Critical patent/CN117705107B/en
Publication of CN117705107A publication Critical patent/CN117705107A/en
Application granted
Publication of CN117705107B publication Critical patent/CN117705107B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a visual inertial positioning method based on a two-stage sparse Schur complement, and belongs to the technical field of visual positioning. The invention uses a two-stage sparse Schur complement to reduce the dimension of the Hessian matrix, which not only markedly reduces the computation spent solving the system of equations in visual inertial positioning, thereby improving positioning efficiency, but also exploits the special sparsity of the Hessian matrix to further reduce the computation generated during the dimension reduction. In addition, the method solves the inertial variables and the landmark variables in parallel, which further removes redundant computation and improves positioning efficiency. The method of the embodiments of the invention increases the speed of simultaneous localization while preserving accuracy, so that it can be better applied in autonomous driving, mobile robots, augmented reality and other applications.

Description

Visual inertial positioning method based on two-stage sparse Schur complement
Technical Field
The invention belongs to the technical field of visual positioning, and particularly relates to a visual inertial positioning method based on a two-stage sparse Schur complement.
Background
Simultaneous localization based on visual and inertial sensors is a core module in applications such as autonomous driving, mobile robots and augmented reality. Because it exploits the complementary characteristics of the visual sensor and the inertial sensor, its localization result is more robust than that of purely visual methods, and it performs well even in scenes that are troublesome for a purely visual sensor. In return, however, the introduction of the inertial sensor makes the theory of visual-inertial simultaneous localization more complex and its algorithms more computationally demanding, which is very unfriendly to autonomous driving, mobile robot and augmented reality applications with strong real-time requirements. In simultaneous localization based on visual and inertial sensors, solving the system of equations takes a large share of the time, so the computational complexity of the visual-inertial localization process is high and computational resources are wasted.
Disclosure of Invention
The invention provides a visual inertial positioning method based on a two-stage sparse Schur complement, which reduces the computational complexity of visual-inertial simultaneous localization.
The invention adopts the technical scheme that:
A visual inertial positioning method based on a two-stage sparse Schur complement comprises the following steps:
step S1, inputting an initial pose of a target to be positioned;
Step S2, respectively acquiring visual data and inertial data (generally, the acceleration and angular velocity of the target to be positioned) from a visual sensor and an inertial sensor arranged on or carried by the target to be positioned;
step S3, determining the current pose of the target to be positioned:
if the current positioning process is the first positioning process, taking the input initial pose as the current pose of the target to be positioned;
otherwise, taking the pose updated at the previous moment as the current pose of the target to be positioned;
Step S4, constructing a Hessian matrix and its standard vector based on the current pose and the currently acquired visual data and inertial data;
Step S5, performing two-stage sparse Schur complement processing on the Hessian matrix and the standard vector to obtain the final system of equations to be solved for visual-inertial simultaneous localization;
Step S6, solving the system of equations obtained in step S5 to obtain the visual variables, then solving the inertial variables and the landmark variables in parallel;
Step S7, updating the pose of the target to be positioned at the current moment based on the visual, inertial and landmark variables obtained from the current solution;
Step S8, detecting whether a simultaneous-localization end instruction has been received; if not, returning to step S2; if so, outputting the poses and landmark variables at all moments, forming a trajectory map of the target to be positioned based on the poses at all moments, and forming a sparse point cloud map of the target to be positioned based on the landmark variables at all moments.
Further, the step S4 specifically includes the following steps:
Step S401, acquiring a pose residual variable r_v, a pose Jacobian matrix J_v, a landmark residual variable r_m and a landmark Jacobian matrix J_m based on the visual data, and acquiring an inertial residual variable r_i and an inertial Jacobian matrix J_i based on the inertial data;
Step S402, determining the Hessian matrix structure and the standard vector structure according to the order of the inertial variables, pose variables and landmark variables, wherein the Hessian matrix is a square matrix whose row and column directions both follow the order: inertial variables, pose variables, landmark variables; the standard vector of the Hessian matrix is an N-dimensional column vector whose row direction follows the same order, where N represents the dimension of a Hessian matrix row or column.
Define the dimension of the inertial variables as N_i, the dimension of the pose variables as N_v and the dimension of the landmark variables as N_m. N_i and N_v depend on the number of frames included in the actual optimization: if N_k frames are optimized, where each frame represents all observation data (visual data and inertial data) at a recent moment, and each frame contributes a 9-dimensional inertial variable and a 6-dimensional visual variable, then N_i = 9N_k and N_v = 6N_k. N_m depends on the number of landmark points actually optimized: if M landmark points are optimized, then N_m = M. The dimension N of the Hessian matrix is therefore N = 9N_k + 6N_k + M.
The matrix blocks of the Hessian matrix include: the inertia-inertia matrix block H_ii, the inertia-vision matrix block H_iv, the vision-inertia matrix block H_vi, the vision-vision matrix block H_vv, the vision-landmark matrix block H_vm, the landmark-vision matrix block H_mv and the landmark-landmark matrix block H_mm;
The vector blocks of the standard vector include: an inertial vector block b_i, a pose vector block b_v and a landmark vector block b_m;
Step S403, filling each matrix block of the Hessian matrix and each vector block of the standard vector according to a preset data-filling rule, and placing the matrix blocks and vector blocks into the Hessian matrix and the standard vector according to their positional relations.
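To make the layout of steps S402-S403 concrete, the following is a minimal numpy sketch (not part of the patent disclosure) that preallocates the Hessian matrix and standard vector and scatters precomputed blocks into them; the frame and landmark counts N_k = 11 and M = 300 are the example values used later in the description, and the block arguments are assumed to have been computed beforehand from the Jacobians.

```python
import numpy as np

# Example sizes from the description: 11 key frames and 300 landmark points.
Nk, M = 11, 300
Ni, Nv = 9 * Nk, 6 * Nk        # inertial and pose dimensions (9 and 6 per frame)
N = Ni + Nv + M                # total dimension: 465 in this example

# Index ranges for the [inertial | pose | landmark] variable ordering.
sl_i = slice(0, Ni)
sl_v = slice(Ni, Ni + Nv)
sl_m = slice(Ni + Nv, N)

def assemble(H_ii, H_iv, H_vv, H_vm, H_mm, b_i, b_v, b_m):
    """Scatter precomputed blocks into the full Hessian and standard vector."""
    H = np.zeros((N, N))
    b = np.zeros(N)
    H[sl_i, sl_i] = H_ii
    H[sl_i, sl_v] = H_iv
    H[sl_v, sl_i] = H_iv.T       # H_vi is the transpose of H_iv by symmetry
    H[sl_v, sl_v] = H_vv
    H[sl_v, sl_m] = H_vm
    H[sl_m, sl_v] = H_vm.T       # H_mv is the transpose of H_vm by symmetry
    H[sl_m, sl_m] = H_mm
    # The inertia-landmark cross blocks stay zero: no residual couples them.
    b[sl_i], b[sl_v], b[sl_m] = b_i, b_v, b_m
    return H, b
```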
Further, the step S5 specifically includes:
Step S501, partitioning the Hessian matrix and the standard vector:

$\begin{bmatrix} H_{pp} & H_{pm} \\ H_{mp} & H_{mm} \end{bmatrix}\begin{bmatrix} x_p \\ x_m \end{bmatrix}=\begin{bmatrix} b_p \\ b_m \end{bmatrix}$

wherein,
matrix block H_pp includes: the inertia-inertia matrix block H_ii, the inertia-vision matrix block H_iv, the vision-inertia matrix block H_vi and the vision-vision matrix block H_vv;
matrix block H_pm includes the vision-landmark matrix block H_vm;
matrix block H_mp includes the landmark-vision matrix block H_mv;
vector block b_p includes the inertial vector block b_i and the pose vector block b_v;
Step S502, solving the inverse matrix $H_{mm}^{-1}$ of the matrix block H_mm;
Step S503, performing a transformation on the block form of step S501:

$\begin{bmatrix} I & -H_{pm}H_{mm}^{-1} \\ 0 & I \end{bmatrix}\begin{bmatrix} H_{pp} & H_{pm} \\ H_{mp} & H_{mm} \end{bmatrix}\begin{bmatrix} x_p \\ x_m \end{bmatrix}=\begin{bmatrix} I & -H_{pm}H_{mm}^{-1} \\ 0 & I \end{bmatrix}\begin{bmatrix} b_p \\ b_m \end{bmatrix}$

wherein I represents the identity matrix, x_p represents the pose and inertial variables, and x_m represents the landmark variables;
Step S504, obtaining the new Hessian matrix and standard vector required by the standard equation of the second stage:
the updated matrix block is $H' = H_{pp} - H_{pm}H_{mm}^{-1}H_{mp}$;
and the updated standard vector is $b' = b_p - H_{pm}H_{mm}^{-1}b_m$;
Step S505, performing the second-stage sparse Schur complement to obtain the final system of equations to be solved:
the second-stage matrix blocking is based on $H'$ and $b'$:

$\begin{bmatrix} H'_{ii} & H'_{iv} \\ H'_{vi} & H'_{vv} \end{bmatrix}\begin{bmatrix} x_i \\ x_v \end{bmatrix}=\begin{bmatrix} b'_i \\ b'_v \end{bmatrix}$

wherein x_v represents the pose variables and x_i represents the inertial variables;
solving the inverse matrix $H'^{-1}_{ii}$ of the matrix block $H'_{ii}$;
and performing a transformation on the second-stage matrix blocks:

$\begin{bmatrix} I & 0 \\ -H'_{vi}H'^{-1}_{ii} & I \end{bmatrix}\begin{bmatrix} H'_{ii} & H'_{iv} \\ H'_{vi} & H'_{vv} \end{bmatrix}\begin{bmatrix} x_i \\ x_v \end{bmatrix}=\begin{bmatrix} I & 0 \\ -H'_{vi}H'^{-1}_{ii} & I \end{bmatrix}\begin{bmatrix} b'_i \\ b'_v \end{bmatrix}$

obtaining the final system of equations to be solved:

$\left(H'_{vv}-H'_{vi}H'^{-1}_{ii}H'_{iv}\right)x_v = b'_v - H'_{vi}H'^{-1}_{ii}b'_i$
The technical scheme provided by the invention has at least the following beneficial effects:
The invention adopts the two-stage sparse Schur complement to reduce the computational complexity of solving the system of equations in simultaneous localization based on visual and inertial sensors, greatly increasing the speed of the solve. At the same time, redundant computation is reduced and parallel solving of the inertial variables and landmark variables is realized, which facilitates a parallel hardware implementation. Furthermore, the method lowers the computational complexity while preserving the accuracy of the solution, so that simultaneous localization based on visual and inertial sensors can be better applied in autonomous driving, mobile robots, augmented reality and other applications.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of the Hessian matrix and standard vector structure under a first variable arrangement;
FIG. 2 is a schematic diagram of the Hessian matrix and standard vector structure under a second variable arrangement;
FIG. 3 is a schematic diagram of the Hessian matrix and standard vector structure;
FIG. 4 is a schematic diagram of the first-stage blocking of the Hessian matrix and standard vector;
FIG. 5 is a schematic diagram of the second-stage blocking of the new Hessian matrix and standard vector;
FIG. 6 is a schematic diagram of the new inertia-inertia matrix block structure;
FIG. 7 is a flow chart of the visual inertial positioning method based on the two-stage sparse Schur complement.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
The embodiment of the invention provides a visual inertial positioning method based on a two-stage sparse Schur complement, which uses the two-stage sparse Schur complement to reduce the dimension of the Hessian matrix. This markedly reduces the computation spent solving the system of equations in visual inertial positioning, thereby improving positioning efficiency, and the special sparsity of the Hessian matrix is further exploited to reduce the computation generated during the dimension reduction. In addition, the method solves the inertial variables and the landmark variables in parallel, further removing redundant computation and improving positioning efficiency. The method of the embodiment of the invention thus increases the speed of simultaneous localization while preserving accuracy, so that it can be better applied in autonomous driving, mobile robots, augmented reality and other applications.
In a processing flow for simultaneous localization based on visual and inertial sensors (acceleration and angular velocity measured by an accelerometer and a gyroscope), solving the system of equations often takes a large amount of time. The standard equation usually solved is:

$Hx = b$ (1)

where H is the Hessian matrix, b is the standard vector and x is the variable to be solved. Usually the Hessian matrix H and the standard vector b are computed first, and the system is then solved for x by the Cholesky method (also called the square root method). Normally the dimension of the standard equation is relatively large, for example a 465x465 Hessian matrix and a 465x1 standard vector, and the cost of solving such a large system rises sharply, which is unfavorable for fast computation.
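For reference, a direct dense solve of equation (1) by the Cholesky (square root) method can be sketched as follows; this is the baseline that the two-stage sparse Schur complement is meant to accelerate. The 465x465 matrix here is a random symmetric positive definite stand-in, not a Hessian produced by the method.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Random symmetric positive definite stand-in for a 465x465 Hessian matrix.
N = 465
A = np.random.randn(N, N)
H = A @ A.T + N * np.eye(N)     # SPD by construction
b = np.random.randn(N)

# Cholesky factorization H = L L^T, then two triangular back-substitutions.
factor = cho_factor(H, lower=True)
x = cho_solve(factor, b)

print(np.allclose(H @ x, b))    # True: x solves the standard equation H x = b
```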
In the embodiment of the invention, the two-stage sparse Schur complement is adopted to reduce the dimension of the Hessian matrix and hence the cost of solving the system of equations; at the same time, the special sparsity of the Hessian matrix is exploited to further reduce the computation generated during the dimension reduction, and the inertial variables and the landmark variables are solved in parallel to further remove redundant computation.
Consider the structure of the Hessian matrix and the standard vector, taking a 465x465 Hessian matrix and a 465x1 standard vector as an example, where 465 is the total dimension of all variables, shared by the Hessian matrix and the standard vector. The whole Hessian matrix is symmetric and is composed of three kinds of variables, namely inertial variables, pose variables and landmark variables. Each inertial variable has 9 degrees of freedom; each pose variable has 6 degrees of freedom and represents the position and rotation of the device at the corresponding moment; each landmark variable has 1 degree of freedom. In a problem constructed for visual-inertial simultaneous localization, assume there are 11 key frames, each with one inertial variable and one pose variable; all key frames then occupy 165 dimensions of the Hessian matrix and 165 dimensions of the standard vector. Assume there are 300 landmark points; they occupy 300 dimensions of the Hessian matrix and 300 dimensions of the standard vector. The Hessian matrix dimension is thus 465x465 and the standard vector dimension is 465x1.
As a possible implementation, the embodiment of the invention solves the Hessian matrix and standard vector arising in visual-inertial simultaneous localization by means of the two-stage sparse Schur complement; the specific implementation process is as follows:
The first step: input data is acquired through the sensors, namely a picture is acquired through the visual sensor and 6-dimensional data is acquired through the inertial sensor, of which the acceleration is 3-dimensional and the angular velocity is 3-dimensional.
The second step: according to the acquired current pose, the visual picture and the 6-dimensional inertial data are preprocessed separately to obtain the pose residual variable r_v, the pose Jacobian matrix J_v, the landmark residual variable r_m and the landmark Jacobian matrix J_m, as well as the inertial residual variable r_i and the inertial Jacobian matrix J_i.
Denote the overall Jacobian matrix by J and the overall residual variable by r; the corresponding matrix blocks of the Hessian matrix H and vector blocks of the standard vector b are calculated according to the following formula:

$H = J^{T}J, \qquad b = -J^{T}r$ (2)

Since no residual couples the inertial variables with the landmark variables, their Jacobians are uncorrelated, and the upper-right and lower-left corner blocks of the H matrix that would relate them are zero.
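A minimal sketch of formula (2), assuming each residual type's Jacobian has already been padded with zero columns for the variables it does not touch: stacking them into the overall J and overall r yields H and b in one step, and because inertial residuals involve no landmark columns (and landmark residuals no inertial columns), the zero corner blocks of H appear automatically.

```python
import numpy as np

def build_normal_equation(J_blocks, r_blocks):
    """Compute H = J^T J and b = -J^T r from per-residual blocks.

    J_blocks: list of (rows x N) Jacobians, one per residual type
              (inertial, pose, landmark), zero-padded over the columns
              of variables that the residual does not observe.
    r_blocks: list of matching residual vectors.
    """
    J = np.vstack(J_blocks)        # overall Jacobian matrix
    r = np.concatenate(r_blocks)   # overall residual variable
    H = J.T @ J                    # Hessian (Gauss-Newton approximation)
    b = -J.T @ r                   # standard vector
    return H, b
```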
The third step: determining the dimensions and structure of the Hessian matrix.
The structure of the Hessian matrix and the standard vector is determined by the arrangement of the three kinds of variables. For the two different arrangements shown in figures 1 and 2, the dimensions of the Hessian matrix and standard vector are the same but their internal structure is completely different. A regular variable arrangement that is easier to optimize can be found; the method of the embodiment of the invention uses the following specific arrangement:
With the variables arranged in the order inertial, pose, landmark, the corresponding Hessian matrix and standard vector structure is shown in fig. 3. The Hessian matrix can be further partitioned into four parts, namely the upper-left H_pp, the upper-right H_pm, the lower-left H_mp and the lower-right H_mm, as shown in fig. 4; the standard vector is partitioned into b_p above and b_m below.
Wherein the parameter symbols in fig. 3 and 4 are annotated as follows:
H_pp: data related to the inertial variables and pose variables, including the inertia-inertia matrix block H_ii, the inertia-vision matrix block H_iv, the vision-inertia matrix block H_vi and the vision-vision matrix block H_vv;
H_mm: data related to the landmark variables, i.e. the landmark-landmark matrix block; H_mm is a diagonal matrix;
H_pm: data relating the pose variables to the landmark variables, including the vision-landmark matrix block H_vm;
H_mp: data relating the landmark variables to the pose variables, including the landmark-vision matrix block H_mv;
b_p: data related to the inertial variables and pose variables, including the inertial vector block b_i and the pose vector block b_v;
b_m: data related to the landmark variables, i.e. the landmark vector block.
Fourth step: filling the Hessian matrix.
The pose residual variable r_v, pose Jacobian matrix J_v, landmark residual variable r_m and landmark Jacobian matrix J_m, together with the inertial residual variable r_i and inertial Jacobian matrix J_i obtained by the preprocessing, are used to fill the corresponding blocks.
Fifth step: the two-stage sparse Schur complement solution.
501: sparse Schur complement of the first stage: eliminating the dimensions contributed by the landmark variables.
Matrix and vector blocking is performed on the standard equation (1); the specific blocking is shown in fig. 4. The landmark variables x_m are split out of the full variable vector x, and the pose and inertial variables are grouped together and denoted x_p, giving:

$\begin{bmatrix} H_{pp} & H_{pm} \\ H_{mp} & H_{mm} \end{bmatrix}\begin{bmatrix} x_p \\ x_m \end{bmatrix}=\begin{bmatrix} b_p \\ b_m \end{bmatrix}$ (3)
The transformation of the matrix is realized through matrix multiplication: both sides of the equation are left-multiplied by the same transformation matrix, giving:

$\begin{bmatrix} I & -H_{pm}H_{mm}^{-1} \\ 0 & I \end{bmatrix}\begin{bmatrix} H_{pp} & H_{pm} \\ H_{mp} & H_{mm} \end{bmatrix}\begin{bmatrix} x_p \\ x_m \end{bmatrix}=\begin{bmatrix} I & -H_{pm}H_{mm}^{-1} \\ 0 & I \end{bmatrix}\begin{bmatrix} b_p \\ b_m \end{bmatrix}$ (4)

$\begin{bmatrix} H_{pp}-H_{pm}H_{mm}^{-1}H_{mp} & 0 \\ H_{mp} & H_{mm} \end{bmatrix}\begin{bmatrix} x_p \\ x_m \end{bmatrix}=\begin{bmatrix} b_p-H_{pm}H_{mm}^{-1}b_m \\ b_m \end{bmatrix}$ (5)
At this point equation (3) has been equivalently converted into equation (5). Simplifying equation (5) yields equations (6) and (7): the pose and inertial variables x_p are obtained from equation (6), after which the landmark variables x_m are obtained from equation (7):

$\left(H_{pp}-H_{pm}H_{mm}^{-1}H_{mp}\right)x_p = b_p - H_{pm}H_{mm}^{-1}b_m$ (6)

$x_m = H_{mm}^{-1}\left(b_m - H_{mp}x_p\right)$ (7)
Since H_mm is a diagonal matrix, computing its inverse only requires inverting its diagonal elements one by one; no full inversion of a system of equations is needed, which reduces the computation spent on the dimension reduction.
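The first-stage elimination can be sketched as follows, under the assumption stated above that H_mm is diagonal (one degree of freedom per landmark point): its inverse is the elementwise reciprocal of the diagonal, and equations (6) and (7) reduce to matrix products.

```python
import numpy as np

def schur_stage1(H_pp, H_pm, H_mm_diag, b_p, b_m):
    """First-stage sparse Schur complement: eliminate the landmark variables.

    H_mm_diag is the 1-D diagonal of H_mm; because H_mm is diagonal, its
    inverse is an elementwise reciprocal rather than a dense inversion.
    Returns the reduced system of equation (6) and a closure implementing
    the back-substitution of equation (7).
    """
    Hmm_inv = 1.0 / H_mm_diag                  # inverse of the diagonal matrix
    H_pm_Hmm_inv = H_pm * Hmm_inv              # scales the columns of H_pm
    H_new = H_pp - H_pm_Hmm_inv @ H_pm.T       # H' = H_pp - H_pm H_mm^-1 H_mp
    b_new = b_p - H_pm_Hmm_inv @ b_m           # b' = b_p - H_pm H_mm^-1 b_m

    def solve_landmarks(x_p):
        # Equation (7): x_m = H_mm^-1 (b_m - H_mp x_p), with H_mp = H_pm^T.
        return Hmm_inv * (b_m - H_pm.T @ x_p)

    return H_new, b_new, solve_landmarks
```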
The purpose of the first-stage sparse Schur complement is mainly reflected in:
(1) The dimensions contributed by the landmark variables are eliminated from the solve: the original 465x465 system is converted into a 165x165 system plus matrix multiplications, greatly reducing the cost of solving the system of equations.
(2) Since H_mm is a diagonal matrix, its inverse is easily found through this special sparsity, and the landmark variables x_m are then obtained by matrix multiplication, which has high parallelism and is convenient for hardware acceleration.
502: sparse Schur complement of the second stage.
After the first-stage sparse Schur complement, the positions in H_pm related to the inertial variables are all zero, so the computation of the landmark variables x_m contains a large number of multiplications by zero that can be omitted from the matrix multiplication. On the other hand, the landmark variables are uncorrelated with the inertial variables and can therefore be processed in parallel. The second stage thus serves, on the one hand, to remove redundant computation from the calculation of x_m and, on the other hand, to improve the parallelism of the processing.
After the first-stage sparse Schur complement, equation (5) is obtained; its reduced upper-left block is recorded as the new Hessian matrix:

$H' = H_{pp} - H_{pm}H_{mm}^{-1}H_{mp}$ (8)

The new standard vector is:

$b' = b_p - H_{pm}H_{mm}^{-1}b_m$ (9)

and hence equation (10):

$H'x_p = b'$ (10)
The new Hessian matrix and standard vector are partitioned as shown in fig. 5, separating the pose variables x_v from the inertial variables x_i. This yields formula (11), with a new inertia-inertia matrix block H'_ii, a new inertia-pose matrix block H'_iv, a new pose-inertia matrix block H'_vi and a new pose-pose matrix block H'_vv; the corresponding vector blocks, the new inertial vector block b'_i and the new visual vector block b'_v, are computed likewise:

$\begin{bmatrix} H'_{ii} & H'_{iv} \\ H'_{vi} & H'_{vv} \end{bmatrix}\begin{bmatrix} x_i \\ x_v \end{bmatrix}=\begin{bmatrix} b'_i \\ b'_v \end{bmatrix}$ (11)
The transformation of the matrix is again realized through matrix multiplication, left-multiplying both sides of the equation by the same transformation matrix:

$\begin{bmatrix} I & 0 \\ -H'_{vi}H'^{-1}_{ii} & I \end{bmatrix}\begin{bmatrix} H'_{ii} & H'_{iv} \\ H'_{vi} & H'_{vv} \end{bmatrix}\begin{bmatrix} x_i \\ x_v \end{bmatrix}=\begin{bmatrix} I & 0 \\ -H'_{vi}H'^{-1}_{ii} & I \end{bmatrix}\begin{bmatrix} b'_i \\ b'_v \end{bmatrix}$ (12)

$\begin{bmatrix} H'_{ii} & H'_{iv} \\ 0 & H'_{vv}-H'_{vi}H'^{-1}_{ii}H'_{iv} \end{bmatrix}\begin{bmatrix} x_i \\ x_v \end{bmatrix}=\begin{bmatrix} b'_i \\ b'_v-H'_{vi}H'^{-1}_{ii}b'_i \end{bmatrix}$ (13)
At this point equation (10) has been equivalently converted into equation (13); simplification then gives equations (14) and (15). The pose variables x_v are obtained from equation (14), after which the inertial variables x_i are obtained from equation (15):

$\left(H'_{vv}-H'_{vi}H'^{-1}_{ii}H'_{iv}\right)x_v = b'_v - H'_{vi}H'^{-1}_{ii}b'_i$ (14)

$x_i = H'^{-1}_{ii}\left(b'_i - H'_{iv}x_v\right)$ (15)
Because the new inertia-inertia matrix block H'_ii is a block-diagonal matrix (its specific structure is shown in fig. 6), the embodiment of the invention inverts it with a block-diagonal inversion scheme that uses this special sparsity. Compared with a full matrix inversion, the computation at the zero-element positions is saved, the inversion complexity is greatly reduced, and the computation spent on the dimension reduction is further lowered.
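A sketch of the block-diagonal inversion used for H'_ii, assuming it consists of independent 9x9 blocks, one per frame, as in fig. 6: each diagonal block is inverted on its own, and the zero positions outside the blocks are never computed.

```python
import numpy as np

def invert_block_diagonal(H, block=9):
    """Invert a block-diagonal matrix block by block.

    Only the diagonal blocks are touched; compared with a full dense
    inversion, all work on the zero off-block positions is skipped.
    """
    n = H.shape[0]
    assert n % block == 0, "matrix must consist of whole blocks"
    H_inv = np.zeros_like(H)
    for k in range(0, n, block):
        s = slice(k, k + block)
        H_inv[s, s] = np.linalg.inv(H[s, s])
    return H_inv
```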
Through the second-stage sparse Schur complement it can be seen that, in practice, equation (14) can be solved first; once the pose variables x_v are obtained, the landmark variables x_m and the inertial variables x_i can be solved in parallel. When solving the landmark variables x_m according to equation (7), the inertial variables x_i are not needed, so the corresponding zero-element multiplications are omitted and x_m is computed directly from the pose variables x_v, as shown in equation (16):

$x_m = H_{mm}^{-1}\left(b_m - H_{mv}x_v\right)$ (16)
At the same time, the inertial variables x_i are solved according to equation (15). The two solution routes are independent of each other and can be executed fully in parallel.
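A sketch of the final solve under the assumptions above: the reduced pose system of equation (14) is solved by the Cholesky method, after which equations (15) and (16) share no data and can run on separate threads (numpy's BLAS routines release the GIL, so plain threads already overlap). The primed arguments are the blocks of the first-stage reduced system.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def solve_two_stage(Hn_ii_inv, Hn_iv, Hn_vv, bn_i, bn_v,
                    Hmm_inv_diag, H_mv, b_m):
    """Equation (14) for x_v, then equations (15) and (16) in parallel."""
    Hn_vi = Hn_iv.T
    # Equation (14): (H'_vv - H'_vi H'_ii^-1 H'_iv) x_v = b'_v - H'_vi H'_ii^-1 b'_i
    S = Hn_vv - Hn_vi @ Hn_ii_inv @ Hn_iv
    rhs = bn_v - Hn_vi @ Hn_ii_inv @ bn_i
    x_v = cho_solve(cho_factor(S, lower=True), rhs)

    solve_xi = lambda: Hn_ii_inv @ (bn_i - Hn_iv @ x_v)     # equation (15)
    solve_xm = lambda: Hmm_inv_diag * (b_m - H_mv @ x_v)    # equation (16)
    with ThreadPoolExecutor(max_workers=2) as pool:
        fut_i = pool.submit(solve_xi)                       # independent of x_m
        fut_m = pool.submit(solve_xm)                       # independent of x_i
        x_i, x_m = fut_i.result(), fut_m.result()
    return x_v, x_i, x_m
```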
The purpose of the second-stage sparse Schur complement is mainly reflected in:
(1) The dimension of the system to be solved is further reduced, from the 165x165 of the first-stage Schur complement to 66x66, which further lowers the complexity of the solve and raises the computation speed.
(2) Parallel solving of the landmark variables and the inertial variables is realized, which is convenient for hardware acceleration.
(3) The zero-element multiplications in the computation of the landmark variables are omitted, reducing redundant computation.
(4) Because H'_ii is a block-diagonal matrix, it is easier to invert than a general matrix, so no large extra computation is introduced.
As a possible implementation, as shown in fig. 7, the visual inertial positioning method based on the two-stage sparse Schur complement provided by the embodiment of the invention specifically includes the following steps:
Step 1: obtain the initial pose of the device to be positioned by an initialization method;
Step 2: acquire visual data (image data) through the visual sensor and inertial data through the inertial sensor;
Step 3: acquire the current pose of the device.
If this is the first pass, take the initial pose obtained by initialization; otherwise, take the pose updated in step 11.
Then, together with the image data and the inertial data, proceed to the preprocessing of step 4.
Step 4: preprocess the visual data and the inertial data separately:
Step 4.1: preprocess the visual data independently to obtain the pose residual variable r_v, pose Jacobian matrix J_v, landmark residual variable r_m and landmark Jacobian matrix J_m;
Step 4.2: preprocess the inertial data independently to obtain the inertial residual variable r_i and inertial Jacobian matrix J_i;
Step 5: construct and fill the Hessian matrix and the standard vector:
Step 5.1: determine the Hessian matrix structure and the standard vector structure according to the order of the inertial variables, pose variables and landmark variables.
Step 5.2: compute the corresponding matrix blocks, namely the inertia-inertia matrix block H_ii, the inertia-vision matrix blocks H_iv and H_vi, the vision-vision matrix block H_vv, the vision-landmark matrix blocks H_vm and H_mv, and the landmark-landmark matrix block H_mm; and compute the corresponding vector blocks, namely the inertial vector block b_i, the visual vector block b_v and the landmark vector block b_m.
Step 5.3: fill the computed matrix blocks and vector blocks into the Hessian matrix and the standard vector according to their positional relations.
Step 6: perform the first-stage sparse Schur complement to obtain the new Hessian matrix and new standard vector of the second-stage standard equation.
Step 6.1: block the matrix and vector according to formula (3);
Step 6.2: solve the inverse matrix H_mm^{-1} of the matrix block H_mm;
Step 6.3: left-multiply both sides of the equation by the transformation matrix to obtain the transformation results shown in formulas (4) and (5);
Step 6.4: obtain the new Hessian matrix H' and standard vector b' needed by the second-stage standard equation, per formulas (8) and (9).
Step 7: perform the second-stage sparse Schur complement to obtain the final system of equations to be solved;
Step 7.1: block the matrix according to formula (11);
Step 7.2: solve the inverse matrix H'^{-1}_{ii} of the matrix block H'_{ii};
Step 7.3: left-multiply both sides of the equation by the transformation matrix to obtain the transformation results shown in formulas (12) and (13);
Step 7.4: based on step 7.3, obtain the final system of equations to be solved, as shown in formula (14);
Step 8: solve the system of equations by the Cholesky method to obtain the visual variable x_v;
Step 9: solve the inertial variable x_i and the landmark variable x_m in parallel, i.e. steps 9.1 and 9.2 may be executed in parallel.
Step 9.1: solve the inertial variable x_i by matrix multiplication, obtaining the result shown in formula (15);
Step 9.2: solve the landmark variable x_m by matrix multiplication, obtaining the result shown in formula (16). Note that the positions related to the inertial variables are removed outright here, eliminating redundant multiplications by zero;
Step 10: output all solved variables, namely the visual variable x_v, the inertial variable x_i and the landmark variable x_m;
Step 11: update the current pose according to the result output in step 10, completing the simultaneous localization for this moment.
Step 12: judge whether the process is finished; if so, perform step 13; otherwise, return to step 2.
Step 13: output the poses and landmark variables at all moments; all the poses form a trajectory map, and all the landmark variables form a sparse point cloud map.
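Steps 1 through 13 can be summarized by the loop skeleton below; the helper callbacks (preprocess_visual, two_stage_schur_solve, and so on) are hypothetical placeholders standing in for the operations described above, not an API of any existing library.

```python
def run_positioning(initial_pose, sensors, helpers):
    """Skeleton of the positioning loop (steps 1-13).

    sensors and helpers are hypothetical objects bundling the sensor
    reads and the per-step operations described in the text.
    """
    pose = initial_pose                                   # step 1
    poses, landmarks = [], None
    while not helpers.finished():                         # step 12
        image, imu = sensors.read()                       # step 2
        # steps 3-4: preprocess around the current pose
        r_v, J_v, r_m, J_m = helpers.preprocess_visual(image, pose)
        r_i, J_i = helpers.preprocess_inertial(imu, pose)
        H, b = helpers.assemble(r_v, J_v, r_m, J_m, r_i, J_i)   # step 5
        x_v, x_i, x_m = helpers.two_stage_schur_solve(H, b)     # steps 6-10
        pose = helpers.update_pose(pose, x_v, x_i)              # step 11
        poses.append(pose)
        landmarks = x_m
    return poses, landmarks        # step 13: trajectory map and sparse point cloud
```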
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
What has been described above is merely some embodiments of the present invention. It will be apparent to those skilled in the art that various modifications and improvements can be made without departing from the spirit of the invention.

Claims (3)

1. A visual inertial positioning method based on a two-stage sparse Schur complement, characterized by comprising the following steps:
step S1, inputting an initial pose of a target to be positioned;
Step S2, visual data and inertial data are respectively obtained based on a visual sensor and an inertial sensor which are arranged on or carried by a target to be positioned;
step S3, determining the current pose of the target to be positioned:
if the current positioning process is the first positioning process, taking the input initial pose as the current pose of the target to be positioned; otherwise, taking the pose updated at the previous moment as the current pose of the target to be positioned;
Step S4, constructing a Hessian matrix and its standard vector based on the current pose and the currently acquired visual data and inertial data:
determining the Hessian matrix structure and the standard vector structure according to the order of the inertial variables, pose variables and landmark variables, wherein the Hessian matrix is a square matrix whose row and column directions both follow the order: inertial variables, pose variables, landmark variables; the standard vector of the Hessian matrix is an N-dimensional column vector whose row direction follows the same order, where N represents the dimension of a Hessian matrix row or column;
the matrix blocks of the Hessian matrix include: the inertia-inertia matrix block H_ii, the inertia-vision matrix block H_iv, the vision-inertia matrix block H_vi, the vision-vision matrix block H_vv, the vision-landmark matrix block H_vm, the landmark-vision matrix block H_mv and the landmark-landmark matrix block H_mm;
the vector blocks of the standard vector include: an inertial vector block b_i, a pose vector block b_v and a landmark vector block b_m;
Step S5, performing two-stage sparse Schur complement processing on the Hessian matrix and the standard vector to obtain the final system of equations to be solved for visual-inertial simultaneous localization;
Step S6, solving the system of equations obtained in step S5 to obtain the visual variables, then solving the inertial variables and the landmark variables in parallel;
Step S7, updating the pose of the target to be positioned at the current moment based on the visual, inertial and landmark variables obtained from the current solution;
Step S8, detecting whether a simultaneous-localization end instruction has been received; if not, returning to step S2; if so, outputting the poses and landmark variables at all moments, forming a trajectory map of the target to be positioned based on the poses at all moments, and forming a sparse point cloud map of the target to be positioned based on the landmark variables at all moments;
The step S5 specifically comprises the following steps:
Step S501, partitioning the Hessian matrix and the standard vector:

$\begin{bmatrix} H_{pp} & H_{pm} \\ H_{mp} & H_{mm} \end{bmatrix}\begin{bmatrix} x_p \\ x_m \end{bmatrix}=\begin{bmatrix} b_p \\ b_m \end{bmatrix}$

wherein,
matrix block H_pp includes: the inertia-inertia matrix block H_ii, the inertia-vision matrix block H_iv, the vision-inertia matrix block H_vi and the vision-vision matrix block H_vv;
matrix block H_pm includes the vision-landmark matrix block H_vm;
matrix block H_mp includes the landmark-vision matrix block H_mv;
vector block b_p includes the inertial vector block b_i and the pose vector block b_v;
Step S502, solving the inverse matrix $H_{mm}^{-1}$ of the matrix block H_mm;
Step S503, performing a transformation on the block form of step S501:

$\begin{bmatrix} I & -H_{pm}H_{mm}^{-1} \\ 0 & I \end{bmatrix}\begin{bmatrix} H_{pp} & H_{pm} \\ H_{mp} & H_{mm} \end{bmatrix}\begin{bmatrix} x_p \\ x_m \end{bmatrix}=\begin{bmatrix} I & -H_{pm}H_{mm}^{-1} \\ 0 & I \end{bmatrix}\begin{bmatrix} b_p \\ b_m \end{bmatrix}$

wherein I represents the identity matrix, x_p represents the pose and inertial variables, and x_m represents the landmark variables;
Step S504, obtaining the new Hessian matrix and standard vector required by the standard equation of the second stage:
the updated matrix block is $H' = H_{pp} - H_{pm}H_{mm}^{-1}H_{mp}$;
and the updated standard vector is $b' = b_p - H_{pm}H_{mm}^{-1}b_m$;
Step S505, performing the second-stage sparse Schur complement to obtain the final system of equations to be solved:
the second-stage matrix blocking is based on $H'$ and $b'$:

$\begin{bmatrix} H'_{ii} & H'_{iv} \\ H'_{vi} & H'_{vv} \end{bmatrix}\begin{bmatrix} x_i \\ x_v \end{bmatrix}=\begin{bmatrix} b'_i \\ b'_v \end{bmatrix}$

wherein x_v represents the pose variables and x_i represents the inertial variables;
solving the inverse matrix $H'^{-1}_{ii}$ of the matrix block $H'_{ii}$;
and performing a transformation on the second-stage matrix blocks:

$\begin{bmatrix} I & 0 \\ -H'_{vi}H'^{-1}_{ii} & I \end{bmatrix}\begin{bmatrix} H'_{ii} & H'_{iv} \\ H'_{vi} & H'_{vv} \end{bmatrix}\begin{bmatrix} x_i \\ x_v \end{bmatrix}=\begin{bmatrix} I & 0 \\ -H'_{vi}H'^{-1}_{ii} & I \end{bmatrix}\begin{bmatrix} b'_i \\ b'_v \end{bmatrix}$

obtaining the final system of equations to be solved:

$\left(H'_{vv}-H'_{vi}H'^{-1}_{ii}H'_{iv}\right)x_v = b'_v - H'_{vi}H'^{-1}_{ii}b'_i$
2. The visual inertial positioning method based on a two-stage sparse Schur complement according to claim 1, wherein step S4 specifically comprises the following steps:
Step S401, acquiring a pose residual variable r_v, a pose Jacobian matrix J_v, a landmark residual variable r_m and a landmark Jacobian matrix J_m based on the visual data, and acquiring an inertial residual variable r_i and an inertial Jacobian matrix J_i based on the inertial data;
Step S402, recording the overall Jacobian matrix as J and the overall residual variable as r, and calculating the corresponding matrix blocks of the Hessian matrix H and the corresponding vector blocks of the standard vector b according to the formula:

$H = J^{T}J, \qquad b = -J^{T}r$

Step S403, filling each matrix block of the Hessian matrix and each vector block of the standard vector according to a preset data-filling rule, and placing the matrix blocks and vector blocks into the Hessian matrix and the standard vector according to their positional relations.
3. The visual inertial positioning method based on a two-stage sparse Schur complement according to claim 1, wherein the dimension N of the Hessian matrix H is specifically set as: $N = n_1 \times N_k + n_2 \times N_k + M$;
wherein N_k is a preset number of target frames for the simultaneous-localization processing, used to define the dimensions of the inertial variables and the pose variables; n_1 represents the inertial-variable dimension of each frame; n_2 represents the visual-variable dimension of each frame; and M represents the number of target landmark points for the simultaneous-localization processing, used to define the dimension of the landmark variables.
CN202410170461.7A 2024-02-06 2024-02-06 Visual inertial positioning method based on two-stage sparse Schur complement Active CN117705107B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410170461.7A CN117705107B (en) 2024-02-06 2024-02-06 Visual inertial positioning method based on two-stage sparse Schur complement


Publications (2)

Publication Number Publication Date
CN117705107A (en) 2024-03-15
CN117705107B (en) 2024-04-16

Family

ID=90148434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410170461.7A Active CN117705107B (en) 2024-02-06 2024-02-06 Visual inertial positioning method based on two-stage sparse Schur complement

Country Status (1)

Country Link
CN (1) CN117705107B (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9607401B2 (en) * 2013-05-08 2017-03-28 Regents Of The University Of Minnesota Constrained key frame localization and mapping for vision-aided inertial navigation
AU2014306813A1 (en) * 2013-08-12 2016-03-31 Flyby Media, Inc. Visual-based inertial navigation
US9658070B2 (en) * 2014-07-11 2017-05-23 Regents Of The University Of Minnesota Inverse sliding-window filters for vision-aided inertial navigation systems
KR20220146901A (en) * 2021-04-26 2022-11-02 삼성전자주식회사 Method and Apparatus for Accelerating Simultaneous Localization and Mapping
KR20230063686A (en) * 2021-11-02 2023-05-09 삼성전자주식회사 Appratus for accelerating simultaneous localization and mapping and electronic device including thereof

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993113A (en) * 2019-03-29 2019-07-09 东北大学 A kind of position and orientation estimation method based on the fusion of RGB-D and IMU information
CN110160522A (en) * 2019-04-16 2019-08-23 浙江大学 A kind of position and orientation estimation method of the vision inertial navigation odometer based on sparse features method
CN111797906A (en) * 2020-06-15 2020-10-20 北京三快在线科技有限公司 Method and device for positioning based on vision and inertial mileage
GB2599948A (en) * 2020-10-16 2022-04-20 Slamcore Ltd Initialising keyframes for visual-inertial localisation and/or mapping
CN114812601A (en) * 2021-01-29 2022-07-29 阿里巴巴集团控股有限公司 State estimation method and device of visual inertial odometer and electronic equipment
CN114860857A (en) * 2021-02-04 2022-08-05 三星电子株式会社 Method for performing synchronous positioning and mapping and apparatus using the same
CN114440877A (en) * 2022-01-26 2022-05-06 北京航空航天大学 Asynchronous multi-camera visual inertial odometer positioning method
WO2023155258A1 (en) * 2022-02-21 2023-08-24 武汉大学 Visual inertial odometry method that contains self-calibration and is based on keyframe sliding window filtering
CN115168283A (en) * 2022-07-05 2022-10-11 北京航空航天大学 Data fusion module accelerator and method
CN115265529A (en) * 2022-07-05 2022-11-01 北京鉴智科技有限公司 Target object positioning method, system, device, electronic equipment and storage medium
CN115371665A (en) * 2022-09-13 2022-11-22 哈尔滨工业大学 Mobile robot positioning method based on depth camera and inertia fusion

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A low-cost visual inertial odometry for mobile vehicle based on double stage Kalman filter; Cen et al.; Signal Processing; 2022-07-14; Vol. 197; 1-10 *
Information sparsification for visual-inertial odometry by manipulating Bayes tree; Xin et al.; Aerospace Science and Technology; 2021-08-26; Vol. 118; 1-12 *
Research on inertial-information-aided monocular visual SLAM (惯性信息辅助的单目视觉SLAM技术研究); Zhang Yueyuan; China Master's Theses Full-text Database, Information Science and Technology; 2021-07-15; No. 07, 2021; I138-543 *
Research on joint estimation of positioning parameters for mobile terminals (移动终端定位参数联合估计方法的研究); Liu Ying, Wang Shuxun, Shi Gongzheng; Journal of Changchun Institute of Posts and Telecommunications; 2001-06-30; Vol. 19, No. 02; 1-9 *

Also Published As

Publication number Publication date
CN117705107A (en) 2024-03-15

Similar Documents

Publication Publication Date Title
CN110009681B (en) IMU (inertial measurement unit) assistance-based monocular vision odometer pose processing method
EP3540637B1 (en) Neural network model training method, device and storage medium for image processing
CN108648215B (en) SLAM motion blur pose tracking algorithm based on IMU
CN110084832B (en) Method, device, system, equipment and storage medium for correcting camera pose
Li et al. Real-time 3D motion tracking and reconstruction system using camera and IMU sensors
CN110264528B (en) Rapid self-calibration method for binocular camera with fish-eye lens
CN111507222B (en) Three-dimensional object detection frame based on multisource data knowledge migration
CN111754579B (en) Method and device for determining external parameters of multi-view camera
CN105931275A (en) Monocular and IMU fused stable motion tracking method and device based on mobile terminal
EP4160271B1 (en) Method and apparatus for processing data for autonomous vehicle, electronic device, and storage medium
CN110517324B (en) Binocular VIO implementation method based on variational Bayesian adaptive algorithm
CN110533724B (en) Computing method of monocular vision odometer based on deep learning and attention mechanism
CN114001733B (en) Map-based consistent efficient visual inertial positioning algorithm
CN112985450B (en) Binocular vision inertial odometer method with synchronous time error estimation
CN116342661A (en) Binocular vision inertial odometer method for correcting pose by using road mark point offset
CN108731700B (en) Weighted Euler pre-integration method in visual inertial odometer
CN112183506A (en) Human body posture generation method and system
CN114821105A (en) Optical flow calculation method combining image pyramid guidance and circular cross attention
CN114440877B (en) Asynchronous multi-camera visual inertial odometer positioning method
CN115014346A (en) Consistent efficient filtering algorithm based on map and oriented to visual inertial positioning
CN117705107B (en) 2024-04-16 Visual inertial positioning method based on two-stage sparse Schur complement
CN115578417A (en) Monocular vision inertial odometer method based on feature point depth
CN114993338A (en) Consistent efficient visual inertial mileage calculation method based on multi-segment independent map sequence
CN114638858A (en) Moving target position and speed estimation method based on vehicle-mounted double-camera system
WO2023101662A1 (en) Methods and systems for implementing visual-inertial odometry based on parallel simd processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant