CN115601573A - Visual SLAM feature matching method, system, device and readable storage medium - Google Patents
- Publication number
- CN115601573A (application number CN202211176803.3A)
- Authority
- CN
- China
- Prior art keywords
- camera
- moment
- feature matching
- imu
- integration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Abstract
The invention provides a visual SLAM feature matching method, system, device, and readable storage medium. The method comprises the following steps: step 1, preprocessing image data of a binocular camera; step 2, pre-integrating the measurement values of an inertial measurement unit (IMU); step 3, performing error analysis on the pre-integration; step 4, predicting pixel coordinates; and step 5, determining an adaptive threshold. The adaptive algorithm provided by the invention can effectively reduce the search radius of feature matching, provide a regional constraint for the feature points, improve the accuracy of image feature matching, provide a more accurate initial pose value for the SLAM system, and thereby improve the overall pose precision of the system.
Description
Technical Field
The invention relates to an image feature matching algorithm, and in particular to a visual SLAM feature matching method, system, device, and readable storage medium.
Background
As the front-end technology of visual SLAM (Simultaneous Localization And Mapping), the main task of visual odometry is to estimate camera motion from the matching information of adjacent images, in which feature matching is the most critical link: the accuracy of matching directly affects the precision of the VSLAM system. Common feature matching algorithms include brute-force matching and fast approximate nearest-neighbor matching. However, when the camera moves too fast, image blur, dynamic scenes, illumination changes, and the local nature of the image features themselves make feature mismatches widespread, which has become a major bottleneck restricting the improvement of visual SLAM performance.
To address the problem of a high feature mismatching rate, most scholars at home and abroad fuse VSLAM with an inertial measurement unit (IMU). Some researchers proposed an improved ASIFT feature matching method that uses the motion information provided by the IMU to reconstruct the camera view and estimate the essential matrix, improving the robustness of feature matching. Wu and Wang Xu used the epipolar constraint of the images to establish the mathematical relationship between the epipolar lines on which pixels of the previous and current frames lie, combined with an IMU-predicted camera pose to obtain the epipolar line of each feature point, effectively eliminating mismatched feature points. Qi Guan proposed a feature matching algorithm that alternates between global search and local search with automatic switching, predicting the region where feature points may appear from the camera projection model and the IMU pre-integration result, and then searching and matching within that local region. Similarly, the visual-inertial SLAM system ORB-SLAM3, proposed by Campos C and Elvira R, introduces an inertial navigation constraint in the feature matching stage: the camera pose is predicted with the IMU, and map points obtained by left-right matching or triangulation of the binocular camera are projected onto the image to predict their pixel positions. Because an inertial navigation solution can provide a regional constraint for feature point matching and improve matching accuracy, this approach is widely used.
However, since IMU measurements themselves contain errors, when inertial navigation assists image feature matching, full-image feature matching is usually converted into feature matching within a local area centered at the predicted position with some fixed radius. To accommodate IMUs of different precision, a large search radius is generally set to avoid missing feature points. When the IMU precision is high, this wastes computing resources, increases the mismatching rate of image features, and reduces the pose estimation precision of the SLAM system.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention provides an adaptive inertial-navigation-aided visual SLAM feature matching method and system, which automatically adjust the image feature matching search radius according to the error propagation law based on the error of the inertial unit, thereby improving the accuracy of feature point matching while reducing the computation and time required for matching and improving the efficiency of feature matching.
The technical scheme adopted by the invention for realizing the purpose is as follows:
In a first aspect, the invention provides a visual SLAM feature matching method based on adaptive inertial navigation assistance, comprising the following steps:
step 1, preprocessing image data of a binocular camera;
step 2, pre-integrating the IMU measurement value of the inertial measurement unit;
step 3, carrying out error analysis on the pre-integration;
step 4, predicting the pixel coordinates;
and step 5, determining an adaptive threshold.
A second aspect of the invention provides a visual SLAM feature matching system based on adaptive inertial navigation assistance, comprising:
the first processing module is used for preprocessing image data of the binocular camera;
the second processing module is used for pre-integrating the IMU measurement value of the inertial measurement unit;
the third processing module is used for carrying out error analysis on the pre-integration;
the fourth processing module is used for predicting the pixel coordinates;
a fifth processing module for determining an adaptive threshold;
the first processing module, the second processing module, the third processing module, the fourth processing module and the fifth processing module are connected in sequence to complete the visual SLAM feature matching method based on the adaptive inertial navigation assistance.
The third aspect of the present invention provides a visual SLAM feature matching device, comprising:
a memory; and
a processor coupled to the memory, the processor configured to execute the adaptive inertial navigation assistance-based visual SLAM feature matching method based on instructions stored in the memory.
A fourth aspect of the present invention provides a non-transitory computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the adaptive inertial navigation assistance-based visual SLAM feature matching method.
Compared with the prior art, the invention has prominent substantive features and represents remarkable progress, and in particular has the following beneficial effects and advantages:
the invention provides an image feature matching algorithm assisted by an adaptive threshold inertia measurement unit, which can automatically adjust the image feature matching search radius according to an error propagation law on the basis of the error of an inertia unit, thereby efficiently assisting the feature matching of a visual SLAM by using inertia measurement data with different precisions. The self-adaptive algorithm provided by the invention can effectively reduce the search radius of feature matching, provide regional constraint for feature points, improve the accuracy of image feature matching, provide a more accurate pose initial value for an SLAM system, and further improve the overall pose precision of the system.
Drawings
FIG. 1 is a flow chart of inertial navigation assisted image feature matching.
Fig. 2 is an imaging model diagram of a binocular camera.
Fig. 3 is a flow chart for determining an adaptive feature matching search radius.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Description of the technical name:
a feature point is a representative pixel in the pixel coordinate system, i.e., a representative point selected from the image; its coordinate is therefore called a pixel coordinate;
map points refer to three-dimensional space points corresponding to the feature points;
the carrier is a platform for carrying the camera and inertial navigation.
Example 1
As shown in figs. 1-3, this embodiment provides a visual SLAM feature matching method based on adaptive inertial navigation assistance, comprising the following steps:
step 1, preprocessing image data of a binocular camera;
the method for preprocessing binocular image data comprises the following steps:
step 1-1, obtain the depth value z of a map point in the binocular camera coordinate system from the similar-triangles principle:

z = f·b / d

where f is the focal length of the camera, b is the baseline of the binocular camera, and d = u_l − u_r is the disparity between the corresponding left and right image pixels;

step 1-2, substitute the depth value z of the feature point in the binocular camera coordinate system into the pinhole camera projection model to obtain the coordinate P_l of the map point in the camera coordinate system:

P_l = [ (u_l − c_x)·z / f_x, (v_l − c_y)·z / f_y, z ]^T

where (u_l, v_l) is the pixel coordinate of the map point P in the image plane coordinate system of the left camera, and f_x, f_y, c_x, c_y are the camera intrinsics.
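Steps 1-1 and 1-2 can be sketched as follows (a minimal illustration; the intrinsics f_x, f_y, c_x, c_y, the baseline b, and the pixel values are assumed example numbers, not calibration values from the patent):

```python
import numpy as np

def stereo_depth(u_left, u_right, f, b):
    """Step 1-1: depth from similar triangles, z = f*b / d, where d is the disparity."""
    d = u_left - u_right
    return f * b / d

def back_project(u, v, z, fx, fy, cx, cy):
    """Step 1-2: invert the pinhole projection to get the point in the camera frame."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

fx = fy = 400.0; cx, cy = 320.0, 240.0; b = 0.1   # assumed camera parameters
z = stereo_depth(350.0, 310.0, fx, b)             # 40 px disparity -> z = 1.0 m
P_l = back_project(350.0, 240.0, z, fx, fy, cx, cy)
```

Note the inverse relation: the larger the disparity, the closer the map point.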
Step 2, pre-integrating the IMU measurement value of the inertial measurement unit;
multi-frame IMU data are arranged between two continuous image frames i, j, and relative motion increment of the two frames of images can be obtained by pre-integrating the IMU measurement value of the inertia measurement unit;
the method for pre-integrating the IMU measurement value of the inertial measurement unit comprises the following steps:
ΔR_ij = ∏_{k=i}^{j−1} Exp( (ω̃_k − b_g − η_k^gd) Δt )
Δv_ij = Σ_{k=i}^{j−1} ΔR_ik ( ã_k − b_a − η_k^ad ) Δt
Δp_ij = Σ_{k=i}^{j−1} [ Δv_ik Δt + ½ ΔR_ik ( ã_k − b_a − η_k^ad ) Δt² ]

where ΔR_ij, Δv_ij, Δp_ij are the relative attitude, velocity, and position of the two adjacent image frames i, j obtained by pre-integration; ω̃_k, ã_k are the gyroscope and accelerometer measurements at time k, b_g, b_a their zero offsets, and η_k^gd, η_k^ad discrete Gaussian white noise; Δt is the sampling interval of the inertial measurement unit IMU; Exp(·) is the exponential map taking a rotation vector to a rotation matrix in the Lie group space.
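The accumulation in step 2 can be sketched as follows (a noise-free, constant-bias illustration; the sample sequences and rates are made-up examples):

```python
import numpy as np

def exp_so3(phi):
    """Exp(.): rotation vector -> rotation matrix (Rodrigues formula)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-10:
        return np.eye(3)
    a = phi / theta
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, b_g, b_a, dt):
    """Accumulate dR_ij, dv_ij, dp_ij over the IMU samples between frames i and j."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w_k, a_k in zip(gyro, accel):
        a_c = a_k - b_a                      # bias-corrected specific force
        dp = dp + dv * dt + 0.5 * (dR @ a_c) * dt ** 2
        dv = dv + (dR @ a_c) * dt
        dR = dR @ exp_so3((w_k - b_g) * dt)  # attitude increment last, as in the product form
    return dR, dv, dp

# made-up data: 10 samples at 100 Hz, constant 1 m/s^2 along x, no rotation
gyro = [np.zeros(3) for _ in range(10)]
accel = [np.array([1.0, 0.0, 0.0]) for _ in range(10)]
dR, dv, dp = preintegrate(gyro, accel, np.zeros(3), np.zeros(3), 0.01)
```

For this constant-acceleration example the increments reduce to dv = aT and dp = ½aT² with T = 0.1 s.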
Step 3, carrying out error analysis on the pre-integration;
the method for carrying out error analysis on the pre-integration comprises the following steps:
step 3-1, because the pre-integrated quantities have a complex dependence on the IMU noise of the inertial measurement unit, it is difficult to propagate the pre-integration information matrix directly with the error propagation law; the IMU noise is therefore separated from the pre-integrated quantities, giving the relationship between the pre-integration measurement noise and the IMU noise: to first order, the stacked pre-integration error δζ_ij = [δφ_ij^T, δv_ij^T, δp_ij^T]^T is a linear function of the IMU noise terms η_k^gd, η_k^ad, k = i, …, j−1;

where δφ_ij is the Lie algebra corresponding to the attitude error δR_ij in the IMU pre-integration from time i to time j; δv_ij, δp_ij are respectively the velocity and position errors in the IMU pre-integration from time i to time j; η_k^gd, η_k^ad are the discrete noises of the gyroscope and the accelerometer at time k, equal to the continuous noise multiplied by the square root of the sampling frequency, with units rad/s and m/s² respectively;
Step 3-2, because the expressions of the three pre-integral state quantity errors are complex, direct calculation brings resource waste to a calculation platform, and the recursion formula of the pre-integral state quantity errors is obtained through derivation and simplification:
in the formula (I), the compound is shown in the specification,the attitude error delta R in the IMU pre-integral quantity of the inertial measurement unit from the moment i to the moment j-1 i,j-1 Corresponding lie algebra; delta v i,j-1 ,δp i,j-1 The errors of the speed and the position in the IMU pre-integration quantity of the inertia measurement unit from the moment i to the moment j-1 are respectively;the relative posture of the carrier is from j to j-1;accelerometer measurements at time j-1;defining the accelerometer zero offset at the time i, wherein the offset between two adjacent image frames is unchanged;is the inverse of the Right Jacobian matrix;discrete noise of the gyroscope and the accelerometer at the moment j-1 respectively;
step 3-3, from the error propagation law, the covariance Σ_ij of the IMU pre-integration between two adjacent image frames is:

Σ_ij = A_{j−1} Σ_{i,j−1} A_{j−1}^T + B_{j−1} Σ_η B_{j−1}^T

where A_{j−1}, B_{j−1} are the Jacobians of the error recursion of step 3-2 with respect to the previous error state and the measurement noise; Σ_{i,j−1} is the covariance of the IMU pre-integration from time i to time j−1, with the pre-integration at the initial instant defined as error-free, i.e. Σ_ii is a zero matrix; Σ_η is the covariance of the noise of the IMU measurement values at time j−1, defined as a constant and composed of the calibrated gyroscope and accelerometer noises η_gd, η_ad.
Step 4, predicting pixel coordinates;
the method for predicting the pixel coordinates comprises the following steps:
the rotation matrix ΔR and translation vector Δp of the carrier between two adjacent image frames are computed from the measurement values of the inertial measurement unit and the pre-integration principle, and serve respectively as the relative attitude ΔR_ij and position Δp_ij obtained by IMU pre-integration over the two adjacent image frames i, j;
according to the pose of the carrier in the world frame at the previous image frame time and the relative pose between the two adjacent images, the pose of the carrier in the world coordinate system at the current image frame time is predicted as:

R_wb^j = R_wb^i ΔR_ij
v_w^j = v_w^i + g ΔT + R_wb^i Δv_ij
p_w^j = p_w^i + v_w^i ΔT + ½ g ΔT² + R_wb^i Δp_ij

where R_wb^i, v_w^i, p_w^i are respectively the attitude, velocity, and position of the carrier in the world coordinate system at the previous image frame time, ΔT is the sampling interval of the camera, and g is the earth's gravity;
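The prediction above can be sketched as follows (the gravity vector and the increments are example values; ΔR, Δv, Δp would come from the pre-integration of step 2):

```python
import numpy as np

def predict_pose(R_i, v_i, p_i, dR, dv, dp, dT, g=np.array([0.0, 0.0, -9.81])):
    """Propagate the carrier pose in the world frame from frame i to frame j."""
    R_j = R_i @ dR
    v_j = v_i + g * dT + R_i @ dv
    p_j = p_i + v_i * dT + 0.5 * g * dT ** 2 + R_i @ dp
    return R_j, v_j, p_j

# example: carrier at rest at the origin, pre-integration says "moved 0.1 m along x"
R_j, v_j, p_j = predict_pose(np.eye(3), np.zeros(3), np.zeros(3),
                             np.eye(3), np.zeros(3), np.array([0.1, 0.0, 0.0]),
                             dT=0.1)
```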
the pose of the carrier frame in the world coordinate system at the current image frame time is then inverted into the pose of the world frame relative to the carrier frame:

R_bw = (R_wb^j)^T, t_bw = −(R_wb^j)^T p_w^j
and the pose of the camera relative to the world frame at the current image frame time is predicted using the transformation between the camera frame and the carrier frame calibrated by the visual-inertial SLAM:

R_cw = R_cb R_bw, t_cw = R_cb t_bw + t_cb

where R_cb, t_cb are respectively the rotation matrix and translation vector between the camera frame and the carrier frame;
when the camera model is a pinhole model, the pixel coordinates on the current frame image corresponding to a map point acquired by the binocular camera are:

P_c = R_cw P_w + t_cw
u = f_x X_c / Z_c + c_x, v = f_y Y_c / Z_c + c_y

where f_x, f_y, c_x, c_y are the intrinsic parameters of the camera, P_w is the three-dimensional coordinate of the map point in the world coordinate system, P_c = (X_c, Y_c, Z_c) is the coordinate of the map point in the current-frame camera frame, and u, v are the pixel coordinates corresponding to the map point.
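The inversion, extrinsic composition, and pinhole projection of step 4 can be sketched together (identity extrinsics and example intrinsics are assumed):

```python
import numpy as np

def project_map_point(P_w, R_wb, p_wb, R_cb, t_cb, fx, fy, cx, cy):
    """Predict the pixel of a world-frame map point under the IMU-predicted carrier pose."""
    R_bw = R_wb.T                  # world -> carrier
    t_bw = -R_bw @ p_wb
    R_cw = R_cb @ R_bw             # world -> camera via the calibrated extrinsics
    t_cw = R_cb @ t_bw + t_cb
    P_c = R_cw @ P_w + t_cw        # map point in the current camera frame
    u = fx * P_c[0] / P_c[2] + cx  # pinhole projection
    v = fy * P_c[1] / P_c[2] + cy
    return u, v

# a point 2 m straight ahead of an un-rotated camera lands on the principal point
u, v = project_map_point(np.array([0.0, 0.0, 2.0]), np.eye(3), np.zeros(3),
                         np.eye(3), np.zeros(3), 400.0, 400.0, 320.0, 240.0)
```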
Step 5, determining an adaptive threshold;
method of determining an adaptive threshold:
step 5-1, the covariance Σ_ij of the relative pose between the images is obtained from the IMU pre-integration error and the error propagation law; using the inertial-navigation-aided feature matching principle, this covariance can be propagated to the pixel corresponding to each map point, giving the median error of the predicted pixel position. Because the functions used in the feature matching process involve poses, they are linearized from the Lie-algebra viewpoint during error propagation: the propagation of the IMU pre-integration relative pose error to the pose error of the carrier relative to the world frame at the current image frame time is linearized with respect to the attitude and position, so the derivatives with respect to the relative attitude ΔR_ij and position Δp_ij are computed:
step 5-2, calculate the covariance of the pose of the carrier relative to the world frame at the current image frame time:

where φ is the Lie algebra corresponding to the relative attitude ΔR_ij, J_r^{−1}(φ) is the inverse of the right Jacobian matrix evaluated at φ, and Σ_ij is the covariance of the relative pose obtained by IMU pre-integration;
step 5-3, similarly, from the recursive formula of the pre-integration error and the covariance Σ_ij of the IMU pre-integration between two adjacent image frames, the covariance of the camera pose relative to the world frame is obtained from the covariance of the carrier pose at the current image frame; the predicted camera pose covariance is then propagated through the camera's pinhole projection model to the pixel corresponding to each map point, giving the pixel covariance;
the corresponding relation between the adaptive feature matching search radius and the pixel covariance is as follows:
r=3max{σ u ,σ v }
where σ_0 is the unit-weight error, σ_u, σ_v are respectively the median errors of the pixel coordinates u and v, r is the feature matching search radius, and three times the median error of the pixel coordinate is taken as the limit error.
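The threshold rule r = 3·max{σ_u, σ_v} can be sketched directly from a predicted 2×2 pixel covariance (example values):

```python
import numpy as np

def adaptive_radius(Sigma_uv):
    """Search radius from the predicted-pixel covariance: r = 3 * max(sigma_u, sigma_v)."""
    sigma_u = float(np.sqrt(Sigma_uv[0, 0]))
    sigma_v = float(np.sqrt(Sigma_uv[1, 1]))
    return 3.0 * max(sigma_u, sigma_v)

r = adaptive_radius(np.diag([4.0, 9.0]))  # sigma_u = 2 px, sigma_v = 3 px
```

A precise IMU yields a small pixel covariance and hence a tight search window; a noisy IMU automatically widens it, which is the adaptive behavior the method claims.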
Experimental verification
To verify the feasibility and effectiveness of the method described in this embodiment, experimental analysis was performed using different scene sequences from two public datasets, EuRoC and TUM. The EuRoC dataset was collected by a micro aerial vehicle carrying an MT9V034 camera and an ADIS16448 inertial measurement unit, with a laser tracker or a Vicon motion capture system capturing the in-flight pose in real time as the ground-truth trajectory of the aircraft; the TUM dataset was collected by a cart carrying a uEye camera and a BMI160 inertial measurement unit, with the ground-truth trajectory provided by an OptiTrack motion capture system.
This experiment selects 7 representative sequences from the two public datasets: MH01, V101, V201, corridor1, magistrale1, room1, and slides1. Table 1 gives the specific scene, distance, and accelerometer and gyroscope noise for each sequence:
TABLE 1 sequences selected for the experiments
To verify the usability of the method of this embodiment, the experiment compares the absolute pose errors of the poses output by the ORB-SLAM3 system before and after the improvement on different image sequences from the two datasets, see Table 2; each figure is the mean of the root mean square of the absolute pose error over 10 evaluations per sequence.
TABLE 2 root mean square (m) of absolute pose errors for each sequence pose as a whole
As can be seen from Table 2, on the selected image sequences the absolute trajectory error of the ORB-SLAM3 system improved by the method of this embodiment is better than that of the original algorithm, and the overall pose accuracy of the improved system increases by 24.32% on average. The improved ORB-SLAM3 system runs well on different scene sequences, is more robust, and improves positioning accuracy. This is because the ORB-SLAM3 back end uses nonlinear optimization, and even though the final output pose undergoes three rounds of optimization, it remains strongly correlated with the given initial pose. Since the initial pose is provided by the tracking thread, the method of this embodiment directly improves the pose estimation precision of the tracking thread, provides a more accurate initial value for the SLAM system, and thereby improves the overall positioning precision of the SLAM system.
Example 2
The embodiment provides a visual SLAM feature matching system based on adaptive inertial navigation assistance, which includes:
the first processing module is used for preprocessing image data of the binocular camera;
the second processing module is used for pre-integrating the IMU measurement value;
the third processing module is used for carrying out error analysis on the pre-integration;
the fourth processing module is used for predicting the pixel coordinates;
a fifth processing module for determining an adaptive threshold;
the first processing module, the second processing module, the third processing module, the fourth processing module and the fifth processing module are connected in sequence to complete the adaptive inertial navigation assistance-based visual SLAM feature matching method in embodiment 1.
The specific implementation method of the system of this embodiment refers to the method described in embodiment 1, and is not described herein again.
Example 3
The present embodiment provides a visual SLAM feature matching apparatus, including:
a memory; and
a processor coupled to the memory, the processor configured to perform the adaptive inertial navigation assistance-based visual SLAM feature matching method of embodiment 1 based on instructions stored in the memory.
The memory may include, for example, system memory, fixed non-volatile storage media, and the like. The system memory stores, for example, an operating system, an application program, a Boot Loader (Boot Loader), and other programs.
The apparatus may also include an input-output interface, a network interface, a storage interface, and the like. These interfaces and the memory and processor may be connected by a bus, for example. The input and output interface provides a connection interface for input and output equipment such as a display, a mouse, a keyboard and a touch screen. The network interface provides a connection interface for various networking devices. The storage interface provides a connection interface for external storage equipment such as an SD card and a U disk.
Example 4
The present embodiment provides a non-transitory computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the adaptive inertial navigation assistance-based visual SLAM feature matching method described in embodiment 1.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more non-transitory computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) containing computer program code.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto; any change or substitution that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (9)
1. A visual SLAM feature matching method based on adaptive inertial navigation assistance, characterized by comprising the following steps:
step 1, preprocessing image data of a binocular camera;
step 2, pre-integrating the IMU measurement value of the inertial measurement unit;
step 3, carrying out error analysis on the pre-integration;
step 4, predicting pixel coordinates;
and step 5, determining an adaptive threshold.
2. The adaptive inertial navigation assistance-based visual SLAM feature matching method according to claim 1, wherein the method for preprocessing binocular image data comprises:
step 1-1, obtaining the depth value z of a map point in the binocular camera coordinate system from the similar-triangles principle:

z = f·b / d

where f is the focal length of the camera, b is the baseline of the binocular camera, and d = u_l − u_r is the disparity between the corresponding left and right image pixels;

step 1-2, substituting the depth value z of the feature point in the binocular camera coordinate system into the pinhole camera projection model to obtain the coordinate P_l of the map point in the camera coordinate system:

P_l = [ (u_l − c_x)·z / f_x, (v_l − c_y)·z / f_y, z ]^T

where (u_l, v_l) is the pixel coordinate of the map point P in the image plane coordinate system of the left camera, and f_x, f_y, c_x, c_y are the camera intrinsics.
3. The adaptive inertial navigation assistance-based visual SLAM feature matching method according to claim 2, wherein the method of pre-integrating Inertial Measurement Unit (IMU) measurements comprises:
ΔR_ij = ∏_{k=i}^{j−1} Exp( (ω̃_k − b_g − η_k^gd) Δt )
Δv_ij = Σ_{k=i}^{j−1} ΔR_ik ( ã_k − b_a − η_k^ad ) Δt
Δp_ij = Σ_{k=i}^{j−1} [ Δv_ik Δt + ½ ΔR_ik ( ã_k − b_a − η_k^ad ) Δt² ]

where ΔR_ij, Δv_ij, Δp_ij are respectively the relative attitude, velocity, and position of the two adjacent image frames i, j obtained by pre-integration; ω̃_k, ã_k are the gyroscope and accelerometer measurements at time k, b_g, b_a their zero offsets, and η_k^gd, η_k^ad discrete Gaussian white noise; Δt is the sampling interval of the inertial measurement unit IMU; Exp(·) is the exponential map taking a rotation vector to a rotation matrix in the Lie group space.
4. The adaptive inertial navigation assistance-based visual SLAM feature matching method according to claim 3, wherein the method for performing error analysis on the pre-integration comprises:
step 3-1, the relationship between the pre-integration measurement noise and the IMU noise of the inertial measurement unit is: to first order, the stacked pre-integration error δζ_ij = [δφ_ij^T, δv_ij^T, δp_ij^T]^T is a linear function of the IMU noise terms η_k^gd, η_k^ad, k = i, …, j−1;

where δφ_ij is the Lie algebra corresponding to the attitude error δR_ij in the IMU pre-integration from time i to time j; δv_ij, δp_ij are respectively the velocity and position errors in the IMU pre-integration from time i to time j; η_k^gd, η_k^ad are the discrete noises of the gyroscope and the accelerometer at time k, equal to the continuous noise multiplied by the square root of the sampling frequency, with units rad/s and m/s² respectively;
step 3-2, obtaining the recursive formula of the pre-integration state errors as:

δφ_ij = ΔR_{j,j−1} δφ_{i,j−1} + J_r^{j−1} η_{j−1}^gd Δt
δv_ij = δv_{i,j−1} − ΔR_{i,j−1} (ã_{j−1} − b_a^i)^∧ δφ_{i,j−1} Δt + ΔR_{i,j−1} η_{j−1}^ad Δt
δp_ij = δp_{i,j−1} + δv_{i,j−1} Δt − ½ ΔR_{i,j−1} (ã_{j−1} − b_a^i)^∧ δφ_{i,j−1} Δt² + ½ ΔR_{i,j−1} η_{j−1}^ad Δt²

where δφ_{i,j−1} is the Lie algebra corresponding to the attitude error δR_{i,j−1} in the IMU pre-integration from time i to time j−1; δv_{i,j−1}, δp_{i,j−1} are respectively the velocity and position errors in the IMU pre-integration from time i to time j−1; ΔR_{j,j−1} is the relative attitude of the carrier from time j to time j−1; ã_{j−1} is the accelerometer measurement at time j−1; b_a^i is the accelerometer zero offset at time i, taken as constant between two adjacent image frames; J_r denotes the right Jacobian matrix; (·)^∧ maps a vector to its skew-symmetric matrix; η_{j−1}^gd, η_{j−1}^ad are respectively the discrete noises of the gyroscope and the accelerometer at time j−1;
step 3-3, obtaining the covariance Σ_ij of the IMU pre-integration between two adjacent image frames as:

Σ_ij = A_{j−1} Σ_{i,j−1} A_{j−1}^T + B_{j−1} Σ_η B_{j−1}^T

where A_{j−1}, B_{j−1} are the Jacobians of the error recursion of step 3-2 with respect to the previous error state and the measurement noise; Σ_{i,j−1} is the covariance of the IMU pre-integration from time i to time j−1, with the pre-integration at the initial instant defined as error-free, i.e. Σ_ii is a zero matrix; Σ_η is the covariance of the noise of the IMU measurement values at time j−1, defined as a constant and composed of the calibrated gyroscope and accelerometer noises η_gd, η_ad.
5. The adaptive inertial navigation assistance-based visual SLAM feature matching method according to claim 4, wherein the method for predicting pixel coordinates comprises:
calculating the rotation matrix ΔR and translation vector Δp of the carrier between two adjacent image frames from the measurement values of the inertial measurement unit and the pre-integration principle, which serve respectively as the relative attitude ΔR_ij and position Δp_ij obtained by IMU pre-integration over the two adjacent image frames i, j;
According to the pose of the carrier at the moment of the previous image frame under the world coordinate and the relative pose between two adjacent images, predicting the pose of the carrier at the moment of the current image frame under the world coordinate system as follows:
where R_{wb,i}, v_i and p_i respectively represent the attitude, velocity and position of the carrier in the world coordinate system at the previous image-frame moment, ΔT is the sampling interval of the camera, and g is the earth's gravity;
converting the pose of the current image frame time carrier system under the world coordinate system into the pose of the world system relative to the carrier system as follows:
and predicting the pose of the camera relative to the world system at the moment of the current image frame by using a conversion matrix of the camera system and the carrier system calibrated by visual inertia SLAM:
where R_cb and t_cb are respectively the rotation matrix and translation vector from the carrier system to the camera system, and R̃_cw, t̃_cw is the camera pose predicted by the inertial measurement unit IMU pre-integration;
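The extrinsic conversion in this step amounts to composing the predicted world-to-carrier pose with the calibrated carrier-to-camera transform. A minimal sketch, assuming the common composition R_cw = R_cb R_bw, t_cw = R_cb t_bw + t_cb (the patent's own equation is not rendered, so this form is an assumption):

```python
import numpy as np

def predict_camera_pose(R_bw, t_bw, R_cb, t_cb):
    """Compose the world-to-carrier pose (R_bw, t_bw) with the calibrated
    carrier-to-camera extrinsics (R_cb, t_cb) to obtain the predicted
    world-to-camera pose (R_cw, t_cw)."""
    R_cw = R_cb @ R_bw
    t_cw = R_cb @ t_bw + t_cb
    return R_cw, t_cw
```

With identity extrinsics the camera pose equals the carrier pose, as expected.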
when the camera model is a pinhole camera model, the pixel coordinates corresponding to a map point acquired by the binocular camera on the current frame image are:
P_c = R_cw P_w + t_cw
u = f_x X_c / Z_c + c_x , v = f_y Y_c / Z_c + c_y
where f_x, f_y, c_x, c_y are the intrinsic parameters of the camera, P_w is the three-dimensional coordinate of the map point in the world coordinate system, P_c = (X_c, Y_c, Z_c) is the coordinate of the map point in the current-frame camera system, and u, v are the pixel coordinates corresponding to the map point.
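The pinhole projection used for pixel-coordinate prediction is standard and can be written directly from the legend above (a sketch, with example intrinsics rather than calibrated ones):

```python
import numpy as np

def project_pinhole(P_w, R_cw, t_cw, fx, fy, cx, cy):
    """Transform a world-frame map point into the current camera frame,
    P_c = R_cw P_w + t_cw, then apply the pinhole model:
    u = fx * Xc/Zc + cx, v = fy * Yc/Zc + cy."""
    P_c = R_cw @ P_w + t_cw
    Xc, Yc, Zc = P_c
    return fx * Xc / Zc + cx, fy * Yc / Zc + cy
```

A point on the optical axis projects to the principal point (c_x, c_y), which is a convenient check on the intrinsics.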
6. The adaptive inertial navigation assistance-based visual SLAM feature matching method according to claim 5, wherein the method of determining an adaptive threshold comprises:
step 5-2, calculating the covariance of the relative pose of the current image frame time carrier to the world system:
where φ_bw is the Lie algebra corresponding to R_bw, J_r^{-1} is the inverse of the right Jacobian matrix of φ_bw, and Σ_ij is the covariance of the relative pose obtained by the IMU pre-integration;
step 5-3, calculating the self-adaptive feature matching search radius:
r=3max{σ u ,σ v }
where σ_0 is the unit-weight error, σ_u and σ_v are respectively the median errors of the pixel coordinates u and v, and r is the feature-matching search radius, taken as three times the larger pixel-coordinate error, i.e. the extreme error.
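The adaptive radius of step 5-3 can be sketched by propagating the relative-pose covariance into the pixel plane and applying the three-sigma rule. The projection Jacobian `J_pix` here is a hypothetical input — in the actual method it would come from differentiating the pinhole projection with respect to the pose — and `sigma0` stands in for the unit-weight error:

```python
import numpy as np

def adaptive_search_radius(J_pix, cov_pose, sigma0=1.0):
    """Propagate the 6-DoF relative-pose covariance to the pixel
    coordinates via the 2x6 projection Jacobian J_pix, then take
    r = 3 * max(sigma_u, sigma_v) as the feature-matching search radius."""
    cov_uv = J_pix @ cov_pose @ J_pix.T        # 2x2 pixel-coordinate covariance
    sigma_u = sigma0 * np.sqrt(cov_uv[0, 0])   # median error of pixel coordinate u
    sigma_v = sigma0 * np.sqrt(cov_uv[1, 1])   # median error of pixel coordinate v
    return 3.0 * max(sigma_u, sigma_v)
```

When the pose is well constrained the radius shrinks, tightening the region in which candidate feature matches are searched; when the IMU prediction is uncertain the radius grows accordingly.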
7. A visual SLAM feature matching system based on adaptive inertial navigation assistance, comprising:
the first processing module is used for preprocessing image data of the binocular camera;
the second processing module is used for pre-integrating the IMU measurement value of the inertial measurement unit;
the third processing module is used for carrying out error analysis on the pre-integration;
the fourth processing module is used for predicting the pixel coordinates;
a fifth processing module for determining an adaptive threshold;
the first processing module, the second processing module, the third processing module, the fourth processing module and the fifth processing module are connected in sequence to complete the visual SLAM feature matching method based on adaptive inertial navigation assistance as claimed in any one of claims 1 to 6.
8. A visual SLAM feature matching apparatus, comprising:
a memory; and
a processor coupled to the memory, the processor configured to perform the adaptive inertial navigation assistance-based visual SLAM feature matching method of any one of claims 1-6 based on instructions stored in the memory.
9. A non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the adaptive inertial navigation assistance-based visual SLAM feature matching method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211176803.3A CN115601573A (en) | 2022-09-26 | 2022-09-26 | Visual SLAM feature matching method, system, device and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211176803.3A CN115601573A (en) | 2022-09-26 | 2022-09-26 | Visual SLAM feature matching method, system, device and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115601573A true CN115601573A (en) | 2023-01-13 |
Family
ID=84845262
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211176803.3A Pending CN115601573A (en) | 2022-09-26 | 2022-09-26 | Visual SLAM feature matching method, system, device and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115601573A (en) |
- 2022-09-26 CN CN202211176803.3A patent/CN115601573A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112304307A (en) | Positioning method and device based on multi-sensor fusion and storage medium | |
CN107888828B (en) | Space positioning method and device, electronic device, and storage medium | |
WO2019157925A1 (en) | Visual-inertial odometry implementation method and system | |
US20190204084A1 (en) | Binocular vision localization method, device and system | |
CN108615246B (en) | Method for improving robustness of visual odometer system and reducing calculation consumption of algorithm | |
CN110782494A (en) | Visual SLAM method based on point-line fusion | |
CN109523589B (en) | Design method of more robust visual odometer | |
WO2020221307A1 (en) | Method and device for tracking moving object | |
CN111156984A (en) | Monocular vision inertia SLAM method oriented to dynamic scene | |
CN110726406A (en) | Improved nonlinear optimization monocular inertial navigation SLAM method | |
JP2009266224A (en) | Method and system for real-time visual odometry | |
CN112115874B (en) | Cloud-fused visual SLAM system and method | |
CN111623773B (en) | Target positioning method and device based on fisheye vision and inertial measurement | |
CN114001733B (en) | Map-based consistent efficient visual inertial positioning algorithm | |
JP7173471B2 (en) | 3D position estimation device and program | |
CN111609868A (en) | Visual inertial odometer method based on improved optical flow method | |
CN116205947A (en) | Binocular-inertial fusion pose estimation method based on camera motion state, electronic equipment and storage medium | |
CN114013449A (en) | Data processing method and device for automatic driving vehicle and automatic driving vehicle | |
CN110751123A (en) | Monocular vision inertial odometer system and method | |
CN114494150A (en) | Design method of monocular vision odometer based on semi-direct method | |
CN112179373A (en) | Measuring method of visual odometer and visual odometer | |
CN114812601A (en) | State estimation method and device of visual inertial odometer and electronic equipment | |
CN115601573A (en) | Visual SLAM feature matching method, system, device and readable storage medium | |
CN116934829B (en) | Unmanned aerial vehicle target depth estimation method and device, storage medium and electronic equipment | |
JP2016011951A (en) | Method and device of acquiring positional information of virtual marker, and motion measurement method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||