CN110009681A - A kind of monocular vision odometer position and posture processing method based on IMU auxiliary - Google Patents
- Publication number: CN110009681A
- Application number: CN201910226917.6A
- Authority: CN (China)
- Prior art keywords: IMU, frame image, camera, feature point, pose
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods (G—Physics; G06—Computing; G06T—Image data processing or generation; G06T7/00—Image analysis; G06T7/70—Determining position or orientation of objects or cameras)
- G06T7/85 — Stereo camera calibration (G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration)
- G06T2207/30244 — Camera pose (G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/30—Subject of image; context of image processing)
Abstract
The invention discloses an IMU-assisted pose processing method for a monocular visual odometer. ORB features are extracted from the consecutive images acquired by a monocular camera, while the IMU data between the acquisition times of adjacent frames are pre-integrated. The camera pose at each time is output according to the relation between the number of extracted feature points and a set threshold: if the number of feature points exceeds the threshold, the image is judged to belong to a normal image sequence, the motion pose of the camera is solved with a pure vision model, and the pose is then optimized by minimizing the image reprojection error; if the number of feature points is below the threshold, the image is judged to be a feature-deficient image, and the camera pose over the feature-deficient interval is obtained with an inertial measurement unit (IMU) pre-integration model. The method is fast, robust to motion, and improves the accuracy of mapping.
Description
Technical field
The invention belongs to the fields of computer vision and robotics, and in particular relates to an IMU-assisted pose processing method for a monocular visual odometer.
Background art
Because GPS cannot provide accurate position information indoors, large deviations arise during mobile-robot navigation; simultaneous localization and mapping (SLAM) technology emerged to address this. In an unknown environment without GPS signals, SLAM uses on-board sensors to perceive the surroundings, build a map of the environment, and localize the robot within it, thereby enabling navigation. In SLAM, a monocular camera is cheap compared with binocular and RGB-D cameras and, although its algorithms are computationally complex, it is easy to port to mobile terminals such as phones, so it has wide market application value. The most common sensors on current mobile devices such as phones and tablets are a monocular camera and an IMU; how to use the data of both to achieve accurate localization is a hot topic in current SLAM research.
Under overly fast motion, image blur, or monotonous scenes, traditional pure-vision monocular SLAM suffers from missing feature points, so the front-end visual odometer fails during feature tracking and the motion pose of the camera cannot be computed. To remedy this localization defect of pure-vision monocular SLAM, the invention innovatively proposes an IMU-assisted monocular visual SLAM localization method: when the visual image is unusable, IMU data are pre-integrated to obtain the camera pose information; once the vision front end again extracts enough feature points, the vision model is re-selected. This enhances the robustness of the monocular visual SLAM algorithm, makes the constructed map more accurate, and extends its application scenarios.
Summary of the invention
The invention aims to solve the problem that pure-vision monocular SLAM extracts too few feature points under overly fast motion, image blur, or monotonous scenes, causing the monocular visual odometer to fail to localize, and proposes a SLAM visual-odometer pose processing method that fuses monocular visual information with IMU information.
The technical solution of the invention comprises the following steps:
Step 1: Fix the monocular camera and the IMU module on the same plane to form a camera-IMU sensor. Calibrate the camera-IMU sensor to obtain the sampling periods of the monocular camera and the IMU module, the camera intrinsics K, and the rotation matrix R_w^t from the world coordinate system to the IMU coordinate system at time t; from R_w^t, the rotation matrix R_t^w from the IMU coordinate system to the world coordinate system at time t is obtained, the two rotation matrices being inverses of each other.
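The inverse relation between the two rotation matrices can be checked numerically: for a rotation matrix the inverse equals the transpose. A minimal sketch (the example rotation is arbitrary, not taken from the patent):

```python
import numpy as np

def rotation_z(theta):
    """Rotation about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# R_tw: world -> IMU rotation at time t (an arbitrary example).
R_tw = rotation_z(0.3)
# The IMU -> world rotation is the inverse, i.e. simply the transpose.
R_wt = R_tw.T
assert np.allclose(R_wt @ R_tw, np.eye(3))
```

This is why the calibration only needs to determine one of the two matrices.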
The world coordinate system is obtained from the camera coordinate system and the image coordinate system. Specifically, the camera coordinate system takes the optical center of the camera as its origin, its X and Y axes are parallel to the X and Y axes of the image coordinate system, and the optical axis of the camera is its Z axis; the image coordinate system takes the center of the camera imaging plane as its origin, with X and Y axes parallel to the two perpendicular edges of the imaging plane. Multiplying the camera coordinate system by the camera pose (R, t) yields the world coordinate system. The IMU coordinate system is obtained by calibrating the camera-IMU sensor; it is parallel to, but not coincident with, the camera coordinate system.
The IMU module comprises a three-axis accelerometer, which acquires acceleration measurements, and a three-axis gyroscope, which acquires angular-velocity measurements. The IMU kinetic model in the IMU coordinate system is established as follows:
â_t = a_t + b_{a,t} + R_w^t g^w + n_a
ω̂_t = ω_t + b_{ω,t} + n_ω
where â_t, a_t, b_{a,t} and n_a denote, respectively, the accelerometer measurement at time t, the true acceleration, the acceleration bias, and the acceleration Gaussian white noise (treated as constant); g^w denotes the gravitational acceleration in the world coordinate system w, taken as 9.8 m/s²; and ω̂_t, ω_t, b_{ω,t} and n_ω denote, respectively, the gyroscope measurement at time t, the true angular velocity, the angular-velocity bias, and the angular-velocity Gaussian white noise (treated as constant).
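A minimal numerical sketch of this measurement model (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def imu_measurement(a_true, omega_true, R_tw, g_w, b_a, b_w, n_a, n_w):
    """IMU kinetic model: measurement = true value + bias + noise,
    with gravity rotated into the IMU frame for the accelerometer."""
    a_hat = a_true + b_a + R_tw @ g_w + n_a
    omega_hat = omega_true + b_w + n_w
    return a_hat, omega_hat

# With the IMU frame aligned to the world frame and zero noise/bias,
# a stationary IMU measures exactly gravity and zero rotation rate.
g_w = np.array([0.0, 0.0, 9.8])
zero = np.zeros(3)
a_hat, w_hat = imu_measurement(zero, zero, np.eye(3), g_w, zero, zero, zero, zero)
```

Pre-integration (step 3) inverts this model: it subtracts the bias and the gravity term to recover the motion-induced acceleration.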
Step 2: As shown in Fig. 2, the sampling frequency of the IMU module is much higher than that of the camera, i.e. the interval between two adjacent camera samples contains multiple IMU samples. Therefore, when the camera acquires images and the IMU acquires data, the IMU sampling periods must be aligned with the monocular camera in the time series according to the camera sampling period, to ensure the accuracy of the IMU pre-integration. The monocular camera continuously acquires images and records the acquisition time of every frame; the IMU module outputs the IMU measurement corresponding to each acquisition time (i.e. every frame is matched with the time-aligned IMU measurement). An IMU measurement comprises an acceleration measurement and an angular-velocity measurement;
For every frame acquired by the monocular camera, extract the feature points of the current frame with the ORB feature-extraction algorithm and set a feature-point count threshold. If the total number of feature points in the current frame is below the set threshold, go to step 3; if it exceeds the set threshold, go to step 5;
Step 3: According to the IMU measurements (acceleration and angular-velocity measurements), pre-integrate the true acceleration and true angular velocity between the current frame acquired by the monocular camera and its previous frame, and compute the IMU pose of the current frame in the world coordinate system:
Double integration of the true acceleration gives displacement, single integration of the true acceleration gives velocity, and single integration of the true angular velocity gives the rotation angle; specifically:
p^w_{b_{k+1}} = p^w_{b_k} + v^w_{b_k}·Δt_k + ∬_{t∈[T_k,T_{k+1}]} ( R^w_t (â_t − b_{a,t}) − g^w ) dt²
v^w_{b_{k+1}} = v^w_{b_k} + ∫_{t∈[T_k,T_{k+1}]} ( R^w_t (â_t − b_{a,t}) − g^w ) dt
q^w_{b_{k+1}} = q^w_{b_k} ⊗ ∫_{t∈[T_k,T_{k+1}]} (1/2) Ω(ω̂_t − b_{ω,t}) q^{b_k}_t dt
where t denotes time and [T_k, T_{k+1}] denotes the interval between the acquisition times of the two successive frames, also called the visual-information-missing interval; p^w_{b_{k+1}} and p^w_{b_k} denote the displacement of the IMU module relative to the world frame w at the sampling time b_{k+1} of the current frame (the later of the two frames) and at the sampling time b_k of the previous frame; v^w_{b_{k+1}} and v^w_{b_k} denote the corresponding velocities relative to the world frame; Δt_k denotes the IMU sampling interval; q^w_{b_{k+1}} and q^w_{b_k} denote the quaternion representations of the rotation relative to the world frame w at b_{k+1} and b_k; the operator ⊗ denotes quaternion multiplication and Ω denotes the quaternion-rate operation;
From the displacement p^w_{b_{k+1}} of the IMU module at the current-frame sampling point b_{k+1}, the IMU translation matrix of the IMU pose is obtained; from the rotation quaternion at b_{k+1}, the IMU rotation matrix is obtained; together these give the IMU pose in the world coordinate system;
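Numerically, the pre-integration is a summation over the IMU samples of the interval. A minimal forward-Euler sketch in rotation-matrix form (the text uses quaternions; this substitution and all names are illustrative, not the patent's implementation):

```python
import numpy as np

def skew(w):
    """Antisymmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def preintegrate(p, v, R, imu_samples, dt, g_w, b_a, b_w):
    """Forward-Euler propagation of position p, velocity v and rotation R
    (body-to-world) over the IMU samples between two camera frames."""
    for a_hat, w_hat in imu_samples:
        a_w = R @ (a_hat - b_a) - g_w          # bias-corrected, gravity-removed, world frame
        p = p + v * dt + 0.5 * a_w * dt * dt
        v = v + a_w * dt
        R = R @ (np.eye(3) + skew((w_hat - b_w) * dt))   # first-order rotation increment
        u, _, vt = np.linalg.svd(R)            # re-orthonormalise R
        R = u @ vt
    return p, v, R

# Constant 1 m/s^2 along x for 1 s, no rotation, IMU frame aligned with world:
g = np.array([0.0, 0.0, 9.8])
samples = [(np.array([1.0, 0.0, 9.8]), np.zeros(3))] * 100
p, v, R = preintegrate(np.zeros(3), np.zeros(3), np.eye(3), samples, 0.01, g,
                       np.zeros(3), np.zeros(3))
# v ≈ [1, 0, 0] m/s, p ≈ [0.5, 0, 0] m, as expected from p = a t²/2
```

A production system would use the quaternion update of the equations above and midpoint integration, but the structure of the loop is the same.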
Step 4: Multiply the IMU pose of the current frame by the rotation matrix R_t^w from the IMU coordinate system to the world coordinate system to obtain the camera pose [R | t] of the current frame in the world coordinate system, and output the obtained camera pose [R | t] as the final pose of the current frame. Return to step 2 to judge the feature-point count of the next frame acquired by the monocular camera, until a frame with more feature points than the threshold is detected again, whereupon the pure-vision model of step 5 is entered.
Step 5: Enter the pure-vision preliminary estimation of the camera pose: compute descriptors for the feature points of the current frame with the BRIEF algorithm and match them against the next frame to obtain the set of matched feature-point pairs of the two frames. From the matched pairs, compute the fundamental matrix F of the two frames and decompose F by SVD to obtain the first camera pose of the current frame; then, from the matched feature-point pairs and the camera intrinsics K, compute the homography matrix H and decompose it to obtain the second camera pose of the current frame. Both the first and the second camera pose comprise a rotation matrix R and a camera translation matrix t;
Step 6: Compute the reprojection errors of the first camera pose and the second camera pose separately, and optimize to obtain the corresponding first and second minimized reprojection-error values. As shown in Fig. 3, the space point P is obtained from the feature point p_1 of the previous frame and the camera intrinsics K; p'_1 is the feature point in the later frame matched with p_1, and the projection of P into the later frame is p̂'_1. When optimizing the pose, the error e = p'_1 − p̂'_1 should be minimized; hence, for n matched feature-point pairs, the minimized reprojection error is computed as:
ξ* = argmin_ξ (1/2) Σ_{i=1}^{n} ‖ p'_i − (1/s_i) K exp(ξ^) P_i ‖²₂
where i is the index of a feature-point pair, n the total number of pairs, p'_i the feature point in the later frame matched with the feature point p_i of the previous frame, P_i the space point corresponding to p_i, ξ* the minimized error of the first or second camera pose, ξ the Lie-algebra representation of the first or second camera pose, s_i the scale factor of the image point, K the camera intrinsics, ξ^ the antisymmetric (hat) matrix of ξ, ‖·‖²₂ the squared Euclidean norm, and argmin_ξ the minimization over ξ;
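The quantity being minimized can be evaluated directly. A sketch of the summand, with exp(ξ^) replaced by an explicit (R, t) pair, which is equivalent for evaluating the error (names and the example intrinsics are illustrative):

```python
import numpy as np

def reprojection_error(K, R, t, points_3d, pts_obs):
    """Sum of squared reprojection errors: project each space point P_i with
    pose (R, t) and intrinsics K, compare with the matched pixel p'_i.
    The scale factor s_i is the depth of the projected point."""
    total = 0.0
    for P, p_obs in zip(points_3d, pts_obs):
        pc = K @ (R @ P + t)
        uv = pc[:2] / pc[2]                    # divide by s_i (depth)
        total += float(np.sum((np.asarray(p_obs) - uv) ** 2))
    return total

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts3d = [np.array([0.1, -0.2, 2.0]), np.array([-0.3, 0.1, 3.0])]
# observations generated with the identity pose -> error is zero there
obs = [(K @ P)[:2] / P[2] for P in pts3d]
err_at_truth = reprojection_error(K, np.eye(3), np.zeros(3), pts3d, obs)
```

An optimizer (e.g. Gauss-Newton over the Lie algebra, as the formula suggests) would then search for the pose minimizing this sum.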
Step 7: Compare the first and second minimized reprojection-error values, which makes the current camera position more accurate, and output the camera pose with the smaller error value as the final pose of the current frame. Return to step 2 to judge the feature-point count of the next frame, until all frames acquired by the monocular camera have been processed.
The fundamental matrix F of step 5 is a 3×3 matrix; at least 8 pairs of matched feature points are needed to solve F. The homography matrix H of step 5 is also a 3×3 matrix; at least 4 pairs of matched points are needed to solve it. In step 5, the homography matrix is decomposed by an analytic or a numerical method.
The quaternion operation Ω is specifically:
Ω(ω) = [ −⌊ω⌋×  ω ; −ω^T  0 ],   ⌊ω⌋× = [ 0  −(ω)_z  (ω)_y ; (ω)_z  0  −(ω)_x ; −(ω)_y  (ω)_x  0 ]
where (ω)_x, (ω)_y and (ω)_z denote the first, second and third components of the true angular velocity.
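A sketch of Ω and the skew matrix, assuming a scalar-last quaternion layout [x, y, z, w] (the patent's figures do not fix the convention, so this choice is an assumption; all names are illustrative):

```python
import numpy as np

def skew(w):
    """Antisymmetric matrix built from the components (w)_x, (w)_y, (w)_z."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def omega_matrix(w):
    """Omega(w) for the quaternion derivative q_dot = 0.5 * Omega(w) @ q,
    quaternion stored scalar-last [x, y, z, w]."""
    O = np.zeros((4, 4))
    O[:3, :3] = -skew(w)
    O[:3, 3] = w
    O[3, :3] = -w
    return O

# Integrating a constant 1 rad/s rotation about z for 1 s yields a rotation
# of about 1 rad: q -> roughly [0, 0, sin(0.5), cos(0.5)].
q = np.array([0.0, 0.0, 0.0, 1.0])
w = np.array([0.0, 0.0, 1.0])
dt = 1e-3
for _ in range(1000):
    q = q + 0.5 * omega_matrix(w) @ q * dt
    q = q / np.linalg.norm(q)   # keep the quaternion unit-length
```

The normalization after each Euler step compensates for the drift of the first-order integration, mirroring the continuous integral in the pre-integration formula.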
The feature-point count threshold is set greater than 8.
The invention extracts ORB features from the consecutive images acquired by the camera while pre-integrating the IMU data between the acquisition times of adjacent frames, and outputs the camera pose at each time according to the relation between the number of extracted feature points and the set threshold: if the number of feature points exceeds the threshold, the image is judged to belong to a normal image sequence, the camera motion pose is solved with the pure vision model, and the pose is then optimized using the image reprojection error; if the number of feature points is below the threshold, the image is judged to be a feature-deficient image and the camera pose over the feature-deficient interval is obtained with the inertial measurement unit (IMU) pre-integration model.
Beneficial effects of the invention:
The invention solves the problem that pure-vision monocular SLAM extracts too few feature points under overly fast motion, image blur, or monotonous scenes, causing the monocular visual odometer to fail to localize; it gives the whole SLAM system stronger robustness and environmental adaptability. The method is fast, robust to motion, and improves the accuracy of mapping.
Brief description of the drawings
Fig. 1 is the overall flow chart of the algorithm of the invention;
Fig. 2 is a schematic diagram of the alignment of the monocular camera and the IMU module in sampling time;
Fig. 3 is a schematic diagram of the reprojection-error model.
Specific embodiment
The invention is further described below with reference to the accompanying drawings.
As shown in Fig. 1, a specific embodiment of the invention is as follows:
S1: The consecutive image sequence of the monocular RGB camera and the IMU data are both acquired by the integrated sensor Intel RealSense D435i. Before use, the Intel RealSense D435i must be calibrated: in the embodiment, Zhang Zhengyou's calibration method is used to determine the intrinsics K of the RGB camera of the Intel RealSense D435i, and the kalibr tool is used to determine the rotation matrix from the RGB-camera coordinate system to the IMU coordinate system.
S2: As shown in Fig. 2, the sampling frequency of the IMU is much higher than that of the camera, so when the camera acquires images and the IMU acquires data, the two must be aligned in the time series; the alignment is performed in the program according to the sampling periods of the two.
S3: Feature points are extracted from every frame of the consecutive image sequence obtained in S1 with the ORB feature-extraction algorithm. ORB was proposed on the basis of the well-known FAST feature detector and the BRIEF feature descriptor; its running time is far better than SIFT and SURF, so it can be applied to real-time feature detection, and it has the advantage of scale and rotation invariance. ORB feature detection is mainly divided into two steps: (1) oriented-FAST feature-point detection; (2) BRIEF feature description.
(1) Oriented-FAST feature-point detection judges the 16 pixels on a circle around a point of interest. Specifically, the gray value of the point is compared with the gray values of the 16 pixels in its circular neighborhood; if the differences between the pixel value of the point and the pixel values of n consecutive pixels on the circle exceed a set threshold t, the point is judged to be a corner. In practice, n = 12 and t = 2 give the best detection results.
(2) ORB description: for every frame, the BRIEF feature descriptor is computed on the basis of the corners obtained by oriented-FAST detection. Specifically, N point pairs are chosen around the corner in a fixed pattern, and the comparison results of these N pairs, combined and represented in binary, form the descriptor. If the Hamming distance between the descriptors of two consecutive frames is below a set threshold, the corner is taken as an extracted feature point. The Hamming distance here measures the similarity of two descriptors (binary strings): each bit of the two binary strings is compared, and the Hamming distance increases by 1 for every differing bit.
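The Hamming distance on binary descriptors reduces to XOR plus popcount; a sketch over byte strings (BRIEF descriptors are typically 256 bits, i.e. 32 bytes; the two-byte example is illustrative):

```python
def hamming_distance(d1: bytes, d2: bytes) -> int:
    """Number of differing bits between two equal-length binary descriptors."""
    assert len(d1) == len(d2)
    return sum(bin(b1 ^ b2).count("1") for b1, b2 in zip(d1, d2))

# 1 bit differs in the first byte, all 8 bits in the second -> distance 9
d_a = bytes([0b10110010, 0b00001111])
d_b = bytes([0b10110011, 0b11110000])
```

Because it is a few XORs per descriptor, this comparison is what makes brute-force BRIEF matching fast enough for real-time use.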
S4: The number of feature points of every frame is obtained by the ORB feature-extraction algorithm, and it is judged whether this number exceeds the set feature-point count threshold. The threshold is set to anticipate failure of pure visual localization: it allows the failure of the pure-vision monocular visual odometer to be predicted in advance, because too few feature points are insufficient to compute a credible camera pose.
S5: If the number of feature points exceeds the feature-point count threshold, the image is judged to belong to a normal image sequence; the camera motion pose is computed with the pure vision model and then optimized using the image reprojection error. The camera pose is obtained by estimating the fundamental matrix F and the homography matrix H: if the camera both rotates and translates, the fundamental matrix F is computed; if the camera only rotates without translating, the homography matrix H is computed.
The fundamental matrix F is a 3×3 matrix. Considering a single pair of matched feature points, p_1 = [u_1, v_1, 1]^T and p'_1 = [u'_1, v'_1, 1]^T, where T denotes transposition, the constraint is:
p'_1^T F p_1 = 0.
Since pixel coordinates are expressed in homogeneous coordinates (the third component is the constant 1), F has 8 unknown elements up to scale; that is, at least 8 pairs of matched feature points are needed to solve F. Hence the minimum value of the feature-point count threshold is 8.
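The 8-point solution can be sketched with the SVD: stack one epipolar constraint per match and take the null vector. A minimal, un-normalized sketch (Hartley normalization is advisable with real pixel data; all names and the synthetic test data are illustrative):

```python
import numpy as np

def eight_point_fundamental(pts1, pts2):
    """Linear eight-point estimate of F from >= 8 matches p2^T F p1 = 0.
    pts1, pts2: (N, 2) arrays of corresponding image points."""
    rows = [[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
            for (u1, v1), (u2, v2) in zip(pts1, pts2)]
    _, _, vt = np.linalg.svd(np.asarray(rows))
    F = vt[-1].reshape(3, 3)          # null vector of the constraint matrix
    u, s, vt2 = np.linalg.svd(F)      # enforce the rank-2 constraint
    s[2] = 0.0
    return u @ np.diag(s) @ vt2

# Synthetic check in normalised image coordinates (i.e. K = identity):
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(-1, 1, 10), rng.uniform(-1, 1, 10),
                     rng.uniform(4, 6, 10)])          # points in front of camera 1
th = 0.1
R = np.array([[np.cos(th), 0, np.sin(th)], [0, 1, 0], [-np.sin(th), 0, np.cos(th)]])
t = np.array([0.5, 0.1, 0.0])
p1 = X[:, :2] / X[:, 2:3]
X2 = X @ R.T + t
p2 = X2[:, :2] / X2[:, 2:3]
F = eight_point_fundamental(p1, p2)   # p2_hom @ F @ p1_hom ~ 0 for every match
```

The rank-2 projection is what makes the estimate a valid fundamental matrix (all epipolar lines pass through a common epipole).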
The homography matrix H is a 3×3 matrix. Considering a single pair of matched feature points, p_1 = [u_1, v_1, 1]^T and p'_1 = [u'_1, v'_1, 1]^T, the constraint is:
p'_1 = H p_1.
It follows that the homography matrix H, which has 8 degrees of freedom, needs at least 4 pairs of matched feature points to be solved, so from H the minimum value of the feature-point count threshold is 4. The feature-point count threshold should therefore be set to at least 8; extensive experiments with the invention show that setting the threshold to 45 is the most reasonable.
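Similarly, the homography can be solved by the direct linear transform with two linear rows per match; a minimal sketch (names and the example H are illustrative):

```python
import numpy as np

def dlt_homography(pts1, pts2):
    """Direct linear transform: H with p2 ~ H p1 from >= 4 matches.
    pts1, pts2: (N, 2) arrays of corresponding points."""
    rows = []
    for (u1, v1), (u2, v2) in zip(pts1, pts2):
        rows.append([u1, v1, 1, 0, 0, 0, -u2*u1, -u2*v1, -u2])
        rows.append([0, 0, 0, u1, v1, 1, -v2*u1, -v2*v1, -v2])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]                 # fix the projective scale

# Recover a known homography from 5 exact correspondences:
H_true = np.array([[1.2, 0.1, 3.0], [0.0, 0.9, -2.0], [0.05, 0.0, 1.0]])
pts1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 2.0]])
ph = np.column_stack([pts1, np.ones(len(pts1))]) @ H_true.T
pts2 = ph[:, :2] / ph[:, 2:3]
H = dlt_homography(pts1, pts2)
```

Each match contributes 2 equations and H has 8 degrees of freedom, which is exactly the 4-pair minimum stated above.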
By solving the fundamental matrix and the homography matrix and decomposing each of them, the initial camera pose information to be optimized, i.e. the rotation matrix (R) and the translation matrix (t), is obtained.
The initial pose information to be optimized can then be refined by minimizing the reprojection error. As shown in Fig. 3, the space point P is obtained from the feature point p_1 of the previous frame and the camera intrinsics K; p'_1 is the feature point in the later frame matched with p_1, and the projection of P into the later frame is p̂'_1. When optimizing the pose, the error e = p'_1 − p̂'_1 should be minimized; hence, for n matched feature-point pairs, the minimized reprojection error is computed as:
ξ* = argmin_ξ (1/2) Σ_{i=1}^{n} ‖ p'_i − (1/s_i) K exp(ξ^) P_i ‖²₂
where i is the index of a feature-point pair, n the total number of pairs, p'_i the feature point in the later frame matched with the feature point p_i of the previous frame, P_i the space point corresponding to p_i, ξ* the minimized error of the camera pose, ξ the Lie-algebra representation of the camera pose, s_i the scale factor of the image point, K the camera intrinsics, ξ^ the antisymmetric (hat) matrix of ξ, ‖·‖²₂ the squared Euclidean norm, and argmin_ξ the minimization over ξ.
S6: If the number of feature points is below the feature-point count threshold, the image is judged to be a feature-deficient image, and pure visual localization would cause the pose estimation to fail. Therefore, on the basis of the IMU kinetic model and the camera kinematic equations, the inertial measurement unit (IMU) pre-integration model is used to obtain the camera pose over the feature-deficient interval.
The kinetic model established for the IMU in S1 is:
â_t = a_t + b_{a,t} + R_w^t g^w + n_a
ω̂_t = ω_t + b_{ω,t} + n_ω
where â_t, a_t, b_{a,t} and n_a denote, respectively, the observation of the IMU accelerometer at time t, the true value, the bias, and the Gaussian white noise of the accelerometer measurement (treated as constant); g^w is the gravitational acceleration in the world coordinate system, defaulting to 9.8 m/s²; R_w^t is the rotation from the world coordinate system to the IMU coordinate system at time t, obtained by the camera-IMU extrinsic calibration. Similarly, ω̂_t, ω_t, b_{ω,t} and n_ω denote, respectively, the observation of the IMU gyroscope at time t, the true value, the bias, and the Gaussian white noise of the gyroscope (treated as constant). With the initialization of the above constant parameters, the IMU-module initialization in Fig. 1 is completed.
S7: The IMU kinetic model of S6 is integrated: double integration of acceleration gives displacement, single integration of acceleration gives velocity, and single integration of angular velocity gives angle. In the visual-information-missing interval [T_k, T_{k+1}], the IMU information is pre-integrated as follows:
p^w_{b_{k+1}} = p^w_{b_k} + v^w_{b_k}·Δt_k + ∬_{t∈[T_k,T_{k+1}]} ( R^w_t (â_t − b_{a,t}) − g^w ) dt²
v^w_{b_{k+1}} = v^w_{b_k} + ∫_{t∈[T_k,T_{k+1}]} ( R^w_t (â_t − b_{a,t}) − g^w ) dt
q^w_{b_{k+1}} = q^w_{b_k} ⊗ ∫_{t∈[T_k,T_{k+1}]} (1/2) Ω(ω̂_t − b_{ω,t}) q^{b_k}_t dt
where t denotes time and [T_k, T_{k+1}] denotes the interval between the acquisition times of the two successive frames, also called the visual-information-missing interval; p^w_{b_{k+1}} and p^w_{b_k} denote the displacement of the IMU module relative to the world frame w at the sampling time b_{k+1} of the current frame (the later of the two frames) and at the sampling time b_k of the previous frame; v^w_{b_{k+1}} and v^w_{b_k} denote the corresponding velocities relative to the world frame; Δt_k denotes the IMU sampling interval; q^w_{b_{k+1}} and q^w_{b_k} denote the quaternion representations of the rotation relative to the world frame w at b_{k+1} and b_k; the operator ⊗ denotes quaternion multiplication and Ω denotes the quaternion-rate operation. From the displacement p^w_{b_{k+1}} at the current-frame sampling point b_{k+1}, the IMU translation matrix of the IMU pose is obtained; from the rotation quaternion at b_{k+1}, the IMU rotation matrix is obtained; together these give the IMU pose in the world coordinate system.
S8: The IMU pose computed in S7 is transformed between coordinate systems; the rotation matrix of the coordinate transform is the one obtained by calibration in S1. Multiplying the pre-integration result of S7 by this transform rotation matrix yields the camera pose information.
S9: For every frame of the monocular camera, the final output camera pose is determined by the number of feature points extracted by the ORB algorithm of S3: either the camera motion pose is solved with the pure vision model, or the inertial measurement unit (IMU) pre-integration model is used to obtain the camera pose over the feature-deficient interval. The above steps are repeated for all frames of the monocular camera, and the camera pose corresponding to every frame is output.
For the cases where image blur, overly fast motion, or a monotonous environment may cause the feature-point tracking of pure-vision monocular SLAM to fail, and unlike prior-art feature-point tracking methods that fuse IMU information with camera information, the invention uses the IMU as an auxiliary element and outputs the camera pose information case by case, so that the monocular visual odometer is more robust to motion and more adaptable to the environment. The above is only a preferred embodiment of the invention. It should be pointed out that, for those of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the invention, and these improvements and modifications should also be regarded as falling within the protection scope of the invention.
Claims (6)
1. An IMU-assisted pose processing method for a monocular visual odometer, characterized by comprising the following steps:
Step 1: fixing a monocular camera and an inertial measurement unit (IMU) module on the same plane to form a camera-IMU sensor; calibrating the camera-IMU sensor to obtain the sampling periods of the monocular camera and the IMU module, the camera intrinsics K, and the rotation matrix R_w^t from the world coordinate system to the IMU coordinate system at time t; and obtaining from R_w^t the rotation matrix R_t^w from the IMU coordinate system to the world coordinate system at time t;
the IMU module comprising a three-axis accelerometer that acquires acceleration measurements and a three-axis gyroscope that acquires angular-velocity measurements, the IMU kinetic model in the IMU coordinate system being established as:
â_t = a_t + b_{a,t} + R_w^t g^w + n_a
ω̂_t = ω_t + b_{ω,t} + n_ω
where â_t, a_t, b_{a,t} and n_a denote, respectively, the accelerometer measurement at time t, the true acceleration, the acceleration bias and the acceleration Gaussian white noise; g^w denotes the gravitational acceleration of the world coordinate system w, taken as 9.8 m/s²; and ω̂_t, ω_t, b_{ω,t} and n_ω denote, respectively, the gyroscope measurement at time t, the true angular velocity, the angular-velocity bias and the angular-velocity Gaussian white noise;
Step 2: continuously acquiring images with the monocular camera and recording the acquisition time of every frame, the IMU module outputting the IMU measurement corresponding to each acquisition time, an IMU measurement comprising an acceleration measurement and an angular-velocity measurement; for every frame acquired by the monocular camera, extracting the feature points of the current frame with the ORB feature-extraction algorithm and setting a feature-point count threshold; if the total number of feature points in the current frame is below the set threshold, proceeding to step 3; if it exceeds the set threshold, proceeding to step 5;
Step 3: monocular camera acquire according to the acceleration measurement of IMU output and angular velocity measurement value current frame image and
Acceleration true value and angular speed true value between the previous frame image of present frame carry out pre-integration, and current frame image is calculated and exists
IMU pose under world coordinate system:
In the formula, t denotes the time and [T_k, T_{k+1}] the period between the acquisition times of the two successive camera frames; p^w_{b_{k+1}} and p^w_{b_k} respectively denote the displacement of the IMU module relative to the world coordinate frame w at the current-frame sampling time point b_{k+1} and at the previous-frame sampling time point b_k; v^w_{b_{k+1}} and v^w_{b_k} respectively denote the velocity of the IMU module relative to the world coordinate frame at b_{k+1} and b_k; Δt_k denotes the IMU sampling interval; q^w_{b_{k+1}} and q^w_{b_k} respectively denote the quaternions of the rotation of the IMU module relative to the world coordinate frame w at b_{k+1} and b_k; the operator ⊗ denotes quaternion multiplication, and Ω denotes the quaternion operation defined in claim 4;
From the displacement p^w_{b_{k+1}} of the IMU module at the current-frame sampling time point b_{k+1}, obtain the IMU translation vector of the IMU pose; from the quaternion of the rotation of the IMU module at the current-frame sampling point b_{k+1}, obtain the IMU rotation matrix of the IMU pose; together these give the IMU pose;
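Step 3's propagation of displacement, velocity and orientation over [T_k, T_{k+1}] can be sketched with a plain Euler integrator. This is a simplified stand-in for the pre-integration in the claim: the function and symbol names are ours, and the IMU samples are assumed already de-biased.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1])

def quat_to_rot(q):
    """Rotation matrix of a unit quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def propagate(p, v, q, accels, gyros, dt, g_w=np.array([0.0, 0.0, 9.8])):
    """Euler-integrate displacement p, velocity v and orientation quaternion q
    over one camera interval [T_k, T_k+1] from de-biased IMU samples."""
    for a_b, w_b in zip(accels, gyros):
        a_w = quat_to_rot(q) @ a_b - g_w     # rotate to world, remove gravity
        p = p + v*dt + 0.5*a_w*dt*dt
        v = v + a_w*dt
        half = 0.5 * w_b * dt                # small-angle quaternion increment
        dq = np.array([1.0, half[0], half[1], half[2]])
        q = quat_mult(q, dq)
        q = q / np.linalg.norm(q)            # keep q a unit quaternion
    return p, v, q
```

For a stationary device (accelerometer reading exactly gravity, gyroscope reading zero), the integrated displacement and velocity stay at zero.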
Step 4: multiply the IMU pose of the current frame by the camera-IMU extrinsic rotation matrix to obtain the camera pose [R | t] of the current frame, and output this camera pose [R | t] as the final output pose of the current frame; then return to Step 2 to judge the feature point count of the next frame acquired by the monocular camera.
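Step 4's multiplication by the rotation matrix is the composition of the IMU (body) pose with the camera-IMU extrinsic. A minimal sketch, where the extrinsic names R_bc and p_bc are our assumptions, not the patent's notation:

```python
import numpy as np

def imu_to_camera_pose(R_wb, p_wb, R_bc, p_bc):
    """Compose the IMU (body) pose [R_wb | p_wb] in the world frame with an
    assumed body-to-camera extrinsic [R_bc | p_bc] to get the camera pose
    [R | t] of the current frame."""
    R_wc = R_wb @ R_bc
    t_wc = R_wb @ p_bc + p_wb
    return R_wc, t_wc
```

With an identity extrinsic the camera pose coincides with the IMU pose, which is a quick sanity check.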
Step 5: compute descriptors for the feature points of the current frame with the BRIEF algorithm and match them against the next frame, obtaining the set of matched feature point pairs between the two frames; from this set of matched pairs compute the fundamental matrix F of the two frames, and apply an SVD decomposition to F to obtain the first camera pose of the current frame; then, from the fundamental matrix F and the camera intrinsics K, compute the homography matrix E, and decompose E to obtain the second camera pose of the current frame.
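The fundamental-matrix stage of Step 5 can be sketched with the classical linear 8-point algorithm, consistent with claim 2's minimum of 8 matched pairs. This is a bare-bones illustration without the coordinate normalization and RANSAC outlier rejection a real implementation would add:

```python
import numpy as np

def eight_point(pts1, pts2):
    """Linear 8-point estimate of the fundamental matrix F from >= 8 matched
    feature pairs (u, v); the rank-2 constraint is enforced via SVD, echoing
    the SVD decomposition step in the claim."""
    A = np.array([[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(pts1, pts2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)           # null vector of A, reshaped to 3x3
    U, s, Vt = np.linalg.svd(F)        # enforce rank 2:
    s[2] = 0.0                         # zero the smallest singular value
    return U @ np.diag(s) @ Vt

# With exact correspondences, x2^T F x1 is approximately 0 for every pair.
```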
Step 6: separately compute the reprojection errors of the first camera pose and the second camera pose, and optimize them to obtain the corresponding first minimized reprojection error value and second minimized reprojection error value; the two minimized reprojection errors are computed identically, as follows:
where i is the index of a feature point pair and n the total number of pairs; p'_i is the feature point in the next frame matching feature point p_i of the previous frame; P_i is the spatial point corresponding to the previous-frame feature point p_i; ξ* is the camera pose of minimum error and ξ the Lie algebra representation of the camera pose; s_i is the scale factor of the image point and K the camera intrinsic matrix; ξ^ denotes the antisymmetric matrix of the camera pose ξ; ‖·‖² denotes least squares and argmin denotes the minimization, i.e. ξ* = argmin_ξ Σ_{i=1}^{n} ‖ p'_i − (1/s_i) K exp(ξ^) P_i ‖²;
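The cost Step 6 minimizes (written there with the Lie-algebra exponential exp(ξ^)) can be evaluated, for a fixed candidate pose [R | t], as a plain sum of squared residuals; minimizing it over ξ would then be done with, say, Gauss-Newton. A sketch of the evaluation, with function and variable names of our choosing:

```python
import numpy as np

def reprojection_error(K, R, t, points_3d, points_2d):
    """Sum over i of || p'_i - (1/s_i) K (R P_i + t) ||^2, where the scale
    factor s_i is the projective depth of the transformed point."""
    total = 0.0
    for P_i, p_obs in zip(points_3d, points_2d):
        q = K @ (R @ P_i + t)
        p_proj = q[:2] / q[2]            # divide out the scale factor s_i
        total += float(np.sum((np.asarray(p_obs) - p_proj) ** 2))
    return total
```

A pose that reprojects every spatial point exactly onto its observation yields zero error, and any perturbed pose yields a larger value, which is what the comparison in Step 7 exploits.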
Step 7: compare the first and second minimized reprojection error values, and output the camera pose with the smaller error value as the final output pose of the current frame; then return to Step 2 to judge the feature point count of the next frame, until every frame acquired by the monocular camera has been judged.
2. The IMU-assisted monocular visual odometry pose processing method according to claim 1, characterized in that: the fundamental matrix F is a 3×3 matrix, and at least 8 pairs of matched feature points are required to solve for the fundamental matrix F.
3. The IMU-assisted monocular visual odometry pose processing method according to claim 1, characterized in that: the homography matrix E is a 3×3 matrix, and at least 4 pairs of matched points are required to solve for the homography matrix E.
4. The IMU-assisted monocular visual odometry pose processing method according to claim 1, characterized in that the quaternion operation Ω is specifically:
where (·)_x denotes the first component of the true angular velocity, (·)_y denotes the second component of the true angular velocity, and (·)_z denotes the third component of the true angular velocity.
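The Ω operation of claim 4 (whose formula is an image in the original and is not reproduced here) has a standard form in quaternion kinematics; the version below is that standard form, and its exact sign and ordering convention relative to the patent is an assumption:

```python
import numpy as np

def Omega(w):
    """Standard quaternion-kinematics matrix for q = [w, x, y, z], i.e.
    q_dot = 0.5 * Omega(omega) @ q, built from the first, second and third
    components of the angular velocity. (Convention assumed, not from the
    patent's image.)"""
    wx, wy, wz = w
    return np.array([
        [0.0, -wx, -wy, -wz],
        [ wx, 0.0,  wz, -wy],
        [ wy, -wz, 0.0,  wx],
        [ wz,  wy, -wx, 0.0]])
```

A quick property check: Ω(ω) is antisymmetric for any angular velocity ω.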
5. The IMU-assisted monocular visual odometry pose processing method according to claim 1, characterized in that: the feature point number threshold is greater than 8.
6. The IMU-assisted monocular visual odometry pose processing method according to claim 1, characterized in that: in Step 5, the homography matrix E is decomposed using an analytic method or a numerical method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910226917.6A CN110009681B (en) | 2019-03-25 | 2019-03-25 | IMU (inertial measurement unit) assistance-based monocular vision odometer pose processing method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110009681A true CN110009681A (en) | 2019-07-12 |
CN110009681B CN110009681B (en) | 2021-07-30 |
Family
ID=67167922
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910226917.6A Active CN110009681B (en) | 2019-03-25 | 2019-03-25 | IMU (inertial measurement unit) assistance-based monocular vision odometer pose processing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110009681B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103177468A (en) * | 2013-03-29 | 2013-06-26 | 渤海大学 | Three-dimensional motion object augmented reality registration method based on no marks |
CN104240297A (en) * | 2014-09-02 | 2014-12-24 | 东南大学 | Rescue robot three-dimensional environment map real-time construction method |
CN105931275A (en) * | 2016-05-23 | 2016-09-07 | 北京暴风魔镜科技有限公司 | Monocular and IMU fused stable motion tracking method and device based on mobile terminal |
US20160349379A1 (en) * | 2015-05-28 | 2016-12-01 | Alberto Daniel Lacaze | Inertial navigation unit enhaced with atomic clock |
CN107767425A (en) * | 2017-10-31 | 2018-03-06 | 南京维睛视空信息科技有限公司 | A kind of mobile terminal AR methods based on monocular vio |
CN108615248A (en) * | 2018-04-27 | 2018-10-02 | 腾讯科技(深圳)有限公司 | Method for relocating, device, equipment and the storage medium of camera posture tracing process |
CN108827339A (en) * | 2018-04-10 | 2018-11-16 | 南京航空航天大学 | A kind of efficient visual odometry based on inertia auxiliary |
CN108955718A (en) * | 2018-04-10 | 2018-12-07 | 中国科学院深圳先进技术研究院 | A kind of visual odometry and its localization method, robot and storage medium |
CN109141433A (en) * | 2018-09-20 | 2019-01-04 | 江阴市雷奥机器人技术有限公司 | A kind of robot indoor locating system and localization method |
Non-Patent Citations (3)
Title |
---|
Ding Wendong et al.: "A Survey of Visual Odometry for Mobile Robots", Acta Automatica Sinica *
Zhang Jianyue: "Research on Visual-Inertial SLAM Algorithms Based on Embedded Parallel Processing", China Masters' Theses Full-text Database, Information Science and Technology *
Du Yilong: "Research and Implementation of Indoor Positioning Algorithms for Mobile Robots Based on Vision and Inertia", China Masters' Theses Full-text Database, Information Science and Technology *
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110455301A (en) * | 2019-08-01 | 2019-11-15 | 河北工业大学 | A kind of dynamic scene SLAM method based on Inertial Measurement Unit |
CN110823225A (en) * | 2019-10-29 | 2020-02-21 | 北京影谱科技股份有限公司 | Positioning method and device under indoor dynamic situation |
CN111160362A (en) * | 2019-11-27 | 2020-05-15 | 东南大学 | FAST feature homogenization extraction and IMU-based inter-frame feature mismatching removal method |
CN111091621A (en) * | 2019-12-11 | 2020-05-01 | 东南数字经济发展研究院 | Binocular vision synchronous positioning and composition method, device, equipment and storage medium |
CN111156998B (en) * | 2019-12-26 | 2022-04-15 | 华南理工大学 | Mobile robot positioning method based on RGB-D camera and IMU information fusion |
CN111156998A (en) * | 2019-12-26 | 2020-05-15 | 华南理工大学 | Mobile robot positioning method based on RGB-D camera and IMU information fusion |
CN111583387A (en) * | 2020-04-21 | 2020-08-25 | 北京鼎路科技有限公司 | Method and system for three-dimensional reconstruction of outdoor scene of unmanned aerial vehicle |
CN111595362A (en) * | 2020-06-05 | 2020-08-28 | 联想(北京)有限公司 | Parameter calibration method and device for inertial measurement unit and electronic equipment |
CN111795686A (en) * | 2020-06-08 | 2020-10-20 | 南京大学 | Method for positioning and mapping mobile robot |
CN111795686B (en) * | 2020-06-08 | 2024-02-02 | 南京大学 | Mobile robot positioning and mapping method |
CN113759384A (en) * | 2020-09-22 | 2021-12-07 | 北京京东乾石科技有限公司 | Method, device, equipment and medium for determining pose conversion relation of sensor |
CN113759384B (en) * | 2020-09-22 | 2024-04-05 | 北京京东乾石科技有限公司 | Method, device, equipment and medium for determining pose conversion relation of sensor |
CN112129287A (en) * | 2020-09-24 | 2020-12-25 | 北京华捷艾米科技有限公司 | Method and related device for processing based on visual inertial odometer |
CN114370882A (en) * | 2020-10-14 | 2022-04-19 | 蘑菇车联信息科技有限公司 | Method and related device for realizing SLAM positioning based on monocular automobile data recorder |
CN112013858B (en) * | 2020-10-16 | 2021-07-02 | 北京猎户星空科技有限公司 | Positioning method, positioning device, self-moving equipment and storage medium |
WO2022078513A1 (en) * | 2020-10-16 | 2022-04-21 | 北京猎户星空科技有限公司 | Positioning method and apparatus, self-moving device, and storage medium |
CN112380966B (en) * | 2020-11-12 | 2023-06-02 | 西安电子科技大学 | Monocular iris matching method based on feature point re-projection |
CN112380966A (en) * | 2020-11-12 | 2021-02-19 | 西安电子科技大学 | Monocular iris matching method based on feature point reprojection |
CN112731503B (en) * | 2020-12-25 | 2024-02-09 | 中国科学技术大学 | Pose estimation method and system based on front end tight coupling |
CN112731503A (en) * | 2020-12-25 | 2021-04-30 | 中国科学技术大学 | Pose estimation method and system based on front-end tight coupling |
CN112837373A (en) * | 2021-03-03 | 2021-05-25 | 福州视驰科技有限公司 | Multi-camera pose estimation method without feature point matching |
CN112837373B (en) * | 2021-03-03 | 2024-04-26 | 福州视驰科技有限公司 | Multi-camera pose estimation method without feature point matching |
CN112907633B (en) * | 2021-03-17 | 2023-12-01 | 中国科学院空天信息创新研究院 | Dynamic feature point identification method and application thereof |
CN112907633A (en) * | 2021-03-17 | 2021-06-04 | 中国科学院空天信息创新研究院 | Dynamic characteristic point identification method and application thereof |
CN113155140A (en) * | 2021-03-31 | 2021-07-23 | 上海交通大学 | Robot SLAM method and system used in outdoor characteristic sparse environment |
CN113155140B (en) * | 2021-03-31 | 2022-08-02 | 上海交通大学 | Robot SLAM method and system used in outdoor characteristic sparse environment |
CN113220017A (en) * | 2021-04-16 | 2021-08-06 | 同济大学 | Underground unmanned aerial vehicle flight method and system |
CN113256711B (en) * | 2021-05-27 | 2024-03-12 | 南京航空航天大学 | Pose estimation method and system of monocular camera |
CN113256711A (en) * | 2021-05-27 | 2021-08-13 | 南京航空航天大学 | Pose estimation method and system of monocular camera |
CN113390408A (en) * | 2021-06-30 | 2021-09-14 | 深圳市优必选科技股份有限公司 | Robot positioning method and device, robot and storage medium |
CN113516714A (en) * | 2021-07-15 | 2021-10-19 | 北京理工大学 | Visual SLAM method based on IMU pre-integration information acceleration feature matching |
CN113628284B (en) * | 2021-08-10 | 2023-11-17 | 深圳市人工智能与机器人研究院 | Pose calibration data set generation method, device and system, electronic equipment and medium |
CN113628284A (en) * | 2021-08-10 | 2021-11-09 | 深圳市人工智能与机器人研究院 | Pose calibration data set generation method, device and system, electronic equipment and medium |
CN114593735A (en) * | 2022-01-26 | 2022-06-07 | 奥比中光科技集团股份有限公司 | Pose prediction method and device |
CN114593735B (en) * | 2022-01-26 | 2024-05-31 | 奥比中光科技集团股份有限公司 | Pose prediction method and device |
CN114782550A (en) * | 2022-04-25 | 2022-07-22 | 高德软件有限公司 | Camera calibration method, device, electronic equipment and program product |
CN116309885B (en) * | 2023-05-24 | 2023-09-01 | 同致电子科技(厦门)有限公司 | Vehicle-mounted camera online calibration method based on visual odometer |
CN116309885A (en) * | 2023-05-24 | 2023-06-23 | 同致电子科技(厦门)有限公司 | Vehicle-mounted camera online calibration method based on visual odometer |
CN117237417A (en) * | 2023-11-13 | 2023-12-15 | 南京耀宇视芯科技有限公司 | System for realizing optical flow tracking based on image and imu data hardware |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110009681A (en) | A kind of monocular vision odometer position and posture processing method based on IMU auxiliary | |
CN112894832B (en) | Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium | |
CN108765498B (en) | Monocular vision tracking, device and storage medium | |
EP2959315B1 (en) | Generation of 3d models of an environment | |
CN104704384B (en) | Specifically for the image processing method of the positioning of the view-based access control model of device | |
CN109166149A (en) | A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU | |
CN108406731A (en) | A kind of positioning device, method and robot based on deep vision | |
Clipp et al. | Robust 6dof motion estimation for non-overlapping, multi-camera systems | |
CN109141433A (en) | A kind of robot indoor locating system and localization method | |
CN111968228B (en) | Augmented reality self-positioning method based on aviation assembly | |
CN108711166A (en) | A kind of monocular camera Scale Estimation Method based on quadrotor drone | |
EP3114647A2 (en) | Method and system for 3d capture based on structure from motion with simplified pose detection | |
CN109596121B (en) | Automatic target detection and space positioning method for mobile station | |
CN111609868A (en) | Visual inertial odometer method based on improved optical flow method | |
CN208751577U (en) | A kind of robot indoor locating system | |
CN208323361U (en) | A kind of positioning device and robot based on deep vision | |
CN110648362B (en) | Binocular stereo vision badminton positioning identification and posture calculation method | |
CN110751123B (en) | Monocular vision inertial odometer system and method | |
CN108413917A (en) | Non-contact three-dimensional measurement system, non-contact three-dimensional measurement method and measurement device | |
CN112179373A (en) | Measuring method of visual odometer and visual odometer | |
Zingoni et al. | Real-time 3D reconstruction from images taken from an UAV | |
CN113487674A (en) | Human body pose estimation system and method | |
Ait-Aider et al. | Exploiting rolling shutter distortions for simultaneous object pose and velocity computation using a single view | |
CN111145267A (en) | IMU (inertial measurement unit) assistance-based 360-degree panoramic view multi-camera calibration method | |
CN114429487A (en) | Robot visual feature matching and positioning method and device based on plane motion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||