CN110095116A - A localization method combining LIFT-based vision positioning with inertial navigation - Google Patents


Info

Publication number
CN110095116A
CN110095116A (application CN201910355769.8A)
Authority
CN
China
Prior art keywords
inertial navigation
coordinate system
acceleration
alignment
Prior art date
Legal status
Pending
Application number
CN201910355769.8A
Other languages
Chinese (zh)
Inventor
罗笑南
李红帅
钟艳如
杨兵
李一媛
李芳
Current Assignee
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN201910355769.8A
Publication of CN110095116A


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation by using measurements of speed or acceleration
    • G01C21/12 - Navigation executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Inertial navigation combined with non-inertial navigation instruments
    • G01C21/20 - Instruments for performing navigational calculations
    • G01C21/206 - Instruments specially adapted for indoor navigation

Abstract

The invention discloses a localization method combining LIFT-based vision positioning with inertial navigation. The method first uses a Kinect camera to collect image data and obtain the scene information of the surroundings, then processes the images with the LIFT deep network architecture to acquire the relevant parameters and realize vision positioning. At the same time, a gyroscope and an accelerometer mounted on the motion carrier measure angular velocity and acceleration, from which the three-axis velocity, spatial position and three attitude angles are computed, realizing the inertial navigation positioning. Finally, the data measured by the LIFT deep network architecture and by the inertial navigation are fused and calibrated, realizing the combined positioning based on LIFT vision positioning and inertial navigation and yielding an optimal pose-trajectory result. LIFT is a new deep network architecture that unifies the three steps of feature detection, orientation estimation and feature description in a single pipeline while maintaining end-to-end differentiability.

Description

A localization method combining LIFT-based vision positioning with inertial navigation
Technical field
The present invention relates to the field of positioning technology, and specifically to a localization method combining LIFT-based vision positioning with inertial navigation.
Background technique
With the development of China's economy, cities contain more and more high-rise buildings. According to surveys, people spend about 90% of their time indoors, so the safety of indoor life is very important. For example, when a fire breaks out and firefighters enter a high-rise building for rescue, each carrying a sensor, the commander at the scene can know their positions when an emergency occurs and take the correct rescue measures.
Inertial navigation positioning is based on inertial device measurements: a gyroscope and an accelerometer on the motion carrier measure angular velocity and acceleration, and the three-axis velocity, spatial position and three attitude angles of the carrier are computed from the measurements at successive instants. Its advantage is autonomy: the predicted motion trajectory is not influenced by external factors. Its disadvantage is that errors accumulate as the running time grows.
Vision positioning realizes localization from image sequences: feature points are extracted from the acquired images and matched, the matched feature point sets are used to solve the transformation between different frames, and the motion parameters are then estimated. Compared with other sensors, a visual sensor can capture the scene information of the running environment, which is helpful for navigation; its disadvantage is sensitivity to the external environment and to the sensor itself: once the acquired images are blurred or mismeasured, positioning errors follow. This invention therefore adopts Learning Invariant Feature Transformation (LIFT), a new deep network architecture consisting of a detector, an orientation estimator and a descriptor. The detector can integrate the strengths of existing feature detection methods: for example, finding corners quickly with machine-learning techniques, detecting blobs in scale space as SIFT does, accelerating the processing with Haar filters as SURF does, and enhancing robustness to illumination changes with edges and junction corner points as SFOP does.
Summary of the invention
The present invention provides a localization method combining LIFT-based vision positioning with inertial navigation. The method performs vision positioning with the LIFT deep network architecture, combines it with the motion parameters acquired by the inertial navigation positioning through extended Kalman filtering for joint optimization, and thereby obtains an optimal pose-trajectory result. In this method LIFT is a new deep network architecture that unifies feature detection, orientation estimation and feature description of images in a single pipeline while maintaining end-to-end differentiability.
The technical solution realizing the object of the invention is as follows:
A localization method combining LIFT-based vision positioning with inertial navigation includes the following steps:
1) LIFT-based vision positioning: a Kinect sensor collects image data of the place to be positioned to obtain the scene information of the environment; image feature points are extracted from the acquired image data and processed with the LIFT-based method, completing the parameter estimation for independent positioning;
2) inertial navigation positioning: the inertial sensors, a gyroscope and an accelerometer mounted on the motion carrier, collect data of the place to be positioned to obtain angular velocity and acceleration; the data are aligned using coarse and fine alignment methods; attitude is solved from the measured angular velocity and acceleration with the Euler-angle method, and the current position is computed from the initial position, completing the estimation;
3) dynamic combined positioning of the LIFT-based vision positioning and the inertial navigation: the estimation results of the Kinect sensor and the inertial sensors are associated, and dynamic combined positioning is performed with a combination strategy to obtain the vision-positioning trajectory estimate.
In step 1), the specific steps of the LIFT-based vision positioning are as follows:
1-1) collect image data of the place to be positioned with the Kinect sensor and obtain the scene information of the environment;
1-2) identify and match the color images with the LIFT method and extract image feature points: construct a Siamese network and train it on scenes with the feature points generated by a structure-from-motion (SfM) algorithm to obtain the position and orientation of image patches. The Siamese network concretely consists of four branch architectures, each branch containing three different convolutional neural networks: a detector, an orientation estimator and a descriptor. Four groups of image patches are used: P1, P2, P3 and P4, where P1 and P2 correspond to different views of the same 3D point, P3 contains the projection of a different 3D point, and P4 contains no feature point; during training, the i-th patch Pi of each quadruplet passes through the i-th branch. To achieve end-to-end differentiability, the components of each branch are connected as follows (a sketch of the soft argmax of step 2 is given after this list):
1. given an input image patch P, the detector produces a score map S;
2. a soft argmax is performed on the score map S, returning the position x of a single potential feature point;
3. a smaller patch p centered at x is extracted and cropped with a spatial transformer layer, serving as the input of the orientation estimator;
4. the orientation estimator predicts a patch orientation θ;
5. a second spatial transformer layer rotates p according to θ (as labeled in Fig. 3), producing pθ;
6. pθ is fed to the description network, which computes the feature vector d;
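The soft argmax of step 2 is what keeps the detector differentiable: instead of picking the single highest-scoring pixel, the feature-point position is the softmax-weighted expectation of all pixel coordinates. A minimal numpy sketch of this one step; the temperature beta and the plain-softmax form are assumptions, not fixed by the text:

    import numpy as np

    def soft_argmax(score_map, beta=10.0):
        # Differentiable substitute for argmax over a 2-D score map S:
        # returns the expected (x, y) feature-point position under a
        # temperature-scaled softmax distribution.
        h, w = score_map.shape
        weights = np.exp(beta * (score_map - score_map.max()))  # stabilized softmax
        weights /= weights.sum()
        ys, xs = np.mgrid[0:h, 0:w]
        x = (weights * xs).sum()  # expectation of the column index
        y = (weights * ys).sum()  # expectation of the row index
        return x, y

Unlike a hard argmax, this expectation is smooth in S, so gradients from the descriptor loss can flow back into the detector.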
1-3) register the color image and the depth image within a frame to obtain the 2D feature point set coordinates; with two 3D point sets, estimate the transformation between the two 3D coordinate systems (a closed-form sketch follows) and thereby estimate the corresponding motion parameters.
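Step 1-3) needs the rigid transformation between two matched 3D point sets. One common closed-form solution is the SVD-based least-squares fit sketched below; this particular solver is an illustrative assumption, and in practice it would be wrapped in an outlier-rejection loop such as RANSAC:

    import numpy as np

    def rigid_transform_3d(P, Q):
        # Least-squares R, t with Q ≈ R @ P + t for matched 3xN point sets
        # P (previous frame) and Q (current frame).
        cp = P.mean(axis=1, keepdims=True)
        cq = Q.mean(axis=1, keepdims=True)
        H = (P - cp) @ (Q - cq).T                            # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
        R = Vt.T @ D @ U.T
        t = cq - R @ cp
        return R, t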
In step 2), the inertial navigation positioning uses inertial navigation and includes the initial calibration of the inertial navigation, the attitude solution of the inertial navigation, and the velocity and position solution of the inertial navigation, with the following specific steps:
2-1) initial alignment of the inertial navigation, obtaining initial velocity and initial position. Initial alignment comprises two processes: coarse alignment and fine alignment. Coarse alignment generally uses an analytic method, computing the initial attitude matrix from the measurements of the inertial devices; fine alignment then applies a filtering method or a gyrocompass alignment method on the basis of the coarse alignment;
2-1-1) coarse alignment
Coarse alignment combines the inertial device measurements with the gravitational acceleration and the earth rotation rate to compute the initial attitude matrix and the three attitude angles within a short time. Taking the East-North-Up geographic frame as the navigation frame, the projection of gravity in the navigation frame is expressed as:
g^n = [0, 0, -g]^T  (1)
In formula (1), g is the local gravitational acceleration, g^n is the projection of g in the navigation frame, and the superscript T denotes matrix transposition (likewise below). The projection of the earth rotation rate in the navigation frame is:
ω_ie^n = [0, ω_ie·cos L, ω_ie·sin L]^T  (2)
In formula (2), L is the local latitude, ω_ie is the earth rotation rate, and ω_ie^n is its representation in the navigation frame;
Define the vector orthogonal to the two vectors above:
M^n = g^n × ω_ie^n = [g·ω_ie·cos L, 0, 0]^T  (3)
Using the acceleration g^b output by the accelerometer and the angular velocity ω_ib^b output by the gyroscope, the relationship between the body frame (b frame) and the navigation frame (n frame) gives:
g^b = C_n^b · g^n  (4)
Likewise:
ω_ib^b = C_n^b · ω_ie^n  (5)
Writing formulas (4) and (5), together with M^b = g^b × ω_ib^b, in matrix form:
[g^b  ω_ib^b  M^b] = C_n^b · [g^n  ω_ie^n  M^n]  (6)
Transposing formula (6) and substituting formulas (1), (2) and (3):
C_b^n = ([g^b  ω_ib^b  M^b] · [g^n  ω_ie^n  M^n]^(-1))^T  (7)
The attitude matrix C_b^n is found, and then the initial attitude angles are found (a code sketch of this analytic procedure follows);
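A compact numpy sketch of the analytic coarse alignment, directly following equations (1)-(7); the inputs g_b and w_b are assumed to be short-time averages of the accelerometer and gyroscope outputs in the body frame:

    import numpy as np

    def coarse_alignment(g_b, w_b, lat_deg, g=9.80665, w_ie=7.292115e-5):
        # Analytic coarse alignment: recover C_b^n from body-frame gravity
        # and earth-rate measurements, ENU navigation frame, eqs. (1)-(7).
        L = np.radians(lat_deg)
        g_n = np.array([0.0, 0.0, -g])                             # eq. (1)
        w_n = np.array([0.0, w_ie * np.cos(L), w_ie * np.sin(L)])  # eq. (2)
        M_n = np.cross(g_n, w_n)                                   # eq. (3)
        M_b = np.cross(g_b, w_b)                                   # same construction in b
        A_n = np.column_stack((g_n, w_n, M_n))
        A_b = np.column_stack((g_b, w_b, M_b))
        C_nb = A_b @ np.linalg.inv(A_n)                            # eq. (6): A_b = C_n^b A_n
        return C_nb.T                                              # eq. (7): C_b^n

The three initial attitude angles are then read off the recovered direction cosine matrix.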
2-1-2) fine alignment
Fine alignment corrects the attitude matrix and estimates the misalignment angles. Kalman filtering is used for the initial alignment: the platform error angles are estimated, the attitude matrix is computed, and the errors of the inertial instruments are estimated at the same time;
The accelerometer output contains random disturbances and zero biases, and the gyroscope output likewise contains disturbances and drifts. To include the estimation of the gyroscope drifts and accelerometer zero biases, the filter state is extended to X = [δv_x, δv_y, φ_x, φ_y, φ_z, ε_x, ε_y, ε_z, ∇_x, ∇_y]^T, where φ_x, φ_y and φ_z are the misalignment angles of the three directions, ε_x, ε_y and ε_z are the drifts of the three gyroscope axes, ∇_x and ∇_y are the zero biases of the accelerometer, and δv_x and δv_y are the velocity errors. Taking δv_x and δv_y as the observations, the observation equation Z(t) is established, where η_x and η_y denote the observation noises. The equation is discretized, the platform error angles are estimated with the Kalman recursion and corrected, realizing the initial alignment (a generic Kalman cycle is sketched below);
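Fine alignment runs the standard Kalman recursion on the discretized error model. A generic sketch of one predict/correct cycle, assuming the discrete matrices F, H, Q, R have already been derived from the error equations above:

    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        # One discrete Kalman cycle. For fine alignment, x stacks the velocity
        # errors, misalignment angles, gyro drifts and accel zero biases, and
        # z is the velocity-error observation [dv_x, dv_y].
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R                  # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new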
2-2) The attitude solution of the inertial navigation completes the transformation between the coordinate frames used by the inertial navigation. The MEMS gyroscope and MEMS accelerometer are rigidly fixed to the carrier, so the measured acceleration and angular velocity lie in the carrier frame and must be transformed into the navigation frame before the attitude, velocity and position of the carrier are computed; the attitude update algorithm uses the Euler-angle method;
The Euler-angle method computes the yaw, pitch and roll of the motion carrier through the Euler-angle differential equation, which expresses the angular position of the carrier frame (b frame) relative to the navigation frame (n frame). Let the angular velocity vector of the b frame relative to the n frame be:
ω_nb^b = [ω_nbx^b, ω_nby^b, ω_nbz^b]^T  (10)
where ω_nbx^b, ω_nby^b and ω_nbz^b are the angular rates of the carrier frame relative to the navigation frame about the three coordinate axes. The differential equation of the Euler angles is established as formula (11); solving the differential equation yields the three attitude angles of the motion carrier (one standard form is sketched below),
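One standard instance of the Euler-angle differential equation of formula (11), under an assumed ZYX (yaw-pitch-roll) convention, integrated by simple forward Euler:

    import numpy as np

    def euler_rates(roll, pitch, p, q, r):
        # Euler-angle differential equation: body rates (p, q, r) about the
        # b-frame axes -> Euler-angle rates. Singular at pitch = +/-90 deg.
        droll = p + np.tan(pitch) * (q * np.sin(roll) + r * np.cos(roll))
        dpitch = q * np.cos(roll) - r * np.sin(roll)
        dyaw = (q * np.sin(roll) + r * np.cos(roll)) / np.cos(pitch)
        return droll, dpitch, dyaw

    def integrate_attitude(angles, gyro, dt):
        # One forward-Euler attitude update from a gyroscope sample (p, q, r).
        roll, pitch, yaw = angles
        dr, dp, dy = euler_rates(roll, pitch, *gyro)
        return (roll + dr * dt, pitch + dp * dt, yaw + dy * dt)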
2-3) Velocity and position solution of the inertial navigation. The position update of the inertial navigation relies on the accelerometer: the measured acceleration is first transformed into the navigation frame; since it is affected by gravitational acceleration, gravity must first be compensated, after which one integration yields velocity and a further integration yields displacement. Suppose the three-axis acceleration output by the accelerometer is:
a^b(k) = [a_bx, a_by, a_bz]^T  (12)
where a_bx, a_by and a_bz are the accelerations measured by the inertial navigation at time k on the three coordinate axes x, y and z. The acceleration measured in the body frame must be transformed into the navigation frame, that is, written in matrix form:
a^n(k) = C_b^n(k) · a^b(k)  (13)
In formula (13), C_b^n(k) denotes the rotation matrix from the body frame to the navigation frame at time k, updated by the attitude solution from the gyroscope data;
To eliminate the influence of the earth's gravitational acceleration, the gravity component must be subtracted when integrating; a_g(k) denotes the acceleration after removing the gravity component:
a_g(k) = a^n(k) - g^n  (14)
Integrating a_g(k) once gives the velocity of the carrier in the navigation frame; Δt is a small period, and velocity and distance are obtained with the usual kinematic formulas, namely:
v^n(k+1) = v^n(k) + a_g(k)·Δt  (15)
In formula (15), Δt = t(k+1) - t(k). Integrating again gives the position S(k+1) of the motion carrier in the navigation frame:
S(k+1) = S(k) + v^n(k)·Δt + a_g(k)·Δt²/2  (16)
with the displacement decomposed along the three axes as in formula (17).
Through the above steps, the displacement and attitude angles at the current time are computed; iterating in chronological order from the initial position yields the trajectory of the entire motion process (one step is sketched below).
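Equations (12)-(16) amount to rotating the body-frame acceleration into the navigation frame, stripping gravity, and integrating twice. A sketch of one such step, with the sign convention of g^n = [0, 0, -g]^T from equation (1):

    import numpy as np

    def ins_step(v_n, s_n, a_b, C_bn, dt, g=9.80665):
        # One velocity/position update, eqs. (13)-(16).
        a_n = C_bn @ a_b                              # eq. (13): rotate into nav frame
        a_g = a_n - np.array([0.0, 0.0, -g])          # eq. (14): remove gravity component
        v_next = v_n + a_g * dt                       # eq. (15)
        s_next = s_n + v_n * dt + 0.5 * a_g * dt**2   # eq. (16)
        return v_next, s_next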
In step 3), the dynamic combined positioning of the LIFT-based vision positioning and the inertial navigation includes data alignment and data fusion, with the following specific steps:
3-1) Data alignment based on Kinect and inertial navigation, including time alignment and spatial alignment:
3-1-1) Time alignment
Time alignment synchronizes the asynchronous data of the Kinect sensor and the inertial sensor to a unified reference time scale. In the combined positioning system, the data are asynchronous because the sampling periods of the two sensors differ and their reference times are inconsistent. In hardware, time synchronization is realized with a CPLD or by means of an FPGA chip; in software, it is performed by the software program of the host computer using the least-squares method: assuming the ratio of the sampling periods of the two sensor classes is an integer n, the n measurements of the high-frequency sensor are fused into a virtual measurement at instant k, which is then fused with the measurement of the low-frequency sensor at instant k, finally realizing time synchronization (a sketch follows);
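A sketch of the software time alignment described above, assuming the IMU rate is an integer multiple n of the camera rate: the n high-rate samples around one camera period are fitted by least squares and evaluated at the camera timestamp to form the virtual measurement:

    import numpy as np

    def virtual_measurement(imu_samples, imu_times, t_k):
        # Fuse n high-frequency IMU samples into one virtual measurement at
        # the low-frequency instant t_k via a least-squares linear fit.
        t = np.asarray(imu_times, dtype=float) - t_k   # center the time axis on t_k
        A = np.column_stack((np.ones_like(t), t))
        coeff, *_ = np.linalg.lstsq(A, np.asarray(imu_samples), rcond=None)
        return coeff[0]                                # fitted value at t = t_k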
3-1-2) Spatial alignment
Spatial alignment is the unification of coordinate frames, i.e. transforming the information in each sensor frame into a unified navigation frame. Spatial alignment first establishes the combined coordinate frame. When the Kinect sensor is mounted horizontally with the lens facing the direction of advance, its three axes are: forward along the positive Z axis, downward along the positive Y axis, and leftward along the positive X axis. For the strapdown inertial navigation system, the spatial frame is mainly the carrier frame, commonly abbreviated as the b frame, whose origin is the center of the three axes of the inertial module, with the X axis pointing to the right of the carrier and the Y axis pointing to the front, satisfying the right-hand rule. Here the inertial module is rigidly attached to the visual sensor, which is in turn fixed to the carrier, forming the correspondence between the frames of the combined system.
Before combined positioning, the positions of the two sensors are initially aligned to unify the inertial frame and the Kinect visual frame; using the relationship between the inertial frame and the navigation frame, they can then be converted to the same frame, completing the spatial unification. Here the relative pose of the two sensors in the combined system is estimated with a time-domain constraint estimation method;
The alignment procedure comprises two steps:
1. establish the alignment model of the inertial navigation and the Kinect, determine the initial pose estimate of the inertial navigation from the camera calibration result, and hence the initial value of the filter state;
2. filter the outputs of the image sequence and the inertial navigation according to the established filtering equations, and determine the relative pose of the inertial navigation and the Kinect.
The specific alignment procedure of the inertial navigation and the vision positioning is as follows.
Denote the inertial frame as the B frame, the Kinect visual frame as the A frame, and the image frame as the W frame. The chosen state comprises: the displacement between the origins of the W and B frames, the displacement between the origins of the A and B frames, the rotation quaternion from the B frame to the W frame, the rotation quaternion from the A frame to the B frame, the velocity of the B frame relative to the W frame, the gravitational acceleration g, the angular velocity ω_B of the B frame, and the acceleration of the W frame relative to the B frame. The change process of each state is as follows:
where Ω(ω) is the quaternion kinematics equation, of the corresponding standard form;
Assuming that the acceleration and angular velocity are constant between samples, the observations are obtained from the inertial navigation and the Kinect. The inertial navigation measures the angular velocity and acceleration in its local frame; the gyroscope observation takes the form
ω_m = ω_B + b_g
i.e. the true angular rate plus a gyroscope bias b_g, with the accelerometer observation modeled analogously.
For the Kinect, an observation equation related to the feature-point pixel coordinates is available:
where K denotes the camera intrinsics and v_i denotes the observation noise of the feature-point pixel coordinates. The observations can be written accordingly.
For different instants, definite constraint relationships exist between the states and the observations of the system;
3-2) Data fusion based on Kinect and inertial navigation
Fusion estimation divides into linear and nonlinear estimation, corresponding to the linear Kalman filter and the nonlinear extended Kalman filter. The Kalman filter solves the filtering estimation problem with a recursive idea, propagating with minimal error: it performs state estimation from the prediction of the previous sampling instant and the measurement of the current instant. The extended Kalman filter likewise estimates the state from prediction and measurement, and is suitable for nonlinear systems. The main steps of the fusion algorithm design based on extended Kalman filtering are as follows:
3-2-1) Establishing the process model
The state equation establishes the relationship between the state variables of adjacent instants. The state vector to be estimated is X_k = (P_k, ψ_k, V_k, δ_k, ε_k)^T. In the state variable X_k, P_k = (x_k, y_k, z_k)^T denotes the position at time k, with x_k, y_k, z_k its three coordinates; ψ_k denotes the attitude angles at time k, namely the yaw, pitch and roll angles; V_k = (v_x, v_y, v_z)^T denotes the velocity vector at time k, with v_x, v_y, v_z the velocity values on the three coordinate axes; δ_k = (δ_x, δ_y, δ_z)^T denotes the acceleration error at time k, and ε_k = (ε_x, ε_y, ε_z)^T denotes the angular-velocity error at time k. The process model is built with the measurements of the inertial navigation, taking the acceleration measurement a^b and the gyroscope measurement as the input variables of the process model, denoted u_k:
The accelerometer and the gyroscope are affected in measurement by the environment and by their own measurement factors, so biases and noise are present; then:
where a_g denotes the acceleration in the navigation frame and C_b^n is the attitude matrix expressing the transformation from the inertial frame to the navigation frame, obtainable from the attitude solution of the gyroscope measurements. Subtracting the influence of the gravitational acceleration and taking the measurements of the inertial navigation as the input variables of the process model, the state model of the extended Kalman filter is established as:
δ_k = δ_{k-1}
ε_k = ε_{k-1}
where Δt is the sampling time interval and C_b^n denotes the transformation matrix from the inertial frame to the navigation frame; the filter state equation is:
X_k = f(X_{k-1}, U_{k-1}, W_{k-1})
3-2-2) Establishing the observation model
The observation model expresses the relationship between the observations and the state variables. For the measurement based on the Kinect sensor, the relative pose is computed from the images of two adjacent instants as the optimum of the following expression,
from which the relative pose of the two adjacent instants, including the rotation matrix R and the translation vector T, is obtained, and the current position and attitude follow. Dividing the motion vector T by the time difference of the adjacent instants gives the relative linear velocity, denoted V_vo. Therefore, taking the rotation matrix R and the translation vector T as measurement values, the extended Kalman filter observation model is established:
T_k = V_k·Δt + a_g·(Δt)²/2 + v_{t,k}
and the filter observation equation follows.
The noise v_{t,k} is generally taken as Gaussian white noise satisfying v_{t,k} ~ N(0, R); the value of the noise covariance R depends on the confidence of the Kinect vision positioning, which is determined by the number of correctly matched feature points and the depth error: the smaller the confidence, the larger the covariance value. After the observation model is established, the relative pose of two adjacent instants is obtained by solving the equations; executing the steps in a loop yields the optimized combined-positioning motion-pose trajectory estimate (one predict/update cycle is sketched below).
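Putting the process model of 3-2-1) and the observation model of 3-2-2) together gives the usual extended-Kalman loop. A structural sketch; f and h stand for the state and observation functions defined above, and their Jacobians F and H are assumed supplied:

    import numpy as np

    def ekf_step(x, P, u, z, f, h, F, H, Q, R):
        # One EKF cycle for the Kinect + inertial fusion: predict with the
        # INS process model f(x, u), correct with the Kinect relative-pose
        # measurement z through h(x).
        x_pred = f(x, u)                          # X_k = f(X_{k-1}, U_{k-1}, W_{k-1})
        Fk = F(x, u)                              # Jacobian of f at (x, u)
        P_pred = Fk @ P @ Fk.T + Q
        Hk = H(x_pred)                            # Jacobian of h at the prediction
        S = Hk @ P_pred @ Hk.T + R
        K = P_pred @ Hk.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - h(x_pred))
        P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
        return x_new, P_new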
The localization method combining LIFT-based vision positioning with inertial navigation provided by the invention performs vision positioning with the LIFT deep network architecture, combines it with the motion parameters acquired by the inertial navigation positioning through extended Kalman filtering for joint optimization, and thereby obtains an optimal pose-trajectory result. In this method LIFT is a new deep network architecture that unifies feature detection, orientation estimation and feature description of images in a single pipeline while maintaining end-to-end differentiability.
Description of the drawings
Fig. 1 is the flow chart of the localization method combining LIFT-based vision positioning with inertial navigation;
Fig. 2 is the processing flow diagram of the LIFT framework;
Fig. 3 is the flow chart of training the Siamese network;
Fig. 4 is the schematic diagram of the inertial navigation fine alignment;
Fig. 5 is the diagram of the inertial navigation pose solution algorithm;
Fig. 6 is the schematic diagram of dead reckoning;
Fig. 7 is the flow chart of the vision and inertial navigation combined positioning method;
Fig. 8 is the coordinate relationship diagram of the combined positioning system;
Fig. 9 is the pose relationship diagram of the Kinect sensor and the inertial navigation sensor.
Specific embodiment
The present invention is further elaborated below with reference to the accompanying drawings and embodiments, which do not limit the invention.
Embodiment:
As shown in Figure 1, a localization method combining LIFT-based vision positioning with inertial navigation includes the following steps:
1) LIFT-based vision positioning: a Kinect sensor collects image data of the place to be positioned to obtain the scene information of the environment; image feature points are extracted from the acquired image data and processed with the LIFT-based method, completing the parameter estimation for independent positioning;
2) inertial navigation positioning: the inertial sensors, a gyroscope and an accelerometer mounted on the motion carrier, collect data of the place to be positioned to obtain angular velocity and acceleration; the data are aligned using coarse and fine alignment methods; attitude is solved from the measured angular velocity and acceleration with the Euler-angle method, and the current position is computed from the initial position, completing the estimation;
3) dynamic combined positioning of the LIFT-based vision positioning and the inertial navigation: the estimation results of the Kinect sensor and the inertial sensors are associated, and dynamic combined positioning is performed with a combination strategy to obtain the vision-positioning trajectory estimate.
In step 1), the specific steps of the LIFT-based vision positioning are as follows:
1-1) collect image data of the place to be positioned with the Kinect sensor and obtain the scene information of the environment;
1-2) identify and match the color images with the LIFT method and extract image feature points: construct a Siamese network and train it on scenes with the feature points generated by a structure-from-motion (SfM) algorithm to obtain the position and orientation of image patches; the processing flow of the LIFT framework is shown in Fig. 2. The Siamese network concretely consists of four branch architectures, each branch containing three different convolutional neural networks: a detector, an orientation estimator and a descriptor. Four groups of image patches are used: P1, P2, P3 and P4, where P1 and P2 correspond to different views of the same 3D point, P3 contains the projection of a different 3D point, and P4 contains no feature point; during training, the i-th patch Pi of each quadruplet passes through the i-th branch; the flow chart of training the Siamese network is shown in Fig. 3. To achieve end-to-end differentiability, the components of each branch are connected as follows:
1. given an input image patch P, the detector produces a score map S;
2. a soft argmax is performed on the score map S, returning the position x of a single potential feature point;
3. a smaller patch p centered at x is extracted and cropped with a spatial transformer layer, serving as the input of the orientation estimator;
4. the orientation estimator predicts a patch orientation θ;
5. a second spatial transformer layer rotates p according to θ (as labeled in Fig. 3), producing pθ;
6. pθ is fed to the description network, which computes the feature vector d;
1-3) register the color image and the depth image within a frame to obtain the 2D feature point set coordinates; with two 3D point sets, estimate the transformation between the two 3D coordinate systems, and thereby estimate the corresponding motion parameters.
In step 2), the inertial navigation positioning uses inertial navigation and includes the initial calibration of the inertial navigation, the attitude solution of the inertial navigation, and the velocity and position solution of the inertial navigation, with the following specific steps:
2-1) initial alignment of the inertial navigation, obtaining initial velocity and initial position. Since inertial navigation is a relative positioning method, the initial conditions affect the subsequent attitude solution and the navigation accuracy, so the initial alignment of the inertial navigation is very important. Initial alignment comprises two processes: coarse alignment and fine alignment. Coarse alignment generally uses an analytic method, computing the initial attitude matrix from the measurements of the inertial devices (gyroscope and accelerometer); fine alignment then applies a filtering method or a gyrocompass alignment method on the basis of the coarse alignment;
2-1-1) coarse alignment
Coarse alignment combines the inertial device measurements with the gravitational acceleration and the earth rotation rate to compute the initial attitude matrix and the three attitude angles within a short time. Taking the East-North-Up geographic frame as the navigation frame, the projection of gravity in the navigation frame is expressed as:
g^n = [0, 0, -g]^T  (1)
In formula (1), g is the local gravitational acceleration, g^n is the projection of g in the navigation frame, and the superscript T denotes matrix transposition (likewise below). The projection of the earth rotation rate in the navigation frame is:
ω_ie^n = [0, ω_ie·cos L, ω_ie·sin L]^T  (2)
In formula (2), L is the local latitude, ω_ie is the earth rotation rate, and ω_ie^n is its representation in the navigation frame;
Define the vector orthogonal to the two vectors above:
M^n = g^n × ω_ie^n = [g·ω_ie·cos L, 0, 0]^T  (3)
Using the acceleration g^b output by the accelerometer and the angular velocity ω_ib^b output by the gyroscope, the relationship between the body frame (b frame) and the navigation frame (n frame) gives:
g^b = C_n^b · g^n  (4)
Likewise:
ω_ib^b = C_n^b · ω_ie^n  (5)
Writing formulas (4) and (5), together with M^b = g^b × ω_ib^b, in matrix form:
[g^b  ω_ib^b  M^b] = C_n^b · [g^n  ω_ie^n  M^n]  (6)
Transposing formula (6) and substituting formulas (1), (2) and (3):
C_b^n = ([g^b  ω_ib^b  M^b] · [g^n  ω_ie^n  M^n]^(-1))^T  (7)
The attitude matrix C_b^n is found, and then the initial attitude angles are found;
2-1-2) fine alignment
Fine alignment corrects the attitude matrix and estimates the misalignment angles. Kalman filtering is used for the initial alignment: the platform error angles are estimated, the attitude matrix is computed, and the errors of the inertial instruments are estimated at the same time; the alignment principle block diagram is shown in Fig. 4.
The accelerometer output contains random disturbances and zero biases, and the gyroscope output likewise contains disturbances and drifts. To include the estimation of the gyroscope drifts and accelerometer zero biases, the filter state is extended to X = [δv_x, δv_y, φ_x, φ_y, φ_z, ε_x, ε_y, ε_z, ∇_x, ∇_y]^T, where φ_x, φ_y and φ_z are the misalignment angles of the three directions, ε_x, ε_y and ε_z are the drifts of the three gyroscope axes, ∇_x and ∇_y are the zero biases of the accelerometer, and δv_x and δv_y are the velocity errors. Taking δv_x and δv_y as the observations, the observation equation Z(t) is established, where η_x and η_y denote the observation noises. The equation is discretized, the platform error angles are estimated with the Kalman recursion and corrected, realizing the initial alignment.
2-2) The attitude solution of the inertial navigation completes the transformation between the coordinate frames used by the inertial navigation. The MEMS gyroscope and MEMS accelerometer are rigidly fixed to the carrier, so the measured acceleration and angular velocity lie in the carrier frame and must be transformed into the navigation frame before the attitude, velocity and position of the carrier are computed; the attitude update algorithm uses the Euler-angle method, whose computing principle and process are shown in Fig. 5;
The Euler-angle method computes the yaw, pitch and roll of the motion carrier through the Euler-angle differential equation, which expresses the angular position of the carrier frame (b frame) relative to the navigation frame (n frame). Let the angular velocity vector of the b frame relative to the n frame be:
ω_nb^b = [ω_nbx^b, ω_nby^b, ω_nbz^b]^T  (10)
where ω_nbx^b, ω_nby^b and ω_nbz^b are the angular rates of the carrier frame relative to the navigation frame about the three coordinate axes. The differential equation of the Euler angles is established as formula (11); solving the differential equation yields the three attitude angles of the motion carrier,
2-3) Velocity and position solution of the inertial navigation. The position update of the inertial navigation relies on the accelerometer: the measured acceleration is first transformed into the navigation frame; since it is affected by gravitational acceleration, gravity must first be compensated, after which one integration yields velocity and a further integration yields displacement. Suppose the three-axis acceleration output by the accelerometer is:
a^b(k) = [a_bx, a_by, a_bz]^T  (12)
where a_bx, a_by and a_bz are the accelerations measured by the inertial navigation at time k on the three coordinate axes x, y and z. The acceleration measured in the body frame must be transformed into the navigation frame, that is, written in matrix form:
a^n(k) = C_b^n(k) · a^b(k)  (13)
In formula (13), C_b^n(k) denotes the rotation matrix from the body frame to the navigation frame at time k, updated by the attitude solution from the gyroscope data;
To eliminate the influence of the earth's gravitational acceleration, the gravity component must be subtracted when integrating; a_g(k) denotes the acceleration after removing the gravity component:
a_g(k) = a^n(k) - g^n  (14)
Integrating a_g(k) once gives the velocity of the carrier in the navigation frame; Δt is a small period, and velocity and distance are obtained with the usual kinematic formulas, namely:
v^n(k+1) = v^n(k) + a_g(k)·Δt  (15)
In formula (15), Δt = t(k+1) - t(k). Integrating again gives the position S(k+1) of the motion carrier in the navigation frame:
S(k+1) = S(k) + v^n(k)·Δt + a_g(k)·Δt²/2  (16)
with the displacement decomposed along the three axes as in formula (17).
Through the above steps, the displacement and attitude angles at the current time are computed; iterating in chronological order from the initial position yields the trajectory of the entire motion process.
Taking motion in a two-dimensional plane as an example, assume the carrier moves in a plane, with the forward direction of motion as the x axis and the leftward direction as the y axis; dead reckoning is carried out from the inertial navigation data, with the principle illustrated in Fig. 6;
In Fig. 6, assume the initial position at time t0 is known to be P0(x0, y0), the position at the next instant t1 is P1(x1, y1), and so on, the position at time tj being Pj(xj, yj). The heading angle θ0 from position P0 to position P1 is obtained from the attitude solution, and the displacement s0 from integrating the acceleration; the position P1 at time t1 is then:
x1 = x0 + s0·cos θ0,  y1 = y0 + s0·sin θ0
Then the position P2 at time t2 is:
x2 = x1 + s1·cos θ1,  y2 = y1 + s1·sin θ1
and so on; the position Pj at time tj is:
xj = x(j-1) + s(j-1)·cos θ(j-1),  yj = y(j-1) + s(j-1)·sin θ(j-1)
The above is the position calculation in the two-dimensional plane: subsequent positions are computed continuously from the initial position and the heading angle (a sketch of the planar recursion follows). For motion in three-dimensional space, the attitude matrix solved at the current instant and the displacement obtained by integration, combined with the initial position and the initial attitude angles, give the dead-reckoned position and attitude angles at the current time.
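A sketch of the planar dead-reckoning recursion of Fig. 6, with the heading angles θ_j taken from the attitude solution and the step lengths s_j from the integrated acceleration:

    import numpy as np

    def dead_reckon_2d(x0, y0, headings, steps):
        # Planar dead reckoning: x_{j+1} = x_j + s_j cos(theta_j),
        #                        y_{j+1} = y_j + s_j sin(theta_j).
        xs, ys = [x0], [y0]
        for theta, s in zip(headings, steps):
            xs.append(xs[-1] + s * np.cos(theta))
            ys.append(ys[-1] + s * np.sin(theta))
        return xs, ys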
In step 3), the dynamic combined positioning of the LIFT-based vision positioning and the inertial navigation is performed. Steps 1) and 2) realize vision and inertial autonomous positioning separately and obtain the corresponding motion-pose trajectories. In this vision-based and inertial combined positioning, the parameter sets of the LIFT-based vision positioning and of the inertial navigation positioning are first gathered; for fusion, the data of each ego-motion estimate must be aligned and unified before entering the fusion center, after which the optimal motion-pose trajectory result is obtained. The specific data-fusion structure is shown in Fig. 7 and comprises data alignment and data fusion, with the following specific steps:
3-1) Data alignment based on Kinect and inertial navigation. The LIFT-based vision positioning first acquires data with the Kinect, then extracts feature points with the LIFT deep network framework, and finally uses the Kinect sensor to register the color and depth images within a frame, obtaining the 2D feature point set coordinates; two 3D point sets are used to estimate the transformation between the two 3D coordinate frames and hence the motion parameters. Since the pipeline begins and ends with the Kinect sensor, the alignment here is described as data alignment based on Kinect and inertial navigation. It includes time alignment and spatial alignment:
3-1-1) Time alignment
Time alignment synchronizes the asynchronous data of the Kinect sensor and the inertial sensor to a unified reference time scale. In the combined positioning system, the data are asynchronous because the sampling periods of the two sensors differ and their reference times are inconsistent. In hardware, time synchronization is realized with a CPLD or by means of an FPGA chip; in software, it is performed by the software program of the host computer using the least-squares method: assuming the ratio of the sampling periods of the two sensor classes is an integer n, the n measurements of the high-frequency sensor are fused into a virtual measurement at instant k, which is then fused with the measurement of the low-frequency sensor at instant k, finally realizing time synchronization;
3-1-2) Spatial alignment
Spatial alignment is the unification of coordinate frames, i.e. transforming the information in each sensor frame into a unified navigation frame. Spatial alignment first establishes the combined coordinate frame. When the Kinect sensor is mounted horizontally with the lens facing the direction of advance, its three axes are: forward along the positive Z axis, downward along the positive Y axis, and leftward along the positive X axis. For the strapdown inertial navigation system, the spatial frame is mainly the carrier frame, commonly abbreviated as the b frame, whose origin is the center of the three axes of the inertial module, with the X axis pointing to the right of the carrier and the Y axis pointing to the front, satisfying the right-hand rule. Here the inertial module is rigidly attached to the visual sensor, which is in turn fixed to the carrier, forming the correspondence between the frames of the combined system; this correspondence is shown in Fig. 8.
Before combined positioning, the positions of the two sensors are initially aligned to unify the inertial frame and the Kinect visual frame; using the relationship between the inertial frame and the navigation frame, they can then be converted to the same frame, completing the spatial unification. Here the relative pose of the two sensors in the combined system is estimated with a time-domain constraint estimation method;
The alignment procedure comprises two steps:
1. establish the alignment model of the inertial navigation and the Kinect, determine the initial pose estimate of the inertial navigation from the camera calibration result, and hence the initial value of the filter state;
2. filter the outputs of the image sequence and the inertial navigation according to the established filtering equations, and determine the relative pose of the inertial navigation and the Kinect.
The specific alignment procedure of the inertial navigation and the vision positioning is as follows.
Denote the inertial frame as the B frame, the Kinect visual frame as the A frame, and the image frame as the W frame. The chosen state comprises: the displacement between the origins of the W and B frames, the displacement between the origins of the A and B frames, the rotation quaternion from the B frame to the W frame, the rotation quaternion from the A frame to the B frame, the velocity of the B frame relative to the W frame, the gravitational acceleration g, the angular velocity ω_B of the B frame, and the acceleration of the W frame relative to the B frame. The change process of each state is as follows:
where Ω(ω) is the quaternion kinematics equation, of the corresponding standard form;
Assuming that the acceleration and angular velocity are constant between samples, the observations are obtained from the inertial navigation and the Kinect. The inertial navigation measures the angular velocity and acceleration in its local frame; the gyroscope observation takes the form
ω_m = ω_B + b_g
i.e. the true angular rate plus a gyroscope bias b_g, with the accelerometer observation modeled analogously.
For the Kinect, an observation equation related to the feature-point pixel coordinates is available:
where K denotes the camera intrinsics and v_i denotes the observation noise of the feature-point pixel coordinates. The observations can be written accordingly.
For different instants, definite constraint relationships exist between the states and the observations of the system; Fig. 9 shows the pose relationship of the Kinect sensor and the inertial sensor at adjacent instants;
In Fig. 9, the small rectangle denotes the inertial sensor, rigidly attached to the Kinect sensor. Because the inertial navigation and the Kinect are rigidly connected, they share the same angular velocity of rotation, and the attitude constraint of the camera at adjacent instants is:
where the formula denotes the conversion from the direction cosine matrix to attitude angles and Δt denotes the time difference of adjacent instants. The translation vector of the camera at adjacent instants satisfies the constraint:
Because the inertial navigation and the Kinect are rigidly connected, it further holds that:
which converts to:
The observations h0, h1 and h2 together constitute the observation equation. The image sequence acquired by the Kinect and the output of the inertial device are filtered according to the filtering equations, the relative pose of the inertial navigation and the Kinect is solved, and the unification of the spatial frames is realized (a sketch exploiting the shared angular velocity follows).
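Because the rigid mount forces the inertial navigation and the Kinect to share one angular velocity, their relative rotation can also be checked by aligning paired angular-velocity vectors. The SVD vector-alignment solution below is one illustrative way to exploit that constraint; the filter-based formulation in the text is the more general one:

    import numpy as np

    def relative_rotation(w_imu, w_cam):
        # Estimate R with w_cam ≈ R @ w_imu from paired 3xN angular-velocity
        # samples of the rigidly connected IMU and camera.
        H = w_imu @ w_cam.T
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # keep det(R) = +1
        return Vt.T @ D @ U.T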
3-2) Data fusion based on Kinect and inertial navigation
Fusion estimation divides into linear and nonlinear estimation, corresponding to the linear Kalman filter and the nonlinear extended Kalman filter. The Kalman filter solves the filtering estimation problem with a recursive idea, propagating with minimal error: it performs state estimation from the prediction of the previous sampling instant and the measurement of the current instant. The extended Kalman filter likewise estimates the state from prediction and measurement, and is suitable for nonlinear systems. The main steps of the fusion algorithm design based on extended Kalman filtering are as follows:
3-2-1) Establishing the process model
The state equation establishes the relationship between the state variables of adjacent instants. The state vector to be estimated is X_k = (P_k, ψ_k, V_k, δ_k, ε_k)^T. In the state variable X_k, P_k = (x_k, y_k, z_k)^T denotes the position at time k, with x_k, y_k, z_k its three coordinates; ψ_k denotes the attitude angles at time k, namely the yaw, pitch and roll angles; V_k = (v_x, v_y, v_z)^T denotes the velocity vector at time k, with v_x, v_y, v_z the velocity values on the three coordinate axes; δ_k = (δ_x, δ_y, δ_z)^T denotes the acceleration error at time k, and ε_k = (ε_x, ε_y, ε_z)^T denotes the angular-velocity error at time k. The process model is built with the measurements of the inertial navigation, taking the acceleration measurement a^b and the gyroscope measurement as the input variables of the process model, denoted u_k:
The accelerometer and the gyroscope are affected in measurement by the environment and by their own measurement factors, so biases and noise are present; then:
where a_g denotes the acceleration in the navigation frame and C_b^n is the attitude matrix expressing the transformation from the inertial frame to the navigation frame, obtainable from the attitude solution of the gyroscope measurements. Subtracting the influence of the gravitational acceleration and taking the measurements of the inertial navigation as the input variables of the process model, the state model of the extended Kalman filter is established as:
δ_k = δ_{k-1}
ε_k = ε_{k-1}
where Δt is the sampling time interval and C_b^n denotes the transformation matrix from the inertial frame to the navigation frame; the filter state equation is:
X_k = f(X_{k-1}, U_{k-1}, W_{k-1})
3-2-2) Establishing the observation model
The observation model expresses the relationship between the observations and the state variables. For the measurement based on the Kinect sensor, the relative pose is computed from the images of two adjacent instants as the optimum of the following expression,
from which the relative pose of the two adjacent instants, including the rotation matrix R and the translation vector T, is obtained, and the current position and attitude follow. Dividing the motion vector T by the time difference of the adjacent instants gives the relative linear velocity, denoted V_vo. Therefore, taking the rotation matrix R and the translation vector T as measurement values, the extended Kalman filter observation model is established:
T_k = V_k·Δt + a_g·(Δt)²/2 + v_{t,k}
and the filter observation equation follows.
The noise v_{t,k} is generally taken as Gaussian white noise satisfying v_{t,k} ~ N(0, R); the value of the noise covariance R depends on the confidence of the Kinect vision positioning, which is determined by the number of correctly matched feature points and the depth error: the smaller the confidence, the larger the covariance value. After the observation model is established, the relative pose of two adjacent instants is obtained by solving the equations; executing the steps in a loop yields the optimized combined-positioning motion-pose trajectory estimate.

Claims (4)

1. A localization method combining LIFT-based vision positioning with inertial navigation, characterized by comprising the following steps:
1) LIFT-based vision positioning: a Kinect sensor collects image data of the place to be positioned to obtain the scene information of the environment; image feature points are extracted from the acquired image data and processed with the LIFT-based method, completing the parameter estimation for independent positioning;
2) inertial navigation positioning: the inertial sensors, a gyroscope and an accelerometer mounted on the motion carrier, collect data of the place to be positioned to obtain angular velocity and acceleration; the data are aligned using coarse and fine alignment methods; attitude is solved from the measured angular velocity and acceleration with the Euler-angle method, and the current position is computed from the initial position, completing the estimation;
3) dynamic combined positioning of the LIFT-based vision positioning and the inertial navigation: the estimation results of the Kinect sensor and the inertial sensors are associated, and dynamic combined positioning is performed with a combination strategy to obtain the vision-positioning trajectory estimate.
2. The localization method combining LIFT-based vision positioning with inertial navigation according to claim 1, characterized in that in step 1) the specific steps of the LIFT-based vision positioning are as follows:
1-1) collect image data of the place to be positioned with the Kinect sensor and obtain the scene information of the environment;
1-2) identify and match the color images with the LIFT method and extract image feature points: construct a Siamese network and train it on scenes with the feature points generated by a structure-from-motion (SfM) algorithm to obtain the position and orientation of image patches; the Siamese network concretely consists of four branch architectures, each branch containing three different convolutional neural networks: a detector, an orientation estimator and a descriptor; four groups of image patches are used: P1, P2, P3 and P4, where P1 and P2 correspond to different views of the same 3D point, P3 contains the projection of a different 3D point, and P4 contains no feature point; during training, the i-th patch Pi of each quadruplet passes through the i-th branch; to achieve end-to-end differentiability, the components of each branch are connected as follows:
1. given an input image patch P, the detector produces a score map S;
2. a soft argmax is performed on the score map S, returning the position x of a single potential feature point;
3. a smaller patch p centered at x is extracted and cropped with a spatial transformer layer, serving as the input of the orientation estimator;
4. the orientation estimator predicts a patch orientation θ;
5. a second spatial transformer layer rotates p according to θ (as labeled in Fig. 3), producing pθ;
6. pθ is fed to the description network, which computes the feature vector d;
1-3) register the color image and the depth image within a frame to obtain the 2D feature point set coordinates; with two 3D point sets, estimate the transformation between the two 3D coordinate systems, and thereby estimate the corresponding motion parameters.
3. The localization method combining LIFT-based vision positioning with inertial navigation according to claim 1, characterized in that in step 2) the inertial navigation positioning uses inertial navigation and includes the initial calibration of the inertial navigation, the attitude solution of the inertial navigation, and the velocity and position solution of the inertial navigation, with the following specific steps:
2-1) initial alignment of the inertial navigation, obtaining initial velocity and initial position; initial alignment comprises two processes: coarse alignment and fine alignment; coarse alignment generally uses an analytic method, computing the initial attitude matrix from the measurements of the inertial devices; fine alignment then applies a filtering method or a gyrocompass alignment method on the basis of the coarse alignment;
2-1-1) coarse alignment
Coarse alignment is counted mainly in combination with inertia device measured value using acceleration of gravity and rotational-angular velocity of the earth within a short period of time Initial attitude matrix and three attitude angles are calculated, if taking geographic coordinate system northeast day is navigational coordinate system, are projected into navigation It is expressed as under coordinate system:
g^n = [0, 0, −g]^T (1)

In formula (1), g^n is the projection of the gravity vector onto the navigation frame, g is the local gravitational acceleration, and T denotes matrix transposition. The projection of the Earth's rotation rate onto the navigation frame is:

ω_ie^n = [0, ω_ie·cos L, ω_ie·sin L]^T (2)

In formula (2), L is the local latitude, ω_ie is the Earth's rotation rate, and ω_ie^n denotes its representation in the navigation frame;
Define the vector orthogonal to both of the above vectors:

M^n = g^n × ω_ie^n = [g·ω_ie·cos L, 0, 0]^T (3)
Using the acceleration g^b output by the accelerometer, the angular rate ω_ie^b output by the gyroscope, and the relation between the body frame (b frame) and the navigation frame (n frame), g^n and ω_ie^n satisfy:

g^b = C_n^b · g^n (4)

Equally, there is also:

ω_ie^b = C_n^b · ω_ie^n, M^b = g^b × ω_ie^b = C_n^b · M^n (5)

Writing formulas (4) and (5) in matrix form gives:

[g^b ω_ie^b M^b] = C_n^b · [g^n ω_ie^n M^n] (6)

Transposing formula (6) and substituting formulas (1), (2) and (3) yields:

C_b^n = [(g^n)^T; (ω_ie^n)^T; (M^n)^T]^(−1) · [(g^b)^T; (ω_ie^b)^T; (M^b)^T] (7)

where the bracketed terms stack the transposed vectors row by row. The attitude matrix C_b^n is thus obtained, from which the initial attitude angles are computed (a numerical sketch follows);
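As a numerical illustration of formulas (1)–(7), the sketch below solves for the attitude matrix by stacking the three vector pairs; the latitude, gravity value and the synthesized body-frame measurements are assumed values used only to check the computation:

```python
import numpy as np

g, L = 9.8, np.deg2rad(25.0)          # assumed local gravity and latitude
w_ie = 7.292115e-5                    # Earth rotation rate, rad/s

g_n = np.array([0.0, 0.0, -g])                              # formula (1)
w_n = np.array([0.0, w_ie * np.cos(L), w_ie * np.sin(L)])   # formula (2)
M_n = np.cross(g_n, w_n)                                    # formula (3)

def euler_to_Cbn(yaw, pitch, roll):
    """Rotation matrix from assumed z-y-x Euler angles (illustration only)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

# Static body-frame measurements, synthesised here from a known attitude so the
# result can be verified; in practice they come from the IMU while stationary.
C_bn_true = euler_to_Cbn(0.3, 0.05, -0.1)
g_b = C_bn_true.T @ g_n               # formula (4): g^b = C_n^b g^n
w_b = C_bn_true.T @ w_n               # formula (5)
M_b = np.cross(g_b, w_b)              # rotations preserve cross products

# Formula (7): stack the transposed vectors row-wise and solve A_n X = A_b.
A_n = np.vstack([g_n, w_n, M_n])
A_b = np.vstack([g_b, w_b, M_b])
C_bn = np.linalg.solve(A_n, A_b)      # C_b^n = A_n^{-1} A_b

print(np.allclose(C_bn, C_bn_true))   # True: initial attitude recovered
```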
2-1-2) Fine alignment

Fine alignment corrects the attitude matrix and estimates the misalignment angles. Kalman filtering is used for the initial alignment: the platform error angles are estimated and the attitude matrix is computed, while the errors of the inertial instruments are estimated at the same time;
The accelerometer output contains random disturbances and zero biases, and the gyroscope output likewise contains disturbances and drifts. To include the estimation of the gyro drifts and accelerometer biases in the filter state, the state is extended to:

X = [φ_x, φ_y, φ_z, ε_x, ε_y, ε_z, ∇_x, ∇_y, δv_x, δv_y]^T

where φ_x, φ_y and φ_z are the misalignment angles about the three axes, ε_x, ε_y and ε_z are the drifts of the three gyro axes, ∇_x and ∇_y are the accelerometer zero biases, and δv_x and δv_y are the velocity errors. Taking δv_x and δv_y as the observations, the observation equation Z(t) is established as:

Z(t) = [δv_x + η_x, δv_y + η_y]^T = H·X(t) + η(t)

where η_x and η_y denote the observation noise. The equation is discretized, the platform error angles are estimated by Kalman recursion, and the corresponding corrections complete the initial alignment (a minimal recursion is sketched below);
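A minimal linear Kalman recursion of the kind used in this fine-alignment step could look as follows; the state ordering, the placeholder transition matrix F and the noise levels are illustrative assumptions, not the patent's filter design:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the discrete Kalman filter."""
    x = F @ x                          # state prediction
    P = F @ P @ F.T + Q                # covariance prediction
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)            # update with velocity-error observation
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# state x = [phi_x, phi_y, phi_z, eps_x, eps_y, eps_z, grad_x, grad_y, dv_x, dv_y]
n = 10
x, P = np.zeros(n), np.eye(n)
F = np.eye(n)                          # placeholder transition matrix; the real F
                                       # couples misalignment, drift and velocity error
H = np.zeros((2, n)); H[0, 8] = H[1, 9] = 1.0    # observe dv_x, dv_y
Q, R = 1e-6 * np.eye(n), 1e-3 * np.eye(2)
x, P = kalman_step(x, P, np.array([0.01, -0.02]), F, H, Q, R)
```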
2-2) Attitude solution of the inertial navigation, which completes the transformation between the coordinate frames used by the inertial navigation. The MEMS gyroscope and MEMS accelerometer are rigidly attached to the carrier, so the measured acceleration and angular rate are expressed in the carrier (body) frame and must be transformed into the navigation frame, after which the attitude, velocity and position of the carrier are computed. The attitude update algorithm uses the Euler angle method;
The Euler angle method computes the yaw, pitch and roll angles of the moving carrier from the Euler angle differential equation, which expresses the angular position of the carrier frame b relative to the navigation frame n. Let the angular rate vector of the b frame relative to the n frame be ω_nb^b, expressed as:

ω_nb^b = [ω_nbx^b, ω_nby^b, ω_nbz^b]^T (10)

where ω_nbx^b, ω_nby^b and ω_nbz^b are the angular rates of the carrier frame b relative to the three axes of the navigation frame n. The differential equation of the Euler angles is established as formula (11); solving this differential equation yields the three attitude angles of the moving carrier (one integration step is sketched below);
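One integration step of the Euler angle differential equation might be written as below; the z-y-x (yaw-pitch-roll) rotation convention and the gyro sample are assumptions made for illustration:

```python
import numpy as np

def euler_rates(roll, pitch, p, q, r):
    """Euler-angle rates from body rates p, q, r (z-y-x convention assumed)."""
    roll_dot  = p + np.tan(pitch) * (q * np.sin(roll) + r * np.cos(roll))
    pitch_dot = q * np.cos(roll) - r * np.sin(roll)
    yaw_dot   = (q * np.sin(roll) + r * np.cos(roll)) / np.cos(pitch)
    return roll_dot, pitch_dot, yaw_dot

def update_attitude(angles, gyro, dt):
    """One forward-Euler step of the Euler-angle differential equation."""
    roll, pitch, yaw = angles
    rd, pd, yd = euler_rates(roll, pitch, *gyro)
    return roll + rd * dt, pitch + pd * dt, yaw + yd * dt

# 100 Hz gyro sample (rad/s); values are illustrative only
angles = (0.0, 0.0, 0.0)
angles = update_attitude(angles, gyro=(0.01, -0.02, 0.005), dt=0.01)
```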
2-3) Velocity and position solution of the inertial navigation. The position update of the inertial navigation is realized with the accelerometers. The accelerations measured by the accelerometers are first transformed into the navigation frame; because they are affected by the gravitational acceleration, compensation is applied first, then an integration yields the velocity and a further integration yields the displacement. Suppose the three-axis acceleration output of the accelerometer is:

a^b(k) = [a_bx(k), a_by(k), a_bz(k)]^T (12)

where a_bx, a_by and a_bz are the accelerations measured by the inertial navigation at time k on the three axes x, y and z. The acceleration measured in the body frame must be transformed into the navigation frame, which is written in matrix form as:

a^n(k) = C_b^n(k) · a^b(k) (13)

In formula (13), C_b^n(k) denotes the rotation matrix from the body frame to the navigation frame at time k, which is updated by the attitude solution from the gyroscope data;
To eliminate the influence of the Earth's gravitational acceleration, the gravity component must be subtracted before the integration; integration then yields the velocity and displacement. Let a_g(k) denote the acceleration after removing the gravity component:

a_g(k) = C_b^n(k) · a^b(k) − [0, 0, g]^T (14)
Integrating a_g(k) gives the velocity of the carrier in the navigation frame. With Δt a small period, the velocity and distance are computed with the elementary kinematic formulas, namely:

v^n(k+1) = v^n(k) + a_g(k)·Δt (15)

In formula (15), Δt = t(k+1) − t(k). A further integration gives the position S(k+1) of the moving carrier in the navigation frame, namely:

S(k+1) = S(k) + v^n(k)·Δt + ½·a_g(k)·Δt² (16)
Decomposed onto the three axes, the displacement formulas are:

S_x(k+1) = S_x(k) + v_x(k)·Δt + ½·a_gx(k)·Δt²
S_y(k+1) = S_y(k) + v_y(k)·Δt + ½·a_gy(k)·Δt²
S_z(k+1) = S_z(k) + v_z(k)·Δt + ½·a_gz(k)·Δt² (17)

Through the above steps, the displacement and attitude angles at the current time are computed and, combined with the initial position, iterated in chronological order to obtain the trajectory of the entire motion (the integration loop is sketched below).
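Formulas (12)–(17) amount to the dead-reckoning loop sketched below; the gravity sign convention and the toy data are assumptions:

```python
import numpy as np

def dead_reckon(accel_b, C_bn_seq, dt, g=9.8):
    """Integrate body-frame accelerations into navigation-frame velocity and
    position: a_g = C_b^n a^b - g_up, then v += a_g*dt and
    s += v*dt + 0.5*a_g*dt^2 (formulas (13)-(17))."""
    v = np.zeros(3)
    s = np.zeros(3)
    track = [s.copy()]
    g_up = np.array([0.0, 0.0, g])        # gravity removed on the up axis
    for a_b, C_bn in zip(accel_b, C_bn_seq):
        a_g = C_bn @ a_b - g_up           # formulas (13)-(14)
        s = s + v * dt + 0.5 * a_g * dt * dt   # formulas (16)-(17), uses v(k)
        v = v + a_g * dt                  # formula (15)
        track.append(s.copy())
    return np.array(track)

# toy example: level carrier, constant 0.1 m/s^2 forward thrust for 1 s
N, dt = 100, 0.01
accel_b = [np.array([0.1, 0.0, 9.8])] * N   # accelerometer senses +g at rest
C_bn_seq = [np.eye(3)] * N
print(dead_reckon(accel_b, C_bn_seq, dt)[-1])   # ~[0.05, 0, 0] metres
```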
4. The localization method combining LIFT-based visual positioning and inertial navigation according to claim 1, characterized in that in step 3), the dynamic combined positioning of the LIFT-based visual positioning and the inertial navigation comprises data alignment and data fusion, with the following specific steps:
3-1) Data alignment based on Kinect and inertial navigation, including time alignment and spatial alignment:

3-1-1) Time alignment

Time alignment synchronizes the asynchronous data of the Kinect sensor and the inertial sensor to a unified time reference. In the integrated positioning system, the data are asynchronous because the sampling periods of the two sensors differ and their time references are inconsistent. In hardware, time synchronization is realized with a CPLD or by means of an FPGA chip; in software, it is performed by a program on the host computer, here using the least squares method. The least squares method assumes that the ratio of the sampling periods of the two sensor types is an integer n: the n measurements of the high-rate sensor are fused into a virtual measurement at time k, which is then fused with the measurement of the low-rate sensor at time k, finally achieving time synchronization (see the sketch below);
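One way to realize the described least-squares fusion is to fit a local line to the n high-rate samples and evaluate it at the low-rate sensor's timestamp; the sampling rates and data in this sketch are assumed:

```python
import numpy as np

def virtual_measurement(t_hi, z_hi, t_k):
    """Least-squares fit z = a + b*t to the n high-rate samples and evaluate
    at the low-rate timestamp t_k, giving the 'virtual measurement'."""
    A = np.vstack([np.ones_like(t_hi), t_hi]).T
    coef, *_ = np.linalg.lstsq(A, z_hi, rcond=None)
    return coef[0] + coef[1] * t_k

# assumed rates: IMU at 100 Hz, Kinect at 25 Hz -> n = 4 IMU samples per frame
t_hi = np.array([0.00, 0.01, 0.02, 0.03])      # IMU timestamps (s)
z_hi = np.array([0.10, 0.12, 0.11, 0.13])      # e.g. one accelerometer axis
t_k = 0.04                                     # Kinect frame timestamp
z_virtual = virtual_measurement(t_hi, z_hi, t_k)
```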
3-1-2) Spatial alignment

Spatial alignment is the unification of the coordinate frames, i.e., the information in each sensor frame is transformed into a unified navigation frame. Spatial alignment first establishes the combined coordinate frame. When the Kinect sensor is mounted horizontally with the lens facing the direction of travel, its three axes are: forward as the positive Z axis, downward as the positive Y axis, and leftward as the positive X axis. For the strapdown inertial navigation system, the spatial frame is mainly the carrier frame, commonly abbreviated as the b frame, whose origin is the center of the three axes of the inertial module, with the X axis pointing to the right of the carrier and the Y axis pointing to the front of the carrier, following the right-hand rule. Here the inertial module is rigidly attached to the vision sensor, which in turn is fixed to the carrier, forming the correspondence between the coordinate frames of the combined system.
Before the integrated positioning, the positions of the two sensors are first aligned, unifying the inertial frame and the Kinect vision frame; the relation between the inertial frame and the navigation frame can then be used to transform both into the same frame, completing the spatial unification. Here the relative pose of the two sensors in the combined system is estimated with a time-domain constraint estimation method;
The alignment comprises two steps:

1. establish the alignment model of the inertial navigation and the Kinect, determine the initial pose estimate of the inertial navigation from the camera calibration result, and hence the initial value of the filter state;

2. according to the established filtering equations, perform filtering estimation on the image sequence and the inertial navigation output to determine the relative pose of the inertial navigation and the Kinect;
The specific alignment procedure of the inertial navigation and the visual positioning is as follows:

Denote the inertial frame as the B frame, the Kinect visual frame as the A frame, and the image frame as the W frame. The chosen state is:

X = [p_WB^T, p_AB^T, q_WB^T, q_AB^T, v^T, g^T, ω_B^T, a^T]^T

where p_WB is the displacement between the origins of the W and B frames, p_AB is the displacement between the origins of the A and B frames, q_WB is the rotation quaternion from the B frame to the W frame, q_AB is the rotation quaternion from the A frame to the B frame, v is the velocity of the B frame relative to the W frame, g is the gravitational acceleration, ω_B is the angular rate of the B frame, and a is the acceleration of the B frame relative to the W frame. The evolution of each state follows the standard kinematic differential equations, in which:
Ω(ω) is the quaternion kinematics matrix appearing in the quaternion rate equation q̇ = ½·Ω(ω)·q, whose standard form (for q = [q_w, q_x, q_y, q_z]^T) is:

Ω(ω) = [ 0 −ω_x −ω_y −ω_z ; ω_x 0 ω_z −ω_y ; ω_y −ω_z 0 ω_x ; ω_z ω_y −ω_x 0 ]

Assuming the acceleration and angular rate are constant apart from driving noise, they are modeled as ω̇_B = n_ω and ȧ = n_a. The observations are obtained from the inertial navigation and the Kinect. The inertial navigation measures the angular rate and acceleration in its local frame:
ω_m = ω_B + b_g + n_g, a_m = a + b_a + n_a

where ω_m and a_m are the measured angular rate and acceleration, b_g and b_a are the gyro and accelerometer biases, and n_g and n_a are the measurement noises. For the Kinect, the observation equation related to the feature point pixel coordinates is available as:

z_i = π(K·X_i) + v_i

where K denotes the camera intrinsic parameters, π(·) the perspective projection of the feature point X_i into pixel coordinates, and v_i the observation noise of the feature point pixel coordinates; the observations are stacked accordingly. For different instants, definite constraint relations exist between the states and the observations of the system (a quaternion propagation step is sketched below);
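For concreteness, a quaternion propagation step using the Ω(ω) form above (Hamilton convention with q = [q_w, q_x, q_y, q_z]^T, assumed here) can be written as:

```python
import numpy as np

def omega_matrix(w):
    """4x4 quaternion kinematics matrix Omega(w) for q = [qw, qx, qy, qz]."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [ wx, 0.0,  wz, -wy],
                     [ wy, -wz, 0.0,  wx],
                     [ wz,  wy, -wx, 0.0]])

def propagate_quaternion(q, w, dt):
    """q_dot = 0.5 * Omega(w) * q, integrated with forward Euler + renormalise."""
    q = q + 0.5 * omega_matrix(w) @ q * dt
    return q / np.linalg.norm(q)

q = np.array([1.0, 0.0, 0.0, 0.0])             # identity attitude
q = propagate_quaternion(q, w=(0.01, -0.02, 0.005), dt=0.01)
```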
3-2) Data fusion based on Kinect and inertial navigation

Fusion estimation can be divided into linear estimation and nonlinear estimation, corresponding to the linear Kalman filter and the nonlinear extended Kalman filter. The Kalman filter solves the filtering estimation problem with a recursive approach, propagating with minimal error: the state estimate is formed from the prediction of the previous sampling instant and the measurement of the current instant. The extended Kalman filter likewise estimates the state from predictions and measurements, and it is suitable for nonlinear systems. The main design steps of the fusion algorithm based on the extended Kalman filter are as follows:
3-2-1) Establish the process model

The state equation establishes the relation between the state variables of adjacent instants. The state vector to be estimated is X_k = (P_k, Φ_k, V_k, δ_k, ε_k)^T. In the state variable X_k, P_k = (x_k, y_k, z_k)^T is the position at time k, with x_k, y_k and z_k the three coordinates; Φ_k = (ψ_k, α_k, γ_k)^T is the attitude at time k, with ψ_k, α_k and γ_k the yaw, pitch and roll angles respectively; V_k = (v_x, v_y, v_z)^T is the velocity vector at time k, with v_x, v_y and v_z the velocities on the three axes; δ_k = (δ_x, δ_y, δ_z)^T is the acceleration error at time k; ε_k = (ε_x, ε_y, ε_z)^T is the angular rate error at time k. The process model is built from the measurements of the inertial navigation, taking the accelerometer measurement a^b and the gyroscope measurement ω^b as the input variables of the process model, denoted u_k:

u_k = (a^b, ω^b)^T
The accelerometer and gyroscope are affected during measurement by the environment and by their own measurement characteristics, and thus contain bias and noise. Here a_g denotes the acceleration in the navigation frame and C_b^n the attitude matrix expressing the transformation from the body frame to the navigation frame, obtained by the attitude solution from the gyroscope measurements; subtracting the influence of the gravitational acceleration then gives:

a_g = C_b^n·(a^b − δ_k) − [0, 0, g]^T

Taking the measurements of the inertial navigation as the input variables of the process model, the state model of the extended Kalman filter is established as:

P_k = P_{k−1} + V_{k−1}·Δt + ½·a_g·Δt²
Φ_k = Φ_{k−1} + E_b^n·(ω^b − ε_{k−1})·Δt
V_k = V_{k−1} + a_g·Δt
δ_k = δ_{k−1}
ε_k = ε_{k−1}

where Δt is the sampling time interval, C_b^n denotes the transformation matrix from the body frame to the navigation frame, and the matrix mapping the body angular rates to the Euler angle rates (the specific rotation transformation) is denoted E_b^n here. The filter state equation is:

X_k = f(X_{k−1}, U_{k−1}, W_{k−1})

where W_{k−1} is the process noise;
3-2-2) Establish the observation model

The observation model expresses the relation between the observations and the state variables. For the measurement based on the Kinect sensor, the relative pose can be computed from the images of two adjacent instants, i.e., by seeking the optimum of the registration error between the matched 3D feature points p_i and p′_i of the two frames:

min over R, T of Σ_i ‖p_i − (R·p′_i + T)‖²

From this formula the relative pose of the two adjacent instants, comprising the rotation matrix R and the translation vector T, is obtained, from which the current position and attitude are derived. Dividing the motion vector T by the time difference of the adjacent instants gives the relative linear velocity, denoted V_vo. Therefore, taking the rotation matrix R and the translation vector T as the measurements, the observation model of the extended Kalman filter is established:
T_k = V_k·Δt + a_g·(Δt)²/2 + v_{t,k}

The filter observation equation is:

Z_k = h(X_k) + V_k

where V_k is Gaussian white noise satisfying V_k ~ N(0, R). The value of the noise covariance R depends on the confidence of the Kinect visual positioning, which is determined by the number of correctly matched feature points and the depth error: the lower the confidence, the larger the covariance of the noise. Once the observation model is established, the relative pose of two adjacent instants is obtained by solving the equations; executing these steps in a loop yields the optimized integrated-positioning pose trajectory estimate (a compact filter loop is sketched below).
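Putting 3-2-1) and 3-2-2) together, a compact filter loop of the kind described might be organized as below; the 15-dimensional state layout matches X_k above, but the fusion rate, the noise levels, the crude Jacobian and the use of the accumulated visual pose as a direct observation are simplifying assumptions rather than the patent's exact filter:

```python
import numpy as np

DT = 0.04                     # fusion period (assumed 25 Hz Kinect rate)
N = 15                        # state: P(3), Phi(3), V(3), delta(3), eps(3)

def predict(x, P, a_b, w_b, C_bn, E_bn, Q):
    """Process model 3-2-1): propagate with the IMU input u_k = (a_b, w_b)."""
    delta, eps = x[9:12], x[12:15]
    a_g = C_bn @ (a_b - delta) + np.array([0.0, 0.0, -9.8])   # bias + gravity
    x_new = x.copy()
    x_new[0:3] = x[0:3] + x[6:9] * DT + 0.5 * a_g * DT * DT   # position
    x_new[3:6] = x[3:6] + E_bn @ (w_b - eps) * DT             # attitude
    x_new[6:9] = x[6:9] + a_g * DT                            # velocity
    F = np.eye(N); F[0:3, 6:9] = np.eye(3) * DT               # crude Jacobian
    return x_new, F @ P @ F.T + Q

def update(x, P, z_pose, Rcov):
    """Observation model 3-2-2), simplified: the visual relative poses (R, T)
    are accumulated into z_pose = [x, y, z, yaw, pitch, roll], treated here
    as a direct observation of position and attitude."""
    H = np.zeros((6, N))
    H[0:3, 0:3] = np.eye(3)            # observe P_k
    H[3:6, 3:6] = np.eye(3)            # observe Phi_k
    S = H @ P @ H.T + Rcov
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z_pose - H @ x)
    return x, (np.eye(N) - K @ H) @ P

# one fusion cycle with illustrative numbers
x, P = np.zeros(N), np.eye(N) * 0.1
Q = np.eye(N) * 1e-4
Rcov = np.eye(6) * 1e-2               # enlarged when Kinect confidence is low
x, P = predict(x, P, a_b=np.array([0.0, 0.0, 9.8]),
               w_b=np.array([0.0, 0.0, 0.01]),
               C_bn=np.eye(3), E_bn=np.eye(3), Q=Q)
x, P = update(x, P, z_pose=np.zeros(6), Rcov=Rcov)
```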
CN201910355769.8A 2019-04-29 2019-04-29 A kind of localization method of vision positioning and inertial navigation combination based on LIFT Pending CN110095116A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910355769.8A CN110095116A (en) 2019-04-29 2019-04-29 A kind of localization method of vision positioning and inertial navigation combination based on LIFT

Publications (1)

Publication Number Publication Date
CN110095116A true CN110095116A (en) 2019-08-06

Family

ID=67446387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910355769.8A Pending CN110095116A (en) 2019-04-29 2019-04-29 A kind of localization method of vision positioning and inertial navigation combination based on LIFT

Country Status (1)

Country Link
CN (1) CN110095116A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107084718A (en) * 2017-04-14 2017-08-22 桂林电子科技大学 Indoor orientation method based on pedestrian's reckoning
CN108008739A (en) * 2017-12-28 2018-05-08 南京航空航天大学 A kind of unmanned plane tracking of mobile target and landing system and method
CN108307174A (en) * 2018-01-26 2018-07-20 上海深视信息科技有限公司 A kind of depth image sensor precision improvement method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KWANG MOO YI et al.: "LIFT: Learned Invariant Feature Transform", Computer Vision - ECCV 2016 *
ZHANG Yun: "Research on dynamic combined localization methods based on vision and inertial navigation", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110411457A (en) * 2019-08-27 2019-11-05 纵目科技(上海)股份有限公司 Localization method, system, terminal and the storage medium merged with vision is perceived based on stroke
CN110411457B (en) * 2019-08-27 2024-04-19 纵目科技(上海)股份有限公司 Positioning method, system, terminal and storage medium based on stroke perception and vision fusion
CN110595468A (en) * 2019-09-25 2019-12-20 中国地质科学院地球物理地球化学勘查研究所 Three-component induction coil attitude measurement system and method based on deep learning
CN110595468B (en) * 2019-09-25 2021-05-07 中国地质科学院地球物理地球化学勘查研究所 Three-component induction coil attitude measurement system and method based on deep learning
CN110717927A (en) * 2019-10-10 2020-01-21 桂林电子科技大学 Indoor robot motion estimation method based on deep learning and visual inertial fusion
CN110991085A (en) * 2019-12-20 2020-04-10 上海有个机器人有限公司 Robot image simulation data construction method, medium, terminal and device
CN110991085B (en) * 2019-12-20 2023-08-29 上海有个机器人有限公司 Method, medium, terminal and device for constructing robot image simulation data
CN111174781B (en) * 2019-12-31 2022-03-04 同济大学 Inertial navigation positioning method based on wearable device combined target detection
CN113065572B (en) * 2019-12-31 2023-09-08 北京凌宇智控科技有限公司 Multi-sensor fusion data processing method, positioning device and virtual reality equipment
CN113065572A (en) * 2019-12-31 2021-07-02 北京凌宇智控科技有限公司 Multi-sensor fusion data processing method, positioning device and virtual reality equipment
CN111174781A (en) * 2019-12-31 2020-05-19 同济大学 Inertial navigation positioning method based on wearable device combined target detection
EP3851806B1 (en) 2020-01-15 2023-01-11 Leuze electronic GmbH + Co. KG Sensor assembly and method for operating a sensor assembly
WO2021196983A1 (en) * 2020-03-30 2021-10-07 华为技术有限公司 Ego-motion estimation method and apparatus
CN111595332A (en) * 2020-04-13 2020-08-28 宁波深寻信息科技有限公司 Full-environment positioning method integrating inertial technology and visual modeling
CN111709990A (en) * 2020-05-22 2020-09-25 贵州民族大学 Camera repositioning method and system
CN111709990B (en) * 2020-05-22 2023-06-20 贵州民族大学 Camera repositioning method and system
CN111652155A (en) * 2020-06-04 2020-09-11 北京航空航天大学 Human body movement intention identification method and system
CN111595349A (en) * 2020-06-28 2020-08-28 浙江商汤科技开发有限公司 Navigation method and device, electronic equipment and storage medium
CN112308998A (en) * 2020-11-02 2021-02-02 广东电网有限责任公司 Indoor positioning intelligent inspection system and method based on Bluetooth
CN112683261A (en) * 2020-11-19 2021-04-20 电子科技大学 Unmanned aerial vehicle robustness navigation method based on speed prediction
CN112562077B (en) * 2020-11-25 2024-01-09 西北工业大学 Pedestrian indoor positioning method integrating PDR and priori map
CN112562077A (en) * 2020-11-25 2021-03-26 西北工业大学 Pedestrian indoor positioning method integrating PDR and prior map
CN112556719A (en) * 2020-11-27 2021-03-26 广东电网有限责任公司肇庆供电局 Visual inertial odometer implementation method based on CNN-EKF
CN112556719B (en) * 2020-11-27 2022-01-21 广东电网有限责任公司肇庆供电局 Visual inertial odometer implementation method based on CNN-EKF
CN112577489A (en) * 2020-12-08 2021-03-30 北京电子工程总体研究所 Seeker sight rotation rate extraction method based on interactive multi-model filtering
CN112362085A (en) * 2021-01-12 2021-02-12 中国石油大学胜利学院 Method for acquiring correction experiment data of nine-axis sensor
CN112880672A (en) * 2021-01-14 2021-06-01 武汉元生创新科技有限公司 AI-based inertial sensor fusion strategy self-adaption method and device
CN112857367B (en) * 2021-01-21 2023-10-13 中国煤炭科工集团太原研究院有限公司 Heading machine pose detection method based on machine vision and inertial navigation
CN112857367A (en) * 2021-01-21 2021-05-28 中国煤炭科工集团太原研究院有限公司 Heading machine pose detection method based on machine vision and inertial navigation
CN112818898B (en) * 2021-02-20 2024-02-20 北京字跳网络技术有限公司 Model training method and device and electronic equipment
CN112818898A (en) * 2021-02-20 2021-05-18 北京字跳网络技术有限公司 Model training method and device and electronic equipment
CN113111480A (en) * 2021-02-22 2021-07-13 同济大学 Method and device for diagnosing and detecting running state of drainage pipe network
CN113111480B (en) * 2021-02-22 2022-07-29 同济大学 Method and device for diagnosing and detecting running state of drainage pipe network
CN113109853B (en) * 2021-03-12 2022-11-15 上海卫星工程研究所 Satellite attitude variable frequency calculation output method and system based on double-frequency and double-mode design
CN113109853A (en) * 2021-03-12 2021-07-13 上海卫星工程研究所 Satellite attitude variable frequency calculation output method and system based on double-frequency and double-mode design
CN112729294A (en) * 2021-04-02 2021-04-30 北京科技大学 Pose estimation method and system suitable for vision and inertia fusion of robot
CN112729294B (en) * 2021-04-02 2021-06-25 北京科技大学 Pose estimation method and system suitable for vision and inertia fusion of robot
CN113375646A (en) * 2021-05-06 2021-09-10 武汉海达数云技术有限公司 Positioning attitude determination and point cloud data real-time resolving and fusing method for mobile measurement
CN113551665A (en) * 2021-06-25 2021-10-26 中国科学院国家空间科学中心 High dynamic motion state sensing system and sensing method for motion carrier
CN113551665B (en) * 2021-06-25 2023-08-11 中国科学院国家空间科学中心 High-dynamic motion state sensing system and sensing method for motion carrier
CN113379803A (en) * 2021-07-07 2021-09-10 上海谦尊升网络科技有限公司 Positioning method based on visual image
CN113379803B (en) * 2021-07-07 2024-02-02 上海谦尊升网络科技有限公司 Positioning method based on visual image
CN113483769B (en) * 2021-08-17 2024-03-29 清华大学 Vehicle self-positioning method, system, equipment and medium based on particle filter
CN113483769A (en) * 2021-08-17 2021-10-08 清华大学 Particle filter based vehicle self-positioning method, system, device and medium
CN113779166A (en) * 2021-08-20 2021-12-10 上海瑾盛通信科技有限公司 Geo-fence control method and device, storage medium and electronic equipment
CN113779166B (en) * 2021-08-20 2024-02-13 上海瑾盛通信科技有限公司 Geofence control method and device, storage medium and electronic equipment
CN113899363A (en) * 2021-09-29 2022-01-07 北京百度网讯科技有限公司 Vehicle positioning method and device and automatic driving vehicle
US11953609B2 (en) 2021-09-29 2024-04-09 Beijing Baidu Netcom Science Technology Co., Ltd. Vehicle positioning method, apparatus and autonomous driving vehicle
CN113965646A (en) * 2021-11-10 2022-01-21 Oppo广东移动通信有限公司 Positioning control method and device, electronic equipment and storage medium
CN114719848A (en) * 2022-01-25 2022-07-08 西安微电子技术研究所 Unmanned aerial vehicle height estimation method based on neural network fused with visual and inertial navigation information
CN115024715A (en) * 2022-05-20 2022-09-09 北京航天时代光电科技有限公司 Intelligent measurement and digital training system for human body movement
CN115273399A (en) * 2022-07-29 2022-11-01 西安电子科技大学 Old people rescue equipment and method based on gyroscope three-axis acceleration sensor
CN116222306A (en) * 2023-02-15 2023-06-06 湖南云箭科技有限公司 Method and system for designing bias guidance law of laser guided fire extinguishing bomb
CN116222306B (en) * 2023-02-15 2023-11-07 湖南云箭科技有限公司 Method and system for designing bias guidance law of laser guided fire extinguishing bomb

Similar Documents

Publication Publication Date Title
CN110095116A (en) A kind of localization method of vision positioning and inertial navigation combination based on LIFT
US11900536B2 (en) Visual-inertial positional awareness for autonomous and non-autonomous tracking
CN108303099B (en) Autonomous navigation method in unmanned plane room based on 3D vision SLAM
Panahandeh et al. Vision-aided inertial navigation based on ground plane feature detection
CN106017463B (en) A kind of Aerial vehicle position method based on orientation sensing device
Heo et al. EKF-based visual inertial navigation using sliding window nonlinear optimization
CN104748751B (en) The calculation method of attitude matrix and positioning navigation method based on attitude matrix
Oskiper et al. Multi-sensor navigation algorithm using monocular camera, IMU and GPS for large scale augmented reality
CN108051002A (en) Transport vehicle space-location method and system based on inertia measurement auxiliary vision
CN110125928A (en) A kind of binocular inertial navigation SLAM system carrying out characteristic matching based on before and after frames
CN109540126A (en) A kind of inertia visual combination air navigation aid based on optical flow method
CN109166149A (en) A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
CN105953796A (en) Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone
CN107909614B (en) Positioning method of inspection robot in GPS failure environment
CN103994765B (en) Positioning method of inertial sensor
CN105953795B (en) A kind of navigation device and method for the tour of spacecraft surface
CN109931955A (en) Strapdown inertial navigation system Initial Alignment Method based on the filtering of state correlation Lie group
CN112833892B (en) Semantic mapping method based on track alignment
CN107144278A (en) A kind of lander vision navigation method based on multi-source feature
CN108106613A (en) The localization method and system of view-based access control model auxiliary
JP2002532770A (en) Method and system for determining a camera pose in relation to an image
Karam et al. Integrating a low-cost mems imu into a laser-based slam for indoor mobile mapping
Fang et al. A motion tracking method by combining the IMU and camera in mobile devices
Xie et al. Angular Tracking Consistency Guided Fast Feature Association for Visual-Inertial SLAM
Liu et al. Integrated velocity measurement algorithm based on optical flow and scale-invariant feature transform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20190806)