CN108036785A - A kind of aircraft position and orientation estimation method based on direct method and inertial navigation fusion - Google Patents

A kind of aircraft position and orientation estimation method based on direct method and inertial navigation fusion

Info

Publication number
CN108036785A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711190702.0A
Other languages
Chinese (zh)
Inventor
许超
翁桢
翁一桢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201711190702.0A priority Critical patent/CN108036785A/en
Publication of CN108036785A publication Critical patent/CN108036785A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Medicines Containing Antibodies Or Antigens For Use As Internal Diagnostic Agents (AREA)

Abstract

The invention discloses an aircraft pose estimation method based on fusion of a direct method and inertial navigation. The method performs vision measurement with a direct method, optimising the six-degree-of-freedom pose of the aircraft so that the reprojection photometric error is minimised, and fuses the result with the measurements of an inertial measurement unit to obtain metric scale. Without external navigation aids such as GPS signals, the method achieves accurate long-term positioning of the aircraft with good robustness.

Description

Aircraft pose estimation method based on fusion of a direct method and inertial navigation
Technical field
The present invention relates to the field of aircraft positioning, and in particular to an aircraft pose estimation method that fuses a direct method with inertial navigation.
Background technology
Autonomous positioning is the core component of a robot's autonomous navigation system. Once autonomous positioning is achieved, a robot can accomplish further functions such as obstacle avoidance, path planning and autonomous navigation.
Flight control of an unmanned aerial vehicle (UAV, Unmanned Aerial Vehicle) further requires six-degree-of-freedom pose estimation. When a UAV flies outdoors it can use the Global Positioning System (GPS, Global Positioning System), whose accuracy can reach 1 cm when enough GPS satellites are visible. But when the vehicle is blocked by buildings or flies indoors, GPS may not give an accurate position, or may be unusable altogether. Another approach is to use an inertial measurement unit (IMU, Inertial Measurement Unit) and doubly integrate the measured linear acceleration of the aircraft to obtain a position estimate in three-dimensional space. However, this method accumulates large errors, and when the aircraft drifts at constant velocity or moves nearly uniformly the inertial measurement unit cannot accurately measure the drift acceleration, which introduces large errors into the autonomous positioning of the aircraft. These unfavourable factors force us to look for an agile, lightweight, real-time and accurate method to achieve autonomous positioning of an unmanned aerial vehicle.
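As an illustration of the double-integration drift mentioned above, the following minimal Python sketch integrates accelerometer samples twice; the sampling interval, gravity handling and the bias value used in the comment are illustrative assumptions rather than part of the invention.

```python
import numpy as np

def integrate_position(acc_body, R_wb, dt,
                       v0=np.zeros(3), p0=np.zeros(3),
                       g=np.array([0.0, 0.0, -9.81])):
    """Doubly integrate body-frame accelerometer samples into a position estimate.

    acc_body : (N, 3) accelerometer samples in the body frame
    R_wb     : (N, 3, 3) body-to-world rotation at each sample
    dt       : sampling interval in seconds
    """
    v, p = v0.astype(float).copy(), p0.astype(float).copy()
    for a_b, R in zip(acc_body, R_wb):
        a_w = R @ a_b + g      # rotate to the world frame and remove gravity
        v = v + a_w * dt       # first integration: velocity
        p = p + v * dt         # second integration: position
    return p

# A small constant accelerometer bias grows quadratically in position:
# a 0.05 m/s^2 bias gives roughly 0.5 * 0.05 * 60^2 = 90 m of error after 60 s,
# which is why pure inertial integration drifts without an external correction.
```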
Summary of the invention
In view of the above deficiencies of the prior art, the object of the present invention is to provide an aircraft pose estimation method based on fusion of a direct method and inertial navigation. Without external navigation aids such as GPS signals, the method achieves accurate long-term positioning of the aircraft, with good robustness and small accumulated error.
The technical solution provided by the present invention is an aircraft pose estimation method based on fusion of a direct method and inertial navigation, characterised in that it comprises the following steps:
(1) Measure the real-time acceleration and angular velocity of the aircraft with an accelerometer and a gyroscope respectively, and determine the absolute yaw angle with a magnetometer, thereby obtaining the real-time attitude of the aircraft; the real-time attitude comprises the pitch angle, roll angle and yaw angle of the aircraft;
(2) Using a visual sensor, estimate with a direct method the motion pose of the visual sensor up to an unknown scale; the motion pose obtained in this step comprises the rotation of the visual sensor relative to a reference frame and its translation in the reference frame, which lacks metric scale;
(3) Establish the equations of motion and construct an extended Kalman filter; use the acceleration and angular velocity measurements for the extended Kalman filter prediction, use the scale-lacking motion pose estimate of the visual sensor output by the visual direct method as the measurement, perform the update, and obtain the aircraft pose estimate.
In step (2), the direct method obtains the scale-lacking motion pose estimate of the visual sensor as follows:
Establish the total photometric error $E_{photo}$:
$$E_{photo} = \sum_{i\in\mathcal{F}}\;\sum_{p\in\mathcal{P}_i}\;\sum_{j\in obs(p)} E_{pj} \qquad (1)$$
where $\mathcal{F}$ denotes the set of image frames in the sliding window, p denotes the coordinates of a feature point in $\mathcal{P}_i$, $\mathcal{P}_i$ denotes the set of all feature point coordinates in the i-th frame, and obs(p) denotes the set of all observations of feature point p;
$$E_{pj} = \sum_{p\in N_p} w_p\,\left\| \left(I_j[p'] - b_j\right) - \frac{t_j\,e^{a_j}}{t_i\,e^{a_i}}\left(I_i[p] - b_i\right) \right\|_\gamma \qquad (2)$$
where $E_{pj}$ denotes the reprojection photometric error of a single feature point, $N_p$ denotes the patch around the feature point, $w_p$ denotes a weight, $I_i$ and $I_j$ denote the i-th and j-th frames respectively, and p' denotes the coordinate of point p of the i-th frame reprojected into the j-th frame; $a_i$, $b_i$ denote the photometric parameters of the i-th frame, $a_j$, $b_j$ denote the photometric parameters of the j-th frame, $t_i$ and $t_j$ denote the exposure times of the i-th and j-th frames, and $\|\cdot\|_\gamma$ denotes the Huber norm;
The coordinate p' of point p in the i-th frame reprojected into the j-th frame is:
$$p' = \pi_c\!\left(R\,\pi_c^{-1}(p,\,d_p) + t\right) \qquad (3)$$
where $\pi_c$ is the 3D-to-2D projection model, here a pinhole camera model, whose intrinsic matrix is:
$$K=\begin{bmatrix} f_x & 0 & c_x\\ 0 & f_y & c_y\\ 0 & 0 & 1\end{bmatrix},\qquad K^{-1}=\begin{bmatrix} f_x^{-1} & 0 & -f_x^{-1}c_x\\ 0 & f_y^{-1} & -f_y^{-1}c_y\\ 0 & 0 & 1\end{bmatrix} \qquad (4)$$
R denotes the rotation from the i-th frame to the j-th frame, t the translation from the i-th frame to the j-th frame, and $d_p$ the inverse depth of point p; in the intrinsic matrix K, $f_x$, $f_y$, $c_x$, $c_y$ denote the focal lengths of the camera along the x and y axes and the x and y coordinates of the principal point, respectively;
Optimising the relative camera pose R and t in equation (3) so that the total photometric error $E_{photo}$ is minimised recovers the camera motion pose estimate R and t. A minimal code sketch of this reprojection and photometric residual follows.
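The following Python sketch shows the reprojection of equation (3) and a per-point photometric residual consistent with equation (2) and the definitions above (patch $N_p$, weights $w_p$, photometric parameters a, b, exposure times t, Huber norm). The pixel lookup, patch layout and Huber threshold are illustrative assumptions, not prescribed by the invention.

```python
import numpy as np

def project(K, X):
    """Pinhole projection pi_c: 3D point in the camera frame -> pixel coordinates."""
    x = K @ X
    return x[:2] / x[2]

def backproject(K_inv, p, d_p):
    """Inverse projection pi_c^{-1}: pixel p with inverse depth d_p -> 3D point."""
    return (K_inv @ np.array([p[0], p[1], 1.0])) / d_p

def reproject(K, K_inv, p, d_p, R, t):
    """Equation (3): p' = pi_c(R * pi_c^{-1}(p, d_p) + t)."""
    return project(K, R @ backproject(K_inv, p, d_p) + t)

def huber(r, gamma=9.0):
    """Huber kernel applied to a scalar residual."""
    a = abs(r)
    return 0.5 * r * r if a <= gamma else gamma * (a - 0.5 * gamma)

def point_photometric_error(I_i, I_j, patch, d_p, R, t, K, K_inv,
                            a_i, b_i, a_j, b_j, t_i, t_j, w_p):
    """Photometric error of one point over its patch N_p, as in equation (2)."""
    s = (t_j * np.exp(a_j)) / (t_i * np.exp(a_i))  # exposure / affine brightness transfer
    err = 0.0
    for p, w in zip(patch, w_p):
        p_j = reproject(K, K_inv, p, d_p, R, t)
        u, v = int(round(p_j[0])), int(round(p_j[1]))   # nearest-neighbour lookup for brevity
        r = (I_j[v, u] - b_j) - s * (I_i[int(p[1]), int(p[0])] - b_i)
        err += w * huber(r)
    return err
```

In a full system the total error $E_{photo}$ would sum this quantity over all frames in the sliding window and all observed points, and R, t would be optimised (for example by Gauss-Newton) to minimise it.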
Further, in step (3) the equations of motion are established, the extended Kalman filter is constructed, and the acceleration and angular velocity measurements are used for the extended Kalman filter prediction, as follows:
Construct the state vector x of the extended Kalman filter:
$$x = \left\{\; p_i^{w\,T}\;\; v_i^{w\,T}\;\; q_i^{w\,T}\;\; b_\omega^{\,T}\;\; b_a^{\,T}\;\; \lambda\;\; p_c^{i\,T}\;\; q_c^{i\,T}\;\; p_w^{v\,T}\;\; q_w^{v\,T} \;\right\} \qquad (5)$$
where $p_i^w$, $v_i^w$, $q_i^w$ denote the position, velocity and attitude (expressed as a quaternion) of the IMU frame at time i relative to the world frame (denoted w), $b_\omega$, $b_a$ denote the biases of the angular velocity and the acceleration, $\lambda$ denotes the scale between the vision system and the real physical system, $p_c^i$, $q_c^i$ denote the position and attitude of the camera frame relative to the IMU frame, and $p_w^v$, $q_w^v$ denote the position and attitude of the world frame relative to the vision frame (denoted v); the kinematic equations of the true state are then:
$$\dot p_i^w = v_i^w,\qquad \dot v_i^w = C(q_i^w)\,a - g,\qquad \dot q_i^w = \tfrac{1}{2}\,\Omega(\omega)\,q_i^w,$$
$$\dot b_\omega = n_{b_\omega},\qquad \dot b_a = n_{b_a},\qquad \dot\lambda = 0,\qquad \dot p_c^i = 0,\qquad \dot q_c^i = 0,\qquad \dot p_w^v = 0,\qquad \dot q_w^v = 0 \qquad (6)$$
$$a = a_m - b_a - n_a,\qquad \omega = \omega_m - b_\omega - n_\omega \qquad (7)$$
where C(·) denotes the transformation from a quaternion to a rotation matrix, Ω(·) denotes the mapping from an angular velocity to the quaternion derivative, $n_{b_\omega}$ denotes the drift noise of the angular velocity bias, $n_{b_a}$ denotes the drift noise of the acceleration bias, g denotes the gravitational acceleration, a denotes the true value of the acceleration, $a_m$ denotes the acceleration output of the inertial measurement unit, $n_a$ denotes the acceleration measurement noise, $\omega$ denotes the true value of the angular velocity, $\omega_m$ denotes the angular velocity output of the inertial measurement unit, and $n_\omega$ denotes the angular velocity measurement noise;
Because of the presence of noise and disturbances, the true values of the states cannot be obtained in practice, and these states must be estimated. Treating the accelerometer and gyroscope measurements as free of noise and disturbance, the true-state equations are rewritten as:
where $\hat p_i^w$, $\hat v_i^w$, $\hat q_i^w$ denote the estimates of the position, velocity and attitude (expressed as a quaternion) of the IMU frame at time i relative to the world frame (denoted w), $\hat b_\omega$, $\hat b_a$ denote the estimates of the angular velocity and acceleration biases, $\hat\lambda$ denotes the estimate of the scale between the vision system and the real physical system, $\hat p_c^i$, $\hat q_c^i$ denote the estimates of the position and attitude of the camera frame relative to the IMU frame, $\hat p_w^v$, $\hat q_w^v$ denote the estimates of the position and attitude of the world frame relative to the vision frame (denoted v), and $\hat a$, $\hat\omega$ denote the estimates of the true acceleration and angular velocity;
In the state representations of equations (6) and (8) the attitude is described by a quaternion, but a quaternion is not a minimal parameterisation of the attitude; to avoid the over-parameterisation and the resulting singularity of the covariance matrix, the rotation vector θ is introduced as the minimal attitude description when characterising the state error, so the error state vector is:
where $\Delta p_i^w$, $\Delta v_i^w$, $\delta\theta_i^w$ denote the position error, velocity error and attitude error of the IMU frame at time i relative to the world frame (denoted w), $\Delta b_\omega$, $\Delta b_a$ denote the bias errors of the angular velocity and the acceleration, $\Delta\lambda$ denotes the scale error between the vision system and the real physical system, $\Delta p_c^i$, $\delta\theta_c^i$ denote the position and attitude errors of the camera frame relative to the IMU frame, and $\Delta p_w^v$, $\delta\theta_w^v$ denote the position and attitude errors of the world frame relative to the vision frame (denoted v); a minimal sketch of applying such a rotation-vector error to a quaternion estimate is given below.
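The following Python sketch illustrates the minimal attitude error described above: a rotation-vector error δθ is injected into a quaternion estimate through a small-angle error quaternion. The (w, x, y, z) quaternion convention and the first-order small-angle approximation are assumptions of this sketch.

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton quaternion product, (w, x, y, z) convention."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def apply_attitude_error(q_hat, delta_theta):
    """Correct a quaternion estimate with a 3-vector rotation error delta_theta.

    For a small rotation the error quaternion is approximately [1, delta_theta / 2];
    renormalisation keeps the result a unit quaternion.
    """
    dq = np.concatenate(([1.0], 0.5 * np.asarray(delta_theta, dtype=float)))
    q = quat_mul(q_hat, dq)
    return q / np.linalg.norm(q)
```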
The continuous-time error-state equations of motion are then:
where the operator × applied to a vector denotes its skew-symmetric (cross-product) matrix;
Writing equation (11) in general form, the continuous-time error-state equation is:
$$\dot{\tilde x} = F_c\,\tilde x + G_c\,n \qquad (12)$$
where $F_c$ is the continuous-time error-state transition matrix, $G_c$ is the noise input matrix and n is the noise vector;
Discretising $F_c$ yields $F_d$:
$$F_d = \exp(F_c\,\Delta t) = I_d + F_c\,\Delta t + \frac{1}{2!}F_c^2\,\Delta t^2 + \cdots$$
where Δt denotes the discrete sampling interval and $I_d$ denotes the identity matrix;
and the discrete-time noise covariance matrix $Q_d$ is:
$$Q_d = \int_{\Delta t} F_d(\tau)\,G_c\,Q_c\,G_c^T\,F_d(\tau)^T\,d\tau \qquad (13)$$
where:
$$Q_c = \begin{bmatrix} \sigma_{n_a}^2 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3}\\ 0_{3\times3} & \sigma_{n_{b_a}}^2 & 0_{3\times3} & 0_{3\times3}\\ 0_{3\times3} & 0_{3\times3} & \sigma_{n_\omega}^2 & 0_{3\times3}\\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & \sigma_{n_{b_\omega}}^2 \end{bmatrix} \qquad (14)$$
$Q_c$ is the continuous-time system noise covariance matrix, in which $\sigma_{n_a}^2$, $\sigma_{n_{b_a}}^2$, $\sigma_{n_\omega}^2$, $\sigma_{n_{b_\omega}}^2$ are, in order, the acceleration measurement noise variance, the acceleration bias drift variance, the angular velocity measurement noise variance and the angular velocity bias drift variance;
The prediction steps of the extended Kalman filter are therefore as follows:
1. Propagate the state vector iteratively using the acceleration and angular velocity measurements;
2. Compute the discrete system matrix $F_d$ and the noise covariance matrix $Q_d$;
3. Compute the state covariance matrix from time k-1 to time k according to the Riccati equation:
$$P_{k|k-1} = F_d\,P_{k-1|k-1}\,F_d^T + Q_d \qquad (15)$$
where $P_{k|k-1}$ denotes the process covariance matrix predicted from time k-1 to time k, and $P_{k-1|k-1}$ denotes the process covariance matrix at time k-1. A minimal sketch of these prediction steps follows.
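The following Python sketch implements prediction steps 1-3 above for a generic error-state filter. The truncation order of the matrix exponential, the rectangle approximation of the integral in equation (13) and the `propagate_state` callback are illustrative assumptions.

```python
import numpy as np

def discretize(F_c, G_c, Q_c, dt, order=2):
    """F_d = exp(F_c * dt) truncated after `order` terms; Q_d approximates the
    integral of equation (13) with a single rectangle of width dt."""
    n = F_c.shape[0]
    F_d = np.eye(n)
    term = np.eye(n)
    for k in range(1, order + 1):
        term = term @ (F_c * dt) / k
        F_d = F_d + term
    Q_d = F_d @ G_c @ Q_c @ G_c.T @ F_d.T * dt
    return F_d, Q_d

def ekf_predict(x, P, F_c, G_c, Q_c, dt, propagate_state):
    """Prediction: step 1 integrates the nominal state with the IMU data,
    step 2 builds F_d and Q_d, step 3 propagates the covariance (equation (15))."""
    x_pred = propagate_state(x, dt)           # step 1: integrate acceleration / angular rate
    F_d, Q_d = discretize(F_c, G_c, Q_c, dt)  # step 2
    P_pred = F_d @ P @ F_d.T + Q_d            # step 3: Riccati propagation
    return x_pred, P_pred
```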
Further, in step (3) the scale-lacking visual-sensor motion pose estimate output by the visual direct method is used as the measurement, the update is performed, and the aircraft pose estimate is obtained, as follows:
Compute the measurement equation of the extended Kalman filter:
The position error $\tilde z_p$ is:
$$\tilde z_p = z_p - \hat z_p = C(q_w^v)\left(p_i^w + C(q_i^w)\,p_c^i\right)\lambda + n_p - C(\hat q_w^v)\left(\hat p_i^w + C(\hat q_i^w)\,\hat p_c^i\right)\hat\lambda \qquad (16)$$
where $z_p$ denotes the position measurement of the vision system, $\hat z_p$ denotes the position estimate of the vision system, and $n_p$ denotes the noise of the vision-system position measurement;
Linearising equation (16) gives:
$$\tilde z_p = H_p\,\tilde x \qquad (17)$$
where $H_p$ is the linearised measurement matrix;
Similarly, the attitude error $\tilde z_q$ is established:
$$\tilde z_q = z_q - \hat z_q = q_w^v \otimes q_i^w \otimes q_c^i \otimes \left(\hat q_w^v \otimes \hat q_i^w \otimes \hat q_c^i\right)^{-1} \qquad (18)$$
where $z_q$ denotes the attitude measurement of the vision system, $\hat z_q$ denotes the attitude estimate of the vision system, and ⊗ denotes quaternion multiplication;
Linearising equation (18) gives:
$$\tilde z_q = H_q\,\tilde x \qquad (19)$$
where $H_q$ is the linearised measurement matrix;
Stacking equations (17) and (19) gives the complete measurement equation of the extended Kalman filter:
The update steps of the extended Kalman filter are then computed:
1. Compute the Kalman gain:
$$K = P_{k|k-1}\,H^T\left(H\,P_{k|k-1}\,H^T + R_m\right)^{-1} \qquad (21)$$
where $R_m$ denotes the covariance of the vision-system measurement;
2. Update the state vector:
$$x_{k|k} = x_{k|k-1} + K\,\tilde z \qquad (22)$$
where $x_{k|k}$ denotes the state after the update at time k, and $x_{k|k-1}$ denotes the prediction from time k-1 to time k;
3. Update the process covariance matrix:
$$P_{k|k} = (I - KH)\,P_{k|k-1}\,(I - KH)^T + K\,R_m\,K^T \qquad (23)$$
where I denotes the identity matrix;
The motion pose estimate of the visual sensor with true metric scale is finally obtained. A minimal sketch of these update steps follows.
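The following Python sketch implements the update of equations (21)-(23). Treating the state additively in equation (22) is a simplification of this sketch: in the actual filter the quaternion components are corrected multiplicatively, as in the attitude-error sketch above.

```python
import numpy as np

def ekf_update(x_pred, P_pred, z_tilde, H, R_m):
    """Measurement update, equations (21)-(23).

    z_tilde : stacked position / attitude residual from the vision measurement
    H       : linearised measurement matrix (H_p stacked over H_q)
    R_m     : covariance of the vision measurement
    """
    S = H @ P_pred @ H.T + R_m
    K = P_pred @ H.T @ np.linalg.inv(S)        # equation (21): Kalman gain
    x_upd = x_pred + K @ z_tilde               # equation (22): state correction
    I = np.eye(P_pred.shape[0])
    P_upd = (I - K @ H) @ P_pred @ (I - K @ H).T + K @ R_m @ K.T  # equation (23), Joseph form
    return x_upd, P_upd
```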
Compared with the prior art, the beneficial effects of the present invention are:
1) The present invention uses a direct method as the core algorithm of the vision system; this method does not need to compute feature descriptors or perform feature matching, which saves a substantial amount of computing resources and allows the whole algorithm to run at a higher frequency;
2) Compared with a typical monocular visual odometer, the present invention fuses the acceleration and angular velocity of the inertial measurement unit (IMU), avoiding the scale ambiguity of typical monocular visual odometry and thereby achieving an aircraft positioning function that can run for long periods with high robustness.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware architecture of the present invention;
Fig. 2 is the sensor data processing flowchart with which the present invention realises aircraft positioning;
In the figures: 1 - microprocessor (NUC); 2 - inertial measurement unit (IMU); 3 - camera.
Embodiment
Fig. 1 shows a schematic diagram of the hardware architecture of the present invention, which consists of a microprocessor (NUC) 1, an inertial measurement unit (IMU) 2 and a camera 3; the inertial measurement unit (IMU) 2 and the camera 3 are connected to the microprocessor (NUC) 1.
The microprocessor is an Intel NUC (NUC5i7RYH); its weight is only 0.607 kg, and it has the advantages of small size, many interfaces, fast processing speed, strong computing power, low power consumption and rapid heat dissipation. The main application background of the present invention is motion pose estimation of an aircraft, and the main algorithm relies on fast image processing, so this NUC5i7RYH mini-computer is well suited to the needs of the present invention.
The inertial measurement unit is from the LPMS series of products of the company LP-RESEARCH. This series uses advanced data fusion technology and provides the user with attitude information of high accuracy, high robustness and high stability, together with data such as 3-axis acceleration, 3-axis angular velocity, geomagnetic field and azimuth; it provides reliable attitude information for the extended Kalman filter of the algorithm and is well suited to the needs of the present invention.
The camera is an mvBlueFox-MLC200w industrial camera from the German company Matrix Vision. Its image resolution is 752x480, its maximum frame rate reaches 90 Hz, its exposure time ranges from 6 microseconds to 460 milliseconds, and it uses a global-shutter MT9V034 CMOS sensor; it is well suited to the needs of the present invention.
The aircraft pose estimation method based on fusion of a direct method and inertial navigation provided by the present invention comprises the following steps:
(1) Measure the real-time acceleration and angular velocity of the aircraft with the accelerometer and gyroscope in the inertial measurement unit, and determine the absolute yaw angle with the magnetometer, thereby obtaining the real-time attitude of the aircraft; the real-time attitude comprises the pitch angle, roll angle and yaw angle of the aircraft;
(2) Using the visual sensor (camera), estimate with the direct method the camera motion pose up to an unknown scale; the motion pose obtained in this step comprises the rotation of the camera relative to a reference frame and its translation in the reference frame, which lacks metric scale;
(3) Establish the equations of motion and construct an extended Kalman filter; use the acceleration and angular velocity measurements of the inertial measurement unit for the extended Kalman filter prediction, use the scale-lacking camera motion pose estimate output by the visual direct method as the measurement, perform the update, and obtain the aircraft pose estimate.
In step (2), the direct method obtains the scale-lacking motion pose estimate of the visual sensor as follows:
Establish the total photometric error $E_{photo}$ of equation (1):
where $\mathcal{F}$ denotes the set of image frames in the sliding window, p denotes the coordinates of a feature point in $\mathcal{P}_i$, $\mathcal{P}_i$ denotes the set of all feature point coordinates in the i-th frame, and obs(p) denotes the set of all observations of feature point p;
where $E_{pj}$ denotes the reprojection photometric error of a single feature point (equation (2)), $N_p$ denotes the patch around the feature point, $w_p$ denotes a weight, $I_i$ and $I_j$ denote the i-th and j-th frames respectively, and p' denotes the coordinate of point p of the i-th frame reprojected into the j-th frame; $a_i$, $b_i$ denote the photometric parameters of the i-th frame, $a_j$, $b_j$ denote the photometric parameters of the j-th frame, $t_i$ and $t_j$ denote the exposure times of the i-th and j-th frames, and $\|\cdot\|_\gamma$ denotes the Huber norm;
The coordinate p' of point p in the i-th frame reprojected into the j-th frame is given by equation (3):
where $\pi_c$ is the 3D-to-2D projection model, here a pinhole camera model, whose intrinsic matrix K is given by equation (4);
R denotes the rotation from the i-th frame to the j-th frame, t the translation from the i-th frame to the j-th frame, and $d_p$ the inverse depth of point p; in the intrinsic matrix K, $f_x$, $f_y$, $c_x$, $c_y$ denote the focal lengths of the camera along the x and y axes and the x and y coordinates of the principal point, respectively;
At this point the total photometric error equation of the direct method has been obtained. Optimising the relative camera pose R and t in equation (3) so that the total photometric error $E_{photo}$ is minimised recovers the camera motion pose estimate R and t.
It should be noted that the camera motion pose estimate above is obtained from monocular vision alone, so the camera translation in the reference frame obtained by this method lacks true metric scale, giving the so-called 'scale-lacking' pose estimate. The acceleration and angular velocity measurements of the inertial measurement unit IMU are therefore needed: extended Kalman filter equations are established to completely estimate the aircraft motion pose together with the scale between the vision system and the real physical system.
In step (3), the equations of motion are established, the extended Kalman filter is constructed, and the acceleration and angular velocity measurements are used for the extended Kalman filter prediction, as follows:
Construct the state vector x of the extended Kalman filter (equation (5)):
where $p_i^w$, $v_i^w$, $q_i^w$ denote the position, velocity and attitude (expressed as a quaternion) of the IMU frame at time i relative to the world frame (denoted w), $b_\omega$, $b_a$ denote the biases of the angular velocity and the acceleration, $\lambda$ denotes the scale between the vision system and the real physical system, $p_c^i$, $q_c^i$ denote the position and attitude of the camera frame relative to the IMU frame, and $p_w^v$, $q_w^v$ denote the position and attitude of the world frame relative to the vision frame (denoted v); the kinematic equations of the true state are then given by equation (6), with
$$a = a_m - b_a - n_a,\qquad \omega = \omega_m - b_\omega - n_\omega \qquad (7)$$
where C(·) denotes the transformation from a quaternion to a rotation matrix, Ω(·) denotes the mapping from an angular velocity to the quaternion derivative, $n_{b_\omega}$ denotes the drift noise of the angular velocity bias, $n_{b_a}$ denotes the drift noise of the acceleration bias, g denotes the gravitational acceleration, a denotes the true value of the acceleration, $a_m$ denotes the acceleration output of the inertial measurement unit, $n_a$ denotes the acceleration measurement noise, $\omega$ denotes the true value of the angular velocity, $\omega_m$ denotes the angular velocity output of the inertial measurement unit, and $n_\omega$ denotes the angular velocity measurement noise;
Because of the presence of noise and disturbances, the true values of the states cannot be obtained in practice, and these states must be estimated. Treating the accelerometer and gyroscope measurements as free of noise and disturbance, the true-state equations are rewritten as equations (8) and (9):
where $\hat p_i^w$, $\hat v_i^w$, $\hat q_i^w$ denote the estimates of the position, velocity and attitude (expressed as a quaternion) of the IMU frame at time i relative to the world frame (denoted w), $\hat b_\omega$, $\hat b_a$ denote the estimates of the angular velocity and acceleration biases, $\hat\lambda$ denotes the estimate of the scale between the vision system and the real physical system, $\hat p_c^i$, $\hat q_c^i$ denote the estimates of the position and attitude of the camera frame relative to the IMU frame, $\hat p_w^v$, $\hat q_w^v$ denote the estimates of the position and attitude of the world frame relative to the vision frame (denoted v), and $\hat a$, $\hat\omega$ denote the estimates of the true acceleration and angular velocity;
In the state representations of equations (6) and (8) the attitude is described by a quaternion, but a quaternion is not a minimal parameterisation of the attitude; to avoid the over-parameterisation and the resulting singularity of the covariance matrix, the rotation vector θ is introduced as the minimal attitude description when characterising the state error, so the error state vector is given by equation (10):
where $\Delta p_i^w$, $\Delta v_i^w$, $\delta\theta_i^w$ denote the position error, velocity error and attitude error of the IMU frame at time i relative to the world frame (denoted w), $\Delta b_\omega$, $\Delta b_a$ denote the bias errors of the angular velocity and the acceleration, $\Delta\lambda$ denotes the scale error between the vision system and the real physical system, $\Delta p_c^i$, $\delta\theta_c^i$ denote the position and attitude errors of the camera frame relative to the IMU frame, and $\Delta p_w^v$, $\delta\theta_w^v$ denote the position and attitude errors of the world frame relative to the vision frame (denoted v);
The continuous-time error-state equations of motion are given by equation (11):
where the operator × applied to a vector denotes its skew-symmetric (cross-product) matrix;
Writing equation (11) in general form, the continuous-time error-state equation is:
where $F_c$ is the continuous-time error-state transition matrix, $G_c$ is the noise input matrix and n is the noise vector;
Discretising $F_c$ yields $F_d = \exp(F_c\,\Delta t) = I_d + F_c\,\Delta t + \frac{1}{2!}F_c^2\,\Delta t^2 + \cdots$,
where Δt denotes the discrete sampling interval and $I_d$ denotes the identity matrix;
and the discrete-time noise covariance matrix $Q_d$ is given by equation (13), where
$Q_c$, given by equation (14), is the continuous-time system noise covariance matrix, in which $\sigma_{n_a}^2$, $\sigma_{n_{b_a}}^2$, $\sigma_{n_\omega}^2$, $\sigma_{n_{b_\omega}}^2$ are, in order, the acceleration measurement noise variance, the acceleration bias drift variance, the angular velocity measurement noise variance and the angular velocity bias drift variance;
The prediction steps of the extended Kalman filter are therefore as follows:
1. Propagate the state vector iteratively using the acceleration and angular velocity measurements;
2. Compute the discrete system matrix $F_d$ and the noise covariance matrix $Q_d$;
3. Compute the state covariance matrix from time k-1 to time k according to the Riccati equation (15):
where $P_{k|k-1}$ denotes the process covariance matrix predicted from time k-1 to time k, and $P_{k-1|k-1}$ denotes the process covariance matrix at time k-1.
In step (3), the scale-lacking visual-sensor motion pose estimate output by the visual direct method is used as the measurement, the update is performed, and the aircraft pose estimate is obtained, as follows:
Compute the measurement equation of the extended Kalman filter:
The position error $\tilde z_p$ is given by equation (16):
where $z_p$ denotes the position measurement of the vision system, $\hat z_p$ denotes the position estimate of the vision system, and $n_p$ denotes the noise of the vision-system position measurement;
Linearising equation (16) gives $\tilde z_p = H_p\,\tilde x$ (equation (17)),
where $H_p$ is the linearised measurement matrix;
Similarly, the attitude error $\tilde z_q$ is established (equation (18)):
where $z_q$ denotes the attitude measurement of the vision system, $\hat z_q$ denotes the attitude estimate of the vision system, and ⊗ denotes quaternion multiplication;
Linearising equation (18) gives $\tilde z_q = H_q\,\tilde x$ (equation (19)),
where $H_q$ is the linearised measurement matrix;
Stacking equations (17) and (19) gives the complete measurement equation of the extended Kalman filter:
The update steps of the extended Kalman filter are then computed:
1. Compute the Kalman gain:
$$K = P_{k|k-1}\,H^T\left(H\,P_{k|k-1}\,H^T + R_m\right)^{-1} \qquad (21)$$
where $R_m$ denotes the covariance of the vision-system measurement;
2. Update the state vector according to equation (22):
where $x_{k|k}$ denotes the state after the update at time k, and $x_{k|k-1}$ denotes the prediction from time k-1 to time k;
3. Update the process covariance matrix:
$$P_{k|k} = (I - KH)\,P_{k|k-1}\,(I - KH)^T + K\,R_m\,K^T \qquad (23)$$
where I denotes the identity matrix;
The motion pose estimate of the visual sensor with true metric scale is finally obtained.
Fig. 2 shows the sensor data processing flow with which the present invention realises aircraft positioning. After the algorithm has been initialised it is split into two threads, which process the camera images and the acceleration and angular velocity data of the inertial measurement unit respectively. The image processing thread, after obtaining a camera image, uses the direct method to obtain the scale-lacking camera motion pose estimate; this thread runs stably at 20 Hz. The other thread, after obtaining the acceleration and angular velocity data of the inertial measurement unit IMU, integrates the acceleration and angular velocity according to the equations of motion to predict the aircraft motion pose; this thread runs stably at 100 Hz. Thereafter, whenever the IMU data processing thread running at the higher rate (100 Hz) finds that a new result from the image processing thread running at the lower rate (20 Hz) has been produced, the data are fused by the extended Kalman filter and the aircraft motion pose estimate is finally output. A minimal sketch of this two-rate fusion loop follows.
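The following Python sketch illustrates the two-rate scheme described above: IMU samples at roughly 100 Hz drive the prediction, and vision poses at roughly 20 Hz trigger the extended Kalman filter update. The queue-based interface and the `ekf` object with `predict`, `update` and `pose` methods are illustrative assumptions, not the actual implementation of the invention.

```python
import queue

imu_queue = queue.Queue()     # (timestamp, acc, gyro) samples at ~100 Hz
vision_queue = queue.Queue()  # (timestamp, R, t_scaleless) poses at ~20 Hz

def fusion_loop(ekf):
    """Run a prediction on every IMU sample and an update whenever a vision pose is ready."""
    last_t = None
    while True:
        t, acc, gyro = imu_queue.get()              # blocks until the next IMU sample
        if last_t is not None:
            ekf.predict(acc, gyro, dt=t - last_t)   # 100 Hz extended Kalman filter prediction
        last_t = t
        try:
            tv, R, t_vis = vision_queue.get_nowait()  # a 20 Hz vision result is available
        except queue.Empty:
            continue
        ekf.update(R, t_vis)                        # fuse the scale-lacking vision pose
        print("fused pose:", ekf.pose())            # aircraft motion pose estimate
```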

Claims (4)

1. An aircraft pose estimation method based on fusion of a direct method and inertial navigation, characterised in that it comprises the following steps:
(1) Measure the real-time acceleration and angular velocity of the aircraft with an accelerometer and a gyroscope respectively, and determine the absolute yaw angle with a magnetometer, thereby obtaining the real-time attitude of the aircraft; the real-time attitude comprises the pitch angle, roll angle and yaw angle of the aircraft;
(2) Using a visual sensor, estimate with a direct method the motion pose of the visual sensor up to an unknown scale; the motion pose obtained in this step comprises the rotation of the visual sensor relative to a reference frame and its translation in the reference frame, which lacks metric scale;
(3) Establish the equations of motion and construct an extended Kalman filter; use the acceleration and angular velocity measurements for the extended Kalman filter prediction, use the scale-lacking motion pose estimate of the visual sensor output by the visual direct method as the measurement, perform the update, and obtain the aircraft pose estimate.
2. The aircraft positioning method based on fusion of a direct method and inertial navigation according to claim 1, characterised in that in step (2) the direct method obtains the scale-lacking motion pose estimate of the visual sensor as follows:
Establish the total photometric error $E_{photo}$:
where $\mathcal{F}$ denotes the set of image frames in the sliding window, p denotes the coordinates of a feature point in $\mathcal{P}_i$, $\mathcal{P}_i$ denotes the set of all feature point coordinates in the i-th frame, and obs(p) denotes the set of all observations of feature point p;
where $E_{pj}$ denotes the reprojection photometric error of a single feature point, $N_p$ denotes the patch around the feature point, $w_p$ denotes a weight, $I_i$ and $I_j$ denote the i-th and j-th frames respectively, and p' denotes the coordinate of point p of the i-th frame reprojected into the j-th frame; $a_i$, $b_i$ denote the photometric parameters of the i-th frame, $a_j$, $b_j$ denote the photometric parameters of the j-th frame, $t_i$ and $t_j$ denote the exposure times of the i-th and j-th frames, and $\|\cdot\|_\gamma$ denotes the Huber norm;
The coordinate p' of point p in the i-th frame reprojected into the j-th frame is:
$$p' = \pi_c\!\left(R\,\pi_c^{-1}(p,\,d_p) + t\right) \qquad (3)$$
where $\pi_c$ is the 3D-to-2D projection model, here a pinhole camera model, whose intrinsic matrix is:
$$K=\begin{bmatrix} f_x & 0 & c_x\\ 0 & f_y & c_y\\ 0 & 0 & 1\end{bmatrix},\qquad K^{-1}=\begin{bmatrix} f_x^{-1} & 0 & -f_x^{-1}c_x\\ 0 & f_y^{-1} & -f_y^{-1}c_y\\ 0 & 0 & 1\end{bmatrix} \qquad (4)$$
R denotes the rotation from the i-th frame to the j-th frame, t the translation from the i-th frame to the j-th frame, and $d_p$ the inverse depth of point p; in the intrinsic matrix K, $f_x$, $f_y$, $c_x$, $c_y$ denote the focal lengths of the camera along the x and y axes and the x and y coordinates of the principal point, respectively;
Optimising the relative camera pose R and t in equation (3) so that the total photometric error $E_{photo}$ is minimised recovers the camera motion pose estimate R and t.
3. The aircraft pose estimation method based on fusion of a direct method and inertial navigation according to claim 2, characterised in that in step (3) the equations of motion are established, the extended Kalman filter is constructed, and the acceleration and angular velocity measurements are used for the extended Kalman filter prediction, as follows:
Construct the state vector x of the extended Kalman filter:
$$x = \left\{\; p_i^{w\,T}\;\; v_i^{w\,T}\;\; q_i^{w\,T}\;\; b_\omega^{\,T}\;\; b_a^{\,T}\;\; \lambda\;\; p_c^{i\,T}\;\; q_c^{i\,T}\;\; p_w^{v\,T}\;\; q_w^{v\,T} \;\right\} \qquad (5)$$
where $p_i^w$, $v_i^w$, $q_i^w$ denote the position, velocity and attitude (expressed as a quaternion) of the IMU frame at time i relative to the world frame (denoted w), $b_\omega$, $b_a$ denote the biases of the angular velocity and the acceleration, $\lambda$ denotes the scale between the vision system and the real physical system, $p_c^i$, $q_c^i$ denote the position and attitude of the camera frame relative to the IMU frame, and $p_w^v$, $q_w^v$ denote the position and attitude of the world frame relative to the vision frame (denoted v); the kinematic equations of the true state are then:
$$\dot p_i^w = v_i^w,\qquad \dot v_i^w = C(q_i^w)\,a - g,\qquad \dot q_i^w = \tfrac{1}{2}\,\Omega(\omega)\,q_i^w,$$
$$\dot b_\omega = n_{b_\omega},\qquad \dot b_a = n_{b_a},\qquad \dot\lambda = 0,\qquad \dot p_c^i = 0,\qquad \dot q_c^i = 0,\qquad \dot p_w^v = 0,\qquad \dot q_w^v = 0 \qquad (6)$$
$$a = a_m - b_a - n_a,\qquad \omega = \omega_m - b_\omega - n_\omega \qquad (7)$$
where C(·) denotes the transformation from a quaternion to a rotation matrix, Ω(·) denotes the mapping from an angular velocity to the quaternion derivative, $n_{b_\omega}$ denotes the drift noise of the angular velocity bias, $n_{b_a}$ denotes the drift noise of the acceleration bias, g denotes the gravitational acceleration, a denotes the true value of the acceleration, $a_m$ denotes the acceleration output of the inertial measurement unit, $n_a$ denotes the acceleration measurement noise, $\omega$ denotes the true value of the angular velocity, $\omega_m$ denotes the angular velocity output of the inertial measurement unit, and $n_\omega$ denotes the angular velocity measurement noise;
Because of the presence of noise and disturbances, the true values of the states cannot be obtained in practice, and these states must be estimated. Treating the accelerometer and gyroscope measurements as free of noise and disturbance, the true-state equations are rewritten as:
$$\dot{\hat p}_i^w = \hat v_i^w,\qquad \dot{\hat v}_i^w = C(\hat q_i^w)\,\hat a - g,\qquad \dot{\hat q}_i^w = \tfrac{1}{2}\,\Omega(\hat\omega)\,\hat q_i^w,$$
$$\dot{\hat b}_\omega = 0,\qquad \dot{\hat b}_a = 0,\qquad \dot{\hat\lambda} = 0,\qquad \dot{\hat p}_c^i = 0,\qquad \dot{\hat q}_c^i = 0,\qquad \dot{\hat p}_w^v = 0,\qquad \dot{\hat q}_w^v = 0 \qquad (8)$$
$$\hat a = a_m - \hat b_a,\qquad \hat\omega = \omega_m - \hat b_\omega \qquad (9)$$
where $\hat p_i^w$, $\hat v_i^w$, $\hat q_i^w$ denote the estimates of the position, velocity and attitude (expressed as a quaternion) of the IMU frame at time i relative to the world frame (denoted w), $\hat b_\omega$, $\hat b_a$ denote the estimates of the angular velocity and acceleration biases, $\hat\lambda$ denotes the estimate of the scale between the vision system and the real physical system, $\hat p_c^i$, $\hat q_c^i$ denote the estimates of the position and attitude of the camera frame relative to the IMU frame, $\hat p_w^v$, $\hat q_w^v$ denote the estimates of the position and attitude of the world frame relative to the vision frame (denoted v), and $\hat a$, $\hat\omega$ denote the estimates of the true acceleration and angular velocity;
In the state representations of equations (6) and (8) the attitude is described by a quaternion, but a quaternion is not a minimal parameterisation of the attitude; to avoid the over-parameterisation and the resulting singularity of the covariance matrix, the rotation vector θ is introduced as the minimal attitude description when characterising the state error, so the error state vector is:
$$\tilde x = \left\{\; \Delta p_i^{w\,T}\;\; \Delta v_i^{w\,T}\;\; \delta\theta_i^{w\,T}\;\; \Delta b_\omega^{\,T}\;\; \Delta b_a^{\,T}\;\; \Delta\lambda\;\; \Delta p_c^{i\,T}\;\; \delta\theta_c^{i\,T}\;\; \Delta p_w^{v\,T}\;\; \delta\theta_w^{v\,T} \;\right\} \qquad (10)$$
where $\Delta p_i^w$, $\Delta v_i^w$, $\delta\theta_i^w$ denote the position error, velocity error and attitude error of the IMU frame at time i relative to the world frame (denoted w), $\Delta b_\omega$, $\Delta b_a$ denote the bias errors of the angular velocity and the acceleration, $\Delta\lambda$ denotes the scale error between the vision system and the real physical system, $\Delta p_c^i$, $\delta\theta_c^i$ denote the position and attitude errors of the camera frame relative to the IMU frame, and $\Delta p_w^v$, $\delta\theta_w^v$ denote the position and attitude errors of the world frame relative to the vision frame (denoted v);
The continuous-time error-state equations of motion are:
$$\Delta\dot p_i^w = \Delta v_i^w,\qquad \delta\dot\theta_i^w = -\hat\omega\times\delta\theta_i^w - \Delta b_\omega - n_\omega,$$
$$\Delta\dot b_\omega = n_{b_\omega},\qquad \Delta\dot b_a = n_{b_a},\qquad \Delta\dot\lambda = 0,\qquad \Delta\dot p_c^i = 0,\qquad \delta\dot\theta_c^i = 0,\qquad \Delta\dot p_w^v = 0,\qquad \delta\dot\theta_w^v = 0 \qquad (11)$$
where $\hat\omega\times$ denotes the skew-symmetric (cross-product) matrix of the vector $\hat\omega$;
Writing equation (11) in general form, the continuous-time error-state equation is:
$$\dot{\tilde x} = F_c\,\tilde x + G_c\,n \qquad (12)$$
where $F_c$ is the continuous-time error-state transition matrix, $G_c$ is the noise input matrix and n is the noise vector; discretising $F_c$ yields $F_d$:
$$F_d = \exp(F_c\,\Delta t) = I_d + F_c\,\Delta t + \frac{1}{2!}F_c^2\,\Delta t^2 + \cdots$$
where Δt denotes the discrete sampling interval and $I_d$ denotes the identity matrix;
and the discrete-time noise covariance matrix $Q_d$ is:
$$Q_d = \int_{\Delta t} F_d(\tau)\,G_c\,Q_c\,G_c^T\,F_d(\tau)^T\,d\tau \qquad (13)$$
where:
$$Q_c = \begin{bmatrix} \sigma_{n_a}^2 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3}\\ 0_{3\times3} & \sigma_{n_{b_a}}^2 & 0_{3\times3} & 0_{3\times3}\\ 0_{3\times3} & 0_{3\times3} & \sigma_{n_\omega}^2 & 0_{3\times3}\\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & \sigma_{n_{b_\omega}}^2 \end{bmatrix} \qquad (14)$$
$Q_c$ is the continuous-time system noise covariance matrix, in which $\sigma_{n_a}^2$, $\sigma_{n_{b_a}}^2$, $\sigma_{n_\omega}^2$, $\sigma_{n_{b_\omega}}^2$ are, in order, the acceleration measurement noise variance, the acceleration bias drift variance, the angular velocity measurement noise variance and the angular velocity bias drift variance;
The prediction steps of the extended Kalman filter are therefore as follows:
1. Propagate the state vector iteratively using the acceleration and angular velocity measurements;
2. Compute the discrete system matrix $F_d$ and the noise covariance matrix $Q_d$;
3. Compute the state covariance matrix from time k-1 to time k according to the Riccati equation:
$$P_{k|k-1} = F_d\,P_{k-1|k-1}\,F_d^T + Q_d \qquad (15)$$
where $P_{k|k-1}$ denotes the process covariance matrix predicted from time k-1 to time k, and $P_{k-1|k-1}$ denotes the process covariance matrix at time k-1.
4. The aircraft pose estimation method based on fusion of a direct method and inertial navigation according to claim 3, characterised in that in step (3) the scale-lacking visual-sensor motion pose estimate output by the visual direct method is used as the measurement, the update is performed, and the aircraft pose estimate is obtained, as follows:
Compute the measurement equation of the extended Kalman filter:
The position error $\tilde z_p$ is:
$$\tilde z_p = z_p - \hat z_p = C(q_w^v)\left(p_i^w + C(q_i^w)\,p_c^i\right)\lambda + n_p - C(\hat q_w^v)\left(\hat p_i^w + C(\hat q_i^w)\,\hat p_c^i\right)\hat\lambda \qquad (16)$$
where $z_p$ denotes the position measurement of the vision system, $\hat z_p$ denotes the position estimate of the vision system, and $n_p$ denotes the noise of the vision-system position measurement;
Linearising equation (16) gives:
$$\tilde z_p = H_p\,\tilde x \qquad (17)$$
where $H_p$ is the linearised measurement matrix;
Similarly, the attitude error $\tilde z_q$ is established:
$$\tilde z_q = z_q - \hat z_q = q_w^v \otimes q_i^w \otimes q_c^i \otimes \left(\hat q_w^v \otimes \hat q_i^w \otimes \hat q_c^i\right)^{-1} \qquad (18)$$
where $z_q$ denotes the attitude measurement of the vision system, $\hat z_q$ denotes the attitude estimate of the vision system, and ⊗ denotes quaternion multiplication;
Linearising equation (18) gives:
$$\tilde z_q = H_q\,\tilde x \qquad (19)$$
where $H_q$ is the linearised measurement matrix;
Stacking equations (17) and (19) gives the complete measurement equation of the extended Kalman filter:
Compute the update step of the extended Kalman filter:
1. Compute the Kalman gain:
$$K = P_{k|k-1}H^T\left(HP_{k|k-1}H^T + R_m\right)^{-1} \qquad (21)$$
where $R_m$ is the covariance of the vision-system measurement noise;
2. Update the state:
$$x_{k|k} = x_{k|k-1} + K\tilde{z} \qquad (22)$$
where $x_{k|k}$ is the updated state at time $k$ and $x_{k|k-1}$ is the state predicted from time $k-1$ to time $k$;
3. Update the process covariance matrix:
$$P_{k|k} = (I - KH)\,P_{k|k-1}\,(I - KH)^T + KR_mK^T \qquad (23)$$
where $I$ is the identity matrix;
This finally yields the motion pose estimate of the visual sensor with true metric scale (illustrative numerical sketches of the residual construction and of the filter update follow this claim).
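The following is a minimal numerical sketch, not part of the claims, of how the measurement residuals in formulas (16) and (18) could be formed. It assumes NumPy and Hamilton-convention unit quaternions stored as [w, x, y, z]; the helper names (quat_to_rot, quat_mul, position_residual, attitude_residual) and all argument names are illustrative assumptions, not identifiers from the patent.

import numpy as np

def quat_to_rot(q):
    # Rotation matrix C(q) of a unit quaternion q = [w, x, y, z].
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def quat_mul(a, b):
    # Hamilton product a (x) b, used for the quaternion chain in formula (18).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def quat_inv(q):
    # Inverse of a unit quaternion (its conjugate).
    return np.array([q[0], -q[1], -q[2], -q[3]])

def position_residual(z_p, q_wv, p_iw, q_iw, p_ci, lam):
    # Formula (16): measured camera position minus the position predicted from
    # the current state estimate, scaled by the estimated visual scale lam.
    z_p_pred = quat_to_rot(q_wv) @ (p_iw + quat_to_rot(q_iw) @ p_ci) * lam
    return z_p - z_p_pred

def attitude_residual(z_q, q_wv, q_iw, q_ci):
    # Formula (18): error quaternion between the measured camera attitude and
    # the attitude predicted by the chain q_wv (x) q_iw (x) q_ci.
    z_q_pred = quat_mul(quat_mul(q_wv, q_iw), q_ci)
    return quat_mul(z_q, quat_inv(z_q_pred))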
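A similarly hedged sketch of the update step in formulas (21)-(23), again assuming NumPy; the stacked residual z_tilde, the stacked measurement matrix H, the predicted state x_pred, the predicted covariance P_pred and the measurement covariance R_m are taken as given, and the function name ekf_update is illustrative.

import numpy as np

def ekf_update(x_pred, P_pred, z_tilde, H, R_m):
    # Formula (21): Kalman gain from the predicted covariance and the
    # innovation covariance S = H P H^T + R_m.
    S = H @ P_pred @ H.T + R_m
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Formula (22): correct the predicted state with the weighted residual.
    x_upd = x_pred + K @ z_tilde
    # Formula (23): Joseph-form covariance update, which keeps P symmetric
    # and positive semi-definite despite rounding errors.
    I_KH = np.eye(P_pred.shape[0]) - K @ H
    P_upd = I_KH @ P_pred @ I_KH.T + K @ R_m @ K.T
    return x_upd, P_upd

The Joseph form in formula (23) is slightly more expensive than the plain (I - KH)P form but is numerically safer, which matters when the update runs at camera rate on an embedded autopilot.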
CN201711190702.0A 2017-11-24 2017-11-24 A kind of aircraft position and orientation estimation method based on direct method and inertial navigation fusion Pending CN108036785A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711190702.0A CN108036785A (en) 2017-11-24 2017-11-24 A kind of aircraft position and orientation estimation method based on direct method and inertial navigation fusion

Publications (1)

Publication Number Publication Date
CN108036785A true CN108036785A (en) 2018-05-15

Family

ID=62093066

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711190702.0A Pending CN108036785A (en) 2017-11-24 2017-11-24 A kind of aircraft position and orientation estimation method based on direct method and inertial navigation fusion

Country Status (1)

Country Link
CN (1) CN108036785A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102538781B (en) * 2011-12-14 2014-12-17 浙江大学 Machine vision and inertial navigation fusion-based mobile robot motion attitude estimation method
US20160163114A1 (en) * 2014-12-05 2016-06-09 Stmicroelectronics S.R.L. Absolute rotation estimation including outlier detection via low-rank and sparse matrix decomposition
CN105973238A (en) * 2016-05-09 2016-09-28 郑州轻工业学院 Spacecraft attitude estimation method based on norm-constrained cubature Kalman filter
CN106017463A (en) * 2016-05-26 2016-10-12 浙江大学 Aircraft positioning method based on positioning and sensing device
CN107341814A (en) * 2017-06-14 2017-11-10 宁波大学 Monocular vision ranging method for quadrotor unmanned aerial vehicles based on the sparse direct method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JAKOB ENGEL et al.: "Direct Sparse Odometry", IEEE Transactions on Pattern Analysis and Machine Intelligence *
YE BO: "Research on an Odometry Method Fusing Monocular Vision and Inertial Sensing on a Quadrotor Platform", China Master's Theses Full-text Database, Information Science and Technology Series *
RU XIANGYU et al.: "Application of Monocular Visual-Inertial Fusion to UAV Pose Estimation", Control and Information Technology *

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109099912B (en) * 2017-08-11 2022-05-10 黄润芳 Outdoor accurate positioning navigation method and device, electronic equipment and storage medium
CN109099912A (en) * 2017-08-11 2018-12-28 黄润芳 Outdoor accurate positioning navigation method, device, electronic equipment and storage medium
CN108648215A (en) * 2018-06-22 2018-10-12 南京邮电大学 SLAM motion blur posture tracking algorithms based on IMU
CN108648215B (en) * 2018-06-22 2022-04-15 南京邮电大学 SLAM motion blur pose tracking algorithm based on IMU
CN108921898A (en) * 2018-06-28 2018-11-30 北京旷视科技有限公司 Pose of camera determines method, apparatus, electronic equipment and computer-readable medium
CN108921898B (en) * 2018-06-28 2021-08-10 北京旷视科技有限公司 Camera pose determination method and device, electronic equipment and computer readable medium
CN108956003B (en) * 2018-07-17 2020-10-20 崧智智能科技(苏州)有限公司 Method and device for calibrating six-dimensional sensor posture in real time and terminal equipment
CN108956003A (en) * 2018-07-17 2018-12-07 杭州崧智智能科技有限公司 A kind of method, apparatus and terminal device of real-time calibration 6 DOF sensor attitude
WO2020024182A1 (en) * 2018-08-01 2020-02-06 深圳市大疆创新科技有限公司 Parameter processing method and apparatus, camera device and aircraft
CN109211241A (en) * 2018-09-08 2019-01-15 天津大学 Unmanned aerial vehicle autonomous positioning method based on visual SLAM
CN109211241B (en) * 2018-09-08 2022-04-29 天津大学 Unmanned aerial vehicle autonomous positioning method based on visual SLAM
CN109520497B (en) * 2018-10-19 2022-09-30 天津大学 Unmanned aerial vehicle autonomous positioning method based on vision and imu
CN109520497A (en) * 2018-10-19 2019-03-26 天津大学 Unmanned aerial vehicle autonomous positioning method based on vision and imu
CN109544638A (en) * 2018-10-29 2019-03-29 浙江工业大学 A kind of asynchronous online calibration method for Multi-sensor Fusion
CN109544638B (en) * 2018-10-29 2021-08-03 浙江工业大学 Asynchronous online calibration method for multi-sensor fusion
CN109269511A (en) * 2018-11-06 2019-01-25 北京理工大学 Curve matching visual navigation method for planetary landing in unknown environments
CN111383282A (en) * 2018-12-29 2020-07-07 杭州海康威视数字技术股份有限公司 Pose information determination method and device
CN111383282B (en) * 2018-12-29 2023-12-01 杭州海康威视数字技术股份有限公司 Pose information determining method and device
CN109631875A (en) * 2019-01-11 2019-04-16 京东方科技集团股份有限公司 Method and system for optimizing a sensor attitude fusion measurement method
CN109916399B (en) * 2019-04-04 2019-12-24 中国人民解放军火箭军工程大学 Carrier attitude estimation method under shadow
CN109916399A (en) * 2019-04-04 2019-06-21 中国人民解放军火箭军工程大学 A kind of attitude of carrier estimation method under shade
CN110160522A (en) * 2019-04-16 2019-08-23 浙江大学 A kind of position and orientation estimation method of the vision inertial navigation odometer based on sparse features method
CN110260861A (en) * 2019-06-13 2019-09-20 北京华捷艾米科技有限公司 Pose determines method and device, odometer
CN110260861B (en) * 2019-06-13 2021-07-27 北京华捷艾米科技有限公司 Pose determination method and device and odometer
CN110850817A (en) * 2019-10-18 2020-02-28 杭州电子科技大学 Safety estimation method of networked industrial control system
CN110702107A (en) * 2019-10-22 2020-01-17 北京维盛泰科科技有限公司 Monocular vision inertial combination positioning navigation method
CN110929402A (en) * 2019-11-22 2020-03-27 哈尔滨工业大学 Probabilistic terrain estimation method based on uncertain analysis
CN113065572A (en) * 2019-12-31 2021-07-02 北京凌宇智控科技有限公司 Multi-sensor fusion data processing method, positioning device and virtual reality equipment
CN113065572B (en) * 2019-12-31 2023-09-08 北京凌宇智控科技有限公司 Multi-sensor fusion data processing method, positioning device and virtual reality equipment
CN111398522B (en) * 2020-03-24 2022-02-22 山东智翼航空科技有限公司 Indoor air quality detection system and detection method based on micro unmanned aerial vehicle
CN111398522A (en) * 2020-03-24 2020-07-10 山东智翼航空科技有限公司 Indoor air quality detection system and detection method based on micro unmanned aerial vehicle
CN111367319B (en) * 2020-05-06 2021-01-08 仿翼(北京)科技有限公司 Aircraft, control method for aircraft, and computer-readable storage medium
CN111367319A (en) * 2020-05-06 2020-07-03 仿翼(北京)科技有限公司 Aircraft, control method for aircraft, and computer-readable storage medium
CN112161639A (en) * 2020-07-29 2021-01-01 河海大学 Vertical binocular inertial navigation odometer based on angular optical flow method and calculation method thereof
CN112504261A (en) * 2020-11-09 2021-03-16 中国人民解放军国防科技大学 Unmanned aerial vehicle landing pose filtering estimation method and system based on visual anchor point
CN112504261B (en) * 2020-11-09 2024-02-09 中国人民解放军国防科技大学 Unmanned aerial vehicle falling pose filtering estimation method and system based on visual anchor points
CN112767481A (en) * 2021-01-21 2021-05-07 山东大学 High-precision positioning and mapping method based on visual edge features
CN114419109A (en) * 2022-03-29 2022-04-29 中航金城无人系统有限公司 Aircraft positioning method based on visual and barometric information fusion
CN117405109A (en) * 2023-12-15 2024-01-16 北京神导科技股份有限公司 Three-set inertial navigation system attitude voting method based on quaternion spherical linear weighting
CN117405109B (en) * 2023-12-15 2024-04-16 北京神导科技股份有限公司 Three-set inertial navigation system attitude voting method based on quaternion spherical linear weighting

Similar Documents

Publication Publication Date Title
CN108036785A (en) A kind of aircraft position and orientation estimation method based on direct method and inertial navigation fusion
CN110030994B (en) Monocular-based robust visual inertia tight coupling positioning method
CN101726295B (en) Unscented Kalman filter-based method for tracking inertial pose according to acceleration compensation
CN106289246B (en) A kind of flexible link arm measure method based on position and orientation measurement system
WO2021180128A1 (en) Rgbd sensor and imu sensor-based positioning method
CN106979780B (en) A kind of unmanned vehicle real-time attitude measurement method
CN109991636A (en) Map constructing method and system based on GPS, IMU and binocular vision
CN108592950B (en) Calibration method for relative installation angle of monocular camera and inertial measurement unit
CN110398245B (en) Indoor pedestrian navigation attitude estimation method based on foot-worn inertial measurement unit
CN105953796A (en) Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone
CN105698765A (en) Method using combination of double IMUs (inertial measurement units) and monocular vision to measure pose of target object under non-inertial system
CN107490378B (en) Indoor positioning and navigation method based on MPU6050 and smart phone
Omari et al. Metric visual-inertial navigation system using single optical flow feature
CN103940442A (en) Location method and device adopting accelerating convergence algorithm
CN107728182A (en) Flexible multi-baseline measurement method and apparatus based on camera assistance
CN108981693A (en) Fast VIO joint initialization method based on monocular camera
CN108917772A (en) Noncooperative target Relative Navigation method for estimating based on sequence image
CN103175502A (en) Attitude angle detecting method based on low-speed movement of data glove
CN112815939A (en) Pose estimation method for mobile robot and computer-readable storage medium
CN110702113B (en) Method for preprocessing data and calculating attitude of strapdown inertial navigation system based on MEMS sensor
CN110160522A (en) A kind of position and orientation estimation method of the vision inertial navigation odometer based on sparse features method
CN103712598A (en) Attitude determination system and method of small unmanned aerial vehicle
CN109764870B (en) Carrier initial course estimation method based on transformation estimation modeling scheme
CN106153069A (en) Attitude rectification apparatus and method in autonomous navigation system
Amirsadri et al. Practical considerations in precise calibration of a low-cost MEMS IMU for road-mapping applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180515