CN106197432A - UAV landing method based on the FastSLAM algorithm - Google Patents

UAV landing method based on the FastSLAM algorithm

Info

Publication number
CN106197432A
CN106197432A (application CN201610766065.6A)
Authority
CN
China
Prior art keywords
delta
landmark
theta
UAV
psi
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610766065.6A
Other languages
Chinese (zh)
Other versions
CN106197432B (en)
Inventor
Wang Yangzhu (王养柱)
Zhang Yelin (张烨林)
Han Zhen (韩震)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN201610766065.6A
Publication of CN106197432A
Application granted
Publication of CN106197432B
Status: Expired - Fee Related
Anticipated expiration

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10: Simultaneous control of position or course in three dimensions
    • G05D1/101: Simultaneous control of position or course in three dimensions specially adapted for aircraft

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a UAV landing method based on the FastSLAM algorithm, belonging to the field of navigation technology. The method comprises the following steps: Step 1, establish the system model of the UAV landing phase; Step 2, design the FastSLAM algorithm for the UAV landing phase; Step 3, obtain velocity estimates with an arctangent tracking differentiator. The invention applies the FastSLAM algorithm, using a particle filter to estimate the UAV path and extended Kalman filters to estimate the environmental features, building an environment map, updating the UAV's estimate of its own position, and realizing autonomous navigation. The method offers high navigation accuracy and satisfies the positioning-accuracy and real-time requirements of the UAV landing phase.

Description

UAV landing method based on the FastSLAM algorithm
Technical field
The present invention relates to a UAV landing method based on the FastSLAM algorithm and belongs to the field of navigation technology.
Background technology
Landing is one of the most critical and failure-prone stages of a UAV mission: handling is complicated, ground interference is abundant, and accidents are frequent, so a high-precision navigation and guidance system is an important guarantee of safe landing. The landing guidance systems in common use are the microwave landing system and the instrument landing system. Both meet the accuracy and performance requirements of aircraft landing, but they are expensive and unsuitable for low-cost UAVs that must relocate frequently.
The navigation technologies currently used for UAV landing mainly include inertial navigation systems, GPS, integrated INS/GPS, and computer vision. Inertial navigation is autonomous and provides multiple navigation parameters, but its error diverges over time, so relying on it alone during landing introduces large navigation errors. GPS is accurate and simple to use, but interference during the landing process can severely degrade its accuracy. Computer vision offers a wide field of view, non-contact sensing and rich information, but image processing demands large amounts of computation and storage, so real-time performance during landing is hard to guarantee.
Summary of the invention
The purpose of the invention is to solve the problems of the prior art by providing a UAV landing method based on the FastSLAM algorithm. The method applies FastSLAM to UAV landing, using a particle filter and extended Kalman filters to estimate the UAV path and the environmental landmarks. It comprises the following steps:
Step 1, establish the system model of the UAV landing phase.
Step 2, design the FastSLAM algorithm for the UAV landing phase.
Step 3, obtain velocity estimates with an arctangent tracking differentiator.
The advantages of the invention are:
(1) A particle filter and extended Kalman filters are used to estimate the UAV path and the environmental landmarks respectively, building an environment map, updating the UAV's estimate of its own position, and realizing autonomous navigation.
(2) Navigation accuracy is high and satisfies the accuracy requirements of the UAV landing phase.
Brief description of the drawings
Fig. 1 is the flow chart of the UAV landing method based on the FastSLAM algorithm.
Fig. 2 shows the environmental landmarks and the landing path in the embodiment.
Fig. 3 shows the three-axis position estimation error curves for different particle counts (simulation time 95 s).
Fig. 4 shows the three-axis velocity estimation error curves for different particle counts (simulation time 95 s).
Fig. 5 compares the flight-path-angle curves for different particle counts (simulation time 95 s).
Fig. 6 shows the three-axis position estimation error curves for different landmark counts (simulation time 95 s).
Fig. 7 shows the trace of the landmark covariance matrix with 18 particles and 134 landmarks (simulation time 95 s).
Detailed description of the invention
The present invention is described in further detail below with reference to the drawings and the embodiment.
The invention provides a UAV landing method based on the FastSLAM algorithm; the flow is shown in Fig. 1 and comprises the following steps:
Step 1: establish the system model of the UAV landing phase in the East-North-Up geographic coordinate system. The system model comprises the motion model and the observation model of the UAV.
The UAV state vector is x = [x, y, z, ψ, θ]^T, where x, y, z are the three position coordinates of the UAV in the ground coordinate system oxyz (o is a chosen point on the ground, and the x, y, z axes point east, north and up respectively), ψ is the angle between the projection of the UAV velocity vector onto the xoy plane and the x axis, and θ is the angle between the UAV velocity vector and the xoy plane; ψ and θ are called the velocity azimuth angles. Some environmental landmarks are distributed around the landing trajectory, and the airborne laser radar measures the range and bearing between the UAV and each landmark. The motion model and observation model of the UAV can be expressed as nonlinear equations with Gaussian noise:
$$\dot{x} = f(x(t), u(t), t) + \delta_t \qquad (1)$$
$$z = h(x(t), m(t), t) + \varepsilon_t \qquad (2)$$
where t is time, x(t) is the UAV state vector at time t, $\dot{x}$ is its derivative, z is the UAV's observation of an environmental landmark, δ_t is the Gaussian noise of the motion model with covariance matrix Q_t, ε_t is the observation noise with covariance matrix R_t, u(t) = v is the UAV speed at time t provided by the airborne INS, and m(t) is the landmark position vector in the ground coordinate system (landmark positions are fixed and do not change with time). After discretization the motion model takes the following form:
$$\begin{bmatrix} x(l+1) \\ y(l+1) \\ z(l+1) \\ \psi(l+1) \\ \theta(l+1) \end{bmatrix} = \begin{bmatrix} x(l) + v\,\Delta t\,\cos(\theta(l)+\Delta\theta)\,\sin(\psi(l)+\Delta\psi) \\ y(l) + v\,\Delta t\,\cos(\theta(l)+\Delta\theta)\,\cos(\psi(l)+\Delta\psi) \\ z(l) + v\,\Delta t\,\sin(\theta(l)+\Delta\theta) \\ \psi(l)+\Delta\psi \\ \theta(l)+\Delta\theta \end{bmatrix} + \begin{bmatrix} \delta_{t1} \\ \delta_{t2} \\ \delta_{t3} \\ \delta_{t4} \\ \delta_{t5} \end{bmatrix} \qquad (3)$$
Here l indexes the l-th sampling step, Δt is the sampling interval, Δθ and Δψ are the increments of θ and ψ between adjacent sampling steps, δ_t1, ..., δ_t5 are the Gaussian noises of the states x, y, z, ψ and θ respectively, and v is the current UAV speed from the airborne INS. Several waypoints are selected on the landing trajectory; starting from the first waypoint as the target, the distance between the current UAV position and the target waypoint is computed, and if it is below a set threshold R_min the target switches to the next waypoint. The velocity azimuth increments Δψ, Δθ are computed as:

$$\Delta\psi = \arctan\left(\frac{wp_y - y(l)}{wp_x - x(l)}\right) - \psi(l) \qquad (4)$$

$$\Delta\theta = \arctan\left(\frac{wp_z - z(l)}{\sqrt{(wp_x - x(l))^2 + (wp_y - y(l))^2}}\right) - \theta(l) \qquad (5)$$

where wp_x, wp_y, wp_z are the coordinates of the target waypoint in the ground coordinate system and x(l), y(l), z(l), ψ(l), θ(l) are the UAV states at sampling step l.
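As a minimal illustration (our own sketch, not part of the patent), the discretized motion model (3) and the waypoint-switching rule with increments (4)-(5) can be written in Python; all function names are our own, and `atan2` replaces the bare arctangent for quadrant safety:

```python
import math

def motion_step(state, v, dpsi, dtheta, dt):
    # One noise-free step of the discretized motion model (Eq. 3).
    # state = (x, y, z, psi, theta) in the ground frame oxyz.
    x, y, z, psi, theta = state
    return (x + v * dt * math.cos(theta + dtheta) * math.sin(psi + dpsi),
            y + v * dt * math.cos(theta + dtheta) * math.cos(psi + dpsi),
            z + v * dt * math.sin(theta + dtheta),
            psi + dpsi,
            theta + dtheta)

def azimuth_increments(state, wp):
    # Velocity-azimuth increments toward waypoint wp (Eqs. 4-5).
    x, y, z, psi, theta = state
    wpx, wpy, wpz = wp
    dpsi = math.atan2(wpy - y, wpx - x) - psi
    dtheta = math.atan2(wpz - z, math.hypot(wpx - x, wpy - y)) - theta
    return dpsi, dtheta

def switch_waypoint(state, waypoints, target_idx, r_min):
    # Advance to the next waypoint once the current one is within r_min.
    x, y, z = state[:3]
    wx, wy, wz = waypoints[target_idx]
    dist = math.sqrt((wx - x)**2 + (wy - y)**2 + (wz - z)**2)
    if dist < r_min and target_idx + 1 < len(waypoints):
        return target_idx + 1
    return target_idx
```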
The UAV observation of a landmark is the vector z = [r, α, β]^T, with the concrete form:

$$\begin{bmatrix} r \\ \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} \sqrt{(x-x_m)^2 + (y-y_m)^2 + (z-z_m)^2} \\ \arctan\left(\dfrac{z-z_m}{\sqrt{(x-x_m)^2+(y-y_m)^2}}\right) - \theta \\ \arctan\left(\dfrac{y-y_m}{x-x_m}\right) - \psi \end{bmatrix} \qquad (6)$$

where r is the range between the UAV and the landmark, α is the angle between the UAV-to-landmark vector and the horizontal plane minus the angle θ of the current velocity vector, β is the angle between the projection of the UAV-to-landmark vector onto the xoy plane and the x axis minus the velocity azimuth ψ, x, y, z are the current UAV position coordinates, and x_m, y_m, z_m are the landmark position coordinates.
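The observation model (6) can be sketched as follows (our own illustration; `atan2` again stands in for the arctangent):

```python
import math

def observe_landmark(uav, landmark):
    # Observation z = [r, alpha, beta] of one landmark (Eq. 6).
    # uav = (x, y, z, psi, theta); landmark = (xm, ym, zm).
    x, y, z, psi, theta = uav
    xm, ym, zm = landmark
    dx, dy, dz = x - xm, y - ym, z - zm
    horiz = math.hypot(dx, dy)
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    alpha = math.atan2(dz, horiz) - theta   # elevation angle minus theta
    beta = math.atan2(dy, dx) - psi         # azimuth angle minus psi
    return r, alpha, beta
```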
Step 2: design the FastSLAM algorithm for the UAV landing phase.
It comprises the following steps:
1) UAV state update.
N particles are used to estimate the UAV state. The particle weights are initialized to 1/N, the waypoints and landmarks are initialized, and the initial state vector is x_0 = [x_0, y_0, z_0, ψ_0, θ_0]^T, whose elements are the initial UAV position and the azimuth angles of the initial velocity.
The ideal state at the next sampling step is predicted as:
$$\begin{bmatrix} x(l+1) \\ y(l+1) \\ z(l+1) \\ \psi(l+1) \\ \theta(l+1) \end{bmatrix} = \begin{bmatrix} x(l) + v\,\Delta t\,\cos(\theta(l)+\Delta\theta)\,\sin(\psi(l)+\Delta\psi) \\ y(l) + v\,\Delta t\,\cos(\theta(l)+\Delta\theta)\,\cos(\psi(l)+\Delta\psi) \\ z(l) + v\,\Delta t\,\sin(\theta(l)+\Delta\theta) \\ \psi(l)+\Delta\psi \\ \theta(l)+\Delta\theta \end{bmatrix} \qquad (7)$$
where x(l+1), y(l+1), z(l+1), ψ(l+1), θ(l+1) are the predicted states of the UAV at sampling step l+1, and the velocity azimuth increments Δψ, Δθ are defined as in formulas (4) and (5).
Process noise is then added to the speed and the velocity azimuth increments, with motion-model covariance matrix Q_t, to predict each particle state.
The covariance matrix Q_t is Cholesky-factorized as Q_t = S S^T. Define q = [v, Δψ, Δθ]^T and let X = randn(3) be a vector of standard normal random numbers; then q_n = S^T X + q is the speed and azimuth-increment vector after adding random process noise. Writing q_n = [v_n, Δψ_n, Δθ_n]^T, v_n is the speed after adding random process noise, and Δψ_n, Δθ_n are the increments of the azimuth angles ψ and θ after adding random process noise.
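The noise-injection step can be sketched as follows (our own illustration; note that we draw the perturbation as S X, whose covariance is exactly Q_t for the Cholesky factor S, where the text writes S^T X):

```python
import numpy as np

def add_process_noise(v, dpsi, dtheta, Qt, rng):
    # Cholesky-factorize Qt = S S^T and perturb q = [v, dpsi, dtheta]^T
    # with correlated Gaussian noise S @ X, where X ~ N(0, I).
    S = np.linalg.cholesky(Qt)
    q = np.array([v, dpsi, dtheta])
    X = rng.standard_normal(3)
    return q + S @ X   # = [v_n, dpsi_n, dtheta_n]
```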
Each particle represents one possible UAV state. For each particle, the state recursion from step l to step l+1 is:
$$\begin{bmatrix} x_p(l+1) \\ y_p(l+1) \\ z_p(l+1) \\ \psi_p(l+1) \\ \theta_p(l+1) \end{bmatrix} = \begin{bmatrix} x_p(l) + v_n\,\Delta t\,\cos(\theta_p(l)+\Delta\theta_n)\,\sin(\psi_p(l)+\Delta\psi_n) \\ y_p(l) + v_n\,\Delta t\,\cos(\theta_p(l)+\Delta\theta_n)\,\cos(\psi_p(l)+\Delta\psi_n) \\ z_p(l) + v_n\,\Delta t\,\sin(\theta_p(l)+\Delta\theta_n) \\ \psi_p(l)+\Delta\psi_n \\ \theta_p(l)+\Delta\theta_n \end{bmatrix} \qquad (8)$$
where x_p, y_p, z_p is the particle's current position and ψ_p, θ_p are its current velocity azimuth angles.
When an observation period arrives, the airborne laser radar detects the landmarks, obtaining the observations and index sequence of all landmarks within detection range; the sequence of detected landmarks is stored. Observations of landmarks detected for the first time are stored in the new-landmark set z_n; observations of previously detected landmarks are stored in the known-landmark set z_f. Outside the observation period, the algorithm returns to predicting the UAV state.
2) Landmark update.
If the known-landmark set z_f is not empty, the predicted observation of each landmark in z_f is computed. First define the intermediate variables dx, dy, dz, d and d_1:

$$dx = x_f - x_p, \quad dy = y_f - y_p, \quad dz = z_f - z_p$$
$$d = \sqrt{dx^2 + dy^2 + dz^2}, \quad d_1 = \sqrt{dx^2 + dy^2} \qquad (9)$$
$$\hat{z}_t = \left[d,\ \arctan\left(\frac{dy}{dx}\right) - \psi,\ \arctan\left(\frac{dz}{d_1}\right) - \theta\right]^T$$

where x_p, y_p, z_p are the position components of the particle state, $\hat{z}_t$ is the predicted landmark observation, and x_f, y_f, z_f are the position coordinates of the landmark in z_f.
The mean and covariance of the landmark are updated with an extended Kalman filter:

$$K_t = P_{f,t-1} H_f^T \left(R_t + H_f P_{f,t-1} H_f^T\right)^{-1}$$
$$\mu_t = \mu_{t-1} + K_t (z_t - \hat{z}_t) \qquad (10)$$
$$P_{f,t} = (I - K_t H_f) P_{f,t-1}$$

where the subscript t denotes time t and the subscript f denotes the known-landmark set; R_t is the covariance matrix of the observation noise, K_t is the Kalman filter gain matrix, H_f is the Jacobian of observation equation (2) with respect to the landmark coordinates (x_f, y_f, z_f), I is the 3×3 identity matrix, P_{f,t} is the landmark covariance matrix, μ_t is the landmark mean, and z_t is the landmark observation.
The Jacobian matrix H_f is:

$$H_f = \begin{bmatrix} \dfrac{dx}{d} & \dfrac{dy}{d} & \dfrac{dz}{d} \\ -\dfrac{dy}{d_1^2} & \dfrac{dx}{d_1^2} & 0 \\ -\dfrac{dx\,dz}{d^2 d_1} & -\dfrac{dy\,dz}{d^2 d_1} & \dfrac{d_1}{d^2} \end{bmatrix} \qquad (11)$$
Define:

$$L_t = H_f P_{f,t} H_f^T + R_t \qquad (12)$$

The weight coefficient $\omega_t^{[k]}$ of the k-th particle is:

$$\omega_t^{[k]} = \left|2\pi L_t^{[k]}\right|^{-1/2} \exp\left(-\frac{1}{2}\left(z - \hat{z}_t^{[k]}\right)^T \left(L_t^{[k]}\right)^{-1} \left(z - \hat{z}_t^{[k]}\right)\right) \qquad (13)$$

where the superscript [k] denotes the k-th particle, k ranges from 1 to N, and z is the UAV observation vector of the landmark.
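Steps (9)-(13) for one known landmark can be sketched in a single routine (an illustrative implementation under our own naming, not the patent's code; `atan2` replaces the arctangent):

```python
import numpy as np

def ekf_landmark_update(mu, P, z, uav, Rt):
    # EKF update of one landmark: predicted observation (Eq. 9),
    # Jacobian H_f (Eq. 11), gain and update (Eq. 10), and the
    # particle weight from the innovation likelihood (Eqs. 12-13).
    xp, yp, zp, psi, theta = uav
    dx, dy, dz = mu - np.array([xp, yp, zp])
    d1 = np.hypot(dx, dy)
    d = np.sqrt(dx * dx + dy * dy + dz * dz)
    z_hat = np.array([d, np.arctan2(dy, dx) - psi, np.arctan2(dz, d1) - theta])
    H = np.array([
        [dx / d,              dy / d,              dz / d],
        [-dy / d1**2,         dx / d1**2,          0.0],
        [-dx * dz / (d*d*d1), -dy * dz / (d*d*d1), d1 / (d*d)],
    ])
    K = P @ H.T @ np.linalg.inv(Rt + H @ P @ H.T)
    mu_new = mu + K @ (z - z_hat)
    P_new = (np.eye(3) - K @ H) @ P
    L = H @ P_new @ H.T + Rt
    nu = z - z_hat
    w = np.linalg.det(2 * np.pi * L) ** -0.5 * np.exp(-0.5 * nu @ np.linalg.inv(L) @ nu)
    return mu_new, P_new, w
```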
If the new-landmark set z_n is not empty, a new landmark has been observed and the environment map must be extended.
Let the state vector of the k-th particle be $x^{[k]} = [x_p^{[k]}, y_p^{[k]}, z_p^{[k]}, \psi^{[k]}, \theta^{[k]}]^T$. For an observation z = [r, α, β]^T in z_n, define the intermediate variables:

$$s\_phi = \sin(\alpha + \psi^{[k]}), \quad c\_phi = \cos(\alpha + \psi^{[k]})$$
$$s\_thi = \sin(\beta + \theta^{[k]}), \quad c\_thi = \cos(\beta + \theta^{[k]}) \qquad (14)$$
$$\Delta x = r\,c\_thi\,c\_phi, \quad \Delta y = r\,c\_thi\,s\_phi, \quad \Delta z = r\,s\_thi$$

Let

$$H_z = \begin{bmatrix} c\_thi\,c\_phi & -r\,c\_thi\,s\_phi & -r\,s\_thi\,c\_phi \\ c\_thi\,s\_phi & r\,c\_thi\,c\_phi & -r\,s\_thi\,s\_phi \\ s\_thi & 0 & r\,c\_thi \end{bmatrix} \qquad (15)$$

$$x_f = \left[x_p^{[k]} + \Delta x,\ y_p^{[k]} + \Delta y,\ z_p^{[k]} + \Delta z\right]^T \qquad (16)$$

$$P_{f,t} = H_z R_t H_z^T \qquad (17)$$

where P_{f,t} is the covariance matrix of the newly added landmark, x_f is the estimate of the landmark position, r is the range between the UAV and the landmark, and Δx, Δy, Δz are the differences between the x, y, z components of the landmark estimate and those of the particle state.
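The map-extension step (Eqs. 14-17) inverts the observation and maps the observation noise into an initial landmark covariance; a small sketch under our own naming:

```python
import numpy as np

def init_landmark(particle, z, Rt):
    # Initialize a newly observed landmark (Eqs. 14-17).
    # particle = (xp, yp, zp, psi, theta); z = (r, alpha, beta).
    xp, yp, zp, psi, theta = particle
    r, alpha, beta = z
    s_phi, c_phi = np.sin(alpha + psi), np.cos(alpha + psi)
    s_thi, c_thi = np.sin(beta + theta), np.cos(beta + theta)
    # Invert the observation to a landmark position estimate (Eq. 16).
    xf = np.array([xp + r * c_thi * c_phi,
                   yp + r * c_thi * s_phi,
                   zp + r * s_thi])
    # Jacobian of the inverse observation w.r.t. (r, alpha, beta) (Eq. 15).
    Hz = np.array([
        [c_thi * c_phi, -r * c_thi * s_phi, -r * s_thi * c_phi],
        [c_thi * s_phi,  r * c_thi * c_phi, -r * s_thi * s_phi],
        [s_thi,          0.0,                r * c_thi],
    ])
    Pf = Hz @ Rt @ Hz.T   # initial landmark covariance (Eq. 17)
    return xf, Pf
```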
3) Particle resampling.
Formula (13) gives the unnormalized weight $\omega_t^{[k]}$ of the k-th particle. The normalized weight of the k-th particle is:

$$\tilde{\omega}_t^{[k]} = \frac{\omega_t^{[k]}}{\sum_{j=1}^{N} \omega_t^{[j]}}$$

The effective particle number $N_{eff} = 1 / \sum_{k=1}^{N} \left(\tilde{\omega}_t^{[k]}\right)^2$ is computed and an effective-particle threshold N_min is set. If N_eff < N_min, the following resampling step is carried out; otherwise no resampling is needed.
The resampling step is as follows:
Generate N uniform random numbers in (0, 1) (N is the number of particles), denoted select[1], ..., select[j], ..., select[N]. The sampling array sap[1] ... sap[N] is initialized to 0; for each select[j], the weights are compared against it and sap[j] is set to the index k of the selected particle (the particle whose cumulative normalized weight first exceeds select[j]). Comparing all the uniform random numbers with the particle weights in this way yields the final sampling array, and the particle indices after resampling are given by the sampling array.
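Interpreted as cumulative-weight (multinomial) resampling, the step can be sketched as follows (our own illustration, including the effective-particle number):

```python
import numpy as np

def resample(weights, rng):
    # Multinomial resampling: draw N uniforms and map each through the
    # cumulative normalized weights, returning the new particle indices
    # (the "sampling array" of the text) and the effective particle number.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                    # normalized weights
    n_eff = 1.0 / np.sum(w ** 2)       # effective particle number
    cum = np.cumsum(w)
    u = rng.random(len(w))             # select[1..N] in [0, 1)
    idx = np.searchsorted(cum, u, side="right")  # first index with cum > u
    return idx, n_eff
```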
After resampling, the mean of the particle states gives the UAV position estimate.
Step 3: obtain velocity estimates with an arctangent tracking differentiator.
Passing each of the three UAV position estimates through a tracking differentiator yields the UAV velocity estimates. The arctangent tracking differentiator balances tracking speed against smoothness of the transient, has a good filtering effect, and has few parameters to tune, so the arctangent form is used to obtain the UAV velocity estimate. It is as follows:
$$\begin{cases} \dot{x}_1(t) = x_2(t) \\ \dot{x}_2(t) = R^2\left(-a_1 \arctan\left(l_1 (x_1(t) - v(t))\right) - a_2 \arctan\left(l_2\, x_2(t)/R\right)\right) \end{cases} \qquad (18)$$
where a_1 > 0, a_2 > 0, l_1 > 0, l_2 > 0 and R > 0 are design parameters, v(t) is the input signal at time t (here a UAV position estimate), and x_1(t) and x_2(t) are respectively the estimate of the input and the estimate of its derivative (the velocity estimate).
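As an illustration (our own code), Eq. (18) can be integrated numerically; we assume a simple Euler step, and the default parameters follow the x-axis design of the embodiment:

```python
import math

def track_differentiator(samples, dt, R=10.0, a1=2.0, a2=2.0, l1=3.0, l2=3.0):
    # Euler integration of the arctangent tracking differentiator (Eq. 18):
    # x1 tracks the sampled input v(t); x2 estimates its derivative.
    x1, x2 = samples[0], 0.0
    out = []
    for v in samples:
        dx1 = x2
        dx2 = R * R * (-a1 * math.atan(l1 * (x1 - v))
                       - a2 * math.atan(l2 * x2 / R))
        x1 += dx1 * dt
        x2 += dx2 * dt
        out.append((x1, x2))
    return out
```

Fed a ramp of slope 2, the derivative estimate x2 settles near 2 with a small constant tracking lag in x1, which matches the steady-state balance of the two arctangent terms.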
Embodiment:
In the glide segment, the glide starts at a height of 100 m with a flight path angle of -3.5° and a sink rate of -2 m/s. In the flare segment, the touchdown height is 0.7 m, the sink rate is -0.08 m/s, and the final flare flight path angle is γ_2 = -1°; the distance to fly from the glide point is 1985.1 m, and the sink rate in the flare segment decays exponentially from -2 m/s to -0.08 m/s. The flight time of the glide segment is 35 s and that of the flare segment is 60 s. Environmental landmarks are distributed around the trajectory; in the comparison of estimation accuracy versus landmark count below, 69 and 134 landmarks are used. The UAV speed control noise is ±0.3 m/s, the angle control noise is ±0.3°, the system update period is 0.05 s, the maximum detection range of the airborne laser radar is 50 m, the range observation noise is ±0.1 m, the angle observation noise is ±0.3°, and the observation period is 0.4 s. The tracking differentiator parameters for the x, y, z axis velocity estimates are designed as follows:

R_x = 10, a_1x = 2, l_1x = 3, a_2x = 2, l_2x = 3
R_y = 20, a_1y = 2, l_1y = 3, a_2y = 2, l_2y = 3
R_z = 5, a_1z = 2, l_1z = 3, a_2z = 2, l_2z = 3

The three-axis initial position errors are 0.2 m, 0.3 m and 0.5 m, and 12 waypoints are chosen from the landing path to form the expected route; the landing path, landmarks and waypoints are shown in Fig. 2.
To examine the influence of particle count on path estimation error, simulations were run with 12, 18 and 24 particles under the same 134 landmarks. The results are shown in Fig. 3. The lateral deviation (X-axis error) and height error (Z-axis error) of the UAV stay within 1 m, which meets the navigation accuracy requirement of the landing phase. Moreover, increasing the particle count reduces the position estimation error: since FastSLAM estimates the UAV path with a particle filter, more particles enlarge the sample of candidate paths, and denser particle sampling approximates the true path more closely, reducing the path estimation error.
The velocity estimation errors of the three axes for different particle counts are shown in Fig. 4; throughout the landing, the three-axis velocity errors remain in a small range around 0. The flight-path-angle comparison is shown in Fig. 5; the estimation error relative to the true value is small, the flight path angle is constant in the glide segment, and in the flare segment it decreases gradually towards 0.
To examine the influence of landmark count on path estimation error, 69 and 134 landmarks were simulated with 18 particles; the results are shown in Fig. 6. Increasing the landmark count reduces the UAV path estimation error: more landmarks give the UAV more observations, which helps reduce the positioning error. In particular, when the UAV passes through a region dense with landmarks, it obtains a large amount of observation information that helps correct the positioning error.
To check the convergence of the map formed by the observed landmarks, the trace of the landmark covariance matrix is plotted in Fig. 7. Each time a new landmark is observed the trace increases, showing that the uncertainty of the map grows; it then decreases gradually as the UAV continues to move, converging to 0.
Because the control noise and observation noise in the simulation are generated randomly, 20 independent repeated simulations were run for 12, 18 and 24 particles with 134 landmarks to reduce the influence of randomness and assess the algorithm's performance more accurately; performance is compared by the mean absolute position estimation error, shown in Table 1. The absolute three-axis position estimation errors are all within 0.6 m, which satisfies the position accuracy requirement of landing-phase navigation.
Table 1. Mean absolute position estimation error for different particle counts

Particle count | X-axis error | Y-axis error | Z-axis error
12             | 0.53 m       | 0.31 m       | 0.53 m
18             | 0.48 m       | 0.30 m       | 0.51 m
24             | 0.46 m       | 0.30 m       | 0.51 m

Claims (3)

1. A UAV landing method based on the FastSLAM algorithm, characterized in that it comprises the following steps:
Step 1: establish the system model of the UAV landing phase in the East-North-Up geographic coordinate system, the system model comprising the motion model and the observation model of the UAV:

$$\dot{x} = f(x(t), u(t), t) + \delta_t \qquad (1)$$
$$z = h(x(t), m(t), t) + \varepsilon_t \qquad (2)$$

where t is time, x(t) is the UAV state vector at time t, and $\dot{x}$ is its derivative. The UAV state vector is x = [x, y, z, ψ, θ]^T, where x, y, z are the three position coordinates of the UAV in the ground coordinate system oxyz (o is a chosen point on the ground, and the x, y, z axes point east, north and up respectively), ψ is the angle between the projection of the UAV velocity vector onto the xoy plane and the x axis, and θ is the angle between the UAV velocity vector and the xoy plane; ψ and θ are called the velocity azimuth angles. Some environmental landmarks are distributed around the landing trajectory, and the airborne laser radar measures the range and bearing between the UAV and each landmark. z is the UAV observation of a landmark, δ_t is the Gaussian noise of the motion model with covariance matrix Q_t, ε_t is the observation noise with covariance matrix R_t, u(t) = v is the UAV speed at time t from the airborne INS, and m(t) is the landmark position vector in the ground coordinate system. After discretization the motion model takes the form:

$$\begin{bmatrix} x(l+1) \\ y(l+1) \\ z(l+1) \\ \psi(l+1) \\ \theta(l+1) \end{bmatrix} = \begin{bmatrix} x(l) + v\,\Delta t\,\cos(\theta(l)+\Delta\theta)\,\sin(\psi(l)+\Delta\psi) \\ y(l) + v\,\Delta t\,\cos(\theta(l)+\Delta\theta)\,\cos(\psi(l)+\Delta\psi) \\ z(l) + v\,\Delta t\,\sin(\theta(l)+\Delta\theta) \\ \psi(l)+\Delta\psi \\ \theta(l)+\Delta\theta \end{bmatrix} + \begin{bmatrix} \delta_{t1} \\ \delta_{t2} \\ \delta_{t3} \\ \delta_{t4} \\ \delta_{t5} \end{bmatrix} \qquad (3)$$

where l indexes the l-th sampling step, Δt is the sampling interval, Δθ and Δψ are the increments of θ and ψ between adjacent sampling steps, δ_t1, ..., δ_t5 are the Gaussian noises of the states x, y, z, ψ and θ respectively, and v is the current UAV speed from the airborne INS. Several waypoints are selected on the landing trajectory; starting from the first waypoint as the target, the distance between the current UAV position and the target waypoint is computed, and if it is below a set threshold R_min the target switches to the next waypoint. The velocity azimuth increments Δψ, Δθ are computed as:

$$\Delta\psi = \arctan\left(\frac{wp_y - y(l)}{wp_x - x(l)}\right) - \psi(l) \qquad (4)$$
$$\Delta\theta = \arctan\left(\frac{wp_z - z(l)}{\sqrt{(wp_x - x(l))^2 + (wp_y - y(l))^2}}\right) - \theta(l) \qquad (5)$$

where wp_x, wp_y, wp_z are the coordinates of the target waypoint in the ground coordinate system and x(l), y(l), z(l), ψ(l), θ(l) are the UAV states at sampling step l;
the UAV observation of a landmark is the vector z = [r, α, β]^T:

$$\begin{bmatrix} r \\ \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} \sqrt{(x-x_m)^2 + (y-y_m)^2 + (z-z_m)^2} \\ \arctan\left(\dfrac{z-z_m}{\sqrt{(x-x_m)^2+(y-y_m)^2}}\right) - \theta \\ \arctan\left(\dfrac{y-y_m}{x-x_m}\right) - \psi \end{bmatrix} \qquad (6)$$

where r is the range between the UAV and the landmark, α is the angle between the UAV-to-landmark vector and the horizontal plane minus the angle θ of the current velocity vector, β is the angle between the projection of the UAV-to-landmark vector onto the xoy plane and the x axis minus the velocity azimuth ψ, x, y, z are the current UAV position coordinates, and x_m, y_m, z_m are the landmark position coordinates;
Step 2: design the FastSLAM algorithm for the UAV landing phase, comprising the following steps:
1) UAV state update.
N particles are used to estimate the UAV state. The particle weights are initialized to 1/N, the waypoints and landmarks are initialized, and the initial state vector is x_0 = [x_0, y_0, z_0, ψ_0, θ_0]^T, whose elements are the initial UAV position and the azimuth angles of the initial velocity.
The ideal state at the next sampling step is predicted as:

$$\begin{bmatrix} x(l+1) \\ y(l+1) \\ z(l+1) \\ \psi(l+1) \\ \theta(l+1) \end{bmatrix} = \begin{bmatrix} x(l) + v\,\Delta t\,\cos(\theta(l)+\Delta\theta)\,\sin(\psi(l)+\Delta\psi) \\ y(l) + v\,\Delta t\,\cos(\theta(l)+\Delta\theta)\,\cos(\psi(l)+\Delta\psi) \\ z(l) + v\,\Delta t\,\sin(\theta(l)+\Delta\theta) \\ \psi(l)+\Delta\psi \\ \theta(l)+\Delta\theta \end{bmatrix} \qquad (7)$$

where x(l+1), y(l+1), z(l+1), ψ(l+1), θ(l+1) are the predicted states of the UAV at sampling step l+1.
Process noise is added to the speed and the velocity azimuth increments, and for each particle the state recursion from step l to step l+1 is:

$$\begin{bmatrix} x_p(l+1) \\ y_p(l+1) \\ z_p(l+1) \\ \psi_p(l+1) \\ \theta_p(l+1) \end{bmatrix} = \begin{bmatrix} x_p(l) + v_n\,\Delta t\,\cos(\theta_p(l)+\Delta\theta_n)\,\sin(\psi_p(l)+\Delta\psi_n) \\ y_p(l) + v_n\,\Delta t\,\cos(\theta_p(l)+\Delta\theta_n)\,\cos(\psi_p(l)+\Delta\psi_n) \\ z_p(l) + v_n\,\Delta t\,\sin(\theta_p(l)+\Delta\theta_n) \\ \psi_p(l)+\Delta\psi_n \\ \theta_p(l)+\Delta\theta_n \end{bmatrix} \qquad (8)$$

where x_p, y_p, z_p is the particle's current position, ψ_p, θ_p are its current velocity azimuth angles, v_n is the speed after adding random process noise, and Δψ_n, Δθ_n are the increments of the azimuth angles ψ and θ after adding random process noise.
When an observation period arrives, the airborne laser radar detects the landmarks, obtaining the observations and index sequence of all landmarks within detection range, and the sequence of detected landmarks is stored. Observations of landmarks detected for the first time are stored in the new-landmark set z_n; observations of previously detected landmarks are stored in the known-landmark set z_f. Outside the observation period, the algorithm returns to predicting the UAV state;
2) Updating the environment landmarks;
If the non-new landmark set z_f is not empty, the predicted observation is computed for each environment landmark in z_f; the intermediate variables dx, dy, dz, d and d_1 are first given as follows:
$$
\begin{aligned}
dx &= x_f - x_p, \quad dy = y_f - y_p, \quad dz = z_f - z_p \\
d &= \sqrt{dx^2 + dy^2 + dz^2}, \quad d_1 = \sqrt{dx^2 + dy^2} \\
\hat{z}_t &= \left[\, d,\ \arctan\!\left(\frac{dy}{dx}\right) - \psi,\ \arctan\!\left(\frac{dz}{d_1}\right) - \theta \,\right]^T
\end{aligned}
\qquad (9)
$$
where x_p, y_p, z_p are the position components of the particle state, ẑ_t is the predicted value of the landmark observation, and x_f, y_f, z_f are the three-axis position coordinates of the environment landmark in the non-new landmark set z_f;
An extended Kalman filter (EKF) is used to update the mean and the variance of the environment landmark:
$$
\begin{aligned}
K_t &= P_{f,t-1} H_f^T \left( R_t + H_f P_{f,t-1} H_f^T \right)^{-1} \\
\mu_t &= \mu_{t-1} + K_t \left( z_t - \hat{z}_t \right) \\
P_{f,t} &= \left( I - K_t H_f \right) P_{f,t-1}
\end{aligned}
\qquad (10)
$$
where the subscript t denotes time t and the subscript f indicates a quantity belonging to the non-new landmark set; R_t is the covariance matrix of the observation noise, K_t is the Kalman filter gain matrix, H_f is the Jacobian of observation equation (2) with respect to the landmark coordinates (x_f, y_f, z_f), I is the 3×3 identity matrix, P_{f,t} is the covariance matrix of the environment landmark, μ_t is the mean of the environment landmark, and z_t is the landmark observation;
The Jacobian matrix H_f is expressed as:
$$
H_f = \begin{bmatrix}
\dfrac{dx}{d} & \dfrac{dy}{d} & \dfrac{dz}{d} \\[4pt]
-\dfrac{dy}{d_1^2} & \dfrac{dx}{d_1^2} & 0 \\[4pt]
-\dfrac{dx\,dz}{d^2 d_1} & -\dfrac{dy\,dz}{d^2 d_1} & \dfrac{d_1}{d^2}
\end{bmatrix}
\qquad (11)
$$
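The observation prediction of equation (9), the Jacobian of equation (11), and the EKF update of equation (10) can be sketched together. This is a minimal numpy sketch; the function names are illustrative, and the Kalman gain is written in the standard form K = P Hᵀ(R + H P Hᵀ)⁻¹.

```python
import numpy as np

def predict_observation(particle, landmark):
    """Predicted range/azimuth/elevation observation of a landmark, eq. (9),
    plus the Jacobian w.r.t. the landmark coordinates, eq. (11)."""
    xp, yp, zp, psi, theta = particle
    dx, dy, dz = landmark[0] - xp, landmark[1] - yp, landmark[2] - zp
    d1 = np.hypot(dx, dy)
    d = np.sqrt(dx**2 + dy**2 + dz**2)
    z_hat = np.array([d, np.arctan2(dy, dx) - psi, np.arctan2(dz, d1) - theta])
    H = np.array([
        [dx / d,                  dy / d,                  dz / d],
        [-dy / d1**2,             dx / d1**2,              0.0],
        [-dx * dz / (d**2 * d1), -dy * dz / (d**2 * d1),   d1 / d**2],
    ])
    return z_hat, H

def ekf_landmark_update(mu, P, z, particle, R):
    """EKF mean/covariance update of one landmark, eq. (10)."""
    z_hat, H = predict_observation(particle, mu)
    K = P @ H.T @ np.linalg.inv(R + H @ P @ H.T)   # Kalman gain
    mu_new = mu + K @ (z - z_hat)
    P_new = (np.eye(3) - K @ H) @ P
    return mu_new, P_new
```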
Define: $L_t = H_f P_{f,t} H_f^T + R_t \qquad (12)$
The weight coefficient $\omega_t^{[k]}$ of the k-th particle is as follows:
$$
\omega_t^{[k]} = \left| 2\pi L_t^{[k]} \right|^{-1/2} \exp\!\left( -\frac{1}{2} \left( z - \hat{z}_t^{[k]} \right)^T \left( L_t^{[k]} \right)^{-1} \left( z - \hat{z}_t^{[k]} \right) \right)
\qquad (13)
$$
The superscript [k] denotes the k-th particle, k ranging from 1 to N, and z is the UAV's observation vector of the environment landmark;
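Equation (13) is the multivariate-Gaussian likelihood of the innovation; a minimal numpy sketch (function name illustrative; the quadratic form uses the inverse of L, as in the standard Gaussian density):

```python
import numpy as np

def particle_weight(z, z_hat, L):
    """Gaussian likelihood weight of one particle, eq. (13), where
    L = H_f P_{f,t} H_f^T + R_t as defined in eq. (12)."""
    innov = z - z_hat
    norm = np.sqrt(np.linalg.det(2.0 * np.pi * L))
    return np.exp(-0.5 * innov @ np.linalg.inv(L) @ innov) / norm
```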
If the new-landmark set z_n is not empty, a new environment landmark has been observed, and the environment map must be extended;
Let the state vector of the k-th particle be $x^{[k]} = [x_p^{[k]}, y_p^{[k]}, z_p^{[k]}, \psi_p^{[k]}, \theta_p^{[k]}]^T$. For an observation $z = [r, \alpha, \beta]^T$ in the new-landmark set z_n, define the intermediate variables:
$$
\begin{aligned}
s\_phi &= \sin(\alpha + \psi^{[k]}), \quad c\_phi = \cos(\alpha + \psi^{[k]}) \\
s\_thi &= \sin(\beta + \theta^{[k]}), \quad c\_thi = \cos(\beta + \theta^{[k]}) \\
\Delta x &= r \cdot c\_thi \cdot c\_phi \\
\Delta y &= r \cdot c\_thi \cdot s\_phi \\
\Delta z &= r \cdot s\_thi
\end{aligned}
\qquad (14)
$$
Let
$x_f = [x_p^{[k]} + \Delta x,\ y_p^{[k]} + \Delta y,\ z_p^{[k]} + \Delta z]^T \qquad (16)$
$P_{f,t} = H_z R_t H_z^T \qquad (17)$
where P_{f,t} is the covariance matrix of the newly added environment landmark, x_f is the estimated position of the landmark, r is the distance between the UAV and the landmark, Δx is the difference between the x-axis component of the landmark estimate and the x-axis component of the particle state, Δy is the difference between the y-axis component of the landmark estimate and the y-axis component of the particle state, and Δz is the difference between the z-axis component of the landmark estimate and the z-axis component of the particle state;
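The new-landmark initialization of equations (14), (16) and (17) can be sketched as below. Note that equation (15), which defines the Jacobian H_z, is not reproduced in the extracted text; the Jacobian used here is our own derivation of ∂(Δx, Δy, Δz)/∂(r, α, β) from equation (14), and the function name is illustrative.

```python
import numpy as np

def init_landmark(particle, z, R):
    """Initialize a new landmark from observation z = [r, alpha, beta],
    eqs. (14), (16), (17)."""
    xp, yp, zp, psi, theta = particle
    r, alpha, beta = z
    s_phi, c_phi = np.sin(alpha + psi), np.cos(alpha + psi)
    s_thi, c_thi = np.sin(beta + theta), np.cos(beta + theta)
    dx_, dy_, dz_ = r * c_thi * c_phi, r * c_thi * s_phi, r * s_thi
    x_f = np.array([xp + dx_, yp + dy_, zp + dz_])           # eq. (16)
    # Assumed Jacobian of (dx, dy, dz) w.r.t. (r, alpha, beta),
    # derived from eq. (14) -- eq. (15) is missing from the extraction.
    Hz = np.array([
        [c_thi * c_phi, -r * c_thi * s_phi, -r * s_thi * c_phi],
        [c_thi * s_phi,  r * c_thi * c_phi, -r * s_thi * s_phi],
        [s_thi,          0.0,                r * c_thi],
    ])
    P_f = Hz @ R @ Hz.T                                      # eq. (17)
    return x_f, P_f
```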
3) particle resampling;
Formula (13) gave the weight coefficient $\omega_t^{[k]}$ of the k-th particle; the normalized weight coefficient of each particle is now computed as:

$$ \tilde{\omega}_t^{[k]} = \omega_t^{[k]} \Big/ \sum_{j=1}^{N} \omega_t^{[j]} $$

Calculate the effective particle number $N_{eff} = 1 \big/ \sum_{k=1}^{N} \big( \tilde{\omega}_t^{[k]} \big)^2$ and set an effective-particle threshold $N_{min}$. If $N_{eff} < N_{min}$, the resampling step is carried out; the particle-state mean after resampling is obtained, and from it the UAV position estimate. Otherwise resampling is not needed;
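The weight normalization and effective-particle-number test above can be sketched as follows (a minimal numpy illustration; the function names are our own):

```python
import numpy as np

def effective_particle_count(weights):
    """Normalize the weights and compute N_eff = 1 / sum(w_norm**2)."""
    w = np.asarray(weights, dtype=float)
    w_norm = w / w.sum()
    n_eff = 1.0 / np.sum(w_norm ** 2)
    return w_norm, n_eff

def needs_resampling(weights, n_min):
    """Resample only when the effective particle number falls below n_min."""
    _, n_eff = effective_particle_count(weights)
    return n_eff < n_min
```

With equal weights N_eff equals the particle count N; with one dominant weight it approaches 1, which is the degeneracy the threshold test detects.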
Step 3: an arctangent tracking differentiator is used to obtain the velocity estimate.
2. The UAV landing method based on the FastSLAM algorithm according to claim 1, characterized in that the resampling step in step 2 is as follows:
Generate N uniform random numbers between 0 and 1, denoted select[1], …, select[j], …, select[N]. For the k-th particle with weight coefficient ω^{[k]}, all subscripts j satisfying ω^{[k]} > select[j] constitute a set J. Define a sampling array sap[1]~sap[N], initialized to 0; for the k-th particle, the entries of the sampling array at the positions in J are set to k. In the same way, all particle weight coefficients are compared with the uniform random array and the above operation is carried out, yielding the final sampling array; the particle numbering after resampling is the sampling-array sequence.
3. The UAV landing method based on the FastSLAM algorithm according to claim 1, characterized in that the arctangent tracking differentiator in step 3 is as follows:
$$
\begin{aligned}
\dot{x}_1(t) &= x_2(t) \\
\dot{x}_2(t) &= R^2 \left( -a_1 \arctan\!\left( l_1 \left( x_1(t) - v(t) \right) \right) - a_2 \arctan\!\left( l_2 x_2(t) / R \right) \right)
\end{aligned}
\qquad (18)
$$
where a_1 > 0, a_2 > 0, l_1 > 0, l_2 > 0, R > 0 are design parameters, v(t) is the input UAV speed at time t, and x_1(t) and x_2(t) are respectively the estimate of the input speed and the estimate of its derivative.
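Equation (18) can be integrated numerically, e.g. with a forward-Euler step. The sketch below is illustrative only: the parameter values are our assumptions (the patent only requires them to be positive), and the step size must be small enough for Euler stability.

```python
import math

def track_speed(v_samples, dt, R=10.0, a1=1.0, a2=1.0, l1=1.0, l2=1.0):
    """Forward-Euler integration of the arctangent tracking differentiator,
    eq. (18). Returns the trajectories of the speed estimate x1 and the
    speed-derivative estimate x2."""
    x1, x2 = 0.0, 0.0
    x1_hist, x2_hist = [], []
    for v in v_samples:
        # Evaluate both derivatives at the old state (explicit Euler).
        dx1 = x2
        dx2 = R ** 2 * (-a1 * math.atan(l1 * (x1 - v))
                        - a2 * math.atan(l2 * x2 / R))
        x1 += dx1 * dt
        x2 += dx2 * dt
        x1_hist.append(x1)
        x2_hist.append(x2)
    return x1_hist, x2_hist
```

For a constant speed input, x1 settles to the input value and x2 (the derivative estimate) settles to zero.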
CN201610766065.6A 2016-08-30 2016-08-30 A kind of UAV Landing method based on FastSLAM algorithm Expired - Fee Related CN106197432B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610766065.6A CN106197432B (en) 2016-08-30 2016-08-30 A kind of UAV Landing method based on FastSLAM algorithm


Publications (2)

Publication Number Publication Date
CN106197432A true CN106197432A (en) 2016-12-07
CN106197432B CN106197432B (en) 2018-12-21

Family

ID=58088578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610766065.6A Expired - Fee Related CN106197432B (en) 2016-08-30 2016-08-30 A kind of UAV Landing method based on FastSLAM algorithm

Country Status (1)

Country Link
CN (1) CN106197432B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101625572A (en) * 2009-08-10 2010-01-13 浙江大学 FastSLAM algorithm based on improved resampling method and particle selection
US20140126769A1 (en) * 2012-11-02 2014-05-08 Qualcomm Incorporated Fast initialization for monocular visual slam
CN105737832A (en) * 2016-03-22 2016-07-06 北京工业大学 Distributed SLAM (simultaneous localization and mapping) method on basis of global optimal data fusion


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LULU YING et al.: "A Modified FastSLAM with Simple Particle Swarm Optimization and Consistent Mapping", OCEANS 2014 - Taipei *
CONG Chuying et al.: "A FastSLAM Algorithm for a Small UAV", Journal of Huazhong University of Science and Technology (Natural Science Edition) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108195376A (en) * 2017-12-13 2018-06-22 天津津航计算技术研究所 Small drone Camera calibration method
CN108195376B (en) * 2017-12-13 2021-06-18 天津津航计算技术研究所 Autonomous navigation positioning method for small unmanned aerial vehicle
CN108387236A (en) * 2018-02-08 2018-08-10 北方工业大学 Polarized light SLAM method based on extended Kalman filtering
CN108387236B (en) * 2018-02-08 2021-05-07 北方工业大学 Polarized light SLAM method based on extended Kalman filtering
CN108616302A (en) * 2018-04-28 2018-10-02 中国人民解放军陆军工程大学 Unmanned plane Multi folds coverage model and dispositions method under a kind of power control
CN108616302B (en) * 2018-04-28 2020-10-30 中国人民解放军陆军工程大学 Unmanned aerial vehicle multiple coverage model under power control and deployment method
CN109032169A (en) * 2018-06-15 2018-12-18 岭南师范学院 A kind of unmanned plane landing-gear based on laser conduction
CN109032169B (en) * 2018-06-15 2024-03-26 岭南师范学院 Unmanned aerial vehicle landing device based on laser conduction
US20200339153A1 (en) * 2019-04-25 2020-10-29 WeRide Corp. Apparatus and method for controlling velocity of autonomous driving vehicle, and storage medium
US11572079B2 (en) * 2019-04-25 2023-02-07 WeRide Corp. Apparatus and method for controlling velocity of autonomous driving vehicle, and storage medium
CN115235475A (en) * 2022-09-23 2022-10-25 成都凯天电子股份有限公司 MCC-based EKF-SLAM back-end navigation path optimization method
CN115235475B (en) * 2022-09-23 2023-01-03 成都凯天电子股份有限公司 MCC-based EKF-SLAM back-end navigation path optimization method

Also Published As

Publication number Publication date
CN106197432B (en) 2018-12-21

Similar Documents

Publication Publication Date Title
CN106197432A (en) A kind of UAV Landing method based on FastSLAM algorithm
CN106643737A (en) Four-rotor aircraft attitude calculation method in wind power interference environments
Langelaan et al. Wind field estimation for autonomous dynamic soaring
CN106597017A (en) UAV angular acceleration estimation method and apparatus based on extended Kalman filtering
CN103913181A (en) Airborne distribution type POS (position and orientation system) transfer alignment method based on parameter identification
Sabatini et al. Low-cost sensors data fusion for small size unmanned aerial vehicles navigation and guidance
CN103076017A (en) Method for designing Mars entry phase autonomous navigation scheme based on observability degree analysis
Petrich et al. On-board wind speed estimation for uavs
CN103708045B (en) The on-line parameter discrimination method that a kind of lunar exploration airship great-jump-forward reenters
CN103438890B (en) Based on the planetary power descending branch air navigation aid of TDS and image measurement
CN105300387B (en) A kind of martian atmosphere approach section nonlinear and non-Gaussian order filtering method
CN101762272A (en) Deep space autonomous navigation method based on observability degree analysis
Yu et al. Observability-based local path planning and obstacle avoidance using bearing-only measurements
Guo et al. Algorithm for geomagnetic navigation and its validity evaluation
Xiong et al. A two-position SINS initial alignment method based on gyro information
CN103921957B (en) The energy management method of jumping up that a kind of lunar exploration airship great-jump-forward reenters
Zahran et al. Hybrid Machine Learning VDM for UAVs in GNSS‐denied Environment
Bhandari et al. Flight testing, data collection, and system identification of a multicopter UAV
Sridhar et al. Kalman filter based range estimation for autonomous navigation using imaging sensors
Wang et al. Novel in-flight coarse alignment of low-cost strapdown inertial navigation system for unmanned aerial vehicle applications
Crocoll et al. Quadrotor inertial navigation aided by a vehicle dynamics model with in-flight parameter estimation
CN103616024A (en) Method for determining observability of planetary exploration entry section autonomous navigation system
Lee et al. Analysis on observability and performance of INS-range integrated navigation system under urban flight environment
Guan et al. A new integrated navigation system for the indoor unmanned aerial vehicles (UAVs) based on the neural network predictive compensation
Li et al. Small UAV autonomous localization based on multiple sensors fusion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181221

Termination date: 20190830