Unmanned aerial vehicle staged autonomous landing method based on visual information fusion
Technical Field
The invention relates to the technical field of intelligent offshore robots, in particular to a staged autonomous landing method of an unmanned aerial vehicle based on visual information fusion.
Background
With the progress of science and technology, unmanned systems are being applied ever more widely in professional fields such as agriculture, electric power and the ocean. Unmanned Aerial Vehicles (UAVs), a favored member of the unmanned-system family, have seen their development pace and range of applications advance rapidly in recent years.
In the ocean field, UAVs, which have limited endurance but a wide search range, are usually carried on unmanned boats, which have strong endurance but a small search range, forming a cooperative UAV-boat formation with complementary advantages for completing tasks such as maritime rescue, environmental monitoring and battlefield reconnaissance; the core technology of such a formation is shipborne-UAV autonomous navigation.
Taking off, hovering and landing are the three basic problems of shipborne-UAV autonomous navigation, among which autonomous landing is the most challenging. During landing, the UAV must effectively identify landmarks and accurately control its pose while facing a moving, swaying landing platform. At present, research on UAV autonomous landing technology remains scarce internationally, and the main challenges are reflected in two aspects:
first, the navigation precision of the UAV during autonomous landing. The GPS of a small UAV typically has meter-level precision, which cannot meet the precision requirement of the landing task; an INS/GPS integrated navigation system can only locate the UAV's own pose and lacks the relative pose between the UAV and the boat's landing platform. Although many documents propose computer vision to assist navigation, their image processing is computationally heavy, the typical onboard computer has limited computing power, and the pose given by visual navigation therefore lags behind the real-time pose. How to guarantee the real-time validity of the pose information while improving navigation precision is thus an urgent problem in UAV autonomous landing.
Secondly, the safety of the UAV at the terminal stage of landing. During landing, some documents directly use the center point of the landmark as the reference position signal for control. However, this tends to make the camera lose sight of the landmark, especially on unmanned boats where the landmark position is not fixed. In addition, owing to the ground effect (when a moving object is close to the ground, the ground interferes with the aerodynamic force generated by the object), the UAV struggles to remain stable when close to the boat's landing platform, so the success rate of such methods is low in practice, and the safety of the UAV at the terminal stage of landing needs to be improved.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provide a staged autonomous landing method of an unmanned aerial vehicle based on visual information fusion.
The purpose of the invention can be realized by the following technical scheme:
an unmanned aerial vehicle staged autonomous landing method based on visual information fusion comprises the following steps:
1) Landmark making: take a corresponding target landing point on the unmanned boat as the landmark, attach an AprilTags tag to it, and adjust the angle of the UAV's camera so that the UAV can detect the landmark when approaching it;
2) Image processing: the AprilTags visual reference system configured in the onboard computer of the UAV calculates its parameters from the parameter information of the camera and the image information captured by the camera, and obtains the relative pose X_ct between the camera and the landmark when the landmark is found;
3) Information fusion: the relative pose X_ct between the camera and the landmark is fused with the measurement data of the IMU to obtain the real-time relative pose X_vs of the unmanned boat in the UAV reference frame;
4) Motion control: according to the real-time relative pose X_vs, a nested control scheme is adopted to guarantee stable flight and path tracking, and a staged landing method is adopted to land on the boat.
The parameters of the AprilTags visual reference system in step 2) comprise the focal length F_w in pixel units in the width direction, the focal length F_h in pixel units in the height direction, and the image center position (C_w, C_h). The calculation formula is:

$$F = \frac{L_{focus} \cdot N_{pixel}}{L_{receptor}}, \qquad C = \frac{N_{pixel}}{2}$$

where F and C are the focal length and center position in a given direction, L_focus and L_receptor are respectively the physical focal length and the photosensitive element size, and N_pixel is the number of pixels; the parameters in the width direction and the height direction are each calculated with this formula.
The step 3) specifically comprises the following steps:
31) the system state estimation comprises two stages, wherein the first stage is a stage in which no landmark is detected, and the second stage is a stage in which the landmark is detected;
when no landmark is detected, the system state is the pose information X_v provided by the IMU, i.e. X = [X_v], and the system state is estimated as

$$\hat{X}_k = F_{k-1}\hat{X}_{k-1} + w_{k-1}$$

where $\hat{X}_k$ is the estimated system state at step k, $\hat{X}_{k-1}$ is the estimated system state at step k-1, and F_{k-1} is the Kalman filter transition matrix in the stage where no landmark is detected; a uniform-velocity model is used to predict the future state of the system, i.e. F_{k-1} satisfies

$$F_{k-1} = \begin{bmatrix} I_6 & \Delta t\, I_6 \\ 0 & I_6 \end{bmatrix}$$

(the 6-dimensional pose is advanced by the stacked linear and angular velocities, which are themselves held constant), where Δt is the sampling interval and w_{k-1} is Gaussian white noise;
starting from the first detection of the landmark, the unmanned boat state X_s is added to the system state, i.e. X = [X_v X_s], and the Kalman filter transition matrix F_{k-1} is updated to the block-diagonal form

$$F_{k-1} = \begin{bmatrix} F_v & 0 \\ 0 & F_s \end{bmatrix}$$

where F_v is the block of Kalman filter parameters associated with the UAV and F_s is the block associated with the unmanned boat; only the motion of the unmanned boat on the horizontal plane is considered, and the uniform-velocity model is likewise adopted for F_s;
32) obtain the observations of the IMU and the camera, wherein the observation h_IMU of the IMU is

$$h_{IMU}(X) = [\,z_{lv}\ \ \phi_{lv}\ \ \theta_{lv}\ \ \psi_{lv}\ \ u_{lv}\ \ v_{lv}\,]^T$$

where z_lv is the height, φ_lv, θ_lv, ψ_lv are the rotation angles about the three motion directions x, y, z respectively, and u_lv and v_lv are the forward and lateral velocities respectively; the corresponding Jacobian matrix H_IMU is a constant selection matrix, each observed quantity being itself a component of the state;

the observation h_tag(X_KF) of the camera is

$$h_{tag}(X_{KF}) = \ominus(X_{lv} \oplus X_{vc}) \oplus (X_{ls} \oplus X_{st})$$

where X_vc is the pose of the camera in the UAV reference frame, X_st is the pose of the tag in the unmanned boat reference frame, X_lv is the estimated pose of the UAV, X_ls is the estimated pose of the unmanned boat, and ⊕ and ⊖ are the coordinate transformation operations; the corresponding Jacobian matrix is H_tag;
33) perform extended Kalman filtering on the observations to obtain the real-time system state.
The step 3) further comprises the following steps:
34) using the historical poses of the UAV and the unmanned boat, correct the system state under the delay condition to obtain the real-time relative pose X_vs of the unmanned boat in the UAV reference frame.
In step 4), the nested control scheme is specifically as follows:
six independent closed-loop PID controllers are used for control, in which the inner-loop attitude control guarantees stable flight and the outer-loop position control performs path tracking.
In step 4), the staged landing method specifically comprises the following steps:
41) landmark finding: when the landmark is found for the first time, initialize the system state and make the UAV track the landmark;
42) slow descent stage: guide the UAV to the center of the landmark and keep it centered on the landmark within radius r_loop at height h_loop; the next waypoint is searched vertically downward with Δz as the step length, and the UAV climbs by Δz whenever the view of the landmark is lost, until the landmark is found again;
43) fast descent stage: set a first height threshold h_land; when the height of the UAV is less than h_land, the landmark has been in the camera's view for N consecutive frames, and the fast descent conditions are met, the UAV enters the fast descent stage and directly takes 0 as the reference signal of the z axis;
44) landing stage: set a second height threshold h_min; when the height of the UAV is below h_min, shut down the propellers so that the UAV falls freely and completes the landing.
The fast descent conditions in step 43) are:

z − h_land ≤ h_loop
||(x, y)||_2 ≤ r_loop

where z is the height of the UAV relative to the unmanned boat and ||(x, y)||_2 is the horizontal distance between the UAV and the unmanned boat.
In step 44), if the UAV is above the second height threshold h_min and the camera loses the view of the landmark, the UAV is pulled up, whether in the slow descent or the fast descent stage, until the landmark is found again.
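For illustration only, the decision logic of steps 41) to 44) above can be sketched in Python as follows; all threshold values are placeholders, since the invention does not fix them:

```python
import numpy as np

# Placeholder thresholds (illustrative values, not specified by the invention).
r_loop, h_loop, dz = 0.5, 1.5, 0.2   # meters
h_land, h_min, N = 1.0, 0.15, 10     # meters, meters, frames

def next_setpoint(X_vs, visible_frames, z_sp):
    """One decision step of the staged landing.
    X_vs: relative pose [x y z ...]; visible_frames: consecutive frames
    with the landmark in view; z_sp: current z-axis setpoint.
    Returns (new z setpoint, propellers_on)."""
    x, y, z = X_vs[0], X_vs[1], X_vs[2]
    if z < h_min:                          # 44) landing: cut the propellers
        return 0.0, False
    fast_ok = (z - h_land <= h_loop and
               np.hypot(x, y) <= r_loop and
               visible_frames >= N)
    if z < h_land and fast_ok:             # 43) fast descent: track z = 0
        return 0.0, True
    if visible_frames == 0:                # view lost: climb dz, re-search
        return z_sp + dz, True
    return z_sp - dz, True                 # 42) slow descent: step down dz
```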
Compared with the prior art, the invention has the following advantages:
firstly, real-time effective: in the processes of landmark identification and image processing, mature and open-source AprilTags are adopted, so that the technical threshold can be reduced; by using the monocular camera, the operation complexity is simplified and the equipment cost is reduced on the premise of ensuring the precision.
Secondly, hysteresis is avoided: performing information fusion on the relative pose obtained by the camera and the pose obtained by the IMU to improve the precision; the problem of hysteresis in image processing is avoided by increasing the history.
Thirdly, stable and safe: the unmanned aerial vehicle boat landing nesting control scheme and the staged safe boat landing method are combined, so that the situation that the camera of the unmanned aerial vehicle loses the visual field of a landmark in the boat landing process can be avoided, and meanwhile, the stability and the safety of the unmanned aerial vehicle when the unmanned aerial vehicle is close to the landing platform of the unmanned aerial vehicle can be guaranteed.
Drawings
Fig. 1 is a block diagram of a nesting control scheme for unmanned aerial vehicle landing.
Fig. 2 is a flow chart of a phased safe landing method of the unmanned aerial vehicle in the invention.
Fig. 3 is a schematic diagram of an autonomous landing method of the unmanned aerial vehicle in the invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments.
Examples
The following describes the technical solution of the present invention with reference to fig. 3.
For convenience of explanation, the following symbol convention is made:
X_ij denotes the pose of reference frame j expressed in reference frame i, where a pose is defined as the 6-dimensional column vector [x y z φ θ ψ]^T, in which (x, y, z) are the position coordinates in the reference frame and (φ, θ, ψ) are the rotation angles about the x, y, z axes, called roll, pitch and yaw respectively. The reference frames used are: the UAV reference frame {v}, the local reference frame {l}, the unmanned boat reference frame {s}, the camera reference frame {c} and the landmark reference frame {t}. Some basic symbols for reference frame transformations are also defined: if i, j, k denote three reference frames, the symbol ⊕ represents the accumulation of transformations and satisfies X_ik = X_ij ⊕ X_jk, while the symbol ⊖ represents the inverse operation, X_ji = ⊖X_ij.
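For illustration only, the ⊕ and ⊖ operations can be sketched in Python with NumPy as below; the Z-Y-X (yaw-pitch-roll) rotation convention and the function names are assumptions of this sketch:

```python
import numpy as np

def rot(phi, theta, psi):
    """Rotation matrix for roll phi, pitch theta, yaw psi (Z-Y-X order assumed)."""
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    Rz = np.array([[cp, -sp, 0.0], [sp, cp, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[ct, 0.0, st], [0.0, 1.0, 0.0], [-st, 0.0, ct]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cf, -sf], [0.0, sf, cf]])
    return Rz @ Ry @ Rx

def to_euler(R):
    """Recover (phi, theta, psi) from a Z-Y-X rotation matrix (gimbal lock ignored)."""
    return np.array([np.arctan2(R[2, 1], R[2, 2]),
                     -np.arcsin(R[2, 0]),
                     np.arctan2(R[1, 0], R[0, 0])])

def compose(x_ij, x_jk):
    """X_ik = X_ij (+) X_jk for 6-D poses [x y z phi theta psi]."""
    R_ij = rot(*x_ij[3:])
    p = x_ij[:3] + R_ij @ x_jk[:3]   # parent position plus rotated child position
    return np.hstack([p, to_euler(R_ij @ rot(*x_jk[3:]))])

def invert(x_ij):
    """X_ji = (-)X_ij, the inverse pose."""
    R = rot(*x_ij[3:])
    return np.hstack([-R.T @ x_ij[:3], to_euler(R.T)])
```

With these helpers, the relative pose of the boat in the UAV frame reads X_vs = compose(invert(X_lv), X_ls), i.e. ⊖X_lv ⊕ X_ls.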
In the method provided by the invention, the computation is implemented in software on the onboard computer of the UAV; the UAV carries an inertial measurement unit (IMU) to obtain real-time attitude information and a camera to collect environmental information. Taking a quadrotor UAV landing on an electrically driven unmanned boat as an example, the implementation comprises the following four steps.
Step 1: landmark making. A tag from AprilTags is printed and attached at the corresponding target landing point on the unmanned boat to serve as the landmark. The camera of the UAV is angled slightly forward and downward, so that the UAV can detect the landmark when approaching it.
Step 2: image processing. The AprilTags visual reference system is configured in the onboard computer of the UAV. From the parameter information of the camera and the images it captures, four important parameters are calculated, namely the focal lengths F_w and F_h in pixel units in the width and height directions and the image center position (C_w, C_h). The calculation formula is:

$$F = \frac{L_{focus} \cdot N_{pixel}}{L_{receptor}}, \qquad C = \frac{N_{pixel}}{2}$$

where L_focus and L_receptor are respectively the focal length and the size of the photosensitive element, in millimeters, and N_pixel is the number of pixels; the parameters in the width direction and the height direction are each calculated with this formula.
The calculated parameters are passed to the AprilTags visual reference system, which can then return the relative pose X_ct between the camera and the tag whenever the landmark is found.
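For example, for a hypothetical camera (the figures below are illustrative, not from the invention), the four parameters of step 2 would be computed as:

```python
# Hypothetical camera specs (illustrative values, not from the invention):
L_focus = 3.6                            # physical focal length, mm
L_receptor_w, L_receptor_h = 4.8, 3.6    # photosensitive element size, mm
N_pixel_w, N_pixel_h = 1280, 720         # number of pixels

# F converts the focal length into pixel units; C is the image center.
F_w = L_focus * N_pixel_w / L_receptor_w   # 960.0 px
F_h = L_focus * N_pixel_h / L_receptor_h   # 720.0 px
C_w, C_h = N_pixel_w / 2, N_pixel_h / 2    # (640.0, 360.0) px
```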
Step 3: information fusion. The relative pose X_ct between the camera and the landmark is fused with the measurement data of the IMU to obtain the real-time poses X_lv and X_ls of the UAV and the unmanned boat and the relative pose X_vs. The process is divided into the following four sub-steps:
(1) System state estimation. The information fusion is divided into two stages: the first is the stage in which no landmark is detected, and the second is the stage in which the landmark has been detected.

In the stage in which no landmark is detected, the system state is the pose information provided by the IMU, X = [X_v] with X_v = [X_lv V_v W_v], where V_v is the velocity along the x, y, z directions and W_v is the angular velocity about the x, y, z directions. The system state is estimated as

$$\hat{X}_k = F_{k-1}\hat{X}_{k-1} + w_{k-1}$$

where w_{k-1} is Gaussian white noise. A uniform-velocity model is used here to predict the system state, i.e. F_{k-1} has the form

$$F_{k-1} = \begin{bmatrix} I_6 & \Delta t\, I_6 \\ 0 & I_6 \end{bmatrix}$$

where Δt is the sampling interval (the 6-dimensional pose X_lv is advanced by the stacked velocities [V_v; W_v], which are held constant).
Starting from the first detection of a landmark, the state of the unmanned boat is added to the system state, i.e. X = [X_v X_s] with X_v = [X_lv V_v W_v] and X_s = [X_ls V_s W_s]; F_{k-1} then has the block-diagonal form

$$F_{k-1} = \begin{bmatrix} F_v & 0 \\ 0 & F_s \end{bmatrix}$$

Only the motion of the unmanned boat on the horizontal plane is considered, and the uniform-velocity model is likewise adopted for F_s.
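As a sketch under the assumption that each vehicle's state is ordered [pose(6), linear velocity(3), angular velocity(3)], the block-diagonal F_{k-1} of the uniform-velocity model could be built as follows:

```python
import numpy as np
from scipy.linalg import block_diag

dt = 0.02  # sampling interval Δt (illustrative, 50 Hz)

def cv_block(dt):
    """Uniform-velocity transition for one vehicle, state assumed ordered as
    [x y z phi theta psi | u v w | p q r] (12-D): the pose integrates the
    (angular) velocities, which themselves stay constant."""
    F = np.eye(12)
    F[0:3, 6:9] = dt * np.eye(3)     # position += dt * linear velocity
    F[3:6, 9:12] = dt * np.eye(3)    # attitude += dt * angular velocity
    return F

F_v = cv_block(dt)                   # UAV block
F_s = cv_block(dt)                   # boat block (only horizontal terms used)
F_k1 = block_diag(F_v, F_s)          # F_{k-1} once the boat state is appended
```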
(2) Obtain the sensor observations. The observation models of the IMU and the camera can both be expressed as

$$z[k] = h(X[k]) + v[k]$$

where v[k] ~ N(0, R_k) is the observation noise and h(X) is a function of the state X.
The IMU's observation of the UAV consists of its height, attitude and velocity information:

$$h_{IMU}(X) = [\,z_{lv}\ \ \phi_{lv}\ \ \theta_{lv}\ \ \psi_{lv}\ \ u_{lv}\ \ v_{lv}\,]^T$$

where z_lv is the height, φ_lv, θ_lv, ψ_lv are the rotation angles about the three motion directions x, y, z, and u_lv and v_lv are the forward and lateral velocities respectively. The corresponding Jacobian matrix H_IMU is a constant selection matrix, since each observed quantity is itself a component of the state.
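Under the same assumed state ordering as above, h_IMU and its selection-matrix Jacobian can be sketched as:

```python
import numpy as np

# Assumed UAV state ordering: [x y z phi theta psi u v w p q r]
OBS = ['z', 'phi', 'theta', 'psi', 'u', 'v']
IDX = {'z': 2, 'phi': 3, 'theta': 4, 'psi': 5, 'u': 6, 'v': 7}

def h_imu(X_v):
    """IMU observation: height, the three attitude angles, forward/lateral speed."""
    return np.array([X_v[IDX[k]] for k in OBS])

# H_IMU is constant: each observed quantity is one state component,
# so every row contains a single 1.
H_IMU = np.zeros((len(OBS), 12))
for row, k in enumerate(OBS):
    H_IMU[row, IDX[k]] = 1.0
```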
The camera's observation of the ith landmark is

$$h_{tag}(X_{KF}) = \ominus(X_{lv} \oplus X_{vc}) \oplus (X_{ls} \oplus X_{st})$$

where h_tag(X_KF) is the observation of the camera, X_vc is the pose of the camera in the UAV reference frame, X_st is the pose of the tag in the unmanned boat reference frame, and ⊕ and ⊖ are the coordinate transformation operations; the corresponding Jacobian matrix is H_tag.
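Reusing the compose()/invert() helpers from the pose sketch above (an assumption of this illustration), the camera observation model and a numerical stand-in for H_tag might look like:

```python
import numpy as np
# compose() and invert() are the (+)/(-) helpers from the pose sketch above.

def h_tag(X_lv, X_ls, X_vc, X_st):
    """Predicted tag pose seen by the camera:
    X_ct = (-)(X_lv (+) X_vc) (+) (X_ls (+) X_st)."""
    X_lc = compose(X_lv, X_vc)   # camera pose in the local frame
    X_lt = compose(X_ls, X_st)   # tag pose in the local frame
    return compose(invert(X_lc), X_lt)

def numeric_jacobian(f, x, eps=1e-6):
    """Finite-difference stand-in for H_tag when an analytic form is inconvenient."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - fx) / eps
    return J
```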
(3) Extended Kalman filtering. The above nonlinear filtering problem is solved with an extended Kalman filter, since the state quantities follow a normal distribution X_KF ~ N(μ, Σ), where Σ is the covariance matrix. The extended Kalman filter is updated according to the standard measurement-update formula

$$K = \Sigma H^T (H \Sigma H^T + R)^{-1}, \qquad \mu \leftarrow \mu + K\,(z - h(\mu)), \qquad \Sigma \leftarrow (I - KH)\,\Sigma$$

Through the extended Kalman filtering, the real-time state of the system, X = [X_v X_s], is obtained.
(4) Solving the state-lag problem. Because the image computation is complex, AprilTags can only deliver the result for time k−n, so propagating the extended Kalman filter to time k+1 from the state at time k−n would be inaccurate. We solve this by recording the historical poses of the UAV and the unmanned boat. Under the delay condition the system state estimate is augmented with the recent history,

$$X_{DS} = [\,X[k]\ \ X[k-1]\ \cdots\ X[k-n]\,]$$

where n is the number of delayed states. The iterative calculation of the system state estimate then propagates only the current block while the history shifts,

$$X_{DS}[k+1] = [\,F_k X[k]\ \ X[k]\ \cdots\ X[k-n+1]\,]$$

Accordingly, the Jacobian matrix of the IMU sensor model is zero-padded over the delayed states, $H^{DS}_{IMU} = [H_{IMU}\ 0\ \cdots\ 0]$. Since the camera observation refers to a delayed state, for the jth delayed state X_j the observation model of the camera is evaluated on X_j rather than on the current state. After these corrections, the augmented state X_DS can be updated with the extended Kalman filter. Once a delayed observation has been processed in the update, the corresponding delayed state is discarded and shifted out of the state vector, so the filtering algorithm only needs to store a short segment of historical states.
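As a bookkeeping sketch only (the class and its methods are hypothetical), the short history of states needed for the delayed update could be kept as:

```python
from collections import deque

class DelayedStateBuffer:
    """Keeps the last n filter states so that an AprilTags result computed
    for time k-n can be matched with the pose the filter held at k-n.
    Bookkeeping sketch only; the EKF update itself is as above."""

    def __init__(self, n):
        self.buf = deque(maxlen=n)       # (timestep, mu, Sigma) tuples

    def push(self, k, mu, Sigma):
        self.buf.append((k, mu, Sigma))

    def pop_state_at(self, k_image):
        """Return and discard the stored state for the image timestamp;
        None if the image is older than the stored history."""
        for entry in list(self.buf):
            if entry[0] == k_image:
                self.buf.remove(entry)
                return entry
        return None
```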
After the state-lag problem is solved and the filtered system state information obtained, the relative pose X_vs is calculated from the poses X_lv and X_ls of the UAV and the unmanned boat in the local reference frame. The specific calculation is

$$X_{vs} = \ominus X_{lv} \oplus X_{ls}$$
Step 4: motion control. The fused relative pose is used for motion control of the UAV. A nested control scheme is adopted, with six independent closed-loop PID controllers, in which the inner-loop attitude control guarantees stable flight and the outer-loop position control performs path tracking. Meanwhile, a staged safe landing method is adopted, divided into the following four sub-steps:
(1) When the landmark is found for the first time, the system state is initialized and the UAV begins tracking the landmark.
(2) The UAV is guided to the center of the landmark and then kept centered on the landmark within radius r_loop at height h_loop; the next waypoint is searched vertically downward with step length Δz to descend. Once the view of the landmark is lost, the UAV climbs Δz until the landmark is rediscovered; this stage is called the slow descent stage.
(3) A smaller height threshold h_land is set; below h_land the UAV is considered ready to land directly without further loitering. If the landmark stays in the camera view for N consecutive frames and satisfies

z − h_land ≤ h_loop,
||(x, y)||_2 ≤ r_loop,

the fast descent stage can be entered, taking 0 directly as the reference signal for the z axis,
where z is the height of the UAV relative to the unmanned boat and ||(x, y)||_2 is the horizontal distance between the UAV and the unmanned boat.
(4) When the UAV descends to a certain height, the camera can no longer identify the landmark at such short range, so positioning becomes impossible. Therefore a very small height threshold h_min is set; below this height the propellers are shut down and the UAV is allowed to land in free fall. If, however, the camera loses the view of the landmark above h_min, the UAV is pulled up, whether in the fast or the slow descent stage, until the landmark is found again.