CN114326765A - Landmark tracking control system and method for visual landing of unmanned aerial vehicle - Google Patents

Landmark tracking control system and method for visual landing of unmanned aerial vehicle

Info

Publication number
CN114326765A
Authority
CN
China
Prior art keywords
aerial vehicle
unmanned aerial
target
landmark
control
Prior art date
Legal status
Granted
Application number
CN202111452916.7A
Other languages
Chinese (zh)
Other versions
CN114326765B (en)
Inventor
亚德
周佳欢
唐亮
Current Assignee
Aidi Uav Technology Nanjing Co ltd
Original Assignee
Aidi Uav Technology Nanjing Co ltd
Priority date
Filing date
Publication date
Application filed by Aidi Uav Technology Nanjing Co ltd filed Critical Aidi Uav Technology Nanjing Co ltd
Priority to CN202111452916.7A priority Critical patent/CN114326765B/en
Publication of CN114326765A publication Critical patent/CN114326765A/en
Application granted granted Critical
Publication of CN114326765B publication Critical patent/CN114326765B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a landmark tracking control system and method for visual landing of an unmanned aerial vehicle. The system comprises: a landmark attitude acquisition module, which obtains the attitude information of the landmark relative to the camera through a Nanodet target detection algorithm and an Apriltag detection algorithm; a gimbal attitude acquisition module, which obtains the gimbal attitude angle w_gimbal from the inertial sensing unit of the gimbal; an unmanned aerial vehicle state acquisition module, which obtains the unmanned aerial vehicle state information s_drone from the sensors carried by the unmanned aerial vehicle; and a control information generation module, which generates control quantity information comprising a gimbal attitude control quantity, an unmanned aerial vehicle attitude control quantity and an unmanned aerial vehicle speed control quantity. The invention solves, as a whole, the technical problems that existing visual landing of unmanned aerial vehicles suffers from unstable landing-landmark detection, that the visual landing landmarks are limited to a single pattern, and that the detection range of existing visual landing schemes is small, so that a large-range target search cannot be realized in a short time.

Description

Landmark tracking control system and method for visual landing of unmanned aerial vehicle
Technical Field
The invention relates to a landmark tracking control system and method for visual landing of an unmanned aerial vehicle, and belongs to the technical field of computer control.
Background
Visual landing control is one of the most important tasks in unmanned aerial vehicle research. During inspection, return-to-home, charging and similar tasks, the unmanned aerial vehicle must be able to perform an accurate visual landing.
To achieve an accurate visual landing, real-time and accurate detection and tracking of the landing landmark and stable control of the unmanned aerial vehicle must be guaranteed. To guarantee detection speed on a flight control board with limited computing power, the complexity of the detection algorithm cannot be too high; to guarantee stable control of the unmanned aerial vehicle, the tracking algorithm must be sufficiently reliable and the control quantities sufficiently smooth. How to balance the real-time performance of landmark detection, the tracking accuracy and the robustness of the unmanned aerial vehicle control is therefore very important.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and to solve the following technical problems. 1. Existing visual landing of unmanned aerial vehicles suffers from unstable landing-landmark detection. Visual landing landmarks are mainly detected with the Apriltag detection algorithm, whose localization depends on detecting edges in the image; when the image resolution is low or the target is far away, the edge features of the target are difficult to capture accurately. In addition, when the unmanned aerial vehicle is in flight, the acquired images frequently shake, which further destroys the edge features of the target and seriously degrades the performance of the Apriltag detection algorithm. See, for example, CN 202010978134. 2. Existing visual landing landmarks are limited to a single, simple pattern and cannot meet personalized customization requirements. Owing to the limitations of visual detection algorithms, landing landmarks generally use patterns with obvious edge or shape features, such as an H pattern or a circle. On the one hand, such basic patterns can hardly meet personalized customization requirements; on the other hand, because the pattern features are so simple, the detection result often contains considerable noise. See, for example, CN 201911136463. 3. Existing visual landing schemes for unmanned aerial vehicles have a small detection range and cannot realize a large-range target search in a short time, because the camera is generally fixed at the bottom of the unmanned aerial vehicle, so the search range is fixed and large-range landmark detection can only be achieved in cooperation with the motion of the unmanned aerial vehicle.
The invention specifically adopts the following technical solution: a landmark tracking control system for visual landing of an unmanned aerial vehicle, comprising:
a landmark attitude acquisition module, which specifically executes: obtaining the attitude information of the landmark relative to the camera through a Nanodet target detection algorithm and an Apriltag detection algorithm, wherein the attitude information of the landmark relative to the camera comprises the three-dimensional position information p_target and the angle information w_target of the landmark relative to the camera;
a gimbal attitude acquisition module, which specifically executes: obtaining the gimbal attitude angle w_gimbal from the inertial sensing unit of the gimbal, wherein the gimbal attitude angle w_gimbal comprises the pitch angle, the yaw angle and the roll angle of the gimbal;
an unmanned aerial vehicle state acquisition module, which specifically executes: obtaining the unmanned aerial vehicle state information s_drone from measurements of the sensors carried by the unmanned aerial vehicle, wherein the unmanned aerial vehicle state information s_drone comprises real-time attitude and altitude flight data of the unmanned aerial vehicle;
a control information generation module, which specifically executes: from the information acquired by the landmark attitude acquisition module, the gimbal attitude acquisition module and the unmanned aerial vehicle state acquisition module, respectively generating control quantity information comprising a gimbal attitude control quantity u_gimbal, an unmanned aerial vehicle attitude control quantity u_att and an unmanned aerial vehicle speed control quantity u_vel; the control quantity information is expressed as:
u = F(p_target, w_target, w_gimbal, s_drone)
where u = (u_gimbal, u_att, u_vel) and F denotes a control algorithm comprising an extended Kalman filtering algorithm, a PID control algorithm and a visual servoing algorithm.
The invention also provides a control method based on the above landmark tracking control system for visual landing of an unmanned aerial vehicle, comprising the following steps:
step SS1: the unmanned aerial vehicle prepares to land, and a Nanodet target detection algorithm is used to judge whether a landing landmark has been found; if yes, proceed to step SS2; if no, rotate the gimbal to search for the target and repeat the judgment;
step SS2: performing target tracking with an extended Kalman filtering algorithm;
step SS3: controlling the gimbal so that the target is located at the centre of the field of view;
step SS4: calculating the position information of the target relative to the unmanned aerial vehicle from the attitude information of the gimbal;
step SS5: controlling the unmanned aerial vehicle so that it is located above the target; fixing the attitude of the gimbal so that the camera remains vertically downward; moving the unmanned aerial vehicle towards the target with visual servo control and adjusting its attitude; if the position error and the attitude error are judged to be smaller than the specified thresholds, landing; otherwise, returning to step SS5.
As a preferred embodiment, step SS1 specifically includes: the unmanned aerial vehicle hovers near the landing point, and the 3-axis brushless gimbal located at the bottom of the unmanned aerial vehicle is controlled to search a fan-shaped range for the landing landmark; the search area is gradually enlarged by continuously adjusting the pitch angle and the roll angle of the gimbal, and the Nanodet target detection algorithm extracts image features through a ShuffleNet multilayer convolutional neural network, fuses multi-scale image features through a feature pyramid network, and completes the identification and localization of the landmark through a shallow classification network and a regression network.
As a preferred embodiment, step SS2 specifically includes: when the landing landmark is detected in the field of view, the landmark is tracked with an extended Kalman filtering algorithm. Considering that the yaw angle of the unmanned aerial vehicle changes while it tracks the target, the angle and angular velocity of the target are also placed in the state space of the extended Kalman filter; that is, the state vector consists of the pixel position, speed, angle and angular velocity of the target in the image field of view, and the measurement is the pixel position information obtained by the Nanodet target detection algorithm.
As a preferred embodiment, step SS2 further includes: taking the upper left corner of the target as an example, the corresponding state quantity is expressed as
x_k = (p_x, p_y, v, ψ, ψ̇)^T
where p_x and p_y denote the pixel position of the upper left corner of the target in the horizontal and vertical directions of the image at time k, v denotes the current linear velocity, ψ denotes the angle between the current moving direction of the target and the horizontal direction, and ψ̇ denotes the current angular velocity. The state of the target at time k+1 is obtained by state prediction from time k; the prediction propagates the pixel position with the current linear velocity along the direction ψ and propagates ψ with the current angular velocity over one sampling interval. To simplify the model, the changes of the velocity and the angular velocity of the target in the image from time k to time k+1 are treated as process noise, and the target acceleration v̇ and angular acceleration ψ̈ are not considered in the state transition equation, i.e. both are taken as 0. In the process noise, the acceleration noise w_a,k and the angular acceleration noise w_ψ̈,k of the target moving in the image are assumed to follow zero-mean Gaussian distributions, i.e. w_a,k ~ N(0, σ_a²) and w_ψ̈,k ~ N(0, σ_ψ̈²). Since the detection result of the Nanodet target detection algorithm gives the pixel coordinates of the upper left corner and the lower right corner of the target, the corresponding measurement equation is
z_k = H·x_k + v_k
where z_k denotes the detection result of the Nanodet target detection algorithm, H = (1, 0, 0, 0, 0; 0, 1, 0, 0, 0), and v_k denotes the measurement noise.
As a preferred embodiment, step SS3 specifically includes: to ensure that the landmark is not lost during tracking, after the landmark is tracked by the Nanodet target detection algorithm and the extended Kalman filtering algorithm, the gimbal at the bottom of the unmanned aerial vehicle is adjusted so that the landmark stays in the middle of the field of view.
As a preferred embodiment, step SS3 further includes: taking a 3-axis brushless gimbal as an example, the horizontal offset of the landmark in the image corresponds to a change of the roll angle of the gimbal, and the vertical offset corresponds to a change of the pitch angle of the gimbal. Let W and H be the width and height of the image, and let the field angles of the camera in the horizontal and vertical directions be θ_x and θ_y. Suppose the coordinates of the centre point of the target in the image at a certain moment are (x_t, y_t), the straight-line distance from the target to the camera is R, and the deflection angles of the target relative to the camera in the horizontal and vertical directions are α and β. The following relations are obtained from trigonometric functions:
W = 2kR·tanθ_x
H = 2kR·tanθ_y
x_t = kR·tanα
y_t = kR·tanβ
where k denotes the scaling factor from three-dimensional space to the pixel plane, determined by the camera intrinsic parameters and obtained from the relations above as
k = W / (2R·tanθ_x) = H / (2R·tanθ_y)
so that
α = arctan(2·x_t·tanθ_x / W), β = arctan(2·y_t·tanθ_y / H).
With the angle error of the current gimbal tracking taken as e = (α, β), the following PID controller is designed for gimbal control:
u(t) = K_p·e(t) + K_i·∫₀ᵗ e(τ) dτ + K_d·de(t)/dt
where K_p, K_i and K_d denote the proportional, integral and derivative coefficients of the gimbal control, respectively, and u is the final gimbal angle change control quantity.
As a preferred embodiment, step SS4 specifically includes:
calculating the position information of the current target relative to the unmanned aerial vehicle from the current attitude information of the gimbal and the position information of the target in the camera. Taking a ground landmark as an example, let the position of the target relative to the unmanned aerial vehicle be (x, y, z); the corresponding calculation formulas are:
x = z·tanα
y = z·tanβ
z = d / √(1 + tan²α + tan²β)
where α and β are the deflection angles of the target in the horizontal and vertical directions in the camera, and d is the distance from the target to the camera, calculated from the attitude information of the target relative to the camera.
As a preferred embodiment, step SS5 specifically includes:
after the unmanned aerial vehicle acquires the position information of the target, the gimbal starts to track the target, and the specific control mode of the unmanned aerial vehicle depends on the current attitude angle information of the gimbal camera. When the gimbal camera is not in the vertical state, only PID position control is applied to the unmanned aerial vehicle, and the input error signal is the deviation between the three-dimensional position of the target relative to the unmanned aerial vehicle and the desired target position, i.e.:
e = (x_current − x_target, y_current − y_target, z_current − z_target)^T
u_in(t) = K_p·e(t) + K_i·∫₀ᵗ e(τ) dτ + K_d·de(t)/dt
To further smooth the control curve and limit the response range of the output control, the control quantity is mapped with a saturating function, where u_in is the output of the PID position control, u_out is the final position control quantity obtained after the mapping, U_max denotes the maximum control quantity of the unmanned aerial vehicle, and λ is an attenuation factor that determines the attenuation speed of the control quantity.
When the unmanned aerial vehicle gradually approaches the position directly above the landmark, the pitch angle and the roll angle of the gimbal tend to 0 and the unmanned aerial vehicle enters the effective detection range of the Apriltag detection algorithm. The unmanned aerial vehicle then enters the visual landing stage: the viewing direction of the camera is kept vertically downward, an extended Kalman filter is used to track the 4 vertices of the target in the image, and the three-dimensional spatial position relative to the unmanned aerial vehicle is accurately calculated from the camera intrinsic parameters and the pixel position information given by the Apriltag detection algorithm in the image. Because the speed of the unmanned aerial vehicle has already become stable when the distance to the landing landmark is small, a visual servo control algorithm is used to apply position control and angle control to the unmanned aerial vehicle; the input error signal at this time is the deviation between the pixel coordinates of the current landmark in the image and the pixel coordinates obtained by mapping the three-dimensional spatial position of the landmark into the image, i.e.:
e(t) = s[m(t), a] − s*
where s denotes the coordinates of the 4 vertices of the landmark on the image plane, s* denotes the desired position of the landmark on the image plane, m denotes the pixel coordinates of the target in the image, and a denotes the camera intrinsic parameters used to map the pixel coordinates onto the image plane.
As a preferred embodiment, step SS5 further includes: the visual servo control law applied according to the error input is calculated as
v_c = −λ·L_x⁺·e(t)
where v_c denotes the desired linear and angular velocity vector of the camera; L_x denotes the image Jacobian, determined by the camera intrinsic parameters and the pixel coordinates and responsible for mapping velocity changes in the pixel coordinate system into the camera coordinate system, and L_x⁺ is its pseudo-inverse; λ denotes the visual servo gain, which determines the magnitude of the control action.
The invention achieves the following beneficial effects. 1. The landmark tracking control system and method for visual landing of an unmanned aerial vehicle disclosed by the invention combine deep learning with the Apriltag detection algorithm: deep-learning detection is robust to small targets and to target blur and jitter, so effective detection can be realized at a long distance and when the speed of the unmanned aerial vehicle changes rapidly, while the Apriltag detection algorithm acquires accurate target attitude information and realizes accurate attitude control at close range; the combination of the two realizes landing-landmark detection at both long and short distances. 2. By cooperating with deep-learning target detection, the landmark is no longer limited to Apriltag and simple patterns, and personalized customization is possible for different usage scenarios. 3. By cooperating with the three-axis gimbal, a large-range target search can be realized, the reliability of visual landing is improved, and the dependence on other positioning modules (such as GPS and inertial sensing units) is reduced. 4. The invention obtains the attitude information of the landmark relative to the camera through the Nanodet target detection algorithm and the Apriltag detection algorithm, including the three-dimensional position information p_target and the angle information w_target of the landmark relative to the camera; obtains the gimbal attitude angle w_gimbal (comprising the pitch, yaw and roll angles of the gimbal) from the inertial sensing unit of the gimbal; obtains the unmanned aerial vehicle state information s_drone (comprising real-time attitude and altitude flight data) from the sensors carried by the unmanned aerial vehicle; and from the information acquired by the landmark attitude acquisition module, the gimbal attitude acquisition module and the unmanned aerial vehicle state acquisition module respectively generates control quantity information comprising the gimbal attitude control quantity u_gimbal, the unmanned aerial vehicle attitude control quantity u_att and the unmanned aerial vehicle speed control quantity u_vel. This, as a whole, solves the technical problems that existing visual landing of unmanned aerial vehicles suffers from unstable landing-landmark detection, that existing visual landing landmarks are single and simple in pattern and cannot meet personalized customization requirements, and that the detection range of existing visual landing schemes is small, so that a large-range target search cannot be realized in a short time.
Drawings
FIG. 1 is a schematic topological diagram of a landmark tracking control system for visual landing of an unmanned aerial vehicle according to the present invention;
fig. 2 is a flowchart of a landmark tracking control method for visual landing of an unmanned aerial vehicle according to the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
Example 1: as shown in fig. 1, the present invention provides a landmark tracking control system for visual landing of an unmanned aerial vehicle, comprising:
a landmark attitude acquisition module, which specifically executes: obtaining the attitude information of the landmark relative to the camera through a Nanodet target detection algorithm and an Apriltag detection algorithm, wherein the attitude information of the landmark relative to the camera comprises the three-dimensional position information p_target and the angle information w_target of the landmark relative to the camera;
a gimbal attitude acquisition module, which specifically executes: obtaining the gimbal attitude angle w_gimbal from the inertial sensing unit of the gimbal, wherein the gimbal attitude angle w_gimbal comprises the pitch angle, the yaw angle and the roll angle of the gimbal;
an unmanned aerial vehicle state acquisition module, which specifically executes: obtaining the unmanned aerial vehicle state information s_drone from measurements of the sensors carried by the unmanned aerial vehicle, wherein the unmanned aerial vehicle state information s_drone comprises real-time attitude and altitude flight data of the unmanned aerial vehicle;
a control information generation module, which specifically executes: from the information acquired by the landmark attitude acquisition module, the gimbal attitude acquisition module and the unmanned aerial vehicle state acquisition module, respectively generating control quantity information comprising a gimbal attitude control quantity u_gimbal, an unmanned aerial vehicle attitude control quantity u_att and an unmanned aerial vehicle speed control quantity u_vel; the control quantity information is expressed as:
u = F(p_target, w_target, w_gimbal, s_drone)
where u = (u_gimbal, u_att, u_vel) and F denotes a control algorithm comprising an extended Kalman filtering algorithm, a PID control algorithm and a visual servoing algorithm.
Example 2: as shown in fig. 2, the present invention provides a landmark tracking control method for visual landing of an unmanned aerial vehicle, which specifically includes the following 6 steps.
1. The unmanned aerial vehicle hovers near the landing point, and the 3-axis brushless gimbal located at the bottom of the unmanned aerial vehicle is controlled to search a fan-shaped range for the landing landmark. The search area can be gradually enlarged by continuously adjusting the pitch angle and the roll angle of the gimbal. Because the target is still far away, the landmark is very small in the field of view, so the detection algorithm adopts the lightweight Nanodet model, which automatically extracts image features through a ShuffleNet multilayer convolutional neural network, fuses multi-scale image features through a feature pyramid network, and completes the identification and localization of the landmark through a shallow classification network and a regression network.
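As an illustration of this fan-shaped search, the following Python sketch sweeps the gimbal pitch and roll over a gradually widening range until the detector reports a landmark. The gimbal interface (set_angles, grab_frame) and the detect_landmark wrapper around the Nanodet model are illustrative assumptions, not identifiers from the original disclosure.

```python
import numpy as np

def search_landmark(gimbal, detect_landmark, max_amplitude=60.0, step=10.0):
    """Widen a fan-shaped gimbal sweep until the Nanodet-based detector
    reports a landing landmark; returns the detection and the gimbal angles."""
    for amplitude in np.arange(step, max_amplitude + step, step):
        for pitch in np.arange(-amplitude, amplitude + step, step):
            for roll in np.arange(-amplitude, amplitude + step, step):
                gimbal.set_angles(pitch=pitch, roll=roll)         # hypothetical gimbal API
                detection = detect_landmark(gimbal.grab_frame())  # Nanodet inference wrapper
                if detection is not None:
                    return detection, (pitch, roll)
    return None, None
```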
2. When the landing landmark is detected in the field of view, the landmark is tracked with extended Kalman filtering. Considering that the yaw angle of the unmanned aerial vehicle changes while it tracks the target, the angle and angular velocity of the target are also placed in the state space of the extended Kalman filter; that is, the state vector consists of the pixel position, speed, angle and angular velocity of the target in the image field of view, and the measurement is the pixel position information obtained by the Nanodet detection. Taking the upper left corner of the target as an example, the corresponding state quantity is expressed as
x_k = (p_x, p_y, v, ψ, ψ̇)^T
where p_x and p_y denote the pixel position of the upper left corner of the target in the horizontal and vertical directions of the image at time k, v denotes the current linear velocity, ψ denotes the angle between the current moving direction of the target and the horizontal direction, and ψ̇ denotes the current angular velocity. The state of the target at time k+1 is obtained by state prediction from time k; the prediction propagates the pixel position with the current linear velocity along the direction ψ and propagates ψ with the current angular velocity over one sampling interval. For model simplification, the changes of the velocity and the angular velocity of the target in the image from time k to time k+1 are treated as process noise, and the target acceleration and angular acceleration are taken as 0 in the state transition equation. In the process noise, the acceleration noise w_a,k and the angular acceleration noise w_ψ̈,k of the target moving in the image are assumed to follow zero-mean Gaussian distributions, i.e. w_a,k ~ N(0, σ_a²) and w_ψ̈,k ~ N(0, σ_ψ̈²). Since the Nanodet detection result gives the pixel coordinates of the upper left corner and the lower right corner of the target, the corresponding measurement equation is
z_k = H·x_k + v_k
where z_k denotes the Nanodet detection result, H = (1, 0, 0, 0, 0; 0, 1, 0, 0, 0), and v_k denotes the measurement noise.
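A minimal numerical sketch of this tracker is given below. Because the state transition formula appears only as an image in the original publication, the sketch assumes a constant-speed, constant-turn-rate motion model for the tracked corner, which is consistent with the stated assumptions (acceleration and angular acceleration treated as zero-mean process noise); it is a plausible reconstruction, not a verbatim copy.

```python
import numpy as np

def f(x, dt):
    """Constant-speed, constant-turn-rate prediction of the corner state
    (px, py, v, psi, psi_dot); acceleration terms are left to the process noise."""
    px, py, v, psi, psi_dot = x
    return np.array([px + v * np.cos(psi) * dt,
                     py + v * np.sin(psi) * dt,
                     v,
                     psi + psi_dot * dt,
                     psi_dot])

def F_jac(x, dt):
    """Jacobian of f with respect to the state, used in the covariance prediction."""
    _, _, v, psi, _ = x
    return np.array([[1, 0, np.cos(psi) * dt, -v * np.sin(psi) * dt, 0],
                     [0, 1, np.sin(psi) * dt,  v * np.cos(psi) * dt, 0],
                     [0, 0, 1, 0, 0],
                     [0, 0, 0, 1, dt],
                     [0, 0, 0, 0, 1]], dtype=float)

H = np.array([[1, 0, 0, 0, 0],   # only the pixel position of the corner is measured
              [0, 1, 0, 0, 0]], dtype=float)

def ekf_step(x, P, z, dt, Q, R):
    """One EKF predict/update cycle; z is the corner pixel position from Nanodet."""
    x_pred = f(x, dt)
    Fk = F_jac(x, dt)
    P_pred = Fk @ P @ Fk.T + Q
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(5) - K @ H) @ P_pred
    return x_new, P_new
```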
3. To ensure that the landmark is not lost during tracking, after the landmark is tracked by the Nanodet detection model and the extended Kalman filtering, the gimbal at the bottom of the unmanned aerial vehicle is adjusted so that the landmark stays in the middle of the field of view. Taking a common 3-axis brushless gimbal as an example, the horizontal offset of the landmark in the image corresponds to a change of the roll angle of the gimbal, and the vertical offset corresponds to a change of the pitch angle of the gimbal. Let W and H be the width and height of the image, and let the field angles of the camera in the horizontal and vertical directions be θ_x and θ_y. Assuming that the coordinates of the centre point of the target in the image at a certain moment are (x_t, y_t), the straight-line distance from the target to the camera is R, and the deflection angles of the target relative to the camera in the horizontal and vertical directions are α and β, the following relations are obtained from trigonometric functions:
W = 2kR·tanθ_x
H = 2kR·tanθ_y
x_t = kR·tanα
y_t = kR·tanβ
where k denotes the scaling factor from three-dimensional space to the pixel plane, determined by the camera intrinsic parameters. From the formulas above one obtains:
α = arctan(2·x_t·tanθ_x / W)
β = arctan(2·y_t·tanθ_y / H)
With the angle error of the current gimbal tracking taken as e = (α, β), the following PID controller can be designed for gimbal control:
u(t) = K_p·e(t) + K_i·∫₀ᵗ e(τ) dτ + K_d·de(t)/dt
where K_p, K_i and K_d denote the proportional, integral and derivative coefficients of the gimbal control, respectively, and u is the final gimbal angle change control quantity.
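The relations above can be transcribed directly into code. In the sketch below, (xt, yt) is the pixel offset of the target centre from the image centre and theta_x, theta_y are the half field angles in radians; the PID class is a generic textbook discrete form and the gains are placeholder values, not values from the patent.

```python
import numpy as np

def angular_error(xt, yt, W, H, theta_x, theta_y):
    """Angular errors (alpha, beta) of the target centre, with (xt, yt) the pixel
    offset from the image centre, following tan(alpha) = 2*xt*tan(theta_x)/W and
    tan(beta) = 2*yt*tan(theta_y)/H."""
    alpha = np.arctan(2.0 * xt * np.tan(theta_x) / W)
    beta = np.arctan(2.0 * yt * np.tan(theta_y) / H)
    return alpha, beta

class PID:
    """Generic discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example usage: one controller per axis (roll follows alpha, pitch follows beta);
# the gains here are illustrative only.
roll_pid, pitch_pid = PID(0.8, 0.05, 0.1), PID(0.8, 0.05, 0.1)
```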
4. The position information of the current target relative to the unmanned aerial vehicle is calculated from the current attitude information of the gimbal and the position information of the target in the camera. Taking a ground landmark as an example, let the position of the target relative to the unmanned aerial vehicle be (x, y, z); the corresponding calculation formulas are:
x = z·tanα
y = z·tanβ
z = d / √(1 + tan²α + tan²β)
where α and β are the deflection angles of the target in the horizontal and vertical directions in the camera, and d is the distance from the target to the camera, calculated from the attitude information of the target relative to the camera.
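A direct transcription of these formulas in Python; the expression for z is reconstructed from the geometric relation d² = x² + y² + z², since the original equation is given only as an image.

```python
import numpy as np

def target_position_relative_to_uav(alpha, beta, d):
    """Position (x, y, z) of the landmark relative to the UAV from the camera
    deflection angles alpha, beta (radians) and the target-to-camera distance d.
    z is recovered from d^2 = x^2 + y^2 + z^2 with x = z*tan(alpha), y = z*tan(beta)."""
    ta, tb = np.tan(alpha), np.tan(beta)
    z = d / np.sqrt(1.0 + ta**2 + tb**2)
    return z * ta, z * tb, z
```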
5. After the unmanned aerial vehicle acquires the position information of the target, the gimbal starts to track the target, and the specific control mode of the unmanned aerial vehicle depends on the current attitude angle information of the gimbal camera.
When the gimbal camera is not in the vertical state, only PID position control is applied to the unmanned aerial vehicle, and the input error signal is the deviation between the three-dimensional position of the target relative to the unmanned aerial vehicle and the desired target position, i.e.:
e = (x_current − x_target, y_current − y_target, z_current − z_target)^T
u_in(t) = K_p·e(t) + K_i·∫₀ᵗ e(τ) dτ + K_d·de(t)/dt
To further smooth the control curve and limit the response range of the output control, the control quantity is mapped with a saturating function, where u_in is the output of the PID position control, u_out is the final position control quantity obtained after the mapping, U_max denotes the maximum control quantity of the unmanned aerial vehicle, and λ is an attenuation factor that determines the attenuation speed of the control quantity.
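The exact mapping function is shown only as an image in the original text. The sketch below uses an exponential saturation of the form u_out = U_max·(1 − e^(−λ|u_in|))·sign(u_in), which matches the described behaviour (bounded by U_max, attenuation controlled by λ) but is an assumed form rather than the patented formula.

```python
import numpy as np

def map_control(u_in, u_max, lam):
    """Smooth, sign-preserving saturation of the PID output: the result never
    exceeds u_max in magnitude, and lam controls how quickly it approaches the bound.
    (Example exponential form; the patent only names the parameters U_max and lambda.)"""
    return np.sign(u_in) * u_max * (1.0 - np.exp(-lam * np.abs(u_in)))
```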
As the unmanned aerial vehicle gradually approaches the landing landmark, it enters the effective detection range of Apriltag; an extended Kalman filter is used to track the 4 vertices of the target in the image, and the three-dimensional spatial position relative to the unmanned aerial vehicle can be accurately calculated from the camera intrinsic parameters and the pixel position information of Apriltag in the image. Because the speed of the unmanned aerial vehicle tends to be stable when the distance to the landing landmark is small, the Apriltag target detection is very robust at this stage and no longer relies on deep learning. Visual servo control is then used to apply position control and angle control to the unmanned aerial vehicle; the input error signal at this time is the deviation between the pixel coordinates of the current landmark in the image and the pixel coordinates obtained by mapping the three-dimensional spatial position of the landmark into the image, i.e.:
e(t) = s[m(t), a] − s*
where s denotes the coordinates of the 4 vertices of the landmark on the image plane, s* denotes the desired position of the landmark on the image plane, m denotes the pixel coordinates of the target in the image, and a denotes the camera intrinsic parameters used to map the pixel coordinates onto the image plane. The visual servo control law applied according to the error input is calculated as
v_c = −λ·L_x⁺·e(t)
where v_c denotes the desired linear and angular velocity vector of the camera; L_x denotes the image Jacobian, determined by the camera intrinsic parameters and the pixel coordinates and responsible for mapping velocity changes in the pixel coordinate system into the camera coordinate system, and L_x⁺ is its pseudo-inverse; λ denotes the visual servo gain, which determines the magnitude of the control action.
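A compact sketch of this image-based visual servoing step is given below. The interaction (image Jacobian) matrix uses the standard point-feature form from the visual-servoing literature, stacked over the 4 landmark vertices; this particular construction of L_x is an assumption, since the text only states that L_x is determined by the camera intrinsic parameters and the pixel coordinates.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard 2x6 interaction matrix of a point feature with normalized image
    coordinates (x, y) and depth Z; camera velocity is [vx, vy, vz, wx, wy, wz]."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam):
    """v_c = -lambda * pinv(L) * e over the 4 landmark vertices.
    features, desired: (4, 2) arrays of current / desired normalized coordinates."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (features - desired).reshape(-1)
    return -lam * np.linalg.pinv(L) @ e
```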
6. The unmanned aerial vehicle rapidly reaches the target position according to the visual servo control law, and is commanded to land when the error falls below the specified threshold.
Compared with the prior art, the invention has the following advantages. 1. The landmark tracking control system and method for visual landing of an unmanned aerial vehicle disclosed by the invention combine deep learning with the Apriltag detection algorithm: deep-learning detection is robust to small targets and to target blur and jitter, so effective detection can be realized at a long distance and when the speed of the unmanned aerial vehicle changes rapidly, while the Apriltag detection algorithm acquires accurate target attitude information and realizes accurate attitude control at close range; the combination of the two realizes landing-landmark detection at both long and short distances. 2. By cooperating with deep-learning target detection, the landmark is no longer limited to Apriltag and simple patterns, and personalized customization is possible for different usage scenarios. 3. By cooperating with the three-axis gimbal, a large-range target search can be realized, the reliability of visual landing is improved, and the dependence on other positioning modules (such as GPS and inertial sensing units) is reduced.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the embodiments of the invention without departing from the spirit and scope of the invention, which is to be covered by the claims.

Claims (10)

1. A landmark tracking control system for visual landing of an unmanned aerial vehicle, comprising:
a landmark attitude acquisition module, which specifically executes: obtaining the attitude information of the landmark relative to the camera through a Nanodet target detection algorithm and an Apriltag detection algorithm, wherein the attitude information of the landmark relative to the camera comprises the three-dimensional position information p_target and the angle information w_target of the landmark relative to the camera;
a gimbal attitude acquisition module, which specifically executes: obtaining the gimbal attitude angle w_gimbal from the inertial sensing unit of the gimbal, wherein the gimbal attitude angle w_gimbal comprises the pitch angle, the yaw angle and the roll angle of the gimbal;
an unmanned aerial vehicle state acquisition module, which specifically executes: obtaining the unmanned aerial vehicle state information s_drone from measurements of the sensors carried by the unmanned aerial vehicle, wherein the unmanned aerial vehicle state information s_drone comprises real-time attitude and altitude flight data of the unmanned aerial vehicle;
a control information generation module, which specifically executes: from the information acquired by the landmark attitude acquisition module, the gimbal attitude acquisition module and the unmanned aerial vehicle state acquisition module, respectively generating control quantity information comprising a gimbal attitude control quantity u_gimbal, an unmanned aerial vehicle attitude control quantity u_att and an unmanned aerial vehicle speed control quantity u_vel; the control quantity information is expressed as:
u = F(p_target, w_target, w_gimbal, s_drone)
where u = (u_gimbal, u_att, u_vel) and F denotes a control algorithm comprising an extended Kalman filtering algorithm, a PID control algorithm and a visual servoing algorithm.
2. A control method based on the landmark tracking control system for visual landing of an unmanned aerial vehicle, characterized by comprising the following steps:
step SS1: the unmanned aerial vehicle prepares to land, and a Nanodet target detection algorithm is used to judge whether a landing landmark has been found; if yes, proceed to step SS2; if no, rotate the gimbal to search for the target and repeat the judgment;
step SS2: performing target tracking with an extended Kalman filtering algorithm;
step SS3: controlling the gimbal so that the target is located at the centre of the field of view;
step SS4: calculating the position information of the target relative to the unmanned aerial vehicle from the attitude information of the gimbal;
step SS5: controlling the unmanned aerial vehicle so that it is located above the target; fixing the attitude of the gimbal so that the camera remains vertically downward; moving the unmanned aerial vehicle towards the target with visual servo control and adjusting its attitude; if the position error and the attitude error are judged to be smaller than the specified thresholds, landing; otherwise, returning to step SS5.
3. The method according to claim 2, wherein step SS1 specifically comprises: the unmanned aerial vehicle hovers near the landing point, and the 3-axis brushless gimbal located at the bottom of the unmanned aerial vehicle is controlled to search a fan-shaped range for the landing landmark; the search area is gradually enlarged by continuously adjusting the pitch angle and the roll angle of the gimbal, and the Nanodet target detection algorithm extracts image features through a ShuffleNet multilayer convolutional neural network, fuses multi-scale image features through a feature pyramid network, and completes the identification and localization of the landmark through a shallow classification network and a regression network.
4. The method according to claim 2, wherein step SS2 specifically comprises: when the landing landmark is detected in the field of view, the landmark is tracked with an extended Kalman filtering algorithm; considering that the yaw angle of the unmanned aerial vehicle changes while it tracks the target, the angle and angular velocity of the target are also placed in the state space of the extended Kalman filter, that is, the state vector consists of the pixel position, speed, angle and angular velocity of the target in the image field of view, and the measurement is the pixel position information obtained by the Nanodet target detection algorithm.
5. The method according to claim 4, wherein step SS2 further comprises: taking the upper left corner of the target as an example, the corresponding state quantity is expressed as
x_k = (p_x, p_y, v, ψ, ψ̇)^T
where p_x and p_y denote the pixel position of the upper left corner of the target in the horizontal and vertical directions of the image at time k, v denotes the current linear velocity, ψ denotes the angle between the current moving direction of the target and the horizontal direction, and ψ̇ denotes the current angular velocity; the state of the target at time k+1 is obtained by state prediction from time k, the prediction propagating the pixel position with the current linear velocity along the direction ψ and propagating ψ with the current angular velocity over one sampling interval; to simplify the model, the changes of the velocity and the angular velocity of the target in the image from time k to time k+1 are treated as process noise, and the target acceleration v̇ and angular acceleration ψ̈ are not considered in the state transition equation, i.e. both are taken as 0; in the process noise, the acceleration noise w_a,k and the angular acceleration noise w_ψ̈,k of the target moving in the image are assumed to follow zero-mean Gaussian distributions, i.e. w_a,k ~ N(0, σ_a²) and w_ψ̈,k ~ N(0, σ_ψ̈²); since the detection result of the Nanodet target detection algorithm gives the pixel coordinates of the upper left corner and the lower right corner of the target, the corresponding measurement equation is
z_k = H·x_k + v_k
where z_k denotes the detection result of the Nanodet target detection algorithm, H = (1, 0, 0, 0, 0; 0, 1, 0, 0, 0), and v_k denotes the measurement noise.
6. The method according to claim 2, wherein step SS3 specifically comprises: to ensure that the landmark is not lost during tracking, after the landmark is tracked by the Nanodet target detection algorithm and the extended Kalman filtering algorithm, the gimbal at the bottom of the unmanned aerial vehicle is adjusted so that the landmark stays in the middle of the field of view.
7. The method according to claim 6, wherein step SS3 further comprises: taking a 3-axis brushless gimbal as an example, the horizontal offset of the landmark in the image corresponds to a change of the roll angle of the gimbal, and the vertical offset corresponds to a change of the pitch angle of the gimbal; letting W and H be the width and height of the image, the field angles of the camera in the horizontal and vertical directions be θ_x and θ_y, the coordinates of the centre point of the target in the image at a certain moment be (x_t, y_t), the straight-line distance from the target to the camera be R, and the deflection angles of the target relative to the camera in the horizontal and vertical directions be α and β, the following relations are obtained from trigonometric functions:
W = 2kR·tanθ_x
H = 2kR·tanθ_y
x_t = kR·tanα
y_t = kR·tanβ
where k denotes the scaling factor from three-dimensional space to the pixel plane, determined by the camera intrinsic parameters and obtained from the relations above as
k = W / (2R·tanθ_x) = H / (2R·tanθ_y)
so that
α = arctan(2·x_t·tanθ_x / W), β = arctan(2·y_t·tanθ_y / H);
with the angle error of the current gimbal tracking taken as e = (α, β), the following PID controller is designed for gimbal control:
u(t) = K_p·e(t) + K_i·∫₀ᵗ e(τ) dτ + K_d·de(t)/dt
where K_p, K_i and K_d denote the proportional, integral and derivative coefficients of the gimbal control, respectively, and u is the final gimbal angle change control quantity.
8. The method according to claim 2, wherein step SS4 specifically comprises:
calculating the position information of the current target relative to the unmanned aerial vehicle from the current attitude information of the gimbal and the position information of the target in the camera; taking a ground landmark as an example and letting the position of the target relative to the unmanned aerial vehicle be (x, y, z), the corresponding calculation formulas are:
x = z·tanα
y = z·tanβ
z = d / √(1 + tan²α + tan²β)
where α and β are the deflection angles of the target in the horizontal and vertical directions in the camera, and d is the distance from the target to the camera, calculated from the attitude information of the target relative to the camera.
9. The method according to claim 2, wherein step SS5 specifically comprises:
after the unmanned aerial vehicle acquires the position information of the target, the gimbal starts to track the target, and the specific control mode of the unmanned aerial vehicle depends on the current attitude angle information of the gimbal camera; when the gimbal camera is not in the vertical state, only PID position control is applied to the unmanned aerial vehicle, and the input error signal is the deviation between the three-dimensional position of the target relative to the unmanned aerial vehicle and the desired target position, i.e.:
e = (x_current − x_target, y_current − y_target, z_current − z_target)^T
u_in(t) = K_p·e(t) + K_i·∫₀ᵗ e(τ) dτ + K_d·de(t)/dt
to further smooth the control curve and limit the response range of the output control, the control quantity is mapped with a saturating function, where u_in is the output of the PID position control, u_out is the final position control quantity obtained after the mapping, U_max denotes the maximum control quantity of the unmanned aerial vehicle, and λ is an attenuation factor that determines the attenuation speed of the control quantity;
when the unmanned aerial vehicle gradually approaches the position directly above the landmark, the pitch angle and the roll angle of the gimbal tend to 0 and the unmanned aerial vehicle enters the effective detection range of the Apriltag detection algorithm; the unmanned aerial vehicle then enters the visual landing stage, the viewing direction of the camera is kept vertically downward, an extended Kalman filter is used to track the 4 vertices of the target in the image, and the three-dimensional spatial position relative to the unmanned aerial vehicle is accurately calculated from the camera intrinsic parameters and the pixel position information given by the Apriltag detection algorithm in the image; because the speed of the unmanned aerial vehicle has already become stable when the distance to the landing landmark is small, a visual servo control algorithm is used to apply position control and angle control to the unmanned aerial vehicle, and the input error signal at this time is the deviation between the pixel coordinates of the current landmark in the image and the pixel coordinates obtained by mapping the three-dimensional spatial position of the landmark into the image, i.e.:
e(t) = s[m(t), a] − s*
where s denotes the coordinates of the 4 vertices of the landmark on the image plane, s* denotes the desired position of the landmark on the image plane, m denotes the pixel coordinates of the target in the image, and a denotes the camera intrinsic parameters used to map the pixel coordinates onto the image plane.
10. The method according to claim 9, wherein step SS5 further comprises: the visual servo control law applied according to the error input is calculated as
v_c = −λ·L_x⁺·e(t)
where v_c denotes the desired linear and angular velocity vector of the camera; L_x denotes the image Jacobian, determined by the camera intrinsic parameters and the pixel coordinates and responsible for mapping velocity changes in the pixel coordinate system into the camera coordinate system, and L_x⁺ is its pseudo-inverse; λ denotes the visual servo gain, which determines the magnitude of the control action.
CN202111452916.7A 2021-12-01 2021-12-01 Landmark tracking control system and method for unmanned aerial vehicle visual landing Active CN114326765B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111452916.7A CN114326765B (en) 2021-12-01 2021-12-01 Landmark tracking control system and method for unmanned aerial vehicle visual landing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111452916.7A CN114326765B (en) 2021-12-01 2021-12-01 Landmark tracking control system and method for unmanned aerial vehicle visual landing

Publications (2)

Publication Number Publication Date
CN114326765A true CN114326765A (en) 2022-04-12
CN114326765B CN114326765B (en) 2024-02-09

Family

ID=81049255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111452916.7A Active CN114326765B (en) 2021-12-01 2021-12-01 Landmark tracking control system and method for unmanned aerial vehicle visual landing

Country Status (1)

Country Link
CN (1) CN114326765B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581480A (en) * 2022-05-07 2022-06-03 西湖大学 Multi-unmanned aerial vehicle cooperative target state estimation control method and application thereof

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103587708A (en) * 2013-11-14 2014-02-19 上海大学 Method for field fixed point zero-dead-zone autonomous soft landing of subminiature unmanned rotor aircraft
CN107240063A (en) * 2017-07-04 2017-10-10 武汉大学 A kind of autonomous landing method of rotor wing unmanned aerial vehicle towards mobile platform
CN107515622A (en) * 2017-07-27 2017-12-26 南京航空航天大学 A kind of rotor wing unmanned aerial vehicle autonomous control method of drop in mobile target
CN108594848A (en) * 2018-03-29 2018-09-28 上海交通大学 A kind of unmanned plane of view-based access control model information fusion autonomous ship method stage by stage
CN110231835A (en) * 2019-07-04 2019-09-13 深圳市科卫泰实业发展有限公司 A kind of accurate landing method of unmanned plane based on machine vision
CN111932588A (en) * 2020-08-07 2020-11-13 浙江大学 Tracking method of airborne unmanned aerial vehicle multi-target tracking system based on deep learning
CN111966133A (en) * 2020-08-29 2020-11-20 山东翔迈智能科技有限公司 Visual servo control system of holder
CN112116651A (en) * 2020-08-12 2020-12-22 天津(滨海)人工智能军民融合创新中心 Ground target positioning method and system based on monocular vision of unmanned aerial vehicle
CN113052876A (en) * 2021-04-25 2021-06-29 合肥中科类脑智能技术有限公司 Video relay tracking method and system based on deep learning
CN113657256A (en) * 2021-08-16 2021-11-16 大连海事大学 Unmanned ship-borne unmanned aerial vehicle sea-air cooperative visual tracking and autonomous recovery method

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103587708A (en) * 2013-11-14 2014-02-19 上海大学 Method for field fixed point zero-dead-zone autonomous soft landing of subminiature unmanned rotor aircraft
CN107240063A (en) * 2017-07-04 2017-10-10 武汉大学 A kind of autonomous landing method of rotor wing unmanned aerial vehicle towards mobile platform
CN107515622A (en) * 2017-07-27 2017-12-26 南京航空航天大学 A kind of rotor wing unmanned aerial vehicle autonomous control method of drop in mobile target
CN108594848A (en) * 2018-03-29 2018-09-28 上海交通大学 A kind of unmanned plane of view-based access control model information fusion autonomous ship method stage by stage
CN110231835A (en) * 2019-07-04 2019-09-13 深圳市科卫泰实业发展有限公司 A kind of accurate landing method of unmanned plane based on machine vision
CN111932588A (en) * 2020-08-07 2020-11-13 浙江大学 Tracking method of airborne unmanned aerial vehicle multi-target tracking system based on deep learning
CN112116651A (en) * 2020-08-12 2020-12-22 天津(滨海)人工智能军民融合创新中心 Ground target positioning method and system based on monocular vision of unmanned aerial vehicle
CN111966133A (en) * 2020-08-29 2020-11-20 山东翔迈智能科技有限公司 Visual servo control system of holder
CN113052876A (en) * 2021-04-25 2021-06-29 合肥中科类脑智能技术有限公司 Video relay tracking method and system based on deep learning
CN113657256A (en) * 2021-08-16 2021-11-16 大连海事大学 Unmanned ship-borne unmanned aerial vehicle sea-air cooperative visual tracking and autonomous recovery method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LESONEN, O.S 等: "Landing Methods of Unmanned Aerial Vehicle", 2020 WAVE ELECTRONICS AND ITS APPLICATION IN INFORMATION AND TELECOMMUNICATION SYSTEMS (WECONF), pages 1 - 4 *
YANG SONGPU 等: "Research on Visual Navigation Technology of Unmanned Aerial Vehicle Landing", 2013 IEEE INTERNATIONAL CONFERENCE ON INFORMATION AND AUTOMATION (ICIA), pages 463 - 467 *
刁灿; 王英勋; 王金提; 苗淼: "Assisted Automatic Landing Technology", Journal of System Simulation, no. 1, pages 501 - 504 *
洪亮; 章政; 李亚贵; 李宇峰; 张舰栋: "INS/Vision Autonomous Landing Navigation Algorithm for UAV Based on Fuzzy Prediction", Chinese Journal of Sensors and Actuators, no. 12, pages 91 - 97 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581480A (en) * 2022-05-07 2022-06-03 西湖大学 Multi-unmanned aerial vehicle cooperative target state estimation control method and application thereof

Also Published As

Publication number Publication date
CN114326765B (en) 2024-02-09

Similar Documents

Publication Publication Date Title
CN105652891B (en) A kind of rotor wing unmanned aerial vehicle movement Target self-determination tracks of device and its control method
Sani et al. Automatic navigation and landing of an indoor AR. drone quadrotor using ArUco marker and inertial sensors
CN110243358A (en) The unmanned vehicle indoor and outdoor localization method and system of multi-source fusion
Bacik et al. Autonomous flying with quadrocopter using fuzzy control and ArUco markers
Shen et al. Autonomous multi-floor indoor navigation with a computationally constrained MAV
Kendoul et al. An adaptive vision-based autopilot for mini flying machines guidance, navigation and control
CN105759829A (en) Laser radar-based mini-sized unmanned plane control method and system
Olivares-Méndez et al. Fuzzy controller for uav-landing task using 3d-position visual estimation
CN111583369A (en) Laser SLAM method based on facial line angular point feature extraction
Qi et al. Autonomous landing solution of low-cost quadrotor on a moving platform
CN101109640A (en) Unmanned aircraft landing navigation system based on vision
CN111426320B (en) Vehicle autonomous navigation method based on image matching/inertial navigation/milemeter
CN105352495A (en) Unmanned-plane horizontal-speed control method based on fusion of data of acceleration sensor and optical-flow sensor
CN107831776A (en) Unmanned plane based on nine axle inertial sensors independently makes a return voyage method
Gurfil et al. Partial aircraft state estimation from visual motion using the subspace constraints approach
CN108594848A (en) A kind of unmanned plane of view-based access control model information fusion autonomous ship method stage by stage
CN106289250A (en) A kind of course information acquisition system
Cho et al. Autonomous ship deck landing of a quadrotor UAV using feed-forward image-based visual servoing
Wang et al. Monocular vision and IMU based navigation for a small unmanned helicopter
CN114326765B (en) Landmark tracking control system and method for unmanned aerial vehicle visual landing
Haddadi et al. Visual-inertial fusion for indoor autonomous navigation of a quadrotor using ORB-SLAM
Lekkala et al. Accurate and augmented navigation for quadcopter based on multi-sensor fusion
Amidi et al. Research on an autonomous vision-guided helicopter
Olivares-Mendez et al. Autonomous landing of an unmanned aerial vehicle using image-based fuzzy control
CN117234203A (en) Multi-source mileage fusion SLAM downhole navigation method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant