CN110221625B - Autonomous landing guiding method for precise position of unmanned aerial vehicle - Google Patents

Autonomous landing guiding method for precise position of unmanned aerial vehicle

Info

Publication number
CN110221625B
CN110221625B (application CN201910446706.3A)
Authority
CN
China
Prior art keywords
semantic
unmanned aerial
aerial vehicle
coordinate system
landing target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910446706.3A
Other languages
Chinese (zh)
Other versions
CN110221625A (en)
Inventor
李晓峰
杨晗
管岭
贾利民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiaotong University filed Critical Beijing Jiaotong University
Priority to CN201910446706.3A
Publication of CN110221625A
Application granted
Publication of CN110221625B

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05B — CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B11/00 — Automatic controllers
    • G05B11/01 — Automatic controllers electric
    • G05B11/36 — Automatic controllers electric with provision for obtaining particular characteristics, e.g. proportional, integral, differential
    • G05B11/42 — Automatic controllers electric with provision for obtaining a characteristic which is both proportional and time-dependent, e.g. P.I., P.I.D.
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 — Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/0094 — Control of position, course or altitude involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • G05D1/10 — Simultaneous control of position or course in three dimensions
    • G05D1/101 — Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/12 — Target-seeking control

Abstract

The invention provides an autonomous landing guidance method for the precise position of an unmanned aerial vehicle. The method comprises the following steps: the unmanned aerial vehicle is guided by a satellite navigation system to fly within a set distance of a landing target on the ground; a video image of the ground is acquired through an onboard camera, semantic icons on the landing target contained in the video image are identified through pattern detection rules, and the center position information of the landing target is calculated according to the semantic icons; the position and dynamic characteristics of the landing target in a geodetic coordinate system are calculated from the center position information of the landing target, using the attitude and relative position of the onboard camera with respect to the unmanned aerial vehicle; and the relative position and relative velocity of the unmanned aerial vehicle and the landing target in the geodetic coordinate system are calculated continuously, and the unmanned aerial vehicle is controlled to land at the center position of the landing target through a triple PID control algorithm. By identifying the semantic icons on the landing target, the method locates and tracks the landing target, enabling the unmanned aerial vehicle to land precisely and autonomously on a moving target.

Description

Autonomous landing guiding method for precise position of unmanned aerial vehicle
Technical Field
The invention relates to the technical field of unmanned aerial vehicle control, in particular to an autonomous landing guiding method for the accurate position of an unmanned aerial vehicle.
Background
Rotary-wing unmanned aerial vehicles offer advantages such as ease of use, flexibility, low operating cost, and high flight precision. They are in great demand in practical applications and are widely used in fields such as reconnaissance, rescue, surveying and mapping, and plant protection. Autonomous take-off and landing technologies have been a research hotspot in the unmanned aerial vehicle field for many years.
Currently, autonomous landing of unmanned aerial vehicles mostly relies on GNSS (Global Navigation Satellite System) navigation and positioning, combined with altitude data, for fixed-point landing. Altitude is typically measured by GNSS, barometers, ultrasound, or radar. However, GNSS signals are susceptible to building occlusion and weather conditions, their data drift is severe, and their accuracy in the height direction is very limited; ranging sensors based on ultrasound, microwave, laser, and the like have difficulty distinguishing the landing platform from the ground and cannot be used directly for landing an unmanned aerial vehicle on a moving platform.
At present, for a moving landing platform, autonomous landing in the prior art usually relies on manual guidance and control, which places high demands on both GNSS accuracy and operator proficiency, so truly autonomous landing cannot be achieved. Some complex conditions, such as taking off from and landing on a moving sea-surface platform or a bumpy ground platform, still pose serious challenges to the flight control system and the operators, restricting the application of unmanned aerial vehicles in wider fields.
Disclosure of Invention
The embodiments of the invention provide an autonomous landing guidance method for the precise position of an unmanned aerial vehicle, aiming to overcome the above problems in the prior art.
To this end, the invention adopts the following technical solution.
An autonomous landing guidance method for the precise position of an unmanned aerial vehicle comprises the following steps:
the unmanned aerial vehicle is guided by a satellite navigation system to fly within a set distance of a landing target on the ground;
a video image of the ground is acquired through an onboard camera, semantic icons on the landing target contained in the video image are identified through pattern detection rules, and the center position information of the landing target is calculated according to the semantic icons on the landing target;
the position and dynamic characteristics of the landing target in a geodetic coordinate system are calculated from the center position information of the landing target, using the attitude and relative position of the onboard camera with respect to the unmanned aerial vehicle;
and the relative position and relative velocity of the unmanned aerial vehicle and the landing target in the geodetic coordinate system are calculated continuously based on the position and dynamic characteristics of the landing target, and the unmanned aerial vehicle is controlled to land at the center position of the landing target through a triple PID control algorithm.
Preferably, the landing target contains a plurality of non-overlapping semantic icons; the size, position, and corresponding semantics of each semantic icon are known, and the semantic icons are arranged according to a set rule so that the onboard camera can always see at least one semantic icon during the landing process of the unmanned aerial vehicle.
Preferably, the main body of each semantic icon is a black rectangle; white rectangles of varying position and number are drawn inside it according to semantic rules, and the semantic information contained in each semantic icon is stored in a semantic icon database.
Preferably, the acquiring of the video image of the ground by the onboard camera, the recognizing of the semantic icon on the landing target included in the video image by the pattern detection rule, and the calculating of the central position information of the landing target according to the semantic icon on the landing target include:
capturing a video image of the ground with the onboard camera of the unmanned aerial vehicle, converting the video image into a grayscale image, calculating an adaptive threshold for each pixel from the neighborhood of that pixel in the grayscale image, comparing the gray value of each pixel with its adaptive threshold, and setting the pixel to white when its gray value is greater than the adaptive threshold and to black otherwise;
extracting the black rectangles from the grayscale image after the pixels have been reset, and detecting, according to semantic rules, whether each black rectangle is a valid semantic icon. Because the unmanned aerial vehicle stays above the target and its attitude changes are small, contours whose area is too small or whose shape is too distorted can be filtered out, reducing the computational load.
The semantic rules are as follows: each side of a black rectangle is divided into 6 equal parts and the corresponding division points of opposite sides are connected, dividing the black rectangle into 6 × 6 small cells; apart from the outermost ring, which is entirely black, each interior cell is marked 0 if mostly black and 1 if mostly white; the codes of each row in the black rectangle are concatenated from top to bottom to obtain serial data, which is taken as the semantic information contained in the semantic icon; the semantic information represented by the serial data is compared with the semantic information of each semantic icon stored in a semantic icon database, and when the comparison results are consistent, the black rectangle corresponding to the serial data is determined to be a valid semantic icon;
and calculating the center position information of the landing target according to the semantic information contained in all semantic icons on the landing target.
Preferably, the calculating of the position and dynamic characteristics of the landing target in the geodetic coordinate system from the center position information of the landing target, using the attitude and relative position of the onboard camera with respect to the unmanned aerial vehicle, includes:
establishing a target coordinate system according to the center position information of the landing target, and obtaining the coordinate positions of each semantic icon's vertices and center in the target coordinate system from the semantic information contained in each semantic icon;
obtaining the rotation and translation matrix from the target plane to the camera imaging plane from the one-to-one correspondence between the semantic icons' coordinates in the target coordinate system and their image pixel coordinates, converting the coordinates of the semantic icons in the target coordinate system into coordinates in the camera coordinate system according to this matrix, and converting the spatial coordinates of the semantic icons in the camera coordinate system into coordinates in the geodetic coordinate system according to the conversion formula from the camera coordinate system to the geodetic coordinate system;
taking the geodetic coordinates of the semantic icons in the east and north directions as input, and calculating the position and velocity of the landing target in the geodetic coordinate system by Kalman filtering; the east and north coordinates and velocities of the landing target form the state vector X = [x, y, v_x, v_y]^T and the output vector Y = [x, y]^T, and the state equation of the landing target in the geodetic coordinate system is given by formula (1):

X_{k+1} = A·X_k + W,   Y_k = C·X_k + V   (1)

where

A = | 1  0  Δt  0 |
    | 0  1  0  Δt |
    | 0  0  1   0 |
    | 0  0  0   1 |

C = | 1  0  0  0 |
    | 0  1  0  0 |

Δt is the sampling time interval; W is the system noise, a zero-mean Gaussian variable with covariance Q; V is the measurement noise, a zero-mean Gaussian variable with covariance R;
the target plane to camera imaging plane rotation and translation matrix is as shown in equation (2): :
Figure BDA0002073865150000033
wherein (u, v) is image pixel coordinate, (x, y) is coordinate of semantic icon in target coordinate system,
Figure BDA0002073865150000034
is an internal reference matrix, R is a rotation matrix of 3 multiplied by 3, and T is a translation vector of 3 multiplied by 1;
the conversion formula from the camera coordinate system to the geodetic coordinate system is given by formula (3):

X_g = R_p·X_p + X_g0 = R_p·R_c·X_c + X_g0   (3)

where X_g, X_p, X_c are the coordinates of the landing target in the geodetic coordinate system, the unmanned aerial vehicle coordinate system, and the camera coordinate system, respectively; X_g0 is the coordinate of the unmanned aerial vehicle in the geodetic coordinate system, converted from the GNSS positioning coordinates; R_p and R_c are the rotation matrices from the unmanned aerial vehicle frame to the geodetic coordinate system and from the camera to the unmanned aerial vehicle frame, respectively, composed from the Euler angles as in formula (4), where α is the roll angle, β the pitch angle, and γ the yaw angle:

R = R_z(γ) · R_y(β) · R_x(α)   (4)

R_x(α) = | 1    0       0     |
         | 0    cos α  −sin α |
         | 0    sin α   cos α |

R_y(β) = |  cos β  0  sin β |
         |  0      1  0     |
         | −sin β  0  cos β |

R_z(γ) = | cos γ  −sin γ  0 |
         | sin γ   cos γ  0 |
         | 0       0      1 |
Preferably, the continuously calculating of the relative position and relative velocity of the unmanned aerial vehicle and the landing target in the geodetic coordinate system based on the position and dynamic characteristics of the landing target, and the controlling of the unmanned aerial vehicle to land at the center position of the landing target through a triple PID control algorithm, include:
continuously calculating the relative position and relative velocity of the unmanned aerial vehicle and the landing target in the geodetic coordinate system based on the position and dynamic characteristics of the landing target;
taking the relative position of the unmanned aerial vehicle and the landing target as input, and controlling the unmanned aerial vehicle to move toward the target through a PID control algorithm;
taking the relative velocity of the unmanned aerial vehicle and the landing target as input, superimposing it on the velocity obtained from position control, and tracking the moving landing target through a PID control algorithm;
taking the relative height of the unmanned aerial vehicle and the landing target as input, and controlling the unmanned aerial vehicle to land on the target through a PID control algorithm.
According to the technical solution provided by the embodiments of the invention, the autonomous landing guidance method for the precise position of the unmanned aerial vehicle locates and tracks the landing target by identifying the semantic icons on it, estimates the target's moving velocity, compensates for the large target-positioning error caused by insufficient GNSS positioning accuracy, and achieves precise autonomous landing of the unmanned aerial vehicle on a moving target. By using image recognition, the semantic icons on the landing target are identified automatically, so that the relative position and relative velocity of the unmanned aerial vehicle and the target can be calculated rapidly.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a general flowchart of autonomous precise location landing guidance of an unmanned aerial vehicle according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a landing target pattern according to an embodiment of the invention.
Fig. 3 is a schematic view of a process of identifying semantic icons in a target according to an embodiment of the present invention.
Fig. 4 is a schematic diagram illustrating an example of a semantic icon according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a relative relationship between a camera coordinate system, an unmanned aerial vehicle coordinate system, and a geodetic coordinate system according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
For the convenience of understanding the embodiments of the present invention, the following description will be further explained by taking several specific embodiments as examples in conjunction with the drawings, and the embodiments are not to be construed as limiting the embodiments of the present invention.
Example one
In order to solve the problem that the unmanned aerial vehicle cannot accurately and autonomously land on a mobile platform due to insufficient positioning accuracy of a GNSS in the prior art and improve the autonomous capability of the unmanned aerial vehicle, an embodiment of the present invention provides an autonomous landing guidance method for an accurate position of an unmanned aerial vehicle, wherein a processing flow of the method is shown in fig. 1, and the method includes the following processing steps:
step S1: the unmanned aerial vehicle is positioned above the landing target under the guidance of the GNSS satellite navigation system. The current GNSS satellite navigation positioning precision is usually within 10 meters, and the airborne downward-looking camera can see the landing target when the unmanned aerial vehicle reaches the position marked by the GNSS satellite.
The unmanned aerial vehicle refers to a multi-rotor unmanned aerial vehicle and an unmanned helicopter. The landing target used in the embodiment of the invention is shown in fig. 2, and can be installed on the ground or a mobile carrier and used for marking the landing range of the unmanned aerial vehicle.
Step S2: the landing target is identified through the onboard camera, and the attitude and angle of the unmanned aerial vehicle are adjusted so that the landing target stays within the field of view while the altitude of the unmanned aerial vehicle is reduced.
In the embodiment of the invention, the field of view of the downward-looking camera is 90°; tests show that the semantic target can be clearly identified once the unmanned aerial vehicle has descended to a height of 3 meters.
Step S3: semantic icons in the landing target are identified through pattern detection, each detected icon is checked against the icons on the landing target, falsely detected icons are filtered out, and the center position of the landing target is calculated.
The landing target contains a plurality of non-overlapping semantic icons; the size, position, and corresponding semantics of each icon are known, and the icons are arranged according to a set rule so that the onboard camera can always see at least one semantic icon during the landing process. The main body of each semantic icon is a black rectangle, inside which white rectangles of varying position and number are drawn according to semantic rules. The semantic information contained in each icon is stored in a semantic icon database when the target is designed. When a semantic icon is detected, its semantic information is compared with the database, and a successful comparison yields a valid semantic icon.
The flow of identifying semantic icons in the target provided by the embodiment of the invention is shown in fig. 3; the specific processing steps are as follows:
First, a video image of the ground is captured by the onboard camera and converted into a grayscale image, which is then binarized with an adaptive threshold. Adaptive thresholding is a local thresholding method: an adaptive threshold is computed for each pixel from its neighborhood in the image, the gray value of each pixel is compared with its corresponding threshold, and the pixel is set to white or black according to the comparison result. This avoids thresholding errors caused by uneven illumination. For each pixel in the image, the adaptive threshold is calculated as in formula (5):
T_{i,j} = (1/N) · Σ_{(m,n)∈W(i,j)} P_{m,n} − C   (5)

where P_{i,j} is the gray value of the image pixel in the i-th row and j-th column, N is the total number of pixels in the window W(i,j) around that pixel, and C is the calculation bias. After the threshold is obtained, the corresponding pixel's gray value is compared with it: when the gray value is greater than the threshold, the pixel is set to 255 (white); otherwise it is set to 0 (black), as in formula (6):

P′_{i,j} = 255 if P_{i,j} > T_{i,j}, otherwise P′_{i,j} = 0   (6)
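For illustration, the per-pixel mean threshold of formulas (5) and (6) corresponds to mean adaptive thresholding as provided, for example, by OpenCV. The sketch below is a minimal example; the window size and bias C are illustrative choices, not values from the patent:

import cv2

# Minimal sketch of the binarization step, formulas (5) and (6).
# The window side (31) and bias C (7) are illustrative, not from the patent.
gray = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2GRAY)
binary = cv2.adaptiveThreshold(
    gray,
    255,                          # value assigned to pixels above the threshold (white)
    cv2.ADAPTIVE_THRESH_MEAN_C,   # threshold = neighborhood mean minus C, as in (5)
    cv2.THRESH_BINARY,            # above threshold -> 255, otherwise -> 0, as in (6)
    31,                           # odd side length of the averaging window
    7,                            # bias C subtracted from the mean
)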
The black rectangles are then extracted from the image after the pixels have been reset. Because the unmanned aerial vehicle stays above the target and its attitude changes are small, black rectangles whose area is too small or whose shape is too distorted are filtered out first, reducing the computational load. Each remaining black rectangle is then checked against the semantic rules to determine whether it is a valid semantic icon.
The semantic rules are as follows: each side of the black rectangle is divided into 6 equal parts and the corresponding division points of opposite sides are connected, dividing the rectangle into 6 × 6 small cells; apart from the outermost ring, which is entirely black, each interior cell is marked 0 if mostly black and 1 if mostly white.
Fig. 4a and fig. 4b show examples of semantic icons according to the embodiment of the invention. As shown in fig. 4a, the codes of each row are concatenated from top to bottom to obtain serial data, which is the semantic information contained in the icon. The semantic information represented by the serial data is compared with the semantic information of each icon stored in the semantic icon database; when the comparison results are consistent, the black rectangle is determined to be a valid semantic icon. As shown in fig. 4b, the serial data (semantic information) generated by rotating the same black rectangle by 90°, 180°, and 270° are treated as containing the same information, which guarantees a unique correspondence between semantic icon and semantic information; to avoid confusion, centrally symmetric and axially symmetric icons are not used. A sketch of this decoding step is given below.
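As an illustration of the decoding step, the following minimal sketch samples a detected rectangle as a 6 × 6 grid of 0/1 cells, verifies the all-black border, and matches the 4 × 4 interior code against a database under all four rotations. The function names and the database layout are assumptions for the example, not details from the patent:

import numpy as np

def cells_to_code(cells: np.ndarray) -> int:
    """Concatenate the 4x4 interior bits row by row (top to bottom) into one integer."""
    return int("".join(str(int(b)) for b in cells.flatten()), 2)

def match_icon(grid: np.ndarray, database: dict):
    """grid: 6x6 array of cell values (0 = mostly black, 1 = mostly white).
    Returns the stored semantics, trying 0/90/180/270 degree rotations, else None."""
    border = np.concatenate([grid[0], grid[-1], grid[:, 0], grid[:, -1]])
    if border.any():                 # outermost ring must be entirely black (0)
        return None
    inner = grid[1:5, 1:5]
    for _ in range(4):
        code = cells_to_code(inner)
        if code in database:         # hypothetical database: 16-bit code -> semantics
            return database[code]
        inner = np.rot90(inner)      # same icon rotated by 90 degrees
    return None

# Hypothetical example entry: an icon whose interior rows read 1010 / 0110 / 0011 / 1001.
# database = {0b1010011000111001: "icon at pad position (1, 2)"}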
Then the center position information of the landing target is calculated from the semantic information contained in all semantic icons on the target, and a target coordinate system is established. From the semantic information, the position and size of each icon on the target are obtained, i.e., its coordinates in the target coordinate system, including the corner points and the pixel coordinates of the center point.
Step S4: the position of the landing target in the geodetic coordinate system is calculated from the attitude and relative position of the onboard camera with respect to the unmanned aerial vehicle, and the dynamic characteristics of the landing target are estimated by Kalman filtering.
The semantic icons used in the embodiment of the invention are shown in fig. 2. The positions of each icon's vertices and center in the target coordinate system are obtained from the semantic information it contains. The rotation and translation matrix from the target plane to the camera imaging plane is computed from the one-to-one correspondence between the semantic icons' target-coordinate positions and their image pixel coordinates, as in formula (7).
s · [u, v, 1]^T = M · [R  T] · [x, y, 0, 1]^T   (7)

where (u, v) are the image pixel coordinates, (x, y) are the coordinates of a semantic icon point in the target coordinate system,

M = | f_x  0    u_0 |
    | 0    f_y  v_0 |
    | 0    0    1   |

is the camera intrinsic matrix, R is a 3 × 3 rotation matrix, and T is a 3 × 1 translation vector.
Using the rotation and translation matrix of formula (7), the spatial coordinates of the target plane in the camera coordinate system can be solved quickly by the least-squares method.
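This plane-to-image pose estimation is the standard Perspective-n-Point problem; a minimal sketch using OpenCV follows. The corner coordinates, intrinsic parameters, and icon size are placeholder values for illustration, to be supplied by camera calibration and the known target layout:

import cv2
import numpy as np

# 3D corner positions of one detected icon in the target coordinate system
# (z = 0 because the target is planar; the 0.2 m side length is a placeholder).
object_pts = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0],
                       [0.2, 0.2, 0.0], [0.0, 0.2, 0.0]], dtype=np.float64)
# Detected pixel coordinates of the same corners (placeholder values).
image_pts = np.array([[412.0, 305.0], [498.0, 301.0],
                      [501.0, 388.0], [409.0, 392.0]], dtype=np.float64)
# Intrinsic matrix M from camera calibration (placeholder values).
M = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, M, None)
R_tc, _ = cv2.Rodrigues(rvec)      # rotation: target frame -> camera frame
# Target center (placeholder: at (0.1, 0.1, 0) on the plane) in camera coordinates.
center_cam = R_tc @ np.array([0.1, 0.1, 0.0]) + tvec.ravel()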
The geodetic coordinate system is an east-north-up coordinate system with the take-off point of the unmanned aerial vehicle as its origin. The dynamic characteristics of the landing target include its geodetic coordinates, orientation, east velocity, north velocity, rotational angular velocity, and so on.
Fig. 5 shows the relative relationship among the camera coordinate system, the unmanned aerial vehicle coordinate system, and the geodetic coordinate system. The three-axis attitude of the unmanned aerial vehicle relative to the ground (roll, pitch, and heading angles) is obtained from the onboard inertial navigation module, and the three-axis attitude of the camera relative to the unmanned aerial vehicle is obtained from the camera gimbal or from calibration. The landing target is converted from the camera coordinate system to the geodetic coordinate system by formula (8).
X_g = R_p·X_p + X_g0 = R_p·R_c·X_c + X_g0   (8)

where X_g, X_p, X_c are the coordinates of the landing target in the geodetic coordinate system, the unmanned aerial vehicle coordinate system, and the camera coordinate system, respectively; X_g0 is the coordinate of the unmanned aerial vehicle in the geodetic coordinate system, converted from the GNSS positioning coordinates; R_p and R_c are the rotation matrices from the unmanned aerial vehicle frame to the geodetic coordinate system and from the camera to the unmanned aerial vehicle frame, respectively, composed from the Euler angles as in formula (9), where α is the roll angle, β the pitch angle, and γ the yaw angle:

R = R_z(γ) · R_y(β) · R_x(α)   (9)

R_x(α) = | 1    0       0     |
         | 0    cos α  −sin α |
         | 0    sin α   cos α |

R_y(β) = |  cos β  0  sin β |
         |  0      1  0     |
         | −sin β  0  cos β |

R_z(γ) = | cos γ  −sin γ  0 |
         | sin γ   cos γ  0 |
         | 0       0      1 |
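A minimal sketch of this chain of transformations follows; the Z-Y-X Euler composition used to build each rotation matrix is the common aerospace convention and is an assumption here, since the figure carrying formula (9) is not reproduced in this text:

import numpy as np

def euler_to_rot(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """R = Rz(yaw) @ Ry(pitch) @ Rx(roll); Z-Y-X convention assumed for formula (9)."""
    ca, sa = np.cos(roll), np.sin(roll)
    cb, sb = np.cos(pitch), np.sin(pitch)
    cg, sg = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def camera_to_geodetic(X_c, cam_att, drone_att, X_g0):
    """Formula (8): X_g = R_p @ (R_c @ X_c) + X_g0.
    cam_att / drone_att are (roll, pitch, yaw) tuples in radians."""
    R_c = euler_to_rot(*cam_att)     # camera frame -> drone frame
    R_p = euler_to_rot(*drone_att)   # drone frame  -> geodetic (east-north-up) frame
    return R_p @ (R_c @ np.asarray(X_c)) + np.asarray(X_g0)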
The geodetic coordinates of the landing target in the east and north directions are taken as input, and the position and velocity of the landing target are predicted by Kalman filtering. The east and north coordinates and velocities of the landing target form the state vector X = [x, y, v_x, v_y]^T, and the output vector is Y = [x, y]^T. The state equation of the system is given by formula (10):

X_{k+1} = A·X_k + W,   Y_k = C·X_k + V   (10)

where

A = | 1  0  Δt  0 |
    | 0  1  0  Δt |
    | 0  0  1   0 |
    | 0  0  0   1 |

C = | 1  0  0  0 |
    | 0  1  0  0 |

Δt is the sampling time interval; W is the system noise, a zero-mean Gaussian variable with covariance Q; V is the measurement noise, a zero-mean Gaussian variable with covariance R.
Based on the position and dynamic characteristics of the landing target, the relative position and relative velocity of the unmanned aerial vehicle and the landing target in the geodetic coordinate system are calculated continuously; the relative position includes the relative distance and the relative height.
Step S5: according to the relative position and relative velocity of the unmanned aerial vehicle and the landing target in the geodetic coordinate system, the unmanned aerial vehicle is landed precisely at the center of the target through a triple PID (proportional-integral-derivative) control algorithm.
The triple PID control algorithm comprises the following steps (a sketch of the cascade follows the list):
(1) Position control: the relative position of the unmanned aerial vehicle and the landing target is taken as input, and a PID control algorithm drives the unmanned aerial vehicle toward the target;
(2) Horizontal velocity control: the relative velocity of the unmanned aerial vehicle and the landing target is taken as input and superimposed on the velocity command obtained from position control, and a PID control algorithm tracks the moving landing target;
(3) Descent velocity control: the relative height of the unmanned aerial vehicle and the landing target is taken as input, and a PID control algorithm lands the unmanned aerial vehicle on the target.
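As a sketch of this cascade (one controller instance per horizontal axis), assuming a velocity-command flight-control interface and illustrative gains, neither of which is specified in the patent:

class PID:
    """Textbook PID controller; gains are illustrative, not from the patent."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pos_pid = PID(0.8, 0.0, 0.1)    # (1) position loop (per horizontal axis)
vel_pid = PID(0.5, 0.05, 0.0)   # (2) horizontal-velocity loop
alt_pid = PID(0.6, 0.0, 0.2)    # (3) descent loop

def landing_command(rel_pos, rel_vel, rel_height, dt):
    """One control cycle for one horizontal axis plus the vertical axis.
    Returns (horizontal velocity command, descent-rate command)."""
    v_cmd = pos_pid.step(rel_pos, dt)       # move toward the target center
    v_cmd += vel_pid.step(rel_vel, dt)      # superimpose tracking of target motion
    vz_cmd = alt_pid.step(rel_height, dt)   # descend onto the target
    return v_cmd, vz_cmd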
The landing method provided by the embodiment of the invention is applicable not only to landing on a fixed target but also to landing on moving vehicles and ships.
In summary, the autonomous landing guidance method for the precise position of the unmanned aerial vehicle locates and tracks the landing target by identifying the semantic icons on it, estimates the target's moving velocity, compensates for the large target-positioning error caused by insufficient GNSS positioning accuracy, and achieves precise autonomous landing of the unmanned aerial vehicle on a moving target.
By using image recognition, the semantic icons on the landing target are identified automatically, so the relative position and relative velocity of the unmanned aerial vehicle and the target can be calculated rapidly.
The size, position, and orientation of the semantic icons on the landing target can be configured flexibly, subject to the camera's field of view, to form different landing target patterns.
Throughout the process, from the moment the unmanned aerial vehicle reaches recognizable range until it lands on the target, at least one semantic icon is fully recognized at every moment, which guarantees the positioning accuracy of the landing target.
A triple PID algorithm controls the unmanned aerial vehicle to track and autonomously land on the moving target; compared with similar algorithms, the method is faster and more stable.
Those of ordinary skill in the art will understand that: the figures are merely schematic representations of one embodiment, and the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
From the above description of the embodiments, it is clear to those skilled in the art that the present invention can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The embodiments in this specification are described in a progressive manner; the same or similar parts among the embodiments can be referred to mutually, and each embodiment focuses on its differences from the others. In particular, apparatus and system embodiments, being substantially similar to the method embodiments, are described relatively simply, and reference may be made to the corresponding parts of the method embodiments. The apparatus and system embodiments described above are merely illustrative: units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without inventive effort.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (2)

1. An autonomous landing guidance method for the precise position of an unmanned aerial vehicle, characterized by comprising the following steps:
guiding the unmanned aerial vehicle, through a satellite navigation system, to fly within a set distance of a landing target on the ground;
acquiring a video image of the ground through an onboard camera, identifying semantic icons on the landing target contained in the video image through pattern detection rules, and calculating center position information of the landing target according to the semantic icons on the landing target; calculating the position and dynamic characteristics of the landing target in a geodetic coordinate system from the center position information of the landing target, using the attitude and relative position of the onboard camera with respect to the unmanned aerial vehicle;
continuously calculating the relative position and relative velocity of the unmanned aerial vehicle and the landing target in the geodetic coordinate system based on the position and dynamic characteristics of the landing target, and controlling the unmanned aerial vehicle to land at the center position of the landing target through a triple PID control algorithm;
the landing target comprising a plurality of non-overlapping semantic icons, the size, position, and corresponding semantics of each semantic icon being known, and the semantic icons being arranged according to a set rule so that the onboard camera can always see at least one semantic icon during the landing process of the unmanned aerial vehicle;
the main body of each semantic icon being a black rectangle that may appear rotated by 90°, 180°, or 270°, white rectangles of varying position and number being drawn inside it according to semantic rules, and the semantic information contained in each semantic icon being stored in a semantic icon database;
wherein the acquiring of the video image of the ground through the onboard camera, the identifying of the semantic icons on the landing target contained in the video image through the pattern detection rules, and the calculating of the center position information of the landing target according to the semantic icons on the landing target comprise:
capturing a video image of the ground with the onboard camera of the unmanned aerial vehicle, converting the video image into a grayscale image, calculating an adaptive threshold for each pixel from the neighborhood of that pixel in the grayscale image, comparing the gray value of each pixel with its adaptive threshold, and setting the pixel to white when its gray value is greater than the adaptive threshold and to black otherwise;
extracting the black rectangles from the grayscale image after the pixels have been reset, and detecting, according to semantic rules, whether each black rectangle is a valid semantic icon;
the semantic rules being as follows: dividing each side of a black rectangle into 6 equal parts and connecting the corresponding division points of opposite sides, thereby dividing the black rectangle into 6 × 6 small cells; apart from the outermost ring, which is entirely black, marking each interior cell 0 if mostly black and 1 if mostly white; concatenating the codes of each row from top to bottom to obtain serial data, the serial data generated by rotating the same black rectangle by 90°, 180°, and 270° being treated as containing the same information, which guarantees a unique correspondence between semantic icon and semantic information, centrally symmetric and axially symmetric semantic icons not being used; taking the serial data as the semantic information contained in the semantic icon, comparing the semantic information represented by the serial data with the semantic information of each semantic icon stored in the semantic icon database, and, when the comparison results are consistent, determining that the black rectangle corresponding to the serial data is a valid semantic icon;
calculating the center position information of the landing target according to the semantic information contained in all semantic icons on the landing target;
wherein the calculating of the position and dynamic characteristics of the landing target in the geodetic coordinate system from the center position information of the landing target, using the attitude and relative position of the onboard camera with respect to the unmanned aerial vehicle, comprises:
establishing a target coordinate system according to the center position information of the landing target, and obtaining the coordinate positions of each semantic icon's vertices and center in the target coordinate system from the semantic information contained in each semantic icon, the coordinates including the corner points and the pixel coordinates of the center point of each semantic icon;
obtaining the rotation and translation matrix from the target plane to the camera imaging plane from the one-to-one correspondence between the semantic icons' coordinates in the target coordinate system and their image pixel coordinates, converting the coordinates of the semantic icons in the target coordinate system into coordinates in the camera coordinate system according to this rotation and translation matrix, and converting the spatial coordinates of the semantic icons in the camera coordinate system into coordinates in the geodetic coordinate system according to the conversion formula from the camera coordinate system to the geodetic coordinate system;
taking the geodetic coordinates of the semantic icons in the east and north directions as input, and calculating the position and velocity of the landing target in the geodetic coordinate system by Kalman filtering, the east and north coordinates and velocities of the landing target forming the state vector X = [x, y, v_x, v_y]^T and the output vector Y = [x, y]^T, the state equation of the landing target in the geodetic coordinate system being given by formula (1):

X_{k+1} = A·X_k + W,   Y_k = C·X_k + V   (1)

where

A = | 1  0  Δt  0 |
    | 0  1  0  Δt |
    | 0  0  1   0 |
    | 0  0  0   1 |

C = | 1  0  0  0 |
    | 0  1  0  0 |
Δ t is the sampling time interval; w represents the system noise with zero mean and is a Gaussian variable with covariance of Q; v represents measurement noise with a mean value of zero and is a Gaussian variable with covariance of R;
the rotation and translation from the target plane to the camera imaging plane being given by formula (2):

s · [u, v, 1]^T = M · [R  T] · [x, y, 0, 1]^T   (2)

where (u, v) are the image pixel coordinates, (x, y) are the coordinates of a semantic icon point in the target coordinate system,

M = | f_x  0    u_0 |
    | 0    f_y  v_0 |
    | 0    0    1   |

is the camera intrinsic matrix, R is a 3 × 3 rotation matrix, and T is a 3 × 1 translation vector;
the conversion formula from the camera coordinate system to the geodetic coordinate system is shown as formula (3):
X_g = R_p·X_p + X_g0 = R_p·R_c·X_c + X_g0   (3)

where X_g, X_p, X_c are the coordinates of the landing target in the geodetic coordinate system, the unmanned aerial vehicle coordinate system, and the camera coordinate system, respectively; X_g0 is the coordinate of the unmanned aerial vehicle in the geodetic coordinate system, converted from the GNSS positioning coordinates; R_p and R_c are the rotation matrices from the unmanned aerial vehicle frame to the geodetic coordinate system and from the camera to the unmanned aerial vehicle frame, respectively, composed from the Euler angles as in formula (4), where α is the roll angle, β the pitch angle, and γ the yaw angle:

R = R_z(γ) · R_y(β) · R_x(α)   (4)

R_x(α) = | 1    0       0     |
         | 0    cos α  −sin α |
         | 0    sin α   cos α |

R_y(β) = |  cos β  0  sin β |
         |  0      1  0     |
         | −sin β  0  cos β |

R_z(γ) = | cos γ  −sin γ  0 |
         | sin γ   cos γ  0 |
         | 0       0      1 |
2. the method according to claim 1, wherein the continuously calculating the relative position and relative velocity of the drone and the landing target in the geodetic coordinate system based on the position and the dynamic characteristics of the landing target, and controlling the drone to land at the center position of the landing target through a triple PID control algorithm comprises:
continuously calculating the relative position and the relative speed of the unmanned aerial vehicle and the landing target in the geodetic coordinate system based on the position and the dynamic characteristics of the landing target;
controlling the unmanned aerial vehicle to move towards the target by using the relative position of the unmanned aerial vehicle and the landing target as input through a PID control algorithm;
the relative speed of the unmanned aerial vehicle and the landing target is used as input and is superposed on the speed obtained by position control, and the dynamic landing target is tracked through a PID control algorithm;
the relative height of the unmanned aerial vehicle and the landing target is used as input, and the unmanned aerial vehicle is controlled to land on the target through a PID control algorithm.
CN201910446706.3A (priority date 2019-05-27, filing date 2019-05-27) — Autonomous landing guiding method for precise position of unmanned aerial vehicle — Active — CN110221625B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910446706.3A CN110221625B (en) 2019-05-27 2019-05-27 Autonomous landing guiding method for precise position of unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910446706.3A CN110221625B (en) 2019-05-27 2019-05-27 Autonomous landing guiding method for precise position of unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN110221625A CN110221625A (en) 2019-09-10
CN110221625B true CN110221625B (en) 2021-08-03

Family

ID=67818488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910446706.3A Active CN110221625B (en) 2019-05-27 2019-05-27 Autonomous landing guiding method for precise position of unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN110221625B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110920916B (en) * 2019-12-11 2021-09-14 集美大学 Landing equipment for civil aircraft
CN111413717B (en) * 2019-12-18 2023-08-11 中国地质大学(武汉) Satellite navigation-based digital aircraft landing system
CN113066040B (en) * 2019-12-26 2022-09-09 南京甄视智能科技有限公司 Face recognition equipment arrangement method based on unmanned aerial vehicle 3D modeling
CN111813148B (en) * 2020-07-22 2024-01-26 广东工业大学 Unmanned aerial vehicle landing method, system, equipment and storage medium
CN112597893A (en) * 2020-12-23 2021-04-02 上海布鲁可积木科技有限公司 Method and system for judging complete icon
CN112904895B (en) * 2021-01-20 2023-05-12 中国商用飞机有限责任公司北京民用飞机技术研究中心 Image-based airplane guiding method and device
CN113050667B (en) * 2021-02-05 2022-02-08 广东国地规划科技股份有限公司 Unmanned aerial vehicle sampling control method, controller and system
CN114200948B (en) * 2021-12-09 2023-12-29 中国人民解放军国防科技大学 Unmanned aerial vehicle autonomous landing method based on visual assistance
CN114489112A (en) * 2021-12-13 2022-05-13 深圳先进技术研究院 Cooperative sensing system and method for intelligent vehicle-unmanned aerial vehicle

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226356A (en) * 2013-02-27 2013-07-31 广东工业大学 Image-processing-based unmanned plane accurate position landing method
CN104316060A (en) * 2014-06-06 2015-01-28 清华大学深圳研究生院 Rendezvous docking method and device of space non-cooperative target
CN104536453A (en) * 2014-11-28 2015-04-22 深圳一电科技有限公司 Aircraft control method and device
CN105197252A (en) * 2015-09-17 2015-12-30 武汉理工大学 Small-size unmanned aerial vehicle landing method and system
CN106127201A (en) * 2016-06-21 2016-11-16 西安因诺航空科技有限公司 A kind of unmanned plane landing method of view-based access control model positioning landing end
CN106054929A (en) * 2016-06-27 2016-10-26 西北工业大学 Unmanned plane automatic landing guiding method based on optical flow
CN109643129A (en) * 2016-08-26 2019-04-16 深圳市大疆创新科技有限公司 The method and system of independent landing
CN106774386A (en) * 2016-12-06 2017-05-31 杭州灵目科技有限公司 Unmanned plane vision guided navigation landing system based on multiple dimensioned marker
WO2018111075A1 (en) * 2016-12-16 2018-06-21 Rodarte Leyva Eduardo Automatic landing system with high-speed descent for drones
CN107244423A (en) * 2017-06-27 2017-10-13 歌尔科技有限公司 A kind of landing platform and its recognition methods
CN107240063A (en) * 2017-07-04 2017-10-10 武汉大学 A kind of autonomous landing method of rotor wing unmanned aerial vehicle towards mobile platform
CN108549397A (en) * 2018-04-19 2018-09-18 武汉大学 The unmanned plane Autonomous landing method and system assisted based on Quick Response Code and inertial navigation
CN108305264A (en) * 2018-06-14 2018-07-20 江苏中科院智能科学技术应用研究院 A kind of unmanned plane precision landing method based on image procossing
CN108828500A (en) * 2018-06-22 2018-11-16 深圳草莓创新技术有限公司 Unmanned plane accurately lands bootstrap technique and Related product
CN108873917A (en) * 2018-07-05 2018-11-23 太原理工大学 A kind of unmanned plane independent landing control system and method towards mobile platform
CN108919830A (en) * 2018-07-20 2018-11-30 南京奇蛙智能科技有限公司 A kind of flight control method that unmanned plane precisely lands
CN109298723A (en) * 2018-11-30 2019-02-01 山东大学 A kind of accurate landing method of vehicle-mounted unmanned aerial vehicle and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on High-Speed Moving Landing Technology for UAVs (无人机高速移动降落技术研究); Jia Peiyang (贾配洋); China Masters' Theses Full-text Database (Electronic Journal), Engineering Science and Technology II; 2018-01-15 (No. 1); pp. 21-22, 26-32, 47-55, 59-68 *

Also Published As

Publication number Publication date
CN110221625A (en) 2019-09-10

Similar Documents

Publication Publication Date Title
CN110221625B (en) Autonomous landing guiding method for precise position of unmanned aerial vehicle
CN110222612B (en) Dynamic target identification and tracking method for autonomous landing of unmanned aerial vehicle
EP3903164B1 (en) Collision avoidance system, depth imaging system, vehicle, map generator, amd methods thereof
US11218689B2 (en) Methods and systems for selective sensor fusion
Lee et al. Vision-based UAV landing on the moving vehicle
US5072396A (en) Navigation systems
US6157876A (en) Method and apparatus for navigating an aircraft from an image of the runway
CA2853546A1 (en) Identification and analysis of aircraft landing sites
CN105352495A (en) Unmanned-plane horizontal-speed control method based on fusion of data of acceleration sensor and optical-flow sensor
CN105644785A (en) Unmanned aerial vehicle landing method based on optical flow method and horizon line detection
US20220198793A1 (en) Target state estimation method and apparatus, and unmanned aerial vehicle
Lombaerts et al. Distributed Ground Sensor Fusion Based Object Tracking for Autonomous Advanced Air Mobility Operations
Lombaerts et al. Adaptive multi-sensor fusion based object tracking for autonomous urban air mobility operations
CN109612333B (en) Visual auxiliary guide system for vertical recovery of reusable rocket
Kong et al. A ground-based multi-sensor system for autonomous landing of a fixed wing UAV
CN114689030A (en) Unmanned aerial vehicle auxiliary positioning method and system based on airborne vision
US20170082429A1 (en) Passive altimeter
CN113568430A (en) Correction control method for unmanned aerial vehicle wing execution data
US10330769B1 (en) Method and apparatus for geolocating emitters in a multi-emitter environment
CN113156450B (en) Active rotation laser radar system on unmanned aerial vehicle and control method thereof
EP3060952B1 (en) Locational and directional sensor control for search
CN111615677B (en) Unmanned aerial vehicle safety landing method and device, unmanned aerial vehicle and medium
Andert et al. Radar-aided optical navigation for long and large-scale flights over unknown and non-flat terrain
RU2722599C1 (en) Method for correcting strapdown inertial navigation system of unmanned aerial vehicle of short range using intelligent system of geospatial information
Gavrilov et al. Solving the problem of autonomous navigation using underlying surface models tolerant to survey conditions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190910

Assignee: GUANGZHOU HI-TARGET SURVEYING INSTRUMENT Co.,Ltd.

Assignor: Beijing Jiaotong University

Contract record no.: X2021990000807

Denomination of invention: Autonomous landing guidance method for precise position of UAV

Granted publication date: 20210803

License type: Exclusive License

Record date: 20211222
