CN111897366A - Multi-scale and multi-means integrated unmanned aerial vehicle trajectory planning method - Google Patents

Multi-scale and multi-means integrated unmanned aerial vehicle trajectory planning method

Info

Publication number
CN111897366A
CN111897366A
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
scale
identification domain
positioning identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010683416.3A
Other languages
Chinese (zh)
Inventor
王景璟 (Wang Jingjing)
侯向往 (Hou Xiangwang)
任勇 (Ren Yong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN202010683416.3A priority Critical patent/CN111897366A/en
Publication of CN111897366A publication Critical patent/CN111897366A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/12Target-seeking control

Abstract

The invention provides a multi-scale, multi-means integrated unmanned aerial vehicle trajectory planning method, which comprises the following steps: (1) guiding the unmanned aerial vehicle to the airspace above a ground apron using a satellite navigation system; (2) controlling the ground clearance of the unmanned aerial vehicle using a barometric altimeter combined with an ultrasonic-radar ranging module; (3) identifying, by a vision module, the large-scale identification domain in real time, and locating the parking position by combining the Hough transform with an RGB average-value method to compute the coordinates of the target landing point; (4) when the descending unmanned aerial vehicle reaches the threshold condition of the large-scale identification domain, identifying the small-scale positioning identification domain with the RGB average-value method and the Hough transform; (5) taking the computed deviation values as inputs and accurately planning the landing trajectory of the unmanned aerial vehicle with a dual PID algorithm.

Description

Multi-scale and multi-means integrated unmanned aerial vehicle trajectory planning method
Technical Field
The invention relates to the technical field of unmanned aerial vehicles, and in particular to a multi-scale, multi-means fusion unmanned aerial vehicle trajectory planning method.
Background
Autonomous flight of unmanned aerial vehicles has been a research hotspot in the aviation field for many years. It offers convenience of use, low operating cost, high flight precision, flexibility, and ease of intelligent automation, and is in great demand in practical applications such as reconnaissance and rescue missions, scientific data collection, geological and forestry surveying, agricultural pest control, video monitoring, and film and television production. Unmanned helicopters, which need no dedicated take-off and landing site or runway, can take off and land vertically, and require little space, have attracted particular attention in recent years. At present, however, most unmanned helicopters plan their trajectories with GPS-assisted navigation, and GPS receivers accurate enough to identify the vehicle's position precisely are expensive. Because controlling an unmanned aerial vehicle is complex, achieving a stable hover is technically difficult, which further increases the difficulty of precisely identifying the vehicle's position and precisely controlling its trajectory.
Disclosure of Invention
In this summary, concepts in a simplified form are introduced that are further described in the detailed description section. This summary of the invention is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In order to at least partially solve the above technical problems, the invention provides a multi-scale, multi-means fusion unmanned aerial vehicle trajectory planning method, which comprises the following steps. A satellite navigation system guides the unmanned aerial vehicle to the airspace above a ground apron. The apron is marked with concentric circles: the inner circle is the small-scale positioning identification domain, and the annulus between the outer circle and the inner circle is the large-scale positioning identification domain. They are filled with green and red respectively, i.e. the small-scale positioning identification domain is green and the large-scale positioning identification domain is red. The centre of the concentric circles is the exact target landing position of the unmanned aerial vehicle.
The air pressure altimeter is combined with a distance measuring module of the ultrasonic radar to control the ground clearance of the unmanned aerial vehicle.
Furthermore, the vision module identifies the positioning identification domain in real time and locates the parking position by combining an RGB average-value method with the Hough transform, thereby computing the coordinates of the target landing point.
The vision module is mounted on the bottom of the unmanned aerial vehicle body, and its camera captures images of the ground below the vehicle in real time. The captured images are processed with the Hough transform and the RGB average-value method.
Further, the RGB average-value method extracts the R, G and B channel values of every pixel of the image, stored in RGB565 format, and computes their average A. Thresholds C1, C2 and C3 are set for the three channels (each channel has its own threshold). Each channel value is differenced with A; if the difference exceeds the corresponding threshold Ci (i = 1, 2, 3), the pixel is identified as that channel's colour, and successful identification of red means the large-scale positioning identification domain has been found. The red circular parking area is extracted from the image by this method, the edge of the circular region is extracted with an edge-extraction algorithm, and the Hough transform is then applied. In this embodiment the large-scale positioning identification domain is identified first: the aircraft descends from high altitude, and since the small-scale positioning identification domain occupies only 3.14% of the area of the large-scale domain, the RGB average-value method can only identify the larger region at that distance.
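The RGB average-value classification described above can be sketched as follows. This is a minimal illustration: the pixel list and the thresholds C1..C3 are assumptions, since the patent does not give concrete threshold values.

```python
# Sketch of the RGB average-value method: compare each channel value
# against the image-wide average A plus a per-channel threshold Ci.
def classify_pixels(pixels, thresholds=(30, 30, 30)):
    """Label each (R, G, B) pixel with the first channel whose value
    exceeds the image-wide average A by more than that channel's Ci."""
    flat = [c for px in pixels for c in px]
    avg = sum(flat) / len(flat)                 # the average value A
    labels = []
    for pixel in pixels:
        label = None
        for value, ci, name in zip(pixel, thresholds, "RGB"):
            if value - avg > ci:                # difference above threshold Ci
                label = name
                break
        labels.append(label)
    return labels

# A strongly red pixel is labelled "R"; a uniformly dark pixel matches no channel.
print(classify_pixels([(200, 10, 10), (10, 10, 10)]))  # ['R', None]
```

In a real pipeline the pixels would come from the RGB565 frame buffer after unpacking each 16-bit word into its 5-6-5 channel fields; here plain 8-bit tuples stand in for them.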
The essence of the Hough transform is a coordinate transformation of the image that makes the result easier to identify and detect. At its core it is a "voting" algorithm, and this mechanism generalizes to the detection of circles. Circle recognition and geometric-parameter detection combine the geometric properties of the circle with the voting mechanism, as follows:
(1) using the property that the perpendicular bisector of any chord of a circle must pass through its centre, scan the image plane row by row (or column by column) at a given step; for each foreground point P (x0, y0), take every other foreground point Q (xi, yi) on the same row (or column);
(2) connect P and Q and construct the perpendicular bisector L of the segment PQ;
(3) if P and Q both lie on the circumference, L must pass through the centre. Every point of the transform plane acts as an accumulator, and each point that L passes through is incremented by 1. Because noise points make up a smaller fraction of the image than the valid figure, far fewer bisectors pass through non-centre points than through the centre; when the transform is finished, the position of the accumulator maximum gives the centre coordinates. Radius values are stored in a separate memory space in which each cell records the distance (i.e. the radius) between that point and P; once the centre is found, the value at the corresponding position of the radius plane is the radius. The large-scale positioning identification domain is processed this way: its centre coordinates, i.e. the landing-point coordinates (x0, y0), and the current position of the unmanned aerial vehicle (x, y), i.e. the image-centre coordinates, are computed, and the deviations Δx = x0 − x and Δy = y0 − y are used as the input control quantities of the dual PID algorithm.
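The chord-bisector voting in steps (1)-(3) can be sketched as follows. This is a minimal illustration under assumed conditions: a handful of integer edge points and a small accumulator grid, not the full row-scan described in step (1).

```python
# Sketch of Hough voting for a circle centre: the perpendicular bisector
# of every chord PQ passes through the centre, so walking each bisector
# and incrementing accumulator cells makes the centre the vote maximum.
import math
from collections import Counter
from itertools import combinations

def hough_circle_center(points, grid=50):
    """Return the accumulator cell with the most bisector votes."""
    acc = Counter()
    for (x0, y0), (x1, y1) in combinations(points, 2):
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2   # chord midpoint
        dx, dy = x1 - x0, y1 - y0               # chord direction
        norm = math.hypot(dx, dy)
        ux, uy = -dy / norm, dx / norm          # unit vector along the bisector
        for t in range(-grid, grid + 1):        # walk the bisector, voting
            acc[(round(mx + t * ux), round(my + t * uy))] += 1
    (cx, cy), _votes = acc.most_common(1)[0]
    return cx, cy

# Four edge points of a circle centred at (10, 10) with radius 5:
print(hough_circle_center([(15, 10), (10, 15), (5, 10), (10, 5)]))  # (10, 10)
```

The radius can then be recovered as the distance from the winning centre to any foreground point, matching the separate radius-plane bookkeeping the description mentions.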
When the unmanned aerial vehicle reaches the threshold condition of the large-scale positioning identification domain, the algorithm of the third step is applied to the small-scale positioning identification domain for precise positioning.
Compared with the prior art, the invention has the following technical effects:
(1) the error caused by insufficient satellite positioning precision is overcome, and the degree of intelligent control of the unmanned aerial vehicle is improved;
(2) by adopting image processing, the landing pad is identified automatically, so the unmanned aerial vehicle can land accurately without manual intervention;
(3) landing is controlled with a barometric altimeter and an ultrasonic-radar ranging module combined with the Hough transform and the RGB average-value method, greatly reducing the cost of precision sensors;
(4) the dual PID algorithm provides optimized landing control that is faster and more stable than comparable algorithms.
Drawings
Fig. 1 is a schematic structural diagram of the multi-scale and multi-means integrated unmanned aerial vehicle trajectory planning method of the present invention.
Detailed Description
Preferred embodiments of the invention are described below. It should be understood by those skilled in the art that these embodiments are only for explaining the technical principle of the invention, and do not limit the scope of the invention.
The embodiment of the invention provides a multi-scale, multi-means integrated unmanned aerial vehicle trajectory planning method, which comprises the following steps. A satellite navigation system guides the unmanned aerial vehicle to the airspace above a ground apron. The apron is marked with concentric circles: the inner circle is the small-scale positioning identification domain, and the annulus between the outer circle and the inner circle is the large-scale positioning identification domain. They are filled with green and red respectively, i.e. the small-scale positioning identification domain is green and the large-scale positioning identification domain is red. The centre of the concentric circles is the exact target landing position of the unmanned aerial vehicle.
The air pressure altimeter is combined with a distance measuring module of the ultrasonic radar to control the ground clearance of the unmanned aerial vehicle.
Specifically, the vision module identifies the positioning identification domain in real time and locates the parking position by combining an RGB average-value method with the Hough transform, thereby computing the coordinates of the target landing point.
The vision module is mounted on the bottom of the unmanned aerial vehicle body; the camera captures images of the ground below the vehicle in real time, and the captured images are processed with the Hough transform and the RGB average-value method.
Specifically, the RGB average-value method extracts the R, G and B channel values of every pixel of the image, stored in RGB565 format, and computes their average A. Thresholds C1, C2 and C3 are set for the three channels (each channel has its own threshold). Each channel value is differenced with A; if the difference exceeds the corresponding threshold Ci (i = 1, 2, 3), the pixel is identified as that channel's colour, and successful identification of red means the large-scale positioning identification domain has been found. The red circular parking area is extracted from the image by this method, the edge of the circular region is extracted with an edge-extraction algorithm, and the Hough transform is then applied. In this embodiment the large-scale positioning identification domain is identified first: the aircraft descends from high altitude, and since the small-scale positioning identification domain occupies only 3.14% of the area of the large-scale domain, the RGB average-value method can only identify the larger region at that distance.
The essence of the Hough transform is a coordinate transformation of the image that makes the result easier to identify and detect. At its core it is a "voting" algorithm, and this mechanism generalizes to the detection of circles. Circle recognition and geometric-parameter detection combine the geometric properties of the circle with the voting mechanism, as follows: (1) using the property that the perpendicular bisector of any chord of a circle must pass through its centre, scan the image plane row by row (or column by column) at a given step; for each foreground point P (x0, y0), take every other foreground point Q (xi, yi) on the same row (or column); (2) connect P and Q and construct the perpendicular bisector L of the segment PQ; (3) if P and Q both lie on the circumference, L must pass through the centre. Every point of the transform plane acts as an accumulator, and each point that L passes through is incremented by 1. Because noise points make up a smaller fraction of the image than the valid figure, far fewer bisectors pass through non-centre points than through the centre; when the transform is finished, the position of the accumulator maximum gives the centre coordinates. Radius values are stored in a separate memory space in which each cell records the distance (i.e. the radius) between that point and P; once the centre is found, the value at the corresponding position of the radius plane is the radius.
The large-scale positioning identification domain is processed this way: its centre coordinates, i.e. the landing-point coordinates (x0, y0), and the current position of the unmanned aerial vehicle (x, y), i.e. the image-centre coordinates, are computed, and the deviations Δx = x0 − x and Δy = y0 − y are used as the input control quantities of the dual PID algorithm.
When the unmanned aerial vehicle reaches the threshold condition of the large-scale positioning identification domain, the algorithm of the third step is applied to the small-scale positioning identification domain for precise positioning. The computed position-coordinate deviations are then used as inputs: the flight controller of the unmanned aerial vehicle uses a dual PID controller. On the one hand, for altitude control, the outer PID takes the ground clearance as input and outputs the expected landing speed to the inner PID; the inner PID takes the error between the current landing speed and the expected landing speed as input and outputs the throttle control quantity. On the other hand, for direction control, the x- and y-axis deviations between the centre of the parking position and the centre of the aircraft, obtained by image processing, are used as inputs, and a PID controller corrects the motor parameters to steer the aircraft.
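The cascaded altitude loop described above can be sketched as follows. This is a minimal illustration: the PID gains, the 0.02 s time step, and the function names are assumptions for demonstration, not values from the patent.

```python
# Sketch of the dual (cascaded) PID altitude loop: the outer PID maps
# ground clearance to an expected landing speed, and the inner PID maps
# the speed error to a throttle command.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def dual_pid_step(outer, inner, clearance, descent_speed, dt=0.02):
    """One control step: clearance -> expected speed -> throttle."""
    expected_speed = outer.update(clearance, dt)                 # outer loop on height
    throttle = inner.update(expected_speed - descent_speed, dt)  # inner loop on speed
    return throttle

outer, inner = PID(0.5, 0.0, 0.0), PID(1.0, 0.0, 0.0)
print(dual_pid_step(outer, inner, clearance=10.0, descent_speed=2.0))  # 3.0
```

The direction loop would be structured the same way, with Δx and Δy from the image processing as the outer-loop errors and motor corrections as the output.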
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (4)

1. A multi-scale and multi-means integrated unmanned aerial vehicle trajectory planning method, characterized by comprising the following steps:
(1) positioning the unmanned aerial vehicle in the airspace above a ground apron using a satellite navigation system;
(2) controlling the ground clearance of the unmanned aerial vehicle using a barometric altimeter and an ultrasonic-radar ranging module;
(3) identifying, by a vision module, the large-scale positioning identification domain in real time, and locating the parking position by combining an RGB average-value method with the Hough transform to compute the coordinates of the target landing point;
(4) when the descending unmanned aerial vehicle reaches the threshold condition of the large-scale positioning identification domain, identifying the small-scale positioning identification domain with the RGB average-value method and the Hough transform;
(5) taking the computed deviation values as inputs and accurately planning the landing trajectory of the unmanned aerial vehicle with a dual PID algorithm.
2. The multi-scale, multi-means integrated unmanned aerial vehicle trajectory planning method according to claim 1, wherein the RGB average-value method comprises: extracting the R, G and B channel values of every pixel of the image, stored in RGB565 format, and computing their average A; setting thresholds C1, C2 and C3 for the R, G and B channels respectively; differencing each channel value with the average A, and when the difference exceeds the channel's threshold Ci (i = 1, 2, 3), identifying the pixel as that channel's colour, whereby the large-scale positioning identification domain is identified; extracting the edge of the circular image with an edge-extraction algorithm and then applying the Hough transform, which performs a coordinate transformation of the image; and computing the centre coordinates, i.e. the landing-point coordinates (x0, y0), and the current position coordinates of the unmanned aerial vehicle (x, y), i.e. the vision-centre coordinates, and taking the deviations Δx = x0 − x and Δy = y0 − y as the PID algorithm input control quantities.
3. The multi-scale, multi-means fusion unmanned aerial vehicle trajectory planning method according to claim 1, wherein the threshold condition of the large-scale positioning identification domain is that the unmanned aerial vehicle has descended so close to the ground that the large-scale positioning identification domain fills the image captured by the vision module; the threshold condition is defined as the large-scale positioning identification domain occupying more than 75% of the whole image, upon which identification of the small-scale positioning identification domain is started.
4. The multi-scale, multi-means fusion unmanned aerial vehicle trajectory planning method according to claim 1, wherein in the dual PID algorithm the outer PID takes the ground clearance as input and outputs an expected landing speed to the inner PID, and the inner PID takes the error between the current landing speed and the expected landing speed as input and outputs the throttle control quantity for altitude control.
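The 75% coverage condition of claim 3 can be sketched as follows, assuming the large-scale (red) domain has already been segmented into a boolean mask; the mask values and the function name are illustrative.

```python
# Sketch of the scale-switching threshold: switch to the small-scale
# domain once the large-scale domain covers more than 75% of the frame.
def should_switch_scale(mask, threshold=0.75):
    """mask: 2-D list of booleans, True where the large-scale (red)
    identification domain was detected in the camera frame."""
    total = sum(len(row) for row in mask)
    covered = sum(v for row in mask for v in row)
    return covered / total > threshold

print(should_switch_scale([[True, True], [True, False]]))            # False (75% is not > 75%)
print(should_switch_scale([[True] * 4, [True, True, True, False]]))  # True (87.5%)
```

On the vehicle this mask would be the output of the RGB average-value segmentation step; the check runs once per captured frame during descent.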
CN202010683416.3A 2020-07-16 2020-07-16 Multi-scale and multi-means integrated unmanned aerial vehicle trajectory planning method Pending CN111897366A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010683416.3A CN111897366A (en) 2020-07-16 2020-07-16 Multi-scale and multi-means integrated unmanned aerial vehicle trajectory planning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010683416.3A CN111897366A (en) 2020-07-16 2020-07-16 Multi-scale and multi-means integrated unmanned aerial vehicle trajectory planning method

Publications (1)

Publication Number Publication Date
CN111897366A (en) 2020-11-06

Family

ID=73192049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010683416.3A Pending CN111897366A (en) 2020-07-16 2020-07-16 Multi-scale and multi-means integrated unmanned aerial vehicle trajectory planning method

Country Status (1)

Country Link
CN (1) CN111897366A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226356A (en) * 2013-02-27 2013-07-31 广东工业大学 Image-processing-based unmanned plane accurate position landing method
FR3009117A1 (en) * 2013-07-24 2015-01-30 Airbus Operations Sas AUTONOMOUS AUTOMATIC LANDING METHOD AND SYSTEM
CN107544550A (en) * 2016-06-24 2018-01-05 西安电子科技大学 A kind of Autonomous Landing of UAV method of view-based access control model guiding
CN110595476A (en) * 2019-08-30 2019-12-20 天津航天中为数据系统科技有限公司 Unmanned aerial vehicle landing navigation method and device based on GPS and image visual fusion


Similar Documents

Publication Publication Date Title
CN108820233B (en) Visual landing guiding method for fixed-wing unmanned aerial vehicle
Lee et al. Deep learning-based monocular obstacle avoidance for unmanned aerial vehicle navigation in tree plantations: Faster region-based convolutional neural network approach
CN107544550B (en) Unmanned aerial vehicle automatic landing method based on visual guidance
Marut et al. ArUco markers pose estimation in UAV landing aid system
Martínez et al. On-board and ground visual pose estimation techniques for UAV control
CN106127201B (en) A kind of unmanned plane landing method of view-based access control model positioning landing end
WO2018053861A1 (en) Methods and system for vision-based landing
CN104808685A (en) Vision auxiliary device and method for automatic landing of unmanned aerial vehicle
CN107240063A (en) A kind of autonomous landing method of rotor wing unmanned aerial vehicle towards mobile platform
CN105501457A (en) Infrared vision based automatic landing guidance method and system applied to fixed-wing UAV (unmanned aerial vehicle)
CN106054929A (en) Unmanned plane automatic landing guiding method based on optical flow
CN103226356A (en) Image-processing-based unmanned plane accurate position landing method
CN105021184A (en) Pose estimation system and method for visual carrier landing navigation on mobile platform
CN106774386A (en) Unmanned plane vision guided navigation landing system based on multiple dimensioned marker
US20210225180A1 (en) Systems and methods for aiding landing of vertical takeoff and landing vehicle
CN105352495A (en) Unmanned-plane horizontal-speed control method based on fusion of data of acceleration sensor and optical-flow sensor
Xu et al. Use of land’s cooperative object to estimate UAV’s pose for autonomous landing
CN113791621B (en) Automatic steering tractor and airplane docking method and system
CN105966594A (en) Unmanned aerial vehicle body structure, groove assisting positioning platform and landing positioning method of unmanned aerial vehicle
Al-Kaff et al. Intelligent vehicle for search, rescue and transportation purposes
Fan et al. Vision algorithms for fixed-wing unmanned aerial vehicle landing system
CN114089787A (en) Ground three-dimensional semantic map based on multi-machine cooperative flight and construction method thereof
CN113759943A (en) Unmanned aerial vehicle landing platform, identification method, landing method and flight operation system
CN114689030A (en) Unmanned aerial vehicle auxiliary positioning method and system based on airborne vision
Kawamura et al. Vision-Based Precision Approach and Landing for Advanced Air Mobility

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201106