CN110794854A - Autonomous take-off and landing method for fixed-wing unmanned aerial vehicle - Google Patents

Autonomous take-off and landing method for fixed-wing unmanned aerial vehicle

Info

Publication number
CN110794854A
CN110794854A (application CN201911184244.9A)
Authority
CN
China
Prior art keywords
landing
image
path
runway
unmanned aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911184244.9A
Other languages
Chinese (zh)
Inventor
陈会强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201911184244.9A
Publication of CN110794854A
Legal status: Pending

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/04 Control of altitude or depth
    • G05D1/06 Rate of change of altitude or depth
    • G05D1/0607 Rate of change of altitude or depth specially adapted for aircraft
    • G05D1/0653 Rate of change of altitude or depth specially adapted for aircraft during a phase of take-off or landing
    • G05D1/0676 Rate of change of altitude or depth specially adapted for aircraft during a phase of take-off or landing specially adapted for landing

Abstract

The invention discloses an autonomous take-off and landing method for a fixed-wing unmanned aerial vehicle. The inventive step of this application lies in combining image recognition with laser ranging to obtain the position information of target objects: objects are identified by image recognition, and their size and distance are then obtained by laser ranging. On this basis, the method judges whether the field meets the drone's take-off and landing standard and plans a take-off or landing path. As to novelty, autonomous take-off and landing solutions on the commercial drone market are aimed mainly at multi-rotor VTOL drones; at present there is no autonomous take-off and landing solution for fixed-wing drones. The product form of the invention is a drone photoelectric pod in which the image recognition, laser ranging and control equipment are all packaged. The pod provides an interface to the drone's flight controller for communicating flight instructions.

Description

Autonomous take-off and landing method for fixed-wing unmanned aerial vehicle
Technical Field
The invention relates to the technical field of unmanned aerial vehicles, in particular to an autonomous take-off and landing method for a fixed-wing unmanned aerial vehicle.
Background
Fixed-wing unmanned aerial vehicles play an increasingly important role in contemporary society, quite apart from their extensive use in the military field. At present, national policy controls fixed-wing drones strictly: an operator must hold a fixed-wing drone pilot certificate to be qualified to fly one. The root cause of this strict control is that operating a fixed-wing drone is relatively complex. Nevertheless, fixed-wing drones are worth developing because they have many advantages over multi-rotor drones.
Existing autonomous landing technology for drones mainly relies on visual recognition. The camera used to collect images is often fixed directly to the drone body without any stabilizing damping mechanism, so in a strong-wind environment it shakes randomly, cannot collect usable images, and image recognition becomes inaccurate. Moreover, pure visual recognition, whether with a single camera or multiple cameras, cannot obtain accurate data about a target object. For a slow-flying drone such as a multi-rotor craft, pure image recognition can meet the needs of autonomous take-off and landing, but for a fixed-wing drone it is far from sufficient.
Disclosure of Invention
The invention aims to overcome the problems in the prior art and provides an autonomous take-off and landing method for a fixed-wing unmanned aerial vehicle that acquires the position information of target objects by combining image recognition and laser ranging: objects are identified by image recognition, and their size and distance are then obtained by laser ranging. On this basis, the method judges whether the field meets the drone's take-off and landing standard and plans a take-off or landing path.
To this end, the invention provides an autonomous take-off and landing method for a fixed-wing unmanned aerial vehicle, comprising the following steps:
S1: when taking off, acquiring an image of the current take-off runway through a camera; when landing, acquiring an image of the current landing runway through a camera;
S2: when taking off, processing the image of the take-off runway to obtain the obstacles on the take-off runway and drawing a planar distribution image of the obstacles; when landing, processing the image of the landing runway to obtain the obstacles on the landing runway and drawing a planar distribution image of the obstacles;
S3: measuring the size and distance of each obstacle with a laser ranging module;
S4: when taking off, performing spatial modeling of the take-off path according to the planar distribution image of the obstacles and their sizes and distances to obtain a spatial model of the take-off path; when landing, performing spatial modeling of the landing path in the same way to obtain a spatial model of the landing path;
S5: when taking off, planning a take-off path according to the spatial model of the take-off path; when landing, planning a landing path according to the spatial model of the landing path and cutting into the landing path to land.
Further, step S2 includes the following steps:
S2-1: when taking off, converting the three-dimensional color image of the current take-off runway into a two-dimensional black-and-white image; when landing, converting the three-dimensional color image of the current landing runway into a two-dimensional black-and-white image;
S2-2: detecting obstacles in the two-dimensional black-and-white image with an edge detector;
S2-3: drawing a planar distribution map of the obstacles detected in step S2-2.
Further, the camera and the laser ranging module are packaged together in a three-axis stabilized gimbal.
Further, when landing, step S1 includes the following steps:
S1-1: obtaining a flight path around the runway according to the coordinates of the landing runway and the maximum measurement distance of the laser ranging module;
S1-2: flying along the flight path of step S1-1 once the set distance from the runway is reached;
S1-3: acquiring an image of the runway to be landed on through the camera.
Further, in step S5, the fixed-wing drone cuts in from the flight path around the runway to the landing path and lands.
The autonomous take-off and landing method for a fixed-wing unmanned aerial vehicle provided by the invention has the following beneficial effects:
the position information of target objects is acquired by combining image recognition and laser ranging; objects are identified by image recognition, and their size and distance are then obtained by laser ranging, whereby the method judges whether the field meets the drone's take-off and landing standard and plans a take-off or landing path.
Drawings
Fig. 1 is a schematic view of the connection structure of the flight control system of the fixed-wing unmanned aerial vehicle in an embodiment of the invention;
Fig. 2 is a schematic structural diagram of the three-axis stabilized gimbal in an embodiment of the invention;
Fig. 3 is a schematic view of planar obstacles in an embodiment of the invention;
Fig. 4 is a first schematic view of spatial obstacles on a take-off path in an embodiment of the invention;
Fig. 5 is a second schematic view of spatial obstacles on a take-off path in an embodiment of the invention;
Fig. 6 is a schematic view of the runway on a landing path in an embodiment of the invention;
Fig. 7 is a schematic view of spatial obstacles on a landing path in an embodiment of the invention.
Detailed Description
The present invention is described in detail below with reference to the accompanying drawings. The description is illustrative, and the scope of the invention is not limited to the specific embodiments.
In this application, components whose type and structure are not specified are prior art known to those skilled in the art, who can configure them according to the actual situation; the embodiments of the application place no specific limitation on them.
The invention relates to a drone flight control system that adopts image recognition and laser ranging as its autonomous take-off and landing solution. As shown in Fig. 1, the flight control system consists of two core components. The first is the PIX aircraft autopilot, which directly controls the flight of the drone: it acts as the drone's flight controller and commands the throttle, the elevator, yaw, roll, flap, nose-wheel and payload-bay servos, and the retraction and release of the landing gear. The second is a processor built around the main controller (a Jetson Nano development board), the central processing unit of the flight control system, which is in signal connection with the three-axis stabilized gimbal.
In the three-axis stabilized gimbal, shown in Fig. 2, the image sensor and the laser ranging module are mounted one above the other, and the center of the image sensor's field of view is coupled to the ranging point of the laser ranging module. This realizes the function of measuring wherever the camera looks. To identify and range objects in space, this multi-sensor scheme thus combines an image sensor with a laser ranging module.
The image sensor first performs image recognition, specifically: color image → grayscale image → edge detection → drawing of planar obstacles. A color image is a three-dimensional matrix whose third dimension holds the R (red), G (green) and B (blue) matrices, whereas a black-and-white image is only a two-dimensional matrix. Real-time visual recognition at 30 frames per second is very costly in hardware if color images are processed, so for image recognition the color image is generally converted into a grayscale image first. Once the grayscale image is obtained, obstacles can be detected with an edge detector, and finally a planar obstacle distribution map is drawn for the subsequent ranging.
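The pipeline just described (color image → grayscale → edge detection → planar obstacle map) can be sketched with OpenCV as follows; the function name, the Canny thresholds and the minimum-area filter are illustrative assumptions, not the patent's own code.

```python
# Minimal sketch of the described pipeline: color frame -> grayscale ->
# edge detection -> planar obstacle map. Thresholds are assumptions.
import cv2
import numpy as np

def plane_obstacle_map(frame_bgr, min_area=200):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)   # 3-D RGB -> 2-D gray
    edges = cv2.Canny(gray, 50, 150)                      # edge detector
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    obstacle_map = np.zeros_like(gray)
    boxes = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:                # drop noise blobs
            x, y, w, h = cv2.boundingRect(c)
            boxes.append((x, y, w, h))
            cv2.rectangle(obstacle_map, (x, y), (x + w, y + h), 255, -1)
    return obstacle_map, boxes    # planar distribution map + obstacle boxes
```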
Field-of-view-coupled ranging is then carried out with the laser ranging module according to the planar obstacle map formed by the image sensor, yielding the three-dimensional picture shown in Fig. 3, where the reference plane is the drone's runway.
After the image recognition system is built, the laser ranging focus is coupled with the center of the image sensor's field of view, so that aiming the sensor center at an obstacle ranges that obstacle.
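As a hedged illustration of how obstacle size can follow from a single laser distance plus the image, the obstacle's physical width can be estimated from its pixel extent and the camera's field of view under a pinhole-camera assumption (the formula and the parameter names are not from the patent):

```python
import math

def obstacle_width_m(distance_m, pixel_width, image_width_px, hfov_deg):
    # Angular extent subtended by the obstacle, assuming a pinhole camera
    alpha = (pixel_width / image_width_px) * math.radians(hfov_deg)
    # Width follows from the laser distance and the half-angle tangent
    return 2.0 * distance_m * math.tan(alpha / 2.0)

# e.g. a 120-px blob in a 1280-px frame with a 60 deg HFOV at 85 m
# gives obstacle_width_m(85.0, 120, 1280, 60.0) ~ 8.4 m
```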
Based on this drone flight control system, the invention provides an autonomous take-off method for a fixed-wing unmanned aerial vehicle, comprising the following steps:
(1): acquiring an image of a current takeoff runway through a camera;
the camera is fixed at the bottom of the unmanned aerial vehicle body, and in the embodiment of the invention, the camera is built in the image sensor.
(2): processing the image of the take-off runway from step (1) to obtain the obstacles on the take-off runway and drawing a planar distribution image of the obstacles;
The image sensor yields a color image, a three-dimensional matrix whose third dimension holds the R (red), G (green) and B (blue) matrices, whereas a black-and-white image is only a two-dimensional matrix. Real-time visual recognition at 30 frames per second is very costly in hardware if color images are processed, so the color image is first converted into a grayscale image; obstacles are then detected with the edge detector, and the planar obstacle distribution map is drawn for the subsequent ranging.
(3): measuring the size and distance of each obstacle with the laser ranging module;
Field-of-view-coupled ranging is carried out with the laser ranging module according to the planar obstacle map formed by the image sensor, yielding a three-dimensional picture.
(4): performing spatial modeling of the take-off path according to the planar distribution image of the obstacles and their sizes and distances to obtain a spatial model of the take-off path;
(5): planning a take-off path according to the spatial model of the take-off path.
Consider first the spatial modeling of the take-off path. Several simple rules govern fixed-wing drone take-off:
(1) the width of the take-off path must be larger than the drone's wingspan;
(2) the drone takes off along a straight line;
(3) the drone has a longest safe take-off distance.
With these three conditions in mind, the take-off path modeling requirements can be refined into several key points:
(1) the path behind an obstacle is unavailable;
(2) a path between obstacles is usable only if its width is larger than the wingspan;
(3) a path is available if the obstacle-free distance ahead of the drone is greater than its longest safe take-off distance.
From these requirements and key points, the basic flow of take-off path modeling is obtained:
(1) perform image recognition on the take-off path and draw the planar obstacle map shown in Fig. 3;
(2) according to the planar obstacle map, perform ranging while scanning in the horizontal direction to obtain accurate data on each obstacle relative to the drone;
(3) establish the spatial obstacle maps shown in Fig. 4 and Fig. 5 from the planar obstacle map, the laser ranging data, and the take-off path modeling rules and key points.
Because the spatial model is built according to the take-off path modeling requirements, the drone can quickly find a feasible take-off path. Fig. 4 and Fig. 5 show two take-off paths, the left one wider than the right one; deciding which of them the drone may use then only requires comparing their widths with the drone's wingspan. A minimal feasibility check is sketched below.
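The sketch assumes obstacles have already been reduced to a lateral offset, a width and a distance ahead of the drone's nose; this data layout is an assumption for illustration, not the patent's representation.

```python
def feasible_takeoff_corridors(obstacles, runway_width_m, wingspan_m,
                               safe_takeoff_m):
    """obstacles: list of (lateral_offset_m, width_m, distance_ahead_m)
    relative to the drone's nose (assumed layout)."""
    # Only obstacles inside the safe take-off distance block a corridor;
    # per key point (3), a lane clear for that distance is usable.
    near = sorted((o for o in obstacles if o[2] <= safe_takeoff_m),
                  key=lambda o: o[0])
    corridors, left = [], -runway_width_m / 2.0
    for off, w, _dist in near:
        right = off - w / 2.0
        if right - left > wingspan_m:          # gap wide enough for the span
            corridors.append((left, right))
        left = max(left, off + w / 2.0)
    if runway_width_m / 2.0 - left > wingspan_m:
        corridors.append((left, runway_width_m / 2.0))
    return corridors                            # usable straight-line lanes
```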
Specifically, step (2) above includes the following steps:
(1): converting the three-dimensional color image of the current take-off runway into a two-dimensional black-and-white image;
(2): detecting obstacles in the two-dimensional black-and-white image with an edge detector;
(3): drawing a planar distribution map of the obstacles detected in step (2).
A color image is a three-dimensional matrix: each pixel is composed of three colors (red, green and blue), so three matrices are needed to hold the three color values, the third dimension carrying the R, G and B matrices. A black-and-white image only records gray levels and needs a single matrix. Because real-time visual recognition at 30 frames per second is very costly in hardware when color images are processed, the color image is generally converted into a grayscale image before recognition.
Further, the camera and the laser ranging module are packaged together in the three-axis stabilized gimbal.
Correspondingly, the invention also provides an autonomous landing method for the fixed-wing unmanned aerial vehicle, comprising the following steps:
(1): obtaining a flight path around the runway according to the coordinates of the landing runway and the maximum measurement distance of the laser ranging module;
Before landing, the known information is the coordinates of the runway. The drone flies around the runway to perform image recognition and ranging of the runway surface, and the effective range of the laser must be taken into account: if the laser's range is L, the drone must circle the runway at a distance smaller than L, which defines the flight path around the runway (a waypoint sketch follows these steps).
(2): flying along the flight path of step (1) once the set distance from the runway is reached;
(3): acquiring an image of the runway to be landed on through the camera, judging with the image recognizer whether obstacles exist on the runway, and drawing a planar distribution image of the obstacles on the runway;
(4): measuring the size and distance of each obstacle with the laser ranging module;
(5): performing spatial modeling of the landing path according to the planar distribution image of the obstacles on the landing runway and their sizes and distances to obtain a spatial model of the landing path;
(6): cutting in from the flight path around the runway to the landing path and landing.
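A waypoint sketch for that circuit, assuming a flat-earth local frame and requiring the standoff radius to keep the farthest runway point within the laser's range; the names and the 0.9 safety margin are illustrative assumptions.

```python
import math

def survey_circuit(center_xy, length_m, width_m, laser_range_m,
                   margin=0.9, n_points=36):
    """Waypoints on a circle around the runway whose radius keeps the
    whole runway surface within laser reach (standoff < laser range)."""
    half_diag = math.hypot(length_m, width_m) / 2.0
    radius = margin * laser_range_m - half_diag   # farthest point stays in range
    if radius <= 0:
        raise ValueError("runway too large for this laser range")
    cx, cy = center_xy
    return [(cx + radius * math.cos(2 * math.pi * k / n_points),
             cy + radius * math.sin(2 * math.pi * k / n_points))
            for k in range(n_points)]
```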
For spatial modeling of the landing path, several simple rules for fixed-wing drone landing need to be considered:
(1) the drone lands along a straight line;
(2) the width of the landing path must be larger than the drone's wingspan;
(3) the drone has a longest safe landing distance.
With these three conditions in mind, the landing path modeling requirements can be refined into several key points:
(1) the path behind an obstacle is unavailable;
(2) a path between obstacles is usable only if its width is larger than the wingspan;
(3) a path is available if the obstacle-free distance is greater than the drone's longest safe landing distance.
From these requirements and key points, the basic flow of landing path modeling is obtained:
(1) determine whether the length and width of the runway satisfy rules (2) and (3);
(2) detect whether obstacles exist on the runway;
(3) accurately measure the runway obstacles and judge whether the runway meets the landing conditions, as sketched after this list.
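The three-step flow can be condensed into a hedged acceptance check; for simplicity this sketch treats every detected obstacle as blocking the full runway width, and the obstacle layout is an assumed simplification of the patent's distribution map.

```python
def runway_accepts_landing(runway_len_m, runway_width_m, obstacles,
                           wingspan_m, safe_landing_m):
    """obstacles: list of (along_track_m, width_m) on the runway surface;
    this layout is an illustrative assumption."""
    if runway_width_m <= wingspan_m or runway_len_m <= safe_landing_m:
        return False                      # rules (2) and (3) fail outright
    # The straight-line landing run must be clear for the full safe distance.
    clear_from = 0.0
    for along, _w in sorted(obstacles):
        if along - clear_from >= safe_landing_m:
            return True                   # long enough clear stretch found
        clear_from = along
    return runway_len_m - clear_from >= safe_landing_m
```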
As shown in Fig. 6, after finding the runway from the air, the drone first measures the length and width of the runway to determine whether they satisfy rules (2) and (3).
After the length and width of the runway are measured, obstacles on the runway are identified; the block objects in Fig. 7 are runway obstacles. The positions of the obstacles are measured and a distribution map of the obstacles on the runway is drawn. (Ranging obstacles from the landing path is a measurement from a moving platform to a static target, which requires field-of-view-coupled ranging; because the image sensor and the laser rangefinder are packaged together in the three-axis gimbal, external shaking is effectively eliminated.) Whether the runway meets the landing requirements is then judged against the three rules and the key points.
In take-off path modeling the drone is static, so the image sensor and laser ranging module need not consider shaking; landing path modeling, by contrast, takes place in high-speed flight, and external shaking must be eliminated.
Regarding the drone flight control system of the invention, the following points should be noted:
1. The image sensor and the laser ranging module are integrated into a three-axis stabilized gimbal (the photoelectric pod).
The image sensor and the laser ranging module are packaged first, the laser ranging point is calibrated to coincide with the midpoint of the image sensor's photosensitive area, and the packaged module is then fitted into the three-axis stabilized gimbal.
2. The photoelectric pod is integrated into the fixed-wing drone, and its aerodynamic interference with the drone's flight is addressed.
Considering the drone's different attitudes during take-off and landing, the third axis of the gimbal rotates about the aircraft's longitudinal axis: in the take-off state the photoelectric pod sits on top of the aircraft, and in the cruise and landing states it is rotated to the bottom. The aerodynamic disturbance the pod causes in flight is not negligible, so SolidWorks Flow Simulation (fluid-dynamics analysis software) is used to simulate the pod's aerodynamic interference during take-off, cruise and landing and to optimize the aerodynamic shape.
3. Linking the image sensor with the laser ranging module.
Once the hardware is ready, the on-board calibration of the photoelectric pod is completed.
4. Target identification and tracking by the image sensor (algorithm).
A convolutional neural network performs target identification on the images acquired by the image sensor, and the laser ranging module acquires the position information. For tracking, the image sensor uses the MedianFlow tracking algorithm from OpenCV.
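A minimal MedianFlow tracking loop might look as follows. In recent opencv-contrib builds the tracker lives under cv2.legacy, while older 3.x builds expose cv2.TrackerMedianFlow_create() directly; the camera index, the ROI selection and the exit key are assumptions for illustration.

```python
# Sketch of a MedianFlow tracking loop (requires opencv-contrib-python).
import cv2

cap = cv2.VideoCapture(0)                      # pod camera, index assumed
ok, frame = cap.read()
bbox = cv2.selectROI("target", frame)          # or a CNN detection box
tracker = cv2.legacy.TrackerMedianFlow_create()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    if found:
        x, y, w, h = map(int, bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # aim the laser/gimbal at the box centre: (x + w/2, y + h/2)
    cv2.imshow("track", frame)
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break
```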
5. In-flight linkage of the three-axis stabilized gimbal with the image sensor and laser ranging.
Take-off and landing have almost nothing in common for a fixed-wing drone, and the landing path is the more dangerous and demanding. Without the three-axis stabilized gimbal, atmospheric turbulence makes it nearly impossible for the image sensor to track a target continuously and stably on the landing path, let alone for the laser to acquire its distance. Target identification and tracking on the landing path must therefore rest on stable operation of the gimbal throughout the flight.
6. In-flight identification, tracking and ranging of the target by the image sensor.
Identifying, tracking and ranging a ground target from the air is the precondition of a safe landing; it requires strict calibration between the three-axis stabilized gimbal, the image sensor and the laser ranging module.
7. Communication between the main controller and the PIX aircraft autopilot.
The main controller drives the photoelectric pod to identify, track and range the ground target, forms flight control instructions, and transmits them to the PIX aircraft autopilot, which executes the specific drone maneuvers.
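The patent does not name the communication protocol, but a Pixhawk-family ("PIX") autopilot ordinarily speaks MAVLink, so a hedged sketch of the main-controller side could use pymavlink; the serial port, baud rate and the example command are all assumptions.

```python
# Assumed MAVLink link from the Jetson main controller to a Pixhawk-family
# autopilot; the protocol itself is an assumption, not stated in the patent.
from pymavlink import mavutil

link = mavutil.mavlink_connection('/dev/ttyTHS1', baud=57600)  # port assumed
link.wait_heartbeat()                      # confirm the autopilot is alive

# Example: request a landing at coordinates computed by the pod
link.mav.command_long_send(
    link.target_system, link.target_component,
    mavutil.mavlink.MAV_CMD_NAV_LAND,      # landing command
    0,                                     # confirmation
    0, 0, 0, 0,                            # params 1-4 unused here
    47.3977, 8.5456, 0)                    # lat, lon, alt (illustrative)
```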
The neural network of the invention is divided into a data input layer, convolution layers, excitation layers, pooling layers and a fully connected layer.
Specifically, the data input layer mainly preprocesses the raw image data, which includes:
1. Mean removal: every dimension of the input data is centered at 0, so as to pull the center of the sample set back to the origin of the coordinate system.
2. Normalization: amplitudes are normalized to the same range. For example, given two feature dimensions A and B, where A ranges from 0 to 10 and B from 0 to 10000, using them directly is problematic; after normalization both A and B lie in the range 0 to 1.
3. PCA/whitening: dimensionality is reduced with PCA, or the amplitudes along the data's feature axes are normalized by whitening.
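A NumPy sketch of these three preprocessing steps on flattened image rows; the epsilon constants and the whitening variant are conventional choices, not specified by the patent.

```python
import numpy as np

def preprocess(X):
    """X: (n_samples, n_features) image data flattened to rows.
    Mean removal, per-feature normalization, then PCA whitening."""
    X = X - X.mean(axis=0)                     # centre every dimension at 0
    X = X / (X.std(axis=0) + 1e-8)             # equalize feature amplitudes
    cov = np.cov(X, rowvar=False)              # feature covariance
    eigval, eigvec = np.linalg.eigh(cov)
    X_pca = X @ eigvec                         # decorrelate (PCA rotation)
    return X_pca / np.sqrt(eigval + 1e-5)      # whiten: unit variance per axis
```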
The convolution layer is the most important layer of a convolutional neural network, and it involves two key operations:
1. local association: each neuron acts as a filter;
2. sliding a window over the receptive field: the filter computes on local data, and the convolution layer uses a parameter-sharing mechanism.
The weights with which each neuron connects to its data window are fixed, and each neuron attends to only one characteristic. The neurons are the filters of image processing, such as a Sobel filter dedicated to edge detection: each filter of a convolution layer attends to one image feature, such as vertical edges, horizontal edges, color or texture, and together the filters make up a feature-extractor set for the whole image.
The excitation layer applies a nonlinear mapping to the output of the convolution layer. The excitation function is generally the ReLU (Rectified Linear Unit), which converges fast and has a simple gradient, though it can be fragile during training.
The pooling layer is sandwiched between successive convolution layers to compress the amount of data and parameters and to reduce overfitting. If the input is an image, the main role of the pooling layer is to compress it. The pooling layer has the following characteristics:
1. Feature invariance, the scale invariance often mentioned in image processing: pooling is a resize of the image. A picture of a dog shrunk to half its size is still recognizably a dog, because the image retains the dog's most important features; what the compression removes is only irrelevant detail, and what remains are the scale-invariant features that best express the image.
2. Feature dimensionality reduction: an image contains a great deal of information and very many features, but some of them are of little use or redundant for the network's image task; pooling removes this redundant information and extracts the most important features.
3. It prevents overfitting to some extent and eases optimization.
The pooling methods used are the max pooling and average pooling algorithms.
In the fully connected layer, all neurons are connected to the preceding layer by weights; the fully connected layer usually sits at the tail of the convolutional neural network, its neurons connected in the same way as in a traditional neural network.
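Putting the five layer types together, a minimal network in this spirit could look as follows; PyTorch is an assumption, since the patent names no framework, and all sizes are illustrative.

```python
# Layer stack as the description lists it: input -> convolution -> ReLU
# excitation -> pooling -> fully connected. Sizes are illustrative.
import torch
import torch.nn as nn

class ObstacleNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolution layer
            nn.ReLU(),                                   # excitation layer
            nn.MaxPool2d(2),                             # max pooling
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # fully connected

    def forward(self, x):                # x: (N, 1, 64, 64) grayscale patches
        x = self.features(x)
        return self.classifier(x.flatten(1))
```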
In the embodiment of the invention, the image processing program takes the following specific steps, to which the skilled person may refer:
Step 1: read the image;
Step 2: generate a grayscale image (convert the three-dimensional RGB matrix into a two-dimensional grayscale matrix);
Step 3: Gaussian blur;
Step 4: Canny edge detection;
Canny edge detection finds the boundaries of an image from its gradients: the intensity of each pixel corresponds to the strength of the gradient, and edges are found by tracing the pixels along the strongest gradients; a strong intensity change between pixels typically indicates an edge.
Step 5: Hough line detection;
This transforms the image into a parameter space called Hough space, processed in polar coordinates (rho and theta), in which intersecting lines are searched for.
Step 6: find the route;
First, the image is split in half about the x-axis. Second, a linear regression model is fitted to the points to find a smooth line. Finally, because outliers are present, a regression model that handles them efficiently is needed, so HuberRegressor is used to constrain the data to a certain range on the y-axis and to draw the lines.
Step 7: blend the lines onto the original picture.
By weighting the two images, they can be added together.
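A hedged end-to-end sketch of steps 1 to 7 in Python with OpenCV and scikit-learn; the file name, blur kernel, thresholds and Hough parameters are illustrative assumptions.

```python
# Steps 1-7: read, grayscale, Gaussian blur, Canny, Hough lines,
# robust line fit with HuberRegressor, weighted overlay.
import cv2
import numpy as np
from sklearn.linear_model import HuberRegressor

img = cv2.imread("runway.jpg")                         # step 1: read image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)           # step 2: grayscale
blur = cv2.GaussianBlur(gray, (5, 5), 0)               # step 3: Gaussian blur
edges = cv2.Canny(blur, 50, 150)                       # step 4: Canny edges
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,     # step 5: Hough lines
                        minLineLength=40, maxLineGap=20)

overlay = np.zeros_like(img)
if lines is not None:
    pts = lines.reshape(-1, 4)
    mid = img.shape[1] / 2                             # step 6: split the image
    for half in (pts[pts[:, 0] < mid], pts[pts[:, 0] >= mid]):
        if len(half) < 2:
            continue
        xs = np.r_[half[:, 0], half[:, 2]].reshape(-1, 1)
        ys = np.r_[half[:, 1], half[:, 3]]
        model = HuberRegressor().fit(xs, ys)           # outlier-robust fit
        x0, x1 = int(xs.min()), int(xs.max())
        y0 = int(model.predict([[x0]])[0])
        y1 = int(model.predict([[x1]])[0])
        cv2.line(overlay, (x0, y0), (x1, y1), (0, 255, 0), 5)

result = cv2.addWeighted(img, 0.8, overlay, 1.0, 0)    # step 7: blend images
```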
In conclusion, the invention discloses an autonomous take-off and landing method for a fixed-wing unmanned aerial vehicle that achieves accurate identification and ranging in a complex flight environment by packaging the image-collecting camera and the laser ranging module together in a three-axis stabilized gimbal. The inventive step of this application lies in combining image recognition with laser ranging to obtain the position information of target objects: objects are identified by image recognition, and their size and distance are then obtained by laser ranging, on which basis the method judges whether the field meets the drone's take-off and landing standard and plans a take-off or landing path. As to novelty, autonomous take-off and landing solutions on the commercial drone market are aimed mainly at multi-rotor VTOL drones; at present there is no autonomous take-off and landing solution for fixed-wing drones. The product form of the invention is a drone photoelectric pod in which the image recognition, laser ranging and control equipment are all packaged; the pod provides an interface to the drone's flight controller for communicating flight instructions. The finished product is hung externally on a drone and, once connected to the flight controller, realizes autonomous take-off and landing.
The above disclosure covers only a few specific embodiments of the invention; the invention, however, is not limited to these embodiments, and any variation conceivable to those skilled in the art falls within its scope.

Claims (5)

1. An autonomous take-off and landing method for a fixed-wing unmanned aerial vehicle, characterized by comprising the following steps:
S1: when taking off, acquiring an image of the current take-off runway through a camera; when landing, acquiring an image of the current landing runway through a camera;
S2: when taking off, processing the image of the take-off runway to obtain the obstacles on the take-off runway and drawing a planar distribution image of the obstacles; when landing, processing the image of the landing runway to obtain the obstacles on the landing runway and drawing a planar distribution image of the obstacles;
S3: measuring the size and distance of each obstacle with a laser ranging module;
S4: when taking off, performing spatial modeling of the take-off path according to the planar distribution image of the obstacles and their sizes and distances to obtain a spatial model of the take-off path; when landing, performing spatial modeling of the landing path in the same way to obtain a spatial model of the landing path;
S5: when taking off, planning a take-off path according to the spatial model of the take-off path; when landing, planning a landing path according to the spatial model of the landing path and cutting into the landing path to land.
2. The method for autonomous take-off and landing of a fixed-wing drone of claim 1, wherein step S2 comprises the following steps:
S2-1: when taking off, converting the three-dimensional color image of the current take-off runway into a two-dimensional black-and-white image; when landing, converting the three-dimensional color image of the current landing runway into a two-dimensional black-and-white image;
S2-2: detecting obstacles in the two-dimensional black-and-white image with an edge detector;
S2-3: drawing a planar distribution map of the obstacles detected in step S2-2.
3. The method of claim 1, wherein the camera and the laser ranging module are packaged together in a three-axis stabilized gimbal.
4. The method of claim 1, wherein, when landing, step S1 comprises the following steps:
S1-1: obtaining a flight path around the runway according to the coordinates of the landing runway and the maximum measurement distance of the laser ranging module;
S1-2: flying along the flight path of step S1-1 once the set distance from the runway is reached;
S1-3: acquiring an image of the runway to be landed on through the camera.
5. The method of claim 4, wherein, in step S5, the fixed-wing drone cuts in from the flight path around the runway to the landing path and lands.
CN201911184244.9A 2019-11-27 2019-11-27 Autonomous take-off and landing method for fixed-wing unmanned aerial vehicle Pending CN110794854A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911184244.9A CN110794854A (en) 2019-11-27 2019-11-27 Autonomous take-off and landing method for fixed-wing unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911184244.9A CN110794854A (en) 2019-11-27 2019-11-27 Autonomous take-off and landing method for fixed-wing unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
CN110794854A true CN110794854A (en) 2020-02-14

Family

ID=69446452

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911184244.9A Pending CN110794854A (en) 2019-11-27 2019-11-27 Autonomous take-off and landing method for fixed-wing unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN110794854A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112034877A (en) * 2020-09-28 2020-12-04 中国电子科技集团公司第五十四研究所 Laser-assisted unmanned aerial vehicle autonomous take-off and landing terminal, system and method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011152917A2 (en) * 2010-04-21 2011-12-08 The Boeing Company Determining landing sites for aircraft
US20170287224A1 (en) * 2016-04-01 2017-10-05 Thales Method of synthetic representation of elements of interest in a viewing system for aircraft
CN108873916A (en) * 2017-05-11 2018-11-23 圣速医疗器械江苏有限公司 A kind of flight control method of intelligent balance aircraft
CN109164825A (en) * 2018-08-13 2019-01-08 上海机电工程研究所 A kind of independent navigation barrier-avoiding method and device for multi-rotor unmanned aerial vehicle
CN109240326A (en) * 2018-08-27 2019-01-18 广东容祺智能科技有限公司 A kind of barrier-avoiding method of the mixing obstacle avoidance apparatus of unmanned plane
CN109946751A (en) * 2019-04-12 2019-06-28 中国民用航空飞行学院 A kind of automatic detection method of airfield runway FOD of unmanned plane
CN110426046A (en) * 2019-08-21 2019-11-08 西京学院 A kind of unmanned plane independent landing runway zone barrier judgment and tracking

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011152917A2 (en) * 2010-04-21 2011-12-08 The Boeing Company Determining landing sites for aircraft
CN102859569A (en) * 2010-04-21 2013-01-02 波音公司 Determining landing sites for aircraft
US20170287224A1 (en) * 2016-04-01 2017-10-05 Thales Method of synthetic representation of elements of interest in a viewing system for aircraft
CN107451988A (en) * 2016-04-01 2017-12-08 泰勒斯公司 The method represented is synthesized to element interested in the inspection system of aircraft
CN108873916A (en) * 2017-05-11 2018-11-23 圣速医疗器械江苏有限公司 A kind of flight control method of intelligent balance aircraft
CN109164825A (en) * 2018-08-13 2019-01-08 上海机电工程研究所 A kind of independent navigation barrier-avoiding method and device for multi-rotor unmanned aerial vehicle
CN109240326A (en) * 2018-08-27 2019-01-18 广东容祺智能科技有限公司 A kind of barrier-avoiding method of the mixing obstacle avoidance apparatus of unmanned plane
CN109946751A (en) * 2019-04-12 2019-06-28 中国民用航空飞行学院 A kind of automatic detection method of airfield runway FOD of unmanned plane
CN110426046A (en) * 2019-08-21 2019-11-08 西京学院 A kind of unmanned plane independent landing runway zone barrier judgment and tracking

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
孙一力 (Sun Yili), "Research on target tracking control technology for multi-rotor unmanned helicopters", China Master's Theses Full-text Database, Engineering Science and Technology II *
朱海峰 (Zhu Haifeng), "Research on UAV perception and avoidance based on stereo vision", China Master's Theses Full-text Database, Engineering Science and Technology II *
田浩 (Tian Hao) et al., "A visual navigation technology for autonomous landing of multi-rotor UAVs", Information Recording Materials *
陈广大 (Chen Guangda) et al., "Path planning design for UAV pesticide application on terraced fields", Journal of Chinese Agricultural Chemistry *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112034877A (en) * 2020-09-28 2020-12-04 中国电子科技集团公司第五十四研究所 Laser-assisted unmanned aerial vehicle autonomous take-off and landing terminal, system and method
CN112034877B (en) * 2020-09-28 2024-03-15 中国电子科技集团公司第五十四研究所 Laser-assisted unmanned aerial vehicle autonomous take-off and landing terminal, system and method

Similar Documents

Publication Publication Date Title
CN110047241A (en) A kind of forest fire unmanned plane cruise monitoring system
CN110618691B (en) Machine vision-based method for accurately landing concentric circle targets of unmanned aerial vehicle
CN106708073B (en) A kind of quadrotor system of independent navigation power-line patrolling fault detection
Luo et al. A survey of intelligent transmission line inspection based on unmanned aerial vehicle
CN111198004A (en) Electric power inspection information acquisition system based on unmanned aerial vehicle
CN106292126B (en) A kind of intelligence aerial survey flight exposal control method, unmanned aerial vehicle (UAV) control method and terminal
CN109885086A (en) A kind of unmanned plane vertical landing method based on the guidance of multiple polygonal shape mark
CN110309762A (en) A kind of forestry health assessment system based on air remote sensing
CN113298035A (en) Unmanned aerial vehicle electric power tower detection and autonomous cruise method based on image recognition
Petrides et al. Towards a holistic performance evaluation framework for drone-based object detection
CN106155082B (en) A kind of unmanned plane bionic intelligence barrier-avoiding method based on light stream
CN111709994B (en) Autonomous unmanned aerial vehicle visual detection and guidance system and method
CN111831010A (en) Unmanned aerial vehicle obstacle avoidance flight method based on digital space slice
CN110187716A (en) Geological exploration UAV Flight Control method and apparatus
CN109885091B (en) Unmanned aerial vehicle autonomous flight control method and system
CN207068060U (en) The scene of a traffic accident three-dimensional reconstruction system taken photo by plane based on unmanned plane aircraft
Chiu et al. Vision-only automatic flight control for small UAVs
Xu et al. Development of power transmission line detection technology based on unmanned aerial vehicle image vision
CN110794854A (en) Autonomous take-off and landing method for fixed-wing unmanned aerial vehicle
Lee Research on multi-functional logistics intelligent Unmanned Aerial Vehicle
WO2018211396A1 (en) Detection of powerlines in aerial images
CN108170160A (en) It is a kind of to utilize monocular vision and the autonomous grasping means of airborne sensor rotor wing unmanned aerial vehicle
CN107765706A (en) Ship unmanned engine room fire inspection quadrotor and its control method
Ming et al. Optical tracking system for multi-UAV clustering
CN213987269U (en) System for unmanned aerial vehicle patrols and examines fan blade

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200214)