CN112106008A - Landing control method of unmanned aerial vehicle and related equipment - Google Patents


Info

Publication number
CN112106008A
Authority
CN
China
Prior art keywords
aerial vehicle
unmanned aerial
ground
landing
target area
Prior art date
Legal status
Pending
Application number
CN201980030313.2A
Other languages
Chinese (zh)
Inventor
高文良
周游
叶长春
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd
Publication of CN112106008A

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/10 Simultaneous control of position or course in three dimensions
    • G05D 1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A landing control method for an unmanned aerial vehicle, a landing control device (600), and an unmanned aerial vehicle are provided. The method includes: acquiring multiple frames of target images, where the target images are obtained by a shooting device carried on the unmanned aerial vehicle photographing the ground below it; determining three-dimensional coordinates of a plurality of spatial position points on the ground from the multiple frames of target images; determining, according to the three-dimensional coordinates of the spatial position points, whether the ground meets a flatness condition for landing of the unmanned aerial vehicle; and, when the ground meets the flatness condition, controlling the unmanned aerial vehicle to land on the ground. Because the unmanned aerial vehicle can detect the flatness of a landing area with only one shooting device, cost is reduced and the limitation on the size of the unmanned aerial vehicle is relaxed; moreover, the pixels of the whole image do not need to be scanned for modeling, which improves the flatness-detection efficiency.

Description

Landing control method of unmanned aerial vehicle and related equipment
Technical Field
The invention relates to the field of unmanned aerial vehicle control, in particular to a landing control method and a landing control device of an unmanned aerial vehicle and the unmanned aerial vehicle.
Background
Thanks to characteristics such as high flying speed and flexible operation, unmanned aerial vehicles are widely used in fields such as aerial photography, safety inspection, and agricultural plant protection. Because the working environment of an unmanned aerial vehicle in real life is changeable, when it lands it may land on a tree, in grass, on rugged riprap, or the like, causing loss. Therefore, the unmanned aerial vehicle needs to assess the landing area by itself before landing.
At present, an unmanned aerial vehicle judges the landing area by generating a depth map with a binocular camera. Generating the depth map requires scanning the pixels of the full image and modeling, which involves a large workload. In addition, there is a certain requirement on the distance between the two cameras of a binocular pair. This not only increases the cost of the unmanned aerial vehicle but also limits its size.
Disclosure of Invention
Embodiments of the invention provide a landing control method, a landing control device, and related equipment for an unmanned aerial vehicle. The unmanned aerial vehicle can detect the flatness of a landing area while carrying only one shooting device, which reduces the cost of the unmanned aerial vehicle and relaxes the limitation on its size. In addition, the pixels of the full image do not need to be scanned for modeling, which improves the detection efficiency of the unmanned aerial vehicle.
In a first aspect, an embodiment of the present invention provides a landing control method for an unmanned aerial vehicle, including:
acquiring a plurality of frames of target images, wherein the target images are obtained by shooting the ground below the unmanned aerial vehicle by a shooting device borne on the unmanned aerial vehicle;
determining three-dimensional coordinates of a plurality of spatial position points on the ground according to the multi-frame target image;
determining whether the ground meets the flatness condition of landing of the unmanned aerial vehicle or not according to the three-dimensional coordinates of the space position points;
and when the ground meets the flatness condition of the unmanned aerial vehicle landing, controlling the unmanned aerial vehicle to land on the ground.
A second aspect of an embodiment of the present invention provides a landing control apparatus, including:
a memory and a processor;
the memory is used for storing program codes;
the processor, invoking the program code, when executed, is configured to:
acquiring a plurality of frames of target images, wherein the target images are obtained by shooting the ground below the unmanned aerial vehicle by a shooting device borne on the unmanned aerial vehicle;
determining three-dimensional coordinates of a plurality of spatial position points on the ground according to the multi-frame target image;
determining whether the ground meets the flatness condition of landing of the unmanned aerial vehicle or not according to the three-dimensional coordinates of the space position points;
and when the ground meets the flatness condition of the unmanned aerial vehicle landing, controlling the unmanned aerial vehicle to land on the ground.
A third aspect of embodiments of the present invention is to provide an unmanned aerial vehicle, including:
a body;
the power system is arranged on the fuselage and used for providing power for the unmanned aerial vehicle;
and the landing control device provided by the second aspect.
A fourth aspect of embodiments of the present invention is to provide a computer-readable storage medium storing one or more instructions adapted to be loaded by a processor and to execute the method for controlling a landing of an unmanned aerial vehicle according to the first aspect.
According to the embodiment of the invention, a multi-frame target image is obtained, the target image is obtained by shooting the ground below the unmanned aerial vehicle by a shooting device carried on the unmanned aerial vehicle, the three-dimensional coordinates of a plurality of spatial position points on the ground below the unmanned aerial vehicle are determined through the multi-frame target image, and the flatness of a landing area is detected through the three-dimensional coordinates of the plurality of spatial position points. Therefore, by implementing the landing control method, the landing control device and the unmanned aerial vehicle of the unmanned aerial vehicle described in the embodiment of the application, the flatness of the landing area can be detected by only one shooting device, so that the cost of the unmanned aerial vehicle is reduced, and the limitation on the size of the unmanned aerial vehicle is also reduced. And pixels of the whole image do not need to be scanned for modeling, so that the detection efficiency of the unmanned aerial vehicle is improved.
Drawings
FIG. 1 is a schematic view of an automatic landing scenario of an unmanned aerial vehicle according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for controlling the landing of an unmanned aerial vehicle according to an embodiment of the present invention;
FIG. 3a is a schematic view of a land leveling system according to an embodiment of the present invention;
FIG. 3b is a schematic view of a non-flat ground according to an embodiment of the present invention;
FIG. 4 is a flowchart of another landing control method of an unmanned aerial vehicle according to an embodiment of the present invention;
FIG. 5a is a schematic view of a descent passage of an unmanned aerial vehicle provided by an embodiment of the invention;
FIG. 5b is a schematic diagram of a descent channel projected onto a target image provided by an embodiment of the invention;
fig. 6 is a structural diagram of a landing control device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When a component is referred to as being "connected" to another component, it can be directly connected to the other component or intervening components may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
The embodiment of the invention provides a landing control method of an unmanned aerial vehicle, which is used for detecting the flatness of a landing area when the unmanned aerial vehicle automatically lands. The following presents a scenario in which the present application is applicable.
As shown in fig. 1, the system includes the unmanned aerial vehicle and the ground on which the unmanned aerial vehicle is desired to land.
Due to the influence of factors such as line of sight or terrain, it may be inconvenient for an operator to observe the ground on which the unmanned aerial vehicle lands. Therefore, when the unmanned aerial vehicle lands, the unmanned aerial vehicle may land on trees, grasses, rugged riprap, and even in water, causing loss. In order to avoid such a situation, the unmanned aerial vehicle needs to judge the situation of the lower ground and determine whether the lower ground is suitable for landing. The unmanned aerial vehicle can shoot the image of the ground below through the carried shooting device, and the flatness of the landing area is detected through the landing control method of the unmanned aerial vehicle provided by the embodiment, so that the unmanned aerial vehicle is prevented from landing in the area which is not suitable for landing.
The method provided by the embodiment of the invention can run on the unmanned aerial vehicle or on an electronic device that establishes a communication connection with the unmanned aerial vehicle, such as a control terminal, a ground station, a mobile terminal, or a relay device. The following description takes the case in which the unmanned aerial vehicle itself executes the landing control method as an example.
Referring to fig. 2, fig. 2 is a schematic flow chart illustrating a landing control method for an unmanned aerial vehicle according to an embodiment of the present invention. As shown in FIG. 2, the landing control method of the unmanned aerial vehicle can comprise steps 201-204.
Wherein:
201. the unmanned aerial vehicle acquires multi-frame target images.
In the embodiment of the application, the unmanned aerial vehicle continuously shoots the images of the ground below the unmanned aerial vehicle according to the preset time interval through the carried shooting device. And then selecting a plurality of frames of target images from the shot images, so that the translation amount of the shooting device corresponding to two adjacent frames of target images is larger than a preset translation threshold value. The target image acquired by the unmanned aerial vehicle can be a key frame.
For example, the unmanned aerial vehicle continuously captures images of the ground below at intervals of 1 s, so that after 30 s it has obtained 30 images of the ground below. The captured images are numbered in shooting order, and the translation of the shooting device between the second image and the first image is calculated. If the translation is larger than the preset translation threshold, the first image and the second image are selected as target images. The translation of the shooting device between the third image and the second image is calculated in the same way; if it is smaller than the preset translation threshold, the third image is discarded, and the translation of the shooting device between the fourth image and the second image is calculated instead, and so on, until all target images have been selected from the 30 images.
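The keyframe-selection rule above (keep a frame only when the camera has translated more than a threshold since the last kept frame) can be sketched as follows. This is an illustrative helper, not the patent's code; the function name and the use of 3-D camera positions are our assumptions:

```python
def select_keyframes(camera_positions, threshold):
    """Return indices of frames whose camera translation relative to the
    previously kept frame exceeds `threshold` (first frame is always kept)."""
    if not camera_positions:
        return []
    keyframes = [0]                       # the first captured image is kept
    last = camera_positions[0]
    for i, pos in enumerate(camera_positions[1:], start=1):
        # translation of the shooting device relative to the last kept frame
        dx = pos[0] - last[0]
        dy = pos[1] - last[1]
        dz = pos[2] - last[2]
        if (dx * dx + dy * dy + dz * dz) ** 0.5 > threshold:
            keyframes.append(i)           # frame becomes a target image
            last = pos
    return keyframes
```

Frames whose translation stays below the threshold are simply skipped, matching the filtering of the third image in the example above.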
202. And the unmanned aerial vehicle determines the three-dimensional coordinates of a plurality of spatial position points on the ground according to the multi-frame target image.
In the embodiment of the application, the unmanned aerial vehicle first extracts feature points from the multiple frames of target images; specifically, this may be implemented with the Harris corner detection algorithm. The unmanned aerial vehicle then tracks and matches the feature points across the multiple frames of target images and finds several groups of homonymous feature points, where the tracking and matching may be implemented with the Kanade-Lucas-Tomasi (KLT) feature tracker. Homonymous feature points are feature points that appear in more than one of the frames, and each group of homonymous feature points corresponds to one spatial position point on the ground. The unmanned aerial vehicle then performs a fitting operation on the positions of each group of homonymous feature points in their corresponding target images and determines, through projective transformation, the three-dimensional coordinates of the ground spatial position point corresponding to each feature point. The number of times each group of homonymous feature points used in the fitting calculation appears across the frames is greater than a preset feature point threshold, or the proportion of frames in which each group appears is greater than a preset feature point ratio.
For example, assume the preset feature point threshold is 8, there are 10 target images, and 5 feature points are extracted in the first target image: feature point 1, feature point 2, feature point 3, feature point 4, and feature point 5. The unmanned aerial vehicle tracks and matches these 5 feature points in the remaining nine target images. The numbers of times the 5 feature points appear across the ten target images are 10, 9, 7, 10, and 10, respectively. Because the count for feature point 3 (i.e., 7) is smaller than the preset feature point threshold, only feature points 1, 2, 4, and 5 are used in the fitting calculation to obtain the three-dimensional coordinates of the corresponding spatial position points.
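The filtering of homonymous feature points by occurrence count or occurrence ratio can be sketched as follows; this is a hypothetical helper (the patent states the criterion but gives no code):

```python
def filter_homonymous_points(track_counts, num_frames,
                             count_threshold=None, ratio_threshold=None):
    """Keep only the feature tracks that appear in enough target images.
    `track_counts` maps a feature id to how many frames it appears in."""
    kept = []
    for feature_id, count in track_counts.items():
        # number of appearances must be greater than the preset threshold
        if count_threshold is not None and count <= count_threshold:
            continue
        # or the proportion of frames must be greater than the preset ratio
        if ratio_threshold is not None and count / num_frames <= ratio_threshold:
            continue
        kept.append(feature_id)
    return sorted(kept)
```

Running it on the example (threshold 8, counts 10, 9, 7, 10, 10) drops feature point 3 and keeps the other four.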
203. And the unmanned aerial vehicle determines whether the ground meets the flatness condition of landing of the unmanned aerial vehicle according to the three-dimensional coordinates of the space position points.
In the embodiment of the application, firstly, the unmanned aerial vehicle executes a fitting algorithm according to the calculated three-dimensional coordinates of the plurality of spatial position points to determine the reference plane. The UAV may then calculate distances of the plurality of spatial location points to the reference plane. Wherein a plurality of spatial location points having a distance from the reference plane less than or equal to the first distance threshold are marked as target spatial location points. And if the number of the target spatial position points is greater than or equal to a first preset number threshold value, or the ratio of the number of the target spatial position points to the number of the plurality of spatial position points is greater than or equal to a first proportional threshold value, determining that the ground flatness meets the landing condition of the unmanned aerial vehicle. As shown in fig. 3a and 3b, the different gray levels in the figures represent different distances of the respective target spatial location point to the plane of the grid. As can be seen from fig. 3a, the gray scales of the target spatial position points are substantially the same, and thus it can be determined that the distances from the target spatial position points to the grid plane are substantially the same, and the target spatial position points can be fitted to a horizontal plane. The situation that the ground is smooth at present and the landing condition of the unmanned aerial vehicle is met is shown. As can be seen from fig. 3b, the gray scales of the target spatial position points are different from each other, so that it can be determined that the distances from the target spatial position points to the grid plane are different from each other, and the target spatial position points cannot be fitted to a plane. 
This indicates that the ground is currently uneven, possibly containing pits or convex hulls, and does not meet the condition for the unmanned aerial vehicle to land.
Alternatively, the reference plane may be a horizontal plane. It should be noted that, when the reference plane is a horizontal plane, the ground is more suitable for landing of the unmanned aerial vehicle. However, in practical cases, the reference plane may be a plane with a certain inclination when the flatness condition is satisfied.
Optionally, after the reference plane is determined, the unmanned aerial vehicle calculates characteristic distances from the plurality of spatial position points to the reference plane. And if the characteristic distance is smaller than or equal to the first distance threshold value, determining that the ground flatness meets the condition of landing of the unmanned aerial vehicle. The characteristic distance may be an average distance, a median distance, a maximum distance, or the like, and is not particularly limited by the embodiments of the present application. The average distance refers to the average of the distances of the plurality of spatial location points to the reference plane. The median distance refers to a value of a distance at an intermediate position among distances from the plurality of spatial position points arranged in order to the reference plane. The maximum distance refers to a value of the distances from the plurality of spatial position points to the reference plane, which is the largest.
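The characteristic-distance variant of the flatness test can be sketched with Python's statistics module; the function name and parameters are illustrative assumptions:

```python
import statistics

def ground_is_flat(distances, first_distance_threshold, characteristic="mean"):
    """Flatness test using a characteristic distance (average, median, or
    maximum) of the spatial points' distances to the reference plane."""
    if characteristic == "mean":
        d = statistics.mean(distances)      # average distance
    elif characteristic == "median":
        d = statistics.median(distances)    # middle value when sorted
    elif characteristic == "max":
        d = max(distances)                  # largest distance
    else:
        raise ValueError(f"unknown characteristic: {characteristic}")
    return d <= first_distance_threshold
```

With distances of 1, 5, 2, and 4 cm and a threshold of 5 cm, the average distance of 3 cm passes the test, matching the worked example later in the text.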
Specifically, the unmanned aerial vehicle repeatedly selects three points that are not on the same straight line from the plurality of spatial position points and determines a candidate plane from each selected triple, until all candidate planes have been determined. The sum of the distances from the plurality of spatial position points to each candidate plane is then calculated, and the candidate plane with the smallest such sum is determined as the reference plane. Next, the distances from the plurality of spatial position points to the reference plane are calculated, and the spatial position points whose distance is smaller than the first distance threshold are determined as target spatial position points. If the number of target spatial position points is greater than or equal to a first preset number threshold, or the ratio of the number of target spatial position points to the number of the plurality of spatial position points is greater than or equal to a first proportional threshold, it is determined that the ground flatness meets the landing condition of the unmanned aerial vehicle.
For example, the candidate plane can be calculated by the following formulas:

n_k = (P_{k+1} - P_k) × (P_{k+2} - P_k)

v_k : n_k · (P - P_k) = 0

wherein v_k is the candidate plane, n_k is the normal vector of the candidate plane, and P_k, P_{k+1}, P_{k+2} are three spatial location points that are not on the same straight line.

Illustratively, the distance d_{ki} from the spatial location point P_i to the candidate plane can be calculated by the following formula:

d_{ki} = |n_k · (P_i - P_k)| / ‖n_k‖

Illustratively, the reference plane may be fitted by the following formula:

v* = arg min_{v_k} Σ_i d_{ki}

wherein arg min indicates that the optimization goal Σ_i d_{ki} is minimized over all candidate planes v_k.
For example, suppose there are 4 spatial position points and no 3 of them lie on a straight line. The unmanned aerial vehicle can form 1 candidate plane from any 3 of the 4 spatial position points, so it can determine 4 candidate planes in total. The sum of the distances from the 4 spatial position points to each candidate plane is then calculated, and the candidate plane with the smallest sum is determined as the reference plane. Next, the distances from the 4 spatial position points to the reference plane are calculated. Assume the first distance threshold is 5 cm, the first preset number threshold is 3, and the distances from the 4 spatial position points to the reference plane are 1 cm, 3 cm, 4 cm, and 2 cm, respectively. All four distances are less than the first distance threshold of 5 cm, so all 4 spatial position points are determined to be target spatial position points. The number of target spatial position points is 4, which is greater than the first preset number threshold of 3; therefore, the ground flatness can be judged to meet the landing condition of the unmanned aerial vehicle. As another example, suppose the distances from the 4 spatial position points to the reference plane are 10 cm, 15 cm, 4 cm, and 8 cm. Only the spatial position point at a distance of 4 cm from the reference plane is determined to be a target spatial position point. The number of target spatial position points is 1, which is smaller than the first preset number threshold of 3; therefore, it can be judged that pits or convex hulls may exist on the ground, and the ground flatness does not meet the landing condition of the unmanned aerial vehicle.
For another example, there are 4 spatial location points and the first distance threshold is 5 cm. After the reference plane is determined by the method, the unmanned aerial vehicle calculates the distances from the 4 spatial position points to the reference plane to be 1cm, 5cm, 2cm and 4 cm. And calculating the average distance from the 4 spatial position points to the reference plane to be 3cm, wherein the average distance of 3cm is less than a first distance threshold value of 5cm, so that the ground flatness can be judged to meet the landing condition of the unmanned aerial vehicle. The method for determining that the ground flatness meets the landing condition of the unmanned aerial vehicle through the median distance and the maximum distance is the same as the average distance, and is not described herein again.
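The candidate-plane procedure and the target-point counting described above can be sketched in pure Python as follows; all names are illustrative, and this is not the patent's implementation:

```python
import itertools
import math

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def _dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def fit_reference_plane(points):
    """Among all planes through three non-collinear spatial points, pick
    the one minimizing the sum of point-to-plane distances."""
    best, best_sum = None, math.inf
    for i, j, k in itertools.combinations(range(len(points)), 3):
        n = _cross(_sub(points[j], points[i]), _sub(points[k], points[i]))
        norm = math.sqrt(_dot(n, n))
        if norm < 1e-12:              # three collinear points define no plane
            continue
        total = sum(abs(_dot(_sub(p, points[i]), n)) / norm for p in points)
        if total < best_sum:
            best, best_sum = (tuple(c / norm for c in n), points[i]), total
    return best                        # (unit normal, a point on the plane)

def meets_flatness_condition(points, plane, distance_threshold,
                             min_count=None, min_ratio=None):
    """Mark points within `distance_threshold` of the reference plane as
    target spatial position points and apply the count/ratio criterion."""
    n, p0 = plane
    targets = sum(1 for p in points
                  if abs(_dot(_sub(p, p0), n)) <= distance_threshold)
    if min_count is not None and targets >= min_count:
        return True
    if min_ratio is not None and targets / len(points) >= min_ratio:
        return True
    return False
```

Exhaustively enumerating triples is only practical for small point sets; a robust estimator such as RANSAC would typically replace the full enumeration in practice.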
204. And when the ground meets the flatness condition of the landing of the unmanned aerial vehicle, controlling the unmanned aerial vehicle to land on the ground.
In the embodiment of the application, when the flatness of the ground meets the landing condition, the unmanned aerial vehicle lands on the ground. It should be noted that, during the landing process, the unmanned aerial vehicle keeps judging the flatness of the ground below according to the methods in steps 201 to 203 until it lands on the ground. If the unmanned aerial vehicle judges during descent that the ground flatness does not meet the landing condition, it outputs first prompt information to its control terminal, or hovers at the current height and stops descending. The first prompt information prompts the operator that the ground in the unmanned aerial vehicle's current region is not suitable for landing, and asks the operator to change the landing region or manually confirm the ground of the current region. The control terminal and the electronic device can be the same device or different devices.
Optionally, when, during the descent, the number of times the ground flatness has been judged to meet the landing condition reaches a preset threshold, or the unmanned aerial vehicle has descended to a preset height, the unmanned aerial vehicle may stop judging the ground and descend directly to the ground.
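The in-descent behavior described above (keep re-checking flatness while descending, and stop checking once the flatness condition has held a preset number of times or the vehicle is at or below a preset height) can be sketched as a control loop; the `uav` interface here is entirely hypothetical:

```python
def descend(uav, commit_height, max_ok_checks):
    """Descend while re-checking ground flatness; commit to landing after
    `max_ok_checks` successful checks or once at/below `commit_height`."""
    ok = 0
    while not uav.on_ground():
        if uav.altitude() <= commit_height or ok >= max_ok_checks:
            uav.descend_step()        # commit to landing, no more checks
            continue
        if uav.ground_is_flat():      # re-run the checks of steps 201-203
            ok += 1
            uav.descend_step()
        else:
            uav.hover()               # or send prompt info to the terminal
            break
```

The `hover()` branch corresponds to outputting the first prompt information and halting the descent.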
According to the landing control method of the unmanned aerial vehicle, the unmanned aerial vehicle obtains the multi-frame target images, and the three-dimensional coordinates of the plurality of spatial position points on the ground below are determined through the multi-frame target images. And the flatness of the landing area is detected through the three-dimensional coordinates of the plurality of spatial position points. Therefore, by implementing the landing control method of the unmanned aerial vehicle, the flatness of the landing area can be detected by only one shooting device, so that the cost of the unmanned aerial vehicle is reduced, and the limitation on the size of the unmanned aerial vehicle is reduced. And pixels of the whole image do not need to be scanned for modeling, so that the detection efficiency of the unmanned aerial vehicle is improved.
Referring to fig. 4, fig. 4 is a schematic flow chart illustrating another method for controlling landing of an unmanned aerial vehicle according to an embodiment of the present invention. As shown in FIG. 4, the landing control method of the unmanned aerial vehicle can comprise steps 401-406.
Wherein:
401. the unmanned aerial vehicle acquires multi-frame target images.
The specific implementation of step 401 is the same as that of step 201 in fig. 2, and is not described herein again.
402. And the unmanned aerial vehicle determines the three-dimensional coordinates of a plurality of spatial position points on the ground according to the multi-frame target image.
Step 402 differs from step 202 in that the unmanned aerial vehicle may determine a first target area ground (S1) and a second target area ground (S2) from the ground below according to its current altitude, its own size, and preset parameters. A preset parameter may be a fixed value or may vary with altitude within a preset altitude range; this is not specifically limited by the embodiments of the present application. Fig. 5a is a schematic view of a descent passage of the unmanned aerial vehicle. As shown in fig. 5a, the first target region (S1) includes the second target region (S2), and the area of the first target region is greater than or equal to the area of the second target region. The second target region (S2) includes the minimum area required for the unmanned aerial vehicle to land, and the area of the second target region is greater than or equal to that minimum area. The shapes of the first and second target regions are not limited to circles and may be other figures such as squares, rectangles, or hexagons; this is not specifically limited by the embodiments of the present application. In the figure, the radius r1 of the first target region (S1) may be a preset parameter value, or may be calculated by multiplying the height h of the unmanned aerial vehicle by a preset ratio. The radius r2 of the second target region (S2) may be the sum of the length and the width of the unmanned aerial vehicle, or may be obtained in other ways, which is not specifically limited by the embodiments of the present application. For example, assuming the preset ratio is 0.3, the current height of the unmanned aerial vehicle is 5 m, and the length and width of the unmanned aerial vehicle are 0.4 m and 0.3 m respectively, then r1 = 5 × 0.3 = 1.5 m and r2 = 0.3 + 0.4 = 0.7 m.
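The radii r1 and r2 in the example above can be computed as follows; this is an illustrative helper, and the parameter names are ours rather than the patent's:

```python
def landing_region_radii(altitude, uav_length, uav_width, height_ratio=0.3):
    """Compute the radii of the two target regions described in the text:
    r1 from the UAV's current height times a preset ratio, and r2 from the
    sum of the UAV's length and width."""
    r1 = altitude * height_ratio      # first (outer) target region S1
    r2 = uav_length + uav_width       # second (inner) target region S2
    return r1, r2
```

With an altitude of 5 m, a ratio of 0.3, and a 0.4 m by 0.3 m airframe, this reproduces r1 = 1.5 m and r2 = 0.7 m from the example.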
Fig. 5b is a schematic diagram of the projection of the descent channel onto the target image. As shown in fig. 5b, the unmanned aerial vehicle only needs to determine the three-dimensional coordinates of a plurality of spatial position points in the ground of the first target area, but does not need to determine the three-dimensional coordinates of all spatial position points on the ground, so that the efficiency of unmanned aerial vehicle detection is further improved. The specific method for calculating the three-dimensional coordinates of the spatial location points is the same as the calculation method in step 202, and is not described herein again.
In addition, the detection accuracy of the unmanned aerial vehicle can be improved by dividing the first target area and the second target area. For example, when the unmanned aerial vehicle detects the lower ground, it is determined that the lower ground as a whole satisfies the flatness condition for landing the unmanned aerial vehicle, but pits or convex hulls may be locally present in the lower ground in reality. At this time, if only the first target area or the second target area is subjected to plane detection, and the area detected by the unmanned aerial vehicle is reduced, the detection accuracy of the unmanned aerial vehicle can be improved.
403. And when the unmanned aerial vehicle is in the first height range, determining whether the first target area ground meets a first flatness condition for landing of the unmanned aerial vehicle according to the three-dimensional coordinates of the spatial position point of the first target area ground.
In the embodiment of the present application, the first height range may be [2m, 10m]. Unlike step 203, when the unmanned aerial vehicle is in the first altitude range, it only needs to determine whether the first target area ground meets the first flatness condition for landing, and the determined reference plane only needs to be a plane (not necessarily a horizontal plane).
Optionally, when the first target area ground does not satisfy the first flatness condition for the unmanned aerial vehicle to land, the unmanned aerial vehicle outputs first prompt information to its control terminal, or hovers at the current altitude and does not descend any further. The first prompt information includes: prompting the operator that the ground of the area currently below the unmanned aerial vehicle is not suitable for landing, and that the landing area should be changed or the ground of the current area manually confirmed.
A specific implementation manner of determining whether the first target area ground meets the first flatness condition for the unmanned aerial vehicle to land in step 403 is similar to a specific implementation manner of determining whether the ground meets the flatness condition for the unmanned aerial vehicle to land in step 203 in fig. 2, which may specifically refer to the description of step 203 and is not described herein again.
404. And when the first target area ground is determined to meet the first flatness condition for the unmanned aerial vehicle to land, controlling the unmanned aerial vehicle to land to a second altitude range.
In the embodiment of the application, when the flatness of the first target area ground meets the first flatness condition, the unmanned aerial vehicle descends to the second height range, which is lower than the first height range. It should be noted that, during the landing process, the unmanned aerial vehicle still determines the flatness of the first target area ground according to the methods in steps 401 to 403 until it descends into the second height range. If, during descent, the unmanned aerial vehicle judges that the flatness of the first target area ground does not meet the first flatness condition, it outputs the first prompt information to its control terminal, or hovers at the current altitude and does not continue descending. The first prompt information includes: prompting the operator that the ground of the area currently below the unmanned aerial vehicle is not suitable for landing, and that the landing area should be changed or the ground of the current area manually confirmed.
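The descend-or-hover logic of steps 401 to 404 can be sketched as a simple control loop. This is a hedged illustration only: the `uav` object and its methods (`altitude()`, `first_area_is_flat()`, `descend_step()`, `hover()`, `prompt_operator()`) are hypothetical stand-ins for the vehicle's actual flight-control interface.

```python
def descend_through_first_range(uav, second_range_top=2.0):
    """Sketch of steps 401-404: while above the second height range, keep
    re-checking the first target area ground; hover and prompt the
    operator if the first flatness condition fails during descent."""
    while uav.altitude() > second_range_top:
        if uav.first_area_is_flat():       # steps 401-403 re-run each iteration
            uav.descend_step()             # continue toward the second height range
        else:
            uav.prompt_operator("current area unsuitable for landing")
            uav.hover()                    # stop descending at the current altitude
            return False
    return True                            # reached the second height range
```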
Optionally, when, during descent, the number of times the unmanned aerial vehicle judges that the flatness of the first target area ground meets the first flatness condition reaches a preset descent threshold, the unmanned aerial vehicle may stop judging the first target area ground and descend directly to the second height range.
Optionally, when it is determined that the first target area ground satisfies the first flatness condition, the unmanned aerial vehicle may land directly on the first target area ground, continuing to execute the method of steps 401 to 403 during descent to judge the flatness of the first target area ground until it lands on the first target area ground.
405. And when the unmanned aerial vehicle is in the second height range, determining whether the second target area ground meets a second flatness condition for landing of the unmanned aerial vehicle according to the three-dimensional coordinates of the spatial position point of the second target area ground.
In the embodiment of the application, first, the unmanned aerial vehicle executes a fitting algorithm according to the three-dimensional coordinates of the plurality of spatial position points in the second target area to determine the second reference plane. Wherein the second reference plane is a horizontal plane. Then, distances of the plurality of spatial location points within the second target region to the second reference plane are calculated. Wherein a plurality of spatial location points having a distance from the second reference plane less than or equal to the first distance threshold are marked as second target spatial location points. And if the number of the second target space position points is greater than or equal to a first preset number threshold, or the ratio of the number of the second target space position points to the number of the plurality of space position points is greater than or equal to a first proportional threshold, determining a key point from the second target space position points. And the sum of the distances of the key point and other second target space position points except the key point on the Z axis is minimum. And if the distances between the key point and other second target space position points except the key point on the Z axis are smaller than a second distance threshold value, determining that the ground of the second target area meets a second flatness condition of the landing of the unmanned aerial vehicle.
Optionally, after the second reference plane is determined, the unmanned aerial vehicle calculates characteristic distances from a plurality of spatial position points in the second target area to the second reference plane. And if the characteristic distance is smaller than or equal to the first distance threshold value, determining that the flatness of the second target area meets the condition of landing of the unmanned aerial vehicle. Wherein the characteristic distance may be an average distance, a median distance, or a maximum distance.
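The characteristic-distance variant above can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation; the function name and `mode` parameter are assumptions, and the distances are taken as already computed against the fitted reference plane.

```python
import statistics

def flat_by_characteristic_distance(distances, first_distance_threshold, mode="mean"):
    """Judge flatness from the characteristic distance of the spatial
    position points to the second reference plane: the mean, median, or
    maximum distance, compared against the first distance threshold."""
    if mode == "mean":
        d = statistics.fmean(distances)
    elif mode == "median":
        d = statistics.median(distances)
    else:
        d = max(distances)
    return d <= first_distance_threshold

# Worked example given later in the text: distances 1, 5, 2, 4 cm, threshold 5 cm
print(flat_by_characteristic_distance([1, 5, 2, 4], 5))  # True (mean = 3 cm)
```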
Specifically, the unmanned aerial vehicle selects three points which are not on the same straight line from a plurality of spatial position points in the second target area at a time, and determines a candidate plane through the selected three points until all candidate planes are determined. Then, the sum of the distances from the plurality of spatial position points to each candidate plane is calculated, and the candidate plane with the smallest sum of the distances from the plurality of spatial position points to the candidate plane is determined as the second reference plane. Then, the distances from the plurality of spatial position points to the second reference plane are respectively calculated, and the spatial position point with the distance smaller than the first distance threshold value is determined as a second target spatial position point. And if the number of the second target spatial position points is greater than or equal to a first preset number threshold, or the ratio of the number of the target spatial position points to the number of the plurality of spatial position points is greater than or equal to a first proportional threshold, determining a key point from the second target spatial position points. The sum of the distances of the key point and other second target space position points except the key point on the Z axis is minimized. And if the distances between the key point and other second target space position points except the key point on the Z axis are smaller than a second distance threshold value, determining that the ground of the second target area meets a second flatness condition of the landing of the unmanned aerial vehicle.
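The procedure just described (exhaustive three-point candidate planes, inlier counting, then the key-point check on the Z axis) can be sketched as follows. This is a hedged illustration under the stated assumptions; all names are hypothetical, and real implementations would typically use a sampled method such as RANSAC rather than enumerating every triple.

```python
import itertools
import numpy as np

def fit_reference_plane(points):
    """Form a candidate plane from every 3 non-collinear points and keep
    the one minimizing the total point-to-plane distance, as described."""
    pts = np.asarray(points, dtype=float)
    best_plane, best_cost = None, np.inf
    for i, j, k in itertools.combinations(range(len(pts)), 3):
        n = np.cross(pts[j] - pts[i], pts[k] - pts[i])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # the 3 points are collinear; skip
            continue
        n = n / norm
        d = -n.dot(pts[i])                   # plane equation: n . x + d = 0
        cost = np.abs(pts @ n + d).sum()     # sum of distances of all points
        if cost < best_cost:
            best_plane, best_cost = (n, d), cost
    return best_plane

def flat_by_key_point(points, first_dist_thr, min_inliers, second_dist_thr):
    """After the plane fit: keep inliers within the first distance
    threshold, pick the key point minimizing the summed Z-axis distance
    to the others, and check every Z-axis distance against the second
    distance threshold."""
    n, d = fit_reference_plane(points)
    pts = np.asarray(points, dtype=float)
    inliers = pts[np.abs(pts @ n + d) <= first_dist_thr]
    if len(inliers) < min_inliers:
        return False
    z = inliers[:, 2]
    key = z[np.argmin([np.abs(z - zp).sum() for zp in z])]
    return bool(np.all(np.abs(z - key) <= second_dist_thr))
```

With 4 points, `fit_reference_plane` evaluates exactly the 4 candidate planes mentioned in the example below.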
Illustratively, the key point z_p can be calculated by the following formula:

z_p = argmin over z_p in {z_1, ..., z_n} of sum_i | z_i - z_p |

where z_i is the coordinate of point P_i on the Z axis, and argmin denotes that z_p is the optimization target that minimizes the sum.

Illustratively, the distance d_i on the Z axis between the key point z_p and each other second target spatial position point can be calculated by the following formula:

d_i = | z_i - z_p |
for example, there are 4 spatial location points within the second target region, and any 3 of the spatial location points are not on a straight line. The unmanned aerial vehicle can optionally form 1 candidate plane from 3 of 4 spatial position points at a time, and finally the unmanned aerial vehicle can determine 4 candidate planes through the 4 spatial position points. Then, the sum of the distances from the 4 spatial position points to each candidate plane is calculated, and the candidate plane with the smallest sum of the distances from the 4 spatial position points to the candidate plane is determined as the second reference plane. Then, the distances from the 4 spatial position points to the second reference plane are calculated, for example, the first distance threshold is 5cm, the first preset number threshold is 3, and the distances from the 4 spatial position points to the second reference plane are 1cm, 3cm, 4cm and 2cm, respectively. The distances of the 4 spatial position points to the second reference plane are all less than the first distance of 5cm, and thus the 4 spatial position points are all determined as the second target spatial position point. The number of second target spatial location points is 4, which is greater than the first preset number threshold 3. Then, a key point is determined from the 4 second target spatial position points, so that the sum of the distances of the key point and the other 3 second target spatial position points on the Z axis is minimum. And if the distances between the key point and the other 3 second target space position points on the Z axis are respectively 1cm, 2cm and 1cm, and the second distance threshold is 3cm, it can be judged that the second target area ground meets the second flatness condition of the landing of the unmanned aerial vehicle.
For another example, there are 4 spatial location points within the second target region, and the first distance threshold is 5 cm. After the second reference plane is determined by the method, the unmanned aerial vehicle calculates the distances from the 4 spatial position points to the second reference plane to be 1cm, 5cm, 2cm and 4 cm. And calculating the average distance between the 4 spatial position points and the second reference plane to be 3cm, wherein the average distance of 3cm is less than the first distance threshold value of 5cm, so that the second target area ground can be judged to meet the second flatness condition of landing of the unmanned aerial vehicle. The method for determining that the second target area ground satisfies the second flatness condition of the unmanned aerial vehicle landing through the median distance and the maximum distance is the same as the average distance, and is not repeated herein.
406. And when the second target area ground meets the second flatness condition for the unmanned aerial vehicle to land, controlling the unmanned aerial vehicle to land on the second target area ground.
In the embodiment of the application, when the flatness of the second target area ground meets the second flatness condition for landing, the unmanned aerial vehicle lands on the second target area ground. It should be noted that, during the landing process, the unmanned aerial vehicle still determines the flatness of the ground below according to the methods in step 401, step 402, and step 404 until it lands on the second target area ground. If, during descent, the unmanned aerial vehicle judges that the flatness of the second target area ground does not meet the second flatness condition, it outputs the first prompt information to its control terminal, or hovers at the current altitude and does not continue descending. The first prompt information includes: prompting the operator that the ground of the area currently below the unmanned aerial vehicle is not suitable for landing, and that the landing area should be changed or the ground of the current area manually confirmed.
Optionally, when, during descent, the number of times the unmanned aerial vehicle judges that the flatness of the second target area ground meets the second flatness condition reaches a preset landing threshold, or when the unmanned aerial vehicle descends to a preset landing height, the unmanned aerial vehicle may stop judging the second target area ground and land directly on it.
According to the landing control method of the unmanned aerial vehicle provided above, the unmanned aerial vehicle divides the ground below into areas on the basis of the method shown in FIG. 2, so the flatness of the target landing area can be detected more accurately, and the detection efficiency is further improved. When detecting the flatness of the second target area ground, the method adds the selection of a key point and the judgment of the Z-axis distances between the key point and the other second target spatial position points, which further ensures the accuracy with which the second target area ground is judged to meet the flatness condition for landing.
The embodiment of the invention provides a landing control device which can be carried on an unmanned aerial vehicle or a control terminal connected with the unmanned aerial vehicle. Fig. 6 is a block diagram of a landing control device according to an embodiment of the present invention, and as shown in fig. 6, the landing control device 600 includes a memory 601 and a processor 602, where the memory 601 stores program codes, the processor 602 calls the program codes in the memory 601, and when the program codes are executed, the processor 602 performs the following operations:
acquiring a plurality of frames of target images, wherein the target images are obtained by shooting the ground below the unmanned aerial vehicle by a shooting device borne on the unmanned aerial vehicle;
determining three-dimensional coordinates of a plurality of spatial position points on the ground according to the multi-frame target image;
determining whether the ground meets the flatness condition of landing of the unmanned aerial vehicle or not according to the three-dimensional coordinates of the space position points;
and when the ground meets the flatness condition of the unmanned aerial vehicle landing, controlling the unmanned aerial vehicle to land on the ground.
Optionally, when the processor 602 invokes the program code, the following operations are further performed:
and when the situation that the flatness condition for landing the unmanned aerial vehicle is not met is determined, outputting first prompt information to a control terminal of the unmanned aerial vehicle or controlling the unmanned aerial vehicle to hover.
Optionally, when determining whether the ground meets the flatness condition for landing the unmanned aerial vehicle according to the three-dimensional coordinates of the plurality of spatial position points, the processor 602 performs the following operations:
performing a fitting algorithm based on the three-dimensional coordinates of the plurality of spatial location points to determine a reference plane;
and determining whether the ground meets the flatness condition of unmanned aerial vehicle landing according to the distances from the plurality of spatial position points to the reference plane.
Optionally, when determining whether the target area ground meets the flatness condition for unmanned aerial vehicle landing according to the distances from the plurality of spatial position points to the reference plane, the processor 602 performs the following operations:
determining target space position points with the distance smaller than or equal to a preset first distance threshold value from the plurality of space position points, and if the number of the target space position points is larger than or equal to a first preset number threshold value or a first proportion threshold value, determining that the ground of the target area meets the flatness condition of landing of the unmanned aerial vehicle; or,
determining characteristic distances from the plurality of spatial position points to the reference plane according to the distances of the plurality of spatial position points, and if the characteristic distances are smaller than or equal to a first distance threshold value, determining that the ground meets the flatness condition of landing of the unmanned aerial vehicle, wherein the characteristic distances comprise an average distance, a median distance or a maximum distance.
Optionally, the reference plane is a horizontal plane.
Optionally, the ground surface comprises a first target area ground surface, and the first target area ground surface comprises a second target area ground surface with an area smaller than that of the first target area ground surface;
the processor 602, when performing a fitting algorithm to determine a reference plane based on the three-dimensional coordinates of the plurality of spatial location points, performs the following:
when the flight altitude of the unmanned aerial vehicle is in a first altitude range, executing a first fitting algorithm according to the three-dimensional positions of the plurality of spatial position points on the ground of the first target area to determine a first reference plane;
when the flight altitude of the unmanned aerial vehicle is in a second altitude range smaller than the first altitude range, executing a second fitting algorithm according to the three-dimensional coordinates of the plurality of spatial position points on the ground of the second target area to determine a second reference plane, wherein the second reference plane is a horizontal plane;
the processor 602 executes the following operations when determining whether the ground meets the flatness condition of unmanned aerial vehicle landing according to the distance between the spatial position point and the reference plane:
determining whether the first target area ground meets a first flatness condition for landing of the unmanned aerial vehicle according to the distance between the spatial position point of the first target area ground and the reference plane;
determining whether the second target area ground meets a second flatness condition for the unmanned aerial vehicle to land according to the distance between the spatial position point of the second target area ground and the reference plane;
when it is determined that the ground meets the flatness condition for landing the unmanned aerial vehicle, the processor 602 performs the following operations when controlling the unmanned aerial vehicle to land on the ground:
when the first target area ground is determined to meet the first flatness condition, controlling the unmanned aerial vehicle to land on the first target area ground;
and when the second target area ground is determined to meet the second flatness condition, controlling the unmanned aerial vehicle to land on the second target area ground.
Optionally, when determining whether the second target area ground meets the second flatness condition for unmanned aerial vehicle landing according to the distance between the spatial position point of the second target area ground and the reference plane, the processor 602 performs the following operations:
and determining target space position points with the distance smaller than or equal to a preset first distance threshold value from the plurality of space position points, and if the number of the second target space position points is larger than or equal to a first preset number threshold value or a first proportional threshold value, determining that the ground of the target area meets a second flatness condition of landing of the unmanned aerial vehicle.
Optionally, when determining whether the target area ground meets the flatness condition of unmanned aerial vehicle landing according to the distance between the spatial position point and the reference plane, the processor 602 performs the following operations:
determining characteristic distances from the plurality of spatial position points to the reference plane according to the distances of the plurality of spatial position points, and if the characteristic distances are smaller than or equal to a second distance threshold value, determining that the ground meets a second flatness condition for landing of the unmanned aerial vehicle, wherein the characteristic distances comprise an average distance, a median distance or a maximum distance.
Optionally, when the processor 602 invokes the program code, the following operations are further performed:
and acquiring multi-frame images output by the shooting device, and selecting multi-frame target images from the multi-frame images.
Optionally, in the multiple frames of target images, the translation amount of the shooting device corresponding to two adjacent frames of target images is greater than a preset translation threshold.
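The frame-selection rule above can be sketched as follows. This is an illustrative sketch only: the representation of a frame as an (image, camera-position) pair and the function name are assumptions, and how the translation amount is estimated in practice is not specified here.

```python
import math

def select_target_frames(frames, translation_threshold):
    """Keep a frame only when the shooting device has translated more
    than a preset threshold since the previously kept frame, so that
    adjacent target images have sufficient baseline. Each frame is an
    (image, position) pair, with position a 3D camera position."""
    selected = []
    last_pos = None
    for image, pos in frames:
        if last_pos is None or math.dist(pos, last_pos) > translation_threshold:
            selected.append(image)
            last_pos = pos
    return selected

frames = [("f0", (0.0, 0, 0)), ("f1", (0.05, 0, 0)), ("f2", (0.3, 0, 0)),
          ("f3", (0.35, 0, 0)), ("f4", (0.7, 0, 0))]
print(select_target_frames(frames, 0.2))  # ['f0', 'f2', 'f4']
```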
Optionally, when determining the three-dimensional coordinates of the spatial location points of the ground corresponding to the plurality of feature points according to the plurality of frames of target images, the processor 602 performs the following operations:
executing a tracking matching algorithm on the feature points of the multi-frame images to determine multiple groups of homonymous feature points in the multi-frame target images, wherein each group of homonymous feature points corresponds to one spatial position point on the ground;
and performing fitting operation according to the position of each group of homonymous feature points in the corresponding target image to determine the three-dimensional coordinates of the spatial position points of the ground corresponding to each feature point in the plurality of feature points.
Optionally, when the number of the feature points in a group of homonymous feature points is greater than a preset feature point threshold, the group of homonymous feature points is used to determine the three-dimensional position of the corresponding feature point on the ground.
The landing control device provided by this embodiment can execute the landing control method of the unmanned aerial vehicle provided by the foregoing embodiment, and the execution manner and the beneficial effects thereof are similar and will not be described again here.
An embodiment of the present invention further provides an unmanned aerial vehicle, including:
a body;
the power system is arranged on the fuselage and used for providing power for the unmanned aerial vehicle;
the shooting device is installed on the fuselage and used for shooting pictures below the unmanned aerial vehicle;
and the landing control device provided by the embodiment.
Optionally, the unmanned aerial vehicle further comprises:
and the communication equipment is arranged on the machine body and used for carrying out information interaction with the control terminal.
The unmanned aerial vehicle provided by this embodiment can execute the landing control method of the unmanned aerial vehicle provided by the foregoing embodiments; its execution manner and beneficial effects are similar and will not be described again here.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It is obvious to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiment, which is not described herein again.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (26)

1. A method for controlling landing of an unmanned aerial vehicle, comprising:
acquiring a plurality of frames of target images, wherein the target images are obtained by shooting the ground below the unmanned aerial vehicle by a shooting device borne on the unmanned aerial vehicle;
determining three-dimensional coordinates of a plurality of spatial position points on the ground according to the multi-frame target image;
determining whether the ground meets the flatness condition of landing of the unmanned aerial vehicle or not according to the three-dimensional coordinates of the space position points;
and when the ground meets the flatness condition of the unmanned aerial vehicle landing, controlling the unmanned aerial vehicle to land on the ground.
2. The method of claim 1, further comprising: and when the situation that the flatness condition for landing the unmanned aerial vehicle is not met is determined, outputting first prompt information to a control terminal of the unmanned aerial vehicle or controlling the unmanned aerial vehicle to hover.
3. The method according to claim 1 or 2,
the determining whether the ground meets the flatness condition of unmanned aerial vehicle landing according to the three-dimensional coordinates of the plurality of spatial position points comprises the following steps:
performing a fitting algorithm based on the three-dimensional coordinates of the plurality of spatial location points to determine a reference plane;
and determining whether the ground meets the flatness condition of unmanned aerial vehicle landing according to the distances from the plurality of spatial position points to the reference plane.
4. The method of claim 3, wherein determining whether the target area ground meets a flatness condition for unmanned aerial vehicle landing based on the distances of the plurality of spatial location points to the reference plane comprises:
determining target space position points with the distance smaller than or equal to a preset first distance threshold value from the plurality of space position points, and if the number of the target space position points is larger than or equal to a first preset number threshold value or a first proportion threshold value, determining that the ground of the target area meets the flatness condition of landing of the unmanned aerial vehicle; or,
determining characteristic distances from the plurality of spatial position points to the reference plane according to the distances of the plurality of spatial position points, and if the characteristic distances are smaller than or equal to a first distance threshold value, determining that the ground meets the flatness condition of landing of the unmanned aerial vehicle, wherein the characteristic distances comprise an average distance, a median distance or a maximum distance.
5. The method according to claim 3 or 4, wherein the reference plane is a horizontal plane.
6. The method of claim 3 or 4, wherein the surface comprises a first target area surface comprising a second target area surface having an area smaller than the first target area surface;
said performing a fitting algorithm from the three-dimensional coordinates of the plurality of spatial location points to determine a reference plane, comprising:
when the flight altitude of the unmanned aerial vehicle is in a first altitude range, executing a first fitting algorithm according to the three-dimensional positions of the plurality of spatial position points on the ground of the first target area to determine a first reference plane;
when the flight altitude of the unmanned aerial vehicle is in a second altitude range smaller than the first altitude range, executing a second fitting algorithm according to the three-dimensional coordinates of the plurality of spatial position points on the ground of the second target area to determine a second reference plane, wherein the second reference plane is a horizontal plane;
the step of determining whether the ground meets the flatness condition of unmanned aerial vehicle landing according to the distance from the spatial position point to the reference plane comprises the following steps:
determining whether the first target area ground meets a first flatness condition for landing of the unmanned aerial vehicle according to the distance between the spatial position point of the first target area ground and the reference plane;
determining whether the second target area ground meets a second flatness condition for the unmanned aerial vehicle to land according to the distance between the spatial position point of the second target area ground and the reference plane;
when it is determined that the ground meets the flatness condition for the unmanned aerial vehicle to land, the controlling of the unmanned aerial vehicle to land on the ground comprises:
when the first target area ground is determined to meet the first flatness condition, controlling the unmanned aerial vehicle to land on the first target area ground;
and when the second target area ground is determined to meet the second flatness condition, controlling the unmanned aerial vehicle to land on the second target area ground.
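Claims 3 and 6 rely on fitting a reference plane to the ground points. One common way to realize such a fit (a sketch of a standard technique, not necessarily the patent's own fitting algorithm) is a least-squares plane via SVD, optionally constrained to a horizontal plane for the low-altitude case described in claim 6:

```python
import numpy as np

def fit_reference_plane(points, horizontal=False):
    """Fit a reference plane to 3-D ground points.
    With horizontal=True (e.g. in the second, lower altitude range),
    the plane is constrained to be horizontal at the mean ground
    height; otherwise a least-squares plane is found via SVD of the
    centered points.  Returns (normal, d) with the plane defined by
    normal . x = d."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    if horizontal:
        normal = np.array([0.0, 0.0, 1.0])
    else:
        # The plane normal is the right singular vector belonging to
        # the smallest singular value of the centered point cloud.
        _, _, vt = np.linalg.svd(pts - centroid)
        normal = vt[-1]
    return normal, float(normal @ centroid)
```

The returned `(normal, d)` pair is the form consumed by the distance tests of claims 3, 4, 7 and 8.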
7. The method of claim 6, wherein the determining whether the second target area ground satisfies the second flatness condition for landing of the unmanned aerial vehicle according to the distances between the spatial position points of the second target area ground and the second reference plane comprises:
and determining, from the plurality of spatial position points, target spatial position points whose distance is smaller than or equal to a preset first distance threshold, and if the number or proportion of the target spatial position points is greater than or equal to a first preset number threshold or a first proportion threshold, determining that the second target area ground satisfies the second flatness condition for landing of the unmanned aerial vehicle.
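The inlier-count variant of the flatness test (claims 7, 16 and 19) can be sketched as below. Function name and default thresholds are assumed placeholders, not values from the patent.

```python
import numpy as np

def enough_inliers(points, plane, dist_threshold=0.05,
                   count_threshold=None, ratio_threshold=0.8):
    """Count-based flatness test: the ground passes if the number (or
    the proportion) of points lying within `dist_threshold` of the
    reference plane reaches a preset threshold.  `plane` is (n, d)
    with the plane defined by n . x = d and ||n|| == 1."""
    normal, d = plane
    inliers = np.abs(points @ normal - d) <= dist_threshold
    if count_threshold is not None:
        return int(inliers.sum()) >= count_threshold
    return float(inliers.mean()) >= ratio_threshold
```

A proportion threshold is insensitive to how many feature points were tracked, whereas an absolute count guards against deciding from too few points.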
8. The method according to claim 6 or 7, wherein the determining whether the second target area ground satisfies the second flatness condition for landing of the unmanned aerial vehicle according to the distances from the spatial position points to the reference plane comprises:
determining a characteristic distance from the plurality of spatial position points to the reference plane according to the distances of the plurality of spatial position points, and if the characteristic distance is smaller than or equal to a second distance threshold, determining that the second target area ground satisfies the second flatness condition for landing of the unmanned aerial vehicle, wherein the characteristic distance comprises an average distance, a median distance or a maximum distance.
9. The method according to any one of claims 1-8, further comprising:
and acquiring multi-frame images output by the shooting device, and selecting multi-frame target images from the multi-frame images.
10. The method according to claim 9, wherein, among the multi-frame target images, the translation of the photographing device between two adjacent target images is greater than a preset translation threshold.
11. The method according to any one of claims 1 to 10, wherein the determining, from the multi-frame target images, the three-dimensional coordinates of the spatial position points of the ground corresponding to the plurality of feature points comprises:
executing a tracking matching algorithm on the feature points of the multi-frame target images to determine multiple groups of homonymous feature points in the multi-frame target images, wherein each group of homonymous feature points corresponds to one spatial position point on the ground;
and performing fitting operation according to the position of each group of homonymous feature points in the corresponding target image to determine the three-dimensional coordinates of the spatial position points of the ground corresponding to each feature point in the plurality of feature points.
12. The method of claim 11, wherein when the number of feature points in a set of homonymous feature points is greater than a preset feature point threshold, the set of homonymous feature points is used to determine the three-dimensional position of the corresponding feature point on the ground.
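Claims 11 and 12 describe matching homonymous (corresponding) feature points across target images and fitting their image positions to recover a 3-D point. A standard technique for that fitting step is linear (DLT) triangulation, sketched here under the assumption that each target image's 3x4 camera projection matrix is known; it is one possible reading of the claimed fitting operation, not the patent's specific method.

```python
import numpy as np

def triangulate_point(projections, pixels):
    """Linear (DLT) triangulation of one ground point observed as a
    group of homonymous feature points.  `projections` is a list of
    3x4 camera projection matrices, `pixels` the matching (u, v)
    observations.  Returns the 3-D coordinates of the spatial point."""
    rows = []
    for P, (u, v) in zip(projections, pixels):
        # Each observation contributes two linear constraints on the
        # homogeneous point X: u*(P[2].X) = P[0].X, v*(P[2].X) = P[1].X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    return X[:3] / X[3]              # dehomogenize
```

With more than two views (claim 12's minimum-count check), the extra rows overdetermine the system and the SVD yields a least-squares solution.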
13. A landing control apparatus, comprising a memory and a processor;
the memory for storing program code;
the processor is configured to invoke the program code and, when the program code is executed, to perform the following operations:
acquiring multi-frame target images, wherein the target images are obtained by a photographing device carried on the unmanned aerial vehicle photographing the ground below the unmanned aerial vehicle;
determining three-dimensional coordinates of a plurality of spatial position points on the ground according to the multi-frame target image;
determining whether the ground satisfies a flatness condition for landing of the unmanned aerial vehicle according to the three-dimensional coordinates of the plurality of spatial position points;
and when the ground meets the flatness condition of the unmanned aerial vehicle landing, controlling the unmanned aerial vehicle to land on the ground.
14. The apparatus of claim 13, wherein the processor, when invoking the program code, further performs the following:
and when it is determined that the ground does not satisfy the flatness condition for landing of the unmanned aerial vehicle, outputting first prompt information to a control terminal of the unmanned aerial vehicle or controlling the unmanned aerial vehicle to hover.
15. The apparatus of claim 13 or 14, wherein the processor, when determining whether the ground satisfies a flatness condition for unmanned aerial vehicle landing from the three-dimensional coordinates of the plurality of spatial position points, performs:
performing a fitting algorithm based on the three-dimensional coordinates of the plurality of spatial location points to determine a reference plane;
and determining whether the ground meets the flatness condition of unmanned aerial vehicle landing according to the distances from the plurality of spatial position points to the reference plane.
16. The apparatus of claim 15, wherein the processor, in determining whether the target area ground meets a flatness condition for unmanned aerial vehicle landing based on the distances of the plurality of spatial location points from the reference plane, performs the following:
determining, from the plurality of spatial position points, target spatial position points whose distance is smaller than or equal to a preset first distance threshold, and if the number or proportion of the target spatial position points is greater than or equal to a first preset number threshold or a first proportion threshold, determining that the target area ground satisfies the flatness condition for landing of the unmanned aerial vehicle; or,
determining a characteristic distance from the plurality of spatial position points to the reference plane according to the distances of the plurality of spatial position points, and if the characteristic distance is smaller than or equal to a first distance threshold, determining that the ground satisfies the flatness condition for landing of the unmanned aerial vehicle, wherein the characteristic distance comprises an average distance, a median distance or a maximum distance.
17. The apparatus of claim 15 or 16, wherein the reference plane is a horizontal plane.
18. The apparatus of claim 15 or 16, wherein the ground comprises a first target area ground, the first target area ground comprising a second target area ground whose area is smaller than that of the first target area ground;
the processor, when executing a fitting algorithm to determine a reference plane based on the three-dimensional coordinates of the plurality of spatial location points, performs the following:
when the flight altitude of the unmanned aerial vehicle is in a first altitude range, executing a first fitting algorithm according to the three-dimensional coordinates of the plurality of spatial position points on the ground of the first target area to determine a first reference plane;
when the flight altitude of the unmanned aerial vehicle is in a second altitude range smaller than the first altitude range, executing a second fitting algorithm according to the three-dimensional coordinates of the plurality of spatial position points on the ground of the second target area to determine a second reference plane, wherein the second reference plane is a horizontal plane;
the processor, when determining whether the ground satisfies the flatness condition for landing of the unmanned aerial vehicle according to the distances from the spatial position points to the reference plane, performs the following operations:
determining whether the first target area ground satisfies a first flatness condition for landing of the unmanned aerial vehicle according to the distances between the spatial position points of the first target area ground and the first reference plane;
determining whether the second target area ground satisfies a second flatness condition for landing of the unmanned aerial vehicle according to the distances between the spatial position points of the second target area ground and the second reference plane;
the processor, when controlling the unmanned aerial vehicle to land on the ground upon determining that the ground satisfies the flatness condition for landing of the unmanned aerial vehicle, performs the following operations:
when the first target area ground is determined to meet the first flatness condition, controlling the unmanned aerial vehicle to land on the first target area ground;
and when the second target area ground is determined to meet the second flatness condition, controlling the unmanned aerial vehicle to land on the second target area ground.
19. The apparatus of claim 18, wherein the processor, in determining whether the second target area ground satisfies a second flatness condition for unmanned aerial vehicle landing based on the distance of the spatial location point of the second target area ground from the reference plane, performs the following operations:
and determining, from the plurality of spatial position points, target spatial position points whose distance is smaller than or equal to a preset first distance threshold, and if the number or proportion of the target spatial position points is greater than or equal to a first preset number threshold or a first proportion threshold, determining that the second target area ground satisfies the second flatness condition for landing of the unmanned aerial vehicle.
20. The apparatus of claim 18 or 19, wherein the processor, when determining whether the second target area ground satisfies the second flatness condition for landing of the unmanned aerial vehicle according to the distances from the spatial position points to the reference plane, performs the following operations:
determining a characteristic distance from the plurality of spatial position points to the reference plane according to the distances of the plurality of spatial position points, and if the characteristic distance is smaller than or equal to a second distance threshold, determining that the second target area ground satisfies the second flatness condition for landing of the unmanned aerial vehicle, wherein the characteristic distance comprises an average distance, a median distance or a maximum distance.
21. The apparatus of any of claims 13-20, wherein the processor, when invoking the program code, further performs the following:
and acquiring multi-frame images output by the shooting device, and selecting multi-frame target images from the multi-frame images.
22. The apparatus according to claim 21, wherein, among the multi-frame target images, the translation of the photographing device between two adjacent target images is greater than a preset translation threshold.
23. The apparatus according to any one of claims 13 to 22, wherein the processor, when determining the three-dimensional coordinates of the spatial location points of the ground corresponding to the plurality of feature points from the plurality of frame target images, performs:
executing a tracking matching algorithm on the feature points of the multi-frame target images to determine multiple groups of homonymous feature points in the multi-frame target images, wherein each group of homonymous feature points corresponds to one spatial position point on the ground;
and performing fitting operation according to the position of each group of homonymous feature points in the corresponding target image to determine the three-dimensional coordinates of the spatial position points of the ground corresponding to each feature point in the plurality of feature points.
24. The apparatus of claim 23, wherein when the number of feature points in a set of homonymous feature points is greater than a preset feature point threshold, the set of homonymous feature points is used to determine the three-dimensional position of the corresponding feature point on the ground.
25. An unmanned aerial vehicle, comprising:
a fuselage;
a power system, mounted on the fuselage and configured to provide power for the unmanned aerial vehicle;
a photographing device, mounted on the fuselage and configured to photograph the ground below the unmanned aerial vehicle;
and the landing control apparatus according to any one of claims 13 to 24.
26. The unmanned aerial vehicle of claim 25, further comprising:
a communication device, mounted on the fuselage and configured to exchange information with a control terminal.
CN201980030313.2A 2019-09-27 2019-09-27 Landing control method of unmanned aerial vehicle and related equipment Pending CN112106008A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/108597 WO2021056432A1 (en) 2019-09-27 2019-09-27 Landing control method for unmanned aerial vehicle, and related device

Publications (1)

Publication Number Publication Date
CN112106008A true CN112106008A (en) 2020-12-18

Family

ID=73749022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980030313.2A Pending CN112106008A (en) 2019-09-27 2019-09-27 Landing control method of unmanned aerial vehicle and related equipment

Country Status (2)

Country Link
CN (1) CN112106008A (en)
WO (1) WO2021056432A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070237420A1 (en) * 2006-04-10 2007-10-11 Microsoft Corporation Oblique image stitching
CN101442619A (en) * 2008-12-25 2009-05-27 武汉大学 Method for splicing non-control point image
CN108474658A (en) * 2017-06-16 2018-08-31 深圳市大疆创新科技有限公司 Ground Morphology observation method and system, unmanned plane landing method and unmanned plane
CN108919830A (en) * 2018-07-20 2018-11-30 南京奇蛙智能科技有限公司 A kind of flight control method that unmanned plane precisely lands

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104132941B (en) * 2014-08-11 2016-11-02 江苏恒创软件有限公司 A kind of many basins based on unmanned plane Water quality comprehensive monitoring and the method for analysis
WO2019119199A1 (en) * 2017-12-18 2019-06-27 深圳市大疆创新科技有限公司 Control method and control device for unmanned aerial vehicle, unmanned aerial vehicle and agricultural unmanned aerial vehicle


Also Published As

Publication number Publication date
WO2021056432A1 (en) 2021-04-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201218