CN114415736B - Multi-stage visual accurate landing method and device for unmanned aerial vehicle - Google Patents

Multi-stage visual accurate landing method and device for unmanned aerial vehicle

Info

Publication number
CN114415736B
CN114415736B (application CN202210335580.4A)
Authority
CN
China
Prior art keywords: landing, unmanned aerial vehicle, stage, cooperation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210335580.4A
Other languages: Chinese (zh)
Other versions: CN114415736A (en)
Inventor
项森伟
叶敏翔
胡易人
王晓波
谢安桓
张丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab
Priority to CN202210335580.4A
Publication of CN114415736A
Application granted
Publication of CN114415736B

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/10 Simultaneous control of position or course in three dimensions
    • G05D1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft

Abstract

The invention discloses a multi-stage visual accurate landing method for an unmanned aerial vehicle, comprising the following steps. Step S1: acquire the internal parameters of the onboard downward-looking camera and the actual landing height requirement of the unmanned aerial vehicle, and construct a ground visual landing sign of multi-scale, multi-cooperation labels. Step S2: detect the ground visual landing sign, identifying the landing target and detecting its corner points. Step S3: calculate the three-dimensional relative position of the landing target in the coordinate system of the onboard downward-looking camera using a camera pose estimation algorithm. Step S4: resolve the landing target position in the body coordinate system of the unmanned aerial vehicle from the three-dimensional relative position and the real-time three-dimensional position information of the vehicle, and complete the landing by reducing the landing speed stage by stage. By detecting different multi-scale, multi-cooperation labels on the ground at different landing heights, the invention enables the unmanned aerial vehicle to identify and locate the landing target without blind areas throughout the whole process and thus to complete a safe, accurate and smooth landing.

Description

Multi-stage visual accurate landing method and device for unmanned aerial vehicle
Technical Field
The invention relates to the technical field of autonomous and accurate landing of unmanned aerial vehicles, in particular to a multi-stage visual accurate landing method and device for an unmanned aerial vehicle.
Background
With the development of sensing and unmanned aerial vehicle technology, unmanned aerial vehicles have been widely used to perform military and civil tasks such as power inspection, logistics transportation, police security and environmental monitoring. The landing process is the stage in which unmanned aerial vehicle accidents occur most frequently, and it has attracted wide public attention. The autonomous accurate landing technique of unmanned aerial vehicles has therefore become a focus of attention in the industry.
In recent years, with the falling cost of related hardware such as onboard processors and vision sensors and the rise of computer vision technology, vision-based autonomous landing of unmanned aerial vehicles has been widely studied. Two types of visual landing techniques are common. The first is the passive autonomous landing system, which places multiple cameras and an image processing platform on the ground, treats the unmanned aerial vehicle as the observed target, and obtains its pose parameters through visual measurement; the optical motion capture system is a typical example. This approach is highly accurate, but its cost is high and its mobility poor; it is mostly used in laboratory environments and cannot meet the requirement of rapid autonomous landing at arbitrary places. The second type is the landing navigation system based on an active vision mechanism, in which the unmanned aerial vehicle carries the camera, acquires images of a cooperative target erected on the ground or of other markers, and computes its own pose to realize landing control. The applicant has found that research on this method focuses on cooperative target design and configuration optimization and on identification and pose resolving algorithms, and that generally only one label, or several labels with small size differences, is adopted to assist landing. The prior art achieves high identification and positioning accuracy for a single label, but in practical application the field of view of the onboard downward-looking camera changes with the landing height of the unmanned aerial vehicle. Especially in the initial and final stages of landing, a single label appears too large or too small in the camera image, so that detection and positioning of the landing target point fail; a blind-landing phenomenon thus exists in certain height intervals, bringing great hidden danger to safe landing. The blind-landing problem further means that existing active visual landing techniques cannot realize accurate and safe landing from larger heights (more than 30 meters).
Therefore, a multi-stage visual accurate landing method and device for unmanned aerial vehicles are provided to solve the above technical problems.
Disclosure of Invention
The invention aims to provide a multi-stage visual accurate landing method and device for an unmanned aerial vehicle, solving the prior-art problems that, in the initial and final landing stages, a single label appears too large or too small in the camera image, so that detection and positioning of the landing target point fail and a blind-landing phenomenon exists in certain height intervals, bringing great hidden danger to safe landing; the blind-landing problem further prevents existing active visual landing techniques from realizing accurate and safe landing from larger heights (more than 30 meters).
The technical scheme adopted by the invention is as follows:
a multi-stage visual accurate landing method for an unmanned aerial vehicle comprises the following steps:
Step S1: acquiring the internal parameters of the onboard downward-looking camera and the actual landing height requirement of the unmanned aerial vehicle, and constructing a ground visual landing sign of multi-scale, multi-cooperation labels;
Step S2: detecting the ground visual landing sign, and completing the landing target identification and corner detection of the multi-scale multi-cooperation label corresponding to each landing stage;
Step S3: according to the results of the landing target identification and corner detection, calculating the three-dimensional relative position of the landing target in the coordinate system of the onboard downward-looking camera by means of a camera pose estimation algorithm;
Step S4: resolving the landing target position in the body coordinate system of the unmanned aerial vehicle from the three-dimensional relative position and the real-time three-dimensional position information of the unmanned aerial vehicle, and completing the landing by reducing the landing speed stage by stage.
Further, in step S1, the multi-scale, multi-cooperation labels comprise multi-scale labels and multi-cooperation labels. The multi-cooperation labels are labels of different shapes, and each class of multi-cooperation label corresponds to one landing stage. The multi-scale labels are concentric-circle labels of different sizes: the center of the largest label is taken as the target landing point and the common center, and the remaining labels are distributed on concentric circles around this center on the largest label according to size, with smaller labels lying closer to the center and the smallest label occupying the center itself. The multi-scale labels and the multi-cooperation labels together form the ground visual landing sign.
Further, the number of multi-cooperation tags of each class is not less than 1.
Further, in step S1, the ground visual landing sign adopts two-dimensional codes, landing signs or signs with corner points for landing target identification and corner detection.
Further, the landing flight flow in step S1 divides the landing process of the unmanned aerial vehicle into a plurality of landing stages according to the classes of the multi-scale, multi-cooperation labels, and each landing stage is mapped to a different height interval. In each landing stage the unmanned aerial vehicle can identify at least one class of multi-scale, multi-cooperation label, its position and attitude relative to the target landing point are calculated in real time, and the flight is completed with the corresponding pose control parameters; through multi-stage, seamlessly connected detection, positioning and control, the unmanned aerial vehicle lands autonomously and accurately throughout the whole process.
Further, step S2 comprises, while the unmanned aerial vehicle completes the landing of each stage, detecting and calculating the landing target identification and corresponding corner detection of each class of multi-scale multi-cooperation label according to the ground visual landing sign.
Further, step S3 comprises the following sub-steps:
step S31: calculating the three-dimensional relative position of the landing target in the coordinate system of the onboard downward-looking camera by means of a camera pose estimation algorithm, wherein the landing target position of the unmanned aerial vehicle is the sum of the three-dimensional position of the multi-scale multi-cooperation label and the real-time three-dimensional position of the unmanned aerial vehicle, and continuing to land;
step S32: after the unmanned aerial vehicle has descended to a certain height, entering the next stage, in which the onboard downward-looking camera detects the landing target identification of the next stage's multi-scale multi-cooperation label and its corresponding corner points, and a camera pose estimation algorithm calculates the three-dimensional relative position of the target in the camera coordinate system; the landing target position of the unmanned aerial vehicle is the sum of the three-dimensional position of the next stage's multi-scale multi-cooperation label and the real-time three-dimensional position of the unmanned aerial vehicle, and landing continues with the landing speed of each stage lower than that of the previous stage.
further, the step S4 includes combining the real-time three-dimensional position information of the unmanned aerial vehicle with the three-dimensional relative position, and when the multi-scale multi-cooperation tag enters the effective detection range of the airborne downward-looking camera, that is, the final landing stage is entered, resolving the landing target position in the body coordinate system to complete autonomous landing.
The invention also provides an unmanned aerial vehicle multi-stage visual precision landing device, which comprises a memory and one or more processors, wherein executable codes are stored in the memory, and when the one or more processors execute the executable codes, the one or more processors are used for realizing the unmanned aerial vehicle multi-stage visual precision landing method in any one of the embodiments.
The invention also provides a computer readable storage medium, on which a program is stored, which when executed by a processor is configured to implement a method for multi-stage visual precision landing of an unmanned aerial vehicle according to any one of the above embodiments.
The invention has the beneficial effects that:
1. Compared with the prior art, the multi-stage visual accurate landing method enables the unmanned aerial vehicle, by detecting different multi-scale, multi-cooperation labels on the ground at different landing heights, to identify and locate the landing target without blind areas throughout the whole process, and thus to complete a safe, accurate and smooth landing;
2. The ground visual landing sign design and the corresponding staged control method provided by the invention are simple in theory and convenient to operate; technicians can increase or decrease the number of multi-scale multi-cooperation labels and the corresponding number of landing stages according to actual requirements, and the method has the advantages of low cost, good mobility, good expansibility and strong generalization capability.
Drawings
Fig. 1 is a schematic diagram of a multi-scale multi-cooperative tag of a multi-stage visual precision landing method for an unmanned aerial vehicle according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a multi-stage visual precision landing method for an unmanned aerial vehicle according to an embodiment of the present invention;
fig. 3 is a structural diagram of the multi-stage visual precision landing device of the unmanned aerial vehicle.
Detailed Description
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A multi-stage visual accurate landing method for an unmanned aerial vehicle comprises the following steps:
Step S1: acquiring the internal parameters of the onboard downward-looking camera and the actual landing height requirement of the unmanned aerial vehicle, and constructing a ground visual landing sign of multi-scale, multi-cooperation labels;
the multi-scale and multi-cooperation labels comprise multi-scale labels and multi-cooperation labels, the multi-cooperation labels are labels with different shapes, and each type of the multi-cooperation labels corresponds to one landing stage; the multi-scale labels are concentric circle labels with different sizes, the center of the concentric circle label with the largest size is regarded as a target landing point and the center of the concentric circle, the rest concentric circle labels surround the center of the concentric circle according to the size, the concentric circle is distributed on the concentric circle label with the largest size, the smaller the size, the closer the size is to the center of the concentric circle, the smaller the size is, and the multi-scale label and the multi-cooperation label jointly form the ground visual landing mark.
The sizes of the multi-scale multi-cooperation labels stand in a specific piecewise mapping relation with the internal parameters of the onboard downward-looking camera, such as the physical size of its photosensitive chip, its field angle and its lens focal length, and with external parameters such as the actual landing height requirement of the unmanned aerial vehicle.
The ground visual landing sign adopts two-dimensional codes, landing signs (such as the H sign of a helipad) or signs with corner points (such as a self-designed logo sign) for target identification and corner detection.
As shown in fig. 1, the sign comprises N classes of multi-cooperation tags of different sizes. Reference numeral 101 denotes the largest multi-cooperation tag, denoted label_begin; 104 denotes the smallest, denoted label_end; 102 denotes a multi-cooperation tag of size i, denoted label_i; 103 denotes a multi-cooperation tag of size i+1, denoted label_i+1. Each class of multi-cooperation tag corresponds to one landing stage, and the number N of classes is not less than 2.
In number, there is exactly one label_begin and one label_end; the number of intermediate-size label_i and label_i+1 tags is not less than 1.
In distribution, the concentric-circle labels of different sizes constitute the multi-scale labels. label_begin serves as the base plate, and its center 202 is the target landing point; the remaining labels are distributed on concentric circles 201 around this center point according to size, smaller labels lying closer to the center, with label_end occupying the central position. Together they form the ground visual landing sign.
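As an illustration only, the layout just described can be captured in a small data structure. Below is a minimal Python sketch; the Tag fields, the ring-spacing rule and the build_landing_sign helper are illustrative assumptions, not specified by the patent.

from dataclasses import dataclass

@dataclass
class Tag:
    name: str           # e.g. "label_begin", "label_1", ..., "label_end"
    size: float         # tag side length, in metres
    ring_radius: float  # distance of the tag centre from the target landing point

def build_landing_sign(sizes):
    """sizes: label sizes ordered from largest (label_begin) to smallest (label_end)."""
    tags = [Tag("label_begin", sizes[0], 0.0)]      # largest tag is the base plate, centred
    for i, s in enumerate(sizes[1:-1], start=1):
        # intermediate tags sit on concentric rings; smaller tags on tighter rings (assumed rule)
        tags.append(Tag(f"label_{i}", s, ring_radius=1.5 * s))
    tags.append(Tag("label_end", sizes[-1], 0.0))   # smallest tag occupies the centre
    return tags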
The sizes of the different classes of multi-scale multi-cooperation labels are related to the internal parameters of the onboard downward-looking camera and the actual landing height. In the original publication the sizing formulas are given as equation images that are not recoverable here; the quantities they relate are the following:
S_begin: the size of label_begin (the largest multi-scale multi-cooperation label);
S_i: the size of the i-th label_i (an intermediate-size multi-scale multi-cooperation label);
S_i+1: the size of the (i+1)-th label_i+1 (an intermediate-size multi-scale multi-cooperation label);
S_end: the size of label_end (the smallest multi-scale multi-cooperation label);
H_max: the designed maximum landing height of the unmanned aerial vehicle;
h_land: the height of the onboard downward-looking camera above the ground after the unmanned aerial vehicle has landed;
d: the physical size of the imaging chip of the onboard downward-looking camera;
f: the focal length of the onboard downward-looking camera;
γ: the ratio between the smallest and largest object sizes that the target recognition algorithm adopted by the onboard downward-looking camera can effectively detect at the same distance.
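Since the original sizing equations survive only as images, the following Python sketch reconstructs one plausible sizing rule from the quantities defined above, assuming an ideal pinhole camera; it is an assumption, not the patent's actual formulas.

def label_sizes(H_max, h_land, d, f, gamma, n_stages):
    """Candidate sizes S_begin >= S_1 >= ... >= S_end for n_stages landing stages.

    H_max  : designed maximum landing height (m)
    h_land : camera height above ground after touchdown (m)
    d      : physical size of the camera imaging chip (m)
    f      : lens focal length (m)
    gamma  : ratio of the smallest to the largest object size the target
             recognition algorithm can detect at the same distance
    """
    footprint = lambda H: d * H / f       # ground footprint edge at height H (pinhole model)
    S_begin = footprint(H_max)            # largest tag: fills the field of view at H_max
    S_end = gamma * footprint(h_land)     # smallest tag: still detectable at touchdown height
    sizes = [S_begin]
    for i in range(1, n_stages - 1):
        # geometric interpolation so adjacent stages hand over without blind intervals
        sizes.append(S_begin * (S_end / S_begin) ** (i / (n_stages - 1)))
    sizes.append(S_end)
    return sizes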
Step S2: detecting the ground visual landing sign, and completing the landing target identification and corner detection of the multi-scale multi-cooperation label corresponding to each landing stage;
and in the process that the unmanned aerial vehicle finishes landing at each stage, detecting and calculating landing target identification of each type of the multi-scale multi-cooperation labels and corresponding corner detection according to the ground visual landing marks, wherein the detection algorithm can adopt the existing mature method, such as a template matching and other traditional image processing algorithms or a convolutional neural network-based deep learning algorithm, and the method is not restricted.
Step S3: according to the results of the landing target identification and corner detection, calculating the three-dimensional relative position of the landing target in the coordinate system of the onboard downward-looking camera by means of a camera pose estimation algorithm;
the landing flight process divides the landing process of the unmanned aerial vehicle into a plurality of landing stages according to the category of the multi-scale multi-cooperation label, and each landing stage is mapped to a different height interval. The unmanned aerial vehicle can at least identify one type of multi-scale multi-cooperation tags in each landing stage, the position and the posture of the unmanned aerial vehicle relative to the ground visual landing mark center are calculated in real time, and the flight is finished by adopting corresponding position and posture control parameters. Through detection, location, the control of multi-stage, seamless connection, realize unmanned aerial vehicle whole journey independently accurate descending.
In the different landing stages, the detection algorithm for the ground visual landing sign may adopt any existing mature method, such as traditional image processing algorithms like template matching or deep learning algorithms based on convolutional neural networks. After landing target identification and corner detection are completed, a camera pose estimation algorithm calculates the three-dimensional relative position of the landing target in the camera coordinate system, and the landing target position in the body coordinate system is solved by combining the real-time three-dimensional position information of the unmanned aerial vehicle.
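A minimal sketch of this pose estimation step, assuming square tags of known side length and OpenCV's solvePnP; the relative_position helper and the corner ordering are illustrative assumptions.

import numpy as np
import cv2

def relative_position(corners_px, tag_size, K, dist_coeffs):
    """Three-dimensional position of a tag in the camera frame from its 4 corner pixels.

    corners_px : 4x2 pixel coordinates (top-left, top-right, bottom-right, bottom-left)
    tag_size   : physical side length of the tag (m)
    K, dist_coeffs : camera intrinsics and distortion coefficients from step S1
    """
    h = tag_size / 2.0
    object_pts = np.array([[-h,  h, 0], [ h,  h, 0],
                           [ h, -h, 0], [-h, -h, 0]], dtype=np.float32)  # tag corners in the tag frame
    ok, rvec, tvec = cv2.solvePnP(object_pts, corners_px.astype(np.float32), K, dist_coeffs)
    return tvec.ravel() if ok else None   # (x, y, z) of the tag centre in camera coordinates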
After the landing target positions are obtained in the different landing stages, the unmanned aerial vehicle adopts different pose control parameters to achieve different landing speeds: the closer a landing stage is to the ground, the smaller its landing speed. Control of the multi-stage landing speed yields a smooth, reasonable and safe landing.
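A toy illustration of such a stage-wise speed schedule; the height intervals and speeds below are placeholders, not values from the patent.

# Each landing stage maps a height interval to a descent speed, strictly
# decreasing toward the ground (placeholder numbers).
STAGES = [
    (20.0, 2.0),   # initial stage: label_begin visible
    (8.0,  1.0),   # intermediate stage: label_i visible
    (3.0,  0.5),   # intermediate stage: label_i+1 visible
    (0.0,  0.2),   # final stage: label_end visible
]

def descent_speed(height_m):
    """Return the descent speed for the stage whose height interval contains height_m."""
    for floor, speed in STAGES:
        if height_m >= floor:
            return speed
    return STAGES[-1][1]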
See fig. 2. Step S31: a camera pose estimation algorithm computes the position of the landing target in the onboard downward-looking camera coordinate system; denote this three-dimensional relative position by P_rel. The landing target position of the unmanned aerial vehicle is then P_target = P_uav + P_rel, where P_uav is the real-time three-dimensional position of the unmanned aerial vehicle and P_rel is taken as the three-dimensional position of the multi-scale multi-cooperation label. Descent continues at landing speed v_1;
step S32: after the unmanned aerial vehicle descends to a certain height, the unmanned aerial vehicle enters the ith stage, and at the moment, the airborne overlook camera starts to detect the ith label _ i (the middle of the size is large)Small multi-scale multi-cooperation label) and corresponding corner detection, and applying a camera pose estimation algorithm to calculate the three-dimensional relative position of the target under a camera coordinate system
Figure 950343DEST_PATH_IMAGE019
At this stage, the landing target position of the unmanned aerial vehicle is
Figure DEST_PATH_IMAGE020
And are combined with
Figure 539456DEST_PATH_IMAGE021
Position control of the unmanned aerial vehicle for the target point, and landing speed at the moment
Figure DEST_PATH_IMAGE022
Continue to fall and the falling speed
Figure 598548DEST_PATH_IMAGE022
Lower than the previous stage
Figure 795174DEST_PATH_IMAGE023
The falling speed of the crane continues to fall;
step S33: after finishing the i-th stage descent, entering an i + 1-th stage descent process, detecting the landing target identification of the i + 1-th label _ i +1 (a multi-scale multi-cooperation label with the size of the middle size) and the corner detection corresponding to the landing target identification by the airborne overlooking camera, and applying a camera attitude estimation algorithm to calculate the three-dimensional relative position of the target under a camera coordinate system
Figure DEST_PATH_IMAGE024
At this stage, the landing target position of the unmanned aerial vehicle is
Figure 939716DEST_PATH_IMAGE025
To do so by
Figure DEST_PATH_IMAGE026
Position control and landing speed of unmanned aerial vehicle for target point
Figure 66941DEST_PATH_IMAGE027
Lower than the previous stage
Figure 793589DEST_PATH_IMAGE022
Continues to fall.
Step S4: resolving the landing target position in the body coordinate system of the unmanned aerial vehicle from the three-dimensional relative position and the real-time three-dimensional position information of the unmanned aerial vehicle, and completing the landing by reducing the landing speed stage by stage.
When label_end (the smallest multi-scale multi-cooperation label) enters the effective detection range of the onboard downward-looking camera, the final landing stage begins, and the vehicle performs autonomous landing toward the target position P_target,end = P_uav + P_rel,end, where the landing speed of each stage is lower than that of the previous stage, i.e. v_1 > ... > v_i > v_(i+1) > ... > v_end.
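Tying steps S2-S4 together, the following sketch outlines the staged landing loop using the helpers sketched above. camera.read_gray, pick_current_stage_tag, get_uav_position and send_velocity_target are hypothetical interfaces to the sensing and autopilot stack, and the camera frame is assumed aligned with the vehicle frame, as the patent's position summation implies.

def landing_loop(camera, K, dist_coeffs, tag_sizes):
    """Descend stage by stage until touchdown (sketch; interfaces are hypothetical)."""
    while True:
        frame = camera.read_gray()                   # hypothetical camera interface
        detections = detect_tags(frame)              # step S2: tag and corner detection
        tag_id = pick_current_stage_tag(detections)  # hypothetical stage/tag selector
        if tag_id is None:
            continue                                 # no tag of the current stage visible yet
        p_rel = relative_position(detections[tag_id], tag_sizes[tag_id], K, dist_coeffs)  # step S3
        if p_rel is None:
            continue
        p_uav = get_uav_position()                   # real-time UAV position (hypothetical)
        p_target = p_uav + p_rel                     # step S4: target position, frames assumed aligned
        speed = descent_speed(p_uav[2])              # stage-wise decreasing descent speed
        send_velocity_target(p_target, speed)        # hypothetical autopilot command
        if p_uav[2] < 0.05:                          # close enough to the ground: touchdown
            break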
Corresponding to the embodiment of the unmanned aerial vehicle multi-stage visual accurate landing method, the invention also provides an embodiment of the unmanned aerial vehicle multi-stage visual accurate landing device.
Referring to fig. 3, the multi-stage visual precision landing apparatus for the unmanned aerial vehicle according to the embodiment of the present invention includes a memory and one or more processors, where the memory stores executable codes, and when the one or more processors execute the executable codes, the one or more processors are configured to implement a multi-stage visual precision landing method for the unmanned aerial vehicle in the above embodiments.
The embodiment of the multi-stage visual precision landing device can be applied to any device with data processing capability, such as a computer. The device embodiments may be implemented by software, by hardware, or by a combination of the two. Taking a software implementation as an example, as a logical device it is formed by the processor of the device reading the corresponding computer program instructions from nonvolatile memory into memory and running them. In terms of hardware, fig. 3 shows a hardware structure diagram of a device with data processing capability on which the landing apparatus is located; in addition to the processor, memory, network interface and nonvolatile memory shown in fig. 3, the device may also include other hardware according to its actual function, which is not described again here.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the corresponding parts of the method embodiments. The device embodiments described above are merely illustrative: units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the invention, which a person of ordinary skill in the art can understand and implement without inventive effort.
The embodiment of the invention also provides a computer-readable storage medium, wherein a program is stored on the computer-readable storage medium, and when the program is executed by a processor, the multi-stage visual accurate landing method of the unmanned aerial vehicle in the embodiment is realized.
The computer readable storage medium may be an internal storage unit of any device with data processing capability described in the foregoing embodiments, such as a hard disk or a memory. It may also be an external storage device of such a device, such as a plug-in hard disk, a Smart Media Card (SMC), an SD card or a flash memory card (Flash Card) provided on the device, and it may include both the internal storage unit and an external storage device. The computer readable storage medium is used to store the computer program and the other programs and data required by the device, and may also be used to temporarily store data that has been output or is to be output.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A multi-stage visual accurate landing method for an unmanned aerial vehicle, characterized by comprising the following steps:
step S1: acquiring the internal parameters of the onboard downward-looking camera and the actual landing height requirement of the unmanned aerial vehicle, and constructing a ground visual landing sign of multi-scale, multi-cooperation labels;
the size of the multi-scale multi-cooperation labels of different classes is related to the airborne overlook camera internal parameters and the actual landing height of the unmanned aerial vehicle, and the size design is as follows:
Figure 628004DEST_PATH_IMAGE001
,
Figure 479417DEST_PATH_IMAGE002
Figure 223382DEST_PATH_IMAGE003
Figure 460328DEST_PATH_IMAGE004
wherein the content of the first and second substances,
Figure 608544DEST_PATH_IMAGE005
representing the size of the largest multi-scale multi-cooperative tag,
Figure 134203DEST_PATH_IMAGE006
a size of the multi-scale multi-collaboration label representing an i-th intermediate size,
Figure 947438DEST_PATH_IMAGE007
the size of the multi-scale multi-collaboration label representing the i +1 st size intermediate size,
Figure 258465DEST_PATH_IMAGE008
representing a size of the multi-scale multi-collaboration tag having a smallest size;
Figure 826850DEST_PATH_IMAGE009
represents the designed maximum landing height of the drone,
Figure 387275DEST_PATH_IMAGE010
the height of the airborne overlook camera relative to the ground after the unmanned aerial vehicle falls is represented;
Figure 66518DEST_PATH_IMAGE011
representing the physical dimensions of the onboard overhead camera imaging chip,
Figure 153423DEST_PATH_IMAGE012
showing the focal length of the onboard overhead camera,
Figure 440179DEST_PATH_IMAGE013
the target recognition algorithm adopted by the airborne overlook camera is represented, and the ratio between the minimum object size and the maximum object size can be effectively detected at the same distance;
step S2: detecting the ground visual landing sign, and completing the landing target identification and corner detection of the multi-scale multi-cooperation label corresponding to each landing stage;
step S3: according to the results of the landing target identification and corner detection, calculating the three-dimensional relative position of the landing target in the coordinate system of the onboard downward-looking camera by means of a camera pose estimation algorithm;
step S4: resolving the landing target position in the body coordinate system of the unmanned aerial vehicle from the three-dimensional relative position and the real-time three-dimensional position information of the unmanned aerial vehicle, and completing the landing by reducing the landing speed stage by stage.
2. The multi-stage visual precision landing method for an unmanned aerial vehicle as claimed in claim 1, wherein the multi-scale, multi-cooperation labels in step S1 comprise multi-scale labels and multi-cooperation labels, the multi-cooperation labels being labels of different shapes, each class of multi-cooperation label corresponding to one landing stage; the multi-scale labels are concentric-circle labels of different sizes, the center of the largest label being taken as the target landing point and the common center, and the remaining labels being distributed on concentric circles around this center on the largest label according to size, with smaller labels lying closer to the center; the multi-scale labels and the multi-cooperation labels together form the ground visual landing sign.
3. A multi-stage visual precision landing method for an unmanned aerial vehicle as claimed in claim 2, wherein the number of each type of multi-cooperative tag is no less than 1.
4. An unmanned aerial vehicle multi-stage visual precision landing method as claimed in claim 1, wherein in step S1, the ground visual landing signs adopt two-dimensional codes, landing signs or angular point signs for landing target identification and angular point detection.
5. The multi-stage visual accurate landing method for an unmanned aerial vehicle as claimed in claim 1, wherein the landing flight flow in step S1 divides the landing process of the unmanned aerial vehicle into a plurality of landing stages according to the classes of the multi-scale, multi-cooperation labels, each landing stage being mapped to a different height interval; in each landing stage the unmanned aerial vehicle can identify at least one class of multi-scale, multi-cooperation label, calculates its position and attitude relative to the target landing point in real time, and completes the flight with the corresponding pose control parameters, so that the unmanned aerial vehicle lands autonomously and accurately throughout the whole process by multi-stage, seamlessly connected detection, positioning and control.
6. A multi-stage visual precision landing method for an unmanned aerial vehicle as claimed in claim 1, wherein step S2 includes detecting and calculating landing target identifications and corner point detections corresponding to each type of the multi-scale multi-cooperation tags according to the ground visual landing signs during the landing process of the unmanned aerial vehicle at each stage.
7. A multi-stage visual precision landing method for an unmanned aerial vehicle as claimed in claim 1, wherein the step S3 includes the following sub-steps:
step S31: calculating the three-dimensional relative position of the landing target in the coordinate system of the onboard downward-looking camera by means of a camera pose estimation algorithm, wherein the landing target position of the unmanned aerial vehicle is the sum of the three-dimensional position of the multi-scale multi-cooperation label and the real-time three-dimensional position of the unmanned aerial vehicle, and continuing to land;
step S32: after the unmanned aerial vehicle has descended to a certain height, entering the next stage, in which the onboard downward-looking camera detects the landing target identification of the next stage's multi-scale multi-cooperation label and its corresponding corner points, and a camera pose estimation algorithm calculates the three-dimensional relative position of the target in the camera coordinate system, wherein the landing target position of the unmanned aerial vehicle is the sum of the three-dimensional position of the next stage's multi-scale multi-cooperation label and the real-time three-dimensional position of the unmanned aerial vehicle, and landing continues with the landing speed of each stage lower than that of the previous stage.
8. The multi-stage visual precision landing method for an unmanned aerial vehicle as claimed in claim 7, wherein step S4 comprises combining the three-dimensional relative position with the real-time three-dimensional position information of the unmanned aerial vehicle; when the multi-scale multi-cooperation label enters the effective detection range of the onboard downward-looking camera, the final landing stage begins, and the landing target position in the body coordinate system of the unmanned aerial vehicle is resolved to complete the autonomous landing.
9. An unmanned aerial vehicle multi-stage visual precision landing device, comprising a memory and one or more processors, wherein the memory stores executable codes, and the one or more processors are used for implementing the unmanned aerial vehicle multi-stage visual precision landing method according to any one of claims 1-8 when executing the executable codes.
10. A computer-readable storage medium, on which a program is stored, which, when being executed by a processor, is configured to carry out a method for multi-stage visual precision landing of a drone according to any one of claims 1 to 8.
CN202210335580.4A 2022-04-01 2022-04-01 Multi-stage visual accurate landing method and device for unmanned aerial vehicle Active CN114415736B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210335580.4A CN114415736B (en) 2022-04-01 2022-04-01 Multi-stage visual accurate landing method and device for unmanned aerial vehicle


Publications (2)

Publication Number Publication Date
CN114415736A (en) 2022-04-29
CN114415736B (en) 2022-07-12

Family

ID=81263261

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210335580.4A Active CN114415736B (en) 2022-04-01 2022-04-01 Multi-stage visual accurate landing method and device for unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN114415736B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115924157A (en) * 2022-12-07 2023-04-07 国网江苏省电力有限公司泰州供电分公司 Unmanned aerial vehicle single-person operation equipment capable of accurately landing and using method thereof
CN115857519B (en) * 2023-02-14 2023-07-14 复亚智能科技(太仓)有限公司 Unmanned plane curved surface platform autonomous landing method based on visual positioning
CN116012422B (en) * 2023-03-23 2023-06-09 西湖大学 Monocular vision-based unmanned aerial vehicle 6D pose estimation tracking method and application thereof
CN116558504B (en) * 2023-07-11 2023-09-29 之江实验室 Monocular vision positioning method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111562791A (en) * 2019-03-22 2020-08-21 沈阳上博智像科技有限公司 System and method for identifying visual auxiliary landing of unmanned aerial vehicle cooperative target
CN114115318A (en) * 2021-12-01 2022-03-01 山东八五信息技术有限公司 Visual method for unmanned aerial vehicle to land on top of moving vehicle

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10049589B1 (en) * 2016-09-08 2018-08-14 Amazon Technologies, Inc. Obstacle awareness based guidance to clear landing space
JP7042911B2 (en) * 2018-07-13 2022-03-28 三菱電機株式会社 UAV control device and UAV control method
CN109270953B (en) * 2018-10-10 2021-03-26 大连理工大学 Multi-rotor unmanned aerial vehicle autonomous landing method based on concentric circle visual identification
CN110569838B (en) * 2019-04-25 2022-05-24 内蒙古工业大学 Autonomous landing method of quad-rotor unmanned aerial vehicle based on visual positioning
CN110989661B (en) * 2019-11-19 2021-04-20 山东大学 Unmanned aerial vehicle accurate landing method and system based on multiple positioning two-dimensional codes
CN110865650B (en) * 2019-11-19 2022-12-20 武汉工程大学 Unmanned aerial vehicle pose self-adaptive estimation method based on active vision
CN113377118A (en) * 2021-07-14 2021-09-10 中国计量大学 Multi-stage accurate landing method for unmanned aerial vehicle hangar based on vision
CN114200948B (en) * 2021-12-09 2023-12-29 中国人民解放军国防科技大学 Unmanned aerial vehicle autonomous landing method based on visual assistance

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111562791A (en) * 2019-03-22 2020-08-21 沈阳上博智像科技有限公司 System and method for identifying visual auxiliary landing of unmanned aerial vehicle cooperative target
CN114115318A (en) * 2021-12-01 2022-03-01 山东八五信息技术有限公司 Visual method for unmanned aerial vehicle to land on top of moving vehicle

Also Published As

Publication number Publication date
CN114415736A (en) 2022-04-29

Similar Documents

Publication Publication Date Title
CN114415736B (en) Multi-stage visual accurate landing method and device for unmanned aerial vehicle
Yang et al. An onboard monocular vision system for autonomous takeoff, hovering and landing of a micro aerial vehicle
Zhao et al. Detection, tracking, and geolocation of moving vehicle from uav using monocular camera
CN112734852B (en) Robot mapping method and device and computing equipment
Carrio et al. Onboard detection and localization of drones using depth maps
Patruno et al. A vision-based approach for unmanned aerial vehicle landing
US11625851B2 (en) Geographic object detection apparatus and geographic object detection method
CN107589758A (en) A kind of intelligent field unmanned plane rescue method and system based on double source video analysis
Qi et al. Autonomous landing solution of low-cost quadrotor on a moving platform
CN107063261B (en) Multi-feature information landmark detection method for precise landing of unmanned aerial vehicle
Jin et al. Ellipse proposal and convolutional neural network discriminant for autonomous landing marker detection
CN108225273A (en) A kind of real-time runway detection method based on sensor priori
Le Saux et al. Rapid semantic mapping: Learn environment classifiers on the fly
Avola et al. Automatic estimation of optimal UAV flight parameters for real-time wide areas monitoring
Del Pizzo et al. Reliable vessel attitude estimation by wide angle camera
CN116952229A (en) Unmanned aerial vehicle positioning method, device, system and storage medium
Budzan et al. Improved human detection with a fusion of laser scanner and vision/infrared information for mobile applications
Duan et al. Image digital zoom based single target apriltag recognition algorithm in large scale changes on the distance
CN113436276B (en) Visual relative positioning-based multi-unmanned aerial vehicle formation method
Yuan et al. A hierarchical vision-based localization of rotor unmanned aerial vehicles for autonomous landing
CN113313824A (en) Three-dimensional semantic map construction method
Shrestha et al. Automatic pose estimation of micro unmanned aerial vehicle for autonomous landing
Wang et al. Agv navigation based on apriltags2 auxiliary positioning
Cui et al. Coarse-to-fine visual autonomous unmanned aerial vehicle landing on a moving platform
Jiang et al. Quadrotors' Low-cost Vision-based Autonomous Landing Architecture on a Moving Platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant