CN112119428A - Method, device, unmanned aerial vehicle, system and storage medium for acquiring landing position

Info

Publication number
CN112119428A
CN112119428A (application CN201980030518.0A)
Authority
CN
China
Prior art keywords
area
depth information
aerial vehicle
unmanned aerial
determining
Prior art date
Legal status
Pending
Application number
CN201980030518.0A
Other languages
Chinese (zh)
Inventor
周游
蔡剑钊
魏盛华
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN112119428A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/50 - Depth or shape recovery
    • G06T 7/55 - Depth or shape recovery from multiple images
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C 21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/20 - Instruments for performing navigational calculations
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 5/00 - Measuring height; Measuring distances transverse to line of sight; Levelling between separated points; Surveyors' levels
    • G01C 5/06 - Measuring height; Measuring distances transverse to line of sight; Levelling between separated points; Surveyors' levels by using barometric means
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 - Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42 - Determining position
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10032 - Satellite or aerial image; Remote sensing
    • G06T 2207/10044 - Radar image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20228 - Disparity calculation for image-based rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A method for acquiring a landing position, a device for acquiring a landing position, and an unmanned aerial vehicle (12) are provided, which can determine a target landing area more effectively so that the unmanned aerial vehicle (12) can land quickly and safely. The method includes the following steps: acquiring depth information of a target area, wherein the target area is a landable area in the environment below the unmanned aerial vehicle (12) determined after semantic segmentation is performed on an image, and the image is obtained by a shooting device shooting the environment (S201); and determining a target landing area in the target area based on the depth information (S202).

Description

Method, device, unmanned aerial vehicle, system and storage medium for acquiring landing position
Technical Field
The invention relates to the technical field of control, in particular to a method, equipment, an unmanned aerial vehicle, a system and a storage medium for acquiring a landing position.
Background
With the development of science and technology, unmanned aerial vehicles are used in more and more scenarios, and with the advance of wireless transmission technology, the distance an unmanned aerial vehicle can fly keeps increasing. Because an unmanned aerial vehicle is very small, it becomes hard to see clearly beyond a certain distance and is completely invisible farther away; flight at such distances is usually called beyond-visual-range flight.
At present, an unmanned aerial vehicle usually returns and lands at the take-off position located by the Global Positioning System (GPS). However, if the take-off environment is relatively complex, returning to the GPS-located take-off position may not yield a suitable landing area, and the landing process carries a certain risk. Therefore, how to control the unmanned aerial vehicle to land more effectively is of great significance.
Disclosure of Invention
The embodiments of the invention provide a method, a device, an unmanned aerial vehicle, a system, and a storage medium for acquiring a landing position, which can determine a target landing area more effectively so that the unmanned aerial vehicle can land quickly and safely.
In a first aspect, an embodiment of the present invention provides a method for acquiring a landing position, applied to an unmanned aerial vehicle on which a shooting device is mounted, the method including:
acquiring depth information of a target area, wherein the target area is a landable area in an environment below the unmanned aerial vehicle determined after semantic segmentation is performed on an image, and the image is obtained by the shooting device shooting the environment;
determining a target landing area in the target area based on the depth information.
In a second aspect, an embodiment of the present invention provides an apparatus for acquiring a landing position, including a memory and a processor;
the memory is used for storing programs;
the processor, configured to invoke the program, when the program is executed, is configured to perform the following operations:
acquiring depth information of a target area, wherein the target area is a degradable area in an environment below an unmanned aerial vehicle, which is determined after semantic segmentation is performed on an image, and the image is obtained by shooting the environment by a shooting device;
determining a target landing area in the target area based on the depth information.
In a third aspect, an embodiment of the present invention provides an unmanned aerial vehicle, including:
a body;
the power system is arranged on the airframe and used for providing power for the unmanned aerial vehicle to move;
an apparatus for acquiring a landing position as described in the second aspect above.
In a fourth aspect, an embodiment of the present invention provides a system for acquiring a landing position, including:
a device for acquiring a landing position, configured to acquire depth information of a target area, wherein the target area is a landable area in the environment below an unmanned aerial vehicle determined after semantic segmentation is performed on an image, and the image is obtained by a shooting device shooting the environment, and to determine a target landing area in the target area based on the depth information;
and the unmanned aerial vehicle is used for flying to the target landing area.
In a fifth aspect, the present invention provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method according to the first aspect.
In the embodiment of the invention, the device for acquiring the landing position can acquire depth information of a target area and determine a target landing area in the target area based on the depth information, wherein the target area is a landable area in the environment below the unmanned aerial vehicle determined after semantic segmentation is performed on an image, and the image is obtained by the shooting device shooting the environment. Through this embodiment, the target landing area can be determined more effectively, so that the unmanned aerial vehicle can land quickly and safely.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a system for acquiring a landing position according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a method for obtaining a landing position according to an embodiment of the present invention;
FIG. 3 is a schematic illustration of a method of determining a touchdown area provided by an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an apparatus for acquiring a landing position according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
The method for acquiring a landing position provided in the embodiments of the present invention may be performed by a system for acquiring a landing position, where the system includes a device for acquiring a landing position and a drone. In some embodiments, the device for acquiring a landing position may be installed on the drone; in some embodiments, it may be spatially independent of the drone; in some embodiments, it may be a component of the drone, that is, the drone includes the device for acquiring a landing position. In other embodiments, the method for acquiring a landing position may also be applied to other movable devices capable of autonomous movement, such as robots, unmanned vehicles, and unmanned ships. In some embodiments, the drone has a shooting device mounted on it; in some embodiments, the drone may carry a gimbal, and the shooting device may be mounted on the gimbal of the drone. In some embodiments, the shooting device may include, but is not limited to, a monocular camera, a binocular camera, and the like.
The system for acquiring the landing position provided by the embodiment of the invention is schematically described below with reference to the attached drawings.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a system for acquiring a landing position according to an embodiment of the present invention. The system for acquiring a landing position includes: a device 11 for acquiring a landing position and an unmanned aerial vehicle 12. The drone 12 includes a power system 121, and the power system 121 is configured to provide motive power for movement of the drone 12. In some embodiments, the device 11 for acquiring the landing position is disposed in the drone 12 and may establish a communication connection with other components of the drone (such as the power system 121) through a wired connection. In other embodiments, the drone 12 and the device 11 for acquiring the landing position are independent of each other; for example, the device 11 is disposed in a cloud server and communicates with the drone 12 through a wireless connection. In some embodiments, the device 11 for acquiring the landing position may be a flight controller.
In this embodiment of the present invention, the device 11 for acquiring a landing position may acquire depth information of a target area and determine a target landing area in the target area based on the depth information, wherein the target area is a landable area in the environment below the unmanned aerial vehicle 12 determined after semantic segmentation is performed on an image, and the image is obtained by the shooting device shooting the environment.
The method for acquiring the landing position provided by the embodiment of the invention is schematically described below with reference to the attached drawings.
Referring to fig. 2 specifically, fig. 2 is a schematic flowchart of a method for acquiring a landing position according to an embodiment of the present invention, where the method may be executed by a device for acquiring a landing position, and a specific explanation of the device for acquiring a landing position is as described above. Specifically, the method of the embodiment of the present invention includes the following steps.
S201: the method comprises the steps of obtaining depth information of a target area, wherein the target area is a degradable area in an environment below the unmanned aerial vehicle and determined after semantic segmentation is carried out on an image, and the image is obtained by shooting the environment through a shooting device.
In the embodiment of the invention, the equipment for acquiring the landing position can acquire the depth information of a target area, the target area is a landing area in the environment below the unmanned aerial vehicle, which is determined after semantic segmentation is carried out on an image, and the image is obtained by shooting the environment by a shooting device.
Semantic segmentation is classification at the pixel level: pixels belonging to the same class are grouped into one category, so semantic segmentation understands an image at the pixel level. For example, when looking at the environment below the unmanned aerial vehicle from the unmanned aerial vehicle's viewpoint, pixels belonging to lawns are classified into one category, pixels belonging to roads into another, and roof pixels into yet another. Specifically, the semantic segmentation model may be trained with deep learning methods, such as patch classification or a fully convolutional network (FCN). These examples are only for illustration, and the specific manner of semantic segmentation is not limited in the embodiments.
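As an illustrative sketch only (the embodiments do not prescribe any particular implementation), the following Python snippet shows how a per-pixel class map produced by an FCN-style segmentation model could be turned into a mask of landable pixels; the class indices and the set of landable categories are hypothetical placeholders.

```python
import numpy as np

# Hypothetical class indices for an aerial segmentation model (placeholders).
LAWN, PLAYGROUND, ROAD, ROOF, TREES, WATER = 0, 1, 2, 3, 4, 5
LANDABLE_CLASSES = [LAWN, PLAYGROUND, ROAD]

def landable_mask(logits: np.ndarray) -> np.ndarray:
    """logits: H x W x C per-pixel class scores from a segmentation network.
    Returns a boolean H x W mask that is True where the pixel belongs to a
    landable semantic category (lawn, playground, road in this sketch)."""
    labels = np.argmax(logits, axis=-1)       # pixel-level classification
    return np.isin(labels, LANDABLE_CLASSES)  # group landable categories

# Example with random scores standing in for real network output.
fake_logits = np.random.rand(120, 160, 6)
print(landable_mask(fake_logits).mean())      # fraction of landable pixels
```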
In some embodiments, the image is taken when the drone is in a range of heights greater than a first height value; the depth information is obtained when the drone is in a range of heights that is not greater than the first height value.
In one embodiment, when the unmanned aerial vehicle is in a height range greater than the first height value, the device for acquiring the landing position can acquire the image obtained by the shooting device on the unmanned aerial vehicle shooting the environment below the unmanned aerial vehicle, and can determine the landable area in that environment by performing semantic segmentation on the image. In certain embodiments, the height range above the first height value belongs to a high-altitude region; in one example the first height value may be 50 m, and in one example the first height value may range from 50 m to 200 m. In this way, when the unmanned aerial vehicle is at high altitude, its landable area can be roughly determined, and the unmanned aerial vehicle can be controlled to fly toward this landable area.
In an implementation, when the device for acquiring the landing position determines the landable area in the environment below the unmanned aerial vehicle by performing semantic segmentation on the image, it can perform semantic analysis on the image to determine image areas of different semantic categories, and segment these image areas to determine the landable area.
In some embodiments, the image areas of different semantic categories include, but are not limited to, areas of houses, vehicles, woods, lawns, playgrounds, roads, and the like. In some embodiments, the landable areas may include, but are not limited to, lawns, playgrounds, roads, and similar areas. In some embodiments, image areas of different semantic categories may be labeled with different identifying information; the identifying information may include, but is not limited to, numbers, colors, shapes, and the like. In one example, image areas of different semantic categories are identified with different colors, such as marking houses red, lawns green, and roads yellow. In this way, areas of different semantic types can be distinguished, which helps determine the landable area of the unmanned aerial vehicle.
Specifically, fig. 3 is used as an example to explain how the landable area in the environment below the unmanned aerial vehicle is determined. Fig. 3 is a schematic diagram of determining a landable area according to an embodiment of the present invention. As shown in fig. 3, a triangle 31 represents a camera (i.e., the shooting device) mounted on the unmanned aerial vehicle and shooting vertically downward, where the focal length of the camera is f, the height of the camera above the ground measured by a barometer is z, and the camera shoots vertically downward to obtain an image 32. The device for acquiring the landing position can determine the landable area 331 and the non-landable area 332 on the ground 33 by performing semantic segmentation on the image 32, and determine the position information of the nearest safe landing area 333 within the landable area 331.
In some embodiments, the position information of the safe landing area 333 is converted into the world coordinate system using the attitude information of the drone, such as the yaw angle. In some embodiments, the yaw angle of the drone may be obtained by fusing an Inertial Measurement Unit (IMU), an electronic compass, and GPS. In some embodiments, flying the unmanned aerial vehicle to the safe landing area is a process of descending while translating. In this way, the unmanned aerial vehicle can be controlled to fly toward the safe landing area while landing.
In one embodiment, the position of the safe landing area 333 can be calculated with the following formula (1) from the position of the center point of the safe landing area on the image 32 relative to the image center, the focal length, and the height above the ground measured by the barometer.

P = (u·z/f, v·z/f, z)    (1)

Here, P is the position of the safe landing point in the camera coordinate system, (u, v) is the pixel offset of the center of the safe landing area 333 from the image center, f is the focal length of the camera (in pixels), and z is the height above the ground measured by the barometer. In some embodiments, the mounting positions of the camera's optical center and of the barometer are fixed, and their relative offset can be obtained from the design drawings, so the camera's height above the ground can be derived from the barometer reading.
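A small numeric sketch of relation (1) and of the yaw-based conversion to the world coordinate system mentioned above; the pixel offset, focal length, barometer height, and the assumption that the downward-looking camera axes are aligned with the body frame are all illustrative.

```python
import numpy as np

def safe_point_camera_frame(u: float, v: float, f: float, z: float) -> np.ndarray:
    """Back-project the pixel offset (u, v) of the safe landing point from the
    image center into the camera frame: P = (u*z/f, v*z/f, z)."""
    return np.array([u * z / f, v * z / f, z])

def camera_to_world(p_cam: np.ndarray, yaw_rad: float) -> np.ndarray:
    """Rotate the camera-frame point about the vertical axis by the drone's yaw
    so it is expressed in a local world frame (camera assumed to look straight
    down with axes aligned to the body frame)."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return R @ p_cam

# Example: safe point 80 px right and 20 px below the image center,
# focal length 400 px, barometer height 30 m, yaw 45 degrees.
p_cam = safe_point_camera_frame(80.0, 20.0, 400.0, 30.0)
print(camera_to_world(p_cam, np.deg2rad(45.0)))
```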
In one embodiment, the apparatus for acquiring a landing position may acquire the depth information of the target area through a monocular photographing device and/or a binocular photographing device when acquiring the depth information.
In one embodiment, when the device for acquiring the landing position acquires the depth information of the target area and the unmanned aerial vehicle is in a height range greater than a second height value, a monocular shooting device performs a multi-frame shooting operation on the target area, and the depth information is determined from the images obtained by the multi-frame shooting operation. In certain embodiments, the second height value is lower than the first height value.
In an embodiment, when the unmanned aerial vehicle is in a height range greater than the second height value, the device for acquiring the landing position may obtain images from a monocular shooting device performing a multi-frame shooting operation on the target area, and determine the depth information from the images obtained by the multi-frame shooting operation. In some embodiments, when the unmanned aerial vehicle is in a height range greater than the second height value, the device for acquiring the landing position may also determine the depth information from multi-frame data obtained by radar detection. In one example, the second height value may range from 10 m to 50 m.
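The sketch below illustrates one way such multi-frame (motion-stereo) depth could be computed when the translation between two shots is known from onboard sensors; the matched feature coordinates, baseline, and focal length are example values, not values from the embodiments.

```python
import numpy as np

def motion_stereo_depth(pts_frame1: np.ndarray, pts_frame2: np.ndarray,
                        baseline_m: float, f_px: float) -> np.ndarray:
    """Rough per-feature depth from two frames of a downward-looking monocular
    camera that translated horizontally by baseline_m between the shots.
    pts_frame1 / pts_frame2: N x 2 pixel coordinates of the same features.
    Uses depth ~ f * baseline / disparity, the same relation as a stereo pair."""
    disparity = np.linalg.norm(pts_frame1 - pts_frame2, axis=1)
    disparity = np.maximum(disparity, 1e-6)   # avoid division by zero
    return f_px * baseline_m / disparity

# Example: features shifted ~8 px after a 0.5 m sideways move, f = 400 px.
p1 = np.array([[100.0, 100.0], [200.0, 150.0]])
p2 = p1 + np.array([8.0, 0.0])
print(motion_stereo_depth(p1, p2, baseline_m=0.5, f_px=400.0))  # ~25 m each
```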
In this implementation, a smoothness term within the same semantic category is added when the depth map is optimized, and a discrimination term is added between different categories, so that boundaries become more accurate and the depth information within the same category becomes smoother, which further facilitates determining image areas of different semantic categories.
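As one possible reading of the smoothness term described above (a simplified stand-in rather than the optimization actually used), the following sketch averages each depth pixel only with neighbours that share its semantic label, so depth stays smooth inside a category while boundaries between categories are preserved.

```python
import numpy as np

def smooth_depth_within_classes(depth: np.ndarray, labels: np.ndarray,
                                iterations: int = 10) -> np.ndarray:
    """Iteratively average each pixel's depth with its 4-neighbours, but only
    with neighbours carrying the same semantic label. Border wrap-around from
    np.roll is ignored for simplicity in this sketch."""
    d = depth.astype(np.float64).copy()
    for _ in range(iterations):
        acc = d.copy()                    # the pixel itself
        cnt = np.ones_like(d)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nd = np.roll(d, (dy, dx), axis=(0, 1))
            nl = np.roll(labels, (dy, dx), axis=(0, 1))
            same = nl == labels           # only smooth within one category
            acc += np.where(same, nd, 0.0)
            cnt += same
        d = acc / cnt
    return d
```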
In one embodiment, when the device for acquiring the landing position acquires the depth information of the target area, the depth information may be determined by the disparity of images captured by the binocular shooting device on the target area when the unmanned aerial vehicle is in the height range not greater than the second height value.
In one embodiment, when the unmanned aerial vehicle is in the height range not greater than the second height value, the device for acquiring the landing position may acquire an image obtained by photographing a target area with a binocular shooting device, and determine the depth information through parallax of the image photographed by the binocular shooting device on the target area. By the embodiment, the image areas of different semantic categories can be more accurately determined, and the target area can be more accurately determined.
In one example, when the unmanned aerial vehicle is in a height range smaller than 10m, the device for acquiring the landing position may determine the depth information through image parallax of a binocular shooting device mounted on the unmanned aerial vehicle to the target area.
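As a hedged illustration of the binocular case (OpenCV's SGBM matcher is just one possible disparity estimator; the focal length and baseline below are example values), depth can be recovered from a rectified stereo pair as depth = f * baseline / disparity:

```python
import cv2
import numpy as np

def stereo_depth(left_gray: np.ndarray, right_gray: np.ndarray,
                 f_px: float, baseline_m: float) -> np.ndarray:
    """Depth map from a rectified stereo pair: depth = f * baseline / disparity."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disp[disp <= 0] = np.nan              # invalid or unmatched pixels
    return f_px * baseline_m / disp

# Synthetic example only; real use needs properly rectified camera frames.
left = np.random.randint(0, 255, (240, 320), np.uint8)
right = np.roll(left, -4, axis=1)         # crude 4 px disparity
depth = stereo_depth(left, right, f_px=400.0, baseline_m=0.1)
```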
In one embodiment, the shooting device is connected to the body of the unmanned aerial vehicle through a gimbal. The device for acquiring the landing position can control the gimbal to turn to a specified direction and control the shooting device on the gimbal to shoot the ground in the specified direction to obtain an image. In some embodiments, the specified direction may be directly below the unmanned aerial vehicle, or a direction at a certain angle to the direction directly below the unmanned aerial vehicle; the embodiments of the present invention are not particularly limited in this regard.
In an implementation, during shooting, the device for acquiring the landing position can control the gimbal to turn to a specified direction that is at a certain angle to the direction below the unmanned aerial vehicle, and control the shooting device on the gimbal to shoot the ground in that direction so as to obtain an image of the ground in the specified direction. In this way, the shooting device can capture the ground more comprehensively, the obtained image is more informative, and the target landing area can be determined more accurately.
In one example, the device for acquiring the landing position may control the gimbal to turn toward a house on the ground at an angle of 45 degrees to the unmanned aerial vehicle, and control the shooting device on the gimbal to shoot the wall of the house to obtain an image that includes the wall of the house.
In one embodiment, when determining the landable area, the device for acquiring the landing position may further determine, based on semantic segmentation, an image area in the image whose semantics meet a preset semantic condition, determine a mapping relationship between the image area and geographic position coordinates in the environment based on the camera parameters when the image was captured and the attitude information of the unmanned aerial vehicle, and determine the corresponding position coordinates of the landable area based on the mapping relationship.
In one example, the device for acquiring the landing position may determine, based on semantic segmentation, an image area in the image whose semantics meet the preset semantic condition, determine a mapping relationship between the image area and geographic position coordinates in the environment based on the camera parameters when the image was captured and the attitude information of the unmanned aerial vehicle, and convert the geographic position coordinates of the image area into the world coordinate system based on the mapping relationship, thereby determining the position coordinates of the corresponding landable area.
S202: determining a target landing area in the target area based on the depth information.
In the embodiment of the present invention, the device for acquiring the landing position may determine the target landing area in the target area based on the depth information.
In one embodiment, when determining the target landing area in the target area based on the depth information, the device for acquiring the landing position may determine, in the target area and based on the depth information, an area to be selected that meets a preset flatness condition, and determine the target landing area within the area to be selected based on the size information of the drone. In some embodiments, the preset flatness condition is that the semantics are the same, or that the depth information of areas with different semantics is within a preset range.
In one example, the lawn and playground semantics are different, but their depth information is within the preset range, so the preset flatness condition is met. In another example, the house and lawn semantics are different and their depth information is outside the preset range, so the preset flatness condition is not met. In this way, the area to be selected can be determined from the target area more effectively.
In one example, assuming the size information of the unmanned aerial vehicle indicates a small aircraft, if the determined area to be selected includes a lawn and a road, the road may be chosen as the target landing area according to the size information of the unmanned aerial vehicle. In this way, the problem that a small unmanned aerial vehicle landing in an overly large target landing area is difficult to find within that area can be avoided, and the flexibility of determining the target landing area is improved.
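The sketch below shows one way the flatness condition and the drone's size could be combined to pick areas to be selected; the flatness threshold and ground sampling distance are illustrative placeholders.

```python
import numpy as np
from scipy import ndimage

def select_landing_candidates(depth: np.ndarray, landable: np.ndarray,
                              drone_size_m: float, gsd_m_per_px: float,
                              flatness_m: float = 0.2):
    """Return connected landable regions that are flat enough (small depth
    spread) and large enough for the drone's footprint."""
    regions, n = ndimage.label(landable)          # connected landable areas
    need_px = int(np.ceil(drone_size_m / gsd_m_per_px))
    candidates = []
    for rid in range(1, n + 1):
        mask = regions == rid
        d = depth[mask]
        if d.max() - d.min() > flatness_m:        # preset flatness condition
            continue
        ys, xs = np.where(mask)
        if (ys.max() - ys.min() + 1) < need_px or (xs.max() - xs.min() + 1) < need_px:
            continue                              # drone footprint does not fit
        candidates.append(mask)
    return candidates

# Example: a flat 40 x 40 px landable patch at 0.1 m per pixel fits a 1 m drone.
depth = np.full((100, 100), 30.0)
landable = np.zeros((100, 100), bool)
landable[10:50, 10:50] = True
print(len(select_landing_candidates(depth, landable, 1.0, 0.1)))  # -> 1
```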
In one embodiment, the target region includes sub-regions divided into different semantics; the equipment for acquiring the landing position can determine whether the boundary position between the adjacent sub-areas meets the preset flatness condition or not based on the depth information when determining the target landing area in the target area based on the depth information; and if so, determining that the area to be selected comprises the boundary position.
In one example, assuming that the target area is divided into 3 sub-areas with different semantics of lawn, playground and house, if the lawn sub-area is adjacent to the playground sub-area and the depth information of the boundary position satisfies the preset range, it may be determined that a preset flatness condition is met, and it is determined that the to-be-selected area includes the boundary positions of the lawn sub-area and the playground sub-area. By the implementation mode, the area to be selected can be further and more accurately determined.
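Continuing the example, a hedged sketch of the boundary check between two adjacent sub-regions (the step threshold is a placeholder):

```python
import numpy as np

def boundary_is_flat(depth: np.ndarray, labels: np.ndarray,
                     a: int, b: int, max_step_m: float = 0.2) -> bool:
    """True if every adjacent pixel pair straddling the border between
    sub-regions a and b has a depth difference below max_step_m."""
    ok = True
    pairs = [(labels[:-1, :], labels[1:, :], depth[:-1, :], depth[1:, :]),   # vertical neighbours
             (labels[:, :-1], labels[:, 1:], depth[:, :-1], depth[:, 1:])]   # horizontal neighbours
    for l1, l2, d1, d2 in pairs:
        border = ((l1 == a) & (l2 == b)) | ((l1 == b) & (l2 == a))
        if border.any():
            ok &= bool(np.all(np.abs(d1[border] - d2[border]) < max_step_m))
    return ok
```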
In one embodiment, the device for acquiring the landing position may control the unmanned aerial vehicle to move in the horizontal direction while controlling it to descend, so that the unmanned aerial vehicle approaches the target landing area horizontally. In this way, the position of the unmanned aerial vehicle can be adjusted more effectively while it is landing, achieving a smooth, gradual descent.
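A generic control sketch of descending while translating toward the target (not the interface of any particular flight controller; the gains and limits are placeholders):

```python
import numpy as np

def approach_velocity(drone_xy: np.ndarray, target_xy: np.ndarray,
                      descend_rate: float = 1.0, gain: float = 0.5,
                      max_horiz: float = 3.0) -> np.ndarray:
    """Velocity setpoint (vx, vy, vz) that descends at a fixed rate while a
    simple proportional term moves the drone horizontally toward the target."""
    v_xy = gain * (target_xy - drone_xy)
    speed = np.linalg.norm(v_xy)
    if speed > max_horiz:                 # clamp horizontal speed
        v_xy *= max_horiz / speed
    return np.array([v_xy[0], v_xy[1], -descend_rate])

print(approach_velocity(np.array([0.0, 0.0]), np.array([12.0, -5.0])))
```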
In the embodiment of the invention, the device for acquiring the landing position can acquire depth information of a target area and determine a target landing area in the target area based on the depth information, wherein the target area is a landable area in the environment below the unmanned aerial vehicle determined after semantic segmentation is performed on an image, and the image is obtained by the shooting device shooting the environment. Through this embodiment, the target landing area can be determined more effectively, so that the unmanned aerial vehicle can land quickly and safely.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an apparatus for acquiring a landing position according to an embodiment of the present invention. Specifically, the equipment for acquiring the landing position comprises: memory 401, processor 402.
In one embodiment, the device for acquiring the landing position further comprises a data interface 403, and the data interface 403 is used for transmitting data information between the device for acquiring the landing position and other devices.
The memory 401 may include a volatile memory (volatile memory); the memory 401 may also include a non-volatile memory (non-volatile memory); the memory 401 may also comprise a combination of the above kinds of memories. The processor 402 may be a Central Processing Unit (CPU). The processor 402 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), or any combination thereof.
The memory 401 is used for storing programs, and the processor 402 can call the programs stored in the memory 401 for executing the following steps:
acquiring depth information of a target area, wherein the target area is a landable area in an environment below an unmanned aerial vehicle determined after semantic segmentation is performed on an image, and the image is obtained by a shooting device shooting the environment;
determining a target landing area in the target area based on the depth information.
Further, the image is captured when the drone is in a range of heights greater than a first height value;
the depth information is obtained when the drone is in a range of heights that is not greater than the first height value.
Further, when the processor 402 acquires the depth information of the target area, it is specifically configured to:
and acquiring the depth information through a monocular shooting device and/or a binocular shooting device.
Further, when the processor 402 acquires the depth information of the target area, it is specifically configured to:
when the unmanned aerial vehicle is in a height range larger than a second height value, performing multi-frame shooting operation on the target area through the monocular shooting device, and determining the depth information according to an image obtained by the multi-frame shooting operation;
and when the unmanned aerial vehicle is in a height range which is not larger than a second height value, determining the depth information through the image parallax of the target area shot by the binocular shooting device.
Further, the second height value is lower than the first height value.
Further, when determining the target landing area in the target area based on the depth information, the processor 402 is specifically configured to:
determining a region to be selected which meets a preset flatness condition in the target region based on the depth information;
and determining the target landing area in the area to be selected based on the size information of the unmanned aerial vehicle.
Further, the target area comprises sub-areas divided into different semantics; the processor 402, when determining, based on the depth information, a candidate area meeting a preset flatness condition in the target area, is specifically configured to:
determining whether the boundary position between the adjacent sub-regions meets the preset flatness condition or not based on the depth information;
and if so, determining that the area to be selected comprises the boundary position.
Further, the shooting device is connected to the body of the unmanned aerial vehicle through a gimbal, and the processor 402 is further configured to:
controlling the gimbal to rotate to a specified direction;
and controlling the shooting device on the gimbal to shoot the ground in the specified direction to obtain the image.
Further, the processor 402 is further configured to:
determining an image area with semantics meeting preset semantic conditions in the image based on semantic segmentation;
determining a mapping relation between the image area and the geographic position coordinate in the environment based on camera parameters and attitude information of the unmanned aerial vehicle when the image is shot;
and determining the position coordinates of the corresponding landable areas based on the mapping relation.
Further, the processor 402 is further configured to:
and controlling the unmanned aerial vehicle to move in the horizontal direction while controlling the unmanned aerial vehicle to descend so that the unmanned aerial vehicle approaches the target landing area in the horizontal direction.
In the embodiment of the invention, the device for acquiring the landing position can acquire depth information of a target area and determine a target landing area in the target area based on the depth information, wherein the target area is a landable area in the environment below the unmanned aerial vehicle determined after semantic segmentation is performed on an image, and the image is obtained by the shooting device shooting the environment. Through this embodiment, the target landing area can be determined more effectively, so that the unmanned aerial vehicle can land quickly and safely.
An embodiment of the present invention further provides an unmanned aerial vehicle, including: a body; a power system arranged on the body and used to provide power for the unmanned aerial vehicle to move; a shooting device used to shoot the environment below the unmanned aerial vehicle to obtain an image; and a processor used to acquire depth information of a target area, wherein the target area is a landable area in the environment below the unmanned aerial vehicle determined after semantic segmentation is performed on the image, and the image is obtained by the shooting device shooting the environment, and to determine a target landing area in the target area based on the depth information. The unmanned aerial vehicle may further include the device for acquiring a landing position described above.
Further, the image is captured when the drone is in a range of heights greater than a first height value;
the depth information is obtained when the drone is in a range of heights that is not greater than the first height value.
Further, when the processor 402 acquires the depth information of the target area, it is specifically configured to:
and acquiring the depth information through a monocular shooting device and/or a binocular shooting device.
Further, when the processor 402 acquires the depth information of the target area, it is specifically configured to:
when the unmanned aerial vehicle is in a height range larger than a second height value, performing multi-frame shooting operation on the target area through the monocular shooting device, and determining the depth information according to an image obtained by the multi-frame shooting operation;
and when the unmanned aerial vehicle is in a height range which is not larger than a second height value, determining the depth information through the image parallax of the target area shot by the binocular shooting device.
Further, the second height value is lower than the first height value.
Further, when determining the target landing area in the target area based on the depth information, the processor 402 is specifically configured to:
determining a region to be selected which meets a preset flatness condition in the target region based on the depth information;
and determining the target landing area in the area to be selected based on the size information of the unmanned aerial vehicle.
Further, the target area comprises sub-areas divided into different semantics; the processor 402, when determining, based on the depth information, a candidate area meeting a preset flatness condition in the target area, is specifically configured to:
determining whether the boundary position between the adjacent sub-regions meets the preset flatness condition or not based on the depth information;
and if so, determining that the area to be selected comprises the boundary position.
Further, the shooting device is connected to the body of the unmanned aerial vehicle through a gimbal, and the processor 402 is further configured to:
controlling the gimbal to rotate to a specified direction;
and controlling the shooting device on the gimbal to shoot the ground in the specified direction to obtain the image.
Further, the processor 402 is further configured to:
determining an image area with semantics meeting preset semantic conditions in the image based on semantic segmentation;
determining a mapping relation between the image area and the geographic position coordinate in the environment based on camera parameters and attitude information of the unmanned aerial vehicle when the image is shot;
and determining the position coordinates of the corresponding landable areas based on the mapping relation.
Further, the processor 402 is further configured to:
and controlling the unmanned aerial vehicle to move in the horizontal direction while controlling the unmanned aerial vehicle to descend so that the unmanned aerial vehicle approaches the target landing area in the horizontal direction.
In the embodiment of the invention, the unmanned aerial vehicle can acquire depth information of a target area and determine a target landing area in the target area based on the depth information, wherein the target area is a landable area in the environment below the unmanned aerial vehicle determined after semantic segmentation is performed on an image, and the image is obtained by the shooting device shooting the environment. Through this embodiment, the target landing area can be determined more effectively, so that the unmanned aerial vehicle can land quickly and safely.
The embodiment of the invention also provides a system for acquiring the landing position, which comprises:
a device for acquiring a landing position, configured to acquire depth information of a target area, wherein the target area is a landable area in the environment below the unmanned aerial vehicle determined after semantic segmentation is performed on an image, and the image is obtained by the shooting device shooting the environment, and to determine a target landing area in the target area based on the depth information;
and the unmanned aerial vehicle is used for flying to the target landing area.
Further, the image is captured when the drone is in a range of heights greater than a first height value;
the depth information is obtained when the drone is in a range of heights that is not greater than the first height value.
Further, when the device for acquiring the landing position acquires the depth information of the target area, the device is specifically configured to:
and acquiring the depth information through a monocular shooting device and/or a binocular shooting device.
Further, when the device for acquiring the landing position acquires the depth information of the target area, the device is specifically configured to:
when the unmanned aerial vehicle is in a height range larger than a second height value, performing multi-frame shooting operation on the target area through the monocular shooting device, and determining the depth information according to an image obtained by the multi-frame shooting operation;
and when the unmanned aerial vehicle is in a height range which is not larger than a second height value, determining the depth information through the image parallax of the target area shot by the binocular shooting device.
Further, the second height value is lower than the first height value.
Further, the device for acquiring a landing position is specifically configured to, when determining a target landing area in the target area based on the depth information:
determining a region to be selected which meets a preset flatness condition in the target region based on the depth information;
and determining the target landing area in the area to be selected based on the size information of the unmanned aerial vehicle.
Further, the target area comprises sub-areas divided into different semantics; the equipment for acquiring the landing position is specifically configured to, when determining a candidate area meeting a preset flatness condition in the target area based on the depth information:
determining whether the boundary position between the adjacent sub-regions meets the preset flatness condition or not based on the depth information;
and if so, determining that the area to be selected comprises the boundary position.
Further, the shooting device is connected to the body of the unmanned aerial vehicle through a gimbal, and the device for acquiring the landing position is further used for:
controlling the gimbal to rotate to a specified direction;
and controlling the shooting device on the gimbal to shoot the ground in the specified direction to obtain the image.
Further, the device for acquiring the landing position is also used for:
determining an image area with semantics meeting preset semantic conditions in the image based on semantic segmentation;
determining a mapping relation between the image area and the geographic position coordinate in the environment based on camera parameters and attitude information of the unmanned aerial vehicle when the image is shot;
and determining the position coordinates of the corresponding landable areas based on the mapping relation.
Further, the device for acquiring the landing position is also used for:
and controlling the unmanned aerial vehicle to move in the horizontal direction while controlling the unmanned aerial vehicle to descend so that the unmanned aerial vehicle approaches the target landing area in the horizontal direction.
In the embodiment of the invention, the device for acquiring the landing position can acquire depth information of a target area and determine a target landing area in the target area based on the depth information, wherein the target area is a landable area in the environment below the unmanned aerial vehicle determined after semantic segmentation is performed on an image, and the image is obtained by the shooting device shooting the environment. Through this embodiment, the target landing area can be determined more effectively, so that the unmanned aerial vehicle can land quickly and safely.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the method described in the embodiment corresponding to fig. 2 of the present invention is implemented, and the apparatus according to the embodiment corresponding to the present invention described in fig. 4 may also be implemented, which is not described herein again.
The computer readable storage medium may be an internal storage unit of the device according to any of the foregoing embodiments, for example, a hard disk or a memory of the device. The computer readable storage medium may also be an external storage device of the device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), etc. provided on the device. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the apparatus. The computer-readable storage medium is used for storing the computer program and other programs and data required by the terminal. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
The above disclosure is intended to be illustrative of only some embodiments of the invention, and is not intended to limit the scope of the invention.

Claims (41)

1. A method for acquiring a landing position, applied to an unmanned aerial vehicle on which a shooting device is mounted, the method comprising:
acquiring depth information of a target area, wherein the target area is a landable area in an environment below the unmanned aerial vehicle determined after semantic segmentation is performed on an image, and the image is obtained by the shooting device shooting the environment;
determining a target landing area in the target area based on the depth information.
2. The method of claim 1,
the image is captured when the drone is in a height range greater than a first height value;
the depth information is obtained when the drone is in a range of heights that is not greater than the first height value.
3. The method of claim 2, wherein the obtaining depth information of the target region comprises:
and acquiring the depth information through a monocular shooting device and/or a binocular shooting device.
4. The method of claim 3, wherein the obtaining depth information of the target region comprises:
when the unmanned aerial vehicle is in a height range larger than a second height value, performing multi-frame shooting operation on the target area through the monocular shooting device, and determining the depth information according to an image obtained by the multi-frame shooting operation;
and when the unmanned aerial vehicle is in a height range which is not larger than a second height value, determining the depth information through the image parallax of the target area shot by the binocular shooting device.
5. The method of claim 4, wherein the second height value is lower than the first height value.
6. The method of claim 1, wherein determining a target landing area in the target area based on the depth information comprises:
determining a region to be selected which meets a preset flatness condition in the target region based on the depth information;
and determining the target landing area in the area to be selected based on the size information of the unmanned aerial vehicle.
7. The method of claim 6, wherein the target region comprises sub-regions divided into different semantics; the determining, based on the depth information, a candidate region meeting a preset flatness condition in the target region includes:
determining whether the boundary position between the adjacent sub-regions meets the preset flatness condition or not based on the depth information;
and if so, determining that the area to be selected comprises the boundary position.
8. The method of claim 1, wherein the shooting device is connected to the body of the unmanned aerial vehicle through a gimbal, the method further comprising:
controlling the gimbal to rotate to a specified direction;
and controlling the shooting device on the gimbal to shoot the ground in the specified direction to obtain the image.
9. The method of claim 1, further comprising:
determining an image area with semantics meeting preset semantic conditions in the image based on semantic segmentation;
determining a mapping relation between the image area and the geographic position coordinate in the environment based on camera parameters and attitude information of the unmanned aerial vehicle when the image is shot;
and determining the position coordinates of the corresponding landable areas based on the mapping relation.
10. The method of claim 1, further comprising:
and controlling the unmanned aerial vehicle to move in the horizontal direction while controlling the unmanned aerial vehicle to descend so that the unmanned aerial vehicle approaches the target landing area in the horizontal direction.
11. An apparatus for acquiring a landing position, comprising a memory and a processor;
the memory is used for storing programs;
the processor, configured to invoke the program, when the program is executed, is configured to perform the following operations:
acquiring depth information of a target area, wherein the target area is a degradable area in an environment below an unmanned aerial vehicle, which is determined after semantic segmentation is performed on an image, and the image is obtained by shooting the environment by a shooting device;
determining a target landing area in the target area based on the depth information.
12. The apparatus of claim 11,
the image is captured when the drone is in a height range greater than a first height value;
the depth information is obtained when the drone is in a range of heights that is not greater than the first height value.
13. The device according to claim 12, wherein the processor, when obtaining the depth information of the target area, is specifically configured to:
and acquiring the depth information through a monocular shooting device and/or a binocular shooting device.
14. The device according to claim 13, wherein the processor, when obtaining the depth information of the target area, is specifically configured to:
when the unmanned aerial vehicle is in a height range larger than a second height value, performing multi-frame shooting operation on the target area through the monocular shooting device, and determining the depth information according to an image obtained by the multi-frame shooting operation;
and when the unmanned aerial vehicle is in a height range which is not larger than a second height value, determining the depth information through the image parallax of the target area shot by the binocular shooting device.
15. The apparatus of claim 14, wherein the second height value is lower than the first height value.
16. The apparatus according to claim 11, wherein the processor, when determining the target landing zone in the target zone based on the depth information, is specifically configured to:
determining a region to be selected which meets a preset flatness condition in the target region based on the depth information;
and determining the target landing area in the area to be selected based on the size information of the unmanned aerial vehicle.
17. The device of claim 16, wherein the target region comprises sub-regions divided into different semantics; the processor is specifically configured to, when determining, based on the depth information, a candidate region that meets a preset flatness condition in the target region:
determining whether the boundary position between the adjacent sub-regions meets the preset flatness condition or not based on the depth information;
and if so, determining that the area to be selected comprises the boundary position.
18. The apparatus of claim 11, wherein the shooting device is connected to the body of the unmanned aerial vehicle through a gimbal, and the processor is further configured to:
controlling the gimbal to rotate to a specified direction;
and controlling the shooting device on the gimbal to shoot the ground in the specified direction to obtain the image.
19. The apparatus of claim 11, wherein the processor is further configured to:
determining an image area with semantics meeting preset semantic conditions in the image based on semantic segmentation;
determining a mapping relation between the image area and geographic position coordinates in the environment based on camera parameters and attitude information of the unmanned aerial vehicle when the image is shot;
and determining the position coordinates of the corresponding landable areas based on the mapping relation.
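An illustrative sketch of the mapping in claim 19: a pixel is back-projected through camera intrinsics and the vehicle's attitude/position onto the ground plane to obtain its position coordinates. The intrinsic matrix, rotation, and height below are assumed example values, not values from the patent.

```python
# Illustrative pixel-to-ground mapping by intersecting the viewing ray with z = 0.
import numpy as np

def pixel_to_ground(u, v, K, R_wc, cam_pos_w):
    """Back-project pixel (u, v) and intersect its ray with the ground plane z = 0."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in the camera frame
    ray_w = R_wc @ ray_cam                                # ray in the world frame
    t = -cam_pos_w[2] / ray_w[2]                          # scale to reach z = 0
    return cam_pos_w + t * ray_w                          # ground point (x, y, 0)

K = np.array([[400.0, 0.0, 320.0],                        # assumed intrinsics
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
R_wc = np.array([[1.0, 0.0, 0.0],                         # camera looking straight down
                 [0.0, -1.0, 0.0],
                 [0.0, 0.0, -1.0]])
cam_pos_w = np.array([10.0, 20.0, 15.0])                  # 15 m above the ground
print(pixel_to_ground(400, 240, K, R_wc, cam_pos_w))      # ~[13., 20., 0.]
```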
20. The apparatus of claim 11, wherein the processor is further configured to:
and controlling the unmanned aerial vehicle to move in the horizontal direction while controlling the unmanned aerial vehicle to descend so that the unmanned aerial vehicle approaches the target landing area in the horizontal direction.
21. An unmanned aerial vehicle, comprising:
a body;
a power system, arranged on the body, for providing power for the movement of the unmanned aerial vehicle;
a shooting device for shooting the environment below the unmanned aerial vehicle to obtain an image;
and a processor for acquiring depth information of a target area, wherein the target area is a landable area in the environment below the unmanned aerial vehicle determined after semantic segmentation is performed on an image, and the image is obtained by shooting the environment with the shooting device; and determining a target landing area in the target area based on the depth information.
22. The drone of claim 21,
the image is captured when the drone is in a height range greater than a first height value;
the depth information is acquired when the drone is in a height range not greater than the first height value.
23. The drone of claim 22, wherein the processor, when obtaining depth information for the target area, is specifically configured to:
and acquiring the depth information through a monocular shooting device and/or a binocular shooting device.
24. The drone of claim 23, wherein the processor, when obtaining the depth information of the target area, is specifically configured to:
when the unmanned aerial vehicle is in a height range greater than a second height value, performing a multi-frame shooting operation on the target area through the monocular shooting device, and determining the depth information according to the images obtained by the multi-frame shooting operation;
and when the unmanned aerial vehicle is in a height range not greater than the second height value, determining the depth information through the parallax between images of the target area shot by the binocular shooting device.
25. A drone according to claim 24, wherein the second height value is lower than the first height value.
26. A drone according to claim 21, wherein the processor, when determining the target landing area in the target area based on the depth information, is specifically configured to:
determining, based on the depth information, a candidate area in the target area that meets a preset flatness condition;
and determining the target landing area in the candidate area based on the size information of the unmanned aerial vehicle.
27. A drone according to claim 26, wherein the target area includes sub-areas with different semantics; the processor is specifically configured to, when determining, based on the depth information, a candidate area in the target area that meets a preset flatness condition:
determining, based on the depth information, whether the boundary position between adjacent sub-areas meets the preset flatness condition;
and if so, determining that the candidate area comprises the boundary position.
28. The drone of claim 21, wherein the shooting device is connected to the body of the drone through a gimbal, and the processor is further configured to:
controlling the gimbal to rotate to a specified direction;
and controlling the shooting device on the gimbal to shoot the ground in the specified direction to obtain the image.
29. The drone of claim 21, wherein the processor is further configured to:
determining an image area with semantics meeting preset semantic conditions in the image based on semantic segmentation;
determining a mapping relation between the image area and geographic position coordinates in the environment based on camera parameters and attitude information of the unmanned aerial vehicle when the image is shot;
and determining the position coordinates of the corresponding landable areas based on the mapping relation.
30. The drone of claim 21, wherein the processor is further configured to:
and controlling the unmanned aerial vehicle to move in the horizontal direction while controlling the unmanned aerial vehicle to descend so that the unmanned aerial vehicle approaches the target landing area in the horizontal direction.
31. A system for acquiring a landing position, comprising:
the device for acquiring a landing position, used for acquiring depth information of a target area, wherein the target area is a landable area in the environment below an unmanned aerial vehicle determined after semantic segmentation is performed on an image, and the image is obtained by shooting the environment with a shooting device; and for determining a target landing area in the target area based on the depth information;
and the unmanned aerial vehicle, used for flying to the target landing area.
32. The system of claim 31,
the image is captured when the unmanned aerial vehicle is in a height range greater than a first height value;
the depth information is acquired when the unmanned aerial vehicle is in a height range not greater than the first height value.
33. The system according to claim 32, wherein the device for acquiring a landing position is specifically configured to, when acquiring the depth information of the target area:
and acquiring the depth information through a monocular shooting device and/or a binocular shooting device.
34. The system according to claim 33, wherein the device for acquiring a landing position is specifically configured to, when acquiring the depth information of the target area:
when the unmanned aerial vehicle is in a height range greater than a second height value, performing a multi-frame shooting operation on the target area through the monocular shooting device, and determining the depth information according to the images obtained by the multi-frame shooting operation;
and when the unmanned aerial vehicle is in a height range not greater than the second height value, determining the depth information through the parallax between images of the target area shot by the binocular shooting device.
35. The system of claim 34, wherein the second height value is lower than the first height value.
36. The system according to claim 31, wherein the device for acquiring a landing position is specifically configured to, when determining the target landing area in the target area based on the depth information:
determining, based on the depth information, a candidate area in the target area that meets a preset flatness condition;
and determining the target landing area in the candidate area based on the size information of the unmanned aerial vehicle.
37. The system of claim 36, wherein the target area comprises sub-areas with different semantics; the device for acquiring a landing position is specifically configured to, when determining, based on the depth information, a candidate area in the target area that meets a preset flatness condition:
determining, based on the depth information, whether the boundary position between adjacent sub-areas meets the preset flatness condition;
and if so, determining that the candidate area comprises the boundary position.
38. The system of claim 31, wherein the shooting device is connected to the body of the unmanned aerial vehicle through a gimbal, and the device for acquiring a landing position is further configured to:
controlling the gimbal to rotate to a specified direction;
and controlling the shooting device on the gimbal to shoot the ground in the specified direction to obtain the image.
39. The system of claim 31, wherein the device for acquiring a landing position is further configured to:
determining an image area with semantics meeting preset semantic conditions in the image based on semantic segmentation;
determining a mapping relation between the image area and geographic position coordinates in the environment based on camera parameters and attitude information of the unmanned aerial vehicle when the image is shot;
and determining the position coordinates of the corresponding landable areas based on the mapping relation.
40. The system of claim 31, wherein the device for acquiring a landing position is further configured to:
and controlling the unmanned aerial vehicle to move in the horizontal direction while controlling the unmanned aerial vehicle to descend so that the unmanned aerial vehicle approaches the target landing area in the horizontal direction.
41. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, carries out the method according to any one of claims 1 to 10.
CN201980030518.0A 2019-09-23 2019-09-23 Method, device, unmanned aerial vehicle, system and storage medium for acquiring landing position Pending CN112119428A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/107261 WO2021056139A1 (en) 2019-09-23 2019-09-23 Method and device for acquiring landing position, unmanned aerial vehicle, system, and storage medium

Publications (1)

Publication Number Publication Date
CN112119428A true CN112119428A (en) 2020-12-22

Family

ID=73799722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980030518.0A Pending CN112119428A (en) 2019-09-23 2019-09-23 Method, device, unmanned aerial vehicle, system and storage medium for acquiring landing position

Country Status (2)

Country Link
CN (1) CN112119428A (en)
WO (1) WO2021056139A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113050664A (en) * 2021-03-24 2021-06-29 北京三快在线科技有限公司 Unmanned aerial vehicle landing method and device
CN113359810A (en) * 2021-07-29 2021-09-07 东北大学 Unmanned aerial vehicle landing area identification method based on multiple sensors
CN114526709A (en) * 2022-02-21 2022-05-24 中国科学技术大学先进技术研究院 Area measurement method and device based on unmanned aerial vehicle and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Power robot-based binocular vision navigation system and method
CN107444665A (en) * 2017-07-24 2017-12-08 长春草莓科技有限公司 A kind of unmanned plane Autonomous landing method
CN109002837A (en) * 2018-06-21 2018-12-14 网易(杭州)网络有限公司 A kind of image application processing method, medium, device and calculate equipment
WO2018227350A1 (en) * 2017-06-12 2018-12-20 深圳市大疆创新科技有限公司 Control method for homeward voyage of unmanned aerial vehicle, unmanned aerial vehicle and machine-readable storage medium
CN109658418A (en) * 2018-10-31 2019-04-19 百度在线网络技术(北京)有限公司 Learning method, device and the electronic equipment of scene structure

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104808685A (en) * 2015-04-27 2015-07-29 中国科学院长春光学精密机械与物理研究所 Vision auxiliary device and method for automatic landing of unmanned aerial vehicle
CN104932522B (en) * 2015-05-27 2018-04-17 深圳市大疆创新科技有限公司 A kind of Autonomous landing method and system of aircraft
US10019907B2 (en) * 2015-09-11 2018-07-10 Qualcomm Incorporated Unmanned aerial vehicle obstacle detection and avoidance
CN109292099B (en) * 2018-08-10 2020-09-25 顺丰科技有限公司 Unmanned aerial vehicle landing judgment method, device, equipment and storage medium
CN109657715B (en) * 2018-12-12 2024-02-06 广东省机场集团物流有限公司 Semantic segmentation method, device, equipment and medium
CN109343572B (en) * 2018-12-20 2021-07-30 深圳市道通智能航空技术股份有限公司 Unmanned aerial vehicle autonomous landing method and device and unmanned aerial vehicle

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Power robot-based binocular vision navigation system and method
WO2018227350A1 (en) * 2017-06-12 2018-12-20 深圳市大疆创新科技有限公司 Control method for homeward voyage of unmanned aerial vehicle, unmanned aerial vehicle and machine-readable storage medium
CN107444665A (en) * 2017-07-24 2017-12-08 长春草莓科技有限公司 A kind of unmanned plane Autonomous landing method
CN109002837A (en) * 2018-06-21 2018-12-14 网易(杭州)网络有限公司 A kind of image application processing method, medium, device and calculate equipment
CN109658418A (en) * 2018-10-31 2019-04-19 百度在线网络技术(北京)有限公司 Learning method, device and the electronic equipment of scene structure

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113050664A (en) * 2021-03-24 2021-06-29 北京三快在线科技有限公司 Unmanned aerial vehicle landing method and device
WO2022199344A1 (en) * 2021-03-24 2022-09-29 北京三快在线科技有限公司 Unmanned aerial vehicle landing
CN113359810A (en) * 2021-07-29 2021-09-07 东北大学 Unmanned aerial vehicle landing area identification method based on multiple sensors
CN114526709A (en) * 2022-02-21 2022-05-24 中国科学技术大学先进技术研究院 Area measurement method and device based on unmanned aerial vehicle and storage medium

Also Published As

Publication number Publication date
WO2021056139A1 (en) 2021-04-01

Similar Documents

Publication Publication Date Title
US11693428B2 (en) Methods and system for autonomous landing
US11704812B2 (en) Methods and system for multi-target tracking
US20210358315A1 (en) Unmanned aerial vehicle visual point cloud navigation
US11794890B2 (en) Unmanned aerial vehicle inspection system
US11550315B2 (en) Unmanned aerial vehicle inspection system
US11604479B2 (en) Methods and system for vision-based landing
US20200344464A1 (en) Systems and Methods for Improving Performance of a Robotic Vehicle by Managing On-board Camera Defects
US9513635B1 (en) Unmanned aerial vehicle inspection system
KR102018892B1 (en) Method and apparatus for controlling take-off and landing of unmanned aerial vehicle
CN108132678B (en) Flight control method of aircraft and related device
US11725940B2 (en) Unmanned aerial vehicle control point selection system
CN112119428A (en) Method, device, unmanned aerial vehicle, system and storage medium for acquiring landing position
CN111123964B (en) Unmanned aerial vehicle landing method and device and computer readable medium
CN110570463A (en) target state estimation method and device and unmanned aerial vehicle
CN107576329B (en) Fixed wing unmanned aerial vehicle landing guiding cooperative beacon design method based on machine vision
KR20190097350A (en) Precise Landing Method of Drone, Recording Medium for Performing the Method, and Drone Employing the Method
EP3989034B1 (en) Automatic safe-landing-site selection for unmanned aerial systems
Lee et al. Safe landing of drone using AI-based obstacle avoidance
WO2024081060A1 (en) Obstacle avoidance for aircraft from shadow analysis
Sargeant Unmanned aerial vehicle payload development for aerial survey

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination