CN112987764B - Landing method, landing device, unmanned aerial vehicle and computer-readable storage medium - Google Patents

Landing method, landing device, unmanned aerial vehicle and computer-readable storage medium

Info

Publication number
CN112987764B
Authority
CN
China
Prior art keywords
preset visual
unmanned aerial
aerial vehicle
landing
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110134460.3A
Other languages
Chinese (zh)
Other versions
CN112987764A (en)
Inventor
唐辉平
陆海博
张卫东
张巍
李胜全
张爱东
李拥祺
张玉梅
何哲
杨玉亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peng Cheng Laboratory
Original Assignee
Peng Cheng Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peng Cheng Laboratory filed Critical Peng Cheng Laboratory
Priority to CN202110134460.3A priority Critical patent/CN112987764B/en
Publication of CN112987764A publication Critical patent/CN112987764A/en
Application granted granted Critical
Publication of CN112987764B publication Critical patent/CN112987764B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/04 Control of altitude or depth
    • G05D1/06 Rate of change of altitude or depth
    • G05D1/0607 Rate of change of altitude or depth specially adapted for aircraft
    • G05D1/0653 Rate of change of altitude or depth specially adapted for aircraft during a phase of take-off or landing
    • G05D1/0676 Rate of change of altitude or depth specially adapted for aircraft during a phase of take-off or landing specially adapted for landing

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a landing method applied to an unmanned aerial vehicle, comprising the following steps: in the landing process of the unmanned aerial vehicle, shooting a target image of a target area, wherein the target area is provided with a preset visual identifier; determining an effective area corresponding to the preset visual identifier in the target image; identifying the effective area corresponding to the preset visual identifier to obtain pose information of the unmanned aerial vehicle; and controlling the landing attitude of the unmanned aerial vehicle based on the pose information. The invention also discloses a landing device, an unmanned aerial vehicle and a computer-readable storage medium. The landing method achieves the technical effect of improving the real-time performance of the pose information of the unmanned aerial vehicle.

Description

Landing method, landing device, unmanned aerial vehicle and computer-readable storage medium
Technical Field
The invention relates to the technical field of unmanned aerial vehicle control, in particular to a landing method, a landing device, an unmanned aerial vehicle and a computer readable storage medium.
Background
In recent years, unmanned aerial vehicles have developed rapidly in the military and civil fields owing to their good maneuverability, low cost and easily controlled flight attitude, and autonomous landing of unmanned aerial vehicles on mobile platforms such as manned/unmanned vehicles and manned/unmanned surface vessels has become a research hotspot.
In the related art, a landing method is disclosed in which, during the landing of an unmanned aerial vehicle, the vehicle acquires an image of a target area (an area containing the landing platform) from the air and identifies the image containing a visual identifier by visual recognition technology, thereby obtaining the relative pose of the unmanned aerial vehicle with respect to the landing platform, namely the pose information of the unmanned aerial vehicle; the unmanned aerial vehicle then uses the pose information to control its landing attitude.
However, when the existing landing method identifies the visual identifier, the identification efficiency is low, so the real-time performance of the pose information obtained during the landing of the unmanned aerial vehicle is poor.
Disclosure of Invention
The invention mainly aims to provide a landing method, a landing device, an unmanned aerial vehicle and a computer-readable storage medium, and aims to solve the technical problem in the prior art that the identification efficiency is low when the visual identifier is identified, so that the real-time performance of acquiring pose information during the landing of the unmanned aerial vehicle is poor.
In order to achieve the above purpose, the invention provides a landing method applied to an unmanned aerial vehicle, which comprises the following steps:
in the landing process of the unmanned aerial vehicle, shooting a target image of a target area, wherein the target area is provided with a preset visual identifier;
determining an effective area corresponding to the preset visual identifier in the target image;
identifying the effective area corresponding to the preset visual identifier to obtain pose information of the unmanned aerial vehicle;
and controlling the landing attitude of the unmanned aerial vehicle based on the pose information.
Optionally, the preset visual identifier comprises a plurality of preset visual identifiers; before the step of determining the effective area corresponding to the preset visual identifier in the target image, the method further includes:
acquiring selected historical pose information corresponding to a selected historical target image of the target area;
determining a selected preset visual identifier from the plurality of preset visual identifiers based on the selected historical pose information;
the step of determining the effective area corresponding to the preset visual identifier in the target image comprises the following steps:
determining an effective area corresponding to the selected preset visual identifier in the target image;
the step of identifying the effective area corresponding to the preset visual identifier to obtain pose information of the unmanned aerial vehicle comprises the following steps:
and identifying the effective area corresponding to the selected preset visual identifier to obtain the pose information.
Optionally, the plurality of preset visual identifiers are preset visual identifiers of various sizes, and preset visual identifiers of different sizes correspond to different height information; the step of determining the selected preset visual identifier from the plurality of preset visual identifiers based on the selected historical pose information comprises the following steps:
acquiring the selected historical height information of the unmanned aerial vehicle in the vertical direction from the selected historical pose information;
and determining the selected preset visual identifier corresponding to the selected historical height information from the preset visual identifiers of the various sizes.
Optionally, the step of determining the effective area corresponding to the selected preset visual identifier in the target image includes:
acquiring historical two-dimensional information and historical sizes of the selected preset visual identifier in a plurality of result history target images;
and determining the effective area corresponding to the selected preset visual identifier in the target image based on the historical two-dimensional information and the historical size.
Optionally, the step of determining, in the target image, the effective area corresponding to the selected preset visual identifier based on the historical two-dimensional information and the historical size includes:
obtaining a predicted displacement based on the historical two-dimensional information;
and determining the effective area corresponding to the selected preset visual identifier in the target image based on the predicted displacement, a preset magnification ratio and the historical size.
Optionally, before the step of controlling the landing attitude of the unmanned aerial vehicle based on the pose information, the method further includes:
acquiring historical pose information corresponding respectively to the plurality of result history target images;
obtaining the relative displacement of the unmanned aerial vehicle based on the historical pose information;
obtaining a relative velocity based on the relative displacement and a time difference corresponding to the relative displacement;
the step of controlling the landing attitude of the unmanned aerial vehicle based on the pose information comprises the following steps:
and controlling the landing attitude of the unmanned aerial vehicle based on the relative velocity and the pose information.
Optionally, the preset visual identifier is a two-dimensional code identifier.
In addition, in order to achieve the above object, the present invention also provides a landing device applied to an unmanned aerial vehicle, the device comprising:
the shooting module is used for shooting a target image of a target area in the landing process of the unmanned aerial vehicle, wherein the target area is provided with a preset visual identifier;
the determining module is used for determining an effective area corresponding to the preset visual identifier in the target image;
the identification module is used for identifying the effective area corresponding to the preset visual identifier to obtain pose information of the unmanned aerial vehicle;
and the control module is used for controlling the landing attitude of the unmanned aerial vehicle based on the pose information.
In addition, in order to achieve the above object, the present invention also provides an unmanned aerial vehicle, the unmanned aerial vehicle comprising: a memory, a processor, and a landing program stored on the memory and executable on the processor, the landing program, when executed by the processor, implementing the steps of the landing method described above.
In addition, in order to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a landing program which, when executed by a processor, implements the steps of the landing method described above.
The technical scheme of the invention provides a landing method for an unmanned aerial vehicle, which comprises the following steps: in the landing process of the unmanned aerial vehicle, shooting a target image of a target area, wherein the target area is provided with a preset visual identifier; determining an effective area corresponding to the preset visual identifier in the target image; identifying the effective area corresponding to the preset visual identifier to obtain pose information of the unmanned aerial vehicle; and controlling the landing attitude of the unmanned aerial vehicle based on the pose information.
In the existing landing method, the unmanned aerial vehicle performs global identification on the target image; the amount of data processed during identification is large, so the identification efficiency is low, and the real-time performance of the pose information obtained during the landing of the unmanned aerial vehicle is poor. In the landing method of the invention, the unmanned aerial vehicle identifies only a partial area of the target image, namely the effective area; the amount of data processed during identification is small, so the identification efficiency is high, the pose information is obtained faster during the landing, and its real-time performance is better. Therefore, the landing method achieves the technical effect of improving the real-time performance of the pose information of the unmanned aerial vehicle.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic structural diagram of an unmanned aerial vehicle in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a first embodiment of the landing method of the present invention;
FIG. 3 is a schematic view of a plurality of preset visual identifiers according to the present invention;
FIG. 4 is a schematic view of a plurality of result history target images according to the present invention;
FIG. 5 is a schematic view of a target image according to the present invention;
fig. 6 is a block diagram of the first embodiment of the landing device of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings; obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an unmanned aerial vehicle in a hardware operating environment according to an embodiment of the present invention.
In general, a drone includes: at least one processor 301, a memory 302 and a landing program stored on the memory and executable on the processor, the landing program being configured to implement the steps of the landing method as described above.
Processor 301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 301 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 301 may also include an AI (Artificial Intelligence) processor for processing operations related to the landing method, so that a landing-method model can be trained and learn autonomously, improving efficiency and accuracy.
Memory 302 may include one or more computer-readable storage media, which may be non-transitory. Memory 302 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 302 is used to store at least one instruction for execution by processor 301 to implement the landing method provided by the method embodiments herein.
In some embodiments, the terminal may further optionally include: a communication interface 303, and at least one peripheral device. The processor 301, the memory 302 and the communication interface 303 may be connected by a bus or signal lines. The respective peripheral devices may be connected to the communication interface 303 through a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 304, a display screen 305, and a power supply 306.
The communication interface 303 may be used to connect at least one peripheral device associated with an I/O (Input/Output) to the processor 301 and the memory 302. In some embodiments, processor 301, memory 302, and communication interface 303 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 301, the memory 302, and the communication interface 303 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 304 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 304 communicates with a communication network and other communication devices via electromagnetic signals; it converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 304 includes an antenna system, an RF transceiver, one or more amplifiers, tuners, oscillators, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 304 may communicate with other terminals via at least one wireless communication protocol, including but not limited to metropolitan area networks, the various generations of mobile communication networks (2G, 3G, 4G and 5G), wireless local area networks and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 304 may also include NFC (Near Field Communication) related circuitry, which is not limited in this application.
The display screen 305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 305 is a touch screen, it also has the ability to collect touch signals at or above its surface. The touch signal may be input to the processor 301 as a control signal for processing. In this case, the display 305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. The display 305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode) or other materials.
The power supply 306 is used to power the various components in the electronic device. The power source 306 may be alternating current, direct current, disposable or rechargeable. When the power source 306 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology. Those skilled in the art will appreciate that the configuration shown in fig. 1 is not limiting and may include more or fewer components than shown, or certain components may be combined, or a different arrangement of components.
In addition, the embodiment of the invention also provides a computer-readable storage medium storing a landing program; the landing program, when executed by a processor, implements the steps of the landing method described above, so a detailed description is not repeated here. Likewise, the description of the corresponding beneficial effects is omitted. For technical details not disclosed in the embodiments of the computer-readable storage medium of the present application, please refer to the description of the method embodiments of the present application. As an example, the program instructions may be deployed to be executed on one drone, on multiple drones located at one site, or on multiple drones distributed across multiple sites and interconnected by a communications network.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by means of computer programs, which may be stored on a computer-readable storage medium and which, when executed, may comprise the steps of the embodiments of the methods described above. The computer-readable storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Based on the above hardware structure, an embodiment of the landing method of the present invention is presented.
Referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of a landing method according to the present invention, the method being used for a drone, the method comprising the steps of:
step S11: and in the landing process of the unmanned aerial vehicle, shooting a target image of a target area, wherein the target area is provided with a preset visual mark.
The execution subject of the invention is an unmanned aerial vehicle on which a landing program is installed; the landing method is implemented when the unmanned aerial vehicle executes the landing program. Typically the unmanned aerial vehicle is equipped with a camera, which may have a resolution of several megapixels.
In a specific application, the unmanned aerial vehicle receives a landing instruction and, based on the landing instruction, starts landing, that is, controls the camera to shoot an image of the target area. The place or area where the unmanned aerial vehicle is to land is the landing platform; the target area is the area shot by the camera of the unmanned aerial vehicle and contains the landing platform. The preset visual identifier on the landing platform may be a two-dimensional-code visual identifier, for example ArUco or AprilTag. Generally, the target image of the target area includes both the region corresponding to the preset visual identifier and the region corresponding to everything other than the preset visual identifier; that is, the region corresponding to the preset visual identifier is only a part of the target image.
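As an illustration of this step, the following is a minimal sketch of detecting such a two-dimensional-code identifier in a shot frame, assuming OpenCV's aruco module (API as in OpenCV 4.7+); the dictionary choice DICT_4X4_50 is an illustrative assumption, not something specified by the invention:

```python
# Minimal ArUco detection sketch (assumed OpenCV 4.7+ API; the dictionary
# DICT_4X4_50 is an illustrative assumption, not part of the invention).
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

def detect_marker(image):
    """Return (corners, ids) of markers found in a BGR image, or (None, None)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    return (corners, ids) if ids is not None else (None, None)
```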
In general, the target images may be obtained by the camera continuously shooting (scanning) the target area during the landing of the unmanned aerial vehicle, so that many target images are obtained at consecutive moments (for example, 40 or 30 frames per second). Steps S11-S14 of the invention need to be performed on each target image; that is, the target image scanned at each moment is identified separately to obtain the pose information corresponding to that moment.
It can be understood that the process of scanning the target area with the camera to obtain target images is similar to shooting a video stream of the target area: each frame of the video stream needs to be identified in real time (the frame corresponding to the current moment is the target image; the frame or frames before the current moment may be called historical target images).
Step S12: determining an effective area corresponding to the preset visual identifier in the target image.
Step S13: identifying the effective area corresponding to the preset visual identifier to obtain pose information of the unmanned aerial vehicle.
It can be understood that, as described above, the area corresponding to the preset visual identifier is only a part of the target image, and the information of this part is the effective information. The unmanned aerial vehicle only needs to identify the effective information to obtain its pose information and does not need to identify the whole target image; therefore, when identifying the target image, the amount of data to be processed is reduced, and the pose information of the unmanned aerial vehicle is obtained faster.
Therefore, the effective area corresponding to the preset visual identifier needs to be determined in the target image, so that this effective area can be identified to obtain the pose information of the unmanned aerial vehicle.
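The following hedged sketch illustrates this idea: crop the effective area out of the target image, run the detector on the much smaller crop, then shift the detected corners back into full-image coordinates. The (x, y, w, h) ROI format and the reuse of detect_marker() from the sketch above are assumptions:

```python
# Sketch: identify only the effective area (ROI) instead of the whole image.
def detect_in_roi(frame, roi):
    """roi: (x, y, w, h) effective area in full-image pixel coordinates."""
    x, y, w, h = roi
    crop = frame[y:y + h, x:x + w]      # far fewer pixels to process
    corners, ids = detect_marker(crop)  # from the sketch above
    if ids is None:
        return None, None
    for c in corners:                   # corners are relative to the crop;
        c[0, :, 0] += x                 # shift back to full-image coordinates
        c[0, :, 1] += y
    return corners, ids
```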
Further, the preset visual identifier comprises a plurality of preset visual identifiers. Prior to step S12, the method further includes: acquiring selected historical pose information corresponding to a selected historical target image of the target area; and determining a selected preset visual identifier from the plurality of preset visual identifiers based on the selected historical pose information. Correspondingly, step S12 includes: determining an effective area corresponding to the selected preset visual identifier in the target image; and step S13 includes: identifying the effective area corresponding to the selected preset visual identifier to obtain the pose information.
The plurality of preset visual identifiers are preset visual identifiers of various sizes, and preset visual identifiers of different sizes correspond to different height information. The step of determining the selected preset visual identifier from the plurality of preset visual identifiers based on the selected historical pose information comprises: acquiring the selected historical height information of the unmanned aerial vehicle in the vertical direction from the selected historical pose information; and determining the selected preset visual identifier corresponding to the selected historical height information from the preset visual identifiers of the various sizes.
It should be noted that during the landing of the unmanned aerial vehicle, as the height information (the vertical distance between the unmanned aerial vehicle and the landing platform) changes, visual identifiers of different sizes are identified at different speeds. The selected visual identifier corresponding to the current height information therefore needs to be determined among the plurality of preset visual identifiers, so that the unmanned aerial vehicle identifies the selected visual identifier faster. When the unmanned aerial vehicle is at a different height, the selected preset visual identifier corresponding to the height information at that moment is determined from the plurality of preset visual identifiers.
Because the field of view of the unmanned aerial vehicle differs at different heights, the unmanned aerial vehicle identifies identifiers of different sizes at different heights: a small identifier at low altitude and a large identifier at high altitude. The specific idea is as follows:
before the landing method is executed, taking a visual mark as an example, an unmanned aerial vehicle is parked on a piece of paper printed with the two-dimensional code, the size of the two-dimensional code is designed according to the speed of the two-dimensional code to be identified, when the speed of the two-dimensional code identified by the printed size of the two-dimensional code according to the method for identifying the local area (effective area) is lower than or higher than the preset identification speed, the size of the two-dimensional code is reduced or enlarged, and the two-dimensional code is printed until the identification speed reaches the preset identification speed, and at the moment, the actual size of the two-dimensional code is the size of the first two-dimensional code; the unmanned aerial vehicle slowly keeps away from first two-dimensional code until can't discern first two-dimensional code, prints the two-dimensional code that is little bigger at this moment and puts on first two-dimensional code, at this moment, if the speed that unmanned aerial vehicle discerned this two-dimensional code is less than or is higher than the recognition rate of predetermineeing, also adjusts down or adjusts the size of this two-dimensional code, until the speed that this two-dimensional code discerned reaches the recognition rate of predetermineeing, and the actual size of two-dimensional code is the second two-dimensional code size at this moment. By referring to the same method, the design of the following third two-dimensional code.
Once the plurality of visual identifiers of different sizes corresponding to the plurality of pieces of height information have been determined, the visual identifiers of the different sizes are placed in the target area at the same time so that all the steps of the landing method can be carried out.
In a specific application, before the target image at the current moment has been identified, the pose information at that moment is not yet available, that is, the height of the unmanned aerial vehicle above the landing platform at the current moment cannot be obtained. Therefore, the selected historical pose information of a selected historical target image at a previous moment is used (usually the frame immediately before the target image, which is closest to the current moment so that the corresponding height information is most accurate; the user may also select another historical target image as required). Since the selected historical target image has already been identified by the unmanned aerial vehicle, the height information in its selected historical pose information, namely the selected historical height information, can be obtained directly. Based on the selected historical height information, the unmanned aerial vehicle determines the corresponding selected preset visual identifier from the plurality of preset visual identifiers.
For example, suppose the plurality of preset visual identifiers comprises a first identifier corresponding to heights below 1 m, a second identifier corresponding to 1 m-4 m, and a third identifier corresponding to heights above 4 m. If at some moment the unmanned aerial vehicle obtains selected historical height information of 4.5 m, it determines the third identifier as the selected preset visual identifier from the plurality of preset visual identifiers.
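A sketch of this size selection, using the example thresholds above (below 1 m, 1 m-4 m, above 4 m); the marker IDs are hypothetical:

```python
# Hypothetical IDs for the small, medium and large preset visual identifiers.
SMALL_ID, MEDIUM_ID, LARGE_ID = 0, 1, 2

def select_marker_id(selected_historical_height_m):
    """Map the selected historical height to the identifier size to look for."""
    if selected_historical_height_m < 1.0:
        return SMALL_ID       # low altitude: small identifier
    if selected_historical_height_m <= 4.0:
        return MEDIUM_ID      # 1 m-4 m: medium identifier
    return LARGE_ID           # high altitude: large identifier

# e.g. select_marker_id(4.5) returns LARGE_ID, matching the example above.
```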
Referring to fig. 3, fig. 3 is a schematic diagram of a plurality of preset visual identifiers according to the present invention, in which the preset visual identifiers are two-dimensional codes: the largest is the first two-dimensional code (large size), corresponding to height information greater than 4 m; the middle one is the second two-dimensional code (medium size), corresponding to height information between 1 m and 4 m; and the smallest is the third two-dimensional code (small size), corresponding to height information less than 1 m.
Further, the step of determining the effective area corresponding to the selected preset visual identifier in the target image includes: acquiring historical two-dimensional information and historical sizes of the selected preset visual identifier in a plurality of result history target images; and determining the effective area corresponding to the selected preset visual identifier in the target image based on the historical two-dimensional information and the historical size. The latter step includes: obtaining a predicted displacement based on the historical two-dimensional information; and determining the effective area corresponding to the selected preset visual identifier in the target image based on the predicted displacement, the preset magnification ratio and the historical size.
The plurality of result history target images may be several target images preceding the target image at the current moment, for example the two result history target images at the two moments immediately before the current moment. Since images at consecutive moments are consecutive and temporally ordered, the target image may be called the third frame image and the two result history target images the first frame image and the second frame image respectively. The historical two-dimensional information is the position information of the selected preset visual identifier in a result history target image, usually the position of a certain reference point of the identifier, the reference point being a fixed point in the selected preset visual identifier. The predicted displacement is obtained from the historical two-dimensional information corresponding to the first and second frame images, and the effective area corresponding to the selected preset visual identifier is determined in the target image based on the predicted displacement, the preset magnification ratio and the historical size. The historical size is usually the size of the selected preset visual identifier in the historical target image at the moment immediately before the target image; since that moment is closest to the moment of the target image, the change in the identifier's size is very small and the two sizes can be treated as equal. The preset magnification ratio may be set by the user as required.
It can be understood that at least the two frame images preceding the target image at the current moment are needed to obtain the predicted displacement. At the starting moment of the landing, the initial first frame image and the initial second frame image must be searched globally, because no result history target images exist for them: before the unmanned aerial vehicle starts to land, no target image of the target area has been shot, so the predicted displacements corresponding to the initial first and second frame images cannot be obtained.
Referring to fig. 4-5, fig. 4 is a schematic view of a plurality of result history target images according to the present invention, and fig. 5 is a schematic view of a target image according to the present invention. In fig. 4, the upper dashed frame is the first frame image and the lower dashed frame is the second frame image. The selected preset visual identifier is a selected two-dimensional code whose height in both the first and second frame images is H and whose width is W, i.e. the historical size is W×H. Taking the point at the upper-left corner of the selected two-dimensional code as the reference point, the historical two-dimensional information of the reference point in the two frames is (x1, y1) and (x2, y2) respectively. The displacement between the reference point in the third frame image (the target image) and the reference point in the second frame image is taken as the predicted displacement, namely (x2-x1, y2-y1), so the reference point in the target image is predicted to lie at (x2+(x2-x1), y2+(y2-y1)). With the preset magnification ratio 1:1.3, the effective area corresponding to the selected two-dimensional code in the target image, shown in fig. 5, is a rectangle of size 1.3W×1.3H enclosing the predicted position of the identifier.
It will be appreciated that the user may also set other preset magnification ratios and adapt the above calculation accordingly to determine the effective area; the above description of the invention is a preferred example and is not limiting.
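Under those caveats, here is a sketch of the prediction described above, assuming the top-left-corner reference point and a search window centered on the predicted identifier position:

```python
# Sketch: predict the effective area in the current frame from the reference
# point in the two previous frames (worked example above: displacement
# (x2-x1, y2-y1), historical size W x H, magnification ratio 1:1.3).
def predict_effective_area(p1, p2, hist_w, hist_h, scale=1.3,
                           img_w=1920, img_h=1080):
    """p1, p2: (x, y) reference points in the two previous frames.
    Returns the (x, y, w, h) effective area, clipped to the image bounds."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]   # predicted displacement
    px, py = p2[0] + dx, p2[1] + dy         # predicted top-left corner
    w, h = hist_w * scale, hist_h * scale   # enlarged search window
    x = px - (w - hist_w) / 2.0             # center the enlargement on the
    y = py - (h - hist_h) / 2.0             # predicted identifier position
    x, y = max(0.0, x), max(0.0, y)
    w, h = min(w, img_w - x), min(h, img_h - y)
    return int(x), int(y), int(w), int(h)
```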
Step S14: controlling the landing attitude of the unmanned aerial vehicle based on the pose information.
It should be noted that the pose information obtained after identifying the target image is generally the relative pose of the unmanned aerial vehicle with respect to the landing platform. The pose information generally includes the horizontal coordinates x and y of the unmanned aerial vehicle, its vertical coordinate z (which may be the relative height between the unmanned aerial vehicle and the landing platform), and its yaw, pitch and roll angles.
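The invention does not prescribe how the pose is computed from the identified identifier; one common approach, shown here only as an assumed sketch and not necessarily the invention's exact procedure, is to solve a perspective-n-point problem from the four detected corners, given a calibrated camera and the known physical side length of the identifier:

```python
# Assumed sketch: relative pose from marker corners via PnP.
# Requires a calibrated camera (camera_matrix, dist_coeffs).
import numpy as np
import cv2

def marker_pose(corners, marker_len_m, camera_matrix, dist_coeffs):
    """corners: 4x2 image points (TL, TR, BR, BL) of one detected marker.
    Returns (rvec, tvec): marker rotation/translation in the camera frame."""
    half = marker_len_m / 2.0
    obj_pts = np.array([[-half,  half, 0.0], [ half,  half, 0.0],
                        [ half, -half, 0.0], [-half, -half, 0.0]],
                       dtype=np.float32)
    img_pts = np.asarray(corners, dtype=np.float32).reshape(4, 2)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts,
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    return (rvec, tvec) if ok else (None, None)
```

The translation tvec gives the relative x, y and z, and the rotation rvec can be converted (e.g. via cv2.Rodrigues) to yaw, pitch and roll.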
Further, before step S14, the method further includes: acquiring the historical pose information corresponding respectively to the plurality of result history target images; obtaining the relative displacement of the unmanned aerial vehicle based on the historical pose information; and obtaining a relative velocity based on the relative displacement and the time difference corresponding to the relative displacement. Correspondingly, step S14 includes: controlling the landing attitude of the unmanned aerial vehicle based on the relative velocity and the pose information.
Here, take the plurality of result history target images to be two, with the target image defined as the third frame image and the two result history target images as the first frame image and the second frame image respectively. Based on (x4, y4, z4) in the first historical pose information corresponding to the first frame image and (x5, y5, z5) in the second historical pose information corresponding to the second frame image, the relative displacement is determined as (x6, y6, z6) = (x5-x4, y5-y4, z5-z4); and based on the time difference t between the first and second frame images (the inter-frame time difference), the relative velocities are determined as vx = (x5-x4)/t in the x direction, vy = (y5-y4)/t in the y direction and vz = (z5-z4)/t in the z direction. The relative velocity comprises vx, vy and vz.
It will be appreciated that when different result history target images are selected for calculating the relative displacement, the corresponding time difference differs. For example, if the result history target images are the 20th and 30th frame images (counting from the moment the unmanned aerial vehicle starts to land), the time difference is the time between the 20th and the 30th frame image, i.e. the duration of 10 frames.
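A sketch of this relative-velocity computation, directly following the formulas above:

```python
# Sketch: relative velocity from two historical poses and their time difference.
def relative_velocity(pose_a, pose_b, dt):
    """pose_a, pose_b: (x, y, z) from two result history target images;
    dt: time between them in seconds (e.g. 10 frames at 30 fps = 10/30 s).
    Returns (vx, vy, vz)."""
    return tuple((b - a) / dt for a, b in zip(pose_a, pose_b))
```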
The technical scheme of the invention provides a landing method for an unmanned aerial vehicle, which comprises the following steps: in the landing process of the unmanned aerial vehicle, shooting a target image of a target area, wherein the target area is provided with a preset visual identifier; determining an effective area corresponding to the preset visual identifier in the target image; identifying the effective area corresponding to the preset visual identifier to obtain pose information of the unmanned aerial vehicle; and controlling the landing attitude of the unmanned aerial vehicle based on the pose information.
In the existing landing method, the unmanned aerial vehicle performs global identification on the target image; the amount of data processed during identification is large, so the identification efficiency is low, and the real-time performance of the pose information obtained during the landing of the unmanned aerial vehicle is poor. In the landing method of the invention, the unmanned aerial vehicle identifies only a partial area of the target image, namely the effective area; the amount of data processed during identification is small, so the identification efficiency is high, the pose information is obtained faster during the landing, and its real-time performance is better. Therefore, the landing method achieves the technical effect of improving the real-time performance of the pose information of the unmanned aerial vehicle.
Meanwhile, the landing attitude of the unmanned aerial vehicle can be controlled based on the pose information and the relative velocity, so that the unmanned aerial vehicle lands accurately on the landing platform during the landing process. In addition, visual identifiers of different sizes are provided, so that the unmanned aerial vehicle can identify the corresponding identifier at different heights, further increasing the speed at which it identifies the visual identifiers.
Referring to fig. 6, fig. 6 is a block diagram of a first embodiment of a landing apparatus according to the present invention, the apparatus being applied to a drone, the apparatus comprising:
the shooting module 10 is used for shooting a target image of a target area in the landing process of the unmanned aerial vehicle, wherein the target area is provided with a preset visual identifier;
the determining module 20 is configured to determine an effective area corresponding to the preset visual identifier in the target image;
the identifying module 30 is configured to identify an effective area corresponding to the preset visual identifier, so as to obtain pose information of the unmanned aerial vehicle;
and the control module 40 is used for controlling the landing attitude of the unmanned aerial vehicle based on the pose information.
The foregoing description covers only optional embodiments of the present invention and is not intended to limit its scope; all equivalent structural changes made using the description and drawings of the present invention, and all direct or indirect applications in other related technical fields, are likewise included in the scope of the invention.

Claims (8)

1. A landing method for an unmanned aerial vehicle, the method comprising the steps of:
in the landing process of the unmanned aerial vehicle, shooting a target image of a target area, wherein the target area is provided with a preset visual identifier;
determining an effective area corresponding to the preset visual identifier in the target image;
identifying the effective area corresponding to the preset visual identifier to obtain pose information of the unmanned aerial vehicle;
controlling the landing attitude of the unmanned aerial vehicle based on the pose information;
the preset visual identifier comprises a plurality of preset visual identifiers; before the step of determining the effective area corresponding to the preset visual identifier in the target image, the method further includes:
acquiring selected historical pose information corresponding to a selected historical target image of the target area; the selected historical target image is a previous frame image of the target image;
determining a selected preset visual identifier from the plurality of preset visual identifiers based on the selected historical pose information;
the step of determining the effective area corresponding to the preset visual identifier in the target image comprises the following steps:
determining an effective area corresponding to the selected preset visual identifier in the target image;
the step of identifying the effective area corresponding to the preset visual identifier to obtain pose information of the unmanned aerial vehicle comprises the following steps:
identifying an effective area corresponding to the selected preset visual identifier to obtain the pose information;
the plurality of preset visual identifiers are preset visual identifiers of various sizes, and preset visual identifiers of different sizes correspond to different height information; the step of determining the selected preset visual identifier from the plurality of preset visual identifiers based on the selected historical pose information comprises the following steps:
acquiring the selected historical height information of the unmanned aerial vehicle in the vertical direction from the selected historical pose information;
and determining the selected preset visual identifier corresponding to the selected historical height information from the preset visual identifiers of the various sizes.
2. The method of claim 1, wherein the step of determining the effective area corresponding to the selected preset visual identifier in the target image comprises:
acquiring historical two-dimensional information and historical sizes of the selected preset visual identifier in a plurality of result history target images;
and determining the effective area corresponding to the selected preset visual identifier in the target image based on the historical two-dimensional information and the historical size.
3. The method of claim 2, wherein the step of determining the effective area corresponding to the selected preset visual identifier in the target image based on the historical two-dimensional information and the historical size comprises:
obtaining a predicted displacement based on the historical two-dimensional information;
and determining the effective area corresponding to the selected preset visual identifier in the target image based on the predicted displacement, a preset magnification ratio and the historical size.
4. The method of claim 3, wherein, prior to the step of controlling the landing attitude of the unmanned aerial vehicle based on the pose information, the method further comprises:
acquiring historical pose information corresponding respectively to the plurality of result history target images;
obtaining the relative displacement of the unmanned aerial vehicle based on the historical pose information;
and obtaining a relative velocity based on the relative displacement and a time difference corresponding to the relative displacement;
and wherein the step of controlling the landing attitude of the unmanned aerial vehicle based on the pose information comprises:
controlling the landing attitude of the unmanned aerial vehicle based on the relative velocity and the pose information.
5. The method of any one of claims 1-4, wherein the preset visual identifier is a two-dimensional code identifier.
6. A landing device for use with a drone, the device comprising:
the shooting module is used for shooting a target image of a target area in the landing process of the unmanned aerial vehicle, wherein the target area is provided with a preset visual identifier;
the determining module is used for determining an effective area corresponding to the preset visual identifier in the target image;
the identification module is used for identifying the effective area corresponding to the preset visual identifier so as to obtain pose information of the unmanned aerial vehicle;
the control module is used for controlling the landing attitude of the unmanned aerial vehicle based on the pose information;
the preset visual identifier comprises a plurality of preset visual identifiers;
the determining module is further used for acquiring selected historical pose information corresponding to a selected historical target image of the target area, the selected historical target image being the previous frame image of the target image; determining a selected preset visual identifier from the plurality of preset visual identifiers based on the selected historical pose information; and determining an effective area corresponding to the selected preset visual identifier in the target image;
the identification module is further used for identifying an effective area corresponding to the selected preset visual identifier so as to obtain the pose information;
the plurality of preset visual identifiers are preset visual identifiers of various sizes, and preset visual identifiers of different sizes correspond to different height information;
and the determining module is further used for acquiring the selected historical height information of the unmanned aerial vehicle in the vertical direction from the selected historical pose information, and determining the selected preset visual identifier corresponding to the selected historical height information from the preset visual identifiers of the various sizes.
7. An unmanned aerial vehicle, characterized in that the unmanned aerial vehicle comprises: a memory, a processor, and a landing program stored on the memory and executable on the processor, wherein the landing program, when executed by the processor, implements the steps of the landing method according to any one of claims 1 to 5.
8. A computer readable storage medium, characterized in that it has stored thereon a landing program, which when executed by a processor, implements the steps of the landing method according to any of claims 1 to 5.
CN202110134460.3A 2021-02-01 2021-02-01 Landing method, landing device, unmanned aerial vehicle and computer-readable storage medium Active CN112987764B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110134460.3A CN112987764B (en) 2021-02-01 2021-02-01 Landing method, landing device, unmanned aerial vehicle and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN112987764A CN112987764A (en) 2021-06-18
CN112987764B true CN112987764B (en) 2024-02-20

Family

ID=76346635

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110134460.3A Active CN112987764B (en) 2021-02-01 2021-02-01 Landing method, landing device, unmanned aerial vehicle and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN112987764B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537430B (en) * 2021-07-02 2022-06-07 北京三快在线科技有限公司 Beacon, beacon generation method, beacon generation device and equipment
CN113867373A (en) * 2021-09-30 2021-12-31 广州极飞科技股份有限公司 Unmanned aerial vehicle landing method and device, parking apron and electronic equipment
CN114200954B (en) * 2021-10-28 2023-05-23 佛山中科云图智能科技有限公司 Unmanned aerial vehicle landing method and device based on Apriltag, medium and electronic equipment
CN115291624B (en) * 2022-07-11 2023-11-28 广州中科云图智能科技有限公司 Unmanned aerial vehicle positioning landing method, storage medium and computer equipment
CN114935946B (en) * 2022-07-21 2022-12-09 浙江这里飞科技有限公司 Unmanned aerial vehicle landing method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9889932B2 (en) * 2015-07-18 2018-02-13 Tata Consultancy Services Limited Methods and systems for landing of unmanned aerial vehicle

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102967305A (en) * 2012-10-26 2013-03-13 南京信息工程大学 Multi-rotor unmanned aerial vehicle pose acquisition method based on markers in shape of large and small square
US9551579B1 (en) * 2015-08-07 2017-01-24 Google Inc. Automatic connection of images using visual features
CN106529538A (en) * 2016-11-24 2017-03-22 腾讯科技(深圳)有限公司 Method and device for positioning aircraft
CN106952284A (en) * 2017-03-28 2017-07-14 歌尔科技有限公司 A kind of feature extracting method and its device based on compression track algorithm
CN107943077A (en) * 2017-11-24 2018-04-20 歌尔股份有限公司 A kind of method for tracing, device and the unmanned plane of unmanned plane drop target
CN108072385A (en) * 2017-12-06 2018-05-25 爱易成技术(天津)有限公司 Space coordinates localization method, device and the electronic equipment of mobile target
CN108875667A (en) * 2018-06-27 2018-11-23 北京字节跳动网络技术有限公司 target identification method, device, terminal device and storage medium
CN109308463A (en) * 2018-09-12 2019-02-05 北京奇艺世纪科技有限公司 A kind of video object recognition methods, device and equipment
CN109431381A (en) * 2018-10-29 2019-03-08 北京石头世纪科技有限公司 Localization method and device, electronic equipment, the storage medium of robot
KR102018892B1 (en) * 2019-02-15 2019-09-05 국방과학연구소 Method and apparatus for controlling take-off and landing of unmanned aerial vehicle
CN111562791A (en) * 2019-03-22 2020-08-21 沈阳上博智像科技有限公司 System and method for identifying visual auxiliary landing of unmanned aerial vehicle cooperative target
CN110968107A (en) * 2019-10-25 2020-04-07 深圳市道通智能航空技术有限公司 Landing control method, aircraft and storage medium
CN110989687A (en) * 2019-11-08 2020-04-10 上海交通大学 Unmanned aerial vehicle landing method based on nested square visual information
CN111709328A (en) * 2020-05-29 2020-09-25 北京百度网讯科技有限公司 Vehicle tracking method and device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
UAV autonomous precision landing system based on hierarchical markers; Zhang Mi et al.; Acta Aeronautica et Astronautica Sinica; Vol. 39, No. 10; pp. 213-221 *

Also Published As

Publication number Publication date
CN112987764A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN112987764B (en) Landing method, landing device, unmanned aerial vehicle and computer-readable storage medium
US11275931B2 (en) Human pose prediction method and apparatus, device, and storage medium
CN109391762B (en) Tracking shooting method and device
CN110865388B (en) Combined calibration method and device for camera and laser radar and storage medium
US11373410B2 (en) Method, apparatus, and storage medium for obtaining object information
CN112802111B (en) Object model construction method and device
CN111123964B (en) Unmanned aerial vehicle landing method and device and computer readable medium
US20220139282A1 (en) Electronic device and method for displaying image on flexible display
CN111192341A (en) Method and device for generating high-precision map, automatic driving equipment and storage medium
CN111381602B (en) Unmanned aerial vehicle flight control method and device and unmanned aerial vehicle
CN112884900A (en) Landing positioning method and device for unmanned aerial vehicle, storage medium and unmanned aerial vehicle nest
CN211506262U (en) Navigation system based on visual positioning
CN111256676A (en) Mobile robot positioning method, device and computer readable storage medium
CN113761255B (en) Robot indoor positioning method, device, equipment and storage medium
CN110163862B (en) Image semantic segmentation method and device and computer equipment
US20210357620A1 (en) System, moving object, and information processing apparatus
CN111444749B (en) Method and device for identifying road surface guide mark and storage medium
CN111538009A (en) Radar point marking method and device
CN111580551A (en) Navigation system and method based on visual positioning
CN108063884B (en) Image processing method and mobile terminal
CN114187349B (en) Product processing method and device, terminal device and storage medium
KR20170071278A (en) Mobile terminal
CN114332118A (en) Image processing method, device, equipment and storage medium
CN111612688B (en) Image processing method, device and computer readable storage medium
CN111738034B (en) Lane line detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant