CN111694370A - Visual method and system for multi-stage fixed-point directional landing of unmanned aerial vehicle - Google Patents


Info

Publication number
CN111694370A
CN111694370A (application CN201910184193.3A)
Authority
CN
China
Prior art keywords
mark, position information, feature point, unmanned aerial vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910184193.3A
Other languages
Chinese (zh)
Inventor
高坚
翁海敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fengyi Technology (Shenzhen) Co.,Ltd.
Original Assignee
SF Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SF Technology Co Ltd filed Critical SF Technology Co Ltd
Priority to CN201910184193.3A priority Critical patent/CN111694370A/en
Publication of CN111694370A publication Critical patent/CN111694370A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 — Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D 1/10 — Simultaneous control of position or course in three dimensions
    • G05D 1/101 — Simultaneous control of position or course in three dimensions specially adapted for aircraft

Abstract

The invention relates to a visual method and system for multi-stage fixed-point directional landing of an unmanned aerial vehicle. The landing point of the unmanned aerial vehicle is first located from the first position information of the feature point of a first mark with a larger area. When the unmanned aerial vehicle descends below a certain height, the complete image information of the first mark can no longer be collected, so the position of the first mark's feature point can no longer be accurately located from that image. The position of the first mark's feature point is then accurately re-established from the second position information of the feature point of a second mark with a smaller area, together with the set distance between the feature points of the two marks. In this way the position of the landing point can be accurately located throughout the entire descent, guiding the unmanned aerial vehicle to a fixed-point landing.

Description

Visual method and system for multi-stage fixed-point directional landing of unmanned aerial vehicle
Technical Field
The invention belongs to the technical field of unmanned aerial vehicle control, and particularly relates to a multi-stage fixed-point directional landing visual method and system for an unmanned aerial vehicle.
Background
At present, unmanned aerial vehicles mainly rely on GPS positioning, or on a single marker image, to assist landing. Both approaches have limitations:
1. GPS positioning accuracy carries a large error (2-5 m), is easily interfered with, and GPS information is easily lost, so the fixed-point landing error of the unmanned aerial vehicle is large;
2. with a single marker image, the camera only needs to capture the marker image and identify its center point to land accurately, i.e., accurate landing is achieved by aligning the camera with the mark. However, when the unmanned aerial vehicle flies at a large height, the marker image must generally be large so that the camera can capture the whole image; once the unmanned aerial vehicle descends below a certain height, the camera can no longer acquire the complete image of the mark, the center point cannot be identified, and the unmanned aerial vehicle drifts in the air relative to the landing point.
Disclosure of Invention
In order to solve the technical problem, the invention aims to provide a multi-stage fixed-point directional landing visual method and system for an unmanned aerial vehicle.
According to one aspect of the invention, a visual method for multi-stage fixed-point directional landing of an unmanned aerial vehicle is provided, which comprises the following steps:
acquiring image information of a first mark of a to-be-landed field of the unmanned aerial vehicle, wherein a second mark is also arranged in the to-be-landed field;
generating first position information of the first mark feature point according to the image information of the first mark, so that the unmanned aerial vehicle can land by taking the first position information of the first mark feature point as a landing point until the image information of the first mark cannot be acquired;
acquiring image information of the second mark of the unmanned aerial vehicle to-be-landed field;
and generating second position information of the second mark feature point according to the image information of the second mark, and positioning first position information of the first mark feature point according to the second position information of the second mark feature point and a set distance between the feature point of the first mark and the feature point of the second mark, so that the unmanned aerial vehicle can land by taking the first position information of the first mark feature point as a landing point.
Further, the visual method for multi-stage fixed-point directional landing of the unmanned aerial vehicle further comprises:
collecting height information of the unmanned aerial vehicle from a landing field;
and comparing the height information with a preset height, if the height information is not less than the preset height, acquiring the image information of the first mark, and if the height information is less than the preset height, acquiring the image information of the second mark.
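The height-threshold switch described above can be sketched as follows. This is a minimal illustration only: the 2 m threshold is the example value given later in the embodiment, and the function name is hypothetical, not from the patent.

```python
# Altitude-based stage switch: which marker image should be acquired.
# PRESET_HEIGHT follows the 2 m example in the embodiment; illustrative only.
PRESET_HEIGHT = 2.0  # metres


def select_marker(height: float) -> str:
    """Return which mark's image information to acquire at this altitude.

    Not less than the preset height -> large first mark;
    below the preset height      -> small second mark.
    """
    return "first" if height >= PRESET_HEIGHT else "second"
```

At or above the threshold the larger first mark stays fully in view; below it, acquisition switches to the smaller second mark.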
Generating first position information of the first mark feature point according to the image information of the first mark, so that the unmanned aerial vehicle can land by taking the first position information of the first mark feature point as a landing point, and the method comprises the following steps:
generating first position information of the first mark feature point according to the image information of the first mark, wherein the first position information of the first mark feature point is based on first position information of a first coordinate system established on the basis of a landing field;
converting first position information of a first coordinate system of the first marked feature point into first position information corresponding to a second coordinate system according to a first preset conversion relation between a first coordinate system generated based on an internal reference matrix of a camera and the position information of the second coordinate system established based on the camera;
and converting the first position information of the second coordinate system of the first marking characteristic point into corresponding first position information of a third coordinate system based on a second preset conversion relation of converting the position information of the second coordinate system and the position information of the third coordinate system, so that the unmanned aerial vehicle can land by taking the first position information of the third coordinate system of the first marking characteristic point as a landing point.
Generating second position information of the second marker feature point according to the image information of the second marker, including:
generating second position information of the second mark feature point according to the image information of the second mark, wherein the second position information of the second mark feature point is based on second position information of a first coordinate system established on the basis of a landing site;
based on the first preset conversion relation, converting the second position information of the first coordinate system of the second marking feature point into second position information of a second coordinate system established based on a camera;
and converting the second position information of the second coordinate system of the second mark feature point into second position information of a third coordinate system established based on the unmanned aerial vehicle based on a second preset conversion relation.
According to the second position information of the second mark feature point and the set distance between the feature point of the first mark and the feature point of the second mark, positioning the first position information of the first mark feature point, so that the unmanned aerial vehicle can land by taking the first position information of the first mark feature point as a landing point, the method comprises the following steps:
and positioning first position information of the third coordinate system of the first mark characteristic point according to second position information of the third coordinate system of the second mark characteristic point and a set distance between the characteristic point of the first mark and the characteristic point of the second mark, so that the unmanned aerial vehicle can land by taking the first position information of the third coordinate system of the first mark characteristic point as a landing point.
The visual method for the multi-stage fixed-point directional landing of the unmanned aerial vehicle further comprises the following steps:
determining the installation position of a camera on the unmanned aerial vehicle, and determining the set distance between the feature point of the first mark and the feature point of the second mark and the setting position of the feature point of the first mark relative to the feature point of the second mark according to the distance between the installation position and the feature point of the unmanned aerial vehicle and the position of the installation position relative to the feature point of the unmanned aerial vehicle.
The feature point is a center point.
According to another aspect of the invention, there is provided a vision system for multi-stage fixed-point directional landing of a drone, comprising:
an image information acquisition unit for the first mark, configured to acquire image information of the first mark of the unmanned aerial vehicle's landing field, wherein a second mark is also arranged in the landing field, and the area of the first mark is larger than that of the second mark;
the first position information generating unit is configured to generate first position information of the first mark feature point according to the image information of the first mark, so that the unmanned aerial vehicle can land by taking the first position information of the first mark feature point as a landing point until the image information of the first mark cannot be acquired;
the image information acquisition unit of the second mark is configured to acquire the image information of the second mark in the to-be-landed field of the unmanned aerial vehicle;
and the second position information generating unit is configured to generate second position information of the second mark feature point according to the image information of the second mark, and position first position information of the first mark feature point according to the second position information of the second mark feature point and a set distance between the feature point of the first mark and the feature point of the second mark, so that the unmanned aerial vehicle can land by taking the first position information of the first mark feature point as a landing point.
Further, the multi-stage fixed-point directional landing vision system for the unmanned aerial vehicle further comprises a judging unit, wherein the judging unit is configured to:
collecting height information of the unmanned aerial vehicle from a landing field;
and comparing the height information with a preset height, if the height information is not less than the preset height, acquiring the image information of the first mark, and if the height information is less than the preset height, acquiring the image information of the second mark.
The first position information generating unit is further configured to:
generating first position information of the first mark feature point according to the image information of the first mark, wherein the first position information of the first mark feature point is based on first position information of a first coordinate system established on the basis of a landing field;
converting first position information of a first coordinate system of the first marked feature point into first position information corresponding to a second coordinate system according to a first preset conversion relation between a first coordinate system generated based on an internal reference matrix of a camera and the position information of the second coordinate system established based on the camera;
and converting the first position information of the second coordinate system of the first marking characteristic point into corresponding first position information of a third coordinate system based on a second preset conversion relation of converting the position information of the second coordinate system and the position information of the third coordinate system, so that the unmanned aerial vehicle can land by taking the first position information of the third coordinate system of the first marking characteristic point as a landing point.
The second position information generating unit is further configured to:
generating second position information of the second mark feature point according to the image information of the second mark, wherein the second position information of the second mark feature point is based on second position information of a first coordinate system established on the basis of a landing site;
based on the first preset conversion relation, converting the second position information of the first coordinate system of the second marking feature point into second position information of a second coordinate system established based on a camera;
and converting the second position information of the second coordinate system of the second mark feature point into second position information of a third coordinate system established based on the unmanned aerial vehicle based on a second preset conversion relation.
The second position information generating unit is further configured to:
and positioning first position information of the third coordinate system of the first mark characteristic point according to second position information of the third coordinate system of the second mark characteristic point and a set distance between the characteristic point of the first mark and the characteristic point of the second mark, so that the unmanned aerial vehicle can land by taking the first position information of the third coordinate system of the first mark characteristic point as a landing point.
According to another aspect of the present invention, there is provided an apparatus comprising:
one or more processors;
a memory for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of the above.
According to another aspect of the invention, there is provided a computer readable storage medium storing a computer program which, when executed by a processor, implements a method as defined in any one of the above.
Compared with the prior art, the invention has the following beneficial effects:
1. In the visual method for multi-stage fixed-point directional landing, the landing point of the unmanned aerial vehicle is first located from the first position information of the feature point of the larger first mark. When, at a certain height during descent, the unmanned aerial vehicle can no longer acquire the complete image information of the first mark, and therefore can no longer accurately locate the first mark's feature point from that image, the position of the feature point is accurately re-established from the second position information of the smaller second mark's feature point together with the set distance between the feature points of the two marks. The landing point can thus be accurately located throughout the descent, guiding the unmanned aerial vehicle to a fixed-point landing.
2. In the multi-stage fixed-point directional landing vision system, the units cooperate to achieve the same effect: the landing point is located from the first position information of the larger first mark's feature point; when that position can no longer be accurately located during descent below a certain height, it is re-established from the second position information of the smaller second mark's feature point and the set distance between the two feature points, so that the landing point is accurately located throughout the landing and the unmanned aerial vehicle is guided to a fixed-point landing.
Drawings
FIG. 1 is a schematic diagram of a first mark and a second mark according to an embodiment;
FIG. 2 is a block diagram of a computer system according to an embodiment;
FIG. 3 is a flow chart of the present invention.
In the figures: 100 computer system; 101 CPU; 102 ROM; 103 RAM; 104 bus; 105 I/O interface; 106 input part; 107 output part; 108 storage part; 109 communication part; 110 drive; 111 removable medium.
Detailed Description
In order to better understand the technical scheme of the invention, the invention is further explained by combining the specific embodiment and the attached drawings of the specification.
The first embodiment is as follows:
the visual method for the multi-stage fixed-point directional landing of the unmanned aerial vehicle comprises the following steps:
s1, collecting height information of the unmanned aerial vehicle from a landing site; and comparing the height information with a preset height, if the height information is not less than the preset height, acquiring the image information of the first mark, and if the height information is less than the preset height, acquiring the image information of the second mark.
S2, when the height information is not less than the preset height, acquire the image information of the first mark of the unmanned aerial vehicle's landing field, in which a second mark is also arranged, as shown in FIG. 1. The set distance between the feature point of the first mark and the feature point of the second mark, and the position of the first mark's feature point relative to the second mark's feature point, are determined by the distance between the camera's installation position on the unmanned aerial vehicle and the vehicle's feature point, and by the installation position relative to that feature point. The feature points may both be center points, and the area of the first mark is larger than the area of the second mark. The camera may be a monocular camera mounted on an arm of the unmanned aerial vehicle.
S3, generate the first position information of the first mark's feature point from the image information of the first mark. Specifically, a ground-area image below the unmanned aerial vehicle is acquired by the camera; the acquired ground image is processed and the feature point of the fixed-point first mark, such as its center position, is detected. The first and second marks may be regular or irregular patterns; regular patterns make it easier to locate feature points such as the center position. The first mark may be a logo pattern, and the second mark may be a pattern, such as a logo, capable of indicating a direction. Feature-point positioning is similar for the first and second marks, so locating the center of the first mark is taken as the example. When the first mark is circular, the circle center can be detected directly with a Hough circle detection algorithm and used as the target point for landing. For other patterns, a target detection method — for example, a deep-learning detector such as SSD or YOLO — can detect a rectangular region surrounding the mark; the center point of that rectangular region is then used as the target landing point.
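As an illustration of the feature-point detection in step S3, the following sketch locates a marker's center as the centroid of thresholded pixels. This is a deliberately simplified stand-in for the Hough-circle or SSD/YOLO detectors named above, with all names and values hypothetical.

```python
import numpy as np


def marker_center(image: np.ndarray, thresh: float = 0.5) -> tuple:
    """Centroid of pixels above `thresh` — a simplified stand-in for
    Hough-circle or bounding-box detection of the landing mark."""
    ys, xs = np.nonzero(image > thresh)
    if xs.size == 0:
        raise ValueError("marker not found in image")
    # (x_cent, y_cent) in pixel coordinates, as used in the conversion below.
    return float(xs.mean()), float(ys.mean())


# Synthetic test image: a filled disc of radius 10 centred at (40, 30).
img = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img[(xx - 40) ** 2 + (yy - 30) ** 2 <= 100] = 1.0
print(marker_center(img))  # -> (40.0, 30.0)
```

In practice the centroid would be replaced by the circle center from Hough detection, or by the center of the detector's bounding box.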
Landing with the first position information of the first mark's feature point as the landing point means determining the center position of the first mark by the detection algorithm and taking the center (x_cent, y_cent) of the first mark's region as the landing point. The landing position is then converted: the point is first converted into the camera coordinate system, and then into the unmanned aerial vehicle body coordinate system, for the unmanned aerial vehicle to descend toward, until the image information of the first mark can no longer be acquired — at which point the height of the unmanned aerial vehicle above the landing field is less than the preset height.
Wherein S3 includes:
s31, generating first position information of the first mark feature point according to the image information of the first mark, wherein the first position information of the first mark feature point is first position information of a first coordinate system established on the basis of a landing place;
S32, the first preset conversion relation between the first coordinate system and position information of a second coordinate system established on the basis of the camera is generated from the camera's internal reference (intrinsic) matrix. Let K be the intrinsic matrix obtained by camera calibration — Zhang Zhengyou's calibration method may be used, though the calibration is not limited to it:

K = | f_x   0   c_x |
    |  0   f_y  c_y |
    |  0    0    1  |
the first preset conversion relation between the first coordinate system and the position information of the second coordinate system established on the basis of the camera is:

x_c = (x_cent − c_x) · h / f_x
y_c = (y_cent − c_y) · h / f_y
z_c = h

where (x_c, y_c, z_c) are the three-dimensional coordinates of the landing point in the camera frame, and h is the ground clearance of the unmanned aerial vehicle,
converting the first position information of the first coordinate system of the first mark feature point into first position information of the corresponding second coordinate system according to the first preset conversion relation;
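The first preset conversion relation above can be sketched in code as a back-projection of the pixel center at known ground clearance. The intrinsic values below are illustrative, not calibrated ones.

```python
import numpy as np


def pixel_to_camera(x_cent: float, y_cent: float, h: float, K: np.ndarray) -> np.ndarray:
    """Back-project the mark centre (x_cent, y_cent) at ground clearance h
    into the camera frame, per the relation:
        x_c = (x_cent - c_x) * h / f_x
        y_c = (y_cent - c_y) * h / f_y
        z_c = h
    K is the 3x3 intrinsic matrix [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]].
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([(x_cent - cx) * h / fx,
                     (y_cent - cy) * h / fy,
                     h])


# Hypothetical intrinsics for illustration (not from a real calibration).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
print(pixel_to_camera(400.0, 240.0, 2.0, K))  # -> [0.2, 0.0, 2.0]
```

An 80-pixel offset from the principal point at 2 m altitude maps to a 0.2 m lateral displacement in the camera frame under these assumed intrinsics.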
S33, the second preset conversion relation, between position information of the second coordinate system and of the third coordinate system, is:

[x_b, y_b, z_b]^T = R · [x_c, y_c, z_c]^T + T

where (x_b, y_b, z_b) is the final landing position of the unmanned aerial vehicle (i.e., the first position information in the third coordinate system), and R and T are respectively the rotation and translation matrices of the camera relative to the unmanned aerial vehicle body coordinate system.
And converting the first position information of the second coordinate system of the first marking feature point into corresponding first position information of a third coordinate system, so that the unmanned aerial vehicle can land by taking the first position information of the third coordinate system of the first marking feature point as a landing point until the image information of the first mark can not be acquired.
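The second preset conversion relation is a standard rigid-body transform. A minimal sketch follows, with an assumed camera mounting chosen purely for illustration.

```python
import numpy as np


def camera_to_body(p_c, R: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply [x_b, y_b, z_b]^T = R @ [x_c, y_c, z_c]^T + T, where R and T
    are the rotation and translation of the camera relative to the
    body (airframe) coordinate system."""
    return R @ np.asarray(p_c, dtype=float) + np.asarray(T, dtype=float)


# Illustrative mounting: camera axes aligned with the body axes (R = I),
# camera offset 0.1 m forward on the arm (values assumed, not from the patent).
R = np.eye(3)
T = np.array([0.1, 0.0, 0.0])
print(camera_to_body([0.2, 0.0, 2.0], R, T))  # -> [0.3, 0.0, 2.0]
```

When T is zero (camera at the body origin, as noted in step S4 below for T_cb = 0), the transform reduces to a pure rotation and no lateral adjustment of the vehicle center is required.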
When the unmanned aerial vehicle has descended toward (x_b, y_b, z_b) to a certain height h (for example, a height threshold of 2 m), the fixed-point first mark is no longer within the camera's field of view: the camera cannot acquire the image of the fixed-point first mark, and the detection algorithm can no longer provide a landing position for the unmanned aerial vehicle. For this reason a fixed-point directional second mark is added within a preset distance of the first mark, and the process proceeds to step S4.
S4, acquire the image information of the second mark of the unmanned aerial vehicle's landing field. When h < threshold, the camera acquires the image information of the fixed-point directional second mark, detects its center coordinate, and converts that center coordinate into a coordinate position in the body coordinate system (the third coordinate system); for the specific steps, refer to the conversion process in step S3. When the translation matrix T_cb of the camera relative to the body coordinates is 0, there is no translation vector in the conversion to body coordinates, i.e., the unmanned aerial vehicle's center point does not need to be adjusted to the center point of the fixed-point directional second mark.
S5, generate the second position information of the second mark's feature point from the image information of the second mark; locate the first position information of the first mark's feature point from that second position information and the set distance between the feature points of the first and second marks; and correct the landing position of the unmanned aerial vehicle so that it lands with the first position information of the first mark's feature point as the landing point. During the corrected descent, the center of the fixed-point directional second mark stays below the camera while the center of the fixed-point first mark remains below the airframe, until landing is complete.
S5 includes:
s51, generating second position information of the second mark feature point according to the image information of the second mark, wherein the second position information of the second mark feature point is the second position information of a first coordinate system established on the basis of a landing place;
s52, converting the second position information of the first coordinate system of the second mark feature point into second position information of a second coordinate system established based on the camera based on the first preset conversion relation;
and S53, converting the second position information of the second coordinate system of the second mark feature point into second position information of a third coordinate system established based on the unmanned aerial vehicle based on a second preset conversion relation.
S54, positioning first position information of the third coordinate system of the first mark feature point according to second position information of the third coordinate system of the second mark feature point and a set distance between the feature point of the first mark and the feature point of the second mark, so that the unmanned aerial vehicle can land by taking the first position information of the third coordinate system of the first mark feature point as a landing point.
The method can thus be divided into two stages: the invention provides a two-stage fixed-point directional unmanned aerial vehicle landing method in which the fixed-point first mark achieves fixed-point landing and the fixed-point directional second mark adjusts the position. This vision-based fixed-point directional landing method accurately determines the center positions of the first and second marks in the area to be landed and guides the unmanned aerial vehicle to a fixed-point landing. On the basis of the first mark, a fixed-point directional second mark is added; when the unmanned aerial vehicle can no longer acquire the image of the fixed-point first mark, the fixed-point directional second mark provides the landing position and corrects the vehicle's position, enabling a more accurate landing:
(1) When preparing to land, the unmanned aerial vehicle hovers above the area to be landed. The ground image acquired by a downward-looking camera mounted on an arm is processed, the center of the pre-placed fixed-point first mark is detected and marked, the pixel coordinates of the center point are converted into body coordinates, and the unmanned aerial vehicle adjusts its position and begins descending, with the fixed-point first mark below it.
(2) After descending to a certain height, the camera can no longer acquire the image information of the fixed-point first mark. At this point the fixed-point directional second mark provides the landing position and corrects the vehicle's position, so that the unmanned aerial vehicle can land accurately on the center of the fixed-point first mark. With this landing method, the unmanned aerial vehicle achieves accurate landing without depending on GPS.
The visual system that unmanned aerial vehicle multistage fixed point orientation of this embodiment lands includes:
the judging unit is configured to: collecting height information of the unmanned aerial vehicle from a landing field; and comparing the height information with a preset height, if the height information is not less than the preset height, acquiring the image information of the first mark, and if the height information is less than the preset height, acquiring the image information of the second mark.
The image information acquisition unit for the first mark is configured to acquire the image information of the first mark of the unmanned aerial vehicle's landing field, in which a second mark is also arranged; the area of the first mark is larger than that of the second mark.
The first position information generating unit is configured to generate first position information of the first mark feature point according to the image information of the first mark, so that the unmanned aerial vehicle can land by taking the first position information of the first mark feature point as a landing point until the image information of the first mark cannot be acquired. The first position information generating unit is further configured to:
generating first position information of the first mark feature point according to the image information of the first mark, wherein the first position information of the first mark feature point is based on first position information of a first coordinate system established on the basis of a landing field;
converting first position information of a first coordinate system of the first marked feature point into first position information corresponding to a second coordinate system according to a first preset conversion relation between a first coordinate system generated based on an internal reference matrix of a camera and the position information of the second coordinate system established based on the camera;
and converting the first position information of the second coordinate system of the first marking characteristic point into corresponding first position information of a third coordinate system based on a second preset conversion relation of converting the position information of the second coordinate system and the position information of the third coordinate system, so that the unmanned aerial vehicle can land by taking the first position information of the third coordinate system of the first marking characteristic point as a landing point.
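The chain of conversions this unit describes, pixel coordinates back-projected through the camera's internal reference (intrinsic) matrix into the camera coordinate system and then rigidly transformed into the vehicle body coordinate system, can be sketched as below. The intrinsic matrix K, the rotation R_cb, the translation t_cb, and the known depth are assumed example values, not parameters from the patent.

```python
import numpy as np

# Hedged sketch of the two preset conversion relations: pixel -> camera
# frame via the intrinsic matrix K, then camera frame -> body frame via a
# rigid transform. All numeric values are assumed for illustration.

K = np.array([[800.0,   0.0, 320.0],   # fx,  0, cx
              [  0.0, 800.0, 240.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])

def pixel_to_camera(u, v, depth_z):
    """Back-project pixel (u, v) at a known depth into the camera frame."""
    uv1 = np.array([u, v, 1.0])
    return depth_z * (np.linalg.inv(K) @ uv1)

def camera_to_body(p_cam, R_cb, t_cb):
    """Rigid transform from the camera frame to the drone body frame."""
    return R_cb @ p_cam + t_cb

# Example: downward-looking camera whose axes are assumed aligned with the
# body axes, mounted 0.1 m below the body origin; mark centre seen at 5 m.
R_cb = np.eye(3)
t_cb = np.array([0.0, 0.0, -0.1])
p_cam = pixel_to_camera(400, 300, depth_z=5.0)
p_body = camera_to_body(p_cam, R_cb, t_cb)
print(p_body)
```

Note that back-projection needs a depth estimate (here the hover height), since the intrinsic matrix alone only defines a ray per pixel.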
The image information acquisition unit of the second mark is configured to acquire image information of the second mark in the field where the unmanned aerial vehicle is to land.
And the second position information generating unit is configured to generate second position information of the second mark feature point according to the image information of the second mark, and position first position information of the first mark feature point according to the second position information of the second mark feature point and a set distance between the feature point of the first mark and the feature point of the second mark, so that the unmanned aerial vehicle can land by taking the first position information of the first mark feature point as a landing point. The second position information generating unit is further configured to:
generating second position information of the second mark feature point according to the image information of the second mark, wherein the second position information of the second mark feature point is based on second position information of a first coordinate system established on the basis of a landing site;
based on the first preset conversion relation, converting the second position information of the first coordinate system of the second marking feature point into second position information of a second coordinate system established based on a camera;
and converting the second position information of the second coordinate system of the second mark feature point into second position information of a third coordinate system established based on the unmanned aerial vehicle based on a second preset conversion relation.
The second position information generating unit is further configured to:
and positioning first position information of the third coordinate system of the first mark characteristic point according to second position information of the third coordinate system of the second mark characteristic point and a set distance between the characteristic point of the first mark and the characteristic point of the second mark, so that the unmanned aerial vehicle can land by taking the first position information of the third coordinate system of the first mark characteristic point as a landing point.
It should be understood that the steps in the vision method for multi-stage fixed point directional landing of the drone correspond to the sub-units described in the vision system for multi-stage fixed point directional landing of the drone. Thus, the operations and features described above for the system and the units included therein are equally applicable to the above method and will not be described again here.
The present embodiment also provides an apparatus, which is suitable for implementing the embodiments of the present application.
The apparatus includes a computer system 100. As shown in fig. 2, the computer system 100 includes a Central Processing Unit (CPU) 101 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 102 or a program loaded from a storage section 108 into a Random Access Memory (RAM) 103. The RAM 103 also stores various programs and data necessary for system operation. The CPU 101, ROM 102, and RAM 103 are connected to one another via a bus 104. An input/output (I/O) interface 105 is also connected to the bus 104.
The following components are connected to the I/O interface 105: an input section 106 including a keyboard, a mouse, and the like; an output section 107 including a display such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), and a speaker; a storage section 108 including a hard disk and the like; and a communication section 109 including a network interface card such as a LAN card or a modem. The communication section 109 performs communication processing via a network such as the Internet. A drive 110 is also connected to the I/O interface 105 as needed. A removable medium 111, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 110 as necessary, so that a computer program read out from it can be installed into the storage section 108 as needed.
In particular, the process described above with reference to the flowchart of fig. 3 may be implemented as a computer software program according to an embodiment of the present invention. For example, an embodiment of the invention includes a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium. The above-described functions defined in the system of the present application are executed when the computer program is executed by the Central Processing Unit (CPU) 101.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to one embodiment of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software or by hardware, and the described units may also be disposed in a processor, which may be described as: a processor including an image information acquisition unit of a first mark, a first position information generation unit, an image information acquisition unit of a second mark, and a second position information generation unit. The names of these units do not limit the units themselves; for example, the image information acquisition unit of the first mark may also be described as a unit configured to acquire image information of a first mark of the field where the unmanned aerial vehicle is to land, wherein a second mark is also disposed in the field and the area of the first mark is larger than that of the second mark.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the visual method of multi-stage fixed point directional landing of a drone as described in the embodiments above.
For example, the electronic device may implement the following as shown in fig. 3: acquiring image information of a first mark of a to-be-landed field of the unmanned aerial vehicle, wherein a second mark is also arranged in the to-be-landed field, and the area of the first mark is larger than that of the second mark; generating first position information of the first mark feature point according to the image information of the first mark, so that the unmanned aerial vehicle can land by taking the first position information of the first mark feature point as a landing point until the image information of the first mark cannot be acquired; acquiring image information of the second mark of the unmanned aerial vehicle to-be-landed field; and generating second position information of the second mark feature point according to the image information of the second mark, and positioning first position information of the first mark feature point according to the second position information of the second mark feature point and a set distance between the feature point of the first mark and the feature point of the second mark, so that the unmanned aerial vehicle can land by taking the first position information of the first mark feature point as a landing point.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware.
Example two
The same features of this embodiment and the first embodiment are not described again, and the different features of this embodiment and the first embodiment are:
the first mark and the second mark are circular, and the feature points of the first mark and the second mark are their rightmost edge points.
EXAMPLE III
The same features of this embodiment and the first embodiment are not described again, and the different features of this embodiment and the first embodiment are:
the first mark and the second mark are triangles, and the feature points of the first mark and the second mark are their leftmost edge points.
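As a hedged illustration of how such an edge feature point might be extracted from a detected mark contour (the function name and the synthetic triangle coordinates are assumptions, not details from the patent):

```python
import numpy as np

# Illustrative sketch: given the pixel coordinates of a mark's contour,
# pick the leftmost edge point as the feature point (embodiment three),
# or the rightmost (embodiment two). The contour is a synthetic triangle.

def feature_point(contour_xy, side="left"):
    """Return the contour point with minimum (left) or maximum (right) x."""
    contour_xy = np.asarray(contour_xy)
    idx = contour_xy[:, 0].argmin() if side == "left" else contour_xy[:, 0].argmax()
    return tuple(contour_xy[idx])

triangle = [(120, 80), (200, 80), (160, 20)]   # assumed pixel coordinates
print(feature_point(triangle, side="left"))
```

The choice of feature point only shifts the set distance measured between the two marks; the positioning steps of the embodiments are otherwise unchanged.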
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention referred to in the present application is not limited to embodiments with the specific combination of features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the inventive concept, for example, embodiments in which the above features are interchanged with (but not limited to) features having similar functions disclosed in this application.

Claims (12)

1. A visual method for multi-stage fixed-point directional landing of an unmanned aerial vehicle is characterized by comprising the following steps:
acquiring image information of a first mark of a to-be-landed field of the unmanned aerial vehicle, wherein a second mark is also arranged in the to-be-landed field;
generating first position information of the first mark feature point according to the image information of the first mark, so that the unmanned aerial vehicle can land by taking the first position information of the first mark feature point as a landing point until the image information of the first mark cannot be acquired;
acquiring image information of the second mark of the unmanned aerial vehicle to-be-landed field;
and generating second position information of the second mark feature point according to the image information of the second mark, and positioning first position information of the first mark feature point according to the second position information of the second mark feature point and a set distance between the feature point of the first mark and the feature point of the second mark, so that the unmanned aerial vehicle can land by taking the first position information of the first mark feature point as a landing point.
2. The visual method of multi-stage fixed point directional landing of an unmanned aerial vehicle of claim 1, further comprising:
collecting height information of the unmanned aerial vehicle from a landing field;
and comparing the height information with a preset height, if the height information is not less than the preset height, acquiring the image information of the first mark, and if the height information is less than the preset height, acquiring the image information of the second mark.
3. A visual method of multi-stage pointing-directional landing of an unmanned aerial vehicle according to claim 1, wherein generating first position information of the first marked feature point according to the image information of the first mark for the unmanned aerial vehicle to land on the landing point using the first position information of the first marked feature point comprises:
generating first position information of the first mark feature point according to the image information of the first mark, wherein the first position information of the first mark feature point is based on first position information of a first coordinate system established on the basis of a landing field;
converting first position information of a first coordinate system of the first marked feature point into first position information corresponding to a second coordinate system according to a first preset conversion relation between a first coordinate system generated based on an internal reference matrix of a camera and the position information of the second coordinate system established based on the camera;
and converting the first position information of the second coordinate system of the first marking characteristic point into corresponding first position information of a third coordinate system based on a second preset conversion relation of converting the position information of the second coordinate system and the position information of the third coordinate system, so that the unmanned aerial vehicle can land by taking the first position information of the third coordinate system of the first marking characteristic point as a landing point.
4. The visual method of multi-stage pointing-directional landing of an unmanned aerial vehicle according to claim 3, wherein generating second position information of the second marker feature point from the image information of the second marker comprises:
generating second position information of the second mark feature point according to the image information of the second mark, wherein the second position information of the second mark feature point is based on second position information of a first coordinate system established on the basis of a landing site;
based on the first preset conversion relation, converting the second position information of the first coordinate system of the second marking feature point into second position information of a second coordinate system established based on a camera;
and converting the second position information of the second coordinate system of the second mark feature point into second position information of a third coordinate system established based on the unmanned aerial vehicle based on a second preset conversion relation.
5. The visual method of multi-stage fixed-point directional landing of an unmanned aerial vehicle according to claim 4, wherein the step of positioning the first position information of the first marked feature point according to the second position information of the second marked feature point and the set distance between the feature point of the first mark and the feature point of the second mark so that the unmanned aerial vehicle can land on the landing point by using the first position information of the first marked feature point comprises the steps of:
and positioning first position information of the third coordinate system of the first mark characteristic point according to second position information of the third coordinate system of the second mark characteristic point and a set distance between the characteristic point of the first mark and the characteristic point of the second mark, so that the unmanned aerial vehicle can land by taking the first position information of the third coordinate system of the first mark characteristic point as a landing point.
6. The visual method of multi-stage fixed point directional landing of an unmanned aerial vehicle of claim 1, further comprising:
determining the installation position of a camera on the unmanned aerial vehicle, and determining the set distance between the feature point of the first mark and the feature point of the second mark and the setting position of the feature point of the first mark relative to the feature point of the second mark according to the distance between the installation position and the feature point of the unmanned aerial vehicle and the position of the installation position relative to the feature point of the unmanned aerial vehicle.
7. A visual method of multi-stage fixed-point directional landing of a UAV according to any of claims 1 to 6, wherein said feature point is a center point.
8. A visual system for multi-stage fixed-point directional landing of an unmanned aerial vehicle, characterized by comprising:
the unmanned aerial vehicle landing system comprises an image information acquisition unit of a first mark, a second mark and a third mark, wherein the image information acquisition unit of the first mark is configured to acquire image information of the first mark of a to-be-landed field of the unmanned aerial vehicle, the to-be-landed field is also provided with the second mark, and the area of the first mark is larger than that of the second mark;
the first position information generating unit is configured to generate first position information of the first mark feature point according to the image information of the first mark, so that the unmanned aerial vehicle can land by taking the first position information of the first mark feature point as a landing point until the image information of the first mark cannot be acquired;
the image information acquisition unit of the second mark is configured to acquire the image information of the second mark in the to-be-landed field of the unmanned aerial vehicle;
and the second position information generating unit is configured to generate second position information of the second mark feature point according to the image information of the second mark, and position first position information of the first mark feature point according to the second position information of the second mark feature point and a set distance between the feature point of the first mark and the feature point of the second mark, so that the unmanned aerial vehicle can land by taking the first position information of the first mark feature point as a landing point.
9. The multi-stage fixed-point directional landing vision system for unmanned aerial vehicles according to claim 8, further comprising a determination unit configured to:
collecting height information of the unmanned aerial vehicle from a landing field;
and comparing the height information with a preset height, if the height information is not less than the preset height, acquiring the image information of the first mark, and if the height information is less than the preset height, acquiring the image information of the second mark.
10. The multi-stage pointing-directed unmanned aerial vehicle landing vision system of claim 8, wherein the first location information generating unit is further configured to:
generating first position information of the first mark feature point according to the image information of the first mark, wherein the first position information of the first mark feature point is based on first position information of a first coordinate system established on the basis of a landing field;
converting first position information of a first coordinate system of the first marked feature point into first position information corresponding to a second coordinate system according to a first preset conversion relation between a first coordinate system generated based on an internal reference matrix of a camera and the position information of the second coordinate system established based on the camera;
and converting the first position information of the second coordinate system of the first marking characteristic point into corresponding first position information of a third coordinate system based on a second preset conversion relation of converting the position information of the second coordinate system and the position information of the third coordinate system, so that the unmanned aerial vehicle can land by taking the first position information of the third coordinate system of the first marking characteristic point as a landing point.
11. The multi-stage pointing-directed unmanned aerial vehicle landing vision system of claim 10, wherein the second location information generating unit is further configured to:
generating second position information of the second mark feature point according to the image information of the second mark, wherein the second position information of the second mark feature point is based on second position information of a first coordinate system established on the basis of a landing site;
based on the first preset conversion relation, converting the second position information of the first coordinate system of the second marking feature point into second position information of a second coordinate system established based on a camera;
and converting the second position information of the second coordinate system of the second mark feature point into second position information of a third coordinate system established based on the unmanned aerial vehicle based on a second preset conversion relation.
12. The multi-stage pointing-directed unmanned aerial vehicle landing vision system of claim 11, wherein the second location information generating unit is further configured to:
and positioning first position information of the third coordinate system of the first mark characteristic point according to second position information of the third coordinate system of the second mark characteristic point and a set distance between the characteristic point of the first mark and the characteristic point of the second mark, so that the unmanned aerial vehicle can land by taking the first position information of the third coordinate system of the first mark characteristic point as a landing point.
CN201910184193.3A 2019-03-12 2019-03-12 Visual method and system for multi-stage fixed-point directional landing of unmanned aerial vehicle Pending CN111694370A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910184193.3A CN111694370A (en) 2019-03-12 2019-03-12 Visual method and system for multi-stage fixed-point directional landing of unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910184193.3A CN111694370A (en) 2019-03-12 2019-03-12 Visual method and system for multi-stage fixed-point directional landing of unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
CN111694370A true CN111694370A (en) 2020-09-22

Family

ID=72474615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910184193.3A Pending CN111694370A (en) 2019-03-12 2019-03-12 Visual method and system for multi-stage fixed-point directional landing of unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN111694370A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102417037A (en) * 2010-09-28 2012-04-18 株式会社拓普康 Automatic taking-off and landing system
CN103347785A (en) * 2010-12-14 2013-10-09 株式会社大韩航空 Automatic recovery method for an unmanned aerial vehicle
US20160122038A1 (en) * 2014-02-25 2016-05-05 Singularity University Optically assisted landing of autonomous unmanned aircraft
CN105857630A (en) * 2016-03-30 2016-08-17 乐视控股(北京)有限公司 Parking apron device, aircraft and aircraft parking system
CN106384382A (en) * 2016-09-05 2017-02-08 山东省科学院海洋仪器仪表研究所 Three-dimensional reconstruction system and method based on binocular stereoscopic vision
CN106444797A (en) * 2016-12-01 2017-02-22 腾讯科技(深圳)有限公司 Method for controlling aircraft to descend and related device
US20170225800A1 (en) * 2016-02-05 2017-08-10 Jordan Holt Visual landing aids for unmanned aerial systems
CN107240063A (en) * 2017-07-04 2017-10-10 武汉大学 A kind of autonomous landing method of rotor wing unmanned aerial vehicle towards mobile platform


Similar Documents

Publication Publication Date Title
CN109270534B (en) Intelligent vehicle laser sensor and camera online calibration method
EP3407294B1 (en) Information processing method, device, and terminal
CN111612841B (en) Target positioning method and device, mobile robot and readable storage medium
CN104197899A (en) Mobile robot location method and system
CN113657224A (en) Method, device and equipment for determining object state in vehicle-road cooperation
CN113570631B (en) Image-based pointer instrument intelligent identification method and device
CN111598952A (en) Multi-scale cooperative target design and online detection and identification method and system
CN111273701B (en) Cloud deck vision control system and control method
WO2021103558A1 (en) Rgb-d data fusion-based robot vision guiding method and apparatus
EP4047556A2 (en) Registration method and registration apparatus for autonomous vehicle, electronic device
CN112465908B (en) Object positioning method, device, terminal equipment and storage medium
CN107990825B (en) High-precision position measuring device and method based on priori data correction
CN116486290B (en) Unmanned aerial vehicle monitoring and tracking method and device, electronic equipment and storage medium
CN111339953B (en) Clustering analysis-based mikania micrantha monitoring method
CN116736259A (en) Laser point cloud coordinate calibration method and device for tower crane automatic driving
CN111694370A (en) Visual method and system for multi-stage fixed-point directional landing of unmanned aerial vehicle
CN110853098A (en) Robot positioning method, device, equipment and storage medium
CN113781524B (en) Target tracking system and method based on two-dimensional label
CN110569810B (en) Method and device for acquiring coordinate information, storage medium and electronic device
CN113487676A (en) Method and apparatus for determining relative pose angle between cameras mounted to an acquisition entity
CN112991463A (en) Camera calibration method, device, equipment, storage medium and program product
CN111746810B (en) All-weather unmanned aerial vehicle landing method, all-weather unmanned aerial vehicle landing system, all-weather unmanned aerial vehicle landing equipment and storage medium
CN113561181A (en) Target detection model updating method, device and system
CN113191279A (en) Data annotation method, device, equipment, storage medium and computer program product
CN112966059B (en) Data processing method and device for positioning data, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210716

Address after: 518063 5th floor, block B, building 1, software industry base, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: Fengyi Technology (Shenzhen) Co.,Ltd.

Address before: 518061 Intersection of Xuefu Road (south) and Baishi Road (east) in Nanshan District, Shenzhen City, Guangdong Province, 6-13 floors, Block B, Shenzhen Software Industry Base

Applicant before: SF TECHNOLOGY Co.,Ltd.
