CN112102412B - Method and system for detecting visual anchor point in unmanned aerial vehicle landing process


Info

Publication number
CN112102412B
Authority
CN
China
Prior art keywords
anchor point, unmanned aerial vehicle, anchor, visual
Prior art date
2020-11-09
Legal status
Active
Application number
CN202011235408.9A
Other languages
Chinese (zh)
Other versions
CN112102412A (en)
Inventor
唐邓清
相晓嘉
周晗
常远
闫超
黄依新
陈紫叶
李贞屹
谭沁
孙懿豪
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date
2020-11-09
Filing date
2020-11-09
Publication date
2021-01-26
Application filed by National University of Defense Technology
Priority to CN202011235408.9A
Publication of CN112102412A
Application granted
Publication of CN112102412B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: using feature-based methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a system for detecting visual anchor points during the landing of an unmanned aerial vehicle. First, a group of anchor points that well characterize the spatial position and attitude of the unmanned aerial vehicle is selected according to anchor point attributes. Then, the unmanned aerial vehicle region is extracted from the visual image by a target detection neural sub-network within the anchor point detection model, which effectively avoids interference of the background region with anchor point detection. Finally, a multi-resolution anchor point detection architecture based on region blocking is designed: the unmanned aerial vehicle region is first divided into several sub-regions of different resolutions, anchor point detection is then performed on the complete region and on each sub-region by an anchor point detection sub-network within the model, which strengthens the robustness of anchor detection for targets of different scales, and the visual anchor points are finally obtained by averaging the positions of corresponding original anchor points across all groups, which effectively improves detection accuracy.

Description

Method and system for detecting visual anchor point in unmanned aerial vehicle landing process
Technical Field
The invention relates to the technical field of unmanned aerial vehicle autonomous landing, in particular to a method and a system for detecting a visual anchor point in the landing process of an unmanned aerial vehicle.
Background
During autonomous take-off and landing, an unmanned aerial vehicle usually relies on an onboard inertial navigation system and a global positioning system for real-time perception of its own pose. However, these systems are disturbed by environmental factors such as magnetic fields and temperature, which severely degrades pose perception accuracy and compromises autonomous landing. A ground-based vision monitoring system estimates and monitors the position and attitude of the unmanned aerial vehicle in real time during landing, ensuring accurate pose perception and assisting the vehicle in completing an autonomous landing. Real-time detection of the key points (anchor points) of the unmanned aerial vehicle in the images generated by the ground-based vision system is one of the foundations of real-time position and attitude estimation during landing. At present, anchor point image coordinates are extracted with traditional methods such as corner detection, optical flow, and color probability density, which fall short in accuracy and robustness. A high-precision method for detecting unmanned aerial vehicle image anchor points is therefore urgently needed.
Disclosure of Invention
The invention provides a method and a system for detecting a visual anchor point in the landing process of an unmanned aerial vehicle, to overcome defects of the prior art such as low accuracy and insufficient robustness.
In order to achieve the purpose, the invention provides a method for detecting a visual anchor point in the landing process of an unmanned aerial vehicle, which comprises the following steps:
selecting an anchor point on the unmanned aerial vehicle three-dimensional model according to the attribute of the anchor point;
acquiring a visual image of the unmanned aerial vehicle in the landing process, and manually marking the unmanned aerial vehicle target and the anchor point in the visual image to form a training set;
training a pre-constructed anchor point detection model by using the training set, wherein the anchor point detection model comprises a target detection neural sub-network and an anchor point detection sub-network;
inputting a visual image to be detected generated in real time into a trained anchor point detection model, and acquiring an unmanned aerial vehicle region in the visual image by using a target detection neural subnetwork;
dividing the unmanned aerial vehicle region into a plurality of sub-regions of different resolutions according to the distribution of anchor points in historical frame images; detecting the unmanned aerial vehicle region and the sub-regions separately with the anchor point detection sub-network to obtain a plurality of groups of original anchor points; and averaging the positions of corresponding original anchor points across the groups to obtain the positions of the visual anchor points of the unmanned aerial vehicle in the visual image to be detected.
In order to achieve the above object, the present invention further provides a system for detecting a visual anchor point during landing of an unmanned aerial vehicle, comprising:
the anchor point selecting module is used for selecting an anchor point on the unmanned aerial vehicle three-dimensional model according to the attribute of the anchor point;
the training set forming module is used for acquiring a visual image in the landing process of the unmanned aerial vehicle, and manually marking the unmanned aerial vehicle target and the anchor point in the visual image to form a training set;
the model training module is used for training a pre-constructed anchor point detection model by utilizing the training set, and the anchor point detection model comprises a target detection neural sub-network and an anchor point detection sub-network;
the anchor point detection module is used for inputting the visual image to be detected, generated in real time, into the trained anchor point detection model; acquiring the unmanned aerial vehicle region in the visual image with the target detection neural sub-network; dividing the unmanned aerial vehicle region into a plurality of sub-regions of different resolutions according to the distribution of anchor points in historical frame images; detecting the unmanned aerial vehicle region and the sub-regions separately with the anchor point detection sub-network to obtain a plurality of groups of original anchor points; and averaging the positions of corresponding original anchor points across the groups to obtain the positions of the visual anchor points of the unmanned aerial vehicle in the visual image to be detected.
To achieve the above object, the present invention further provides a computer device, which includes a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the method when executing the computer program.
To achieve the above object, the present invention further proposes a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method described above.
Compared with the prior art, the invention has the beneficial effects that:
According to the method for detecting visual anchor points during the landing of an unmanned aerial vehicle, first, a group of anchor points that well characterize the spatial position and attitude of the unmanned aerial vehicle is selected according to anchor point attributes. Then, the unmanned aerial vehicle region is extracted from the visual image by the target detection neural sub-network within the anchor point detection model, which effectively avoids interference of the background region with anchor point detection. Finally, a multi-resolution anchor point detection architecture based on region blocking is designed: the unmanned aerial vehicle region is first divided into several sub-regions of different resolutions, anchor point detection is then performed on the complete region and on each sub-region by the anchor point detection sub-network, which strengthens the robustness of anchor detection for targets of different scales, and the visual anchor points are finally obtained by averaging the positions of corresponding original anchor points across all groups, which effectively improves detection accuracy. The method has important significance and practical value for accurately estimating the landing position and attitude of an unmanned aerial vehicle with a ground-based vision system.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a method for detecting a visual anchor point during the landing of an unmanned aerial vehicle according to the present invention;
FIG. 2 is a block diagram of a target detection neural subnetwork;
FIG. 3 is a block diagram of an anchor point detection subnetwork;
FIG. 4 is a schematic diagram of a multi-resolution anchor point detection architecture based on region blocking according to the present invention;
FIG. 5 is a diagram of imaging examples of the unmanned aerial vehicle and its anchor points at various landing stages in an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In addition, the technical solutions in the embodiments of the present invention may be combined with each other, provided that the combination can be realized by those skilled in the art; where technical solutions are contradictory or cannot be realized, the combination should be deemed not to exist and falls outside the protection scope of the present invention.
The invention provides a method for detecting a visual anchor point in the landing process of an unmanned aerial vehicle, which comprises the following steps:
101: selecting an anchor point on the unmanned aerial vehicle three-dimensional model according to the attribute of the anchor point;
and selecting an anchor point on the unmanned aerial vehicle three-dimensional model by taking the attribute of the anchor point as a starting point, and recording the position of the anchor point.
102: acquiring a visual image of the unmanned aerial vehicle in the landing process, and manually marking the unmanned aerial vehicle target and the anchor point in the visual image to form a training set;
and the visual image is an RGB three-channel image which is shot by the foundation camera and contains the unmanned aerial vehicle. RGB, R (red ), G (green ), B (blue, blue).
Labeling means marking the unmanned aerial vehicle target and the selected anchor points on the visual image.
103: training a pre-constructed anchor point detection model by using a training set; the anchor point detection model comprises a target detection neural sub-network and an anchor point detection sub-network;
104: inputting a visual image to be detected generated in real time into a trained anchor point detection model, and acquiring an unmanned aerial vehicle region in the visual image by using a target detection neural subnetwork;
105: dividing the unmanned aerial vehicle region into a plurality of sub-regions of different resolutions according to the distribution of anchor points in historical frame images; detecting the unmanned aerial vehicle region and the sub-regions separately with the anchor point detection sub-network to obtain a plurality of groups of original anchor points; and averaging the positions of corresponding original anchor points across the groups to obtain the positions of the visual anchor points of the unmanned aerial vehicle in the visual image to be detected.
According to the method for detecting visual anchor points during the landing of an unmanned aerial vehicle, first, a group of anchor points that well characterize the spatial position and attitude of the unmanned aerial vehicle is selected according to anchor point attributes. Then, the unmanned aerial vehicle region is extracted from the visual image by the target detection neural sub-network within the anchor point detection model, which effectively avoids interference of the background region with anchor point detection. Finally, a multi-resolution anchor point detection architecture based on region blocking is designed: the unmanned aerial vehicle region is first divided into several sub-regions of different resolutions, anchor point detection is then performed on the complete region and on each sub-region by the anchor point detection sub-network, which strengthens the robustness of anchor detection for targets of different scales, and the visual anchor points are finally obtained by averaging the positions of corresponding original anchor points across all groups, which effectively improves detection accuracy. The method has important significance and practical value for accurately estimating the landing position and attitude of an unmanned aerial vehicle with a ground-based vision system.
In one embodiment, for step 101, selecting an anchor point on the three-dimensional model of the drone according to the attributes of the anchor point, includes:
and selecting the anchor points on the unmanned aerial vehicle three-dimensional model according to the characteristics, visibility and envelopment of the anchor points.
The selection of the anchor points not only directly influences the accuracy of anchor point detection, but also influences the subsequent pose estimation precision. Because the appearance characteristics of different unmanned aerial vehicle targets are different, an anchor point selection scheme suitable for all targets cannot be designed. When selecting anchors for a specific target, the following attributes of each anchor should be considered emphatically:
a. the characteristics are as follows: the detection precision of the anchor points is directly influenced by the characteristic strength of the anchor points, and the anchor points are selected mainly aiming at the inflection points, the angular points or the spots in the target contour.
b. Visibility: anchor points are always visible in the visual image, which is one of the necessary conditions to achieve accurate detection thereof. In different application scenes, points which are always in the visual field are selected as anchor points according to the motion characteristics of the target and the distribution characteristics of the visual angles of the cameras.
c. Enveloping property: the anchor points should be as dispersed as possible over the target surface. An anchor configuration that is too concentrated can result in redundancy of the anchor's ability to characterize pose.
In addition to the above-mentioned properties of the anchor points, the number of anchor points also affects the performance of the detection method. The number of anchor points needs to be determined according to the application scenario characteristics. Too few anchor points can enhance the sensitivity of the pose estimation algorithm to anchor point detection errors, while too many anchor points can improve the precision of the pose algorithm, but provide higher real-time and accuracy requirements for the anchor point detection algorithm. Typically, the number of anchor points is suitably between 4 and 8.
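To make the envelopment rule concrete, one simple way to compare candidate anchor sets is their mean pairwise distance on the 3D model: a more dispersed set scores higher. The following sketch is purely illustrative; the helper name and the 3D coordinates are hypothetical and not taken from the patent.

```python
import itertools
import numpy as np

def envelopment_score(points):
    """Mean pairwise Euclidean distance of a set of 3D model points;
    higher means the set is more dispersed (better envelopment)."""
    pairs = itertools.combinations(points, 2)
    return float(np.mean([np.linalg.norm(a - b) for a, b in pairs]))

# Hypothetical airframe coordinates (metres) for two candidate anchor sets.
clustered = np.array([[-1.00, 2.10, 0.00], [-1.05, 2.00, 0.00],
                      [-0.95, 2.05, 0.05], [-1.10, 1.95, 0.10]])  # all near one wing tip
dispersed = np.array([[-1.0, 2.1, 0.0], [-1.0, -2.1, 0.0],        # wing tips
                      [1.2, 0.0, -0.3],                           # nose wheel
                      [-2.0, 0.8, 0.4], [-2.0, -0.8, 0.4]])       # tail tips
print(envelopment_score(clustered), envelopment_score(dispersed))
# The dispersed 5-point set scores far higher, i.e. it envelops the airframe better.
```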
In the next embodiment, for step 102, the visual image obtained during the landing process of the unmanned aerial vehicle is specifically:
An RGB image of the landing unmanned aerial vehicle is captured in real time by a ground-based camera and transmitted in real time to a ground computer.
Using the landing image data captured by the ground-based vision system, an unmanned aerial vehicle landing target detection data set is constructed by manual labeling, for training and testing the anchor point detection model.
In another embodiment, to obtain the unmanned aerial vehicle region accurately and in real time, the target detection neural network YOLO is chosen as the target detection neural sub-network of the anchor point detection model. As shown in FIG. 2, this sub-network comprises, in order, 6 convolutional layers and 2 fully connected layers, with each of the first 5 convolutional layers followed by 1 max pooling layer.
In the convolutional layers, channel reduction is mainly performed with a 1 × 1 convolution followed by a 3 × 3 convolution. The convolutional and fully connected layers use the Leaky ReLU activation function.
Before training the anchor point detection model, it is pre-trained on ImageNet: the pre-trained classification model adopts the first 20 convolutional layers of FIG. 2, followed by an average pooling layer and a fully connected layer. After pre-training, 4 convolutional layers and 2 fully connected layers are added on top of the 20 pre-trained convolutional layers. Because the unmanned aerial vehicle landing detection task requires higher-definition images, the input of the target detection neural sub-network is enlarged from 224 × 224 to 448 × 448. A PyTorch sketch of this structure is given after the steps below.
In this embodiment, obtaining the unmanned aerial vehicle region in the visual image with the target detection neural sub-network comprises:
401: extracting features from the visual image with the convolutional layers of the target detection neural sub-network, and filtering the extracted features with the max pooling layers;
402: obtaining the unmanned aerial vehicle region in the visual image with the fully connected layers, according to the filtered features.
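The structure described above can be sketched in PyTorch roughly as follows. The patent fixes the layer counts, the pooling placement, the 1 × 1 / 3 × 3 pattern, the Leaky ReLU activation, and the 448 × 448 input; the channel widths and the output encoding (here a single normalized bounding box) are assumptions of this sketch.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out, k, pool=True):
    """One convolutional layer with Leaky ReLU, optionally followed by max pooling."""
    layers = [nn.Conv2d(c_in, c_out, k, padding=k // 2), nn.LeakyReLU(0.1)]
    if pool:
        layers.append(nn.MaxPool2d(2))
    return layers

class TargetDetectionNet(nn.Module):
    """6 conv layers + 2 fully connected layers; pooling after the first 5 convs."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            *conv_block(3, 16, 3),                # conv 1: 448 -> 224 after pooling
            *conv_block(16, 32, 3),               # conv 2: 224 -> 112
            *conv_block(32, 64, 3),               # conv 3: 112 -> 56
            *conv_block(64, 32, 1),               # conv 4: 1x1 channel reduction, 56 -> 28
            *conv_block(32, 64, 3),               # conv 5: 3x3 after the 1x1, 28 -> 14
            *conv_block(64, 128, 3, pool=False),  # conv 6: no pooling, stays 14 x 14
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 14 * 14, 512), nn.LeakyReLU(0.1),
            nn.Linear(512, 4),  # assumed output: one (x, y, w, h) bounding box
        )

    def forward(self, x):  # x: (N, 3, 448, 448) RGB batch
        return torch.sigmoid(self.head(self.features(x)))  # box normalized to [0, 1]

net = TargetDetectionNet()
print(net(torch.zeros(1, 3, 448, 448)).shape)  # torch.Size([1, 4])
```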
In a further embodiment, after the unmanned aerial vehicle region is obtained in the visual image, anchor point detection is performed within that region. Traditional feature points are mostly hand-crafted; in practice their parameters must be repeatedly tuned by hand to cope with environmental factors such as illumination, at considerable labor cost. Given this situation, the invention designs an anchor point detection network with a convolutional neural network at its core, achieving accurate region-based anchor detection.
As shown in FIG. 3, the anchor point detection sub-network comprises, in order, 5 convolutional layers and 1 fully connected layer, with each of the 2nd, 3rd, and 4th convolutional layers followed by 1 max pooling layer.
The anchor point detection sub-network shares a design criterion with the target detection neural sub-network: the network's computational load must be reduced as far as possible to guarantee real-time operation on various onboard processors. Extensive performance testing yielded a main structure of 5 convolutional layers and 1 fully connected layer. To retain more edge and corner features, the pooling layers adopt max pooling. The input image size of the sub-network is fixed at 79 × 79. Unlike the target detection neural sub-network, the output of the fully connected layer is used directly as the image coordinates of the anchor points, without normalization. If the number of anchor points is M, the output vector of the sub-network has dimension 2M. A structural sketch follows.
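A corresponding minimal sketch of the anchor point detection sub-network. The layer counts, pooling placement, fixed 79 × 79 input, and unnormalized 2M-dimensional output follow the text; the channel widths, and the Leaky ReLU activation (which the patent specifies only for the target detection sub-network), are assumptions.

```python
import torch
import torch.nn as nn

class AnchorDetectionNet(nn.Module):
    """5 conv layers + 1 fully connected layer; max pooling after convs 2, 3, 4."""
    def __init__(self, num_anchors=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.LeakyReLU(0.1),   # conv 1 (79 x 79)
            nn.Conv2d(16, 32, 3, padding=1), nn.LeakyReLU(0.1),  # conv 2
            nn.MaxPool2d(2),                                     # 79 -> 39
            nn.Conv2d(32, 64, 3, padding=1), nn.LeakyReLU(0.1),  # conv 3
            nn.MaxPool2d(2),                                     # 39 -> 19
            nn.Conv2d(64, 64, 3, padding=1), nn.LeakyReLU(0.1),  # conv 4
            nn.MaxPool2d(2),                                     # 19 -> 9
            nn.Conv2d(64, 128, 3, padding=1), nn.LeakyReLU(0.1), # conv 5
        )
        # No normalization: the 2M outputs are image coordinates (u1, v1, ..., uM, vM).
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(128 * 9 * 9, 2 * num_anchors))

    def forward(self, x):  # x: (N, 3, 79, 79) crop of the UAV region
        return self.head(self.features(x))

net = AnchorDetectionNet(num_anchors=5)
print(net(torch.zeros(1, 3, 79, 79)).shape)  # torch.Size([1, 10])
```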
In this embodiment, detecting the unmanned aerial vehicle region and the plurality of sub-regions of different resolutions separately with the anchor point detection sub-network to obtain a plurality of groups of original anchor points comprises:
501: extracting features from the unmanned aerial vehicle region and the sub-regions of different resolutions with the convolutional layers of the anchor point detection sub-network, and filtering the extracted features with the max pooling layers;
502: obtaining a plurality of groups of original anchor points with the fully connected layer, according to the filtered features.
The unmanned aerial vehicle region corresponds to one group of original anchor points, and each sub-region corresponds to a further group.
The resolution of the target region directly determines how discernible the target's appearance details are, but applying a deep network directly to a high-resolution target region for anchor detection incurs a large computational cost. To solve this, the invention designs the multi-resolution anchor point detection architecture based on region blocking shown in FIG. 4. First, the unmanned aerial vehicle region is divided into several sub-regions of different resolutions according to the distribution of anchor points in historical frame images; the blocking operation generally divides the target region into left and right blocks. Then, the same anchor point detection sub-network is applied to the left sub-block, the right sub-block, and the complete target region. Because the input size of the sub-network is fixed and smaller than the target region, a sub-region fed into the network presents the target at a higher resolution than the complete unmanned aerial vehicle region and thus reveals richer details. The sub-network's main structure is identical for the sub-regions and the complete region. Finally, from the 3 groups of anchor positions output by the sub-network, the position of each original anchor point is averaged across the groups to obtain the visual anchor points, as sketched below.
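A sketch of this blocking-and-fusion step, assuming a plain left/right split and an `anchor_net.detect` helper that returns the 2M raw coordinates for one crop (both of which are assumptions; the patent derives the split from the anchor distribution in historical frames):

```python
import numpy as np
import cv2  # OpenCV, used here only for cropping and resizing

def anchors_for_crop(anchor_net, image, box):
    """Run the anchor sub-network on one crop and map its (M, 2) output
    from 79x79 crop coordinates back to full-image coordinates."""
    x, y, w, h = box
    crop = cv2.resize(image[y:y + h, x:x + w], (79, 79))
    uv = np.asarray(anchor_net.detect(crop), dtype=float).reshape(-1, 2)
    return uv * np.array([w / 79.0, h / 79.0]) + np.array([x, y])

def fuse_anchor_groups(anchor_net, image, region):
    """Detect on the full UAV region plus its left/right sub-blocks, then
    average the position of each anchor across the 3 groups."""
    x, y, w, h = region
    blocks = [region,
              (x, y, w // 2, h),               # left sub-block
              (x + w // 2, y, w - w // 2, h)]  # right sub-block
    groups = np.stack([anchors_for_crop(anchor_net, image, b) for b in blocks])
    return groups.mean(axis=0)  # (M, 2) fused visual anchor positions
```

Because every crop is resized to the same fixed 79 × 79 input, the sub-blocks reach the network at roughly twice the effective resolution of the full region, which is what exposes the finer appearance details.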
In another embodiment, for step 105, after obtaining the position of the visual anchor point of the drone in the visual image to be detected, the method further includes:
and correcting the position of the visual anchor point of the unmanned aerial vehicle according to the distribution condition of the anchor point in the historical frame image.
In a certain embodiment, correcting the position of the visual anchor point of the unmanned aerial vehicle according to the distribution condition of the anchor point in the historical frame image comprises:
601: acquiring the position of an anchor point in a historical frame image;
602: according to the positions of anchor points in the historical frame images, fitting the change curves of the anchor points in different frame images to obtain the predicted anchor points of the current frame image;
603: obtaining the final position of the visual anchor point of the unmanned aerial vehicle in the current frame image by weighted averaging of the predicted anchor point and the visual anchor point output by the anchor point detection model.
To further reduce anchor detection error, this embodiment makes full use of the image sequence: the anchor positions output by the anchor point detection model for the current frame are corrected with the anchor detection results of historical frames. In practical applications, the temporal continuity of the motion of the target and of the camera is the fundamental basis of this correction.
In the next embodiment, obtaining the final position of the visual anchor point of the unmanned aerial vehicle in the current frame image by weighted averaging of the predicted anchor point and the visual anchor point output by the anchor point detection model comprises:
According to the predicted anchor point $\hat{p}_m$ and the visual anchor point $\tilde{p}_m$ output by the anchor point detection model, the final position of the visual anchor point of the unmanned aerial vehicle in the current frame image is obtained by the weighted average

$$p_m = \lambda \tilde{p}_m + (1 - \lambda)\,\hat{p}_m,$$

where $\lambda$ is a weight factor weighting the current detection against the prediction, generally taken as 0.7, and $m$ is the anchor point index. A numerical sketch of this correction follows.
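The correction can be sketched as follows, assuming the "change curve" is a low-order polynomial fitted per image coordinate over a short window of past frames (the patent does not specify the curve model or window length); λ = 0.7 follows the text.

```python
import numpy as np

def correct_anchors(history, detected, lam=0.7, order=2):
    """Blend curve-fit predictions with the current detections.

    history:  (T, M, 2) anchor positions from the last T frames (T > order);
    detected: (M, 2) visual anchors from the current frame's detection model.
    """
    T = history.shape[0]
    t = np.arange(T)
    predicted = np.empty_like(detected, dtype=float)
    for m in range(detected.shape[0]):        # fit u(t) and v(t) for each anchor
        for c in range(2):
            coeffs = np.polyfit(t, history[:, m, c], order)
            predicted[m, c] = np.polyval(coeffs, T)  # extrapolate to the current frame
    # Weighted average from the formula above, with the weight on the detection.
    return lam * detected + (1.0 - lam) * predicted
```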
The method for detecting visual anchor points during unmanned aerial vehicle landing is explained with a specific application example: anchor points of unmanned aerial vehicle targets are detected on the visual image sets generated in two unmanned aerial vehicle landing experiments, producing the image positions of the anchor points. FIG. 5 shows imaging examples of the unmanned aerial vehicle at various landing stages. According to the anchor point selection rules and the imaging characteristics of the unmanned aerial vehicle target, 5 feature points are selected as anchor points: the left wing tip (LW), right wing tip (RW), landing gear nose wheel (FT), left tail tip (LT), and right tail tip (RT), for the following reasons:
(1) These feature points are corner points or inflection points on the unmanned aerial vehicle contour, with distinctive visual features.
(2) Their imaging is stable during landing and they are rarely occluded, laying the foundation for stable and accurate anchor detection throughout the landing.
(3) They are widely dispersed over the unmanned aerial vehicle 3D model, giving high tolerance to anchor detection errors.
Across the two groups of experiments, the accuracy and output frame rate of the proposed detection method are shown in Table 1. Overall, the method achieves an unmanned aerial vehicle anchor detection accuracy above 96% while sustaining an anchor detection frame rate above 34 frames per second (fps) on a ground-based computer.
TABLE 1. Anchor point detection accuracy and output frame rate
[Table 1 is reproduced as an image in the original publication.]
In conclusion, the invention designs unmanned aerial vehicle anchor point selection rules that address the requirements of position and attitude estimation during landing and, on that basis, proposes and implements a multi-resolution anchor point detection neural network algorithm based on region blocking. The algorithm offers high accuracy and real-time performance, fully meets the accuracy and speed requirements that subsequent position and attitude estimation places on anchor extraction, and has important significance and practical value for estimating the position and attitude of an unmanned aerial vehicle during landing with a ground-based vision system.
The invention also provides a system for detecting visual anchor points during the landing of an unmanned aerial vehicle, comprising:
the anchor point selecting module is used for selecting an anchor point on the unmanned aerial vehicle three-dimensional model according to the attribute of the anchor point;
the training set forming module is used for acquiring a visual image in the landing process of the unmanned aerial vehicle, and manually marking the unmanned aerial vehicle target and the anchor point in the visual image to form a training set;
the model training module is used for training a pre-constructed anchor point detection model by utilizing the training set, and the anchor point detection model comprises a target detection neural sub-network and an anchor point detection sub-network;
the anchor point detection module is used for inputting the visual image to be detected, generated in real time, into the trained anchor point detection model; acquiring the unmanned aerial vehicle region in the visual image with the target detection neural sub-network; dividing the unmanned aerial vehicle region into a plurality of sub-regions of different resolutions according to the distribution of anchor points in historical frame images; detecting the unmanned aerial vehicle region and the sub-regions separately with the anchor point detection sub-network to obtain a plurality of groups of original anchor points; and averaging the positions of corresponding original anchor points across the groups to obtain the positions of the visual anchor points of the unmanned aerial vehicle in the visual image to be detected.
The invention further provides a computer device, which includes a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the method when executing the computer program.
The invention also proposes a computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method described above.
The above description is only a preferred embodiment of the present invention and is not intended to limit the scope of the invention; all equivalent structural or process changes made using the contents of this specification and the accompanying drawings, or direct/indirect applications in other related technical fields, are likewise included within the protection scope of the present invention.

Claims (10)

1. A method for detecting a visual anchor point in the landing process of an unmanned aerial vehicle is characterized by comprising the following steps:
selecting an anchor point on the unmanned aerial vehicle three-dimensional model according to the attribute of the anchor point;
acquiring a visual image of the unmanned aerial vehicle in the landing process, and manually marking the unmanned aerial vehicle target and the anchor point in the visual image to form a training set;
training a pre-constructed anchor point detection model by using the training set, wherein the anchor point detection model comprises a target detection neural sub-network and an anchor point detection sub-network;
inputting a visual image to be detected generated in real time into a trained anchor point detection model, and acquiring an unmanned aerial vehicle region in the visual image by using a target detection neural subnetwork;
dividing the unmanned aerial vehicle region into a plurality of sub-regions of different resolutions according to the distribution of anchor points in historical frame images; detecting the unmanned aerial vehicle region and the sub-regions separately with the anchor point detection sub-network to obtain a plurality of groups of original anchor points; and averaging the positions of corresponding original anchor points across the groups to obtain the positions of the visual anchor points of the unmanned aerial vehicle in the visual image to be detected.
2. The method of claim 1, wherein selecting an anchor point on the three-dimensional model of the drone based on the attributes of the anchor point comprises:
and selecting the anchor points on the unmanned aerial vehicle three-dimensional model according to the characteristics, visibility and envelopment of the anchor points.
3. The method for detecting the visual anchor points during the landing of the unmanned aerial vehicle as claimed in claim 1, wherein the target detection neural sub-network comprises, in order, 6 convolutional layers and 2 fully connected layers, each of the first 5 convolutional layers being followed by 1 max pooling layer;
obtaining a drone area in a visual image with a target detection neural subnetwork, comprising:
extracting features from the visual image with the convolutional layers of the target detection neural sub-network, and filtering the extracted features with the max pooling layers;
and obtaining the unmanned aerial vehicle region in the visual image with the fully connected layers, according to the filtered features.
4. The method of claim 1, wherein the anchor point detection sub-network comprises, in order, 5 convolutional layers and 1 fully connected layer, each of the 2nd, 3rd, and 4th convolutional layers being followed by 1 max pooling layer;
and detecting the unmanned aerial vehicle region and the plurality of sub-regions of different resolutions separately with the anchor point detection sub-network to obtain a plurality of groups of original anchor points comprises:
extracting features from the unmanned aerial vehicle region and the sub-regions of different resolutions with the convolutional layers of the anchor point detection sub-network, and filtering the extracted features with the max pooling layers;
and obtaining a plurality of groups of original anchor points with the fully connected layer, according to the filtered features.
5. The method for detecting the visual anchor point during the landing of the unmanned aerial vehicle as claimed in claim 1, wherein after obtaining the position of the visual anchor point of the unmanned aerial vehicle in the visual image to be detected, the method further comprises:
and correcting the position of the visual anchor point of the unmanned aerial vehicle according to the distribution condition of the anchor point in the historical frame image.
6. The method for detecting the visual anchor point during the landing of the unmanned aerial vehicle as claimed in claim 5, wherein the step of correcting the position of the visual anchor point of the unmanned aerial vehicle according to the distribution of the anchor point in the historical frame image comprises:
acquiring the position of an anchor point in a historical frame image;
according to the positions of anchor points in the historical frame images, fitting the change curves of the anchor points in different frame images to obtain the predicted anchor points of the current frame image;
and obtaining the final position of the visual anchor point of the unmanned aerial vehicle in the current frame image by weighted averaging of the predicted anchor point and the visual anchor point output by the anchor point detection model.
7. The method as claimed in claim 6, wherein the step of obtaining the final position of the visual anchor point of the unmanned aerial vehicle in the current frame image by means of weighted averaging according to the predicted anchor point and the visual anchor point output by the anchor point detection model comprises:
according to the predicted anchor point $\hat{p}_m$ and the visual anchor point $\tilde{p}_m$ output by the anchor point detection model, obtaining the final position of the visual anchor point of the unmanned aerial vehicle in the current frame image by the weighted average

$$p_m = \lambda \tilde{p}_m + (1 - \lambda)\,\hat{p}_m,$$

where $\lambda$ is a weight factor and $m$ is the anchor point index.
8. A system for detecting visual anchor points during the landing of an unmanned aerial vehicle, characterized by comprising:
the anchor point selecting module is used for selecting an anchor point on the unmanned aerial vehicle three-dimensional model according to the attribute of the anchor point;
the training set forming module is used for acquiring a visual image in the landing process of the unmanned aerial vehicle, and manually marking the unmanned aerial vehicle target and the anchor point in the visual image to form a training set;
the model training module is used for training a pre-constructed anchor point detection model by utilizing the training set, and the anchor point detection model comprises a target detection neural sub-network and an anchor point detection sub-network;
the anchor point detection module is used for inputting the visual image to be detected, generated in real time, into the trained anchor point detection model; acquiring the unmanned aerial vehicle region in the visual image with the target detection neural sub-network; dividing the unmanned aerial vehicle region into a plurality of sub-regions of different resolutions according to the distribution of anchor points in historical frame images; detecting the unmanned aerial vehicle region and the sub-regions separately with the anchor point detection sub-network to obtain a plurality of groups of original anchor points; and averaging the positions of corresponding original anchor points across the groups to obtain the positions of the visual anchor points of the unmanned aerial vehicle in the visual image to be detected.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor when executing the computer program implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202011235408.9A 2020-11-09 2020-11-09 Method and system for detecting visual anchor point in unmanned aerial vehicle landing process Active CN112102412B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011235408.9A CN112102412B (en) 2020-11-09 2020-11-09 Method and system for detecting visual anchor point in unmanned aerial vehicle landing process

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011235408.9A CN112102412B (en) 2020-11-09 2020-11-09 Method and system for detecting visual anchor point in unmanned aerial vehicle landing process

Publications (2)

Publication Number Publication Date
CN112102412A (en) 2020-12-18
CN112102412B (en) 2021-01-26

Family

ID=73785019

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011235408.9A Active CN112102412B (en) 2020-11-09 2020-11-09 Method and system for detecting visual anchor point in unmanned aerial vehicle landing process

Country Status (1)

Country Link
CN (1) CN112102412B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112268564B (en) * 2020-12-25 2021-03-02 中国人民解放军国防科技大学 Unmanned aerial vehicle landing space position and attitude end-to-end estimation method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9613273B2 (en) * 2015-05-19 2017-04-04 Toyota Motor Engineering & Manufacturing North America, Inc. Apparatus and method for object tracking
US10377484B2 (en) * 2016-09-30 2019-08-13 Sony Interactive Entertainment Inc. UAV positional anchors
CN111174780B (en) * 2019-12-31 2022-03-08 同济大学 Road inertial navigation positioning system for blind people
CN111551167B (en) * 2020-02-10 2022-09-27 江苏盖亚环境科技股份有限公司 Global navigation auxiliary method based on unmanned aerial vehicle shooting and semantic segmentation
CN111881744A (en) * 2020-06-23 2020-11-03 安徽清新互联信息科技有限公司 Face feature point positioning method and system based on spatial position information

Also Published As

Publication number Publication date
CN112102412A (en) 2020-12-18


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant