CN115861938A - Unmanned aerial vehicle countermeasure method and system based on unmanned aerial vehicle identification - Google Patents
Unmanned aerial vehicle countermeasure method and system based on unmanned aerial vehicle identification
- Publication number
- CN115861938A (application CN202310063450.4A / CN202310063450A)
- Authority
- CN
- China
- Prior art keywords
- unmanned aerial
- aerial vehicle
- feature
- graph
- deconvolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Image Analysis (AREA)
Abstract
The invention belongs to the technical field of unmanned aerial vehicle countermeasures, and particularly relates to an unmanned aerial vehicle countermeasure method and system based on unmanned aerial vehicle identification. The method comprises the following steps: acquiring an unmanned aerial vehicle detection data set; constructing a deconvolution fusion network for outputting a segmentation result graph; training the deconvolution fusion network with the unmanned aerial vehicle detection data set to obtain an unmanned aerial vehicle detection model; acquiring a visible light image of an area to be monitored and a depth image corresponding to the visible light image; inputting the visible light image into the unmanned aerial vehicle detection model, and judging, according to the model output, whether the visible light image contains an unmanned aerial vehicle; when it does, determining the spatial center point coordinates of the unmanned aerial vehicle from the segmentation result graph output by the detection model and the corresponding depth image; and transmitting an interference signal to the unmanned aerial vehicle according to those coordinates. The invention improves both the accuracy of unmanned aerial vehicle identification and the accuracy of interference.
Description
Technical Field
The invention relates to the technical field of unmanned aerial vehicle countermeasures, and in particular to an unmanned aerial vehicle countermeasure method and system based on unmanned aerial vehicle identification.
Background
In recent years, China's unmanned aerial vehicle industry has developed rapidly, and the application scenarios of unmanned aerial vehicles continue to widen and deepen. However, the rapid growth in the number of unmanned aerial vehicles, together with an imperfect supervision system, has caused a series of problems such as personal privacy disclosure and leakage of confidential information, posing a serious threat to social and military security. Countermeasures against low-altitude unmanned aerial vehicle targets are therefore of paramount importance.
Compared with countermeasures such as laser weapons and capture nets, radio interference can counter an unmanned aerial vehicle without destroying it, and has the advantages of low implementation difficulty and low cost. Existing radio interference countermeasures mostly rely on manual observation or optoelectronic equipment to discover and locate the unmanned aerial vehicle target before the interference system is applied; if the unmanned aerial vehicle is discovered too late, identified incorrectly, or located inaccurately, the countermeasure effect suffers and the electromagnetic environment is polluted. Therefore, an automatic unmanned aerial vehicle countermeasure method that can quickly and accurately discover, locate, and directionally interfere with the unmanned aerial vehicle is needed.
Disclosure of Invention
The invention aims to provide an unmanned aerial vehicle countermeasure method and system based on unmanned aerial vehicle identification, which improve the accuracy of unmanned aerial vehicle identification and the accuracy of interference.
In order to achieve this purpose, the invention provides the following scheme:
An unmanned aerial vehicle countermeasure method based on unmanned aerial vehicle identification comprises the following steps:
acquiring an unmanned aerial vehicle detection data set;
constructing a deconvolution fusion network, wherein the deconvolution fusion network is used for outputting a segmentation result graph;
training the deconvolution fusion network by adopting the unmanned aerial vehicle detection data set to obtain an unmanned aerial vehicle detection model;
acquiring a visible light image of a region to be monitored and a depth image corresponding to the visible light image;
inputting the visible light image into the unmanned aerial vehicle detection model, and judging whether the visible light image contains the unmanned aerial vehicle or not according to the output of the unmanned aerial vehicle detection model;
when the visible light image contains the unmanned aerial vehicle, determining the spatial center point coordinate of the unmanned aerial vehicle according to the segmentation result image output by the unmanned aerial vehicle detection model and the corresponding depth image;
transmitting an interference signal to the unmanned aerial vehicle according to the spatial center point coordinates of the unmanned aerial vehicle;
the deconvolution fusion network comprises a feature extraction module, a first deconvolution fusion module, a second deconvolution fusion module, a third deconvolution fusion module, a fourth deconvolution fusion module and a segmentation result graph determination module, wherein the feature extraction module is used for generating a feature graph U1, a feature graph U2, a feature graph U3, a feature graph U4 and a feature graph U5 which are sequentially reduced in size according to an input image; the first deconvolution fusion module is used for performing convolution operation with the channel number of 2 on the feature graph U4 to obtain a feature graph U4', performing convolution operation with the channel number of 2 on the feature graph U5 to obtain a feature graph U5', performing deconvolution operation on the feature graph U5 'to obtain a feature graph V5 with the same size as the feature graph U4', and performing pixel-by-pixel addition on the feature graph V5 and the feature graph U4 'and then performing convolution to obtain a feature graph V4'; the second deconvolution fusion module is used for performing convolution operation with the channel number being 2 on the feature map U3 to obtain a feature map U3', obtaining a feature map V4 with the same size as the feature map U3' by performing deconvolution operation on the feature map V4', and performing convolution after pixel-by-pixel addition on the feature map V4 and the feature map U3' to obtain a feature map V3'; the third deconvolution fusion module is used for performing convolution operation with the channel number being 2 on the feature graph U2 to obtain a feature graph U2', performing deconvolution operation on the feature graph V3' to obtain a feature graph V3 with the same size as the feature graph U2', performing pixel-by-pixel addition on the feature graph V3 and the feature graph U2' and then performing convolution to obtain a feature graph V2', the fourth deconvolution fusion module is 
used for performing convolution operation with the channel number being 2 on the feature graph U1 to obtain a feature graph U1', performing deconvolution operation on the feature graph V2 'to obtain a feature graph V2 with the same size as the feature graph U1', and performing pixel-by-pixel addition on the feature graph V2 and the feature graph U1 'and then performing convolution to obtain a feature graph V1'; the segmentation result graph determining module is used for deconvoluting the feature graph V1' to obtain a segmentation result graph V1 with the same size as the input image.
Optionally, when the visible light image includes an unmanned aerial vehicle, determining a spatial center point coordinate of the unmanned aerial vehicle according to the segmentation result graph output by the unmanned aerial vehicle detection model and the corresponding depth image, specifically including:
when the visible light image contains the unmanned aerial vehicle, calculating the centroid coordinate of the unmanned aerial vehicle in the segmentation result image output by the unmanned aerial vehicle detection model based on a centroid calculation function;
obtaining a depth value of the centroid coordinate position of the unmanned aerial vehicle on the corresponding depth image;
and determining the spatial center point coordinates of the unmanned aerial vehicle according to the centroid coordinates of the unmanned aerial vehicle and the depth value.
Optionally, transmitting the interference signal to the unmanned aerial vehicle according to the spatial center point coordinates of the unmanned aerial vehicle specifically comprises:
determining the altitude angle and azimuth angle for interference signal transmission according to the spatial center point coordinates of the unmanned aerial vehicle;
and controlling, through a servo system, interference equipment to transmit the interference signal at the determined altitude angle and azimuth angle.
Optionally, the method further comprises:
inputting a visible light image acquired in real time into the unmanned aerial vehicle detection model to output a segmentation result graph, and determining from it a monitoring result, namely that an unmanned aerial vehicle is or is not contained;
when the monitoring result at the current moment is that no unmanned aerial vehicle is contained and the monitoring result at the previous moment is that an unmanned aerial vehicle is contained, controlling the interference equipment through the servo system to reset and shut down.
Optionally, the feature extraction module is configured to input the input image into 2 convolution layers with kernels of 3 × 3 and 1 pooling layer with pooling kernels of 4 × 4 to obtain a feature map U1, input the feature map U1 into 2 convolution layers with kernels of 3 × 3 and 1 pooling layer with pooling kernels of 2 × 2 to obtain a feature map U2, input the feature map U2 into 2 convolution layers with kernels of 3 × 3 and 1 pooling layer with pooling kernels of 2 × 2 to obtain a feature map U3, and input the feature map U3 into 2 convolution layers with kernels of 3 × 3 and 1 pooling layer with pooling kernels of 2 × 2 to obtain a feature map U4.
The invention also discloses an unmanned aerial vehicle countermeasure system based on unmanned aerial vehicle identification, which comprises:
the data set acquisition module is used for acquiring an unmanned aerial vehicle detection data set;
the deconvolution fusion network construction module is used for constructing a deconvolution fusion network, and the deconvolution fusion network is used for outputting a segmentation result graph;
the deconvolution fusion network training module is used for training the deconvolution fusion network by adopting the unmanned aerial vehicle detection data set to obtain an unmanned aerial vehicle detection model;
the device comprises a to-be-monitored area image acquisition module, a depth image acquisition module and a monitoring module, wherein the to-be-monitored area image acquisition module is used for acquiring a visible light image of a to-be-monitored area and a depth image corresponding to the visible light image;
the unmanned aerial vehicle detection module is used for inputting the visible light image into the unmanned aerial vehicle detection model and judging whether the visible light image contains the unmanned aerial vehicle or not according to the output of the unmanned aerial vehicle detection model;
the spatial center point coordinate determination module is used for determining, when the visible light image contains the unmanned aerial vehicle, the spatial center point coordinates of the unmanned aerial vehicle according to the segmentation result graph output by the unmanned aerial vehicle detection model and the corresponding depth image;
the interference signal transmitting module is used for transmitting an interference signal to the unmanned aerial vehicle according to the spatial center point coordinates of the unmanned aerial vehicle;
the deconvolution fusion network comprises a feature extraction module, a first deconvolution fusion module, a second deconvolution fusion module, a third deconvolution fusion module, a fourth deconvolution fusion module and a segmentation result graph determination module, wherein the feature extraction module is used for generating a feature graph U1, a feature graph U2, a feature graph U3, a feature graph U4 and a feature graph U5 which are sequentially reduced in size according to an input image; the first deconvolution fusion module is used for performing convolution operation with the channel number of 2 on the feature graph U4 to obtain a feature graph U4', performing convolution operation with the channel number of 2 on the feature graph U5 to obtain a feature graph U5', performing deconvolution operation on the feature graph U5 'to obtain a feature graph V5 with the same size as the feature graph U4', and performing pixel-by-pixel addition on the feature graph V5 and the feature graph U4 'and then performing convolution to obtain a feature graph V4'; the second deconvolution fusion module is used for performing convolution operation with the channel number being 2 on the feature map U3 to obtain a feature map U3', obtaining a feature map V4 with the same size as the feature map U3' by performing deconvolution operation on the feature map V4', and performing convolution after pixel-by-pixel addition on the feature map V4 and the feature map U3' to obtain a feature map V3'; the third deconvolution fusion module is used for performing convolution operation with the channel number being 2 on the feature graph U2 to obtain a feature graph U2', performing deconvolution operation on the feature graph V3' to obtain a feature graph V3 with the same size as the feature graph U2', performing pixel-by-pixel addition on the feature graph V3 and the feature graph U2' and then performing convolution to obtain a feature graph V2', the fourth deconvolution fusion module is 
used for performing convolution operation with the channel number being 2 on the feature graph U1 to obtain a feature graph U1', performing deconvolution operation on the feature graph V2 'to obtain a feature graph V2 with the same size as the feature graph U1', and performing pixel-by-pixel addition on the feature graph V2 and the feature graph U1 'and then performing convolution to obtain a feature graph V1'; the segmentation result graph determining module is used for deconvoluting the feature graph V1' to obtain a segmentation result graph V1 with the same size as the input image.
Optionally, the module for determining coordinates of a spatial center point of the unmanned aerial vehicle specifically includes:
the centroid coordinate determination unit of the unmanned aerial vehicle is used for calculating the centroid coordinate of the unmanned aerial vehicle in the segmentation result image output by the unmanned aerial vehicle detection model based on a centroid calculation function when the visible light image contains the unmanned aerial vehicle;
the depth value determining unit is used for obtaining the depth value of the centroid coordinate position of the unmanned aerial vehicle on the corresponding depth image;
and the spatial center point coordinate determination unit is used for determining the spatial center point coordinates of the unmanned aerial vehicle according to the centroid coordinates of the unmanned aerial vehicle and the depth value.
Optionally, the interference signal transmitting module specifically includes:
the device comprises an interference signal transmitting altitude angle and azimuth angle determining unit, a signal transmitting and receiving unit and a signal transmitting and receiving unit, wherein the interference signal transmitting altitude angle and azimuth angle determining unit is used for determining an altitude angle and an azimuth angle of interference signal transmission according to the space central point coordinates of the unmanned aerial vehicle;
and the interference signal transmitting unit is used for controlling interference equipment to transmit interference signals according to the altitude angle and the azimuth angle through a servo system.
Optionally, the system further comprises:
the monitoring result real-time output module is used for inputting the visible light image acquired in real time into the unmanned aerial vehicle detection model to output a segmentation result graph, and determining a monitoring result according to the output segmentation result graph, wherein the monitoring result is that the unmanned aerial vehicle is contained or the unmanned aerial vehicle is not contained;
and the interference equipment reset and shutdown module is used for controlling the interference equipment through the servo system to reset and shut down when the monitoring result at the current moment is that no unmanned aerial vehicle is contained and the monitoring result at the previous moment is that an unmanned aerial vehicle is contained.
Optionally, the feature extraction module is configured to input the input image into 2 convolution layers with kernels of 3 × 3 and 1 pooling layer with pooling kernels of 4 × 4 to obtain a feature map U1, input the feature map U1 into 2 convolution layers with kernels of 3 × 3 and 1 pooling layer with pooling kernels of 2 × 2 to obtain a feature map U2, input the feature map U2 into 2 convolution layers with kernels of 3 × 3 and 1 pooling layer with pooling kernels of 2 × 2 to obtain a feature map U3, and input the feature map U3 into 2 convolution layers with kernels of 3 × 3 and 1 pooling layer with pooling kernels of 2 × 2 to obtain a feature map U4.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention discloses an unmanned aerial vehicle reverse system and method based on unmanned aerial vehicle identification, wherein a deconvolution fusion network expands the size of a deep semantic feature map layer by layer through deconvolution operation, and simultaneously ensures high resolution of an output feature map by adopting a mode of gradually fusing with low-layer features, so that end-to-end and accurate pixel-level segmentation of an unmanned aerial vehicle target is realized, the accuracy of unmanned aerial vehicle identification is improved, in addition, the coordinates of a space central point of an unmanned aerial vehicle are determined according to a segmentation result map and a corresponding depth image, an interference signal is transmitted to the unmanned aerial vehicle according to the coordinates of the space central point of the unmanned aerial vehicle, the accuracy of interference signal transmission is improved, and the reverse efficiency of the unmanned aerial vehicle is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a first schematic flow chart of the unmanned aerial vehicle countermeasure method based on unmanned aerial vehicle identification according to the invention;
FIG. 2 is a second schematic flow chart of the unmanned aerial vehicle countermeasure method based on unmanned aerial vehicle identification according to the invention;
FIG. 3 is a schematic diagram of a deconvolution fusion network structure according to the present invention;
fig. 4 is a schematic structural diagram of the unmanned aerial vehicle countermeasure system based on unmanned aerial vehicle identification according to the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide an unmanned aerial vehicle countermeasure method and system based on unmanned aerial vehicle identification, which improve the accuracy of unmanned aerial vehicle identification and the accuracy of interference.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a first schematic flow chart of the unmanned aerial vehicle countermeasure method based on unmanned aerial vehicle identification, and fig. 2 is a second schematic flow chart of the same method. As shown in figs. 1-2, the unmanned aerial vehicle countermeasure method based on unmanned aerial vehicle identification comprises the following steps:
step 101: and acquiring a detection data set of the unmanned aerial vehicle.
Step 102: and constructing a deconvolution fusion network, wherein the deconvolution fusion network is used for outputting a segmentation result graph.
As shown in fig. 3, the deconvolution fusion network includes a feature extraction module, a first deconvolution fusion module, a second deconvolution fusion module, a third deconvolution fusion module, a fourth deconvolution fusion module, and a segmentation result map determination module; the dotted frames in fig. 3, from top to bottom, are the first, second, third, and fourth deconvolution fusion modules in turn. The feature extraction module generates, from the input image, feature maps U1, U2, U3, U4 and U5 of successively decreasing size. The first deconvolution fusion module performs a convolution with 2 output channels on U4 to obtain U4' and on U5 to obtain U5', deconvolves U5' into a feature map V5 of the same size as U4', and adds V5 and U4' pixel by pixel before convolving the result to obtain V4'. The second deconvolution fusion module performs a convolution with 2 output channels on U3 to obtain U3', deconvolves V4' into V4 of the same size as U3', and adds V4 and U3' pixel by pixel before convolving the result to obtain V3'. The third deconvolution fusion module performs a convolution with 2 output channels on U2 to obtain U2', deconvolves V3' into V3 of the same size as U2', and adds V3 and U2' pixel by pixel before convolving the result to obtain V2'. The fourth deconvolution fusion module performs a convolution with 2 output channels on U1 to obtain U1', deconvolves V2' into V2 of the same size as U1', and adds V2 and U1' pixel by pixel before convolving the result to obtain V1'. The segmentation result map determination module deconvolves V1' to obtain a segmentation result map V1 of the same size as the input image.
The feature extraction module inputs the input image into 2 convolution layers with 3 × 3 kernels and 1 pooling layer with a 4 × 4 pooling kernel to obtain feature map U1, inputs U1 into 2 convolution layers with 3 × 3 kernels and 1 pooling layer with a 2 × 2 pooling kernel to obtain feature map U2, inputs U2 into 2 convolution layers with 3 × 3 kernels and 1 pooling layer with a 2 × 2 pooling kernel to obtain feature map U3, and inputs U3 into 2 convolution layers with 3 × 3 kernels and 1 pooling layer with a 2 × 2 pooling kernel to obtain feature map U4.
Taking as an example an input visible light monitoring image containing an unauthorized ("black-flying") unmanned aerial vehicle, of size 1024 × 1024 × 3, the design process of the deconvolution fusion network is as follows:
(1) First, feature extraction is performed on the monitoring image (the input image): the monitoring image is input into 2 convolution layers with 3 × 3 kernels and 1 pooling layer with a 4 × 4 pooling kernel to obtain feature map U1; U1 is input into 2 convolution layers with 3 × 3 kernels and 1 pooling layer with a 2 × 2 pooling kernel to obtain feature map U2; the U1-to-U2 operations are repeated to generate deeper feature maps with richer semantic representation, namely U3 and U4; and a convolution with 4096 kernels and a stride of 2 is performed on U4 to obtain feature map U5.
Through layer-by-layer feature extraction, the output feature maps and their corresponding sizes are U1: 256 × 256 × 24, U2: 128 × 128 × 48, U3: 64 × 64 × 96, U4: 32 × 32 × 192, and U5: 16 × 16 × 4096.
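The spatial-size bookkeeping above can be checked with a short sketch, assuming the stated 1024 × 1024 input, the 4 × 4 and 2 × 2 pooling factors, and the stride-2 convolution that produces U5:

```python
# Sketch: spatial-size bookkeeping for the feature pyramid U1..U5.
# Assumes a 1024x1024 input, 4x4 pooling for the first block, 2x2 pooling
# for the next three blocks, and a stride-2 convolution producing U5.
def pyramid_sizes(input_size=1024, factors=(4, 2, 2, 2, 2)):
    sizes = []
    size = input_size
    for f in factors:
        size //= f           # each stage divides the spatial size by its factor
        sizes.append(size)
    return sizes

print(pyramid_sizes())       # [256, 128, 64, 32, 16] -> U1..U5
```

The output matches the stated sizes 256, 128, 64, 32 and 16 for U1 through U5.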
(2) And next, constructing a deconvolution fusion module based on the reverse direction of the backbone network:
the feature maps U4 and U5 are each subjected to a convolution operation with a channel number of 2, so that the channel dimensions of the above feature maps are converted to 2, and feature maps U4 'and U5' are obtained, whose corresponding dimensions are 32 × 2 and 16 × 2. The reason why the number of the modified channels is 2 is that the task of semantic segmentation is to identify and separate the unmanned aerial vehicle target from the background, namely, the pixel point category in the image is unmanned aerial vehicle and background.
Next, the feature map U5 'is deconvoluted, i.e., the size of U5' is enlarged to 32 × 2 by inserting null zero padding, and then 1 × 1 forward convolution is performed. After deconvolution, a feature map V5 with a size enlarged by 2 times was obtained. Subsequently, the feature map V5 is summed pixel by pixel with U4' of the same size, resulting in a feature map of size 32 × 2. In order to further enhance the fusion effect, the obtained feature map is input into one convolution layer again to obtain a feature map V4' so that the size of the feature map is kept unchanged, but the semantic feature information is enhanced. The above process is the construction process of the deconvolution fusion module.
(3) The feature maps V4' and U3' are input into the deconvolution fusion module to obtain a feature map V3' of size 64 × 64 × 2; V3' and U2' are input into the module to obtain a feature map V2' of size 128 × 128 × 2; finally, a feature map V1' of size 256 × 256 × 2 is obtained from V2', U1' and the deconvolution fusion module.
In order to obtain the precise position of the drone target, pixel-level segmentation of the input image is required. Since the feature map V1' is 1/4 the size of the input image, a deconvolution operation is applied once more to V1' to restore the original size, yielding the final segmentation result map V1 of size 1024 × 1024 × 2. The value of the first channel of V1 indicates the probability that a pixel belongs to the drone target: when it exceeds a target threshold, the pixel belongs to the drone. The value of the second channel indicates the probability that a pixel belongs to the background: when it exceeds a background threshold, the pixel lies in the background area.
The above is the design process of the deconvolution fusion network. Its four deconvolution fusion modules enlarge the deep semantic feature maps layer by layer through deconvolution operations, while low-layer feature fusion preserves high resolution in the output feature map, realizing end-to-end, accurate pixel-level segmentation.
Step 103: and training the deconvolution fusion network by adopting an unmanned aerial vehicle detection data set to obtain an unmanned aerial vehicle detection model.
Wherein, step 103 specifically comprises: when the deconvolution fusion network is trained with the unmanned aerial vehicle detection data set, the parameters of the entire network are updated based on a two-class (binary) cross-entropy loss function; after training, the deconvolution fusion model (the unmanned aerial vehicle detection model) is obtained.
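The two-class cross-entropy used for the parameter updates can be written per pixel as a short sketch over predicted drone probabilities p and ground-truth labels y (1 = drone, 0 = background):

```python
import numpy as np

# Sketch: the two-class (binary) cross-entropy loss driving the parameter
# updates, written over predicted drone probabilities p and labels y.
def binary_cross_entropy(p, y, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)               # numerical stability
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

p = np.array([0.9, 0.1, 0.8, 0.2])             # predicted drone probabilities
y = np.array([1.0, 0.0, 1.0, 0.0])             # ground-truth labels
print(round(binary_cross_entropy(p, y), 4))    # 0.1643
```

In training this average would be taken over all pixels of the segmentation map rather than the four example values shown here.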
Step 104: and acquiring a visible light image of the area to be monitored and a depth image corresponding to the visible light image.
Wherein, step 104 specifically includes:
and monitoring a security area (an area to be monitored) which needs unmanned aerial vehicle reverse control by adopting a depth camera, and regularly shooting and obtaining a visible light image and a corresponding depth picture of the security area.
Step 103 and step 104 are functions of the intelligent monitoring module.
Step 105: and inputting the visible light image into an unmanned aerial vehicle detection model, and judging whether the visible light image contains the unmanned aerial vehicle or not according to the output of the unmanned aerial vehicle detection model.
Wherein, step 105 specifically comprises: the captured visible light image is input into the unmanned aerial vehicle detection model. When the output segmentation result map V1 contains unmanned aerial vehicle pixels, a non-cooperative unmanned aerial vehicle is present in the current security area; the intelligent monitoring module outputs "1" and converts the first-channel image of V1 into a binary image for output. When every pixel in V1 is classified as background, no unmanned aerial vehicle target has been found in the security area, and the intelligent monitoring module outputs "0".
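The monitoring module's output logic can be sketched as below; the function name and threshold are assumptions for illustration, not identifiers from the patent:

```python
import numpy as np

# Sketch of the intelligent monitoring module's output logic (hypothetical
# helper name and threshold): output 1 plus a binary image when any drone
# pixel exists in V1's first channel, otherwise output 0.
def monitor_output(result_map, target_threshold=0.5):
    binary = (result_map[..., 0] > target_threshold).astype(np.uint8) * 255
    flag = 1 if binary.any() else 0
    return flag, binary

empty = np.zeros((4, 4, 2))       # all pixels classified as background
flag, binary = monitor_output(empty)
print(flag)                       # 0: no drone target in the security area
```

The control module would then read this flag, and the binary image feeds the subsequent centroid calculation.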
The control module obtains the output of the intelligent monitoring module.
Step 106: and when the visible light image contains the unmanned aerial vehicle, determining the spatial center point coordinate of the unmanned aerial vehicle according to the segmentation result image output by the unmanned aerial vehicle detection model and the corresponding depth image.
Wherein, step 106 specifically includes:
when the visible light image contains the unmanned aerial vehicle (that is, when the output of the intelligent monitoring module is '1'), the centroid coordinates (a, b) of the unmanned aerial vehicle pixel area in the segmentation result graph output by the unmanned aerial vehicle detection model are calculated with the centroid calculation function of OpenCV. The segmentation result graph here is a binary graph.
On the corresponding depth image, the depth value d at the centroid coordinates (a, b) of the unmanned aerial vehicle is obtained.
And determining the space central point coordinate (x, y, z) of the unmanned aerial vehicle according to the centroid coordinate and the depth value of the unmanned aerial vehicle.
The space center point coordinates (x, y, z) of the unmanned aerial vehicle are calculated according to formulas (1)-(3):

x = (a - c_x) · d / f_x (1)

y = (b - c_y) · d / f_y (2)

z = d (3)

In formulas (1)-(3), a and b are respectively the horizontal and vertical pixel coordinates of the centroid of the unmanned aerial vehicle, d is the depth value at the centroid, and f_x, f_y, c_x, c_y are the internal parameters of the depth camera: f_x is the focal length value in the x direction, f_y is the focal length value in the y direction, c_x is the offset of the optical axis from the projection plane coordinate center in the x direction, and c_y is the offset in the y direction.
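Assuming the standard pinhole camera model behind formulas (1)-(3), the back-projection step can be sketched as follows (function name illustrative):

```python
def backproject(a, b, d, fx, fy, cx, cy):
    """Map the centroid pixel (a, b) with depth value d into camera-space
    coordinates (x, y, z) using the depth camera intrinsics fx, fy, cx, cy."""
    x = (a - cx) * d / fx   # horizontal offset scaled by depth
    y = (b - cy) * d / fy   # vertical offset scaled by depth
    z = d                   # depth is the forward coordinate
    return x, y, z
```

A pixel at the principal point (c_x, c_y) maps to (0, 0, d), i.e. straight ahead on the optical axis.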
Step 107: and transmitting an interference signal to the unmanned aerial vehicle according to the space central point coordinate of the unmanned aerial vehicle.
Wherein, step 107 specifically comprises:
and determining the altitude angle and the azimuth angle of the interference signal transmission according to the space central point coordinates of the unmanned aerial vehicle.
And controlling the interference equipment to transmit interference signals according to the altitude angle and the azimuth angle through a servo system.
In order to aim the jamming device at the unmanned aerial vehicle target, the angle of the servo system needs to be adjusted. Therefore, the motion angles to be reached by the servo system, namely the altitude angle alpha and the azimuth angle theta, are obtained from the space center point coordinates (x, y, z) of the unmanned aerial vehicle according to formulas (4)-(5).
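Formulas (4)-(5) are not reproduced in this text; under the common convention that z points forward along the optical axis, x to the right and y upward (an assumption here, not taken from the patent), the servo angles could be computed as:

```python
import math

def servo_angles(x, y, z):
    """Altitude angle (alpha) and azimuth angle (theta) toward the point
    (x, y, z); convention assumed: z forward, x right, y up."""
    theta = math.atan2(x, z)                  # azimuth in the horizontal plane
    alpha = math.atan2(y, math.hypot(x, z))   # elevation above that plane
    return alpha, theta
```

Note that if y comes straight from image coordinates (which grow downward), its sign would need to be flipped before this computation.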
And the control module sends a servo motion instruction and an interference starting instruction according to the output value '1' of the intelligent monitoring module and the calculated servo system motion angle.
The unmanned aerial vehicle counter-braking method based on unmanned aerial vehicle identification further comprises the following steps:
and inputting the visible light image acquired in real time into the unmanned aerial vehicle detection model to output a segmentation result graph, and determining a monitoring result according to the output segmentation result graph, wherein the monitoring result is that the unmanned aerial vehicle is contained or the unmanned aerial vehicle is not contained.
And when the monitoring result at the current moment is that the unmanned aerial vehicle is not contained and the monitoring result at the previous moment is that the unmanned aerial vehicle is contained, controlling the interference equipment to reset and closing the interference equipment through the servo system.
When the output of the intelligent monitoring module is '0', i.e. no unmanned aerial vehicle target is present, the control module judges whether the current value is the same as the output of the intelligent monitoring module for the previous frame. If they are the same, no unmanned aerial vehicle target appeared at the previous moment either, the radio interference transmitter (interference equipment) is already in the closed state, and the control module sends no instruction. If they differ, an unmanned aerial vehicle target was present at the previous moment but is absent at the current moment (it has been brought down by the countermeasure or has flown out of the security area); the radio interference transmitter is in the open state, so the control module sends a servo homing instruction and an interference-closing instruction.
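The control logic above amounts to comparing consecutive monitor outputs and emitting commands on the transitions; a minimal sketch (command names are illustrative, not taken from the patent):

```python
def control_step(prev_flag, curr_flag, angles=None):
    """Decide the control module's commands from the previous and current
    intelligent-monitoring outputs ('0' or '1')."""
    if curr_flag == "1":
        # drone present: aim the servo at the computed angles and jam
        return [("servo_move", angles), ("jammer_on", None)]
    if prev_flag == "1":
        # drone just disappeared: home the servo and stop jamming
        return [("servo_home", None), ("jammer_off", None)]
    return []  # still no target, jammer already off: send nothing
```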
The interference module comprises the interference equipment and the servo system. After the host of the interference module receives an instruction from the control module, it first controls the servo system to move to the angle or direction specified by the control module according to the instruction type, and then turns the switch of the radio interference transmitter on or off according to the instruction type.
That is to say, when an unmanned aerial vehicle target appears in the security area, the interference module first moves the servo system to the specified angle and starts the radio interference transmitter, so that the transmitter aims at the position of the unmanned aerial vehicle target and applies interference countermeasures. As the spatial position of the unmanned aerial vehicle changes, the angle of the interference transmitter is continuously adjusted according to new servo motion instructions, so that it stays aimed at the unmanned aerial vehicle for accurate countering. Once the unmanned aerial vehicle loses control and is brought down, i.e. disappears from the security area, the servo system is returned to its home position and the interference transmitter is closed.
The invention discloses a deconvolution fusion network for accurately identifying and locating the unmanned aerial vehicle region against the background. The segmentation network expands the size of the deep semantic feature map layer by layer through deconvolution operations while progressively fusing it with low-level features, which preserves the high resolution of the output feature map and thereby realizes end-to-end, accurate pixel-level segmentation of the unmanned aerial vehicle target.
The control module and the interference module can calculate and adjust the transmitting direction of the interference unit according to the output result of the intelligent monitoring module, thereby realizing pointing type accurate interference.
Fig. 4 is a schematic structural view of an unmanned aerial vehicle countering system based on unmanned aerial vehicle identification, and as shown in fig. 4, the unmanned aerial vehicle countering system based on unmanned aerial vehicle identification comprises:
and a data set acquisition module 201, configured to acquire a detection data set of the drone.
And the deconvolution fusion network building module 202 is used for building a deconvolution fusion network, and the deconvolution fusion network is used for outputting the segmentation result graph.
And the deconvolution fusion network training module 203 is used for training a deconvolution fusion network by adopting the unmanned aerial vehicle detection data set to obtain an unmanned aerial vehicle detection model.
The image acquiring module 204 of the region to be monitored is configured to acquire a visible light image of the region to be monitored and a depth image corresponding to the visible light image.
And the unmanned aerial vehicle detection module 205 is used for inputting the visible light image into the unmanned aerial vehicle detection model and judging whether the visible light image contains the unmanned aerial vehicle according to the output of the unmanned aerial vehicle detection model.
And the space central point coordinate determination module 206 of the unmanned aerial vehicle is configured to determine the space central point coordinate of the unmanned aerial vehicle according to the segmentation result graph output by the unmanned aerial vehicle detection model and the corresponding depth image when the visible light image contains the unmanned aerial vehicle.
And the interference signal transmitting module 207 is used for transmitting an interference signal to the unmanned aerial vehicle according to the space central point coordinate of the unmanned aerial vehicle.
The deconvolution fusion network comprises a feature extraction module, a first deconvolution fusion module, a second deconvolution fusion module, a third deconvolution fusion module, a fourth deconvolution fusion module and a segmentation result graph determination module. The feature extraction module is used for generating a feature graph U1, a feature graph U2, a feature graph U3, a feature graph U4 and a feature graph U5 of sequentially decreasing size from an input image. The first deconvolution fusion module is used for performing a convolution operation with channel number 2 on the feature graph U4 to obtain a feature graph U4', performing a convolution operation with channel number 2 on the feature graph U5 to obtain a feature graph U5', performing a deconvolution operation on the feature graph U5' to obtain a feature graph V5 with the same size as the feature graph U4', and performing pixel-by-pixel addition of the feature graph V5 and the feature graph U4' followed by convolution to obtain a feature graph V4'. The second deconvolution fusion module is used for performing a convolution operation with channel number 2 on the feature graph U3 to obtain a feature graph U3', performing a deconvolution operation on the feature graph V4' to obtain a feature graph V4 with the same size as the feature graph U3', and performing pixel-by-pixel addition of the feature graph V4 and the feature graph U3' followed by convolution to obtain a feature graph V3'. The third deconvolution fusion module is used for performing a convolution operation with channel number 2 on the feature graph U2 to obtain a feature graph U2', performing a deconvolution operation on the feature graph V3' to obtain a feature graph V3 with the same size as the feature graph U2', and performing pixel-by-pixel addition of the feature graph V3 and the feature graph U2' followed by convolution to obtain a feature graph V2'. The fourth deconvolution fusion module is used for performing a convolution operation with channel number 2 on the feature graph U1 to obtain a feature graph U1', performing a deconvolution operation on the feature graph V2' to obtain a feature graph V2 with the same size as the feature graph U1', and performing pixel-by-pixel addition of the feature graph V2 and the feature graph U1' followed by convolution to obtain a feature graph V1'. The segmentation result graph determination module is used for deconvoluting the feature graph V1' to obtain a segmentation result graph V1 with the same size as the input image.
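Tracing the stated size changes through this encoder-decoder makes the fusion constraints concrete. The following sketch assumes U1 comes from a 4 × 4 pooling, U2 through U5 each from a further 2 × 2 pooling, each deconvolution doubling the size, and the final deconvolution restoring the input size; the factors for U5 and the final step are inferred from the description, not stated outright:

```python
def trace_shapes(h, w):
    """Return the spatial size of each feature graph for an h x w input."""
    sizes = {"U1": (h // 4, w // 4)}                      # 4x4 pooling
    prev = "U1"
    for name in ("U2", "U3", "U4", "U5"):                 # 2x2 poolings
        sizes[name] = (sizes[prev][0] // 2, sizes[prev][1] // 2)
        prev = name
    # deconvolution targets: V5 matches U4', V4 matches U3',
    # V3 matches U2', V2 matches U1' (primes keep the same spatial size)
    for v, u in (("V5", "U4"), ("V4", "U3"), ("V3", "U2"), ("V2", "U1")):
        sizes[v] = sizes[u]
    sizes["V1"] = (h, w)   # final deconvolution restores the input size
    return sizes
```

For a 256 × 256 input this gives U1 = 64 × 64 down to U5 = 4 × 4, with the decoder climbing back to a full-resolution segmentation result graph.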
The space center coordinate determination module 206 of the drone specifically includes:
and the centroid coordinate determination unit of the unmanned aerial vehicle is used for calculating the centroid coordinate of the unmanned aerial vehicle in the segmentation result image output by the unmanned aerial vehicle detection model based on the centroid calculation function when the visible light image contains the unmanned aerial vehicle.
And the depth value determining unit is used for obtaining the depth value of the centroid coordinate position of the unmanned aerial vehicle on the corresponding depth image.
And the space central point coordinate determination unit of the unmanned aerial vehicle is used for determining the space central point coordinate of the unmanned aerial vehicle according to the centroid coordinate and the depth value of the unmanned aerial vehicle.
The interference signal transmitting module 207 specifically includes:
and the altitude angle and azimuth angle determining unit for transmitting the interference signal is used for determining the altitude angle and the azimuth angle for transmitting the interference signal according to the space central point coordinate of the unmanned aerial vehicle.
And the interference signal transmitting unit is used for controlling the interference equipment to transmit interference signals according to the altitude angle and the azimuth angle through the servo system.
An unmanned aerial vehicle counteraction system based on unmanned aerial vehicle discernment still includes:
and the monitoring result real-time output module is used for inputting the visible light image acquired in real time into the unmanned aerial vehicle detection model to output a segmentation result graph, and determining a monitoring result according to the output segmentation result graph, wherein the monitoring result is that the unmanned aerial vehicle is contained or the unmanned aerial vehicle is not contained.
And the interference equipment resetting and closing module is used for controlling the interference equipment to reset and close the interference equipment through the servo system when the monitoring result at the current moment is that the interference equipment does not contain the unmanned aerial vehicle and the monitoring result at the previous moment contains the unmanned aerial vehicle.
The feature extraction module is used for inputting the input image into 2 convolution layers with 3 × 3 kernels and 1 pooling layer with 4 × 4 pooling kernels to obtain a feature map U1, inputting the feature map U1 into 2 convolution layers with 3 × 3 kernels and 1 pooling layer with 2 × 2 pooling kernels to obtain a feature map U2, inputting the feature map U2 into 2 convolution layers with 3 × 3 kernels and 1 pooling layer with 2 × 2 pooling kernels to obtain a feature map U3, and inputting the feature map U3 into 2 convolution layers with 3 × 3 kernels and 1 pooling layer with 2 × 2 pooling kernels to obtain a feature map U4.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.
Claims (10)
1. An unmanned aerial vehicle counter-braking method based on unmanned aerial vehicle identification is characterized by comprising the following steps:
acquiring an unmanned aerial vehicle detection data set;
constructing a deconvolution fusion network, wherein the deconvolution fusion network is used for outputting a segmentation result graph;
training the deconvolution fusion network by adopting the unmanned aerial vehicle detection data set to obtain an unmanned aerial vehicle detection model;
acquiring a visible light image of a region to be monitored and a depth image corresponding to the visible light image;
inputting the visible light image into the unmanned aerial vehicle detection model, and judging whether the visible light image contains the unmanned aerial vehicle or not according to the output of the unmanned aerial vehicle detection model;
when the visible light image contains the unmanned aerial vehicle, determining the spatial center point coordinate of the unmanned aerial vehicle according to the segmentation result image output by the unmanned aerial vehicle detection model and the corresponding depth image;
transmitting an interference signal to the unmanned aerial vehicle according to the space central point coordinate of the unmanned aerial vehicle;
the deconvolution fusion network comprises a feature extraction module, a first deconvolution fusion module, a second deconvolution fusion module, a third deconvolution fusion module, a fourth deconvolution fusion module and a segmentation result graph determination module, wherein the feature extraction module is used for generating a feature graph U1, a feature graph U2, a feature graph U3, a feature graph U4 and a feature graph U5 which are sequentially reduced in size according to an input image; the first deconvolution fusion module is used for performing convolution operation with the channel number of 2 on the feature graph U4 to obtain a feature graph U4', performing convolution operation with the channel number of 2 on the feature graph U5 to obtain a feature graph U5', performing deconvolution operation on the feature graph U5 'to obtain a feature graph V5 with the same size as the feature graph U4', and performing pixel-by-pixel addition on the feature graph V5 and the feature graph U4 'and then performing convolution to obtain a feature graph V4'; the second deconvolution fusion module is used for performing convolution operation with the channel number being 2 on the feature map U3 to obtain a feature map U3', obtaining a feature map V4 with the same size as the feature map U3' by performing deconvolution operation on the feature map V4', and performing convolution after pixel-by-pixel addition on the feature map V4 and the feature map U3' to obtain a feature map V3'; the third deconvolution fusion module is used for performing convolution operation with the channel number being 2 on the feature graph U2 to obtain a feature graph U2', performing deconvolution operation on the feature graph V3' to obtain a feature graph V3 with the same size as the feature graph U2', performing pixel-by-pixel addition on the feature graph V3 and the feature graph U2' and then performing convolution to obtain a feature graph V2', the fourth deconvolution fusion module is 
used for performing convolution operation with the channel number being 2 on the feature graph U1 to obtain a feature graph U1', performing deconvolution operation on the feature graph V2 'to obtain a feature graph V2 with the same size as the feature graph U1', and performing pixel-by-pixel addition on the feature graph V2 and the feature graph U1 'and then performing convolution to obtain a feature graph V1'; the segmentation result graph determining module is used for deconvoluting the feature graph V1' to obtain a segmentation result graph V1 with the same size as the input image.
2. The drone countering method according to claim 1, wherein when the visible light image includes a drone, determining spatial center coordinates of the drone according to a segmentation result map output by the drone detection model and a corresponding depth image, specifically including:
when the visible light image contains the unmanned aerial vehicle, calculating the centroid coordinate of the unmanned aerial vehicle in the segmentation result image output by the unmanned aerial vehicle detection model based on a centroid calculation function;
obtaining a depth value of the centroid coordinate position of the unmanned aerial vehicle on the corresponding depth image;
and determining the space central point coordinate of the unmanned aerial vehicle according to the centroid coordinate of the unmanned aerial vehicle and the depth value.
3. The drone countermeasure method based on drone identification according to claim 1, wherein the transmitting of the jamming signal to the drone according to the space center point coordinates of the drone specifically includes:
determining the altitude angle and the azimuth angle of interference signal transmission according to the space central point coordinates of the unmanned aerial vehicle;
and controlling interference equipment through a servo system to transmit interference signals according to the altitude angle and the azimuth angle.
4. The drone opposing method based on drone identification of claim 3, the method further comprising:
inputting a visible light image acquired in real time into the unmanned aerial vehicle detection model to output a segmentation result graph, and determining a monitoring result according to the output segmentation result graph, wherein the monitoring result is that the unmanned aerial vehicle is contained or not contained;
and when the monitoring result at the current moment is that the unmanned aerial vehicle is not contained and the monitoring result at the last moment is that the unmanned aerial vehicle is contained, controlling the interference equipment to reset and close the interference equipment through a servo system.
5. The unmanned aerial vehicle reverse braking method based on unmanned aerial vehicle identification according to claim 1, wherein the feature extraction module is configured to input the input image into 2 convolution layers with 3 × 3 kernels and 1 pooling layer with 4 × 4 kernels to obtain a feature map U1, input the feature map U1 into 2 convolution layers with 3 × 3 kernels and 1 pooling layer with 2 × 2 kernels to obtain a feature map U2, input the feature map U2 into 2 convolution layers with 3 × 3 kernels and 1 pooling layer with 2 × 2 kernels to obtain a feature map U3, and input the feature map U3 into 2 convolution layers with 3 × 3 kernels and 1 pooling layer with 2 × 2 kernels to obtain a feature map U4.
6. The utility model provides an unmanned aerial vehicle counter-braking system based on unmanned aerial vehicle discernment which characterized in that includes:
the data set acquisition module is used for acquiring an unmanned aerial vehicle detection data set;
the deconvolution fusion network building module is used for building a deconvolution fusion network, and the deconvolution fusion network is used for outputting a segmentation result graph;
the deconvolution fusion network training module is used for training the deconvolution fusion network by adopting the unmanned aerial vehicle detection data set to obtain an unmanned aerial vehicle detection model;
the device comprises a to-be-monitored area image acquisition module, a depth image acquisition module and a monitoring module, wherein the to-be-monitored area image acquisition module is used for acquiring a visible light image of a to-be-monitored area and a depth image corresponding to the visible light image;
the unmanned aerial vehicle detection module is used for inputting the visible light image into the unmanned aerial vehicle detection model and judging whether the visible light image contains the unmanned aerial vehicle or not according to the output of the unmanned aerial vehicle detection model;
the space central point coordinate determination module of the unmanned aerial vehicle is used for determining the space central point coordinate of the unmanned aerial vehicle according to the segmentation result image output by the unmanned aerial vehicle detection model and the corresponding depth image when the visible light image contains the unmanned aerial vehicle;
the interference signal transmitting module is used for transmitting an interference signal to the unmanned aerial vehicle according to the space central point coordinate of the unmanned aerial vehicle;
the deconvolution fusion network comprises a feature extraction module, a first deconvolution fusion module, a second deconvolution fusion module, a third deconvolution fusion module, a fourth deconvolution fusion module and a segmentation result graph determination module, wherein the feature extraction module is used for generating a feature graph U1, a feature graph U2, a feature graph U3, a feature graph U4 and a feature graph U5 which are sequentially reduced in size according to an input image; the first deconvolution fusion module is used for performing convolution operation with the channel number of 2 on the feature graph U4 to obtain a feature graph U4', performing convolution operation with the channel number of 2 on the feature graph U5 to obtain a feature graph U5', performing deconvolution operation on the feature graph U5 'to obtain a feature graph V5 with the same size as the feature graph U4', and performing pixel-by-pixel addition on the feature graph V5 and the feature graph U4 'and then performing convolution to obtain a feature graph V4'; the second deconvolution fusion module is used for performing convolution operation with the channel number being 2 on the feature map U3 to obtain a feature map U3', obtaining a feature map V4 with the same size as the feature map U3' by performing deconvolution operation on the feature map V4', and performing convolution after pixel-by-pixel addition on the feature map V4 and the feature map U3' to obtain a feature map V3'; the third deconvolution fusion module is used for performing convolution operation with the channel number being 2 on the feature graph U2 to obtain a feature graph U2', performing deconvolution operation on the feature graph V3' to obtain a feature graph V3 with the same size as the feature graph U2', performing pixel-by-pixel addition on the feature graph V3 and the feature graph U2' and then performing convolution to obtain a feature graph V2', the fourth deconvolution fusion module is 
used for performing convolution operation with the channel number being 2 on the feature graph U1 to obtain a feature graph U1', performing deconvolution operation on the feature graph V2 'to obtain a feature graph V2 with the same size as the feature graph U1', and performing pixel-by-pixel addition on the feature graph V2 and the feature graph U1 'and then performing convolution to obtain a feature graph V1'; the segmentation result graph determining module is used for deconvoluting the feature graph V1' to obtain a segmentation result graph V1 with the same size as the input image.
7. The drone reaction system based on drone identification according to claim 6, wherein the drone space center point coordinate determination module specifically includes:
the centroid coordinate determination unit of the unmanned aerial vehicle is used for calculating the centroid coordinate of the unmanned aerial vehicle in the segmentation result image output by the unmanned aerial vehicle detection model based on a centroid calculation function when the visible light image contains the unmanned aerial vehicle;
the depth value determining unit is used for obtaining the depth value of the centroid coordinate position of the unmanned aerial vehicle on the corresponding depth image;
and the space central point coordinate determination unit of the unmanned aerial vehicle is used for determining the space central point coordinate of the unmanned aerial vehicle according to the centroid coordinate of the unmanned aerial vehicle and the depth value.
8. The drone reaction system based on drone identification according to claim 6, characterized in that the jamming signal emission module specifically comprises:
the device comprises an interference signal transmitting altitude angle and azimuth angle determining unit, a signal transmitting and receiving unit and a signal transmitting and receiving unit, wherein the interference signal transmitting altitude angle and azimuth angle determining unit is used for determining an altitude angle and an azimuth angle of interference signal transmission according to the space central point coordinates of the unmanned aerial vehicle;
and the interference signal transmitting unit is used for controlling interference equipment to transmit interference signals according to the altitude angle and the azimuth angle through a servo system.
9. The drone response system based on drone identification of claim 8, the system further comprising:
the monitoring result real-time output module is used for inputting the visible light image acquired in real time into the unmanned aerial vehicle detection model to output a segmentation result graph, and determining a monitoring result according to the output segmentation result graph, wherein the monitoring result is that the unmanned aerial vehicle is contained or the unmanned aerial vehicle is not contained;
and the interference equipment resetting and closing module is used for controlling the interference equipment to reset and close through the servo system when the monitoring result at the current moment is that the interference equipment does not contain the unmanned aerial vehicle and the monitoring result at the previous moment contains the unmanned aerial vehicle.
10. The unmanned aerial vehicle anti-jamming system based on unmanned aerial vehicle identification according to claim 6, wherein the feature extraction module is configured to input the input image into 2 convolutional layers with 3 × 3 kernels and 1 pooling layer with 4 × 4 kernels to obtain a feature map U1, input the feature map U1 into 2 convolutional layers with 3 × 3 kernels and 1 pooling layer with 2 × 2 kernels to obtain a feature map U2, input the feature map U2 into 2 convolutional layers with 3 × 3 kernels and 1 pooling layer with 2 × 2 kernels to obtain a feature map U3, and input the feature map U3 into 2 convolutional layers with 3 × 3 kernels and 1 pooling layer with 2 × 2 kernels to obtain a feature map U4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310063450.4A CN115861938B (en) | 2023-02-06 | 2023-02-06 | Unmanned aerial vehicle countering method and system based on unmanned aerial vehicle recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115861938A true CN115861938A (en) | 2023-03-28 |
CN115861938B CN115861938B (en) | 2023-05-26 |
Family
ID=85657620
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310063450.4A Active CN115861938B (en) | 2023-02-06 | 2023-02-06 | Unmanned aerial vehicle countering method and system based on unmanned aerial vehicle recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115861938B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109753903A (en) * | 2019-02-27 | 2019-05-14 | 北航(四川)西部国际创新港科技有限公司 | A kind of unmanned plane detection method based on deep learning |
US20200250468A1 (en) * | 2019-01-31 | 2020-08-06 | StradVision, Inc. | Learning method and learning device for sensor fusion to integrate information acquired by radar capable of distance estimation and information acquired by camera to thereby improve neural network for supporting autonomous driving, and testing method and testing device using the same |
CN111666822A (en) * | 2020-05-13 | 2020-09-15 | 飒铂智能科技有限责任公司 | Low-altitude unmanned aerial vehicle target detection method and system based on deep learning |
CN113255589A (en) * | 2021-06-25 | 2021-08-13 | 北京电信易通信息技术股份有限公司 | Target detection method and system based on multi-convolution fusion network |
CN113554656A (en) * | 2021-07-13 | 2021-10-26 | 中国科学院空间应用工程与技术中心 | Optical remote sensing image example segmentation method and device based on graph neural network |
CN113822383A (en) * | 2021-11-23 | 2021-12-21 | 北京中超伟业信息安全技术股份有限公司 | Unmanned aerial vehicle detection method and system based on multi-domain attention mechanism |
US20210397851A1 (en) * | 2014-09-30 | 2021-12-23 | PureTech Systems Inc. | System And Method For Deep Learning Enhanced Object Incident Detection |
CN114550016A (en) * | 2022-04-22 | 2022-05-27 | 北京中超伟业信息安全技术股份有限公司 | Unmanned aerial vehicle positioning method and system based on context information perception |
CN115187959A (en) * | 2022-07-14 | 2022-10-14 | 清华大学 | Method and system for landing flying vehicle in mountainous region based on binocular vision |
CN115223067A (en) * | 2022-09-19 | 2022-10-21 | 季华实验室 | Point cloud fusion method, device and equipment applied to unmanned aerial vehicle and storage medium |
2023

- 2023-02-06 — Application CN202310063450.4A filed (CN); granted as CN115861938B, status: Active
Also Published As
Publication number | Publication date |
---|---|
CN115861938B (en) | 2023-05-26 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN109871763B (en) | Specific target tracking method based on YOLO | |
CN111507166A (en) | Method and apparatus for learning CNN by using camera and radar together | |
CN110825101B (en) | Unmanned aerial vehicle autonomous landing method based on deep convolutional neural network | |
EP3690705A1 (en) | Method and device for generating deceivable composite image by using gan including generating neural network and discriminating neural network to allow surveillance system to recognize surroundings and detect rare event more accurately | |
CN107590456B (en) | Method for detecting small and micro targets in high-altitude video monitoring | |
CN111338382B (en) | Unmanned aerial vehicle path planning method guided by safety situation | |
CN108108697B (en) | Real-time unmanned aerial vehicle video target detection and tracking method | |
CN112068111A (en) | Unmanned aerial vehicle target detection method based on multi-sensor information fusion | |
CN113203409B (en) | Method for constructing navigation map of mobile robot in complex indoor environment | |
CN111985365A (en) | Straw burning monitoring method and system based on target detection technology | |
CN109635661B (en) | Far-field wireless charging receiving target detection method based on convolutional neural network | |
CN110084837B (en) | Target detection and tracking method based on unmanned aerial vehicle video | |
CN110858414A (en) | Image processing method and device, readable storage medium and augmented reality system | |
CN112926461B (en) | Neural network training and driving control method and device | |
CN110634138A (en) | Bridge deformation monitoring method, device and equipment based on visual perception | |
CN114998737A (en) | Remote smoke detection method, system, electronic equipment and medium | |
CN109754424A (en) | Correlation filtering track algorithm based on fusion feature and adaptive updates strategy | |
CN115424072A (en) | Unmanned aerial vehicle defense method based on detection technology | |
CN114419444A (en) | Lightweight high-resolution bird group identification method based on deep learning network | |
Li et al. | Weak moving object detection in optical remote sensing video with motion-drive fusion network | |
CN109697428B (en) | Unmanned aerial vehicle identification and positioning system based on RGB _ D and depth convolution network | |
CN114217303A (en) | Target positioning and tracking method and device, underwater robot and storage medium | |
CN112084815A (en) | Target detection method based on camera focal length conversion, storage medium and processor | |
CN115861938A (en) | Unmanned aerial vehicle counter-braking method and system based on unmanned aerial vehicle identification | |
CN105139433A (en) | Method for simulating infrared small target image sequence based on mean value model |
Legal Events

Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |