CN115861938B - Unmanned aerial vehicle countering method and system based on unmanned aerial vehicle recognition - Google Patents

Unmanned aerial vehicle countering method and system based on unmanned aerial vehicle recognition

Info

Publication number
CN115861938B
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
feature
image
feature map
Prior art date
Legal status
Active
Application number
CN202310063450.4A
Other languages
Chinese (zh)
Other versions
CN115861938A (en)
Inventor
李雪茹
罗远哲
刘瑞景
王玲洁
郑玉洁
刘志明
张春涛
赵利波
孙成山
Current Assignee
Beijing China Super Industry Information Security Technology Co., Ltd.
Original Assignee
Beijing China Super Industry Information Security Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Beijing China Super Industry Information Security Technology Co., Ltd.
Priority to CN202310063450.4A
Publication of CN115861938A
Application granted
Publication of CN115861938B

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of unmanned aerial vehicle countering, and particularly relates to an unmanned aerial vehicle countering method and system based on unmanned aerial vehicle identification. The method comprises the following steps: acquiring an unmanned aerial vehicle detection data set; constructing a deconvolution fusion network for outputting a segmentation result graph; training the deconvolution fusion network with the unmanned aerial vehicle detection data set to obtain an unmanned aerial vehicle detection model; obtaining a visible light image of a region to be monitored and the depth image corresponding to the visible light image; inputting the visible light image into the unmanned aerial vehicle detection model and judging, from the model's output, whether the visible light image contains an unmanned aerial vehicle; when it does, determining the spatial center point coordinates of the unmanned aerial vehicle from the segmentation result graph output by the detection model and the corresponding depth image; and transmitting an interference signal to the unmanned aerial vehicle according to those coordinates. The invention improves both the accuracy of unmanned aerial vehicle identification and the accuracy of interference.

Description

Unmanned aerial vehicle countering method and system based on unmanned aerial vehicle recognition
Technical Field
The invention relates to the technical field of unmanned aerial vehicle countering, in particular to an unmanned aerial vehicle countering method and system based on unmanned aerial vehicle identification.
Background
In recent years, China's unmanned aerial vehicle industry has developed rapidly, and the application scenarios of unmanned aerial vehicles have continuously widened and deepened. However, the rapid growth in the number of unmanned aerial vehicles, together with an imperfect supervision system, has caused a series of problems such as leakage of personal privacy and of confidential information, seriously threatening public safety and military security. Therefore, countering low-altitude unmanned aerial vehicle targets is important.
Compared with countering means such as laser weapons and net capture, radio interference can counter an unmanned aerial vehicle without damaging it, and has the advantages of low implementation difficulty and low cost. Existing radio interference countermeasures mostly rely on manual observation or photoelectric equipment to find and position the unmanned aerial vehicle target and then counter it through an interference system; if the unmanned aerial vehicle is found too late, misidentified, or positioned with low precision, the countering effect suffers and electromagnetic environment pollution may result. Therefore, an automatic unmanned aerial vehicle countering method that can rapidly and accurately find and position an unmanned aerial vehicle and apply directional interference is needed.
Disclosure of Invention
The invention aims to provide an unmanned aerial vehicle countering method and system based on unmanned aerial vehicle identification, which improve the accuracy of unmanned aerial vehicle identification and the accuracy of interference.
In order to achieve the above object, the present invention provides the following solutions:
an unmanned aerial vehicle countering method based on unmanned aerial vehicle identification, comprising:
acquiring an unmanned aerial vehicle detection data set;
constructing a deconvolution fusion network, wherein the deconvolution fusion network is used for outputting a segmentation result graph;
training the deconvolution fusion network by adopting the unmanned aerial vehicle detection data set to obtain an unmanned aerial vehicle detection model;
obtaining a visible light image of a region to be monitored and a depth image corresponding to the visible light image;
inputting the visible light image into the unmanned aerial vehicle detection model, and judging whether the visible light image contains the unmanned aerial vehicle according to the output of the unmanned aerial vehicle detection model;
when the visible light image contains the unmanned aerial vehicle, determining the space center point coordinate of the unmanned aerial vehicle according to the segmentation result image output by the unmanned aerial vehicle detection model and the corresponding depth image;
transmitting an interference signal to the unmanned aerial vehicle according to the space center point coordinates of the unmanned aerial vehicle;
the deconvolution fusion network comprises a feature extraction module, a first deconvolution fusion module, a second deconvolution fusion module, a third deconvolution fusion module, a fourth deconvolution fusion module and a segmentation result diagram determining module, wherein the feature extraction module is used for generating a feature diagram U1, a feature diagram U2, a feature diagram U3, a feature diagram U4 and a feature diagram U5 with sequentially reduced sizes according to an input image; the first deconvolution fusion module is used for carrying out convolution operation with the channel number of 2 on the feature map U4 to obtain a feature map U4', carrying out convolution operation with the channel number of 2 on the feature map U5 to obtain a feature map U5', carrying out deconvolution operation on the feature map U5' to obtain a feature map V5 with the same size as the feature map U4', and carrying out pixel-by-pixel addition on the feature map V5 and the feature map U4' to obtain a feature map V4; the second deconvolution fusion module is configured to perform a convolution operation with a channel number of 2 on the feature map U3 to obtain a feature map U3', perform a deconvolution operation on the feature map V4' to obtain a feature map V4 with the same size as the feature map U3', and perform a pixel-by-pixel addition on the feature map V4 and the feature map U3' to obtain a feature map V3'; the third deconvolution fusion module is used for carrying out convolution operation with the channel number of 2 on the feature image U2 to obtain a feature image U2', carrying out deconvolution operation on the feature image V3' to obtain a feature image V3 with the same size as the feature image U2', carrying out pixel-by-pixel addition on the feature image V3 and the feature image U2' to obtain a feature image V2', carrying out convolution operation with the channel number of 2 on the feature image U1 to obtain a feature image U1', carrying out deconvolution operation on the feature image V2 'to obtain a feature image V2 with the same size as the feature image U1', and carrying out pixel-by-pixel addition on the feature image V2 and the feature image U1 'to obtain a feature image V1'; the segmentation result diagram determining module is used for deconvoluting the characteristic diagram V1' to obtain a segmentation result diagram V1 with the same size as the input image.
Optionally, when the visible light image includes the unmanned aerial vehicle, determining a spatial center point coordinate of the unmanned aerial vehicle according to the segmentation result graph output by the unmanned aerial vehicle detection model and the corresponding depth image, specifically includes:
when the visible light image contains the unmanned aerial vehicle, calculating centroid coordinates of the unmanned aerial vehicle in a segmentation result diagram output by the unmanned aerial vehicle detection model based on a centroid calculation function;
obtaining a depth value of the centroid coordinate position of the unmanned aerial vehicle on the corresponding depth image;
and determining the space center point coordinates of the unmanned aerial vehicle according to the centroid coordinates of the unmanned aerial vehicle and the depth value.
Optionally, the transmitting an interference signal to the unmanned aerial vehicle according to the spatial center point coordinates of the unmanned aerial vehicle specifically includes:
determining an altitude angle and an azimuth angle of interference signal emission according to the space center point coordinates of the unmanned aerial vehicle;
and controlling the interference equipment to emit interference signals according to the altitude angle and the azimuth angle through a servo system.
Optionally, the method further comprises:
inputting a visible light image acquired in real time into the unmanned aerial vehicle detection model to output a segmentation result diagram, and determining a monitoring result according to the output segmentation result diagram, wherein the monitoring result is either that an unmanned aerial vehicle is contained or that no unmanned aerial vehicle is contained;
when the monitoring result at the current moment is that the unmanned aerial vehicle is not contained and the monitoring result at the previous moment is that the unmanned aerial vehicle is contained, the interference equipment is controlled to reset through the servo system, and the interference equipment is closed.
Optionally, the feature extraction module is configured to input the input image into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 4×4 kernel to obtain a feature map U1, input the feature map U1 into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain a feature map U2, input the feature map U2 into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain a feature map U3, and input the feature map U3 into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain a feature map U4.
The invention discloses an unmanned aerial vehicle countering system based on unmanned aerial vehicle identification, which comprises the following components:
the data set acquisition module is used for acquiring an unmanned aerial vehicle detection data set;
the deconvolution fusion network construction module is used for constructing a deconvolution fusion network which is used for outputting a segmentation result graph;
the deconvolution fusion network training module is used for training the deconvolution fusion network by adopting the unmanned aerial vehicle detection data set to obtain an unmanned aerial vehicle detection model;
the device comprises an area to be monitored image acquisition module, a display module and a display module, wherein the area to be monitored image acquisition module is used for acquiring a visible light image of an area to be monitored and a depth image corresponding to the visible light image;
the unmanned aerial vehicle detection module is used for inputting the visible light image into the unmanned aerial vehicle detection model, and judging whether the visible light image contains an unmanned aerial vehicle or not according to the output of the unmanned aerial vehicle detection model;
the unmanned aerial vehicle space center point coordinate determining module is used for determining the space center point coordinate of the unmanned aerial vehicle according to the segmentation result graph output by the unmanned aerial vehicle detection model and the corresponding depth image when the visible light image contains the unmanned aerial vehicle;
the interference signal transmitting module is used for transmitting an interference signal to the unmanned aerial vehicle according to the space center point coordinates of the unmanned aerial vehicle;
the deconvolution fusion network comprises a feature extraction module, a first deconvolution fusion module, a second deconvolution fusion module, a third deconvolution fusion module, a fourth deconvolution fusion module and a segmentation result diagram determining module, wherein the feature extraction module is used for generating a feature diagram U1, a feature diagram U2, a feature diagram U3, a feature diagram U4 and a feature diagram U5 with sequentially reduced sizes according to an input image; the first deconvolution fusion module is used for carrying out convolution operation with the channel number of 2 on the feature map U4 to obtain a feature map U4', carrying out convolution operation with the channel number of 2 on the feature map U5 to obtain a feature map U5', carrying out deconvolution operation on the feature map U5' to obtain a feature map V5 with the same size as the feature map U4', and carrying out pixel-by-pixel addition on the feature map V5 and the feature map U4' to obtain a feature map V4; the second deconvolution fusion module is configured to perform a convolution operation with a channel number of 2 on the feature map U3 to obtain a feature map U3', perform a deconvolution operation on the feature map V4' to obtain a feature map V4 with the same size as the feature map U3', and perform a pixel-by-pixel addition on the feature map V4 and the feature map U3' to obtain a feature map V3'; the third deconvolution fusion module is used for carrying out convolution operation with the channel number of 2 on the feature image U2 to obtain a feature image U2', carrying out deconvolution operation on the feature image V3' to obtain a feature image V3 with the same size as the feature image U2', carrying out pixel-by-pixel addition on the feature image V3 and the feature image U2' to obtain a feature image V2', carrying out convolution operation with the channel number of 2 on the feature image U1 to obtain a feature image U1', carrying out deconvolution operation on the feature image V2 'to obtain a feature image V2 with the same size as the feature image U1', and carrying out pixel-by-pixel addition on the feature image V2 and the feature image U1 'to obtain a feature image V1'; the segmentation result diagram determining module is used for deconvoluting the characteristic diagram V1' to obtain a segmentation result diagram V1 with the same size as the input image.
Optionally, the spatial center point coordinate determining module of the unmanned aerial vehicle specifically includes:
the unmanned aerial vehicle detection model comprises a centroid coordinate determining unit, a processing unit and a processing unit, wherein the centroid coordinate determining unit is used for calculating the centroid coordinate of the unmanned aerial vehicle in a segmentation result diagram output by the unmanned aerial vehicle detection model based on a centroid calculation function when the visible light image contains the unmanned aerial vehicle;
the depth value determining unit is used for obtaining the depth value of the centroid coordinate position of the unmanned aerial vehicle on the corresponding depth image;
and the space center point coordinate determining unit of the unmanned aerial vehicle is used for determining the space center point coordinate of the unmanned aerial vehicle according to the centroid coordinate of the unmanned aerial vehicle and the depth value.
Optionally, the interference signal transmitting module specifically includes:
the system comprises an interference signal transmitting altitude and azimuth determining unit, a receiving unit and a receiving unit, wherein the interference signal transmitting altitude and azimuth determining unit is used for determining the interference signal transmitting altitude and azimuth according to the space center point coordinates of the unmanned aerial vehicle;
and the interference signal transmitting unit is used for controlling the interference equipment to transmit interference signals according to the altitude angle and the azimuth angle through a servo system.
Optionally, the system further comprises:
the monitoring result real-time output module is used for inputting the visible light image acquired in real time into the unmanned aerial vehicle detection model to output a segmentation result diagram, and determining a monitoring result according to the output segmentation result diagram, wherein the monitoring result is either that an unmanned aerial vehicle is contained or that no unmanned aerial vehicle is contained;
and the interference equipment resetting and closing module is used for controlling the interference equipment to be reset and closing the interference equipment through the servo system when the monitoring result at the current moment is that the unmanned aerial vehicle is not contained and the monitoring result at the last moment is that the unmanned aerial vehicle is contained.
Optionally, the feature extraction module is configured to input the input image into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 4×4 kernel to obtain a feature map U1, input the feature map U1 into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain a feature map U2, input the feature map U2 into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain a feature map U3, and input the feature map U3 into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain a feature map U4.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention discloses an unmanned aerial vehicle countering method and system based on unmanned aerial vehicle recognition, wherein a counterconvolution fusion network expands the size of a deep semantic feature map layer by layer through a counterconvolution operation, and ensures high resolution of an output feature map in a mode of gradually fusing with low-layer features, so that end-to-end and accurate pixel level segmentation of an unmanned aerial vehicle target is realized, the accuracy of unmanned aerial vehicle recognition is improved, in addition, the spatial center point coordinate of the unmanned aerial vehicle is determined according to a segmentation result map and a corresponding depth image, an interference signal is transmitted to the unmanned aerial vehicle according to the spatial center point coordinate of the unmanned aerial vehicle, the accuracy of interference signal transmission is improved, and the countering efficiency of the unmanned aerial vehicle is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a first schematic flow chart of the unmanned aerial vehicle countering method based on unmanned aerial vehicle recognition according to the present invention;
Fig. 2 is a second schematic flow chart of the unmanned aerial vehicle countering method based on unmanned aerial vehicle recognition according to the present invention;
Fig. 3 is a schematic structural diagram of the deconvolution fusion network according to the present invention;
Fig. 4 is a schematic structural diagram of the unmanned aerial vehicle countering system based on unmanned aerial vehicle recognition according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention aims to provide an unmanned aerial vehicle countering method and system based on unmanned aerial vehicle identification, which improve the accuracy of unmanned aerial vehicle identification and the accuracy of interference.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Fig. 1 and fig. 2 are schematic flow charts of the unmanned aerial vehicle countering method based on unmanned aerial vehicle recognition according to the present invention. As shown in figs. 1-2, the unmanned aerial vehicle countering method based on unmanned aerial vehicle recognition includes:
step 101: and acquiring a detection data set of the unmanned aerial vehicle.
Step 102: and constructing a deconvolution fusion network, wherein the deconvolution fusion network is used for outputting the segmentation result graph.
As shown in fig. 3, the deconvolution fusion network includes a feature extraction module, a first deconvolution fusion module, a second deconvolution fusion module, a third deconvolution fusion module, a fourth deconvolution fusion module and a segmentation result diagram determining module; the dashed boxes in fig. 3, from top to bottom, are the first, second, third and fourth deconvolution fusion modules in sequence. The feature extraction module is configured to generate a feature map U1, a feature map U2, a feature map U3, a feature map U4 and a feature map U5 with sequentially decreasing sizes from an input image. The first deconvolution fusion module is configured to perform a convolution operation with a channel number of 2 on the feature map U4 to obtain a feature map U4', perform a convolution operation with a channel number of 2 on the feature map U5 to obtain a feature map U5', perform a deconvolution operation on the feature map U5' to obtain a feature map V5 with the same size as the feature map U4', and add the feature map V5 and the feature map U4' pixel by pixel to obtain a feature map V4'. The second deconvolution fusion module is configured to perform a convolution operation with a channel number of 2 on the feature map U3 to obtain a feature map U3', perform a deconvolution operation on the feature map V4' to obtain a feature map V4 with the same size as the feature map U3', and add the feature map V4 and the feature map U3' pixel by pixel to obtain a feature map V3'. The third deconvolution fusion module is configured to perform a convolution operation with a channel number of 2 on the feature map U2 to obtain a feature map U2', perform a deconvolution operation on the feature map V3' to obtain a feature map V3 with the same size as the feature map U2', and add the feature map V3 and the feature map U2' pixel by pixel to obtain a feature map V2'. The fourth deconvolution fusion module is configured to perform a convolution operation with a channel number of 2 on the feature map U1 to obtain a feature map U1', perform a deconvolution operation on the feature map V2' to obtain a feature map V2 with the same size as the feature map U1', and add the feature map V2 and the feature map U1' pixel by pixel to obtain a feature map V1'. The segmentation result diagram determining module is configured to deconvolute the feature map V1' to obtain a segmentation result diagram V1 with the same size as the input image.
The feature extraction module is used for inputting the input image into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 4×4 kernel to obtain a feature map U1, inputting the feature map U1 into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain a feature map U2, inputting the feature map U2 into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain a feature map U3, and inputting the feature map U3 into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain a feature map U4.
The following shows the design process of the deconvolution fusion network in detail, taking as an example an input visible light monitoring image (size 1024×1024×3) containing an unauthorized ("black-flying") unmanned aerial vehicle:
(1) First, feature extraction is performed on the monitoring image (the input image): the monitoring image is input into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 4×4 kernel to obtain the feature map U1; the feature map U1 is input into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain the feature map U2; the operation from U1 to U2 is repeated to generate deeper feature maps with richer semantic representations, namely U3 and U4; finally, a convolution with 4096 kernels and a stride of 2 is applied to U4 to obtain the feature map U5.
Through this layer-by-layer feature extraction, the output feature maps of the layers and their corresponding sizes are U1: 256×256×24, U2: 128×128×48, U3: 64×64×96, U4: 32×32×192 and U5: 16×16×4096.
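For illustration, the backbone just described can be sketched as follows (a minimal PyTorch sketch; the pooling type, the ReLU activations, and the kernel size and padding of the final 4096-channel stride-2 convolution are assumptions not fixed by the text):

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, pool):
    # two 3x3 convolution layers followed by one pooling layer, as described above
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(pool))

class Backbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = conv_block(3, 24, pool=4)    # 1024x1024x3 -> 256x256x24 (U1)
        self.stage2 = conv_block(24, 48, pool=2)   # -> 128x128x48 (U2)
        self.stage3 = conv_block(48, 96, pool=2)   # -> 64x64x96   (U3)
        self.stage4 = conv_block(96, 192, pool=2)  # -> 32x32x192  (U4)
        self.stage5 = nn.Conv2d(192, 4096, 3, stride=2, padding=1)  # -> 16x16x4096 (U5)

    def forward(self, x):
        u1 = self.stage1(x)
        u2 = self.stage2(u1)
        u3 = self.stage3(u2)
        u4 = self.stage4(u3)
        u5 = self.stage5(u4)
        return u1, u2, u3, u4, u5
```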
(2) Next, the deconvolution fusion modules are constructed along the reverse direction of the backbone network:
and respectively carrying out convolution operation with the channel number of 2 on the characteristic diagram U4 and the characteristic diagram U5, so that the channel dimension of the characteristic diagram is converted into 2, and obtaining characteristic diagrams U4 'and U5', wherein the corresponding sizes of the characteristic diagrams are 32 x 2 and 16 x 2. The number of modified channels is 2 here because the task of the semantic segmentation is to identify and separate the unmanned aerial vehicle target from the background, i.e. the class of pixels in the image is both unmanned aerial vehicle and background.
Next, a deconvolution is performed on the feature map U5': the size of U5' is first enlarged to 32×32×2 by interpolation and zero padding, and a 1×1 forward convolution is then applied. After this deconvolution operation, a feature map V5 with its size expanded 2-fold is obtained. The feature map V5 is then added pixel by pixel to U4' of the same size, resulting in a feature map of size 32×32×2. To further enhance the fusion effect, the resulting feature map is passed through one more convolution layer to obtain the feature map V4'; its size remains unchanged, but its semantic feature information is enhanced. The above is the construction process of a deconvolution fusion module.
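A single deconvolution fusion module might then be sketched as below; the "interpolation and zero padding followed by a 1×1 forward convolution" is modeled as bilinear upsampling plus a 1×1 convolution, and the extra fusion convolution is assumed to be 3×3:

```python
import torch.nn as nn

class DeconvFusion(nn.Module):
    # expands a deep 2-channel map 2-fold and fuses it with the shallower 2-channel map
    def __init__(self):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=2, mode='bilinear',
                                    align_corners=False)      # interpolation step
        self.deconv = nn.Conv2d(2, 2, kernel_size=1)          # 1x1 forward convolution
        self.fuse = nn.Conv2d(2, 2, kernel_size=3, padding=1) # extra fusion convolution

    def forward(self, v_deep, u_shallow):
        v = self.deconv(self.upsample(v_deep))  # e.g. V5 from U5'
        return self.fuse(v + u_shallow)         # e.g. V4' from V5 + U4'
```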
(3) The feature maps V4' and U3' are then input into the next deconvolution fusion module to obtain a feature map V3' of size 64×64×2; the feature maps V3' and U2' are input into the following module to obtain a feature map V2' of size 128×128×2; finally, a feature map V1' of size 256×256×2 is obtained from the feature maps V2' and U1' and the last deconvolution fusion module.
In order to obtain the exact position of the unmanned aerial vehicle target, pixel-level segmentation of the input image is required. At this point the size of the feature map V1' is 1/4 of the input image in each spatial dimension, so a deconvolution operation is performed on V1' once more to enlarge it to the original image size, giving the final segmentation result map V1. The size of V1 is 1024×1024×2; the value of the first channel indicates the probability that a pixel belongs to the unmanned aerial vehicle target, and when that value is greater than the target threshold the pixel belongs to the unmanned aerial vehicle; the value of the second channel indicates the probability that a pixel belongs to the background, and when that value is greater than the background threshold the pixel lies in the background region.
The above is the design process of the deconvolution fusion network. The four deconvolution fusion modules contained in the network enlarge the size of the deep semantic feature maps layer by layer through deconvolution operations and ensure the high resolution of the output feature map through fusion with low-level features, thereby realizing end-to-end, accurate pixel-level segmentation.
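Combining the two sketches above, the whole network could be assembled roughly as follows (again an illustrative sketch, with the final 4-fold deconvolution modeled as upsampling plus a 1×1 convolution):

```python
import torch.nn as nn

class DeconvFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = Backbone()
        widths = [24, 48, 96, 192, 4096]
        # channel-number-2 convolutions producing U1'..U5'
        self.reduce = nn.ModuleList(nn.Conv2d(w, 2, 1) for w in widths)
        # four deconvolution fusion modules producing V4', V3', V2', V1'
        self.fusions = nn.ModuleList(DeconvFusion() for _ in range(4))
        # segmentation result map determining module: V1' (256x256) -> V1 (1024x1024)
        self.head = nn.Sequential(
            nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False),
            nn.Conv2d(2, 2, kernel_size=1))

    def forward(self, x):
        u = [r(f) for r, f in zip(self.reduce, self.backbone(x))]  # U1'..U5'
        v = u[4]                              # start from U5'
        for i, fusion in enumerate(self.fusions):
            v = fusion(v, u[3 - i])           # V4', V3', V2', V1'
        return self.head(v)                   # two-channel segmentation result map V1
```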
Step 103: and training the deconvolution fusion network by adopting the unmanned aerial vehicle detection data set to obtain an unmanned aerial vehicle detection model.
Step 103 specifically includes: when training the deconvolution fusion network with the unmanned aerial vehicle detection data set, the parameters of the entire network are updated based on a two-class cross-entropy loss function; after training, the deconvolution fusion model (the unmanned aerial vehicle detection model) is obtained.
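A minimal training loop consistent with this step, assuming the DeconvFusionNet sketch above and a data loader yielding images with per-pixel class labels (the optimizer and learning rate are assumptions):

```python
import torch
import torch.nn as nn

model = DeconvFusionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()  # two-class (drone / background) cross entropy

# loader: an unmanned aerial vehicle detection data set loader (assumed)
for images, masks in loader:       # masks: (N, H, W) long tensor, 0 = drone, 1 = background
    logits = model(images)         # (N, 2, H, W) two-channel segmentation output
    loss = criterion(logits, masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```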
Step 104: and obtaining a visible light image of the area to be monitored and a depth image corresponding to the visible light image.
Step 104 specifically includes:
and monitoring a security area (an area to be monitored) to be subjected to unmanned aerial vehicle reaction by adopting a depth camera, and shooting at regular time to obtain a visible light image and a corresponding depth picture of the security area.
Steps 103 and 104 are functions of the intelligent monitoring module.
Step 105: and inputting the visible light image into an unmanned aerial vehicle detection model, and judging whether the visible light image contains the unmanned aerial vehicle according to the output of the unmanned aerial vehicle detection model.
Step 105 specifically includes: the captured visible light image is input into the unmanned aerial vehicle detection model. When the output segmentation result map V1 contains unmanned aerial vehicle pixels, a non-cooperative unmanned aerial vehicle is present in the current security area; the intelligent monitoring module outputs "1" and converts the first-channel image of the segmentation result map V1 into a binary image, which it also outputs. When all pixels in the output segmentation result map V1 are classified as background, no unmanned aerial vehicle target has been found in the security area, and the intelligent monitoring module outputs "0".
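A sketch of this decision step, assuming the two-channel output has been passed through a softmax so each channel holds per-pixel probabilities:

```python
import numpy as np

def monitor_output(v1, target_threshold=0.5):
    # v1: (H, W, 2) array; channel 0 = drone probability, channel 1 = background
    binary = (v1[..., 0] > target_threshold).astype(np.uint8) * 255
    flag = 1 if binary.any() else 0   # "1": non-cooperative drone present, "0": none
    return flag, binary
```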
The control module obtains the output of the intelligent monitoring module.
Step 106: when the visible light image contains the unmanned aerial vehicle, determining the space center point coordinate of the unmanned aerial vehicle according to the segmentation result image output by the unmanned aerial vehicle detection model and the corresponding depth image.
Step 106 specifically includes:
when the visible light image contains the unmanned aerial vehicle (when the output of the intelligent monitoring module is 1), calculating centroid coordinates (a, b) of the unmanned aerial vehicle pixel region in the segmentation result diagram output by the unmanned aerial vehicle detection model based on the centroid calculation function of opencv. The segmentation result graph is a binary graph.
And obtaining a depth value d of the centroid coordinates (a, b) of the unmanned aerial vehicle on the corresponding depth image.
And determining the coordinates (x, y, z) of the spatial center point of the unmanned aerial vehicle according to the centroid coordinates and the depth values of the unmanned aerial vehicle.
The spatial center point coordinates (x, y, z) of the unmanned aerial vehicle are calculated according to formulas (1)-(3):

x = (a - c_f_x) · d / f_x (1)

y = (b - c_f_y) · d / f_y (2)

z = d (3)

In formulas (1)-(3), a and b are respectively the abscissa and the ordinate of the centroid of the unmanned aerial vehicle, d is the depth value at the centroid of the unmanned aerial vehicle, and the c_f values are internal parameters of the depth camera: f_x is the focal length value in the x direction, f_y is the focal length value in the y direction, c_f_x is the offset of the optical axis from the center of the projection plane coordinates in the x direction, and c_f_y is the offset of the optical axis from the center of the projection plane coordinates in the y direction.
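Formulas (1)-(3) amount to the standard pinhole back-projection; as a sketch (the intrinsic parameters f_x, f_y, c_f_x, c_f_y come from the depth camera's calibration):

```python
def backproject(a, b, d, f_x, f_y, c_f_x, c_f_y):
    # formulas (1)-(3): map the centroid (a, b) with depth d to camera-space (x, y, z)
    x = (a - c_f_x) * d / f_x
    y = (b - c_f_y) * d / f_y
    z = d
    return x, y, z
```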
Step 107: and transmitting an interference signal to the unmanned aerial vehicle according to the space center point coordinates of the unmanned aerial vehicle.
Step 107 specifically includes:
and determining the altitude and azimuth angle of the interference signal emission according to the space center point coordinates of the unmanned aerial vehicle.
The interference device is controlled by a servo system to emit interference signals according to the altitude angle and the azimuth angle.
In order to aim the interference equipment at the unmanned aerial vehicle target, the angle of the servo system needs to be adjusted. Therefore, the motion angles that the servo system needs to reach, namely the altitude angle α and the azimuth angle θ, are obtained from the spatial center point coordinates (x, y, z) of the unmanned aerial vehicle according to formulas (4)-(5):

α = arctan( y / √(x² + z²) ) (4)

θ = arctan( x / z ) (5)
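Formulas (4)-(5) as given above are reconstructed under the usual camera-coordinate convention (z is the forward depth axis); under that assumption the computation is:

```python
import math

def servo_angles(x, y, z):
    # formulas (4)-(5): motion angles the servo system needs to reach
    alpha = math.atan2(y, math.hypot(x, z))  # altitude angle (radians)
    theta = math.atan2(x, z)                 # azimuth angle (radians)
    return alpha, theta
```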
And the control module sends a servo motion instruction and an interference starting instruction according to the output value '1' of the intelligent monitoring module and the calculated motion angle of the servo system.
The unmanned aerial vehicle countering method based on unmanned aerial vehicle identification further comprises the following steps:
and inputting the visible light image acquired in real time into an unmanned aerial vehicle detection model to output a segmentation result diagram, and determining a monitoring result according to the output segmentation result diagram, wherein the monitoring result comprises or does not comprise the unmanned aerial vehicle.
When the monitoring result at the current moment is that no unmanned aerial vehicle is contained and the monitoring result at the previous moment is that an unmanned aerial vehicle is contained, the interference equipment is controlled through the servo system to return to its home position, and the interference equipment is switched off.
When the output of the intelligent monitoring module is "0", i.e., no unmanned aerial vehicle target is present, the control module judges whether the current value is the same as the output value of the intelligent monitoring module for the previous frame. If they are the same, no unmanned aerial vehicle target was present at the previous moment either, the radio interference transmitter (the interference equipment) is in the off state, and the control module sends no instruction. If they are different, an unmanned aerial vehicle target was present at the previous moment but is absent now (the target has been brought down by the countermeasure or has flown out of the security area); the radio interference transmitter is in the on state, and the control module sends a servo homing instruction and an interference shutdown instruction.
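This control logic reduces to a small state machine; a sketch with hypothetical servo and jammer interfaces (move_to, home, turn_on and turn_off are illustrative names, not from the patent):

```python
def control_step(current, previous, angles, servo, jammer):
    # current / previous: intelligent monitoring module outputs (1 = drone, 0 = none)
    if current == 1:
        servo.move_to(*angles)   # aim at the computed altitude and azimuth angles
        jammer.turn_on()
    elif previous == 1:          # target just disappeared: home the servo, stop jamming
        servo.home()
        jammer.turn_off()
    # if both are 0, no instruction is sent
```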
The interference module comprises interference equipment and a servo system, and after a host of the interference module receives an instruction of the control module, the servo system is controlled to move to a designated angle or azimuth of the control module according to the instruction type; and then the switch of the radio interference transmitter is turned on or off according to the instruction type.
That is, when an unmanned aerial vehicle target appears in the security area, the interference module first moves the servo system to the designated angle and starts the radio interference transmitter, so that the transmitter is aimed at the azimuth of the unmanned aerial vehicle target for interference countering; as the spatial position of the unmanned aerial vehicle changes, the angle of the interference transmitter is continuously adjusted according to new servo motion instructions so that it stays aligned with the unmanned aerial vehicle for accurate countering; and once the unmanned aerial vehicle has been brought down after losing control, i.e., has disappeared from the security area, the servo system returns to its home position and the interference transmitter is switched off.
In order to accurately identify and position the area of the unmanned aerial vehicle from the background, the invention discloses a deconvolution fusion network, wherein the deconvolution network gradually enlarges the size of a deep semantic feature map through deconvolution operation, and ensures the high resolution of an output feature map in a mode of gradually fusing with low-level features, thereby realizing the end-to-end and accurate pixel level segmentation of the unmanned aerial vehicle target.
According to the invention, the control module and the interference module can calculate and adjust the transmitting direction of the interference device according to the output result of the intelligent monitoring module, so that the directional accurate interference is realized.
Fig. 4 is a schematic structural diagram of the unmanned aerial vehicle countering system based on unmanned aerial vehicle recognition according to the present invention. As shown in fig. 4, the unmanned aerial vehicle countering system based on unmanned aerial vehicle recognition includes:
the data set acquisition module 201 is configured to acquire a detection data set of the unmanned aerial vehicle.
The deconvolution fusion network construction module 202 is configured to construct a deconvolution fusion network, where the deconvolution fusion network is configured to output a segmentation result graph.
The deconvolution fusion network training module 203 is configured to train the deconvolution fusion network by using the unmanned aerial vehicle detection data set, so as to obtain an unmanned aerial vehicle detection model.
The region to be monitored image obtaining module 204 is configured to obtain a visible light image and a depth image corresponding to the visible light image of the region to be monitored.
The unmanned aerial vehicle detection module 205 is configured to input a visible light image into an unmanned aerial vehicle detection model, and determine whether the visible light image includes an unmanned aerial vehicle according to an output of the unmanned aerial vehicle detection model.
The space center point coordinate determining module 206 of the unmanned aerial vehicle is configured to determine, when the visible light image includes the unmanned aerial vehicle, the space center point coordinate of the unmanned aerial vehicle according to the segmentation result map output by the unmanned aerial vehicle detection model and the corresponding depth image.
The interference signal transmitting module 207 is configured to transmit an interference signal to the unmanned aerial vehicle according to the spatial center point coordinates of the unmanned aerial vehicle.
The deconvolution fusion network comprises a feature extraction module, a first deconvolution fusion module, a second deconvolution fusion module, a third deconvolution fusion module, a fourth deconvolution fusion module and a segmentation result diagram determining module, wherein the feature extraction module is used for generating a feature map U1, a feature map U2, a feature map U3, a feature map U4 and a feature map U5 with sequentially decreasing sizes from an input image; the first deconvolution fusion module is used for performing a convolution operation with a channel number of 2 on the feature map U4 to obtain a feature map U4', performing a convolution operation with a channel number of 2 on the feature map U5 to obtain a feature map U5', performing a deconvolution operation on the feature map U5' to obtain a feature map V5 with the same size as the feature map U4', and adding the feature map V5 and the feature map U4' pixel by pixel to obtain a feature map V4'; the second deconvolution fusion module is used for performing a convolution operation with a channel number of 2 on the feature map U3 to obtain a feature map U3', performing a deconvolution operation on the feature map V4' to obtain a feature map V4 with the same size as the feature map U3', and adding the feature map V4 and the feature map U3' pixel by pixel to obtain a feature map V3'; the third deconvolution fusion module is used for performing a convolution operation with a channel number of 2 on the feature map U2 to obtain a feature map U2', performing a deconvolution operation on the feature map V3' to obtain a feature map V3 with the same size as the feature map U2', and adding the feature map V3 and the feature map U2' pixel by pixel to obtain a feature map V2'; the fourth deconvolution fusion module is used for performing a convolution operation with a channel number of 2 on the feature map U1 to obtain a feature map U1', performing a deconvolution operation on the feature map V2' to obtain a feature map V2 with the same size as the feature map U1', and adding the feature map V2 and the feature map U1' pixel by pixel to obtain a feature map V1'; the segmentation result diagram determining module is used for deconvoluting the feature map V1' to obtain a segmentation result diagram V1 with the same size as the input image.
The space center point coordinate determining module 206 of the unmanned plane specifically includes:
and the centroid coordinate determining unit of the unmanned aerial vehicle is used for calculating the centroid coordinate of the unmanned aerial vehicle in the segmentation result diagram output by the unmanned aerial vehicle detection model based on the centroid calculation function when the visible light image contains the unmanned aerial vehicle.
And the depth value determining unit is used for obtaining the depth value of the centroid coordinate position of the unmanned aerial vehicle on the corresponding depth image.
And the space center point coordinate determining unit of the unmanned aerial vehicle is used for determining the space center point coordinate of the unmanned aerial vehicle according to the centroid coordinate and the depth value of the unmanned aerial vehicle.
The interfering signal transmitting module 207 specifically includes:
and the interference signal transmitting altitude and azimuth angle determining unit is used for determining the altitude and azimuth angle of the interference signal transmitting according to the space center point coordinates of the unmanned aerial vehicle.
And the interference signal transmitting unit is used for controlling the interference equipment to transmit interference signals according to the altitude angle and the azimuth angle through the servo system.
Unmanned aerial vehicle reaction system based on unmanned aerial vehicle discernment still includes:
and the monitoring result real-time output module is used for inputting the visible light image acquired in real time into the unmanned aerial vehicle detection model to output a segmentation result diagram, and determining the monitoring result according to the output segmentation result diagram, wherein the monitoring result comprises or does not comprise the unmanned aerial vehicle.
And the interference equipment resetting and closing module is used for controlling the interference equipment to be reset and closing the interference equipment through the servo system when the monitoring result at the current moment is that the unmanned aerial vehicle is not contained and the monitoring result at the last moment is that the unmanned aerial vehicle is contained.
The feature extraction module is used for inputting the input image into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 4×4 kernel to obtain a feature map U1, inputting the feature map U1 into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain a feature map U2, inputting the feature map U2 into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain a feature map U3, and inputting the feature map U3 into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain a feature map U4.
In the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other. For the system disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
The principles and embodiments of the present invention have been described herein with reference to specific examples; the description is intended only to assist in understanding the method of the present invention and its core ideas. Meanwhile, those of ordinary skill in the art may modify the specific embodiments and the scope of application in light of the ideas of the present invention. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (8)

1. An unmanned aerial vehicle countering method based on unmanned aerial vehicle recognition is characterized by comprising the following steps:
acquiring an unmanned aerial vehicle detection data set;
constructing a deconvolution fusion network, wherein the deconvolution fusion network is used for outputting a segmentation result graph;
training the deconvolution fusion network by adopting the unmanned aerial vehicle detection data set to obtain an unmanned aerial vehicle detection model;
obtaining a visible light image of a region to be monitored and a depth image corresponding to the visible light image;
inputting the visible light image into the unmanned aerial vehicle detection model, and judging whether the visible light image contains the unmanned aerial vehicle according to the output of the unmanned aerial vehicle detection model;
when the visible light image contains the unmanned aerial vehicle, determining the space center point coordinate of the unmanned aerial vehicle according to the segmentation result image output by the unmanned aerial vehicle detection model and the corresponding depth image;
transmitting an interference signal to the unmanned aerial vehicle according to the space center point coordinates of the unmanned aerial vehicle;
the deconvolution fusion network comprises a feature extraction module, a first deconvolution fusion module, a second deconvolution fusion module, a third deconvolution fusion module, a fourth deconvolution fusion module and a segmentation result diagram determining module, wherein the feature extraction module is used for generating a feature map U1, a feature map U2, a feature map U3, a feature map U4 and a feature map U5 with sequentially decreasing sizes from an input image; the first deconvolution fusion module is used for performing a convolution operation with a channel number of 2 on the feature map U4 to obtain a feature map U4', performing a convolution operation with a channel number of 2 on the feature map U5 to obtain a feature map U5', performing a deconvolution operation on the feature map U5' to obtain a feature map V5 with the same size as the feature map U4', and adding the feature map V5 and the feature map U4' pixel by pixel to obtain a feature map V4'; the second deconvolution fusion module is used for performing a convolution operation with a channel number of 2 on the feature map U3 to obtain a feature map U3', performing a deconvolution operation on the feature map V4' to obtain a feature map V4 with the same size as the feature map U3', and adding the feature map V4 and the feature map U3' pixel by pixel to obtain a feature map V3'; the third deconvolution fusion module is used for performing a convolution operation with a channel number of 2 on the feature map U2 to obtain a feature map U2', performing a deconvolution operation on the feature map V3' to obtain a feature map V3 with the same size as the feature map U2', and adding the feature map V3 and the feature map U2' pixel by pixel to obtain a feature map V2'; the fourth deconvolution fusion module is used for performing a convolution operation with a channel number of 2 on the feature map U1 to obtain a feature map U1', performing a deconvolution operation on the feature map V2' to obtain a feature map V2 with the same size as the feature map U1', and adding the feature map V2 and the feature map U1' pixel by pixel to obtain a feature map V1'; the segmentation result diagram determining module is used for deconvoluting the feature map V1' to obtain a segmentation result diagram V1 with the same size as the input image;
when the visible light image contains the unmanned aerial vehicle, determining the space center point coordinate of the unmanned aerial vehicle according to the segmentation result image output by the unmanned aerial vehicle detection model and the corresponding depth image, specifically comprising:
when the visible light image contains the unmanned aerial vehicle, calculating centroid coordinates of the unmanned aerial vehicle in a segmentation result diagram output by the unmanned aerial vehicle detection model based on a centroid calculation function;
obtaining a depth value of the centroid coordinate position of the unmanned aerial vehicle on the corresponding depth image;
determining the space center point coordinates of the unmanned aerial vehicle according to the centroid coordinates of the unmanned aerial vehicle and the depth values;
calculating the spatial center point coordinates (x, y, z) of the unmanned aerial vehicle according to formulas (1)-(3):

x = (a - c_f_x) · d / f_x (1)

y = (b - c_f_y) · d / f_y (2)

z = d (3)

wherein a is the abscissa of the centroid of the unmanned aerial vehicle, b is the ordinate of the centroid of the unmanned aerial vehicle, d is the depth value of the centroid of the unmanned aerial vehicle, the c_f values are internal parameters of the depth camera, f_x is the focal length value in the x direction, f_y is the focal length value in the y direction, c_f_x is the offset of the optical axis from the center of the projection plane coordinates in the x direction, and c_f_y is the offset of the optical axis from the center of the projection plane coordinates in the y direction.
2. The unmanned aerial vehicle countering method based on unmanned aerial vehicle identification according to claim 1, wherein the transmitting the interference signal to the unmanned aerial vehicle according to the spatial center point coordinates of the unmanned aerial vehicle specifically comprises:
determining an altitude angle and an azimuth angle of interference signal emission according to the space center point coordinates of the unmanned aerial vehicle;
and controlling the interference equipment to emit interference signals according to the altitude angle and the azimuth angle through a servo system.
3. The unmanned aerial vehicle countering method based on unmanned aerial vehicle identification of claim 2, wherein the method further comprises:
inputting a visible light image acquired in real time into the unmanned aerial vehicle detection model to output a segmentation result diagram, and determining a monitoring result according to the output segmentation result diagram, wherein the monitoring result is either that an unmanned aerial vehicle is contained or that no unmanned aerial vehicle is contained;
when the monitoring result at the current moment is that the unmanned aerial vehicle is not contained and the monitoring result at the previous moment is that the unmanned aerial vehicle is contained, the interference equipment is controlled to reset through the servo system, and the interference equipment is closed.
4. The unmanned aerial vehicle countering method based on unmanned aerial vehicle recognition according to claim 1, wherein the feature extraction module is configured to input an input image into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 4×4 kernel to obtain a feature map U1, input the feature map U1 into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain a feature map U2, input the feature map U2 into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain a feature map U3, and input the feature map U3 into 2 convolution layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain a feature map U4.
5. Unmanned aerial vehicle reaction system based on unmanned aerial vehicle discernment, characterized in that includes:
the data set acquisition module is used for acquiring an unmanned aerial vehicle detection data set;
the deconvolution fusion network construction module is used for constructing a deconvolution fusion network which is used for outputting a segmentation result graph;
the deconvolution fusion network training module is used for training the deconvolution fusion network by adopting the unmanned aerial vehicle detection data set to obtain an unmanned aerial vehicle detection model;
the device comprises an area to be monitored image acquisition module, a display module and a display module, wherein the area to be monitored image acquisition module is used for acquiring a visible light image of an area to be monitored and a depth image corresponding to the visible light image;
the unmanned aerial vehicle detection module is used for inputting the visible light image into the unmanned aerial vehicle detection model, and judging whether the visible light image contains an unmanned aerial vehicle or not according to the output of the unmanned aerial vehicle detection model;
the unmanned aerial vehicle space center point coordinate determining module is used for determining the space center point coordinate of the unmanned aerial vehicle according to the segmentation result graph output by the unmanned aerial vehicle detection model and the corresponding depth image when the visible light image contains the unmanned aerial vehicle;
the interference signal transmitting module is used for transmitting an interference signal to the unmanned aerial vehicle according to the space center point coordinates of the unmanned aerial vehicle;
the deconvolution fusion network comprises a feature extraction module, a first deconvolution fusion module, a second deconvolution fusion module, a third deconvolution fusion module, a fourth deconvolution fusion module and a segmentation result diagram determining module, wherein the feature extraction module is used for generating a feature map U1, a feature map U2, a feature map U3, a feature map U4 and a feature map U5 of sequentially decreasing size from an input image; the first deconvolution fusion module is used for performing a convolution operation with a channel number of 2 on the feature map U4 to obtain a feature map U4', performing a convolution operation with a channel number of 2 on the feature map U5 to obtain a feature map U5', performing a deconvolution operation on the feature map U5' to obtain a feature map V5 with the same size as the feature map U4', and adding the feature map V5 and the feature map U4' pixel by pixel to obtain a feature map V4'; the second deconvolution fusion module is used for performing a convolution operation with a channel number of 2 on the feature map U3 to obtain a feature map U3', performing a deconvolution operation on the feature map V4' to obtain a feature map V4 with the same size as the feature map U3', and adding the feature map V4 and the feature map U3' pixel by pixel to obtain a feature map V3'; the third deconvolution fusion module is used for performing a convolution operation with a channel number of 2 on the feature map U2 to obtain a feature map U2', performing a deconvolution operation on the feature map V3' to obtain a feature map V3 with the same size as the feature map U2', and adding the feature map V3 and the feature map U2' pixel by pixel to obtain a feature map V2'; the fourth deconvolution fusion module is used for performing a convolution operation with a channel number of 2 on the feature map U1 to obtain a feature map U1', performing a deconvolution operation on the feature map V2' to obtain a feature map V2 with the same size as the feature map U1', and adding the feature map V2 and the feature map U1' pixel by pixel to obtain a feature map V1'; the segmentation result diagram determining module is used for deconvoluting the feature map V1' to obtain a segmentation result diagram V1 with the same size as the input image (a hedged sketch of one fusion step is given after this claim);
the unmanned aerial vehicle space center point coordinate determining module specifically comprises:
the centroid coordinate determining unit, which is used for calculating, when the visible light image contains the unmanned aerial vehicle, the centroid coordinates of the unmanned aerial vehicle in the segmentation result diagram output by the unmanned aerial vehicle detection model based on a centroid calculation function;
the depth value determining unit is used for obtaining the depth value of the centroid coordinate position of the unmanned aerial vehicle on the corresponding depth image;
the space center point coordinate determining unit of the unmanned aerial vehicle is used for determining the space center point coordinate of the unmanned aerial vehicle according to the centroid coordinate of the unmanned aerial vehicle and the depth value;
calculating the space center point coordinates (x, y, z) of the unmanned aerial vehicle according to formulas (1)-(3):

x = (a − c_f_x) × d / f_x (1)

y = (b − c_f_y) × d / f_y (2)

z = d (3)
wherein a is the abscissa of the centroid of the unmanned aerial vehicle, b is the ordinate of the centroid of the unmanned aerial vehicle, d is the depth value at the centroid of the unmanned aerial vehicle, c_f denotes the internal parameters of the depth camera, f_x is the focal length value in the x direction, f_y is the focal length value in the y direction, c_f_x is the offset of the optical axis from the center of the projection plane coordinates in the x direction, and c_f_y is the offset of the optical axis from the center of the projection plane coordinates in the y direction.
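For illustration, two minimal Python sketches of the machinery in claim 5 follow. First, one deconvolution-fusion step of the decoder: reduce the skip feature map to 2 channels with a convolution, upsample the deeper map with a deconvolution (transposed convolution) until the sizes match, and add pixel by pixel. The 1×1 reduction kernel and the stride-2 transposed convolution are assumptions chosen so the upsampled map matches the skip map's size; the claim itself fixes only the channel number of 2.

```python
import torch.nn as nn

class DeconvFusion(nn.Module):
    """One deconvolution-fusion step: the skip map is reduced to 2 channels,
    the deeper map is upsampled 2x by a transposed convolution, and the two
    are added pixel by pixel."""

    def __init__(self, skip_channels):
        super().__init__()
        self.reduce = nn.Conv2d(skip_channels, 2, kernel_size=1)     # e.g. U3 -> U3'
        self.up = nn.ConvTranspose2d(2, 2, kernel_size=2, stride=2)  # e.g. V4' -> V4

    def forward(self, deep, skip):
        up = self.up(deep)             # upsample the deeper 2-channel map
        return up + self.reduce(skip)  # pixel-by-pixel addition, e.g. V4 + U3' -> V3'
```

Second, the space center point computation of formulas (1)-(3), combining the centroid coordinate determining unit and the depth value lookup; using OpenCV image moments as the centroid calculation function is our assumption, since the claim does not name one.

```python
import cv2
import numpy as np

def uav_space_center(mask, depth, f_x, f_y, c_f_x, c_f_y):
    """Return the UAV space center point (x, y, z) from a binary
    segmentation result map and the registered depth image."""
    m = cv2.moments((mask > 0).astype(np.uint8), binaryImage=True)
    if m["m00"] == 0:
        return None                                 # no UAV pixels in the mask
    a = m["m10"] / m["m00"]                         # centroid abscissa
    b = m["m01"] / m["m00"]                         # centroid ordinate
    d = float(depth[int(round(b)), int(round(a))])  # depth value at the centroid
    x = (a - c_f_x) * d / f_x                       # formula (1)
    y = (b - c_f_y) * d / f_y                       # formula (2)
    z = d                                           # formula (3)
    return x, y, z
```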
6. The unmanned aerial vehicle countering system based on unmanned aerial vehicle identification according to claim 5, wherein the interference signal transmitting module specifically comprises:
The altitude angle and azimuth angle determining unit, which is used for determining the altitude angle and the azimuth angle of interference signal emission according to the space center point coordinates of the unmanned aerial vehicle;
and the interference signal transmitting unit is used for controlling the interference equipment to transmit interference signals according to the altitude angle and the azimuth angle through a servo system.
7. The unmanned aerial vehicle countering system based on unmanned aerial vehicle identification of claim 6, wherein the system further comprises:
the monitoring result real-time output module is used for inputting the visible light image acquired in real time into the unmanned aerial vehicle detection model to output a segmentation result diagram, and determining a monitoring result according to the output segmentation result diagram, wherein the monitoring result comprises or does not comprise the unmanned aerial vehicle;
and the interference equipment resetting and closing module is used for controlling the interference equipment to reset through the servo system and closing the interference equipment when the monitoring result at the current moment indicates that no unmanned aerial vehicle is contained and the monitoring result at the previous moment indicates that an unmanned aerial vehicle is contained.
8. The unmanned aerial vehicle countering system based on unmanned aerial vehicle identification according to claim 5, wherein the feature extraction module is configured to input the input image into 2 convolutional layers with 3×3 kernels and 1 pooling layer with a 4×4 kernel to obtain a feature map U1, input the feature map U1 into 2 convolutional layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain a feature map U2, input the feature map U2 into 2 convolutional layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain a feature map U3, and input the feature map U3 into 2 convolutional layers with 3×3 kernels and 1 pooling layer with a 2×2 kernel to obtain a feature map U4.
CN202310063450.4A 2023-02-06 2023-02-06 Unmanned aerial vehicle countering method and system based on unmanned aerial vehicle recognition Active CN115861938B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310063450.4A CN115861938B (en) 2023-02-06 2023-02-06 Unmanned aerial vehicle countering method and system based on unmanned aerial vehicle recognition

Publications (2)

Publication Number Publication Date
CN115861938A CN115861938A (en) 2023-03-28
CN115861938B true CN115861938B (en) 2023-05-26

Family

ID=85657620

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310063450.4A Active CN115861938B (en) 2023-02-06 2023-02-06 Unmanned aerial vehicle countering method and system based on unmanned aerial vehicle recognition

Country Status (1)

Country Link
CN (1) CN115861938B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113554656A (en) * 2021-07-13 2021-10-26 中国科学院空间应用工程与技术中心 Optical remote sensing image example segmentation method and device based on graph neural network

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11288517B2 (en) * 2014-09-30 2022-03-29 PureTech Systems Inc. System and method for deep learning enhanced object incident detection
US10776673B2 (en) * 2019-01-31 2020-09-15 StradVision, Inc. Learning method and learning device for sensor fusion to integrate information acquired by radar capable of distance estimation and information acquired by camera to thereby improve neural network for supporting autonomous driving, and testing method and testing device using the same
CN109753903B (en) * 2019-02-27 2020-09-15 北航(四川)西部国际创新港科技有限公司 Unmanned aerial vehicle detection method based on deep learning
CN111666822A (en) * 2020-05-13 2020-09-15 飒铂智能科技有限责任公司 Low-altitude unmanned aerial vehicle target detection method and system based on deep learning
CN113255589B (en) * 2021-06-25 2021-10-15 北京电信易通信息技术股份有限公司 Target detection method and system based on multi-convolution fusion network
CN113822383B (en) * 2021-11-23 2022-03-15 北京中超伟业信息安全技术股份有限公司 Unmanned aerial vehicle detection method and system based on multi-domain attention mechanism
CN114550016B (en) * 2022-04-22 2022-07-08 北京中超伟业信息安全技术股份有限公司 Unmanned aerial vehicle positioning method and system based on context information perception
CN115187959B (en) * 2022-07-14 2023-04-14 清华大学 Method and system for landing flying vehicle in mountainous region based on binocular vision
CN115223067B (en) * 2022-09-19 2022-12-09 季华实验室 Point cloud fusion method, device and equipment applied to unmanned aerial vehicle and storage medium

Also Published As

Publication number Publication date
CN115861938A (en) 2023-03-28

Similar Documents

Publication Publication Date Title
EP3690727B1 (en) Learning method and learning device for sensor fusion to integrate information acquired by radar capable of distance estimation and information acquired by camera to thereby improve neural network for supporting autonomous driving, and testing method and testing device using the same
CN109871763B (en) Specific target tracking method based on YOLO
CN108108697B (en) Real-time unmanned aerial vehicle video target detection and tracking method
CN112085728B (en) Submarine pipeline and leakage point detection method
CN109635661B (en) Far-field wireless charging receiving target detection method based on convolutional neural network
CN111507210A (en) Traffic signal lamp identification method and system, computing device and intelligent vehicle
CN110825101A (en) Unmanned aerial vehicle autonomous landing method based on deep convolutional neural network
CN111985365A (en) Straw burning monitoring method and system based on target detection technology
CN113203409A (en) Method for constructing navigation map of mobile robot in complex indoor environment
CN112207821B (en) Target searching method of visual robot and robot
CN110084837B (en) Target detection and tracking method based on unmanned aerial vehicle video
CN113391644B (en) Unmanned aerial vehicle shooting distance semi-automatic optimization method based on image information entropy
CN111260687B (en) Aerial video target tracking method based on semantic perception network and related filtering
CN112907573A (en) Depth completion method based on 3D convolution
CN115424072A (en) Unmanned aerial vehicle defense method based on detection technology
CN112132900A (en) Visual repositioning method and system
CN116194951A (en) Method and apparatus for stereoscopic based 3D object detection and segmentation
CN114217303A (en) Target positioning and tracking method and device, underwater robot and storage medium
CN113298177B (en) Night image coloring method, device, medium and equipment
Li et al. Weak moving object detection in optical remote sensing video with motion-drive fusion network
CN115861938B (en) Unmanned aerial vehicle countering method and system based on unmanned aerial vehicle recognition
CN111881982A (en) Unmanned aerial vehicle target identification method
CN112084815A (en) Target detection method based on camera focal length conversion, storage medium and processor
Li et al. Feature point extraction and tracking based on a local adaptive threshold
CN116170658A (en) Under-screen depth camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant