CN113743230A - Airplane detection, tracking and identification system based on edge calculation - Google Patents

Airplane detection, tracking and identification system based on edge calculation

Info

Publication number
CN113743230A
CN113743230A (application number CN202110906517.7A)
Authority
CN
China
Prior art keywords
airplane
network
camera
tracking
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110906517.7A
Other languages
Chinese (zh)
Inventor
余锴熔
王善泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN202110906517.7A
Publication of CN113743230A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an aircraft detection, tracking and recognition system based on edge calculation. The system comprises hardware equipment including a camera with a pan-tilt head mounted at its base, as well as a Jetson Xavier NX edge computing platform in which a detection network is deployed. Compared with the prior art, the invention reduces the manpower and time cost of manually collecting and processing data, helps acquire more comprehensive and higher-quality data, occupies little space on the edge computing platform, is highly portable and transplantable, and identifies aircraft model types with higher precision.

Description

Airplane detection, tracking and identification system based on edge calculation
Technical Field
The invention relates to the field of airplane detection, in particular to an airplane detection, tracking and identification system based on edge calculation.
Background
Among the many targets of interest in detection, airplanes are not only important carriers in the civilian field but also critical strike assets in the military field. Because manual filming limits the real-time performance and stability of the capture process, a compact intelligent aircraft detection and tracking system mounted on a reconnaissance aircraft is all the more valuable for automatically detecting unknown aircraft in the air.
Convolutional neural networks in deep learning rely on tens of millions of parameters for computation; they suffer from complex network structures, heavy computation, and low speed, and are therefore difficult to port to embedded devices. As network models grow deeper and their parameter counts increase, reducing model size and computational cost becomes essential.
Most mainstream object detection algorithms in recent years are trained and evaluated with deep convolutional neural networks on various large-scale data sets. Common detection models fall into one-stage or multi-stage families, represented by YOLO, SSD, CornerNet, R-CNN, Fast R-CNN, and the like. Better-performing networks usually mean deeper and wider structures, more parameters, and far greater storage and computation overhead, which prevents smooth deployment to mobile terminals and embedded devices. Compressing and accelerating such networks, and deploying the lightweight yet well-performing models into a compact intelligent system, therefore has great academic and engineering value.
The prior art has the following defects: manual data collection is expensive and difficult; long-distance data transmission and manual aircraft type identification take considerable time and introduce significant lag; and the manual process of distinguishing aircraft models is cumbersome and imprecise.
Disclosure of Invention
The invention aims to provide an airplane detection, tracking and identification system based on edge calculation.
In order to solve the above technical problems, the invention provides the following technical solution: an aircraft detection, tracking and recognition system based on edge calculation, comprising hardware equipment. The hardware equipment comprises a camera with a pan-tilt head mounted at its base, and further comprises a Jetson Xavier NX edge computing platform in which a detection network is deployed.
As an improvement, the YahBoom two-degree-of-freedom camera is connected to the Jetson Xavier NX edge computing platform through a USB interface; after the detection network has been tested and its parameters tuned on a PC, the network is stored on the Jetson Xavier NX.
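For illustration only (this sketch is not part of the patent; the device index and frame size are assumptions), frames from a USB camera on the Jetson Xavier NX can be read with OpenCV as follows:

```python
# Minimal sketch: grab frames from the USB-connected YahBoom camera with OpenCV.
# Device index 0 and the 1280x720 resolution are assumed values.
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

while cap.isOpened():
    ok, frame = cap.read()      # one BGR frame per iteration
    if not ok:
        break
    # the frame would be passed to the YOLOv5 detection network here
cap.release()
```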
As an improvement, the detection network is a YOLOv5 detection network.
As an improvement, the identification method comprises the following steps:
firstly, acquiring a whole image through the YahBoom camera, feeding the image to the YOLOv5 detection network as input, and judging whether an airplane is present at that moment;
secondly, if an airplane is detected in the image, the detection network outputs the bounding-box corner coordinates (x_1, y_1) and (x_2, y_2), from which the approximate location (x_c, y_c) of the aircraft can be estimated; the relationship between x_1, x_2 and x_c (and likewise for the y coordinates) is

x_c = (x_1 + x_2) / 2,    y_c = (y_1 + y_2) / 2;
then, according to the camera center point (x_center, y_center), the steering engine (servo) of the pan-tilt head is controlled and adjusted so that the camera center coincides with the aircraft target center as closely as possible, achieving the tracking effect; since the target may disappear suddenly while the camera is shooting, Kalman filtering is used to predict where the aircraft target will appear and to drive the steering engine rotation to search for the target aircraft, which improves the robustness and tracking capability of the whole system;
thirdly, once the aircraft target appears in the frame, the aircraft region is cropped and used as the input of a ResNet classification network, which outputs the most probable model and category of the aircraft;
and fourthly, the images and video of the detected aircraft are stored for subsequent manual secondary inspection and verification, which further improves identification accuracy. A sketch of the center computation and servo adjustment of the second step is given below.
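Purely as an illustration (not the patent's code; the gains, frame size, and returned angle increments are assumptions), the center computation and a proportional servo correction could be sketched as:

```python
# Sketch: compute the aircraft center from the detector's bounding box and derive
# pan/tilt corrections that steer the camera center toward it. Gains are assumed.
def track_step(box, frame_w=1280, frame_h=720, k_pan=0.05, k_tilt=0.05):
    x1, y1, x2, y2 = box                          # corners output by the YOLOv5 detector
    x_c, y_c = (x1 + x2) / 2.0, (y1 + y2) / 2.0   # aircraft center, as in the formula above
    x_center, y_center = frame_w / 2.0, frame_h / 2.0
    d_pan = k_pan * (x_c - x_center)              # proportional pan correction
    d_tilt = k_tilt * (y_c - y_center)            # proportional tilt correction
    return d_pan, d_tilt                          # angle increments for the steering engine
```

In practice the two increments would be clipped to the servo's travel range before being sent to the pan-tilt head.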
As an improvement, the Batch Normalization layers in the YOLOv5 network are pruned: parameters in these layers that are equal or close to 0 are clipped, which reduces the computation and the total number of parameters while the model keeps its original precision. At the same time, knowledge distillation is performed on YOLOv5, taking the deeper YOLOv5m as the teacher (T) model and the original YOLOv5s as the student (S) model, and the network is retrained with the improved softmax temperature

q_i = exp(z_i / T) / Σ_j exp(z_j / T)

as the activation function, so as to improve the accuracy of the model while reducing its complexity.
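As an illustration only (the temperature T and the logits below are placeholders, not values from the patent), the temperature-scaled softmax referred to above can be written as:

```python
# Sketch of the softmax-with-temperature used to produce soft labels in distillation.
import numpy as np

def softmax_t(logits, T=4.0):
    z = np.asarray(logits, dtype=np.float64) / T
    z -= z.max()                    # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()              # a larger T yields a softer distribution
```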
Compared with the prior art, the invention reduces the manpower and time cost of manually collecting and processing data, helps acquire more comprehensive and higher-quality data, occupies little space on the edge computing platform, is highly portable and transplantable, and identifies aircraft model types with higher precision.
Drawings
Fig. 1 is an overall system flow diagram of an aircraft detection, tracking and identification system based on edge calculation.
FIG. 2 is a schematic diagram of steering engine control tracking of an aircraft detection, tracking and identification system based on edge calculation.
FIG. 3 is a schematic diagram of an aircraft target cropping of an aircraft detection, tracking and recognition system based on edge calculation.
Fig. 4 is a schematic structural diagram of an aircraft detection, tracking and identification system based on edge calculation.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
In a specific implementation, the aircraft detection, tracking and recognition system based on edge calculation comprises hardware equipment: a camera with a pan-tilt head mounted at its base, and a Jetson Xavier NX edge computing platform in which a detection network is deployed.
As an improvement, the YahBoom two-degree-of-freedom camera is connected to the Jetson Xavier NX edge computing platform through a USB interface; after the detection network has been tested and its parameters tuned on a PC, the network is stored on the Jetson Xavier NX.
As an improvement, the detection network is a YOLOv5 detection network.
As an improvement, the identification method comprises the following steps:
firstly, acquiring a whole image through the YahBoom camera, feeding the image to the YOLOv5 detection network as input, and judging whether an airplane is present at that moment;
secondly, if an airplane is detected in the image, the detection network outputs the bounding-box corner coordinates (x_1, y_1) and (x_2, y_2), from which the approximate location (x_c, y_c) of the aircraft can be estimated; the relationship between x_1, x_2 and x_c (and likewise for the y coordinates) is

x_c = (x_1 + x_2) / 2,    y_c = (y_1 + y_2) / 2;
then, according to the camera center point (x_center, y_center), the steering engine (servo) of the pan-tilt head is controlled and adjusted so that the camera center coincides with the aircraft target center as closely as possible, achieving the tracking effect; since the target may disappear suddenly while the camera is shooting, Kalman filtering is used to predict where the aircraft target will appear and to drive the steering engine rotation to search for the target aircraft, which improves the robustness and tracking capability of the whole system;
thirdly, once the aircraft target appears in the frame, the aircraft region is cropped and used as the input of a ResNet classification network, which outputs the most probable model and category of the aircraft;
and fourthly, the images and video of the detected aircraft are stored for subsequent manual secondary inspection and verification, which further improves identification accuracy. A sketch of the Kalman-filter prediction used when the target is lost is given after this list.
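For illustration only (the patent does not specify the filter design; the state layout and noise covariances below are assumptions), a constant-velocity Kalman predictor for the aircraft center built on OpenCV's cv2.KalmanFilter could look like this:

```python
# Sketch: constant-velocity Kalman filter that keeps predicting the aircraft center
# (x, y) when the detector momentarily loses the target. State = [x, y, vx, vy].
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2      # assumed noise levels
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def predict_center(detection):
    """detection = (x_c, y_c) from the detector, or None if the aircraft is lost."""
    pred = kf.predict()                                       # predicted center drives the servos
    if detection is not None:
        kf.correct(np.array(detection, np.float32).reshape(2, 1))
    return float(pred[0, 0]), float(pred[1, 0])
```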
As an improvement, the Batch Normalization layers in the YOLOv5 network are pruned: parameters in these layers that are equal or close to 0 are clipped, which reduces the computation and the total number of parameters while the model keeps its original precision. At the same time, knowledge distillation is performed on YOLOv5, taking the deeper YOLOv5m as the teacher (T) model and the original YOLOv5s as the student (S) model, and the network is retrained with the improved softmax temperature q_i = exp(z_i / T) / Σ_j exp(z_j / T) as the activation function, so as to improve the accuracy of the model while reducing its complexity.
The working principle of the invention is as follows: the complex neural network architecture is made lightweight and deployed onto the edge computing platform.
1. Pruning
pruning is to search a calculation path with optimal value in a parameter space formed by the original model, namely, the calculation amount and the parameter total amount of the model are reduced under the condition of keeping the original precision.
Pruning is performed here by imposing an L1 penalty on the Batch Normalization layers (BN for short) in YOLOv5. BN normalizes the input feature map to a distribution with mean 0 and variance 1, and learns two adjustment factors, γ and β, that fine-tune the normalized values during training.
out_i = γ_i · (a_i − μ) / sqrt(σ_i² + ε) + β_i

where a_i denotes the input feature map of each channel, and μ and σ_i are the mean and variance, respectively. When γ_i = 0 or γ_i ≈ 0, that is, the weight is 0, the channel has no effect on the model, so the importance of each channel can be represented by the scaling factor γ; when γ is 0 or approximately 0, the convolution kernels of the corresponding preceding and following convolution layers are pruned.
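Purely to illustrate this idea (not code from the patent; the threshold is an assumed value), the γ scale factors of a PyTorch model's BN layers can be collected and thresholded as follows:

```python
# Sketch: gather the BatchNorm2d scale factors (gamma) of a model and mark channels
# whose |gamma| is below an assumed threshold as candidates for pruning.
import torch
import torch.nn as nn

def prunable_channels(model: nn.Module, threshold: float = 1e-2):
    candidates = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.BatchNorm2d):
            gamma = module.weight.detach().abs()               # one gamma per channel
            idx = (gamma < threshold).nonzero().flatten().tolist()
            candidates[name] = idx                             # channel indices to prune
    return candidates
```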
However, in a normally trained neural network there are not many γ_i parameters that are equal or close to 0. Therefore, γ_i is reduced here by sparse training: the γ_i values of each BN layer are sparsified with the L1 gradient method, as shown below:
γ_i ← γ_i − u · ∂L/∂γ_i − η · sign(γ_i)

where u is the learning rate of the loss function, ∂L/∂γ_i is the gradient of the original training loss obtained through back-propagation, η is a hyper-parameter that determines the step size of the L1 gradient method at each update, and sign(γ_i) determines the direction of the L1 penalty.
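A minimal sketch of this sparsity step in PyTorch (the model, loss, and optimizer are assumed to be defined elsewhere, and η = 1e-4 is an assumed value) might be:

```python
# Sketch: after loss.backward(), add the L1 subgradient eta * sign(gamma) to the
# gradient of every BatchNorm2d scale factor, then let the optimizer step as usual.
import torch
import torch.nn as nn

def add_bn_l1_subgradient(model: nn.Module, eta: float = 1e-4):
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d) and module.weight.grad is not None:
            module.weight.grad.add_(eta * torch.sign(module.weight.detach()))

# typical use inside the training loop:
#   loss.backward()
#   add_bn_l1_subgradient(model)
#   optimizer.step()
```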
2. Knowledge distillation
Knowledge distillation is a model-compression idea in which a larger, already trained network teaches a smaller network, step by step, what to do. "Soft labels" refer to the feature maps that the large network outputs after each convolution layer. The small network is then trained to imitate the large network by attempting to replicate its output at each layer.
The improved softmax temperature is used as the activation function, and the output of the teacher network serves as the soft label for training the student network, so that the student obtains richer feature information and the model accuracy is improved; a sketch of such a distillation loss is given below.
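For illustration only (not the patent's training code; the temperature T and weight alpha are assumptions), a common temperature-scaled distillation loss that blends the teacher's soft labels with the ordinary hard-label loss is:

```python
# Sketch: soft-label distillation loss. Teacher and student logits are softened with
# temperature T; the KL term is scaled by T*T, and mixed with cross-entropy on the
# ground-truth labels using an assumed weight alpha.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard
```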
3. TensorRT model acceleration
TensorRT can parse network models built with frameworks such as TensorFlow and PyTorch, convert models from those frameworks entirely into TensorRT, apply optimization strategies tailored to NVIDIA's own GPUs, and accelerate deployment. This suits the NVIDIA Jetson Xavier NX device used here. A possible export path is sketched below.
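One common export route (an assumption; the patent does not state the exact toolchain) is to convert the trained model to ONNX with PyTorch and then build a TensorRT engine with the trtexec tool shipped with TensorRT:

```python
# Sketch: export a model to ONNX so that TensorRT can build an engine from it on the
# Jetson Xavier NX. The stand-in ResNet-18, input size, and file names are assumptions.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()   # placeholder for the real network
dummy = torch.zeros(1, 3, 224, 224)                        # one example input of the expected shape
torch.onnx.export(model, dummy, "model.onnx", opset_version=12,
                  input_names=["images"], output_names=["output"])

# then, on the Jetson (shell command, FP16 chosen as an example precision):
#   trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```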
4. Selection of lightweight models
After screening and comparing the performance of various candidate model structures, the YOLOv5 network architecture was selected as the test algorithm. YOLOv5 offers four model sizes, s, m, l and x; in that order the models become progressively deeper, slower, and larger in parameter count. On an embedded device, the model should be as small and as accurate as possible while keeping speed as high as possible, so the YOLOv5s architecture is chosen.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the invention, "plurality" means two or more unless explicitly and specifically defined otherwise.
In the present invention, unless otherwise specifically stated or limited, the terms "mounted," "connected," "fixed," and the like are to be construed broadly and may, for example, mean fixedly connected, detachably connected, or integrally connected; mechanically or electrically connected; directly connected or indirectly connected through an intermediate medium; or the internal communication of two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
In the present invention, unless otherwise expressly stated or limited, a first feature being "above" or "below" a second feature means that the first and second features are in direct contact, or that they are not in direct contact but contact each other through another feature between them. Moreover, a first feature being "on," "above" or "over" a second feature includes the first feature being directly above or obliquely above the second feature, or simply means that the first feature is at a higher level than the second feature. A first feature being "under," "below" or "beneath" a second feature includes the first feature being directly below or obliquely below the second feature, or simply means that the first feature is at a lower level than the second feature.
In the description herein, reference to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art without departing from the principle and spirit of the present invention.

Claims (5)

1. An airplane detection, tracking and identification system based on edge calculation, comprising hardware equipment, characterized in that: the hardware equipment comprises a camera with a pan-tilt head mounted at its base, and further comprises a Jetson Xavier NX edge computing platform in which a detection network is deployed.
2. The airplane detection, tracking and identification system based on edge calculation according to claim 1, characterized in that: the YahBoom two-degree-of-freedom camera is connected to the Jetson Xavier NX edge computing platform through a USB interface; after the detection network has been tested and its parameters tuned on a PC, the network is stored on the Jetson Xavier NX.
3. The airplane detection, tracking and identification system based on edge calculation according to claim 1, characterized in that: the detection network is a YOLOv5 detection network.
4. The airplane detection, tracking and identification system based on edge calculation according to claim 1, characterized in that the identification method comprises the following steps:
firstly, acquiring a whole image through the YahBoom camera, feeding the image to the YOLOv5 detection network as input, and judging whether an airplane is present at that moment;
secondly, if an airplane is detected in the image, the detection network outputs the bounding-box corner coordinates (x_1, y_1) and (x_2, y_2), from which the approximate location (x_c, y_c) of the aircraft can be estimated; the relationship between x_1, x_2 and x_c (and likewise for the y coordinates) is

x_c = (x_1 + x_2) / 2,    y_c = (y_1 + y_2) / 2;
then, according to the camera center point (x_center, y_center), the steering engine (servo) of the pan-tilt head is controlled and adjusted so that the camera center coincides with the aircraft target center as closely as possible, achieving the tracking effect; since the target may disappear suddenly while the camera is shooting, Kalman filtering is used to predict where the aircraft target will appear and to drive the steering engine rotation to search for the target aircraft, which improves the robustness and tracking capability of the whole system;
thirdly, once the aircraft target appears in the frame, the aircraft region is cropped and used as the input of a ResNet classification network, which outputs the most probable model and category of the aircraft;
and fourthly, the images and video of the detected aircraft are stored for subsequent manual secondary inspection and verification, which further improves identification accuracy.
5. The airplane detection, tracking and identification system based on edge calculation according to claim 4, characterized in that: the Batch Normalization layers in the YOLOv5 network are pruned, and parameters in these layers that are equal or close to 0 are clipped to reduce the computation and the total number of parameters while the model keeps its original precision; at the same time, knowledge distillation is performed on YOLOv5, taking the deeper YOLOv5m as the teacher (T) model and the original YOLOv5s as the student (S) model, and the network is retrained with the improved softmax temperature q_i = exp(z_i / T) / Σ_j exp(z_j / T) as the activation function to improve the accuracy of the model while reducing its complexity.
CN202110906517.7A 2021-08-09 2021-08-09 Airplane detection, tracking and identification system based on edge calculation Pending CN113743230A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110906517.7A CN113743230A (en) 2021-08-09 2021-08-09 Airplane detection, tracking and identification system based on edge calculation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110906517.7A CN113743230A (en) 2021-08-09 2021-08-09 Airplane detection, tracking and identification system based on edge calculation

Publications (1)

Publication Number Publication Date
CN113743230A true CN113743230A (en) 2021-12-03

Family

ID=78730607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110906517.7A Pending CN113743230A (en) 2021-08-09 2021-08-09 Airplane detection, tracking and identification system based on edge calculation

Country Status (1)

Country Link
CN (1) CN113743230A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392941A (en) * 2017-07-25 2017-11-24 哈尔滨理工大学 A kind of takeoff and landing tracking system and method
CN111385459A (en) * 2018-12-28 2020-07-07 南京婆娑航空科技有限公司 Automatic control, focusing and photometry method for unmanned aerial vehicle cradle head
CN110996058A (en) * 2019-12-03 2020-04-10 中国电子科技集团公司第五十四研究所 Intelligent monitoring system based on edge calculation
CN111259748A (en) * 2020-01-10 2020-06-09 利卓创新(北京)科技有限公司 Edge calculation and communication system for video monitoring
CN112101175A (en) * 2020-09-09 2020-12-18 沈阳帝信人工智能产业研究院有限公司 Expressway vehicle detection and multi-attribute feature extraction method based on local images
CN112173149A (en) * 2020-10-30 2021-01-05 南方电网数字电网研究院有限公司 Stability augmentation cradle head with edge computing capability, unmanned aerial vehicle and target identification method
CN112308019A (en) * 2020-11-19 2021-02-02 中国人民解放军国防科技大学 SAR ship target detection method based on network pruning and knowledge distillation
CN112597920A (en) * 2020-12-28 2021-04-02 浙江工业大学 Real-time object detection system based on YOLOv3 pruning network
CN112699958A (en) * 2021-01-11 2021-04-23 重庆邮电大学 Target detection model compression and acceleration method based on pruning and knowledge distillation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GEOFFREY HINTON et al.: "Distilling the Knowledge in a Neural Network", arXiv:1503.02531, pages 1-9 *
XU GUOBIAO et al.: "Moving target detection for remote towers based on an improved YOLO algorithm" (基于YOLO改进算法的远程塔台运动目标检测), Science Technology and Engineering (科学技术与工程), vol. 19, no. 14, pages 377-383 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115272412A (en) * 2022-08-02 2022-11-01 电子科技大学重庆微电子产业技术研究院 Low, small and slow target detection method and tracking system based on edge calculation
CN115272412B (en) * 2022-08-02 2023-09-26 电子科技大学重庆微电子产业技术研究院 Edge calculation-based low-small slow target detection method and tracking system

Similar Documents

Publication Publication Date Title
CN112434672B (en) Marine human body target detection method based on improved YOLOv3
CN109934805B (en) Water pollution detection method based on low-illumination image and neural network
CN110889324A (en) Thermal infrared image target identification method based on YOLO V3 terminal-oriented guidance
EP3690714A1 (en) Method for acquiring sample images for inspecting label among auto-labeled images to be used for learning of neural network and sample image acquiring device using the same
CN112446388A (en) Multi-category vegetable seedling identification method and system based on lightweight two-stage detection model
US10579907B1 (en) Method for automatically evaluating labeling reliability of training images for use in deep learning network to analyze images, and reliability-evaluating device using the same
CN113807464B (en) Unmanned aerial vehicle aerial image target detection method based on improved YOLO V5
CN101944174B (en) Identification method of characters of licence plate
CN113128355A (en) Unmanned aerial vehicle image real-time target detection method based on channel pruning
CN111126278B (en) Method for optimizing and accelerating target detection model for few-class scene
CN110276247A (en) A kind of driving detection method based on YOLOv3-Tiny
CN113139594B (en) Self-adaptive detection method for airborne image unmanned aerial vehicle target
CN111178438A (en) ResNet 101-based weather type identification method
CN112633257A (en) Potato disease identification method based on improved convolutional neural network
CN114781514A (en) Floater target detection method and system integrating attention mechanism
CN113743505A (en) Improved SSD target detection method based on self-attention and feature fusion
CN113128476A (en) Low-power consumption real-time helmet detection method based on computer vision target detection
CN113743230A (en) Airplane detection, tracking and identification system based on edge calculation
CN115496891A (en) Wheat lodging degree grading method and device
CN114359578A (en) Application method and system of pest and disease damage identification intelligent terminal
CN114140753A (en) Method, device and system for identifying marine ship
CN113989655A (en) Radar or sonar image target detection and classification method based on automatic deep learning
CN116580324A (en) Yolov 5-based unmanned aerial vehicle ground target detection method
Xu et al. Compressed YOLOv5 for oriented object detection with integrated network slimming and knowledge distillation
CN111429419B (en) Insulator contour detection method based on hybrid ant colony algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination