CN113780211A - Lightweight aircraft detection method based on improved Yolov4-tiny - Google Patents
- Publication number
- CN113780211A CN113780211A CN202111086512.0A CN202111086512A CN113780211A CN 113780211 A CN113780211 A CN 113780211A CN 202111086512 A CN202111086512 A CN 202111086512A CN 113780211 A CN113780211 A CN 113780211A
- Authority
- CN
- China
- Prior art keywords
- network
- feature
- cbam
- tiny
- detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/048—Activation functions
- G06N3/08—Learning methods
Abstract
The invention discloses a lightweight aircraft detection method based on an improved Yolov4-tiny and belongs to the technical field of target detection. Starting from the Yolov4-tiny network, all standard convolutions are replaced with depthwise separable convolutions, further reducing the network's parameter count and computational cost; an attention mechanism is integrated into the CSPBlock modules of the backbone network to strengthen feature extraction for small targets; a spatial pyramid pooling module is added behind the backbone network to obtain receptive fields of different ranges; a detection layer and a bidirectional fusion channel are added to the feature fusion network, and a weight is assigned to each input feature so that the network learns the importance of each input. Finally, the three fused feature maps are fed into the prediction layers, the network is trained and tested, and the best-performing model on the test set is selected as the lightweight and efficient aircraft detection model.
Description
Technical Field
The invention relates to the technical field of target detection, in particular to a lightweight aircraft detection method based on improved Yolov4-tiny.
Background
In recent years, with the development of remote sensing satellites, unmanned aerial vehicles and related technologies, the volume of acquired remote sensing imagery has grown explosively, and target detection plays a central role in extracting useful information from these massive images. Aircraft are a primary target in remote sensing images and are vitally important in both military and civil fields; detecting aircraft quickly from remote sensing imagery is of great significance for improving military combat efficiency, civil search and rescue, and similar applications. A method that detects aircraft in remote sensing images quickly and accurately therefore has high research value.
Target detection can be divided into traditional detection based on hand-crafted features and detection based on deep learning; with the rapid development of big data, deep learning has been widely applied to target detection, offering faster detection speed and higher detection precision than hand-crafted approaches. Deep-learning-based detectors fall into two main categories. The first is two-stage, candidate-region-based algorithms such as RCNN and Fast RCNN, which first generate candidate boxes for target regions and then classify and refine the image content inside them; these achieve high detection accuracy but require heavy computation and long processing time, which is unfavorable for aircraft detection. The second is regression-based single-stage algorithms such as SSD, Yolov1, Yolov2, Yolov3 and Yolov4, which operate end to end on the input image and have the advantage of high speed. Yolov4-tiny, as a compressed version of Yolov4, further lightens the model and facilitates deployment on embedded devices such as remote sensing satellites and unmanned aerial vehicles. However, the lightweight model extracts target information insufficiently, and aircraft in remote sensing images are often affected by external factors such as noise, weather and illumination intensity, so detecting aircraft targets with Yolov4-tiny poses a serious challenge to detection precision. Improving Yolov4-tiny to obtain a lightweight and efficient method is therefore of great research significance for detecting aircraft targets quickly and accurately.
Disclosure of Invention
Aiming at the difficulty that existing target detection models have in balancing detection precision and detection speed for aircraft detection, the invention provides a lightweight aircraft detection method based on an improved Yolov4-tiny. The method improves the detection precision for small aircraft targets, reduces the model's parameter count and computational cost, lowers the computing resources required for model deployment, and achieves fast, accurate detection of aircraft targets in remote sensing images.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a lightweight aircraft detection method based on improved Yolov4-tiny comprises the following steps:
step 1, acquiring an image data set containing an airplane target, performing data annotation and image formatting on images in the image data set, and dividing the processed image data set into a training set and a test set;
step 2, constructing an airplane detection network, wherein the specific mode is as follows:
step 201: the lightweight version Yolov4-tiny network of Yolov4 is improved, all standard convolutions in the network are replaced by deep separable convolutions, the parameter number and the calculated amount of the network are reduced, and the lightweight of the airplane detection network is realized;
step 202: a convolution attention mechanism module CBAM is added behind each of the 3 CSPBlock modules in the backbone network CSPDarknet53_tiny of Yolov4-tiny; each CSPBlock module and the corresponding convolution attention mechanism module CBAM form a CSPBlock_CBAM module, and the three CSPBlock_CBAM modules are named CSPBlock_CBAM-1, CSPBlock_CBAM-2 and CSPBlock_CBAM-3 respectively;
step 203: adding a space pyramid pooling module behind the modified backbone network, obtaining different receptive fields by connecting the maximum pooling operations of different pooling cores, fusing global features and local features together, and enhancing the feature expression capability of small targets;
step 204: adding a prediction layer of a shallow feature map in an original feature fusion network, fusing a bidirectional feature fusion channel from top to bottom, adding transverse connection, distributing a weight for each input feature, learning the weight of each input feature by using the network, and detecting a target by using 3 prediction layers, so that the detection precision of a small target is improved;
step 3, using the training set from step 1, performing multi-round optimization training of the airplane detection network with the Adam optimization method and continuously updating the parameters so that the loss function gradually converges to its optimum; after each round of training, keeping the trained parameters to obtain a corresponding airplane detection model;
step 4, inputting the test set obtained in the step 1 into each airplane detection model trained in the step 3, testing the test set, recording the accuracy of different airplane detection models, and selecting the optimal model as a final airplane detection model;
step 5, detecting the small airplane target by using the final airplane detection model selected in step 4.
Further, in step 1, the obtained airplane image data set is annotated into the format of the VOC2007 data set using the LabelImage annotation tool, and the annotation files are generated and stored; the division ratio of the training set to the test set is 8:2.
Further, in step 202, the CSPBlock _ CBAM module operates as follows:
performing a convolution operation with kernel size 3 and stride 1 on the input feature map, then processing with BN batch normalization and the Leaky nonlinear activation function to obtain feature map X1;

halving the number of channels of the input feature map, performing a convolution operation with kernel size 3 and stride 1, then processing with BN batch normalization and the Leaky nonlinear activation function to obtain feature map X2;

applying a convolution operation with kernel size 3 and stride 1 to feature map X2, then processing with BN batch normalization and the Leaky nonlinear activation function to obtain feature map X3;

channel-splicing X2 and X3, then applying a convolution operation with kernel size 1 and stride 1 to the spliced feature map and processing with BN batch normalization and the Leaky nonlinear activation function to obtain feature map X4;

channel-splicing X1 and X4, and sending the spliced feature map into the convolution attention mechanism module CBAM, which adaptively optimizes the feature map and enhances its feature expression capability.
Further, in step 203, the spatial pyramid pooling module divides the input feature map into 4 branches: 3 branches perform maximum pooling with pooling kernels of 5, 9 and 13 respectively to enlarge the receptive field, the 4th branch is passed through directly, and the output feature maps of the 4 branches are then channel-spliced.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a lightweight aircraft detection method based on an improved Yolov4-tiny. First, standard convolutions are replaced with depthwise separable convolutions, further reducing the network's parameter count and computation and improving detection efficiency. Second, an attention mechanism is added to strengthen the feature expression capability of the convolutional neural network, and a spatial pyramid pooling module is introduced to obtain receptive fields over multiple scale ranges and aggregate wider context information. Finally, a multi-scale detection layer and a top-down bidirectional fusion channel are added, with weight information assigned to each branch, so that the network attends more to useful feature layers and the detection precision for small aircraft targets is improved. The design is scientific and reasonable: it makes the model lightweight, facilitates deployment on embedded platforms such as unmanned aerial vehicles, and improves the detection of small targets.
Drawings
In order to more clearly describe this patent, one or more of the following figures are provided.
FIG. 1 is a diagram of a CBAM attention mechanism.
FIG. 2 is a diagram of a CSPBlock structure embedded in a CBAM module.
FIG. 3 is a spatial pyramid pooling module.
Fig. 4 is a modified overall model structure diagram.
Detailed Description
The technical solution of the present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
A lightweight aircraft detection method based on improved Yolov4-tiny comprises the following steps:
step 1, selecting the RSOD and UCAS-AOD remote sensing image data sets, screening out the airplane images to form a small aircraft data set, annotating with the LabelImage annotation tool, generating and storing the annotation files, and dividing the training set and test set in a ratio of 8:2.
Step 2, constructing an airplane detection network, wherein the specific mode is as follows:
step 201: the lightweight Yolov4-tiny version of Yolov4 is improved by replacing all standard convolutions in the network with depthwise separable convolutions, each consisting of a 3×3 depthwise convolution and a 1×1 pointwise convolution. This reduces the network's parameter count and computation and makes the aircraft detection network lightweight; the replaced standard convolutions are distributed across the backbone network, the spatial pyramid pooling module, the feature fusion network and the detection layers.
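As a rough illustration of the savings this replacement brings, the sketch below compares parameter counts of a standard convolution against a 3×3 depthwise plus 1×1 pointwise pair. The channel sizes are illustrative only, and bias/BN parameters are ignored:

```python
def standard_conv_params(c_in, c_out, k=3):
    # A standard convolution uses one k x k x c_in kernel per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k=3):
    # Depthwise stage: one k x k kernel per input channel.
    # Pointwise stage: one 1 x 1 x c_in kernel per output channel.
    return k * k * c_in + c_in * c_out

for c_in, c_out in [(32, 64), (128, 256), (256, 512)]:
    std = standard_conv_params(c_in, c_out)
    dsc = depthwise_separable_params(c_in, c_out)
    print(f"{c_in:>3}->{c_out:<3}  standard={std:>7}  "
          f"separable={dsc:>6}  ratio={dsc / std:.3f}")
```

For 3×3 kernels the ratio approaches 1/9 as the channel count grows, which is where the reduction in parameters and computation comes from.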
Step 202: a convolution attention mechanism module CBAM is added behind 3 CSPBlock modules in a backbone network CSPDarknet53_ tiny of Yolov4-tiny, each CSPBlock module and the corresponding convolution attention mechanism module CBAM form a module CSPBlock _ CBAM, and the three modules CSPBlock _ CBAM are named as CSPBlock _ CBAM-1, CSPBlock _ CBAM-2 and CSPBlock _ CBAM-3 respectively; the backbone network firstly undergoes Dconv _1 and Dconv _2 deep separable convolution, and then 3 CSPBlock _ CBAM operations and one Dconv _3 deep separable convolution operation are carried out, wherein the specific flow of each CSPBlock _ CBAM is as follows:
performing a convolution operation with kernel size 3 and stride 1 on the input feature map, then processing with BN batch normalization and the Leaky nonlinear activation function to obtain feature map X1;

halving the number of channels of the input feature map, then performing a convolution operation with kernel size 3 and stride 1 and processing with BN batch normalization and the Leaky nonlinear activation function to obtain feature map X2;

applying a convolution operation with kernel size 3 and stride 1 to feature map X2, then processing with BN batch normalization and the Leaky nonlinear activation function to obtain feature map X3;

channel-splicing X2 and X3, then applying a convolution operation with kernel size 1 and stride 1 to the spliced feature map and processing with BN batch normalization and the Leaky nonlinear activation function to obtain feature map X4;

channel-splicing X1 and X4, and sending the spliced feature map into the convolution attention mechanism module CBAM, which adaptively optimizes the feature map and enhances its feature expression capability, further improving the lightweight backbone network's ability to extract target features. The CSPBlock_CBAM structure is shown in FIG. 2.
Finally, after each CSPBlock_CBAM module, a down-sampling operation is performed using max pooling with stride 2 and a pooling kernel of 2. The 3 feature maps selected for detection have sizes 52×52, 26×26 and 13×13: the 52×52 feature map detects small targets, the 26×26 map medium targets, and the 13×13 map large targets.
The CBAM attention module consists of a channel attention part and a spatial attention part; a schematic diagram is shown in FIG. 1. First, global average pooling and max pooling in the channel attention module compress the input feature map F; the two compressed descriptors are then fed simultaneously into a multi-layer perceptron MLP that reduces and then restores the dimensionality; finally, the two feature maps output by the MLP are added element-wise and passed through a sigmoid activation function to obtain the channel attention weighting coefficient M_C, as shown in formula (1):

M_C(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F))) = σ(W1(W0(F_avg^c)) + W1(W0(F_max^c))) (1)

where AvgPool denotes average pooling, MaxPool denotes max pooling, σ denotes the Sigmoid activation function, W0 and W1 denote the weight matrices of the multi-layer perceptron MLP, and F_avg^c and F_max^c denote the average-pooled and max-pooled features.

The input feature F is multiplied by the channel attention weighting coefficient M_C to obtain a new feature map F′, as shown in formula (2):

F′ = M_C(F) ⊗ F (2)

F′ is then input into the spatial attention module to obtain the spatial attention weighting coefficient M_S. The spatial attention mechanism compresses the channels, performing average pooling and max pooling along the channel dimension: max pooling extracts the maximum value over the channels, repeated height × width times, and average pooling extracts the mean value over the channels, likewise repeated height × width times. The two extracted single-channel feature maps are then combined into a 2-channel feature map, and the combination is shown in formula (3):

M_S(F′) = σ(f^{7×7}([AvgPool(F′); MaxPool(F′)])) (3)

where AvgPool denotes average pooling, MaxPool denotes max pooling, σ denotes the Sigmoid activation function, and f^{7×7} denotes a convolution operation with a 7×7 kernel.

Finally, M_S and F′ are multiplied to obtain the final attention feature F″, as shown in formula (4):

F″ = M_S(F′) ⊗ F′ (4)
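The weighting steps of equations (1)-(4) can be traced numerically on a toy feature map. The sketch below assumes an identity MLP (W0 = W1 = I) and replaces the 7×7 convolution with a simple per-position channel summary, so the numbers illustrate only the data flow, not a trained CBAM:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy 2-channel 2x2 feature map, indexed F[channel][row][col].
F = [
    [[1.0, 2.0], [3.0, 4.0]],
    [[0.0, 1.0], [1.0, 2.0]],
]

# Channel attention, eq. (1): Mc = sigmoid(MLP(AvgPool(F)) + MLP(MaxPool(F))).
# The MLP is taken as identity here, purely for illustration.
avg = [sum(v for row in ch for v in row) / 4 for ch in F]
mx = [max(v for row in ch for v in row) for ch in F]
Mc = [sigmoid(a + m) for a, m in zip(avg, mx)]

# Eq. (2): F' = Mc * F, broadcast over the spatial positions.
Fp = [[[Mc[c] * F[c][i][j] for j in range(2)] for i in range(2)]
      for c in range(2)]

# Spatial attention, eq. (3): average and max pooling along the channel
# dimension; the 7x7 convolution is replaced by a sum of the two descriptors.
Ms = [[sigmoid(sum(Fp[c][i][j] for c in range(2)) / 2
               + max(Fp[c][i][j] for c in range(2)))
       for j in range(2)] for i in range(2)]

# Eq. (4): F'' = Ms * F'.
Fpp = [[[Ms[i][j] * Fp[c][i][j] for j in range(2)] for i in range(2)]
       for c in range(2)]
print(Fpp[0][1][1])  # attention-refined response at channel 0, position (1,1)
```

Every weighting coefficient lies in (0, 1) because of the sigmoid, so CBAM rescales rather than replaces the input responses.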
step 203: a spatial pyramid pooling module is added behind the backbone network; different receptive fields are obtained by connecting max-pooling branches with different pooling kernels in parallel, effectively fusing global and local features. The spatial pyramid pooling module divides the input feature map into 4 branches: 3 branches perform max pooling with kernels of 5, 9 and 13 respectively, enlarging the receptive field, while the 4th branch performs no operation and is passed through directly; the 4 branch outputs are then channel-spliced, enriching the feature information and improving detection performance. The structure of the spatial pyramid pooling module is shown in FIG. 3.
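A minimal single-channel sketch of this branch layout, assuming stride-1 max pooling with "same" padding so that every branch preserves the 13×13 spatial size (a common SPP convention, not stated explicitly above):

```python
def maxpool_same(fm, k):
    """Max pooling with stride 1 and 'same' padding (pad = k // 2)."""
    h, w, p = len(fm), len(fm[0]), k // 2
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            window = [fm[a][b]
                      for a in range(max(0, i - p), min(h, i + p + 1))
                      for b in range(max(0, j - p), min(w, j + p + 1))]
            row.append(max(window))
        out.append(row)
    return out

# A 13x13 single-channel feature map with distinct values per position.
feature = [[float(i * 13 + j) for j in range(13)] for i in range(13)]

# Three max-pooling branches (kernels 5, 9, 13) plus the identity branch,
# then channel concatenation: 4 single-channel maps -> a 4-channel output.
spp_out = [maxpool_same(feature, k) for k in (5, 9, 13)] + [feature]
print(len(spp_out), len(spp_out[0]), len(spp_out[0][0]))  # 4 13 13
```

Each branch keeps the spatial size but summarizes a wider neighborhood, so the concatenated output mixes local detail (identity branch) with progressively more global context (kernel-13 branch).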
Step 204: the input image passes through three feature maps of a CSPBlock _ CBAM-1 module and a CSPBlock _ CBAM-2 module in the step 202 and a spatial pyramid pooling module in the step 203, the three feature maps are respectively subjected to convolution operation with the convolution kernel size of 1 and the step length of 1 to obtain three feature maps which are respectively named as P3_ in, P4_ in and P5_ in from top to bottom, the three feature maps are sent into a feature fusion network to be fused, a top-down connecting channel is added to fuse information from a backbone network in different scales, and when the fusion is carried out in different scales, the feature resolution is unified through up-sampling and down-sampling, and transverse connection is added among the features in the same scale, so that the feature information loss caused by too many network levels is relieved. Adding an extra weight to each input in the fusion process, enabling the network to learn the importance of each input feature, performing feature fusion by using deep separable convolution, and adding batch standardization and activation after each convolution. Equations (5) - (6) are illustrative of the layer P4:
wherein P4_ td represents the first fusion result, and is composed of two branches, P5_ in (input of P5 layer) viaOversampled profiles and P4_ in (input to P4 level), resize represents the upsampling operation for uniform profile size, w1Represents the weight, w, occupied by P4_ in2Representing the weight occupied by P5_ in, wherein epsilon is 0.0001, Conv represents that two feature maps are subjected to pixel addition, feature fusion is carried out by using depth separable convolution, and batch normalization and swish nonlinear activation functions are added after each convolution, so that features of different scales are effectively fused and information fusion of the same scale is enhanced. P4_ out represents the second fusion result (also referred to as an output feature map) of the P4 layer, and is composed of three branches, i.e., feature maps obtained by down-sampling P4_ in, P4_ td (the first fusion result of the P4 layer), and P3_ out (the output of the P3 layer). w is a1' denotes the weight occupied by P4_ in, w2' denotes the weight occupied by P4_ td, w3' denotes the weight occupied by P3_ out, ∈ 0.0001, and Conv denotes that the two feature maps are pixel-added, and the improved feature fusion is shown in fig. 4.
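With the convolution, resizing and activation steps stubbed out and scalars standing in for whole feature maps, the weighted fusion reduces to the normalized sum below. The weight values are arbitrary placeholders; in the real network they are learned:

```python
EPS = 1e-4  # the epsilon = 0.0001 term from the fusion formulas

def fuse(inputs, weights):
    """Weighted fusion: sum(w_i * x_i) / (sum(w_i) + eps)."""
    assert len(inputs) == len(weights)
    return sum(w * x for w, x in zip(weights, inputs)) / (sum(weights) + EPS)

# Scalar stand-ins for the P3/P4/P5 feature maps (resizing omitted).
p4_in, p5_in, p3_out = 2.0, 4.0, 1.0

w1, w2 = 0.6, 0.4                       # first-fusion weights
p4_td = fuse([p4_in, p5_in], [w1, w2])  # first fusion result of the P4 layer

w1p, w2p, w3p = 0.5, 0.3, 0.2           # second-fusion weights
p4_out = fuse([p4_in, p4_td, p3_out], [w1p, w2p, w3p])
print(round(p4_td, 4), round(p4_out, 4))
```

Because the weights are normalized by their own sum, each fused output stays within the range of its inputs, which keeps training stable while still letting the network emphasize the more useful branch.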
Step 3, training the improved network model of step 2 with the training set from step 1. To improve small-target detection precision, all input images are resized to 512×512 during training, and the aircraft training samples are augmented with data enhancement methods such as horizontal flipping, random cropping and mirroring, increasing data diversity and improving aircraft detection precision. During training, 9 prior boxes suited to the aircraft target are re-clustered with the K-means clustering method, and 3 prior boxes are assigned to each of the three detection layers obtained in step 202, for detecting targets of different scales.
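The K-means prior-box step can be sketched as follows, using the 1 − IoU distance commonly used for YOLO anchor clustering (a standard choice, assumed here rather than stated in the text). The box data is synthetic, not drawn from the RSOD/UCAS-AOD sets:

```python
import random

random.seed(0)

def iou_wh(a, b):
    # IoU of two (w, h) boxes aligned at a common corner.
    inter = min(a[0], b[0]) * min(a[1], b[1])
    return inter / (a[0] * a[1] + b[0] * b[1] - inter)

def kmeans_anchors(boxes, k, iters=50):
    """Cluster (w, h) pairs; higher IoU with a center = closer."""
    centers = random.sample(boxes, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for box in boxes:
            idx = max(range(k), key=lambda c: iou_wh(box, centers[c]))
            groups[idx].append(box)
        centers = [(sum(b[0] for b in g) / len(g),
                    sum(b[1] for b in g) / len(g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers, key=lambda c: c[0] * c[1])  # smallest area first

# Synthetic labeled box sizes at three rough scales.
boxes = [(random.uniform(8, 40) * s, random.uniform(8, 40) * s)
         for s in (1, 3, 8) for _ in range(30)]
anchors = kmeans_anchors(boxes, 9)

# 3 anchors per detection layer: smallest to the 52x52 small-target layer.
layers = {"52x52": anchors[0:3], "26x26": anchors[3:6], "13x13": anchors[6:9]}
print([(round(w), round(h)) for w, h in layers["13x13"]])
```

Sorting the 9 cluster centers by area before splitting them across the three layers is what routes the small priors to the high-resolution 52×52 map.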
An Adam optimizer is used to train the network for 100 epochs (one epoch means every picture in the dataset is trained once). The initial learning rate is set to 0.001 and is multiplied by 0.1 every 30 epochs; the batch size is set to 4. After each epoch, the trained parameters are kept and a corresponding aircraft detection model is obtained, until the trained model parameters make the network's total loss function converge. The total loss function comprises three parts, as shown in formula (7):
Loss=Lciou+Lconf+Lcls (7)
where L_ciou denotes the bounding-box regression loss, L_conf the confidence loss and L_cls the classification loss, with expressions as in formulas (8)-(10):

L_ciou = Σ_{i=0}^{s²} Σ_{j=0}^{B} I_{ij}^{obj} [1 − IOU + ρ²(b, b^{gt})/c² + αν] (8)

L_conf = −Σ_{i=0}^{s²} Σ_{j=0}^{B} I_{ij}^{obj} [Ĉ_i log(C_i) + (1 − Ĉ_i) log(1 − C_i)] − λ_{noobj} Σ_{i=0}^{s²} Σ_{j=0}^{B} I_{ij}^{noobj} [Ĉ_i log(C_i) + (1 − Ĉ_i) log(1 − C_i)] (9)

L_cls = −Σ_{i=0}^{s²} Σ_{j=0}^{B} I_{ij}^{obj} Σ_{c∈classes} [p̂_i(c) log(p_i(c)) + (1 − p̂_i(c)) log(1 − p_i(c))] (10)

where s² is the number of grid cells of the feature map, B is the number of prior boxes, and I_{ij}^{obj} is 1 when the j-th prior box of the i-th grid cell contains an object and 0 otherwise, with I_{ij}^{noobj} taking the opposite values. ρ²(b, b^{gt}) is the squared Euclidean distance between the center points of the predicted box and the labeled box, c is the diagonal length of the smallest enclosing region containing both the predicted and labeled boxes, b^{gt}, w^{gt}, h^{gt} denote the center coordinates, width and height of the labeled box, and b, w, h those of the predicted box; ν = (4/π²)(arctan(w^{gt}/h^{gt}) − arctan(w/h))² measures aspect-ratio consistency and α = ν/((1 − IOU) + ν) is its weighting coefficient. C_i and Ĉ_i denote the confidences of the predicted and labeled boxes, p_i(c) and p̂_i(c) the class probabilities of the predicted and labeled boxes, λ_{noobj} is the no-object weighting coefficient, and IOU denotes the intersection-over-union of the predicted and labeled boxes.
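A pure-Python sketch of the CIoU regression term inside formula (8), for a single predicted/labeled box pair. Boxes are given as (cx, cy, w, h); the numeric values are illustrative:

```python
import math

def ciou_loss(pred, gt):
    """1 - IoU + rho^2(b, b_gt)/c^2 + alpha*v for one box pair."""
    px, py, pw, ph = pred
    gx, gy, gw, gh = gt
    # Corner coordinates of both boxes.
    p1, p2 = (px - pw / 2, py - ph / 2), (px + pw / 2, py + ph / 2)
    g1, g2 = (gx - gw / 2, gy - gh / 2), (gx + gw / 2, gy + gh / 2)
    # Intersection-over-union.
    iw = max(0.0, min(p2[0], g2[0]) - max(p1[0], g1[0]))
    ih = max(0.0, min(p2[1], g2[1]) - max(p1[1], g1[1]))
    inter = iw * ih
    iou = inter / (pw * ph + gw * gh - inter)
    # rho^2: squared centre distance; c^2: squared diagonal of the smallest
    # region enclosing both boxes.
    rho2 = (px - gx) ** 2 + (py - gy) ** 2
    cw = max(p2[0], g2[0]) - min(p1[0], g1[0])
    ch = max(p2[1], g2[1]) - min(p1[1], g1[1])
    c2 = cw ** 2 + ch ** 2
    # Aspect-ratio consistency term and its weighting coefficient.
    v = (4 / math.pi ** 2) * (math.atan(gw / gh) - math.atan(pw / ph)) ** 2
    alpha = v / ((1 - iou) + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v

print(round(ciou_loss((50, 50, 20, 20), (52, 51, 22, 18)), 4))
```

The loss is 0 only when the predicted box matches the labeled box exactly, and the center-distance term keeps gradients informative even when the boxes do not overlap at all.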
Step 4, inputting the test set obtained in step 1 into the 100 aircraft detection models trained in step 3, testing them, recording the accuracy of the models with different parameters, and selecting the optimal model as the trained lightweight aircraft detection model.
Step 5, detecting small aircraft targets by using the optimal aircraft detection model selected in step 4.
In a word, aiming at the large parameter counts, heavy computation, and missed and false detections of existing small-aircraft detection models, the method starts from the backbone feature extraction network, reducing computational cost while fully preserving the ability to extract small-target features; it strengthens small-target feature fusion with spatial pyramid pooling and bidirectional feature fusion channels, making full use of the detail information in shallow feature maps and the semantic information in deep feature maps, and finally improves the performance of small aircraft target detection.
Claims (4)
1. A lightweight aircraft detection method based on improved Yolov4-tiny is characterized by comprising the following steps:
step 1, acquiring an image data set containing an airplane target, performing data annotation and image formatting on images in the image data set, and dividing the processed image data set into a training set and a test set;
step 2, constructing an airplane detection network, wherein the specific mode is as follows:
step 201: the lightweight version Yolov4-tiny network of Yolov4 is improved, all standard convolutions in the network are replaced by deep separable convolutions, the parameter number and the calculated amount of the network are reduced, and the lightweight of the airplane detection network is realized;
step 202: a convolution attention mechanism module CBAM is added behind each of the 3 CSPBlock modules in the backbone network CSPDarknet53_tiny of Yolov4-tiny; each CSPBlock module and the corresponding convolution attention mechanism module CBAM form a CSPBlock_CBAM module, and the three CSPBlock_CBAM modules are named CSPBlock_CBAM-1, CSPBlock_CBAM-2 and CSPBlock_CBAM-3 respectively;
step 203: adding a space pyramid pooling module behind the modified backbone network, obtaining different receptive fields by connecting the maximum pooling operations of different pooling cores, fusing global features and local features together, and enhancing the feature expression capability of small targets;
step 204: adding a prediction layer of a shallow feature map in an original feature fusion network, fusing a bidirectional feature fusion channel from top to bottom, adding transverse connection, distributing a weight for each input feature, learning the weight of each input feature by using the network, and detecting a target by using 3 prediction layers, so that the detection precision of a small target is improved;
step 3, using the training set from step 1, performing multi-round optimization training of the airplane detection network with the Adam optimization method and continuously updating the parameters so that the loss function gradually converges to its optimum; after each round of training, keeping the trained parameters to obtain a corresponding airplane detection model;
step 4, inputting the test set obtained in the step 1 into each airplane detection model trained in the step 3, testing the test set, recording the accuracy of different airplane detection models, and selecting the optimal model as a final airplane detection model;
step 5, detecting the small airplane target by using the final airplane detection model selected in step 4.
2. The lightweight aircraft detection method based on improved Yolov4-tiny according to claim 1, wherein in step 1 the acquired aircraft image data set is annotated into the format of the VOC2007 data set using the LabelImage annotation tool, and the annotation files are generated and stored; the division ratio of the training set to the test set is 8:2.
3. The lightweight aircraft detection method based on improved Yolov4-tiny according to claim 1, wherein in step 202 the CSPBlock_CBAM module operates as follows:

performing a convolution operation with kernel size 3 and stride 1 on the input feature map, then processing with BN batch normalization and the Leaky nonlinear activation function to obtain feature map X1;

halving the number of channels of the input feature map, performing a convolution operation with kernel size 3 and stride 1, then processing with BN batch normalization and the Leaky nonlinear activation function to obtain feature map X2;

applying a convolution operation with kernel size 3 and stride 1 to feature map X2, then processing with BN batch normalization and the Leaky nonlinear activation function to obtain feature map X3;

channel-splicing X2 and X3, then applying a convolution operation with kernel size 1 and stride 1 to the spliced feature map and processing with BN batch normalization and the Leaky nonlinear activation function to obtain feature map X4;

channel-splicing X1 and X4, and sending the spliced feature map into the convolution attention mechanism module CBAM, which adaptively optimizes the feature map and enhances its feature expression capability.
4. The lightweight aircraft detection method based on improved Yolov4-tiny according to claim 1, wherein in step 203 the spatial pyramid pooling module divides the input feature map into 4 branches: 3 branches perform maximum pooling with pooling kernels of 5, 9 and 13 respectively to enlarge the receptive field, the 4th branch is passed through directly, and the output feature maps of the 4 branches are then channel-spliced.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---
CN202111086512.0A CN113780211A (en) | 2021-09-16 | 2021-09-16 | Lightweight aircraft detection method based on improved Yolov4-tiny
Publications (1)
Publication Number | Publication Date |
---|---
CN113780211A true CN113780211A (en) | 2021-12-10 |
Family
ID=78851376
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---
CN114241277A (en) * | 2021-12-22 | 2022-03-25 | 中国人民解放军国防科技大学 | Attention-guided multi-feature fusion disguised target detection method, device, equipment and medium |
CN114462555A (en) * | 2022-04-13 | 2022-05-10 | 国网江西省电力有限公司电力科学研究院 | Multi-scale feature fusion power distribution network equipment identification method based on raspberry pi |
CN114494861A (en) * | 2022-01-10 | 2022-05-13 | 湖北工业大学 | Airplane target detection method based on multi-parameter optimization YOLOV4 network |
CN114596335A (en) * | 2022-03-01 | 2022-06-07 | 广东工业大学 | Unmanned ship target detection tracking method and system |
CN114596536A (en) * | 2022-05-07 | 2022-06-07 | 陕西欧卡电子智能科技有限公司 | Unmanned ship coastal inspection method and device, computer equipment and storage medium |
CN114627282A (en) * | 2022-03-15 | 2022-06-14 | 平安科技(深圳)有限公司 | Target detection model establishing method, target detection model application method, target detection model establishing device, target detection model application device and target detection model establishing medium |
CN114663654A (en) * | 2022-05-26 | 2022-06-24 | 西安石油大学 | Improved YOLOv4 network model and small target detection method |
CN114724033A (en) * | 2022-04-09 | 2022-07-08 | 常州大学 | Robot armor plate detection method based on deep learning |
CN114758288A (en) * | 2022-03-15 | 2022-07-15 | 华北电力大学 | Power distribution network engineering safety control detection method and device |
CN115100709A (en) * | 2022-06-23 | 2022-09-23 | 北京邮电大学 | Feature-separated image face recognition and age estimation method |
CN116343011A (en) * | 2023-04-29 | 2023-06-27 | 河南工业大学 | Lightweight neural network airport scene plane identification method |
CN116363530A (en) * | 2023-03-14 | 2023-06-30 | 北京天鼎殊同科技有限公司 | Method and device for positioning expressway pavement diseases |
CN116453104A (en) * | 2023-06-15 | 2023-07-18 | 安徽容知日新科技股份有限公司 | Liquid level identification method, liquid level identification device, electronic equipment and computer readable storage medium |
CN117764988A (en) * | 2024-02-22 | 2024-03-26 | 山东省计算中心(国家超级计算济南中心) | Road crack detection method and system based on heteronuclear convolution multi-receptive field network |
CN117911679A (en) * | 2024-03-15 | 2024-04-19 | 青岛国实科技集团有限公司 | Hull identification system and method based on image enhancement and tiny target identification |
CN118351117A (en) * | 2024-06-18 | 2024-07-16 | 四川联欣科技服务有限公司 | Industrial equipment defect detection method based on machine vision |
WO2024152477A1 (en) * | 2023-01-17 | 2024-07-25 | 南京莱斯电子设备有限公司 | Airport flight zone real-time target detection method based on multiscale feature decoupling |
-
2021
- 2021-09-16 CN CN202111086512.0A patent/CN113780211A/en not_active Withdrawn
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114241277A (en) * | 2021-12-22 | 2022-03-25 | 中国人民解放军国防科技大学 | Attention-guided multi-feature fusion disguised target detection method, device, equipment and medium |
CN114494861B (en) * | 2022-01-10 | 2024-04-26 | 湖北工业大学 | Aircraft target detection method based on multi-parameter optimization YOLOV network |
CN114494861A (en) * | 2022-01-10 | 2022-05-13 | 湖北工业大学 | Airplane target detection method based on multi-parameter optimization YOLOV4 network |
CN114596335B (en) * | 2022-03-01 | 2023-10-31 | 广东工业大学 | Unmanned ship target detection tracking method and system |
CN114596335A (en) * | 2022-03-01 | 2022-06-07 | 广东工业大学 | Unmanned ship target detection tracking method and system |
CN114758288A (en) * | 2022-03-15 | 2022-07-15 | 华北电力大学 | Power distribution network engineering safety control detection method and device |
CN114627282A (en) * | 2022-03-15 | 2022-06-14 | 平安科技(深圳)有限公司 | Target detection model establishing method, target detection model application method, target detection model establishing device, target detection model application device and target detection model establishing medium |
CN114627282B (en) * | 2022-03-15 | 2024-09-13 | 平安科技(深圳)有限公司 | Method, application method, equipment, device and medium for establishing target detection model |
WO2023173552A1 (en) * | 2022-03-15 | 2023-09-21 | 平安科技(深圳)有限公司 | Establishment method for target detection model, application method for target detection model, and device, apparatus and medium |
CN114724033A (en) * | 2022-04-09 | 2022-07-08 | 常州大学 | Robot armor plate detection method based on deep learning |
US11631238B1 (en) | 2022-04-13 | 2023-04-18 | Iangxi Electric Power Research Institute Of State Grid | Method for recognizing distribution network equipment based on raspberry pi multi-scale feature fusion |
CN114462555A (en) * | 2022-04-13 | 2022-05-10 | 国网江西省电力有限公司电力科学研究院 | Multi-scale feature fusion power distribution network equipment identification method based on raspberry pi |
CN114596536A (en) * | 2022-05-07 | 2022-06-07 | 陕西欧卡电子智能科技有限公司 | Unmanned ship coastal inspection method and device, computer equipment and storage medium |
CN114663654A (en) * | 2022-05-26 | 2022-06-24 | 西安石油大学 | Improved YOLOv4 network model and small target detection method |
CN115100709A (en) * | 2022-06-23 | 2022-09-23 | 北京邮电大学 | Feature-separated image face recognition and age estimation method |
WO2024152477A1 (en) * | 2023-01-17 | 2024-07-25 | 南京莱斯电子设备有限公司 | Airport flight zone real-time target detection method based on multiscale feature decoupling |
CN116363530A (en) * | 2023-03-14 | 2023-06-30 | 北京天鼎殊同科技有限公司 | Method and device for positioning expressway pavement diseases |
CN116363530B (en) * | 2023-03-14 | 2023-11-03 | 北京天鼎殊同科技有限公司 | Method and device for positioning expressway pavement diseases |
CN116343011A (en) * | 2023-04-29 | 2023-06-27 | 河南工业大学 | Lightweight neural network airport scene plane identification method |
CN116453104B (en) * | 2023-06-15 | 2023-09-08 | 安徽容知日新科技股份有限公司 | Liquid level identification method, liquid level identification device, electronic equipment and computer readable storage medium |
CN116453104A (en) * | 2023-06-15 | 2023-07-18 | 安徽容知日新科技股份有限公司 | Liquid level identification method, liquid level identification device, electronic equipment and computer readable storage medium |
CN117764988A (en) * | 2024-02-22 | 2024-03-26 | 山东省计算中心(国家超级计算济南中心) | Road crack detection method and system based on heteronuclear convolution multi-receptive field network |
CN117764988B (en) * | 2024-02-22 | 2024-04-30 | 山东省计算中心(国家超级计算济南中心) | Road crack detection method and system based on heteronuclear convolution multi-receptive field network |
CN117911679A (en) * | 2024-03-15 | 2024-04-19 | 青岛国实科技集团有限公司 | Hull identification system and method based on image enhancement and tiny target identification |
CN117911679B (en) * | 2024-03-15 | 2024-05-31 | 青岛国实科技集团有限公司 | Hull identification system and method based on image enhancement and tiny target identification |
CN118351117A (en) * | 2024-06-18 | 2024-07-16 | 四川联欣科技服务有限公司 | Industrial equipment defect detection method based on machine vision |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113780211A (en) | Lightweight aircraft detection method based on improved YOLOv4-tiny | |
CN110188705B (en) | Remote traffic sign detection and identification method suitable for vehicle-mounted system | |
CN108764063B (en) | Remote sensing image time-sensitive target identification system and method based on characteristic pyramid | |
CN109584248B (en) | Infrared target instance segmentation method based on feature fusion and dense connection network | |
CN111461083A (en) | Rapid vehicle detection method based on deep learning | |
CN114202672A (en) | Small target detection method based on attention mechanism | |
CN108537742A (en) | Panchromatic sharpening method for remote sensing images based on generative adversarial network | |
CN111680176A (en) | Remote sensing image retrieval method and system based on attention and bidirectional feature fusion | |
CN110135267A (en) | Fine target detection method for large-scene SAR images | |
CN109886066A (en) | Fast target detection method based on multi-scale and multi-layer feature fusion | |
CN111368671A (en) | SAR image ship target detection and identification integrated method based on deep learning | |
CN114782798A (en) | Underwater target detection method based on attention fusion | |
CN110929080A (en) | Optical remote sensing image retrieval method based on attention and generative adversarial network | |
CN118314353B (en) | Remote sensing image segmentation method based on double-branch multi-scale feature fusion | |
CN116012722A (en) | Remote sensing image scene classification method | |
CN115393690A (en) | Lightweight neural network multi-target recognition method for air-to-ground observation | |
CN115861619A (en) | Airborne LiDAR (light detection and ranging) urban point cloud semantic segmentation method and system of recursive residual double-attention kernel point convolution network | |
CN114332473A (en) | Object detection method, object detection device, computer equipment, storage medium and program product | |
CN117853955A (en) | Unmanned aerial vehicle small target detection method based on improved YOLOv5 | |
WO2023179593A1 (en) | Data processing method and device | |
CN116597326A (en) | Unmanned aerial vehicle aerial photography small target detection method based on improved YOLOv7 algorithm | |
CN118314333B (en) | Infrared image target detection method based on transducer architecture | |
CN115035381A (en) | Lightweight target detection network of SN-YOLOv5 and crop picking detection method | |
CN118351435A (en) | Unmanned aerial vehicle remote sensing image target detection method and device based on lightweight model LTE-Det | |
CN117152644A (en) | Target detection method for aerial photo of unmanned aerial vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 2021-12-10