CN115047455A - Lightweight SAR image ship target detection method - Google Patents


Info

Publication number
CN115047455A
CN115047455A
Authority
CN
China
Prior art keywords
network
target
training
ship target
sar image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210588163.0A
Other languages
Chinese (zh)
Inventor
郑纯
蔡阳葳
陈志华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202210588163.0A priority Critical patent/CN115047455A/en
Publication of CN115047455A publication Critical patent/CN115047455A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting


Abstract

The application provides a lightweight SAR image ship target detection method, which comprises the steps of constructing a lightweight target network; constructing a SAR image ship target data set, the data set comprising a training set, a validation set and a test set; performing data enhancement on the data set; training the target network; and testing the trained target network with the test set. The application introduces MobileNetV2 and depthwise separable convolutions to make the YOLOv3 network lightweight, obtaining the lightweight YOLOv3-MD network. Methods such as data enhancement and SGD optimization are introduced in the training process, which accelerates training and gives the designed target detection model higher robustness to SAR images acquired in different environments.

Description

Lightweight SAR image ship target detection method
Technical Field
The invention belongs to the technical field of radars, and particularly relates to a lightweight SAR image ship target detection method.
Background
In recent years, with the accelerating development of Synthetic Aperture Radar (SAR), the quality and quantity of radar images have grown rapidly. SAR moves an antenna at constant speed along a long linear track while radiating coherent signals, and coherently processes the echoes received at different positions, thereby obtaining imaging with higher resolution. It has the advantage of being free from limitations such as illumination and weather conditions, enabling all-weather, round-the-clock earth observation. When SAR is used for ship target detection, the sizes and proportions of target ships are inconsistent and the detection background is very complex, which poses great challenges to SAR ship detection. If detection is applied to combat ships in the military field, still higher requirements are placed on the accuracy, detection speed and network light weight of the SAR target detection algorithm.
However, conventional SAR image ship target detection methods are developed from the Constant False Alarm Rate (CFAR) algorithm and usually require manually designed features to train a classifier in order to identify ships. Because the traditional methods depend too heavily on manual design, their detection precision and generality are low, and they are easily disturbed by sea waves, reefs and coastlines. They also struggle to maintain detection precision at the detection speeds required of current detection algorithms, so traditional SAR target detection algorithms are no longer suitable.
With the development of artificial intelligence, target detection algorithms based on deep learning have the advantages of self-learning, simple network design and strong robustness to various environments, and are gradually replacing traditional detection methods. The mainstream deep-learning-based target detection algorithms are diverse, including the two-stage algorithms R-CNN (Regions with CNN features) and Fast/Faster R-CNN, and the one-stage algorithms YOLO (You Only Look Once) and SSD (Single Shot MultiBox Detector); all of these have been applied to ship target detection in SAR images.
However, most existing deep-learning-based SAR target detection algorithms are designed to run on Graphics Processing Unit (GPU) computing cards and have large parameter counts and computation amounts, so they are not suitable for mobile terminals or satellite terminals with low computing capability. Therefore, it is necessary to research and provide a lightweight SAR image ship target detection model with small parameter count and computation amount.
Disclosure of Invention
The application provides a lightweight SAR image ship target detection method which is used for solving the technical problem that the existing SAR image ship target detection model is huge in parameter quantity and complex in calculation.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
the application provides a lightweight SAR image ship target detection method, which comprises the following steps:
step 1, constructing a lightweight target network;
step 2, constructing an SAR image ship target data set; the ship target data set comprises a training set, a verification set and a test set;
step 3, performing data enhancement on the data in the ship target data set;
step 4, training a target network;
and 5: and detecting the trained target network by using the test set.
Optionally, constructing a lightweight target network includes:
setting the target network to comprise, in order from input to output, a MobileNetV2 network, a multi-scale feature fusion network and a detection head network; wherein the MobileNetV2 network is the backbone network of the target network;
an attention mechanism is introduced into the MobileNetV2 network.
Optionally, the MobileNetV2 network comprises an input convolutional layer and first, second, third, fourth, fifth, sixth and seventh groups of inverted residual layers, stacked in sequence;
the fifth, sixth and seventh groups of inverted residual layers are respectively connected with the three layers of the multi-scale feature fusion network.
Optionally, constructing a SAR image ship target data set includes:
acquiring a picture;
marking the targets in the pictures with bounding boxes to generate a ship target data set;
and dividing a training set, a verification set and a test set.
Optionally, performing data enhancement on data in the ship target includes:
performing photometric distortion enhancement processing and geometric distortion enhancement processing on a ship target data set;
wherein the photometric distortion enhancement process includes randomly changing brightness, randomly changing contrast, and transforming color space;
the geometric distortion enhancement processing includes randomly expanding pictures, randomly cropping, randomly flipping pictures, and scaling pictures.
Optionally, training the target network includes:
training the target network by using a Leaky ReLU activation function, a Focal loss classification loss function, a CIOU regression loss function and an SGD optimizer.
Compared with the prior art, the lightweight SAR image ship target detection method has the following beneficial effects:
1. The YOLOv3 network is improved for light weight, obtaining the YOLOv3-MD network. Compared with the original network, the improved network effectively reduces the amount of computation during detection without degrading SAR image ship target detection accuracy or speed, greatly reduces the model size, meets both high-precision and real-time requirements, and lowers the difficulty of deploying the SAR image detection algorithm in environments with low computing capability.
2. The YOLOv3-MD network converges quickly during training, effectively reducing the cost of the training process.
Drawings
Fig. 1 is a schematic flow diagram of a method for detecting a ship target in a lightweight SAR image according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a lightweight YOLOv3-MD target detection network structure provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of the inverted residual structure of the MobileNetV2 network with the attention mechanism provided in the embodiment of the present application;
fig. 4 is a diagram of a ship target detection result provided in the embodiment of the present application.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings, but those skilled in the art will understand that the following examples are only illustrative of the present invention and should not be construed as limiting the scope of the present invention.
Referring to fig. 1, a lightweight SAR image ship target detection method specifically comprises the following implementation steps:
step 1, constructing a lightweight target network.
As shown in fig. 2, the target network is set to include, in order from input to output, a MobileNetV2 network, a multi-scale feature fusion network (FPN) and a detection head network (YOLOv3-head); the MobileNetV2 network is the backbone network of the target network;
an attention mechanism is introduced into the MobileNetV2 network.
The MobileNetV2 network includes an input convolutional layer and first, second, third, fourth, fifth, sixth and seventh groups of inverted residual layers, stacked in sequence.
The fifth, sixth and seventh groups of inverted residual layers are respectively connected with the three layers of the multi-scale feature fusion network.
Correspondingly, the outputs of the fifth, sixth and seventh groups of inverted residual layers serve as the corresponding inputs of the three layers of the multi-scale feature fusion network; the three feature outputs of the feature fusion network serve as inputs to the prediction network, which outputs the prediction results.
The embodiment of the application takes the YOLOv3 network as the basic network of the target network. The original backbone of YOLOv3 is Darknet-53, which performs well in detection accuracy as a classifier, but its network is relatively deep, containing 53 convolutional layers, so the model's parameter count and computation amount are too large. The MobileNetV2 structure, by contrast, has only 19 layers, 17 of which use depthwise separable convolutions.
Assume the standard convolution kernel has size D_K × D_K × M × N, the input of the standard convolution operation is F_in × F_in × M, and the output is F_out × F_out × N, where F_in is the size of the input feature map, F_out the size of the output feature map, M and N the numbers of input and output channels, and D_K the spatial size of the convolution kernel. Then:
The computation amount of the standard convolution is:
D_K × D_K × M × N × F_in × F_in
The parameter count of the standard convolution layer is:
D_K × D_K × M × N
The computation amount of the depthwise separable convolution is:
D_K × D_K × M × F_in × F_in + M × N × F_in × F_in
The parameter count of the depthwise separable convolution (a D_K × D_K depthwise convolution followed by a 1 × 1 pointwise convolution) is:
D_K × D_K × M + M × N
From the above equations, the computation amount of the depthwise separable convolution is far smaller than that of the standard convolution. Meanwhile, the MobileNetV2 network is shallower than the Darknet-53 network, so the computation amount is greatly reduced.
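As a sanity check on the formulas above, the two costs can be compared numerically. This is a minimal sketch; the layer sizes D_K = 3, M = 32, N = 64, F_in = 56 are illustrative assumptions, not values from the patent:

```python
def standard_conv_cost(dk, m, n, f_in):
    """FLOPs and parameter count of a standard dk x dk convolution."""
    flops = dk * dk * m * n * f_in * f_in
    params = dk * dk * m * n
    return flops, params

def separable_conv_cost(dk, m, n, f_in):
    """FLOPs and parameter count of a depthwise-separable convolution
    (dk x dk depthwise followed by 1 x 1 pointwise)."""
    flops = dk * dk * m * f_in * f_in + m * n * f_in * f_in
    params = dk * dk * m + m * n
    return flops, params

std_flops, std_params = standard_conv_cost(3, 32, 64, 56)
sep_flops, sep_params = separable_conv_cost(3, 32, 64, 56)
# the cost ratio reduces analytically to 1/N + 1/Dk^2
ratio = sep_flops / std_flops
```

For a 3×3 kernel the ratio is roughly 1/9 plus 1/N, i.e. the separable layer costs about an order of magnitude less.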
Therefore, when constructing the lightweight network, the method replaces the backbone of the YOLOv3 algorithm from the Darknet-53 network with the MobileNetV2 network and introduces an attention mechanism into the MobileNetV2 network.
Specifically, Coordinate Attention is introduced after the 3×3 convolution of the inverted residual module in the MobileNetV2 network. It enhances the representation of the object of interest by decomposing channel attention into two 1D feature encodings that aggregate features along the two spatial directions, so that the positional information of the object is embedded into the channel attention. Meanwhile, the ordinary 3×3 convolutions in the FPN and the YOLOv3 head are replaced with depthwise separable convolutions to reduce the parameter count and computation amount of the network. The lightweight YOLOv3-MD network, i.e., the target network, is thus obtained.
The inverted residual structure of the MobileNetV2 network incorporating the attention mechanism is shown in fig. 3. The input feature is first copied into two branches; one branch passes through a 1×1 standard convolution with stride 1, a 3×3 depthwise separable convolution with stride 2, the Coordinate Attention mechanism and a 1×1 standard convolution with stride 1, and is added to the other branch to obtain the output of the structure.
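The shape bookkeeping of such a block can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation: the expansion factor 6 is the MobileNetV2 default assumed here, Coordinate Attention is assumed to only reweight channels, and note that the skip-add is only dimensionally valid when the stride is 1 and the channel counts match, as in standard MobileNetV2:

```python
def inverted_residual_shape(c_in, h, w, c_out, expand=6, stride=1):
    """Trace tensor shapes through an inverted residual block:
    1x1 expand -> 3x3 depthwise (pad 1) -> attention -> 1x1 project."""
    c_mid = c_in * expand              # 1x1 expansion convolution
    h2 = (h + 2 - 3) // stride + 1     # 3x3 depthwise conv, padding 1
    w2 = (w + 2 - 3) // stride + 1
    # coordinate attention reweights channels; shapes are unchanged
    # 1x1 projection brings channels back down to c_out
    residual = stride == 1 and c_in == c_out  # skip-add only if shapes match
    return (c_out, h2, w2), residual
```

With stride 2 the spatial resolution halves and no skip connection is added, which is what lets three such stages feed the three scales of the feature fusion network.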
Step 2, constructing an SAR image ship target data set; the ship target data set comprises a training set, a verification set and a test set;
the ship target data set is randomly divided into a training set, a validation set and a test set at a ratio of 7:1:2;
the ship target data set used in this application is in VOC format, and its construction process is as follows:
step 201, acquiring a picture.
Original SAR ship pictures and video data are obtained from the network; the video data are automatically captured into picture frames.
The obtained pictures are uniformly resized to 512 × 512 and named by their sequence numbers in random order, for convenient processing and use in subsequent steps.
Step 202, marking the target in the picture by using a square frame to generate a ship target data set;
LabelImg is used as the annotation tool: after a picture is imported into LabelImg, the position of the ship target in the picture is marked with a bounding box and the class name "ship" is assigned. All pictures are annotated in this way; annotation files in xml format are generated and output to a designated folder, producing the ship target data set.
Step 203, dividing a training set, a verification set and a test set;
during training, the generated ship target data set is divided into a training set, a validation set and a test set at a ratio of 7:1:2.
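The 7:1:2 split described above can be sketched as follows (the file names are hypothetical; a seeded shuffle keeps the split reproducible across runs):

```python
import random

def split_dataset(samples, ratios=(0.7, 0.1, 0.2), seed=0):
    """Randomly split samples into train/val/test sets by the given ratios."""
    items = list(samples)
    random.Random(seed).shuffle(items)   # deterministic shuffle
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]       # remainder goes to the test set
    return train, val, test

train, val, test = split_dataset([f"{i:04d}.jpg" for i in range(1000)])
```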
Step 3, data enhancement is carried out on the data in the ship target data set;
specifically, photometric distortion and geometric distortion enhancement processing is carried out on a ship target data set, so that a target detection model has higher robustness on SAR images obtained under different environments.
Wherein the photometric distortion comprises: Random Brightness (randomly changing brightness), Random Contrast (randomly changing contrast) and Convert Color (transforming the color space).
The geometric distortion comprises: Expand (randomly expanding the picture), Random Sample Crop (random cropping), Random Flip (randomly flipping the picture) and Resize (scaling the picture).
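The photometric operations amount to simple per-pixel transforms, and the flip is a pure index reversal. A minimal sketch on raw pixel values (in practice the brightness delta and contrast scale are drawn at random per image, and a library such as OpenCV would operate on full image arrays; the function names here are illustrative):

```python
def random_brightness(pixels, delta):
    """Photometric distortion: shift each pixel by delta, clipped to [0, 255]."""
    return [min(255.0, max(0.0, p + delta)) for p in pixels]

def random_contrast(pixels, alpha):
    """Photometric distortion: scale each pixel by alpha, clipped to [0, 255]."""
    return [min(255.0, max(0.0, p * alpha)) for p in pixels]

def horizontal_flip(rows):
    """Geometric distortion: mirror each image row left-to-right."""
    return [row[::-1] for row in rows]
```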
Step 4, training a target network;
The target network is trained using the Leaky ReLU activation function, the Focal loss classification loss function, the CIOU regression loss function and the SGD optimizer, so as to enhance the training effect and improve detection precision. Wherein:
leaky ReLU activation function:
the original ReLU activation function outputs 0 when the input is negative; for a negative input the gradient at that node is 0, so its parameters cannot be updated and the node cannot be trained. To avoid this problem, the application adopts Leaky ReLU as the activation function. Leaky ReLU introduces a fixed slope a_i that gives negative inputs a small non-zero output, so parameters can still be updated when the input value is less than 0. The Leaky ReLU activation function is expressed as:

y_i = x_i,        if x_i ≥ 0
y_i = x_i / a_i,  if x_i < 0

where x_i is the input of the activation function, y_i its output, and a_i a fixed parameter in the interval (1, +∞).
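With a_i > 1 the formula above gives negative inputs the reduced slope 1/a_i; a direct sketch (a = 10 is an illustrative choice of the fixed parameter, not a value from the patent):

```python
def leaky_relu(x, a=10.0):
    """Leaky ReLU: identity for x >= 0, reduced slope 1/a for x < 0 (a > 1)."""
    return x if x >= 0 else x / a
```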
Focal loss classification loss function:
in SAR image data, ship targets appear infrequently and their features are uneven, so many hard samples exist. To make the model automatically distinguish sample difficulty and focus more on hard samples in the later stage of training, the Focal loss function is adopted as the classification loss function during training. Focal loss is expressed as:

L_cls = -α (1 - y)^γ log(y)          for positive samples
L_cls = -(1 - α) y^γ log(1 - y)      for negative samples

where L_cls is the classification loss, α and γ are two hyper-parameters of the loss function: α balances the imbalance between the numbers of positive and negative samples, and γ adjusts the losses of easy and hard samples so that the loss function focuses more on hard samples; y is the classification score predicted by the ship detector model, ranging from 0 to 1.
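A minimal sketch of this loss for a single binary prediction (α = 0.25 and γ = 2 are the commonly used defaults, assumed here; y is the predicted score and t the 0/1 label):

```python
import math

def focal_loss(y, t, alpha=0.25, gamma=2.0):
    """Focal loss for one prediction y in (0, 1) with ground-truth label t."""
    if t == 1:
        # confident correct positives (y near 1) are strongly down-weighted
        return -alpha * (1 - y) ** gamma * math.log(y)
    # confident correct negatives (y near 0) are likewise down-weighted
    return -(1 - alpha) * y ** gamma * math.log(1 - y)
```

With α = 1 and γ = 0 the expression reduces to the ordinary cross-entropy, which is a quick way to check the implementation.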
CIOU regression loss function:
in the training process, a CIOU function is adopted as a regression loss function. CIOU is expressed as:
L_CIoU = 1 - IoU + R_CIoU + αv

R_CIoU = ρ²(b, b^gt) / c²

v = (4 / π²) (arctan(w^gt / h^gt) - arctan(w / h))²

α = v / ((1 - IoU) + v)

where IoU is the intersection-over-union, R_CIoU is the penalty term, b and b^gt are the center points of the prediction box and the ground-truth box respectively, ρ²(b, b^gt) is the squared Euclidean distance between the two center points, α and v are influence factors in which α is a weight function and v measures the consistency of the aspect ratios, c is the diagonal length of the smallest rectangle enclosing both boxes, w and h are the width and height of the prediction box, and w^gt and h^gt are the width and height of the ground-truth box.
The CIOU loss function converges faster even when the intersection-over-union is 0, and adjusting the prediction box using the bounding-box center points, the aspect ratio and related quantities yields a better regression effect.
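The four formulas above can be assembled directly. A minimal sketch for axis-aligned boxes in (x1, y1, x2, y2) form; the guard on α avoids a 0/0 when the boxes coincide exactly (v = 0 and IoU = 1):

```python
import math

def ciou_loss(box, gt):
    """CIoU loss for two axis-aligned boxes given as (x1, y1, x2, y2)."""
    (ax1, ay1, ax2, ay2), (bx1, by1, bx2, by2) = box, gt
    # intersection-over-union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)
    # squared center distance over squared enclosing-box diagonal (R_CIoU)
    rho2 = ((ax1 + ax2) / 2 - (bx1 + bx2) / 2) ** 2 + \
           ((ay1 + ay2) / 2 - (by1 + by2) / 2) ** 2
    c2 = (max(ax2, bx2) - min(ax1, bx1)) ** 2 + \
         (max(ay2, by2) - min(ay1, by1)) ** 2
    # aspect-ratio consistency term v and its weight alpha
    v = (4 / math.pi ** 2) * (math.atan((bx2 - bx1) / (by2 - by1)) -
                              math.atan((ax2 - ax1) / (ay2 - ay1))) ** 2
    alpha = 0.0 if v == 0 else v / ((1 - iou) + v)
    return 1 - iou + rho2 / c2 + alpha * v
```

Note that even for disjoint boxes (IoU = 0) the center-distance term still provides a gradient, which is the property the text highlights.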
SGD optimizer:
SGD, the stochastic gradient descent algorithm, is used to minimize the objective function f(x); the gradient descent algorithm iterates along the direction opposite to the gradient vector to reach an extreme point of the function. The parameter iteration formula is:

x_{k+1} = x_k - γ ∇f(x_k)

where γ is the learning rate, the initial parameter value is the starting position, and x_k approaches the parameters minimizing the objective as the iteration proceeds.
During training, SGD can automatically escape saddle points and relatively poor local optima. Moreover, the solution it finally finds generalizes well, so the training result also performs well on unseen data sets.
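The update rule above, applied to a toy objective f(x) = x². This is only a sketch of the deterministic update; in practice the gradient is estimated on random mini-batches, and that noise is what lets SGD escape saddle points:

```python
def sgd_minimize(grad, x0, lr=0.1, steps=100):
    """Iterate x <- x - lr * grad(x) from the initial position x0."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)   # step against the gradient direction
    return x

# f(x) = x**2 has gradient 2x and its minimum at x = 0
x_min = sgd_minimize(lambda x: 2 * x, x0=5.0)
```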
And 5: and detecting the trained target network by using the test set.
The final test results are shown in fig. 4. The results show that the YOLOv3-MD network in this application balances detection precision and speed when detecting ship targets in SAR images, greatly reduces the parameter count, and lowers the difficulty of deploying the algorithm in environments with low computing capability.

Claims (6)

1. A light-weight SAR image ship target detection method is characterized by comprising the following steps:
step 1, constructing a lightweight target network;
step 2, constructing an SAR image ship target data set; the ship target data set comprises a training set, a verification set and a test set;
step 3, performing data enhancement on the data in the ship target data set;
step 4, training a target network;
and 5: and detecting the trained target network by using the test set.
2. The method of claim 1, wherein constructing a lightweight target network comprises:
setting the target network to comprise, in order from input to output, a MobileNetV2 network, a multi-scale feature fusion network and a detection head network; wherein the MobileNetV2 network is the backbone network of the target network;
an attention mechanism is introduced into the MobileNetV2 network.
3. The method of claim 2, wherein the MobileNetV2 network comprises an input convolutional layer and first, second, third, fourth, fifth, sixth and seventh groups of inverted residual layers, stacked in sequence;
the fifth, sixth and seventh groups of inverted residual layers are respectively connected with the three layers of the multi-scale feature fusion network.
4. The method of claim 1, wherein constructing a SAR image ship target dataset comprises:
acquiring a picture;
marking the targets in the pictures with bounding boxes to generate a ship target data set;
and dividing a training set, a verification set and a test set.
5. The method of claim 1, wherein performing data enhancement on the data in the ship target data set comprises:
performing photometric distortion enhancement processing and geometric distortion enhancement processing on a ship target data set;
wherein the photometric distortion enhancement process includes randomly changing brightness, randomly changing contrast, and transforming color space;
the geometric distortion enhancement processing includes randomly expanding the picture, randomly cropping, randomly flipping the picture, and scaling the picture.
6. The method of claim 1, wherein training a target network comprises:
training the target network by using a Leaky ReLU activation function, a Focal loss classification loss function, a CIOU regression loss function and an SGD optimizer.
CN202210588163.0A 2022-05-27 2022-05-27 Lightweight SAR image ship target detection method Pending CN115047455A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210588163.0A CN115047455A (en) 2022-05-27 2022-05-27 Lightweight SAR image ship target detection method


Publications (1)

Publication Number Publication Date
CN115047455A true CN115047455A (en) 2022-09-13

Family

ID=83159336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210588163.0A Pending CN115047455A (en) 2022-05-27 2022-05-27 Lightweight SAR image ship target detection method

Country Status (1)

Country Link
CN (1) CN115047455A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546555A (en) * 2022-10-18 2022-12-30 安徽大学 Lightweight SAR target detection method based on hybrid characterization learning enhancement
CN115546555B (en) * 2022-10-18 2024-05-03 安徽大学 Lightweight SAR target detection method based on hybrid characterization learning enhancement
CN116343045A (en) * 2023-03-30 2023-06-27 南京理工大学 Lightweight SAR image ship target detection method based on YOLO v5
CN116343045B (en) * 2023-03-30 2024-03-19 南京理工大学 Lightweight SAR image ship target detection method based on YOLO v5


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination