CN113326734A - Rotary target detection method based on YOLOv5 - Google Patents


Info

Publication number
CN113326734A
CN113326734A (application CN202110468451.8A)
Authority
CN
China
Prior art keywords
target
loss function
rotating
yolov5
attention
Prior art date
Legal status
Granted
Application number
CN202110468451.8A
Other languages
Chinese (zh)
Other versions
CN113326734B (en
Inventor
霍静
王宁
李文斌
高阳
Current Assignee
Jiangsu Wanwei Aisi Network Intelligent Industry Innovation Center Co ltd
Nanjing University
Original Assignee
Jiangsu Wanwei Aisi Network Intelligent Industry Innovation Center Co ltd
Nanjing University
Priority date
Filing date
Publication date
Application filed by Jiangsu Wanwei Aisi Network Intelligent Industry Innovation Center Co ltd, Nanjing University filed Critical Jiangsu Wanwei Aisi Network Intelligent Industry Innovation Center Co ltd
Priority to CN202110468451.8A priority Critical patent/CN113326734B/en
Publication of CN113326734A publication Critical patent/CN113326734A/en
Application granted granted Critical
Publication of CN113326734B publication Critical patent/CN113326734B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/60Rotation of whole images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Remote Sensing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a rotated target detection algorithm based on YOLOv5, comprising a data loading and processing module, a feature extraction module, a rotated target detection module, and a post-processing module. Beyond conventional data loading and augmentation, the data loading stage adds a random-rotation augmentation designed for rotated target detection. The feature extraction module adds an Attention-Net structure to the YOLOv5 backbone, reducing feature-map noise and making the information more accurate. The post-processing module computes a rotated intersection-over-union (IoU) and incorporates it into the non-maximum suppression algorithm. The final model detects rotated targets in high-altitude imagery well, and the algorithm effectively addresses the rotated target detection problem.

Description

Rotary target detection method based on YOLOv5
Technical Field
The invention relates to a rotated target detection method based on the deep neural detection network YOLOv5, and belongs to the field of computer applications.
Background
With the development of remote sensing and unmanned aerial vehicle (UAV) imagery, many new application markets can be opened up, such as urban planning, dynamic monitoring of environmental resources, and guidance of land-use updating work. As remote sensing satellites and UAV technology advance, high-altitude images grow in both resolution and volume, making manual interpretation impractical; processing them with deep learning target detection is therefore very necessary and can greatly reduce labor costs. However, conventional target detection methods struggle on such imagery, because its targets have small pixel areas, extreme aspect ratios, rotation angles, and similar characteristics.
To accommodate these characteristics, many researchers have proposed rotated target detection methods. The current mainstream detectors improve on Faster R-CNN. R2CNN, for example, replaces the single 7 × 7 RoI Pooling kernel of Faster R-CNN with pooling layers of three sizes (7 × 7, 3 × 11, and 11 × 3) after the RPN; the subsequent layers output the centre coordinates, width, and height of the detection box along with an angle describing the box, whose sign encodes direction. Other work includes SCRDet, proposed by Xue Yang, which adds SF-Net, MDA-Net, and related structures on top of Faster R-CNN. SF-Net controls the feature-map size through up- and down-sampling so as to control the anchor stride; experiments show that smaller strides give better results. The MDA-Net structure extracts a cleaner feature map: the map taken directly from the RPN carries substantial noise, blurring the boundaries between background and foreground and between foreground objects. MDA-Net therefore applies an attention mechanism with three branches: the first is pixel-level attention, the second is the original feature map, and the third is channel-wise attention. The feature maps of the three branches are multiplied to yield a clearer feature map for the downstream detection network.
The rotated detectors improved from Faster R-CNN are mostly two-stage networks, which have the significant disadvantage of slow speed: the RPN first proposes candidate boxes and a second stage then rescreens them. So far, however, no work has adapted a one-stage YOLO-series target detection algorithm to rotated targets; the invention therefore proposes a rotated target detection algorithm built on the YOLO-series detection framework, specifically YOLOv5.
Disclosure of Invention
The purpose of the invention is as follows: to address the problems above, the invention provides a rotated target detection method based on YOLOv5 and realizes two rotated target detection modes; different modes can be tried on different data sets and the best-performing mode selected.
The technical scheme is as follows: the invention comprises the following steps:
(1) receiving an input image; the data loading module performs data augmentation on it;
(2) selecting one of the two rotated target detection algorithms;
(3) if the coordinate-offset regression algorithm is selected in step (2), adding an Attention-Net module after the YOLOv5 backbone and extracting features;
(4) performing coordinate-offset rotated target prediction on the extracted feature maps;
(5) during training, combining the YOLOv5 loss function with an added offset loss function and the Attention-Net loss function;
(6) applying rotated target post-processing to the predicted target boxes to obtain the final detection result;
(7) if the rotated Anchor detection algorithm is selected in step (2), extracting features from the data directly with the YOLOv5 backbone;
(8) performing rotated Anchor target prediction on the extracted feature maps;
(9) during training, replacing the YOLOv5 horizontal-box loss function with a rotated-box loss function;
(10) applying rotated target post-processing to the predicted target boxes to obtain the final detection result.
Advantageous effects: the post-processing module computes a rotated intersection-over-union and incorporates it into the non-maximum suppression algorithm. The final model detects rotated targets in high-altitude imagery well, and the algorithm effectively addresses the detection of small rotated targets.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a schematic diagram of the coordinate offset algorithm label.
Fig. 3 is a network configuration diagram.
FIG. 4 is a structural diagram of Attention-Net.
FIG. 5 is a schematic view of a rotating Anchor.
Detailed Description
The algorithm is realized with four modules: a data loading module, a feature extraction module, a rotated target prediction module, and a post-processing module. Each module is described in detail below:
Data loading module: rotated targets are annotated as the coordinates of the four corners of a quadrilateral, listed clockwise. The module applies data augmentation such as vertical and horizontal flips, Gaussian blur, stitching, rotation, and label perturbation.
Feature extraction module: (1) coordinate-offset regression algorithm: the image output by module 1 is fed into the YOLOv5 backbone, the Attention-Net module denoises the feature maps, and the original YOLOv5 Focus, BottleneckCSP, SPP, and PAN stages then produce feature maps at 1/8, 1/16, and 1/32 of the input size, which are passed to the subsequent network for prediction. (2) rotated Anchor detection algorithm: the processed pictures from module 1 are fed directly into the YOLOv5 backbone for feature extraction; Focus, BottleneckCSP, SPP, and PAN likewise yield feature maps at 1/8, 1/16, and 1/32 of the input size for subsequent prediction.
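The random-rotation augmentation must rotate the four annotated corner points together with the image. A minimal numpy sketch of the label side of that operation, assuming rotation about the image centre (the function name and signature are illustrative, not from the patent):

```python
import numpy as np

def rotate_quad_labels(quad, angle_deg, img_w, img_h):
    """Rotate the four clockwise corner points of a target about the
    image centre by the same angle applied to the image."""
    t = np.deg2rad(angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    c = np.array([img_w / 2.0, img_h / 2.0])
    # Shift to the centre, rotate, shift back.
    return (np.asarray(quad, dtype=float) - c) @ R.T + c
```

In practice the same rotation matrix is applied to the image pixels (e.g. via an affine warp), and out-of-bounds points are clipped or the sample discarded.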
Rotated target prediction module: the feature maps output by module 2 are used to detect rotated target boxes, in one of two detection modes:
(1) coordinate-offset regression detection, which yields a series of rotated target boxes and, during training, uses the loss function of that algorithm;
(2) rotated Anchor detection, which yields a series of rotated target boxes and, during training, uses the loss function of that algorithm.
Post-processing module: the rotated boxes obtained by module 3 require NMS, but horizontal-box NMS no longer applies; this module uses a rotated-box NMS, which post-processes rotated boxes more effectively.
Step one: a picture is input; the data loading module converts it to RGB channels, resizes it to 672 × 672, transposes the dimensions from [H, W, C] to [C, H, W], applies Gaussian blur with mean 0 and variance 0.5, ordinary vertical and horizontal flips, random rotation, and label perturbation, and finally normalizes the augmented picture.
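Step one can be sketched as follows; the resize here uses nearest-neighbour indexing so the example stays self-contained, and the stochastic augmentations (blur, flips, rotation, label perturbation) are omitted. Names are illustrative:

```python
import numpy as np

def preprocess(img_hwc, size=672):
    """Resize to size x size (nearest neighbour), reorder
    [H, W, C] -> [C, H, W], and normalise pixel values to [0, 1]."""
    h, w, _ = img_hwc.shape
    # Nearest-neighbour source indices for each output row/column.
    ys = (np.arange(size) * h // size).clip(0, h - 1)
    xs = (np.arange(size) * w // size).clip(0, w - 1)
    resized = img_hwc[ys][:, xs]          # (size, size, C)
    chw = resized.transpose(2, 0, 1)      # (C, size, size)
    return chw.astype(np.float32) / 255.0
```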
Step two: the processed pictures are input into the feature extraction module. A different feature extraction path is used depending on the detection algorithm:
(1) coordinate-offset regression algorithm: the network structure is shown in FIG. 3; the Attention-Net structure of FIG. 4 is attached after the YOLOv5 backbone to form the complete feature extraction module, which then outputs feature maps at three scales. For an input picture of width and height 672, the three scales are 84, 42, and 21. Attention-Net has the following three branches:
the first branch is: the shape of the input feature map is (C × W × H), the green feature map shown in fig. 4 is obtained through the convolution operation, the shape is (2 × W × H), and then the softmax operation is performed in the channel direction to obtain the maximum one of the two pixel values on each channel, which is the attention at the pixel level, and the shape is (1 × W × H).
The second branch is as follows: and inputting the characteristic diagram.
The third branch is as follows: and (3) after the input feature map is subjected to convolution operation, obtaining a feature map with the shape of (C1) and then performing sigmoid activation function.
And finally multiplying the characteristic diagram (1 × W × H) obtained from the first branch, the characteristic diagram (C × W × H) with the shape obtained from the second branch and the characteristic diagram (C × 1) obtained from the third branch to obtain the characteristic diagram (C × W × H), wherein the characteristic diagram is the same as the shape of the Attention-Net input, and the subsequent network can perform subsequent operation on the characteristic diagram.
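The three-branch structure can be sketched in numpy, standing in for the convolutions with plain matrix multiplications (`w_pix` and `w_chan` are hypothetical stand-in weights, and the global average pooling before the channel branch is an assumption; the patent does not specify the convolution details):

```python
import numpy as np

def attention_net(x, w_pix, w_chan):
    """Three-branch attention: pixel-level (1,H,W), identity (C,H,W),
    and channel-wise (C,1,1) maps multiplied together."""
    C, H, W = x.shape
    # Branch 1: 'conv' to 2 channels, softmax over the 2 channels,
    # keep the larger value per pixel -> (1, H, W).
    two = np.tensordot(w_pix, x, axes=([1], [0]))        # (2, H, W)
    e = np.exp(two - two.max(axis=0, keepdims=True))
    soft = e / e.sum(axis=0, keepdims=True)
    pix_att = soft.max(axis=0, keepdims=True)            # (1, H, W)
    # Branch 2: the input feature map itself (C, H, W).
    # Branch 3: channel gates after sigmoid -> (C, 1, 1).
    pooled = x.mean(axis=(1, 2))                         # (C,)
    chan_att = 1.0 / (1.0 + np.exp(-(w_chan @ pooled)))  # (C,)
    # Broadcasting restores the input shape (C, H, W).
    return pix_att * x * chan_att[:, None, None]
```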
(2) rotated Anchor detection algorithm: the network structure is shown in FIG. 3; features are extracted directly by the YOLOv5 backbone, which then outputs feature maps at three scales. For an input picture of width and height 672, the three scales are 84, 42, and 21.
Step three: this module makes predictions on the three feature maps obtained in step two. Again two cases are distinguished, depending on the detection algorithm:
(1) coordinate-offset regression algorithm: each grid cell outputs 3 × (9 + num_class) channels, where 3 is the number of anchor sizes per cell and 9 covers the 8 label parameters plus one objectness parameter obj. The 8 label parameters, illustrated in FIG. 2 (the formula itself appears as an equation image in the original publication), are: x and y, the centre coordinates of the horizontal box; w and h, its width and height; r1, the distance from the top-left vertex of the horizontal box to the point where the rotated box meets its top edge; r2, the distance from the top-right vertex to the point where the rotated box meets its right edge; with r3 and r4 defined analogously. obj is the probability that the cell contains a target, and num_class is the number of target classes.
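One plausible reading of this 8-parameter encoding, in the spirit of gliding-vertex representations, is sketched below; the exact corner-to-edge pairing in FIG. 2 may differ, so treat the offsets here as illustrative:

```python
import numpy as np

def encode_offsets(quad):
    """Encode a clockwise rotated quad (4 points) as its horizontal
    bounding box (x, y, w, h) plus four offsets r1..r4 from the box
    corners to where the quad touches each side (hypothetical pairing)."""
    q = np.asarray(quad, dtype=float)       # (4, 2)
    xmin, ymin = q.min(axis=0)
    xmax, ymax = q.max(axis=0)
    x, y = (xmin + xmax) / 2, (ymin + ymax) / 2
    w, h = xmax - xmin, ymax - ymin
    r1 = q[q[:, 1].argmin(), 0] - xmin      # top edge, from top-left
    r2 = q[q[:, 0].argmax(), 1] - ymin      # right edge, from top-right
    r3 = xmax - q[q[:, 1].argmax(), 0]      # bottom edge, from bottom-right
    r4 = ymax - q[q[:, 0].argmin(), 1]      # left edge, from bottom-left
    return x, y, w, h, r1, r2, r3, r4
```

For a square rotated by 45°, all four offsets equal half the side of the bounding box, which is a quick sanity check on the encoding.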
The loss function during the training phase is as follows:
(The component losses l_box, l_off, l_obj, l_att, and l_cls are given as equation images in the original publication.)
loss = l_obj + l_box + l_off + l_att + l_cls
where l_box is the loss of the horizontal box, l_off the loss over the 4 offsets, l_obj the objectness loss, l_att the Attention-Net loss, and l_cls the classification loss. λ_coord, λ_class, λ_att, λ_noobj, and λ_obj are the corresponding loss weights. S is the side length of the feature map, i.e. the number of grid points along one side, and B is the number of Anchors predicted at each grid point. The indicator 1_{i,j}^{obj} (also an equation image in the original) is 1 if a target exists in the j-th Anchor of the i-th grid point and 0 otherwise; 1_{i,j}^{noobj} has exactly the opposite meaning. smooth_l1 is the commonly used target-box regression loss; α1*, α2*, α3*, α4* are the differences between the true and predicted values of the 4 offsets; x*, y*, w*, h* are the differences between the true and predicted centre coordinates, width, and height of the target box; h and w are the height and width of the feature map; p_{i,j}(c) and p'_{i,j}(c) are the true and predicted Anchor class scores; c_{i,j} and c'_{i,j} are the true and predicted objectness of the j-th Anchor at the i-th grid point; BCE is the binary cross-entropy loss; u_{i,j} and u'_{i,j} are the true and predicted values of the upper-branch feature map in the Attention-Net structure; loss is the total network loss.
(2) rotated Anchor detection algorithm: each grid cell outputs channels for 18 Anchor types, each carrying 5 + num_class values. YOLOv5 originally has Anchors of three sizes; this algorithm further proposes 6 rotation angles, shown in FIG. 5: 90°, 60°, 30°, 0°, −30°, and −60°. Thus 3 × 6 gives 18 Anchor types. The 5 label parameters are x, y, w, h, and θ, and num_class is the number of categories.
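The rotated-anchor set can be sketched as the Cartesian product of the 3 YOLOv5 anchor sizes and the 6 angles; the base sizes below are placeholders (real YOLOv5 anchors come from k-means clustering on the data set):

```python
import numpy as np

def make_rotated_anchors(base_sizes=((10, 20), (30, 60), (80, 40))):
    """Replicate each (w, h) anchor size at the 6 angles named in the
    text, giving 18 (w, h, theta) anchors per grid point."""
    angles = np.deg2rad([90, 60, 30, 0, -30, -60])
    anchors = [(w, h, a) for (w, h) in base_sizes for a in angles]
    return np.array(anchors)                # shape (18, 3)
```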
The loss function during the training phase is as follows:
(The component losses l_box, l_obj, and l_cls are given as equation images in the original publication.)
loss = l_obj + l_box + l_cls
where l_box is the loss of the rotated box, l_obj the objectness loss, and l_cls the classification loss. λ_coord, λ_class, λ_noobj, and λ_obj are the corresponding loss weights. S is the side length of the feature map, i.e. the number of grid points along one side, and B is the number of Anchors predicted at each grid point. The indicator 1_{i,j}^{obj} (an equation image in the original) is 1 if a target exists in the j-th Anchor of the i-th grid point and 0 otherwise; 1_{i,j}^{noobj} has exactly the opposite meaning. smooth_l1 is the commonly used target-box regression loss; p_{i,j}(c) and p'_{i,j}(c) are the true and predicted Anchor class scores; c_{i,j} and c'_{i,j} are the true and predicted objectness of the j-th Anchor at the i-th grid point; x*, y*, w*, h*, θ* are the differences between the true and predicted centre coordinates, width, height, and rotation angle of the target box; loss is the total network loss.
Step four: step three produces a large number of detection boxes, and an NMS algorithm removes duplicates. This module uses a rotated NMS, with the following steps:
1. Sort all boxes by confidence in descending order.
2. Select the box with the highest confidence; if a box is already marked, skip it. Repeat step 2 until all boxes have been processed.
3. Compute the rotated IoU between the selected box and all remaining boxes. In practice, the IoU is computed on the horizontal boxes, the angle similarity of the two rotated boxes is computed, and the two are multiplied to approximate the rotated-box IoU.
4. Boxes whose IoU is below a certain threshold are retained and marked; then return to step 2.
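The four steps above can be sketched as a greedy loop; the angle-similarity factor abs(cos(θ′ − θ)) from the post-processing description multiplies the horizontal-box IoU, and the 0.5 threshold is an assumed default:

```python
import numpy as np

def horizontal_iou(a, b):
    """IoU of axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def rotated_nms(boxes, angles, scores, iou_thr=0.5):
    """Greedy NMS: suppress boxes whose approximate rotated IoU
    (horizontal IoU times |cos(angle difference)|) exceeds iou_thr."""
    order = np.argsort(scores)[::-1]
    suppressed = np.zeros(len(scores), dtype=bool)
    keep = []
    for i in order:
        if suppressed[i]:
            continue
        keep.append(int(i))
        for j in order:
            if suppressed[j] or j == i:
                continue
            riou = horizontal_iou(boxes[i], boxes[j]) * \
                   abs(np.cos(angles[i] - angles[j]))
            if riou >= iou_thr:
                suppressed[j] = True
    return keep
```

Because the angle enters only through a separable factor, the whole computation vectorises into matrix operations on the GPU, which is the efficiency argument the post-processing module makes.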

Claims (7)

1. A rotated target detection method based on YOLOv5, characterized by comprising the following steps:
(1) receiving an input image; the data loading module performs data augmentation on it;
(2) selecting one of the two rotated target detection algorithms;
(3) if the coordinate-offset regression algorithm is selected in step (2), adding an Attention-Net module after the YOLOv5 backbone and extracting features;
(4) performing coordinate-offset rotated target prediction on the extracted feature maps;
(5) during training, combining the YOLOv5 loss function with an added offset loss function and the Attention-Net loss function;
(6) applying rotated target post-processing to the predicted target boxes to obtain the final detection result;
(7) if the rotated Anchor detection algorithm is selected in step (2), extracting features from the data directly with the YOLOv5 backbone;
(8) performing rotated Anchor target prediction on the extracted feature maps;
(9) during training, replacing the YOLOv5 horizontal-box loss function with a rotated-box loss function;
(10) applying rotated target post-processing to the predicted target boxes to obtain the final detection result.
2. The YOLOv5-based rotated target detection method according to claim 1, wherein step (1) is implemented as follows:
the label-perturbation data augmentation specifically means that, when training labels are generated, the coordinates of the annotated points are allowed to fluctuate within a certain range, determined by the target's pixel size.
3. The YOLOv5-based rotated target detection method according to claim 1, wherein step (3) is implemented as follows:
an Attention-Net structure is added after the YOLOv5 backbone to remove or reduce noise in the feature map, so that the boundaries between objects and between objects and background become clearer.
4. The YOLOv5-based rotated target detection method according to claim 1, wherein step (5) is implemented as follows:
on the basis of the YOLOv5 loss function, an offset loss function and the loss function of the Attention-Net module are added:
(The l_off and l_att formulas are given as equation images in the original publication.)
where l_off denotes the offset loss function and l_att the Attention-Net loss function; λ_coord and λ_att are the weights of the offset loss and the Attention-Net loss; S is the side length of the feature map, i.e. the number of grid points along one side, and B is the number of Anchors predicted at each grid point; the indicator 1_{i,j}^{obj} is 1 if a target exists in the j-th Anchor of the i-th grid point and 0 otherwise; smooth_l1 is the commonly used target-box regression loss; α1*, α2*, α3*, α4* are the differences between the true and predicted values of the 4 offsets; h and w are the height and width of the feature map; BCE is the binary cross-entropy loss; u_{i,j} and u'_{i,j} are the true and predicted values of the upper-branch feature map in the Attention-Net structure.
5. The YOLOv5-based rotated target detection method according to claim 1, wherein step (8) is implemented as follows:
6 rotation angles are added to the original 3 YOLOv5 anchor sizes, so each grid point generates 18 rotated Anchors in total, which better match ground-truth boxes with rotation angles.
6. The YOLOv5-based rotated target detection method according to claim 1, wherein step (9) is implemented as follows:
the YOLOv5 horizontal-box loss function is replaced with a rotated-box loss function:
(The rotated-box loss formula is given as an equation image in the original publication.)
where λ_coord is the weight of the rotated-box loss; S is the side length of the feature map, i.e. the number of grid points along one side, and B is the number of Anchors predicted at each grid point; the indicator 1_{i,j}^{obj} is 1 if a target exists in the j-th Anchor of the i-th grid point and 0 otherwise; smooth_l1 is the commonly used target-box regression loss; x*, y*, w*, h*, θ* are the differences between the true and predicted centre coordinates, width, height, and rotation angle of the target box.
7. The YOLOv5-based rotated target detection method according to claim 1, wherein steps (6) and (10) are implemented as follows:
an approximation for the rotated-box IoU is proposed so that, like horizontal boxes, it can be computed as a matrix operation on the GPU: the horizontal-box IoU is multiplied by an angle-similarity factor to approximate the rotated-box IoU, where the angle similarity is
angle_factor = abs(cos(θ′ − θ))
with θ and θ′ the true and predicted angles, respectively. Because the angle is decoupled from the horizontal-box coordinates, the computation can be carried out with matrix operations, improving efficiency.
CN202110468451.8A 2021-04-28 2021-04-28 Rotational target detection method based on YOLOv5 Active CN113326734B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110468451.8A CN113326734B (en) 2021-04-28 2021-04-28 Rotational target detection method based on YOLOv5

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110468451.8A CN113326734B (en) 2021-04-28 2021-04-28 Rotational target detection method based on YOLOv5

Publications (2)

Publication Number Publication Date
CN113326734A true CN113326734A (en) 2021-08-31
CN113326734B CN113326734B (en) 2023-11-24

Family

ID=77413879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110468451.8A Active CN113326734B (en) 2021-04-28 2021-04-28 Rotational target detection method based on YOLOv5

Country Status (1)

Country Link
CN (1) CN113326734B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114240946A (en) * 2022-02-28 2022-03-25 南京智莲森信息技术有限公司 Locator abnormality detection method, system, storage medium and computing device
CN115439765A (en) * 2022-09-17 2022-12-06 艾迪恩(山东)科技有限公司 Marine plastic garbage rotation detection method based on machine learning unmanned aerial vehicle visual angle

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444809A (en) * 2020-03-23 2020-07-24 华南理工大学 Power transmission line abnormal target detection method based on improved YOLOv3
CN111461110A (en) * 2020-03-02 2020-07-28 华南理工大学 Small target detection method based on multi-scale image and weighted fusion loss
CN111723748A (en) * 2020-06-22 2020-09-29 电子科技大学 Infrared remote sensing image ship detection method
AU2020102091A4 (en) * 2019-10-17 2020-10-08 Wuhan University Of Science And Technology Intelligent steel slag detection method and system based on convolutional neural network
CN112418108A (en) * 2020-11-25 2021-02-26 西北工业大学深圳研究院 Remote sensing image multi-class target detection method based on sample reweighing

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2020102091A4 (en) * 2019-10-17 2020-10-08 Wuhan University Of Science And Technology Intelligent steel slag detection method and system based on convolutional neural network
CN111461110A (en) * 2020-03-02 2020-07-28 华南理工大学 Small target detection method based on multi-scale image and weighted fusion loss
CN111444809A (en) * 2020-03-23 2020-07-24 华南理工大学 Power transmission line abnormal target detection method based on improved YOLOv3
CN111723748A (en) * 2020-06-22 2020-09-29 电子科技大学 Infrared remote sensing image ship detection method
CN112418108A (en) * 2020-11-25 2021-02-26 西北工业大学深圳研究院 Remote sensing image multi-class target detection method based on sample reweighing

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHENGYAO ZHANG: "A shale gas exploitation platform detection and positioning method based on YOLOv5", 2021 3rd International Conference on Intelligent Control, Measurement and Signal Processing and Intelligent Oil Field (ICMSP) *
徐融: "Research on small target detection algorithms based on YOLOv3", China Master's Theses Full-text Database (Information Science and Technology), no. 02 *
聂鑫; 刘文; 吴巍: "Ship target detection in complex scenes based on enhanced YOLOv3", Journal of Computer Applications, no. 09 *
赵琼; 李宝清; 李唐薇: "Target detection algorithm based on improved YOLO v3", Laser & Optoelectronics Progress, no. 12 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114240946A (en) * 2022-02-28 2022-03-25 南京智莲森信息技术有限公司 Locator abnormality detection method, system, storage medium and computing device
CN115439765A (en) * 2022-09-17 2022-12-06 艾迪恩(山东)科技有限公司 Marine plastic garbage rotation detection method based on machine learning unmanned aerial vehicle visual angle
CN115439765B (en) * 2022-09-17 2024-02-02 艾迪恩(山东)科技有限公司 Marine plastic garbage rotation detection method based on machine learning unmanned aerial vehicle visual angle

Also Published As

Publication number Publication date
CN113326734B (en) 2023-11-24

Similar Documents

Publication Publication Date Title
CN110598609B (en) Weak supervision target detection method based on significance guidance
CN109784333B (en) Three-dimensional target detection method and system based on point cloud weighted channel characteristics
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN111640125B (en) Aerial photography graph building detection and segmentation method and device based on Mask R-CNN
CN111914795B (en) Method for detecting rotating target in aerial image
CN111461212B (en) Compression method for point cloud target detection model
CN110909666A (en) Night vehicle detection method based on improved YOLOv3 convolutional neural network
CN107273832B (en) License plate recognition method and system based on integral channel characteristics and convolutional neural network
CN112395975A (en) Remote sensing image target detection method based on rotating area generation network
CN109753878B (en) Imaging identification method and system under severe weather
CN112287941B (en) License plate recognition method based on automatic character region perception
CN110659550A (en) Traffic sign recognition method, traffic sign recognition device, computer equipment and storage medium
CN111914720B (en) Method and device for identifying insulator burst of power transmission line
CN110781882A (en) License plate positioning and identifying method based on YOLO model
CN114841972A (en) Power transmission line defect identification method based on saliency map and semantic embedded feature pyramid
CN113326734A (en) Rotary target detection method based on YOLOv5
CN111027538A (en) Container detection method based on instance segmentation model
CN114140672A (en) Target detection network system and method applied to multi-sensor data fusion in rainy and snowy weather scene
CN113159215A (en) Small target detection and identification method based on fast Rcnn
CN113822844A (en) Unmanned aerial vehicle inspection defect detection method and device for blades of wind turbine generator system and storage medium
CN112560852A (en) Single-stage target detection method with rotation adaptive capacity based on YOLOv3 network
Qian et al. Mask R-CNN for object detection in multitemporal SAR images
CN112949635B (en) Target detection method based on feature enhancement and IoU perception
CN115063679B (en) Pavement quality assessment method based on deep learning
CN114283431B (en) Text detection method based on differentiable binarization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant