CN113326734B - Rotational target detection method based on YOLOv5 - Google Patents

Rotational target detection method based on YOLOv5

Info

Publication number
CN113326734B
CN113326734B CN202110468451.8A
Authority
CN
China
Prior art keywords
loss function
feature map
rotation
target
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110468451.8A
Other languages
Chinese (zh)
Other versions
CN113326734A (en)
Inventor
霍静
王宁
李文斌
高阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Wanwei Aisi Network Intelligent Industry Innovation Center Co ltd
Nanjing University
Original Assignee
Jiangsu Wanwei Aisi Network Intelligent Industry Innovation Center Co ltd
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Wanwei Aisi Network Intelligent Industry Innovation Center Co ltd, Nanjing University filed Critical Jiangsu Wanwei Aisi Network Intelligent Industry Innovation Center Co ltd
Priority to CN202110468451.8A priority Critical patent/CN113326734B/en
Publication of CN113326734A publication Critical patent/CN113326734A/en
Application granted granted Critical
Publication of CN113326734B publication Critical patent/CN113326734B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/60 Rotation of whole images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems


Abstract

The invention discloses a rotational target detection algorithm based on YOLOv5, comprising a data loading and processing module, a feature extraction module, a rotational target detection module, and a post-processing module. In addition to conventional data loading and data augmentation, the data loading stage adds a random-rotation augmentation method designed for rotated target detection. The feature extraction module adds an Attention-Net structure on top of the YOLOv5 backbone, which reduces noise in the feature map and makes its information more accurate. The post-processing module computes a rotated intersection-over-union (IoU) and incorporates it into the non-maximum suppression algorithm. The resulting model detects rotated targets in high-altitude imagery, and the algorithm effectively addresses the rotated target detection problem.

Description

Rotational target detection method based on YOLOv5
Technical Field
The invention relates to a method for detecting rotated targets based on the deep detection network YOLOv5, and belongs to the field of computer applications.
Background
With advances in technology, remote sensing and unmanned aerial vehicle (UAV) imagery are opening many new application markets, such as city planning, dynamic monitoring of environmental resources, and guiding industrial upgrades. As remote sensing satellites, UAVs, and related technologies improve, high-altitude images keep growing in both resolution and volume, making manual inspection impractical; processing them with deep-learning target detection is therefore highly necessary and can greatly reduce labor costs. However, conventional target detection methods are difficult to apply to current high-altitude imagery, because its targets have small pixel areas, extreme aspect ratios, and arbitrary rotation angles.
To accommodate these characteristics, many researchers have proposed rotated target detection methods. The current mainstream detectors improve on Faster R-CNN. For example, R2CNN performs ROI Pooling after the RPN with three pooled sizes (7×7, 3×11, and 11×3) instead of the single 7×7 kernel size used by Faster R-CNN, and the subsequent prediction stage outputs, in addition to the detection box's center coordinates, width, and height, an angle dimension describing the box, with positive and negative signs indicating direction. Another example is SCRDet, proposed by Dr. Yang Xue, which adds SF-Net, MDA-Net, and other structures on top of Faster R-CNN. SF-Net controls the feature map size through up- and down-sampling, thereby controlling the anchor stride; experiments show that the smaller the stride, the better the results. MDA-Net is used to extract a cleaner feature map: the feature map extracted directly by the RPN contains substantial noise, blurring the boundaries between background and foreground and between foreground objects. MDA-Net therefore introduces an attention mechanism with three branches: the first is pixel-level attention, the second is the original feature map, and the third is channel-wise attention. The feature maps of the three branches are multiplied to obtain a sharper feature map for the downstream detection network.
These Faster R-CNN based rotated detectors are mainly two-stage networks, whose major drawback is speed: a two-stage network first generates candidate boxes with an RPN and then performs a second screening through fully connected layers, which slows inference. No published work has yet adapted a one-stage YOLO-series detector to rotated target detection; building on the YOLO-series detection framework, the present invention therefore proposes a rotated target detection algorithm based on YOLOv5.
Disclosure of Invention
The invention aims to address the problems above by providing a rotational target detection method based on YOLOv5 that implements two rotated target detection modes; different modes can be tried on different data sets and the one with the best effect selected.
The technical scheme is as follows: the invention comprises the following steps:
(1) Receiving an input image, on which the data loading module performs data augmentation;
(2) Selecting one of two rotated target detection algorithms;
(3) If the coordinate-offset regression algorithm is selected in step (2), adding an Attention-Net module after the YOLOv5 backbone to extract features;
(4) Performing coordinate-offset rotated target prediction on the extracted feature maps;
(5) During training, combining the YOLOv5 loss function with an added offset loss function and the loss function of the Attention-Net module;
(6) Applying rotated-target post-processing to the predicted boxes to obtain the final detection result;
(7) If the rotated-Anchor detection algorithm is selected in step (2), extracting features directly with the YOLOv5 backbone;
(8) Performing rotated-Anchor target prediction on the extracted feature maps;
(9) During training, replacing the YOLOv5 horizontal-box loss function with a rotated-box loss function;
(10) Applying rotated-target post-processing to the predicted boxes to obtain the final detection result.
The beneficial effects are that: the post-processing module computes a rotated IoU and incorporates it into the non-maximum suppression algorithm. The resulting model detects rotated targets in high-altitude imagery, and the algorithm effectively addresses the detection of small rotated targets.
Drawings
Fig. 1 is a flow chart of the present invention.
Fig. 2 is a schematic diagram of a coordinate offset algorithm label.
Fig. 3 is a network configuration diagram.
FIG. 4 is a diagram of the Attention-Net structure.
Fig. 5 is a schematic diagram of a rotary Anchor.
Detailed Description
The algorithm is implemented as four modules: a data loading module, a feature extraction module, a rotated target prediction module, and a post-processing module. Each module is detailed below:
Data loading module: rotated targets are annotated as the coordinates of a quadrilateral's four corner points, listed clockwise. This module applies data augmentation to the data, such as vertical and horizontal flips, Gaussian blur, mosaic stitching, rotation, and label disturbance; a minimal sketch follows.
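A minimal sketch (Python with OpenCV/NumPy; the function names, angle range, and jitter scale are illustrative assumptions, not values fixed by the patent) of the random-rotation augmentation and label disturbance described above:

```python
import cv2
import numpy as np

def random_rotate(image, quads, angle_range=(-180, 180)):
    """Rotate an image and its quadrilateral labels by a random angle.

    image: H x W x 3 array; quads: (N, 4, 2) corner points labeled clockwise.
    The angle range is an assumed default, not specified in the patent.
    """
    h, w = image.shape[:2]
    angle = np.random.uniform(*angle_range)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(image, M, (w, h))
    # Apply the same affine transform to every labeled corner point.
    ones = np.ones((*quads.shape[:2], 1))
    pts = np.concatenate([quads, ones], axis=-1)   # (N, 4, 3)
    rotated_quads = pts @ M.T                      # (N, 4, 2)
    return rotated, rotated_quads

def perturb_labels(quads, rel_scale=0.02):
    """Label disturbance: jitter each corner within a range proportional to
    the target's pixel size, as the patent describes; rel_scale is an
    assumed hyperparameter."""
    spans = quads.max(axis=1, keepdims=True) - quads.min(axis=1, keepdims=True)
    noise = np.random.uniform(-rel_scale, rel_scale, quads.shape) * spans
    return quads + noise
```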
Feature extraction module: (1) if the coordinate-offset regression algorithm is selected, the picture from module 1 is fed into the YOLOv5 backbone, the Attention-Net module denoises the feature map, and the data then passes through YOLOv5's original Focus, BottleneckCSP, SPP, and PAN modules to produce feature maps at 1/8, 1/16, and 1/32 of the original size, which are delivered to the downstream network for prediction. (2) If the rotated-Anchor detection algorithm is selected, the picture processed by module 1 is fed directly into the YOLOv5 backbone to extract features; the Focus, BottleneckCSP, SPP, and PAN modules likewise produce feature maps at 1/8, 1/16, and 1/32 of the original size for downstream prediction.
Rotated target prediction module: the feature maps output by module 2 are used to detect rotated target boxes, in one of two modes:
(1) Coordinate-offset regression detection, which yields a series of rotated target boxes; in training mode, the loss function corresponding to this algorithm is used;
(2) Rotated-Anchor detection, which yields a series of rotated target boxes; in training mode, this algorithm's loss function is used.
and a post-processing module: the NMS operation is carried out on the rotating frame obtained by the module 3, the NMS algorithm of the horizontal frame is not applicable any more, and the module uses the NMS algorithm of the rotating frame, so that the post-processing effect of the rotating frame is better.
Step one: inputting a picture, the data loading module will convert the picture into RGB channel, the size of the size is 672 x 672, the original dimension [ H, W, C ] is changed into [ C, H, W ], the average value of the picture is 0, the Gaussian blur with the variance of 0.5 can be used for general up-down left-right overturn of the picture, random selective rotation of the picture can be further carried out, label disturbance and other operations are carried out on the label, and finally the enhanced picture is normalized.
Step two: after the pictures are processed, the pictures are input to a feature extraction module. According to different detection algorithms, different feature extraction modules are selected:
(1) Coordinate offset regression algorithm: the network structure is shown in fig. 3, and after the backlight of YOLOv5, an Attention-Net structure is further connected as shown in fig. 4, so that a complete feature extraction module is formed, and then three feature graphs with the sizes are output. Assuming that the width and height of the input pictures are 672, the three dimensions are 84, 42, 21, respectively. The following describes three branches of the Attention-Net:
the first branch: the shape of the input feature map is (c×w×h), the convolution operation is performed to obtain a green feature map as shown in fig. 4, the shape is (2×w×h), and then the softmax operation is performed in the channel direction to obtain the largest one of the two pixel values on each channel, which is the intensity of the pixel level, and the shape is (1×w×h).
Second branch: the feature map as input, unchanged.
Third branch: the input feature map is convolved down to a feature map of shape (C×1×1), then passed through a sigmoid activation function.
Finally, the first branch's (1×W×H) map, the second branch's (C×W×H) map, and the third branch's (C×1×1) map are multiplied to obtain a feature map of shape (C×W×H), the same as the Attention-Net input, on which the downstream network performs its subsequent operations.
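A minimal PyTorch sketch of the three-branch Attention-Net as described above; the convolution kernel sizes and the pooling used to reach the (C×1×1) shape in the third branch are assumptions the patent does not fix:

```python
import torch
import torch.nn as nn

class AttentionNet(nn.Module):
    """Three-branch attention: pixel-level (spatial), identity, channel-level."""
    def __init__(self, channels):
        super().__init__()
        # Branch 1: C x W x H -> 2 x W x H; softmax over the 2 channels, then
        # keep the larger value per pixel as a 1 x W x H attention map.
        self.spatial = nn.Conv2d(channels, 2, kernel_size=3, padding=1)
        # Branch 3: C x W x H -> C x 1 x 1 channel weights through a sigmoid
        # (global pooling + 1x1 conv is an assumed realization).
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        spatial_logits = self.spatial(x)                           # (N, 2, H, W)
        spatial_att = torch.softmax(spatial_logits, dim=1)         # channel-wise softmax
        spatial_att = spatial_att.max(dim=1, keepdim=True).values  # (N, 1, H, W)
        channel_att = self.channel(x)                              # (N, C, 1, 1)
        # Multiply the three branches to get the attention-weighted feature map.
        return spatial_att * x * channel_att, spatial_logits
```

The second returned tensor, the raw branch-1 map, is what the latt supervision described in step three would be computed on.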
(2) Rotated-Anchor detection algorithm: the network structure is shown in fig. 3; features are extracted directly with the YOLOv5 backbone, which then outputs feature maps at three sizes. Assuming the input picture's width and height are 672, the three sizes are 84, 42, and 21.
Step three: the module predicts three feature maps obtained after the second step. Also, according to different detection algorithms, two cases are divided:
(1) Coordinate offset regression algorithm: each grid outputs 3 (9+num_class) channels, 3 represents an Anchor with three sizes per grid, and 9 represents 8 parameters of label and one obj parameter. The 8 parameters representing label are shown in FIG. 2, and the representation isWhere x, y represents the center coordinates of the horizontal frame, w, h represents the width and height thereof, r1 represents the distance between the intersection of the uppermost side of the rotating frame and the upper left vertex of the horizontal frame, r2 represents the distance between the intersection of the right side of the rotating frame and the upper right vertex of the horizontal frame, r3, r4, and so on. obj represents the probability that this grid exists for the target and num_class represents the number of target categories.
The loss function during the training phase is as follows:
loss = lobj + lbox + loff + latt + lcls
where lbox is the horizontal-box loss, loff the loss of the 4 offsets, lobj the objectness loss, latt the Attention-Net loss, and lcls the classification loss. λ_coord, λ_class, λ_att, λ_noobj, and λ_obj are the weight coefficients of the corresponding losses; S is the side length of the feature map, i.e., the number of grid points per side; B is the number of Anchors predicted at each grid point; 1_{i,j}^{obj} indicates whether a target exists in the j-th Anchor at the i-th grid point of the feature map (1 if so, 0 otherwise), and 1_{i,j}^{noobj} means exactly the opposite; smooth_L1 is the commonly used target-box regression loss; α1*, α2*, α3*, α4* are the differences between the true and predicted values of the 4 offsets; x*, y*, w*, h* are the differences between the true and predicted center coordinates and width/height of the target box; h and w are the height and width of the feature map; p_{i,j}(c) and p'_{i,j}(c) are the true and predicted Anchor class values; c_{i,j} and c'_{i,j} are the true and predicted objectness values of the j-th Anchor at the i-th grid point; BCE is the binary cross-entropy loss; u_{i,j} and u'_{i,j} are the true and predicted values of the upper-branch feature map in the Attention-Net structure; loss is the total network loss.
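A heavily simplified sketch of how these terms could be assembled; anchor matching, the λ_obj/λ_noobj split, and per-scale summation are omitted, and all tensor keys and weight names are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def offset_variant_loss(pred, target, att_pred, att_target, weights):
    """Sketch of loss = lobj + lbox + loff + latt + lcls for matched anchors.

    pred/target: dicts of tensors for box (x, y, w, h), offsets (r1..r4),
    objectness logits, and class logits; weight keys mirror the λ coefficients."""
    lbox = weights["coord"] * F.smooth_l1_loss(pred["box"], target["box"])
    loff = weights["coord"] * F.smooth_l1_loss(pred["off"], target["off"])
    lobj = weights["obj"] * F.binary_cross_entropy_with_logits(
        pred["obj"], target["obj"])
    lcls = weights["cls"] * F.binary_cross_entropy_with_logits(
        pred["cls"], target["cls"])
    # Attention-Net supervision (BCE) on the branch-1 map, per the glossary.
    latt = weights["att"] * F.binary_cross_entropy_with_logits(
        att_pred, att_target)
    return lobj + lbox + loff + latt + lcls
```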
(2) Rotated-Anchor detection algorithm: each grid cell outputs 18×(5+num_class) channels, where 18 is the number of Anchor types per cell. YOLOv5 originally provides Anchors of three sizes; this algorithm additionally provides 6 Anchors with rotation angles, shown in FIG. 5, of 90°, 60°, 30°, 0°, -30°, and -60°, so there are 3×6 = 18 Anchor types in total. The 5 comprises the 5 label parameters x, y, w, h, θ; num_class is the number of categories.
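A sketch of this rotated-Anchor enumeration; the base sizes in the example are assumed placeholders, not necessarily YOLOv5's configured values:

```python
import numpy as np

# The six rotation angles from FIG. 5 (degrees); combined with the three
# anchor sizes per detection scale this yields 3 x 6 = 18 anchor types per cell.
ANGLES = [90, 60, 30, 0, -30, -60]

def make_rotated_anchors(base_sizes):
    """base_sizes: list of (w, h) anchor sizes for one detection scale."""
    return [(w, h, np.deg2rad(a)) for (w, h) in base_sizes for a in ANGLES]

# Example with assumed base sizes for one scale:
anchors = make_rotated_anchors([(10, 13), (16, 30), (33, 23)])
assert len(anchors) == 18
```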
The loss function during the training phase is as follows:
loss = lobj + lbox + lcls
where lbox is the rotated-box loss, lobj the objectness loss, and lcls the classification loss. λ_coord, λ_class, λ_noobj, and λ_obj are the weight coefficients of the corresponding losses; S is the side length of the feature map, i.e., the number of grid points per side; B is the number of Anchors predicted at each grid point; 1_{i,j}^{obj} indicates whether a target exists in the j-th Anchor at the i-th grid point of the feature map (1 if so, 0 otherwise), and 1_{i,j}^{noobj} means exactly the opposite; smooth_L1 is the commonly used target-box regression loss; p_{i,j}(c) and p'_{i,j}(c) are the true and predicted Anchor class values; c_{i,j} and c'_{i,j} are the true and predicted objectness values of the j-th Anchor at the i-th grid point; x*, y*, w*, h*, θ* are the differences between the true and predicted center coordinates, width/height, and rotation angle of the target box; loss is the total network loss.
Step four: step three, a large number of detection frames are obtained, and repeated frames need to be removed by using an NMS algorithm. The module uses a rotary NMS algorithm, and comprises the following specific steps:
1. all the boxes are ordered from big to small according to confidence level.
2. And selecting a frame with highest confidence each time, if the frame is marked, skipping the frame, and repeating the step 2 until all the frames are selected.
3. The selected frame is rotated IoU with all the remaining frames, in practice, the conventional IoU calculation is performed according to the horizontal frame, and then the angle similarity of the two rotated frames is calculated, and the two are multiplied to obtain IoU of the rotated frame.
4. Boxes IoU below a certain threshold are reserved and marked. And repeating the step 2.
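A NumPy sketch of this rotated NMS as a standard suppression loop, using the horizontal IoU multiplied by the angle-similarity factor abs(cos(θ' − θ)) from the post-processing description; the threshold value is an assumed example:

```python
import numpy as np

def rotated_nms(boxes, scores, iou_thresh=0.5):
    """boxes: (N, 5) array of (x, y, w, h, theta); scores: (N,).
    Returns the indices of the kept boxes."""
    x, y, w, h, theta = boxes.T
    x1, y1, x2, y2 = x - w / 2, y - h / 2, x + w / 2, y + h / 2
    areas = w * h
    order = scores.argsort()[::-1]          # step 1: sort by confidence
    keep = []
    while order.size > 0:
        i = order[0]                        # step 2: highest remaining confidence
        keep.append(i)
        rest = order[1:]
        # Horizontal-box IoU against all remaining boxes (vectorized).
        xx1 = np.maximum(x1[i], x1[rest]); yy1 = np.maximum(y1[i], y1[rest])
        xx2 = np.minimum(x2[i], x2[rest]); yy2 = np.minimum(y2[i], y2[rest])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[rest] - inter)
        # Step 3: multiply by the angle-similarity factor.
        riou = iou * np.abs(np.cos(theta[rest] - theta[i]))
        # Step 4: boxes below the threshold survive to the next round.
        order = rest[riou < iou_thresh]
    return keep
```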

Claims (7)

1. A YOLOv5-based rotated target detection algorithm, comprising the steps of:
(1) Receiving an input image, on which a data loading module performs data augmentation;
(2) Selecting one of two rotated target detection algorithms;
(3) If the coordinate-offset regression algorithm is selected in step (2), adding an Attention-Net module after the YOLOv5 backbone to extract features;
(4) Performing coordinate-offset rotated target prediction on the extracted feature maps;
(5) During training, combining the YOLOv5 loss function with an added offset loss function and the loss function of the Attention-Net module;
(6) Applying rotated-target post-processing to the predicted boxes to obtain the final detection result;
(7) If the rotated-Anchor detection algorithm is selected in step (2), extracting features directly with the YOLOv5 backbone;
(8) Performing rotated-Anchor target prediction on the extracted feature maps;
(9) During training, replacing the YOLOv5 horizontal-box loss function with a rotated-box loss function;
(10) Applying rotated-target post-processing to the predicted boxes to obtain the final detection result.
2. The YOLOv5-based rotated target detection algorithm of claim 1, wherein step (1) is implemented as follows:
the label-disturbance data augmentation method specifically comprises:
when generating training labels, the coordinate values of the annotation points are fluctuated within a certain range, the fluctuation range being determined by the target's pixel size.
3. The YOLOv5-based rotated target detection algorithm of claim 1, wherein step (3) is implemented as follows:
the Attention-Net structure added after the YOLOv5 backbone removes or reduces noise in the feature map, so that the boundaries between targets, and between targets and the background, are clearer.
4. The YOLOv5-based rotated target detection algorithm of claim 1, wherein step (5) is implemented as follows:
on top of the YOLOv5 loss function, an offset loss function and the loss function of the Attention-Net module are added:
where loff is the offset loss function and latt the loss function of the Attention-Net;
λ_coord and λ_att are the weight coefficients of the offset loss and the Attention-Net loss, respectively; S is the side length of the feature map, i.e., the number of grid points per side; B is the number of Anchors predicted at each grid point; 1_{i,j}^{obj} indicates whether a target exists in the j-th Anchor at the i-th grid point of the feature map (1 if so, 0 otherwise); smooth_L1 is the commonly used target-box regression loss; α1*, α2*, α3*, α4* are the differences between the true and predicted values of the 4 offsets; h and w are the height and width of the feature map; BCE is the binary cross-entropy loss; u_{i,j} and u'_{i,j} are the true and predicted values of the upper-branch feature map in the Attention-Net structure.
5. The YOLOv5-based rotated target detection algorithm of claim 1, wherein step (8) is implemented as follows:
6 rotated Anchors of different angles are added to YOLOv5's original 3 Anchor sizes, generating 18 kinds of rotation-angle Anchors at each grid point in total, which better match ground-truth boxes that carry rotation angles.
6. The YOLOv5-based rotated target detection algorithm of claim 1, wherein step (9) is implemented as follows:
the YOLOv5 horizontal-box loss function is improved, changing it into a rotated-box loss function:
where λ_coord is the weight coefficient of the rotated-box loss; S is the side length of the feature map, i.e., the number of grid points per side; B is the number of Anchors predicted at each grid point; 1_{i,j}^{obj} indicates whether a target exists in the j-th Anchor at the i-th grid point of the feature map (1 if so, 0 otherwise); smooth_L1 is the commonly used target-box regression loss; x*, y*, w*, h*, θ* are the differences between the true and predicted center coordinates, width/height, and rotation angle of the target box.
7. The YOLOv5-based rotated target detection algorithm of claim 1, wherein steps (6) and (10) are implemented as follows:
the rotated target-box IoU is computed approximately, mainly as matrix operations on the GPU, like horizontal boxes: the horizontal-box IoU is multiplied by an angle-similarity factor to approximate the rotated-box IoU, the angle similarity being computed as
angle_factor = abs(cos(θ' - θ))
where θ and θ' are the true and predicted angles, respectively. The angle and the horizontal-box coordinate values can thus be processed with matrix operations without being combined, improving computational efficiency.
CN202110468451.8A 2021-04-28 2021-04-28 Rotational target detection method based on YOLOv5 Active CN113326734B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110468451.8A CN113326734B (en) 2021-04-28 2021-04-28 Rotational target detection method based on YOLOv5

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110468451.8A CN113326734B (en) 2021-04-28 2021-04-28 Rotational target detection method based on YOLOv5

Publications (2)

Publication Number Publication Date
CN113326734A CN113326734A (en) 2021-08-31
CN113326734B true CN113326734B (en) 2023-11-24

Family

ID=77413879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110468451.8A Active CN113326734B (en) 2021-04-28 2021-04-28 Rotational target detection method based on YOLOv5

Country Status (1)

Country Link
CN (1) CN113326734B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114240946B (en) * 2022-02-28 2022-12-02 南京智莲森信息技术有限公司 Locator abnormality detection method, system, storage medium and computing device
CN115439765B (en) * 2022-09-17 2024-02-02 艾迪恩(山东)科技有限公司 Marine plastic garbage rotation detection method based on machine learning unmanned aerial vehicle visual angle


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2020102091A4 (en) * 2019-10-17 2020-10-08 Wuhan University Of Science And Technology Intelligent steel slag detection method and system based on convolutional neural network
CN111461110A (en) * 2020-03-02 2020-07-28 华南理工大学 Small target detection method based on multi-scale image and weighted fusion loss
CN111444809A (en) * 2020-03-23 2020-07-24 华南理工大学 Power transmission line abnormal target detection method based on improved YO L Ov3
CN111723748A (en) * 2020-06-22 2020-09-29 电子科技大学 Infrared remote sensing image ship detection method
CN112418108A (en) * 2020-11-25 2021-02-26 西北工业大学深圳研究院 Remote sensing image multi-class target detection method based on sample reweighing

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A shale gas exploitation platform detection and positioning method based on YOLOv5; Chengyao Zhang; 2021 3rd International Conference on Intelligent Control, Measurement and Signal Processing and Intelligent Oil Field (ICMSP); full text *
Research on small-target detection algorithms based on YOLOv3; Xu Rong; China Master's Theses Full-text Database (Information Science and Technology), No. 02; full text *
Target detection algorithm based on improved YOLO v3; Zhao Qiong, Li Baoqing, Li Tangwei; Laser & Optoelectronics Progress (12); full text *
Ship target detection based on enhanced YOLOv3 in complex scenes; Nie Xin, Liu Wen, Wu Wei; Journal of Computer Applications (09); full text *

Also Published As

Publication number Publication date
CN113326734A (en) 2021-08-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant