CN113297996A - Unmanned aerial vehicle aerial photographing insulator target detection method based on YoloV3


Info

Publication number
CN113297996A
Authority
CN
China
Prior art keywords
target detection
unmanned aerial
aerial vehicle
detection method
insulator
Prior art date
Legal status
Pending
Application number
CN202110604048.3A
Other languages
Chinese (zh)
Inventor
杨金铎
曾惜
王林波
王元峰
杨凤生
王恩伟
王宏远
赖劲舟
Current Assignee
Guizhou Power Grid Co Ltd
Original Assignee
Guizhou Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Guizhou Power Grid Co Ltd
Priority to CN202110604048.3A
Publication of CN113297996A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211 Selection of the most significant subset of features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3, which comprises: constructing a target detection model based on the YoloV3 model and modifying the network structure of the target detection model; building a feature extraction layer of the target detection model through a feature pyramid, and then performing convolution and pooling on the feature extraction layer respectively to complete optimization of the target detection model; and carrying out target detection on unmanned aerial vehicle aerial insulator images by using the optimized target detection model. Because the target detection model is constructed on the basis of the YoloV3 model, the model recognizes insulators with small target areas and occluded insulators better, with higher recognition accuracy and more accurate localization.

Description

Unmanned aerial vehicle aerial photographing insulator target detection method based on YoloV3
Technical Field
The invention relates to the technical field of unmanned aerial vehicles, in particular to an unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3.
Background
At present, the field of unmanned aerial vehicles has great market value, and processing technology for unmanned aerial vehicle aerial images has become a popular research topic.
Traditional target detection algorithms for aerial images mainly adopt a staged design that performs region window extraction, feature extraction and window classification on the images. However, for targets with diverse appearances, the sliding-window-based region selection strategy suffers from lack of pertinence, high computational complexity, window redundancy and poor robustness. In unmanned aerial vehicle aerial images, besides small-scale targets and large scale variation, target objects are also affected by factors such as illumination, occlusion and complex, changeable backgrounds. Traditional target detection algorithms are easily disturbed by these interference factors, leading to false detections and missed detections. In recent years, with the emergence of a large number of deep learning algorithms, breakthrough progress has been made in target detection, instance segmentation and related technologies. Deep convolutional neural networks break the bottleneck of traditional target detection algorithms, which can only extract shallow features, and significantly improve the extraction of deep image features, thereby improving target detection performance under complex backgrounds.
Most existing aerial image processing algorithms adopt Convolutional Neural Networks (CNN) for detection. A CNN generates a feature representation of a complex object by aggregating a hierarchy of semantic sub-features. These sub-features are typically distributed in groups within the feature vectors at each layer, representing various semantic entities. However, the activations of these sub-features are often spatially affected by similar patterns and noisy backgrounds, resulting in erroneous localization and recognition.
Because static targets in aerial images present many very challenging problems, such as large scale variation and target occlusion, no universally applicable solution has yet been found, and further in-depth research is needed.
Disclosure of Invention
This section is for the purpose of summarizing some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. In this section, as well as in the abstract and the title of the invention of this application, simplifications or omissions may be made to avoid obscuring the purpose of the section, the abstract and the title, and such simplifications or omissions are not intended to limit the scope of the invention.
The present invention has been made in view of the above-mentioned conventional problems.
Therefore, the invention provides an unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3, which can solve the problem of low recognition accuracy for insulators with small target areas and occluded insulators.
In order to solve the technical problems, the invention provides the following technical scheme: constructing a target detection model based on a YoloV3 model, and modifying a network structure of the target detection model; building a feature extraction layer of the target detection model through the feature pyramid, and then performing convolution and pooling on the feature extraction layer respectively to complete optimization of the target detection model; and carrying out target detection on the unmanned aerial vehicle aerial photographing insulator by using the optimized target detection model.
The invention relates to a preferable scheme of the unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3, wherein: modifying the network structure of the target detection model comprises using a CSPDarknet-53 network as the backbone network of the target detection model; selecting the Mish function as the activation function; and using CIOU as the regression loss function.
The invention relates to a preferable scheme of the unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3, wherein: the Mish function is,
Mish(x) = x × tanh(ln(1 + e^x))
where x is the input.
The invention relates to a preferable scheme of the unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3, wherein: the regression loss function is,
LOSS_CIOU = 1 - IOU + ρ^2(b, b^gt)/c^2 + αv
where LOSS_CIOU is the regression loss value, b and b^gt denote the center points of the prediction box and the ground-truth box respectively, ρ denotes the Euclidean distance between the two center points, c denotes the diagonal length of the smallest enclosing region that contains both the prediction box and the ground-truth box, α is a weight function, and v measures the consistency of the aspect ratios.
The invention relates to a preferable scheme of the unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3, wherein: the weight function α is,
α = v / ((1 - IOU) + v)
where IOU is the intersection over union.
The invention relates to a preferable scheme of the unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3, wherein: v is,
v = (4/π^2) × (arctan(w^gt/h^gt) - arctan(w/h))^2
where w and h are the width and height of the prediction box, and w^gt and h^gt are the width and height of the ground-truth box.
The invention relates to a preferable scheme of the unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3, wherein: the feature extraction layer comprises a spatial pyramid pooling network and a path aggregation network; the spatial pyramid pooling network converts a feature map of arbitrary size into a feature vector of fixed size; and the path aggregation network shortens the information path between low-level features and top-level features in the feature extraction layer.
The invention relates to a preferable scheme of the unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3, wherein: the last feature layer of the CSPDarknet-53 network is convolved three times through the spatial pyramid pooling network and then pooled with max-pooling kernels of four different scales; the sizes of the four max-pooling kernels are 13×13, 9×9, 5×5 and 1×1, respectively.
The invention has the beneficial effects that: because the target detection model is constructed on the basis of the YoloV3 model, the model recognizes insulators with small target areas and occluded insulators better, with higher recognition accuracy and more accurate localization.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise. Wherein:
fig. 1 is a schematic diagram of the CSPDarknet-53 network structure of the unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3 according to a first embodiment of the present invention;
fig. 2 is a schematic structural diagram of the PANet in the unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3 according to the first embodiment of the present invention;
fig. 3 is a first schematic diagram of a recognition result of the unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3 according to a second embodiment of the present invention;
fig. 4 is a second schematic diagram of a recognition result of the unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3 according to the second embodiment of the present invention;
fig. 5 is a third schematic diagram of a recognition result of the unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3 according to the second embodiment of the present invention;
fig. 6 is a fourth schematic diagram of a recognition result of the unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3 according to the second embodiment of the present invention;
fig. 7 is a fifth schematic diagram of a recognition result of the unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3 according to the second embodiment of the present invention;
fig. 8 is a sixth schematic diagram of a recognition result of the unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3 according to the second embodiment of the present invention;
fig. 9 is a seventh schematic diagram of a recognition result of the unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3 according to the second embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, specific embodiments accompanied with figures are described in detail below, and it is apparent that the described embodiments are a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present invention, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.
Furthermore, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
The present invention will be described in detail with reference to the drawings, wherein the cross-sectional views illustrating the structure of the device are not enlarged partially in general scale for convenience of illustration, and the drawings are only exemplary and should not be construed as limiting the scope of the present invention. In addition, the three-dimensional dimensions of length, width and depth should be included in the actual fabrication.
Meanwhile, in the description of the present invention, it should be noted that the terms "upper, lower, inner and outer" and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of describing the present invention and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation and operate, and thus, cannot be construed as limiting the present invention. Furthermore, the terms first, second, or third are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected and connected" in the present invention are to be understood broadly, unless otherwise explicitly specified or limited, for example: can be fixedly connected, detachably connected or integrally connected; they may be mechanically, electrically, or directly connected, or indirectly connected through intervening media, or may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Example 1
Referring to fig. 1 to 2, a first embodiment of the present invention provides an unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3, including:
s1: and constructing a target detection model based on the YoloV3 model, and modifying the network structure of the target detection model.
Referring to fig. 1, the CSPDarknet-53 network is used as the backbone network of the target detection model; the input image size of the CSPDarknet-53 network is 416 × 416, and feature maps of three scales are output: 52 × 52, 26 × 26 and 13 × 13.
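As a quick illustrative check (not part of the patent text), the three spatial sizes follow from the 416 × 416 input if the usual YOLO downsampling strides of 8, 16 and 32 are assumed; the strides are an assumption of this sketch, not stated in the embodiment:

```python
INPUT_SIZE = 416
STRIDES = (8, 16, 32)  # assumed downsampling factors of the three detection scales
print([INPUT_SIZE // s for s in STRIDES])  # [52, 26, 13]
```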
further, a Mish function is selected as an activation function, so that the smoothness of each point is guaranteed, the gradient descent effect is improved, and the expression is as follows:
Mish(x) = x × tanh(ln(1 + e^x))
where x is the input.
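Purely for illustration, a minimal PyTorch sketch of the Mish activation defined above could look as follows; the module name Mish and the use of softplus (which computes ln(1 + e^x)) are implementation choices of this sketch, not specified by the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mish(nn.Module):
    """Mish activation: x * tanh(ln(1 + exp(x)))."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # softplus(x) = ln(1 + exp(x)), computed in a numerically stable way
        return x * torch.tanh(F.softplus(x))

if __name__ == "__main__":
    x = torch.linspace(-5, 5, steps=11)
    print(Mish()(x))  # smooth everywhere, slightly negative for small negative inputs
```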
Further, CIOU (Complete Intersection over Union) is used as the regression loss function. Preferably, CIOU takes into account the distance between the target box and the anchor box, the overlap ratio, the scale and a penalty term, so that the regression of the target box becomes more stable, and problems such as divergence during training, which occur with IOU (Intersection over Union) and GIOU (Generalized Intersection over Union), do not arise. The function expression is as follows:
LOSS_CIOU = 1 - IOU + ρ^2(b, b^gt)/c^2 + αv
where LOSS_CIOU is the regression loss value, b and b^gt denote the center points of the prediction box and the ground-truth box respectively, ρ denotes the Euclidean distance between the two center points, c denotes the diagonal length of the smallest enclosing region that contains both the prediction box and the ground-truth box, α is a weight function, and v measures the consistency of the aspect ratios.
The weight function α is:
α = v / ((1 - IOU) + v)
and v is:
v = (4/π^2) × (arctan(w^gt/h^gt) - arctan(w/h))^2
where IOU is the intersection over union, w and h are the width and height of the prediction box, and w^gt and h^gt are the width and height of the ground-truth box.
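The CIOU loss above can be sketched as follows (a non-authoritative PyTorch illustration; the function name ciou_loss, the (x1, y1, x2, y2) box format and the epsilon terms are assumptions made for this example).

```python
import math
import torch

def ciou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """CIOU loss for boxes in (x1, y1, x2, y2) format, shape (N, 4)."""
    # Intersection area
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    # Union area and IOU
    w1, h1 = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w2, h2 = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union

    # rho^2: squared distance between centers; c^2: squared diagonal of the smallest enclosing box
    cx1, cy1 = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    cx2, cy2 = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    rho2 = (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # Aspect-ratio consistency term v and weight alpha
    v = (4 / math.pi ** 2) * (torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps))) ** 2
    with torch.no_grad():
        alpha = v / ((1 - iou) + v + eps)

    return 1 - iou + rho2 / c2 + alpha * v
```

A quick sanity check: for identical boxes the loss is 0, and it grows as the boxes drift apart or their aspect ratios diverge.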
S2: and building a feature extraction layer of the target detection model through the feature pyramid, and then performing convolution and pooling on the feature extraction layer respectively to complete optimization of the target detection model.
The feature extraction layer comprises a spatial pyramid pooling network and a path aggregation network. The Spatial Pyramid Pooling (SPP) network converts a feature map of arbitrary size into a feature vector of fixed size, and the Path Aggregation Network (PANet) shortens the information path between low-level features and top-level features in the feature extraction layer.
It should be noted that repeated feature extraction is a very important characteristic of the path aggregation network. Referring to fig. 2, (a) shows the conventional feature pyramid structure; after the top-down feature extraction of the feature pyramid in (a) is completed, the bottom-up path augmentation in (b) still needs to be realized. The parts are: (a) the FPN (Feature Pyramid Network) backbone, (b) the bottom-up path augmentation, (c) adaptive feature pooling, (d) the box branch, and (e) the fully connected fusion layer.
Preferably, the last feature layer of the CSPDarknet-53 network is first processed by three convolutions through the spatial pyramid pooling network, then pooled with max-pooling kernels of four different scales, and the results are stacked together, which increases the receptive field and separates out the most significant contextual features.
The convolution block used for the convolution processing is DarknetConv2D_BN_Mish, and the four max-pooling kernel sizes are 13×13, 9×9, 5×5 and 1×1, respectively.
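A hedged PyTorch sketch of this SPP block is given below, assuming stride-1 pooling with symmetric padding so the four branches can be concatenated, and treating the 1×1 branch as the identity map; the class names ConvBNMish and SPPBlock, and the channel widths, are assumptions of the sketch.

```python
import torch
import torch.nn as nn

class ConvBNMish(nn.Module):
    """Convolution + BatchNorm + Mish, in the spirit of DarknetConv2D_BN_Mish."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.Mish()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class SPPBlock(nn.Module):
    """Three convolutions, then max pooling at four scales (13x13, 9x9, 5x5, 1x1), concatenated."""
    def __init__(self, in_ch: int = 1024, mid_ch: int = 512):
        super().__init__()
        self.pre = nn.Sequential(
            ConvBNMish(in_ch, mid_ch, 1),
            ConvBNMish(mid_ch, mid_ch * 2, 3),
            ConvBNMish(mid_ch * 2, mid_ch, 1),
        )
        # stride 1 with symmetric padding keeps the spatial size unchanged
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in (13, 9, 5)]
        )

    def forward(self, x):
        x = self.pre(x)
        # the 1x1 "pooling" branch is the feature map itself
        return torch.cat([x] + [pool(x) for pool in self.pools], dim=1)

if __name__ == "__main__":
    feat = torch.randn(1, 1024, 13, 13)  # assumed shape of the last CSPDarknet-53 feature map
    print(SPPBlock()(feat).shape)        # torch.Size([1, 2048, 13, 13])
```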
S3: and carrying out target detection on the unmanned aerial vehicle aerial photographing insulator by using the optimized target detection model.
A plurality of feature layers are extracted to carry out target detection on the unmanned aerial vehicle aerial photography insulator. Three feature layers are extracted in total, located at the middle, middle-lower and bottom parts of the network, with shapes (76, 76, 256), (38, 38, 512) and (19, 19, 1024); the shapes of the corresponding output layers are (19, 19, 75), (38, 38, 75) and (76, 76, 75), respectively.
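For reference only, a minimal sketch of a 1×1-convolution detection head reproducing the quoted output shapes is shown below; the decomposition 75 = 3 anchors × (20 classes + 5) is an inference for this example and is not stated in the patent.

```python
import torch
import torch.nn as nn

NUM_ANCHORS = 3
NUM_CLASSES = 20                       # assumed: 3 * (20 + 5) = 75 output channels
OUT_CH = NUM_ANCHORS * (NUM_CLASSES + 5)

class YoloHead(nn.Module):
    """1x1 convolution mapping a feature layer to per-cell predictions."""
    def __init__(self, in_ch: int):
        super().__init__()
        self.pred = nn.Conv2d(in_ch, OUT_CH, kernel_size=1)

    def forward(self, x):
        # (batch, 75, H, W) -> (batch, H, W, 75) to match the shapes quoted in the text
        return self.pred(x).permute(0, 2, 3, 1)

if __name__ == "__main__":
    feats = [torch.randn(1, c, s, s) for c, s in ((1024, 19), (512, 38), (256, 76))]
    heads = [YoloHead(c) for c in (1024, 512, 256)]
    for f, h in zip(feats, heads):
        print(h(f).shape)  # (1, 19, 19, 75), (1, 38, 38, 75), (1, 76, 76, 75)
```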
Example 2
In order to verify the technical effect of the method, a simulation test is carried out on insulator images actually photographed by an unmanned aerial vehicle, and the test results are compared by means of scientific demonstration to verify the real effect of the method.
Firstly, the aerial insulator image data set is annotated, and then the annotated data set is input into a pre-trained target detection model for transfer learning. During training, the batch size is set to 4, the initial learning rate to 0.001, the weight decay to 0.0001 and the momentum to 0.9; the maximum number of iterations is set to 50. The results are shown in fig. 3 to fig. 9. It can be seen from the experimental results that, under these parameter settings, the algorithm of the invention performs well in terms of target recognition accuracy, real-time performance, robustness and fault tolerance.
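As a non-authoritative illustration of the quoted training settings (batch size 4, initial learning rate 0.001, weight decay 0.0001, momentum 0.9, at most 50 iterations), an SGD configuration in PyTorch could be sketched as follows; the choice of SGD and the model variable are assumptions, since the patent does not name the optimizer.

```python
import torch

def build_optimizer(model: torch.nn.Module) -> torch.optim.Optimizer:
    # Hyperparameters quoted in the embodiment: lr=0.001, momentum=0.9, weight decay=0.0001
    return torch.optim.SGD(
        model.parameters(),
        lr=0.001,
        momentum=0.9,
        weight_decay=0.0001,
    )

BATCH_SIZE = 4
MAX_ITERATIONS = 50  # maximum number of iterations set in the embodiment
```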
It should be noted that the above-mentioned embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.

Claims (8)

1. An unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3, characterized by comprising the steps of:
constructing a target detection model based on a YoloV3 model, and modifying a network structure of the target detection model;
building a feature extraction layer of the target detection model through the feature pyramid, and then performing convolution and pooling on the feature extraction layer respectively to complete optimization of the target detection model;
and carrying out target detection on the unmanned aerial vehicle aerial photographing insulator by using the optimized target detection model.
2. The YoloV3-based unmanned aerial vehicle aerial photography insulator target detection method of claim 1, wherein: modifying the network structure of the target detection model comprises,
a CSPDarknet-53 network is used as a backbone network of a target detection model;
selecting a Mish function as an activation function;
CIOU is used as a regression loss function.
3. The YoloV3-based unmanned aerial vehicle aerial photography insulator target detection method of claim 2, wherein: the Mish function comprises,
Mish(x) = x × tanh(ln(1 + e^x))
where x is the input.
4. The YoloV3-based unmanned aerial vehicle aerial photography insulator target detection method of claim 3, wherein: the regression loss function comprises,
LOSS_CIOU = 1 - IOU + ρ^2(b, b^gt)/c^2 + αv
where LOSS_CIOU is the regression loss value, b and b^gt denote the center points of the prediction box and the ground-truth box respectively, ρ denotes the Euclidean distance between the two center points, c denotes the diagonal length of the smallest enclosing region that contains both the prediction box and the ground-truth box, α is a weight function, and v measures the consistency of the aspect ratios.
5. The YoloV3-based unmanned aerial vehicle aerial photography insulator target detection method of claim 4, wherein: the weight function α comprises,
α = v / ((1 - IOU) + v)
wherein IOU is the intersection over union.
6. The unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3 as claimed in claim 4 or 5, wherein: v comprises,
v = (4/π^2) × (arctan(w^gt/h^gt) - arctan(w/h))^2
wherein w and h are the width and height of the prediction box, and w^gt and h^gt are the width and height of the ground-truth box.
7. The unmanned aerial vehicle aerial photography insulator target detection method based on YoloV3 as claimed in any one of claims 1, 2 and 3, wherein: the feature extraction layer comprises a spatial pyramid pooling network and a path aggregation network;
converting the characteristic diagram with any size into a characteristic vector with a fixed size through the spatial pyramid pooling network;
and shortening an information path between the low-layer feature and the top-layer feature in the feature extraction layer through the path aggregation network.
8. The YoloV3-based unmanned aerial vehicle aerial photography insulator target detection method of claim 7, wherein the method further comprises,
performing convolution processing on the last feature layer of the CSPDarknet-53 network three times through the spatial pyramid pooling network, and then performing pooling processing using max-pooling kernels of four different scales;
wherein the sizes of the four max-pooling kernels are 13×13, 9×9, 5×5 and 1×1, respectively.
CN202110604048.3A 2021-05-31 2021-05-31 Unmanned aerial vehicle aerial photographing insulator target detection method based on YoloV3 Pending CN113297996A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110604048.3A CN113297996A (en) 2021-05-31 2021-05-31 Unmanned aerial vehicle aerial photographing insulator target detection method based on YoloV3

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110604048.3A CN113297996A (en) 2021-05-31 2021-05-31 Unmanned aerial vehicle aerial photographing insulator target detection method based on YoloV3

Publications (1)

Publication Number Publication Date
CN113297996A true CN113297996A (en) 2021-08-24

Family

ID=77326585

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110604048.3A Pending CN113297996A (en) 2021-05-31 2021-05-31 Unmanned aerial vehicle aerial photographing insulator target detection method based on YoloV3

Country Status (1)

Country Link
CN (1) CN113297996A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114022432A (en) * 2021-10-28 2022-02-08 湖北工业大学 Improved yolov 5-based insulator defect detection method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190325584A1 (en) * 2018-04-18 2019-10-24 Tg-17, Llc Systems and Methods for Real-Time Adjustment of Neural Networks for Autonomous Tracking and Localization of Moving Subject
CN112288043A (en) * 2020-12-23 2021-01-29 飞础科智慧科技(上海)有限公司 Kiln surface defect detection method, system and medium
CN112308040A (en) * 2020-11-26 2021-02-02 山东捷讯通信技术有限公司 River sewage outlet detection method and system based on high-definition images
CN112614130A (en) * 2021-01-04 2021-04-06 东华大学 Unmanned aerial vehicle power transmission line insulator fault detection method based on 5G transmission and YOLOv3

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190325584A1 (en) * 2018-04-18 2019-10-24 Tg-17, Llc Systems and Methods for Real-Time Adjustment of Neural Networks for Autonomous Tracking and Localization of Moving Subject
CN112308040A (en) * 2020-11-26 2021-02-02 山东捷讯通信技术有限公司 River sewage outlet detection method and system based on high-definition images
CN112288043A (en) * 2020-12-23 2021-01-29 飞础科智慧科技(上海)有限公司 Kiln surface defect detection method, system and medium
CN112614130A (en) * 2021-01-04 2021-04-06 东华大学 Unmanned aerial vehicle power transmission line insulator fault detection method based on 5G transmission and YOLOv3

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
智能算法: "目标检测算法YOLOv4详解" (Detailed Explanation of the YOLOv4 Object Detection Algorithm), https://blog.csdn.net/x454045816/article/details/109759989 *
杨露菁 et al.: 《智能图像处理及应用》 (Intelligent Image Processing and Applications), China Railway Press, 31 March 2019 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114022432A (en) * 2021-10-28 2022-02-08 湖北工业大学 Improved yolov 5-based insulator defect detection method
CN114022432B (en) * 2021-10-28 2024-04-30 湖北工业大学 Insulator defect detection method based on improved yolov5

Similar Documents

Publication Publication Date Title
CN109493346B (en) Stomach cancer pathological section image segmentation method and device based on multiple losses
CN110263705A (en) Towards two phase of remote sensing technology field high-resolution remote sensing image change detecting method
CN112580664A (en) Small target detection method based on SSD (solid State disk) network
CN109785298B (en) Multi-angle object detection method and system
CN111079739B (en) Multi-scale attention feature detection method
CN109492596B (en) Pedestrian detection method and system based on K-means clustering and regional recommendation network
US20230206603A1 (en) High-precision point cloud completion method based on deep learning and device thereof
CN106780543A (en) A kind of double framework estimating depths and movement technique based on convolutional neural networks
CN108416266A (en) A kind of video behavior method for quickly identifying extracting moving target using light stream
CN110533041B (en) Regression-based multi-scale scene text detection method
CN113033520A (en) Tree nematode disease wood identification method and system based on deep learning
CN112163520A (en) MDSSD face detection method based on improved loss function
CN115171165A (en) Pedestrian re-identification method and device with global features and step-type local features fused
CN112750125B (en) Glass insulator piece positioning method based on end-to-end key point detection
CN112633257A (en) Potato disease identification method based on improved convolutional neural network
CN110046568A (en) A kind of video actions recognition methods based on Time Perception structure
CN114821299B (en) Remote sensing image change detection method
CN116563726A (en) Remote sensing image ship target detection method based on convolutional neural network
CN113297996A (en) Unmanned aerial vehicle aerial photographing insulator target detection method based on YoloV3
CN116469020A (en) Unmanned aerial vehicle image target detection method based on multiscale and Gaussian Wasserstein distance
CN115223017A (en) Multi-scale feature fusion bridge detection method based on depth separable convolution
CN112818777B (en) Remote sensing image target detection method based on dense connection and feature enhancement
CN114067126A (en) Infrared image target detection method
CN113989291A (en) Building roof plane segmentation method based on PointNet and RANSAC algorithm
CN116994162A (en) Unmanned aerial vehicle aerial photographing insulator target detection method based on improved Yolo algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210824