CN109376580B - Electric power tower component identification method based on deep learning - Google Patents


Info

Publication number
CN109376580B
CN109376580B (application CN201811002575.1A)
Authority
CN
China
Prior art keywords
layer
layers
convolution
training
feature maps
Prior art date
Legal status
Active
Application number
CN201811002575.1A
Other languages
Chinese (zh)
Other versions
CN109376580A (en)
Inventor
髙云园
陈强
黄威
袁世学
谷雨
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201811002575.1A priority Critical patent/CN109376580B/en
Publication of CN109376580A publication Critical patent/CN109376580A/en
Application granted granted Critical
Publication of CN109376580B publication Critical patent/CN109376580B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep-learning-based method for identifying electric power tower components in unmanned aerial vehicle (UAV) imagery. The invention comprises the following steps: 1. collect pictures containing electric power tower components using image acquisition equipment carried by a UAV; 2. select a suitable number of the collected pictures, preprocess them, and divide them into a training set, a verification set, and a test set in a given proportion; 3. train on the processed data set using an improved YOLOv2 algorithm; 4. test the trained model on the test set and evaluate the results. Compared with the original YOLOv2 algorithm, the improved YOLOv2 algorithm effectively improves the accuracy and speed with which the model identifies power tower components, and has better robustness.

Description

Electric power tower component identification method based on deep learning
Technical Field
The invention belongs to the field of deep learning target identification, relates to an electric power tower component identification method, and particularly relates to a real-time electric power tower component identification method based on deep learning.
Background
With the rapid development of the unmanned aerial vehicle (UAV) industry, the application of UAVs to power inspection has attracted wide attention. UAV power inspection produces a large amount of picture data containing power tower components, and relying on manual interpretation alone would require a significant amount of time. It is therefore valuable to detect and identify the power tower components in the image data automatically with a target identification algorithm. Because of the special operating mode and complicated environment of UAV power inspection, and compared with common identification targets such as pedestrians and vehicles, the images obtained during inspection have more complicated backgrounds, low contrast between the targets to be identified and the background, and frequent strong interference. Traditional electrical-equipment identification algorithms rely on manually extracted features, such as the Scale-Invariant Feature Transform (SIFT) or the Histogram of Oriented Gradients (HOG), combined with classifiers such as support vector machines (SVM) or random forests. In addition, image segmentation algorithms such as adaptive thresholding and watershed have been adopted to segment the peripheral outline of electrical equipment. However, because the structure of electrical equipment is complex and irregular, these methods give mediocre results, with low accuracy and poor generalization ability.
The introduction of AlexNet in 2012 brought wide attention to the application of deep learning in image recognition and object detection. Current target detection frameworks that use a convolutional neural network for feature extraction can be roughly divided into two types. The first is the two-stage family built on region proposals, represented by R-CNN (Region-based Convolutional Neural Network) and Fast R-CNN. The second is the one-stage family, represented by YOLO (You Only Look Once) and SSD (Single Shot MultiBox Detector). Redmon J et al. proposed the YOLO algorithm, which sacrifices some precision in pursuit of speed. YOLOv2 subsequently addressed this loss of precision, achieves good recognition rate and accuracy, and is therefore suitable for real-time identification of electric power tower components on UAVs.
Disclosure of Invention
In order to identify electric tower components quickly and accurately during UAV power inspection, the invention provides an electric tower component identification method based on deep learning. First, a large number of images containing electric tower components are acquired with image acquisition equipment carried by a UAV; then images of the three targets to be recognized, namely the electric tower, the sign, and the tower base, are selected and preprocessed, and a training set, a verification set, and a test set are made in a given proportion; the data set is then trained with an improved YOLOv2 algorithm; finally, the trained model is tested on the test set and the results are evaluated.
The method mainly comprises the following steps:
(1) Carry an image acquisition device on the unmanned aerial vehicle and acquire image information of the power tower components during power inspection.
(2) Select three types of objects, namely towers, signs, and tower bases, from the electric tower component images acquired in step (1) as identification objects. Select images containing these three types of objects, preprocess them, and make a training set, a verification set, and a test set for subsequent training and testing.
(3) Train on the training set prepared in step (2) using the improved YOLOv2 algorithm.
The modified YOLOv2 algorithm is as follows:
The improved YOLOv2 network structure contains 24 convolutional layers, 5 pooling layers, and two transfer layers. The convolutional layers of the network use 3 × 3 and 1 × 1 convolution kernels; layers with 3 × 3 kernels alternate with layers with 1 × 1 kernels, making full use of the channel-compression effect of 1 × 1 kernels, and the number of convolution kernels doubles after each pooling layer. The improved network structure removes the two redundant 3 × 3 × 1024 convolutional layers at the end of the YOLOv2 network and halves the number of convolution kernels in the Conv_3 to Conv_6 convolutional layers.
A transfer layer is composed of a route layer and a reorg layer: the route layer concatenates feature maps from different layers, and the reorg layer adjusts the size of feature maps. Combined, they resize the feature maps of other layers to match the size of the current layer's feature maps and then splice them together, fusing feature maps of different sizes. The improved network structure uses two transfer layers at the end of the network to fuse the 26 × 26 and 52 × 52 feature maps with the 13 × 13 feature maps. The transfer layer is expressed as follows:
Xn = fpn1 + fpn2 + ... + fpnj (1)
Xm = fpm1 + fpm2 + ... + fpmk (2)
Xn represents all the feature maps from the n-th convolutional layer, fpn1~fpnj corresponding respectively to the j feature maps; Xm represents all the feature maps from the m-th convolutional layer, fpm1~fpmk corresponding respectively to the k feature maps.
X'n = reorg(Xn) (3)
2^λ = (Sn/Sm)^2 (4)
Xmerge = [X'n, Xm] (5)
reorg(Xn) denotes the interlaced sampling of the feature maps of the n-th convolutional layer, after which their size matches that of the feature maps of the m-th convolutional layer. Sn denotes the feature map size Sn × Sn of the n-th layer, and Sm the feature map size Sm × Sm of the m-th layer. The number of feature maps after adjustment is 2^λ times that before adjustment, and Xmerge represents the result obtained by fusing the feature maps of the m-th and n-th convolutional layers.
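The interlaced sampling and fusion described above can be sketched in a few lines of numpy. This is an illustrative reconstruction under the assumption that the reorg layer behaves like YOLOv2's space-to-depth rearrangement (the exact element ordering inside Darknet's reorg differs); the array shapes and function names are chosen for the example, not taken from the patent:

```python
import numpy as np

def reorg(x, stride):
    """Space-to-depth: (C, H, W) -> (C * stride**2, H // stride, W // stride).

    Every stride-th row/column goes into a separate channel group, so the
    spatial size shrinks by `stride` while the number of feature maps grows
    by stride**2 -- the 2**lambda factor of equation (4).
    """
    c, h, w = x.shape
    assert h % stride == 0 and w % stride == 0
    x = x.reshape(c, h // stride, stride, w // stride, stride)
    x = x.transpose(2, 4, 0, 1, 3)  # bring the row/column offsets forward
    return x.reshape(c * stride * stride, h // stride, w // stride)

def transfer_layer(x_n, x_m):
    """Resize x_n to x_m's spatial size via reorg, then concatenate (route)."""
    stride = x_n.shape[1] // x_m.shape[1]
    return np.concatenate([reorg(x_n, stride), x_m], axis=0)

# A 26x26 map with 4 channels fused with a 13x13 map with 1024 channels:
x_n = np.arange(4 * 26 * 26, dtype=np.float32).reshape(4, 26, 26)
x_m = np.zeros((1024, 13, 13), dtype=np.float32)
fused = transfer_layer(x_n, x_m)
print(fused.shape)  # (1040, 13, 13): 4 * 2**2 + 1024 feature maps
```

For the 52 × 52 path the stride is 4, so each input channel becomes 16 output channels, again matching 2^λ = (Sn/Sm)².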
Train with the improved network structure until the model converges.
(4) Test the test set with the model obtained by training, and evaluate the results using mAP, P-R curves, and the average test time per picture, where mAP is Mean Average Precision and P-R is Precision-Recall.
definition of accuracy Precision:
Precision = TP / (TP + FP)
TP (true positive) denotes an instance that is a positive class and is predicted as a positive class; FP (false positive) denotes an instance that is a negative class but is predicted as a positive class. Precision reflects the model's ability to predict positive cases correctly.
Definition of Recall rate Recall:
Recall = TP / (TP + FN)
FN (false negative) denotes an instance that is a positive class but is predicted as a negative class. Recall reflects the detection ability of the model. The P-R curve reflects the trade-off between the classifier's accuracy on positive cases and its coverage of positive cases.
AP is the area under the P-R curve (the region enclosed by the P-R curve and the X axis), defined as:
AP = ∫₀¹ P(R) dR
the mAP is defined as:
mAP = (1/Q) Σ_{q=1}^{Q} AP(q)
q is the corresponding category number, namely the AP of each category is averaged, and mAP is the comprehensive capability embodiment of the model for identifying the multi-category targets.
Compared with existing methods for identifying electric tower components, the invention has the following characteristics:
By adopting a deep learning target detection algorithm, the deep features of the image data are used more fully and the model has stronger generalization ability. The improved YOLOv2 algorithm fuses features from feature maps of different scales, improving identification accuracy, while the simplified network structure speeds up recognition.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a schematic diagram of a modified YOLOv2 network structure;
FIG. 3 is a P-R curve of the three types of target test results.
Detailed Description
The embodiments of the present invention are described in detail below with reference to the accompanying drawings. The embodiment is implemented on the premise of the technical scheme of the invention, and a detailed implementation mode and a specific operation process are given.
As shown in fig. 1, the present embodiment includes the following steps:
Step one: use an unmanned aerial vehicle carrying a high-definition camera to shoot, during power inspection of high-voltage transmission lines, a large number of pictures containing three types of objects, namely electric power towers, signs, and tower bases.
Step two: from the pictures obtained in step one, select 1200 pictures containing the three types of objects (electric power tower, sign, and tower base), and divide them into a training set of 1000 pictures and mutually independent verification and test sets of 100 pictures each. Resize all pictures to 600 × 450 and label the objects to be identified in every picture for subsequent training and testing.
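The 1000/100/100 split above can be sketched as follows; the function name, the fixed seed, and the use of integer picture ids are illustrative assumptions (the patent does not specify how the pictures are assigned to the sets):

```python
import random

def split_dataset(picture_ids, n_train=1000, n_val=100, n_test=100, seed=0):
    """Shuffle the collected picture ids and split them into mutually
    independent training / verification / test sets (1000 / 100 / 100)."""
    assert len(picture_ids) >= n_train + n_val + n_test
    ids = list(picture_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for repeatability
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:n_train + n_val + n_test]
    return train, val, test

# 1200 selected pictures, identified here by integer ids:
train, val, test = split_dataset(range(1200))
print(len(train), len(val), len(test))  # 1000 100 100
```

The resizing to 600 × 450 and the bounding-box labeling would normally be done with an image library and an annotation tool, which are outside the scope of this sketch.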
Step three: the improved YOLOv2 network structure is shown in FIG. 2. Compared with the original YOLOv2 network structure, the two 3 × 3 × 1024 convolutional layers at the end of the network are removed and the number of convolution kernels in the middle convolutional layers is halved; in addition, two transfer layers are added to fuse feature maps of the three sizes 52 × 52, 26 × 26, and 13 × 13 at the end of the network. Train on the prepared training set with the original YOLOv2 algorithm and the improved YOLOv2 algorithm respectively, observing the verification-set accuracy and the training loss during training; when both become stable, the model has converged, so save the model and stop training.
Step four: test the test set with the YOLOv2 algorithm and the improved YOLOv2 algorithm respectively, and evaluate and compare the results.
TABLE 1 comparison of test results for improved YOLOv2 and YOLOv2
Table 1 records the mAP, recall, and average test time per picture on the test set for the models trained with the YOLOv2 algorithm and the improved YOLOv2 algorithm. The improved YOLOv2 achieves higher identification accuracy together with higher recall, and its average test time per picture is about 35% faster. FIG. 3 shows the P-R curves of the two models on the three test-set targets, namely the sign, the tower, and the tower base. On the P-R curve for YOLOv2, recall drops quickly as precision rises, indicating poor robustness of the model, whereas the P-R curve for the improved YOLOv2 shows that its model is more robust.
Therefore, the improved YOLOv2 algorithm not only improves the accuracy and speed of power tower component identification, but also yields a more robust trained model, showing that the improved algorithm has clear advantages over the original YOLOv2 algorithm for power tower component identification.

Claims (1)

1. A deep learning-based electric tower component identification method is characterized by specifically comprising the following steps:
(1) carrying an image acquisition device by using an unmanned aerial vehicle, and acquiring image information of the power tower component in the power inspection process;
(2) selecting three types of objects, namely towers, signs and tower bases, from the electric tower component images acquired in step (1) as identification objects; selecting images containing the three types of objects, preprocessing them, and making a training set, a verification set and a test set for subsequent training and testing;
(3) training on the training set prepared in step (2) by using the improved YOLOv2 algorithm;
the modified YOLOv2 algorithm is as follows:
the improved YOLOv2 network structure comprises 24 convolutional layers, 5 pooling layers and two transfer layers; the convolutional layers of the network use 3 × 3 and 1 × 1 convolution kernels, the layers containing 3 × 3 kernels and the layers containing 1 × 1 kernels being alternately arranged, and the number of convolution kernels doubling after each pooling layer; the improved network structure removes the two redundant 3 × 3 × 1024 convolutional layers at the end of the YOLOv2 network, and halves the number of convolution kernels of the Conv_3 to Conv_6 convolutional layers;
the transfer layer consists of a route layer and a reorg layer, the route layer concatenating the feature maps of different layers and the reorg layer adjusting the size of feature maps; combined, they resize the feature maps of other layers to match the size of the current layer's feature maps and then splice them together, fusing feature maps of different sizes; the improved network structure uses two transfer layers at the end of the network to fuse the feature maps of sizes 26 × 26 and 52 × 52 with the feature maps of size 13 × 13; the transfer layer is expressed as follows:
Xn=fpn1+fpn2+...+fpnj (1)
Xm=fpm1+fpm2+...+fpmk (2)
Xn represents all the feature maps from the n-th convolutional layer, fpn1~fpnj corresponding respectively to the j feature maps; Xm represents all the feature maps from the m-th convolutional layer, fpm1~fpmk corresponding respectively to the k feature maps;
X'n = reorg(Xn) (3)
2^λ = (Sn/Sm)^2 (4)
Xmerge = [X'n, Xm] (5)
reorg(Xn) denotes the interlaced sampling of the feature maps of the n-th convolutional layer, after which their size matches that of the feature maps of the m-th convolutional layer; Sn denotes the feature map size Sn × Sn of the n-th layer, and Sm the feature map size Sm × Sm of the m-th layer; the number of feature maps after adjustment is 2^λ times that before adjustment; Xmerge represents the result obtained by fusing the feature maps of the m-th and n-th convolutional layers;
training to model convergence using the improved network structure;
(4) testing the test set by using the model obtained by training, and evaluating the results by using mAP, P-R curves, and the average test time per picture; wherein mAP is Mean Average Precision and P-R is Precision-Recall;
definition of accuracy Precision:
Precision = TP / (TP + FP)
TP denotes a true positive, i.e., an instance that is a positive class and is predicted as a positive class; FP denotes a false positive, i.e., an instance that is a negative class but is predicted as a positive class; Precision reflects the model's ability to predict positive cases;
definition of Recall rate Recall:
Recall = TP / (TP + FN)
FN denotes a false negative, i.e., an instance that is a positive class but is predicted as a negative class; Recall reflects the detection ability of the model; the P-R curve reflects the balance between the classifier's identification accuracy on positive cases and its coverage of positive cases;
AP is the area under the P-R curve (the region enclosed by the P-R curve and the X axis), and is defined as:
AP = ∫₀¹ P(R) dR
the mAP is defined as:
mAP = (1/Q) Σ_{q=1}^{Q} AP(q)
q is the corresponding category number, namely the AP of each category is averaged, and mAP is the comprehensive capability embodiment of the model for identifying the multi-category targets.
CN201811002575.1A 2018-08-30 2018-08-30 Electric power tower component identification method based on deep learning Active CN109376580B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811002575.1A CN109376580B (en) 2018-08-30 2018-08-30 Electric power tower component identification method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811002575.1A CN109376580B (en) 2018-08-30 2018-08-30 Electric power tower component identification method based on deep learning

Publications (2)

Publication Number Publication Date
CN109376580A CN109376580A (en) 2019-02-22
CN109376580B true CN109376580B (en) 2022-05-20

Family

ID=65404862

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811002575.1A Active CN109376580B (en) 2018-08-30 2018-08-30 Electric power tower component identification method based on deep learning

Country Status (1)

Country Link
CN (1) CN109376580B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110245644A (en) * 2019-06-22 2019-09-17 福州大学 A kind of unmanned plane image transmission tower lodging knowledge method for distinguishing based on deep learning
CN110647977B (en) * 2019-08-26 2023-02-03 北京空间机电研究所 Method for optimizing Tiny-YOLO network for detecting ship target on satellite
CN110992307A (en) * 2019-11-04 2020-04-10 华北电力大学(保定) Insulator positioning and identifying method and device based on YOLO
CN112634129A (en) * 2020-11-27 2021-04-09 国家电网有限公司大数据中心 Image sensitive information desensitization method and device
CN112528318A (en) * 2020-11-27 2021-03-19 国家电网有限公司大数据中心 Image desensitization method and device and electronic equipment
CN112598054B (en) * 2020-12-21 2023-09-22 福建京力信息科技有限公司 Power transmission and transformation project quality common disease prevention and detection method based on deep learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930906A (en) * 2016-04-15 2016-09-07 上海大学 Trip detection method based on characteristic weighting and improved Bayesian algorithm
CN107563412A (en) * 2017-08-09 2018-01-09 浙江大学 A kind of infrared image power equipment real-time detection method based on deep learning
CN108256634A (en) * 2018-02-08 2018-07-06 杭州电子科技大学 A kind of ship target detection method based on lightweight deep neural network
CN108389197A (en) * 2018-02-26 2018-08-10 上海赛特斯信息科技股份有限公司 Transmission line of electricity defect inspection method based on deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160171622A1 (en) * 2014-12-15 2016-06-16 Loss of Use, Inc. Insurance Asset Verification and Claims Processing System

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930906A (en) * 2016-04-15 2016-09-07 上海大学 Trip detection method based on characteristic weighting and improved Bayesian algorithm
CN107563412A (en) * 2017-08-09 2018-01-09 浙江大学 A kind of infrared image power equipment real-time detection method based on deep learning
CN108256634A (en) * 2018-02-08 2018-07-06 杭州电子科技大学 A kind of ship target detection method based on lightweight deep neural network
CN108389197A (en) * 2018-02-26 2018-08-10 上海赛特斯信息科技股份有限公司 Transmission line of electricity defect inspection method based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Pedestrian Detection Based on YOLO Network Model; Wenbo Lan et al.; IEEE Xplore; 20180808; Sections III-IV, Table 1 *
Vehicle target detection in complex scenes based on YOLOv2; Li Yunpeng et al.; Video Engineering; 20181231; Vol. 42, No. 5; entire document *
Abandoned-object detection algorithm based on an improved YOLOv2 network; Zhang Ruilin; Journal of Zhejiang Sci-Tech University (Natural Science Edition); 20180531; Vol. 39, No. 3; entire document *

Also Published As

Publication number Publication date
CN109376580A (en) 2019-02-22

Similar Documents

Publication Publication Date Title
CN109376580B (en) Electric power tower component identification method based on deep learning
CN112380952B (en) Power equipment infrared image real-time detection and identification method based on artificial intelligence
CN106096561B (en) Infrared pedestrian detection method based on image block deep learning features
CN105426870B (en) A kind of face key independent positioning method and device
WO2017190574A1 (en) Fast pedestrian detection method based on aggregation channel features
CN111951212A (en) Method for identifying defects of contact network image of railway
CN110969166A (en) Small target identification method and system in inspection scene
CN113378890B (en) Lightweight pedestrian vehicle detection method based on improved YOLO v4
CN111723773B (en) Method and device for detecting carryover, electronic equipment and readable storage medium
CN111079640A (en) Vehicle type identification method and system based on automatic amplification sample
CN115690542A (en) Improved yolov 5-based aerial insulator directional identification method
CN110443279B (en) Unmanned aerial vehicle image vehicle detection method based on lightweight neural network
CN102142078A (en) Method for detecting and identifying targets based on component structure model
TWI497449B (en) Unsupervised adaptation method and image automatic classification method applying the same
CN112766170B (en) Self-adaptive segmentation detection method and device based on cluster unmanned aerial vehicle image
CN110826415A (en) Method and device for re-identifying vehicles in scene image
CN116385958A (en) Edge intelligent detection method for power grid inspection and monitoring
CN111680705A (en) MB-SSD method and MB-SSD feature extraction network suitable for target detection
CN113486877A (en) Power equipment infrared image real-time detection and diagnosis method based on lightweight artificial intelligence model
CN111738036A (en) Image processing method, device, equipment and storage medium
CN113255634A (en) Vehicle-mounted mobile terminal target detection method based on improved Yolov5
Asgarian Dehkordi et al. Vehicle type recognition based on dimension estimation and bag of word classification
CN112347967B (en) Pedestrian detection method fusing motion information in complex scene
CN111160100A (en) Lightweight depth model aerial photography vehicle detection method based on sample generation
CN110618129A (en) Automatic power grid wire clamp detection and defect identification method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant