CN112861952B - Partial discharge image matching deep learning method

Partial discharge image matching deep learning method

Info

Publication number
CN112861952B
CN112861952B (application CN202110134154.XA)
Authority
CN
China
Prior art keywords
partial discharge
view
image
far
range
Prior art date
Legal status
Active
Application number
CN202110134154.XA
Other languages
Chinese (zh)
Other versions
CN112861952A (en)
Inventor
彭晶
王科
谭向宇
邓云坤
马仪
赵现平
沈龙
于辉
李昊
刘红文
彭兆裕
Current Assignee
Electric Power Research Institute of Yunnan Power Grid Co Ltd
Original Assignee
Electric Power Research Institute of Yunnan Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Electric Power Research Institute of Yunnan Power Grid Co Ltd
Priority to CN202110134154.XA
Publication of CN112861952A
Application granted
Publication of CN112861952B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a partial discharge image matching deep learning method. Moment features are extracted from a partial discharge close-range view and a partial discharge far-range view to obtain a close-range feature point probability map and a far-range feature point probability map; close-range and far-range transformation matrix parameters are then obtained through a spatial transformation network and used to transform the probability maps into close-range and far-range feature points; the two sets of feature points are matched by an image matching algorithm to obtain the position in the far-range view that corresponds to the close-range view; and the discharge type of the partial discharge is identified by a convolutional neural network. The method can match and locate partial discharge images at different scales and viewing angles, reducing the influence that scale and viewing-angle differences between the close-range and far-range views have on images of different discharge types and thereby improving the overall recognition accuracy of partial discharge images.

Description

Partial discharge image matching deep learning method
Technical Field
The application relates to the technical field of image matching, in particular to a deep learning method for partial discharge image matching.
Background
Deep models have strong learning ability and efficient feature representation ability; more importantly, they can extract information layer by layer, from pixel-level raw data up to abstract semantic concepts, which gives them outstanding advantages in extracting global features and contextual information from images and offers a new approach to traditional computer vision problems such as image segmentation and key point detection. Partial discharge inside insulation is widely regarded as an important cause of insulation deterioration in electrical equipment and is closely related to the safe and reliable operation of high-voltage equipment. Automatically identifying the discharge type of a partial discharge and discovering internal insulation defects and the degree of discharge development in time are therefore of great significance for preventing insulation accidents.
At present, deep learning is widely applied in the field of partial discharge image recognition, but the recognition accuracy for partial discharge images taken at different angles and different distances does not yet meet requirements.
Disclosure of Invention
The application provides a partial discharge image matching deep learning method to improve the recognition accuracy of partial discharge images taken at different angles and different distances.
The application provides a partial discharge image matching deep learning method, which comprises the following steps:
based on a partial discharge close-range view and a partial discharge far-range view, extracting moment features of the partial discharge close-range view and moment features of the partial discharge far-range view;
obtaining a close-range feature point probability map and a far-range feature point probability map based on the moment features of the partial discharge close-range view and the moment features of the partial discharge far-range view;
performing image transformation on the close-range feature point probability map and the far-range feature point probability map through a spatial transformation network to obtain close-range transformation matrix parameters and far-range transformation matrix parameters;
transforming the close-range feature point probability map based on the close-range transformation matrix parameters to obtain transformed close-range feature points; transforming the far-range feature point probability map based on the far-range transformation matrix parameters to obtain transformed far-range feature points;
matching the close-range feature points and the far-range feature points based on an image matching algorithm to obtain the position in the partial discharge far-range view that corresponds to the partial discharge close-range view;
and identifying the discharge type of the partial discharge based on the partial discharge close-range view, the partial discharge far-range view and the position in the far-range view that corresponds to the close-range view.
Optionally, in the step of extracting the moment features of the partial discharge close-range view and the moment features of the partial discharge far-range view, the moment features of the two views are extracted through a convolutional neural network.
Optionally, in the step of performing image transformation on the close-range feature point probability map and the far-range feature point probability map through a spatial transformation network to obtain the close-range transformation matrix parameters and the far-range transformation matrix parameters, the image transformation includes any one or more of translation, scaling, rotation and shear.
Optionally, in the step of transforming the close-range feature point probability map based on the close-range transformation matrix parameters to obtain transformed close-range feature points and transforming the far-range feature point probability map based on the far-range transformation matrix parameters to obtain transformed far-range feature points, the close-range feature points include any one or more of turning points, skewness, kurtosis and peaks, and the far-range feature points include any one or more of turning points, skewness, kurtosis and peaks.
Optionally, in the step of identifying the discharge type of the partial discharge, the discharge type is identified through a convolutional neural network based on the partial discharge close-range view, the partial discharge far-range view and the position in the far-range view that corresponds to the close-range view.
According to the above technical scheme, the partial discharge image matching deep learning method obtains a close-range feature point probability map and a far-range feature point probability map by extracting moment features of the partial discharge close-range view and the partial discharge far-range view; obtains close-range and far-range transformation matrix parameters from the two probability maps through a spatial transformation network; transforms the probability maps with these parameters to obtain close-range and far-range feature points; matches the two sets of feature points with an image matching algorithm to obtain the position in the far-range view that corresponds to the close-range view; and identifies the discharge type of the partial discharge through a convolutional neural network. The method can match and locate partial discharge images at different scales and viewing angles, reducing the influence that scale and viewing-angle differences between the close-range and far-range views have on images of different discharge types and thereby improving the overall recognition accuracy of partial discharge images.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings that are needed in the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart of a partial discharge image matching deep learning method.
Detailed Description
Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawing. When the following description refers to the drawing, the same numbers in different figures refer to the same or similar elements unless otherwise indicated. The embodiments described below do not represent all embodiments consistent with the present application; they are merely examples of systems and methods consistent with some aspects of the application as detailed in the claims.
Partial discharge is one of the main causes of insulation degradation during the long-term operation of large electrical equipment. Detecting partial discharge online makes it possible to discover insulation defects and their development in time, so that maintenance can be scheduled promptly and major safety accidents avoided. Because partial discharge is closely related to equipment insulation defects, different defects give rise to different discharge types; once the discharge type is identified, the nature and severity of the partial discharge can be diagnosed, the insulation condition can be assessed, and the discharge can be located. At present, discharge type recognition mostly combines deep learning with image recognition, which is simple to operate and fast, but the recognition accuracy for partial discharge images taken at different angles and different distances does not yet meet requirements.
The application provides a partial discharge image matching deep learning method; fig. 1 is a flowchart of the method. As shown in fig. 1, the method comprises:
S1: based on a partial discharge close-range view and a partial discharge far-range view, extracting moment features of the partial discharge close-range view and moment features of the partial discharge far-range view;
In S1, the moment features are extracted by a convolutional neural network.
In the field of image recognition, moment features are widely used statistical feature parameters that describe the overall distribution of the pixels in a partial discharge image as well as its basic geometry. Different types of partial discharge show different degrees of similarity between the positive and negative half cycles of the power frequency, and their images also show corresponding differences, so studying the moment features of partial discharge images helps with partial discharge pattern recognition. In addition, a convolutional neural network (CNN) is a feedforward neural network with a deep structure that uses convolution operations and is one of the representative algorithms of deep learning. A CNN has feature learning capability and can classify input information in a translation-invariant way according to its hierarchical structure, so it is also called a shift-invariant artificial neural network.
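The patent does not give a concrete formula for its moment features, which are learned by a convolutional neural network. Purely as an illustration of what an image moment feature is, the sketch below computes the classical hand-crafted Hu invariant moments of a grayscale partial discharge image with OpenCV; it is an assumption for explanation, not the patented extractor, and the file names in the usage comment are hypothetical.

```python
# Illustrative only: hand-crafted Hu moment invariants computed with OpenCV.
# The patented method extracts moment features with a convolutional neural
# network; this sketch merely shows what an image moment feature looks like.
import cv2
import numpy as np

def hu_moment_features(gray: np.ndarray) -> np.ndarray:
    """Return the 7 Hu invariant moments of a grayscale image as a feature vector."""
    m = cv2.moments(gray)                      # raw, central and normalized moments
    hu = cv2.HuMoments(m).flatten()            # 7 translation/scale/rotation invariants
    # Log-scale the invariants for numerical stability, preserving their sign.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

# Hypothetical usage on the two views (file names are assumptions):
# near = cv2.imread("pd_close_range.png", cv2.IMREAD_GRAYSCALE)
# far = cv2.imread("pd_far_range.png", cv2.IMREAD_GRAYSCALE)
# near_feat, far_feat = hu_moment_features(near), hu_moment_features(far)
```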
S2: obtaining a close-range feature point probability map and a far-range feature point probability map based on the moment features of the partial discharge close-range view and the moment features of the partial discharge far-range view;
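As one minimal sketch of S1 and S2 together (the layer sizes and activations are assumptions; the patent does not specify an architecture), a small PyTorch network can map a single-channel partial discharge view to a per-pixel feature point probability map:

```python
# Minimal sketch (assumed architecture): a CNN that maps a single-channel
# partial discharge view to a per-pixel feature point probability map in [0, 1].
import torch
import torch.nn as nn

class FeaturePointProbNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.moment_features = nn.Sequential(       # stand-in for the moment feature extractor
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.prob_head = nn.Sequential(             # 1x1 convolution -> per-pixel probability
            nn.Conv2d(32, 1, kernel_size=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.prob_head(self.moment_features(x))

# Applied separately to the close-range and far-range views:
# prob_near = FeaturePointProbNet()(near_view)      # near_view: (N, 1, H, W) tensor
# prob_far = FeaturePointProbNet()(far_view)
```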
S3: performing image transformation on the close-range feature point probability map and the far-range feature point probability map through a spatial transformation network to obtain close-range transformation matrix parameters and far-range transformation matrix parameters;
In S3, the image transformation includes, but is not limited to, any one or more of translation, scaling, rotation and shear.
It should be noted that a spatial transformation network, also called a spatial transformer network (Spatial Transformer Network, STN), is a convolutional network module that transforms its input images to reduce the influence of spatial variation in the data and thereby improve the classification accuracy of the convolutional network model without changing the network structure. STNs are robust to spatial variations such as translation, scaling, rotation, perturbation and bending.
An STN consists of three parts: a localization network, a grid generator and a sampler. It can be used at the input layer or inserted after a convolutional or other layer without changing the internal structure of the original CNN model. For an input image, the localization network first predicts the transformation to be applied, computing it through several successive convolutional and fully connected layers; the grid generator and the sampler then apply that transformation, and the transformed image is fed into the CNN for classification. The grid generator builds the sampling grid using bilinear interpolation, and the sampler is a differentiable image sampling method, so the whole network can still be trained end to end by back-propagation. In this way the STN adaptively transforms and aligns the data so that the CNN model remains invariant to translation, scaling, rotation and other transformations.
In addition, an STN is computationally very fast and hardly affects the training speed of the original CNN model; the STN module is small enough to be seamlessly embedded into an existing network architecture and requires no additional supervision information to help training.
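A minimal PyTorch sketch of such a spatial transformer, applied here to a feature point probability map, is given below. The layer sizes, the six-parameter affine transformation and the use of F.affine_grid / F.grid_sample are illustrative assumptions rather than details specified by the patent.

```python
# Minimal spatial transformer sketch (assumed layer sizes): the localization
# network predicts affine parameters, the grid generator builds a sampling grid,
# and the bilinear sampler warps the feature point probability map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.localization = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(10 * 4 * 4, 32), nn.ReLU(),
            nn.Linear(32, 6),                       # 6 affine transformation matrix parameters
        )
        # Start from the identity transform so early training is stable.
        self.localization[-1].weight.data.zero_()
        self.localization[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, prob_map: torch.Tensor):
        theta = self.localization(prob_map).view(-1, 2, 3)                 # transformation matrix
        grid = F.affine_grid(theta, prob_map.size(), align_corners=False)  # grid generator
        warped = F.grid_sample(prob_map, grid, align_corners=False)        # bilinear sampler
        return warped, theta

# One instance per view yields the transformed probability maps and the
# close-range / far-range transformation matrix parameters:
# warped_near, theta_near = SpatialTransformer()(prob_near)
# warped_far, theta_far = SpatialTransformer()(prob_far)
```

Because affine_grid and grid_sample are differentiable, this module can sit in front of the matching and classification stages and still be trained end to end by back-propagation, which is the property emphasized above.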
S4: transforming the close-range feature point probability map based on the close-range transformation matrix parameters to obtain transformed close-range feature points; transforming the far-range feature point probability map based on the far-range transformation matrix parameters to obtain transformed far-range feature points;
In S4, the close-range feature points include, but are not limited to, any one or more of turning points, skewness, kurtosis and peaks, and the same holds for the far-range feature points.
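The patent does not state how discrete feature points are read off the transformed probability maps. One simple, assumed rule is to keep local maxima above a threshold, as in the sketch below; the threshold and window size are illustrative values.

```python
# Assumed post-processing: feature points as thresholded local maxima of the
# transformed probability map (threshold and window size are illustrative).
import torch
import torch.nn.functional as F

def extract_feature_points(prob_map: torch.Tensor, thresh: float = 0.5, window: int = 3):
    """prob_map: (1, 1, H, W) probabilities; returns a (num_points, 2) tensor of (y, x)."""
    pooled = F.max_pool2d(prob_map, kernel_size=window, stride=1, padding=window // 2)
    is_peak = (prob_map == pooled) & (prob_map > thresh)   # local maxima above the threshold
    ys, xs = torch.nonzero(is_peak[0, 0], as_tuple=True)
    return torch.stack([ys, xs], dim=1)

# near_points = extract_feature_points(warped_near)
# far_points = extract_feature_points(warped_far)
```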
S5: matching the close-range feature points and the far-range feature points based on an image matching algorithm to obtain the position in the partial discharge far-range view that corresponds to the partial discharge close-range view;
In S5, the image matching algorithm is any one or more of a relational-structure matching algorithm, a matching algorithm combined with a specific theoretical tool, a grey-information-based matching algorithm, a sub-pixel matching algorithm and a content-feature-based matching algorithm.
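As one concrete example of the grey-information-based family listed above (an assumption; the patent does not fix a specific algorithm), OpenCV template matching can locate the close-range view inside the far-range view, assuming the preceding transformation step has brought the two views to a comparable scale:

```python
# Illustrative grey-level matching: normalized cross-correlation template
# matching of the (scale-normalized) close-range view inside the far-range view.
import cv2
import numpy as np

def locate_close_range_in_far_range(far_gray: np.ndarray, near_gray: np.ndarray):
    """Return the top-left (x, y) of the best match and its correlation score."""
    result = cv2.matchTemplate(far_gray, near_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc, max_val

# Hypothetical usage:
# (x, y), score = locate_close_range_in_far_range(far_view_gray, near_view_gray)
# h, w = near_view_gray.shape[:2]   # matched region spans (x, y) to (x + w, y + h)
```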
S6: identifying the discharge type of the partial discharge based on the partial discharge close-range view, the partial discharge far-range view and the position in the far-range view that corresponds to the close-range view.
In S6, the discharge type of the partial discharge is identified by a convolutional neural network, and the discharge types include, but are not limited to, free metal particle defect discharge, metal tip defect discharge, floating electrode defect discharge and insulator air gap defect discharge.
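As an illustration of this final step, the sketch below assumes a small CNN classifier over the matched region with the four defect discharge types listed above as output classes; the architecture and the class ordering are assumptions, not details from the patent.

```python
# Minimal sketch (assumed architecture): CNN classifier over the matched region
# for the four defect discharge types named in S6.
import torch
import torch.nn as nn

DISCHARGE_TYPES = ["free metal particle", "metal tip", "floating electrode", "insulator air gap"]

class DischargeTypeCNN(nn.Module):
    def __init__(self, num_classes: int = len(DISCHARGE_TYPES)):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.backbone(x))    # raw class logits

# logits = DischargeTypeCNN()(patch)                # patch: (N, 1, H, W) matched region
# predicted = DISCHARGE_TYPES[int(logits.argmax(dim=1)[0])]
```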
According to the technical scheme described above, the partial discharge image matching deep learning method obtains a close-range feature point probability map and a far-range feature point probability map by extracting moment features of the partial discharge close-range view and the partial discharge far-range view; obtains close-range and far-range transformation matrix parameters from the two probability maps through a spatial transformation network; transforms the probability maps with these parameters to obtain close-range and far-range feature points; matches the two sets of feature points with an image matching algorithm to obtain the position in the far-range view that corresponds to the close-range view; and identifies the discharge type of the partial discharge through a convolutional neural network. The method can match and locate partial discharge images at different scales and viewing angles, reducing the influence that scale and viewing-angle differences between the close-range and far-range views have on images of different discharge types and thereby improving the overall recognition accuracy of partial discharge images.
Further, since the partial discharge close-range view and the partial discharge far-range view are images of the same defect taken at different viewing angles and distances, if the discharge type of the close-range view is known, the discharge type of the far-range view can be obtained by transformation and comparison; similarly, if the discharge type of the far-range view is known, the discharge type of the close-range view can be obtained by transformation and comparison.
While the fundamental principles and main features of the present application and advantages thereof have been shown and described, it will be apparent to those skilled in the art that the present application is not limited to the details of the above-described exemplary embodiments, but may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution; this manner of description is adopted merely for clarity, and the specification should be taken as a whole, with the technical solutions in the embodiments combined as appropriate to form other implementations that will be apparent to those skilled in the art.
The foregoing detailed description of the embodiments is merely illustrative of the general principles of the present application and should not be taken in any way as limiting the scope of the invention. Any other embodiments developed in accordance with the present application without inventive effort are within the scope of the present application for those skilled in the art.

Claims (5)

1. A partial discharge image matching deep learning method, comprising:
based on a partial discharge close-range view and a partial discharge far-range view, extracting moment features of the partial discharge close-range view and moment features of the partial discharge far-range view;
obtaining a close-range feature point probability map and a far-range feature point probability map based on the moment features of the partial discharge close-range view and the moment features of the partial discharge far-range view;
performing image transformation on the close-range feature point probability map and the far-range feature point probability map through a spatial transformation network to obtain close-range transformation matrix parameters and far-range transformation matrix parameters, wherein the spatial transformation network comprises a localization network for predicting the image transformation to be applied to the close-range feature point probability map and the far-range feature point probability map, and a grid generator and a sampler for applying the transformation to the close-range feature point probability map and the far-range feature point probability map;
transforming the close-range feature point probability map based on the close-range transformation matrix parameters to obtain transformed close-range feature points; transforming the far-range feature point probability map based on the far-range transformation matrix parameters to obtain transformed far-range feature points;
matching the close-range feature points and the far-range feature points based on an image matching algorithm to obtain the position in the partial discharge far-range view that corresponds to the partial discharge close-range view;
and identifying the discharge type of the partial discharge based on the partial discharge close-range view, the partial discharge far-range view and the position in the far-range view that corresponds to the close-range view.
2. The partial discharge image matching deep learning method according to claim 1, wherein in the step of extracting the moment features of the partial discharge close-range view and the moment features of the partial discharge far-range view, the moment features of the partial discharge close-range view and of the partial discharge far-range view are extracted through a convolutional neural network.
3. The partial discharge image matching deep learning method according to claim 1, wherein in the step of performing image transformation on the close-range feature point probability map and the far-range feature point probability map through the spatial transformation network to obtain the close-range transformation matrix parameters and the far-range transformation matrix parameters, the image transformation comprises any one or more of translation, scaling, rotation and shear.
4. The partial discharge image matching deep learning method according to claim 1, wherein in the step of transforming the close-range feature point probability map based on the close-range transformation matrix parameters to obtain the transformed close-range feature points and transforming the far-range feature point probability map based on the far-range transformation matrix parameters to obtain the transformed far-range feature points, the close-range feature points comprise any one or more of turning points, skewness, kurtosis and peaks, and the far-range feature points comprise any one or more of turning points, skewness, kurtosis and peaks.
5. The partial discharge image matching deep learning method according to claim 1, wherein in the step of identifying the discharge type of the partial discharge, the discharge type of the partial discharge is identified through a convolutional neural network based on the partial discharge close-range view, the partial discharge far-range view and the position in the far-range view that corresponds to the close-range view.
CN202110134154.XA 2021-01-29 2021-01-29 Partial discharge image matching deep learning method Active CN112861952B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110134154.XA CN112861952B (en) 2021-01-29 2021-01-29 Partial discharge image matching deep learning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110134154.XA CN112861952B (en) 2021-01-29 2021-01-29 Partial discharge image matching deep learning method

Publications (2)

Publication Number Publication Date
CN112861952A (en) 2021-05-28
CN112861952B (en) 2023-04-28

Family

ID=75987320

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110134154.XA Active CN112861952B (en) 2021-01-29 2021-01-29 Partial discharge image matching deep learning method

Country Status (1)

Country Link
CN (1) CN112861952B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108108751A (en) * 2017-12-08 2018-06-01 浙江师范大学 A kind of scene recognition method based on convolution multiple features and depth random forest
CN109120883A (en) * 2017-06-22 2019-01-01 杭州海康威视数字技术股份有限公司 Video monitoring method, device and computer readable storage medium based on far and near scape
CN109345575A (en) * 2018-09-17 2019-02-15 中国科学院深圳先进技术研究院 A kind of method for registering images and device based on deep learning
CN110705553A (en) * 2019-10-23 2020-01-17 大连海事大学 Scratch detection method suitable for vehicle distant view image
CN111260794A (en) * 2020-01-14 2020-06-09 厦门大学 Outdoor augmented reality application method based on cross-source image matching
WO2020141468A1 (en) * 2019-01-03 2020-07-09 Ehe Innovations Private Limited Method and system for detecting position of a target area in a target subject
WO2020173022A1 (en) * 2019-02-25 2020-09-03 平安科技(深圳)有限公司 Vehicle violation identifying method, server and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203759160U (en) * 2014-03-04 2014-08-06 国家电网公司 Integrated intelligent monitoring system of transformer station
US10204299B2 (en) * 2015-11-04 2019-02-12 Nec Corporation Unsupervised matching in fine-grained datasets for single-view object reconstruction
CN106556781A (en) * 2016-11-10 2017-04-05 华乘电气科技(上海)股份有限公司 Partial discharge defect image diagnosis method and system based on deep learning
CN106597235A (en) * 2016-12-12 2017-04-26 国网北京市电力公司 Partial discharge detection apparatus and method
CN107843818B (en) * 2017-09-06 2020-08-14 同济大学 High-voltage insulation fault diagnosis method based on heterogeneous image temperature rise and partial discharge characteristics
CN107705574A (en) * 2017-10-09 2018-02-16 荆门程远电子科技有限公司 A kind of precisely full-automatic capturing system of quick road violation parking
CN109635824A (en) * 2018-12-14 2019-04-16 深源恒际科技有限公司 A kind of images match deep learning method and system
CN110675351B (en) * 2019-09-30 2022-03-11 集美大学 Marine image processing method based on global brightness adaptive equalization
CN112034310A (en) * 2020-07-31 2020-12-04 国网山东省电力公司东营供电公司 Partial discharge defect diagnosis method and system for combined electrical appliance

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109120883A (en) * 2017-06-22 2019-01-01 杭州海康威视数字技术股份有限公司 Video monitoring method, device and computer readable storage medium based on far and near scape
CN108108751A (en) * 2017-12-08 2018-06-01 浙江师范大学 A kind of scene recognition method based on convolution multiple features and depth random forest
CN109345575A (en) * 2018-09-17 2019-02-15 中国科学院深圳先进技术研究院 A kind of method for registering images and device based on deep learning
WO2020141468A1 (en) * 2019-01-03 2020-07-09 Ehe Innovations Private Limited Method and system for detecting position of a target area in a target subject
WO2020173022A1 (en) * 2019-02-25 2020-09-03 平安科技(深圳)有限公司 Vehicle violation identifying method, server and storage medium
CN110705553A (en) * 2019-10-23 2020-01-17 大连海事大学 Scratch detection method suitable for vehicle distant view image
CN111260794A (en) * 2020-01-14 2020-06-09 厦门大学 Outdoor augmented reality application method based on cross-source image matching

Also Published As

Publication number Publication date
CN112861952A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
Wang et al. Automatic fault diagnosis of infrared insulator images based on image instance segmentation and temperature analysis
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN109118479B (en) Capsule network-based insulator defect identification and positioning device and method
CN111046950B (en) Image processing method and device, storage medium and electronic device
CN115527036A (en) Power grid scene point cloud semantic segmentation method and device, computer equipment and medium
Mao et al. Automatic image detection of multi-type surface defects on wind turbine blades based on cascade deep learning network
Yi et al. Insulator and defect detection model based on improved YOLO-S
CN115131560A (en) Point cloud segmentation method based on global feature learning and local feature discrimination aggregation
Qiu et al. A lightweight yolov4-edam model for accurate and real-time detection of foreign objects suspended on power lines
CN109919936B (en) Method, device and equipment for analyzing running state of composite insulator
JP2022159010A (en) Overhead wire fitting abnormality detection device
Wang et al. Semantic segmentation and defect detection of aerial insulators of transmission lines
Wang et al. Multisource cross-domain fault diagnosis of rolling bearing based on subdomain adaptation network
Wang et al. Target detection algorithm based on super-resolution color remote sensing image reconstruction
CN112861952B (en) Partial discharge image matching deep learning method
CN116778164A (en) Semantic segmentation method for improving deep V < 3+ > network based on multi-scale structure
CN116543333A (en) Target recognition method, training method, device, equipment and medium of power system
CN113343765B (en) Scene retrieval method and system based on point cloud rigid registration
CN115641510A (en) Remote sensing image ship detection and identification method
CN113989793A (en) Graphite electrode embossed seal character recognition method
Yu et al. Saliency Object Detection Method Based on Real-Time Monitoring Image Information for Intelligent Driving.
Chen et al. Accurate object recognition for unmanned aerial vehicle electric power inspection using an improved yolov2 algorithm
Wu et al. An object detection method for catenary component images based on improved Faster R-CNN
Wang et al. Image classification of missing insulators based on EfficientNet
Liang et al. Recognition of Live Equipment Models in Substations Based on 3D Point Cloud Data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant