CN111696070A - Multispectral image fusion power internet of things fault point detection method based on deep learning - Google Patents


Info

Publication number: CN111696070A
Application number: CN202010275304.4A
Authority: CN (China)
Prior art keywords: image; multispectral; things; power internet; fault point
Legal status: Pending (assumed by Google; not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 侯瑞, 胡阳, 赵云灏, 李建彬, 任国文, 方苏婉, 任羽圻, 袁梦
Current assignee: North China Electric Power University (listed assignees may be inaccurate)
Original assignee: North China Electric Power University
Application filed 2020-04-09 by North China Electric Power University
Priority: CN202010275304.4A, filed 2020-04-09

Classifications

    • G06T 7/0004 — Physics; Computing; Image data processing or generation: image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06F 18/22 — Physics; Computing; Electric digital data processing: pattern recognition; analysing; matching criteria, e.g. proximity measures
    • G06F 18/251 — Physics; Computing; Electric digital data processing: pattern recognition; analysing; fusion techniques of input or preprocessed data
    • G06N 3/045 — Physics; Computing arrangements based on biological models: neural networks; architecture; combinations of networks
    • G06V 10/25 — Physics; Computing; Image or video recognition or understanding: image preprocessing; determination of region of interest [ROI] or volume of interest [VOI]


Abstract

The invention discloses a deep-learning-based multispectral image fusion method for detecting fault points in the power Internet of Things, comprising the following steps: 1) acquiring multispectral images of a number of power Internet of Things devices and preprocessing each multispectral image; 2) describing the multispectral images by feature-space distance based on a deep convolutional neural network; 3) inputting the multispectral images as training samples to train the deep convolutional neural network; 4) using the trained deep convolutional neural network to detect the multispectral images under test and obtain the positions of fault points in the power Internet of Things. The method can accurately detect fault points of the power Internet of Things.

Description

Multispectral image fusion power internet of things fault point detection method based on deep learning
Technical Field
The invention belongs to the field of fault detection, and relates to a multispectral image fusion power internet of things fault point detection method based on deep learning.
Background
Electric power is the lifeline of the national economy and one of the important basic industries supporting it. With the rapid development and wide application of computer, sensor, and communication technology, power Internet of Things equipment has been put into practical use. This brings many conveniences to production, but also gradually exposes problems. Because device information in the power Internet of Things comes from many sources and is extremely complex, interoperability of power Internet of Things applications is made more difficult; among these challenges is detecting fault points in power Internet of Things devices. Fault detection depends heavily on personnel experience, and detection efficiency is low. Although power equipment fault detection and diagnosis technology is developing rapidly and many types of equipment faults have corresponding detection techniques, fault detection for power Internet of Things equipment still faces many difficulties. Power Internet of Things equipment is vital to production and daily life: once a problem occurs, it causes large economic losses and social impact. Therefore, research on fault point detection for power Internet of Things equipment helps improve equipment stability and ensures safe, efficient operation of the power grid.
In existing power equipment fault detection, various methods play some role but still have many shortcomings. First, China's power grids cover a wide geographic area, and manual cooperation alone is far from sufficient for fault detection. Second, equipment faults in the grid are diverse, so a fault detection method or model must generalize well enough to handle many kinds of faults. Finally, existing fault detection methods cannot meet the requirement of fast, accurate detection under the smart-grid framework. A new method is therefore needed to accurately inspect power Internet of Things equipment and diagnose equipment faults in time.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a multispectral image fusion power internet of things fault point detection method based on deep learning, and the method can be used for accurately detecting the fault point of the power internet of things.
In order to achieve the purpose, the multispectral image fusion power internet of things fault point detection method based on deep learning comprises the following steps:
1) acquiring multispectral images of a plurality of electric power Internet of things devices, and preprocessing each multispectral image;
2) describing the multispectral image by feature-space distance based on a deep convolutional neural network;
3) inputting the multispectral image serving as a training sample into a deep convolutional neural network to train the deep convolutional neural network;
4) and detecting the multispectral image to be detected by using the trained deep convolutional neural network to obtain the fault point position of the power Internet of things.
The specific operation of the step 4) is as follows:
extracting candidate regions from the multispectral image to be detected, resizing all candidate regions to a uniform size, inputting them into the trained deep convolutional neural network, and classifying the candidate regions with a classifier or softmax to determine the position of the fault point of the power Internet of Things.
The specific process of preprocessing the multispectral images in step 1) is as follows: performing image registration, image enhancement, and image fusion on the multispectral images.
Image registration is performed on the multispectral images using a gray-scale-based, transform-domain-based, or feature-based image registration method.
Image enhancement is performed on the multispectral images using linear transformation, piecewise linear transformation, or histogram equalization.
Image fusion is performed based on the continuous wavelet transform.
The invention has the following beneficial effects:
In the deep-learning-based multispectral image fusion fault point detection method for the power Internet of Things, the multispectral images of power Internet of Things equipment are first preprocessed; the preprocessed multispectral images are then used to train a deep convolutional neural network; and the trained deep convolutional neural network is finally used to detect fault points.
Drawings
FIG. 1 is an Internet of things architecture diagram;
FIG. 2 is a ROC graph;
FIG. 3a is a visible light image before fusion;
FIG. 3b is an infrared image before fusion;
FIG. 3c is a fused image;
FIG. 4 is a schematic diagram of the evaluation result of the fused image.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings:
With the continuous development of China's electric power systems, development goals have gradually shifted toward building smart grids that apply Internet of Things technology to power equipment in order to obtain considerable social and economic benefits. The power Internet of Things is a key technical support for smart-grid construction and plays an important role in safe grid production, user interaction, and information collection.
Once power Internet of Things equipment fails and cannot be repaired quickly, production and daily life are seriously affected. However, existing fault detection methods generalize poorly and cannot locate a fault area quickly and accurately. In this method, a deep convolutional neural network is trained on images of power equipment, and the trained network is then used to detect fault points. The method has high accuracy and a notable image fusion effect: multispectral images of power equipment can be fused accurately, which helps locate fault points quickly and precisely.
Generally, the method comprises two parts: image preprocessing and fusion, and equipment fault detection. Detecting a power-equipment fault point involves both identification and localization. For the identification task, after the deep convolutional neural network is trained, a classifier is designed, or softmax is used, to classify the features output by the network; for the localization task, the positions of the candidate boxes must be adjusted. In the convolutional-neural-network-based detection algorithm, about 3000 candidate regions are extracted from a multispectral image. Each candidate region is then resized so that all candidate regions have a uniform size, the regions are input into the trained deep convolutional neural network, and each region is classified to generate the final detection position. Candidate-region classification is performed by a support vector machine, and a convolutional neural network regresses the candidate boxes to improve localization accuracy and facilitate fault-point detection. The method specifically comprises the following steps:
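The pipeline just described (propose candidate regions, resize them uniformly, score each region) can be sketched as follows. This is a minimal illustration, not the patented implementation: `propose_regions` is a simple sliding-window stand-in for the real region-proposal step, and `classify` scores patches by mean intensity in place of the trained CNN + SVM/softmax stage; all function names here are hypothetical.

```python
import numpy as np

def propose_regions(image, win=32, stride=16):
    """Sliding-window stand-in for the candidate-region extraction step."""
    h, w = image.shape[:2]
    return [(y, x, win, win)
            for y in range(0, h - win + 1, stride)
            for x in range(0, w - win + 1, stride)]

def crop_and_resize(image, boxes, size=16):
    """Crop each candidate and rescale it to a uniform size (nearest neighbour)."""
    crops = []
    for y, x, hh, ww in boxes:
        patch = image[y:y + hh, x:x + ww]
        ys = np.linspace(0, hh - 1, size).astype(int)
        xs = np.linspace(0, ww - 1, size).astype(int)
        crops.append(patch[np.ix_(ys, xs)])
    return np.stack(crops)

def classify(crops, threshold=0.5):
    """Hypothetical scoring stage standing in for the trained CNN + SVM/softmax:
    a patch is flagged as a fault region if its mean intensity is high."""
    scores = crops.reshape(len(crops), -1).mean(axis=1)
    return scores > threshold

# Toy "thermal" image with one bright hot spot.
img = np.zeros((64, 64))
img[15:45, 15:45] = 1.0
boxes = propose_regions(img)
crops = crop_and_resize(img, boxes)
hits = classify(crops)
```

Only the window centred on the hot spot scores above the threshold, mimicking how uniformly resized candidates are filtered down to fault locations.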
1) image preprocessing: carrying out image registration, image enhancement and image fusion on a source image;
Image registration: two or more images of the same object are aligned spatially using a gray-scale-based, transform-domain-based, or feature-based image registration method.
Image enhancement: the purpose of image enhancement is to improve image quality, highlight important information in the image, and suppress unimportant information, yielding an image that better meets the application's requirements.
Image fusion: the images detected by the different sensors are combined using a fusion strategy to obtain a high quality image, i.e. by extracting and combining information from the different spectral images, a more reliable, comprehensive image description of the same object is obtained.
The image fusion procedure based on the continuous wavelet (Contourlet) transform is divided into three steps:
a) performing Contourlet decomposition on the source images to be fused to obtain a series of directional sub-bands and low-pass sub-bands;
b) fusing the images of all decomposition layers to obtain the Contourlet coefficients corresponding to the fused image;
c) first scaling the image over multiple scales with a Laplacian transform to capture singular points, then merging singular points distributed in approximately the same direction into one coefficient with a directional filter, and reconstructing the fused Contourlet coefficients to obtain the final fused image.
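The three-step fusion above can be illustrated with a much simpler stand-in: a one-level separable Haar transform in place of the Contourlet decomposition, averaging the low-pass sub-band and keeping the larger-magnitude detail coefficients. This is a sketch of the general multiresolution fusion idea, not the patent's Contourlet/Laplacian procedure.

```python
import numpy as np

def haar2d(x):
    """One level of a separable Haar transform (average/difference pairs):
    returns low-pass (ll) and three detail sub-bands (lh, hl, hh)."""
    lo = (x[0::2] + x[1::2]) / 2
    hi = (x[0::2] - x[1::2]) / 2
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d (perfect reconstruction)."""
    lo = np.empty((ll.shape[0], ll.shape[1] * 2))
    hi = np.empty_like(lo)
    lo[:, 0::2], lo[:, 1::2] = ll + lh, ll - lh
    hi[:, 0::2], hi[:, 1::2] = hl + hh, hl - hh
    x = np.empty((lo.shape[0] * 2, lo.shape[1]))
    x[0::2], x[1::2] = lo + hi, lo - hi
    return x

def fuse(img_a, img_b):
    """Average the low-pass sub-band; keep the larger-magnitude detail coefficients."""
    A, B = haar2d(img_a), haar2d(img_b)
    ll = (A[0] + B[0]) / 2
    details = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(A[1:], B[1:])]
    return ihaar2d(ll, *details)

img_a = np.arange(16.0).reshape(4, 4)
fused = fuse(img_a, img_a)          # fusing an image with itself returns it
```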
2) A depth convolutional neural network based multi-spectral image descriptor;
the images are described by using the feature space distance based on the deep convolutional neural network, wherein the multispectral images containing the same scene are closer in the feature space, and the multispectral images containing different scenes are farther in the feature space.
3) Training the deep convolutional neural network, wherein the specific process is as follows:
a) inputting the image as a training sample into a deep convolutional neural network, extracting and mapping image characteristics through each convolutional layer and each sampling layer, and outputting through a fully-connected layer;
b) optimizing the parameters according to the expected output; the most common optimization method is gradient descent. Specifically:
Given N training samples in C classes, the cost function E is:
E = (1/2) Σ_{n=1}^{N} Σ_{k=1}^{C} ( t_{n,k} − y_{n,k} )²

where t_{n,k} is the k-th dimension of the label corresponding to the n-th sample, and y_{n,k} is the k-th dimension of the actual network output for the n-th sample.
The parameters are updated at each iteration t, where η denotes the learning rate. The neuron weights W of a convolutional layer of the deep convolutional neural network are updated by gradient descent as:

W^(t+1) = W^(t) − η · ∂E/∂W^(t)

The neuron weights W_s of a down-sampling layer are updated in the same form:

W_s^(t+1) = W_s^(t) − η · ∂E/∂W_s^(t)
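The update rule W^(t+1) = W^(t) − η ∂E/∂W^(t) can be demonstrated on a single linear layer as a stand-in for the convolutional layers (a minimal, assumed example using full-batch gradient descent under the squared-error cost above):

```python
import numpy as np

def sgd_step(W, X, T, eta=0.01):
    """One gradient-descent update for a linear layer Y = X @ W under the
    squared-error cost E = 0.5 * sum((T - Y)**2); dE/dW = X.T @ (Y - T)."""
    grad = X.T @ (X @ W - T)
    return W - eta * grad

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))
W_true = rng.normal(size=(4, 3))
T = X @ W_true                      # noiseless targets for the demo

W = np.zeros((4, 3))
for _ in range(500):                # iterate W^(t+1) = W^(t) - eta * dE/dW
    W = sgd_step(W, X, T)
```

With a small enough learning rate the iterates converge toward the weights that generated the targets.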
4) the multispectral image descriptor training based on the deep convolutional neural network specifically comprises the following steps:
a) inputting visible light and infrared light patch pairs;
b) reconstructing input data through a sample reconstruction module;
Sample reconstruction: multispectral image descriptor training for deep neural networks aims at maximizing overall classification accuracy, so a large proportion of negative samples leads the algorithm to focus on the classification accuracy of negative samples and neglect that of positive samples. Therefore, a data sampling layer is added before the convolutional layers to reconstruct the data set so that the positive and negative samples in the batch the network selects at each iteration are balanced, which improves the classification accuracy of the minority class. The sample reconstruction module can freely set the ratio of positive to negative samples; in the present invention the ratio is set to 1:1.
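A minimal sketch of such a data-sampling layer, assuming a NumPy pipeline (the function name and signature are hypothetical): it resamples indices so each batch holds positives and negatives at a configurable ratio, 1:1 by default.

```python
import numpy as np

def balanced_batch(features, labels, batch_size=8, ratio=1.0, rng=None):
    """Data-sampling layer sketch: resample indices so each batch holds
    positive and negative samples at the given positive:negative ratio."""
    rng = np.random.default_rng(0) if rng is None else rng
    pos = np.flatnonzero(labels == 1)
    neg = np.flatnonzero(labels == 0)
    n_pos = int(round(batch_size * ratio / (1.0 + ratio)))
    n_neg = batch_size - n_pos
    idx = np.concatenate([rng.choice(pos, n_pos, replace=True),
                          rng.choice(neg, n_neg, replace=True)])
    rng.shuffle(idx)
    return features[idx], labels[idx]

# Heavily imbalanced toy set: 5 positives, 95 negatives.
features = np.arange(100).reshape(100, 1)
labels = np.array([1] * 5 + [0] * 95)
batch_x, batch_y = balanced_batch(features, labels, batch_size=8)
```

Despite the 5:95 imbalance in the data, every batch drawn this way contains exactly half positives.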
c) Extracting features of the two input blocks using a feature learning network;
Feature learning: the convolutional neural network is well suited to multispectral image feature extraction. For the multispectral image descriptor, hyper-parameters such as convolution kernel size, number of kernels, and learning rate must be set in the convolutional neural network. The feature learning module is the core of the multispectral image descriptor network; the network parameters are set as follows: the number of test iterations and the test interval are both 1500, the initial learning rate is 0.02, and the maximum number of iterations is 500000.
d) Calculating a distance between the two features using a measurement learning network;
Metric learning: to assess the degree of similarity between two feature vectors, a similarity measure is used. The similarity measure of multispectral descriptors is the criterion that evaluates how accurately the descriptors describe the multispectral image. A poorly performing similarity metric prevents the feature learning network from obtaining correct learning feedback and thus from learning correct parameters. The similarity metric is usually computed with the cosine distance or the Euclidean distance.
Wherein, the formula of Euclidean distance is as follows:
Figure RE-GDA0002617320060000071
The cosine distance between two vectors is based on the cosine of the angle between them:

cos(x, y) = ( x · y ) / ( ‖x‖ · ‖y‖ )
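Both similarity measures can be written directly in NumPy; this is a straightforward transcription of the two formulas (note that the cosine *distance* is commonly taken as one minus the cosine similarity):

```python
import numpy as np

def euclidean(x, y):
    """d(x, y) = sqrt(sum_i (x_i - y_i)^2)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sqrt(np.sum((x - y) ** 2)))

def cosine_similarity(x, y):
    """cos(x, y) = (x . y) / (|x| * |y|); cosine distance = 1 - this value."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
```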
e) estimating the degree of inconsistency between the model's predicted value f(X) and the ground truth using a loss function, and updating the network parameters by stochastic gradient descent.
f) Repeating the steps a) to e) until the network loss is stabilized within a preset range.
In addition, the quality evaluation index of the multispectral fusion image generally adopts information entropy, standard deviation, definition (average gradient) and correlation coefficient.
Information entropy E: measures how much information the image contains:

E = − Σ_{i=0}^{L−1} p_i · log₂ p_i

where L is the total number of gray levels and p_i is the proportion of pixels whose gray level is i among all pixels. The larger the entropy, the more information the image contains and the better the image quality.
Standard deviation σ: reflects the dispersion of the image's gray levels about the mean gray level:

σ = sqrt( (1/(M·N)) · Σ_x Σ_y ( f(x, y) − μ )² )

where f(x, y) is the gray value of the pixel at position (x, y), μ is the mean gray value, and M and N are the numbers of rows and columns of the image. The larger the standard deviation, the more dispersed the gray distribution and the higher the image contrast, so more information can be seen.
Definition (average gradient) ∇G: represents the sharpness of the image:

∇G = ( 1/((M−1)·(N−1)) ) · Σ_x Σ_y sqrt( ( Δ_x f(x, y)² + Δ_y f(x, y)² ) / 2 )

where M and N are the numbers of rows and columns of the image, and Δ_x f and Δ_y f denote the differences in the x and y directions, respectively.
Correlation coefficient CC(f, g): reflects the degree of correlation between the fused image f and the source image g:

CC(f, g) = Σ ( f − f̄ )( g − ḡ ) / sqrt( Σ ( f − f̄ )² · Σ ( g − ḡ )² )

where f̄ and ḡ are the mean gray values of f and g. The larger the correlation coefficient, the more information the fused image preserves from the source image.
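The four quality indices above can be computed in a few lines of NumPy; this is a minimal transcription of the formulas (`entropy` assumes non-negative integer gray levels):

```python
import numpy as np

def entropy(img, levels=256):
    """Information entropy E = -sum(p_i * log2 p_i) over occupied gray levels."""
    p = np.bincount(img.ravel(), minlength=levels) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def std_dev(img):
    """Standard deviation of gray values about the mean."""
    return float(np.asarray(img, dtype=float).std())

def avg_gradient(img):
    """Average gradient (definition): mean of sqrt((dx^2 + dy^2) / 2)."""
    f = np.asarray(img, dtype=float)
    dx = np.diff(f, axis=1)[:-1, :]   # x-direction differences, trimmed to align
    dy = np.diff(f, axis=0)[:, :-1]   # y-direction differences
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2)))

def corr_coef(f, g):
    """Correlation coefficient CC(f, g) between fused and source image."""
    f = np.asarray(f, dtype=float).ravel()
    g = np.asarray(g, dtype=float).ravel()
    fc, gc = f - f.mean(), g - g.mean()
    return float((fc * gc).sum() / np.sqrt((fc ** 2).sum() * (gc ** 2).sum()))
```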
Compared with traditional methods, the fused image obtained by the invention has a better fusion effect, more comprehensive information, and higher definition and correlation coefficients, yielding better, more complete images that help a system or operator accurately detect fault points of power equipment.
It will be appreciated by those skilled in the art that the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The embodiments disclosed above are therefore to be considered in all respects as illustrative and not restrictive. All changes which come within the scope of or equivalence to the invention are intended to be embraced therein.

Claims (6)

1. A multispectral image fusion power Internet of things fault point detection method based on deep learning is characterized by comprising the following steps:
1) acquiring multispectral images of a plurality of electric power Internet of things devices, and preprocessing each multispectral image;
2) describing the multispectral image by feature-space distance based on a deep convolutional neural network;
3) inputting the multispectral image serving as a training sample into a deep convolutional neural network to train the deep convolutional neural network;
4) and detecting the multispectral image to be detected by using the trained deep convolutional neural network to obtain the fault point position of the power Internet of things.
2. The deep-learning-based multispectral image fusion power Internet of Things fault point detection method according to claim 1, wherein the specific operation of step 4) is as follows:
extracting candidate regions from the multispectral image to be detected, resizing all candidate regions to a uniform size, inputting them into the trained deep convolutional neural network, and classifying the candidate regions with a classifier or softmax to determine the position of the fault point of the power Internet of Things.
3. The deep-learning-based multispectral image fusion power Internet of Things fault point detection method according to claim 1, wherein the specific process of preprocessing the multispectral images in step 1) is as follows: performing image registration, image enhancement, and image fusion on the multispectral images.
4. The deep-learning-based multispectral image fusion power Internet of Things fault point detection method according to claim 3, wherein image registration is performed on the multispectral images using a gray-scale-based, transform-domain-based, or feature-based image registration method.
5. The deep-learning-based multispectral image fusion power Internet of Things fault point detection method according to claim 4, wherein image enhancement is performed on the multispectral images using linear transformation, piecewise linear transformation, or histogram equalization.
6. The deep-learning-based multispectral image fusion power Internet of Things fault point detection method according to claim 5, wherein image fusion is performed based on the continuous wavelet transform.
CN202010275304.4A 2020-04-09 2020-04-09 Multispectral image fusion power internet of things fault point detection method based on deep learning Pending CN111696070A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010275304.4A CN111696070A (en) 2020-04-09 2020-04-09 Multispectral image fusion power internet of things fault point detection method based on deep learning


Publications (1)

Publication Number Publication Date
CN111696070A true CN111696070A (en) 2020-09-22

Family

ID=72476390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010275304.4A Pending CN111696070A (en) 2020-04-09 2020-04-09 Multispectral image fusion power internet of things fault point detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN111696070A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112966788A (en) * 2021-04-19 2021-06-15 扬州大学 Power transmission line spacer fault detection method based on deep learning

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146831A (en) * 2018-08-01 2019-01-04 武汉大学 Remote sensing image fusion method and system based on double branch deep learning networks


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HOU RUI ET AL.: "Fault Point Detection of IOT using Multi-Spectral Image Fusion based on Deep Learning" *


Similar Documents

Publication Publication Date Title
CN108960140B (en) Pedestrian re-identification method based on multi-region feature extraction and fusion
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN111680614B (en) Abnormal behavior detection method based on video monitoring
CN108596203B (en) Optimization method of parallel pooling layer for pantograph carbon slide plate surface abrasion detection model
CN114359283B (en) Defect detection method based on Transformer and electronic equipment
CN111611874B (en) Face mask wearing detection method based on ResNet and Canny
CN111784633A (en) Insulator defect automatic detection algorithm for power inspection video
CN108960142B (en) Pedestrian re-identification method based on global feature loss function
CN111563896B (en) Image processing method for detecting abnormality of overhead line system
Xie et al. Fabric defect detection method combing image pyramid and direction template
WO2024021461A1 (en) Defect detection method and apparatus, device, and storage medium
Liang et al. Automatic defect detection of texture surface with an efficient texture removal network
Zhang et al. Research on surface defect detection of rare-earth magnetic materials based on improved SSD
Shit et al. An encoder‐decoder based CNN architecture using end to end dehaze and detection network for proper image visualization and detection
CN105825215A (en) Instrument positioning method based on local neighbor embedded kernel function and carrier of method
CN111696070A (en) Multispectral image fusion power internet of things fault point detection method based on deep learning
CN110991374B (en) Fingerprint singular point detection method based on RCNN
CN110443169B (en) Face recognition method based on edge preservation discriminant analysis
CN116934820A (en) Cross-attention-based multi-size window Transformer network cloth image registration method and system
CN116188445A (en) Product surface defect detection and positioning method and device and terminal equipment
CN116109849A (en) SURF feature matching-based high-voltage isolating switch positioning and state identification method
CN114743257A (en) Method for detecting and identifying image target behaviors
CN113989742A (en) Nuclear power station plant pedestrian detection method based on multi-scale feature fusion
CN113139496A (en) Pedestrian re-identification method and system based on time sequence multi-scale fusion
Ouyang et al. ASAFPN: An End-to-End Defect Detector With Adaptive Spatial Attention Method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination