CN112233074A - Power failure detection method based on visible light and infrared fusion image

Info

Publication number
CN112233074A
CN112233074A
Authority
CN
China
Prior art keywords
image
frequency
sub
fused
low
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011058538.XA
Other languages
Chinese (zh)
Inventor
郝建军
赵晓宇
赵国伟
张政
樊兴超
薛震
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Datong Power Supply Co of State Grid Shanxi Electric Power Co Ltd
Original Assignee
Datong Power Supply Co of State Grid Shanxi Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Datong Power Supply Co of State Grid Shanxi Electric Power Co Ltd
Priority to CN202011058538.XA
Publication of CN112233074A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/10 Image enhancement or restoration using non-spatial domain filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20048 Transform domain processing
    • G06T 2207/20064 Wavelet transform [DWT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a power failure detection method based on fused visible light and infrared images, including: step 1, acquiring a first image and a second image of a region to be detected at the same viewing angle; step 2, decomposing the first image and the second image level by level, and generating a fused image according to preset fusion conditions and the images at the same positions in the decomposed first and second images; and step 3, performing fault identification on the fused image with a fault detection model and adding prediction boxes to the fused image, where each prediction box carries a prediction label used to judge whether the power equipment in the fused image has failed, and the model parameters of the fault detection model are determined from a neural network model and sample fused images. Through the technical scheme of this application, fault detection can be carried out effectively on power equipment; the method is highly practical, easy to popularize, delivers strong economic and social benefits, and improves the stability of power grid operation.

Description

Power failure detection method based on visible light and infrared fusion image
Technical Field
The application relates to the technical field of power failure detection, and in particular to a power failure detection method based on fused visible light and infrared images.
Background
To meet the ever-growing domestic demand for electric energy, power lines must continuously expand toward higher voltages and larger capacities, which makes the maintenance of power equipment a problem that cannot be ignored. On one hand, accurate and efficient maintenance across difficult terrain is hard to achieve; on the other hand, power transmission equipment on the line is easily damaged, especially in remote areas where the equipment is far away and the operating environment is harsher, and damaged equipment can easily cause large-scale power supply faults and accidents. Power lines therefore need to be inspected regularly, but conventional inspection is usually manual, which imposes a heavy workload on inspection personnel, involves high risk, and yields low efficiency.
In the prior art, infrared imaging sensors, with their good detection performance for thermal targets, can be used to detect power line faults. However, the background information of an infrared image is blurred, so the specific overheated part cannot be accurately identified, that is, it cannot be determined which component has failed; as a result, remote fault detection cannot be performed and accurate maintenance of faulty equipment is hindered.
Disclosure of Invention
The purpose of this application is to solve the problems in the existing power equipment fault detection process, improve the effectiveness of fault detection for power equipment, and enhance the stability of power grid operation.
The technical scheme of the application is as follows. The power failure detection method based on fused visible light and infrared images comprises: step 1, acquiring a first image and a second image of a region to be detected at the same viewing angle; step 2, decomposing the first image and the second image level by level, and generating a fused image according to preset fusion conditions and the images at the same positions in the decomposed first and second images; and step 3, performing fault identification on the fused image with a fault detection model and adding prediction boxes to the fused image, where each prediction box carries a prediction label used to judge whether the power equipment in the fused image has failed, and the model parameters of the fault detection model are determined from a neural network model and sample fused images.
In any one of the above technical solutions, further, the preset fusion condition includes a low-frequency preset fusion condition and a high-frequency preset fusion condition, and step 2 specifically includes:
step 21, filtering the first image and the second image respectively with a Gaussian filter function;
step 22, decomposing the filtered first image and the filtered second image level by level using a wavelet transform, wherein the images obtained at each decomposition level comprise a first low-frequency image, a first high-frequency image, a second low-frequency image and a second high-frequency image, and the first and second low-frequency images of the current level can be further decomposed into the first low-frequency image, first high-frequency image, second low-frequency image and second high-frequency image of the next level;
step 23, performing low-frequency fusion on the first low-frequency image and the second low-frequency image at the same position according to the low-frequency preset fusion condition, the result being recorded as a fused low-frequency image;
step 24, according to the high-frequency preset fusion condition and the positions of the decomposed first and second high-frequency images, sequentially selecting, at each decomposition level, the first or the second high-frequency image at each position and recording it as the fused high-frequency image at that position;
and step 25, generating a fused image from the fused low-frequency image and the fused high-frequency images at the different positions by an inverse wavelet transform.
In any one of the above technical solutions, further, step 23 specifically includes:
step 231, splitting the first low-frequency image at the same position into a number of equally sized first sub-block images and splitting the second low-frequency image into a number of equally sized second sub-block images, the number of first sub-block images being equal to the number of second sub-block images;
step 232, sequentially calculating the correlation coefficient of the first sub-block image and the second sub-block image at each position, wherein the correlation coefficient ρ(A, B) is calculated as:
ρ(A, B) = Cov(A, B) / (σ_A σ_B)
Cov(A, B) = E[(A − E[A])(B − E[B])]
wherein A denotes the pixel values of the first sub-block image, B denotes the pixel values of the second sub-block image, Cov(·) is the covariance operation, E(·) is the expectation operation, σ_A is the standard deviation of the first sub-block image, and σ_B is the standard deviation of the second sub-block image;
and step 233, generating a fused low-frequency image from the correlation coefficients, the first sub-block images and the second sub-block images.
In any one of the above technical solutions, further, step 24 specifically includes:
step 241, splitting the first high-frequency image of the same level into a plurality of third sub-block images with the same size and splitting the second high-frequency image into a plurality of fourth sub-block images with the same size, wherein the number of the third sub-block images is equal to that of the fourth sub-block images;
and step 242, calculating the average pixel value of each third sub-block image and each fourth sub-block image, and sequentially selecting, from each pair of third and fourth sub-block images at the same level and position, the image with the larger average pixel value as the fused high-frequency image at that position.
In any one of the above technical solutions, further, in step 3, the determining, by the neural network model and the sample fusion image, a model parameter of the fault detection model specifically includes:
step 31, acquiring sample images at the same viewing angle while the power equipment is in operation, and generating a sample fused image from the sample images;
step 32, constructing a neural network model comprising a forward propagation path and a backward propagation path, and introducing a momentum factor into the backward propagation path to adjust the weights and thresholds in the backward propagation path, wherein the adjustment formulas for the weights and thresholds are:
Δω_ji(k+1) = (1 − mc(k)) η(k) δ_j x_ji + mc(k) Δω_ji(k)
Δb_j(k+1) = (1 − mc(k)) η(k) δ_j + mc(k) Δb_j(k)
[The expression for the momentum factor mc(k) appears only as an equation image in the original (Figure BDA0002711531970000041) and is not reproduced here.]
where Δω_ji(k+1) is the weight adjustment of the (k+1)-th iteration, Δb_j(k+1) is the threshold adjustment of the (k+1)-th iteration, mc(k) is the momentum factor of the k-th iteration, η(k) is the learning rate of the k-th iteration, δ_j is the error term of node j, x_ji is the input passed from node i to node j, i and j are node indices, and E(k) is the sum of squared errors between the actual output and the expected output at the k-th iteration;
and step 33, inputting the sample fused image into the neural network model with the momentum factor introduced, performing the (k+1)-th iteration with the weights and thresholds returned by the backward propagation path during the k-th iteration, and, once the (k+1)-th iteration is determined to have converged, recording the neural network model as the fault detection model and its parameters as the model parameters.
In any one of the above technical solutions, further, the learning rate η is calculated as:
[The learning-rate expression appears only as an equation image in the original (Figure BDA0002711531970000042) and is not reproduced here.]
in any one of the above technical solutions, further, an equipment tag is calibrated in the sample fusion image.
In any one of the above technical solutions, further, the device label comprises a first coordinate, a second coordinate and the device type of the power equipment to be detected in the fused image, wherein the second coordinate lies diagonally opposite the first coordinate.
The beneficial effects of this application are as follows.
through the technical scheme in this application, can effectively carry out fault detection to power equipment, the practicality is strong, the facilitate promotion, and economic benefits and social are all higher, have promoted the stability of electric wire netting operation.
When fusing the images of the region to be detected, in order to make full use of the characteristics of the visible light image and the infrared image, a multi-level decomposition and same-level fusion mechanism is adopted to fuse the two. The final result therefore retains the color, clear detail contours and edges of the visible light image as well as the brightness information of the infrared image; the infrared target stands out against the background brightness and is easier to identify.
The application also updates the model parameters with a BP algorithm improved by combining two strategies, the additional momentum method and the adaptive learning rate adjustment method. This effectively prevents the network from falling into local minima, helps shorten the learning time, improves the efficiency with which the fault detection model identifies the fused image of the region to be detected, and optimizes the real-time performance of power equipment fault detection.
In a preferred implementation of the application, a visible light imaging device and an infrared imaging device are mounted simultaneously on an unmanned aerial vehicle. Under the supervision of personnel, visible light and infrared images at the same viewing angle can be collected, fully preparing the data needed for the fused image; pictures of longer stretches of line and equipment can be taken, avoiding the low efficiency of traditional methods. Meanwhile, an image processing and fusion mechanism can be deployed on the UAV, which performs real-time image acquisition through the V4L2 interface and real-time transmission of the infrared and visible light images over TCP/IP, achieving efficient fusion of the two modalities and improving the real-time performance of power equipment fault detection.
Furthermore, during fault identification of the fused image by the fault detection model, in order to reduce overlapping prediction boxes and improve the operating efficiency of the model, Soft-NMS, an improved version of NMS, is used: by dynamically changing the threshold, the number of prediction boxes around the same object is reduced, which improves the accuracy of fault identification and optimizes how the results are displayed.
Drawings
The advantages of the above and/or additional aspects of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow diagram of a method for power failure detection based on fused visible and infrared images according to one embodiment of the present application;
FIG. 2 is a schematic view of an image acquisition device according to an embodiment of the present application;
FIG. 3 is a schematic flow diagram of a fused image generation process according to one embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the present application can be more clearly understood, the present application will be described in further detail below with reference to the accompanying drawings and detailed description. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, however, the present application may be practiced in other ways than those described herein, and therefore the scope of the present application is not limited by the specific embodiments disclosed below.
As shown in fig. 1, the present embodiment provides a power failure detection method based on fused visible light and infrared images, the method comprising:
step 1, acquiring a first image and a second image of a region to be detected under the same visual angle;
specifically, as shown in fig. 2, adopt unmanned aerial vehicle as image acquisition equipment in this embodiment, set up two fixing bases respectively at unmanned aerial vehicle's fuselage lower extreme, each fixing base sets up one and rotates the cloud platform, rotates the cloud platform and can control free rotation from top to bottom to obtain two images under the same visual angle through two cameras, wait to detect regional first image and second image promptly.
In this embodiment, the first image is a visible light image, and the second image is an infrared image.
Step 2, decomposing the first image and the second image level by level, and generating a fused image according to preset fusion conditions and the images at the same positions in the decomposed first and second images;
it should be noted that the fusion of the first image and the second image may be implemented by an image processing and fusion mechanism built on the unmanned aerial vehicle, or by an image processing and fusion mechanism built on the terminal after the unmanned aerial vehicle sends the information to the terminal through the wireless transmission network, which is not limited in this embodiment.
In this embodiment, the unmanned aerial vehicle acquires images in real time through the V4L2 interface and transmits them in real time over the TCP/IP protocol, as in the sketch below.
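As an illustrative sketch only: the patent names the V4L2 interface and TCP/IP but prescribes no concrete code, so the device index, ground-station address, port and length-prefixed framing below are all assumptions. OpenCV's capture backend wraps V4L2 on Linux.

```python
# Hypothetical capture-and-transmit loop: V4L2 capture via OpenCV,
# JPEG-compressed frames streamed over a TCP socket.
import socket
import struct

import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)  # /dev/video0 via the V4L2 backend
sock = socket.create_connection(("192.168.1.100", 9000))  # assumed ground terminal

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, buf = cv2.imencode(".jpg", frame)  # compress to keep the payload small
        if not ok:
            continue
        data = buf.tobytes()
        # Length-prefixed framing so the receiver can split the byte stream
        sock.sendall(struct.pack(">I", len(data)) + data)
finally:
    cap.release()
    sock.close()
```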
Through the image processing and fusion mechanism of this embodiment, the resulting fused image retains both the color and clear contour edges of the visible light image and the brightness information of the infrared image.
It should be noted that, during training and testing of the fault detection model in this embodiment, sample images must be collected with the image acquisition equipment, and after the sample fused image is generated by the image processing and fusion mechanism, device labels must be calibrated for the power equipment in the sample fused image, marking: the coordinates of the power equipment in the image (which may include the coordinates of the upper-left and lower-right corners); the type of the equipment (an insulator, for example, may be labeled jyz); and whether the equipment has failed (a failed insulator may be labeled jyzgz, while a fault-free one is labeled with its type only).
In this embodiment, when calibrating the device labels of the power equipment, the labelImg tool may be used for manual data annotation (see the sketch below).
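labelImg writes PASCAL VOC XML annotations by default, so a calibrated sample can be read back as sketched here; the file name is hypothetical and the jyz/jyzgz label values follow the example above.

```python
# Sketch: read the device labels (class name and box corners) from one
# labelImg annotation file in PASCAL VOC XML format.
import xml.etree.ElementTree as ET

def read_voc_labels(xml_path):
    """Return a list of (label, (xmin, ymin, xmax, ymax)) tuples."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        label = obj.findtext("name")  # e.g. "jyz" or "jyzgz"
        bb = obj.find("bndbox")
        box = tuple(int(bb.findtext(t)) for t in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((label, box))
    return boxes

print(read_voc_labels("sample_0001.xml"))  # hypothetical file name
```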
It should be noted that conventional image fusion methods require heavy computation and fuse slowly. To make image fusion applicable to the power failure detection of this embodiment, a method for generating the fused image is presented, which specifically includes the following steps.
and 21, respectively performing filtering processing on the first image and the second image according to a Gaussian filter function, and converting the resolution of the first image and the second image into 320 × 320.
In the filtering process, each pixel in the image is scanned with a user-specified template (also called a convolution kernel or mask), and the value of the pixel at the template center is replaced by the weighted average gray value of the pixels in the neighborhood defined by the template. Since the image is two-dimensional, a two-dimensional Gaussian function is commonly used in image processing, whose distribution function is:
G(x, y) = (1 / (2πθ²)) · exp(−(x² + y²) / (2θ²))
where x and y are the coordinates of a pixel, and the preset parameter θ, which can usually be set to 1.5, determines the width of the Gaussian function.
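A minimal sketch of step 21 under stated assumptions: the kernel is built from the two-dimensional Gaussian formula above with θ = 1.5, while the 5 × 5 kernel size is an assumption (the patent does not specify one).

```python
# Sketch: Gaussian filtering of both source images, then resizing to 320 x 320.
import cv2
import numpy as np

def gaussian_kernel(size=5, theta=1.5):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * theta**2)) / (2.0 * np.pi * theta**2)
    return g / g.sum()  # normalize so overall brightness is preserved

def preprocess(img):
    img = cv2.filter2D(img, -1, gaussian_kernel())
    return cv2.resize(img, (320, 320))

visible = preprocess(cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE))
infrared = preprocess(cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE))
```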
Step 22, decomposing the filtered first image and the filtered second image level by level with a wavelet transform, wherein the images obtained at each decomposition level comprise a first low-frequency image, a first high-frequency image, a second low-frequency image and a second high-frequency image.
After decomposition, the first and second low-frequency images of the current level can be decomposed again with the same wavelet transform, yielding the first low-frequency, first high-frequency, second low-frequency and second high-frequency images of the next level; that is, the low-frequency image of each level can itself be decomposed level by level, which improves the precision of the image fusion.
the specific process of decomposing the image by using the wavelet transform method is not limited in this embodiment.
The decomposed images of the different frequency levels can then be fused under different fusion rules to obtain the fused image.
It should be noted that the position of the images obtained at each decomposition level does not change, and fusion is carried out between images at the same position, so a fused image corresponding to the first image and the second image can be obtained.
In this embodiment, the low-frequency part of an image represents its contours and average characteristics, while the high-frequency part reflects its details. A decomposition sketch is given below.
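The level-by-level decomposition can be sketched with PyWavelets as follows; the patent names no particular wavelet basis or depth, so the Haar wavelet and three levels are assumptions.

```python
# Sketch: decompose an image level by level, re-decomposing the low-frequency
# (LL) band each time, and keep the high-frequency bands of every level.
import pywt

def decompose(img, levels=3):
    """Return the final low-frequency band and the per-level high-frequency bands."""
    highs = []
    low = img
    for _ in range(levels):
        low, (lh, hl, hh) = pywt.dwt2(low, "haar")  # the LL band is decomposed again
        highs.append((lh, hl, hh))
    return low, highs

low_vis, highs_vis = decompose(visible)
low_ir, highs_ir = decompose(infrared)
```

Reconstruction (step 25) would invert this with pywt.idwt2, feeding each level the fused low- and high-frequency bands.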
Step 23, performing low-frequency fusion, level by level, on the first and second low-frequency images at the same position according to the low-frequency preset fusion condition among the preset fusion conditions, the result being recorded as the fused low-frequency image;
step 24, according to the high-frequency preset fusion condition among the preset fusion conditions and the positions of the decomposed first and second high-frequency images at each level, sequentially selecting the first or the second high-frequency image at each position and recording it as the fused high-frequency image at that position.
as shown in fig. 3, when the decomposition is performed, taking the first image as an example, the first image is decomposed into the first low-frequency image and the first high-frequency image (including images 8 to 10) at the first level, the first low-frequency image at the first level is decomposed again to obtain the first low-frequency image and the first high-frequency image (including images 5 to 7) at the second level, and the first low-frequency image at the second level is decomposed to obtain the first low-frequency image (including image 1) and the first high-frequency image (including images 2 to 4) at the third level.
During fusion, the images 1 of the two decomposed images are first fused at low frequency according to the low-frequency preset fusion condition, and images 2 to 4 are selected according to the high-frequency preset fusion condition to obtain the fused high-frequency images of the third level; images 5 to 7 are then selected under the same high-frequency preset fusion condition to obtain the fused high-frequency images of the second level; finally, images 8 to 10 are selected according to the high-frequency preset fusion condition to obtain the fused high-frequency images of the first level.
It should be noted that the decomposition of the first and second images is determined by the preset image decomposition conditions and the pixel information in the images; the decomposition process itself is not limited in this embodiment.
When the low-frequency fusion is performed on the first low-frequency image and the second low-frequency image according to the low-frequency preset fusion condition, the method specifically includes:
Step 231, splitting the first low-frequency image at the same position into a number of equally sized M × N first sub-block images and splitting the second low-frequency image into a number of equally sized M × N second sub-block images, the number of first sub-block images being equal to the number of second sub-block images.
in this embodiment, since the size of the first sub-block image is equal to the size of the second sub-block image, and the shooting angles of the original images (the first image and the second image) are the same, a scene in the split first sub-block image corresponds to a scene in the second sub-block image, and the two may be in one-to-one correspondence by numbering, that is, the q-th first sub-block image corresponds to the q-th second sub-block image.
Taking the sub-block image as the unit, the value distribution of each sub-block image is analyzed statistically, computing its variance, standard deviation and covariance, and from these the correlation coefficient.
Step 232, sequentially calculating the correlation coefficient ρ(A, B) of the first and second sub-block images at each position, where ρ(A, B) ∈ [0, 1] and is calculated as:
ρ(A, B) = Cov(A, B) / (σ_A σ_B)
Cov(A, B) = E[(A − E[A])(B − E[B])]
wherein A denotes the pixel values of the first sub-block image, B denotes the pixel values of the second sub-block image, Cov(·) is the covariance operation, E(·) is the expectation operation, σ_A is the standard deviation of the first sub-block image, and σ_B is the standard deviation of the second sub-block image;
step 233, generating the fused low-frequency image according to the correlation coefficient, the first sub-block image and the second sub-block image.
Specifically, a fusion threshold is set, and it is judged in turn whether each correlation coefficient ρ(A, B) lies within the fusion threshold range; if it does, the first sub-block image is selected as the fusion sub-image, otherwise the second sub-block image is selected.
The fused low-frequency image is then generated from the selected fusion sub-images and their positions; a blockwise sketch follows.
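The blockwise correlation of steps 231-233 might be sketched as below; the 8 × 8 sub-block size is an assumption, since the patent leaves M × N unspecified.

```python
# Sketch: correlation coefficient rho(A, B) between co-located sub-blocks.
import numpy as np

def corr_coeff(a, b):
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    cov = np.mean((a - a.mean()) * (b - b.mean()))  # Cov(A, B)
    denom = a.std() * b.std()                       # sigma_A * sigma_B
    return 0.0 if denom == 0 else cov / denom

def iter_blocks(img, m=8, n=8):
    """Yield (row, col, block) for every m x n sub-block of img."""
    for r in range(0, img.shape[0], m):
        for c in range(0, img.shape[1], n):
            yield r, c, img[r:r + m, c:c + n]
```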
Further, in order to improve the accuracy of the fused image and make full use of the color and clear contour edges of the visible light image together with the brightness information of the infrared image, this embodiment also presents an improved low-frequency image fusion method, consisting of the three rules below (see also the sketch that follows them):
when it is determined that the correlation coefficient ρ (A, B) belongs to the first threshold value, i.e., ρ (A, B) ∈ [0,0.19), if the standard deviation σ of the first sub-block image is larger than the standard deviation σ of the first sub-block imageAGreater than or equal to the standard deviation sigma of the second sub-block imageBIf so, recording the first sub-block image as a fused low-frequency sub-image, otherwise, recording the second sub-block image as a fused low-frequency sub-image;
when the correlation coefficient ρ(A, B) falls within the second threshold range, i.e. ρ(A, B) ∈ [0.19, 0.5), the fused low-frequency sub-image F is calculated by a formula that appears only as an equation image in the original (Figure BDA0002711531970000101) and is not reproduced here;
when the correlation coefficient ρ(A, B) falls within the third threshold range, i.e. ρ(A, B) ∈ [0.5, 1), the fused low-frequency sub-image F is calculated by a formula that likewise appears only as an equation image in the original (Figure BDA0002711531970000102).
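A sketch of the improved rule follows. Only the first range is taken directly from the text; the formulas for the second and third ranges appear solely as images in the original, so a simple ρ-dependent weighted average toward the higher-contrast block is substituted here purely as an illustrative stand-in.

```python
# Sketch: fuse one pair of co-located low-frequency sub-blocks.
# Uses corr_coeff() from the previous sketch.
def fuse_low_block(a, b):
    rho = corr_coeff(a, b)
    if rho < 0.19:
        # rule 1 (from the text): keep the block with the larger standard deviation
        return a if a.std() >= b.std() else b
    # rules 2 and 3: assumed weighted average, NOT the patent's exact formulas
    w = 0.5 + 0.5 * rho
    base, other = (a, b) if a.std() >= b.std() else (b, a)
    return w * base + (1.0 - w) * other
```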
further, when the fused high-frequency image is selected according to the high-frequency preset fusion condition, the method specifically includes:
step 241, splitting the first high-frequency image of the same level into a plurality of third sub-block images with the same size and splitting the second high-frequency image into a plurality of fourth sub-block images with the same size, wherein the number of the third sub-block images is equal to that of the fourth sub-block images;
and step 242, calculating the average pixel value of each third sub-block image and each fourth sub-block image, and sequentially selecting, from each pair of third and fourth sub-block images at the same level and position, the image with the larger average pixel value as the fused high-frequency image at the corresponding position.
Specifically, as shown in fig. 3, images 2 to 4 of the first image are first split to obtain the third sub-block images of the third level and the average pixel value of each third sub-block image is calculated; images 2 to 4 of the second image are split at the same time to obtain the fourth sub-block images of the third level and the average pixel value of each fourth sub-block image is calculated; then, position by position, the image with the larger average pixel value is selected as each fused high-frequency image of the third level.
Then, in the same manner, the fused high-frequency images of the second level are determined from images 5 to 7, and those of the first level from images 8 to 10, as in the blockwise sketch below.
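Steps 241-242 reduce to a blockwise mean comparison, sketched here with the same assumed 8 × 8 block size.

```python
# Sketch: fuse one pair of co-located high-frequency bands by keeping, block
# by block, whichever sub-block has the larger pixel mean.
import numpy as np

def fuse_high(h_vis, h_ir, m=8, n=8):
    fused = np.empty_like(h_vis, dtype=np.float64)
    for r in range(0, h_vis.shape[0], m):
        for c in range(0, h_vis.shape[1], n):
            a = h_vis[r:r + m, c:c + n]
            b = h_ir[r:r + m, c:c + n]
            fused[r:r + m, c:c + n] = a if a.mean() >= b.mean() else b
    return fused
```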
Step 25, generating the fused image from the fused low-frequency image and the fused high-frequency images by an inverse wavelet transform.
Step 3, performing fault identification on the fused image with the fault detection model and adding prediction boxes to the fused image, where each prediction box carries a prediction label used to judge whether the power equipment in the fused image has failed, and the model parameters of the fault detection model are determined from a neural network model and sample fused images.
In this embodiment, the model parameters of the fault detection model are determined by feeding the sample fused images into a network model with self-learning capability; the network model whose parameters have been determined is recorded as the fault detection model, which can then perform fault identification on the fused image obtained above.
In this embodiment, a BP network model is used as the neural network; determining the model parameters of the BP network model specifically includes:
step 31, acquiring sample images at the same viewing angle while the power equipment is in operation, and generating sample fused images from the sample images;
step 32, constructing a neural network model comprising a forward propagation path and a backward propagation path, and introducing a momentum factor into the backward propagation path to adjust the weights and thresholds in the backward propagation path, wherein the adjustment formulas for the weights and thresholds are:
Δω_ji(k+1) = (1 − mc(k)) η(k) δ_j x_ji + mc(k) Δω_ji(k)
Δb_j(k+1) = (1 − mc(k)) η(k) δ_j + mc(k) Δb_j(k)
[The expression for the momentum factor mc(k) appears only as an equation image in the original (Figure BDA0002711531970000121) and is not reproduced here.]
where Δω_ji(k+1) is the weight adjustment of the (k+1)-th iteration, Δb_j(k+1) is the threshold adjustment of the (k+1)-th iteration, mc(k) is the momentum factor of the k-th iteration, η(k) is the learning rate of the k-th iteration, δ_j is the error term of node j, x_ji is the input passed from node i to node j, i and j are node indices, and E(k) is the sum of squared errors between the actual output and the expected output at the k-th iteration;
specifically, by introducing momentum factors, the network not only considers the effect of errors on gradient but also considers the influence of variation trend on an error surface when correcting the weight. Without the effect of the additional momentum, the network may fall into shallow local minima, which are likely to be slipped by the effect of the additional momentum.
When the learning rate η(k) is too small in a flat region of the error surface, the number of training iterations grows, so η(k) should be increased there; in regions where the error changes sharply, too large an adjustment overshoots the narrow valleys, and the training oscillates as the iterations increase. The learning rate η(k) is therefore adjusted adaptively, which accelerates the convergence of the BP network. The learning rate η(k) is calculated as:
[The learning-rate expression appears only as an equation image in the original (Figure BDA0002711531970000131) and is not reproduced here.]
by introducing momentum factors and adaptively adjusting the learning rate eta, the local minimum of the network can be effectively inhibited, and the learning time can be shortened.
And step 33, inputting the sample fused images into the neural network model with the momentum factor introduced, performing the (k+1)-th iteration with the weights and thresholds returned by the backward propagation path during the k-th iteration, and, once the (k+1)-th iteration is determined to have converged, recording the neural network model as the fault detection model and its parameters as the model parameters.
Specifically, the whole sample data set is labeled and then divided into a training set and a test set in a proportion chosen according to the actual situation (the test set is generally smaller, for example 20% of the sample data set).
The training set is input into the model for iterative training, and the weights and thresholds are adjusted with the improved BP algorithm according to the error between the calibrated device labels and the model output, until the model converges, i.e. until that error stabilizes within a certain range.
The test set is then input into the trained model for testing, the accuracy of the model is output, and if the accuracy meets expectations the model is taken as the fault detection model.
The fault detection model obtained in this embodiment adds prediction labels to the fused image; the prediction labels correspond to the device labels calibrated in the sample fused images and are obtained through prediction by the adaptive BP neural network algorithm with the momentum factor and adaptive learning rate introduced.
In this embodiment, the original images (the first and second images) of the region to be detected are acquired by the UAV and fused by the image processing and fusion mechanism to obtain the fused image, which is fed into the fault detection model as input; the BP neural network outputs the fused image with prediction boxes, each marked with a prediction label (device label) that indicates the predicted equipment state. If the label is "jyzgz", for example, it indicates an insulator fault in the region to be detected.
It should be noted that this embodiment further uses a Soft-NMS method with a dynamically changing threshold to reduce overlapping prediction boxes, keeping only the prediction boxes whose confidence scores exceed the threshold t, which improves the readability of the fault detection model's output. The adjustment formula for the threshold t is:
[The threshold-adjustment expression appears only as an equation image in the original (Figure BDA0002711531970000141) and is not reproduced here.]
where IoU denotes the intersection-over-union; a sketch of the standard variant follows.
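Because the exact threshold-adjustment formula is shown only as an image, the sketch below uses the standard linear-decay Soft-NMS variant: rather than deleting an overlapping prediction box outright, its confidence score is decayed in proportion to its IoU with the currently best box, and boxes are dropped once their scores fall below a floor.

```python
# Sketch: Soft-NMS with linear score decay (standard variant, assumed here).
import numpy as np

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def soft_nms(boxes, scores, overlap_thresh=0.3, score_floor=0.001):
    boxes, scores, keep = list(boxes), list(scores), []
    while boxes:
        i = int(np.argmax(scores))
        best, best_score = boxes.pop(i), scores.pop(i)
        keep.append((best, best_score))
        for j in range(len(boxes)):
            ov = iou(best, boxes[j])
            if ov > overlap_thresh:
                scores[j] *= (1.0 - ov)  # decay instead of hard suppression
        kept = [(b, s) for b, s in zip(boxes, scores) if s >= score_floor]
        boxes = [b for b, _ in kept]
        scores = [s for _, s in kept]
    return keep
```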
The technical scheme of the present application has been described in detail above with reference to the accompanying drawings. The application provides a power failure detection method based on fused visible light and infrared images, comprising: step 1, acquiring a first image and a second image of a region to be detected at the same viewing angle; step 2, decomposing the first image and the second image level by level, and generating a fused image according to preset fusion conditions and the images at the same positions in the decomposed first and second images; and step 3, performing fault identification on the fused image with a fault detection model and adding prediction boxes carrying prediction labels used to judge whether the power equipment in the fused image has failed, the model parameters of the fault detection model being determined from a neural network model and sample fused images. Through this scheme, fault detection of power equipment can be carried out effectively; the method is highly practical, easy to popularize, delivers strong economic and social benefits, and improves the stability of power grid operation.
The steps in the present application may be reordered, combined and removed according to actual requirements.
The units in the device can be merged, divided and deleted according to actual requirements.
Although the present application has been disclosed in detail with reference to the accompanying drawings, it is to be understood that such description is merely illustrative and does not limit the application. The scope of the present application is defined by the appended claims and may cover various modifications, adaptations and equivalents of the invention without departing from its scope and spirit.

Claims (8)

1. A power failure detection method based on visible light and infrared fusion images is characterized by comprising the following steps:
step 1, acquiring a first image and a second image of a region to be detected under the same visual angle;
step 2, decomposing the first image and the second image respectively step by step, and generating a fused image according to a preset fusion condition and the images at the same positions in the decomposed first image and the second image;
and step 3, performing fault identification on the fused image with a fault detection model and adding prediction boxes to the fused image, wherein each prediction box carries a prediction label used to judge whether the power equipment in the fused image has failed, and the model parameters of the fault detection model are determined from a neural network model and a sample fused image.
2. The power failure detection method based on the visible light and infrared fusion image according to claim 1, wherein the preset fusion condition includes a low-frequency preset fusion condition and a high-frequency preset fusion condition, and the step 2 specifically includes:
step 21, respectively performing filtering processing on the first image and the second image according to a Gaussian filter function;
step 22, decomposing the filtered first image and the filtered second image level by level using a wavelet transform, wherein the images obtained at each decomposition level comprise a first low-frequency image, a first high-frequency image, a second low-frequency image and a second high-frequency image, and the first and second low-frequency images of the current level can be decomposed into the first low-frequency image, first high-frequency image, second low-frequency image and second high-frequency image of the next level;
step 23, according to the low-frequency preset fusion condition, performing low-frequency fusion on the first low-frequency image and the second low-frequency image at the same position, and recording as a fused low-frequency image;
step 24, according to the high-frequency preset fusion condition and the positions of the decomposed first and second high-frequency images, sequentially selecting, at each decomposition level, the first or the second high-frequency image at each position and recording it as the fused high-frequency image at that position;
and step 25, generating the fused image from the fused low-frequency image and the fused high-frequency images at the different positions by an inverse wavelet transform.
3. The power failure detection method based on the visible light and infrared fusion image according to claim 2, wherein the step 23 specifically includes:
step 231, splitting the first low-frequency image at the same position into a number of equally sized first sub-block images and splitting the second low-frequency image into a number of equally sized second sub-block images, the number of first sub-block images being equal to the number of second sub-block images;
step 232, sequentially and respectively calculating correlation coefficients of the first sub-block image and the second sub-block image at the same position, wherein the calculation formula of the correlation coefficients ρ (a, B) is as follows:
ρ(A, B) = Cov(A, B) / (σ_A σ_B)
Cov(A, B) = E[(A − E[A])(B − E[B])]
wherein A denotes the pixel values of the first sub-block image, B denotes the pixel values of the second sub-block image, Cov(·) is the covariance operation, E(·) is the expectation operation, σ_A is the standard deviation of the first sub-block image, and σ_B is the standard deviation of the second sub-block image;
step 233, generating the fused low-frequency image according to the correlation coefficient, the first sub-block image and the second sub-block image.
4. The power failure detection method based on the visible light and infrared fusion image according to claim 2, wherein the step 24 specifically includes:
step 241, splitting the first high-frequency image of the same level into a plurality of third sub-block images with the same size and splitting the second high-frequency image into a plurality of fourth sub-block images with the same size, wherein the number of the third sub-block images is equal to that of the fourth sub-block images;
and step 242, calculating the average pixel value of each third sub-block image and each fourth sub-block image, and sequentially selecting, from each pair of third and fourth sub-block images at the same level and position, the image with the larger average pixel value as the fused high-frequency image at that position.
5. The method according to claim 1, wherein in the step 3, the model parameters of the fault detection model are determined by a neural network model and a sample fusion image, and specifically comprises:
step 31, acquiring sample images at the same view angle under the operation state of the power equipment, and generating a sample fusion image according to the sample images;
step 32, constructing a neural network model, where the neural network model includes a forward propagation path and a backward propagation path, and introducing a momentum factor into the backward propagation path to adjust a weight and a threshold in the backward propagation path, where a calculation formula for adjusting the weight and the threshold is:
Δω_ji(k+1) = (1 − mc(k)) η(k) δ_j x_ji + mc(k) Δω_ji(k)
Δb_j(k+1) = (1 − mc(k)) η(k) δ_j + mc(k) Δb_j(k)
[The expression for the momentum factor mc(k) appears only as an equation image in the original (Figure FDA0002711531960000031) and is not reproduced here.]
where Δω_ji(k+1) is the weight adjustment of the (k+1)-th iteration, Δb_j(k+1) is the threshold adjustment of the (k+1)-th iteration, mc(k) is the momentum factor of the k-th iteration, η(k) is the learning rate of the k-th iteration, δ_j is the error term of node j, x_ji is the input passed from node i to node j, i and j are node indices, and E(k) is the sum of squared errors between the actual output and the expected output at the k-th iteration;
step 33, inputting the sample fused image into the neural network model with the momentum factor introduced, performing the (k+1)-th iteration with the weights and thresholds returned by the backward propagation path during the k-th iteration, and, once the (k+1)-th iteration is determined to have converged, recording the neural network model as the fault detection model and its parameters as the model parameters.
6. The power failure detection method based on visible light and infrared fusion image as claimed in claim 5, wherein the learning rate η is calculated by the formula:
[The learning-rate expression appears only as an equation image in the original (Figure FDA0002711531960000041) and is not reproduced here.]
7. The power failure detection method based on the visible light and infrared fused image as claimed in any one of claims 1 to 6, wherein a device label is calibrated in the sample fused image.
8. The power failure detection method based on the visible light and infrared fused image as claimed in claim 7, wherein the device label comprises a first coordinate, a second coordinate and the device type of the power equipment to be detected in the fused image, the second coordinate lying diagonally opposite the first coordinate.
CN202011058538.XA 2020-09-30 2020-09-30 Power failure detection method based on visible light and infrared fusion image Pending CN112233074A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011058538.XA CN112233074A (en) 2020-09-30 2020-09-30 Power failure detection method based on visible light and infrared fusion image


Publications (1)

Publication Number Publication Date
CN112233074A true CN112233074A (en) 2021-01-15

Family

ID=74119792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011058538.XA Pending CN112233074A (en) 2020-09-30 2020-09-30 Power failure detection method based on visible light and infrared fusion image

Country Status (1)

Country Link
CN (1) CN112233074A (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017020595A1 (en) * 2015-08-05 2017-02-09 Wuhan Guide Infrared Co Ltd Visible light image and infrared image fusion processing system and fusion method
CN107506695A (en) * 2017-07-28 2017-12-22 Wuhan University of Technology Video monitoring equipment failure automatic detection method
CN108389158A (en) * 2018-02-12 2018-08-10 Hebei University A kind of infrared and visible light image fusion method
CN109446925A (en) * 2018-10-08 2019-03-08 Sun Yat-sen University A kind of electric device maintenance algorithm based on convolutional neural networks
CN109658371A (en) * 2018-12-05 2019-04-19 Beijing Forestry University The fusion method of infrared image and visible images, system and relevant device
CN111612736A (en) * 2020-04-08 2020-09-01 Guangdong Power Grid Co Ltd Power equipment fault detection method, computer and computer program

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113487529A (en) * 2021-07-12 2021-10-08 Jilin University Meteorological satellite cloud picture target detection method based on YOLO
CN113688828A (en) * 2021-07-23 2021-11-23 Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd Bad element identification method and related device
CN113688828B (en) * 2021-07-23 2023-09-29 Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd Bad element identification method and related device
CN113592849A (en) * 2021-08-11 2021-11-02 Electric Power Research Institute of State Grid Jiangxi Electric Power Co Ltd External insulation equipment fault diagnosis method based on convolutional neural network and ultraviolet image
WO2024098678A1 (en) * 2022-11-11 2024-05-16 Shenzhen Power Supply Bureau Co Ltd Ultraviolet light and visible light fusion method for power device detection
CN116403057A (en) * 2023-06-09 2023-07-07 Shandong Ruiying Intelligent Technology Co Ltd Power transmission line inspection method and system based on multi-source image fusion
CN116403057B (en) * 2023-06-09 2023-08-18 Shandong Ruiying Intelligent Technology Co Ltd Power transmission line inspection method and system based on multi-source image fusion

Similar Documents

Publication Publication Date Title
CN112233074A (en) Power failure detection method based on visible light and infrared fusion image
US20230418250A1 (en) Operational inspection system and method for domain adaptive device
CN111353413B (en) Low-missing-report-rate defect identification method for power transmission equipment
CN110570454B (en) Method and device for detecting foreign matter invasion
CN107784661B (en) Transformer substation equipment infrared image classification and identification method based on region growing method
CN110458839B (en) Effective wire and cable monitoring system
CN111583198A (en) Insulator picture defect detection method combining FasterR-CNN + ResNet101+ FPN
CN109118479A (en) Defects of insulator identification positioning device and method based on capsule network
CN112446429B (en) CGAN (Carrier grade Access network) -based routing inspection image data small sample expansion method
CN108229587A (en) A kind of autonomous scan method of transmission tower based on aircraft floating state
CN107767374A (en) A kind of GIS disc insulators inner conductor hot-spot intelligent diagnosing method
CN116228780B (en) Silicon wafer defect detection method and system based on computer vision
CN110390261A (en) Object detection method, device, computer readable storage medium and electronic equipment
CN115908407B (en) Power equipment defect detection method and device based on infrared image temperature value
CN112668754A (en) Power equipment defect diagnosis method based on multi-source characteristic information fusion
CN114359167A (en) Insulator defect detection method based on lightweight YOLOv4 in complex scene
CN116485802B (en) Insulator flashover defect detection method, device, equipment and storage medium
CN115830302B (en) Multi-scale feature extraction fusion power distribution network equipment positioning identification method
CN114792328A (en) Infrared thermal imaging image processing and analyzing method
CN110598669A (en) Method and system for detecting crowd density in complex scene
CN113781375B (en) Vehicle-mounted vision enhancement method based on multi-exposure fusion
CN114445694A (en) Patrol report generation method and device, electronic equipment and storage medium
CN114199381A (en) Electrical equipment fault detection method for improving infrared detection model
CN113506230A (en) Photovoltaic power station aerial image dodging processing method based on machine vision
Chu et al. Edge-Eye: Rectifying Millimeter-level Edge Deviation in Manufacturing using Camera-enabled IoT Edge Device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination