CN112001260A - Cable trench fault detection method based on infrared and visible light image fusion - Google Patents


Publication number
CN112001260A
Authority
CN
China
Prior art keywords: image, visible light image, infrared, cable trench
Prior art date
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number
CN202010735272.1A
Other languages
Chinese (zh)
Inventor
凌志勇
唐名峰
樊绍胜
吴长江
刘华飞
贾智伟
李志强
胡九龙
王璇
谌彬
易波
刘铮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuzhou Power Supply Branch Of State Grid Hunan Electric Power Co ltd
State Grid Corp of China SGCC
State Grid Hunan Electric Power Co Ltd
Original Assignee
Zhuzhou Power Supply Branch Of State Grid Hunan Electric Power Co ltd
State Grid Corp of China SGCC
State Grid Hunan Electric Power Co Ltd
Application filed by Zhuzhou Power Supply Branch of State Grid Hunan Electric Power Co., Ltd., State Grid Corp. of China (SGCC), and State Grid Hunan Electric Power Co., Ltd.
Priority: CN202010735272.1A
Publication: CN112001260A
Legal status: Pending

Classifications

    • G06V 20/13 — Scenes; terrestrial scenes; satellite images
    • G06F 18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/22 — Pattern recognition; matching criteria, e.g. proximity measures
    • G06F 18/24 — Pattern recognition; classification techniques
    • G06F 18/25 — Pattern recognition; fusion techniques
    • G06V 10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The invention relates to the technical field of power inspection, and in particular to a cable trench fault detection method based on infrared and visible light image fusion. The method comprises the following steps: S1, acquiring a visible light image and an infrared image of a cable trench detection site, wherein the camera fields of view of the visible light image and the infrared image are consistent; S2, fusing the visible light image and the infrared image by an image feature point matching method and a double-scale fusion mode based on image saliency to obtain a fused image; S3, inputting the fused image into a pre-trained Yolo_v3 model improved with a focal loss function to obtain the fault information of the cable trench. The pre-trained improved Yolo_v3 model based on the focal loss function is a model trained on the fused images in the training samples and their corresponding fault information. The method solves the technical problem of incomplete detection information in existing detection methods.

Description

Cable trench fault detection method based on infrared and visible light image fusion
Technical Field
The invention relates to the technical field of power inspection, in particular to a cable trench fault detection method based on infrared and visible light image fusion.
Background
As urban power consumption grows, the number of power transmission and distribution lines increases steadily, and the existing overhead lines in cities are overloaded. In addition, overhead lines occupy public ground space and affect the appearance of the city. Placing transmission and distribution facilities underground has therefore become a main trend in urban construction.
However, power cables that run for a long time in underground cable trenches are prone to aging and discharge of their external insulation; accumulated water and moisture in the trench often go unmanaged; and combustible gases such as methane, produced when microorganisms decompose animal and plant remains in the trench, can accumulate and cause fire or explosion. At present, infrared thermal imaging and visible light imaging are increasingly widely applied in China's electric power systems and have become necessary means for state inspection of electrical equipment.
However, practical application shows that although infrared thermal imaging alone can measure the temperature of various heating devices, its contrast is poor, it captures insufficient scene information, and it cannot reveal details inside a cable trench such as accumulated water, stains, and animal and plant remains.
Disclosure of Invention
Technical problem to be solved
In view of the above disadvantages and shortcomings of the prior art, the present invention provides a cable trench fault detection method based on infrared and visible light image fusion, which solves the technical problem of incomplete detection information in existing detection methods.
(II) technical scheme
In order to achieve the purpose, the invention adopts the main technical scheme that:
the embodiment of the invention provides a cable trench fault detection method based on infrared and visible light image fusion, which comprises the following steps:
s1, acquiring a visible light image and an infrared image of a cable trench detection site, wherein camera view fields of the visible light image and the infrared image are consistent;
s2, decomposing and fusing the visible light image and the infrared image by using an image feature point matching method and based on a double-scale fusion mode of image significance to obtain a fused image;
s3, inputting the fusion image into a pre-trained Yolo _ v3 model improved based on a focal loss function to obtain fault information of the cable trench;
the fault information of the cable trench comprises high-temperature damage of the cable and animal and plant corpses;
the pre-trained improved Yolo _ v3 model based on the focal loss function is a model trained based on the fusion image in the training sample and the corresponding fault information.
According to the cable trench fault detection method based on infrared and visible light image fusion provided by the embodiment of the invention, the infrared image and the visible light image are fused by an image feature point matching method and a double-scale fusion mode based on image saliency, and the fused image is input into a trained Yolo_v3 model improved with a focal loss function. The method can therefore reflect the scene information of the cable trench and its surroundings more comprehensively and find faults such as cable breakage, discharge, and heating more accurately.
Optionally, the focal loss function is a modified function based on a cross-entropy loss function.
Optionally, the focal loss function satisfies the following formula:
FL(p_t) = -α_t · (1 - p_t)^γ · log(p_t)
p_t = p if y = 1, and p_t = 1 - p otherwise
α_t = α if y = 1, and α_t = 1 - α otherwise
in the formula, y is the ground-truth label of the training sample, p is the output confidence that the training sample is positive, α and γ are both preset hyper-parameters, α_t balances the positive and negative samples, (1 - p_t)^γ is the modulation factor that down-weights easy samples, and FL(p_t) is the output of the focal loss function.
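The piecewise definition above can be sketched in a few lines of Python. This is a minimal NumPy illustration of the binary focal loss for a single prediction, not the network's actual training code; the α and γ values are example settings only:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p: predicted confidence that the sample is positive;
    y: ground-truth label (1 = positive, 0 = negative);
    alpha, gamma: preset hyper-parameters (example values).
    """
    p_t = p if y == 1 else 1.0 - p            # piecewise definition of p_t
    alpha_t = alpha if y == 1 else 1.0 - alpha  # piecewise definition of alpha_t
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# A well-classified positive contributes far less loss than a hard one,
# which is how the focal loss counteracts class imbalance:
easy = focal_loss(0.9, 1)   # confident, correct positive
hard = focal_loss(0.1, 1)   # badly misclassified positive
```

With γ = 0 and α = 0.5 the expression reduces (up to a constant factor) to the ordinary cross-entropy loss, consistent with the statement that the focal loss is a modification of it.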
Optionally, step S2 includes:
s21, performing scale invariant feature transformation on the visible light image and the infrared image to extract feature points, and matching the feature points to obtain a transformation matrix;
s22, obtaining an infrared image matched with the visible light image based on the transformation matrix and the visible light image;
s23, performing double-scale image decomposition on the visible light image and the infrared image matched with the visible light image to obtain a visible light image base layer, an infrared image base layer, a visible light image detail layer and an infrared image detail layer;
s24, extracting the image significance of the visible light image and the infrared image matched with the visible light image, and further obtaining the pixel point weight of the visible light image and the pixel point weight of the infrared image;
s25, acquiring a final basic layer image based on the visible light image basic layer and the infrared image basic layer;
s26, obtaining a final detail layer image based on the visible light image detail layer, the infrared image detail layer, the pixel point weight of the visible light image and the pixel point weight of the infrared image;
and S27, carrying out double-scale image reconstruction on the final basic layer image and the final detail layer image to obtain a fused image.
Alternatively, in step S23, the visible light image base layer satisfies the following formula:
φ1^b(x, y) = φ1(x, y) * μ(x, y)
in the formula, φ1^b(x, y) is the visible light image base layer, φ1(x, y) is the visible light image, μ(x, y) is an averaging filter, and * denotes convolution;
the infrared image base layer satisfies the following formula:
φ2^b(x, y) = φ2(x, y) * μ(x, y)
in the formula, φ2^b(x, y) is the infrared image base layer and φ2(x, y) is the infrared image;
the visible light image detail layer satisfies the following formula:
φ1^d(x, y) = φ1(x, y) - φ1^b(x, y)
in the formula, φ1^d(x, y) is the visible light image detail layer;
the infrared image detail layer satisfies the following formula:
φ2^d(x, y) = φ2(x, y) - φ2^b(x, y)
in the formula, φ2^d(x, y) is the infrared image detail layer.
Optionally, in step S24, performing saliency extraction on the visible light image and the infrared image matched with the visible light image through mean filtering and median filtering to obtain saliency images of the visible light image and the infrared image;
wherein the saliency image of the visible light image satisfies the following formula:
ξ1(x,y)=||φμ1(x,y)-φη1(x,y)||
in the formula, ξ1(x, y) is the saliency image of the visible light image, φμ1(x, y) is the mean filtering of the visible light image, and φη1(x, y) is the median filtering of the visible light image;
the saliency image of the infrared image satisfies the following formula:
ξ2(x,y)=||φμ2(x,y)-φη2(x,y)||
in the formula, ξ2(x, y) is the saliency image of the infrared image, φμ2(x, y) is the mean filtering of the infrared image, and φη2(x, y) is the median filtering of the infrared image.
Optionally, in step S24, the weight of the pixel point of the visible light image satisfies the following formula:
ψ1(x,y)=ξ1(x,y)/(ξ1(x,y)+ξ2(x,y))
in the formula, ψ1(x, y) is the pixel point weight of the visible light image;
the weight of the pixel points of the infrared image meets the following formula:
ψ2(x,y)=ξ2(x,y)/(ξ1(x,y)+ξ2(x,y))
in the formula, ψ2(x, y) is the pixel point weight of the infrared image.
Optionally, the weight of the pixel point of the visible light image is complementary to the weight of the pixel point of the infrared image, and the following formula is satisfied:
ψ1(x,y)+ψ2(x,y)=1。
optionally, in step S25, the final base layer image satisfies the following formula:
φB(x, y) = 0.5 · (φ1^b(x, y) + φ2^b(x, y))
in the formula, φB(x, y) is the final base layer image.
Optionally, in step S26, the final detail layer image satisfies the following formula:
φD(x, y) = ψ1(x, y) · φ1^d(x, y) + ψ2(x, y) · φ2^d(x, y)
in the formula, φD(x, y) is the final detail layer image.
(III) advantageous effects
The invention has the beneficial effects that: according to the cable trench fault detection method based on infrared and visible light image fusion, the infrared image and the visible light image are fused by an image feature point matching method and a double-scale fusion mode based on image saliency, and the fused image is input into a trained Yolo_v3 model improved with a focal loss function. Compared with the prior art, the method reflects the scene information of the cable trench and its surroundings more comprehensively and finds faults such as cable breakage, discharge, and heating more accurately.
Drawings
FIG. 1a is a gray scale image for detecting cable insulation damage in a wavelet transform image segmentation and image contour detection method according to a related embodiment;
FIG. 1b is a binarized image for detecting damage of a cable insulation layer in a wavelet transform image segmentation and image contour detection method according to a related embodiment;
FIG. 1c is a diagram illustrating a contour detection result image of a damaged cable insulation layer in a wavelet transform image segmentation and image contour detection method according to a related embodiment;
FIG. 2a is a gray scale image for detecting abnormal temperature of cable in a method for image segmentation and image contour detection based on wavelet transform according to a related embodiment;
FIG. 2b is a binarized image based on abnormal cable temperature detection in a wavelet transform image segmentation and image contour detection method according to a related embodiment;
FIG. 2c is a diagram illustrating a contour detection result image based on abnormal cable temperature detection in a wavelet transform image segmentation and image contour detection method according to a related embodiment;
FIG. 3a is a graph illustrating temperature rise identification of a connection point of a line in a temperature rise detection method for equipment based on infrared image characteristics and a seed region growing method according to another related embodiment;
fig. 3b is a graph showing temperature rise identification of a contaminated insulator in an apparatus temperature rise detection method based on infrared image characteristics and a seed region growing method according to another related embodiment;
fig. 4a is a surface intact cable image collected in a deep learning-based power cable image damage batch detection method according to a fourth related embodiment;
fig. 4b is an image of a sample of a cable with a good surface acquired in a method for batch detection of damaged images of a power cable based on deep learning according to a fourth related embodiment;
fig. 4c is a cable image with a broken surface in the method for detecting broken batch of power cable images based on deep learning according to the fourth related embodiment;
fig. 4d is a cable sample image of a damaged surface in the deep learning-based power cable image damaged batch detection method according to the fourth related embodiment;
FIG. 5 is a flowchart of a cable trench fault detection method based on infrared and visible light image fusion according to the present invention;
FIG. 6 is a visible light image of the cable trench fault detection method based on infrared visible light image fusion provided by the invention;
FIG. 7 is an infrared image of the cable trench fault detection method based on infrared and visible image fusion provided by the invention;
FIG. 8 is a flowchart of image fusion in the cable trench fault detection method based on infrared and visible light image fusion provided by the invention;
FIG. 9a is a left camera image of visible light during an effect test of a visible light camera according to the present invention;
FIG. 9b is a diagram of a right-hand camera image of visible light during an effect test of the visible light camera according to the present invention;
FIG. 9c is a feature matching graph of left and right visible light camera images based on a scale invariant feature transform algorithm during an effect test of a visible light camera according to the present invention;
FIG. 9d is a diagram of a visual left and right camera image mosaic during a visual camera effect test process in accordance with the present invention;
fig. 10 is a fused image of the cable trench fault detection method based on infrared and visible light image fusion provided by the present invention;
FIG. 11 is a network structure diagram of the Yolo_v3 model improved with the focal loss function in the cable trench fault detection method based on infrared and visible light image fusion provided by the invention;
FIG. 12a is a labeled image from the training process of the improved Yolo_v3 model based on the focal loss function in the present invention;
FIG. 12b is the visible light image from the training process of the improved Yolo_v3 model based on the focal loss function in the present invention;
FIG. 12c is the infrared image from the training process of the improved Yolo_v3 model based on the focal loss function in the present invention;
FIG. 12d is the fused image from the training process of the improved Yolo_v3 model based on the focal loss function in the present invention.
Detailed Description
For the purpose of better explaining the present invention and to facilitate understanding, the present invention will be described in detail by way of specific embodiments with reference to the accompanying drawings.
To meet the cable trench inspection tasks that arise in power system operation and maintenance, a variety of advanced underground cable trench detection devices have been developed.
In a related embodiment, a cable trench inspection robot system is provided, key technologies for cable defect detection in the system are studied, and a cable insulation damage detection algorithm and a cable temperature anomaly detection algorithm based on wavelet-transform image segmentation and image contour detection are proposed; fig. 1a-1c and fig. 2a-2c show the cable insulation damage detection images and the cable temperature anomaly detection images obtained by this method. However, the method imposes strict environmental requirements on the underground cable trench, produces large errors, and is difficult to operate.
In another related embodiment, a method for detecting equipment temperature rise is provided. The method reduces noise interference in the infrared image by neighborhood averaging, extracts the red and green component maps in the RGB space of the infrared image, and segments the two components with a seed region growing method. Regions are divided by searching for local highest-temperature points, and morphological gradients are calculated within each region to screen out high-temperature fault points; the region containing such a point is taken as the seed region, realizing automatic seed region selection. The maximum gradient in the 4 directions of a pixel and the gray-level difference between the pixel and the seed point serve as the criteria for seed growth. The segmented red and green component maps are then fused by an intersection method, and the over-temperature region of the equipment is extracted. Experiments show that the method can determine the high-temperature-rise area with a clear outline and provides a basis for temperature rise fault diagnosis of electrical equipment. Fig. 3a and 3b show the temperature rise identification of a line connection point and of a contaminated insulator obtained by this method. However, the method identifies faults only from image data obtained by an infrared thermal imaging camera: the contrast of infrared imaging is poor, the acquired image data are one-sided, and accumulated water, stains, and animal and plant remains in the cable trench cannot be found.
In a third related embodiment, a method for detecting cable abnormalities is provided, which effectively identifies, from infrared images, abnormal heating that may occur at the wire clamp, stress cone, and tail pipe of the porcelain-bushing-type terminal during cable trench operation. The method applies Radon and Fourier-Mellin transforms to extract features from the infrared image, derives 4 features from the transformed image with an invariant function, and feeds the feature vector into a BP neural network for recognition. The results show that this geometric-transform-invariant feature extraction effectively characterizes the infrared image, achieves good recognition, and is robust to noise. However, it likewise relies only on image data from an infrared thermal imaging camera: the contrast of infrared imaging is poor, the acquired image data are one-sided, and accumulated water, stains, and animal and plant remains in the cable trench cannot be found.
In a fourth related embodiment, a batch detection method for power cable image damage is provided, which combines a non-contact, non-destructive imaging mode with a deep learning method. As shown in fig. 4a to 4d, an intact cable surface may carry white flocculent dust, white lettering on the cable itself, and areas of uneven color and shadow caused by illumination, all of which greatly increase the difficulty of identification; the sampled damage includes nicks, scratches, holes, and other types on the cable surface. The method detects a single category well, but performs poorly when detecting and identifying cable trench damage together with the imbalanced categories of animal and plant remains, garbage, and accumulated water.
Based on the above, the cable trench fault detection method based on infrared and visible light image fusion provided by the embodiment of the invention fuses the infrared image and the visible light image and performs detection with the Yolo_v3 model. Compared with the prior art, it reflects the scene information of the cable trench and its surroundings more comprehensively and finds faults such as cable breakage, discharge, and heating more accurately.
In order to better understand the above technical solutions, exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Example 1
The present embodiment provides a cable trench fault detection method based on infrared and visible light image fusion, as shown in fig. 5, which is a flowchart of the cable trench fault detection method based on infrared and visible light image fusion in the present embodiment, and includes the following steps:
s1, acquiring a visible light image and an infrared image of the cable trench detection site based on the infrared imaging and visible light binocular camera, wherein the camera view angle fields of the visible light image and the infrared image are consistent.
The visible light image and the infrared image are shown in fig. 6 and 7, respectively.
Because the image contrast obtained by a single infrared imaging camera is low, the captured site details are poor, and accumulated water, stains, and animal and plant remains in the cable trench cannot be identified, a matched infrared imaging and visible light binocular camera is used to acquire image data of the cable trench site, yielding a visible light image and an infrared image respectively. Furthermore, the fields of view of the infrared and visible light cameras (the angle subtended at the lens by the maximum range over which the detected object can be imaged) must be consistent, and the aspect ratio of the camera pixels must be the same, to provide a good precondition for the subsequent image fusion.
S2, fusing the visible light image and the infrared image based on the image saliency to obtain a fused image, where the method includes:
and S21, performing scale invariant feature transformation on the visible light image and the infrared image to extract feature points, and matching the feature points to obtain a transformation matrix.
And S22, obtaining an infrared image matched with the visible light image based on the transformation matrix and the visible light image.
Specifically, because the scale-invariant feature transform algorithm depends on image pixel values, directly stitching an infrared image and a visible light image whose pixel values differ greatly would cause large errors. Therefore, for calibration, the infrared imaging camera is first replaced with a visible light camera having the same parameters as the visible light camera of the binocular pair, and images are acquired. Figs. 9a-9d show the test of the visible light camera effect. Feature points are extracted from the two visible light images by the scale-invariant feature transform algorithm, giving the feature points of the first visible light image (visible light left camera image) and the second visible light image (visible light right camera image); the feature points are matched, and the affine transformation matrix H from the first to the second visible light image is calculated. The parameters of the infrared imaging camera are then obtained from the parameters of the visible light camera and the affine transformation matrix H.
Furthermore, the infrared imaging and visible light binocular camera is mounted on a platform based on the robot operating system, enabling automatic navigation of the cable trench without excessive and complicated manual operation.
And S23, performing double-scale image decomposition on the visible light image and the infrared image matched with the visible light image through mean value filtering to obtain a visible light image basic layer, an infrared image basic layer, a visible light image detail layer and an infrared image detail layer.
The visible light image base layer satisfies the following formula:
φ1^b(x,y)=φ1(x,y)*μ(x,y) (1)
in the formula, φ1^b(x, y) is the visible light image base layer, φ1(x, y) is the visible light image, μ(x, y) is an averaging filter, * denotes convolution, and (x, y) is an image pixel point;
the infrared image base layer satisfies the following formula:
φ2^b(x,y)=φ2(x,y)*μ(x,y) (2)
in the formula, φ2^b(x, y) is the infrared image base layer and φ2(x, y) is the infrared image;
the visible light image detail layer satisfies the following formula:
φ1^d(x,y)=φ1(x,y)-φ1^b(x,y) (3)
in the formula, φ1^d(x, y) is the visible light image detail layer;
the infrared image detail layer satisfies the following formula:
φ2^d(x,y)=φ2(x,y)-φ2^b(x,y) (4)
in the formula, φ2^d(x, y) is the infrared image detail layer.
The base layer obtains a large number of low-frequency parts of the image, the detail layer obtains a large number of high-frequency parts of the image, and edge information is reserved.
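The double-scale decomposition of step S23 amounts to a box filter plus a subtraction. A minimal NumPy/SciPy sketch follows; the 31x31 averaging window is an assumption, since the patent does not state the filter size:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def two_scale_decompose(img, size=31):
    """Split an image into a base layer (low frequencies, obtained with an
    averaging filter) and a detail layer (the high-frequency residual)."""
    base = uniform_filter(img.astype(np.float64), size=size)
    detail = img - base   # detail layer = image minus its base layer
    return base, detail

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, (120, 160))     # stand-in for a visible/infrared frame
base, detail = two_scale_decompose(img)
```

By construction the two layers sum back to the original image exactly, which is what makes the later double-scale reconstruction (step S27) lossless for identical inputs.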
S24, performing image significance extraction on the visible light image and the infrared image matched with the visible light image through mean filtering and median filtering to obtain a significance image of the visible light image and a significance image of the infrared image, and obtaining pixel point weights of the visible light image and the infrared image according to the significance image of the visible light image and the significance image of the infrared image.
Specifically, the saliency image is calculated as the difference between the mean filter and median filter outputs, because this difference highlights salient information, such as edges and lines, that stands out from its neighborhood.
Compared with the base layer information, the human visual system is more sensitive to the detail layer information of an image, so appropriate weights are assigned to the visible light image detail layer and the infrared image detail layer (pixels with insignificant information receive lower weights and pixels with significant information receive higher weights). The detail layers are therefore fused with a weight map constructed from the saliency information. Normalization brings the weight map into the range [0,1] and meets the information-related requirements, so the pixel point weights of the visible light image and the infrared image are obtained by normalization.
Wherein the saliency image of the visible light image satisfies the following formula:
ξ1(x,y)=||φμ1(x,y)-φη1(x,y)|| (5)
in the formula, ξ1(x, y) is the saliency image of the visible light image, φμ1(x, y) is the mean filtering of the visible light image, and φη1(x, y) is the median filtering of the visible light image;
the saliency image of the infrared image satisfies the following formula:
ξ2(x,y)=||φμ2(x,y)-φη2(x,y)|| (6)
in the formula, ξ2(x, y) is the saliency image of the infrared image, φμ2(x, y) is the mean filtering of the infrared image, and φη2(x, y) is the median filtering of the infrared image.
The pixel point weight of the visible light image satisfies the following formula:
ψ1(x,y)=ξ1(x,y)/(ξ1(x,y)+ξ2(x,y)) (7)
where ψ1(x,y) is the pixel point weight of the visible light image;
the weight of the pixel points of the infrared image meets the following formula:
ψ2(x,y)=ξ2(x,y)/(ξ1(x,y)+ξ2(x,y)) (8)
where ψ2(x,y) is the pixel point weight of the infrared image.
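As a minimal illustrative sketch (not part of the patent text), the saliency maps of Eqs. (5)-(6) and the normalized weights of Eqs. (7)-(9) could be computed in plain NumPy as follows; the window size `k`, the edge padding mode, and the small `eps` guard for flat regions are assumptions:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view  # requires NumPy >= 1.20

def _window_filter(img, k, reducer):
    """Apply a k x k sliding-window reducer (mean or median) with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    windows = sliding_window_view(padded, (k, k))
    return reducer(windows, axis=(-2, -1))

def saliency(img, k=3):
    """Eqs. (5)-(6): xi(x, y) = |mean-filtered - median-filtered|."""
    return np.abs(_window_filter(img, k, np.mean) - _window_filter(img, k, np.median))

def weights(vis, ir, k=3, eps=1e-12):
    """Eqs. (7)-(9): normalize the two saliency maps into complementary weights."""
    s_vis, s_ir = saliency(vis, k), saliency(ir, k)
    psi_vis = s_vis / (s_vis + s_ir + eps)  # Eq. (7); eps avoids division by zero
    psi_ir = 1.0 - psi_vis                  # Eqs. (8)-(9): weights are complementary
    return psi_vis, psi_ir
```

By construction the two returned weight maps sum to one at every pixel, matching Eq. (9).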
Further, the weight of the pixel point of the visible light image is complementary with the weight of the pixel point of the infrared image, and the following formula is satisfied:
ψ1(x,y)+ψ2(x,y)=1 (9)
Thus, if the weight construction process assigns a large weight to a pixel carrying important information in one detail image, it assigns a correspondingly small weight to the same pixel location in the other detail image, and vice versa.
And S25, acquiring a final base layer image based on the visible light image base layer and the infrared image base layer. The final base layer image satisfies the following formula:
φB(x,y)=(φ1B(x,y)+φ2B(x,y))/2 (10)
where φB(x,y) is the final base layer image, φ1B(x,y) is the visible light image base layer, and φ2B(x,y) is the infrared image base layer.
And S26, obtaining a final detail layer image based on the visible light image detail layer, the infrared image detail layer, the pixel point weight of the visible light image and the pixel point weight of the infrared image. The final detail layer image satisfies the following formula:
φD(x,y)=ψ1(x,y)·φ1D(x,y)+ψ2(x,y)·φ2D(x,y) (11)
where φD(x,y) is the final detail layer image, φ1D(x,y) is the visible light image detail layer, and φ2D(x,y) is the infrared image detail layer.
And S27, performing dual-scale image reconstruction on the final base layer image and the final detail layer image to obtain the fused image shown in fig. 10.
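Steps S23-S27 can be sketched end to end as follows. This is an illustrative NumPy implementation only: the filter window sizes and the simple averaging of the two base layers in Eq. (10) are assumptions based on standard two-scale saliency fusion, not the patent's exact parameters:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view  # requires NumPy >= 1.20

def _filt(img, k, reducer):
    """k x k sliding-window mean or median filter with edge padding."""
    pad = k // 2
    windows = sliding_window_view(np.pad(img, pad, mode="edge"), (k, k))
    return reducer(windows, axis=(-2, -1))

def two_scale_fuse(vis, ir, k_base=7, k_sal=3):
    # S23: two-scale decomposition (base = mean-filtered image, detail = residual)
    base_v, base_i = _filt(vis, k_base, np.mean), _filt(ir, k_base, np.mean)
    det_v, det_i = vis - base_v, ir - base_i
    # S24: saliency = |mean - median|, normalized into complementary weights
    sal_v = np.abs(_filt(vis, k_sal, np.mean) - _filt(vis, k_sal, np.median))
    sal_i = np.abs(_filt(ir, k_sal, np.mean) - _filt(ir, k_sal, np.median))
    w_v = sal_v / (sal_v + sal_i + 1e-12)
    # S25: fuse base layers (simple averaging, an assumed rule for Eq. 10)
    base = 0.5 * (base_v + base_i)
    # S26: saliency-weighted fusion of detail layers (Eq. 11)
    detail = w_v * det_v + (1.0 - w_v) * det_i
    # S27: dual-scale reconstruction
    return base + detail
```

A useful sanity check of the design: fusing an image with itself returns the original image, because the averaged base layer and the weighted detail layer then reconstruct the input exactly.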
And S3, inputting the fusion image into a pre-trained Yolo _ v3 model improved based on the focal loss function to obtain the fault information of the cable trench. The pre-trained improved Yolo _ v3 model based on the focal loss function is a model trained based on the fusion image in the training sample and the corresponding fault information.
The network structure of the improved Yolo _ v3 model based on the focal loss function is shown in fig. 11.
For classification problems, when the numbers of samples in each class do not differ greatly, the imbalance has little influence on the model. If the class sizes differ greatly, the training of the model is affected. For example, consider a binary classification problem with 1000 samples, 990 negative and only 10 positive. A model trained on these samples can reach 99% accuracy simply by classifying every sample as negative, yet such a classifier, which only ever predicts the negative class, has little value in either research or practical use.
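The accuracy trap described above is easy to reproduce. This toy check (illustrative, not from the patent) shows a degenerate classifier reaching 99% accuracy on a 990/10 split while detecting no positives at all:

```python
# 1000 labels: 990 negatives (0) and 10 positives (1)
labels = [0] * 990 + [1] * 10

# A degenerate "classifier" that always predicts the negative class
predictions = [0] * 1000

# Accuracy is high despite the model never finding a single positive sample
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(accuracy)  # 0.99
```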
In the classification problem, the phenomenon that the number of sample classes differs too much is called class imbalance. For convenience of exposition, the impact of class imbalance is discussed from the simplest linear classifier.
Assume there is a linear classifier, i.e. a method that classifies by comparing its output with a threshold. The model of the linear classifier satisfies the following formula:
y=wTx+b (12)
where x is the input sample, y is the output of the model (the sample is classified as positive when y is greater than 0.5 and as negative otherwise), w is the weight vector connecting x to y, and b is the bias.
For binary classification, the value of y can also be interpreted as the probability that the sample is a positive example, and the odds of positive to negative are
y/(1−y) (13)
The threshold is set to 0.5, so for the classifier the probabilities of positive and negative examples occurring are equal. In other words, if
y/(1−y)>1 (14)
the prediction result is positive. The farther y is from the threshold, the more unambiguous the classification result. In the ideal case, the training set is an unbiased sample of the population of true samples, so the observed probability can represent the true probability. If the numbers of positive and negative samples in the training set differ, let Np be the number of positive samples and Nn the number of negative samples; the observed odds for a sample are then
Np/Nn (15)
When the predicted odds of the classifier exceed the observed odds, i.e.
y/(1−y)>Np/Nn (16)
the sample should be determined to be a positive example. The classifier, however, actually declares a positive example whenever
y/(1−y)>1 (17)
which contradicts the observed odds and needs to be adjusted. For a sample to be classified as positive, the following rescaled condition should be satisfied:
(y/(1−y))·(Nn/Np)>1 (18)
The above procedure corresponds to changing the decision threshold according to the imbalance between the positive and negative samples, which is a suitable approach for simple linear models.
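The rescaled decision rule of Eqs. (14)-(18) amounts to comparing the predicted odds with the odds observed in the training set. A one-function sketch (the function name and the 990/10 counts are illustrative, not from the patent):

```python
def rescaled_positive(y: float, n_pos: int, n_neg: int) -> bool:
    """Predict positive iff (y/(1-y)) * (N_n/N_p) > 1, i.e. the predicted
    odds exceed the class odds observed in the imbalanced training set."""
    return (y / (1.0 - y)) * (n_neg / n_pos) > 1.0

# With 10 positives vs 990 negatives, even a low score crosses the rescaled
# threshold, while the naive rule y > 0.5 would reject it.
print(rescaled_positive(0.02, 10, 990))   # True:  0.0204 * 99 ~ 2.02 > 1
print(rescaled_positive(0.005, 10, 990))  # False: 0.0050 * 99 ~ 0.50 < 1
```

With balanced classes (Np = Nn) the rule reduces to the ordinary threshold y > 0.5.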
For a complex network model, the learning weights of the positive and negative samples can instead be controlled by reshaping the curve of the loss function, thereby balancing the samples. An object detection model must perform both classification and coordinate regression, so one object detection model generally combines several loss functions. In the Yolo_v3 model, the loss responsible for the classification task is binary cross entropy, which is a good classifier for balanced binary problems. However, in the fused infrared and visible light cable trench data, the amounts of data for breakage, aging discharge, internal debris and accumulated water differ greatly, so severe class imbalance appears when the picture samples are labeled. For example, the number of transformers along a section of overhead line is far smaller than the number of poles, and phenomena such as pole tilt and insulator drop are relatively rare; the problem is especially pronounced when the total number of samples is insufficient, so the adverse effect of class imbalance must be addressed.
Aiming at the class imbalance present in the cable trench image samples, focal loss mitigates the influence of class imbalance by adjusting the learning weights of the samples while Yolo_v3 trains on them.
Focal loss is a function modified from the cross entropy loss function: two hyper-parameters are added on top of the cross entropy function to control the proportional weight of positive and negative samples and the weight of hard-to-classify samples, balancing the training set by reweighting the loss. Controlling sample learning weights through the loss function is one form of cost-sensitive learning.
The formula of focal loss is shown as follows:
FL(pt)=−αt(1−pt)^γ·log(pt)
αt=α if i=1; αt=1−α if i=0
pt=p if i=1; pt=1−p if i=0
where i is the true label of the training sample; p is the confidence with which the model outputs the sample as positive; αt adjusts the slope of the objective function; (1−pt)^γ is the modulating factor built from pt; and α and γ are preset hyper-parameters. The effect of focal loss is determined by the hyper-parameters α and γ, which are chosen through comparative experiments.
By setting these coefficients, the slope of the objective function can be adjusted so that the gradient is larger for certain samples, improving the model's training on the corresponding label. When αt is fixed, the objective function is adjusted mainly by controlling γ: for γ > 0, (1−pt)^γ is called the modulating factor. When a sample is misclassified, its confidence pt is low, so the modulating factor is close to 1 and the loss is almost unchanged; when pt is close to 1, the sample is correctly classified with high confidence and easy to classify, so the modulating factor is close to 0 and its contribution to the loss is small. Adjusting γ therefore controls the influence of easy and hard samples on the objective function and, with it, the training effect.
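The focal loss formulas above translate directly into code. In this sketch the defaults α = 0.25 and γ = 2 follow common focal loss practice, not necessarily the values selected by this patent's comparative experiments:

```python
import math

def focal_loss(p: float, i: int, alpha: float = 0.25, gamma: float = 2.0) -> float:
    """FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t), with the piecewise
    definitions of p_t and alpha_t for true label i in {0, 1}."""
    p_t = p if i == 1 else 1.0 - p              # piecewise p_t
    alpha_t = alpha if i == 1 else 1.0 - alpha  # piecewise alpha_t
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# Easy, well-classified samples are strongly down-weighted relative to hard
# ones, which is what rebalances training.
print(focal_loss(0.9, 1) < focal_loss(0.1, 1))  # True
```

Setting γ = 0 and α = 1 recovers plain cross entropy, −log(p_t), which makes the role of the two hyper-parameters easy to verify.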
The fused images were labeled with LabelImg for cable trench annotation; the labeling categories were cable high-temperature damage and animal and plant carcasses. The recognition model was trained with the focal-loss-modified Yolo_v3 model, and the results are shown in FIGS. 12a-12d.
In summary, the cable trench fault detection method based on infrared and visible light image fusion provided by the invention integrates the complementary information of visible light and infrared images into a single image. This solves the problem that, under poor illumination, a single infrared camera produces images of low contrast and resolution with poor detail, so that water accumulation and carcasses in the cable trench cannot be detected, and it improves the understanding of damaged cable trench scenes. For the class imbalance among image samples such as accumulated dirt, animal and plant carcasses, cable trench breakage and aging discharge in the cable trench environment, focal loss mitigates the influence of class imbalance on Yolo_v3 during sample training by adjusting the learning weights of the samples. Faults such as breakage, discharge and overheating are accurately found through recognition by the improved Yolo_v3 model based on the focal loss function.
Example 2
A trolley based on the Robot Operating System is placed in the cable trench, and the infrared and visible light binocular cameras are mounted on the trolley. The trolley communicates with a ground terminal computer over Ethernet, and the terminal computer performs automatic navigation by issuing destination commands to the trolley. The image data captured by the infrared and visible light binocular cameras are stored and sent to the terminal, where they are stitched and fused into a single picture, and the focal-loss-improved Yolo_v3 model is used for real-time scene recognition.
It should be noted that in the description of the present specification, the description of the term "one embodiment", "some embodiments", "examples", "specific examples" or "some examples", etc., means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, the claims should be construed to include preferred embodiments and all changes and modifications that fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention should also include such modifications and variations.

Claims (10)

1. A cable trench fault detection method based on infrared and visible light image fusion is characterized by comprising the following steps:
s1, acquiring a visible light image and an infrared image of a cable trench detection site, wherein the camera view angle fields of the visible light image and the infrared image are consistent;
s2, fusing the visible light image and the infrared image by using an image feature point matching method and based on an image significance dual-scale fusion mode to obtain a fused image;
s3, inputting the fusion image into a pre-trained Yolo _ v3 model improved based on a focal loss function to obtain fault information of the cable trench;
the fault information of the cable trench comprises high-temperature damage of a cable and animal and plant corpses;
the pre-trained improved Yolo _ v3 model based on the focal loss function is a model trained based on the fusion image in the training sample and the corresponding fault information.
2. The cable trench fault detection method of claim 1, wherein the focal loss function is a modified function based on a cross-entropy loss function.
3. The cable trench fault detection method of claim 1, wherein the focal loss function satisfies the following equation:
FL(pt)=−αt(1−pt)^γ·log(pt)
αt=α if i=1; αt=1−α if i=0
pt=p if i=1; pt=1−p if i=0
where i is the true label of the training sample, p is the confidence that the output training sample is positive, α and γ are both preset hyper-parameters, αt adjusts the slope of the objective function, pt determines the modulating factor, and FL(pt) is the output information of the focal loss function.
4. The cable trench fault detection method of claim 1, wherein the step S2 includes:
s21, performing scale invariant feature transformation on the visible light image and the infrared image to extract feature points, and matching the feature points to obtain a transformation matrix;
s22, obtaining an infrared image matched with the visible light image based on the transformation matrix and the visible light image;
s23, carrying out double-scale image decomposition on the visible light image and the infrared image matched with the visible light image to obtain a visible light image basic layer, an infrared image basic layer, a visible light image detail layer and an infrared image detail layer;
s24, extracting the image significance of the visible light image and the infrared image matched with the visible light image, and further obtaining the pixel point weight of the visible light image and the pixel point weight of the infrared image;
s25, acquiring a final basic layer image based on the visible light image basic layer and the infrared image basic layer;
s26, obtaining a final detail layer image based on the visible light image detail layer, the infrared image detail layer, the pixel point weight of the visible light image and the pixel point weight of the infrared image;
and S27, carrying out double-scale image reconstruction on the final basic layer image and the final detail layer image to obtain a fused image.
5. The cable trench fault detection method of claim 4, wherein in step S23, the visible light image base layer satisfies the following formula:
φ1B(x,y)=φ1(x,y)*μ(x,y)
where φ1B(x,y) is the visible light image base layer, φ1(x,y) is the visible light image, μ(x,y) is the mean filter, * denotes convolution, and (x,y) is an image pixel point;
the infrared image base layer satisfies the following formula:
φ2B(x,y)=φ2(x,y)*μ(x,y)
where φ2B(x,y) is the infrared image base layer and φ2(x,y) is the infrared image;
the visible light image detail layer satisfies the following formula:
φ1D(x,y)=φ1(x,y)−φ1B(x,y)
where φ1D(x,y) is the visible light image detail layer;
the infrared image detail layer satisfies the following formula:
φ2D(x,y)=φ2(x,y)−φ2B(x,y)
where φ2D(x,y) is the infrared image detail layer.
6. The cable trench fault detection method according to claim 5, wherein in step S24, saliency extraction is performed on the visible light image and the infrared image matched with the visible light image through mean filtering and median filtering to obtain saliency images of the visible light image and the infrared image;
wherein the saliency image of the visible light image satisfies the following formula:
ξ1(x,y)=||φμ1(x,y)-φη1(x,y)||
where ξ1(x,y) is the saliency image of the visible light image, φμ1(x,y) is the mean-filtered visible light image, and φη1(x,y) is the median-filtered visible light image;
the saliency image of the infrared image satisfies the following formula:
ξ2(x,y)=||φμ2(x,y)-φη2(x,y)||
where ξ2(x,y) is the saliency image of the infrared image, φμ2(x,y) is the mean-filtered infrared image, and φη2(x,y) is the median-filtered infrared image.
7. The cable trench fault detection method according to claim 6, wherein in step S24, the weight of the pixel points of the visible light image satisfies the following formula:
ψ1(x,y)=ξ1(x,y)/(ξ1(x,y)+ξ2(x,y))
where ψ1(x,y) is the pixel point weight of the visible light image;
the weight of the pixel points of the infrared image meets the following formula:
ψ2(x,y)=ξ2(x,y)/(ξ1(x,y)+ξ2(x,y))
where ψ2(x,y) is the pixel point weight of the infrared image.
8. The cable trench fault detection method of claim 7, wherein pixel weights of the visible light image and pixel weights of the infrared image are complementary to each other, and satisfy the following formula:
ψ1(x,y)+ψ2(x,y)=1。
9. The cable trench fault detection method of claim 8, wherein in step S25, the final base layer image satisfies the following formula:
φB(x,y)=(φ1B(x,y)+φ2B(x,y))/2
where φB(x,y) is the final base layer image, φ1B(x,y) is the visible light image base layer, and φ2B(x,y) is the infrared image base layer.
10. The cable trench fault detection method of claim 9, wherein in step S26, the final detail layer image satisfies the following formula:
φD(x,y)=ψ1(x,y)·φ1D(x,y)+ψ2(x,y)·φ2D(x,y)
where φD(x,y) is the final detail layer image, φ1D(x,y) is the visible light image detail layer, and φ2D(x,y) is the infrared image detail layer.
CN202010735272.1A 2020-07-28 2020-07-28 Cable trench fault detection method based on infrared and visible light image fusion Pending CN112001260A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010735272.1A CN112001260A (en) 2020-07-28 2020-07-28 Cable trench fault detection method based on infrared and visible light image fusion


Publications (1)

Publication Number Publication Date
CN112001260A true CN112001260A (en) 2020-11-27


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097531A (en) * 2019-05-05 2019-08-06 河海大学常州校区 A kind of isomery image co-registration detection method for unmanned plane electric inspection process
CN112541478A (en) * 2020-12-25 2021-03-23 国网吉林省电力有限公司信息通信公司 Insulator string stain detection method and system based on binocular camera
CN112734692A (en) * 2020-12-17 2021-04-30 安徽继远软件有限公司 Transformer equipment defect identification method and device
CN112966576A (en) * 2021-02-24 2021-06-15 西南交通大学 System and method for aiming insulator water washing robot based on multi-light source image
CN113095321A (en) * 2021-04-22 2021-07-09 武汉菲舍控制技术有限公司 Roller bearing temperature measurement and fault early warning method and device for belt conveyor
CN113129243A (en) * 2021-03-10 2021-07-16 同济大学 Blood vessel image enhancement method and system based on infrared and visible light image fusion
CN113269748A (en) * 2021-05-25 2021-08-17 中国矿业大学 Cable joint fault early warning system and method based on infrared and visible light image fusion
CN113420810A (en) * 2021-06-22 2021-09-21 中国民航大学 Cable trench intelligent inspection system and method based on infrared and visible light
CN113537029A (en) * 2021-07-09 2021-10-22 河北建筑工程学院 Model transfer method based on near infrared spectrum and terminal equipment
CN114937142A (en) * 2022-07-20 2022-08-23 北京智盟信通科技有限公司 Power equipment fault diagnosis model implementation method based on graph calculation
CN116973671A (en) * 2023-09-22 2023-10-31 江苏大圆电子科技有限公司 Aging early warning method and system for cable
CN117173190A (en) * 2023-11-03 2023-12-05 成都中轨轨道设备有限公司 Insulator infrared damage inspection system based on image processing

Citations (5)

Publication number Priority date Publication date Assignee Title
WO2018076732A1 (en) * 2016-10-31 2018-05-03 广州飒特红外股份有限公司 Method and apparatus for merging infrared image and visible light image
CN110472510A (en) * 2019-07-16 2019-11-19 上海电力学院 Based on infrared and visual picture electrical equipment fault detection method and assessment equipment
CN110555819A (en) * 2019-08-20 2019-12-10 中国石油大学(北京) Equipment monitoring method, device and equipment based on infrared and visible light image fusion
CN111382804A (en) * 2020-03-18 2020-07-07 长沙理工大学 Method for identifying overhead line abnormity of unbalanced sample
CN111429391A (en) * 2020-03-23 2020-07-17 西安科技大学 Infrared and visible light image fusion method, fusion system and application


Non-Patent Citations (4)

Title
BIN LIAO 等: "Fusion of Infrared-Visible Images in UE-IoT for FaultiPoint Detection Based on GAN", 《IEEE ACCESS》 *
DURGA PRASAD BAVIRISETTI 等: "Two-scale image fusion of visible and infrared images using saliency detection", 《INFRARED PHYSICS & TECHNOLOGY》 *
许埕秸: "异源图像配准融合关键技术应用研究及快速实现", 《中国优秀硕士学位论文全文数据库信息科技辑》 *
陈潮起;孟祥超;邵枫;符冉迪;: "一种基于多尺度低秩分解的红外与可见光图像融合方法", 《光学学报》 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination