CN112288761B - Abnormal heating power equipment detection method and device and readable storage medium - Google Patents
- Publication number: CN112288761B (application CN202010643380.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- visible light
- infrared
- edge
- matching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 230000002159 abnormal effect Effects 0.000 title claims abstract description 59
- 238000010438 heat treatment Methods 0.000 title claims abstract description 47
- 238000001514 detection method Methods 0.000 title claims description 43
- 230000009466 transformation Effects 0.000 claims abstract description 70
- 238000000034 method Methods 0.000 claims abstract description 58
- 230000011218 segmentation Effects 0.000 claims abstract description 28
- 238000003708 edge detection Methods 0.000 claims abstract description 22
- 238000009792 diffusion process Methods 0.000 claims description 44
- 230000008569 process Effects 0.000 claims description 30
- 230000004044 response Effects 0.000 claims description 22
- 238000013528 artificial neural network Methods 0.000 claims description 20
- 238000001914 filtration Methods 0.000 claims description 19
- 230000020169 heat generation Effects 0.000 claims description 18
- 239000011159 matrix material Substances 0.000 claims description 16
- 238000004364 calculation method Methods 0.000 claims description 14
- 238000012549 training Methods 0.000 claims description 13
- 238000012546 transfer Methods 0.000 claims description 11
- 230000004931 aggregating effect Effects 0.000 claims description 10
- 230000003287 optical effect Effects 0.000 claims description 6
- 238000012163 sequencing technique Methods 0.000 claims description 4
- 230000006870 function Effects 0.000 description 34
- 238000010586 diagram Methods 0.000 description 21
- 239000013598 vector Substances 0.000 description 12
- 238000006243 chemical reaction Methods 0.000 description 6
- 238000003331 infrared imaging Methods 0.000 description 6
- 238000003384 imaging method Methods 0.000 description 5
- 238000005516 engineering process Methods 0.000 description 3
- 230000004927 fusion Effects 0.000 description 3
- 238000012545 processing Methods 0.000 description 3
- 230000001960 triggered effect Effects 0.000 description 3
- 238000011161 development Methods 0.000 description 2
- 230000018109 developmental process Effects 0.000 description 2
- 238000005286 illumination Methods 0.000 description 2
- 230000006872 improvement Effects 0.000 description 2
- 238000010606 normalization Methods 0.000 description 2
- 230000004913 activation Effects 0.000 description 1
- 230000002776 aggregation Effects 0.000 description 1
- 238000004220 aggregation Methods 0.000 description 1
- 230000003321 amplification Effects 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 239000004020 conductor Substances 0.000 description 1
- 238000012885 constant function Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 238000013527 convolutional neural network Methods 0.000 description 1
- 230000007423 decrease Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000005611 electricity Effects 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 238000002372 labelling Methods 0.000 description 1
- 238000012423 maintenance Methods 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000010248 power generation Methods 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 238000001931 thermography Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S20/00—Management or operation of end-user stationary applications or the last stages of power distribution; Controlling, monitoring or operating thereof
- Y04S20/20—End-user application control systems
- Y04S20/242—Home appliances
Abstract
The invention provides a method and a device for detecting abnormally heated electric equipment and a readable storage medium, wherein the method for detecting the abnormally heated electric equipment comprises the following steps: performing edge detection on the infrared image and the visible light image of the power equipment to obtain an infrared edge image and a visible light edge image; detecting the characteristic points of the infrared edge image and the visible light edge image through a KAZE algorithm, and calculating the transformation parameters of a transformation model between the infrared edge image and the visible light edge image according to the matching characteristic points obtained by matching the infrared characteristic points and the visible light characteristic points obtained by detecting the characteristic points; determining a non-matching area in the visible light edge image according to the transformation parameters; and determining an abnormal heating area in the visible light image according to a mask image obtained by performing threshold segmentation on the infrared image and the non-matching area, and determining the electric equipment corresponding to the abnormal heating area as the electric equipment which generates heat abnormally. The invention improves the accuracy of detecting the abnormally heated power equipment.
Description
Technical Field
The invention relates to the technical field of power equipment abnormality detection, and in particular to a method and a device for detecting abnormally heating power equipment and a readable storage medium.
Background
The construction and maintenance of the power grid provide power and security for the improvement of comprehensive national strength and rapid social development. The stable operation of power equipment, an important component of the power grid, is an important guarantee of safe electricity use. During normal operation, the conductors of power equipment generate a certain amount of heat as current flows through them, so the equipment temperature stays within a certain range. When power equipment fails or ages, local areas may heat abnormally, and if this condition persists it can endanger the safe operation of the power system, so abnormal heating areas of power equipment need to be found and repaired in time. Existing heating detection methods rely on temperature detectors or infrared thermal imaging: corresponding instruments must be deployed across the whole power system, so the detection cost is high, blind spots are easily missed, the detection success rate is low, and technicians are needed to analyze the measurement results, all of which consumes substantial manpower and material resources and offers a low level of automation.
As the technology has developed and matured, infrared images acquired through infrared imaging and visible light images have increasingly been processed cooperatively, combining the infrared image's ability to measure temperature and resist environmental interference with the visible light image's rich detail and high resolution. This makes it possible, while detecting an abnormal heating area in the infrared image, to automatically locate the heating area in the visible light image and identify the specific equipment and components causing the heating. Cooperative processing, however, requires the infrared image and the visible light image to be registered first, and compared with the visible light image the infrared image has low resolution, severe loss of detail, and blur, while the gray-level properties of the two images differ markedly. Existing registration methods are usually based on feature points: SIFT (Scale-Invariant Feature Transform) yields low-accuracy matching results when feature points are evenly distributed, and because SIFT descriptors have multiple orientations the same feature point is easily matched more than once; SURF (Speeded-Up Robust Features) improves the operation speed compared with SIFT but does not solve the low accuracy of the matching result.
Therefore, in the process of cooperatively detecting the heating area of power equipment through the infrared image and the visible light image, the accuracy of the feature point matching result between the infrared image and the visible light image is low.
Disclosure of Invention
Based on the above current situation, the present invention provides a method, an apparatus, and a readable storage medium for detecting abnormal heat generation of an electrical device, so as to improve the accuracy of the matching result of the feature points between the infrared image and the visible light image, in order to solve the technical problem that the accuracy of the matching result of the feature points between the infrared image and the visible light image is low in the process of coordinately detecting the heat generation area of the electrical device through the infrared image and the visible light image.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
A method for detecting abnormally heating power equipment, comprising the following steps:
s100, acquiring an infrared image and a visible light image corresponding to power equipment, and performing edge detection on the infrared image and the visible light image to obtain an infrared edge image corresponding to the infrared image and a visible light edge image corresponding to the visible light image;
s200, performing feature point detection on the infrared edge image and the visible light edge image through a KAZE algorithm to obtain an infrared feature point corresponding to the infrared edge image and a visible light feature point corresponding to the visible light edge image;
s300, matching the infrared characteristic points and the visible light characteristic points to obtain matched characteristic point pairs, obtaining distance ratios corresponding to the matched characteristic point pairs, sequencing the distance ratios from small to large to obtain sequenced distance ratios, selecting a first preset number of target distance ratios from front to back in the sequenced distance ratios, and determining the matched characteristic point pairs corresponding to the target distance ratios as an initial data set;
s400, executing an internal feature point pair determining process: randomly selecting a subset containing a second preset number of matched feature point pairs in the initial data set, calculating to obtain a transformation model parameter according to the subset, and judging the matched feature point pairs except the subset in the initial data set according to the transformation model parameter to determine an internal feature point pair in the matched feature points;
s500, calculating the execution times of the internal feature point pair determination process, and determining the transformation model parameters corresponding to the most internal feature point pairs as the transformation parameters of the transformation model between the infrared edge image and the visible light edge image when the execution times are greater than the preset times;
s600, amplifying the infrared edge image according to the transformation parameters to obtain an amplified infrared edge image, determining a region to be matched in the visible light edge image according to the transformation parameters, and searching out a sub-image with the same size as the amplified infrared edge image in the region to be matched in a sliding window mode;
s700, calculating a normalized correlation coefficient corresponding to each sliding window, determining the maximum coefficient in the calculated normalized correlation coefficients, determining the area of the sub-image corresponding to the maximum coefficient as a matching edge area, and determining a non-matching area in the visible light image according to the matching edge area;
s800, performing threshold segmentation on the infrared image to obtain a mask image corresponding to the infrared image, determining an abnormal heating area in the visible light image according to the mask image and the non-matching area, and determining the electric equipment corresponding to the abnormal heating area as the electric equipment which generates heat abnormally.
Preferably, in step S100, the step of acquiring an infrared image and a visible light image corresponding to the power device includes:
the infrared image of the power equipment is acquired through the infrared camera equipment, the visible light image of the power equipment is acquired through the visible light camera equipment, wherein the infrared camera equipment and the visible light camera equipment are arranged in the same optical axis, the distance between baselines of the infrared camera equipment and the visible light camera equipment is within a preset range, and a lens of the infrared camera equipment and a lens of the visible light camera equipment are located in the same plane.
Preferably, in step S800, the step of determining an abnormal heat generation region in the visible light image according to the mask map and the non-matching region includes:
nesting the mask image in the visible light image to determine a target image area in the visible light image having a pixel value of 0;
and determining the area except the target image area and the non-matching area in the visible light image as an abnormal heating area in the visible light image.
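The combination of the mask map and the non-matching area described above can be sketched as a Boolean operation over aligned arrays. This is a minimal NumPy illustration, not code from the patent; the function name and array conventions are assumptions.

```python
import numpy as np

def abnormal_region(mask, non_match):
    """Combine an infrared threshold mask with a non-matching region mask.

    mask:      uint8 array aligned to the visible light image; pixel value 0
               marks the target image area (cold background in the infrared image).
    non_match: boolean array, True where the visible light image had no
               matching edge region.
    Returns a boolean array, True inside the candidate abnormal heating area.
    """
    background = (mask == 0)               # target image area with pixel value 0
    # the area except the target image area and the non-matching area
    return ~(background | non_match)
```

With a 2x2 toy mask, only pixels that are both hot in the infrared mask and inside a matched region survive.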
Preferably, in step S700, the normalized correlation coefficient is calculated as:

$$R(i,j)=\frac{\sum_{m=1}^{W'}\sum_{n=1}^{H'}\bigl[S_{i,j}(m,n)-\bar S_{i,j}\bigr]\bigl[T(m,n)-\bar T\bigr]}{\sqrt{\sum_{m=1}^{W'}\sum_{n=1}^{H'}\bigl[S_{i,j}(m,n)-\bar S_{i,j}\bigr]^{2}\;\sum_{m=1}^{W'}\sum_{n=1}^{H'}\bigl[T(m,n)-\bar T\bigr]^{2}}}$$

wherein $S_{i,j}$ represents the sub-image whose upper-left pixel is at coordinates $(i,j)$, $T$ represents the magnified infrared edge image, $W'$ represents the width of the magnified infrared edge image, $H'$ represents the height of the magnified infrared edge image, $(m,n)$ are the coordinates of a pixel point within the sub-image and within the magnified infrared edge image, and $\bar S_{i,j}$ and $\bar T$ are the mean values over the corresponding windows.
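The normalized correlation coefficient and the sliding-window search of steps S600–S700 can be sketched in a few lines of NumPy. This is an illustrative, unoptimized sketch (real implementations would use FFT-based or integral-image tricks); the function names are not from the patent.

```python
import numpy as np

def ncc(sub, template):
    """Zero-mean normalized correlation coefficient between a sub-image and
    the magnified infrared edge template (both W' x H' arrays)."""
    s = sub.astype(float) - sub.mean()
    t = template.astype(float) - template.mean()
    denom = np.sqrt((s * s).sum() * (t * t).sum())
    return float((s * t).sum() / denom) if denom else 0.0

def best_match(region, template):
    """Slide a window the size of the template over the region to be matched
    and return the top-left corner with the maximum coefficient."""
    h, w = template.shape
    H, W = region.shape
    best, pos = -2.0, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            r = ncc(region[i:i + h, j:j + w], template)
            if r > best:
                best, pos = r, (i, j)
    return pos, best
```

Planting the template inside an otherwise empty region recovers its position with a coefficient of 1.0, the maximum the measure can attain.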
preferably, in step S200, the process of feature point detection by the KAZE algorithm includes the steps of:
s201, nonlinear diffusion filtration: in the forming process of the nonlinear diffusion filtering, the evolution of the image brightness value L in different scales is described as the divergence of a flow equation, the divergence of the flow equation is expressed by a nonlinear partial derivative equation, and the nonlinear partial derivative equation is expressed as a formula:
wherein div represents the divergence calculation,representing a gradient operation, x d 、y d The abscissa and the ordinate of the image are represented, t represents the image scale, and a transfer function c is introduced into a diffusion function, so that the self-adaption of the nonlinear diffusion to a local image structure is possible, wherein the transfer function is represented as:
wherein,is a Gaussian smoothed image L σ A new conduction function g is obtained by the conduction function c, and the new conduction function g is expressed as:
wherein k is a contrast factor determining a diffusion level for controlling a diffusion level of an image edge, and k is a gradient L of the smoothed image σ Vertical square column70% of the graph is taken as k value;
s202, AOS: an approximate solution of a nonlinear partial differential equation in nonlinear diffusion filtering is solved using implicit discrete difference equations whose matrix form is expressed as:
wherein, A l Is in the form of a matrix of image conduction for each dimension, τ denotes the step size, L i Is the ith layer image in the multilayer image, and m' is any integer larger than 1; in a discrete difference equation, a linear system of equations is solved, and the corresponding solution of the linear system is expressed as:
wherein, I is a unit matrix with a certain dimensionality;
s203, constructing a nonlinear scale space: constructing a nonlinear scale space constructed by the AOS and the variable conduction diffusion, wherein the nonlinear scale space comprises the group number of O image scale spaces and the layer number of S image scale spaces, and the scale factor of each sub-layer is expressed as:
σ i (o,s)=σ 0 2 o+s/S ,o∈[0,...,O-1],s∈[0,...,S-1],i∈[0,...,N];
wherein σ 0 A reference value representing the size of an image, wherein N is the total number of images subjected to nonlinear diffusion filtering, N is O × S, the scale factor of each layer is pixel-by-pixel converted into time, and an expression of the scale factor of each sub-layer in time is obtained, and the expression of the scale factor of each sub-layer in time is expressed as:
wherein, t i Is shown as enteringChange time, σ i The scale relation among all layers in the nonlinear scale space model is obtained; calculating a gradient histogram of an input image to obtain a contrast factor k, and obtaining a nonlinear scale space through iteration by using AOS, wherein the nonlinear scale space is expressed as:
s204, feature point detection: the formula for calculating the response value of the scale normalized Hessian under different scales is expressed as follows:
wherein L is xx Is the second derivative of the luminance value L in the x direction, L yy Is the second derivative of the luminance value L in the y direction, L xy Is the mixed second derivative of the brightness value L in the x direction and the y direction, sigma is the scale coefficient of the layer where the image is located, and the linear scale space L is used i In the filtered image, at different scales sigma i The responses are analyzed below, and except that i-0 and i-N, response extrema are found in all the filtered images, resulting in feature points.
Preferably, in the step S300, the step of matching the infrared characteristic points and the visible light characteristic points to obtain matched characteristic point pairs includes:
sequentially determining each infrared feature point in the infrared edge image as a target feature point, calculating the distance between the target feature point and each visible light feature point in the visible light edge image through an Euclidean distance formula, and searching a visible light first feature point with the minimum distance to the target feature point and a visible light second feature point with the second smallest distance to the target feature point in the visible light edge image;
determining the distance between the target characteristic point and the first visible light characteristic point as a first distance, and determining the distance between the target characteristic point and the second visible light characteristic point as a second distance;
and if the distance ratio of the first distance to the second distance is within a preset range, determining the target characteristic point and the first visible light characteristic point as a matching characteristic point pair to obtain the matching characteristic point pair in the infrared characteristic point and the visible light characteristic point.
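The nearest/second-nearest distance-ratio test above (a Lowe-style ratio test) can be sketched as follows. This is an illustrative NumPy sketch over descriptor arrays; the function name, the 0.8 default threshold, and the returned tuple layout are assumptions, not from the patent.

```python
import numpy as np

def ratio_match(ir_desc, vis_desc, ratio=0.8):
    """For each infrared descriptor, find the nearest and second-nearest
    visible light descriptors by Euclidean distance; keep the pair only when
    the distance ratio d1/d2 falls below the threshold.  Returns a list of
    (ir_index, vis_index, distance_ratio) sorted from small to large ratio,
    as in the sorting of step S300."""
    pairs = []
    for i, d in enumerate(ir_desc):
        dists = np.linalg.norm(vis_desc - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]   # first and second nearest
        if dists[j2] > 0 and dists[j1] / dists[j2] < ratio:
            pairs.append((i, int(j1), float(dists[j1] / dists[j2])))
    return sorted(pairs, key=lambda p: p[2])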
Preferably, in step S300, the step of matching the infrared characteristic points and the visible light characteristic points to obtain matched characteristic point pairs and obtaining distance ratios corresponding to the matched characteristic point pairs includes:
matching the infrared characteristic points and the visible light characteristic points to obtain matched characteristic point pairs, and deleting wrong characteristic point pairs in the characteristic point pairs by using an RANSAC algorithm to obtain target characteristic point pairs;
and acquiring the corresponding distance ratio of the target characteristic point pair.
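The inner-feature-point determination of steps S400–S500 is a RANSAC loop: sample a minimal subset, fit a transformation model, and keep the model supported by the most pairs. The sketch below uses an affine model (3-pair minimal sample) purely for illustration; the patent does not fix the model family, and the function names and thresholds are assumptions.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform (6 parameters) from point pairs;
    dst ~ [src, 1] @ M with M of shape 3x2."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def ransac_affine(src, dst, n_iter=200, thresh=2.0, seed=0):
    """Repeat the inner-feature-point determination process: randomly select
    a minimal subset (3 pairs), compute the transformation model parameters,
    judge the remaining pairs by reprojection error, and keep the model with
    the most inner (inlier) feature point pairs."""
    rng = np.random.default_rng(seed)
    A = np.hstack([src, np.ones((len(src), 1))])
    best_M, best_inl = None, np.zeros(len(src), bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(A @ M - dst, axis=1)
        inl = err < thresh
        if inl.sum() > best_inl.sum():
            best_M, best_inl = M, inl
    return best_M, best_inl
```

On synthetic pairs where the first four are gross outliers, the loop recovers the clean model and rejects the corrupted pairs.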
Preferably, in step S100, the step of performing edge detection on the infrared image and the visible light image to obtain an infrared edge image corresponding to the infrared image and a visible light edge image corresponding to the visible light image includes:
inputting the infrared image and the visible light image into a neural network obtained by training based on an HED algorithm to obtain an infrared edge map result corresponding to the infrared image and a visible light edge map result corresponding to the visible light image; and aggregating the infrared edge image result to obtain an infrared edge image corresponding to the infrared image, and aggregating the visible light edge image result to obtain a visible light edge image corresponding to the visible light image.
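The aggregation of the edge-map results can be sketched as a weighted average of the network's side outputs followed by thresholding. This is an assumed, minimal interpretation of "aggregating" (HED's fusion layer learns the weights; equal weights are used here), with illustrative names.

```python
import numpy as np

def aggregate_side_outputs(side_outputs, weights=None, thresh=0.5):
    """Fuse the multi-scale side outputs of an HED-style network into one
    binary edge image by weighted averaging and thresholding."""
    stack = np.stack([s.astype(float) for s in side_outputs])
    if weights is None:
        weights = np.full(len(side_outputs), 1.0 / len(side_outputs))
    fused = np.tensordot(weights, stack, axes=1)   # weighted average
    return (fused >= thresh).astype(np.uint8)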
The present invention also provides an electrical equipment detection device for abnormal heating, comprising:
the acquisition module is used for acquiring an infrared image and a visible light image corresponding to the power equipment; the edge detection module is used for carrying out edge detection on the infrared image and the visible light image to obtain an infrared edge image corresponding to the infrared image and a visible light edge image corresponding to the visible light image; the characteristic point detection module is used for detecting the characteristic points of the infrared edge image and the visible light edge image through a KAZE algorithm to obtain the infrared characteristic points corresponding to the infrared edge image and the visible light characteristic points corresponding to the visible light edge image; the matching module is used for matching the infrared characteristic points and the visible light characteristic points to obtain matched characteristic point pairs; the calculation module is used for calculating the transformation parameters of a transformation model between the infrared edge image and the visible light edge image according to the matching feature point pairs; the judging module is used for determining a matching edge area in the visible light edge image according to the transformation parameters and determining a non-matching area in the visible light image according to the matching edge area; the segmentation module is used for carrying out threshold segmentation on the infrared image to obtain a mask image corresponding to the infrared image; the judging module is further used for determining an abnormal heating area in the visible light image according to the mask image and the non-matching area, and determining the electric equipment corresponding to the abnormal heating area as the electric equipment which generates heat abnormally.
The present invention also provides a computer-readable storage medium having stored thereon a detection program which, when executed by a processor, implements the steps of the electrical equipment detection method of abnormal heat generation as described above.
[ Effects of the invention ]
The present invention performs edge detection on an infrared image and a visible light image of power equipment to obtain an infrared edge image corresponding to the infrared image and a visible light edge image corresponding to the visible light image; performs feature point detection on the infrared edge image and the visible light edge image through the KAZE algorithm to obtain infrared feature points corresponding to the infrared edge image and visible light feature points corresponding to the visible light edge image; matches the infrared feature points and the visible light feature points to obtain matched feature point pairs; calculates the transformation parameters of a transformation model between the infrared edge image and the visible light edge image according to the matched feature point pairs; determines a matching edge area in the visible light edge image according to the transformation parameters and a non-matching area in the visible light image according to the matching edge area; performs threshold segmentation on the infrared image to obtain a mask map corresponding to the infrared image; determines an abnormal heating area in the visible light image according to the mask map and the non-matching area; and determines the power equipment corresponding to the abnormal heating area as the abnormally heating power equipment.
The KAZE algorithm is used to coarsely register the infrared edge image and the visible light edge image, and the transformation parameters of the transformation model between them are then used for fine template matching, which improves the detection accuracy for abnormally heating power equipment. Combining KAZE with transformation-model matching both overcomes the drawbacks of template matching alone, which is slow and not multi-scale, and solves the low accuracy of KAZE for infrared-to-visible matching, further improving the success rate of matching the infrared edge image to the visible light edge image. Moreover, directly segmenting the power equipment in the visible light image may be affected by illumination, color, texture, and the like, producing an incomplete segmentation; for this reason the invention segments only the infrared image of the power equipment, leaves the visible light image unsegmented, nests the segmentation result of the infrared image into the visible light image, and then determines the abnormally heating power equipment. This suffers less interference and gives an accurate segmentation result, further improving the detection accuracy for abnormally heating power equipment.
Other advantages of the present invention will be described in the detailed description, which is provided by the technical features and technical solutions.
Drawings
Preferred embodiments according to the present invention will be described below with reference to the accompanying drawings. In the figure:
fig. 1 is a flow chart of a method for detecting an abnormal heat generation power device according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of a positional relationship between an infrared imaging device and a visible light imaging device in an embodiment of the present invention;
FIG. 3 is a diagram illustrating an embodiment of finding local maxima of an image according to a normalized Hessian matrix of different scales;
FIG. 4 is a schematic structural diagram of an abnormal heat generation power equipment detecting device according to the present invention;
fig. 5 is a schematic diagram of searching a sub-image in the embodiment of the present invention, where a in fig. 5 is a schematic diagram of an enlarged infrared edge image, and B in fig. 5 is a schematic diagram of a sub-image covered by a search window, which is searched in an area to be matched by using the enlarged infrared edge image as a template;
fig. 6 is a schematic diagram of segmentation of an abnormal heating area in a visible light image according to an embodiment of the present invention, where C in fig. 6 is a schematic diagram of a result diagram obtained after threshold segmentation is performed on an infrared image, D in fig. 6 is a schematic diagram of a visible light image, and E in fig. 6 is a schematic diagram of determination of an abnormal heating area in a visible light image.
Detailed Description
The preferred embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that the step numbers (letter or number numbers) are used to refer to some specific method steps in the present invention only for the purpose of convenience and brevity of description, and the order of the method steps is not limited by letters or numbers in any way. It will be clear to a person skilled in the art that the order of the steps of the method in question, as determined by the technology itself, should not be unduly limited by the presence of step numbers.
FIG. 1 is a flow chart of the method for detecting an abnormal heat generation power device according to an embodiment of the present invention.
Step S100, acquiring an infrared image and a visible light image corresponding to the power equipment, and performing edge detection on the infrared image and the visible light image to obtain an infrared edge image corresponding to the infrared image and a visible light edge image corresponding to the visible light image.
In this embodiment, an infrared image and a visible light image corresponding to the power equipment are acquired, where the power equipment includes power generation equipment and power supply equipment. Specifically, when an acquisition instruction is detected, the infrared image and the visible light image corresponding to the power equipment are acquired according to the acquisition instruction. The acquisition instruction may be triggered by a preset timing task, triggered manually by a user in a system corresponding to the power equipment as needed, or triggered by the user on a held mobile terminal and then sent by the mobile terminal to the system corresponding to the power equipment. After the infrared image and the visible light image corresponding to the power equipment are obtained, edge detection is performed on them to obtain an infrared edge image corresponding to the infrared image and a visible light edge image corresponding to the visible light image. Specifically, edge detection may be performed on the infrared image and the visible light image using the Canny edge detection algorithm, the Sobel operator, the Prewitt operator, or the like.
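As an illustrative sketch (not part of the claimed method), the Sobel-operator alternative mentioned above can be expressed in Python/NumPy as follows; the function name and the threshold parameter are hypothetical:

```python
import numpy as np

def sobel_edges(img, threshold=0.5):
    """Gradient-magnitude edge map via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)
    # Keep only responses above a fraction of the strongest gradient.
    return (mag > threshold * mag.max()).astype(np.uint8)

# A vertical step edge: the detector should fire only near the step.
img = np.zeros((8, 10))
img[:, 5:] = 1.0
edges = sobel_edges(img)
```

In practice a library implementation (e.g. an optimized convolution) would replace the explicit loops; the sketch only shows the operator itself.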
Further, step S100 includes:
Step a, acquiring an infrared image of the power equipment through an infrared camera device, and acquiring a visible light image of the power equipment through a visible light camera device, wherein the infrared camera device and the visible light camera device are arranged on the same optical axis, the baseline distance between the infrared camera device and the visible light camera device is within a preset range, and the lens of the infrared camera device and the lens of the visible light camera device are in the same plane.
In this embodiment, an infrared image corresponding to the power equipment is acquired by the infrared imaging device, and a visible light image of the power equipment is acquired by the visible light imaging device, where one infrared imaging device and one visible light imaging device may correspond to one power device or to a plurality of power devices. Specifically, referring to FIG. 2, FIG. 2 is a schematic diagram of the positional relationship between the infrared imaging device and the visible light imaging device in an embodiment of the present invention. As can be seen from FIG. 2, the infrared imaging device and the visible light imaging device are disposed on the same optical axis, and the baseline distance between them is within a preset range, where the preset range may be set according to specific needs, for example 5 cm (centimeters) to 10 cm, or 3 cm to 11 cm, etc. The lens of the infrared imaging device and the lens of the visible light imaging device are in the same plane. It should be noted that, in this embodiment, the infrared image and the visible light image are registered under the condition that the field angle of the infrared imaging device is smaller than that of the visible light imaging device, so that the infrared image corresponds to a part of the visible light image; accordingly, the resolution of the visible light imaging device is greater than that of the infrared imaging device. For example, the resolution of the infrared imaging device may be set to 320 × 240, and the resolution of the visible light imaging device may be set to 2048 × 1536.
Further, step S100 includes:
Step b, inputting the infrared image and the visible light image into a neural network trained based on the HED algorithm, to obtain an infrared edge map result corresponding to the infrared image and a visible light edge map result corresponding to the visible light image.
Step d, aggregating the infrared edge map result to obtain an infrared edge image corresponding to the infrared image, and aggregating the visible light edge map result to obtain a visible light edge image corresponding to the visible light image.
Specifically, edge detection is performed on the infrared image and the visible light image through the HED (Holistically-Nested Edge Detection) algorithm. The specific process of obtaining the infrared edge image and the visible light edge image is as follows: a neural network is trained so that it learns to generate edge maps close to the ground truth (labeled) edge images, thereby obtaining the corresponding edge images.
The process of obtaining the neural network based on HED algorithm training comprises the following steps: obtain a training image set {(X_n, Y_n), n = 1, …, N}, wherein the sample X_n represents an image of the power equipment: when the neural network is used to obtain infrared edge map results, X_n represents an infrared image of the power equipment; when the neural network is used to obtain visible light edge map results, X_n represents a visible light image of the power equipment. Y_n represents the binary edge image corresponding to the sample X_n, i.e. the ground truth edge image corresponding to the visible light image or the infrared image. Note that, since each image is considered independently as a whole during training, the subscript n is dropped below. Let W_0 denote the set of parameters of all standard network layers in the neural network. Since there are M side-output layers after the convolutional layers in the neural network, and each side-output layer is associated with one classifier, the weights of the side-output layers can be expressed as w = (w^(1), …, w^(M)). The objective function corresponding to the neural network can then be expressed by formula (1):

Formula (1): Z_side(W_0, w) = Σ_{m=1}^{M} α_m · z_side^(m)(W_0, w^(m));
wherein z_side^(m) represents the loss function of the m-th side-output layer, w^(m) is the weight of the m-th side-output layer of the HED, and α_m is a coefficient balancing the contribution of each side-output layer's output to the final loss.
It will be appreciated that, in training the neural network, the loss function uses the power equipment image X_n = {x_j, j = 1, …, |X_n|} and the binary edge image Y_n = {y_j, j = 1, …, |X_n|}, where y_j ∈ {0, 1} ranges over all pixels of the binary edge image. It should be noted that, in a natural image, edge pixels and non-edge pixels are severely unbalanced: about 90% of the pixels are non-edge pixels. A class-balance weight β is therefore introduced on a per-pixel basis to offset the imbalance between edge pixels and non-edge pixels. It is understood that the natural image is the original captured image. Specifically, the class-balanced cross-entropy loss function can be expressed by formula (2):

Formula (2): z_side^(m)(W_0, w^(m)) = −β Σ_{j∈Y+} log Pr(y_j = 1 | X; W_0, w^(m)) − (1 − β) Σ_{j∈Y−} log Pr(y_j = 0 | X; W_0, w^(m));

where X denotes an image of the power equipment, β = |Y−|/|Y| and 1 − β = |Y+|/|Y|, with |Y+| the number of edge pixels and |Y−| the number of non-edge pixels in the labeled data (ground truth); Pr(y_j = 1 | X; W_0, w^(m)) = δ(a_j^(m)) is computed with the activation function δ on the response a_j^(m) at pixel j. At each side-output layer, an edge map prediction Ŷ_side^(m) = δ(Â_side^(m)) is made, where Â_side^(m) denotes the side-output responses of layer m.
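As an illustrative sketch only, the class-balanced cross-entropy of formula (2) can be computed in NumPy as follows (the function name is hypothetical; a real HED implementation would apply this inside a deep-learning framework):

```python
import numpy as np

def class_balanced_bce(pred, gt, eps=1e-12):
    """Class-balanced cross-entropy: beta = |Y-|/|Y| weights the (rare)
    edge pixels, 1 - beta = |Y+|/|Y| weights the non-edge pixels."""
    gt = gt.astype(bool)
    n_pos = gt.sum()             # |Y+|: edge pixels
    n_neg = gt.size - n_pos      # |Y-|: non-edge pixels
    beta = n_neg / gt.size
    pos_loss = -beta * np.log(pred[gt] + eps).sum()
    neg_loss = -(1 - beta) * np.log(1 - pred[~gt] + eps).sum()
    return pos_loss + neg_loss

gt = np.zeros((10, 10))
gt[5, :] = 1                              # 10 edge pixels out of 100
pred = np.where(gt == 1, 0.9, 0.1)        # a confident prediction
loss = class_balanced_bce(pred, gt)
```

The weighting makes the loss of the few edge pixels comparable to that of the many non-edge pixels, which is the imbalance-offsetting effect described above.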
A weighted-fusion layer is added to the neural network, and the fusion weights are learned during training so that the side-outputs are used directly. The loss function of the fusion layer can be expressed by formula (3):

Formula (3): Z_fuse(W_0, w, h) = Dist(Y, Ŷ_fuse), with Ŷ_fuse = δ(Σ_{m=1}^{M} h_m · Â_side^(m));

wherein h = (h_1, …, h_M) represents the fusion weights, and Dist(·,·) is the distance between the fused predicted annotation data and the actual annotation data; in this embodiment Dist(·,·) is set to the cross-entropy loss. Finally, the objective function is minimized by the standard stochastic gradient descent algorithm with back-propagation; specifically, the minimization can be expressed by formula (4):

Formula (4): (W_0, w, h)* = argmin(Z_side(W_0, w) + Z_fuse(W_0, w, h)).
It should be noted that the objective function minimization is obtained by training the neural network based on the HED algorithm. The infrared image and the visible light image are input into the neural network obtained by HED-based training to obtain the infrared edge map result corresponding to the infrared image and the visible light edge map result corresponding to the visible light image; see formula (5):

Formula (5): (Ŷ_side^(1), …, Ŷ_side^(M), Ŷ_fuse) = CNN(X, (W_0, w, h)*);

wherein CNN(·) represents the output of the neural network, which is the edge map result, and X denotes the image input to the neural network. When the image input into the neural network is an infrared image, the output is an infrared edge map result; when the image input into the neural network is a visible light image, the output is a visible light edge map result.
After the infrared edge map result and the visible light edge map result are obtained, the infrared edge map result is aggregated to obtain the infrared edge image corresponding to the infrared image, and the visible light edge map result is aggregated to obtain the visible light edge image corresponding to the visible light image. Specifically, the aggregation process can be represented by formula (6):

Formula (6): Ŷ = Average(Ŷ_fuse, Ŷ_side^(1), …, Ŷ_side^(M)).

As can be seen from formula (6), the edge image is obtained by averaging the outputs. In this embodiment, for convenience of description, the aggregated infrared edge image is denoted Ŷ_IR and the aggregated visible light edge image is denoted Ŷ_VIS.
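The aggregation of formula (6) amounts to a pixel-wise average, as in this minimal sketch (the deterministic arrays stand in for real side outputs):

```python
import numpy as np

# M side-output edge maps plus the fused map, each with values in [0, 1].
side_outputs = [np.full((4, 4), v) for v in (0.1, 0.2, 0.3, 0.4, 0.5)]
fused = np.full((4, 4), 0.6)

# Formula (6): the final edge image is the pixel-wise average of the
# fused output and all side outputs.
edge_image = np.mean(np.stack(side_outputs + [fused]), axis=0)
```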
It should be noted that, because an infrared image exhibits thermal diffusion and its edges are relatively blurred, this embodiment performs edge detection on the infrared image and the visible light image using HED so as to ignore weak edges in both images. This mitigates the problem of blurred natural-image and target edges and improves the efficiency of abnormal heating power equipment detection.
And S200, detecting feature points of the infrared edge image and the visible light edge image through a KAZE algorithm to obtain infrared feature points corresponding to the infrared edge image and visible light feature points corresponding to the visible light edge image.
After the infrared edge image and the visible light edge image are obtained, feature point detection is performed on them through the KAZE algorithm to obtain the infrared feature points corresponding to the infrared edge image and the visible light feature points corresponding to the visible light edge image. The KAZE algorithm, proposed by Pablo Alcantarilla et al., is a feature point extraction algorithm with strong robustness and is regarded as an improvement over linear scale-space feature algorithms.
Specifically, the process of feature point detection by the KAZE algorithm includes the following steps:
Step S201, nonlinear diffusion filtering: in nonlinear diffusion filtering, the evolution of the image brightness value L across scales is described as the divergence of a flow function, expressed by the nonlinear partial differential equation:

∂L/∂t = div(c(x_d, y_d, t) · ∇L);

wherein div represents the divergence operator, ∇ represents the gradient operator, x_d and y_d are the abscissa and ordinate of the image, and t represents the image scale. Introducing a conduction function c into the diffusion equation makes the nonlinear diffusion adaptive to the local image structure, where the conduction function is expressed as:

c(x_d, y_d, t) = g(|∇L_σ(x_d, y_d, t)|);

wherein ∇L_σ is the gradient of the Gaussian-smoothed image L_σ. Through the conduction function c, new conduction functions g are obtained (their specific forms are given below); the contrast factor k appearing in them determines the diffusion level, controls the diffusion at image edges, and is taken as the value at 70% of the histogram of the gradient of the smoothed image L_σ.
It should be noted that, in nonlinear diffusion filtering, the evolution of the image brightness values L at different scales is described as the divergence of a flow function, and this process can be expressed by a nonlinear partial differential equation. The larger t is, the simpler the corresponding image representation becomes.
Through the conduction function c, new conduction functions g are obtained, including g_1 and g_2:

g_1 = exp(−|∇L_σ|² / k²), g_2 = 1 / (1 + |∇L_σ|² / k²);

wherein k is the contrast factor that determines the diffusion level, used to control the diffusion at image edges; k is taken as the value at 70% of the histogram of the gradient of the smoothed image L_σ. g_1 promotes high-contrast edges, while g_2 promotes wide flat regions. The KAZE algorithm uses g_2.
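The conduction functions and the 70%-histogram contrast factor can be sketched as follows (an illustrative NumPy rendering; a production implementation would build a proper histogram rather than call a percentile directly):

```python
import numpy as np

def conduction_g1(grad_mag, k):
    """g1 = exp(-|grad|^2 / k^2): favours high-contrast edges."""
    return np.exp(-(grad_mag / k) ** 2)

def conduction_g2(grad_mag, k):
    """g2 = 1 / (1 + |grad|^2 / k^2): favours wide flat regions (used by KAZE)."""
    return 1.0 / (1.0 + (grad_mag / k) ** 2)

def contrast_factor(grad_mag, percentile=70):
    """k taken at the 70% point of the nonzero gradient-magnitude histogram."""
    return np.percentile(grad_mag[grad_mag > 0], percentile)

k = contrast_factor(np.array([1.0, 2.0, 3.0, 4.0, 5.0,
                              6.0, 7.0, 8.0, 9.0, 10.0]))
```

Both functions equal 1 at zero gradient (full diffusion in flat regions) and decay as the gradient grows, which is what preserves edges during the diffusion.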
Step S202, AOS (Additive Operator Splitting): an approximate solution of the nonlinear partial differential equation in nonlinear diffusion filtering is found using a semi-implicit discrete difference scheme, whose matrix form is expressed as:

(L^(i+1) − L^i) / τ = Σ_{l=1}^{m'} A_l(L^i) · L^(i+1);

wherein A_l is the matrix form of the image conduction for each dimension, τ denotes the step size, L^i is the i-th layer image in the multilayer image, and m' is the number of dimensions (an integer greater than 1). The discrete difference equation amounts to a linear system of equations, whose solution is expressed as:

L^(i+1) = (I − τ Σ_{l=1}^{m'} A_l(L^i))^(−1) · L^i;

wherein I is the identity matrix of the corresponding dimension.
The nonlinear partial differential equation in nonlinear diffusion filtering has no analytic solution, so a numerical analysis method must be used to find an approximate solution; in particular, this embodiment uses the semi-implicit discrete difference scheme above, in which A_l is the matrix form of the image conduction for each dimension, i.e. the conductivity of the image along the different dimensions. It should be noted that the discrete difference scheme is stable for any step size and creates a discrete nonlinear diffusion scale space. The method requires solving a linear system of equations whose matrix is tridiagonal and diagonally dominant, so the system can be solved quickly and efficiently by the Thomas algorithm.
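The Thomas algorithm mentioned above is the standard O(n) solver for tridiagonal systems; a minimal sketch, demonstrated on a small diagonally dominant system of the kind each 1-D AOS sweep produces (the function name is hypothetical):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (a[0] unused),
    b = main diagonal, c = super-diagonal (c[-1] unused), d = RHS."""
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):            # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

a = np.array([0.0, 1.0, 1.0, 1.0])
b = np.array([4.0, 4.0, 4.0, 4.0])
c = np.array([1.0, 1.0, 1.0, 0.0])
d = np.array([5.0, 6.0, 6.0, 5.0])
x = thomas_solve(a, b, c, d)
```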
Step S203, constructing the nonlinear scale space: the nonlinear scale space is constructed by AOS and variable-conductance diffusion, and comprises O groups (octaves) of the image scale space and S layers (sub-levels) of the image scale space. The scale factor of each sub-layer is expressed as:

σ_i(o, s) = σ_0 · 2^(o + s/S), o ∈ [0, …, O − 1], s ∈ [0, …, S − 1], i ∈ [0, …, N];

wherein σ_0 is a reference value of the image scale, and N = O × S is the total number of images subjected to nonlinear diffusion filtering. The scale factor of each layer is in pixel units; converting the unit from pixels to time gives the expression of each sub-layer's scale factor in time units:

t_i = σ_i² / 2;

wherein t_i is the evolution time, and the σ_i give the scale relation among the layers of the nonlinear scale space model. The gradient histogram of the input image is calculated to obtain the contrast factor k, and the nonlinear scale space is then obtained iteratively using AOS, expressed as:

L^(i+1) = (I − (t_{i+1} − t_i) · Σ_{l=1}^{m'} A_l(L^i))^(−1) · L^i.
It should be noted that the nonlinear scale space constructed by the AOS method and variable-conductance diffusion comprises O octaves (the number of groups of the image scale space) and S sub-levels (the number of layers of the image scale space), and the scale space always operates on the original image rather than downsampling at each layer. The scale factor of each layer is in pixel units, i.e. the size of the Gaussian template is in pixels; the unit must be converted from pixels to time, yielding the expression of each sub-layer's scale factor in time units. In this embodiment, the mapping σ_i → t_i is used to obtain a set of evolution times from the established nonlinear scale space. In the nonlinear scale space, each filtered image at time t_i does not correspond to convolving the original image with a Gaussian of standard deviation σ_i. However, by setting the conduction function g to 1 (i.e. a constant function), the equation of the Gaussian scale space is recovered, giving an effect equivalent to the Gaussian scale space. The conduction function tends to be constant for most image pixels, except at strong image edges corresponding to object boundaries, which keep evolving in the nonlinear scale space. For an input image, Gaussian convolution is first performed to reduce the influence of noise and other artifacts, the gradient histogram of the image is calculated to obtain the contrast factor k, and the nonlinear scale space is then obtained iteratively using AOS.
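The σ_i → t_i mapping above can be sketched as follows; the values σ_0 = 1.6 and O = S = 4 are illustrative assumptions, not values fixed by this embodiment:

```python
import numpy as np

def evolution_times(sigma0=1.6, octaves=4, sublevels=4):
    """Per-sublevel scales sigma_i(o, s) = sigma0 * 2**(o + s/S) and their
    evolution times t_i = sigma_i**2 / 2 (pixel units -> time units)."""
    sigmas = np.array([sigma0 * 2 ** (o + s / sublevels)
                       for o in range(octaves) for s in range(sublevels)])
    times = sigmas ** 2 / 2.0
    return sigmas, times

sigmas, times = evolution_times()
```

The resulting list of evolution times is what drives the successive AOS iterations, with step t_{i+1} − t_i between consecutive filtered images.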
Step S204, feature point detection: the response value of the scale-normalized Hessian at different scales is calculated as:

L_Hessian = σ² · (L_xx · L_yy − L_xy²);

wherein L_xx is the second derivative of the brightness value L in the x direction, L_yy is the second derivative of L in the y direction, L_xy is the mixed second derivative of L in the x and y directions, and σ is the scale factor of the layer in which the image lies. For the nonlinear scale space L^i, the responses of the filtered images at the different scales σ_i are analyzed, and response extrema are searched in all filtered images except i = 0 and i = N to obtain the feature points.
It should be noted that the nonlinear invariant feature algorithm KAZE searches for local maxima of the image according to the normalized Hessian matrices at different scales; therefore, the responses of the scale-normalized Hessian at different scales are calculated. For multi-scale feature detection, the set of differential operators needs to be scale-normalized, since the magnitude of spatial derivatives generally decreases with increasing scale. For a given nonlinear scale space L^i, the responses of the filtered images at the different scales σ_i are analyzed. The maximum is sought at each scale and spatial position and, except for i = 0 and i = N, extrema are searched in all filtered images. As shown in FIG. 3, a rectangular window of size σ_i × σ_i is slid over the current filtered image i, the previous filtered image i − 1, and the next filtered image i + 1 to find extrema. To speed up the extremum search, the response is first examined on a window of 3 × 3 pixels, so that non-extremal responses can be discarded quickly; the extremal response values found by sliding the rectangular window are the feature points, and finally the positions of the feature points are estimated with sub-pixel accuracy. It will be appreciated that, in other embodiments, responses over windows of other sizes may be examined to quickly discard non-extremal responses, for example a window of 4 × 4 pixels. In this embodiment, a Scharr filter of size 3 × 3 is used in the calculation of the first and second derivatives in order to better approximate rotation invariance. Although derivatives at multiple scales need to be calculated for each pixel, the results are saved and reused in the feature description step to reduce the amount of calculation.
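A minimal sketch of the scale-normalized determinant-of-Hessian response, using simple finite differences instead of the Scharr filter of the embodiment; on a Gaussian blob the response peaks at the blob centre:

```python
import numpy as np

def hessian_response(L, sigma):
    """Scale-normalised determinant of the Hessian:
    sigma^2 * (Lxx * Lyy - Lxy^2), derivatives by finite differences."""
    Lr, Lc = np.gradient(L)
    Lrr = np.gradient(Lr, axis=0)
    Lrc = np.gradient(Lr, axis=1)
    Lcc = np.gradient(Lc, axis=1)
    return sigma ** 2 * (Lrr * Lcc - Lrc ** 2)

# A Gaussian blob centred at (20, 20): the response should peak there.
y, x = np.mgrid[0:41, 0:41]
blob = np.exp(-((x - 20) ** 2 + (y - 20) ** 2) / (2 * 5.0 ** 2))
resp = hessian_response(blob, sigma=5.0)
peak = np.unravel_index(np.argmax(resp), resp.shape)
```

In the full algorithm this response is computed per layer of the nonlinear scale space and compared across the 3 × 3 neighbourhoods of adjacent layers, as described above.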
Further, the process of feature point detection by the KAZE algorithm includes a feature description step. Feature description: the principal direction is found using a neighborhood of the feature point, the neighborhood being a circle of radius 6σ_i, where σ_i is the scale at which the feature point lies. In other embodiments, the radius of the neighborhood may also be set according to specific needs, such as 8σ_i or 5σ_i. Gradients in the x and y directions are calculated for the pixels in the neighborhood and weighted with a Gaussian (a Gaussian distribution centered on the feature point), so that sampling points (pixels) closer to the feature point carry more importance. A sector of 60 degrees is rotated as a sliding window over the circular neighborhood, the sum of the vectors within the sector is calculated, and the longest such vector is selected as the principal direction. In other embodiments, the rotational scanning may be performed with a sector of 80 degrees or 90 degrees as the sliding window.
The M-SURF descriptor is used to accommodate the nonlinear space. For a feature point of scale σ_i, the partial derivatives in the x and y directions are calculated for the pixels in its 24σ_i × 24σ_i rectangular neighborhood. The neighborhood is divided into 4 × 4 sub-regions, each of size 9σ_i × 9σ_i, with an overlap of 2σ_i between adjacent sub-regions. Each sub-region is assigned a Gaussian kernel of σ_1 = 2.5σ_i; after Gaussian weighting, a 4-dimensional description vector is obtained for each sub-region, which can be represented by formula (16):

Formula (16): d_v = (ΣL_x, ΣL_y, Σ|L_x|, Σ|L_y|).

Once the 4-dimensional description vectors are obtained, the 4-dimensional vectors of the sub-regions are further weighted by a second Gaussian (σ_2 = 1.5σ_i) defined over a 4 × 4 mask and then normalized. After normalization, the description vector of each feature point is 64-dimensional; at this point, all points in the rectangular region are rotated to the principal direction and the gradients are calculated relative to it, so that the resulting 64-dimensional description vector is rotation-invariant. It is understood that, in this embodiment, the description vectors of the infrared feature points and the visible light feature points obtained by the KAZE algorithm are 64-dimensional.
Step S300, matching the infrared characteristic points and the visible light characteristic points to obtain matched characteristic point pairs, obtaining distance ratios corresponding to the matched characteristic point pairs, sequencing the distance ratios from small to large to obtain sequenced distance ratios, selecting a first preset number of target distance ratios from front to back in the sequenced distance ratios, and determining the matched characteristic point pairs corresponding to the target distance ratios as an initial data set.
After the infrared feature points and the visible light feature points are obtained, they are matched to obtain the matched feature points among them; it can be understood that a matched infrared feature point and visible light feature point form a matching feature point pair. After the matching feature point pairs are obtained, the distance ratio corresponding to each matching feature point pair is obtained, the distance ratios are sorted in ascending order, a first preset number of target distance ratios are selected from the front of the sorted list, and the matching feature point pairs corresponding to the target distance ratios are determined as the initial data set. It is understood that the distance ratio corresponding to a matching feature point pair is the nearest-neighbor distance ratio calculated by formula (18).
Specifically, if the matching feature point pairs form a sample set Sample, all matching feature point pairs ((x_i, y_i), (x′_i, y′_i)) in Sample are sorted by distance ratio in ascending order, and the first Q pairs of matching feature point pairs are selected as the initial data set G, wherein (x_i, y_i) represents the infrared feature point and (x′_i, y′_i) represents the visible light feature point of a matching pair, and Q represents the first preset number. The size of the first preset number is not specifically limited in this embodiment and can be set by the user as needed.
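The selection of the initial data set G can be sketched as follows; the coordinates and ratio values are made up purely for illustration:

```python
import numpy as np

# Each matched pair carries its nearest/second-nearest distance ratio
# d_er / d_fr; smaller ratios indicate more distinctive matches.
pairs = [((10, 12), (20, 24)), ((3, 5), (6, 10)),
         ((7, 1), (14, 2)), ((9, 9), (18, 18))]   # (infrared, visible)
ratios = np.array([0.81, 0.35, 0.62, 0.50])

Q = 2                                  # the first preset number
order = np.argsort(ratios)             # ascending distance ratios
initial_set = [pairs[i] for i in order[:Q]]
```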
Step S400, executing an inner feature point pair determination process: and randomly selecting a subset containing a second preset number of matched feature point pairs in the initial data set, calculating to obtain a transformation model parameter according to the subset, and judging the matched feature point pairs except the subset in the initial data set according to the transformation model parameter so as to determine the internal feature point pairs in the matched feature points.
After the initial data set is obtained, the inner feature point pair determination process is executed according to the initial data set. Specifically, the process is as follows: a subset containing a second preset number of matching feature point pairs is randomly selected from the initial data set, i.e. the subset contains the second preset number of matching feature point pairs; the second preset number may be set to 4, 5, 8, etc. After the subset is obtained, the transformation model parameters are calculated from the subset, and the matching feature point pairs outside the subset in the initial data set are judged according to the transformation model parameters to determine the inner feature point pairs among the matching feature point pairs. Specifically, if the distance between the point obtained by applying the transformation model to (x_i, y_i) and the point (x′_i, y′_i) is still less than a set threshold, the matching pair ((x_i, y_i), (x′_i, y′_i)) is an inner feature point pair; if that distance is greater than or equal to the set threshold, the pair is an outer feature point pair. The size of the threshold may be set according to specific needs and is not limited in this embodiment.
Specifically, the process of calculating the transformation model parameters from the subset is illustrated by taking the second preset number as 4: 4 pairs of matching feature points are selected, each pair yielding the two equations x′ = m_1·x + m_3 and y′ = m_5·y + m_6; the 4 pairs thus give eight equations in total, which are solved by the least squares method to obtain m_1, m_3, m_5, m_6. It is understood that the obtained m_1, m_3, m_5, m_6 are the transformation model parameters.
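The least-squares step can be sketched as follows (the function name is hypothetical; the synthetic point pairs are generated from known parameters so the fit can be checked):

```python
import numpy as np

def fit_scale_translation(src, dst):
    """Least-squares fit of x' = m1*x + m3, y' = m5*y + m6 from point
    pairs (src: infrared points, dst: visible light points)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    Ax = np.column_stack([src[:, 0], np.ones(len(src))])
    Ay = np.column_stack([src[:, 1], np.ones(len(src))])
    (m1, m3), *_ = np.linalg.lstsq(Ax, dst[:, 0], rcond=None)
    (m5, m6), *_ = np.linalg.lstsq(Ay, dst[:, 1], rcond=None)
    return m1, m3, m5, m6

# 4 pairs generated with m1=2, m3=5, m5=2, m6=-3 should be recovered.
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(2 * x + 5, 2 * y - 3) for x, y in src]
params = fit_scale_translation(src, dst)
```

Because m_2 = m_4 = 0 in this embodiment, the x and y equations decouple and can be fitted independently, which is what the two separate `lstsq` calls exploit.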
Step S500, calculating the execution times of the internal feature point pair determination process, and determining the transformation model parameter corresponding to the most internal feature point pairs as the transformation parameter of the transformation model between the infrared edge image and the visible light edge image when the execution times is greater than a preset time.
The number of executions of the inner feature point pair determination process is counted, and whether the number of executions is greater than the preset number of times is judged. If the number of executions is greater than the preset number of times, the number of inner feature point pairs corresponding to each set of transformation model parameters obtained within the preset number of executions is calculated, and the set of transformation model parameters with the largest number of inner feature point pairs is determined as the transformation parameters of the transformation model between the infrared edge image and the visible light edge image. It should be noted that the largest number of inner feature point pairs indicates the largest number of correctly matched feature point pairs, and hence the most accurate transformation model parameters.
Specifically, the preset number of times may be preset according to specific needs, or may be calculated by formula (19):

Formula (19): U = log(1 − p) / log(1 − q^s);

wherein U represents the preset number of times, p is the probability that at least one of the randomly selected subsets consists entirely of inner feature point pairs, q is the ratio between the number of inner feature point pairs and the total number of matching feature point pairs, and s is the second preset number (the subset size). It can be understood that when the preset number of times is U, a total of U sets of transformation model parameters can be obtained.
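Under the assumption that formula (19) is the standard RANSAC sample-count bound, the computation is a one-liner; p = 0.99, q = 0.5 and s = 4 are illustrative values only:

```python
import math

def ransac_iterations(p=0.99, q=0.5, s=4):
    """Number of random subsets U needed so that, with probability p,
    at least one subset of size s consists entirely of inner points,
    given an inner-point ratio q."""
    return math.ceil(math.log(1 - p) / math.log(1 - q ** s))

U = ransac_iterations()
```

With q = 0.5 and s = 4, only about one in sixteen subsets is all-inner, so on the order of seventy draws are needed to reach 99% confidence.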
It should be noted that, since the infrared imaging device and the visible light imaging device are on the same optical axis, this embodiment retains only the scale and translation parameters of the six-parameter model and ignores the transformation parameters of angle and shape. Specifically, the six-parameter affine transformation model describing the transformation between the infrared edge image and the visible light edge image can be expressed by formula (20):

Formula (20): x′ = m_1·x + m_2·y + m_3, y′ = m_4·x + m_5·y + m_6;

wherein (x′, y′) is a pixel point in the visible light edge image, (x, y) is the pixel point corresponding to (x′, y′) in the infrared edge image, and m_2 = m_4 = 0. It can be understood that the transformation parameters of the transformation model between the infrared edge image and the visible light edge image are m_1, m_3, m_5, m_6.
Specifically, the step of matching the infrared characteristic points and the visible light characteristic points to obtain matched characteristic point pairs includes:
and e, sequentially determining each infrared characteristic point in the infrared edge image as a target characteristic point, calculating the distance between the target characteristic point and each visible light characteristic point in the visible light edge image through an Euclidean distance formula, and searching a visible light first characteristic point with the minimum distance to the target characteristic point and a visible light second characteristic point with the second minimum distance to the target characteristic point in the visible light edge image.
Specifically, each infrared feature point in the infrared edge image is sequentially determined as a target feature point, the distance between the target feature point and each visible light feature point in the visible light edge image is calculated through an Euclidean distance formula, and the visible light feature point with the smallest distance to the target feature point and the second smallest distance to the target feature point is searched in the visible light edge image. For convenience of description, in the present embodiment, the visible light feature point with the smallest distance from the target feature point in the visible light edge image is denoted as a visible light first feature point, and the visible light feature point with the second smallest distance from the target feature point in the visible light edge image is denoted as a visible light second feature point. In this embodiment, the distance between two feature points can be calculated by the euclidean distance formula.
It is understood that, in the present embodiment, the infrared edge image is taken as the image to be registered and the visible light edge image is taken as the reference image. A feature point R is selected in the infrared edge image, with M-SURF feature description vector Ri, and a feature point S is selected in the visible light edge image, with M-SURF feature description vector Si. The distance between the two feature points can then be calculated by the Euclidean distance formula (17):

d(R, S) = sqrt( Σi (Ri − Si)² )
In the visible light edge image, the point e with the minimum Euclidean distance to the feature point R and the point f with the second minimum distance are found; e is the visible light first feature point and f is the visible light second feature point. The distance between e and R is denoted d_er, and the distance between f and R is denoted d_fr.
And g, determining the distance between the target characteristic point and the first visible light characteristic point as a first distance, and determining the distance between the target characteristic point and the second visible light characteristic point as a second distance.
And h, if the distance ratio of the first distance to the second distance is within a preset range, determining the target characteristic point and the first visible light characteristic point as a matching characteristic point pair to obtain the matching characteristic point pair in the infrared characteristic point and the visible light characteristic point.
After the first characteristic point of the visible light and the second characteristic point of the visible light are determined, the distance between the target characteristic point and the first characteristic point of the visible light is determined as a first distance, and the distance between the target characteristic point and the second characteristic point of the visible light is determined as a second distance. And after the first distance and the second distance are determined, calculating a distance ratio between the first distance and the second distance, and judging whether the calculated distance ratio is within a preset range. If the calculated distance ratio is within the preset range, determining the target characteristic points and the first visible light characteristic points as matching characteristic point pairs, and sequentially executing the same operation on each target characteristic point to obtain the matching characteristic point pairs in the infrared characteristic points and the visible light characteristic points; and if the calculated distance ratio is not within the preset range, determining that the target characteristic point and the first visible light characteristic point fail to be matched, namely that the target characteristic point and the first visible light characteristic point are not matched characteristic point pairs.
Specifically, the distance ratio between the first distance and the second distance can be expressed by equation (18):

d_er / d_fr ≤ T'   (18)

wherein T' represents the preset range. In the present embodiment the value range of T' is [0.4, 0.6]; in other embodiments, the value range of T' may be set to other values as needed. As can be seen from equation (18), the distance ratio between the first distance and the second distance is equal to the first distance divided by the second distance.
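The nearest-neighbour search and ratio test of steps e–h can be sketched as follows. This is a minimal illustration: the function name, the NumPy representation of the M-SURF descriptors, and the choice T' = 0.6 (the upper end of the preset range) are assumptions, not part of the original description.

```python
import numpy as np

def ratio_test_match(ir_desc, vis_desc, t_prime=0.6):
    """For each infrared descriptor, find the two nearest visible-light
    descriptors by Euclidean distance and keep the pair only when the
    distance ratio d_er / d_fr falls within the preset range (equation 18)."""
    pairs = []
    for r_idx, r in enumerate(ir_desc):
        d = np.linalg.norm(vis_desc - r, axis=1)  # distance to every visible point
        e_idx, f_idx = np.argsort(d)[:2]          # nearest (e) and second nearest (f)
        d_er, d_fr = d[e_idx], d[f_idx]
        if d_fr > 0 and d_er / d_fr <= t_prime:   # ratio test
            pairs.append((r_idx, int(e_idx)))     # matched feature point pair
    return pairs
```

A pair is kept only when the first distance is clearly smaller than the second, which discards ambiguous matches.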
And S600, amplifying the infrared edge image according to the transformation parameters to obtain an amplified infrared edge image, determining a region to be matched in the visible light edge image according to the transformation parameters, and searching out a sub-image with the same size as the amplified infrared edge image in the region to be matched in a sliding window mode.
After the transformation parameters of the transformation model between the infrared edge image and the visible light edge image are obtained, the matching edge area in the visible light edge image is determined according to the transformation parameters. Specifically, the scale relation between the infrared edge image and the visible light edge image, (m1, m5), is determined from the transformation parameters, and the infrared edge image is amplified according to this scale relation to obtain the amplified infrared edge image: the width of the amplified infrared edge image is the width of the infrared edge image before amplification multiplied by m1, and its height is the height before amplification multiplied by m5. In this embodiment, for ease of understanding, the size of the amplified infrared edge image is denoted (W', H'). It should be noted that, when edge detection is performed on the infrared image and the visible light image, the resolutions of the two images are not modified; therefore, the scale relationship between the infrared edge image and the visible light edge image is the same as the scale relationship between the infrared image and the visible light image.
After the transformation parameters are obtained, the coordinates of the upper-left corner point of the region to be enlarged in the visible light edge image are determined to be (m1 + m3, m5 + m6); the width of the region to be enlarged is the width of the amplified infrared edge image, and its height is the height of the amplified infrared edge image. Specifically, if the four corners of the region to be enlarged are denoted (A1, B1), (A2, B1), (A1, B2) and (A2, B2), the region is expanded, with the region to be enlarged as the center, to four times its area to obtain the region to be matched. It can be understood that the size of the region to be matched is (2W', 2H'), and the four corner points move outward accordingly.
After the region to be matched is determined, sub-images with the same size as the amplified infrared edge image are searched for in the region to be matched in a sliding-window manner.
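A minimal sketch of how the enlarged size (W', H') and the region to be matched might be derived from the transformation parameters. The function name and the (x, y, width, height) return convention are assumptions, and the four-times-area expansion is modelled as doubling each side about the centre of the region to be enlarged:

```python
import numpy as np

def region_to_match(ir_shape, m1, m3, m5, m6):
    """Scale the infrared edge image by (m1, m5) and derive the search
    region in the visible-light edge image, following the description:
    upper-left corner at (m1 + m3, m5 + m6), size (W', H'), then a
    four-times-area expansion to size (2W', 2H')."""
    h, w = ir_shape
    w_p, h_p = w * m1, h * m5                # enlarged size (W', H')
    x0, y0 = m1 + m3, m5 + m6                # upper-left corner of region to enlarge
    cx, cy = x0 + w_p / 2.0, y0 + h_p / 2.0  # centre of that region
    # double each side about the centre: area grows four-fold
    return (cx - w_p, cy - h_p, 2 * w_p, 2 * h_p)
```

In practice the resulting rectangle would also be clipped to the bounds of the visible light edge image, which is omitted here for brevity.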
Step S700, calculating a normalized correlation coefficient corresponding to each sliding window, determining the maximum coefficient in the calculated normalized correlation coefficients, determining the area of the sub-image corresponding to the maximum coefficient as a matching edge area, and determining a non-matching area in the visible light image according to the matching edge area.
After the region to be matched is determined, sub-images with the same size as the amplified infrared edge image are searched for in the region to be matched in a sliding-window manner; that is, the amplified infrared edge image is used as a template and slid over the region to be matched, and the sub-image covered by the window at each position is recorded, with (i, j) as the coordinates of its upper-left corner point. The normalized correlation coefficient between the amplified infrared edge image and the sub-image corresponding to each sliding window is then calculated, specifically using formula (21), wherein (m, n) denotes the coordinates of a pixel point within the sub-image and within the amplified infrared edge image.
It should be noted that, in the process of searching to obtain the sub-image, multiple searches may be performed in a sliding window manner. After the normalized correlation coefficients are obtained through calculation, the maximum normalized correlation coefficient in the calculated normalized correlation coefficients is determined, and for convenience of description, the maximum normalized correlation coefficient is recorded as the maximum coefficient in this embodiment. After the maximum coefficient is determined, the area corresponding to the sub-image corresponding to the maximum coefficient is determined as a matching edge area. Specifically, referring to fig. 5, fig. 5 is a schematic diagram of searching for a sub-image in the embodiment of the present invention, where a in fig. 5 is a schematic diagram of an enlarged infrared edge image, and B in the diagram is a schematic diagram of a sub-image covered by a search window, which is searched in a region to be matched by using the enlarged infrared edge image as a template.
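The sliding-window search of steps S600–S700 can be sketched as follows, assuming that formula (21) is the standard zero-mean normalized correlation coefficient; the function name and the use of NumPy are illustrative:

```python
import numpy as np

def best_match_region(template, search):
    """Slide the enlarged infrared edge image (template) over the region to
    be matched (search) and keep the window with the largest zero-mean
    normalized correlation coefficient; returns its upper-left corner."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_xy = -2.0, (0, 0)
    for i in range(search.shape[0] - th + 1):
        for j in range(search.shape[1] - tw + 1):
            win = search[i:i + th, j:j + tw]
            w = win - win.mean()
            denom = np.sqrt((t * t).sum() * (w * w).sum())
            ncc = (t * w).sum() / denom if denom > 0 else 0.0
            if ncc > best:                      # track the maximum coefficient
                best, best_xy = ncc, (i, j)
    return best_xy, best
```

The window attaining the maximum coefficient corresponds to the matching edge area described above; a production implementation would typically use an FFT-based or integral-image formulation for speed.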
After the matching edge area is determined, the non-matching area in the visible light image is determined according to the matching edge area. It will be appreciated that, in the visible light image, the matching edge region identified in the visible light edge image is removed, and the remaining region is the non-matching region. It should be noted that the resolutions of the visible light edge image and the visible light image are the same, so their matching edge areas and non-matching areas correspond; determining the matching edge area in the visible light edge image thus determines the matching edge area in the visible light image, and the area in the visible light image except the matching edge area is the non-matching area.
Step S800, performing threshold segmentation on the infrared image to obtain a mask map corresponding to the infrared image, determining an abnormal heating area in the visible light image according to the mask map and the non-matching area, and determining an electrical device corresponding to the abnormal heating area as an electrical device that abnormally heats.
Threshold segmentation is performed on the infrared image to obtain the mask map corresponding to the infrared image. Specifically, Otsu (maximum between-class variance) threshold segmentation, maximum entropy threshold segmentation, iterative threshold segmentation, and the like may be employed. It should be noted that, due to the characteristics of the infrared image, a segmentation result is obtained after the infrared image is thresholded, the mask map corresponding to the infrared image is obtained from this segmentation result, and the image area with pixel value 255 and the image area with pixel value 0 in the infrared image can be determined from the mask map. After the mask map corresponding to the infrared image is obtained, the abnormal heating area in the visible light image is determined according to the mask map and the non-matching area, and the electric equipment corresponding to the abnormal heating area is determined as the electric equipment which generates heat abnormally.
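A minimal sketch of the Otsu (maximum between-class variance) option for the threshold-segmentation step; the function name is an assumption, and the mask follows the 255/0 pixel-value convention described above:

```python
import numpy as np

def otsu_mask(gray):
    """Otsu threshold on an 8-bit infrared image: pick the threshold that
    maximizes the between-class variance, then emit a 255/0 mask map."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = hist[:t].sum(), hist[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * hist[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * hist[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return np.where(gray >= best_t, 255, 0).astype(np.uint8)
```

Maximum-entropy or iterative thresholding could be substituted here without changing the rest of the pipeline, since all three produce the same kind of binary mask.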
Further, the step of determining an abnormal heat generation region in the visible light image according to the mask map and the non-matching region includes:
and step p, nesting the mask image in the visible light image to determine a target image area with a pixel value of 0 in the visible light image.
And q, determining the area except the target image area and the non-matching area in the visible light image as an abnormal heating area in the visible light image.
Specifically, the mask map is nested in the visible light image to determine the target image area with pixel value 0 in the visible light image. It should be noted that, in the process of nesting the mask map in the visible light image, position matching is performed, and the mask map is nested in the corresponding area of the visible light image. After the target image area with pixel value 0 in the visible light image is determined, the area in the visible light image except the target image area and the non-matching area is determined as the abnormal heating area in the visible light image. It can be understood that, in the visible light image, the pixel values of the non-matching area and of the area corresponding to pixel value 0 in the infrared image are set to 0; at this time, the image area with non-zero pixel values in the visible light image is the abnormal heating area. Specifically, referring to fig. 6, fig. 6 is a schematic diagram of the segmentation of the abnormal heating area in the visible light image according to an embodiment of the present invention, where C in fig. 6 is a schematic diagram of the result obtained after threshold segmentation of the infrared image, D in fig. 6 is a schematic diagram of the visible light image, and E in fig. 6 is a schematic diagram of the determined abnormal heating area in the visible light image.
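Steps p and q can be sketched as follows; the function name and the boolean representation of the non-matching region are assumptions:

```python
import numpy as np

def abnormal_region(visible, mask, non_match):
    """Zero out, in the visible-light image, the pixels where the nested
    mask is 0 (step p) and the pixels of the non-matching region; the
    remaining non-zero pixels form the abnormal heating area (step q)."""
    out = visible.copy()
    out[mask == 0] = 0      # target image area: mask value 0
    out[non_match] = 0      # non-matching region, as a boolean map
    return out
```

This presumes the mask map has already been position-matched to the visible light image, i.e. all three arrays share the same resolution.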
In the embodiment, edge detection is performed on an infrared image and a visible light image of power equipment to obtain an infrared edge image corresponding to the infrared image and a visible light edge image corresponding to the visible light image. Feature point detection is then performed on the two edge images through the KAZE algorithm to obtain infrared feature points and visible light feature points, the feature points are matched to obtain matched feature point pairs, and the transformation parameters of the transformation model between the infrared edge image and the visible light edge image are calculated according to the matched feature point pairs. The matching edge area in the visible light edge image is determined according to the transformation parameters, the non-matching area in the visible light image is determined according to the matching edge area, and the infrared image is segmented to obtain its corresponding mask map. Finally, the abnormal heating area in the visible light image is determined according to the mask map and the non-matching area, and the electric equipment corresponding to the abnormal heating area is determined as the electric equipment which generates heat abnormally.
The method realizes coarse registration of the infrared edge image and the visible light edge image by using the KAZE algorithm, and matches the two images by using the transformation parameters of the transformation model, thereby improving the detection accuracy for abnormally heating power equipment. Because KAZE is combined with transformation-model matching, the multi-scale weakness and low speed of transformation-model matching are overcome, and the low infrared-to-visible matching precision of KAZE is remedied, further improving the success rate of matching the infrared edge image and the visible light edge image. Moreover, direct segmentation of the power equipment in the visible light image may be affected by illumination, color, texture, etc., resulting in incomplete segmentation. For this reason, the present embodiment segments only the infrared image of the power equipment, not the visible light image, nests the segmentation result of the infrared image into the visible light image, and then determines the power equipment which generates heat abnormally. The interference is small and the segmentation result is accurate, so the accuracy of detecting the power equipment which generates heat abnormally is further improved.
Further, another embodiment of the method for detecting abnormal heat generation of an electrical device of the present invention is provided.
The other embodiment of the method for detecting an abnormal heating power device is different from the above-mentioned embodiment of the method for detecting an abnormal heating power device in that, in step S300, the step of matching the infrared characteristic points and the visible light characteristic points to obtain matched characteristic point pairs and obtaining distance ratios corresponding to the matched characteristic point pairs includes:
and y, matching the infrared characteristic points and the visible light characteristic points to obtain matched characteristic point pairs, and deleting wrong characteristic point pairs in the characteristic point pairs by using a RANSAC algorithm to obtain target characteristic point pairs.
And z, acquiring the corresponding distance ratio of the target characteristic point pair.
After the infrared characteristic points and the visible light characteristic points are obtained, they are matched to obtain matched characteristic point pairs, and then a RANSAC (Random Sample Consensus) algorithm is adopted to delete wrong characteristic point pairs from the matched pairs, yielding the target characteristic point pairs. It is understood that the target characteristic point pairs are the matched characteristic point pairs other than the wrong ones, that is, the correct characteristic point pairs. After the target characteristic point pairs are obtained, the distance ratios corresponding to them are obtained.
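A minimal sketch of the RANSAC deletion of wrong characteristic point pairs, assuming the axis-aligned transformation model (m2 = m4 = 0) described earlier; the function name, iteration count, and inlier threshold are illustrative, not from the original description:

```python
import numpy as np

def ransac_filter(ir_pts, vis_pts, iters=200, thresh=2.0, seed=0):
    """Repeatedly fit x' = m1*x + m3, y' = m5*y + m6 from two random
    matched pairs and keep the largest consensus set; pairs outside it
    are treated as wrong characteristic point pairs."""
    rng = np.random.default_rng(seed)
    n = len(ir_pts)
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(iters):
        a, b = rng.choice(n, size=2, replace=False)
        dx, dy = ir_pts[b] - ir_pts[a]
        if dx == 0 or dy == 0:                 # degenerate sample, skip
            continue
        m1 = (vis_pts[b, 0] - vis_pts[a, 0]) / dx
        m5 = (vis_pts[b, 1] - vis_pts[a, 1]) / dy
        m3 = vis_pts[a, 0] - m1 * ir_pts[a, 0]
        m6 = vis_pts[a, 1] - m5 * ir_pts[a, 1]
        pred = np.stack([m1 * ir_pts[:, 0] + m3,
                         m5 * ir_pts[:, 1] + m6], axis=1)
        inliers = np.linalg.norm(pred - vis_pts, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

The boolean result marks the target characteristic point pairs; the distance ratios are then taken only for those pairs.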
In the embodiment, after the characteristic point pairs are obtained through matching, the RANSAC algorithm is adopted to delete the error characteristic point pairs in the characteristic point pairs to obtain the target characteristic point pairs, and then the distance ratio corresponding to the target characteristic point pairs is obtained, so that the accuracy of determining the non-matching area according to the distance ratio is improved, and the accuracy of detecting the abnormally heated power equipment is further improved.
The present invention also provides an abnormal heat generation power equipment detection apparatus, which includes, with reference to fig. 4:
the acquisition module 10 is configured to acquire an infrared image and a visible light image corresponding to the power device; an edge detection module 20, configured to perform edge detection on the infrared image and the visible light image to obtain an infrared edge image corresponding to the infrared image and a visible light edge image corresponding to the visible light image; a feature point detection module 30, configured to perform feature point detection on the infrared edge image and the visible light edge image through a KAZE algorithm, so as to obtain an infrared feature point corresponding to the infrared edge image and a visible light feature point corresponding to the visible light edge image; a matching module 40, configured to match the infrared feature points and the visible light feature points to obtain matched feature point pairs; an initial data set judgment module 50, configured to obtain distance ratios corresponding to the matching feature point pairs, sort the distance ratios from small to large to obtain sorted distance ratios, select a first preset number of target distance ratios from front to back in the sorted distance ratios, and determine the matching feature point pairs corresponding to the target distance ratios as an initial data set; an executing module 60, configured to execute the internal feature point pair determining process: randomly selecting a subset containing a second preset number of matching feature point pairs in the initial data set, calculating transformation model parameters according to the subset, and judging the matching feature point pairs in the initial data set except the subset according to the transformation model parameters to determine the internal feature point pairs among the matching feature points; a calculating module 70, configured to calculate the number of times the internal feature point pair determining process has been executed; a determining module 80, configured to determine, when the number of executions is greater than a preset number, the transformation model parameters corresponding to the most internal feature point pairs as the transformation parameters of the transformation model between the infrared edge image and the visible light edge image; the calculating module 70 is further configured to calculate a normalized correlation coefficient corresponding to each sliding window; the determining module 80 is further configured to determine the maximum coefficient among the calculated normalized correlation coefficients, determine the area of the sub-image corresponding to the maximum coefficient as the matching edge area, and determine the non-matching area in the visible light image according to the matching edge area; a segmentation module 90, configured to perform threshold segmentation on the infrared image to obtain a mask map corresponding to the infrared image; the determining module 80 is further configured to determine the abnormal heating area in the visible light image according to the mask map and the non-matching area, and determine the electrical device corresponding to the abnormal heating area as the electrical device that generates heat abnormally.
Further, the acquiring module 10 is further configured to acquire the infrared image of the power device through an infrared camera, and acquire the visible light image of the power device through a visible light camera, where the infrared camera and the visible light camera are disposed on a same optical axis, a distance between a base line of the infrared camera and the visible light camera is within a preset range, and a lens of the infrared camera and a lens of the visible light camera are in a same plane.
Further, the determining module 80 includes: a nesting unit, configured to nest the mask map in the visible light image so as to determine the target image area with pixel value 0 in the visible light image; a first determining unit, configured to determine, as the abnormal heat generation region in the visible light image, the region in the visible light image excluding the target image region and the non-matching region.
Further, the formula for calculating the normalized correlation coefficient is:

ρ(i, j) = Σ_{m,n} [S(m, n) − S̄] · [T(m, n) − T̄] / sqrt( Σ_{m,n} [S(m, n) − S̄]² · Σ_{m,n} [T(m, n) − T̄]² )

wherein S represents the sub-image, T represents the amplified infrared edge image, W' represents the width of the amplified infrared edge image, H' represents the height of the amplified infrared edge image, S̄ and T̄ are the mean pixel values over the W' × H' window, (m, n) are the coordinates of a pixel point in the sub-image and in the amplified infrared edge image, and (i, j) are the coordinates of the upper-left corner pixel point of the sub-image.
Further, the feature point detection module 30 includes a nonlinear diffusion filtering unit, configured to describe the evolution of the image luminance value L at different scales as the divergence of a flow equation in the nonlinear-diffusion-filtering forming process, expressed by the nonlinear partial differential equation:

∂L/∂t = div( c(x_d, y_d, t) · ∇L )

wherein div represents the divergence calculation, ∇ represents the gradient operation, x_d and y_d represent the abscissa and the ordinate of the image, and t represents the image scale. A transfer function c is introduced into the diffusion function, making the nonlinear diffusion adaptive to the local image structure, where the transfer function is expressed as:

c(x_d, y_d, t) = g( |∇L_σ(x_d, y_d, t)| )
wherein ∇L_σ is the gradient of the Gaussian-smoothed image L_σ. Through the transfer function c, a conduction function g is obtained, expressed as:

g = 1 / (1 + |∇L_σ|² / k²)
wherein k is a contrast factor determining the diffusion level, used to control the degree of diffusion at image edges; k is taken as the value at the 70% point of the histogram of the gradient of the smoothed image L_σ;
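A minimal sketch of the conduction function and the contrast factor. The g2 form of the conduction function is an assumption (it is the form commonly used with KAZE), and the function names are illustrative:

```python
import numpy as np

def conduction_g(Lx, Ly, k):
    """g2-style conduction function: g = 1 / (1 + |grad L_sigma|^2 / k^2),
    small at strong edges so that diffusion preserves them."""
    return 1.0 / (1.0 + (Lx ** 2 + Ly ** 2) / k ** 2)

def contrast_factor_k(Lx, Ly, percentile=70):
    """Contrast factor k taken at the 70% point of the gradient-magnitude
    histogram of the smoothed image, as in the description."""
    mag = np.sqrt(Lx ** 2 + Ly ** 2)
    return np.percentile(mag[mag > 0], percentile)
```

Flat regions (small gradient) get g near 1 and diffuse freely, while edges (large gradient relative to k) get g near 0 and are preserved.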
an AOS unit, configured to solve an approximate solution of the nonlinear partial differential equation in the nonlinear diffusion filtering by using an implicit discrete difference equation, the matrix form of which is expressed as:

(L^{i+1} − L^i) / τ = Σ_{l=1}^{m'} A_l(L^i) · L^{i+1}

wherein A_l is the matrix form of the image conduction for each dimension, τ denotes the step size, L^i is the i-th layer image in the multilayer image, and m' is any integer larger than 1. Solving the linear system of equations in the discrete difference equation, the corresponding solution of the linear system is expressed as:

L^{i+1} = ( I − τ · Σ_{l=1}^{m'} A_l(L^i) )^{-1} · L^i
a constructing unit, configured to construct the nonlinear scale space through AOS and variable conduction diffusion, where the nonlinear scale space comprises O groups (octaves) and S sub-levels of image scale space, and the scale factor of each sub-level is expressed as:
σ_i(o, s) = σ_0 · 2^(o + s/S), o ∈ [0, ..., O−1], s ∈ [0, ..., S−1], i ∈ [0, ..., N];
wherein σ_0 represents a reference value of the image scale, N is the total number of images subjected to nonlinear diffusion filtering, and N = O × S. The scale factor of each layer is converted from pixel units into time units, giving the expression of the scale factor of each sub-level in time:

t_i = σ_i² / 2

wherein t_i is the evolution time and σ_i gives the scale relation among the layers of the nonlinear scale space model. The gradient histogram of the input image is calculated to obtain the contrast factor k, and the nonlinear scale space is obtained through AOS iteration, expressed as:

L^{i+1} = ( I − (t_{i+1} − t_i) · Σ_{l=1}^{m'} A_l(L^i) )^{-1} · L^i
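The sub-level scale factors and their conversion to evolution times can be sketched as follows, assuming the standard KAZE mapping t_i = σ_i²/2 for the garbled expression above; the function name and default parameters are illustrative:

```python
import numpy as np

def kaze_scales(sigma0=1.6, octaves=4, sublevels=4):
    """Scale factors sigma_i(o, s) = sigma0 * 2^(o + s/S) for each octave o
    and sub-level s, and their conversion to evolution times t_i."""
    sigmas = np.array([sigma0 * 2.0 ** (o + s / sublevels)
                       for o in range(octaves) for s in range(sublevels)])
    times = 0.5 * sigmas ** 2     # t_i = sigma_i^2 / 2
    return sigmas, times
```

Each consecutive difference t_{i+1} − t_i is the step used by one AOS iteration when building the nonlinear scale space.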
a feature point detection unit, configured to calculate the response values of the scale-normalized Hessian at different scales by the following formula:

L_Hessian = σ² · ( L_xx · L_yy − L_xy² )

wherein L_xx is the second derivative of the luminance value L in the x direction, L_yy is the second derivative of L in the y direction, L_xy is the mixed second derivative of L in the x and y directions, and σ is the scale coefficient of the layer in which the image is located. For the filtered images L_i in the nonlinear scale space, the responses are analyzed at the different scales σ_i; except for i = 0 and i = N, response extrema are found in all the filtered images, resulting in the feature points.
Further, the matching module 40 includes:
the second determining unit is used for sequentially determining each infrared characteristic point in the infrared edge image as a target characteristic point; the calculation unit is used for calculating the distance between the target characteristic point and each visible light characteristic point in the visible light edge image through an Euclidean distance formula; the searching unit is used for searching a visible light first characteristic point with the minimum distance from the target characteristic point and a visible light second characteristic point with the second smallest distance from the target characteristic point in the visible light edge image; the fourth determining unit is further configured to determine a distance between the target feature point and the first visible light feature point as a first distance, and determine a distance between the target feature point and the second visible light feature point as a second distance; and if the distance ratio of the first distance to the second distance is within a preset range, determining the target characteristic point and the first visible light characteristic point as a matching characteristic point pair to obtain the matching characteristic point pair in the infrared characteristic point and the visible light characteristic point.
Further, the electrical equipment detection device for abnormal heat generation further includes: a deleting module, configured to delete erroneous feature point pairs in the feature point pairs by using a RANSAC algorithm to obtain target feature point pairs; the initial data set judgment module 50 is further configured to obtain the corresponding distance ratios of the target feature point pairs.
Further, the edge detection module 20 includes: the input unit is used for inputting the infrared image and the visible light image into a neural network obtained by training based on an HED algorithm to obtain an infrared edge map result corresponding to the infrared image and a visible light edge map result corresponding to the visible light image; and the aggregation unit is used for aggregating the infrared edge map result to obtain an infrared edge image corresponding to the infrared image, and aggregating the visible light edge map result to obtain a visible light edge image corresponding to the visible light image.
The specific implementation of the apparatus for detecting an abnormally heated electrical device of the present invention is substantially the same as the embodiments of the method for detecting an abnormally heated electrical device, and will not be described herein again.
The present invention also proposes a computer-readable storage medium having stored thereon a detection program which, when executed by a processor, implements the steps of the electrical equipment detection method of abnormal heating as described above.
The specific implementation of the computer-readable storage medium of the present invention is substantially the same as the above embodiments of the method for detecting abnormal heating power equipment, and will not be described herein again.
It will be appreciated by those skilled in the art that the various preferred features described above can be freely combined and superimposed where they do not conflict.
It will be understood that the embodiments described above are illustrative only and not restrictive, and that various obvious and equivalent modifications and substitutions for details described herein may be made by those skilled in the art without departing from the basic principles of the invention.
Claims (10)
1. An electrical equipment detection method for abnormal heat generation, characterized by comprising:
1. A method for detecting abnormally heating power equipment, characterized by comprising the following steps:
S100, acquiring an infrared image and a visible light image corresponding to power equipment, and performing edge detection on the infrared image and the visible light image to obtain an infrared edge image corresponding to the infrared image and a visible light edge image corresponding to the visible light image;
S200, performing feature point detection on the infrared edge image and the visible light edge image through a KAZE algorithm to obtain infrared feature points corresponding to the infrared edge image and visible light feature points corresponding to the visible light edge image;
S300, matching the infrared feature points and the visible light feature points to obtain matching feature point pairs, obtaining the distance ratio corresponding to each matching feature point pair, sorting the distance ratios from small to large to obtain the sorted distance ratios, selecting a first preset number of target distance ratios from the front of the sorted distance ratios, and determining the matching feature point pairs corresponding to the target distance ratios as an initial data set;
S400, executing an internal feature point pair determination process: randomly selecting, from the initial data set, a subset containing a second preset number of matching feature point pairs, calculating transformation model parameters from the subset, and judging the matching feature point pairs in the initial data set other than the subset according to the transformation model parameters so as to determine the internal (inlier) feature point pairs among the matching feature point pairs;
S500, counting the number of times the internal feature point pair determination process has been executed, and, when the number of executions exceeds a preset number, determining the transformation model parameters that yield the largest number of internal feature point pairs as the transformation parameters of the transformation model between the infrared edge image and the visible light edge image;
S600, amplifying the infrared edge image according to the transformation parameters to obtain an amplified infrared edge image, determining a region to be matched in the visible light edge image according to the transformation parameters, and searching, in a sliding-window manner, the region to be matched for sub-images of the same size as the amplified infrared edge image;
S700, calculating a normalized correlation coefficient for each sliding window, determining the maximum coefficient among the calculated normalized correlation coefficients, determining the area of the sub-image corresponding to the maximum coefficient as the matching edge area, and determining the non-matching area in the visible light image according to the matching edge area;
S800, performing threshold segmentation on the infrared image to obtain a mask image corresponding to the infrared image, determining an abnormal heating area in the visible light image according to the mask image and the non-matching area, and determining the power equipment corresponding to the abnormal heating area as the power equipment that generates heat abnormally.
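As a hedged illustration of the ratio-sorting step in S300, here is a minimal sketch; the function name, data layout (a list of `(pair, ratio)` tuples), and parameter names are assumptions for illustration, not taken from the patent:

```python
def select_initial_set(pairs_with_ratios, first_preset_number):
    """S300 sketch: sort matched pairs by their distance ratio
    (ascending) and keep the first `first_preset_number` pairs
    as the initial data set."""
    ordered = sorted(pairs_with_ratios, key=lambda pr: pr[1])
    return [pair for pair, _ in ordered[:first_preset_number]]
```

Smaller distance ratios indicate more distinctive matches, so keeping the front of the sorted list retains the most reliable pairs for the subsequent model fitting.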
2. The method for detecting abnormally heating power equipment according to claim 1, wherein, in step S100, the step of acquiring the infrared image and the visible light image corresponding to the power equipment comprises:
acquiring the infrared image of the power equipment through an infrared camera device and acquiring the visible light image of the power equipment through a visible light camera device, wherein the infrared camera device and the visible light camera device are arranged with their optical axes in the same direction, the baseline distance between the infrared camera device and the visible light camera device is within a preset range, and the lens of the infrared camera device and the lens of the visible light camera device are located in the same plane.
3. The abnormal heat generation power equipment detection method according to claim 1, wherein in step S800, the step of determining an abnormal heat generation region in the visible light image from the mask map and the non-matching region includes:
overlaying the mask image on the visible light image to determine a target image area in the visible light image whose pixel value is 0; and
determining the area of the visible light image other than the target image area and the non-matching area as the abnormal heating area in the visible light image.
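A minimal numpy sketch of the region logic in claim 3; representing the mask and the non-matching region as arrays of the visible-light image size is an assumption of this sketch:

```python
import numpy as np

def abnormal_heating_area(mask, non_matching):
    """Claim 3 logic sketch: the target image area is where the
    overlaid mask equals 0; the abnormal heating area is everything
    in the visible light image outside both the target image area
    and the non-matching area."""
    target = (mask == 0)
    return ~target & ~non_matching
```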
4. The abnormal heat generation power equipment detection method according to claim 1, wherein in step S700, the formula for calculating the normalized correlation coefficient is:
R(i,j) = Σ_{m=1}^{W'} Σ_{n=1}^{H'} [ S_{i,j}(m,n) · T(m,n) ] / ( sqrt( Σ_{m=1}^{W'} Σ_{n=1}^{H'} S_{i,j}(m,n)² ) · sqrt( Σ_{m=1}^{W'} Σ_{n=1}^{H'} T(m,n)² ) )
wherein S_{i,j} represents the sub-image whose top-left pixel is located at coordinates (i,j) of the region to be matched, T represents the amplified infrared edge image, W' represents the width of the amplified infrared edge image, H' represents the height of the amplified infrared edge image, and (m,n) are the coordinates of a pixel point in the sub-image and in the amplified infrared edge image.
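A direct numpy transcription of this normalized correlation coefficient; the function name `ncc` and the array-based interface are illustrative assumptions:

```python
import numpy as np

def ncc(sub, templ):
    """Normalized correlation coefficient of claim 4:
    R = sum(S*T) / (sqrt(sum(S^2)) * sqrt(sum(T^2))),
    with S a sub-image and T the amplified infrared edge image,
    both of size W' x H'."""
    num = float(np.sum(sub * templ))
    den = float(np.sqrt(np.sum(sub ** 2)) * np.sqrt(np.sum(templ ** 2)))
    return num / den if den else 0.0
```

The coefficient is 1 for identical images and 0 for non-overlapping edge patterns, so the sliding window with the maximum coefficient marks the best-matching sub-image.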
5. the abnormal heat generation power equipment detection method according to claim 1, wherein in step S200, the process of feature point detection by the KAZE algorithm includes the steps of:
S201, nonlinear diffusion filtering: in constructing the nonlinear diffusion filter, the evolution of the image luminance value L over different scales is described as the divergence of a flow function, expressed by the nonlinear partial differential equation:
∂L/∂t = div( c(x_d, y_d, t) · ∇L )
wherein div denotes the divergence operator, ∇ denotes the gradient operator, x_d and y_d denote the abscissa and ordinate of the image, and t denotes the image scale; a transfer function c is introduced into the diffusion so that the nonlinear diffusion adapts to the local image structure, the transfer function being expressed as:
c(x_d, y_d, t) = g( |∇L_σ(x_d, y_d, t)| )
wherein ∇L_σ is the gradient of the Gaussian-smoothed image L_σ; through the transfer function c a conductivity function g is obtained, the conductivity function g including g_1 and g_2, wherein g_2 is expressed as:
g_2 = 1 / ( 1 + |∇L_σ|² / k² )
wherein k is a contrast factor that determines the diffusion level and controls the degree of diffusion at the image edges; the value at 70% of the cumulative gradient histogram of the smoothed image L_σ is taken as the value of k;
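The conductivity g_2 above can be sketched in a few lines; the function name is an assumption:

```python
def g2(grad_norm, k):
    """Conductivity g2 = 1 / (1 + |grad L_sigma|^2 / k^2): close to 1 in
    flat regions (small gradient, strong smoothing) and close to 0
    across strong edges (diffusion suppressed, edges preserved)."""
    return 1.0 / (1.0 + (grad_norm / k) ** 2)
```

This is why the nonlinear scale space blurs homogeneous regions while keeping object boundaries sharp, unlike the Gaussian scale space of SIFT/SURF.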
S202, additive operator splitting (AOS): an approximate solution of the nonlinear partial differential equation in the nonlinear diffusion filtering is obtained with an implicit discrete difference scheme, whose matrix form is expressed as:
( L^{i+1} − L^i ) / τ = Σ_{l=1}^{m'} A_l(L^i) · L^{i+1}
wherein A_l is the conduction matrix of the image in each dimension, τ denotes the step size, L^i is the i-th layer image in the multi-layer images, and m' is any integer larger than 1 (the number of image dimensions); this discrete difference scheme is solved as a linear system of equations, the solution of which is expressed as:
L^{i+1} = ( I − τ · Σ_{l=1}^{m'} A_l(L^i) )^{−1} · L^i
wherein I is an identity matrix of the corresponding dimension;
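A toy 1-D version of one semi-implicit step can make the matrix form concrete; the dense solve, the function name, and the homogeneous Neumann boundary handling are illustrative assumptions (production code would use the tridiagonal Thomas algorithm):

```python
import numpy as np

def semi_implicit_step_1d(L, tau, c):
    """One step L_{i+1} = (I - tau*A(L_i))^{-1} L_i on a 1-D signal,
    with A assembled from per-pixel conductivities c (row sums zero,
    so constants and total intensity are preserved)."""
    n = len(L)
    A = np.zeros((n, n))
    for j in range(n - 1):
        cij = 0.5 * (c[j] + c[j + 1])  # conductivity between neighbours
        A[j, j] -= cij
        A[j + 1, j + 1] -= cij
        A[j, j + 1] += cij
        A[j + 1, j] += cij
    return np.linalg.solve(np.eye(n) - tau * A, L)
```

The implicit form is what makes AOS attractive: it stays stable for arbitrarily large step sizes τ, unlike an explicit diffusion update.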
S203, constructing a nonlinear scale space: the nonlinear scale space is constructed by means of AOS and variable-conductivity diffusion; the nonlinear scale space comprises O groups (octaves) of the image scale space and S layers (sub-levels) per group, and the scale factor of each sub-layer is expressed as:
σ_i(o, s) = σ_0 · 2^(o + s/S), o ∈ [0, ..., O−1], s ∈ [0, ..., S−1], i ∈ [0, ..., N];
wherein σ_0 denotes a base value of the image scale, and N is the total number of images subjected to the nonlinear diffusion filtering, N = O × S; the scale factor of each layer is in units of pixels and is converted from pixel units into time units, giving the expression of the scale factor of each sub-layer in units of time:
t_i = σ_i² / 2, i ∈ [0, ..., N]
wherein t_i denotes the evolution time, and σ_i gives the scale relation among the layers of the nonlinear scale space model; the contrast factor k is obtained by calculating the gradient histogram of the input image, and the nonlinear scale space is obtained iteratively using AOS, expressed as:
L^{i+1} = ( I − (t_{i+1} − t_i) · Σ_{l=1}^{m'} A_l(L^i) )^{−1} · L^i, i = 0, ..., N−1;
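The two scale formulas above translate directly into code; the function names are assumptions:

```python
def sublayer_sigmas(sigma0, O, S):
    """sigma_i(o, s) = sigma0 * 2^(o + s/S) for every octave o in
    [0, O-1] and sub-level s in [0, S-1], in pixel units."""
    return [sigma0 * 2 ** (o + s / S) for o in range(O) for s in range(S)]

def evolution_time(sigma):
    """Convert a pixel-unit scale into diffusion (evolution) time:
    t_i = sigma_i^2 / 2."""
    return 0.5 * sigma ** 2
```

The evolution times t_i are what the AOS iteration actually steps through; the σ_i values only index the layers.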
S204, feature point detection: the response value of the scale-normalized Hessian determinant at different scales is calculated as:
L_Hessian = σ² · ( L_xx · L_yy − L_xy² )
wherein L_xx is the second-order derivative of the luminance value L in the x direction, L_yy is the second-order derivative of the luminance value L in the y direction, L_xy is the mixed second-order derivative of the luminance value L in the x and y directions, and σ is the scale coefficient of the layer where the image is located; in the nonlinear scale space L^i, the responses of the images filtered at the different scales σ_i are analyzed, and response extrema are searched for in all the filtered images except those at i = 0 and i = N to obtain the feature points.
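The scale-normalized response is a one-liner; the function name is an assumption:

```python
def hessian_response(Lxx, Lyy, Lxy, sigma):
    """Scale-normalized Hessian determinant: sigma^2 * (Lxx*Lyy - Lxy^2).
    The sigma^2 factor makes responses comparable across scales, so
    extrema can be searched over both space and scale."""
    return sigma ** 2 * (Lxx * Lyy - Lxy ** 2)
```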
6. The method for detecting abnormally heating power equipment according to claim 1, wherein, in step S300, the step of matching the infrared feature points and the visible light feature points to obtain matching feature point pairs comprises:
sequentially taking each infrared feature point in the infrared edge image as a target feature point, calculating the distance between the target feature point and each visible light feature point in the visible light edge image with the Euclidean distance formula, and searching the visible light edge image for the first visible light feature point with the smallest distance to the target feature point and the second visible light feature point with the second smallest distance to the target feature point;
determining the distance between the target feature point and the first visible light feature point as the first distance, and determining the distance between the target feature point and the second visible light feature point as the second distance; and
if the ratio of the first distance to the second distance is within a preset range, determining the target feature point and the first visible light feature point as a matching feature point pair, thereby obtaining the matching feature point pairs among the infrared feature points and the visible light feature points.
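This nearest/second-nearest ratio test can be sketched as follows; the function name, the 0.8 threshold, and the point-coordinate (rather than descriptor) distance are illustrative assumptions, since the patent only requires the ratio to be "within a preset range":

```python
import math

def ratio_match(ir_points, vis_points, ratio_max=0.8):
    """For each infrared point, find the nearest and second-nearest
    visible-light points by Euclidean distance and accept the pair
    when d1/d2 falls below ratio_max."""
    pairs = []
    for p in ir_points:
        dists = sorted((math.dist(p, q), q) for q in vis_points)
        d1, q1 = dists[0]
        d2, _ = dists[1]
        if d2 > 0 and d1 / d2 < ratio_max:
            pairs.append((p, q1))
    return pairs
```

A small d1/d2 ratio means the best candidate is much closer than the runner-up, i.e. the match is unambiguous.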
7. The method according to claim 1, wherein, in step S300, the step of matching the infrared feature points and the visible light feature points to obtain matching feature point pairs and obtaining the distance ratios corresponding to the matching feature point pairs comprises:
matching the infrared feature points and the visible light feature points to obtain matching feature point pairs, and deleting wrong feature point pairs from the matching feature point pairs by using the RANSAC algorithm to obtain target feature point pairs; and
acquiring the distance ratios corresponding to the target feature point pairs.
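A minimal RANSAC loop in the spirit of this claim; the pure-translation model is a deliberately simplified stand-in for the patent's transformation model, and all names and thresholds are assumptions:

```python
import random

def ransac_translation(pairs, iters=200, tol=1.0, seed=0):
    """Minimal RANSAC sketch: sample one matching pair, hypothesize a
    translation (dx, dy), count the pairs consistent with it within
    tol, and keep the largest inlier set found over all iterations."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(pairs)
        dx, dy = x2 - x1, y2 - y1
        inliers = [((a, b), (c, d)) for (a, b), (c, d) in pairs
                   if abs(c - a - dx) <= tol and abs(d - b - dy) <= tol]
        if len(inliers) > len(best):
            best = inliers
    return best
```

The full method instead fits a multi-parameter transformation model from a subset of pairs, but the sample-score-keep structure is the same.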
8. The method for detecting an abnormally heated electric power apparatus according to any one of claims 1 to 7, wherein in step S100, the step of performing edge detection on the infrared image and the visible light image to obtain an infrared edge image corresponding to the infrared image and a visible light edge image corresponding to the visible light image includes:
inputting the infrared image and the visible light image into a neural network trained on the basis of the HED (holistically-nested edge detection) algorithm to obtain infrared edge map results corresponding to the infrared image and visible light edge map results corresponding to the visible light image; and
aggregating the infrared edge map results to obtain the infrared edge image corresponding to the infrared image, and aggregating the visible light edge map results to obtain the visible light edge image corresponding to the visible light image.
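HED produces several side-output edge maps per image; the aggregation step can be sketched as a simple average, which is an assumption here since the patent does not fix the exact aggregation operator:

```python
import numpy as np

def aggregate_edge_maps(edge_maps):
    """Average a list of equally-sized side-output edge maps into a
    single edge image."""
    return np.mean(np.stack(edge_maps), axis=0)
```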
9. An abnormally heated electric power equipment detection apparatus, characterized in that the abnormally heated electric power equipment detection apparatus includes:
the acquisition module is used for acquiring an infrared image and a visible light image corresponding to the power equipment;
the edge detection module is used for carrying out edge detection on the infrared image and the visible light image to obtain an infrared edge image corresponding to the infrared image and a visible light edge image corresponding to the visible light image;
the feature point detection module is used for performing feature point detection on the infrared edge image and the visible light edge image through a KAZE algorithm to obtain the infrared feature points corresponding to the infrared edge image and the visible light feature points corresponding to the visible light edge image;
the matching module is used for matching the infrared feature points with the visible light feature points to obtain matching feature point pairs;
the initial data set judgment module is used for obtaining the distance ratios corresponding to the matching feature point pairs, sorting the distance ratios from small to large to obtain the sorted distance ratios, selecting a first preset number of target distance ratios from the front of the sorted distance ratios, and determining the matching feature point pairs corresponding to the target distance ratios as an initial data set;
the execution module is used for executing the internal feature point pair determination process: randomly selecting, from the initial data set, a subset containing a second preset number of matching feature point pairs, calculating transformation model parameters from the subset, and judging the matching feature point pairs in the initial data set other than the subset according to the transformation model parameters so as to determine the internal feature point pairs among the matching feature point pairs;
the calculation module is used for counting the number of times the internal feature point pair determination process has been executed;
the judging module is used for, when the number of executions exceeds a preset number, determining the transformation model parameters that yield the largest number of internal feature point pairs as the transformation parameters of the transformation model between the infrared edge image and the visible light edge image;
the calculation module is further used for calculating a normalized correlation coefficient corresponding to each sliding window;
the judging module is further used for determining the maximum coefficient in the calculated normalized correlation coefficients, determining the area of the sub-image corresponding to the maximum coefficient as a matching edge area, and determining a non-matching area in the visible light image according to the matching edge area;
the segmentation module is used for carrying out threshold segmentation on the infrared image to obtain a mask image corresponding to the infrared image;
the judging module is further used for determining an abnormal heating area in the visible light image according to the mask image and the non-matching area, and determining the power equipment corresponding to the abnormal heating area as the power equipment that generates heat abnormally.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a detection program which, when executed by a processor, implements the steps of the abnormally heated electric power equipment detection method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010643380.6A CN112288761B (en) | 2020-07-07 | 2020-07-07 | Abnormal heating power equipment detection method and device and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112288761A CN112288761A (en) | 2021-01-29 |
CN112288761B true CN112288761B (en) | 2022-08-30 |
Family
ID=74420628
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113792684B (en) * | 2021-09-17 | 2024-03-29 | 中国科学技术大学 | Multi-mode visual flame detection method for fire-fighting robot under weak alignment condition |
CN114018982B (en) * | 2021-10-14 | 2023-11-07 | 国网江西省电力有限公司电力科学研究院 | Visual monitoring method for dust deposit of air preheater |
CN113793372A (en) * | 2021-10-15 | 2021-12-14 | 中航华东光电有限公司 | Optimal registration method and system for different-source images |
TWI810863B (en) * | 2022-03-24 | 2023-08-01 | 中華電信股份有限公司 | An abnormal inspection system and method for power generation equipment and computer-readable medium thereof |
CN115908871B (en) * | 2022-10-27 | 2023-07-28 | 广州城轨科技有限公司 | Wearable equipment track equipment data detection method, device, equipment and medium |
CN116740653A (en) * | 2023-08-14 | 2023-09-12 | 山东创亿智慧信息科技发展有限责任公司 | Distribution box running state monitoring method and system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104134208A (en) * | 2014-07-17 | 2014-11-05 | 北京航空航天大学 | Coarse-to-fine infrared and visible light image registration method by adopting geometric construction characteristics |
CN106257535A (en) * | 2016-08-11 | 2016-12-28 | 河海大学常州校区 | Electrical equipment based on SURF operator is infrared and visible light image registration method |
CN110232387A (en) * | 2019-05-24 | 2019-09-13 | 河海大学 | A kind of heterologous image matching method based on KAZE-HOG algorithm |
CN110266268A (en) * | 2019-06-26 | 2019-09-20 | 武汉理工大学 | A kind of photovoltaic module fault detection method based on image co-registration identification |
CN111369605A (en) * | 2020-02-27 | 2020-07-03 | 河海大学 | Infrared and visible light image registration method and system based on edge features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||