CN112379231B - Equipment detection method and device based on multispectral image - Google Patents


Info

Publication number
CN112379231B
CN112379231B (application CN202011263184.2A)
Authority
CN
China
Prior art keywords
image
infrared
ultraviolet
discharge
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011263184.2A
Other languages
Chinese (zh)
Other versions
CN112379231A (en)
Inventor
戴波
蒋城颖
高明
姚一杨
梅峰
沈桂竹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Zhejiang Electric Power Co Ltd
Information and Telecommunication Branch of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
State Grid Zhejiang Electric Power Co Ltd
Information and Telecommunication Branch of State Grid Zhejiang Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Zhejiang Electric Power Co Ltd, Information and Telecommunication Branch of State Grid Zhejiang Electric Power Co Ltd filed Critical State Grid Zhejiang Electric Power Co Ltd
Priority to CN202011263184.2A priority Critical patent/CN112379231B/en
Publication of CN112379231A publication Critical patent/CN112379231A/en
Application granted granted Critical
Publication of CN112379231B publication Critical patent/CN112379231B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R 31/00 - Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R 31/12 - Testing dielectric strength or breakdown voltage; Testing or monitoring effectiveness or level of insulation, e.g. of a cable or of an apparatus, for example using partial discharge measurements; Electrostatic testing
    • G01R 31/1218 - Testing dielectric strength or breakdown voltage; Testing or monitoring effectiveness or level of insulation, e.g. of a cable or of an apparatus, for example using partial discharge measurements; Electrostatic testing using optical methods; using charged particle, e.g. electron, beams or X-rays
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01J - MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J 5/00 - Radiation pyrometry, e.g. infrared or optical thermometry
    • G01J 5/0096 - Radiation pyrometry, e.g. infrared or optical thermometry for measuring wires, electrical contacts or electronic systems
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 - Systems specially adapted for particular applications
    • G01N 21/88 - Investigating the presence of flaws or contamination
    • G01N 21/8851 - Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Radiation Pyrometers (AREA)

Abstract

The application discloses an equipment detection method and apparatus based on multispectral images. The method comprises the following steps: obtaining a plurality of image sets; performing image recognition on at least the infrared image in each image set to obtain a temperature rise recognition result of the infrared image, the temperature rise recognition result comprising a temperature-rise abnormality detection result for each image area contained in the infrared image; performing image recognition on at least the visible light image and the ultraviolet image in each image set to obtain a discharge recognition result of the ultraviolet image, the discharge recognition result comprising a discharge-abnormality detection result for each image area contained in the ultraviolet image; and obtaining an equipment detection result according to the temperature rise recognition result and the discharge recognition result in each image set, the equipment detection result comprising an abnormality detection result for each image area contained in the infrared image or the ultraviolet image, the abnormality detection result representing whether the equipment component of the power equipment corresponding to that image area is abnormal.

Description

Equipment detection method and device based on multispectral image
Technical Field
The application relates to the technical field of power systems, and in particular to an equipment detection method and device based on multispectral images.
Background
The safe and stable operation of the power system is vital to economic and social development and to the stability of residents' daily lives. As national power demand grows, the scale of power grid construction continues to expand and power loads keep rising. Electrical equipment operates outdoors for long periods under high voltage, high temperature and high load, and any damage or failure can cause huge economic losses. To ensure the safe operation of the power system, power-outage maintenance has long been the norm: large amounts of maintenance work are carried out blindly, without knowledge of the equipment's operating state, wasting considerable manpower and material resources.
To improve the detection efficiency of power equipment, image recognition is now often performed on images containing the equipment area, so that the equipment state is sensed and failures are detected.
However, conventional image processing methods often yield low detection accuracy.
Disclosure of Invention
In view of this, the present application provides an equipment detection method and apparatus based on multispectral images, so as to solve the technical problem of low accuracy in detecting abnormalities of electrical equipment in the prior art.
The present application provides the following:
a device detection method based on multispectral images, comprising:
obtaining a plurality of image sets, wherein each image set corresponds to different acquisition parameters, the acquisition parameters comprise at least any one or more of acquisition scale, acquisition angle and acquisition time, each image set comprises multiple frames of equipment images of the power equipment, the equipment images comprise an infrared image, a visible light image and an ultraviolet image, and the equipment images in each image set correspond to the same acquisition parameters;
performing image recognition on at least the infrared image in each image set to obtain a temperature rise recognition result of the infrared image, wherein the temperature rise recognition result comprises: a temperature-rise abnormality detection result for each image area contained in the infrared image;
performing image recognition on at least the visible light image and the ultraviolet image in each image set to obtain a discharge recognition result of the ultraviolet image, wherein the discharge recognition result comprises: a discharge-abnormality detection result for each image area contained in the ultraviolet image;
obtaining an equipment detection result according to the temperature rise recognition result and the discharge recognition result in each image set, wherein the equipment detection result comprises: an abnormality detection result for each image area contained in the infrared image or the ultraviolet image, the abnormality detection result representing whether the equipment component of the power equipment corresponding to the image area is abnormal.
Preferably, the method at least performs image recognition on the infrared image in each image set to obtain a temperature rise recognition result of the infrared image, and includes:
performing image segmentation on the infrared image in each image set to obtain a plurality of image areas, wherein each image area corresponds to one device component in the target device;
obtaining a temperature matrix corresponding to each image area at least according to the image emissivity of the infrared image;
obtaining characteristic parameters of each image area according to the temperature matrix corresponding to each image area, wherein the characteristic parameters comprise characteristic parameters corresponding to current heating types and/or characteristic parameters corresponding to voltage heating types;
and obtaining the abnormal temperature rise detection result of each image area at least according to the characteristic parameters and a preset parameter threshold value.
In the above method, preferably, the parameter threshold is determined at least according to the influence factor that the correlation with the characteristic parameter satisfies a preset influence condition;
wherein, the influencing factors at least comprise any one or more of seasonal factors, time factors and voltage grade factors.
In the above method, preferably, the parameter threshold includes a plurality of thresholds corresponding to temperature-rise abnormality levels, and the threshold corresponding to each temperature-rise abnormality level is different;
and the parameter threshold is obtained by processing a plurality of historical image samples corresponding to each temperature-rise abnormality level.
In the above method, preferably, before obtaining the characteristic parameter of each image region according to the temperature matrix corresponding to each image region, the method further includes:
acquiring initial infrared data corresponding to each image area in the infrared image, and correcting the image emissivity;
and adjusting the temperature matrix corresponding to each image area in the infrared image by using the corrected image emissivity.
The above method, preferably, performing image recognition on at least the visible light image and the ultraviolet image in each image set to obtain a discharge recognition result of the ultraviolet image, includes:
performing image recognition on the visible light image in each image set to identify a plurality of image areas included in the visible light image and a corresponding one of the device components in the power device in each image area;
obtaining a plurality of image areas included in the ultraviolet image in each image set and a device component in the power device corresponding to each image area according to a plurality of image areas included in the visible light image and a device component of a target device corresponding to each image area;
detecting each image area in the ultraviolet image by using a detection model based on a convolutional neural network to obtain a discharge-abnormality detection result for each image area in the ultraviolet image;
the detection model is obtained by training a training sample set with a discharge label, the training sample set at least comprises a plurality of frames of training images, the training images are images of ultraviolet spectrums, and the discharge label represents whether discharge exists in equipment components in the training images.
Preferably, the method performs image recognition on the visible light image in each image set to recognize a plurality of image areas included in the visible light image and one device component in the power device corresponding to each image area, and includes:
detecting the visible light image by using a YOLO (You Only Look Once) object recognition and localization algorithm to obtain a plurality of equipment components of the power equipment contained in the visible light image;
wherein the YOLO object recognition and localization algorithm at least comprises a tilt angle vector;
according to the plurality of equipment components, the visible light image is segmented to obtain an image area where each equipment component is located in the visible light image;
and identifying the equipment components in each image area segmented in the visible light image to obtain the component identification of each equipment component.
The above method, preferably, further comprises:
compressing the detection model based on the convolutional neural network;
wherein the compression process comprises: any one or more of pruning, quantization, weight sharing, and tensor decomposition.
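The compression options listed above are standard model-compression techniques. As a minimal, hypothetical sketch (not the patent's implementation), the pruning option can be illustrated with magnitude pruning of a plain weight matrix:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights.

    A minimal sketch of the 'pruning' option; real CNN pruning is
    usually applied layer-wise and followed by fine-tuning.
    """
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)          # number of weights to drop
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the cutoff
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)
```

In practice the pruned model is then retrained briefly to recover accuracy; quantization, weight sharing and tensor decomposition would be applied analogously to the stored parameters.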
The above method, preferably, obtaining an apparatus detection result according to the temperature rise recognition result and the discharge recognition result, includes:
judging whether the discharge-abnormality detection result of each image area in the discharge recognition result indicates that the corresponding image area has a discharge abnormality;
and in the case that the discharge-abnormality detection result indicates that the corresponding image area has a discharge abnormality, adjusting the temperature-rise abnormality detection result of that image area in the temperature rise recognition result from its current temperature-rise abnormality level to a higher level, so as to obtain the equipment detection result.
An apparatus for device detection based on multispectral images, comprising:
the device comprises an image obtaining unit, a processing unit and a processing unit, wherein the image obtaining unit is used for obtaining a plurality of image sets, each image set corresponds to different acquisition parameters, the acquisition parameters at least comprise any one or more of acquisition scale, acquisition angle and acquisition time, each image set comprises a plurality of frames of device images of power equipment, the device images comprise infrared images, visible light images and ultraviolet images, and the device images in each image set correspond to the same acquisition parameter;
an infrared detection unit, configured to perform image recognition on at least the infrared image in each image set to obtain a temperature rise recognition result of the infrared image, where the temperature rise recognition result includes: detecting the abnormal temperature rise of each image area contained in the infrared image;
an ultraviolet detection unit, configured to perform image recognition on at least the visible light image and the ultraviolet image in each image set to obtain a discharge recognition result of the ultraviolet image, where the discharge recognition result includes: a discharge abnormality detection result for each image area included in the ultraviolet image;
a result obtaining unit, configured to obtain an apparatus detection result according to the temperature rise identification result and the discharge identification result in each image set, where the apparatus detection result includes: and an abnormality detection result for each image area included in the infrared image or the ultraviolet image, the abnormality detection result representing whether an apparatus component corresponding to the image area in the electric power apparatus is abnormal or not.
According to the above scheme, the equipment detection method and apparatus based on multispectral images collect multispectral images of the power equipment, such as infrared, ultraviolet and visible light images, and perform image recognition on each of them to obtain abnormality detection results for temperature rise and for discharge; these results are then combined to generate a final abnormality detection result for each equipment component of the power equipment. An accurate abnormality detection result is thus obtained through separate detection and comprehensive processing of the multispectral images, achieving the aim of improving detection accuracy.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a device detection method based on a multispectral image according to an embodiment of the present disclosure;
FIG. 2 is a diagram illustrating an example of image processing in an embodiment of the present application;
fig. 3 is a partial flowchart of a device detection method based on a multispectral image according to an embodiment of the present disclosure;
fig. 4 is a flowchart of another part of a method for device detection based on multispectral images according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an apparatus detecting device based on multispectral images according to a second embodiment of the present disclosure;
figs. 6 to 29 are diagrams illustrating the detection of abnormalities of electrical equipment in applications of the power system.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, a flowchart of an implementation of a device detection method based on multispectral images according to an embodiment of the present disclosure is shown, where the method is applied to an electronic device capable of image acquisition and image processing, such as a computer or a server connected to a multispectral camera. The technical scheme in the embodiment is mainly used for improving the accuracy of detecting whether the equipment components of the power equipment have faults or not.
Specifically, the method in this embodiment may include the following steps:
step 101: a plurality of image sets is obtained.
Each image set corresponds to different acquisition parameters, where the acquisition parameters comprise at least any one or more of acquisition scale, acquisition angle and acquisition time; each image set comprises multiple frames of equipment images of the power equipment, the equipment images comprising infrared images, visible light images and ultraviolet images, and the equipment images in one image set correspond to the same acquisition parameters. For example, an infrared image, a visible light image and an ultraviolet image belonging to the same image set correspond to the same acquisition angle, acquisition scale and acquisition time. Hereinafter, the infrared, visible light and ultraviolet images on which abnormality detection is performed in this embodiment refer to equipment images from an image set with the same acquisition parameters.
Specifically, in this embodiment, multispectral device images may be acquired at the same image acquisition angle through a multi-sensor system configured for the power device, for example, an infrared image including the power device is acquired through an infrared sensor or an infrared camera in the multi-sensor system, an ultraviolet image including the power device is acquired through an ultraviolet sensor or an ultraviolet camera in the multi-sensor system, and a visible light image including the power device is acquired through an RGB camera in the multi-sensor system.
Further, in one implementation, the visible light image, the infrared image and the ultraviolet image in this embodiment may be fused by an image fusion algorithm and the fused image output; alternatively, after detection is performed on the fused image, the resulting detection result, which represents whether the power equipment has a fault, is output.
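The embodiment does not specify which fusion algorithm is used. As an illustrative sketch only, a pixel-wise weighted fusion of co-registered single-channel images might look like:

```python
import numpy as np

def fuse_images(visible, infrared, ultraviolet, weights=(0.5, 0.3, 0.2)):
    """Pixel-wise weighted average of three co-registered, same-size,
    single-channel images. A stand-in for the unspecified fusion
    algorithm; practical systems often use pyramid or wavelet fusion."""
    stack = np.stack([np.asarray(m, dtype=float)
                      for m in (visible, infrared, ultraviolet)])
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    return (stack * w).sum(axis=0) / w.sum()
```

The weights here are arbitrary placeholders; the key assumption is that all three images share the same acquisition angle and scale, which the same-image-set constraint above guarantees.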
Step 102: perform image recognition on at least the infrared image in each image set to obtain a temperature rise recognition result of the infrared image.
The temperature rise recognition result includes a temperature-rise abnormality detection result for each image area contained in the infrared image.
Specifically, in this embodiment, after the infrared image is segmented, characteristic parameters are extracted from the temperature matrix of each segmented image region; these characteristic parameters can then be used to determine the temperature-rise abnormality detection result of each image area.
For example, the temperature-rise abnormality result may be divided into several temperature-rise abnormality levels, such as the three levels general, severe and critical; on this basis, each image area contained in the infrared image corresponds, in the temperature rise recognition result, to the general, severe or critical temperature-rise abnormality level.
Step 103: perform image recognition on at least the visible light image and the ultraviolet image in each image set to obtain a discharge recognition result of the ultraviolet image.
The discharge recognition result includes a discharge-abnormality detection result for each image area contained in the ultraviolet image.
Specifically, in this embodiment, the region of the ultraviolet image including each device component in the electrical device may be segmented by the visible light image to obtain a plurality of image regions, and then whether the discharge abnormality exists in the image regions is detected, so as to obtain a discharge abnormality detection result of each image region.
It should be noted that the infrared image and the ultraviolet image correspond to the same image capturing angle, and therefore correspond to the same power equipment and are in the same orientation, and the image segmentation performed in the infrared image and the image segmentation performed in the ultraviolet image are both performed according to the equipment components of the power equipment, and therefore, the image areas in the infrared image and the image areas in the ultraviolet image correspond to one another with respect to the equipment components, as shown in fig. 2. Based on this, the temperature rise abnormality detection result for each image area in the infrared image corresponds to the discharge abnormality detection result for the corresponding image area in the ultraviolet image.
Step 104: obtain an equipment detection result according to the temperature rise recognition result and the discharge recognition result in each image set.
The equipment detection result includes an abnormality detection result for each image area contained in the infrared image or the ultraviolet image, representing whether the equipment component of the power equipment corresponding to the image area is abnormal.
Specifically, in this embodiment, based on the temperature-rise abnormality detection result of each image region in the temperature rise recognition result and the discharge-abnormality detection result of each image region in the discharge recognition result, the temperature-rise abnormality detection result is adjusted using the corresponding discharge-abnormality detection result, or the discharge-abnormality detection result is adjusted using the corresponding temperature-rise abnormality detection result, so as to obtain the abnormality detection result of each image region. The abnormality detection result of each image region represents whether the equipment component in that region is abnormal, for example, whether components of the power equipment such as connectors, switches, circuit breakers, expansion joints and voltage-equalizing caps are abnormal.
Based on the above implementation, in this embodiment a set of equipment detection results can be obtained for each group of acquisition parameters; for example, an abnormality detection result for each equipment component of the power equipment is obtained for different acquisition angles, acquisition times and acquisition scales, thereby improving detection accuracy.
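The combination step described above, in which a region's temperature-rise level is escalated when the same region also shows a discharge abnormality, can be sketched as follows (the level names and data shapes are illustrative, not fixed by the patent):

```python
# Ordered abnormality levels, least to most severe.
LEVELS = ["normal", "general", "severe", "critical"]

def fuse_results(temp_levels, discharge_flags):
    """Combine per-region temperature-rise levels with per-region
    discharge-abnormality flags: a region that also shows discharge
    is raised one level, capped at the highest level."""
    fused = {}
    for region, level in temp_levels.items():
        idx = LEVELS.index(level)
        if discharge_flags.get(region, False):
            idx = min(idx + 1, len(LEVELS) - 1)
        fused[region] = LEVELS[idx]
    return fused
```

This mirrors the preferred claim above: the discharge result never lowers a temperature-rise level, it only escalates it.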
According to the above scheme, the equipment detection method based on multispectral images provided in this embodiment collects multispectral images of the power equipment, such as infrared, ultraviolet and visible light images, performs image recognition on each of them to obtain abnormality detection results for temperature rise and for discharge, and then combines these results to generate a final abnormality detection result for each equipment component of the power equipment. In this embodiment, a more accurate abnormality detection result is therefore obtained through separate detection and comprehensive processing of the multispectral images, achieving the aim of improving detection accuracy.
In one implementation manner, when performing image recognition on at least the infrared image in each image set to obtain a temperature rise recognition result of the infrared image in step 102, specifically, the following steps may be implemented, as shown in fig. 3:
step 301: and performing image segmentation on the infrared image in each image set to obtain a plurality of image areas.
Each image area corresponds to one equipment component of the target equipment; as shown in fig. 2, the image areas correspond to components of the power equipment such as connectors, switches and circuit breakers.
Specifically, in this embodiment, a boundary-detection method may be used to identify and segment the area where each equipment component of the power equipment is located. For example, in this embodiment each pixel in the infrared image is segmented, without needing to consider whether the power equipment in the infrared image is tilted.
Step 302: and obtaining a temperature matrix corresponding to each image area at least according to the image radiance of the infrared image.
In this embodiment, because the emissivity of different materials differs, different methods are used to obtain the temperature matrix of each image area for infrared images with an image emissivity equal to 1 and for those with an emissivity less than 1.
For example, when the image emissivity is 1, the temperature matrix is given by formula (1):
T = B / ln(R1 / (R2 * (S + O)) + F) (1)
where T is the temperature matrix, S is the raw data matrix of the infrared image, B, R1, R2, O and F are Planck calibration constants, and ln() is the natural logarithm.
When the image emissivity is less than 1, the temperature matrix is given by formula (2):
T = B / ln(R1 / (R2 * (RAW_obj + O)) + F) (2)
where RAW_refl and RAW_obj are intermediate quantities obtained by formulas (3) and (4):
RAW_refl = R1 / (R2 * (e^(B / T_refl) - F)) - O (3)
RAW_obj = (S - (1 - Emissivity) * RAW_refl) / Emissivity (4)
where B, R1, R2, O and F are Planck calibration constants, T_refl is the reflected apparent temperature (one of the extracted calculation parameters), e is the natural constant (approximately 2.71828), and T is the temperature matrix.
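Formulas (1) to (4) follow the common FLIR-style radiometric conversion. They can be sketched in Python as below; the calibration constants are illustrative placeholders (real values come from the camera's calibration metadata, not from the patent):

```python
import numpy as np

# Illustrative Planck calibration constants (placeholders, not from the patent).
B, R1, R2, O, F = 1428.0, 17096.0, 0.0408, -342.0, 1.0

def temp_from_raw(S, emissivity=1.0, T_refl=293.15):
    """Convert a raw infrared data matrix S to a temperature matrix in
    kelvin. emissivity == 1 applies formula (1) directly; emissivity < 1
    first removes the reflected component via formulas (3) and (4)."""
    S = np.asarray(S, dtype=float)
    if emissivity >= 1.0:
        return B / np.log(R1 / (R2 * (S + O)) + F)               # formula (1)
    raw_refl = R1 / (R2 * (np.exp(B / T_refl) - F)) - O          # formula (3)
    raw_obj = (S - (1.0 - emissivity) * raw_refl) / emissivity   # formula (4)
    return B / np.log(R1 / (R2 * (raw_obj + O)) + F)             # formula (2)
```

Note how lowering the emissivity at a fixed raw reading yields a higher object temperature, since part of the measured signal is attributed to reflection and removed.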
Optionally, the emissivity may also be supplied as an external input parameter and modified accordingly.
In one implementation, in order to improve the accuracy of the temperature matrix, after the image areas where the equipment components of the power equipment are located have been segmented in the infrared image, the initial infrared data corresponding to each image area (i.e., the raw data matrix above) is acquired and used to correct the image emissivity; the corrected emissivity is then used to adjust the temperature matrix of each image area. This improves the processing quality of the infrared image and hence the accuracy of the subsequently obtained temperature rise recognition result.
Step 303: and obtaining the characteristic parameters of each image area according to the temperature matrix corresponding to each image area.
In this embodiment, abnormal faults of power equipment are considered to fall into two types: current-heating and voltage-heating. Accordingly, when the characteristic parameters for abnormality detection are extracted, the obtained characteristic parameters may include only those corresponding to the current-heating type, only those corresponding to the voltage-heating type, or both.
Specifically, for the characteristic parameters corresponding to the current-heating type, this embodiment may process the temperature matrix of each image area to extract, as characteristic parameters, the temperature information of the device component in each image area and of the heating area where that component connects to the power transmission line;
for the characteristic parameters corresponding to the voltage-heating type, this embodiment may process the temperature matrix of each image area to extract, as characteristic parameters, the temperature information of casing-class device components in each image area.
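As a minimal sketch of this feature extraction, one might compute per-region temperature statistics from the temperature matrix and a boolean region mask. The specific feature names (`t_max`, `t_rise`) are illustrative, not the patent's exact parameter set:

```python
import numpy as np

def region_features(T, mask):
    """Return simple diagnostic features for one segmented image region:
    the maximum temperature and its rise over the region's median."""
    vals = np.asarray(T)[mask]
    return {"t_max": float(vals.max()),
            "t_rise": float(vals.max() - np.median(vals))}
```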
Step 304: and obtaining the temperature-rise abnormality detection result of each image area at least according to the characteristic parameters and a preset parameter threshold.
In this embodiment, whether a characteristic parameter meets the corresponding parameter threshold may be judged against a preset or real-time updated threshold, so as to obtain the temperature-rise abnormality detection result of the device component in each image area.
It should be noted that different equipment components correspond to different parameter thresholds. The parameter threshold of each equipment component includes multiple thresholds, one for each temperature-rise abnormality level, and the thresholds of different components at the same level may be the same or different.
In one implementation, the threshold of each equipment component at each temperature-rise abnormality level can be obtained by processing multiple historical image samples corresponding to that level.
Specifically, in this embodiment, historical image samples of multiple equipment components with known temperature-rise abnormality levels can be obtained in advance and grouped by equipment component and abnormality level, with samples of the same component and the same level placed in one group. A distribution model is then established for each group, from which the threshold of each equipment component at each temperature-rise abnormality level, that is, the parameter threshold, is obtained.
For example, the cumulative distribution probability of each group's historical sample images at the corresponding temperature-rise abnormality level is obtained from the group's distribution model, and the threshold for that level is then calculated from it. For instance, the threshold at the general level may be obtained by subtracting the cumulative distribution probability at the general level from 1, the threshold at the severe level by subtracting the cumulative distribution probability at the severe level from 1, and the threshold at the critical level by subtracting the cumulative distribution probability at the critical level from 1.
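The grouping-and-threshold derivation might be sketched as follows, assuming the empirical distribution of each group's samples stands in for the fitted distribution model; the per-level cumulative probabilities here are assumed values, not ones given in the text:

```python
import numpy as np

# Assumed cumulative probabilities for the three example levels.
LEVEL_PROBS = {"general": 0.80, "severe": 0.95, "critical": 0.99}

def level_thresholds(samples):
    """Derive one threshold per temperature-rise abnormality level for one
    equipment component by reading the empirical distribution of its
    historical feature samples at each level's cumulative probability."""
    samples = np.asarray(samples, dtype=float)
    return {level: float(np.quantile(samples, p))
            for level, p in LEVEL_PROBS.items()}
```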
Further, in order to improve the accuracy of the parameter threshold and thereby of the temperature rise recognition result, when obtaining the parameter threshold this embodiment first identifies the influence factors whose correlation with the characteristic parameter meets a preset influence condition, and then determines the parameter thresholds of different equipment components at different temperature-rise abnormality levels according to those factors; for example, the threshold obtained from the historical sample images is increased or decreased according to the influence factors.
The influence condition is that the correlation between the influence factor and the characteristic parameter is higher than a preset correlation threshold, or that the influence factor makes the raw data corresponding to the characteristic parameter more dispersed. The influence factors include at least one of seasonal factors, time factors, and voltage-level factors.
Specifically, in this embodiment, the influence factors meeting the influence condition may be identified using an influence factor analysis method based on fuzzy C-means.
For example, fuzzy C-means clustering is first used to automatically cluster the numerical diagnostic characteristic parameters; in a multidimensional space, fuzzy C-means can objectively obtain the cluster centers of the data. On this basis, the influence factors are divided into intervals: voltage level into 110 kV, 220 kV, 500 kV, and above 500 kV; detection season into spring, summer, autumn, and winter; detection time into day and night. The clusters of diagnostic characteristic parameters belonging to different influence-factor intervals are then compared across cluster categories to obtain the influence degree of each factor. For a given factor, if the distance between the cluster categories is large, the data under that factor are more dispersed and the factor has a large influence; if the distance is small, the factor is considered not to affect the clustering result, that is, its influence on the data is small. In this way, the classification basis and the main influence factors used in calculating the differentiated thresholds are determined; the factors found to have a large influence include at least the detection season, the detection time, and the voltage level.
Based on the above, the variables corresponding to the influence factors are added into the distribution model for obtaining the cumulative distribution probability, so that the accuracy of the parameter threshold is improved.
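A minimal sketch of the fuzzy C-means influence-factor analysis, with a hand-rolled FCM and a simple spread-of-centers measure standing in for the cluster-distance comparison described above (both are assumptions, not the patent's exact procedure):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy C-means on data X (n, d); returns the c cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return centers

def factor_influence(groups, c=2):
    """Gauge a factor's impact: cluster each factor interval's diagnostic
    parameters, then measure how far apart the interval centers lie --
    a large spread means the factor matters (illustrative measure)."""
    centers = np.array([fuzzy_c_means(g, c=c).mean(axis=0) for g in groups])
    return float(np.linalg.norm(centers - centers.mean(axis=0), axis=1).mean())
```

Comparing a day/night split with well-separated distributions against two draws from the same distribution shows the intended behavior: the separated split scores a much larger influence.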
In one implementation, the step 103 may be implemented by performing image recognition on at least the visible light image and the ultraviolet image in each image set to obtain a discharge recognition result of the ultraviolet image, as shown in fig. 4:
Step 401: and performing image recognition on the visible light image in each image set to identify the multiple image areas contained in the visible light image and the equipment component of the power equipment corresponding to each image area.
Specifically, in this embodiment, a YOLO-based object recognition and positioning algorithm may perform visual detection on the visible light image to detect the types of the multiple device components of the power equipment in the image, that is, to distinguish the components of different types. The visible light image is then segmented according to these components to obtain the image area where each component is located, and component identification is performed on the components in each segmented area to obtain the component identifier of each device component, such as the identifier of a switch.
Further, in order to improve accuracy, this embodiment improves the YOLO object recognition and positioning algorithm by adding a tilt-angle component to the prediction vector of the bounding box, so as to implement oblique (rotated) box prediction.
In addition, to further improve accuracy, a larger convolution kernel, such as 5 × 5, is selected for YOLO so that the receptive field of each grid cell's prediction unit covers a sufficiently large area of the original image.
In addition, a regularization operation may be added before the multi-scale feature fusion in YOLO. Because the activation magnitudes of different feature maps must be matched before fusion, L2 regularization is applied to the feature maps to be fused; the regularized maps can then be concatenated, and the dimension is reduced through a 1 × 1 convolution layer.
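The L2 normalization before multi-scale fusion can be sketched as follows; for illustration, the 1 × 1 convolution is reduced to a per-pixel channel-mixing matrix `w`, which is an assumption of this sketch rather than the network's actual layer:

```python
import numpy as np

def l2_normalize(fmap, eps=1e-12):
    """L2-normalize a feature map (C, H, W) along the channel axis so the
    activation magnitudes of different scales become comparable."""
    norm = np.sqrt((fmap ** 2).sum(axis=0, keepdims=True)) + eps
    return fmap / norm

def fuse_feature_maps(fmaps, w):
    """Concatenate L2-normalized maps along the channel axis, then reduce
    the channel dimension; w (C_out, C_in_total) plays the role of the
    1 x 1 convolution, applied independently at each pixel."""
    fused = np.concatenate([l2_normalize(f) for f in fmaps], axis=0)
    c, h, width = fused.shape
    return (w @ fused.reshape(c, -1)).reshape(-1, h, width)
```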
In one implementation, a DeepLab-based visible-light-image power equipment area segmentation algorithm may be used to segment the image areas where the device components of the power equipment are located in the visible light image.
In one implementation, the device component in each image area may be identified based on a preset registration algorithm to obtain the identifier of each component. The registration algorithm may be a registration method based on two-dimensional image feature extraction, such as a corner-point or edge-point extraction algorithm, or a registration method based on spatial geometric transformation; for example, the correspondence between the pixel coordinates of the current image area and those of sample images with known component identifiers is found through a spatial geometric coordinate transformation, and the component identifier of the device component in the current image area is then identified from that correspondence.
Step 402: and obtaining a plurality of image areas contained in the ultraviolet image in each image set and the equipment components in the corresponding power equipment in each image area according to the plurality of image areas contained in the visible light image and the equipment components of the corresponding target equipment in each image area.
In this embodiment, based on a plurality of image regions included in the visible light image and device components of the target device corresponding to each image region, a plurality of image regions included in the ultraviolet image and device components in the power device corresponding to each image region can be obtained, as shown in fig. 2.
Step 403: and detecting each image area in the ultraviolet image by using a detection model based on a convolutional neural network to obtain the discharge abnormality detection result of each image area in the ultraviolet image.
The detection model is trained on a training sample set with discharge labels. The set contains at least multiple frames of training images of the ultraviolet spectrum, and each discharge label indicates whether discharge exists on the equipment component in the training image. In this embodiment, training images are input into the detection model in sequence, the model output is compared with the corresponding discharge label, and the model parameters are adjusted through the established loss function until the loss converges, completing the training. The trained model can then accurately detect whether a discharge abnormality exists on the device component in an input image area.
In this way, the image areas of the ultraviolet image that may contain a discharge abnormality are located through the visible light image captured at the same acquisition angle, and after the detection model performs convolution, feature extraction, and other processing on these areas, the image areas in which a discharge abnormality exists, and the corresponding equipment components, can be located.
In a preferred implementation, as the detection model detects image areas, the model may be further trained and optimized using the manually reviewed results of whether a discharge abnormality exists in each area, so as to improve accuracy.
In a preferred implementation, the detection model may be configured with an input layer, multiple convolution layers, a pooling layer, a fully connected layer, and an output layer, so as to perform convolution processing on an image area and, using the trained model parameters, learn whether the device component corresponding to that area has a discharge abnormality.
In one embodiment, to reduce the complexity of the detection model and facilitate deployment on an application terminal, the convolutional-neural-network-based detection model may be compressed while still meeting the detection accuracy requirement. The compression may include any one or more of pruning, quantization, weight sharing, and tensor decomposition. After the model's complexity is reduced, abnormality detection of power equipment can run on a portable terminal such as a tablet.
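Two of the listed compression steps, magnitude pruning and uniform quantization, might be sketched as follows on a raw weight matrix (weight sharing and tensor decomposition are omitted; the sparsity and bit-width values are illustrative):

```python
import numpy as np

def prune_weights(w, sparsity=0.5):
    """Magnitude pruning: zero out the smallest-magnitude fraction of weights."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= thresh, w, 0.0)

def quantize_weights(w, n_bits=8):
    """Uniform linear quantization of weights to 2**n_bits levels."""
    levels = 2 ** n_bits - 1
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    return np.round((w - lo) / scale) * scale + lo
```

Pruning removes the stated fraction of parameters outright, while 8-bit quantization bounds the per-weight error by half a quantization step, which is why the two are often combined before deployment to an edge device.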
In one implementation manner, when obtaining the device detection result according to the temperature rise identification result and the discharge identification result in step 104, the following manner may be implemented:
First, it is judged whether the discharge abnormality detection result of each image area in the discharge recognition result indicates that a discharge abnormality exists in that area. If so, the temperature-rise abnormality detection result corresponding to that image area in the temperature rise recognition result is raised from its current temperature-rise abnormality level to a higher level, yielding the equipment detection result.
Taking temperature-rise abnormality levels of general, severe, and critical as an example: if the discharge abnormality detection result for the image area of an equipment component such as a switch indicates a discharge abnormality, the temperature-rise abnormality detection result is raised from general to severe; similarly, if the result for the image area of a component such as a joint indicates a discharge abnormality, the detection result is raised from severe to critical.
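The escalation rule can be sketched as a one-step promotion; the three level names here are one rendering of the three grades mentioned in the text, and the rule itself is a sketch:

```python
# Ordered temperature-rise abnormality levels, lowest to highest.
LEVELS = ["general", "severe", "critical"]

def fuse_results(temp_level, discharge_abnormal):
    """Raise the temperature-rise abnormality level by one step when the
    same image region also shows a discharge abnormality."""
    i = LEVELS.index(temp_level)
    if discharge_abnormal and i < len(LEVELS) - 1:
        i += 1
    return LEVELS[i]
```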
In one implementation, after step 104, the method may further output the equipment detection result, for example to operation and maintenance personnel via a mobile phone, a projection screen, or a display screen. The personnel can then consult the result to maintain or repair the power equipment in time, improving its safety.
Referring to fig. 5, a schematic structural diagram of a multispectral-image-based equipment detection apparatus according to a second embodiment of the present disclosure is provided. The apparatus may be configured in an electronic device capable of image acquisition and image processing, such as a computer or server connected to a multispectral camera. The technical scheme in this embodiment is mainly used to improve the accuracy of detecting whether the device components of power equipment are faulty.
Specifically, the apparatus in this embodiment may include the following units:
an image obtaining unit 501, configured to obtain multiple image sets, where each image set corresponds to different acquisition parameters, where the acquisition parameters at least include any one or more of an acquisition scale, an acquisition angle, and an acquisition time, each image set includes device images of multiple frames of power devices, the device images include an infrared image, a visible light image, and an ultraviolet image, and the device images in each image set correspond to the same acquisition parameter;
an infrared detection unit 502, configured to perform image recognition on at least the infrared image in each image set to obtain a temperature rise recognition result of the infrared image, where the temperature rise recognition result includes: a temperature-rise abnormality detection result for each image area contained in the infrared image;
an ultraviolet detection unit 503, configured to perform image recognition on at least the visible light image and the ultraviolet image in each image set to obtain a discharge recognition result of the ultraviolet image, where the discharge recognition result includes: a discharge abnormality detection result for each image area included in the ultraviolet image;
a result obtaining unit 504, configured to obtain a device detection result according to the temperature rise identification result and the discharge identification result, where the device detection result includes: and an abnormality detection result for each image area included in the infrared image or the ultraviolet image, the abnormality detection result representing whether an apparatus component corresponding to the image area in the electric power apparatus is abnormal or not.
According to the above scheme, the multispectral-image-based equipment detection apparatus collects multispectral images of the power equipment, such as infrared, ultraviolet, and visible light images, and performs image recognition on them to obtain abnormality detection results for both temperature rise and discharge; these results are then combined to produce the final abnormality detection result for each device component of the power equipment. By detecting each spectrum separately and processing the results comprehensively, this embodiment obtains a more accurate abnormality detection result, thereby improving detection accuracy.
In one implementation, the infrared detection unit 502 is specifically configured to: performing image segmentation on the infrared image in each image set to obtain a plurality of image areas, wherein each image area corresponds to one device component in the target device; obtaining a temperature matrix corresponding to each image area at least according to the image radiance of the infrared image; obtaining characteristic parameters of each image area according to the temperature matrix corresponding to each image area, wherein the characteristic parameters comprise characteristic parameters corresponding to current heating types and/or characteristic parameters corresponding to voltage heating types; and obtaining the abnormal temperature rise detection result of each image area at least according to the characteristic parameters and a preset parameter threshold value.
Optionally, the parameter threshold is determined at least according to an influence factor that the correlation with the characteristic parameter satisfies a preset influence condition; wherein, the influencing factors at least comprise any one or more of seasonal factors, time factors and voltage grade factors.
Optionally, the parameter threshold includes multiple thresholds corresponding to the temperature-rise abnormality levels, with a different threshold for each level; and the parameter threshold is obtained by processing multiple historical image samples corresponding to each temperature-rise abnormality level.
Further, before obtaining the characteristic parameter of each image region according to the temperature matrix corresponding to each image region, the infrared detection unit 502 is further configured to: acquiring initial infrared data corresponding to each image area in the infrared image, and correcting the image radiance; and adjusting the temperature matrix corresponding to each image area in the infrared image by using the corrected image radiance.
In one implementation, the ultraviolet detection unit 503 is specifically configured to: performing image recognition on the visible light image in each image set to identify a plurality of image areas included in the visible light image and a corresponding one of the device components in the power device in each image area; obtaining a plurality of image areas included in the ultraviolet image in each image set and a device component in the power device corresponding to each image area according to a plurality of image areas included in the visible light image and a device component of a target device corresponding to each image area; detecting each image area in the ultraviolet image by using a detection model based on a convolutional neural network to obtain a discharge abnormity detection result of each image area in the ultraviolet image; the detection model is obtained by training a training sample set with a discharge label, the training sample set at least comprises a plurality of frames of training images, the training images are images of ultraviolet spectrums, and the discharge label represents whether discharge exists in equipment components in the training images.
In one implementation, when performing image recognition on the visible light image, the ultraviolet detection unit 503 is specifically configured to: detect the visible light image using the aforementioned YOLO object recognition and positioning algorithm to obtain the multiple device components of the power equipment contained in the visible light image, where the algorithm's prediction vector includes at least a tilt-angle component; segment the visible light image according to those components to obtain the image area where each component is located; and identify the components in each segmented area to obtain the component identifier of each device component.
In one implementation, the ultraviolet detection unit 503 is further configured to: compressing the detection model based on the convolutional neural network; wherein the compression process comprises: any one or more of pruning, quantization, weight sharing, and tensor decomposition.
In one implementation, the result obtaining unit 504 is specifically configured to: judge whether the discharge abnormality detection result of each image area in the discharge recognition result indicates that a discharge abnormality exists in that area; and if so, raise the temperature-rise abnormality detection result corresponding to that image area in the temperature rise recognition result from its current temperature-rise abnormality level to a higher level, so as to obtain the equipment detection result.
It should be noted that, for the specific implementation of each unit in the present embodiment, reference may be made to the corresponding content in the foregoing, and details are not described here.
Taking abnormality detection of power equipment in a power grid as an example, the system implemented by the present application is illustrated in detail with the technical architecture in fig. 6:
first, as shown in fig. 6, the present inventors have studied the four problems in fig. 6 to obtain the technical solution of the present application, specifically as follows:
(1) Research on multi-sensor fusion intelligent imaging technology: multispectral images are collected by ultraviolet, infrared, and visible light sensors; a three-light fused image is obtained through research on multi-sensor image fusion processing hardware; and the fused video images are processed through pixel-by-pixel scene registration and time-domain frame alignment;
(2) Research on deep-learning-based type and identity recognition of power equipment in visible light images, implemented through YOLO-based component type recognition, an equipment component segmentation algorithm, registration between two-dimensional images and three-dimensional models, and related algorithms;
(3) Research on a lightweight ultraviolet-infrared-visible-light defect rapid-diagnosis network based on deep learning and suitable for an edge computing platform, implemented through ultraviolet characteristic parameter extraction and intelligent diagnosis based on a deep convolutional neural network, research on infrared video image analysis, and research on intelligent equipment fault diagnosis with multi-angle, multi-time, multi-scale infrared images;
(4) Research on hardware compression and pruning optimization of neural network training models, and on an edge device prototype based on a deep-neural-network hardware acceleration module, realizing fully automated generation of diagnosis reports. Neural network model compression is achieved through pruning, quantization, weight sharing, tensor decomposition, and similar means, and the model is ported to the corresponding computer or server through hardware platform selection, embedded development environment deployment, and application porting.
1. Multi-sensor fusion intelligent imaging technology
1.1, multi-sensor fusion imaging principle and theoretical model
In terms of hardware, multi-sensor fusion intelligent imaging is mainly divided into multi-sensor fusion acquisition hardware and multi-sensor image fusion processing hardware.
(1) Multi-sensor technology hardware
The multi-sensor device (i.e. the multispectral image capturing device in the foregoing) integrates three detection cameras, namely visible light, infrared and ultraviolet, into one integrated platform, which is mainly divided into an optical design and a hardware part.
Optically, the design combines a shared ultraviolet/visible optical path with a parallel thermal infrared path. The ultraviolet and visible channels thus output two video signals sharing one optical axis, while the thermal infrared channel outputs a third video signal with a certain parallax.
In the ultraviolet imaging module, the solar-blind ultraviolet detector can be a fully solar-blind ultraviolet imaging module with a video resolution of 720 × 576 and a frame rate of 25 fps. The field of view of the ultraviolet lens reaches 12.5° × 10°, and performance and sensitivity are improved through the combined design of the ultraviolet detection and beam-splitter sub-modules. The ultraviolet module as a whole consists of a visible light camera, an ultraviolet imaging module (including a reflection component, an ultraviolet lens, a filter set, and the fully solar-blind imaging module), a motor drive system, an electric control module, a network module, and so on. The motor drives the CCD back and forth through a lead-screw system to focus the system. Visible light from the environment and ultraviolet light emitted at a power-inspection fault location enter the ultraviolet module through the lens simultaneously, and an internal beam-splitting system separates the two. The visible light camera collects the visible signal and the ultraviolet imaging module collects the ultraviolet signal, both transmitting to the processing module, while an operator can complete ultraviolet focusing by sending an instruction.
The main function of the infrared optical system is to image the thermal radiation of the target within a specific spectral range onto an infrared focal plane array. The infrared detector on the array responds to the incident radiation and passes its response to the signal processing system; after signal processing and image coding (or video synthesis), the thermal image of the target is obtained. The design of the infrared optical system must satisfy requirements including spectral response, pixel selection, spatial resolution, temperature resolution, and frame rate. A FLIR Tau 2 640 can be used as the infrared imaging module (specific model 46640025H-SPNLP), with a thermal image resolution of 640 × 480, a temperature measurement accuracy of ±2 °C, a video resolution of 640 × 512, and PAL video output.
Inside the high-definition visible camera, a CMOS sensor captures the video signal, which is preprocessed (filtering, noise reduction, and so on) to form a stable video stream and compressed into standard H.264 as required. The software design provides a network video service interface supporting the RTSP protocol, so that the processing platform can request and process the video stream.
The optical design of the multi-sensor fusion acquisition device can be seen in figs. 7 and 8. After light enters the module, the visible portion is reflected by the beam splitter into the visible light camera beside it, while the ultraviolet band passes through the beam splitter into the ultraviolet channel behind it, separating the visible and ultraviolet bands. The infrared module is mounted side by side with the visible light camera, and the infrared and visible images are matched and superimposed by feature values during image fusion.
After the three video channels are output, the videos are encoded before output to facilitate transmission and increase interference immunity. The video service module performs professional processing on the images, compresses them into a standard format, provides the video service function, and exposes an externally accessible interface; the power supply module performs secondary voltage stabilization on the external input power supply to form a more stable input source and controls power distribution.
(2) Multi-sensing-map fusion processing hardware
The multi-sensing map fusion hardware processing platform provides multi-channel video access, processing, compression, and display. The hardware can consist of one X86 processing board and one power supply control module. The X86 processing board processes and analyzes the ultraviolet, infrared, and high-definition visible video data acquired by the equipment; the power supply module supplies power to the processing platform and the multi-sensor fusion acquisition equipment. Because the power supply control module is battery-powered, a wide-range voltage stabilization design is required for the input supply. The module must support multi-channel power supply and also provide multi-channel external power output.
1.2, pixel-by-pixel scene registration and time domain frame alignment technology of multi-sensor fusion video image
The multi-sensor fusion optical design scheme determines the algorithm design of multi-sensor fusion. According to this optical design, when visible and ultraviolet light are fused, the common light path structure allows superposition matching, with precision correction through fine adjustment of the transformation matrix. When infrared and visible light are fused, a viewing-angle difference exists, so matching and fusion must be performed through image feature recognition.
Referring to the design of the multi-sensor fusion light path, the resolutions and the fields of view of ultraviolet, infrared and visible videos are different, and the infrared channel and the ultraviolet/visible channel do not share the same optical axis, so a video fusion algorithm needs to be written aiming at the above situations.
The design idea of the multi-sensor fusion algorithm is refined and mainly comprises the following steps:
a) geometric correction and preprocessing of the fused image: transform the input multi-sensor images according to the optical design and viewing angle, including cropping, zooming, interpolation, and the like;
b) image feature point extraction: extract feature points of the visible and infrared images with the SIFT operator;
c) feature matching: match the feature points with the BBF (best-bin-first) algorithm;
d) match extraction and image superposition: extract the final affine transformation matrix via a registration threshold, then superimpose the images.
In image fusion processing, the image data acquired at each time point of each channel is first matched, finding the data acquired at the same moment in every channel (preventing offsets after superposition). The images are then cropped according to the differing viewing angles of each image pair, cutting out appropriately sized images in preparation for superposition. Next, position matching is performed: the corresponding overlap area of each channel is selected, along with the corresponding portion of video data. Finally, superposition is performed using a weighted-average fusion algorithm, and the processed composite is pushed out for storage. The display transmission thread mainly displays, compresses, and transmits the successfully fused image.
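The weighted-average superposition step can be illustrated with a minimal numpy sketch, assuming the frames are already registered and cropped to the same size (weights and function names are illustrative):

```python
import numpy as np

def weighted_average_fusion(frames, weights):
    """Fuse aligned, same-size image frames by a per-frame weighted average.

    frames:  list of (H, W) or (H, W, C) uint8 arrays (already registered/cropped)
    weights: per-frame weights; normalized so the output stays in range
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                               # normalize to sum to 1
    stack = np.stack([f.astype(float) for f in frames], axis=0)
    fused = np.tensordot(w, stack, axes=1)        # sum_k w_k * frame_k
    return np.clip(fused, 0, 255).astype(np.uint8)
```

Unequal weights let one channel (e.g. the visible-light background) dominate while the ultraviolet or infrared overlay stays visible.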
Because the thermal infrared and ultraviolet spectra differ greatly in appearance, a fusion method combining the pixel layer and the feature layer is built around the thermal infrared and ultraviolet characterization modes of defects, effectively fusing the parameters and images of the thermal infrared and ultraviolet characterizations onto a visible-light background, with the visible-light image serving as the reference for the environment and the electrical equipment. As shown in fig. 9, the thermal infrared image is preprocessed and the defect heating area is extracted; the solar-blind ultraviolet image is preprocessed and the corona defect point position and corona discharge form area are extracted; meanwhile the visible-light image is preprocessed, and the defect heating area, corona defect point position, and corona discharge form area are combined with it to obtain an image fusion with the visible-light image as background.
Based on the above algorithm flow, the basic framework design of software operation is as shown in fig. 10:
after startup, the software first performs system initialization to establish a stable running environment. It then acquires and caches the multi-channel video streams collected by the multi-channel detection equipment over the network. Next, the acquired images are preprocessed, for example by filtering, denoising, and erosion and dilation operations to mask unwanted or weak interference signals, while opening and closing operations sharpen the desired feature signals. Then, each image frame is matched across channels, so that frames acquired at the same time point are identified and marked. Finally, image fusion is performed: image pairs are cropped according to the channels' differing viewing angles, position matching is carried out, and superposition is performed with a basic superposition algorithm.
From the perspective of the business process, the software opens four thread groups, for original video acquisition, video preprocessing, image fusion processing, and display transmission, respectively. Fig. 11 is the software operation framework diagram. The original video acquisition thread group monitors the transmission of each channel's video data, identifies the data once acquired, and caches it by video type (constructing a video data queue per channel), for example storing video frames for channel A, storing video frames for channel B, and so on, to implement multi-channel video data caching.
The video preprocessing thread continuously polls the video acquisition threads' storage queues to determine whether unprocessed video data is stored. After finding such video, it extracts each channel's video data and first applies filtering and noise reduction: a mean filtering algorithm reduces random noise in the image. Then, to highlight part of the detected features, erosion and dilation operations are performed on the ultraviolet image, removing feature noise so that the extracted feature edges are smooth and the image skeleton continuous; threshold detection is applied at the same time, and data that does not meet the threshold is discarded. After processing, each channel's images are merged into a processed-video queue, cached by channel, awaiting fusion.
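The preprocessing chain described for this thread (mean filtering, thresholding, then erosion followed by dilation, i.e. a morphological opening) might look roughly like the pure-numpy sketch below; the 3×3 kernel and the threshold value are illustrative assumptions, not parameters from the patent:

```python
import numpy as np

def mean_filter3(img):
    """3x3 mean filter (border replicated) to suppress random noise."""
    p = np.pad(img.astype(float), 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def erode3(mask):
    """3x3 binary erosion: a pixel survives only if its whole neighborhood is set."""
    p = np.pad(mask, 1, mode='constant', constant_values=False)
    h, w = mask.shape
    windows = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.logical_and.reduce(windows)

def dilate3(mask):
    """3x3 binary dilation: a pixel is set if any neighbor is set."""
    p = np.pad(mask, 1, mode='constant', constant_values=False)
    h, w = mask.shape
    windows = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.logical_or.reduce(windows)

def preprocess_uv_frame(img, threshold=128):
    """Mean-filter, threshold, then open (erode + dilate) a UV frame."""
    smooth = mean_filter3(img)
    mask = smooth >= threshold
    return dilate3(erode3(mask))    # morphological opening removes spot noise
```

An isolated bright pixel is averaged down by the mean filter and any remnant is removed by the opening, while a solid discharge region survives with smooth edges.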
After extracting video data of each channel, the image fusion thread group caches the fusion video through frame matching, clipping, position matching, overlapping and fusing.
And the display transmission thread displays the fused video through a compressed display algorithm.
2. Visible light image power equipment type and identity recognition technology based on deep learning
2.1 YOLO-based Power Equipment component type identification
For the target detection task, power equipment in visible-light images mainly exhibits the following three characteristics:
Obliquity: the visible-light images actually processed are captured by electric power workers or inspection robots; because of the shooting angle, most equipment in the image is tilted;
Extreme aspect ratio: most porcelain-bushing-type devices, such as lightning arresters, are long but thin; an overly large aspect ratio is detrimental to device detection.
High structuring: power equipment is a highly structured object; one device can be subdivided into different components, each of which can be an independent detection target.
In view of the above features, the present application provides a regression-model-based bounding-box detection model for power equipment and equipment components. The tilt angles of components are predicted directly by regression; the shape priors of power equipment components in the training set improve the model's prediction for components with extreme aspect ratios; and the positional relations among components are fully exploited to improve prediction accuracy.
YOLO (You Only Look Once) is a mainstream single-stage, regression-based target detection model. Its feature extraction network is a modified version of the GoogleNet described in the previous section; after extracting features of the visible-light image, it predicts target locations by regression.
In view of the above characteristics of power equipment, directly adopting YOLO as the component bounding-box detection model of the present application is obviously impractical. For this purpose, the YOLO base model may be modified as follows:
First, network model design: oblique-box prediction is realized by adding a tilt angle to the rectangular-box prediction vector. To help the network predict the tilt angle better, the tilt angles of equipment in the experimental training set can be tallied to obtain θmin and θmax; limiting the network's predicted tilt angle between these two bounds avoids errors caused by an overly wide prediction range.
Second, the model employs larger convolution kernels. Because visible-light and thermographic images contain many long bushing-like devices, the receptive field of a conventional 3×3 convolution kernel is not large enough. Therefore, the kernel size in the deep convolutional layers can be increased from 3×3 to 5×5, ensuring that the receptive field of each grid's prediction unit covers a sufficiently large area of the original image.
Finally, the model adds a regularization operation before multi-scale feature fusion. Multi-scale feature fusion is a widely applied deep-learning technique that fuses low-level, high-resolution feature maps with high-level, low-resolution feature maps carrying richer semantic information. To fuse feature maps of different scales output by different convolutional layers, the high-resolution low-level feature map must be dimensionally reconstructed so that its resolution matches that of the upper-level feature map. In addition, the matching of activation magnitudes between different feature maps must be considered during fusion. Therefore, in this model the feature maps to be matched are L2-normalized, the normalized maps are concatenated, and dimensionality is reduced through a single 1×1 convolutional layer.
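The normalize-concatenate-reduce fusion step can be sketched as follows. Since a 1×1 convolution is just a per-pixel linear map over channels, it is shown here as a matrix multiply; the weight matrix stands in for learned parameters and is not from the patent:

```python
import numpy as np

def l2_normalize(feat, eps=1e-12):
    """L2-normalize a (C, H, W) feature map across channels at every position."""
    norm = np.sqrt((feat ** 2).sum(axis=0, keepdims=True)) + eps
    return feat / norm

def fuse_feature_maps(feat_a, feat_b, w):
    """Concatenate two L2-normalized maps, then apply a 1x1 convolution.

    feat_a: (Ca, H, W); feat_b: (Cb, H, W); w: (Cout, Ca+Cb) 1x1 kernel
    (a hypothetical weight matrix standing in for trained parameters).
    """
    cat = np.concatenate([l2_normalize(feat_a), l2_normalize(feat_b)], axis=0)
    c, h, wd = cat.shape
    # A 1x1 convolution is a per-pixel linear map over the channel axis.
    return (w @ cat.reshape(c, h * wd)).reshape(w.shape[0], h, wd)
```

After normalization, every spatial position has unit channel norm in each branch, so the activation magnitudes of the two maps are comparable before concatenation.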
The schematic of the power equipment component oblique-box detection adopted in this application is shown in fig. 12. For an input test image, features are first extracted by the improved neural network, e.g. through the convolutions conv1-5. The final detection prediction layer outputs two kinds of predictions: one is the oblique rectangular box together with the confidence that the box contains equipment, i.e. oblique-box prediction; the other is the probability distribution over the classes to which the prediction box belongs, i.e. equipment type prediction. In fig. 12, only the predicted classes of prediction boxes whose confidence exceeds a certain threshold are shown. Finally, the two prediction results are combined, and highly overlapping prediction boxes are removed by non-maximum suppression to obtain the final prediction result.
Design of the objective function: to train the detection model for practical application, a multitask objective function is defined. For the classification task, the model predicts, for each oblique rectangular box, a discrete conditional probability distribution p = (p_0, ..., p_{K−1}) over the K categories, together with a confidence c that a device component exists in the rectangular box. The values of p and c are computed through a softmax function.
For the localization task, the model predicts a quintuple t = (t_x, t_y, t_θ, t_w, t_h) describing an oblique rectangular box. The model adopts the same coordinate parameterization as YOLO; in addition, it directly predicts the tilt angle of the rectangular box, kept within the statistics-derived bounds:

b_θ = θmin + σ(t_θ)·(θmax − θmin)
During training, each predicted rectangular box is assigned a true probability distribution p̂ = (p̂_0, ..., p̂_{K−1}), a true confidence value ĉ, and true regression targets t̂ = (t̂_x, t̂_y, t̂_θ, t̂_w, t̂_h).
During training, the model selects the prediction box with the largest IoU against the ground-truth box as the positive sample; the remaining prediction boxes serve as negative samples.
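The positive/negative sample assignment can be illustrated with axis-aligned boxes; the model actually matches oblique boxes, so the axis-aligned IoU below is a simplified stand-in, not the patent's exact computation:

```python
import numpy as np

def iou_xywh(box_a, box_b):
    """IoU of two axis-aligned boxes given as (cx, cy, w, h)."""
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def assign_positive(pred_boxes, gt_box):
    """Index of the prediction with the largest IoU against the ground truth;
    that prediction becomes the positive sample, the rest are negatives."""
    ious = [iou_xywh(p, gt_box) for p in pred_boxes]
    return int(np.argmax(ious)), ious
```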
The objective function of the overall multitask learning is shown in equation (5):

L = L_cls + L_loc + L_angle    (5)

where L_cls and L_loc denote the classification cost function and the localization cost function, respectively, and L_angle denotes the model's auxiliary component tilt-angle consistency cost function.
The cost function of the classification task can be written in detail as equation (6), a cross-entropy between the predicted and true distributions:

L_cls = −Σ_{k=0}^{K−1} p̂_k · log p_k    (6)
The anchor-box mechanism is introduced in the actual localization process. Suppose (x_a, y_a) denotes the upper-left corner coordinates of the grid cell containing the prediction box, (w_a, h_a) the width and height of the anchor box, σ the logistic function, and (b_x, b_y, b_w, b_h) the actually predicted bounding-box parameters. The actual oblique-box parameters are obtained from the network's direct predictions as in equation (7):

b_x = σ(t_x) + x_a
b_y = σ(t_y) + y_a
b_w = w_a · e^(t_w)
b_h = h_a · e^(t_h)    (7)
By providing prior information on the length and width of the prediction box, and constraining the box center to fall within the grid cell responsible for its prediction, the network learns these predictions more easily.
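The decoding in equation (7) can be sketched directly. The angle mapping used below, bounding b_θ between θmin and θmax through the logistic function, is one plausible reading of the patent's angle constraint rather than a formula it states explicitly:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def decode_box(t, anchor, grid_xy, theta_min=-0.5, theta_max=0.5):
    """Decode raw network outputs t = (tx, ty, ttheta, tw, th) into an oblique box.

    grid_xy = (xa, ya): top-left corner of the responsible grid cell
    anchor  = (wa, ha): anchor width/height (shape priors from the training set)
    theta_min/theta_max: statistics-derived angle bounds (values illustrative)
    """
    tx, ty, ttheta, tw, th = t
    xa, ya = grid_xy
    wa, ha = anchor
    bx = sigmoid(tx) + xa              # center offset stays inside the cell
    by = sigmoid(ty) + ya
    bw = wa * math.exp(tw)             # log-space scaling of the anchor prior
    bh = ha * math.exp(th)
    btheta = theta_min + sigmoid(ttheta) * (theta_max - theta_min)
    return bx, by, bw, bh, btheta
```

At zero raw outputs the box sits at the cell center with exactly the anchor's size and the midpoint angle, which is the intended "prior" behavior.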
2.2 Equipment parts segmentation Algorithm
A Deeplab-based visible-light-image power equipment region segmentation algorithm can be adopted. First, VGG-based image feature extraction is used. In its use and sizing of convolution kernels, the network design concept of VGG runs contrary to the design principle of LeNet, whose basic idea is that large convolution kernels can capture similar features in images (weight sharing). AlexNet likewise uses 9×9 and 11×11 convolution kernels in shallow layers, and avoids 1×1 kernels there as much as possible. The success of VGG models on classification datasets shows that stacking multiple 3×3 convolution kernels can mimic the local perception of a larger convolution kernel. This idea of replacing one large kernel with several small kernels in series was later adopted by GoogleNet, ResNet, and others.
A schematic diagram of Deeplab device region segmentation is shown in fig. 13. The device region segmentation model mainly comprises a Convs module, an ASPP (Atrous Spatial Pyramid Pooling) module, and a fusion and upsampling module. An image is input to the Convs module to extract a preliminary feature map, then passed to the ASPP module to further extract a low-resolution multi-scale feature map; finally the multi-scale feature map is upsampled by the upsampling module to obtain a semantic segmentation result map of the same size as the original input image, which the fully connected conditional random field module refines into the final prediction map.
The ASPP (atrous spatial pyramid pooling) module mainly comprises four parallel submodules, which separately process the extracted feature map; their outputs are superimposed element-wise to give the module's multi-scale feature map. In the ASPP module, each submodule consists of the same three layers: an FC6 layer formed by sequentially connecting a convolutional layer with N1 output neurons, a ReLU nonlinear activation layer, and a Dropout layer; an FC7 layer formed by sequentially connecting a fully connected layer with N2 output neurons, a ReLU layer, and a Dropout layer; and an FC8 layer formed by a fully connected layer with C2 output neurons. In the convolutional processing of the four submodules' FC6 layers, different atrous convolution sampling rates are used.
The fusion and upsampling module first fuses the score maps of the different branches output by the ASPP module through pointwise addition, yielding a score map at lower resolution than the original input (1/8 of the original size); an upsampling layer then upsamples the fused score map 8× by bilinear interpolation, giving a semantic segmentation result map at the same resolution as the original image input to the deep convolutional network.
The main function of the full-connection Conditional Random Field (CRF) module is to post-process and optimize semantic segmentation results obtained by the previous network, and the specific calculation process is as follows:
for a fully connected conditional random field, the potential energy function E(x) is shown in equation (8):

E(x) = Σ_i ψ_u(x_i) + Σ_{i<j} ψ_p(x_i, x_j)    (8)
In the above formula, i indexes each pixel in the image and j the other pixels; x_i and x_j denote the values of pixels i and j on the semantic segmentation result map; ψ_u(x_i) is the unary potential of each pixel, computed as in equation (9):

ψ_u(x_i) = −log P(x_i)    (9)

where P(x_i) is the probability obtained by normalizing the value of pixel i on the semantic segmentation result map;
ψ_p(x_i, x_j) is the binary potential, computed as in equation (10):

ψ_p(x_i, x_j) = μ(x_i, x_j) · [ w_1 · exp(−|p_i − p_j|²/(2σ_α²) − |I_i − I_j|²/(2σ_β²)) + w_2 · exp(−|p_i − p_j|²/(2σ_γ²)) ]    (10)

In the above formula, μ(x_i, x_j) is an indicator function that is nonzero only when x_i ≠ x_j; I_i and I_j are the pixel values at points i and j, and p_i and p_j their positions in the image. The first exponential is a bilateral Gaussian kernel and the second is the kernel of the second term; σ_α² and σ_β² are, respectively, the variances of the position-related kernel and the pixel-value-related kernel, σ_γ² is the variance of the second term's kernel, and w_1 and w_2 are the fusion weights.
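The binary potential of equation (10) can be evaluated pointwise; the following numpy sketch uses illustrative kernel variances and weights, not values from the patent:

```python
import numpy as np

def pairwise_potential(xi, xj, pi, pj, Ii, Ij, w1=1.0, w2=1.0,
                       sa2=4.0, sb2=100.0, sg2=4.0):
    """Binary potential of eq. (10): label-compatibility test mu(xi, xj)
    times two Gaussian kernels (bilateral appearance + spatial smoothness).

    xi, xj: labels at pixels i, j;  pi, pj: positions;  Ii, Ij: pixel values.
    Variances sa2 (position), sb2 (value), sg2 (second term) are assumptions.
    """
    if xi == xj:                       # mu(xi, xj) = [xi != xj]
        return 0.0
    dp2 = float(np.sum((np.asarray(pi) - np.asarray(pj)) ** 2))
    dI2 = float(np.sum((np.asarray(Ii) - np.asarray(Ij)) ** 2))
    bilateral = np.exp(-dp2 / (2 * sa2) - dI2 / (2 * sb2))
    spatial = np.exp(-dp2 / (2 * sg2))
    return w1 * bilateral + w2 * spatial
```

Identically labeled pixels contribute nothing, while nearby, similar-looking pixels with different labels are penalized most, which is what drives the CRF to smooth segmentation boundaries along image edges.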
2.3 device identification based on registration of two-dimensional images and three-dimensional images
An image registration method based on feature extraction and geometric transformation can be adopted.
(1) Registration method based on feature extraction
Points, lines, regions, and templates may be used as image features. The coordinates of matched point features can be used to derive the parameters of the registration transfer function. Corner extraction algorithms include the Harris and Beaudet operators; edge-point extraction algorithms include the Sobel, Laplacian, Canny, and LOG operators.
Spatial coordinate transformation parameters between points in the two images are established using the correspondence of feature points in the image pixel coordinate system. Geometric transformations divide into linear and nonlinear transformations. Methods for solving nonlinear transformations include thin-plate splines, multiquadrics, weighted mean, piecewise linear, and weighted linear approaches. The similarity transform and the projective transform are frequently used linear methods. The similarity transformation matrix can register aerial and satellite images, because those scenes are relatively flat and planar. Image acquisition is a projection process: if the lens and sensor have no nonlinear distortion, two images of a nearly flat scene can be described by a projective transformation, and when the scene is very distant, the projective transformation matrix can be approximated by an affine matrix. An affine transformation has six parameters, which can be obtained from the coordinates of three non-collinear corresponding points. The affine transformation is a weakened projective transformation often used for aerial or satellite images. If a straight line in the reference image maps to a curve in the sensed image, registration must be performed by a nonlinear method; thin-plate splines and curvilinear splines are very widely used in nonlinear geometric registration. If the control points of one image can be triangulated and the corresponding points in both images are known, the correspondence of triangular regions between the two images can be determined. If a linear transformation function maps a region of the sensed image to a region of the reference image, the transformation is called piecewise linear.
Piecewise linearity produces very good results when the regions are small or the local geometric differences between the images are small, but inaccurate registration when local distortions are large or the gradient at region boundaries is large. Triangulation is applied in piecewise-linear registration, and the choice of triangles influences the result; as a rule, elongated triangles with sharp angles should be avoided.
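Per-triangle piecewise-linear registration rests on solving the six affine parameters exactly from three non-collinear point pairs, which can be sketched as follows (function names are illustrative):

```python
import numpy as np

def affine_from_triangle(src_tri, dst_tri):
    """Solve the six affine parameters exactly from three non-collinear
    corresponding points, as used per triangle in piecewise-linear registration."""
    src = np.asarray(src_tri, dtype=float)       # (3, 2)
    dst = np.asarray(dst_tri, dtype=float)       # (3, 2)
    A = np.hstack([src, np.ones((3, 1))])        # (3, 3); singular iff collinear
    M = np.linalg.solve(A, dst)                  # (3, 2): x'- and y'-maps as columns
    return M.T                                   # 2x3 affine matrix

def warp_points(M, pts):
    """Map (N, 2) points through the 2x3 affine matrix M."""
    pts = np.asarray(pts, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]
```

Because the 3×3 system becomes singular for collinear points, the triangle-quality criterion above (avoid elongated, sharp-angled triangles) is also a numerical-conditioning criterion.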
(2) Registration technology based on solid geometric transformation
The correspondence of image pixel coordinates between the two images is found through the spatial geometric coordinate transformation. When the camera's internal and external parameters and the world coordinates of a spatial point are known, the coordinates of the point's image on the RGB sensor can be obtained; the transformation matrix is M1. The inverse of this transformation does not exist, i.e. the world coordinates of the corresponding spatial point cannot be derived from the image pixel coordinates alone, because every point on the OP ray maps to the same point on the sensor. For depth camera images, however, when the distance along OP is known, the world coordinates of the corresponding spatial point can be calculated, and from them the point's projection on the RGB sensor and its pixel coordinates in the formed image. This completes registration between the RGB two-dimensional image and the depth three-dimensional image.
The process of the stereo registration technology of the RGB two-dimensional image and the depth three-dimensional image which can be adopted comprises the following specific steps:
(1) Through calibration, the internal parameters of the two cameras, the external parameters R1 and T1 of the depth camera, and the external parameters R2 and T2 of the RGB two-dimensional camera are acquired.
(2) The internal and external parameters obtained by calibration are optimized.
(3) Using the depth image and inverse projection transformation, the coordinates PTOF (x1, y1, z1) of the spatial point P corresponding to each depth-image pixel are obtained in the depth camera coordinate system.
(4) The coordinates PRGB (x2, y2, z2) of the spatial point P in the RGB camera frame are acquired.
(5) The image pixel coordinates of the corresponding point of P on the RGB image are calculated according to the perspective model.
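Steps (3)-(5) can be sketched with a pinhole camera model; the intrinsic matrices and extrinsics below are illustrative placeholders, not calibration results from the application:

```python
import numpy as np

def backproject(u, v, depth, K):
    """Step (3): inverse projection of a depth pixel (u, v, depth) into the
    depth camera's coordinate frame, using intrinsic matrix K."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def depth_pixel_to_rgb_pixel(u, v, depth, K_depth, K_rgb, R, T):
    """Steps (3)-(5): depth pixel -> 3D point -> RGB camera frame -> RGB pixel.

    R, T map depth-camera coordinates into RGB-camera coordinates; all
    matrices here are illustrative, not the patent's calibration values.
    """
    P_tof = backproject(u, v, depth, K_depth)    # point in depth-camera frame
    P_rgb = R @ P_tof + T                        # step (4): change of frame
    uvw = K_rgb @ P_rgb                          # step (5): perspective projection
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

With known depth the back-projection is well defined, which is exactly why the inverse mapping exists for the depth camera but not for the RGB camera alone.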
The method has the advantages that: the method is not limited by the gray scale contrast between the target and the background and the image resolution; has real-time performance and meets the requirement of machine vision. A registration diagram is shown in fig. 14.
3. Equipment defect diagnosis algorithm based on deep learning ultraviolet-infrared image
3.1, deep learning-based feature extraction and fault diagnosis of electric power equipment discharge ultraviolet image
Each frame is extracted from the ultraviolet video; after binarization and mathematical-morphology opening and closing operations, most of the noise in the image is filtered out; a small-region-area elimination algorithm then removes noise regions that are difficult to filter morphologically. Processing the image yields the relevant image quantization parameters; see, e.g., the original ultraviolet image, the image-processed ultraviolet image, and the contour-tracked image shown in figs. 15, 16 and 17.
The application provides an ultraviolet light image intelligent feature extraction method and a defect assessment method by utilizing a deep convolutional neural network, a multi-level composite diagnosis method model can be adopted, and the architecture block diagram of the method is shown in fig. 18.
In the ultraviolet image comprehensive evaluation method of fig. 18, the defect area is located on the visible-light image acquired by the visible-light sensor via the YOLO algorithm, and a preliminary defect warning is raised on the ultraviolet image acquired by the ultraviolet camera via a typical CNN (Convolutional Neural Network); this step only determines whether abnormal discharge behavior exists in the acquired image. The ultraviolet image features extracted by the CNN are then combined with quantized electrical signal parameters and the like to form a multi-parameter classification model for deep-neural-network-based classification and evaluation of the defect degree.
The ultraviolet image discharge evaluation algorithm based on the deep convolutional neural network mainly comprises three parts: preprocessing of the input ultraviolet image, model parameter updating during convolutional neural network training, and the detection output and judgment result of the convolutional neural network.
Mean removal: all grayscale image pixel values are averaged to obtain the sample-library mean, and the grayscale input image is differenced against this mean, i.e. the mean-removal calculation. Input image data of different dimensions are thereby normalized to the same interval, ensuring consistency of the input images.
Reshaping: because the input ultraviolet images vary in size while the convolutional neural network's input is set to a 227×227-pixel image, input images of different sizes are rescaled to 227×227 pixels by nearest-neighbor interpolation to meet the network's set input interface requirement.
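The two preprocessing steps, mean removal and nearest-neighbor rescaling to 227×227, can be sketched in numpy (function names are illustrative):

```python
import numpy as np

def nearest_resize(img, out_h=227, out_w=227):
    """Nearest-neighbor rescaling of an input image to the CNN's fixed interface."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h     # nearest source row per output row
    cols = np.arange(out_w) * w // out_w     # nearest source col per output col
    return img[rows][:, cols]

def remove_mean(img, dataset_mean):
    """Mean removal: subtract the sample-library average from the grayscale input."""
    return img.astype(float) - dataset_mean
```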
To further show how an input ultraviolet picture changes inside the convolutional neural network, three types of ultraviolet images (no-defect discharge, slight-defect corona discharge, and serious-defect spark discharge) are taken from the test library and forward-propagated through the trained network, as shown in fig. 19. Of the many layers of different operations in the forward pass, the five most representative convolutional layers and the two fully connected layers are selected. From fig. 19, the whole process by which the input insulator ultraviolet image moves from the visual to the abstract, up to the final classification result, can be understood intuitively and in detail.
a) In the first-layer convolution feature maps, after convolutions of the input image with different kernels, the obvious edge contours and features of the insulator sheds and the discharge can still be seen;
b) in the second-layer convolution feature maps, obtained by further multi-kernel convolution of the first layer, the insulator contour and the obvious discharge features are already hard to see;
c) in the third- and fourth-layer convolution feature maps, the original visual features of the insulator and the discharge can no longer be seen; the features of the discharge degree have been abstractly extracted, and the feature parameters represented by the maps are further compressed to 169 (13×13) feature values;
d) in the fifth-layer pooling feature maps, it can be observed intuitively that the no-discharge maps are mostly represented by two white points arranged vertically, the corona-discharge maps mostly by a single white point, and the spark-discharge maps mostly by two white points arranged horizontally; compared with the input image the feature maps have become very abstract, while the classification features have become more distinct;
e) the feature parameter values of all pixels represented in the fifth-layer feature map are connected end to end, in sequence, to the neurons of the sixth fully connected layer to form the sixth-layer fully connected feature values; finally, through a seventh-layer fully connected architecture similar to a BP (back-propagation) neural network, the eighth-layer output gives the classification evaluation result, e.g. parameter values for no discharge, slight-defect corona discharge, or more serious spark discharge.
In conclusion, the convolutional-neural-network classifier needs no hand-designed feature extractor; the features are learned autonomously by the model from a large number of training samples, removing the major limitation of traditional machine learning in processing natural data in its raw form. Through multiple layers of convolution and pooling, the network extracts abstract features of the visual image autonomously and offers comparatively strong generalization and robustness.
3.2 Infrared image fault diagnosis and analysis based on artificial intelligence technology
(1) Data parsing for infrared video images
In this application, the original images collected by infrared thermal imagers of different models are parsed in batches to read the image data from the file format and form key information such as the image temperature matrix. The parsing mainly extracts basic information useful for subsequent analysis, such as the imager model and lens parameters, the temperature matrix, the minimum and maximum image temperatures, the atmospheric temperature and the acquisition time. The temperature matrix is calculated from parameters obtained during parsing, such as the emissivity.
The emissivity differs between materials. Therefore, after the sub-device region is determined, the raw data matrices read in batches are adjusted using the corrected emissivity, improving the quality of the detected image and yielding a more accurate temperature analysis result.
In summary, the data-parsing algorithm for infrared images proceeds as shown in fig. 20:
call exiftool to obtain parameters such as the shooting time and the maximum and minimum temperatures, and store them in a TXT file;
parse the obtained parameters from the TXT file and call exiftool to obtain the intermediate parameters needed to calculate the temperature matrix, including the parameter RawThermalImageType;
after the intermediate parameters are parsed, examine the value of RawThermalImageType; if it is PNG, call exiftool to obtain the raw data, store it as raw.png, swap the high and low bytes, and substitute the raw data and the calculation parameters into the temperature-matrix formula to obtain the temperature matrix;
if the value is TIFF, call exiftool to obtain the raw data, store it as raw.tiff, and substitute the raw data and the calculation parameters into the temperature-matrix formula to calculate the temperature matrix.
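The final raw-to-temperature step can be sketched as follows. The conversion formula shown here is the one commonly used for FLIR-style radiometric data; the calibration constants R1, R2, B, O, F are hypothetical placeholder values (in practice they are read from the imager's metadata via exiftool), and byte-order handling of the PNG branch is omitted:

```python
import numpy as np

def raw_to_celsius(raw, R1, R2, B, O, F):
    """Convert a raw thermal sensor matrix to degrees Celsius using
    Planck-style calibration constants (placeholder values below; real
    ones come from the imager metadata)."""
    raw = np.asarray(raw, dtype=np.float64)
    t_kelvin = B / np.log(R1 / (R2 * (raw + O)) + F)
    return t_kelvin - 273.15

# Example with plausible (not real) calibration constants:
temps = raw_to_celsius([[13000, 14000]],
                       R1=21106.77, R2=0.012545258,
                       B=1501.0, O=-7340.0, F=1.0)
```

Higher raw counts map to higher temperatures, so the converted matrix preserves the ordering of the sensor readings.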
(2) Equipment fault intelligent diagnosis based on multi-angle multi-time scale infrared image
On the premise that the power equipment is accurately located in the infrared image, the aim is to extract the temperature distribution of equipment observed from multiple angles in the infrared image and, combining environmental factors such as the shooting time and season with standards such as the application rules for infrared diagnosis of live equipment (DL/T 664), to predict and assess the possible voltage-heating and current-heating fault types and fault grades of the equipment.
According to the power-equipment defect management regulations, overheating defects occurring during operation can be classified into the following three types by degree of overheating:
General defects: the equipment shows an overheating defect with a certain temperature difference and a certain temperature-field gradient, but no accident has been caused. Such defects should generally be put on record and their development observed; repairs are made at a power-outage opportunity, and test maintenance is scheduled in a planned way to eliminate them. For equipment with a small load rate, a small temperature rise but a large relative temperature difference, if the load can be changed or an opportunity arises, retesting may be carried out after the load current is increased to determine the nature of the defect; when the load cannot be changed, the defect may tentatively be treated as a general defect and monitoring strengthened.
Serious defects: the equipment shows an overheating defect of a heavier degree, with a larger temperature-field gradient and a higher temperature. Such defects should be dealt with as soon as possible. For current-heating equipment, necessary measures such as intensified monitoring should be taken, and the load current reduced if necessary; for voltage-heating equipment, monitoring should be intensified and other test means arranged, and once the defect type is determined, measures should be taken immediately to eliminate it.
Critical defects: the maximum temperature of the equipment exceeds the maximum allowed by the regulations. Such defects should be dealt with immediately. For current-heating equipment, the load current should be reduced or the defect eliminated at once; for voltage-heating equipment, when the defect is obvious, it should be eliminated or the equipment taken out of operation immediately, and if necessary other test means may be arranged to further determine the nature of the defect. Defects of voltage-heating equipment are generally rated serious or above.
For an input infrared image to be detected, the intelligent fault diagnosis of the power equipment is divided into three steps: temperature-matrix extraction, diagnostic-feature-parameter extraction and equipment fault diagnosis.
The first step is as follows: multi-angle power equipment fault diagnosis
Because of the shooting angle, power equipment in an infrared image often appears tilted. This brings great difficulty to conventional fault-diagnosis methods based on axis-aligned (upright) bounding-box detection. The tilted-box detection method for power-equipment components proposed here handles this situation well: tilted-box detection expresses the position of the equipment in the image more precisely and reduces background-noise interference inside the detection box, which facilitates subsequent fault diagnosis. Furthermore, the power-equipment region-segmentation method proposed in this application classifies every pixel, so whether the equipment in the image is tilted need not be considered, making multi-angle fault diagnosis possible.
The second step is that: fault diagnosis feature parameter extraction
Fault-diagnosis feature extraction derives the feature parameters used for subsequent diagnosis, on the basis of accurate positioning of the power-equipment components and analysis of the infrared-image temperature matrix. Since power-equipment faults divide into current-heating and voltage-heating types, the feature extraction in this application likewise divides into current-heating and voltage-heating feature parameters.
Extracting current-heating fault-diagnosis feature parameters: current-heating faults are usually caused by significant local heating at the joints of electrical equipment and metal parts, at the joints of line segments and equipment, or at the joints of switches and circuit breakers, owing to poor contact or line aging. The current-heating analysis module therefore mainly extracts, from the infrared thermogram, the temperature information of heating areas on the equipment and at its connections with the transmission line.
The overall flow of current-heating diagnostic-feature extraction is shown in fig. 21. To generate the pseudo-color image, the temperature matrix is first normalized into a gray-scale image and then mapped into a pseudo-color image using an iron-red colormap; this is in effect a look-up-table operation. The equipment-region segmentation uses the DeepLab region-segmentation model introduced above.
After the high-temperature area in the foreground mask is accurately positioned, the highest temperature and the average temperature in the high-temperature area can be used as characteristic parameters for diagnosing the current heating fault.
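A minimal sketch of this feature step, assuming the temperature matrix and the segmented foreground mask are already available; the quantile rule used to delimit the hot region is an illustrative assumption, not the application's exact criterion:

```python
import numpy as np

def current_heating_features(temp, mask, hot_quantile=0.95):
    """Return (max, mean) temperature of the hot region inside the
    segmented equipment mask."""
    region = temp[mask]                          # temperatures on the equipment
    thresh = np.quantile(region, hot_quantile)   # hot-spot cutoff (assumed rule)
    hot = region[region >= thresh]
    return float(hot.max()), float(hot.mean())

# Synthetic 8x8 thermogram: 30 C background, one 80 C hot joint in the mask.
temp = np.full((8, 8), 30.0)
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
temp[3, 3] = 80.0
t_max, t_mean = current_heating_features(temp, mask)
```

The two returned values are exactly the "highest temperature and average temperature of the high-temperature area" used as diagnostic parameters above.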
Extracting voltage-heating fault-diagnosis feature parameters: the voltage-heating fault mainly refers to faults caused by poor internal insulation of the equipment, or by abnormal voltage distribution and increased leakage current; its characteristic is that the heating is caused mainly by voltage and is unrelated to load current. Since common voltage heating mainly occurs on bushing-type components, the feature-extraction process for voltage-heating diagnosis is described below taking bushing-type components as the example.
Like current-heating analysis, voltage-heating fault analysis requires extracting high-temperature points and comparing them with reference temperature points. Given the precise positions of the power-equipment components in an infrared image, the temperature points for voltage-heating analysis are extracted as follows:
apply the component-merging algorithm to merge the component detections output by the detection model into equipment detections;
for each device, if an expander or arc-chamber cap is present, separate it from the porcelain bushing;
for a porcelain bushing or an arc-extinguishing chamber, extract the temperature curve along its main axis;
for single-phase equipment, extract the maximum and minimum temperature of each section;
for multi-phase equipment, extract for each section the point pair with the maximum three-phase temperature difference, thereby obtaining single-phase and multi-phase comparison temperature point pairs; the specific algorithm flow is shown in FIG. 22.
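The three-phase comparison in the last step can be sketched as follows; the per-section maxima and the hottest-versus-coolest pairing rule are illustrative assumptions:

```python
import numpy as np

def phase_diff_pairs(sections):
    """For each axial section, pick the pair of phases whose section
    temperatures differ the most. `sections` has shape
    (n_phases, n_sections) holding per-section max temperatures.
    Returns tuples (section, hot_phase, cool_phase, temp_difference)."""
    pairs = []
    for s in range(sections.shape[1]):
        col = sections[:, s]
        hi, lo = int(np.argmax(col)), int(np.argmin(col))
        pairs.append((s, hi, lo, float(col[hi] - col[lo])))
    return pairs

# Three phases (A, B, C), two sections along the bushing axis:
secs = np.array([[35.0, 40.0],
                 [36.0, 55.0],   # phase B runs hot in section 1
                 [34.0, 41.0]])
pairs = phase_diff_pairs(secs)
```

A large three-phase difference in any section (here 15 C in section 1) is the kind of comparison point pair fed to the subsequent diagnosis.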
The third step: power equipment fault diagnosis based on multi-angle and multi-time scale
The main purpose of multi-angle, multi-time-scale intelligent fault diagnosis on infrared images is to establish a differentiated threshold-analysis model for diagnostic feature parameters, through refined detection, intelligent extraction of diagnostic feature parameters, and analysis of rich historical data over multiple time scales.
When an infrared image is acquired for refined detection, the application can obtain the key attributes associated with it: the low-level information read from the thermal imager (emissivity, detection time, detection distance, instrument model, etc.); the detection season (spring, summer, autumn, winter) and time of day (day, night), derived from the detection time; and, by combining with the transmission-equipment condition-assessment big-data analysis system established by the Shandong company, the meteorological conditions at the detection time (temperature, humidity, wind speed, wind direction, etc.), the operating information at that time (load, load current, etc.) and the static ledger information of the detected equipment (voltage level, years in operation, etc.).
When infrared-image fault diagnosis is carried out, appropriate attributes are selected for classification, so that the differences between equipment types under different time scales are reflected to the greatest extent and the implicit internal associations of the diagnostic parameters are revealed, making the fault diagnosis more accurate and reliable.
Influence-factor analysis based on fuzzy C-means clustering: fuzzy C-means clustering is first used to cluster the numerical diagnostic feature parameters automatically; in a multidimensional space, it can obtain the cluster centres of the data objectively.
The different influence factors are divided into intervals (e.g. voltage levels of 110 kV, 220 kV, 500 kV and above 500 kV; detection seasons of spring, summer, autumn and winter; detection times of day and night), and the clusters of the diagnostic feature parameters under the different factors are compared across cluster categories to obtain the degree of influence of each factor. If the distance between two cluster categories is large, the data under that influence factor are more dispersed and the factor strongly affects the data; if the distance is small, the factor is considered not to affect the clustering result, i.e. its influence on the data is small.
In this way, the classification basis and the main influence factors used when calculating the differentiated thresholds are determined; according to the analysis of historical data, the application preliminarily identifies the detection season, the detection time and the voltage level as the factors with the larger influence.
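The clustering step can be illustrated with a minimal fuzzy C-means implementation (standard FCM updates with fuzzifier m = 2; a didactic sketch, not the application's tuned pipeline):

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy C-means on the rows of X.
    Returns the cluster centres and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))       # inverse-distance update
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# Two well-separated 1-D groups of diagnostic parameter values:
data = np.array([[0.1], [0.0], [-0.1], [9.9], [10.0], [10.1]])
centres, U = fuzzy_cmeans(data, c=2)
```

The inter-centre distances obtained this way are what the text compares across influence-factor intervals to judge each factor's impact.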
Fault-diagnosis-parameter distribution analysis based on a distribution model: the Weibull distribution is the theoretical basis of reliability analysis and life testing and is widely used in reliability engineering. Its probability density function is expressed as equation (11):
f(x) = (β/η)·(x/η)^(β−1)·exp[−(x/η)^β], x ≥ 0 (11)
its corresponding cumulative probability distribution function, also called the failure distribution function, is expressed as equation (12):
F(x) = 1 − exp[−(x/η)^β] (12)
wherein β represents the weibull slope, also known as the shape parameter; η represents a characteristic value, also called a scaling parameter. When these two parameters are determined, the Weibull parametric model is uniquely determined.
Assuming the collected data sequence is X = (x1, x2, …, xn), let θ = (β, η) be the model parameters to be estimated. According to the basic principle of maximum-likelihood estimation, the log-likelihood function is expressed as equation (13):
ln L(β, η) = n·ln β − nβ·ln η + (β−1)·Σi ln xi − Σi (xi/η)^β (13)
the system of likelihood equations is represented as equation (14):
∂ln L/∂β = n/β − n·ln η + Σi ln xi − Σi (xi/η)^β·ln(xi/η) = 0
∂ln L/∂η = −nβ/η + (β/η)·Σi (xi/η)^β = 0 (14)
estimates of β and η can be obtained. Therefore, the parameters of the model can be estimated according to the sample data, and the distribution model can be established.
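The system (14) can be solved numerically. The second equation gives the closed form η = (mean(x^β))^(1/β); substituting it into the first yields a one-dimensional profiled equation for β, here solved by bisection (a sketch; bracket endpoints are assumptions):

```python
import numpy as np

def weibull_mle(x, lo=0.01, hi=50.0, tol=1e-10):
    """Maximum-likelihood estimates of the Weibull shape (beta) and
    scale (eta), via bisection on the profiled likelihood equation."""
    x = np.asarray(x, dtype=float)
    lx = np.log(x)

    def g(b):  # profiled score for beta; crosses zero at the MLE
        xb = x ** b
        return (xb * lx).sum() / xb.sum() - 1.0 / b - lx.mean()

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    eta = np.mean(x ** beta) ** (1.0 / beta)   # closed form from eq. (14)
    return beta, eta

# Sanity check: exponential data is Weibull with beta = 1, eta = scale.
rng = np.random.default_rng(1)
sample = rng.exponential(scale=2.0, size=5000)
beta, eta = weibull_mle(sample)
```

With 5000 exponential samples the estimates land close to the true (β, η) = (1, 2), confirming the derivation.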
Calculating the differentiated fault-diagnosis threshold: for any distribution model, given a specific cumulative probability, the corresponding value can be determined using the inverse cumulative distribution function.
The inverse cumulative distribution function of the weibull function is expressed as equation (15):
x = F⁻¹(p | η, β) = η·[−ln(1−p)]^(1/β), p ∈ [0, 1) (15)
where p is the cumulative distribution probability and x the value whose cumulative probability is p. The cumulative probability of the Weibull model is associated with the general-, serious- and critical-defect rates in infrared-image fault diagnosis (divided according to DL/T 664): setting the cumulative probability to 1 − the general-defect rate yields the diagnostic threshold for general defects; setting it to 1 − the serious-defect rate yields the threshold for serious defects; and so on. The calculation flow is shown in fig. 23.
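Equation (15) evaluates directly; the 5% defect rate and the fitted parameter values below are illustrative assumptions:

```python
import math

def weibull_threshold(p, eta, beta):
    """Diagnostic threshold: the value whose cumulative probability is p
    under a fitted Weibull(eta, beta) model (inverse CDF, eq. 15)."""
    return eta * (-math.log(1.0 - p)) ** (1.0 / beta)

# With an assumed general-defect rate of 5%, p = 1 - 0.05 = 0.95:
t_general = weibull_threshold(0.95, eta=40.0, beta=2.0)
```

Plugging the threshold back into equation (12) recovers the cumulative probability 0.95, as the derivation requires.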
4. Edge calculation based multi-sensor image intelligent diagnosis device
4.1 deep neural network model compression
The technical route of the research content of the application is shown in figure 24:
To address the complex structure, parameter redundancy and excessive bit-width requirements of the deep neural networks used for multispectral image recognition, the application studies how the network structure and parameter scale affect algorithm accuracy and establishes the relationship between different compression methods and accuracy. Deep compression is performed through network pruning, parameter quantization and tensor decomposition: the network is shrunk, the amount of computation and the algorithmic complexity are reduced, redundant parameters are removed, the optimal bit width is selected and the computing load is balanced, with fully connected computation following the convolutional computation of the network. The recognition rate of the compressed target network is analysed theoretically, with the expectation of approaching the optimal target-detection performance at lower complexity and meeting the high-compression, high-accuracy requirements of embedded artificial intelligence.
(1) Pruning
Within the allowable accuracy loss, model pruning is applied at the neuron level and at the neuron-link level. By gradually increasing the pruning granularity and taking the load balance of the computing units into account, pruning is performed on neurons at the row level and at the two-dimensional convolution-kernel level, giving a pruning algorithm for on-board (satellite-borne) deep neural networks. This overcomes shortcomings of existing work such as uneven weight distribution and reduced model accuracy after pruning, selects the best trade-off between network accuracy and hardware-implementation complexity, simplifies the network scale to the greatest extent, obtains a high compression rate and provides a powerful compression means for on-board artificial intelligence. The three pruning granularities of the application are shown in fig. 25: reducing depth by removing convolutional layers, reducing width within a convolutional layer, and reducing density within a convolutional layer.
(2) Quantization and weight sharing
Unlike the traditional approach of simply reducing the weight precision, the application applies quantization constraints to the weights and feature maps during training, so that the deep neural network learns the optimal quantized weights and feature values from the training data and preserves accuracy. The comparison in fig. 26 between the target quantization algorithm and the traditional precision-reduction method shows that once the precision of the weights and feature values of a network compressed by traditional quantization drops below a critical point, the recognition accuracy collapses sharply, from 90.6% to nearly 0. By contrast, a deep network trained under the quantization constraint maintains high recognition accuracy at very low bit precision.
The application mainly adopts uniform quantization, studies the effect of different quantization bit widths on model accuracy and storage compression ratio, selects the most appropriate bit width and optimizes the quantization strategy, finally obtaining a high-accuracy, high-compression-ratio deep-network quantization algorithm suitable for general embedded platforms.
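Plain uniform quantization can be sketched as below (the quantization-aware training loop is omitted; this shows only the quantize/de-quantize rounding step and how the bit width controls the error):

```python
import numpy as np

def uniform_quantize(w, bits):
    """Uniformly quantize weights to 2**bits levels over their range
    and return the de-quantized tensor."""
    levels = 2 ** bits - 1
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min) / levels
    q = np.round((w - w_min) / scale)     # integer codes in [0, levels]
    return q * scale + w_min              # de-quantize for use in the net

w = np.linspace(-1.0, 1.0, 101)
w8 = uniform_quantize(w, bits=8)          # fine: error bounded by scale/2
w2 = uniform_quantize(w, bits=2)          # coarse: only 4 distinct values
```

Halving the bit width doubles the worst-case rounding error, which is the accuracy/compression trade-off the text describes.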
As shown in fig. 27, assume a layer with 4 input neurons and 4 output neurons, so the weight is a 4 × 4 matrix. The top left is the 4 × 4 weight matrix and the bottom left the 4 × 4 gradient matrix. The weights are quantized into 4 bins (shown as 4 colors), and all weights in the same bin share the same value, so for each weight only a small index into the shared-weight table needs to be stored. During the update, all gradients are grouped by color and summed, multiplied by the learning rate, and subtracted from the shared centroids of the previous iteration. For pruned AlexNet, each CONV layer can be quantized to 8 bits (256 shared weights) and each FC layer to 5 bits (32 shared weights) without losing any accuracy.
To compute the compression ratio: given k clusters, only log2(k) bits are needed to encode an index. In general, for a network with n connections, each originally represented by b bits, constraining the connections to only k shared weights gives the compression ratio of equation (16):
r = nb / (n·log2(k) + k·b) (16)
FIG. 27 shows the weights of a single-layer neural network with four input units and four output units. Initially there are 16 weights but only 4 shared values: similar weights are merged to share the same value. Originally 16 weights of 32 bits each must be stored; after compression, only 4 effective weights (blue, green, red, orange) of 32 bits each are stored, plus 16 two-bit indexes, giving a compression ratio of 16 × 32 / (4 × 32 + 2 × 16) = 3.2.
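Equation (16) and the worked 4 × 4 example reproduce directly:

```python
import numpy as np

def sharing_compression_ratio(n, b, k):
    """Compression ratio of eq. (16): n connections of b bits each,
    reduced to k shared b-bit values plus an n-entry table of
    log2(k)-bit indexes."""
    return (n * b) / (n * np.log2(k) + k * b)

# The single-layer example from fig. 27: 16 weights, 32 bits, 4 clusters.
r = sharing_compression_ratio(n=16, b=32, k=4)
```

For large layers the k·b term is negligible, so the ratio approaches b / log2(k) — e.g. 32/5 for the 5-bit FC layers mentioned above.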
(3) Tensor resolution
A tensor is the natural generalization of a vector, which may be called a first-order tensor, and of a matrix, a second-order tensor; stacking matrices into a cube gives a data structure called a third-order tensor. Tensor decomposition, an important part of tensor analysis, exploits the structural information in tensor data to decompose a tensor into a combination of several tensors of simpler form and smaller storage size.
In a neural network, the parameters are usually stored centrally in the form of tensors. A fully connected layer transforms the input vector into the output vector through a weight matrix, so its parameters form a second-order tensor; for a convolutional layer, the input data are a third-order tensor and each convolution kernel is likewise third-order. The basic idea of tensor-decomposition-based compression is to re-express the network parameters as a combination of small tensors. The re-expressed tensor group approximates the original tensor to a given precision while occupying far less space, achieving network compression. The weight-tensor decomposition method of the application is illustrated in fig. 28 (a) and (b).
As shown in fig. 28, an input with c channels is processed by the decomposed low-order tensors, called tensor cores, and output with d channels; the dimension of the k-th tensor core is (r(k−1), nk, rk), where (r(k−1), rk) are called the tensor ranks. Adjusting the dimensions and the tensor ranks also changes the amount of computation. Through tensor decomposition, the fully connected operation is replaced by a corresponding tensor compression layer, with each weight parameter determined by the tensor ranks. Different dimensions and ranks yield different compression ratios and speed-ups, and the optimal decomposition parameters are selected subject to accuracy.
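As a second-order illustration of this idea (the application uses higher-order tensor cores; a truncated SVD shows the same principle on a fully connected weight matrix):

```python
import numpy as np

def low_rank_factorize(W, r):
    """Approximate a weight matrix W by a rank-r product A @ B — the
    second-order analogue of the tensor decomposition described above."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * s[:r]          # shape (m, r)
    B = Vt[:r, :]                 # shape (r, n)
    return A, B

# A 64x64 weight matrix of true rank 8 compresses with no loss:
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 64))
A, B = low_rank_factorize(W, r=8)
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
```

The factors store 2·64·8 = 1024 parameters instead of 4096, and the rank r plays the role the tensor ranks play in the higher-order case: larger r means better approximation but less compression.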
4.2 implementation of communication with Multi-sensor fusion device
As shown in fig. 29, the diagnostic apparatus communicates with the multi-sensor fusion device over 4G/5G: the fusion device transmits the acquired images or videos of the power equipment to the edge-computing-based rapid diagnostic apparatus over the wireless network, and the apparatus runs the neural-network computation and returns the diagnostic result to the fusion device.
In summary, the application provides a multi-sensor-fusion intelligent imaging model and prototype system, an intelligent defect-diagnosis method for power equipment based on multi-angle, multi-time, multi-scale sensor images, and a deployment and porting method on an edge-computing platform for developing a multi-sensor image intelligent diagnosis apparatus. On this basis, once completed, the application can greatly raise the level of intelligent analysis in live substation detection and promote the deep fusion of artificial intelligence and computer-vision technology with operation and inspection services. Research on lightweight deep learning effectively strengthens the real-time performance of massive data input and effective feedback in the computing system. The State Grid's single-site multispectral observations are high-dimensional and large in volume, the sites are widely distributed and data transmission is slow, so the existing centralized computing model can hardly give effective real-time feedback. Studying hardware/software acceleration and cooperative optimization strategies under a cloud-fog collaborative framework can greatly improve the platform's response speed and its ability to respond to changing demands.
The application builds a demonstration project for a substation electrical-equipment condition detection and early-warning system; after its successful application, it can gradually be popularized across provinces and nationwide according to the results.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed in the embodiment corresponds to the method disclosed in the embodiment, so that the description is simple, and the relevant points can be referred to the description of the method part.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A device detection method based on multispectral images is characterized by comprising the following steps:
the method comprises the steps of obtaining a plurality of image sets, wherein each image set corresponds to different acquisition parameters, the acquisition parameters at least comprise any one or more of acquisition scale, acquisition angle and acquisition time, each image set comprises equipment images of a plurality of frames of power equipment, the equipment images comprise infrared images, visible light images and ultraviolet images, and the equipment images in each image set correspond to the same acquisition parameter;
performing image recognition on at least the infrared images in each image set to obtain a temperature rise recognition result of the infrared images, wherein the temperature rise recognition result comprises: detecting the abnormal temperature rise of each image area contained in the infrared image;
at least performing image recognition on the infrared image in each image set to obtain a temperature rise recognition result of the infrared image, wherein the temperature rise recognition result comprises the following steps:
performing image segmentation on the infrared image in each image set to obtain a plurality of image areas, wherein each image area corresponds to one device component in the target device;
obtaining a temperature matrix corresponding to each image area at least according to the image radiance of the infrared image;
obtaining characteristic parameters of each image area according to the temperature matrix corresponding to each image area, wherein the characteristic parameters comprise characteristic parameters corresponding to current heating types and/or characteristic parameters corresponding to voltage heating types;
obtaining a temperature rise abnormity detection result of each image area at least according to the characteristic parameters and a preset parameter threshold value;
performing image recognition on at least the visible light image and the ultraviolet image in each image set to obtain a discharge recognition result of the ultraviolet image, wherein the discharge recognition result comprises: a discharge abnormality detection result for each image area included in the ultraviolet image;
obtaining an equipment detection result according to the temperature rise identification result and the discharge identification result in each image set, wherein the equipment detection result comprises: and an abnormality detection result for each image area included in the infrared image or the ultraviolet image, the abnormality detection result representing whether an apparatus component corresponding to the image area in the electric power apparatus is abnormal or not.
2. The method according to claim 1, wherein the parameter threshold is determined based on at least an influence factor whose correlation with the characteristic parameters satisfies a predetermined influence condition;
wherein the influence factors comprise at least any one or more of a seasonal factor, a time factor and a voltage grade factor.
3. The method of claim 1, wherein the parameter threshold comprises a plurality of thresholds for different temperature rise abnormality levels, the threshold for each temperature rise abnormality level being different;
and the parameter threshold is obtained by processing a plurality of historical image samples corresponding to each temperature rise abnormality level.
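One way to realize claim 3 is to derive one threshold per abnormality level from historical samples, for instance as empirical quantiles. The level names, sample values and quantiles below are assumptions for demonstration only, not the patent's grading scheme.

```python
# Per-level thresholds derived from hypothetical historical temperature rises.

def quantile(samples, q):
    """Nearest-rank empirical quantile (illustrative)."""
    s = sorted(samples)
    return s[min(round(q * (len(s) - 1)), len(s) - 1)]

historical_rises = [1, 2, 2, 3, 4, 5, 8, 12, 20, 35]   # hypothetical history
thresholds = {
    "general": quantile(historical_rises, 0.7),
    "serious": quantile(historical_rises, 0.9),
}

def grade(rise):
    """Assign the highest abnormality level whose threshold is exceeded."""
    level = "normal"
    for name, th in sorted(thresholds.items(), key=lambda kv: kv[1]):
        if rise > th:
            level = name
    return level

print(thresholds, grade(5.0), grade(15.0), grade(40.0))
```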
4. The method of claim 1, wherein before obtaining the characteristic parameters of each image area according to the temperature matrix corresponding to each image area, the method further comprises:
acquiring initial infrared data corresponding to each image area in the infrared image, and correcting the image radiance;
and adjusting the temperature matrix corresponding to each image area in the infrared image by using the corrected image radiance.
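A much-simplified sketch of the correction in claim 4: once a corrected emissivity (image radiance) is obtained from the initial infrared data, the apparent temperature matrix is re-scaled. The grey-body relation T_true = T_meas / emissivity**0.25 (in kelvin, with reflected radiation and atmospheric effects ignored) and the value 0.92 are assumptions for illustration, not the patent's actual correction.

```python
# Re-scale an apparent (Celsius) temperature matrix for a corrected emissivity.

KELVIN = 273.15

def correct_matrix(temps_c, emissivity):
    """Adjust a Celsius temperature matrix for a corrected emissivity."""
    return [[(t + KELVIN) / emissivity ** 0.25 - KELVIN for t in row]
            for row in temps_c]

measured = [[40.0, 42.0], [55.0, 50.0]]        # apparent temperatures in C
corrected = correct_matrix(measured, emissivity=0.92)
print([[round(t, 1) for t in row] for row in corrected])
```

An emissivity below 1 always raises the corrected temperature, so an uncorrected matrix understates the true temperature rise.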
5. The method according to claim 1, wherein performing image recognition on at least the visible light image and the ultraviolet image in each image set to obtain a discharge recognition result of the ultraviolet image comprises:
performing image recognition on the visible light image in each image set to identify a plurality of image areas included in the visible light image and the device component of the power equipment corresponding to each image area;
obtaining, according to the plurality of image areas included in the visible light image and the device component of the target device corresponding to each image area, a plurality of image areas included in the ultraviolet image in each image set and the device component of the power equipment corresponding to each image area;
detecting each image area in the ultraviolet image by using a detection model based on a convolutional neural network to obtain a discharge abnormality detection result of each image area in the ultraviolet image;
wherein the detection model is trained on a training sample set with discharge labels, the training sample set comprising at least a plurality of frames of training images, the training images being ultraviolet-spectrum images, and the discharge label representing whether discharge exists at the equipment component in the training image.
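A sketch of the region-transfer step in claim 5: because the visible-light and ultraviolet frames in one image set share acquisition parameters, the component regions found in the visible image can be mapped onto the UV image. The fixed scale factor assumes the two sensors are registered up to resolution, and `discharge_detector` is a stand-in for the trained CNN; both are assumptions for illustration.

```python
# Map visible-image component boxes onto the UV image, then scan each region.

def transfer_regions(visible_regions, scale):
    """Map (x, y, w, h) boxes from visible-image to UV-image coordinates."""
    return {name: tuple(int(v * scale) for v in box)
            for name, box in visible_regions.items()}

def discharge_detector(uv_region):
    # Placeholder for the CNN: call any sufficiently bright pixel "discharge".
    return any(p > 200 for row in uv_region for p in row)

visible_regions = {"insulator": (10, 10, 4, 4), "clamp": (30, 4, 4, 4)}
uv_regions = transfer_regions(visible_regions, scale=0.5)   # UV at half size
print(uv_regions)
```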
6. The method according to claim 5, wherein performing image recognition on the visible light image in each image set to identify a plurality of image areas included in the visible light image and the device component of the power equipment corresponding to each image area comprises:
detecting the visible light image by using a YOLO object recognition and positioning algorithm to obtain a plurality of equipment components of the power equipment contained in the visible light image;
wherein the YOLO object recognition and positioning algorithm comprises at least a tilt angle vector;
segmenting the visible light image according to the plurality of equipment components to obtain the image area where each equipment component is located in the visible light image;
and identifying the equipment component in each image area segmented from the visible light image to obtain a component identifier of each equipment component.
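The segmentation step of claim 6 can be sketched as cutting the visible image into one region per detected component. The toy image, the boxes and the component labels are made up; the patent's detector is a YOLO variant extended with a tilt angle vector, which is not reproduced here (only axis-aligned boxes are shown).

```python
# Cut the visible image into one region per detected equipment component.

def crop(image, box):
    """Cut an axis-aligned (x, y, w, h) region out of a row-major image."""
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

image = [[r * 10 + c for c in range(10)] for r in range(6)]   # 6x10 "image"
detections = [("bushing", (0, 0, 3, 2)), ("arrester", (5, 2, 4, 3))]
regions = {label: crop(image, box) for label, box in detections}
print(regions["bushing"])
```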
7. The method of claim 5, further comprising:
compressing the detection model based on the convolutional neural network;
wherein the compression process comprises: any one or more of pruning, quantization, weight sharing, and tensor decomposition.
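Two of the compression operations named in claim 7 (pruning and quantization) can be sketched in pure Python on a toy weight list. A real deployment would use a framework's pruning and quantization utilities; the weights, the pruning ratio and the quantization step below are illustrative assumptions.

```python
# Toy magnitude pruning and uniform quantization of a weight vector.

def prune(weights, ratio):
    """Magnitude pruning: zero out the smallest `ratio` fraction of weights."""
    k = int(len(weights) * ratio)
    cutoff = sorted(abs(w) for w in weights)[k - 1] if k else None
    return [0.0 if cutoff is not None and abs(w) <= cutoff else w
            for w in weights]

def quantize(weights, step=0.25):
    """Uniform quantization onto a grid with the given step size."""
    return [round(w / step) * step for w in weights]

w = [0.9, -0.05, 0.4, 0.02, -0.7, 0.1]
pruned = prune(w, ratio=0.5)        # drops the three smallest magnitudes
print(pruned)
print(quantize(pruned))
```

Pruning makes the weight tensor sparse, while quantization shrinks the set of distinct values; both reduce model size, which matters when the detection model runs on inspection hardware.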
8. The method according to claim 1, wherein obtaining an equipment detection result according to the temperature rise recognition result and the discharge recognition result comprises:
judging whether the discharge abnormality detection result of each image area in the discharge recognition result represents that the corresponding image area has a discharge abnormality;
and in the case that the discharge abnormality detection result represents that the corresponding image area has a discharge abnormality, adjusting the temperature rise abnormality detection result of that image area in the temperature rise recognition result from its current temperature rise abnormality level to a higher temperature rise abnormality level, so as to obtain the equipment detection result.
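The fusion rule of claim 8 amounts to promoting a region's temperature rise abnormality level when the ultraviolet branch also flags a discharge abnormality there. The four-level ladder and the region names below are assumed examples, not the patent's grading scheme.

```python
# Escalate the temperature-rise level of regions flagged for discharge.

LEVELS = ["normal", "general", "serious", "critical"]

def fuse(temp_levels, discharge_flags):
    """Promote flagged regions one level, capped at the highest level."""
    fused = {}
    for region, level in temp_levels.items():
        if discharge_flags.get(region):
            level = LEVELS[min(LEVELS.index(level) + 1, len(LEVELS) - 1)]
        fused[region] = level
    return fused

temp_levels = {"insulator": "general", "clamp": "normal"}
discharge_flags = {"insulator": True, "clamp": False}
fused = fuse(temp_levels, discharge_flags)
print(fused)  # the insulator is escalated, the clamp is unchanged
```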
9. An apparatus for detecting a device based on a multispectral image, comprising:
an image obtaining unit, configured to obtain a plurality of image sets, wherein each image set corresponds to different acquisition parameters, the acquisition parameters comprise at least any one or more of an acquisition scale, an acquisition angle and an acquisition time, each image set comprises a plurality of frames of device images of power equipment, the device images comprise an infrared image, a visible light image and an ultraviolet image, and the device images in each image set correspond to the same acquisition parameters;
an infrared detection unit, configured to perform image recognition on at least the infrared image in each image set to obtain a temperature rise recognition result of the infrared image, where the temperature rise recognition result includes: a temperature rise abnormality detection result for each image area contained in the infrared image;
the infrared detection unit is specifically configured to: perform image segmentation on the infrared image in each image set to obtain a plurality of image areas, wherein each image area corresponds to one equipment component in the target equipment; obtain a temperature matrix corresponding to each image area according to at least the image radiance of the infrared image; obtain characteristic parameters of each image area according to the temperature matrix corresponding to each image area, wherein the characteristic parameters comprise characteristic parameters corresponding to a current heating type and/or characteristic parameters corresponding to a voltage heating type; and obtain a temperature rise abnormality detection result of each image area according to at least the characteristic parameters and a preset parameter threshold;
an ultraviolet detection unit, configured to perform image recognition on at least the visible light image and the ultraviolet image in each image set to obtain a discharge recognition result of the ultraviolet image, where the discharge recognition result includes: a discharge abnormality detection result for each image area included in the ultraviolet image;
a result obtaining unit, configured to obtain an equipment detection result according to the temperature rise recognition result and the discharge recognition result in each image set, where the equipment detection result includes: an abnormality detection result for each image area included in the infrared image or the ultraviolet image, the abnormality detection result representing whether the equipment component corresponding to that image area in the power equipment is abnormal.
CN202011263184.2A 2020-11-12 2020-11-12 Equipment detection method and device based on multispectral image Active CN112379231B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011263184.2A CN112379231B (en) 2020-11-12 2020-11-12 Equipment detection method and device based on multispectral image

Publications (2)

Publication Number Publication Date
CN112379231A CN112379231A (en) 2021-02-19
CN112379231B (en) 2022-06-03

Family

ID=74583393

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020183565A1 (en) * 2019-03-11 2020-09-17 三菱電機株式会社 Image processing device, thermal image generation system, program, and recording medium
CN112381784B (en) * 2020-11-12 2024-06-25 国网浙江省电力有限公司信息通信分公司 Equipment detecting system based on multispectral image
CN113095321B (en) * 2021-04-22 2023-07-11 武汉菲舍控制技术有限公司 Roller bearing temperature measurement and fault early warning method and device for belt conveyor
CN113449767B (en) * 2021-04-29 2022-05-17 国网浙江省电力有限公司嘉兴供电公司 Multi-image fusion transformer substation equipment abnormity identification and positioning method
CN113343897B (en) * 2021-06-25 2022-06-07 中国电子科技集团公司第二十九研究所 Method for accelerating signal processing based on slope change of radiation signal
CN114002566A (en) * 2021-11-09 2022-02-01 国网山东省电力公司电力科学研究院 Partial discharge detection device and method based on multi-light fusion
CN114088212A (en) * 2021-11-29 2022-02-25 浙江天铂云科光电股份有限公司 Diagnosis method and diagnosis device based on temperature vision
CN114487742B (en) * 2022-04-14 2022-07-05 湖北工业大学 High-voltage shell discharge insulation performance detection system based on multi-mode texture analysis
CN114782297B (en) * 2022-04-15 2023-12-26 电子科技大学 Image fusion method based on motion-friendly multi-focus fusion network
CN114581741B (en) * 2022-05-09 2022-07-15 广东电网有限责任公司佛山供电局 Circuit breaker testing robot wiring positioning method and device based on image enhancement
CN115077722B (en) * 2022-08-22 2022-12-13 常州领创电气科技有限公司 Partial discharge and temperature comprehensive monitoring system and method applied to high-voltage cabinet
CN115238873B (en) * 2022-09-22 2023-04-07 深圳市友杰智新科技有限公司 Neural network model deployment method and device, and computer equipment
CN116859174A (en) * 2023-09-05 2023-10-10 深圳市鸿明机电有限公司 Online state monitoring system for electrical components of high-voltage transformer cabinet
CN117723564B (en) * 2024-02-18 2024-04-26 青岛华康塑料包装有限公司 Packaging bag printing quality detection method and system based on image transmission
CN117783793B (en) * 2024-02-23 2024-05-07 泸州老窖股份有限公司 Fault monitoring method and system for switch cabinet

Citations (4)

Publication number Priority date Publication date Assignee Title
KR101358088B1 (en) * 2013-08-19 2014-02-06 한국전기안전공사 Method for diagnosising power facility using uv-ir camera
CN105043993A (en) * 2015-07-14 2015-11-11 国网山东省电力公司电力科学研究院 Method for detecting composite insulator based on multi-spectrum
KR101573234B1 (en) * 2015-04-21 2015-12-01 주식회사 광명에스지 Arc detecting fiber optic sensor by trnasforming light and method thereof in switchgear
CN111289854A (en) * 2020-02-26 2020-06-16 华北电力大学 Insulator insulation state evaluation method of 3D-CNN and LSTM based on ultraviolet video


Non-Patent Citations (1)

Title
"Health state assessment of catenary external insulation based on multi-light-source images"; Jin Lijun et al.; High Voltage Engineering; Nov. 30, 2016; pp. 3515-3523 *


Similar Documents

Publication Publication Date Title
CN112379231B (en) Equipment detection method and device based on multispectral image
CN112381784B (en) Equipment detecting system based on multispectral image
CN112734692B (en) Defect identification method and device for power transformation equipment
US10452951B2 (en) Active visual attention models for computer vision tasks
CN108109385A (en) A kind of vehicle identification of power transmission line external force damage prevention and hazardous act judgement system and method
CN114627360A (en) Substation equipment defect identification method based on cascade detection model
CN107679495B (en) Detection method for movable engineering vehicles around power transmission line
CN111199523B (en) Power equipment identification method, device, computer equipment and storage medium
CN113436184B (en) Power equipment image defect discriminating method and system based on improved twin network
CN111582074A (en) Monitoring video leaf occlusion detection method based on scene depth information perception
CN113947555A (en) Infrared and visible light fused visual system and method based on deep neural network
CN112164086A (en) Refined image edge information determining method and system and electronic equipment
CN113515655A (en) Fault identification method and device based on image classification
CN112738533A (en) Machine patrol image regional compression method
CN112637550A (en) PTZ moving target tracking method for multi-path 4K quasi-real-time spliced video
CN113971666A (en) Power transmission line machine inspection image self-adaptive identification method based on depth target detection
CN116485802B (en) Insulator flashover defect detection method, device, equipment and storage medium
CN113536944A (en) Distribution line inspection data identification and analysis method based on image identification
CN115984672B (en) Detection method and device for small target in high-definition image based on deep learning
CN117392065A (en) Cloud edge cooperative solar panel ash covering condition autonomous assessment method
CN114648736B (en) Robust engineering vehicle identification method and system based on target detection
CN116543333A (en) Target recognition method, training method, device, equipment and medium of power system
CN113689399B (en) Remote sensing image processing method and system for power grid identification
CN116309407A (en) Method for detecting abnormal state of railway contact net bolt
CN111402223B (en) Transformer substation defect problem detection method using transformer substation video image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant