CN114331838A - Super-resolution reconstruction method for panoramic monitoring image of extra-high voltage converter station protection system - Google Patents

Super-resolution reconstruction method for panoramic monitoring image of extra-high voltage converter station protection system Download PDF

Info

Publication number
CN114331838A
CN114331838A (application number CN202111583034.4A)
Authority
CN
China
Prior art keywords
image
scale
extra
high voltage
super
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111583034.4A
Other languages
Chinese (zh)
Inventor
谢民
邵庆祝
汪伟
章昊
俞斌
于洋
张骏
叶远波
程晓平
丁津津
孙辉
张峰
许旵鹏
翁凌
刘之奎
刘宏君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electric Power Research Institute of State Grid Anhui Electric Power Co Ltd
State Grid Anhui Electric Power Co Ltd
CYG Sunri Co Ltd
Original Assignee
Electric Power Research Institute of State Grid Anhui Electric Power Co Ltd
State Grid Anhui Electric Power Co Ltd
CYG Sunri Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electric Power Research Institute of State Grid Anhui Electric Power Co Ltd, State Grid Anhui Electric Power Co Ltd, CYG Sunri Co Ltd filed Critical Electric Power Research Institute of State Grid Anhui Electric Power Co Ltd
Priority to CN202111583034.4A priority Critical patent/CN114331838A/en
Publication of CN114331838A publication Critical patent/CN114331838A/en
Pending legal-status Critical Current

Abstract

A super-resolution reconstruction method for a panoramic monitoring image of an extra-high voltage converter station protection system belongs to the technical field of power equipment detection and solves the problems that existing panoramic monitoring images are blurred, have low resolution, and cannot meet the panoramic monitoring requirements of inspection personnel. Multi-scale convolution blocks in the deep multi-scale residual network model extract low-order and high-order image features at multiple scales, avoiding incomplete extraction of image details; a residual learning mechanism in the network model retains low-order coarse features, reduces training difficulty, promotes feature reuse, and improves the reconstruction capability of the images. The reconstructed images have better structural similarity and peak signal-to-noise ratio performance. Image super-resolution reconstruction and target identification experiments were carried out successively on a standard data set and on a panoramic monitoring image data set of the extra-high voltage converter station, and the experimental results show that the high-resolution images reconstructed by the method can meet the panoramic monitoring requirements of inspection personnel.

Description

Super-resolution reconstruction method for panoramic monitoring image of extra-high voltage converter station protection system
Technical Field
The invention belongs to the technical field of power equipment detection, and relates to a super-resolution reconstruction method for a panoramic monitoring image of an extra-high voltage converter station protection system.
Background
Traditional image enhancement and reconstruction methods generally highlight the target scene by improving image contrast, and mainly include histogram equalization, logarithmic transformation, sharpening, wavelet transformation, multi-scale Retinex and similar techniques. These methods require little computing resource and are highly portable, but as general-purpose algorithms their enhancement effect is limited, and the processed images can hardly meet the requirements of panoramic monitoring in specific scenes. Image enhancement and reconstruction is a classic research topic in computer vision, and Single Image Super Resolution (SISR) is an important component of it. SISR uses a group of low-quality, low-resolution images to generate a single high-quality, high-resolution image, obtains a region of interest with higher spatial resolution, and enables focused analysis of a target object, so that the image can move from the detection level to the identification level, or further to the fine-resolution level, thereby improving the identification capability and accuracy for panoramic monitoring images of the converter station.
Current SISR algorithms can be roughly divided into three types: interpolation-based, reconstruction-based, and deep-learning-based. Interpolation algorithms have low computational cost and high real-time performance, but they lack external information, so high-frequency features are lost after image degradation and the generated images show obvious blurring and ringing effects. Reconstruction-based algorithms are more effective than interpolation, but as the reconstruction factor increases, the high-frequency details of the image become overly smooth and blurred. In recent years, deep-learning-based methods have become mainstream: they learn HR images with more high-frequency details using the mapping relationship between observed Low Resolution (LR) images and the original High Resolution (HR) images together with a large number of training samples, yet the reconstructed images still suffer from distorted detail features and high computational complexity. Convolutional neural networks are widely used for visual analysis because of their powerful image-feature learning capability, and SISR algorithms based on convolutional neural networks have been proposed and achieve significant performance gains. The document "Image Super-Resolution Using Deep Convolutional Networks" (C. Dong et al., IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016) proposes a CNN model named SRCNN, which replaces dictionary modeling with automatic adjustment of hidden-layer parameters, learns a nonlinear mapping between low-resolution input and high-resolution output, improves reconstruction accuracy, and reduces computation time. However, the SRCNN has shortcomings: bicubic interpolation may cause blurred and jagged edges in the image, and with the model parameter count fixed, a larger super-resolution factor means a larger input resolution and therefore a higher computational load. The document "Accelerating the Super-Resolution Convolutional Neural Network" (Chao Dong et al., European Conference on Computer Vision, 2016) proposes FSRCNN, an improved algorithm addressing the slow training of the SRCNN; it up-samples by deconvolution and reduces dimensionality with 1 × 1 convolutions, cutting the computation of the model and accelerating training. The core of ResNet is to add a skip connection between a convolutional layer's output and the input of an earlier convolutional layer to alleviate the vanishing-gradient problem. Let H(x) denote the underlying mapping fitted by several stacked convolutional layers, with x the input of the first convolutional layer; x is connected to the output of the last convolutional layer, so the stacked layers only need to learn the residual mapping F(x) = H(x) − x, and if F(x) is zero the residual unit fits an identity mapping.
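For illustration of the residual idea just described (not part of the claimed method), a basic residual unit could be sketched in PyTorch as follows; the two-convolution body and the 64-channel width are assumptions.

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """Stacked layers learn the residual F(x) = H(x) - x; the skip connection adds x back."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output H(x) = F(x) + x; if F(x) is zero the unit fits the identity mapping.
        return self.body(x) + x
```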
With the development of power grids, the scale of grid interconnection keeps growing, the electrical connections within the grid become tighter, the security and stability problems of large grids become increasingly prominent, and the difficulty and safety risk of operation management rise markedly. Safe and reliable operation of the extra-high voltage converter station plays an obviously important role in the safe and stable operation of the power grid, so equipment faults need to be inspected manually during daily extra-high voltage operation and maintenance to ensure system safety and stability. However, manual inspection is labour-intensive, and inspection quality is easily affected by the experience and sense of responsibility of the personnel. To improve the efficiency of operation and maintenance management of the extra-high voltage converter station, panoramic monitoring systems have been widely deployed in extra-high voltage converter stations to monitor the operating state of equipment in each link.
The state signal parameters of the extra-high voltage DC protection core links that the extra-high voltage converter station protection device needs to monitor are: A. monitoring of the state of the outlet pressure plate; B. measurement of the temperature of the terminal block in the screen cabinet; C. monitoring of the front panel of the secondary equipment in the screen cabinet; D. working temperature of the secondary equipment in the screen cabinet; E. working voltage of the secondary equipment in the screen cabinet; F. monitoring of the optical-fibre light intensity; G. cable insulation detection; H. outlet loop detection; I. position of the auxiliary contact; J. cable state detection; K. detection of environmental parameters such as temperature and humidity; and L. corrosion state of the wiring terminals. However, after long-term use of the monitoring system's equipment in its operating environment, shaking caused by vibration cannot be avoided, nor can interference on the lens from dust deposition, cobwebs and the like, so the video image becomes blurred and the acquired panoramic monitoring data are inaccurate.
Disclosure of Invention
The invention aims to provide a super-resolution reconstruction method for the panoramic monitoring image of an extra-high voltage converter station protection system, so as to solve the problems that the existing panoramic monitoring image of the extra-high voltage converter station protection system is unclear, has low resolution, and cannot meet the panoramic monitoring requirements of inspection personnel.
The invention solves the technical problems through the following technical scheme:
the super-resolution reconstruction method for the panoramic monitoring image of the extra-high voltage converter station protection system comprises the following steps:
S1, establishing a deep multi-scale residual network model on the edge side;
the deep multi-scale residual network model comprises an input convolutional layer, an output convolutional layer and k multi-scale convolution blocks; the input convolutional layer serves as an encoder that extracts the original low-order features of the low-resolution image; the output convolutional layer fuses the multi-scale detail features to reconstruct the high-resolution image; the input convolutional layer and the output convolutional layer are joined by a skip connection, which establishes an identity mapping from the low-resolution image to the high-resolution image for global residual learning; the k multi-scale convolution blocks are stacked and connected in sequence to give the network model its depth; the original low-order features are connected to the k multi-scale convolution blocks through k corresponding paths, and local residual learning enhances the network model's ability to learn complex features;
S2, inputting a sample data set and training the deep multi-scale residual network model;
S3, testing the peak signal-to-noise ratio and structural similarity index of the trained deep multi-scale residual network model on a standard data set;
S4, inputting the panoramic monitoring image of the extra-high voltage converter station into the trained deep multi-scale residual network model to complete super-resolution reconstruction and identification.
In the super-resolution reconstruction method for the panoramic monitoring image of the extra-high voltage converter station protection system, multi-scale convolution blocks in the deep multi-scale residual network model extract low-order and high-order image features at multiple scales, avoiding incomplete extraction of image details; a residual learning mechanism in the network model retains low-order coarse features, reduces training difficulty, promotes feature reuse, and improves the reconstruction capability of the images. The reconstructed images have better structural similarity and peak signal-to-noise ratio performance. Image super-resolution reconstruction and target identification experiments were carried out successively on a standard data set and on a panoramic monitoring image data set of the extra-high voltage converter station; clearer edges and more details of the reference data set and of the extra-high voltage panoramic monitoring image set are recovered, and the experimental results show that the high-resolution images reconstructed by the method can meet the panoramic monitoring requirements of inspection personnel.
Furthermore, the input convolutional layer and the output convolutional layer both use convolution kernels with a stride of 1, and the input convolutional layer is activated by ReLU.
Furthermore, the multi-scale convolution block extracts multi-level detail features from its input using convolution kernels of four scales, 3 × 3, 3 × 2, 2 × 3 and 2 × 2; the feature maps of the four scales are then spliced two by two in a specified dimension through a cross mechanism and fed into a 3 × 3 convolutional layer for feature mapping, generating a new feature map of the same size as the input that is fed into the next multi-scale convolution block.
Further, the local residual learning is defined as follows:
H_k = G_k(H_{k-1}) + F   (1)
where G_k is the feature mapping learned by the k-th multi-scale convolution block, H_k is the output of the k-th multi-scale convolution block, H_{k-1} is the output of the (k-1)-th multi-scale convolution block, and F is the original low-order features extracted by the input convolutional layer.
Further, the mapping of the k multi-scale convolution blocks obtained from global and local residual learning is expressed as:
I_HR = R(I_LR) = I_LR + F_{-1}(G_k(G_{k-1}(…(G_1(F) + F)…) + F) + F)   (2)
where F_0(·) is the mapping to be learned by the input convolutional layer, F_{-1}(·) is the mapping to be learned by the output convolutional layer, I_HR and I_LR denote the high-resolution image and the low-resolution image respectively, G_{k-1} is the feature mapping learned by the (k-1)-th multi-scale convolution block, G_1 is the feature mapping learned by the 1st multi-scale convolution block, and R(·) is the overall mapping operation.
Further, the loss function of the deep multi-scale residual network model is:
L(θ) = (1/N) Σ_{i=1}^{N} ||R(X^(i); θ) − Y^(i)||²   (3)
where θ denotes the parameters of the deep multi-scale residual network, and an Adam optimizer is adopted to minimize the loss function; X^(i) is the i-th sub-image of the sample data set {(X^(i), Y^(i))}_{i=1}^{N}, Y^(i) is the corresponding label, and N is a positive integer.
Further, the panoramic monitoring image of the extra-high voltage converter station comprises secondary equipment, hard pressure plate and terminal corrosion images.
Further, the standard data set includes: Set5, Set14 and Urban100.
Further, the calculation formula of the peak signal-to-noise ratio used in the test is:
PSNR = 10 · log10(MAX_I² / MSE)   (4)
where MSE is the mean square error between the original image and the processed image, and MAX_I is the maximum value of the image colour.
Further, the calculation formula of the structural similarity index is:
L(X, Y) = (2·u_X·u_Y + C1) / (u_X² + u_Y² + C1)   (5)
C(X, Y) = (2·σ_X·σ_Y + C2) / (σ_X² + σ_Y² + C2)   (6)
S(X, Y) = (σ_XY + C3) / (σ_X·σ_Y + C3)   (7)
SSIM(X, Y) = L(X, Y) · C(X, Y) · S(X, Y)   (8)
where u_X, u_Y, σ_X and σ_Y are the means and standard deviations of images X and Y respectively, σ_XY is the covariance of images X and Y, and C1, C2 and C3 are constants, usually taken as C1 = (K1·L)², C2 = (K2·L)², C3 = C2/2, K1 = 0.01, K2 = 0.03, where L is the range of pixel values.
The invention has the advantages that:
In the super-resolution reconstruction method for the panoramic monitoring image of the extra-high voltage converter station protection system, multi-scale convolution blocks in the deep multi-scale residual network model extract low-order and high-order image features at multiple scales, avoiding incomplete extraction of image details; a residual learning mechanism in the network model retains low-order coarse features, reduces training difficulty, promotes feature reuse, and improves the reconstruction capability of the images. The reconstructed images have better structural similarity and peak signal-to-noise ratio performance. Image super-resolution reconstruction and target identification experiments were carried out successively on a standard data set and on a panoramic monitoring image data set of the extra-high voltage converter station; clearer edges and more details of the reference data set and of the extra-high voltage panoramic monitoring image set are recovered, and the experimental results show that the high-resolution images reconstructed by the method can meet the panoramic monitoring requirements of inspection personnel.
Drawings
FIG. 1 is an architecture diagram of the deep multi-scale residual network model of the super-resolution reconstruction method according to an embodiment of the present invention;
FIG. 2 is a structural diagram of a multi-scale convolution block of the super-resolution reconstruction method according to an embodiment of the present invention;
FIG. 3 is a PSNR performance graph of the super-resolution reconstruction method according to an embodiment of the present invention at different network model depths;
FIGS. 4, 5 and 6 compare the reconstruction effects of the super-resolution reconstruction method according to an embodiment of the present invention and of other algorithms on the secondary-equipment monitoring image, the hard pressure plate image and the terminal corrosion image, respectively.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The technical solution of the invention is further described below with reference to the drawings and the specific embodiments in the specification:
example one
As shown in fig. 1, the super-resolution reconstruction method for the panoramic monitoring image of the extra-high voltage converter station protection system comprises the following steps:
1. Establishing a deep multi-scale residual network model on the edge side
1.1 Deep Multi-scale Residual Network (DMRN)
Fig. 1 shows the deep multi-scale residual network architecture, which consists of convolutional layers, k multi-scale convolution blocks (MC blocks) and skip connections. Stacking the k multi-scale convolution blocks provides greater depth, while the convolution operations of small-kernel convolution blocks at different scales are improved to extract detail features at different scales of the image and fuse them, so that the network's ability to reconstruct both the fine texture and the macroscopic geometric features of the input panoramic monitoring image is improved and HR images with more vivid detail information are generated. A residual structure is added during network training to realize feature reuse, reduce network redundancy, accelerate network convergence and alleviate the vanishing-gradient problem.
1.2 Multi-scale Convolution Block
The DMRN uses a multi-scale convolution block architecture to perform the super-resolution task. Convolutional layers of different scales form a multi-scale convolution block, which can generate and combine detail features at different levels.
Fig. 2 shows the structure of a single multi-scale convolution block, where x denotes the input of the block and y its output. Convolution kernels of different scales extract details of different frequencies: in each multi-scale convolution block, convolution kernels of four scales, 3 × 3, 3 × 2, 2 × 3 and 2 × 2, extract multi-level detail features from the input; the feature maps of the four scales are then spliced two by two in a specified dimension through a cross mechanism and fed into a 3 × 3 convolutional layer for feature mapping, generating a new feature map of the same size as the input, which is fed into the next multi-scale convolution block. The multi-scale convolution block better preserves the edge information of the image and increases the detail information of the reconstructed high-resolution image.
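For illustration only, a minimal PyTorch sketch of such a multi-scale convolution block is given below. The channel width, the asymmetric zero-padding that keeps the even-sized kernels size-preserving, and the plain channel-wise concatenation standing in for the pairwise cross-splicing mechanism (whose exact pairing is not specified here) are all assumptions, not the patented implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MCBlock(nn.Module):
    """Sketch of a multi-scale convolution block with 3x3, 3x2, 2x3 and 2x2 kernels."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv3x3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv3x2 = nn.Conv2d(channels, channels, kernel_size=(3, 2))
        self.conv2x3 = nn.Conv2d(channels, channels, kernel_size=(2, 3))
        self.conv2x2 = nn.Conv2d(channels, channels, kernel_size=(2, 2))
        # 3x3 fusion layer mapping the spliced features back to `channels` maps.
        self.fuse = nn.Conv2d(4 * channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # F.pad takes (left, right, top, bottom); padding keeps every branch the same size as x.
        branches = [
            F.relu(self.conv3x3(x)),
            F.relu(self.conv3x2(F.pad(x, (1, 0, 1, 1)))),
            F.relu(self.conv2x3(F.pad(x, (1, 1, 1, 0)))),
            F.relu(self.conv2x2(F.pad(x, (1, 0, 1, 0)))),
        ]
        # Stand-in for the cross-splicing step: concatenate along the channel dimension,
        # then apply the 3x3 feature-mapping convolution.
        return self.fuse(torch.cat(branches, dim=1))
```

The asymmetric padding is only one way to satisfy the requirement that the block's output has the same size as its input; the original design may handle the even-sized kernels differently.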
1.3 Residual learning mechanism
The DMRN architecture introduces both a global and a local residual learning mechanism for network training. Because of the similarity between the low-resolution image and the high-resolution image, the DMRN establishes an identity mapping from the low-resolution image to the high-resolution image through a skip connection between input and output, so as to perform global residual learning.
There are two reasons for using local residual learning. First, the detail required in high-resolution reconstruction is the sum of high-frequency features and low-order features; the first convolutional layer in Fig. 1 acts as an encoder that extracts the original low-order features of the low-resolution image, and local residual learning preserves these low-order features. Second, multiple paths exist between the low-order features and the multi-scale convolution blocks, and local residual learning enhances the network's ability to learn more complex features.
Local residual learning is defined as follows:
H_k = G_k(H_{k-1}) + F   (1)
where G_k is the feature mapping learned by the k-th multi-scale convolution block, H_k is the output of the k-th multi-scale convolution block, and F is the original low-order features extracted by the first convolutional layer.
Let F_0 be the mapping to be learned by the first convolutional layer (with ReLU) and F_{-1} the mapping to be learned by the last convolutional layer (without ReLU); then the mapping of the k multi-scale convolution blocks learned from the global and local residuals can be expressed as
I_HR = R(I_LR) = I_LR + F_{-1}(G_k(G_{k-1}(…(G_1(F) + F)…) + F) + F)   (2)
where F = F_0(I_LR) is the original low-order features.
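As an illustrative check of equation (2) (not an additional claim), unrolling the nesting for k = 2 gives:

```latex
% Unrolling equation (2) for k = 2, with F = F_0(I_{LR}):
\begin{aligned}
H_1 &= G_1(F) + F, \qquad H_2 = G_2(H_1) + F
      && \text{(local residual learning, eq. (1))}\\
I_{HR} &= I_{LR} + F_{-1}(H_2)
        = I_{LR} + F_{-1}\bigl(G_2(G_1(F) + F) + F\bigr)
      && \text{(global residual learning)}
\end{aligned}
```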
1.4 DMRN network details
The body structure of the DMRN in Fig. 1 differs from that of ResNet in that the DMRN removes the pooling layers and the batch-normalization layers. Because the goal of SISR is accurate pixel prediction, removing the pooling layers helps preserve more image detail; the batch-normalization layer normalizes the features, which removes the range flexibility of the network and is unfavourable for image reconstruction, so it is removed as well. The DMRN uses convolution kernels with a stride of 1 and is activated with ReLU, so images of any size can be accepted as input. In addition, the DMRN uses two 5 × 5 convolutional layers as the first and last layers to extract coarse features and to fuse the multi-scale detail features for reconstructing the HR image.
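Pulling these details together, a minimal sketch of the overall network might look as follows. It reuses the MCBlock class sketched in Section 1.2, and the default number of blocks k, the channel width and the single-channel (Y-channel) input are assumptions for illustration rather than the patented configuration.

```python
import torch
import torch.nn as nn

class DMRN(nn.Module):
    """Sketch of the deep multi-scale residual network: 5x5 input/output convolutions,
    k stacked multi-scale convolution blocks, local and global residual learning."""
    def __init__(self, k: int = 12, channels: int = 64, in_channels: int = 1):
        super().__init__()
        # Input-end convolution (encoder), ReLU-activated; stride 1, no pooling, no batch norm.
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, channels, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
        )
        self.blocks = nn.ModuleList([MCBlock(channels) for _ in range(k)])
        # Output-end convolution (no ReLU) fuses the multi-scale detail features.
        self.tail = nn.Conv2d(channels, in_channels, kernel_size=5, padding=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.head(x)            # original low-order features F = F_0(I_LR)
        h = f
        for block in self.blocks:
            h = block(h) + f        # local residual learning: H_k = G_k(H_{k-1}) + F
        return x + self.tail(h)     # global residual: I_HR = I_LR + F_{-1}(...)
```

Because every convolution uses stride 1 and size-preserving padding, a forward pass returns a tensor of the same spatial size as its input, consistent with the any-input-size property described above.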
2. Inputting sample data and training the deep multi-scale residual network model
800 monitoring images collected by the panoramic monitoring system of the extra-high voltage converter station are selected; the resolution of the images is 1600 × 1200. Each high-resolution image is first reduced to 1/3 of its original resolution with a bicubic interpolation algorithm and then resized back to the original image size. From the adjusted images, 24000 sub-images of size 32 × 32 are extracted with a stride of 32 as the data set {(X^(i), Y^(i))}_{i=1}^{N}, where N = 24000, X^(i) is the i-th sub-image and Y^(i) is the corresponding label. 80% of the images are randomly chosen as the training set and the remaining 20% as the test set. The Mean Square Error (MSE) is used as the loss function of the network:
L(θ) = (1/N) Σ_{i=1}^{N} ||R(X^(i); θ) − Y^(i)||²   (3)
where θ denotes the parameters of the DMRN, and an Adam optimizer is adopted to minimize the loss function.
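A minimal training-loop sketch following the description above is given below; the data iterable `loader`, the learning rate and the number of epochs are hypothetical placeholders, not values taken from this disclosure.

```python
import torch
import torch.nn as nn

def train_dmrn(model: nn.Module,
               loader,                 # hypothetical iterable of (lr_patch, hr_patch) batches
               epochs: int = 100,
               lr: float = 1e-4,
               device: str = "cpu") -> nn.Module:
    """Train with MSE loss and the Adam optimizer, as in equation (3); hyper-parameters assumed."""
    model = model.to(device)
    criterion = nn.MSELoss()                                   # equation (3)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for lr_patch, hr_patch in loader:                      # 32x32 sub-images and their labels
            lr_patch, hr_patch = lr_patch.to(device), hr_patch.to(device)
            loss = criterion(model(lr_patch), hr_patch)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```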
3. Testing and analyzing the trained deep multi-scale residual network model on standard data sets
After the DMRN training is completed, the network is first tested on three standard data sets: Set5, Set14 and Urban100. Since human vision is more sensitive to brightness variation, the images are converted into YCbCr space, and the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) on the Y channel are used to evaluate super-resolution reconstruction performance.
PSNR is defined as the ratio of the maximum signal power to the noise power, expressed in decibels (dB), and is often used to evaluate image compression quality; a larger value indicates a more faithful reconstructed image. The PSNR is calculated as follows:
PSNR = 10 · log10(MAX_I² / MSE)   (4)
where MSE is the mean square error between the original image and the processed image, and MAX_I is the maximum value of the image colour.
The SSIM evaluates the similarity between the original image and the processed image; its value range is [0, 1], and a larger value indicates smaller image distortion. The SSIM is calculated as follows:
L(X, Y) = (2·u_X·u_Y + C1) / (u_X² + u_Y² + C1)   (5)
C(X, Y) = (2·σ_X·σ_Y + C2) / (σ_X² + σ_Y² + C2)   (6)
S(X, Y) = (σ_XY + C3) / (σ_X·σ_Y + C3)   (7)
SSIM(X, Y) = L(X, Y) · C(X, Y) · S(X, Y)   (8)
where u_X, u_Y, σ_X and σ_Y are the means and standard deviations of images X and Y respectively, σ_XY is the covariance of images X and Y, and C1, C2 and C3 are constants, usually taken as C1 = (K1·L)², C2 = (K2·L)², C3 = C2/2, K1 = 0.01, K2 = 0.03, where L is the range of pixel values.
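For concreteness, a small NumPy sketch of these two metrics is shown below. It implements equations (4)-(8) globally over a single (e.g. Y-channel) image; the evaluation in the experiments may instead use a windowed SSIM, and MAX_I = 255 and L = 255 are assumptions for 8-bit images.

```python
import numpy as np

def psnr(original: np.ndarray, processed: np.ndarray, max_i: float = 255.0) -> float:
    """Peak signal-to-noise ratio, equation (4)."""
    mse = np.mean((original.astype(np.float64) - processed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_i ** 2 / mse)

def ssim(x: np.ndarray, y: np.ndarray, pixel_range: float = 255.0,
         k1: float = 0.01, k2: float = 0.03) -> float:
    """Global structural similarity index, equations (5)-(8)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (k1 * pixel_range) ** 2, (k2 * pixel_range) ** 2
    c3 = c2 / 2.0
    ux, uy = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - ux) * (y - uy)).mean()
    luminance = (2 * ux * uy + c1) / (ux ** 2 + uy ** 2 + c1)   # L(X, Y)
    contrast = (2 * sx * sy + c2) / (sx ** 2 + sy ** 2 + c2)    # C(X, Y)
    structure = (sxy + c3) / (sx * sy + c3)                     # S(X, Y)
    return luminance * contrast * structure
```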
The depth of the DMRN is determined by the number of multi-scale convolution blocks. Models with different numbers of multi-scale convolution blocks (k = 8, 10, 12, 14) were selected; Fig. 3 shows the average PSNR and SSIM performance on 50 randomly selected images from the Set5, Set14 and Urban100 test data sets. As the number of multi-scale convolution blocks increases, the PSNR of the DMRN on Set5, Set14 and Urban100 improves steadily, indicating that the method of the present invention achieves the expected goal of "deeper is better". However, too deep a network also increases computational complexity, and k = 14 brings only a limited performance improvement over k = 12, so the setting k = 12 is adopted in the subsequent experiments.
The SSIM and PSNR values obtained on the standard data sets Set5, Set14 and Urban100 are shown in Tables 1 and 2 respectively. The tables also compare the method with other methods, including Bicubic interpolation, the SRCNN and the FSRCNN.
TABLE 1 Structural similarity index on the Set5, Set14 and Urban100 data sets
(table reproduced as an image in the original publication)
TABLE 2 Peak signal-to-noise ratio on the Set5, Set14 and Urban100 data sets
(table reproduced as an image in the original publication)
Here the DMRN with k = 12 is selected as the comparison model. As can be seen from the tables, the average SSIM values of the SRCNN, FSRCNN and DMRN algorithms are 0.7784, 0.7827 and 0.8082 respectively, and the structural similarity of the algorithm of the invention is increased by 0.0043 and 0.0298 respectively. The average PSNR of the SRCNN, FSRCNN and DMRN is 27.50 dB, 27.67 dB and 28.33 dB respectively, and the algorithm of the invention is improved by 0.17 dB and 0.83 dB respectively. The results show that, by fusing low-order and high-order features and combining global and local residuals, the algorithm can establish the nonlinear mapping from LR to HR.
4. Inputting the panoramic monitoring image of the extra-high voltage converter station into the trained deep multi-scale residual network model to complete super-resolution reconstruction
Figs. 4, 5 and 6 show super-resolution reconstruction results on panoramic monitoring images of the extra-high voltage converter station, covering the secondary equipment, the hard pressure plate and the terminal corrosion images respectively. The method of the invention was compared with Bicubic, SRCNN and FSRCNN, and quantitative experimental results are given in Tables 3 and 4. The images before and after reconstruction were fed into the YOLOv3 recognition model used in the extra-high voltage converter station, and the recognition results are shown in Table 5. The experimental results show that, compared with the other methods, the DMRN achieves better SSIM and PSNR performance and recovers clearer edges and more details, such as the indicator light and the corresponding blurred text in the first image, the hard pressure plate switch state and text display in the second image, and the terminal corrosion state in the third image, so the method can better help inspection personnel to perform panoramic monitoring.
TABLE 3 Structural similarity index of the extra-high voltage converter station monitoring images
(table reproduced as an image in the original publication)
TABLE 4 Peak signal-to-noise ratio of the extra-high voltage converter station monitoring images
(table reproduced as an image in the original publication)
TABLE 5 Recognition results on the extra-high voltage converter station monitoring images
(table reproduced as an image in the original publication)
The invention provides a deep multi-scale residual network to realize fast super-resolution reconstruction of the panoramic monitoring image of an extra-high voltage converter station protection system, so as to meet the panoramic monitoring requirements of inspection personnel. In the DMRN, multi-scale convolution blocks extract low-order and high-order image features at multiple scales, which solves the problem of incomplete extraction of image details. The network uses residual learning to retain low-order coarse features, reduce training difficulty and promote feature reuse, further improving the reconstruction capability of the images. Experimental results show that, compared with other methods, the DMRN has better SSIM and PSNR performance, recovers clearer edges and more details on the standard data sets and the extra-high voltage panoramic monitoring image set, improves the quality of high-resolution image reconstruction, and meets the inspection personnel's requirements for panoramic monitoring of the extra-high voltage converter station protection system.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A super-resolution reconstruction method for a panoramic monitoring image of an extra-high voltage converter station protection system, characterized by comprising the following steps:
S1, establishing a deep multi-scale residual network model on the edge side;
the deep multi-scale residual network model comprises an input convolutional layer, an output convolutional layer and k multi-scale convolution blocks; the input convolutional layer serves as an encoder that extracts the original low-order features of the low-resolution image; the output convolutional layer fuses the multi-scale detail features to reconstruct the high-resolution image; the input convolutional layer and the output convolutional layer are joined by a skip connection, which establishes an identity mapping from the low-resolution image to the high-resolution image for global residual learning; the k multi-scale convolution blocks are stacked and connected in sequence to give the network model its depth; the original low-order features are connected to the k multi-scale convolution blocks through k corresponding paths, and local residual learning enhances the network model's ability to learn complex features;
S2, inputting a sample data set and training the deep multi-scale residual network model;
S3, testing the peak signal-to-noise ratio and structural similarity index of the trained deep multi-scale residual network model on a standard data set; and
S4, inputting the panoramic monitoring image of the extra-high voltage converter station into the trained deep multi-scale residual network model to complete super-resolution reconstruction.
2. The super-resolution reconstruction method for the panoramic monitoring image of the extra-high voltage converter station protection system according to claim 1, wherein the input convolutional layer and the output convolutional layer both use convolution kernels with a stride of 1, and the input convolutional layer is activated by ReLU.
3. The super-resolution reconstruction method for the panoramic monitoring image of the extra-high voltage converter station protection system according to claim 1, wherein the multi-scale convolution block extracts multi-level detail features from its input using convolution kernels of four scales, 3 × 3, 3 × 2, 2 × 3 and 2 × 2; the feature maps of the four scales are then spliced two by two in a specified dimension through a cross mechanism and fed into a 3 × 3 convolutional layer for feature mapping, generating a new feature map of the same size as the input that is fed into the next multi-scale convolution block.
4. The super-resolution reconstruction method for the panoramic monitoring image of the extra-high voltage converter station protection system according to claim 1, wherein the local residual learning is defined as follows:
H_k = G_k(H_{k-1}) + F   (1)
where G_k is the feature mapping learned by the k-th multi-scale convolution block, H_k is the output of the k-th multi-scale convolution block, H_{k-1} is the output of the (k-1)-th multi-scale convolution block, and F is the original low-order features extracted by the input convolutional layer.
5. The super-resolution reconstruction method for the panoramic monitoring image of the extra-high voltage converter station protection system according to claim 4, wherein the mapping of the k multi-scale convolution blocks obtained from global and local residual learning is expressed as:
I_HR = R(I_LR) = I_LR + F_{-1}(G_k(G_{k-1}(…(G_1(F) + F)…) + F) + F)   (2)
where F_0(·) is the mapping to be learned by the input convolutional layer, F_{-1}(·) is the mapping to be learned by the output convolutional layer, I_HR and I_LR denote the high-resolution image and the low-resolution image respectively, G_{k-1} is the feature mapping learned by the (k-1)-th multi-scale convolution block, G_1 is the feature mapping learned by the 1st multi-scale convolution block, and R(·) is the overall mapping operation.
6. The super-resolution reconstruction method for the panoramic monitoring image of the extra-high voltage converter station protection system according to claim 1, wherein the loss function of the deep multi-scale residual network model is:
L(θ) = (1/N) Σ_{i=1}^{N} ||R(X^(i); θ) − Y^(i)||²   (3)
where θ denotes the parameters of the deep multi-scale residual network, and an Adam optimizer is adopted to minimize the loss function; X^(i) is the i-th sub-image of the sample data set {(X^(i), Y^(i))}_{i=1}^{N}, Y^(i) is the corresponding label, and N is a positive integer.
7. The super-resolution reconstruction method for the panoramic monitoring image of the extra-high voltage converter station protection system according to claim 1, wherein the panoramic monitoring image of the extra-high voltage converter station comprises secondary equipment, hard pressure plate and terminal corrosion images.
8. The super-resolution reconstruction method for the panoramic monitoring image of the extra-high voltage converter station protection system according to claim 1, wherein the standard data set comprises: Set5, Set14 and Urban100.
9. The super-resolution reconstruction method for the panoramic monitoring image of the extra-high voltage converter station protection system according to claim 1, wherein the calculation formula of the peak signal-to-noise ratio used in the test is:
PSNR = 10 · log10(MAX_I² / MSE)   (4)
where MSE is the mean square error between the original image and the processed image, and MAX_I is the maximum value of the image colour.
10. The super-resolution reconstruction method for the panoramic monitoring image of the extra-high voltage converter station protection system according to claim 1, wherein the calculation formula of the structural similarity index is:
L(X, Y) = (2·u_X·u_Y + C1) / (u_X² + u_Y² + C1)   (5)
C(X, Y) = (2·σ_X·σ_Y + C2) / (σ_X² + σ_Y² + C2)   (6)
S(X, Y) = (σ_XY + C3) / (σ_X·σ_Y + C3)   (7)
SSIM(X, Y) = L(X, Y) · C(X, Y) · S(X, Y)   (8)
where u_X, u_Y, σ_X and σ_Y are the means and standard deviations of images X and Y respectively, σ_XY is the covariance of images X and Y, and C1, C2 and C3 are constants, usually taken as C1 = (K1·L)², C2 = (K2·L)², C3 = C2/2, K1 = 0.01, K2 = 0.03, where L is the range of pixel values.
CN202111583034.4A 2021-12-22 2021-12-22 Super-resolution reconstruction method for panoramic monitoring image of extra-high voltage converter station protection system Pending CN114331838A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111583034.4A CN114331838A (en) 2021-12-22 2021-12-22 Super-resolution reconstruction method for panoramic monitoring image of extra-high voltage converter station protection system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111583034.4A CN114331838A (en) 2021-12-22 2021-12-22 Super-resolution reconstruction method for panoramic monitoring image of extra-high voltage converter station protection system

Publications (1)

Publication Number Publication Date
CN114331838A true CN114331838A (en) 2022-04-12

Family

ID=81054794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111583034.4A Pending CN114331838A (en) 2021-12-22 2021-12-22 Super-resolution reconstruction method for panoramic monitoring image of extra-high voltage converter station protection system

Country Status (1)

Country Link
CN (1) CN114331838A (en)

Similar Documents

Publication Publication Date Title
CN109712127B (en) Power transmission line fault detection method for machine inspection video stream
CN111507914A (en) Training method, repairing method, device, equipment and medium of face repairing model
Tang et al. A reduced-reference quality assessment metric for super-resolution reconstructed images with information gain and texture similarity
CN105631890B (en) Picture quality evaluation method out of focus based on image gradient and phase equalization
CN116309483A (en) DDPM-based semi-supervised power transformation equipment characterization defect detection method and system
CN115830004A (en) Surface defect detection method, device, computer equipment and storage medium
CN109886927B (en) Image quality evaluation method based on nuclear sparse coding
CN108830829B (en) Non-reference quality evaluation algorithm combining multiple edge detection operators
CN113628143A (en) Weighted fusion image defogging method and device based on multi-scale convolution
Fu et al. Screen content image quality assessment using Euclidean distance
CN111127386B (en) Image quality evaluation method based on deep learning
CN112348809A (en) No-reference screen content image quality evaluation method based on multitask deep learning
CN114331838A (en) Super-resolution reconstruction method for panoramic monitoring image of extra-high voltage converter station protection system
CN116957940A (en) Multi-scale image super-resolution reconstruction method based on contour wave knowledge guided network
CN116523875A (en) Insulator defect detection method based on FPGA pretreatment and improved YOLOv5
CN116309364A (en) Transformer substation abnormal inspection method and device, storage medium and computer equipment
CN110223273A (en) A kind of image repair evidence collecting method of combination discrete cosine transform and neural network
CN115331081A (en) Image target detection method and device
CN111402223A (en) Transformer substation defect problem detection method using transformer substation video image
Liu et al. The First Comprehensive Dataset with Multiple Distortion Types for Visual Just-Noticeable Differences
CN111127587A (en) Non-reference image quality map generation method based on countermeasure generation network
Li et al. An electrical equipment image enhancement approach based on Zero-DCE model for power IoTs edge service
CN113947567B (en) Defect detection method based on multitask learning
CN115795370B (en) Electronic digital information evidence obtaining method and system based on resampling trace
Fang et al. Dynamic image restoration and fusion based on dynamic degradation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination