CN115601644A - Power transmission line image enhancement method under low illumination based on generation countermeasure network - Google Patents
- Publication number
- CN115601644A CN115601644A CN202211292721.5A CN202211292721A CN115601644A CN 115601644 A CN115601644 A CN 115601644A CN 202211292721 A CN202211292721 A CN 202211292721A CN 115601644 A CN115601644 A CN 115601644A
- Authority
- CN
- China
- Prior art keywords
- network
- image
- module
- power transmission
- transmission line
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/182—Network patterns, e.g. roads or rivers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
Abstract
The invention discloses a method for enhancing power transmission line images under low illuminance based on a generative adversarial network. The invention designs a residual module based on a mixed attention mechanism and, in combination with a parallel dilated convolution module, builds a generation network that extracts more effective feature information to enhance power transmission line images under low illuminance. Secondly, the invention designs an adversarial network with dual discrimination networks, comprising a global discrimination network and a local discrimination network, which improves the ability of the adversarial network to discriminate the input image. Finally, the invention designs a loss function for the low-illuminance power transmission line image enhancement network based on the generative adversarial network. The invention can effectively improve the brightness of power transmission line images under low illuminance while avoiding overexposure, underexposure and artifacts in the enhanced image, retains more image detail information, and improves the quality of the enhanced power transmission line image.
Description
Technical Field
The invention belongs to the technical field of image enhancement under low illuminance, and relates to a method for enhancing power transmission line images under low illuminance based on a generative adversarial network.
Background
On-line monitoring equipment based on image processing is deployed on most important power transmission lines in China with voltage classes of 220 kV and above. Cameras mounted on high-voltage transmission towers collect images of the transmission line and its surroundings, from which broken strands, foreign objects, ice coating, external damage and the state of the transmission equipment are analysed, so that hidden dangers can be found and handled in time. However, under low illuminance, such as on overcast days, the images collected on site are unclear and cannot effectively reflect the actual condition of the line, which degrades on-line monitoring. The clarity of on-line monitoring images of power transmission lines under low illuminance therefore needs to be improved, so as to improve the accuracy of on-line monitoring under low illuminance.
Methods for enhancing power transmission line images under low illuminance fall into two categories: traditional methods and learning-based methods. Traditional methods in turn follow two directions: histogram equalization, and image enhancement based on Retinex theory. Histogram-equalization-based enhancement of low-illuminance transmission line images amplifies background noise and easily causes local over-saturation and severe loss of local information. Retinex-based enhancement produces halo artifacts in regions with large brightness differences, degrading the visual effect of the image, and can also suffer from insufficient edge sharpening, abrupt shadow boundaries, partial color distortion and unclear texture. Traditional image enhancement methods are relatively simple and fast, but because they ignore contextual information in the image, the enhancement results are not ideal.
With the development of deep learning, methods for enhancing low-illuminance power transmission line images based on deep learning have been proposed. These methods train a network with paired (low-illuminance image and original high-definition image) or unpaired transmission line images to obtain a model for enhancing low-illuminance images. Although the enhanced images obtained by deep-learning-based methods are of better quality than those of traditional methods, artifacts and severe loss of some details still occur to a certain extent, which affects the quality of the enhanced image and, in turn, the accuracy of artificial-intelligence-based on-line monitoring of transmission lines in low-illuminance environments.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide a method for enhancing power transmission line images under low illuminance based on a generative adversarial network that can effectively enhance a single low-illuminance transmission line image, avoid artifacts in the enhanced image, retain more image detail information and improve the quality of the enhanced image.
The technical scheme for solving the technical problem is as follows: a method for enhancing power transmission line images under low illuminance based on a generative adversarial network, characterized by comprising the following specific steps:
1) Building a data set
Selecting low-illuminance images and normal-illuminance images of different backgrounds from a power transmission line video monitoring system and processing them to construct an unpaired sample training set and a paired sample test set;
2) Building a generative network
Constructing a network for enhancing the image of the power transmission line under low illuminance;
3) Constructing an adversarial network
Constructing a network for judging whether an input power transmission line image is a true image or a false image;
defining the image of the power transmission line under the original normal illuminance as a true image;
defining an output image of the generated network as a false image;
4) Constructing the loss function of the generative adversarial network
The loss function is used to measure the performance of the generation network and the adversarial network during training;
5) Network model training
Training the network model to obtain the optimal generation network and the optimal adversarial network;
6) Network model performance evaluation
Inputting the low-illuminance images of the paired sample test set constructed in step 1) into the trained generation network obtained in step 5) to obtain enhanced power transmission line images, so as to measure the ability of the network model to enhance power transmission line images under low illuminance;
7) Network model application
And deploying the trained network model on a server, and enhancing the low-illumination power transmission line image returned from the site to obtain an enhanced image.
Further, the step 1) of constructing the data set comprises the following steps:
selecting low-illuminance images and normal-illuminance images with different backgrounds from a power transmission line video monitoring system, naming the low-illuminance images the original low-illuminance group and the normal-illuminance images the normal group;
dividing the images of the normal group into normal group one and normal group two, and constructing the unpaired sample training set from the images of the original low-illuminance group and the images of normal group one;
processing the images of normal group two to obtain corresponding low-illuminance images, named the processed low-illuminance group;
constructing the paired sample test set from the images of normal group two and the images of the processed low-illuminance group, thereby obtaining an unpaired sample training set and a paired sample test set.
Further, the images of the normal group are divided into normal group one and normal group two, where normal group one contains 70% of the images of the normal group and normal group two contains the remaining 30%.
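The 70/30 split described above can be sketched as follows; the function name, fixed seed and placeholder filenames are illustrative, not taken from the patent:

```python
import random

def split_normal_group(normal_images, ratio=0.7, seed=0):
    # Shuffle, then split the normal-illuminance images into group one
    # (70%, used with raw low-illuminance images for the unpaired training
    # set) and group two (30%, later degraded to build the paired test set).
    rng = random.Random(seed)
    imgs = list(normal_images)
    rng.shuffle(imgs)
    k = int(len(imgs) * ratio)
    return imgs[:k], imgs[k:]

group_one, group_two = split_normal_group([f"img_{i:03d}.jpg" for i in range(100)])
```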
In the step 2), a generation network is constructed, and the generation network is composed of a network A, a network B and a network C, and specifically comprises the following steps:
the A network is used for preprocessing the input low-illuminance power transmission line image to obtain a brightness attention map, which reduces overexposure or underexposure in the generated image; the brightness attention map obtained by preprocessing is added to the input low-illuminance image of the power transmission line to form the input image of the B network;
the B network is used for extracting the low-frequency information of the input low-illuminance image of the power transmission line, and consists of a convolution module, a LeakyRelu activation function, a first combination module, a second combination module and a residual error module based on a mixed attention mechanism, wherein:
(1) the first combination module and the second combination module are both composed of a first branch and a second branch, and the output characteristic diagrams of the first branch and the second branch of each combination module are spliced to obtain the output characteristic diagram of the combination module;
(2) the residual error module based on the mixed attention mechanism consists of a convolution module, a LeakyRelu activation function, a convolution module, a parallel attention module, a convolution module, a LeakyRelu activation function, a convolution module, another parallel attention module and a convolution module in sequence;
(3) performing element addition on the input feature map of the residual module based on the mixed attention mechanism in the step (2) and the output feature map of the last convolution module in the residual module based on the mixed attention mechanism to obtain a final output feature map of the residual module based on the mixed attention mechanism;
the C network is mainly used for extracting high-frequency information of an input image, and sequentially comprises a first mixing module, a residual error module based on a mixed attention mechanism, a second mixing module, a residual error module based on the mixed attention mechanism, a convolution module and a Tanh activation function, wherein:
the first mixing module and the second mixing module are both composed of an up-sampling module, a convolution module and a LeakyRelu activation function;
the output of the first mixing module of the C network and the output of the first combination module of the B network are subjected to characteristic fusion through splicing operation, and the output of the second mixing module of the C network and the input of the first combination module of the B network are subjected to characteristic fusion through splicing operation, so that the fusion of high-frequency characteristics and low-frequency characteristics is realized;
multiplying the output of the C network element-wise by the brightness attention map obtained by the A network to obtain an intermediate image;
adding the intermediate image to the low-illuminance image input to the generation network to obtain the enhanced power transmission line image.
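The final composition of the generator output described above can be sketched as follows. This is a minimal sketch with random stand-in tensors; the max-RGB construction of the brightness attention map is a common choice and an assumption, not stated in the patent:

```python
import torch

def compose_enhanced(c_output, attention_map, low_light):
    # The C-network output is multiplied element-wise by the brightness
    # attention map from the A network, then added to the low-illuminance
    # input image to give the enhanced transmission line image.
    return c_output * attention_map + low_light

low = torch.rand(1, 3, 400, 600)                   # stand-in low-illuminance image
attn = 1.0 - low.max(dim=1, keepdim=True).values   # assumed attention map: darker pixels weighted more
c_out = torch.rand(1, 3, 400, 600)                 # stand-in C-network output
enhanced = compose_enhanced(c_out, attn, low)
```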
Further, the first and second combination modules of the B-network are each composed of a first and second branch, wherein:
the first branch is composed of parallel dilated convolution modules;
the second branch is composed of a residual module based on the mixed attention mechanism and a down-sampling module.
Further, two parallel attention modules in the residual error module based on the mixed attention mechanism are both composed of a channel attention module and a pixel attention module.
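A minimal PyTorch sketch of the mixed-attention residual module and its parallel attention modules follows. The 3x3 kernel size, LeakyReLU slope of 0.2 and the internal layer sizes of the attention modules are assumptions, since the patent does not specify them:

```python
import torch
import torch.nn as nn

class ParallelAttention(nn.Module):
    # Channel attention and pixel (spatial) attention applied in parallel,
    # and their re-weighted feature maps summed; layer sizes are assumptions.
    def __init__(self, ch):
        super().__init__()
        self.channel = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                     nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.pixel = nn.Sequential(nn.Conv2d(ch, 1, 1), nn.Sigmoid())
    def forward(self, x):
        return x * self.channel(x) + x * self.pixel(x)

class MixedAttentionResidual(nn.Module):
    # Conv / LeakyReLU / Conv / parallel attention, repeated twice, then a
    # final conv, with the block input added residually (step (3) in the text).
    def __init__(self, ch):
        super().__init__()
        def conv(): return nn.Conv2d(ch, ch, 3, padding=1)
        self.body = nn.Sequential(
            conv(), nn.LeakyReLU(0.2), conv(), ParallelAttention(ch),
            conv(), nn.LeakyReLU(0.2), conv(), ParallelAttention(ch),
            conv())
    def forward(self, x):
        return x + self.body(x)

x = torch.rand(1, 16, 32, 32)
y = MixedAttentionResidual(16)(x)
```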
Further, the adversarial network constructed in step 3) is as follows:
the adversarial network is composed of a global discrimination network and a local discrimination network, wherein:
the global discrimination network is composed of, in sequence, 2 combination modules, a residual dilated convolution module and 2 further combination modules; its input images are the false images and the true images;
the local discrimination network is composed of 6 combination modules; its input images are randomly cropped patches of the false images and the true images.
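The local branch of the dual-discriminator arrangement can be sketched as follows; the channel widths, the stride-2 downsampling inside each combination module and the 64-pixel crop size are assumptions:

```python
import torch
import torch.nn as nn

def combination_module(cin, cout):
    # "combination module": a convolution followed by a LeakyReLU activation
    # (stride-2 downsampling is an assumption, not stated in the patent)
    return nn.Sequential(nn.Conv2d(cin, cout, 4, stride=2, padding=1),
                         nn.LeakyReLU(0.2))

local_d = nn.Sequential(  # six combination modules, as described in the text
    combination_module(3, 16), combination_module(16, 32),
    combination_module(32, 64), combination_module(64, 64),
    combination_module(64, 64), combination_module(64, 64))

def random_crop(img, size=64):
    # random patch handed to the local discrimination network
    _, _, h, w = img.shape
    top = torch.randint(0, h - size + 1, (1,)).item()
    left = torch.randint(0, w - size + 1, (1,)).item()
    return img[:, :, top:top + size, left:left + size]

patch = random_crop(torch.rand(1, 3, 400, 600))
score = local_d(patch)
```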
Further, the combination module and the residual dilated convolution module of the global discrimination network are as follows:
the combination module of the global discrimination network is composed of, in sequence, a convolution and a LeakyReLU activation function;
the main branch of the residual dilated convolution module of the global discrimination network is composed of three dilated convolutions with dilation rates of 2, 3 and 5 respectively.
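A sketch of the residual dilated convolution module with the stated dilation rates 2, 3 and 5; the shape-preserving padding, the form of the residual skip and the channel width are assumptions:

```python
import torch
import torch.nn as nn

class ResidualDilatedConv(nn.Module):
    # Main branch: three dilated convolutions with dilation rates 2, 3, 5.
    # With kernel 3 and padding equal to the dilation rate, spatial size is
    # preserved, allowing the assumed residual addition with the input.
    def __init__(self, ch=64):
        super().__init__()
        self.branch = nn.Sequential(
            *[nn.Conv2d(ch, ch, 3, padding=r, dilation=r) for r in (2, 3, 5)])
    def forward(self, x):
        return x + self.branch(x)

x = torch.rand(1, 64, 50, 75)
y = ResidualDilatedConv(64)(x)
```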
Further, the combination module of the local discrimination network is a network module based on a convolution followed by a LeakyReLU activation function.
Further, the loss function of the generative adversarial network constructed in step 4) is expressed as:

L = α·L_DG + β·L_DL + γ·L_Per + ω·L_Pix

where L_DG and L_DL are the loss functions of the global discrimination network and the local discrimination network of the adversarial network, L_Per and L_Pix are the perceptual loss function and the pixel loss function of the generation network, and α, β, γ and ω are the weights of the corresponding loss terms.
Further, the loss function of the global discrimination network, the loss function of the local discrimination network, the perceptual loss function of the generation network and the pixel loss function of the generation network are as follows:
the loss function of the global discrimination network is expressed as:

L_DG = E_x[log D_G(x)] + E_z[log(1 − D_G(G(z)))]

where D_G is the global discrimination network, G is the generation network, and z and x are the input images of the generation network and the adversarial network respectively;
the loss function of the local discrimination network is expressed as:

L_DL = E_x[log D_L(crop(x))] + E_z[log(1 − D_L(crop(G(z))))]

where D_L is the local discrimination network and crop(·) denotes random cropping;
the perceptual loss function of the generation network is expressed as:

L_Per = (1/(W·H)) Σ_i Σ_j ‖φ(z)_{i,j} − φ(G(z))_{i,j}‖²

where z is the input image of the generation network, W and H are the width and height of the image respectively, and φ(·) is the VGG-19 pre-trained network;
the pixel loss function of the generation network is expressed as:

L_Pix = (1/(W·H)) Σ_i Σ_j ‖G(z)_{i,j} − z_{i,j}‖
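The weighted combination of the four loss terms might be implemented as follows; the non-saturating log form of the adversarial terms, the MSE/L1 choices for the perceptual and pixel terms, and the unit weights are all assumptions:

```python
import torch
import torch.nn.functional as F

def generator_total_loss(d_g_fake, d_l_fake, feat_in, feat_out, out, inp,
                         alpha=1.0, beta=1.0, gamma=1.0, omega=1.0):
    # Weighted sum alpha*L_DG + beta*L_DL + gamma*L_Per + omega*L_Pix.
    adv_g = -torch.log(d_g_fake + 1e-8).mean()  # global adversarial term
    adv_l = -torch.log(d_l_fake + 1e-8).mean()  # local adversarial term
    l_per = F.mse_loss(feat_out, feat_in)       # perceptual (VGG-feature) term
    l_pix = F.l1_loss(out, inp)                 # pixel term
    return alpha * adv_g + beta * adv_l + gamma * l_per + omega * l_pix

loss = generator_total_loss(
    torch.rand(4, 1) * 0.5 + 0.25,   # stand-in global discriminator scores
    torch.rand(4, 1) * 0.5 + 0.25,   # stand-in local discriminator scores
    torch.rand(4, 256), torch.rand(4, 256),            # stand-in VGG features
    torch.rand(4, 3, 32, 32), torch.rand(4, 3, 32, 32))  # stand-in images
```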
further, the model training in the step 5) is specifically as follows:
inputting low-illumination power transmission line images in a training set into a generation network to obtain enhanced images;
secondly, inputting the enhanced image and the normal illumination image in the training set into a countermeasure network, and judging whether the input image is the enhanced image or the original normal illumination image;
and thirdly, optimizing the loss function by using a gradient descent method, continuously updating parameters of the generation network and the loss network, and finally finishing training of the generation network and the confrontation network to obtain the optimal generation network and the confrontation network.
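The three training sub-steps can be sketched with stand-in networks; the tiny single-layer generator and discriminator below are placeholders for illustration only, not the patent's architecture:

```python
import torch
import torch.nn as nn

G = nn.Conv2d(3, 3, 3, padding=1)                 # stand-in generator
D = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1),  # stand-in discriminator
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCELoss()

low = torch.rand(2, 3, 32, 32)     # stand-in low-illuminance batch
normal = torch.rand(2, 3, 32, 32)  # stand-in normal-illuminance batch

fake = G(low)                      # step 1: generator produces enhanced images
# step 2: discriminator learns to separate normal-illuminance from enhanced
d_loss = (bce(D(normal), torch.ones(2, 1)) +
          bce(D(fake.detach()), torch.zeros(2, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()
# step 3: generator updated by gradient descent to fool the discriminator
g_loss = bce(D(fake), torch.ones(2, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```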
Further, the network model performance evaluation in step 6) is specifically as follows:
inputting the low-illuminance images of the paired sample test set constructed in step 1) into the trained generation network obtained in step 5) to obtain enhanced power transmission line images;
computing the structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) between each enhanced image and the corresponding normal-illuminance image of the test set, so as to measure the ability of the network model to enhance power transmission line images under low illuminance;
if the average SSIM or average PSNR is low, adjusting the network model parameters and continuing training; when the average SSIM and average PSNR reach the expected values, retaining the network model weights for enhancing power transmission line images under low illuminance.
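PSNR, one of the two evaluation indexes, can be computed as follows; SSIM is more involved and is typically taken from a library such as scikit-image:

```python
import numpy as np

def psnr(reference, enhanced, peak=255.0):
    # Peak signal-to-noise ratio in dB between the normal-illuminance
    # reference image and the enhanced image; higher means closer.
    diff = reference.astype(np.float64) - enhanced.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((400, 600), dtype=np.uint8)
out = np.full((400, 600), 16, dtype=np.uint8)
value = psnr(ref, out)
```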
The invention provides a method for enhancing power transmission line images under low illuminance based on a generative adversarial network, which can effectively improve the brightness of low-illuminance transmission line images while avoiding overexposure, underexposure and artifacts in the enhanced image, retains more image detail information and improves the quality of the enhanced power transmission line image.
Drawings
Fig. 1 is a flow chart of the method for enhancing power transmission line images under low illuminance based on a generative adversarial network according to the present invention;
FIG. 2 is a block diagram of the generation network and the parallel dilated convolution module of the present invention;
FIG. 3 is a block diagram of the residual module based on the mixed attention mechanism of the present invention;
Fig. 4 is a diagram of the adversarial network architecture of the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
Referring to fig. 1 to 4, embodiment 1 provides a method for enhancing power transmission line images under low illuminance based on a generative adversarial network, comprising the following specific steps:
1) Building a data set
Selecting low-illuminance images and normal-illuminance images of different backgrounds from a power transmission line video monitoring system and processing them to construct an unpaired sample training set and a paired sample test set, as follows:
selecting low-illuminance images and normal-illuminance images with different backgrounds from a power transmission line video monitoring system, naming the low-illuminance images the original low-illuminance group and the normal-illuminance images the normal group;
dividing the images of the normal group into normal group one (70% of the normal group) and normal group two (the remaining 30%), and constructing the unpaired sample training set from the images of the original low-illuminance group and the images of normal group one;
processing the images of normal group two to obtain corresponding low-illuminance images, named the processed low-illuminance group;
constructing the paired sample test set from the images of normal group two and the images of the processed low-illuminance group, thereby obtaining an unpaired sample training set and a paired sample test set.
The images selected from the power transmission line video monitoring system are resized to 600 × 400 pixels.
2) Building a generative network
Constructing a network for enhancing the image of the power transmission line under low illuminance;
3) Constructing an adversarial network
Constructing a network for judging whether an input power transmission line image is a true image or a false image;
defining the image of the power transmission line under the original normal illuminance as a true image;
defining an output image of the generated network as a false image;
4) Constructing the loss function of the generative adversarial network
The loss function is used to measure the performance of the generation network and the adversarial network during training;
5) Network model training
The model training is as follows:
inputting the low-illuminance power transmission line images of the training set into the generation network to obtain enhanced images;
inputting the enhanced images and the normal-illuminance images of the training set into the adversarial network, which judges whether each input image is an enhanced image or an original normal-illuminance image;
optimizing the loss function by gradient descent and continuously updating the parameters of the generation network and the adversarial network, finally completing the training of both networks to obtain the optimal generation network and adversarial network.
6) Network model performance evaluation
Inputting the low-illuminance images of the paired sample test set constructed in step 1) into the trained generation network obtained in step 5) to obtain enhanced power transmission line images;
computing the structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) between each enhanced image and the corresponding normal-illuminance image of the test set, so as to measure the ability of the network model to enhance power transmission line images under low illuminance;
if the average SSIM or average PSNR is low, adjusting the network model parameters and continuing training; when the average SSIM and average PSNR reach the expected values, retaining the network model weights for enhancing power transmission line images under low illuminance.
7) Network model application
And deploying the trained network model on a server, and enhancing the low-illumination power transmission line image returned from the site to obtain an enhanced image.
In the step 2), a generation network is constructed, and the generation network is composed of a network A, a network B and a network C, and specifically comprises the following steps:
the A network is used for preprocessing the input low-illuminance power transmission line image to obtain a brightness attention map, which reduces overexposure or underexposure in the generated image; the brightness attention map obtained by preprocessing is added to the input low-illuminance image of the power transmission line to form the input image of the B network;
the B network is used for extracting the low-frequency information of the input low-illuminance image of the power transmission line, and consists of a convolution module, a LeakyRelu activation function, a first combination module, a second combination module and a residual error module based on a mixed attention mechanism, wherein:
(1) the first combination module and the second combination module are both composed of a first branch and a second branch, and the output characteristic diagrams of the first branch and the second branch of each combination module are spliced to obtain the output characteristic diagram of the combination module;
(2) the residual error module based on the mixed attention mechanism consists of a convolution module, a LeakyRelu activation function, a convolution module, a parallel attention module, a convolution module, a LeakyRelu activation function, a convolution module, another parallel attention module and a convolution module in sequence;
(3) performing element-wise addition of the input feature map of the residual module based on the mixed attention mechanism in (2) and the output feature map of the last convolution module of that residual module to obtain the final output feature map of the residual module based on the mixed attention mechanism;
the C network is mainly used for extracting high-frequency information of an input image, and sequentially comprises a first mixing module, a residual error module based on a mixed attention mechanism, a second mixing module, a residual error module based on the mixed attention mechanism, a convolution module and a Tanh activation function, wherein:
the first mixing module and the second mixing module are both composed of an up-sampling module, a convolution module and a LeakyRelu activation function;
the output of the first mixing module of the C network and the output of the first combination module of the B network are subjected to characteristic fusion through splicing operation, and the output of the second mixing module of the C network and the input of the first combination module of the B network are subjected to characteristic fusion through splicing operation, so that the fusion of high-frequency characteristics and low-frequency characteristics is realized;
multiplying the output of the C network element-wise by the brightness attention map obtained by the A network to obtain an intermediate image;
adding the intermediate image to the low-illuminance image input to the generation network to obtain the enhanced power transmission line image.
The first combination module and the second combination module of the B network are composed of a first branch and a second branch, wherein:
the first branch is composed of parallel dilated convolution modules;
the second branch is composed of a residual module based on the mixed attention mechanism and a down-sampling module.
Two parallel attention modules of the residual error module based on the mixed attention mechanism are composed of a channel attention module and a pixel attention module.
The adversarial network constructed in step 3) is as follows:
the adversarial network is composed of a global discrimination network and a local discrimination network, wherein:
the global discrimination network is composed of, in sequence, 2 combination modules, a residual dilated convolution module and 2 further combination modules; its input images are the false images and the true images;
the local discrimination network is composed of 6 combination modules; its input images are randomly cropped patches of the false images and the true images.
The combination module and the residual dilated convolution module of the global discrimination network are as follows:
the combination module of the global discrimination network is composed of, in sequence, a convolution and a LeakyReLU activation function;
the main branch of the residual dilated convolution module of the global discrimination network is composed of three dilated convolutions with dilation rates of 2, 3 and 5 respectively.
The combination module of the local discrimination network is a network module based on convolution and LeakyReLU activation function.
The loss function for generating the countermeasure network constructed in the step 4) is expressed as:
in the formula (I), the compound is shown in the specification,andrespectively a loss function of the global discrimination network and a loss function of the local discrimination network of the countermeasure network, L Per And L Pix Respectively, the perceptual loss function and the pixel loss function of the generating network, and α, β, γ and ω are the weights of the corresponding loss functions described above, respectively.
The loss function of the global discrimination network, the loss function of the local discrimination network, and the perceptual and pixel loss functions of the generation network are as follows:
the loss function of the global discrimination network is expressed as:

L_DG = E_x[log D_G(x)] + E_z[log(1 − D_G(G(z)))]

where D_G is the global discrimination network, G is the generation network, and z and x are the input images of the generation network and the countermeasure network respectively;
the loss function of the local discrimination network is expressed as:

L_DL = E_x[log D_L(P(x))] + E_z[log(1 − D_L(P(G(z))))]

where D_L is the local discrimination network and P(·) denotes a randomly cropped patch;
the perceptual loss function of the generation network is expressed as:

L_Per = (1/(W·H)) Σ_{i=1}^{W} Σ_{j=1}^{H} ‖φ(x)_{i,j} − φ(G(x))_{i,j}‖²

where x is the input image of the generation network, W and H are respectively the width and height of the image, and φ is the VGG-19 pre-trained network;
the pixel loss function of the generation network is expressed as:

L_Pix = (1/(W·H)) Σ_{i=1}^{W} Σ_{j=1}^{H} ‖G(x)_{i,j} − x_{i,j}‖²
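How the four terms combine into the total objective can be sketched numerically (the weight values, the least-squares stand-ins for the two adversarial terms, and NumPy itself are assumptions; the source gives only the weighted-sum form):

```python
import numpy as np

def perceptual_loss(feat_in, feat_out):
    """Mean squared distance between VGG-style feature maps (W*H normalised)."""
    return np.mean((feat_in - feat_out) ** 2)

def pixel_loss(img_in, img_out):
    """Mean squared distance between images, per pixel."""
    return np.mean((img_in - img_out) ** 2)

def total_generator_loss(d_global, d_local, feat_in, feat_out, img_in, img_out,
                         alpha=1.0, beta=1.0, gamma=1.0, omega=10.0):
    """L = alpha*L_DG + beta*L_DL + gamma*L_Per + omega*L_Pix (weights assumed)."""
    # Least-squares adversarial stand-ins: push discriminator scores toward 1.
    l_dg = np.mean((d_global - 1.0) ** 2)
    l_dl = np.mean((d_local - 1.0) ** 2)
    return (alpha * l_dg + beta * l_dl
            + gamma * perceptual_loss(feat_in, feat_out)
            + omega * pixel_loss(img_in, img_out))

loss = total_generator_loss(
    d_global=np.array([0.8]), d_local=np.array([0.6]),
    feat_in=np.zeros((4, 4)), feat_out=np.zeros((4, 4)),
    img_in=np.zeros((8, 8)), img_out=np.zeros((8, 8)))
assert loss > 0
```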
the convolution modules described in this embodiment are all of the same structure.
The residual error modules based on the mixed attention mechanism are all in the same structure.
The parallel attention modules described in this embodiment are all of the same structure.
The LeakyRelu activation functions described in this embodiment are all of the same structure.
The combined modules described in this embodiment have the same structure.
The present embodiment is implemented using the prior art.
Claims (9)
1. A method for enhancing an image of a power transmission line under low illumination based on a generated countermeasure network is characterized by comprising the following specific steps:
1) Building a data set
Selecting low-illuminance images and normal-illuminance images with different backgrounds from a power transmission line video monitoring system and processing them to construct an unpaired-sample training set and a paired-sample test set;
2) Building a generative network
Constructing a network for enhancing the image of the power transmission line under low illuminance;
3) Constructing a countermeasure network
Constructing a network for judging whether the input image of the power transmission line is a true image or a false image;
defining the image of the power transmission line under the original normal illuminance as a true image;
defining an output image of the generated network as a false image;
4) Constructing a loss function that generates a countermeasure network
Used for measuring the performance of the generation network and of the countermeasure network during the network training process;
5) Network model training
Training the network model to obtain the optimal generation network and the optimal countermeasure network;
6) Network model performance evaluation
Inputting the low-illuminance images in the paired sample test set constructed in step 1) into the trained generation network obtained in step 5) to obtain enhanced power transmission line images, so as to measure the enhancement capability of the network model on power transmission line images under low illuminance;
7) Network model application
And deploying the trained network model on a server, and enhancing the low-illumination power transmission line image returned from the site to obtain an enhanced image.
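Step 7) can be pictured with a minimal server-side sketch (the `enhance` placeholder, the gamma curve standing in for the trained generation network, and the [0, 1] image normalisation are all assumptions; a real deployment would load the trained generator weights):

```python
import numpy as np

def enhance(low_light_img, generator=None):
    """Placeholder for the trained generation network.
    A simple gamma curve stands in so the pipeline is runnable."""
    if generator is not None:
        return generator(low_light_img)
    return np.clip(low_light_img ** 0.5, 0.0, 1.0)  # brighten dark values

def serve(images):
    """Enhance each low-illuminance image returned from the site."""
    return [enhance(img) for img in images]

batch = [np.full((4, 4, 3), 0.04), np.full((4, 4, 3), 0.25)]
enhanced = serve(batch)
assert np.allclose(enhanced[0], 0.2) and np.allclose(enhanced[1], 0.5)
```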
2. The method for enhancing the image of the power transmission line under low illumination based on the generated countermeasure network as claimed in claim 1, wherein in the step 2) of constructing the generation network, the generation network is composed of a network A, a network B and a network C, specifically as follows:
the A network is used for preprocessing the input low-illuminance power transmission line image and computes a brightness attention map, which reduces overexposure or underexposure in the generated image; the brightness attention map obtained by preprocessing is added to the input low-illuminance power transmission line image to form the input image of the B network;
the B network is used for extracting the low-frequency information of the input low-illuminance power transmission line image and consists of a convolution module, a LeakyReLU activation function, a first combination module, a second combination module and a residual module based on the mixed attention mechanism, wherein:
(1) the first combination module and the second combination module each consist of a first branch and a second branch, and the output feature maps of the two branches of each combination module are spliced to obtain the output feature map of that combination module;
(2) the residual module based on the mixed attention mechanism consists, in sequence, of a convolution module, a LeakyReLU activation function, a convolution module, a parallel attention module, a convolution module, a LeakyReLU activation function, a convolution module, another parallel attention module and a convolution module;
(3) the input feature map of the residual module based on the mixed attention mechanism in (2) is added element-wise to the output feature map of the last convolution module in that residual module to obtain its final output feature map;
the C network is mainly used for extracting high-frequency information of the input image and consists, in sequence, of a first mixing module, a residual module based on the mixed attention mechanism, a second mixing module, another residual module based on the mixed attention mechanism, a convolution module and a Tanh activation function, wherein:
the first mixing module and the second mixing module each consist of an up-sampling module, a convolution module and a LeakyReLU activation function;
the output of the first mixing module of the C network is feature-fused with the output of the first combination module of the B network through a splicing operation, and the output of the second mixing module of the C network is feature-fused with the input of the first combination module of the B network through a splicing operation, realizing the fusion of high-frequency and low-frequency features;
the output of the C network is multiplied element-wise by the brightness attention map obtained by the A network to obtain an intermediate image;
the intermediate image is added to the low-illuminance image input to the generation network to obtain the enhanced power transmission line image.
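The final composition described in claim 2 — multiply the C-network output by the brightness attention map, then add the low-illuminance input — can be sketched as below (the attention-map formula `1 − V`, with V the per-pixel maximum channel value, is a common choice for brightness attention and an assumption here, as is NumPy in place of the trained networks):

```python
import numpy as np

def brightness_attention_map(low_img):
    """Darker pixels get larger weights, so they receive more enhancement."""
    v = low_img.max(axis=2)          # per-pixel max over RGB, in [0, 1]
    return (1.0 - v)[:, :, None]

def compose(low_img, c_net_out):
    """out = C(x) * A(x) + x, the residual composition described above."""
    return c_net_out * brightness_attention_map(low_img) + low_img

low = np.full((2, 2, 3), 0.2)        # uniformly dark input image
residual = np.full((2, 2, 3), 0.5)   # stand-in for the C-network output
out = compose(low, residual)
assert np.allclose(out, 0.2 + 0.5 * 0.8)
```

The residual form means the generation network only has to learn a correction to the input, with the attention map suppressing the correction in already-bright regions.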
3. The method for enhancing the image of the power transmission line under low illumination based on the generation countermeasure network of claim 2, wherein the first combination module and the second combination module of the B network are both composed of a first branch and a second branch, wherein:
the first branch consists of parallel hole convolution modules;
the second branch consists of a residual module based on the mixed attention mechanism and a down-sampling operation.
4. The method for enhancing image of power transmission line under low illumination based on generation countermeasure network as claimed in claim 2,
the two parallel attention modules in the residual module based on the mixed attention mechanism each consist of a channel attention module and a pixel attention module.
5. The method for enhancing the power transmission line image under low illumination based on the generation of the countermeasure network as claimed in claim 1, wherein the construction of the countermeasure network in step 3) is represented as:
the countermeasure network consists of a global discrimination network and a local discrimination network, wherein:
the global discrimination network consists, in sequence, of 2 combination modules, a residual hole convolution module and 2 further combination modules, and its input images are the false images and the true images;
the local discrimination network consists of 6 combination modules, and its input images are randomly cropped patches of the false and true images.
6. The method for enhancing the image of the power transmission line under the low illuminance as set forth in claim 5, wherein the combination module and the residual hole convolution module of the global discrimination network are:
the combination module of the global discrimination network and the combination module of the local discrimination network each consist, in sequence, of a convolution and a LeakyReLU activation function;
the main branch of the residual hole convolution module consists of three hole convolutions with dilation rates of 2, 3 and 5 respectively.
7. The method for enhancing images of power transmission lines under low illumination based on generation of countermeasure networks as claimed in claim 5, wherein the combination module of the local discriminant network is a network module based on convolution + LeakyReLU activation function.
8. The method for enhancing the image of the power transmission line under low illumination based on the generated countermeasure network as claimed in claim 1, wherein the loss function of the generated countermeasure network in step 4) is expressed as:

L = α·L_DG + β·L_DL + γ·L_Per + ω·L_Pix

where L_DG and L_DL are respectively the loss function of the global discrimination network and the loss function of the local discrimination network of the countermeasure network, L_Per and L_Pix are respectively the perceptual loss function and the pixel loss function of the generation network, and α, β, γ and ω are the weights of the corresponding loss functions.
9. The method for enhancing the image of the power transmission line under low illumination based on the generated countermeasure network as claimed in claim 8, wherein the loss function of the global discrimination network, the loss function of the local discrimination network, and the perceptual and pixel loss functions of the generation network are:
the loss function of the global discrimination network is expressed as:

L_DG = E_x[log D_G(x)] + E_z[log(1 − D_G(G(z)))]

where D_G is the global discrimination network, G is the generation network, and z and x are the input images of the generation network and the countermeasure network respectively;
the loss function of the local discrimination network is expressed as:

L_DL = E_x[log D_L(P(x))] + E_z[log(1 − D_L(P(G(z))))]

where D_L is the local discrimination network and P(·) denotes a randomly cropped patch;
the perceptual loss function of the generation network is expressed as:

L_Per = (1/(W·H)) Σ_{i=1}^{W} Σ_{j=1}^{H} ‖φ(x)_{i,j} − φ(G(x))_{i,j}‖²

where x is the input image of the generation network, W and H are respectively the width and height of the image, and φ is the VGG-19 pre-trained network;
the pixel loss function of the generation network is expressed as:

L_Pix = (1/(W·H)) Σ_{i=1}^{W} Σ_{j=1}^{H} ‖G(x)_{i,j} − x_{i,j}‖²
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211292721.5A CN115601644A (en) | 2022-10-21 | 2022-10-21 | Power transmission line image enhancement method under low illumination based on generation countermeasure network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115601644A true CN115601644A (en) | 2023-01-13 |
Family
ID=84849726
Legal Events

Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||