CN114841890A - Underwater image deblurring method based on generative adversarial network - Google Patents

Underwater image deblurring method based on generative adversarial network

Info

Publication number
CN114841890A
CN114841890A (application CN202210539714.4A)
Authority
CN
China
Prior art keywords
image
network
gradient
underwater
discriminator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210539714.4A
Other languages
Chinese (zh)
Inventor
张冰 (Zhang Bing)
崔博文 (Cui Bowen)
赵强 (Zhao Qiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University of Science and Technology
Original Assignee
Jiangsu University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University of Science and Technology
Priority to CN202210539714.4A
Publication of CN114841890A
Legal status: Pending (Current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/73 - Deblurring; Sharpening
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/08 - Learning methods
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G06T2207/20084 - Artificial neural networks [ANN]


Abstract

The invention discloses an underwater image deblurring method based on a generative adversarial network, which comprises the following steps: acquiring underwater degraded images and clear images as training samples, training the improved DeblurGAN network, and obtaining an optimized model; inputting the underwater degraded image to be processed into the trained optimized model to obtain a clear image; and comparing the obtained clear image with the original degraded image to analyze the accuracy of the optimized model. By strengthening the learning capability of the network, the method further improves the resolution of the generated image, extracts finer detail, and enriches image features while avoiding loss of feature information, so that the deblurred image retains more complete detail. The method effectively removes noise from underwater images, alleviates detail blur, and improves the overall deblurring effect for underwater images.

Description

Underwater image deblurring method based on generative adversarial network
Technical Field
The invention relates to the technical field of deep learning and image processing, and in particular to an underwater image deblurring method based on a generative adversarial network.
Background
Underwater images are an important carrier of ocean information, and the theory and technology of acquiring, transmitting and processing this information are essential for the reasonable development, utilization and protection of ocean resources. However, the complex underwater imaging environment, together with the absorption and scattering of light as it propagates through water, causes color shift, low contrast, blur and uneven illumination in underwater images. Images captured directly on the seabed are therefore often degraded and cannot meet the requirements of practical applications, which seriously hinders further use of underwater images. It is thus necessary to improve the quality and definition of degraded underwater images and to reduce the influence of noise on their further use.
As underwater imaging has become a popular research field, a series of enhancement methods targeted at underwater images have been proposed on the basis of existing image enhancement algorithms. These methods fall broadly into two categories. The first is non-physical-model-based enhancement, which does not rely on a mathematical model of underwater optical imaging and improves visual quality by adjusting the pixel values of the image. The second is physical-model-based enhancement, which models the degradation process of the underwater image and obtains an improved image by estimating the model parameters and inverting the degradation. Although many underwater image enhancement methods exist, most still have certain limitations, and it remains difficult to recover the chroma and definition of the image.
Disclosure of Invention
Purpose of the invention: to solve the problems of noise, detail blur and overall blur in underwater images in the prior art, an underwater image deblurring method based on a generative adversarial network is provided. By strengthening the learning capability of the network, the method further improves the resolution of the generated image, extracts finer detail, and enriches image features while avoiding loss of feature information, so that the deblurred image retains more complete detail. The method effectively removes noise from underwater images, alleviates detail blur, and improves the deblurring effect for underwater images.
Technical scheme: to achieve the above purpose, the invention provides an underwater image deblurring method based on a generative adversarial network, comprising the following steps:
S1: acquiring underwater degraded images and clear images as training samples, training the improved DeblurGAN network, and obtaining an optimized model;
S2: inputting the underwater degraded image to be processed into the trained optimized model to obtain a clear image.
Further, the DeblurGAN network in step S1 comprises a generator network and a discriminator network, and the improvements to the DeblurGAN network are as follows:
modifying the activation function of the generator network in the original DeblurGAN network model: replacing the ReLU activation function of the generator network with SeLU;
on the basis of the original DeblurGAN network, adding a convolutional layer with a 3 × 3 kernel, a normalization layer and an activation layer before and after the nine ResBlock structures of the generator network, i.e. deepening feature extraction by one layer;
adding a gradient penalty term to the loss training of the discriminator network;
introducing the L1 distance between the gradients of the generated image and the clear image into the original DeblurGAN network model as a regularization constraint on the generator loss.
Further, the activation function SeLU is defined as:

SeLU(x) = λ·x, if x > 0; SeLU(x) = λ·α·(e^x − 1), if x ≤ 0 (1)

where x denotes the input, and α and λ denote hyperparameters with λ ≈ 1.0507 and α ≈ 1.6732;
the activation function SeLU preserves computation for inputs less than 0 and thus provides richer features, and the distribution of samples after the activation function is automatically normalized to zero mean and unit variance.
Further, the loss training method for the discriminator network is as follows:
the loss function of the discriminator network is:

L_D = E_{x̃∼p_g}[D(x̃)] − E_{x∼p_r}[D(x)] + λ_gp·E_{x̂∼p_x̂}[(‖∇_x̂ D(x̂)‖₂ − 1)²] (2)

where x̃ ∼ p_g denotes pictures drawn from the set of generated pictures, x ∼ p_r denotes pictures drawn from the set of real clear pictures, x̂ is sampled between real and generated samples for the gradient penalty term, D denotes the probability output after a picture is input into the discriminator, and E denotes averaging the discriminator's judgments over the local blocks of the input image;
the loss training of the discriminator network maximizes the probability that the discriminator judges the reconstructed image to be a real image, and the loss function L of the generator network is the sum of the adversarial loss L_GAN and the weighted perceptual loss L_X:

L = L_GAN + λ·L_X (3)

the adversarial loss function L_GAN of the generator network is:

L_GAN = Σ_{n=1..N} −D_{θD}(G_{θG}(I^B)) (4)

for the perceptual loss function L_X, the reconstructed image and the clear image are input into a pre-trained VGG19 network, their features are extracted and compared, and the perceptual loss is computed between the two feature maps after the activation of a convolutional layer. Perceptual loss emphasizes the recovery of general content, and higher convolutional layers represent more abstract features. The perceptual loss function L_X is:

L_X = (1/(W_{i,j}·H_{i,j})) · Σ_{x=1..W_{i,j}} Σ_{y=1..H_{i,j}} (φ_{i,j}(I^S)_{x,y} − φ_{i,j}(G_{θG}(I^B))_{x,y})² (5)

where φ_{i,j} denotes the feature map obtained after the generated image and the corresponding real clear image are passed through the j-th convolutional layer of the i-th convolution block of the VGG19 network, W_{i,j} and H_{i,j} are the width and height of the feature map, and x, y are the horizontal and vertical coordinates respectively.
Further, the specific method of introducing the L1 distance between the gradients of the generated image and the clear image as a regularization constraint on the generator loss in the original DeblurGAN network model is as follows:
the gradient operator uses the computation of the Sobel operator, and the gradient image is calculated as:

Grad = |dx| + |dy| (6)

where dx denotes the gradient along the horizontal axis, dy denotes the gradient along the vertical axis, and Grad is the gradient map;
the L1 penalty for the gradient image is:
Figure BDA0003649775960000031
in the formula: d s Gradient image representing a sharp picture, d G(B) Representing the gradient image of the generated image, C, H, W represent the number of channels, height and width of the gradient image, respectively.
Further, the process of obtaining a clear image in step S2 is as follows:
a blurred image is input into the improved DeblurGAN network model, a generative adversarial network is constructed with convolutional neural networks trained as the generator and discriminator of DeblurGAN, and a clear image is finally reconstructed through adversarial training.
Further, the discriminator network in step S2 is a Markovian patch discriminator composed of 5 convolutional layers with a gradient penalty mechanism, and the discrimination method is as follows: before an image is input into the discriminator, the whole picture is randomly cropped into several 256 × 256 local blocks, and the gradient penalty is added only at the scale of these local blocks, so that sharpness is judged within a 256 × 256 range; the whole picture is input into the discriminator for convolution, and the average of the judgments over all local blocks is taken as the result for the whole picture; the output of the discriminator is a probability value between 0 and 1, and when the probability is greater than 0.5 the image is judged to be clear, otherwise it is judged to be blurred.
Further, in step S2, the obtained clear image is compared with the original degraded image to analyze the accuracy of the optimized model.
Beneficial effects: compared with the prior art, the invention effectively enhances the processing of underwater images with blurred detail, embodied in the following points:
1. the proposed network model modifies the activation function in the generator; with SeLU as the activation function, more features can be extracted from the image and richer detail can be generated;
2. the existing network structure of the generator is improved by deepening feature extraction by one layer, and the resolution of the generated image is further improved by strengthening the learning capability of the network;
3. the invention introduces the L1 distance between the gradients of the generated image and the clear image as a regularization constraint on the generator loss, which enriches edge and structural information and further improves image quality.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a diagram of a modified generator network architecture of the present invention;
FIG. 3 is a diagram of a discriminator network architecture;
FIG. 4 is a comparison of underwater image processing effects in this embodiment.
Detailed Description
The present invention is further illustrated by the following figures and specific examples, which are to be understood as merely illustrative and not limiting the scope of the invention; after reading this specification, modifications of the invention in various equivalent forms by those skilled in the art fall within the scope defined by the appended claims.
The invention provides an underwater image deblurring method based on a generative adversarial network, which, as shown in FIG. 1, comprises the following steps:
S1: acquiring underwater degraded images and clear images as training samples, training the improved DeblurGAN network, and obtaining an optimized model;
S2: inputting the underwater degraded image to be processed into the trained optimized model to obtain a clear image;
S3: comparing the obtained clear image with the original degraded image to analyze the accuracy of the optimized model.
The DeblurGAN network comprises a generator network and a discriminator network, and the improvements to the DeblurGAN network are as follows:
1) Improvements to the generator network:
as shown in FIG. 2, while keeping the overall structure of the generator network, a convolutional layer with a 3 × 3 kernel, a normalization layer and an activation layer are added before and after the nine ResBlock structures of the generator network, i.e. feature extraction is deepened by one layer.
In this embodiment, to alleviate gradient diffusion during neural network training, the ReLU (Rectified Linear Unit) activation function of the generator network in the original DeblurGAN network model is replaced by SeLU (introduced in Self-Normalizing Neural Networks), which reduces network training time and improves the model's recognition rate. The SeLU activation function preserves computation for inputs less than 0 and thus provides richer features, and the distribution of samples after the activation function is automatically normalized to zero mean and unit variance. Its formula is given in (1):

SeLU(x) = λ·x, if x > 0; SeLU(x) = λ·α·(e^x − 1), if x ≤ 0 (1)

where x denotes the input, and α and λ denote hyperparameters with λ ≈ 1.0507 and α ≈ 1.6732. Since SeLU preserves computation for inputs less than 0, it prolongs forward and backward propagation and further increases the difficulty of model optimization; therefore, to obtain richer features without increasing the computation too much, the invention uses the SeLU activation function only in the generator network.
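As a concrete illustration, the SeLU activation above can be sketched in a few lines of Python; this is only a scalar sketch (the constants are the standard self-normalizing values quoted above, and the function name is illustrative):

```python
import math

# SeLU constants: lambda ~ 1.0507, alpha ~ 1.6732 (self-normalizing values).
SELU_LAMBDA = 1.0507
SELU_ALPHA = 1.6732

def selu(x: float) -> float:
    """Scaled exponential linear unit: unlike ReLU, inputs below 0
    still produce a bounded, negative response."""
    if x > 0:
        return SELU_LAMBDA * x
    return SELU_LAMBDA * SELU_ALPHA * (math.exp(x) - 1.0)
```

Note that for large negative inputs selu saturates near −λ·α ≈ −1.758 rather than clamping to 0, which is what lets activations stay close to zero mean and unit variance across layers.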
In this embodiment, taking the generation of 256 × 256 pictures as an example, a generator network model based on a deep residual generator is constructed as shown in FIG. 2. For the network structure of the generator G, DeblurGAN uses a deep residual network (ResNet) structure. Because of the residual (skip) connections, a ResNet expresses its output as the linear superposition of the input and a nonlinear transformation of the input, which mitigates the gradient dispersion problem and allows deeper features to be extracted. ResNet not only greatly increases the number of layers of the neural network but also alleviates vanishing and exploding gradients in deep network training to a certain extent, so the capacity of the model can be greatly increased, finally yielding a better image generation effect. As seen in the architecture in the figure, the head of the DeblurGAN generator network extracts features with a 7 × 7 convolution and then extracts residual features using 9 residual units. Each ResBlock consists of a convolutional layer, an instance normalization layer and a SeLU activation; a convolutional layer with a 3 × 3 kernel, a normalization layer and an activation layer are added before and after the nine ResBlock structures, deepening feature extraction by one layer. The discriminator D of DeblurGAN still uses the Patch-GAN from pix2pix.
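The layer ordering described above can be summarized as a simple layer-name list. This is only a structural sketch of the modified generator under the description in this embodiment; the layer names and the function name are illustrative, not identifiers from the patent:

```python
def modified_generator_spec(n_resblocks: int = 9) -> list:
    """Layer order of the improved generator: 7x7 conv head, two
    downsampling steps, an added conv3x3 + InstanceNorm + SeLU block,
    nine ResBlocks, the mirrored added block, two upsampling steps and
    a 7x7 conv tail (the global input skip-add happens outside)."""
    extra = ["conv3x3", "instance_norm", "selu"]  # the deepened layer
    return (
        ["conv7x7", "downsample", "downsample"]
        + extra
        + [f"resblock_{i + 1}" for i in range(n_resblocks)]
        + extra
        + ["upsample", "upsample", "conv7x7"]
    )
```

The sketch makes the symmetry explicit: the added conv3x3/norm/activation block appears once before and once after the residual stack.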
2) Improvements to the discriminator network:
a gradient penalty term is added to the loss training of the discriminator network:
the architecture of the discriminator network is the same as PatchGAN. All convolutional layers except the last are followed by an InstanceNorm layer, and the slope parameter α of LeakyReLU is 0.2. The structure of the discriminator is shown in FIG. 3; it judges whether the input image is clear, and the invention uses a Markovian patch discriminator composed of 5 convolutional layers with a gradient penalty mechanism. Before an image is input into the discriminator, the whole picture is randomly cropped into several 256 × 256 local blocks, and the gradient penalty is added only at the scale of these local blocks, so that sharpness is judged within a 256 × 256 range. The whole picture is input into the discriminator for convolution, and the average of the judgments over all local blocks is taken as the result for the whole picture. The output of the final discriminator is a probability value between 0 and 1; when the probability is greater than 0.5 the image is judged to be clear, otherwise it is judged to be blurred. The idea of local blocks greatly reduces the number of parameters and makes the method suitable for pictures of any size.
In DeblurGAN, the loss function is composed of two parts, the adversarial loss and the content loss; the overall formula is

L = L_GAN + λ·L_X (3)

where L_GAN is the adversarial loss, L_X is the content loss, and λ = 100.
Adversarial loss
Training the original generative adversarial network is prone to problems such as vanishing gradients and mode collapse, making training very tricky. Using WGAN-GP enables stable training on a variety of generative adversarial network architectures with little parameter tuning. DeblurGAN adopts the WGAN-GP formulation, and the adversarial loss is calculated as follows:

L_GAN = Σ_{n=1..N} −D_{θD}(G_{θG}(I^B)) (4)
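The discriminator side of WGAN-GP, including the gradient penalty term described above, can be illustrated numerically. The sketch below is an assumption-laden toy: `d_grad_norm` supplies the discriminator's input-gradient norm in closed form, since full automatic differentiation is outside the scope of this illustration:

```python
import random

def wgan_gp_discriminator_loss(d, d_grad_norm, fakes, reals, lam=10.0):
    """Critic loss: E[D(fake)] - E[D(real)] plus lam times the mean
    squared deviation of the gradient norm from 1, evaluated at points
    x_hat interpolated between paired real and fake samples."""
    e_fake = sum(d(x) for x in fakes) / len(fakes)
    e_real = sum(d(x) for x in reals) / len(reals)
    penalty = 0.0
    for f, r in zip(fakes, reals):
        eps = random.random()  # random interpolation coefficient
        x_hat = [eps * fi + (1.0 - eps) * ri for fi, ri in zip(f, r)]
        penalty += (d_grad_norm(x_hat) - 1.0) ** 2
    return e_fake - e_real + lam * penalty / len(fakes)
```

For a linear discriminator D(x) = w·x the gradient is w everywhere, so with ‖w‖ = 1 the penalty vanishes and the loss reduces to the Wasserstein estimate E[D(fake)] − E[D(real)].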
Content loss
The content loss evaluates the gap between the generated clear image and the real clear image. Two common choices are the L1 (MAE) mean absolute error loss and the L2 (MSE) mean squared error loss. The "perceptual loss" used in DeblurGAN is essentially an L2 (MSE) loss computed on feature maps. The content loss is calculated as follows:

L_X = (1/(W_{i,j}·H_{i,j})) · Σ_{x=1..W_{i,j}} Σ_{y=1..H_{i,j}} (φ_{i,j}(I^S)_{x,y} − φ_{i,j}(G_{θG}(I^B))_{x,y})² (5)

where φ_{i,j} refers to the features output by the j-th convolutional layer before the i-th max-pooling layer of the pre-trained network, and W_{i,j} and H_{i,j} are the dimensions of the feature map.
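Given two feature maps already extracted from the pre-trained network, the core of the content loss above is a normalized squared difference. The sketch below takes the feature maps as plain 2-D lists; actually extracting φ_{i,j} from a VGG19 is outside this illustration:

```python
def perceptual_loss(feat_sharp, feat_generated):
    """Mean squared difference between the feature map of the real clear
    image and that of the generated image, normalized by the feature
    map area W * H."""
    h, w = len(feat_sharp), len(feat_sharp[0])
    total = 0.0
    for y in range(h):
        for x in range(w):
            total += (feat_sharp[y][x] - feat_generated[y][x]) ** 2
    return total / (w * h)
```

Identical feature maps give a loss of 0, and the normalization makes losses comparable across the differently sized feature maps of different layers.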
3) Introducing the L1 distance between the gradients of the generated image and the clear image into the original DeblurGAN network model as a regularization constraint on the generator loss:
the content loss of the original DeblurGAN network model provides no edge information, so the reconstructed picture is not sharp enough at edges and retains blur. To this end, the invention introduces the L1 distance between the gradients of the generated image and the clear image as a regularization constraint on the generator loss, which enriches edge and structural information. The gradient operator uses the computation of the Sobel operator, and the gradient image is calculated as:
Grad=|dx|+|dy| (6)
in the formula, dx represents the gradient in the horizontal axis direction, dy represents the gradient in the vertical axis direction, and Grad is a gradient map.
The L1 loss on the gradient images is:

L_grad = (1/(C·H·W)) · ‖d_S − d_{G(B)}‖₁ (7)

where d_S denotes the gradient image of the clear picture, d_{G(B)} denotes the gradient image of the generated image, and C, H and W denote the number of channels, the height and the width of the gradient image respectively.
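The gradient map Grad = |dx| + |dy| and the L1 loss on gradient images can be sketched on a single-channel image. The Sobel kernels below are the standard 3 × 3 ones; for simplicity only the valid (unpadded) region is computed, so this is a sketch rather than a full multi-channel implementation:

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient_map(img):
    """Grad = |dx| + |dy| with Sobel responses dx, dy (valid region only)."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            dx = sum(SOBEL_X[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            dy = sum(SOBEL_Y[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            row.append(abs(dx) + abs(dy))
        out.append(row)
    return out

def gradient_l1_loss(grad_sharp, grad_generated):
    """Single-channel case of the L1 gradient loss: mean absolute
    difference between the two gradient maps."""
    h, w = len(grad_sharp), len(grad_sharp[0])
    return sum(abs(grad_sharp[y][x] - grad_generated[y][x])
               for y in range(h) for x in range(w)) / (h * w)
```

A flat region yields a zero gradient map, so only edge structure contributes to the loss, which is exactly why this term sharpens edges.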
Based on the above scheme, to verify the actual effect of the method, the original underwater image on the left of FIG. 4 is input into the improved optimized model provided by the invention, yielding the enhanced image on the right of FIG. 4. The specific process is as follows: the image to be processed is input into the trained generator network; it first undergoes one convolution operation, then two downsampling steps, then nine ResBlock operations with the additional deepened layer of feature extraction, and finally two upsampling steps; after a final convolution and activation, the result is added to the input blurred image and taken as the output, giving the enhanced image.
The two images in FIG. 4 are compared as follows:
1. To illustrate the enhancement effect of the proposed underwater image enhancement method, the underwater image quality measure (UIQM) proposed by Karen Panetta et al. is used in this embodiment to score the objective quality of the original image, giving a UIQM score of 5.1110; the enhanced image obtained in this embodiment scores 6.8613. The UIQM score of the enhanced image is significantly higher than that of the original image.
2. FIG. 4 is a subjective quality comparison of the original underwater image and the enhanced image; the left image is the original degraded underwater image and the right image is the enhanced image. Subjectively, the enhanced image has better color and definition than the original, and its quality is greatly improved compared with the original degraded image, showing that the proposed enhancement method has a good overall restoration effect.
Therefore, the underwater image deblurring method provided by the invention achieves a good deblurring effect and a good underwater image enhancement effect.

Claims (8)

1. An underwater image deblurring method based on a generative adversarial network, characterized by comprising the following steps:
S1: acquiring underwater degraded images and clear images as training samples, training the improved DeblurGAN network, and obtaining an optimized model;
S2: inputting the underwater degraded image to be processed into the trained optimized model to obtain a clear image.
2. The underwater image deblurring method based on a generative adversarial network of claim 1, characterized in that the DeblurGAN network in step S1 comprises a generator network and a discriminator network, and the improvements to the DeblurGAN network are as follows:
modifying the activation function of the generator network in the original DeblurGAN network model: replacing the ReLU activation function of the generator network with SeLU;
on the basis of the original DeblurGAN network, adding a convolutional layer with a 3 × 3 kernel, a normalization layer and an activation layer before and after the nine ResBlock structures of the generator network, i.e. deepening feature extraction by one layer;
adding a gradient penalty term to the loss training of the discriminator network;
introducing the L1 distance between the gradients of the generated image and the clear image into the original DeblurGAN network model as a regularization constraint on the generator loss.
3. The underwater image deblurring method based on a generative adversarial network of claim 2, characterized in that the activation function SeLU is defined as:

SeLU(x) = λ·x, if x > 0; SeLU(x) = λ·α·(e^x − 1), if x ≤ 0 (1)

where x denotes the input, and α and λ denote hyperparameters;
the activation function SeLU preserves computation for inputs less than 0, and the distribution of samples after the activation function is automatically normalized to zero mean and unit variance.
4. The underwater image deblurring method based on a generative adversarial network of claim 2, characterized in that the loss training method of the discriminator network is as follows:
the loss function of the discriminator network is:

L_D = E_{x̃∼p_g}[D(x̃)] − E_{x∼p_r}[D(x)] + λ_gp·E_{x̂∼p_x̂}[(‖∇_x̂ D(x̂)‖₂ − 1)²] (2)

where x̃ ∼ p_g denotes pictures drawn from the set of generated pictures, x ∼ p_r denotes pictures drawn from the set of real clear pictures, x̂ is sampled between real and generated samples for the gradient penalty term, D denotes the probability output after a picture is input into the discriminator, and E denotes averaging the discriminator's judgments over the local blocks of the input image;
the loss training of the discriminator network maximizes the probability that the discriminator judges the reconstructed image to be a real image, and the loss function L of the generator network is the sum of the adversarial loss L_GAN and the weighted perceptual loss L_X:

L = L_GAN + λ·L_X (3)

the adversarial loss function L_GAN of the generator network is:

L_GAN = Σ_{n=1..N} −D_{θD}(G_{θG}(I^B)) (4)

for the perceptual loss function L_X, the reconstructed image and the clear image are input into a pre-trained VGG19 network, their features are extracted and compared, and the perceptual loss L_X is computed between the two feature maps after the activation of a convolutional layer; the perceptual loss function L_X is:

L_X = (1/(W_{i,j}·H_{i,j})) · Σ_{x=1..W_{i,j}} Σ_{y=1..H_{i,j}} (φ_{i,j}(I^S)_{x,y} − φ_{i,j}(G_{θG}(I^B))_{x,y})² (5)

where φ_{i,j} denotes the feature map obtained after the generated image and the corresponding real clear image are passed through the j-th convolutional layer of the i-th convolution block of the VGG19 network, W_{i,j} and H_{i,j} are the width and height of the feature map, and x, y are the horizontal and vertical coordinates respectively.
5. The underwater image deblurring method based on a generative adversarial network of claim 2, characterized in that the specific method of introducing the L1 distance between the gradients of the generated image and the clear image as a regularization constraint on the generator loss in the original DeblurGAN network model is as follows:
the gradient operator uses the computation of the Sobel operator, and the gradient image is calculated as:

Grad = |dx| + |dy| (6)

where dx denotes the gradient along the horizontal axis, dy denotes the gradient along the vertical axis, and Grad is the gradient map;
the L1 loss on the gradient images is:

L_grad = (1/(C·H·W)) · ‖d_S − d_{G(B)}‖₁ (7)

where d_S denotes the gradient image of the clear picture, d_{G(B)} denotes the gradient image of the generated image, and C, H and W denote the number of channels, the height and the width of the gradient image respectively.
6. The underwater image deblurring method based on the generation countermeasure network of claim 2, wherein the clear image in the step S2 is obtained by:
in the improved DeblurgAN network model, a fuzzy image is input, a generation confrontation network is constructed, a convolutional neural network is trained to be used as a generator and a discrimination network in the DeblurgAN, and finally a clear image is reconstructed in a confrontation mode.
7. The underwater image deblurring method based on the generation countermeasure network of claim 2, wherein the discriminator network in step S2 is a markov slice discriminator composed of 5 convolutional layers with a gradient penalty mechanism, and the discrimination method is as follows: before an image is input into a discriminator, the whole image is randomly cut into a plurality of 256 multiplied by 256 local blocks, the discriminator only adds a gradient penalty on the scale of the local blocks to discriminate whether the image is clear within the 256 multiplied by 256 range, the whole image is input into the discriminator to be convoluted, the average value of the discrimination results of all the local blocks is taken as the discrimination result of the whole image, the output of the discriminator is a probability value between 0 and 1, when the probability value is more than 0.5, the discrimination is clear, otherwise, the discrimination is fuzzy.
8. The underwater image deblurring method based on the generation of the countermeasure network of claim 1, wherein step S2 compares the obtained sharp image with the original degraded image, and analyzes and summarizes the accuracy of the optimized model.
CN202210539714.4A 2022-05-18 2022-05-18 Underwater image deblurring method based on generation countermeasure network Pending CN114841890A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210539714.4A CN114841890A (en) 2022-05-18 2022-05-18 Underwater image deblurring method based on generation countermeasure network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210539714.4A CN114841890A (en) 2022-05-18 2022-05-18 Underwater image deblurring method based on generation countermeasure network

Publications (1)

Publication Number Publication Date
CN114841890A true CN114841890A (en) 2022-08-02

Family

ID=82571156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210539714.4A Pending CN114841890A (en) 2022-05-18 2022-05-18 Underwater image deblurring method based on generation countermeasure network

Country Status (1)

Country Link
CN (1) CN114841890A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117611456A (en) * 2023-12-14 2024-02-27 安徽农业大学 Atmospheric turbulence image restoration method and system based on multiscale generation countermeasure network
CN117495687A (en) * 2023-12-29 2024-02-02 清华大学深圳国际研究生院 Underwater image enhancement method
CN117495687B (en) * 2023-12-29 2024-04-02 清华大学深圳国际研究生院 Underwater image enhancement method

Similar Documents

Publication Publication Date Title
CN112233038B (en) True image denoising method based on multi-scale fusion and edge enhancement
CN112288658B (en) Underwater image enhancement method based on multi-residual joint learning
CN111275637B (en) Attention model-based non-uniform motion blurred image self-adaptive restoration method
CN110458758B (en) Image super-resolution reconstruction method and system and computer storage medium
CN111915530B (en) End-to-end-based haze concentration self-adaptive neural network image defogging method
CN114841890A (en) Underwater image deblurring method based on generation countermeasure network
CN111340716B (en) Image deblurring method for improving double-discrimination countermeasure network model
CN113284051B (en) Face super-resolution method based on frequency decomposition multi-attention machine system
CN112614136B (en) Infrared small target real-time instance segmentation method and device
CN113392711B (en) Smoke semantic segmentation method and system based on high-level semantics and noise suppression
CN111861894A (en) Image motion blur removing method based on generating type countermeasure network
CN109949200B (en) Filter subset selection and CNN-based steganalysis framework construction method
CN113744262B (en) Target segmentation detection method based on GAN and YOLO-v5
CN112149526B (en) Lane line detection method and system based on long-distance information fusion
CN116596792B (en) Inland river foggy scene recovery method, system and equipment for intelligent ship
CN115546060A (en) Reversible underwater image enhancement method
CN110807369B (en) Short video content intelligent classification method based on deep learning and attention mechanism
Han et al. UIEGAN: Adversarial learning-based photorealistic image enhancement for intelligent underwater environment perception
CN115272072A (en) Underwater image super-resolution method based on multi-feature image fusion
CN115035010A (en) Underwater image enhancement method based on convolutional network guided model mapping
Lu et al. Underwater image enhancement method based on denoising diffusion probabilistic model
CN112348762A (en) Single image rain removing method for generating confrontation network based on multi-scale fusion
CN117391920A (en) High-capacity steganography method and system based on RGB channel differential plane
CN113487506B (en) Attention denoising-based countermeasure sample defense method, device and system
CN116452472A (en) Low-illumination image enhancement method based on semantic knowledge guidance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination