CN115456910A - Color recovery method for serious color distortion underwater image - Google Patents

Color recovery method for serious color distortion underwater image

Info

Publication number
CN115456910A
CN115456910A (application CN202211215215.6A)
Authority
CN
China
Prior art keywords
image
underwater
network
underwater image
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211215215.6A
Other languages
Chinese (zh)
Inventor
邢文
曲思瑜
严浙平
周佳加
张勋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date
Filing date
Publication date
Application filed by Harbin Engineering University
Priority: CN202211215215.6A
Publication: CN115456910A
Legal status: Pending

Classifications

    • G06T5/92
    • G06T5/80
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G06T2207/10024 Color image
    • G06T2207/20076 Probabilistic image processing
    • G06T2207/20081 Training; Learning

Abstract

The invention relates to a color recovery method for severely color-distorted underwater images, belonging to the field of underwater image enhancement. Step 1: estimate the intermediate transmission image of the underwater image. Step 2: train a self-supervised encoder-decoder (codec) network with the underwater image as input and the intermediate transmission image as label, and fix the codec network's parameters once training finishes. Step 3: insert the fixed parameters of the self-supervised codec network from step 2 into an underwater image color recovery network for training. The method removes the green cast from severely green-tinted underwater images and recovers colors that match human visual perception; the underwater images used reflect real conditions, and the method is both fast and accurate, giving it practical application significance and value.

Description

Color recovery method for serious color distortion underwater image
Technical Field
The invention belongs to the field of underwater image enhancement, relates to unsupervised learning within underwater image processing and to deep learning technology, and particularly relates to a color recovery method for underwater images with severe color distortion.
Background
An underwater image enhancement method based on unsupervised learning must be able to recover the colors of severely color-distorted underwater images without paired training data, while ensuring that the recovered images match human visual perception. To ensure the accuracy of the recovered colors, an encoder-decoder (codec) neural network is first trained on underwater images and on the intermediate transmission images estimated from them via the red channel prior; its parameters are then fixed, and during color recovery it supplies intermediate-transmission-image attention to the unsupervised network in a non-local manner. To improve the brightness and contrast of the underwater image, a style transfer loss and a degraded structural similarity loss are introduced when training the unsupervised underwater image color recovery network.
Disclosure of Invention
The invention aims to remove the deep learning requirement for paired training data by relying on the strong learning capacity of neural networks, combining intermediate-transmission-image attention, and fusing the style transfer loss and the degraded structural similarity loss, thereby achieving accurate color recovery for underwater images with severe color distortion.
The invention uses the intermediate transmission image of the underwater image to provide attention to the unsupervised network, so that underwater colors can be recovered more accurately. The method recovers color well even from underwater images with a severe green cast, learning an accurate color restoration through a cycle-consistent generative adversarial network (CycleGAN). In this way the dependence of deep learning on paired training data is removed while speed and accuracy are preserved, meeting the needs of tasks such as target recognition that rely on underwater images.
The purpose of the invention is realized by the following steps:
a color restoration method for severely color-distorted underwater images, comprising the steps of:
step 1: estimate the intermediate transmission image of the underwater image;
step 2: train the self-supervised codec network with the underwater image as input and the intermediate transmission image as label, and obtain the fixed parameters of the self-supervised codec network once training finishes;
step 3: insert the fixed parameters of the self-supervised codec network from step 2 into an underwater image color recovery network for training;
the underwater color recovery network includes two generators and four discriminators. Generator G generates a land image from an underwater image; its upper branch is the fixed-parameter self-supervised codec network, and its lower branch is the same codec architecture with three depthwise separable convolution layers added in the middle. Generator F is responsible for generating an underwater image from a land image. The discriminator that judges whether a land image is real, together with its auxiliary discriminator, is named D_Y; the discriminator that judges whether an underwater image is real, together with its auxiliary discriminator, is named D_X. The training process is as follows:
3.1: training a discriminator;
First fix the generator parameters and unfreeze the discriminator parameters. Pass the underwater image through G and the land image through F to generate a fake land image and a fake underwater image, input these to D_Y and D_X respectively to obtain discrimination results, and take the absolute-value loss between each result and the label 'false' to obtain the discriminators' 'false' loss value. Then input the real underwater image and land image directly to D_X and D_Y to obtain discrimination results, and take the absolute-value loss between each result and the label 'true' to obtain the 'true' loss value. Add the 'true' and 'false' loss values to obtain the total discriminator loss, update the discriminator parameters with it, and discriminator training is complete;
3.2: training of the generator:
Fix the discriminator parameters and unfreeze the generator parameters. Input the underwater image and the land image to G and F to generate a fake land image and a fake underwater image, then feed these back through F and G to obtain their reconstructed images. Compute the generator losses from the underwater image, the land image, the fake images, and the reconstructions using the formula below, and update the parameters of both generators to complete the generator training;
[Equation 4 — rendered as an image in the original: the total generator loss, a weighted sum of the adversarial, cycle-consistency, style-transfer, degraded-structural-similarity, and total-variation losses.]
where λ1 = 1, λ2 = 10, λ3 = 9, λ4 = 6e-9, λ5 = 1e-6, and λ6 = 1e-7 are the weight coefficients of the respective loss terms;
the degraded structural similarity loss is given by the following formula:
[Equation 3 — rendered as an image in the original: it keeps only the luminance and contrast terms of the structural similarity index.]
where x and y denote the source-domain underwater image and the target-domain land image respectively, G and F denote the forward and reverse generators, and μ and σ denote the image mean and standard deviation;
3.3: continuously circulating the steps 3.1 and 3.2 until the discriminator cannot distinguish whether the input image is true or false, and finishing the underwater image color recovery network training;
step 4: extract the generator G from the underwater image color recovery network and apply it to recover the colors of underwater images.
Step 1 specifically: the intermediate transmission image of the underwater image is estimated from the following formulas:
D(x) = max_{x∈Ω} I_r(x) − max_{x∈Ω, c∈{g,b}} I_c(x)
t(x) = D(x) + (1 − max_x D(x))
where D(x) is the maximum intensity of the red channel minus the maximum intensity of the blue and green channels, I_c(x) is the pixel value of channel c ∈ {r, g, b}, and Ω is a small image patch, of size 15 in the calculation.
Compared with the prior art, the invention has the beneficial effects that:
the method can remove the green tone in the underwater image with serious green tone, recover the color of the underwater image and accord with the visual sense of people;
the underwater image used by the method of the invention is more in line with the actual situation, and the rapidity and the accuracy can be ensured, so that the method has more practical application significance and value;
the method uses the similarity loss of the intermediate transmission image attention and the degradation structure to ensure that the underwater image subjected to color recovery is more similar to the land image, and provides a clearer and more accurate image for the underwater task needing to depend on the image.
Drawings
FIG. 1 is a schematic diagram of a coding and decoding network structure;
FIG. 2 is a schematic diagram of a generator structure in an underwater image recovery network;
FIG. 3 is a non-local attention schematic;
fig. 4a-b are schematic diagrams of discriminator configurations.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
The method first estimates the intermediate transmission image of the underwater image with the red channel prior; this image carries important information about underwater image attenuation. As light travels through water, red light attenuates fastest because it has the longest wavelength, followed by green and blue light, which is why most underwater images show a green or blue cast. After the underwater image and its intermediate transmission image are obtained, a codec neural network is trained on them. The fixed parameters of this codec network then provide intermediate-transmission-image attention to the unsupervised underwater image color recovery network, and the style transfer loss and degraded structural similarity loss are applied during unsupervised training to improve image brightness and contrast.
Owing to shooting time, location, temperature, water depth, turbidity, and other factors, underwater images exhibit varying degrees of color distortion, low contrast, blurred detail, and insufficient illumination, which limits the applicability of color recovery methods designed from a physical model or from conventional image enhancement. To address the difficulty of obtaining paired training data, an unsupervised cycle-consistent generative adversarial network (CycleGAN) is introduced. CycleGAN was first used for image style transfer tasks: the source-domain image takes on the style of the target-domain image while retaining its own details. Its advantage is that image conversion requires no paired training data, which has brought new approaches to tasks such as image denoising, deraining, dehazing, and underwater image enhancement. A CycleGAN is a loop formed by two generative adversarial networks; by virtue of this cyclic structure, a source-domain image converted to the target domain can still be fully converted back to the source domain.
The intermediate transmission image reflects important information about underwater image attenuation, and most enhancement methods based on the underwater imaging physical model obtain an intermediate transmission image by some means and then enhance the underwater image on that basis. The red channel prior is an underwater image enhancement method inspired by the dark channel prior used for land image dehazing. Because the red channel attenuates faster underwater than the green and blue channels, images shot underwater contain markedly fewer red pixels than green and blue ones, with the red pixels concentrated in a narrow region of the histogram; the attenuation information carried by the intermediate transmission image indicates how the underwater image should be restored.
Self-supervised learning is a deep learning approach that has risen in recent years; it aims to fully mine the information in the data and is widely used in natural language processing, computer vision, and other fields. When abundant training data but few labels are available, self-supervised pre-training can effectively improve a neural network's performance on the real task. Common self-supervised tasks in computer vision include image colorization, jigsaw puzzles, generation of missing image blocks, and various hard-coded pseudo-label tasks. Here, the intermediate transmission image obtained from the red channel prior serves as the pseudo label: the underwater image and its intermediate transmission image supervise the codec network, which at this point performs a simple image generation task, and an L1 loss limits the gap between the codec network's generated transmission image and the red-channel-prior one. To avoid losing underwater image information during computation, the codec network is designed with deep residual blocks and skip connections, which give it fast convergence, small error, and high accuracy. Because adding an Upsampling layer inside a CycleGAN causes instability and the network easily collapses during training, and the generator of the underwater image color recovery network is structurally almost identical to the codec network, transposed convolution layers are used instead to upsample the tensors.
The underwater image color recovery CycleGAN follows the basic CycleGAN structure, comprising two generators and four discriminators, and the generators are structurally almost identical to the codec network. To improve the detail quality of the generated images, an auxiliary discriminator is added for each main discriminator; it has a smaller receptive field than the main discriminator and judges authenticity only on a small patch of the image.
Because underwater images have low brightness and contrast, the style transfer loss and the degraded structural similarity loss assist the training of the underwater image color recovery network. The style transfer loss was first proposed for image style transfer tasks: at the higher levels of the VGG19 network an image retains high-level features such as texture, and reducing the difference between the Gram matrices of these high-level features gives the generated image the style of the target-domain image. The structural similarity index is an important measure of how similar two images are in brightness, contrast, and structure; as a loss it steadily shrinks the difference between two images and converges better than a mean squared error loss. For the underwater recovery task, however, only the brightness and contrast characteristics of the target-domain image are wanted, so a degraded structural similarity loss is proposed that attends only to differences in brightness and contrast.
The method mainly comprises the following steps: estimating an intermediate transmission image of the underwater image, performing self-supervision training of a coding and decoding network, designing a degradation structure similarity loss function, and training an underwater image color recovery network. The method comprises the following specific steps:
1. Intermediate transmission image estimation of underwater images
In order to find the largest difference between the red channel and the blue and green channels, assume the ambient light is known. Since red light attenuates fastest in the underwater environment, the maximum intensity of the blue and green channels is subtracted from the maximum intensity of the red channel, as in Equation 1:
D(x) = max_{x∈Ω} I_r(x) − max_{x∈Ω, c∈{g,b}} I_c(x)   (1)
where I_c(x) is the pixel value of channel c ∈ {r, g, b} and Ω is a small image patch, of size 15 in the calculation.
The intermediate transmission image can then be computed from Equation 2:
t(x) = D(x) + (1 − max_x D(x))   (2)
Because the intermediate transmission image is computed over small patches, it contains halos and block artifacts, so guided filtering is applied to smooth it.
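The patch-wise estimate of Equations 1 and 2 can be sketched in NumPy. This is an illustration only, not the patent's implementation: the function name, the (r, g, b) channel order, and the assumption that pixel values lie in [0, 1] are choices made here, and the guided-filtering step is omitted.

```python
import numpy as np

def estimate_transmission(img, patch=15):
    """Estimate an intermediate transmission map via the red-channel
    difference of Equations 1-2. `img` is an HxWx3 float array in
    [0, 1] with channel order (r, g, b). Illustrative sketch only."""
    h, w, _ = img.shape
    half = patch // 2
    # Pad so every pixel has a full patch neighbourhood.
    padded = np.pad(img, ((half, half), (half, half), (0, 0)), mode="edge")
    D = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            block = padded[i:i + patch, j:j + patch]
            # Max red intensity minus max of green/blue intensities (Eq. 1).
            D[i, j] = block[..., 0].max() - block[..., 1:].max()
    # Shift so the largest difference maps to transmission 1 (Eq. 2).
    t = D + (1.0 - D.max())
    return np.clip(t, 0.0, 1.0)
```

On a uniformly red image the estimate is 1 everywhere, matching the intuition that a strong red channel signals little attenuation.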
2. Self-supervised training of codec networks
After the intermediate transmission image has been obtained from the hard-coded procedure above, it is used as label data for the self-supervised training of the codec network. The codec network is a deep residual structure (see FIG. 1), with skip connections added to preserve as much information as possible and accelerate convergence. Transposed convolutions serve as the upsampling layers, which improves the stability of the network. An L1 loss function narrows the gap between the generated image and the intermediate transmission image.
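Transposed-convolution upsampling, preferred here over an Upsampling layer, can be illustrated with a naive single-channel sketch. The helper below is hypothetical (not the patent's code) and ignores batching, channels, and padding; it only shows the scatter-and-accumulate mechanism that makes the interpolation learnable.

```python
import numpy as np

def transposed_conv2d(x, kernel, stride=2):
    """Naive 2-D transposed convolution for one channel: each input
    pixel scatters a scaled copy of the kernel into the output, and
    overlapping contributions accumulate. Illustrative sketch only."""
    h, w = x.shape
    k = kernel.shape[0]
    out = np.zeros((h * stride + k - stride, w * stride + k - stride))
    for i in range(h):
        for j in range(w):
            out[i*stride:i*stride+k, j*stride:j*stride+k] += x[i, j] * kernel
    return out
```

With a 2×2 kernel and stride 2, each input pixel scatters a non-overlapping 2×2 block and the spatial size doubles; the L1 training loss described above is then simply `np.abs(pred - target).mean()`.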
3. Degraded structural similarity loss function design
The structural similarity loss function measures the difference between two images in brightness, contrast, and structural similarity; its value ranges from 0 to 1, where 1 indicates complete agreement and 0 complete difference. On this basis, the structural similarity term is ignored and only image brightness and contrast are considered, because the aim is for the enhanced underwater image to have the good brightness and contrast of the target-domain image. The degraded structural similarity loss is calculated as in Equation 3:
[Equation 3 — rendered as an image in the original: it keeps only the luminance and contrast terms of the structural similarity index.]
where x and y denote the source-domain underwater image and the target-domain land image respectively, G and F denote the forward and reverse generators, and μ and σ denote the image mean and standard deviation.
4. Underwater image color recovery network training
The generator of the underwater image recovery network is structurally almost the same as the codec network (see FIG. 2), with three depthwise separable convolution layers added in the middle of its lower branch to improve the independent processing of each channel. The upper branch of the generator is the codec network whose parameters were fixed after training on underwater images and intermediate transmission images; it supplies intermediate-transmission-image attention to each layer of the lower branch in a non-local manner (see FIG. 3). The discriminators are fully convolutional (see FIGS. 4a-b); the main discriminator has more convolution layers than the auxiliary discriminator because the auxiliary discriminator's receptive field is smaller, covering only a small patch of the image. A normalization layer is added to the auxiliary discriminator to improve its performance.
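The non-local attention of FIG. 3 can be sketched as follows: every spatial position of the lower-branch feature map attends over all positions of the fixed codec network's feature map. The shapes, the scaling factor, and the residual connection here are assumptions made for illustration; the patent does not specify the exact layer configuration.

```python
import numpy as np

def non_local_attention(query_feat, key_feat):
    """Minimal non-local attention. `query_feat` is a lower-branch
    feature map and `key_feat` a fixed codec-network feature map,
    both shaped (C, H, W). Each query position takes a softmax-
    weighted sum over all key positions. Illustrative sketch only."""
    c, h, w = query_feat.shape
    q = query_feat.reshape(c, -1).T            # (HW, C) queries
    k = key_feat.reshape(c, -1)                # (C, HW) keys
    v = key_feat.reshape(c, -1).T              # (HW, C) values
    logits = q @ k / np.sqrt(c)                # (HW, HW) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)    # softmax over key positions
    out = (attn @ v).T.reshape(c, h, w)        # weighted sum of values
    return query_feat + out                    # residual connection
```

When the key feature map is constant, every attention row sums to 1 and the block reduces to adding that constant to the query map, which is a quick sanity check of the softmax normalization.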
The loss function of the whole underwater image color recovery network comprises the adversarial loss, cycle-consistency loss, style transfer loss, degraded structural similarity loss, and total variation loss, as in Equation 4:
[Equation 4 — rendered as an image in the original: the weighted sum of the above loss terms.]
where λ1 = 1, λ2 = 10, λ3 = 9, λ4 = 6e-9, λ5 = 1e-6, and λ6 = 1e-7 are the weight coefficients of the respective loss terms.
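As an illustration of how the weighted sum and the total variation term behave: the mapping of the six λ coefficients onto the five named losses is not fully recoverable from the text (Equation 4 is an image), so this sketch weights five named terms with an assumed assignment of λ1 through λ5.

```python
import numpy as np

# Assumed assignment of weights to loss terms -- not confirmed by the
# patent text, which lists six lambdas for five named losses.
LAMBDAS = dict(adv=1.0, cyc=10.0, style=9.0, dssim=6e-9, tv=1e-6)

def total_variation(img):
    """Anisotropic total-variation loss: sums absolute differences of
    neighbouring pixels, penalising noise in the generated image."""
    dh = np.abs(np.diff(img, axis=0)).sum()
    dw = np.abs(np.diff(img, axis=1)).sum()
    return dh + dw

def total_loss(losses):
    """Weighted sum of the component losses, in the spirit of Eq. 4.
    `losses` maps term names to scalar loss values."""
    return sum(LAMBDAS[name] * value for name, value in losses.items())
```

A constant image has zero total variation, and a zeroed loss dictionary gives a zero total, which makes the weighting easy to check in isolation.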
Before training the whole network, two datasets are prepared: an underwater image dataset and a land image dataset. Intermediate transmission images are generated for the underwater dataset by the method described in step 1. The underwater images and intermediate transmission images then serve as input and labels for the self-supervised training of step 2; after the self-supervised codec network is trained, its parameters are fixed and inserted into the designed underwater image color recovery network. The underwater images and land images then serve as input and labels for the color recovery network training of step 4. The network's generator parameters and discriminator parameters are updated in turn. For ease of description, in the underwater image color recovery network the generator that produces a land image from an underwater image is named G, the generator that produces an underwater image from a land image is named F, the discriminator that judges whether a land image is real (together with its auxiliary discriminator) is named D_Y, and the discriminator that judges whether an underwater image is real (together with its auxiliary discriminator) is named D_X.
The whole training process is as follows. First fix the generator parameters and unfreeze the discriminator parameters: pass the underwater image through G and the land image through F to generate a fake land image and a fake underwater image, input these to D_Y and D_X respectively to obtain discrimination results, and take the absolute-value loss between each result and the label 'false' to obtain the discriminators' 'false' loss value; then input the real underwater and land images directly to D_X and D_Y and take the absolute-value loss against the label 'true' to obtain the 'true' loss value. The sum of the 'true' and 'false' loss values is the total discriminator loss, which is used to update the discriminator parameters; at this point discriminator training is complete. Next fix the discriminator parameters and unfreeze the generator parameters: input the underwater and land images to G and F to generate the fake images, feed these back through F and G to obtain their reconstructions, compute the generator losses from the underwater image, land image, fake images, and reconstructions using Equation 4, and update the parameters of both generators; this completes generator training. These steps repeat until the discriminator's accuracy is about 50%, i.e., it can no longer tell whether an input image is real or fake, and the underwater image color recovery network training is finished.
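The absolute-value 'true'/'false' losses used in this alternating loop can be sketched as below; `d_real` and `d_fake` stand for the discriminator's outputs on real and generated images (hypothetical names, with outputs assumed to be patch scores near 0 or 1).

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Absolute-value adversarial loss of step 3.1: real outputs are
    pulled toward the label 1 ('true'), fake outputs toward the
    label 0 ('false'). Illustrative sketch only."""
    true_loss = np.abs(d_real - 1.0).mean()
    false_loss = np.abs(d_fake - 0.0).mean()
    return true_loss + false_loss

def generator_adv_loss(d_fake):
    """When the generators are trained, they want the discriminator
    to label their outputs 'true'."""
    return np.abs(d_fake - 1.0).mean()
```

A perfect discriminator (outputs 1 on real, 0 on fake) incurs zero loss, while one that answers 0.5 everywhere, i.e. roughly 50% accuracy as in the stopping criterion above, incurs the maximal uninformative loss.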
Finally, the generator G is extracted from the network and applied to recover the colors of underwater images.

Claims (2)

1. A color restoration method for a severely color-distorted underwater image, comprising the steps of:
step 1: intermediate transmission image estimation of the underwater image;
step 2: train the self-supervised codec network with the underwater image as input and the intermediate transmission image as label, and obtain the fixed parameters of the self-supervised codec network once training finishes;
step 3: insert the fixed parameters of the self-supervised codec network from step 2 into an underwater image color recovery network for training;
the underwater color recovery network comprises two generators and four discriminators: generator G generates a land image from an underwater image, its upper branch being the fixed-parameter self-supervised codec network and its lower branch being the same codec architecture with three depthwise separable convolution layers added in the middle; generator F is responsible for generating an underwater image from a land image; the discriminator that judges whether a land image is real, together with its auxiliary discriminator, is named D_Y, and the discriminator that judges whether an underwater image is real, together with its auxiliary discriminator, is named D_X; the training process is as follows:
step 3.1: training a discriminator;
first fix the generator parameters and unfreeze the discriminator parameters; pass the underwater image through G and the land image through F to generate a fake land image and a fake underwater image, input these to D_Y and D_X respectively to obtain discrimination results, and take the absolute-value loss between each result and the label 'false' to obtain the discriminators' 'false' loss value; then input the real underwater image and land image directly to D_X and D_Y to obtain discrimination results, and take the absolute-value loss between each result and the label 'true' to obtain the 'true' loss value; add the 'true' and 'false' loss values to obtain the total discriminator loss, update the discriminator parameters with it, and discriminator training is complete;
step 3.2: training of the generator:
fix the discriminator parameters and unfreeze the generator parameters; input the underwater image and the land image to G and F to generate a fake land image and a fake underwater image, then feed these back through F and G to obtain their reconstructed images; compute the generator losses from the underwater image, the land image, the fake images, and the reconstructions using the formula below, and update the parameters of both generators to complete the generator training;
[Equation — rendered as an image in the original: the total generator loss, a weighted sum of the adversarial, cycle-consistency, style-transfer, degraded-structural-similarity, and total-variation losses.]
where λ1 = 1, λ2 = 10, λ3 = 9, λ4 = 6e-9, λ5 = 1e-6, and λ6 = 1e-7 are the weight coefficients of the respective loss terms;
the degraded structural similarity loss is given by the following formula:
[Equation — rendered as an image in the original: it keeps only the luminance and contrast terms of the structural similarity index.]
where x and y denote the source-domain underwater image and the target-domain land image respectively, G and F denote the forward and reverse generators, and μ and σ denote the image mean and standard deviation;
step 3.3: continuously circulating the steps 3.1 and 3.2 until the discriminator cannot distinguish whether the input image is true or false, and finishing the underwater image color recovery network training;
and 4, step 4: and extracting the generator G in the underwater image color recovery network, and applying the generator G to the recovery of the underwater image color.
2. The color recovery method for a severely color-distorted underwater image according to claim 1, wherein in step 1 the intermediate transmission image of the underwater image is estimated from the following formulas:
D(x) = max_{x∈Ω} I_r(x) − max_{x∈Ω, c∈{g,b}} I_c(x)
t(x) = D(x) + (1 − max_x D(x))
where D(x) is the maximum intensity of the red channel minus the maximum intensity of the blue and green channels, I_c(x) is the pixel value of channel c ∈ {r, g, b}, and Ω is a small image patch, of size 15 in the calculation.
CN202211215215.6A 2022-09-30 2022-09-30 Color recovery method for serious color distortion underwater image Pending CN115456910A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211215215.6A CN115456910A (en) 2022-09-30 2022-09-30 Color recovery method for serious color distortion underwater image


Publications (1)

Publication Number Publication Date
CN115456910A true CN115456910A (en) 2022-12-09

Family

ID=84308367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211215215.6A Pending CN115456910A (en) 2022-09-30 2022-09-30 Color recovery method for serious color distortion underwater image

Country Status (1)

Country Link
CN (1) CN115456910A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116152116A (en) * 2023-04-04 2023-05-23 青岛哈尔滨工程大学创新发展中心 Underwater image enhancement method based on visual self-attention model



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination