CN113920021A - Underwater image enhancement method based on a two-step residual network

Info

Publication number: CN113920021A
Application number: CN202111138929.7A
Authority: CN (China)
Other languages: Chinese (zh)
Legal status: Pending (application)
Priority / filing date: 2021-09-27
Prior art keywords: image, network, underwater, residual, color cast
Inventors: 黄梦醒, 叶金金, 毋媛媛, 冯思玲, 吴迪, 冯文龙, 张雨, 孟昶含, 陈扬
Current and original assignee: Hainan University
Application filed by Hainan University; priority to CN202111138929.7A

Classifications

    • G06T 5/92: Dynamic range modification of images based on global image properties
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/73: Deblurring; sharpening
    • G06F 18/214: Pattern recognition; generating training patterns, e.g. bagging or boosting
    • G06N 3/045: Neural network architectures; combinations of networks
    • G06N 3/08: Neural network learning methods
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20221: Image fusion; image merging


Abstract

The invention provides an underwater image enhancement method based on a two-step residual network. A data set containing only color cast is synthesized from an underwater image imaging model and combined with a foggy-day data set, and the two data sets are trained on separate residual networks, so that one network acquires the ability to remove color cast and the other the ability to improve contrast, finally achieving the effect of enhancing underwater images. By using a two-step network, each network is more specialized and can better extract a single kind of feature, which reduces the number of training samples required, improves the efficiency of the network, and enhances underwater images more efficiently.

Description

Underwater image enhancement method based on a two-step residual network
Technical Field
The invention relates to the technical field of image fusion, and in particular to an underwater image enhancement method based on a two-step residual network.
Background
Traditional image enhancement methods, such as those based on physical models or on histogram equalization, are not task-specific, and their efficiency in improving image clarity is low. Under natural conditions, underwater images are adversely affected by the absorption and scattering of light by particles in the water, including micro-floating plants (phytoplankton), colored dissolved organic matter, and non-algal particles. In addition, light propagating under water is selectively attenuated according to its wavelength. This severe degradation makes it difficult to restore the appearance and color of underwater images, so color and clarity are extremely important for underwater vision tasks and research.
Disclosure of Invention
Accordingly, the present invention is directed to an underwater image enhancement method based on a two-step residual network, so as to solve at least the problems described above.
The technical solution adopted by the invention is as follows:
An underwater image enhancement method based on a two-step residual network, the method comprising the following steps:
S1: adjusting the medium attenuation coefficients of the two water types, Type 1 and Type I, and simulating, from the underwater image imaging model with the adjusted coefficients, color-cast-only underwater images at water depths of 0.5 to 5 meters for both types, thereby obtaining a color-cast-only data set and a foggy-day data set;
S2: assigning the color-cast-only data set to the upper-layer network and the foggy-day data set to the lower-layer network;
S3: training the upper-layer network and the lower-layer network separately, each as a residual network, to obtain a color-cast-removed image and a fog-removed image;
S4: combining the two residual networks to remove both color cast and fog, obtaining a clear image free of color cast and fog.
Further, in step S1, Type 1 is the blue type, corresponding to offshore waters, and Type I is the green type, corresponding to open-sea waters.
Further, in step S1, when the color-cast-only underwater images are simulated for the two water types, Type 1 and Type I, the images at water depths of 0.5 to 5 meters are generated with an underwater image degradation model given by:

Uλ(x) = Iλ(x)·Tλ(x) + Bλ·(1 − Tλ(x))

Tλ(x) = Eλ(x, d(x)) / Eλ(x, 0) = e^(−βλ·d(x))

Tλ(x) = Nλ^d(x)

where Uλ(x) is the underwater image, Iλ(x) is the scene radiance, i.e. the clear underwater image, Bλ is the uniform global background light, λ is the wavelength of the red, green or blue channel, and Tλ(x) is a function of the wavelength λ and the distance d(x) from scene point x to the camera. βλ is the wavelength-dependent medium attenuation coefficient; the energy emitted from x before and after the beam passes through a transmission medium of length d(x) is Eλ(x, 0) and Eλ(x, d(x)), respectively; and the normalized residual energy ratio Nλ is the ratio of the energy remaining after one unit of distance to the initial energy. Uniform global atmospheric light is randomly generated with 0.7 < Bλ < 0.8, and the water depth D(x) is set to 0.5, 1.5, 2.5, 3.5 and 5 meters; based on a random Bλ and D(x) for each underwater image, 10 color-cast-only underwater images at different water depths are generated across Type 1 and Type I.
Further, in step S3, when the color-cast-removed image is obtained, the upper-layer network is deepened by stacked residual blocks that continuously extract channel features of the upper-layer image, together with skip connections.
Further, the calculation formula for obtaining the color-cast-removed image is as follows:

I1 = r(c(Iin); θ1)
B1 = b(c(r(b(c(I1); θ(1,1))); θ(1,2))) + I1
B2 = b(c(r(b(c(B1); θ(2,1))); θ(2,2))) + B1
……
B10 = b(c(r(b(c(B9); θ(10,1))); θ(10,2))) + B9
Iout1 = r(c(B10); θout)

where Iin and Iout1 are the input and output of the upper-layer color-cast-removal network, I1 is the output of Iin after a convolution-ReLU layer, θ1 is the associated set of weights and biases, B1 to B10 are the residual blocks stacked in the network, θ(l,n) denotes the weights and biases of the n-th convolution layer of the l-th residual block, c denotes convolution, r the ReLU function, and b batch normalization.
Further, in step S3, when the fog-removed image is obtained, the color-cast-removed output Iout1 of the upper-layer network is used as the input of the lower-layer network, and Iout is the final output of the lower-layer network; the calculation formula for the fog-removed image is as follows:
Ib1 = r(c(Iout1); θ2)
Bb1 = b(c(b(c(r(b(c(Ib1); θ(1,1))); θ(1,2))); θ(1,3))) + Ib1
Bb2 = b(c(b(c(r(b(c(Bb1); θ(2,1))); θ(2,2))); θ(2,3))) + Bb1
……
Bb10 = b(c(b(c(r(b(c(Bb9); θ(10,1))); θ(10,2))); θ(10,3))) + Bb9
Iout = r(c(Bb10); θbout)

where Ib1 is the output of Iout1 after a convolution-ReLU layer, θ2 is the associated set of weights and biases, and Bb1 to Bb10 are the residual blocks stacked in the residual network.
Further, in step S1, the foggy-day data set is obtained according to a foggy-day imaging model.
Compared with the prior art, the beneficial effects of the invention are as follows:
The invention provides an underwater image enhancement method based on a two-step residual network, in which a color-cast-only data set is obtained from an underwater image imaging model and combined with a foggy-day data set; the two data sets are trained on separate residual networks, so that one network acquires the ability to remove color cast and the other the ability to improve contrast, finally achieving the effect of enhancing the underwater image.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only preferred embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic overall flow chart of an underwater image enhancement method based on a two-step residual network according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a simulated underwater image of an underwater image enhancement method based on a two-step residual network according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a residual network of an underwater image enhancement method based on a two-step residual network according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a residual network stack of an underwater image enhancement method based on a two-step residual network according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the drawings; the illustrated embodiments are provided only to explain the invention, not to limit its scope.
Example one
Referring to fig. 1 to 4, the present invention provides an underwater image enhancement method based on a two-step residual network, the method comprising the following steps:
S1: the medium attenuation coefficients of the two water types, Type 1 and Type I, are adjusted, and color-cast-only underwater images at water depths of 0.5 to 5 meters are simulated for both types from the underwater image imaging model with the adjusted coefficients, giving a color-cast-only data set and a foggy-day data set. Illustratively, for the color-cast-only data set, 500 NYU v2 indoor images are selected to generate 5000 color-cast-only groups, of which 3000 groups serve as the training set and 500 groups as the test set, with an image size of 320 x 240; for the foggy-day data set, 700 NYU v2 indoor images are selected to generate 2800 foggy-day groups, of which 2500 serve as the training set and 300 as the test set, with an image size of 320 x 240.
S2: the color-cast-only data set is assigned to the upper-layer network and the foggy-day data set to the lower-layer network; illustratively, this division into upper-layer and lower-layer images allows image features to be extracted with more specificity.
S3: the upper-layer network and the lower-layer network are trained separately, each as a residual network, to obtain a color-cast-removed image and a fog-removed image; illustratively, through this training the color-cast-removed image and the fog-removed image can be better distinguished.
S4: the two residual networks are combined to remove color cast and fog; illustratively, by combining the residual networks, a clearer image free of color cast and fog is obtained.
In step S1, Type 1 is the blue type for offshore waters and Type I is the green type for open-sea waters. Illustratively, modeling offshore waters as blue and open-sea waters as green simulates the underwater light scene well.
In step S1, when the color-cast-only underwater images at water depths of 0.5 to 5 meters are simulated for the two water types, Type 1 and Type I, the simulation uses an underwater image degradation model given by:

Uλ(x) = Iλ(x)·Tλ(x) + Bλ·(1 − Tλ(x))

Tλ(x) = Eλ(x, d(x)) / Eλ(x, 0) = e^(−βλ·d(x))

Tλ(x) = Nλ^d(x)

where Uλ(x) is the underwater image, Iλ(x) is the scene radiance, i.e. the clear underwater image, Bλ is the uniform global background light, λ is the wavelength of the red, green or blue channel, and Tλ(x) is a function of the wavelength λ and the distance d(x) from scene point x to the camera. βλ is the wavelength-dependent medium attenuation coefficient; the energy emitted from x before and after the beam passes through a transmission medium of length d(x) is Eλ(x, 0) and Eλ(x, d(x)), respectively; and the normalized residual energy ratio Nλ is the ratio of the energy remaining after one unit of distance to the initial energy. Uniform global atmospheric light is randomly generated with 0.7 < Bλ < 0.8, and the water depth D(x) is set to 0.5, 1.5, 2.5, 3.5 and 5 meters; based on a random Bλ and D(x) for each underwater image, 10 color-cast-only underwater images at different water depths are generated across Type 1 and Type I.
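Illustratively, this synthesis can be sketched in a few lines of Python; the language, the fixed depth map, and the concrete Nλ values below are assumptions for illustration (per-channel Nλ would be chosen per water type, and NYU v2 supplies the clean image and its depth map):

    import numpy as np

    def synthesize_color_cast(clean_rgb, depth_m, N_lambda, B_lambda):
        # U = I*T + B*(1 - T), with T_lambda(x) = N_lambda^d(x) per channel.
        # clean_rgb: HxWx3 array in [0,1]; depth_m: HxW distances d(x) in meters.
        T = N_lambda[None, None, :] ** depth_m[..., None]
        return clean_rgb * T + B_lambda * (1.0 - T)

    rng = np.random.default_rng(0)
    clean = rng.random((240, 320, 3))        # stand-in for an NYU v2 indoor image
    depth = np.full((240, 320), 2.5)         # D(x) fixed at one of the 0.5 .. 5 m depths
    N_type1 = np.array([0.85, 0.93, 0.97])   # placeholder ratios: red attenuates fastest
    B = rng.uniform(0.7, 0.8)                # uniform global background light Blambda
    underwater = synthesize_color_cast(clean, depth, N_type1, B)

Because only the per-channel transmission varies, the result carries a pure color cast with no added haze, which is exactly the property the upper-layer network is trained against.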
In step S3, when the color-cast-removed image is obtained, the upper-layer network is deepened by stacked residual blocks that continuously extract channel features of the upper-layer image, together with skip connections.
The calculation formula for obtaining the color-cast-removed image is as follows:

I1 = r(c(Iin); θ1)
B1 = b(c(r(b(c(I1); θ(1,1))); θ(1,2))) + I1
B2 = b(c(r(b(c(B1); θ(2,1))); θ(2,2))) + B1
……
B10 = b(c(r(b(c(B9); θ(10,1))); θ(10,2))) + B9
Iout1 = r(c(B10); θout)

where Iin and Iout1 are the input and output of the upper-layer color-cast-removal network, I1 is the output of Iin after a convolution-ReLU layer, θ1 is the associated set of weights and biases, B1 to B10 are the residual blocks stacked in the network, θ(l,n) denotes the weights and biases of the n-th convolution layer of the l-th residual block, c denotes convolution, r the ReLU function, and b batch normalization. The convolution layers of this network use kernels of size 3x3 and keep the image size unchanged, which allows skip connections to be introduced: information from the previous residual block flows unobstructed into the next, improving information flow and avoiding the vanishing-gradient and degradation problems caused by an overly deep network, so that the network achieves the color-cast-removal effect.
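Illustratively, one such residual block and the upper-layer network can be sketched in PyTorch (an assumed framework); the 64-channel width is also an assumption, since the text fixes 3x3 kernels and ten blocks but states a channel count only for the lower-layer network:

    import torch.nn as nn

    class ResBlock(nn.Module):
        # B_l = b(c(r(b(c(B_{l-1}))))) + B_{l-1}: conv-BN-ReLU-conv-BN plus skip.
        def __init__(self, ch=64):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
            )

        def forward(self, x):
            return self.body(x) + x  # skip connection keeps information flowing

    class UpperNet(nn.Module):
        # I1 = r(c(Iin)); ten stacked blocks B1..B10; Iout1 = r(c(B10)).
        def __init__(self, ch=64):
            super().__init__()
            self.head = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True))
            self.blocks = nn.Sequential(*[ResBlock(ch) for _ in range(10)])
            self.tail = nn.Sequential(nn.Conv2d(ch, 3, 3, padding=1), nn.ReLU(inplace=True))

        def forward(self, x):
            return self.tail(self.blocks(self.head(x)))

Keeping the spatial size unchanged (padding=1) is what lets the identity skip be added directly, matching the remark above about unobstructed information flow between residual blocks.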
In step S3, when the fog-removed image is obtained, the color-cast-removed output Iout1 of the upper-layer network is used as the input of the lower-layer network, and Iout is the final output of the lower-layer network; the calculation formula for the fog-removed image is as follows:
Ib1 = r(c(Iout1); θ2)
Bb1 = b(c(b(c(r(b(c(Ib1); θ(1,1))); θ(1,2))); θ(1,3))) + Ib1
Bb2 = b(c(b(c(r(b(c(Bb1); θ(2,1))); θ(2,2))); θ(2,3))) + Bb1
……
Bb10 = b(c(b(c(r(b(c(Bb9); θ(10,1))); θ(10,2))); θ(10,3))) + Bb9
Iout = r(c(Bb10); θbout)

where Ib1 is the output of Iout1 after a convolution-ReLU layer, θ2 is the associated set of weights and biases, and Bb1 to Bb10 are the residual blocks stacked in the residual network. Illustratively, the lower-layer network removes the sense of fog: it stacks large residual blocks built with 1x1 convolutions so that it can process the more complex fog features. Computation is reduced by a dimension-reducing 1x1 convolution layer and restored by another 1x1 convolution layer afterwards, which preserves accuracy while reducing the amount of computation, and a clear underwater image free of color cast and fog is finally obtained. Unlike the first convolution layer of the upper-layer network, the number of convolution kernels is increased from 64 to 256 dimensions, so that foggy-image features are better extracted. Bb1 to Bb10, the residual blocks stacked in the lower-layer network, each consist of a 1x1 convolution, a 3x3 convolution and a 1x1 convolution; the 1x1 convolutions reduce the amount of computation while alleviating the network-degradation problem.
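Illustratively, the lower-layer bottleneck block (1x1 reduce, 3x3, 1x1 restore) and network can be sketched as follows; the 256-to-64 reduction follows the "64 to 256 dimensions" remark above, while the exact placement of batch normalization and ReLU is an assumption where the formula is ambiguous:

    import torch.nn as nn

    class BottleneckBlock(nn.Module):
        # Bb_l: 1x1 reduce -> 3x3 at low dimension -> 1x1 restore, plus skip;
        # the 1x1 convolutions cut computation while preserving accuracy.
        def __init__(self, ch=256, mid=64):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(ch, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
                nn.Conv2d(mid, mid, 3, padding=1), nn.BatchNorm2d(mid),
                nn.Conv2d(mid, ch, 1), nn.BatchNorm2d(ch),
            )

        def forward(self, x):
            return self.body(x) + x

    class LowerNet(nn.Module):
        # Ib1 = r(c(Iout1)) lifts to 256 channels; ten blocks Bb1..Bb10;
        # Iout = r(c(Bb10)) maps back to RGB.
        def __init__(self, ch=256):
            super().__init__()
            self.head = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True))
            self.blocks = nn.Sequential(*[BottleneckBlock(ch) for _ in range(10)])
            self.tail = nn.Sequential(nn.Conv2d(ch, 3, 3, padding=1), nn.ReLU(inplace=True))

        def forward(self, x):
            return self.tail(self.blocks(self.head(x)))

Running the 3x3 convolution at 64 channels instead of 256 is where the bottleneck saves computation, matching the accuracy-versus-cost remark above.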
In step S1, the foggy-day data set is obtained according to a foggy-day imaging model; illustratively, extensive training on data generated by the foggy-day imaging model allows foggy-day image features to be extracted better.
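Combining steps S2 to S4 above, the overall two-step flow can be sketched in PyTorch as an illustration; the framework and the MSE loss are assumptions, and upper_net and lower_net stand for the hypothetical modules sketched earlier:

    import torch

    # Each network is trained on its own paired data set:
    # upper_net on (color-cast image, clean target) pairs,
    # lower_net on (foggy image, clean target) pairs.
    def train_step(net, optimizer, degraded, target, loss_fn=torch.nn.MSELoss()):
        optimizer.zero_grad()
        loss = loss_fn(net(degraded), target)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Two-step inference: color cast is removed first (Iout1), then fog (Iout).
    @torch.no_grad()
    def enhance(upper_net, lower_net, underwater_img):
        color_corrected = upper_net(underwater_img)
        return lower_net(color_corrected)

Training the two networks on separate, single-degradation data sets is what gives each its specificity; only at inference are they cascaded into the full enhancement pipeline.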
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. An underwater image enhancement method based on a two-step residual network, characterized by comprising the following steps:
S1: adjusting the medium attenuation coefficients of the two water types, Type 1 and Type I, and simulating, from the underwater image imaging model with the adjusted coefficients, color-cast-only underwater images at water depths of 0.5 to 5 meters for both types, thereby obtaining a color-cast-only data set and a foggy-day data set;
S2: assigning the color-cast-only data set to the upper-layer network and the foggy-day data set to the lower-layer network;
S3: training the upper-layer network and the lower-layer network separately, each as a residual network, to obtain a color-cast-removed image and a fog-removed image;
S4: combining the two residual networks to remove both color cast and fog, obtaining a clear image free of color cast and fog.
2. The underwater image enhancement method based on the two-step residual network according to claim 1, wherein in step S1, Type 1 is the blue type, corresponding to offshore waters, and Type I is the green type, corresponding to open-sea waters.
3. The underwater image enhancement method based on the two-step residual network according to claim 1, wherein in step S1, when the color-cast-only simulated underwater images at water depths of 0.5 to 5 meters are generated for the Type 1 and Type I water types, the simulation uses an underwater image degradation model given by:

Uλ(x) = Iλ(x)·Tλ(x) + Bλ·(1 − Tλ(x))

Tλ(x) = Eλ(x, d(x)) / Eλ(x, 0) = e^(−βλ·d(x))

Tλ(x) = Nλ^d(x)

where Uλ(x) is the underwater image, Iλ(x) is the scene radiance, i.e. the clear underwater image, Bλ is the uniform global background light, λ is the wavelength of the red, green or blue channel, and Tλ(x) is a function of the wavelength λ and the distance d(x) from scene point x to the camera; βλ is the wavelength-dependent medium attenuation coefficient; the energy emitted from x before and after the beam passes through a transmission medium of length d(x) is Eλ(x, 0) and Eλ(x, d(x)), respectively; and the normalized residual energy ratio Nλ is the ratio of the energy remaining after one unit of distance to the initial energy; uniform global atmospheric light is randomly generated with 0.7 < Bλ < 0.8, and the water depth D(x) is set to 0.5, 1.5, 2.5, 3.5 and 5 meters; based on a random Bλ and D(x) for each underwater image, 10 color-cast-only underwater images at different water depths are generated across Type 1 and Type I.
4. The underwater image enhancement method based on the two-step residual network according to claim 1, wherein in step S3, when the color-cast-removed image is obtained, the upper-layer network is deepened by stacked residual blocks that continuously extract channel features of the upper-layer image, together with skip connections.
5. The underwater image enhancement method based on the two-step residual network according to claim 4, wherein the calculation formula for obtaining the color-cast-removed image is as follows:

I1 = r(c(Iin); θ1)
B1 = b(c(r(b(c(I1); θ(1,1))); θ(1,2))) + I1
B2 = b(c(r(b(c(B1); θ(2,1))); θ(2,2))) + B1
……
B10 = b(c(r(b(c(B9); θ(10,1))); θ(10,2))) + B9
Iout1 = r(c(B10); θout)

where Iin and Iout1 are the input and output of the upper-layer color-cast-removal network, I1 is the output of Iin after a convolution-ReLU layer, θ1 is the associated set of weights and biases, B1 to B10 are the residual blocks stacked in the network, θ(l,n) denotes the weights and biases of the n-th convolution layer of the l-th residual block, c denotes convolution, r the ReLU function, and b batch normalization.
6. The underwater image enhancement method based on the two-step residual network according to claim 5, wherein in step S3, when the fog-removed image is obtained, the color-cast-removed output Iout1 of the upper-layer network is used as the input of the lower-layer network, and Iout is the final output of the lower-layer network; the calculation formula for the fog-removed image is as follows:

Ib1 = r(c(Iout1); θ2)
Bb1 = b(c(b(c(r(b(c(Ib1); θ(1,1))); θ(1,2))); θ(1,3))) + Ib1
Bb2 = b(c(b(c(r(b(c(Bb1); θ(2,1))); θ(2,2))); θ(2,3))) + Bb1
……
Bb10 = b(c(b(c(r(b(c(Bb9); θ(10,1))); θ(10,2))); θ(10,3))) + Bb9
Iout = r(c(Bb10); θbout)

where Ib1 is the output of Iout1 after a convolution-ReLU layer, θ2 is the associated set of weights and biases, and Bb1 to Bb10 are the residual blocks stacked in the residual network.
7. The underwater image enhancement method based on the two-step residual network according to claim 1, wherein in step S1, the foggy-day data set is obtained according to a foggy-day imaging model.
CN202111138929.7A, filed 2021-09-27 (priority 2021-09-27): Underwater image enhancement method based on a two-step residual network. Status: Pending. Published as CN113920021A (en).

Priority Applications (1)

Application Number: CN202111138929.7A; Priority Date: 2021-09-27; Filing Date: 2021-09-27; Title: Underwater image enhancement method based on a two-step residual network

Publications (1)

Publication Number: CN113920021A; Publication Date: 2022-01-11

Family

ID: 79236568

Family Applications (1)

Application Number: CN202111138929.7A (pending); Filing Date: 2021-09-27; Title: Underwater image enhancement method based on a two-step residual network

Country Status (1)

CN: CN113920021A (en)

Cited By (1)

* Cited by examiner, † Cited by third party

Publication number: CN114612347A *; Priority date: 2022-05-11; Publication date: 2022-06-10; Assignee: 北京科技大学; Title: Multi-module cascade underwater image enhancement method

Similar Documents

Publication Title
CN110189278B (en) Binocular scene image restoration method based on generation countermeasure network
CN111754446A (en) Image fusion method, system and storage medium based on generation countermeasure network
CN109447907B (en) Single image enhancement method based on full convolution neural network
CN112288658A (en) Underwater image enhancement method based on multi-residual joint learning
CN110853110B (en) Picture automatic color matching method based on generation countermeasure network
Hu et al. Underwater image restoration based on convolutional neural network
CN110458060A (en) A kind of vehicle image optimization method and system based on confrontation study
CN109410149B (en) CNN denoising method based on parallel feature extraction
CN110322410B (en) Underwater image defogging and color cast correction method based on bright channel transmissivity compensation
CN110838092A (en) Underwater image restoration method based on convolutional neural network
CN107103285A (en) Face depth prediction approach based on convolutional neural networks
CN111091151B (en) Construction method of generation countermeasure network for target detection data enhancement
CN113920021A (en) Underwater image enhancement method based on two-step residual error network
CN115131188A (en) Robust image watermarking method based on generation countermeasure network
CN111179196A (en) Multi-resolution depth network image highlight removing method based on divide-and-conquer
CN114627035A (en) Multi-focus image fusion method, system, device and storage medium
CN115035010A (en) Underwater image enhancement method based on convolutional network guided model mapping
CN108921887A (en) Underwater scene depth map estimation method based on underwater light attenuation apriority
CN117649607B (en) Sea grass bed remote sensing identification method and device based on SegNet deep learning model
CN113436101B (en) Method for removing rain by Dragon lattice tower module based on efficient channel attention mechanism
US20230376614A1 (en) Method for decoding and encoding network steganography utilizing enhanced attention mechanism and loss function
CN117391920A (en) High-capacity steganography method and system based on RGB channel differential plane
CN117152019A (en) Low-illumination image enhancement method and system based on double-branch feature processing
CN117314808A (en) Infrared and visible light image fusion method combining transducer and CNN (carbon fiber network) double encoders
CN116402701A (en) Image defogging method and system based on depth of field information fogging and transform network

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination