CN111415304A - Underwater vision enhancement method and device based on cascade deep network - Google Patents
- Publication number: CN111415304A (application CN202010121157.5A)
- Authority: CN (China)
- Prior art keywords: image, underwater, enhanced, sample, degraded
- Legal status: Pending (status assumed by Google; not a legal conclusion)
Classifications
- G06T5/77
- G06N3/045: Combinations of networks (under G06N3/04, Architecture, e.g. interconnection topology)
- G06N3/08: Learning methods
(G: Physics; G06: Computing, calculating or counting; G06N: Computing arrangements based on specific computational models; G06N3/00: based on biological models; G06N3/02: Neural networks)
Abstract
The embodiment of the invention provides an underwater vision enhancement method and device based on a cascaded deep network. The method comprises the following steps: determining an underwater degraded image; inputting the underwater degraded image into an underwater image enhancement model, and outputting an enhanced image corresponding to the underwater degraded image. The underwater image enhancement model is obtained by training on sample underwater degraded images and the enhanced image label corresponding to each sample underwater degraded image; the training network is established from two stages of convolutional neural networks, and each stage of convolutional neural network consists of 5 convolutional layers and 3 dense connection layers. The method and the device provided by the embodiment of the invention improve both the accuracy of underwater image enhancement modeling and the underwater image enhancement effect.
Description
Technical Field
The invention relates to the technical field of image enhancement, and in particular to an underwater vision enhancement method and device based on a cascaded deep network.
Background
In recent years, underwater image enhancement has received much attention in fields such as image processing and computer vision. Because of the complexity of the underwater environment and lighting conditions, underwater image enhancement is a challenging problem. Generally, underwater images suffer wavelength-dependent absorption and scattering that degrade the image: haze caused by light scattering from small suspended particles in the water, and color distortion caused by the water body's absorption of different wavelengths of light.
Traditional underwater image enhancement methods based on a physical imaging model estimate the transmittance and the global background light of underwater imaging separately. This two-step estimation is suboptimal and does not directly minimize the reconstruction error, so the restored image shows local color spots and color cast. In addition, such methods adapt poorly to complex underwater environments, rely on inaccurate underwater imaging models, and use model parameter estimation algorithms of high complexity.
Therefore, how to avoid the high parameter-estimation complexity and low modeling accuracy in the model building of existing underwater image enhancement methods, and how to improve the underwater image enhancement effect, remain problems to be solved by those skilled in the art.
Disclosure of Invention
The embodiment of the invention provides an underwater vision enhancement method and device based on a cascaded deep network, by means of an underwater optical imaging physical model and convolutional neural networks, aiming to solve the problems of high parameter-estimation complexity and low modeling accuracy in model building of existing underwater image enhancement methods.
In a first aspect, an embodiment of the present invention provides an underwater vision enhancement method based on a cascaded depth network, including:
determining an underwater degraded image;
inputting the underwater degraded image into an underwater image enhancement model, and outputting an enhanced image corresponding to the underwater degraded image;
the underwater image enhancement model is obtained by training based on sample underwater degraded images and enhanced image labels corresponding to the sample underwater degraded images, the underwater image enhancement model adopts network training established by two stages of convolutional neural networks, and each stage of convolutional neural network consists of 5 convolutional layers and 3 dense connecting layers.
Preferably, in the method, the obtaining of the enhanced image tag corresponding to each sample underwater degraded image specifically includes:
respectively adopting four underwater image enhancement algorithms of white balance, histogram equalization, fusion and UDCP (image restoration based on underwater dark channel prior) to obtain four enhanced images corresponding to the sample underwater degraded image;
and evaluating four enhanced images corresponding to the sample underwater degraded image by adopting a mode of combining subjective evaluation and objective evaluation, and selecting the optimal image in the four enhanced images as an enhanced image label of the sample underwater degraded image.
Preferably, in the method, the evaluating four enhanced images corresponding to the sample underwater degraded image by combining subjective evaluation and objective evaluation, and selecting an optimal image of the four enhanced images as an enhanced image tag of the sample underwater degraded image specifically include:
the organization evaluator takes the sample underwater degraded image as a reference, carries out subjective scoring on the four enhanced images and selects a first best enhanced image;
performing objective evaluation on the four enhanced images by using an information entropy index, a UCIQE index and a UIQM index, and selecting a second best enhanced image;
if the first optimal enhanced image is consistent with the second optimal enhanced image, determining that the first optimal enhanced image is an enhanced image label of the sample underwater degraded image;
and if the first optimal enhanced image is inconsistent with the second optimal enhanced image, performing subjective scoring again, and selecting the image with the highest second subjective score as an enhanced image label of the sample underwater degraded image.
Preferably, in the method, the underwater image enhancement model adopts network training established by two stages of convolutional neural networks, each stage of convolutional neural network is composed of 5 convolutional layers and 3 dense connection layers, and the method specifically comprises the following steps:
the training network of the underwater image enhancement model is established by adopting a first-stage convolutional neural network and a second-stage convolutional neural network in a cascade manner;
establishing a R, G, B component-based three-color channel model of a first-stage convolutional neural network, wherein each color channel corresponds to one convolutional neural network, the convolutional neural network comprises 5 convolutional layers and 3 dense connecting layers, and the input of the first-stage convolutional neural network is a sample underwater degraded image;
the second stage convolution neural network comprises 5 convolution layers and 3 dense connection layers, wherein the input of the second stage convolution neural network is a semi-enhanced image synthesized by single-channel gray-scale maps of each color channel output by the first stage convolution neural network.
Preferably, in the method, training the underwater image enhancement model based on the sample underwater degraded images and the enhanced image labels corresponding to the sample underwater degraded images specifically includes:
presetting maximum iteration times, a learning rate and a minimum loss threshold value of training;
and when the iteration number of the training reaches the maximum iteration number or the difference value of the loss functions obtained by the two times of training is smaller than the minimum loss threshold value, stopping the network training and determining the final underwater image enhancement model.
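The stopping rule described above (a maximum number of iterations, or a loss difference between two consecutive training rounds below a minimum threshold) can be sketched as follows. `train_step` is a hypothetical callback standing in for one optimization pass, since the patent does not fix the optimizer details:

```python
def train_until_converged(train_step, max_iters=10000, min_loss_delta=1e-6):
    """Run train_step() until the maximum iteration count is reached or the
    difference between two consecutive losses drops below the threshold."""
    prev_loss = None
    loss = None
    for it in range(max_iters):
        loss = train_step()  # one training pass; returns the current loss
        if prev_loss is not None and abs(prev_loss - loss) < min_loss_delta:
            return it + 1, loss  # loss plateaued: stop early
        prev_loss = loss
    return max_iters, loss  # iteration budget exhausted
```

For example, with a loss sequence 1.0, 0.5, 0.25, 0.2499999, the loop stops at the fourth step because the last difference is below the threshold.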
In a second aspect, an embodiment of the present invention provides an underwater vision enhancement device based on a cascaded deep network, including:
a determination unit for determining an underwater degraded image;
the enhancement unit is used for inputting the underwater degraded image into an underwater image enhancement model and outputting an enhanced image corresponding to the underwater degraded image;
the underwater image enhancement model is obtained by training based on sample underwater degraded images and enhanced image labels corresponding to the sample underwater degraded images, the underwater image enhancement model adopts network training established by two stages of convolutional neural networks, and each stage of convolutional neural network consists of 5 convolutional layers and 3 dense connecting layers.
Preferably, in the apparatus, the acquiring of the enhanced image tag corresponding to each sample underwater degraded image specifically includes:
respectively adopting four underwater image enhancement algorithms of white balance, histogram equalization, fusion and UDCP algorithm to the sample underwater degraded image to obtain four enhanced images corresponding to the sample underwater degraded image;
and evaluating four enhanced images corresponding to the sample underwater degraded image by adopting a mode of combining subjective evaluation and objective evaluation, and selecting the optimal image in the four enhanced images as an enhanced image label of the sample underwater degraded image.
Preferably, in the apparatus, the evaluating four enhanced images corresponding to the sample underwater degraded image by combining subjective evaluation and objective evaluation, and selecting an optimal image of the four enhanced images as an enhanced image tag of the sample underwater degraded image specifically includes:
the organization evaluator takes the sample underwater degraded image as a reference, carries out subjective scoring on the four enhanced images and selects a first best enhanced image;
performing objective evaluation on the four enhanced images by using an information entropy index, a UCIQE index and a UIQM index, and selecting a second best enhanced image;
if the first optimal enhanced image is consistent with the second optimal enhanced image, determining that the first optimal enhanced image is an enhanced image label of the sample underwater degraded image;
and if the first optimal enhanced image is inconsistent with the second optimal enhanced image, performing subjective scoring again, and selecting the image with the highest second subjective score as an enhanced image label of the sample underwater degraded image.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the program to implement the steps of the underwater vision enhancement method based on the cascaded deep network as provided in the first aspect.
In a fourth aspect, embodiments of the present invention provide a non-transitory computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the cascade depth network-based underwater vision enhancement method as provided in the first aspect.
According to the method and the device provided by the embodiment of the invention, the underwater degraded image is input into an underwater image enhancement model, and an enhanced image corresponding to the underwater degraded image is output, wherein the underwater image enhancement model is obtained by training based on the sample underwater degraded image and an enhanced image label corresponding to each sample underwater degraded image, the underwater image enhancement model adopts network training established by two stages of convolutional neural networks, and each stage of convolutional neural network consists of 5 convolutional layers and 3 dense connecting layers. Because the underwater image enhancement model is obtained by training based on a large number of samples and labels, the effect of the image enhancement function of the model is greatly improved, and meanwhile, because the adopted training network is established by a two-stage convolution neural network, the accuracy of the trained model is also greatly improved. Therefore, the method and the device provided by the embodiment of the invention realize the improvement of the accuracy of underwater image enhancement modeling and also improve the effect of underwater image enhancement.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, a brief description will be given below of the drawings required for the embodiments or the technical solutions in the prior art, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of an underwater vision enhancement method based on a cascaded depth network according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a training process of a training network of an underwater image enhancement model according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an underwater vision enhancement device based on a cascaded depth network according to an embodiment of the present invention;
fig. 4 is a schematic physical structure diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
The existing method for enhancing the underwater degraded image generally has the problems of high complexity of parameter estimation and lower modeling accuracy when an image enhancement model is built. Therefore, the embodiment of the invention provides an underwater vision enhancement method based on a cascade deep network. Fig. 1 is a schematic flow chart of an underwater vision enhancement method based on a cascaded depth network according to an embodiment of the present invention, and as shown in fig. 1, the method includes:
Specifically, an underwater degraded image requiring image enhancement is determined.
Specifically, an underwater degraded image is input into an underwater image enhancement model, which outputs an enhanced image corresponding to the underwater degraded image. The underwater image enhancement model is obtained by training, with a deep learning method, on a large number of sample underwater degraded images and the enhanced image label corresponding to each sample underwater degraded image. The sample underwater degraded images are acquired underwater degraded images from different scenes, and the enhanced image label corresponding to each sample is an image with good enhancement effect obtained by processing that sample with an image enhancement method. The training network used when training the underwater image enhancement model is established with a two-stage convolutional neural network, each stage of which consists of 5 convolutional layers and 3 dense connection layers.
The method provided by the embodiment of the invention is characterized in that the underwater degraded image is input into an underwater image enhancement model, and an enhanced image corresponding to the underwater degraded image is output, wherein the underwater image enhancement model is obtained by training based on the sample underwater degraded image and an enhanced image label corresponding to each sample underwater degraded image, the underwater image enhancement model adopts network training established by two stages of convolutional neural networks, and each stage of convolutional neural network consists of 5 convolutional layers and 3 dense connecting layers. Because the underwater image enhancement model is obtained by training based on a large number of samples and labels, the effect of the image enhancement function of the model is greatly improved, and meanwhile, because the adopted training network is established by a two-stage convolution neural network, the accuracy of the trained model is also greatly improved. Therefore, the method provided by the embodiment of the invention realizes the improvement of the accuracy of underwater image enhancement modeling and also improves the effect of underwater image enhancement.
Based on the above embodiment, in the method, the obtaining of the enhanced image tag corresponding to each sample underwater degraded image specifically includes:
respectively adopting four underwater image enhancement algorithms of white balance, histogram equalization, fusion and UDCP algorithm to the sample underwater degraded image to obtain four enhanced images corresponding to the sample underwater degraded image;
and evaluating four enhanced images corresponding to the sample underwater degraded image by adopting a mode of combining subjective evaluation and objective evaluation, and selecting the optimal image in the four enhanced images as an enhanced image label of the sample underwater degraded image.
Specifically, before the underwater image enhancement model is trained, the samples and sample labels used for training are determined. The sample underwater degraded images are a large number of collected underwater degraded images from different scenes, and the enhanced image label of each sample underwater degraded image is an image with good enhancement effect obtained by processing that sample with an image enhancement method. The sample labels are obtained as follows. For any sample underwater degraded image, image enhancement is performed with four typical underwater image enhancement algorithms, namely the white balance algorithm, the histogram equalization algorithm, the fusion algorithm and the UDCP algorithm, yielding four enhanced images corresponding to the sample underwater degraded image. These four enhanced images are then evaluated by combining subjective and objective evaluation, and the best of the four is selected as the enhanced image label of the sample underwater degraded image. Subjective and objective evaluation can be combined in multiple ways: different weights may be set for the subjective score and the objective score, which are then added to determine a final score for each enhanced image; or it may be determined whether the best enhanced image selected by subjective evaluation is consistent with the one selected by objective evaluation, using the consistent result as the sample label if so, and otherwise re-evaluating and using the result of the repeated subjective evaluation as the sample label. This is not specifically limited here.
Based on any one of the embodiments, in the method, the evaluating four enhanced images corresponding to the sample underwater degraded image in a manner of combining subjective evaluation and objective evaluation, and selecting an optimal image of the four enhanced images as an enhanced image label of the sample underwater degraded image specifically includes:
the organization evaluator takes the sample underwater degraded image as a reference, carries out subjective scoring on the four enhanced images and selects a first best enhanced image;
performing objective evaluation on the four enhanced images by using an information entropy index, a UCIQE index and a UIQM index, and selecting a second best enhanced image;
if the first optimal enhanced image is consistent with the second optimal enhanced image, determining that the first optimal enhanced image is an enhanced image label of the sample underwater degraded image;
and if the first optimal enhanced image is inconsistent with the second optimal enhanced image, performing subjective scoring again, and selecting the image with the highest second subjective score as an enhanced image label of the sample underwater degraded image.
Specifically, for the subjective evaluation of the enhanced images, 5 evaluators are recruited, of whom 3 work in image processing and 2 do not. Taking the sample underwater degraded image as reference, the 5 evaluators compare the four enhanced images produced by the four typical underwater image enhancement algorithms pairwise and assign each enhanced image a grade of A, B, C or D. The evaluation score of each enhanced image is then obtained with the following formula:

score = (10a + 8b + 6c + 4d) / N

where a, b, c and d are the numbers of the 5 evaluators who chose grades A, B, C and D respectively, grade A counts 10 points, B counts 8 points, C counts 6 points, D counts 4 points, and N is the total number of evaluators, here 5. The enhanced image with the highest evaluation score among the four enhanced images is taken as the first best enhanced image.
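Under the grade weights stated above (A = 10, B = 8, C = 6, D = 4 points over N evaluators), the subjective score can be computed as in this small sketch; the function name is illustrative:

```python
def subjective_score(a, b, c, d):
    """Mean grade score over all evaluators: A=10, B=8, C=6, D=4 points.
    a..d are the numbers of evaluators who chose each grade (N = a+b+c+d,
    which is 5 in the patent's setup)."""
    n = a + b + c + d
    return (10 * a + 8 * b + 6 * c + 4 * d) / n
```

For example, if 3 evaluators grade an image A and 2 grade it B, the score is (30 + 16) / 5 = 9.2; the image with the highest such score is the first best enhanced image.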
Further, on the basis of the subjective evaluation, three objective indices, namely information entropy, UCIQE and UIQM, are used to evaluate the four enhanced images objectively and quantitatively. The information entropy measures the amount of information in the image; the larger the entropy, the more information the image contains, and the better the image enhancement quality is considered to be. It is calculated as:

E = -Σ_{l=0}^{L} p(l) log2 p(l)

where p(l) is the probability of the gray value l appearing in the image and L is the maximum gray level of the image, L = 255.
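A minimal sketch of the information entropy calculation over a grayscale image, assuming 8-bit gray values:

```python
import math

def image_entropy(gray_pixels, levels=256):
    """Shannon entropy E = -sum_l p(l) * log2 p(l) of a grayscale image,
    given as a flat sequence of integer gray values in [0, levels)."""
    counts = [0] * levels
    for v in gray_pixels:
        counts[v] += 1
    n = len(gray_pixels)
    # Sum only over gray levels that actually occur (p(l) > 0).
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)
```

A two-valued image split evenly between gray levels 0 and 255 has entropy exactly 1 bit, while a constant image has entropy 0.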
UCIQE is a no-reference evaluation index for underwater color images. It is a linear combination of image chroma, saturation and contrast in the CIE LAB uniform color space, used to comprehensively evaluate the color, blur and contrast of the image. If the enhanced image has N pixel points, a pixel p has the value I_p = [l_p, a_p, b_p] in CIE LAB space, where l_p, a_p and b_p are coordinates in the color space: l_p represents the luminance of pixel p, and a_p and b_p represent its chromaticity. The index UCIQE for evaluating underwater image quality in CIE LAB color space is defined as:

UCIQE = c1 × σ_c + c2 × con_l + c3 × μ_s

where c1, c2 and c3 are weighting coefficients; for blurred and color-cast enhanced images c1 = 0.4680, c2 = 0.2745 and c3 = 0.2576. For different types of enhancement, better performance is obtained with different weighting coefficients.

σ_c is the standard deviation of the chroma, calculated as:

σ_c = sqrt( (1/N) Σ_p (C_p − μ_c)² )

where N is the number of pixels contained in the enhanced image, C_p denotes the chroma of pixel p in CIE LAB space, and μ_c denotes the mean chroma in CIE LAB space.

con_l is the contrast of the luminance, calculated as the difference between the top 1% and the bottom 1% of the luminances of all pixels in CIE LAB space.

μ_s is the mean saturation, calculated as:

μ_s = (1/N) Σ_i s_i

where N is the number of pixels of the enhanced image and s_i is the saturation of pixel i in CIE LAB space.
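The three UCIQE components can be combined as sketched below. Taking the chroma of a pixel as sqrt(a² + b²) is an assumption where the translated text is unclear, and per-pixel saturation values are assumed to be supplied by the caller:

```python
import math

def uciqe(lab_pixels, c1=0.4680, c2=0.2745, c3=0.2576):
    """lab_pixels: sequence of (L, a, b, saturation) tuples for the image.
    Returns c1*sigma_c + c2*con_l + c3*mu_s as defined in the text."""
    n = len(lab_pixels)
    chroma = [math.hypot(a, b) for (_, a, b, _) in lab_pixels]  # assumed chroma
    mu_c = sum(chroma) / n
    sigma_c = math.sqrt(sum((c - mu_c) ** 2 for c in chroma) / n)  # chroma std-dev
    lum = sorted(l for (l, _, _, _) in lab_pixels)
    k = max(1, n // 100)  # size of the top/bottom 1% luminance slices
    con_l = sum(lum[-k:]) / k - sum(lum[:k]) / k  # luminance contrast
    mu_s = sum(s for (_, _, _, s) in lab_pixels) / n  # mean saturation
    return c1 * sigma_c + c2 * con_l + c3 * mu_s
```

On a toy two-pixel input the components can be checked by hand against the weights given above.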
The UIQM no-reference evaluation index for underwater color images is also used for evaluation; UIQM is a linear combination of the image's colourfulness, sharpness and contrast. The underwater evaluation index UIQM is calculated as:

UIQM = c1 × UICM + c2 × UISM + c3 × UIConM

where c1, c2 and c3 are weighting coefficients; for underwater images c1 = 0.0282, c2 = 0.2953 and c3 = 3.5753. UICM represents the colourfulness measure of the image, UISM the sharpness measure and UIConM the contrast measure.
The information entropy measures the amount of information in an image; UCIQE evaluates the chroma, saturation and contrast of the image; and UIQM, which is based on the human visual system and consistent with human visual perception, measures the saturation, contrast and sharpness of the image. The index values obtained from the three evaluation indices are summed after setting different weights, and the enhanced image with the largest weighted sum among the four enhanced images is taken as the second best enhanced image selected by the objective evaluation.
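The UIQM combination and the weighted-sum selection of the second best enhanced image can be sketched as follows; the equal weights in `second_best_enhanced` are an assumption, since the text only says that different weights are set before summing:

```python
def uiqm(uicm, uism, uiconm, c1=0.0282, c2=0.2953, c3=3.5753):
    """Linear combination of colourfulness (UICM), sharpness (UISM) and
    contrast (UIConM) measures with the weights given in the text."""
    return c1 * uicm + c2 * uism + c3 * uiconm

def second_best_enhanced(index_scores, weights=(1 / 3, 1 / 3, 1 / 3)):
    """index_scores: one (entropy, UCIQE, UIQM) tuple per enhanced image.
    Returns the index of the image with the largest weighted sum; the
    equal weights here are illustrative, not fixed by the patent."""
    totals = [sum(w * s for w, s in zip(weights, scores))
              for scores in index_scores]
    return max(range(len(totals)), key=totals.__getitem__)
```

With unit component measures, uiqm(1, 1, 1) simply returns the sum of the three coefficients, 3.8988.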
The subjective and objective evaluation results are then compared. If the first best enhanced image selected by subjective evaluation for the sample underwater degraded image is consistent with the second best enhanced image selected by objective evaluation, the first best enhanced image is determined to be the enhanced image label of the sample underwater degraded image. If they are inconsistent, subjective evaluation, i.e. manual scoring, is performed again, and the best enhanced image selected by this second subjective evaluation is taken as the enhanced image label of the sample underwater degraded image.
Based on any one of the above embodiments, in the method, the underwater image enhancement model adopts network training established by two stages of convolutional neural networks, each stage of convolutional neural network is composed of 5 convolutional layers and 3 dense connection layers, and specifically includes:
the training network of the underwater image enhancement model is established by adopting a first-stage convolutional neural network and a second-stage convolutional neural network in a cascade manner;
establishing a R, G, B component-based three-color channel model of a first-stage convolutional neural network, wherein each color channel corresponds to one convolutional neural network, the convolutional neural network comprises 5 convolutional layers and 3 dense connecting layers, and the input of the first-stage convolutional neural network is a sample underwater degraded image;
the second stage convolution neural network comprises 5 convolution layers and 3 dense connection layers, wherein the input of the second stage convolution neural network is a semi-enhanced image synthesized from the single-channel gray-scale maps of each color channel output by the first stage convolutional neural network.
Specifically, fig. 2 is a schematic diagram of the training process of the training network of the underwater image enhancement model according to an embodiment of the present invention. As shown in fig. 2, the training network of the underwater image enhancement model is established by cascading a first-stage convolutional neural network and a second-stage convolutional neural network. When training with the sample underwater degraded images and their corresponding enhanced image labels, the R, G, B color channel images of the sample underwater degraded image are extracted and recorded as I_R(x), I_G(x), I_B(x). A three-color-channel model is built for the R, G, B components as the first-stage convolutional neural network, with each color channel corresponding to one convolutional neural network. A fine-tuning network based on the output of the first-stage convolutional neural network is then constructed as the second-stage convolutional neural network, and the two stages are cascaded to obtain the underwater image enhancement model.
The three color-channel convolutional network models for the R, G, B components share the same structure. Each color-channel model comprises 5 convolutional layers and 3 dense connection layers, with convolution kernel sizes of 1x1, 3x3, 5x5, 7x7 and 3x3, respectively. The 1st dense connection layer concatenates the feature maps output by the 1st and 2nd convolutions; the 2nd dense connection layer concatenates the feature map output by the 2nd convolution with the convolved output of the 1st dense connection layer; the 3rd dense connection layer concatenates the feature maps output by the preceding 4 convolutions.
The calculation formulas are as follows:

F1(Ic(x))=max(W1c*Ic(x)+b1c,0)

F2(Ic(x))=max(W2c*F1(Ic(x))+b2c,0)

F3(Ic(x))=max(W3c*{F1(Ic(x)),F2(Ic(x))}+b3c,0)

F4(Ic(x))=max(W4c*{F2(Ic(x)),F3(Ic(x))}+b4c,0)

F5(Ic(x))=max(W5c*{F1(Ic(x)),F2(Ic(x)),F3(Ic(x)),F4(Ic(x))}+b5c,0)

wherein W1c, W2c, W3c, W4c, W5c are the convolution weights of the 1st, 2nd, 3rd, 4th and 5th convolutional layers of the convolutional layer group in the c color-component network (c ∈ {R, G, B}); b1c, b2c, b3c, b4c, b5c are the offsets of the 1st, 2nd, 3rd, 4th and 5th convolutional layers of the convolutional layer group in the c color-component network (c ∈ {R, G, B}); and F1(Ic(x)), F2(Ic(x)), F3(Ic(x)), F4(Ic(x)), F5(Ic(x)) are the output results of the 1st, 2nd, 3rd, 4th and 5th convolutional layers, respectively, of the convolutional layer group in the c color-component network.
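As a structural illustration, the dense-connection wiring of one color-channel network can be sketched with per-pixel linear maps standing in for the real convolutions (a 1x1 convolution is exactly such a map; the larger kernels are simplified here in the same way). The channel widths and weights below are illustrative, not those of the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)  # the max(., 0) in the formulas above

def conv1x1(x, w, b):
    # A 1x1 convolution over an (H, W, C_in) map is a per-pixel linear
    # map: contract the channel axis against a (C_in, C_out) weight.
    return x @ w + b

# Hypothetical channel width; the patent fixes only the layer count
# (5 convolutions, 3 dense connections), not the number of filters.
C = 4
H, W = 8, 8
Ic = rng.random((H, W, 1))                       # one color-channel input I_c(x)

w1, b1 = rng.standard_normal((1, C)), 0.1
w2, b2 = rng.standard_normal((C, C)), 0.1
w3, b3 = rng.standard_normal((2 * C, C)), 0.1    # takes {F1, F2}
w4, b4 = rng.standard_normal((2 * C, C)), 0.1    # takes {F2, F3}
w5, b5 = rng.standard_normal((4 * C, 1)), 0.1    # takes {F1, F2, F3, F4}

F1 = relu(conv1x1(Ic, w1, b1))
F2 = relu(conv1x1(F1, w2, b2))
F3 = relu(conv1x1(np.concatenate([F1, F2], axis=-1), w3, b3))              # 1st dense connection
F4 = relu(conv1x1(np.concatenate([F2, F3], axis=-1), w4, b4))              # 2nd dense connection
F5 = relu(conv1x1(np.concatenate([F1, F2, F3, F4], axis=-1), w5, b5))      # 3rd dense connection
Kc = F5                                          # intermediate variable K_c(x)
```

The concatenations mirror the {.,.} groupings in the formulas: each dense connection reuses earlier feature maps rather than only the immediately preceding one.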
Then, the first-stage convolutional neural network takes the output result of its 5th convolutional layer, obtained by learning, as the intermediate variable Kc(x) of the underwater optical physical imaging model, and reconstructs a color-restored image from this intermediate variable.
The underwater optical physical imaging model formula is as follows:
Ic(x)=tc(x)Jc(x)+(1-tc(x))·Ac
wherein Ic(x) is the underwater degraded image, Jc(x) is the original non-degraded image, Ac is the global background light, tc(x) is the medium transmittance, and c ∈ {R, G, B} denotes the red, green and blue color components.
wherein tc(x)=e^(−βc·z(x)), βc is the attenuation coefficient, and z(x) is the distance between the object and the camera.
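The forward imaging model can be exercised numerically; the attenuation coefficients, background light and scene values below are purely illustrative (red is given the largest coefficient because it attenuates fastest under water):

```python
import numpy as np

# Forward underwater imaging model: I_c = t_c*J_c + (1 - t_c)*A_c,
# with medium transmittance t_c(x) = exp(-beta_c * z(x)).
beta = {"R": 0.60, "G": 0.25, "B": 0.15}      # illustrative attenuation coefficients
A = {"R": 0.3, "G": 0.7, "B": 0.8}            # illustrative global background light
z = np.full((4, 4), 2.0)                      # object-camera distance (toy scene)
J = {c: np.full((4, 4), 0.5) for c in "RGB"}  # toy non-degraded scene radiance

I = {}
for c in "RGB":
    t = np.exp(-beta[c] * z)                  # transmittance per pixel
    I[c] = t * J[c] + (1.0 - t) * A[c]        # degraded observation
```

With these values the red channel ends up darker than the blue channel, reproducing the characteristic blue-green cast of underwater images.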
Performing an equivalent transformation on the formula Ic(x)=tc(x)Jc(x)+(1-tc(x))·Ac yields:
Jc(x)=Kc(x)Ic(x)-Kc(x)+bc
further, K is estimated by means of a modelc(x) Recovering degraded underwater image Jc(x) B is mixingcIs set to 1.
Further, the three recovered color components JR(x), JG(x) and JB(x) are synthesized into the underwater restored image Jc'(x).
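The equivalent transform can be verified numerically: with bc = 1, the choice Kc(x) = (Jc(x) − 1)/(Ic(x) − 1) satisfies Jc = Kc·Ic − Kc + bc exactly, so a network that predicts Kc(x) can in principle represent the full restoration. A toy round trip on synthetic channels:

```python
import numpy as np

rng = np.random.default_rng(1)
J = {c: rng.random((4, 4)) for c in "RGB"}   # toy ground-truth channels in [0, 1)
I = {c: 0.5 * J[c] + 0.2 for c in "RGB"}     # toy degraded observation (I_c < 1)

b_c = 1.0
restored = {}
for c in "RGB":
    K = (J[c] - b_c) / (I[c] - 1.0)          # ideal intermediate variable K_c(x)
    restored[c] = K * I[c] - K + b_c         # J_c = K_c*I_c - K_c + b_c

# Synthesize the three recovered components into one restored image J'(x)
J_prime = np.stack([restored["R"], restored["G"], restored["B"]], axis=-1)
```

Algebraically, K·I − K + 1 = K·(I − 1) + 1 = (J − 1) + 1 = J, which the assertion-level check confirms pixel by pixel.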
A second-stage convolutional neural network, namely the fine-tuning network, is constructed. The fine-tuning network comprises 5 convolutional layers and 3 dense connection layers, with convolution kernel sizes of 1x1, 3x3, 5x5, 7x7 and 3x3, respectively. The 1st dense connection layer concatenates the feature maps output by the 1st and 2nd convolutions; the 2nd dense connection layer concatenates the feature map output by the 2nd convolution with the convolved output of the 1st dense connection layer; the 3rd dense connection layer concatenates the feature maps output by the preceding 4 convolutions. The calculation formulas are as follows:
F1(I(x))=max(W1*I(x)+b1,0)
F2(I(x))=max(W2*F1(I(x))+b2,0)
F3(I(x))=max(W3*{F1(I(x)),F2(I(x))}+b3,0)
F4(I(x))=max(W4*{F2(I(x)),F3(I(x))}+b4,0)
F5(I(x))=max(W5*{F1(I(x)),F2(I(x)),F3(I(x)),F4(I(x))}+b5,0)
wherein W1, W2, W3, W4, W5 are the convolution weights of the 1st, 2nd, 3rd, 4th and 5th convolutional layers, respectively, of the convolutional layer group in the fine-tuning network; b1, b2, b3, b4, b5 are the offsets of the 1st, 2nd, 3rd, 4th and 5th convolutional layers, respectively, of the convolutional layer group in the network; and F1(I(x)), F2(I(x)), F3(I(x)), F4(I(x)), F5(I(x)) are the output results of the 1st, 2nd, 3rd, 4th and 5th convolutional layers, respectively, of the convolutional layer group in the network.
Further, the second-stage convolutional neural network takes the output result of its 5th convolutional layer, obtained by learning, as the intermediate variable Kc(x) of the underwater optical physical imaging model, and reconstructs the color-restored image Jc(x) according to the formula Jc(x)=Kc(x)Ic(x)-Kc(x)+bc.
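The end-to-end wiring of the two-stage cascade can be sketched as below, with the two 5-convolution networks replaced by hypothetical stand-in functions (`stage1_net` and `stage2_net` are placeholders returning fixed K values, not the trained models): stage 1 predicts Kc per color channel and reconstructs each Jc, the merged semi-enhanced image feeds the fine-tuning network, and stage 2 predicts K for the 3-channel image.

```python
import numpy as np

def stage1_net(Ic):
    # stand-in for one per-channel 5-conv dense network -> K_c(x)
    return np.full_like(Ic, 1.5)

def stage2_net(img):
    # stand-in for the fine-tuning 5-conv dense network -> K(x)
    return np.full_like(img, 1.1)

def reconstruct(K, I, b=1.0):
    return K * I - K + b                    # J = K*I - K + b

I_deg = {c: np.full((4, 4), 0.4) for c in "RGB"}          # toy degraded input

# Stage 1: per-channel restoration, then merge into a semi-enhanced image
semi = np.stack([reconstruct(stage1_net(I_deg[c]), I_deg[c])
                 for c in "RGB"], axis=-1)

# Stage 2: fine-tune the merged image with the same K-form reconstruction
final = reconstruct(stage2_net(semi), semi)
```

With these stand-in K values, each stage-1 channel maps 0.4 to 0.1, and stage 2 maps the semi-enhanced 0.1 to 0.01; the point is the data flow, not the numbers.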
Based on any one of the above embodiments, in the method, training the underwater image enhancement model based on the sample underwater degraded images and the enhanced image labels corresponding to the sample underwater degraded images specifically includes:
presetting maximum iteration times, a learning rate and a minimum loss threshold value of training;
and when the number of training iterations reaches the maximum iteration count, or the difference between the loss values obtained in two consecutive training iterations is smaller than the minimum loss threshold, stopping the network training and determining the final underwater image enhancement model.
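The stopping rule above can be sketched as follows; the maximum iteration count matches the one stated later for training, while the loss threshold and the loss curve itself are synthetic:

```python
MAX_ITERS = 67000
MIN_LOSS_DELTA = 1e-6        # illustrative minimum loss threshold

def train_until_converged(losses, max_iters=MAX_ITERS, min_delta=MIN_LOSS_DELTA):
    """Stop at the max iteration count, or when the loss change between
    two consecutive iterations falls below the minimum loss threshold."""
    prev = None
    for it, loss in enumerate(losses, start=1):
        if it >= max_iters:
            return it, "max_iters"
        if prev is not None and abs(prev - loss) < min_delta:
            return it, "converged"
        prev = loss
    return len(losses), "exhausted"

# Synthetic loss curve that flattens out well before the iteration cap
losses = [1.0 / (1 + 0.5 * k) for k in range(100000)]
```

On this curve the per-iteration change shrinks roughly as 2/k², so convergence triggers long before the 67000-iteration cap.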
Specifically, the basic parameters of the model need to be set before the first-stage convolutional neural network is trained. The ratio of the training set to the validation set was set to 9:1, the batch size to 8, the learning rate to [0.01, 0.0001], the number of iterations to 67000, and the weight decay to 0.0001; the weights of the model were updated using the Adam optimization algorithm. Considering that detail loss in underwater images is relatively severe, the embodiment of the present invention uses the MSE (mean square error) as the loss function, which, combined with the three-channel network structure, is favorable for preserving image detail and recovering image color.
The MSE loss is L_MSE = (1/M)·Σ_{m=1}^{M} ||x_m − y_m||², wherein x_m denotes the real enhanced-image label, y_m denotes the image restored by the network, and M denotes the number of samples.
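One common reading of this loss (the exact normalization in the original is not shown, so the 1/M averaging here is an assumption) can be sketched as:

```python
import numpy as np

def mse_loss(x, y):
    """MSE over M samples: (1/M) * sum_m ||x_m - y_m||^2, where x holds the
    real enhanced-image labels and y the network-restored images; the
    leading axis indexes samples."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    M = x.shape[0]
    return np.sum((x - y) ** 2) / M

labels   = np.zeros((2, 2, 2))        # M = 2 toy labels
restored = np.ones((2, 2, 2)) * 0.5   # toy network outputs
```

For these toy arrays each sample contributes 4 pixels of squared error 0.25, giving a loss of 1.0.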
The basic parameters of the second-stage network model likewise need to be set before training. The ratio of the training set to the validation set was set to 9:1, the batch size to 8, the learning rate to [0.01, 0.0001], the number of iterations to 67000, and the weight decay to 0.0001; the weights of the model were updated using the Adam optimization algorithm. The second-stage network is a fine-tuning process applied to the output result of the first-stage network; SSIM and color loss are added as the loss function, where SSIM denotes structural similarity and is used to protect the detail features of the image, and the color loss is used to reduce the color difference between the enhanced image and the target image.
The structural similarity between the real enhanced-image label x and the network-restored image y is SSIM(x, y) = [l(x, y)]^α · [c(x, y)]^β · [s(x, y)]^γ, where l(x, y), c(x, y) and s(x, y) respectively denote the luminance comparison, the contrast comparison and the structure comparison between the real enhanced-image label x and the network-restored image y:

l(x, y) = (2μxμy + c1) / (μx² + μy² + c1)

c(x, y) = (2σxσy + c2) / (σx² + σy² + c2)

s(x, y) = (σxy + c3) / (σxσy + c3)

wherein μx, μy denote the means of x and y, σx, σy denote the standard deviations of x and y, σxy denotes the covariance of x and y, and c1, c2, c3 are constants that avoid a zero denominator. Setting α = β = γ = 1 and c3 = c2/2, SSIM simplifies to:

SSIM(x, y) = (2μxμy + c1)(2σxy + c2) / ((μx² + μy² + c1)(σx² + σy² + c2))
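The simplified SSIM can be computed from global image statistics as below; the trained model may instead average it over local windows, and the constants c1, c2 here are conventional small values, not taken from the original:

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Simplified SSIM from whole-image statistics:
    (2*mu_x*mu_y + c1)(2*cov_xy + c2) /
    ((mu_x^2 + mu_y^2 + c1)(var_x + var_y + c2))."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

By construction SSIM(x, x) = 1, and any mismatch in mean, variance or covariance drives the score below 1 (it can go negative when the covariance is negative).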
wherein Lcolor(X, Y) = ||Xb − Yb||² is the color loss function, and Xb, Yb are the blurred images corresponding to the target image X and the enhanced image Y. The brightness value of a pixel (i, j) in the blurred image is obtained by convolution with the 2-dimensional Gaussian blur kernel G(i, j) = A·exp(−(i − μx)²/(2σx²) − (j − μy)²/(2σy²)), wherein A = 0.053, μx = μy = 0, and σx = σy = 3.
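The Gaussian blur kernel of the color loss, with the stated constants, can be constructed as follows; the kernel radius is an assumption (the original does not state the kernel's spatial extent):

```python
import numpy as np

A, SIGMA = 0.053, 3.0            # stated constants: A = 0.053, sigma_x = sigma_y = 3

def gaussian_kernel(radius=10, a=A, sigma=SIGMA):
    """Evaluate G(i, j) = A*exp(-(i^2 + j^2)/(2*sigma^2)) on an offset grid
    centered at (0, 0), i.e. mu_x = mu_y = 0."""
    idx = np.arange(-radius, radius + 1)
    i, j = np.meshgrid(idx, idx, indexing="ij")
    return a * np.exp(-(i ** 2 + j ** 2) / (2 * sigma ** 2))

def color_loss(xb, yb):
    # L_color(X, Y) = ||X_b - Y_b||^2, taking the blurred images as given
    return float(np.sum((xb - yb) ** 2))

G = gaussian_kernel()
```

The kernel peaks at A in the center and is radially symmetric, so blurring with it discards fine detail and leaves the low-frequency color content that the loss compares.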
Based on any one of the above embodiments, an embodiment of the present invention provides an underwater vision enhancement device based on a cascaded depth network, and fig. 3 is a schematic structural diagram of the underwater vision enhancement device based on the cascaded depth network provided by the embodiment of the present invention. As shown in fig. 3, the apparatus comprises a determining unit 310 and an enhancing unit 320, wherein,
the determining unit 310 is configured to determine an underwater degraded image;
the enhancing unit 320 is configured to input the underwater degraded image into an underwater image enhancing model, and output an enhanced image corresponding to the underwater degraded image;
the underwater image enhancement model is obtained by training based on sample underwater degraded images and enhanced image labels corresponding to the sample underwater degraded images, the underwater image enhancement model adopts network training established by two stages of convolutional neural networks, and each stage of convolutional neural network consists of 5 convolutional layers and 3 dense connecting layers.
In the device provided by the embodiment of the present invention, the underwater degraded image is input into the underwater image enhancement model, which outputs the corresponding enhanced image. The underwater image enhancement model is trained on sample underwater degraded images and the enhanced image label corresponding to each sample underwater degraded image; its training network is established from two cascaded stages of convolutional neural networks, each consisting of 5 convolutional layers and 3 dense connection layers. Because the underwater image enhancement model is trained on a large number of samples and labels, the image enhancement effect of the model is greatly improved, and because the training network is built from two cascaded convolutional neural networks, the accuracy of the trained model is also greatly improved. The device provided by the embodiment of the present invention therefore improves both the accuracy of underwater image enhancement modeling and the underwater image enhancement effect.
Based on any one of the above embodiments, in the apparatus, the obtaining of the enhanced image tag corresponding to each sample underwater degraded image specifically includes:
respectively adopting four underwater image enhancement algorithms of white balance, histogram equalization, fusion and UDCP algorithm to the sample underwater degraded image to obtain four enhanced images corresponding to the sample underwater degraded image;
and evaluating four enhanced images corresponding to the sample underwater degraded image by adopting a mode of combining subjective evaluation and objective evaluation, and selecting the optimal image in the four enhanced images as an enhanced image label of the sample underwater degraded image.
Based on any one of the above embodiments, in the apparatus, the evaluating four enhanced images corresponding to the sample underwater degraded image in a manner of combining subjective evaluation and objective evaluation, and selecting an optimal image of the four enhanced images as an enhanced image tag of the sample underwater degraded image specifically includes:
the organization evaluator takes the sample underwater degraded image as a reference, carries out subjective scoring on the four enhanced images and selects a first best enhanced image;
performing objective evaluation on the four enhanced images by using an information entropy index, a UCIQE index and a UIQM index, and selecting a second best enhanced image;
if the first optimal enhanced image is consistent with the second optimal enhanced image, determining that the first optimal enhanced image is an enhanced image label of the sample underwater degraded image;
and if the first optimal enhanced image is inconsistent with the second optimal enhanced image, performing subjective scoring again, and selecting the image with the highest second subjective score as an enhanced image label of the sample underwater degraded image.
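The label-selection rule combining subjective and objective evaluation can be sketched as below; all score values are illustrative, and the combined objective score is assumed to already merge the information entropy, UCIQE and UIQM indices into one number:

```python
def choose_label(subjective, objective, rescore):
    """subjective/objective: dicts mapping enhancement-method name -> score.
    rescore: callable organizing a second round of subjective scoring over
    the disagreeing candidates, returning name -> score."""
    best_subj = max(subjective, key=subjective.get)   # first best enhanced image
    best_obj = max(objective, key=objective.get)      # second best enhanced image
    if best_subj == best_obj:
        return best_subj                              # consistent: use it directly
    # inconsistent: re-score subjectively and take the highest second score
    second = rescore([best_subj, best_obj])
    return max(second, key=second.get)

# Illustrative scores for the four candidate enhancement algorithms
subj = {"white_balance": 3.1, "hist_eq": 4.2, "fusion": 4.6, "UDCP": 3.8}
obj = {"white_balance": 0.52, "hist_eq": 0.61, "fusion": 0.58, "UDCP": 0.49}
```

Here the subjective round favors fusion while the objective round favors histogram equalization, so a second subjective round decides between those two.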
Based on any one of the above embodiments, in the apparatus, the underwater image enhancement model adopts network training established by two stages of convolutional neural networks, each stage of convolutional neural network is composed of 5 convolutional layers and 3 dense connection layers, and specifically includes:
the training network of the underwater image enhancement model is established by adopting a first-stage convolutional neural network and a second-stage convolutional neural network in a cascade manner;
establishing a R, G, B component-based three-color channel model of a first-stage convolutional neural network, wherein each color channel corresponds to one convolutional neural network, the convolutional neural network comprises 5 convolutional layers and 3 dense connecting layers, and the input of the first-stage convolutional neural network is a sample underwater degraded image;
the second stage convolution neural network comprises 5 convolution layers and 3 dense connection layers, wherein the input of the second stage convolution neural network is a semi-enhanced image synthesized by single-channel gray-scale maps of each color channel output by the first stage convolution neural network.
Based on any one of the above embodiments, in the apparatus, training the underwater image enhancement model based on the sample underwater degraded images and the enhanced image labels corresponding to the sample underwater degraded images specifically includes:
presetting maximum iteration times, a learning rate and a minimum loss threshold value of training;
and when the number of training iterations reaches the maximum iteration count, or the difference between the loss values obtained in two consecutive training iterations is smaller than the minimum loss threshold, stopping the network training and determining the final underwater image enhancement model.
Fig. 4 is a schematic entity structure diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 4, the electronic device may include: a processor (processor)401, a communication Interface (communication Interface)402, a memory (memory)403 and a communication bus 404, wherein the processor 401, the communication Interface 402 and the memory 403 complete communication with each other through the communication bus 404. The processor 401 may invoke a computer program stored on the memory 403 and executable on the processor 401 to perform the method for underwater vision enhancement based on the cascaded depth network provided by the above embodiments, for example, including: determining an underwater degraded image; inputting the underwater degraded image into an underwater image enhancement model, and outputting an enhanced image corresponding to the underwater degraded image; the underwater image enhancement model is obtained by training based on sample underwater degraded images and enhanced image labels corresponding to the sample underwater degraded images, the underwater image enhancement model adopts network training established by two stages of convolutional neural networks, and each stage of convolutional neural network consists of 5 convolutional layers and 3 dense connecting layers.
In addition, the logic instructions in the memory 403 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or make a contribution to the prior art, or may be implemented in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Embodiments of the present invention further provide a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program is implemented to perform the method for enhancing underwater vision based on a cascaded depth network provided in the foregoing embodiments when executed by a processor, for example, the method includes: determining an underwater degraded image; inputting the underwater degraded image into an underwater image enhancement model, and outputting an enhanced image corresponding to the underwater degraded image; the underwater image enhancement model is obtained by training based on sample underwater degraded images and enhanced image labels corresponding to the sample underwater degraded images, the underwater image enhancement model adopts network training established by two stages of convolutional neural networks, and each stage of convolutional neural network consists of 5 convolutional layers and 3 dense connecting layers.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. An underwater vision enhancement method based on a cascade deep network is characterized by comprising the following steps:
determining an underwater degraded image;
inputting the underwater degraded image into an underwater image enhancement model, and outputting an enhanced image corresponding to the underwater degraded image;
the underwater image enhancement model is obtained by training based on sample underwater degraded images and enhanced image labels corresponding to the sample underwater degraded images, the underwater image enhancement model adopts network training established by two stages of convolutional neural networks, and each stage of convolutional neural network consists of 5 convolutional layers and 3 dense connecting layers.
2. The underwater vision enhancement method based on the cascaded depth network of claim 1, wherein the obtaining of the enhanced image label corresponding to each sample underwater degraded image specifically comprises:
respectively adopting four underwater image enhancement algorithms of white balance, histogram equalization, fusion and UDCP algorithm to the sample underwater degraded image to obtain four enhanced images corresponding to the sample underwater degraded image;
and evaluating four enhanced images corresponding to the sample underwater degraded image by adopting a mode of combining subjective evaluation and objective evaluation, and selecting the optimal image in the four enhanced images as an enhanced image label of the sample underwater degraded image.
3. The method according to claim 2, wherein the four enhanced images corresponding to the sample underwater degraded image are evaluated in a manner of combining subjective evaluation and objective evaluation, and the best image of the four enhanced images is selected as the enhanced image label of the sample underwater degraded image, and specifically comprises:
the organization evaluator takes the sample underwater degraded image as a reference, carries out subjective scoring on the four enhanced images and selects a first best enhanced image;
performing objective evaluation on the four enhanced images by using an information entropy index, a UCIQE index and a UIQM index, and selecting a second best enhanced image;
if the first optimal enhanced image is consistent with the second optimal enhanced image, determining that the first optimal enhanced image is an enhanced image label of the sample underwater degraded image;
and if the first optimal enhanced image is inconsistent with the second optimal enhanced image, performing subjective scoring again, and selecting the image with the highest second subjective score as an enhanced image label of the sample underwater degraded image.
4. The underwater vision enhancement method based on the cascaded depth network of claim 1, wherein the underwater image enhancement model adopts network training established by two stages of convolutional neural networks, each stage of convolutional neural networks is composed of 5 convolutional layers and 3 dense connection layers, and specifically comprises:
the training network of the underwater image enhancement model is established by adopting a first-stage convolutional neural network and a second-stage convolutional neural network in a cascade manner;
establishing a R, G, B component-based three-color channel model of a first-stage convolutional neural network, wherein each color channel corresponds to one convolutional neural network, the convolutional neural network comprises 5 convolutional layers and 3 dense connecting layers, and the input of the first-stage convolutional neural network is a sample underwater degraded image;
the second stage convolution neural network comprises 5 convolution layers and 3 dense connection layers, wherein the input of the second stage convolution neural network is a semi-enhanced image synthesized by single-channel gray-scale maps of each color channel output by the first stage convolution neural network.
5. The underwater vision enhancement method based on the cascaded depth network of claim 1, wherein the underwater image enhancement model is trained based on the sample underwater degraded images and the enhanced image labels corresponding to the sample underwater degraded images, and specifically comprises:
presetting maximum iteration times, a learning rate and a minimum loss threshold value of training;
and when the iteration number of the training reaches the maximum iteration number or the difference value of the loss functions obtained by the two times of training is smaller than the minimum loss threshold value, stopping the network training and determining the final underwater image enhancement model.
6. An underwater vision enhancement device based on a cascaded depth network, comprising:
a determination unit for determining an underwater degraded image;
the enhancement unit is used for inputting the underwater degraded image into an underwater image enhancement model and outputting an enhanced image corresponding to the underwater degraded image;
the underwater image enhancement model is obtained by training based on sample underwater degraded images and enhanced image labels corresponding to the sample underwater degraded images, the underwater image enhancement model adopts network training established by two stages of convolutional neural networks, and each stage of convolutional neural network consists of 5 convolutional layers and 3 dense connecting layers.
7. The cascaded depth network-based underwater vision enhancement device of claim 6, wherein the obtaining of the enhanced image tag corresponding to each sample underwater degraded image specifically comprises:
respectively adopting four underwater image enhancement algorithms of white balance, histogram equalization, fusion and UDCP algorithm to the sample underwater degraded image to obtain four enhanced images corresponding to the sample underwater degraded image;
and evaluating four enhanced images corresponding to the sample underwater degraded image by adopting a mode of combining subjective evaluation and objective evaluation, and selecting the optimal image in the four enhanced images as an enhanced image label of the sample underwater degraded image.
8. The underwater vision enhancement device based on the cascaded depth network of claim 6, wherein the four enhanced images corresponding to the sample underwater degraded image are evaluated in a manner of combining subjective evaluation and objective evaluation, and an optimal image of the four enhanced images is selected as an enhanced image label of the sample underwater degraded image, specifically comprising:
the organization evaluator takes the sample underwater degraded image as a reference, carries out subjective scoring on the four enhanced images and selects a first best enhanced image;
performing objective evaluation on the four enhanced images by using an information entropy index, a UCIQE index and a UIQM index, and selecting a second best enhanced image;
if the first optimal enhanced image is consistent with the second optimal enhanced image, determining that the first optimal enhanced image is an enhanced image label of the sample underwater degraded image;
and if the first optimal enhanced image is inconsistent with the second optimal enhanced image, performing subjective scoring again, and selecting the image with the highest second subjective score as an enhanced image label of the sample underwater degraded image.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the cascaded deep network based underwater vision enhancement method as claimed in any one of claims 1 to 5.
10. A non-transitory computer readable storage medium having stored thereon a computer program, which when executed by a processor, performs the steps of the cascaded depth network based underwater vision enhancement method of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010121157.5A CN111415304A (en) | 2020-02-26 | 2020-02-26 | Underwater vision enhancement method and device based on cascade deep network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010121157.5A CN111415304A (en) | 2020-02-26 | 2020-02-26 | Underwater vision enhancement method and device based on cascade deep network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111415304A true CN111415304A (en) | 2020-07-14 |
Family
ID=71492820
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010121157.5A Pending CN111415304A (en) | 2020-02-26 | 2020-02-26 | Underwater vision enhancement method and device based on cascade deep network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111415304A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112085681A (en) * | 2020-09-09 | 2020-12-15 | 苏州科达科技股份有限公司 | Image enhancement method, system, device and storage medium based on deep learning |
CN112669225A (en) * | 2020-12-01 | 2021-04-16 | 宁波大学科学技术学院 | Underwater image enhancement method and system based on structural decomposition and storage medium |
CN112801536A (en) * | 2021-02-20 | 2021-05-14 | 北京金山云网络技术有限公司 | Image processing method and device and electronic equipment |
CN114612347A (en) * | 2022-05-11 | 2022-06-10 | 北京科技大学 | Multi-module cascade underwater image enhancement method |
CN117218033A (en) * | 2023-09-27 | 2023-12-12 | 仲恺农业工程学院 | Underwater image restoration method, device, equipment and medium |
CN112801536B (en) * | 2021-02-20 | 2024-04-30 | 北京金山云网络技术有限公司 | Image processing method and device and electronic equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106846261A (en) * | 2016-12-21 | 2017-06-13 | 大连海事大学 | Underwater picture processing method based on convolutional neural networks |
CN107610123A (en) * | 2017-10-11 | 2018-01-19 | 中共中央办公厅电子科技学院 | A kind of image aesthetic quality evaluation method based on depth convolutional neural networks |
CN109147254A (en) * | 2018-07-18 | 2019-01-04 | 武汉大学 | A kind of video outdoor fire disaster smog real-time detection method based on convolutional neural networks |
-
2020
- 2020-02-26 CN CN202010121157.5A patent/CN111415304A/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106846261A (en) * | 2016-12-21 | 2017-06-13 | 大连海事大学 | Underwater picture processing method based on convolutional neural networks |
CN107610123A (en) * | 2017-10-11 | 2018-01-19 | 中共中央办公厅电子科技学院 | A kind of image aesthetic quality evaluation method based on depth convolutional neural networks |
CN109147254A (en) * | 2018-07-18 | 2019-01-04 | 武汉大学 | A kind of video outdoor fire disaster smog real-time detection method based on convolutional neural networks |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112085681A (en) * | 2020-09-09 | 2020-12-15 | 苏州科达科技股份有限公司 | Image enhancement method, system, device and storage medium based on deep learning |
CN112085681B (en) * | 2020-09-09 | 2023-04-07 | 苏州科达科技股份有限公司 | Image enhancement method, system, device and storage medium based on deep learning |
CN112669225A (en) * | 2020-12-01 | 2021-04-16 | 宁波大学科学技术学院 | Underwater image enhancement method and system based on structural decomposition and storage medium |
CN112801536A (en) * | 2021-02-20 | 2021-05-14 | 北京金山云网络技术有限公司 | Image processing method and device and electronic equipment |
CN112801536B (en) * | 2021-02-20 | 2024-04-30 | 北京金山云网络技术有限公司 | Image processing method and device and electronic equipment |
CN114612347A (en) * | 2022-05-11 | 2022-06-10 | 北京科技大学 | Multi-module cascade underwater image enhancement method |
CN117218033A (en) * | 2023-09-27 | 2023-12-12 | 仲恺农业工程学院 | Underwater image restoration method, device, equipment and medium |
CN117218033B (en) * | 2023-09-27 | 2024-03-12 | 仲恺农业工程学院 | Underwater image restoration method, device, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108520504B (en) | | End-to-end blind restoration method for blurred images based on a generative adversarial network |
Amirshahi et al. | | Image quality assessment by comparing CNN features between images |
Fu et al. | | Uncertainty inspired underwater image enhancement |
CN111415304A (en) | | Underwater vision enhancement method and device based on cascade deep network |
US8908989B2 (en) | | Recursive conditional means image denoising |
CN111079740A (en) | | Image quality evaluation method, electronic device, and computer-readable storage medium |
CN112381897B (en) | | Low-illumination image enhancement method based on an autoencoder network structure |
CN109816666B (en) | | Symmetric fully convolutional neural network model construction method, fundus image blood vessel segmentation device, computer equipment, and storage medium |
CN109859152B (en) | | Model generation method, image enhancement method, device, and computer-readable storage medium |
CN110335221B (en) | | Multi-exposure image fusion method based on unsupervised learning |
CN113284061B (en) | | Underwater image enhancement method based on gradient network |
Steffens et al. | | CNN based image restoration: Adjusting ill-exposed sRGB images in post-processing |
CN111080531A (en) | | Super-resolution reconstruction method, system, and device for underwater fish images |
CN111882555B (en) | | Deep learning-based netting detection method, device, equipment, and storage medium |
Saleh et al. | | Adaptive uncertainty distribution in deep learning for unsupervised underwater image enhancement |
CN115131229A (en) | | Image noise reduction and filtering data processing method, device, and computer equipment |
Wang et al. | | Multiscale supervision-guided context aggregation network for single image dehazing |
CN109376782B (en) | | Support vector machine cataract classification method and device based on eye image features |
Saleem et al. | | A non-reference evaluation of underwater image enhancement methods using a new underwater image dataset |
Li et al. | | Adaptive weighted multiscale Retinex for underwater image enhancement |
CN110009574A (en) | | Method for inversely generating high-dynamic-range images with adaptive brightness and color from detail-rich low-dynamic-range images |
CN114648467B (en) | | Image defogging method and device, terminal equipment, and computer-readable storage medium |
Siddiqui et al. | | Hierarchical color correction for camera cell phone images |
CN115880176A (en) | | Multi-scale unpaired underwater image enhancement method |
CN115457015A (en) | | No-reference image quality evaluation method and device based on a visual interactive perception dual-stream network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||