CN117934340A - Retinex variation underwater image enhancement method and device based on deep expansion network - Google Patents

Retinex variation underwater image enhancement method and device based on deep expansion network

Info

Publication number
CN117934340A
Authority
CN
China
Prior art keywords
channel
image
underwater
network
expansion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410342527.6A
Other languages
Chinese (zh)
Other versions
CN117934340B (en)
Inventor
庄培显
李江昀
张天翔
王宏
张新恒
童俊杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology Beijing USTB
Shunde Innovation School of University of Science and Technology Beijing
Original Assignee
University of Science and Technology Beijing USTB
Shunde Innovation School of University of Science and Technology Beijing
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology Beijing USTB, Shunde Innovation School of University of Science and Technology Beijing filed Critical University of Science and Technology Beijing USTB
Priority to CN202410342527.6A priority Critical patent/CN117934340B/en
Publication of CN117934340A publication Critical patent/CN117934340A/en
Application granted granted Critical
Publication of CN117934340B publication Critical patent/CN117934340B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a Retinex variation underwater image enhancement method and device based on a depth expansion network, relating to the technical field of underwater image enhancement and comprising the following steps: inputting an underwater degraded image; performing color correction by using a color white balance method; decomposing the brightness channel of the HSV color space of the color-corrected underwater image into a reflectivity component and an illumination component, learning the respective image prior and gradient prior of the reflectivity component and the illumination component with a depth expansion network, and obtaining the enhancement results of the reflectivity component and the illumination component by a network expansion method of alternate iterative optimization; correcting the enhancement result of the illumination component by a gamma correction method; and multiplying the enhancement result of the reflectivity component and the gamma-corrected illumination component point by point to obtain an enhanced brightness channel, combining the enhanced brightness channel with the chromaticity channel and the saturation channel of the HSV color space, and converting the combined image into the RGB space to obtain the final underwater enhanced image. The invention can efficiently output high-quality underwater enhanced images.

Description

Retinex variation underwater image enhancement method and device based on deep expansion network
Technical Field
The invention relates to the technical field of underwater image enhancement, in particular to a Retinex variation underwater image enhancement method and device based on a depth expansion network.
Background
Due to the complex physical characteristics of the underwater environment, underwater imaging devices easily acquire degraded images with low visibility and distorted colors. According to the underwater optical imaging mechanism, the light captured by an underwater imaging device mainly consists of three components: a direct component (light reflected by the object and not scattered), a forward-scattering component (light reflected by the object and scattered at a small angle), and a backward-scattering component (light reflected not by the target object but by floating particles and the like). The acquired underwater image is regarded as a linear combination of these three components. The forward-scattering component tends to blur image structures, while the backward-scattering component masks image edges and details. At the same time, light of different wavelengths attenuates in water at different rates: red light, which has the longest wavelength and the lowest energy, disappears first, whereas blue light, with a relatively short wavelength and relatively high energy, attenuates most slowly; this characteristic causes acquired underwater images to generally exhibit a blue hue.
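For reference, the linear combination described above can be written compactly as follows; this is a generic formalization of the three-component model just described rather than a formula quoted from the patent, and the symbols E_T, E_d, E_f and E_b are introduced here purely for illustration.

```latex
% Illustrative formalization of the three-component underwater imaging model:
%   E_T : total light captured by the underwater imaging device
%   E_d : direct component (reflected by the object, not scattered)
%   E_f : forward-scattering component (reflected, scattered at a small angle)
%   E_b : backward-scattering component (light from floating particles, etc.)
E_T(x) = E_d(x) + E_f(x) + E_b(x)
```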
Existing underwater image enhancement methods need manually designed sparse prior modeling of the reflectivity component and the illumination component of the underwater image, and solving these prior constraints is very time-consuming. Meanwhile, in practical, complex underwater environments it is difficult to model the priors of the reflectivity component and the illumination component accurately, which directly affects the enhancement performance on the details and structures of the underwater image.
Disclosure of Invention
The invention provides a Retinex variation underwater image enhancement method and device based on a deep expansion network, which are used for solving the problems existing in the prior art, and the technical scheme provided by the invention is as follows:
In one aspect, a Retinex variant underwater image enhancement method based on a depth expansion network is provided, and the method comprises the following steps:
s1, inputting underwater degraded images;
S2, performing color correction on the underwater degraded image by using a color white balance method;
S3, decomposing a brightness channel of an HSV color space of the color-corrected underwater image into a reflectivity component and an illumination component according to a Retinex theory, then adopting a depth expansion network to learn the respective image prior and gradient prior of the reflectivity component and the illumination component, and obtaining an enhancement result of the reflectivity component and the illumination component through an alternate iterative optimization network expansion method;
S4, correcting an enhancement result of the illumination component by using a gamma correction method;
S5, multiplying the enhancement result of the reflectivity component and the gamma corrected illumination component point by point of the image pixels to obtain an enhanced brightness channel, combining the enhanced brightness channel, the chromaticity channel and the saturation channel of the HSV color space to obtain a combined image, and converting the combined image from the HSV space to the RGB space to obtain and output a final underwater enhanced image.
Optionally, the color correction of the underwater degraded image by using the color white balance method in S2 specifically includes:
S21, calculating an average value of each color channel in the RGB color space of the underwater degradation image:
(1)
(2)
(3)
Wherein the subscript {r, g, b} denotes a color channel, with r the red channel, g the green channel and b the blue channel; I is the input underwater degraded image, comprising a red channel I_r, a green channel I_g and a blue channel I_b; I_avg is the channel-wise average of the underwater image, comprising a red-channel average I_avg^r, a green-channel average I_avg^g and a blue-channel average I_avg^b; i is the current row coordinate and j the current column coordinate of an image pixel, with 1 ≤ i ≤ M and 1 ≤ j ≤ N; M is the range of pixel row coordinates and N the range of pixel column coordinates of the input underwater degraded image I;
s22, calculating corresponding gain coefficients to enhance the red channel and the green channel, wherein the specific calculation process is as follows:
(4)
(5)
wherein the left-hand side of formula (4) is the enhanced image of the red channel I_r, and the left-hand side of formula (5) is the enhanced image of the green channel I_g;
S23, converting the enhanced underwater image from an RGB color space to an HSV color space, wherein the HSV color space comprises a brightness channel, a saturation channel and a chromaticity channel, and respectively carrying out normalization processing calculation on the brightness channel and the saturation channel of the HSV color space:
(6)
(7)
Wherein V_in is the luminance channel of the HSV color space, S_in is the saturation channel of the HSV color space, V_max and V_min are the maximum and minimum values of V_in, and S_max and S_min are the maximum and minimum values of S_in.
Optionally, the step S3 specifically includes:
A Retinex variation enhancement model based on a depth expansion network is established, and an objective function is expressed as follows:
(8)
wherein the first term is the data fidelity term, which uses an L_2-norm constraint to keep the pixel-wise product of the illumination component L to be solved and the reflectivity component R consistent with the brightness channel V; the two regularization terms on the reflectivity component R represent the image prior and the gradient prior of the reflectivity component, respectively; the two regularization terms on the illumination component L represent the image prior and the gradient prior of the illumination component, respectively; ∇ is the gradient operator, comprising first-order difference operations in the horizontal and vertical directions; the four prior functions are implicit prior functions that need no specific explicit expression form set in advance and are learned by the deep expansion network; the weight parameters are all iteratively updated through deep expansion network learning;
The data fidelity term and the regularization terms are processed separately: they are decoupled by a variable splitting method, and four auxiliary variables Q, P, T and J are introduced to approximate R, L, ∇R and ∇L, respectively, converting equation (8) into the corresponding augmented Lagrangian function:
(9)
wherein the newly introduced coefficients are balance weight parameters; Q, P, T and J are the approximation variables of R, L, ∇R and ∇L, respectively; equation (9) is solved by the network expansion method of alternate iterative optimization.
Optionally, the solving the formula (9) by the network expansion method of the alternate iterative optimization specifically includes:
1) The current reflectivity component R and illumination component L are fixed, and the objective functions for solving the four auxiliary variables Q, P, T, J are respectively:
(10)
(11)
(12)
(13)
wherein R_k, L_k, ∇R_k and ∇L_k are the k-th iteration results of R, L, ∇R and ∇L, respectively; the four auxiliary variables are solved by an expansion module of the deep expansion network, and the corresponding variables are updated over three iterations, as follows:
Mapping the iterative updating step into a deep neural network architecture, and expanding the iterative updating step into three stages, wherein each stage corresponds to one iteration, the iteration is completed by one expansion module of the deep expansion network, and four auxiliary variables Q, P, T, J are updated in an alternating manner as follows:
(14)
(15)
(16)
(17)
Wherein Q_{k+1}, P_{k+1}, T_{k+1} and J_{k+1} are the (k+1)-th iteration results of Q, P, T and J, respectively; the right-hand sides of formulas (14)-(17) are deep convolutional neural networks (one per formula) that take {R_k, L_k, ∇R_k, ∇L_k} as input variables; these four networks are instances of the first deep convolutional neural network, which learns the implicit prior knowledge of the auxiliary variables from the training data;
2) Fixing the four current auxiliary variables Q, P, T, J, and solving the objective functions of the reflectivity component R and the illumination component L as follows:
(18)
(19)
Wherein the subscript k denotes the k-th iteration and k+1 the (k+1)-th iteration; L_k is the k-th iteration result of L; Q_{k+1}, T_{k+1}, R_{k+1}, P_{k+1} and J_{k+1} are the (k+1)-th iteration results of Q, T, R, P and J, respectively; the divisions in formulas (18) and (19) are pixel-wise (point-by-point) division operations between images. R and L are solved by the expansion modules of the deep expansion network: the iterative update step is mapped into the deep neural network architecture and expanded into three stages, each stage corresponding to one iteration and completed by one expansion module of the deep expansion network, and the two variables R and L are alternately updated as follows:
(20)
(21)
Wherein L_{k+1} is the (k+1)-th iteration result of L; the right-hand sides of formulas (20) and (21) are deep convolutional neural networks (one per formula) whose input variables are the quantities updated above; their weights are learnable network parameters; ∇ is the gradient operator. By designing this second deep convolutional neural network, the implicit prior knowledge of R and L is learned from the training data, which avoids designing complex regularization terms; at the same time, the end-to-end network form gives a fast test speed, overcoming the time-consuming problem of solving hand-crafted prior models.
3) Repeating steps 1) -2) until convergence, obtaining an enhanced reflectivity component R E and an enhanced illumination component L E after the iteration is finished.
Optionally, the depth expansion network converts the underwater image enhancement problem based on the Retinex variation model into an optimization expansion problem of a learnable network. The depth expansion network adaptively fits the image prior and the gradient prior of the reflectivity component and the illumination component in a data-driven manner, and is trained in an end-to-end, full-parameter manner. The depth expansion network comprises a plurality of expansion modules; each expansion module performs one iterative update of each solution variable and comprises a plurality of deep convolutional neural networks, which learn the implicit prior knowledge of each solution variable from the training data.
Optionally, the first deep convolutional neural network comprises 4 convolutional layers and 1 ReLU activation layer; the second deep convolutional neural network comprises 5 convolutional layers and 1 ReLU activation layer.
Optionally, the S4 specifically includes:
The calculation formula of the gamma-corrected illumination component L' E is as follows (22):
(22)
Where L E is the enhanced illumination component and L' E is the gamma corrected illumination component.
In another aspect, there is provided a Retinex variant underwater image enhancement device based on a depth expansion network, the device comprising:
The input module is used for inputting underwater degradation images;
The color correction module is used for performing color correction on the underwater degraded image by using a color white balance method;
the first processing module is used for decomposing a brightness channel of an HSV color space of the color-corrected underwater image into a reflectivity component and an illumination component according to the Retinex theory, then adopting a depth expansion network to learn the respective image prior and gradient prior of the reflectivity component and the illumination component, and obtaining an enhancement result of the reflectivity component and the illumination component through an alternate iterative optimization network expansion method;
The correction module is used for correcting the enhancement result of the illumination component by utilizing a gamma correction method;
the second processing module is used for multiplying the enhancement result of the reflectivity component and the gamma corrected illumination component point by point of the image pixels to obtain an enhanced brightness channel, combining the enhanced brightness channel, the chromaticity channel and the saturation channel of the HSV color space to obtain a combined image, and converting the combined image from the HSV space to the RGB space to obtain and output a final underwater enhanced image.
In another aspect, an electronic device is provided, the electronic device including a processor and a memory, the memory having instructions stored therein, the instructions being loaded and executed by the processor to implement the aforementioned Retinex variant underwater image enhancement method based on a depth-expanded network.
In another aspect, a computer readable storage medium having instructions stored therein that are loaded and executed by a processor to implement the aforementioned Retinex variant underwater image enhancement method based on a deep expansion network is provided.
Compared with the prior art, the technical scheme has at least the following beneficial effects:
Compared with existing underwater image enhancement methods driven by Retinex variational models, the method unfolds the Retinex-variational-model-driven underwater image enhancement problem into an optimization problem of a learnable network. In each unfolding module of the deep unfolding network, several deep convolutional neural networks are designed to adaptively fit the image prior and the gradient prior of the reflectivity component and the illumination component in a data-driven manner, which more comprehensively and effectively overcomes the difficulty of accurately modeling regularization priors in the underwater environment. At the same time, each step of the iterative optimization is designed as an unfolding module of the deep unfolding network, so that the enhancement results of the reflectivity component and the illumination component are solved quickly and accurately through end-to-end full-parameter network training, and the whole algorithm runs with comparatively high efficiency. In addition, a color white balance method is applied to the color degradation of the underwater degraded image: the average value of each color channel of the RGB (red, green, blue) color space is calculated, the corresponding gains are computed from these averages to enhance the red and green channels, the underwater image is then converted to the HSV color space, and the brightness channel and saturation channel of the HSV color space are normalized. Through this joint processing in the RGB and HSV color spaces, a better color correction result is obtained for the underwater image, particularly in terms of preserving color naturalness.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a Retinex variant underwater image enhancement method based on a depth expansion network according to an embodiment of the present invention;
FIG. 2 is a flow chart of color correction according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an expansion module of a deep expansion network according to an embodiment of the present invention;
fig. 4 is a block diagram of a Retinex variant underwater image enhancement device based on a depth expansion network according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more clear, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings of the embodiments of the present invention. It will be apparent that the described embodiments are some, but not all, embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without creative efforts, based on the described embodiments of the present invention fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a Retinex variant underwater image enhancement method based on a depth expansion network, where the method includes:
s1, inputting underwater degraded images;
S2, performing color correction on the underwater degraded image by using a color white balance method;
S3, decomposing a brightness channel of an HSV color space of the color-corrected underwater image into a reflectivity component and an illumination component according to a Retinex theory, then adopting a depth expansion network to learn the respective image prior and gradient prior of the reflectivity component and the illumination component, and obtaining an enhancement result of the reflectivity component and the illumination component through an alternate iterative optimization network expansion method;
S4, correcting an enhancement result of the illumination component by using a gamma correction method;
S5, multiplying the enhancement result of the reflectivity component and the gamma corrected illumination component point by point of the image pixels to obtain an enhanced brightness channel, combining the enhanced brightness channel, the chromaticity channel and the saturation channel of the HSV color space to obtain a combined image, and converting the combined image from the HSV space to the RGB space to obtain and output a final underwater enhanced image.
The following describes in detail a Retinex variant underwater image enhancement method based on a depth expansion network, with reference to fig. 2 to fig. 3, where the method includes:
s1, inputting underwater degraded images;
The underwater degraded image serves as the starting input data for the subsequent steps.
S2, performing color correction on the underwater degraded image by using a color white balance method;
Optionally, as shown in fig. 2, the performing color correction on the underwater degraded image by using the color white balance method in S2 specifically includes:
S21, calculating an average value of each color channel in the RGB color space of the underwater degradation image:
I_avg^r = (1 / (M·N)) · Σ_{i=1}^{M} Σ_{j=1}^{N} I_r(i, j) (1)

I_avg^g = (1 / (M·N)) · Σ_{i=1}^{M} Σ_{j=1}^{N} I_g(i, j) (2)

I_avg^b = (1 / (M·N)) · Σ_{i=1}^{M} Σ_{j=1}^{N} I_b(i, j) (3)
Wherein the subscript {r, g, b} denotes a color channel, with r the red channel, g the green channel and b the blue channel; I is the input underwater degraded image, comprising a red channel I_r, a green channel I_g and a blue channel I_b; I_avg is the channel-wise average of the underwater image, comprising a red-channel average I_avg^r, a green-channel average I_avg^g and a blue-channel average I_avg^b; i is the current row coordinate and j the current column coordinate of an image pixel, with 1 ≤ i ≤ M and 1 ≤ j ≤ N; M is the range of pixel row coordinates and N the range of pixel column coordinates of the input underwater degraded image I;
The embodiment of the invention keeps the dominant color channel constant through the operation.
S22, calculating corresponding gain coefficients to enhance the red channel and the green channel, wherein the specific calculation process is as follows:
(4)
(5)
wherein the left-hand side of formula (4) is the enhanced image of the red channel I_r, and the left-hand side of formula (5) is the enhanced image of the green channel I_g;
aiming at the problem of blue color shift of the underwater degraded image, the embodiment of the invention enhances the red channel and the green channel by calculating the corresponding gain coefficients so as to realize the color balance of the underwater image.
S23, converting the enhanced underwater image from an RGB color space to an HSV color space, wherein the HSV color space comprises a brightness channel, a saturation channel and a chromaticity channel, and respectively carrying out normalization processing calculation on the brightness channel and the saturation channel of the HSV color space:
(6)
(7)
Wherein V_in is the luminance channel of the HSV color space, S_in is the saturation channel of the HSV color space, V_max and V_min are the maximum and minimum values of V_in, and S_max and S_min are the maximum and minimum values of S_in.
The embodiment of the invention effectively solves the problem of color distortion of the underwater image through the color correction.
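As a concrete illustration of steps S21-S23, the following sketch (Python/NumPy) implements one plausible reading of the color correction. Since the exact gain formulas (4)-(5) are rendered as images in the source, it assumes the red and green channels are rescaled by the ratio of the blue-channel average to their own averages, which keeps the dominant (blue) channel constant as stated above; the function name and these details are choices made for this sketch only.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def white_balance_and_hsv(img_rgb):
    """Sketch of steps S21-S23 for an RGB image in [0, 1] with shape (M, N, 3).

    Assumed gain form for formulas (4)-(5): red and green are rescaled so that
    their means match the mean of the dominant blue channel, which is left
    unchanged; the exact formulas are not reproduced in the source text.
    """
    # S21: average value of each color channel over the M x N pixels (formulas (1)-(3))
    i_avg = img_rgb.reshape(-1, 3).mean(axis=0)            # [I_avg^r, I_avg^g, I_avg^b]

    # S22: gain coefficients to enhance the red and green channels (assumed form)
    out = img_rgb.copy()
    out[..., 0] = np.clip(img_rgb[..., 0] * i_avg[2] / (i_avg[0] + 1e-8), 0.0, 1.0)  # red
    out[..., 1] = np.clip(img_rgb[..., 1] * i_avg[2] / (i_avg[1] + 1e-8), 0.0, 1.0)  # green

    # S23: RGB -> HSV, then min-max normalize the brightness (V) and saturation (S) channels
    hsv = rgb_to_hsv(out)
    for c in (1, 2):                                       # channel 1 = S, channel 2 = V
        ch = hsv[..., c]
        hsv[..., c] = (ch - ch.min()) / (ch.max() - ch.min() + 1e-8)
    return hsv                                             # hue untouched; V and S normalized
```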
S3, decomposing a brightness channel of an HSV color space of the color-corrected underwater image into a reflectivity component and an illumination component according to a Retinex theory, then adopting a depth expansion network to learn the respective image prior and gradient prior of the reflectivity component and the illumination component, and obtaining an enhancement result of the reflectivity component and the illumination component through an alternate iterative optimization network expansion method;
Optionally, the step S3 specifically includes:
A Retinex variation enhancement model based on a depth expansion network is established, and an objective function is expressed as follows:
(8)
wherein the first term is the data fidelity term, which uses an L_2-norm constraint to keep the pixel-wise product of the illumination component L to be solved and the reflectivity component R consistent with the brightness channel V; the two regularization terms on the reflectivity component R represent the image prior and the gradient prior of the reflectivity component, respectively; the two regularization terms on the illumination component L represent the image prior and the gradient prior of the illumination component, respectively; ∇ is the gradient operator, comprising first-order difference operations in the horizontal and vertical directions; the four prior functions are implicit prior functions that need no specific explicit expression form set in advance and are learned by the deep expansion network; the weight parameters (initial values set to 1, 10, 0.25 and 0.5, respectively) are iteratively updated through deep expansion network learning;
Because formula (8) contains four unknown implicit prior functions, it cannot be solved directly with traditional optimization methods such as gradient descent. For ease of optimization, the data fidelity term and the regularization terms are therefore processed separately: they are decoupled by a variable splitting method, and four auxiliary variables Q, P, T and J are introduced to approximate R, L, ∇R and ∇L, respectively, converting equation (8) into the corresponding augmented Lagrangian function:
(9)
wherein the balance weight parameters have initial values of 0.1, 0.5, 0.01 and 0.05, respectively, and increase by 0.005 in each iteration; Q, P, T and J are the approximation variables of R, L, ∇R and ∇L, respectively; equation (9) is solved by the network expansion method of alternate iterative optimization.
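Because formulas (8) and (9) appear only as images in the source, the following LaTeX sketch records one plausible form that is consistent with the surrounding description (an L_2 data fidelity term on V ≈ R∘L, four implicit priors, and quadratic splitting penalties); the symbols φ_1-φ_4 for the priors, α, β, γ, δ for their weights and μ_1-μ_4 for the balance weights are notation introduced here, not the patent's own.

```latex
% Plausible form of the Retinex variational model (8) described above:
\min_{R,L}\ \tfrac{1}{2}\,\lVert V - R \circ L \rVert_2^2
  + \alpha\,\phi_1(R) + \beta\,\phi_2(\nabla R)
  + \gamma\,\phi_3(L) + \delta\,\phi_4(\nabla L)

% Plausible split/penalty form corresponding to (9), with auxiliary variables
% Q \approx R, P \approx L, T \approx \nabla R, J \approx \nabla L and balance
% weights \mu_1..\mu_4 (any Lagrange-multiplier terms are not recoverable
% from the text and are omitted here):
\min_{R,L,Q,P,T,J}\ \tfrac{1}{2}\,\lVert V - R \circ L \rVert_2^2
  + \alpha\,\phi_1(Q) + \beta\,\phi_2(T) + \gamma\,\phi_3(P) + \delta\,\phi_4(J)
  + \tfrac{\mu_1}{2}\lVert Q - R \rVert_2^2 + \tfrac{\mu_2}{2}\lVert P - L \rVert_2^2
  + \tfrac{\mu_3}{2}\lVert T - \nabla R \rVert_2^2 + \tfrac{\mu_4}{2}\lVert J - \nabla L \rVert_2^2
```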
Optionally, the solving the formula (9) by the network expansion method of the alternate iteration optimization includes:
1) The current reflectivity component R and illumination component L are fixed, and the objective functions for solving the four auxiliary variables Q, P, T, J are respectively:
(10)
(11)
(12)
(13)
wherein R_k, L_k, ∇R_k and ∇L_k are the k-th iteration results of R, L, ∇R and ∇L, respectively; the four auxiliary variables are solved by an expansion module of the deep expansion network, and the corresponding variables are updated over three iterations, as follows:
The iterative updating step is mapped into a deep neural network architecture and expanded into three stages, where each stage corresponds to one iteration and is completed by one expansion module of the deep expansion network. As shown in fig. 3, one expansion module performs one iterative update, in which the four auxiliary variables Q, P, T, J are updated in an alternating manner as follows:
(14)
(15)
(16)
(17)
Wherein Q_{k+1}, P_{k+1}, T_{k+1} and J_{k+1} are the (k+1)-th iteration results of Q, P, T and J, respectively; the right-hand sides of formulas (14)-(17) are deep convolutional neural networks (one per formula) that take {R_k, L_k, ∇R_k, ∇L_k} as input variables; these four networks are instances of the first deep convolutional neural network, which learns the implicit prior knowledge of the auxiliary variables from the training data;
2) Fixing the four current auxiliary variables Q, P, T, J, and solving the objective functions of the reflectivity component R and the illumination component L as follows:
(18)
(19)
Wherein the subscript k denotes the k-th iteration and k+1 the (k+1)-th iteration; L_k is the k-th iteration result of L; Q_{k+1}, T_{k+1}, R_{k+1}, P_{k+1} and J_{k+1} are the (k+1)-th iteration results of Q, T, R, P and J, respectively; the divisions in formulas (18) and (19) are pixel-wise (point-by-point) division operations between images. Similarly to the auxiliary-variable solution described above, R and L are solved by the expansion modules of the deep expansion network: the iterative update step is mapped into the deep neural network architecture and expanded into three stages, each stage corresponding to one iteration and completed by one expansion module of the deep expansion network. As shown in fig. 3, one expansion module performs one iterative update, in which the two variables R and L are alternately updated as follows:
(20)
(21)
Wherein L_{k+1} is the (k+1)-th iteration result of L; the right-hand sides of formulas (20) and (21) are deep convolutional neural networks (one per formula) whose input variables are the quantities updated above; their weights are learnable network parameters; ∇ is the gradient operator. By designing this second deep convolutional neural network, the implicit prior knowledge of R and L is learned from the training data, which avoids designing complex regularization terms; at the same time, the end-to-end network form gives a fast test speed, overcoming the time-consuming problem of solving hand-crafted prior models.
3) Repeating steps 1) -2) until convergence, obtaining an enhanced reflectivity component R E and an enhanced illumination component L E after the iteration is finished.
Optionally, the depth expansion network converts the underwater image enhancement problem based on the Retinex variation model into an optimization expansion problem of a learnable network. The depth expansion network adaptively fits the image prior and the gradient prior of the reflectivity component and the illumination component in a data-driven manner, and is trained in an end-to-end, full-parameter manner. The depth expansion network comprises a plurality of expansion modules; each expansion module performs one iterative update of each solution variable and comprises a plurality of deep convolutional neural networks, which learn the implicit prior knowledge of each solution variable from the training data.
Optionally, depending on the sizes of the variables to be solved, the first deep convolutional neural network comprises 4 convolutional layers and 1 ReLU activation layer, and the second deep convolutional neural network comprises 5 convolutional layers and 1 ReLU activation layer.
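To make the structure above concrete, the following PyTorch sketch stacks three expansion modules, each containing 4-convolution prior networks for the auxiliary-variable updates (14)-(17) and 5-convolution networks for the R and L updates (20)-(21), trained end to end. The layer widths, the concatenation-based fusion of inputs, the initialization of R and L, and the omission of the closed-form part of updates (18)-(19) are all assumptions made for this illustration; only the layer counts and the three-stage, end-to-end design follow the description.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_map(x):
    """Gradient operator: first-order differences in the horizontal and vertical
    directions, zero-padded to preserve spatial size (2 output channels per input channel)."""
    dx = F.pad(x[..., :, 1:] - x[..., :, :-1], (0, 1))            # horizontal difference
    dy = F.pad(x[..., 1:, :] - x[..., :-1, :], (0, 0, 0, 1))      # vertical difference
    return torch.cat([dx, dy], dim=1)

class PriorNet(nn.Module):
    """Small CNN acting as a learned implicit prior: n_convs convolutional layers
    and a single ReLU activation layer; width 32 and 3x3 kernels are assumptions."""
    def __init__(self, in_ch, out_ch, n_convs, width=32):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(n_convs - 1):
            layers.append(nn.Conv2d(ch, width, kernel_size=3, padding=1))
            ch = width
        layers += [nn.ReLU(inplace=True), nn.Conv2d(ch, out_ch, kernel_size=3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

class ExpansionModule(nn.Module):
    """One expansion module (one iteration): auxiliary variables Q, P, T, J are
    predicted by 4-conv networks from (R_k, L_k, grad R_k, grad L_k), then R and L
    are updated by 5-conv networks from the brightness channel V and the auxiliaries."""
    def __init__(self):
        super().__init__()
        self.net_q = PriorNet(1, 1, 4)          # Q approximates R
        self.net_p = PriorNet(1, 1, 4)          # P approximates L
        self.net_t = PriorNet(2, 2, 4)          # T approximates grad R
        self.net_j = PriorNet(2, 2, 4)          # J approximates grad L
        self.net_r = PriorNet(4, 1, 5)          # R update from (V, Q, T)
        self.net_l = PriorNet(4, 1, 5)          # L update from (V, P, J)

    def forward(self, v, r, l):
        q = self.net_q(r)
        p = self.net_p(l)
        t = self.net_t(grad_map(r))
        j = self.net_j(grad_map(l))
        r = self.net_r(torch.cat([v, q, t], dim=1))
        l = self.net_l(torch.cat([v, p, j], dim=1))
        return r, l

class DeepExpansionNet(nn.Module):
    """Three stacked expansion modules (three iterations), trained end to end."""
    def __init__(self, n_stages=3):
        super().__init__()
        self.stages = nn.ModuleList(ExpansionModule() for _ in range(n_stages))

    def forward(self, v):
        r, l = v.clone(), v.clone()             # simple initialization (an assumption)
        for stage in self.stages:
            r, l = stage(v, r, l)
        return r, l                             # enhanced reflectivity and illumination
```

During end-to-end training, the whole stack would be optimized with a reconstruction loss against reference enhanced images; the patent does not specify the loss, so that choice is likewise an assumption.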
S4, correcting an enhancement result of the illumination component by using a gamma correction method;
optionally, the S4 specifically includes:
The calculation formula of the gamma-corrected illumination component L' E is as follows (22):
(22)
Where L E is the enhanced illumination component and L' E is the gamma corrected illumination component.
S5, multiplying the enhancement result of the reflectivity component and the gamma corrected illumination component point by point of the image pixels to obtain an enhanced brightness channel, combining the enhanced brightness channel, the chromaticity channel and the saturation channel of the HSV color space, converting the combined image from the HSV space to the RGB space, and obtaining and outputting a final underwater enhanced image.
The enhancement result R_E of the reflectivity component and the gamma-corrected illumination component L'_E are multiplied point by point over the image pixels, and the enhanced brightness channel V_E is calculated as

V_E = R_E ∘ L'_E (23)

Wherein V_E is the enhanced luminance channel and ∘ denotes pixel-wise multiplication.
Combining the enhanced brightness channel, the chromaticity channel and the saturation channel of the HSV color space, and converting the combined image from the HSV space to the RGB space to obtain and output a final underwater enhanced image, which is not described in detail herein.
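Steps S4-S5 then reduce to a few array operations, sketched below; the gamma exponent is an assumption, since the exact expression of formula (22) is not reproduced in the source text, while the point-by-point multiplication, channel merging, and HSV-to-RGB conversion follow the description directly.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def compose_enhanced_image(hsv_in, r_enh, l_enh, gamma=2.2):
    """hsv_in: HSV image from the color-correction stage, shape (M, N, 3).
    r_enh, l_enh: enhanced reflectivity and illumination components, shape (M, N).
    gamma: assumed exponent for the gamma correction in formula (22)."""
    l_gamma = np.clip(l_enh, 0.0, 1.0) ** (1.0 / gamma)        # S4: gamma-corrected illumination
    v_enh = np.clip(r_enh * l_gamma, 0.0, 1.0)                 # S5 / (23): enhanced brightness channel
    hsv_out = hsv_in.copy()
    hsv_out[..., 2] = v_enh                                    # merge with chromaticity (hue) and saturation
    return hsv_to_rgb(hsv_out)                                 # final underwater enhanced image (RGB)
```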
As shown in fig. 4, the embodiment of the present invention further provides a Retinex variant underwater image enhancement device based on a depth expansion network, where the device includes:
an input module 410 for inputting underwater degradation images;
The color correction module 420 is configured to perform color correction on the underwater degraded image by using a color white balance method;
The first processing module 430 is configured to decompose a luminance channel of an HSV color space of the color-corrected underwater image into a reflectivity component and an illumination component according to the Retinex theory, then learn the respective image prior and gradient prior of the reflectivity component and the illumination component by using a deep expansion network, and obtain enhancement results of the reflectivity component and the illumination component by an alternate iterative optimization network expansion method;
A correction module 440 for correcting the enhancement result of the illumination component by using a gamma correction method;
The second processing module 450 is configured to multiply the enhancement result of the reflectivity component and the gamma-corrected illumination component point by using image pixels to obtain an enhanced luminance channel, combine the enhanced luminance channel, the chrominance channel and the saturation channel of the HSV color space to obtain a combined image, and then convert the combined image from the HSV space to the RGB space to obtain and output a final underwater enhanced image.
The functional structure of the Retinex variational underwater image enhancement device based on the depth expansion network provided by the embodiment of the invention corresponds to the Retinex variational underwater image enhancement method based on the depth expansion network provided by the embodiment of the invention, and is not described herein.
Fig. 5 is a schematic structural diagram of an electronic device 500 according to an embodiment of the present invention, where the electronic device 500 may have a relatively large difference due to different configurations or performances, and may include one or more processors (central processing units, CPU) 501 and one or more memories 502, where the memories 502 store instructions, and the instructions are loaded and executed by the processors 501 to implement the steps of the aforementioned Retinex variant underwater image enhancement method based on a deep-expansion network.
In an exemplary embodiment, a computer readable storage medium, such as a memory comprising instructions executable by a processor in a terminal to perform the aforementioned Retinex variant underwater image enhancement method based on a depth expansion network, is also provided. For example, the computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.

Claims (10)

1. A Retinex variant underwater image enhancement method based on a depth expansion network, the method comprising:
s1, inputting underwater degraded images;
S2, performing color correction on the underwater degraded image by using a color white balance method;
S3, decomposing a brightness channel of an HSV color space of the color-corrected underwater image into a reflectivity component and an illumination component according to a Retinex theory, then adopting a depth expansion network to learn the respective image prior and gradient prior of the reflectivity component and the illumination component, and obtaining an enhancement result of the reflectivity component and the illumination component through an alternate iterative optimization network expansion method;
S4, correcting an enhancement result of the illumination component by using a gamma correction method;
S5, multiplying the enhancement result of the reflectivity component and the gamma corrected illumination component point by point of the image pixels to obtain an enhanced brightness channel, combining the enhanced brightness channel, the chromaticity channel and the saturation channel of the HSV color space to obtain a combined image, and converting the combined image from the HSV space to the RGB space to obtain and output a final underwater enhanced image.
2. The method according to claim 1, wherein the color correction of the underwater degraded image by using the color white balance method in S2 specifically includes:
S21, calculating an average value of each color channel in the RGB color space of the underwater degradation image:
(1)
(2)
(3)
Wherein the subscript {r, g, b} denotes a color channel, with r the red channel, g the green channel and b the blue channel; I is the input underwater degraded image, comprising a red channel I_r, a green channel I_g and a blue channel I_b; I_avg is the channel-wise average of the underwater image, comprising a red-channel average I_avg^r, a green-channel average I_avg^g and a blue-channel average I_avg^b; i is the current row coordinate and j the current column coordinate of an image pixel, with 1 ≤ i ≤ M and 1 ≤ j ≤ N; M is the range of pixel row coordinates and N the range of pixel column coordinates of the input underwater degraded image I;
S22, calculating corresponding gain coefficients to enhance the red channel and the green channel, wherein the specific calculation process is as follows;
(4)
(5)
wherein the left-hand side of formula (4) is the enhanced image of the red channel I_r, and the left-hand side of formula (5) is the enhanced image of the green channel I_g;
S23, converting the enhanced underwater image from an RGB color space to an HSV color space, wherein the HSV color space comprises a brightness channel, a saturation channel and a chromaticity channel, and respectively carrying out normalization processing calculation on the brightness channel and the saturation channel of the HSV color space:
(6)
(7)
Wherein V_in is the luminance channel of the HSV color space, S_in is the saturation channel of the HSV color space, V_max and V_min are the maximum and minimum values of V_in, and S_max and S_min are the maximum and minimum values of S_in.
3. The method according to claim 1, wherein S3 specifically comprises:
A Retinex variation enhancement model based on a depth expansion network is established, and an objective function is expressed as follows:
(8)
wherein the first term is the data fidelity term, which uses an L_2-norm constraint to keep the pixel-wise product of the illumination component L to be solved and the reflectivity component R consistent with the brightness channel V; the two regularization terms on the reflectivity component R represent the image prior and the gradient prior of the reflectivity component, respectively; the two regularization terms on the illumination component L represent the image prior and the gradient prior of the illumination component, respectively; ∇ is the gradient operator, comprising first-order difference operations in the horizontal and vertical directions; the four prior functions are implicit prior functions that need no specific explicit expression form set in advance and are learned by the deep expansion network; the weight parameters are all iteratively updated through deep expansion network learning;
The data fidelity term and the regularization terms are processed separately: they are decoupled by a variable splitting method, and four auxiliary variables Q, P, T and J are introduced to approximate R, L, ∇R and ∇L, respectively, converting equation (8) into the corresponding augmented Lagrangian function:
(9)
wherein the newly introduced coefficients are balance weight parameters; Q, P, T and J are the approximation variables of R, L, ∇R and ∇L, respectively; equation (9) is solved by the network expansion method of alternate iterative optimization.
4. A method according to claim 3, characterized in that said network expansion method optimized by alternate iteration solves formula (9), comprising in particular:
1) The current reflectivity component R and illumination component L are fixed, and the objective functions for solving the four auxiliary variables Q, P, T, J are respectively:
(10)
(11)
(12)
(13)
wherein R_k, L_k, ∇R_k and ∇L_k are the k-th iteration results of R, L, ∇R and ∇L, respectively; the four auxiliary variables are solved by an expansion module of the deep expansion network, and the corresponding variables are updated over three iterations, as follows:
Mapping the iterative updating step into a deep neural network architecture, and expanding the iterative updating step into three stages, wherein each stage corresponds to one iteration, the iteration is completed by one expansion module of the deep expansion network, and four auxiliary variables Q, P, T, J are updated in an alternating manner as follows:
(14)
(15)
(16)
(17)
Wherein Q_{k+1}, P_{k+1}, T_{k+1} and J_{k+1} are the (k+1)-th iteration results of Q, P, T and J, respectively; the right-hand sides of formulas (14)-(17) are deep convolutional neural networks (one per formula) that take {R_k, L_k, ∇R_k, ∇L_k} as input variables; these four networks are instances of the first deep convolutional neural network, which learns the implicit prior knowledge of the auxiliary variables from the training data;
2) Fixing the four current auxiliary variables Q, P, T, J, and solving the objective functions of the reflectivity component R and the illumination component L as follows:
(18)
(19)
Wherein the subscript k denotes the k-th iteration and k+1 the (k+1)-th iteration; L_k is the k-th iteration result of L; Q_{k+1}, T_{k+1}, R_{k+1}, P_{k+1} and J_{k+1} are the (k+1)-th iteration results of Q, T, R, P and J, respectively; the divisions in formulas (18) and (19) are pixel-wise (point-by-point) division operations between images. R and L are solved by the expansion modules of the deep expansion network: the iterative update step is mapped into the deep neural network architecture and expanded into three stages, each stage corresponding to one iteration and completed by one expansion module of the deep expansion network, and the two variables R and L are alternately updated as follows:
(20)
(21)
Wherein L_{k+1} is the (k+1)-th iteration result of L; the right-hand sides of formulas (20) and (21) are deep convolutional neural networks (one per formula) whose input variables are the quantities updated above; their weights are learnable network parameters; ∇ is the gradient operator; by designing this second deep convolutional neural network, the implicit prior knowledge of R and L is learned from the training data;
3) Repeating steps 1) -2) until convergence, obtaining an enhanced reflectivity component R E and an enhanced illumination component L E after the iteration is finished.
5. The method of claim 4, wherein the depth expansion network converts the underwater image enhancement problem based on the Retinex variational model into an optimized expansion problem of the learnable network, the image priors and the gradient priors of the reflectivity component and the illumination component are adaptively fitted in a data-driven manner through the deep convolutional neural networks, the training manner is an end-to-end, full-parameter training manner, the depth expansion network comprises a plurality of expansion modules, each expansion module performs one iterative update of each solution variable, each expansion module comprises a plurality of deep convolutional neural networks, and the implicit prior knowledge of each solution variable is learned from the training data.
6. The method of claim 4, wherein the first deep convolutional neural network comprises 4 convolutional layers and 1 ReLU activation layer; the second deep convolutional neural network comprises 5 convolutional layers and 1 ReLU activation layer.
7. The method according to claim 1, wherein S4 specifically comprises:
The calculation formula of the gamma-corrected illumination component L' E is as follows (22):
(22)
Where L E is the enhanced illumination component and L' E is the gamma corrected illumination component.
8. A Retinex variant underwater image enhancement device based on a depth expansion network, the device comprising:
The input module is used for inputting underwater degradation images;
The color correction module is used for performing color correction on the underwater degraded image by using a color white balance method;
the first processing module is used for decomposing a brightness channel of an HSV color space of the color-corrected underwater image into a reflectivity component and an illumination component according to the Retinex theory, then adopting a depth expansion network to learn the respective image prior and gradient prior of the reflectivity component and the illumination component, and obtaining an enhancement result of the reflectivity component and the illumination component through an alternate iterative optimization network expansion method;
The correction module is used for correcting the enhancement result of the illumination component by utilizing a gamma correction method;
the second processing module is used for multiplying the enhancement result of the reflectivity component and the gamma corrected illumination component point by point of the image pixels to obtain an enhanced brightness channel, combining the enhanced brightness channel, the chromaticity channel and the saturation channel of the HSV color space to obtain a combined image, and converting the combined image from the HSV space to the RGB space to obtain and output a final underwater enhanced image.
9. An electronic device comprising a processor and a memory having instructions stored therein, wherein the instructions are loaded and executed by the processor to implement the Retinex variant underwater image enhancement method based on a depth expansion network according to any of claims 1-7.
10. A computer readable storage medium having instructions stored therein, wherein the instructions are loaded and executed by a processor to implement the depth-expanded network-based Retinex-variant underwater image enhancement method of any of claims 1-7.
CN202410342527.6A 2024-03-25 2024-03-25 Retinex variation underwater image enhancement method and device based on deep expansion network Active CN117934340B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410342527.6A CN117934340B (en) 2024-03-25 2024-03-25 Retinex variation underwater image enhancement method and device based on deep expansion network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410342527.6A CN117934340B (en) 2024-03-25 2024-03-25 Retinex variation underwater image enhancement method and device based on deep expansion network

Publications (2)

Publication Number Publication Date
CN117934340A true CN117934340A (en) 2024-04-26
CN117934340B CN117934340B (en) 2024-06-04

Family

ID=90761374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410342527.6A Active CN117934340B (en) 2024-03-25 2024-03-25 Retinex variation underwater image enhancement method and device based on deep expansion network

Country Status (1)

Country Link
CN (1) CN117934340B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105761227A (en) * 2016-03-04 2016-07-13 天津大学 Underwater image enhancement method based on dark channel prior algorithm and white balance
CN106485681A (en) * 2016-10-18 2017-03-08 河海大学常州校区 Color image restoration method under water based on color correction and red channel prior
CN117541520A (en) * 2023-10-10 2024-02-09 大连海事大学 Underwater image enhancement method of low-coupling Retinex model based on optimization algorithm
CN117593235A (en) * 2024-01-18 2024-02-23 北京科技大学 Retinex variation underwater image enhancement method and device based on depth CNN denoising prior
CN117670687A (en) * 2023-12-15 2024-03-08 大连海事大学 Underwater image enhancement method based on CNN and transducer mixed structure


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WENHUI WU: "URetinex-Net: Retinex-based Deep Unfolding Network for Low-light Image Enhancement", IEEE, 27 September 2022 (2022-09-27), pages 5891 - 5900 *

Also Published As

Publication number Publication date
CN117934340B (en) 2024-06-04

Similar Documents

Publication Publication Date Title
Liang et al. Cameranet: A two-stage framework for effective camera isp learning
CN110232661B (en) Low-illumination color image enhancement method based on Retinex and convolutional neural network
CN110197463B (en) High dynamic range image tone mapping method and system based on deep learning
CN117593235B (en) Retinex variation underwater image enhancement method and device based on depth CNN denoising prior
US11288783B2 (en) Method and system for image enhancement
CN109829868B (en) Lightweight deep learning model image defogging method, electronic equipment and medium
CN110009574B (en) Method for reversely generating high dynamic range image from low dynamic range image
CN113284061B (en) Underwater image enhancement method based on gradient network
CN112465715B (en) Image scattering removal method based on iterative optimization of atmospheric transmission matrix
CN113256510A (en) CNN-based low-illumination image enhancement method with color restoration and edge sharpening effects
Wang et al. Hazy image decolorization with color contrast restoration
CN116843559A (en) Underwater image enhancement method based on image processing and deep learning
WO2024217182A1 (en) Image enhancement method and apparatus, electronic device, and storage medium
CN112102186A (en) Real-time enhancement method for underwater video image
CN106296749B (en) RGB-D image eigen decomposition method based on L1 norm constraint
CN116957948A (en) Image processing method, electronic product and storage medium
Zhang et al. Learning a single convolutional layer model for low light image enhancement
CN117934340B (en) Retinex variation underwater image enhancement method and device based on deep expansion network
CN111798381A (en) Image conversion method, image conversion device, computer equipment and storage medium
CN116433509A (en) Progressive image defogging method and system based on CNN and convolution LSTM network
CN113409225B (en) Retinex-based unmanned aerial vehicle shooting image enhancement algorithm
CN115601260A (en) Hyperspectral image restoration method driven by neural network and optimization model in combined mode
CN110009676B (en) Intrinsic property decomposition method of binocular image
CN113658072A (en) Underwater image enhancement method based on progressive feedback network
CN112734673A (en) Low-illumination image enhancement method and system based on multi-expression fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant