CN116664454B - Underwater image enhancement method based on multi-scale color migration parameter prediction - Google Patents

Underwater image enhancement method based on multi-scale color migration parameter prediction

Info

Publication number
CN116664454B
CN116664454B (application number CN202310952246.8A)
Authority
CN
China
Prior art keywords
image
migration
color migration
module
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310952246.8A
Other languages
Chinese (zh)
Other versions
CN116664454A (en)
Inventor
李坤乾
刘文杰
樊宏涛
亓琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ocean University of China
Original Assignee
Ocean University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ocean University of China filed Critical Ocean University of China
Priority to CN202310952246.8A priority Critical patent/CN116664454B/en
Publication of CN116664454A publication Critical patent/CN116664454A/en
Application granted granted Critical
Publication of CN116664454B publication Critical patent/CN116664454B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/02Affine transformations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning


Abstract

The application discloses an underwater image enhancement method based on multi-scale color migration parameter prediction. It belongs to the technical field of image processing and is used to improve the visual quality of images photographed underwater. The method constructs underwater image enhancement training samples, then builds and trains an underwater image enhancement network based on multi-scale color migration parameter prediction. Taking the underwater degraded image as input, the network predicts multi-scale color migration parameters: color migration basic parameters serve as global migration parameter guidance, and color migration bias parameter matrices serve as local difference migration parameter guidance. These parameters are finally used to perform color migration on the original image, outputting an image with enhanced visual quality. An image downsampling module added to the pipeline allows high-resolution, large-size images to be processed efficiently, and a fused color migration parameter matrix interpolation module added to the subsequent pipeline effectively improves enhancement efficiency without sacrificing much enhancement detail.

Description

Underwater image enhancement method based on multi-scale color migration parameter prediction
Technical Field
The application discloses an underwater image enhancement method based on multi-scale color migration parameter prediction, and belongs to the technical field of image processing.
Background
With the increasing demand for unmanned underwater observation and operation, research on and application of ROVs and AUVs have advanced. Vision, one of the main sources of observation information for ROVs and AUVs, plays an important role in underwater tasks. However, images photographed in underwater scenes tend to suffer significant degradation, including color cast, blurred appearance, low visual contrast, and poor visibility. There is thus a strong need for an efficient underwater image enhancement algorithm that can improve the visual quality of high-resolution images across diverse underwater scenes.
At present, underwater image visual enhancement algorithms can be broadly divided into traditional methods and deep-learning-based methods. For the traditional methods, the complexity of the underwater environment means that the attenuation and scattering of light differ between bodies of water; a single physical model is therefore rarely suitable for the full variety of underwater scenes, generalizes poorly, and leaves color cast in the enhancement result. Meanwhile, traditional methods based on non-physical models do improve image quality to a degree, but they often introduce additional noise and color distortion during processing, and for extremely degraded scenes they cannot produce an effective result. In addition, because water environments differ in complex ways, images formed in different underwater scenes degrade differently, so non-physical-model enhancement methods typically do not generalize well across differently degraded scenes and struggle to achieve good processing results.
With the progress of deep learning, underwater image visual enhancement research has developed rapidly thanks to strong feature learning and predictive capabilities, but it still has major limitations. First, real underwater image samples remain relatively scarce, since acquiring them in real environments is difficult. With such limited training samples, learning a pixel-level mapping from degraded to non-degraded images end to end is hard, and the enhancement results tend to exhibit color distortion and overcorrection. Second, for extreme degradation scenes, the degradation is more complex and the corresponding samples are even scarcer, so existing deep learning models often cannot achieve an ideal enhancement effect on them. In addition, constrained by model structure and computational cost, conventional deep-learning-based underwater image enhancement methods process high-resolution, let alone ultra-high-resolution, images inefficiently.
Disclosure of Invention
The application aims to provide an underwater image enhancement method based on multi-scale color migration parameter prediction, which aims to solve the problem of poor underwater image processing effect in the prior art.
An underwater image enhancement method based on multi-scale color migration parameter prediction, comprising:
s1, obtaining an image data set to be processed, and constructing the training samples required for training the deep learning enhancement model, wherein one training sample comprises: an underwater degraded image $I$ and the reference enhanced image $J$ corresponding to it; $J$ is converted into the CIELab color space, and the mean and standard deviation of its three channels are used as the reference mean ground truth and reference standard deviation ground truth for training the color migration basic parameter prediction module;
s2, building a deep learning enhancement network;
s3, training the deep learning enhancement network to obtain a deep learning enhancement model, performing constraint optimization on the color migration basic parameter prediction module by using parameter regression loss, and constraining the overall output of the deep learning enhancement network by using pixel value difference loss and structural similarity loss, and feeding back and optimizing model parameters of the whole deep learning enhancement network;
s4, taking the underwater image with degraded visual quality as the input of the deep learning enhancement model, and outputting an enhanced image with improved visual quality after color migration parameter prediction and processing by the built-in color migration module.
S1 comprises the following steps:
s1.1. Compute the mean of the three channels of $J$ as the reference mean ground truth for training the color migration basic parameter prediction module, denoted $\mu^J_c$:

$\mu^J_c = \frac{1}{HW}\sum_{h=1}^{H}\sum_{w=1}^{W} J(h, w, c)$

where $H$ and $W$ are the height and width of the image, and $h$, $w$, $c$ index the height, width and channel of $J$.
S1 comprises the following steps:
s1.2. Using the $\mu^J_c$ computed in step S1.1, further compute the standard deviation of the three channels of $J$ as the reference standard deviation ground truth for training the color migration basic parameter prediction module, denoted $\sigma^J_c$:

$\sigma^J_c = \sqrt{\frac{1}{HW}\sum_{h=1}^{H}\sum_{w=1}^{W}\left(J(h, w, c) - \mu^J_c\right)^2}$

S1.3. $I$, $J$, $\mu^J_c$ and $\sigma^J_c$ together form one sample set for the deep learning enhancement network.
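The reference statistics of S1.1 and S1.2 amount to per-channel moments of the Lab-converted reference image. A minimal numpy sketch (the function name is illustrative; the RGB-to-Lab conversion itself, e.g. via OpenCV or scikit-image, is assumed to have happened already):

```python
import numpy as np

def reference_statistics(j_lab: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Reference mean and standard deviation ground truths (S1.1, S1.2).

    j_lab: reference enhanced image J in CIELab space, shape (H, W, 3).
    Returns per-channel mean mu and population std sigma, each of shape (3,).
    """
    mu = j_lab.mean(axis=(0, 1))    # average over H and W -> one value per channel
    sigma = j_lab.std(axis=(0, 1))  # population std, matching the 1/(H*W) formula
    return mu, sigma
```

One such (mu, sigma) pair per reference image, together with the degraded/reference image pair, forms a training sample.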
S2 comprises the following steps:
s2.1, building the deep learning enhancement network, which comprises an RGB-to-Lab color space module, a color migration basic parameter prediction module, a color migration bias parameter matrix prediction module, a built-in color migration module, and a Lab-to-RGB color space module;
s2.2, constructing, from a depth regression network structure, the color migration basic parameter prediction module that acquires the global migration parameters; taking the degraded image represented in the Lab color space as input, it predicts and outputs six basic parameters for color migration, namely the mean $\hat\mu_c$ and standard deviation $\hat\sigma_c$ of each channel of the target migrated image.
The color migration basic parameter prediction module consists of a depth coding module and a depth regression module, wherein the depth coding module is formed by stacking a convolution layer, an activation layer and a pooling layer in a depth feature extraction structure, and extracts depth feature expressions of all layers of an input image; the depth regression module adopts a full connection structure, acquires global migration parameter guidance, and predicts and outputs six basic parameters for color migration.
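As a rough architectural sketch (hypothetical layer sizes — the patent does not fix them; it only specifies a conv/activation/pooling encoder followed by a fully connected regression head), the module of S2.2 could look like:

```python
import torch
import torch.nn as nn

class BasicParamPredictor(nn.Module):
    """Sketch of the color migration basic parameter prediction module (S2.2):
    a depth encoder stacking convolution, activation and pooling layers,
    followed by a fully connected regressor that outputs the six basic
    parameters (mean and standard deviation of the three Lab channels)."""

    def __init__(self) -> None:
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> fixed-size feature
        )
        self.regressor = nn.Linear(32, 6)  # 3 channel means + 3 channel stds

    def forward(self, lab_image: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(lab_image).flatten(1)
        return self.regressor(feats)
```

Global average pooling before the linear layer lets the same regression head accept inputs of arbitrary spatial size, which matters for the high-resolution images the method targets.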
S2 comprises the following steps:
s2.3, based on an encoding–decoding structure, constructing the color migration bias parameter matrix prediction module that acquires the local difference migration parameters; it comprises three prediction branches that take the three channels of the degraded image represented in the Lab color space as inputs and respectively output the migration bias parameter matrices of the three channels of the target migrated image;

the color migration bias parameter matrix prediction module comprises a self-attention spectrum sensing module, a channel feature extraction module, and an encoding–decoding based bias matrix prediction module; the self-attention spectrum sensing module computes the self-attention spectrum $A_c$ of channel $c$ from the normalized matrix $\hat I_c$ of each channel of the input image;
the channel feature extraction module takes each channel normalization matrix as input, extracts features for migration bias parameter matrix prediction under the guidance of self-attention spectrum, and the structure is formed by stacking a convolution layer, an activation layer and a pooling layer in a typical depth feature extraction structure;
the depth encoding–decoding module, guided by the acquired local difference migration parameters, predicts and generates the two color migration bias parameter matrices corresponding to the mean and the standard deviation, $B_{\mu_c}$ and $B_{\sigma_c}$.
S2 comprises the following steps:
s2.4, adding each of the six predicted basic migration parameters element-wise to the color migration bias parameter matrix of the corresponding channel, obtaining the fused color migration parameter matrices used for color migration:

$M_{\mu_c} = \hat\mu_c \oplus B_{\mu_c}, \qquad M_{\sigma_c} = \hat\sigma_c \oplus B_{\sigma_c}$

where $\oplus$ denotes element-by-element addition of a scalar value to a matrix.
S2 comprises the following steps:
s2.5, designing the network's built-in color migration module based on the fused color migration parameter matrices obtained in S2.4; after color migration it yields the underwater visually enhanced image:

$I^E_c = \frac{M_{\sigma_c}}{\sigma^I_c} \odot \left(I_c - \mu^I_c\right) + M_{\mu_c}$

where $I^E_c$ is the channel-$c$ matrix of the enhanced image, $I_c$ is the channel-$c$ matrix of the input degraded image, $\mu^I_c$ is the mean of the channel-$c$ matrix elements of the input degraded image, $\sigma^I_c$ is their standard deviation, and the division and $\odot$ are element-wise;

the built-in color migration module takes as input the degraded image to be enhanced and the parameter matrices obtained by fusing the color migration basic parameters and the bias parameters, and outputs the visually enhanced underwater image after color migration.
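Steps S2.4 and S2.5 together can be sketched in numpy as follows (function and argument names are illustrative; the bias matrices would come from the prediction branches, and the predicted standard deviations are assumed positive):

```python
import numpy as np

def color_migration(i_lab, mu_pred, sigma_pred, bias_mu, bias_sigma, eps=1e-6):
    """Fuse basic parameters with bias matrices (S2.4) and apply the
    built-in color migration of S2.5, channel by channel.

    i_lab:      degraded image I in Lab space, shape (H, W, 3)
    mu_pred:    predicted basic means, shape (3,)
    sigma_pred: predicted basic standard deviations, shape (3,)
    bias_mu, bias_sigma: bias parameter matrices, shape (H, W, 3)
    """
    m_mu = mu_pred + bias_mu           # fused mean matrices M_mu (broadcast add)
    m_sigma = sigma_pred + bias_sigma  # fused std matrices M_sigma
    mu_in = i_lab.mean(axis=(0, 1))    # per-channel statistics of the input
    sigma_in = i_lab.std(axis=(0, 1))
    # Element-wise migration: rescale the zero-mean channels, then re-center.
    return m_sigma / (sigma_in + eps) * (i_lab - mu_in) + m_mu
```

With all-zero bias matrices this reduces to a purely global transfer whose output channel means equal the predicted basic means; the bias matrices are what perturb the map per pixel.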
S3 comprises the following steps:
s3.1. Denote the degraded underwater image as $I$, the input of the deep learning enhancement network;

s3.2. The color migration basic parameter prediction module outputs six basic parameters, namely the means and standard deviations of the three channels of the target migrated image, denoted $\hat\mu_c$ and $\hat\sigma_c$, where $c$ is the channel index; compute the parameter prediction error loss between $\hat\mu_c$ and $\mu^J_c$ and between $\hat\sigma_c$ and $\sigma^J_c$:

$L_{reg} = \sum_{c}\left[\left(\hat\mu_c - \mu^J_c\right)^2 + \left(\hat\sigma_c - \sigma^J_c\right)^2\right]$
S3 comprises the following steps:
s3.3. Feeding the input $I$ through the complete enhancement network model generates the corresponding enhanced underwater image, denoted $I^E$; compute the pixel difference loss $L_{MSE}$ and structural similarity loss $L_{SSIM}$ between $I^E$ and $J$:

$L_{MSE} = \frac{1}{N}\sum_{p}\left(I^E(p) - J(p)\right)^2$

$L_{SSIM} = 1 - \frac{1}{N}\sum_{p}\frac{\left(2\,\mu_{I^E}(p)\,\mu_{J}(p) + C_1\right)\left(2\,\sigma_{I^E J}(p) + C_2\right)}{\left(\mu_{I^E}^2(p) + \mu_{J}^2(p) + C_1\right)\left(\sigma_{I^E}^2(p) + \sigma_{J}^2(p) + C_2\right)}$

where $N$ is the total number of pixels in the image, $p$ is the center pixel of an image block, $\mu_{I^E}(p)$ and $\sigma_{I^E}(p)$ are the pixel mean and standard deviation of the image block of $I^E$ centered at $p$ (and likewise for $J$), $\sigma_{I^E J}(p)$ is the covariance between the two blocks, and $C_1$, $C_2$ are a set of constants.
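A numpy sketch of the two output-level losses of S3.3 (with SSIM simplified to a single global window for brevity — the patent evaluates it over local image blocks; the constants follow the common choice C1 = 0.01², C2 = 0.03² for images scaled to [0, 1], which is an assumption here):

```python
import numpy as np

def mse_loss(enhanced: np.ndarray, reference: np.ndarray) -> float:
    """Pixel difference loss L_MSE."""
    return float(np.mean((enhanced - reference) ** 2))

def ssim_loss(enhanced: np.ndarray, reference: np.ndarray,
              c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> float:
    """Structural similarity loss 1 - SSIM, single global window."""
    mu_x, mu_y = enhanced.mean(), reference.mean()
    var_x, var_y = enhanced.var(), reference.var()
    cov = ((enhanced - mu_x) * (reference - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return float(1.0 - ssim)
```

For identical images the MSE is zero and the SSIM term equals one, so both losses vanish, which is the behavior the joint objective relies on.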
S3 comprises the following steps:
s3.4. Iteratively update and optimize the model parameters of the network by minimizing the sum of the loss terms:

$L_{total} = L_{reg} + L_{MSE} + L_{SSIM}$
compared with the prior art, the application has the following beneficial effects: the dependence on paired learning samples in the training process of the depth enhancement model is reduced, and the robustness of the visual enhancement model to complex underwater visual degradation is improved; based on a color migration parameter prediction strategy, large-size underwater images can be effectively processed, and the processing efficiency is high; by introducing multi-scale parameter prediction and self-attention mechanisms, the recovery capability of the depth model on the non-uniform degradation of underwater vision is improved.
Drawings
FIG. 1 is a flow chart of a method of underwater image enhancement according to an embodiment of the present application;
FIG. 2 is a detailed flow diagram of a method of underwater image enhancement according to one embodiment of the present application;
FIG. 3 is a schematic frame diagram of an underwater image enhancement system of an embodiment of the present application;
FIG. 4 is a schematic diagram of a training data preparation flow for an underwater image enhancement depth network according to an embodiment of the present application.
Fig. 5 is a schematic diagram of the structure of an underwater image enhancement depth network according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the present application will be clearly and completely described below, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
An underwater image enhancement method based on multi-scale color migration parameter prediction, comprising:
s1, obtaining an image data set to be processed, and constructing the training samples required for training the deep learning enhancement model, wherein one training sample comprises: an underwater degraded image $I$ and the reference enhanced image $J$ corresponding to it; $J$ is converted into the CIELab color space, and the mean and standard deviation of its three channels are used as the reference mean ground truth and reference standard deviation ground truth for training the color migration basic parameter prediction module;
the underwater degraded images and reference enhanced images can be obtained from commonly used underwater data sets such as UIEB and SUIM-E, or prepared independently: the underwater degraded image can be captured by an underwater robot carrying a camera, and its visual quality is typically degraded by color distortion, blurring and similar effects caused by absorption and scattering of underwater light. The reference enhanced image can be produced by imitating the preparation of enhancement data sets such as UIEB and SUIM-E: after the degraded image has been processed by a series of existing underwater image enhancement methods (such as histogram equalization, the dark channel prior method, and Retinex-model methods), the enhancement result with the best visual effect is selected by volunteer voting as the reference enhanced image.
S2, building a deep learning enhancement network;
the formula of the deep learning enhancement network is:

$I^E_c = \frac{M_{\sigma_c}}{\sigma^I_c} \odot \left(I_c - \mu^I_c\right) + M_{\mu_c}$

In essence, after the color migration parameters have been predicted, the deep learning enhancement network performs color migration on the input degraded image with the above formula, achieving the effect of visual enhancement. The color migration parameters comprise two parts, the outputs of the two parameter prediction modules in the network: (1) the six color migration basic parameters, i.e. the means and standard deviations of the three channels of the target migrated image; (2) the migration bias parameter matrices of the three channels of the target migrated image. In the formula above, the standard-deviation fused color migration parameter matrix $M_{\sigma_c}$ and the mean fused color migration parameter matrix $M_{\mu_c}$ are obtained by adding (1) and (2). Obtaining the color migration parameters yields the intermediate result for generating the final enhanced image; substituting these parameters and each channel of the original image into the formula of the deep learning enhancement network gives the visually enhanced image.
Conventional color migration algorithms were not designed for visual enhancement per se; rather, by designating one image as a migration template, the color and tonal appearance of the image to be processed is shifted toward that of the template image. The innovation of the application is to apply this strategy to the underwater image enhancement problem and to design a parameter prediction depth model, so that no real template image needs to be provided during color migration. In addition, once the traditional color migration method has obtained migration parameters from the template image, it has no pixel-wise differentiation capability during migration, because the parameters apply globally to each color channel. The application, by contrast, predicts migration bias parameter matrices that realize differentiated migration for each pixel, making the enhancement result more natural.
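For contrast, the conventional template-based color migration described above can be sketched as a single global affine map per channel (numpy; the statistics would normally be computed in Lab space, which this plain sketch leaves to the caller):

```python
import numpy as np

def template_color_transfer(source: np.ndarray, template: np.ndarray,
                            eps: float = 1e-6) -> np.ndarray:
    """Classical global color migration: shift each channel of `source`
    toward the per-channel mean/std of `template`. Every pixel of a channel
    gets the same affine map - there is no pixel-wise differentiation, which
    is exactly what the bias parameter matrices of the application add."""
    mu_s = source.mean(axis=(0, 1))
    sigma_s = source.std(axis=(0, 1))
    mu_t = template.mean(axis=(0, 1))
    sigma_t = template.std(axis=(0, 1))
    return sigma_t / (sigma_s + eps) * (source - mu_s) + mu_t
```

The method of the application replaces the template statistics `mu_t`, `sigma_t` with predicted parameter matrices, so no template image is needed at inference time.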
S3, training the deep learning enhancement network to obtain a deep learning enhancement model, performing constraint optimization on the color migration basic parameter prediction module by using parameter regression loss, and constraining the overall output of the deep learning enhancement network by using pixel value difference loss and structural similarity loss, and feeding back and optimizing model parameters of the whole deep learning enhancement network;
step S3 describes how to train the network built in S2. The network achieves visual enhancement by predicting color migration parameters and then performing color migration on the original degraded image with those parameters as reference. S3 therefore optimizes the model parameters of the built depth enhancement network through training, so that on the one hand the migration parameters can be predicted accurately, and on the other hand an enhanced image with visual quality as good as possible can be generated.
Based on the above objectives, parametric prediction error loss, pixel difference loss, and structural similarity loss were designed as a loss function for training of the depth enhancement model.
The input underwater degraded image undergoes two processes: first it serves as the input for predicting the color migration parameters (the two parameter prediction modules of the network); then the degraded image and the predicted color migration parameters together serve as inputs to the formula of the deep learning enhancement network, which produces the image with enhanced visual quality.
S4, taking the underwater image with the degraded visual quality as a model of a deep learning enhancement network, and outputting an enhancement image with improved visual quality after color migration parameter prediction and processing by a built-in color migration module.
S1 comprises the following steps:
s1.1. Compute the mean of the three channels of $J$ as the reference mean ground truth for training the color migration basic parameter prediction module, denoted $\mu^J_c$:

$\mu^J_c = \frac{1}{HW}\sum_{h=1}^{H}\sum_{w=1}^{W} J(h, w, c)$

where $H$ and $W$ are the height and width of the image, and $h$, $w$, $c$ index the height, width and channel of $J$.
S1 comprises the following steps:
s1.2. Using the $\mu^J_c$ computed in step S1.1, further compute the standard deviation of the three channels of $J$ as the reference standard deviation ground truth for training the color migration basic parameter prediction module, denoted $\sigma^J_c$:

$\sigma^J_c = \sqrt{\frac{1}{HW}\sum_{h=1}^{H}\sum_{w=1}^{W}\left(J(h, w, c) - \mu^J_c\right)^2}$

S1.3. $I$, $J$, $\mu^J_c$ and $\sigma^J_c$ together form one sample set for the deep learning enhancement network.
S2 comprises the following steps:
s2.1, building the deep learning enhancement network, which comprises an RGB-to-Lab color space module, a color migration basic parameter prediction module, a color migration bias parameter matrix prediction module, a built-in color migration module, and a Lab-to-RGB color space module;
s2.2, constructing, from a depth regression network structure, the color migration basic parameter prediction module that acquires the global migration parameters; taking the degraded image represented in the Lab color space as input, it predicts and outputs six basic parameters for color migration, namely the mean $\hat\mu_c$ and standard deviation $\hat\sigma_c$ of each channel of the target migrated image.
The color migration basic parameter prediction module consists of a depth coding module and a depth regression module, wherein the depth coding module is formed by stacking a convolution layer, an activation layer and a pooling layer in a depth feature extraction structure, and extracts depth feature expressions of all layers of an input image; the depth regression module adopts a full connection structure, acquires global migration parameter guidance, and predicts and outputs six basic parameters for color migration.
S2 comprises the following steps:
s2.3, based on an encoding–decoding structure, constructing the color migration bias parameter matrix prediction module that acquires the local difference migration parameters; it comprises three prediction branches that take the three channels of the degraded image represented in the Lab color space as inputs and respectively output the migration bias parameter matrices of the three channels of the target migrated image;

the color migration bias parameter matrix prediction module comprises a self-attention spectrum sensing module, a channel feature extraction module, and an encoding–decoding based bias matrix prediction module; the self-attention spectrum sensing module computes the self-attention spectrum $A_c$ of channel $c$ from the normalized matrix $\hat I_c$ of each channel of the input image;
the channel feature extraction module takes each channel normalization matrix as input, extracts features for migration bias parameter matrix prediction under the guidance of self-attention spectrum, and the structure is formed by stacking a convolution layer, an activation layer and a pooling layer in a typical depth feature extraction structure;
the depth encoding–decoding module, guided by the acquired local difference migration parameters, predicts and generates the two color migration bias parameter matrices corresponding to the mean and the standard deviation, $B_{\mu_c}$ and $B_{\sigma_c}$.
S2 comprises the following steps:
s2.4, adding each of the six predicted basic migration parameters element-wise to the color migration bias parameter matrix of the corresponding channel, obtaining the fused color migration parameter matrices used for color migration:

$M_{\mu_c} = \hat\mu_c \oplus B_{\mu_c}, \qquad M_{\sigma_c} = \hat\sigma_c \oplus B_{\sigma_c}$

where $\oplus$ denotes element-by-element addition of a scalar value to a matrix.
S2 comprises the following steps:
s2.5, designing the network's built-in color migration module based on the fused color migration parameter matrices obtained in S2.4; after color migration it yields the underwater visually enhanced image:

$I^E_c = \frac{M_{\sigma_c}}{\sigma^I_c} \odot \left(I_c - \mu^I_c\right) + M_{\mu_c}$

where $I^E_c$ is the channel-$c$ matrix of the enhanced image, $I_c$ is the channel-$c$ matrix of the input degraded image, $\mu^I_c$ is the mean of the channel-$c$ matrix elements of the input degraded image, $\sigma^I_c$ is their standard deviation, and the division and $\odot$ are element-wise;

the built-in color migration module takes as input the degraded image to be enhanced and the parameter matrices obtained by fusing the color migration basic parameters and the bias parameters, and outputs the visually enhanced underwater image after color migration.
S3 comprises the following steps:
s3.1. Denote the degraded underwater image as $I$, the input of the deep learning enhancement network;

s3.2. The color migration basic parameter prediction module outputs six basic parameters, namely the means and standard deviations of the three channels of the target migrated image, denoted $\hat\mu_c$ and $\hat\sigma_c$, where $c$ is the channel index; compute the parameter prediction error loss between $\hat\mu_c$ and $\mu^J_c$ and between $\hat\sigma_c$ and $\sigma^J_c$:

$L_{reg} = \sum_{c}\left[\left(\hat\mu_c - \mu^J_c\right)^2 + \left(\hat\sigma_c - \sigma^J_c\right)^2\right]$
S3 comprises the following steps:
s3.3. Feeding the input $I$ through the complete enhancement network model generates the corresponding enhanced underwater image, denoted $I^E$; compute the pixel difference loss $L_{MSE}$ and structural similarity loss $L_{SSIM}$ between $I^E$ and $J$:

$L_{MSE} = \frac{1}{N}\sum_{p}\left(I^E(p) - J(p)\right)^2$

$L_{SSIM} = 1 - \frac{1}{N}\sum_{p}\frac{\left(2\,\mu_{I^E}(p)\,\mu_{J}(p) + C_1\right)\left(2\,\sigma_{I^E J}(p) + C_2\right)}{\left(\mu_{I^E}^2(p) + \mu_{J}^2(p) + C_1\right)\left(\sigma_{I^E}^2(p) + \sigma_{J}^2(p) + C_2\right)}$

where $N$ is the total number of pixels in the image, $p$ is the center pixel of an image block, $\mu_{I^E}(p)$ and $\sigma_{I^E}(p)$ are the pixel mean and standard deviation of the image block of $I^E$ centered at $p$ (and likewise for $J$), $\sigma_{I^E J}(p)$ is the covariance between the two blocks, and $C_1$, $C_2$ are a set of constants.
S3 comprises the following steps:
s3.4. Iteratively update and optimize the model parameters of the network by minimizing the sum of the loss terms:

$L_{total} = L_{reg} + L_{MSE} + L_{SSIM}$
in the embodiment of the application, a flow diagram of the underwater image enhancement method is shown in fig. 1: first an underwater image with degraded visual imaging quality is obtained; the trained underwater image enhancement network then predicts the multi-scale color migration parameters for image enhancement, and the color migration module embedded in the network uses these parameters to obtain the clean underwater image corresponding to the input degraded image. The detailed flow of the underwater image enhancement method is shown in fig. 2: first the data set is expanded by data augmentation, then training data preparation, enhancement model training and model deployment are executed. The framework of the underwater image enhancement system is shown in fig. 3 and comprises an underwater image acquisition module, an optional image downsampling module, a multi-scale color migration parameter prediction module, an optional fused color migration parameter matrix interpolation module, and a color-migration-based image enhancement module. The training data preparation flow of the underwater image enhancement depth network is shown in fig. 4: the image enhancement data set consists of underwater degraded images and underwater clear images, which are input by channel and expanded into the image enhancement extended data set. The structure of the underwater image enhancement depth network is shown in fig. 5: the underwater degraded image is converted from the RGB to the Lab color space and fed into the color migration basic parameter prediction module and the color migration bias parameter prediction module respectively; the color migration parameters are then fused, color migration is performed on the underwater degraded image in the Lab color space, and conversion from Lab back to RGB yields the enhanced underwater image.
The depth regression network structure of the present application may use regression prediction network structures such as AlexNet and VGG-Net; the encoding-decoding structure may use depth encoding-decoding structures such as U-Net and DenseNet; the underwater degradation-enhancement image dataset may use the UIEB dataset or the SUIM-E dataset; and the convolution layer-activation layer-pooling layer stack may use Conv-ReLU-MaxPool. All of these are prior art in this field, and their details are therefore not explained.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, without departing from the scope of the technical solutions of the embodiments of the present application.

Claims (4)

1. An underwater image enhancement method based on multi-scale color migration parameter prediction is characterized by comprising the following steps:
s1: obtaining an image dataset to be processed, and constructing the training samples required for the training and learning process of the deep learning enhancement model, wherein one training sample comprises the following components: an underwater degraded image I_d and the reference enhanced image I_r corresponding to the underwater degraded image; the mean and standard deviation of the three channels of I_r in the CIELab color space are used as the reference mean true value and the reference standard deviation true value for training the color migration basic parameter prediction module;
s1.1: calculation ofThe mean value of the three channels is used as a reference mean value true value trained by the color migration basic parameter prediction module, and the reference mean value true value is marked as +.>,/>
wherein , and />Is the height and width of the image, h, w, c are +.>Is of the height, width and length;
s1.2: obtained by calculation in step S1.1Further calculate->Is->The standard deviation of the three channels is used as a reference standard deviation true value trained by the color migration basic parameter prediction module, and the reference standard deviation true value is marked as +.>
S1.3: I_d, I_r, μ_r^c and σ_r^c together form one sample for the deep learning enhancement network;
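As an illustrative sketch of steps S1.1 and S1.2 (the array shape and helper name are assumptions for illustration, not part of the claims), the reference statistics can be computed with NumPy:

```python
import numpy as np

def reference_color_stats(img_lab):
    """S1.1/S1.2: per-channel mean and standard deviation of the reference
    enhanced image I_r in Lab space (H x W x 3), used as the ground-truth
    targets for training the color migration basic parameter prediction
    module."""
    mu = img_lab.mean(axis=(0, 1))     # reference mean true value, one per channel
    sigma = img_lab.std(axis=(0, 1))   # reference std true value (population std)
    return mu, sigma
```

Averaging over the two spatial axes yields one mean and one standard deviation per Lab channel, i.e. the six scalar truth values of one training sample.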
s2: building a deep learning enhancement network;
s2.1: building deep learning enhancement networkThe system comprises an RGB-to-Lab color space module, a color migration basic parameter prediction module, a color migration bias parameter matrix prediction module, a built-in color migration module and an Lab-to-RGB color space module;
s2.2: according to the depth regression network structure, a color migration basic parameter prediction module for acquiring global migration parameters is built, and six basic parameters for color migration, namely target migration images, are predicted and output by taking a degradation image represented by a Lab color space as inputMean>Standard deviation->
The color migration basic parameter prediction module consists of a depth coding module and a depth regression module, wherein the depth coding module is formed by stacking a convolution layer, an activation layer and a pooling layer in a depth feature extraction structure, and extracts depth feature expressions of all layers of an input image; the depth regression module adopts a full connection structure, acquires global migration parameter guidance, predicts and outputs six basic parameters for color migration;
s2.3: based on the coding-decoding structure, a color migration bias parameter matrix prediction module for obtaining local difference migration parameters is built, and the color migration bias parameter matrix prediction module comprises three prediction branches which are respectively represented by Lab color spaceThree channels of the degraded image are taken as input, and target migration images are respectively outputMigration bias parameter matrix of three channels;
the color migration bias parameter matrix prediction module comprises a self-attention spectrum sensing module, a channel feature extraction module and an encoding-decoding-based bias parameter matrix prediction module; the self-attention spectrum sensing module computes the self-attention spectrum A^c of channel c from the normalization matrix N^c of each channel of the input image I_d;
the channel feature extraction module takes each channel normalization matrix as input, extracts features for migration bias parameter matrix prediction under the guidance of self-attention spectrum, and the structure is formed by stacking a convolution layer, an activation layer and a pooling layer in a typical depth feature extraction structure;
the depth encoding-decoding module, guided by the obtained local difference migration parameters, predicts and generates the two color migration bias parameter matrices Δμ^c and Δσ^c corresponding to the mean and the standard deviation;
S2.4: adding the six predicted basic migration parameters and the color migration bias parameter matrices of the corresponding channels pixel by pixel to obtain the fused color migration parameter matrices for color migration:

M^c = μ_t^c ⊕ Δμ^c, S^c = σ_t^c ⊕ Δσ^c

wherein ⊕ denotes adding a scalar value to every element of a matrix;
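The scalar-plus-matrix fusion of S2.4 corresponds directly to NumPy broadcasting. The following sketch uses zero bias matrices as stand-ins for real network output; all shapes and values are illustrative assumptions:

```python
import numpy as np

# Six basic parameters predicted by the global branch: a target mean and a
# target standard deviation for each of the three Lab channels (scalars).
mu_t = np.array([60.0, 4.0, -3.0])
sigma_t = np.array([12.0, 6.0, 5.0])

# Bias parameter matrices predicted by the local branch, one value per pixel
# and channel (H x W x 3); zeros here stand in for real network output.
height, width = 4, 4
delta_mu = np.zeros((height, width, 3))
delta_sigma = np.zeros((height, width, 3))

# S2.4 fusion: scalar-plus-matrix addition, realized by broadcasting, adds
# each basic parameter to every element of its channel's bias matrix.
fused_mu = mu_t + delta_mu           # H x W x 3 fused mean matrix
fused_sigma = sigma_t + delta_sigma  # H x W x 3 fused std matrix
```

With non-zero bias matrices, each pixel receives its own locally adjusted target mean and standard deviation, which is what makes the migration multi-scale.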
s2.5: based on the fused color migration parameter matrices obtained in S2.4, designing the network built-in color migration module according to the following formula to obtain the color-migrated underwater visual enhancement image Î:

Î^c = (S^c / σ_d^c) ⊙ (I_d^c − μ_d^c) + M^c

in the formula, Î^c is the channel-c matrix of the enhanced image, I_d^c is the channel-c matrix of the input degraded image, μ_d^c is the mean of the channel-c matrix elements of the input degraded image, σ_d^c is the standard deviation of the channel-c matrix elements of the input degraded image, and ⊙ and / denote element-wise multiplication and division;
the built-in color migration module takes as input the degraded image to be enhanced and the parameter matrices obtained by fusing the color migration basic parameters with the bias parameter matrices, and outputs the color-migrated underwater image with enhanced visual quality;
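A minimal NumPy sketch of the built-in color migration of S2.5, a Reinhard-style statistics transfer applied per pixel with the fused parameter matrices; the function name and the epsilon guard against division by zero are illustrative assumptions:

```python
import numpy as np

def color_migration(degraded_lab, fused_mu, fused_sigma, eps=1e-6):
    """S2.5 built-in color migration (Reinhard-style statistics transfer).

    degraded_lab          : H x W x 3 degraded image in Lab space
    fused_mu, fused_sigma : H x W x 3 fused target mean/std matrices
    """
    mu_d = degraded_lab.mean(axis=(0, 1))    # per-channel mean of the input
    sigma_d = degraded_lab.std(axis=(0, 1))  # per-channel std of the input
    # Scale each pixel's deviation from the input mean by the target/input
    # std ratio, then shift to the target mean, element-wise per channel.
    return (fused_sigma / (sigma_d + eps)) * (degraded_lab - mu_d) + fused_mu
```

When the fused matrices are spatially constant, the output's channel statistics match the target mean and standard deviation exactly, which is the classic global color transfer; the per-pixel matrices generalize this to local adjustments.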
s3: training a deep learning enhancement network to obtain a deep learning enhancement model, performing constraint optimization on a color migration basic parameter prediction module by using parameter regression loss, and performing feedback optimization on model parameters of the whole deep learning enhancement network by using pixel value difference loss and structural similarity loss to constrain the overall output of the deep learning enhancement network;
s4: taking an underwater image with degraded visual quality as the input of the deep learning enhancement model, and outputting an enhanced image with improved visual quality after processing by the color migration parameter prediction modules and the built-in color migration module;
s3 comprises the following steps:
s3.1: recording the degraded underwater image asAs an input to a deep learning enhancement network.
2. The method for underwater image enhancement based on multi-scale color migration parameter prediction of claim 1, wherein S3 comprises:
s3.2: outputting six basic parameters of the color migration basic parameter prediction module, namely the target migration imageThe mean and standard deviation of the three channels are denoted +.> and />, wherein />Representing channel index, calculate->And->、/>And->Parameter prediction error loss between them, parameter prediction error loss is +.>
3. The method for underwater image enhancement based on multi-scale color migration parameter prediction as claimed in claim 2, wherein S3 comprises:
s3.3: input deviceGenerating an enhanced underwater image corresponding to the input after the complete enhanced network model is recorded as +.>Calculation of and />Pixel difference loss L between MSE And structural similarity loss L SSIM
Is the total number of pixels in the image, +.>Is the center pixel of the image block,> and />Respectively is an image->At->Pixel mean and standard deviation of image block, +.> and />Is a set of constant combinations, ">,/>
4. A method of underwater image enhancement based on multi-scale color migration parameter prediction as claimed in claim 3, wherein S3 comprises:
s3.4: by minimizing the sum of loss termsIterative update optimization network->Model parameters of (2):
CN202310952246.8A 2023-08-01 2023-08-01 Underwater image enhancement method based on multi-scale color migration parameter prediction Active CN116664454B (en)

Publications (2)
CN116664454A, published 2023-08-29
CN116664454B, published 2023-11-03
