CN116681627A - Cross-scale fusion adaptive underwater image generative adversarial enhancement method - Google Patents


Info

Publication number
CN116681627A
Authority
CN
China
Prior art keywords
underwater image
image
enhanced
adaptive
underwater
Prior art date
Legal status
Granted
Application number
CN202310967540.6A
Other languages
Chinese (zh)
Other versions
CN116681627B (en)
Inventor
李振铭
朱文博
董超
郑兵
黎海兵
叶姚良
张忠波
李艾园
Current Assignee
Foshan University
Original Assignee
Foshan University
Priority date
Filing date
Publication date
Application filed by Foshan University
Priority to CN202310967540.6A
Publication of CN116681627A
Application granted
Publication of CN116681627B
Legal status: Active
Anticipated expiration


Classifications

    • G06N3/0475 Generative networks
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/048 Activation functions
    • G06N3/094 Adversarial learning
    • G06T2207/20012 Locally adaptive image processing
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]

Abstract

The invention discloses a cross-scale fusion adaptive underwater image generative adversarial enhancement method, comprising the following steps: acquiring an underwater image sample data set, wherein the underwater image sample data set comprises underwater images to be enhanced and ground-truth images; introducing an adaptive image physical-model enhancement module and constructing an adaptive underwater image generative adversarial network model; and performing alternating optimization training of the adaptive underwater image generative adversarial network model on the underwater image sample data set to generate enhanced underwater images. By introducing the adaptive image physical-model enhancement module, the invention better obtains discrimination results for different local regions of the input data. The cross-scale fusion adaptive underwater image generative adversarial enhancement method can be widely applied in the technical field of underwater image enhancement.

Description

Cross-scale fusion adaptive underwater image generative adversarial enhancement method
Technical Field
The invention relates to the technical field of underwater image enhancement, and in particular to a cross-scale fusion adaptive underwater image generative adversarial enhancement method.
Background
At present, when underwater images are captured, the complex underwater environment and the particular physical and chemical behavior of light in different water bodies degrade the images: they take on a blue-green cast and a haze-like effect, so colors are distorted and details are blurred, which hampers the subsequent use of the images. Existing underwater image enhancement techniques have several shortcomings. First, they can produce over-enhancement side effects, and their performance varies across underwater environments and image conditions. Second, they do not target the finer visible details and more salient color information of an image, so the network cannot easily capture the key features of the input; the enhanced images therefore still suffer from insufficient contrast, color distortion, low sharpness, fogging, low resolution and over-enhancement, and the prior art remains limited to particular degradation types and cannot simultaneously resolve the degradations that coexist in an underwater image. Third, existing techniques rely on either a physical model or a deep learning model alone: quality enhancement with a physical model alone is sensitive to the type of underwater image, while a deep-learning-only enhancement model ignores the specific physics of underwater imaging, so current deep-learning-based underwater enhancement models have limited generalization capability. Fourth, existing techniques process only a single RGB input. Although the three RGB primaries can represent very rich colors and color variations, making RGB effective for capturing and rendering real-world color, underwater images suffer color distortion and reduced contrast from absorption and scattering in the water body, so using the RGB color space alone may prevent the model from accurately restoring the true colors and details of the underwater scene.
Disclosure of Invention
In order to solve the above technical problems, the invention aims to provide a cross-scale fusion adaptive underwater image generative adversarial enhancement method that introduces an adaptive image physical-model enhancement module to better obtain discrimination results for different local regions of the input data.
The first technical scheme adopted by the invention is as follows: a cross-scale fusion adaptive underwater image generative adversarial enhancement method, comprising the following steps:
acquiring an underwater image sample data set, wherein the underwater image sample data set comprises underwater images to be enhanced and ground-truth images;
introducing an adaptive image physical-model enhancement module, and constructing an adaptive underwater image generative adversarial network model;
and performing alternating optimization training of the adaptive underwater image generative adversarial network model on the underwater image sample data set to generate an enhanced underwater image.
Further, the step of introducing an adaptive image physical-model enhancement module and constructing an adaptive underwater image generative adversarial network model specifically comprises:
introducing an adaptive image physical-model enhancement module to construct an underwater image generation network, wherein the adaptive image physical-model enhancement module comprises a total variation reconstruction fusion generator network, an adaptive histogram equalization fusion generator network, a dark channel prior estimation fusion generator network and a sparse representation resolution reconstruction fusion generator network;
the underwater image generation network comprises the adaptive image physical-model enhancement module, a multi-color-space conversion module, a multi-scale feature extraction module, a CBAM attention module and a progressive fusion fully convolutional network module;
and combining an underwater image adversarial network to construct the adaptive underwater image generative adversarial network model, wherein the underwater image adversarial network is a PatchGAN discriminator.
Further, the step of performing alternating optimization training of the adaptive underwater image generative adversarial network model on the underwater image sample data set to generate an enhanced underwater image specifically comprises:
inputting the underwater images to be enhanced in the underwater image sample data set into the underwater image generation network of the adaptive underwater image generative adversarial network model for image enhancement processing, and outputting a preliminary enhanced underwater image;
inputting the preliminary enhanced underwater image and the ground-truth image into the adaptive underwater image generative adversarial network model and optimizing the underwater image adversarial network, so that the optimized underwater image adversarial network can generate an enhanced underwater image result.
Further, the step of inputting the underwater images to be enhanced in the underwater image sample data set into the underwater image generation network of the adaptive underwater image generative adversarial network model for image enhancement processing and outputting a preliminary enhanced underwater image specifically comprises:
inputting the underwater image to be enhanced into the underwater image generation network;
calculating a PSNR value of the underwater image to be enhanced, and selecting the corresponding adaptive image physical-model enhancement module according to the PSNR value through membership functions;
restoring the underwater image to be enhanced with the selected adaptive image physical-model enhancement module to obtain a restored underwater image;
performing color conversion on the underwater image to be enhanced with the multi-color-space conversion module to obtain underwater images in three color spaces, wherein the three color spaces comprise the RGB color space, the Lab color space and the HSV color space;
performing multi-scale feature extraction and fusion on the restored underwater image and the underwater images in the three color spaces with the multi-scale feature extraction module to obtain underwater image results fused at three feature scales;
performing extraction and fusion of spatial features and channel features on the underwater image results fused at three scales with the CBAM attention module to obtain an attention-feature underwater image;
and performing deconvolution on the attention-feature underwater image with the progressive fusion fully convolutional network module to obtain the preliminary enhanced underwater image.
Further, the step of calculating a PSNR value of the underwater image to be enhanced and selecting the corresponding adaptive image physical-model enhancement module according to the PSNR value through membership functions specifically comprises:
calculating the PSNR value of the underwater image to be enhanced, and assigning the obtained PSNR value to a variable psnr_value;
calculating the membership degree of psnr_value in each fuzzy set with the fuzz.interp_membership function;
judging the quality level of the underwater image to be enhanced according to the membership degrees, wherein the quality level of the underwater image to be enhanced comprises poor, fair, average and good;
when the level of the underwater image to be enhanced is poor, selecting the total variation reconstruction fusion generator network as the adaptive image physical-model enhancement module;
when the level of the underwater image to be enhanced is fair, selecting the adaptive histogram equalization fusion generator network as the adaptive image physical-model enhancement module;
when the level of the underwater image to be enhanced is average, selecting the dark channel prior estimation fusion generator network as the adaptive image physical-model enhancement module;
and when the level of the underwater image to be enhanced is good, selecting the sparse representation resolution reconstruction fusion generator network as the adaptive image physical-model enhancement module.
Further, the step of restoring the underwater image to be enhanced with the selected adaptive image physical-model enhancement module to obtain a restored underwater image specifically comprises:
restoring the underwater image to be enhanced with the total variation reconstruction fusion generator network by adjusting its pixel values so as to minimize the total variation of the underwater image to be enhanced;
restoring the underwater image to be enhanced with the adaptive histogram equalization fusion generator network by performing histogram equalization within each local block of the underwater image to be enhanced through adaptive histogram equalization, increasing its local contrast;
restoring the underwater image to be enhanced with the dark channel prior estimation fusion generator network by detecting dark channel regions in the underwater image to be enhanced to obtain the extent of atmospheric scattering, removing the haze effect according to the extent of atmospheric scattering, and restoring the detail and contrast of the underwater image to be enhanced;
and restoring the underwater image to be enhanced with the sparse representation resolution reconstruction fusion generator network, namely extracting features from the low-resolution image among the underwater images to be enhanced during sparse representation resolution reconstruction, calculating sparse coefficients from the extraction results with a sparse representation method, reconstructing from the sparse coefficients and a sparse dictionary, obtaining an estimate of the high-resolution image, and restoring the detail and clarity of the underwater image to be enhanced.
Further, the step of performing extraction and fusion of spatial features and channel features on the underwater image results fused at three scales with the CBAM attention module to obtain an attention-feature underwater image specifically comprises:
inputting the underwater image results fused at three feature scales into the CBAM attention module;
with the channel attention block of the CBAM attention module, applying average pooling and maximum pooling operations to the input underwater image results fused at three scales, feeding the results into fully connected layers followed by a nonlinear transformation through the ReLU activation function, performing concat fusion after the transformation, and outputting a channel attention vector;
with the spatial attention block of the CBAM attention module, applying average pooling and maximum pooling operations to the channel attention vector and the underwater image results fused at three scales, splicing the average pooling result with the maximum pooling result, and outputting a spatial attention map through a sigmoid activation function;
multiplying the channel attention vector with the underwater image results fused at three scales to obtain a weighted feature map;
broadcasting the spatial attention map along the channel dimension to obtain a broadcast spatial attention map;
and multiplying the weighted feature map element by element with the broadcast spatial attention map to obtain the attention-feature underwater image.
Further, the step of performing deconvolution on the attention-feature underwater image with the progressive fusion fully convolutional network module to obtain a preliminary enhanced underwater image specifically comprises:
inputting the attention-feature underwater image into the progressive fusion fully convolutional network module, wherein the attention-feature underwater image comprises a largest-scale attention-feature underwater image, a medium-scale attention-feature underwater image and a smallest-scale attention-feature underwater image, and the progressive fusion fully convolutional network module comprises a first deconvolution layer, a second deconvolution layer, a third deconvolution layer, a fourth deconvolution layer and a fifth deconvolution layer;
upsampling the largest-scale attention-feature underwater image with the first deconvolution layer to obtain a first deconvolution layer output result;
fusing the first deconvolution layer output result with the medium-scale attention-feature underwater image in the second deconvolution layer to obtain a second deconvolution layer output result;
fusing the second deconvolution layer output result with the smallest-scale attention-feature underwater image in the third deconvolution layer to obtain a third deconvolution layer output result;
upsampling the third deconvolution layer output result with the fourth deconvolution layer to obtain a fourth deconvolution layer output result;
and upsampling the fourth deconvolution layer output result with the fifth deconvolution layer to obtain the preliminary enhanced underwater image.
Further, the step of inputting the preliminary enhanced underwater image and the ground-truth image into the adaptive underwater image generative adversarial network model to optimize the underwater image adversarial network specifically comprises:
inputting the preliminary enhanced underwater image and the ground-truth image into the underwater image adversarial network, wherein the underwater image adversarial network comprises at least one discrimination block, a 4×4 convolution layer, a batch normalization layer, a spectral normalization layer, a LeakyReLU function, a 1×1 convolution layer, an average pooling layer and a 16×16 patch layer;
the 4×4 convolution layer, the batch normalization layer, the spectral normalization layer and the LeakyReLU function are used to enhance the stability of the underwater image adversarial network;
performing local-pattern and feature capture on the input preliminary enhanced underwater image and ground-truth image with the discrimination block to obtain a discrimination block output result;
adjusting the channel count and feature dimension of the discrimination block output result with the 1×1 convolution layer, and outputting a 1×1 convolution result;
reducing the spatial size of the 1×1 convolution result with the average pooling layer, and outputting an average pooling result;
and performing classification evaluation on the average pooling result with the 16×16 patch layer, and outputting the enhanced underwater image.
The method has the following beneficial effects: the invention acquires an underwater image sample data set, introduces an adaptive image physical-model enhancement module, and constructs an adaptive underwater image generative adversarial network model. The adaptive image physical-model enhancement module calculates the PSNR value of a picture to determine its quality level and adaptively selects the physical model to be added, matching different physical models with deep learning according to the obtained quality level and thereby preventing over-enhancement. Through the multi-color-space conversion module, the Lab and HSV color spaces compensate for the shortcomings of the single RGB color space in underwater environments, such as color distortion and reduced contrast. Feature fusion is performed during the downsampling process, omitting the upsampling step otherwise required when feature information is fused, so feature extraction at different image scales proceeds without loss of image information and the image information is used more effectively for enhancement. The CBAM attention module prevents the fused feature information from becoming redundant and cluttered during downsampling, and the progressive fusion fully convolutional network module at the output of the generation network enhances the model's spatial perception of the input data and better captures local and global relations, improving the spatial consistency of the output result; finally, the PatchGAN discriminator obtains discrimination results for different local regions of the input data, so that a better enhanced image is output.
Drawings
FIG. 1 is a flow chart of the steps of the cross-scale fusion adaptive underwater image generative adversarial enhancement method of the present invention;
FIG. 2 is a schematic diagram of the architecture of the underwater image generation network in the adaptive underwater image generative adversarial network model of the present invention;
FIG. 3 is a schematic diagram of the architecture of the multi-color-space conversion module in the underwater image generation network of the present invention;
FIG. 4 is a schematic diagram of the architecture of the multi-scale feature extraction module in the underwater image generation network of the present invention;
FIG. 5 is a schematic flow chart of the multi-scale feature extraction and fusion processing of the restored underwater image and the underwater images in three color spaces according to the present invention;
FIG. 6 is a schematic diagram of the architecture of the CBAM attention module in the underwater image generation network of the present invention;
FIG. 7 is a schematic diagram of the progressive fusion fully convolutional network module in the underwater image generation network of the present invention;
FIG. 8 is a schematic diagram of the structure of the underwater image adversarial network in the adaptive underwater image generative adversarial network model of the present invention;
FIG. 9 is a flowchart of the steps for selecting the corresponding adaptive image physical-model enhancement module according to the present invention.
Detailed Description
The invention will now be described in further detail with reference to the drawings and specific embodiments. The step numbers in the following embodiments are set for convenience of illustration only; the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted by those skilled in the art.
Referring to FIG. 1, the present invention provides a cross-scale fusion adaptive underwater image generative adversarial enhancement method, comprising the following steps:
S1, acquiring an underwater image sample data set, wherein the underwater image sample data set comprises underwater images to be enhanced and ground-truth images;
specifically, the invention acquires the underwater image sample data set from the synthetic underwater image EUVP dataset used in the fast underwater image enhancement method published by M. J. Islam et al. in IEEE Robot. Autom. Lett., 2020, and from the underwater image enhancement benchmark dataset UIEB published by C. Li et al. in IEEE Trans. Image Process., 2019.
S2, introducing an adaptive image physical-model enhancement module, and constructing an adaptive underwater image generative adversarial network model;
specifically, the adaptive underwater image generative adversarial network model constructed by the invention comprises an underwater image generation network and an underwater image adversarial network. As shown in FIG. 2, the underwater image generation network comprises the adaptive image physical-model enhancement module, a multi-color-space conversion module, a multi-scale feature extraction module, a CBAM attention module and a progressive fusion fully convolutional network module. The generation network can be divided into a downsampling process and an upsampling process: the downsampling process corresponds to the image physical-model enhancement module, the multi-color multi-scale feature extraction module and the module fusing physical-model and deep-learning features, while the upsampling process corresponds to the CBAM attention module and the progressive fusion fully convolutional network. As shown in FIG. 8, the underwater image adversarial network is a PatchGAN discriminator.
S3, performing alternating optimization training of the adaptive underwater image generative adversarial network model on the underwater image sample data set to generate enhanced underwater images.
Specifically, the input low-quality image is first processed in parallel by physical-model image restoration and by color-space conversion; the outputs are, respectively, the physical-model restoration result and the Lab and HSV color-space images. The multi-scale feature extraction module then extracts multi-scale features from the RGB, Lab and HSV color spaces and from the physical-model output, and the extracted features are fused at the different scales to obtain three fusion results of large, medium and small scale, namely fusion feature 1 (smallest-scale fusion), fusion feature 2 (medium-scale fusion) and fusion feature 3 (largest-scale fusion). The three fused features enter the channel-spatial attention mechanism, which suppresses unimportant features and highlights important ones, improving model performance. The attention outputs enter the deconvolution layers of the upsampling process layer by layer, and the enhanced image result is finally output through five deconvolution layers. The output of the generation network is then input, together with the ground-truth image, into the adversarial network, which classifies the input data as real or generated; discrimination results for different local regions of the input data are obtained through the 16×16 patches.
The termination condition that guides the alternating optimization training follows the structure of the generative adversarial network, which deploys two subnetworks: a generator and a discriminator. The generator network enhances the degraded image and produces a high-quality result; the discriminator determines whether the generator's output is an actual high-resolution image or merely a synthesized one. When the discriminator can no longer distinguish the generated image from the real image, the training process ends and the enhanced underwater image is generated.
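The alternating optimization described above can be sketched as a standard GAN training loop. The following PyTorch illustration is a minimal sketch, not the patent's exact procedure: `generator`, `discriminator` and `dataloader` are assumed to exist, and the loss weights and optimizer settings are illustrative assumptions.

```python
# Minimal alternating-optimization sketch: discriminator and generator steps,
# assuming PyTorch models `generator`/`discriminator` and a dataloader that
# yields (degraded, ground_truth) image pairs.
import torch
import torch.nn as nn

adv_loss = nn.BCEWithLogitsLoss()   # patch-wise real/fake loss
l1_loss = nn.L1Loss()               # content loss against the ground truth

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

for degraded, ground_truth in dataloader:
    # Discriminator step: tell the enhanced output apart from the ground truth.
    fake = generator(degraded).detach()
    d_real = discriminator(ground_truth)          # 16x16 grid of patch scores
    d_fake = discriminator(fake)
    d_loss = adv_loss(d_real, torch.ones_like(d_real)) + \
             adv_loss(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: fool the discriminator while staying close to the target.
    fake = generator(degraded)
    d_fake = discriminator(fake)
    g_loss = adv_loss(d_fake, torch.ones_like(d_fake)) + \
             100.0 * l1_loss(fake, ground_truth)   # L1 weight is an assumption
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```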
S31, selecting the corresponding adaptive image physical-model enhancement module;
specifically, as shown in FIG. 9, if the features extracted by the physical model are rigidly fused with the features of the deep learning model, information redundancy may cause the physical model matched to the deep learning network to produce over-enhancement effects, such as an over-exposed or overly dark enhanced output image;
therefore a suitable physical model must be selected adaptively. The invention imitates fuzzy control and uses membership functions over the calculated PSNR value of the picture to determine, according to the picture's quality level, which physical model is adaptively added;
when the picture quality is poor, total variation reconstruction is selected. A poor picture quality usually indicates that the distortion between the processed image and the original image is very serious and the image quality is very bad; total variation reconstruction is suited to image reconstruction and denoising problems, so the total variation reconstruction physical model is appropriate;
through judgment of the membership functions, when the picture quality is fair, the physical model of CLAHE (contrast-limited adaptive histogram equalization) is selected. A fair picture quality usually arises because the image has low brightness and contrast, with a certain degree of distortion between the processed image and the original image and poor image quality; CLAHE is suited to improving image contrast, especially under uneven illumination, so the CLAHE physical model is more appropriate;
through judgment of the membership functions, when the picture quality is average, the physical model of DCP (dark channel prior) is selected. An average picture quality means the quality is fairly good, close to the level of no perceptible distortion, but factors that lower the PSNR value, such as haze, still exist; based on the dark channel prior assumption, the DCP algorithm uses the dark channel information in the picture to remove haze and improve clarity and contrast, and is suited to dehazing and detail enhancement, so the DCP physical model is appropriate;
through judgment of the membership functions, when the picture quality is good, a reconstruction algorithm based on sparse representation is selected. A good picture quality means the image quality is very high and close to that of the original image, but to obtain a clearer image texture structure, the sparse-representation-based reconstruction algorithm effectively improves the resolution and detail of the image;
the membership functions are defined by invoking a fuzzy logic toolkit (e.g., a fuzzy logic toolbox). These membership functions describe the relationship between the fuzzy variable (here, quality) and its fuzzy sets ("poor", "fair", "average", "good", "excellent"). Next, the mean PSNR (peak signal-to-noise ratio) is calculated and assigned to the variable psnr_value; in the fuzzy inference stage, the membership degree of psnr_value in each fuzzy set is calculated with the fuzz.interp_membership function. The degree of psnr_value in each fuzzy set is thus determined, and the computed memberships are stored in a dictionary named quality_levels, where the keys are the quality levels (such as "poor", "fair", "average", "good", "excellent") and the values are the membership degrees of the corresponding levels. The quality level with the largest membership degree is found with the max function using key=quality_levels.get. This quality level is the final result determined from the input PSNR value and describes the quality grade of the image or video; the pairing of the different physical models with deep learning is adjusted according to the obtained quality level, preventing over-enhancement.
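As an illustration of this fuzzy selection, the following Python sketch uses the scikit-fuzzy package, whose fuzz.interp_membership function is the one named in the description. The breakpoints of the triangular membership functions and the mapping from quality levels to physical models are assumptions, since the patent does not fix them; the "excellent" fallback is likewise an assumption.

```python
# PSNR-based fuzzy selection of the physical-model enhancement module,
# assuming scikit-fuzzy (pip install scikit-fuzzy).
import numpy as np
import skfuzzy as fuzz

psnr_range = np.arange(0, 50, 0.1)
membership = {
    "poor":      fuzz.trimf(psnr_range, [0, 0, 15]),
    "fair":      fuzz.trimf(psnr_range, [10, 17, 24]),
    "average":   fuzz.trimf(psnr_range, [20, 26, 32]),
    "good":      fuzz.trimf(psnr_range, [28, 34, 40]),
    "excellent": fuzz.trimf(psnr_range, [38, 50, 50]),
}

# Four selection rules are given in the description; "excellent" falls back
# to sparse reconstruction here (an assumption).
model_for_level = {
    "poor": "total_variation", "fair": "clahe",
    "average": "dcp", "good": "sparse_reconstruction",
}

def select_physical_model(psnr_value: float) -> str:
    # Degree of psnr_value in each fuzzy set, as in the description.
    quality_levels = {level: fuzz.interp_membership(psnr_range, mf, psnr_value)
                      for level, mf in membership.items()}
    level = max(quality_levels, key=quality_levels.get)
    return model_for_level.get(level, "sparse_reconstruction")
```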
S32, the adaptive image physical-model enhancement module;
specifically, the adaptive image physical-model enhancement module comprises a total variation reconstruction fusion generator network, an adaptive histogram equalization fusion generator network, a dark channel prior estimation fusion generator network and a sparse representation resolution reconstruction fusion generator network;
total variation reconstruction fusion generator network (Total Variation Reconstruction): the total variation reconstruction process minimizes the total variation of the image by adjusting pixel values. Minimizing the total variation improves the clarity and detail of the image, recovers edge information, and outputs a physical-model prior result that is clearer and less noisy;
dark channel prior estimation fusion generator network (DCP): by detecting dark channel regions in the image, the extent of atmospheric scattering can be estimated. The estimate is used to remove haze effects, recovering the detail and contrast of the image and outputting a dehazed physical-model prior result;
adaptive histogram equalization fusion generator network (CLAHE): adaptive histogram equalization performs histogram equalization within each local block to increase local contrast. Statistical properties of the local block, such as its average luminance and standard deviation, are taken into account during local histogram equalization. In this way, adaptive histogram equalization avoids over-enhancing noise or texture in the image, maintains overall smoothness, improves the contrast imbalance of low-quality images, and outputs a physical-model prior result with higher contrast;
sparse-representation-based resolution reconstruction (Sparse Reconstruction): in sparse representation resolution reconstruction, features are first extracted from the low-resolution image, and sparse coefficients are then computed with a sparse representation method. Next, reconstruction is performed with the sparse coefficients and a sparse dictionary to obtain an estimate of the high-resolution image. The process aims to recover the detail and clarity of the image, reduce blurring and detail loss, restore high-resolution detail from the low-resolution image, and output a physical-model prior result with higher resolution;
the output size after physical-model processing is 256×256×3.
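For reference, three of the four priors can be approximated with standard library routines. The sketch below assumes OpenCV and scikit-image (version 0.19 or later for the channel_axis argument); all parameter values are illustrative assumptions, and the sparse-representation reconstruction is omitted because it depends on a learned dictionary.

```python
# Illustrative stand-ins for the TV, CLAHE and DCP priors, not the patent's
# exact implementations.
import cv2
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tv_restore(img_bgr: np.ndarray) -> np.ndarray:
    # Total variation reconstruction: adjust pixel values so that the total
    # variation of the image is minimized (Chambolle's algorithm here).
    out = denoise_tv_chambolle(img_bgr / 255.0, weight=0.1, channel_axis=-1)
    return (out * 255).astype(np.uint8)

def clahe_restore(img_bgr: np.ndarray) -> np.ndarray:
    # CLAHE: contrast-limited histogram equalization per local block, applied
    # on the luminance channel to raise local contrast.
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

def dark_channel(img_bgr: np.ndarray, patch: int = 15) -> np.ndarray:
    # DCP: channel-wise minimum followed by a local minimum filter; the dark
    # channel estimates the extent of scattering used for haze removal.
    min_rgb = img_bgr.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)
```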
S33, the multi-color-space conversion module;
specifically, the input RGB image is converted into an HSV image and a Lab image with color-space conversion algorithms, forming three color-space inputs, as shown in FIG. 3, each of size 256×256×3;
s331, the HSV conversion process;
specifically, the RGB values are normalized from the range 0-255 to 0-1, which is accomplished by dividing each RGB component by 255, expressed as:
$R' = R/255,\quad G' = G/255,\quad B' = B/255$;
the maximum and minimum of the RGB components, and their difference, are calculated as follows:
$C_{\max} = \max(R', G', B'),\quad C_{\min} = \min(R', G', B'),\quad \Delta = C_{\max} - C_{\min}$;
hue calculation, the operation of representing each pixel of the image by its hue value, proceeds as follows:
$$H = \begin{cases} 0^\circ, & \Delta = 0 \\ 60^\circ \times \left(\dfrac{G' - B'}{\Delta} \bmod 6\right), & C_{\max} = R' \\ 60^\circ \times \left(\dfrac{B' - R'}{\Delta} + 2\right), & C_{\max} = G' \\ 60^\circ \times \left(\dfrac{R' - G'}{\Delta} + 4\right), & C_{\max} = B' \end{cases}$$
saturation calculation, the operation of representing each pixel of the image by its saturation value, proceeds as follows:
$$S = \begin{cases} 0, & C_{\max} = 0 \\ \Delta / C_{\max}, & C_{\max} \neq 0 \end{cases}$$
value calculation, the operation of representing each pixel of the image by its brightness value, proceeds as follows:
$V = C_{\max}$
s332, the Lab conversion process;
specifically, the RGB values are normalized from the range 0-255 to 0-1 by dividing each RGB component by 255:
$R' = R/255,\quad G' = G/255,\quad B' = B/255$;
the normalized RGB is then converted to linear RGB by applying an inverse gamma correction; a typical inverse gamma correction raises each RGB component to the power 2.2:
$R_{\text{lin}} = (R')^{2.2},\quad G_{\text{lin}} = (G')^{2.2},\quad B_{\text{lin}} = (B')^{2.2}$;
the XYZ components are then calculated; XYZ is a color space based on the CIE standard, and its components are computed as:
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{bmatrix} \begin{bmatrix} R_{\text{lin}} \\ G_{\text{lin}} \\ B_{\text{lin}} \end{bmatrix};$$
converting XYZ to Lab involves normalizing the XYZ components by the reference white point $(X_n, Y_n, Z_n)$, expressed as:
$$L = 116\, f(Y/Y_n) - 16,\quad a = 500\,\bigl(f(X/X_n) - f(Y/Y_n)\bigr),\quad b = 200\,\bigl(f(Y/Y_n) - f(Z/Z_n)\bigr),$$ where $f(t) = t^{1/3}$ for $t > (6/29)^3$ and $f(t) = \frac{1}{3}(29/6)^2\, t + 4/29$ otherwise.
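In practice these conversions are usually delegated to a library. A minimal OpenCV sketch follows; note that OpenCV's built-in Lab conversion uses the sRGB transfer function rather than the plain power of 2.2 written above, so the results differ slightly.

```python
# Multi-color-space conversion: one RGB input becomes three generator inputs.
import cv2
import numpy as np

def to_three_color_spaces(img_rgb: np.ndarray):
    # img_rgb: HxWx3 uint8 RGB image (256x256x3 in this method).
    hsv = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2HSV)  # hue/saturation/value
    lab = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2LAB)  # CIE Lab (sRGB gamma)
    return img_rgb, hsv, lab  # three 256x256x3 inputs
```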
s34, a multi-scale feature extraction module;
specifically, three color spaces and pictures processed by a physical model are respectively input into a multi-scale feature extraction module (multiscale block) as shown in fig. 4, scale1, scale2 and scale3 respectively correspond to feature extraction of three scales of large, medium and small, each scale (scale extraction block) consists of three layers of convolution layers Conv2d, the only difference is that the convolution kernel sizes of all the layers are different, the filling parameters are different, the convolution kernel sizes of all the Conv2d layers in the scale1 are 3*3, the step size is 1, the filling number is 1, the convolution kernel sizes of all the Conv2d layers in the scale2 are 5*5, the step size is 1, the filling number is 2, the convolution kernel sizes of all the Conv2d layers in the scale3 are 7*7, the step size is 1, the filling number is 3, and the channel number change relation corresponding to each scale is that the number of channels is increased from 3 to 16 and from 16 to 32;
The sizes of the feature graphs with different scales are 256 multiplied by 3, the step of up-sampling when fusion is needed is omitted, the feature information is prevented from being lost, the feature extraction with different scales is carried out on the image, and the image information is effectively utilized for image enhancement;
the fusion of the output processed by the physical model and the features extracted by different scales of each color space is shown in fig. 5, the output processed by the physical model is also processed by a Multiscale module to extract the multi-scale features, and the concat operation is utilized to enable the features of each scale to be simply fused;
therefore, multiscale characteristic extraction can be utilized to ensure the integrity of image information, a preliminary repair result or guiding information can be provided through a physical model, a deep learning model can further optimize the repair result and process complex image conditions, and the generalization capability of an underwater image enhancement model is improved.
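A PyTorch sketch of such a multiscale block is given below, under the stated kernel, stride and padding settings; the width of the third convolution layer (32 to 32) is an assumption, since the description only lists the channel growth 3 to 16 and 16 to 32.

```python
# Three parallel branches of three Conv2d layers each (3x3, 5x5, 7x7 kernels,
# stride 1, padding 1/2/3), so the 256x256 spatial size is preserved.
import torch
import torch.nn as nn

def scale_branch(kernel: int) -> nn.Sequential:
    pad = kernel // 2
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel, stride=1, padding=pad), nn.ReLU(inplace=True),
        nn.Conv2d(16, 32, kernel, stride=1, padding=pad), nn.ReLU(inplace=True),
        nn.Conv2d(32, 32, kernel, stride=1, padding=pad), nn.ReLU(inplace=True),
    )

class MultiscaleBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.scale1 = scale_branch(3)   # large-scale features
        self.scale2 = scale_branch(5)   # medium-scale features
        self.scale3 = scale_branch(7)   # small-scale features

    def forward(self, x: torch.Tensor):
        # One feature map per scale; same-scale features from the four inputs
        # (RGB, Lab, HSV, physical-model output) are later concat-fused.
        return self.scale1(x), self.scale2(x), self.scale3(x)
```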
S35, the CBAM attention module;
specifically, to prevent the fused feature information from becoming redundant and cluttered, a channel-spatial attention mechanism (CBAM) is used, as shown in FIG. 6; CBAM consists of a channel attention block (CA) and a spatial attention block (SA);
the channel attention block (CA) applies an average pooling (Avg-pool) operation and a maximum pooling (Max-pool) operation to the input, feeds each pooled feature map into two fully connected layers with nonlinear transformation through the ReLU activation function between them; each fully connected branch outputs a C-dimensional channel attention vector, the two vectors are concat-fused, and a sigmoid activation yields the importance weight that each element assigns to the corresponding output;
the spatial attention block (SA) applies average pooling (Avg-pool) and maximum pooling (Max-pool) operations, splices the max-pooled and average-pooled feature maps along the channel dimension, convolves the spliced feature map with a convolution kernel, and outputs a single-channel attention heat map through a sigmoid activation function;
the channel attention vector is multiplied with the input feature map (H×W×C) to weight each channel, yielding a weighted feature map (H×W×C); the spatial attention map is broadcast along the channel dimension so that it has the same size (H×W×C) as the feature map, and the broadcast spatial attention map is multiplied element by element with the weighted feature map to obtain the fused feature map (H×W×C);
the channel attention block learns channel weights with global average pooling and fully connected layers to weight each channel, while the spatial attention block captures important spatial locations in the feature map through max pooling and average pooling operations and uses them to generate the spatial attention map, improving the model's attention to important features and suppressing irrelevant ones, thereby improving the performance and generalization ability of the model.
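The following is a conventional CBAM implementation consistent with this description. The reduction ratio of the shared MLP and the 7×7 spatial kernel are the usual defaults and are assumptions here; note that the original CBAM adds the two pooled channel branches before the sigmoid, which is how it is written below.

```python
# CBAM: channel attention (CA) followed by spatial attention (SA).
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # CA: shared two-layer MLP over avg- and max-pooled channel descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # SA: 7x7 convolution over the concatenated avg/max channel-wise maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))            # Avg-pool branch
        mx = self.mlp(x.amax(dim=(2, 3)))             # Max-pool branch
        ca = torch.sigmoid(avg + mx).view(b, c, 1, 1) # channel weights
        x = x * ca                                    # weighted feature map
        sa_in = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)
        sa = torch.sigmoid(self.spatial(sa_in))       # single-channel map
        return x * sa                                 # broadcast along channels
```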
S36, the progressive fusion fully convolutional network module;
specifically, as shown in FIG. 7, the feature map of the largest scale output by CBAM enters the deconvolution (transposed convolution) network of the upsampling process;
first deconvolution layer: this is the first layer of the decoder network and takes the largest-scale features as input. A deconvolution layer (also called a transposed convolution layer) enlarges the spatial size of the feature map through an upsampling operation; here, the first deconvolution layer upsamples the largest-scale features to recover a higher-resolution feature map;
second deconvolution layer: the output of the first layer is fused with the medium-scale features and then input into the second deconvolution layer; the fusion can be performed by addition, concatenation and similar operations, and the second deconvolution layer further enlarges the spatial size of the feature map, producing a higher-resolution feature map;
third deconvolution layer: the output of the second layer is fused with the smallest-scale features, again by addition, concatenation or similar operations, and the fused features are input into the third deconvolution layer, which further enlarges the spatial size of the feature map to produce a higher-resolution feature map;
fourth deconvolution layer: the output of the third layer is passed as input to the fourth deconvolution layer, which continues the upsampling operation and further increases the spatial size of the feature map;
fifth deconvolution layer: the output of the fourth layer is passed as input to the final deconvolution layer, which continues the upsampling and finally produces the final reconstructed feature map;
features of different scales are thus integrated layer by layer, enriching the representational power of the features. This helps the model better understand the different levels of features in the input data, improving the quality of the output result; information can be exchanged across the different spatial scales of the feature map, with larger-scale features attending more to global context and smaller-scale features attending more to local detail. Fusing features of different scales enhances the model's spatial perception of the input data and better captures local and global relations, improving the spatial consistency of the output result.
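A sketch of this five-layer decoder is shown below. Channel widths are assumptions, fusion is shown as channel concatenation, and a bilinear interpolation aligns spatial sizes before each fusion, since the description allows fusion "by addition, concatenation and similar operations" without fixing the shapes.

```python
# Progressive fusion decoder: five ConvTranspose2d stages, with the medium-
# and small-scale features fused in after the first and second stages.
import torch
import torch.nn as nn
import torch.nn.functional as F

def up(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(
        nn.ConvTranspose2d(cin, cout, kernel_size=4, stride=2, padding=1),
        nn.ReLU(inplace=True),
    )

class ProgressiveFusionDecoder(nn.Module):
    def __init__(self, c: int = 32):
        super().__init__()
        self.deconv1 = up(c, c)          # upsamples the largest-scale features
        self.deconv2 = up(2 * c, c)      # input: deconv1 output + medium scale
        self.deconv3 = up(2 * c, c)      # input: deconv2 output + small scale
        self.deconv4 = up(c, c)
        self.deconv5 = nn.Sequential(    # final layer maps to a 3-channel image
            nn.ConvTranspose2d(c, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),
        )

    @staticmethod
    def fuse(x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        # Align spatial sizes before concatenation (an assumption; the patent
        # also permits fusion by addition).
        skip = F.interpolate(skip, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)
        return torch.cat([x, skip], dim=1)

    def forward(self, f_large, f_medium, f_small):
        x = self.deconv1(f_large)
        x = self.deconv2(self.fuse(x, f_medium))
        x = self.deconv3(self.fuse(x, f_small))
        x = self.deconv4(x)
        return self.deconv5(x)
```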
S37, the underwater image adversarial network;
specifically, the discriminator is modeled as a PatchGAN, as in FIG. 8, and consists of multiple discrimination blocks (each comprising a 4×4 convolution layer, a batch normalization layer, a spectral normalization layer and a LeakyReLU function) together with several other layers (a 1×1 convolution layer, an average pooling layer and a 16×16 patch layer);
discrimination block(s): the function of each discrimination block is to perform feature extraction and nonlinear mapping on the input data. The discrimination block captures local patterns and features of the input data through its 4×4 convolution layer; the batch normalization and spectral normalization layers help improve the stability and generalization capability of the model and prevent gradient problems and overfitting; finally, the LeakyReLU function introduces nonlinear activation and applies a nonlinear transformation to the features;
1×1 convolution layer: the 1×1 convolution layer adjusts the number of channels of the feature map and reduces computational cost; it can reduce the dimension of the feature map through linear combination and dimension-reduction operations. In this discriminator, the 1×1 convolution layer is used to reduce the number of channels and adjust the dimension of the feature map;
average pooling layer: the average pooling layer reduces the spatial size of the feature map; it averages the feature values of each small region, reducing the spatial dimension and thereby the number of parameters in the model. In this discriminator, the average pooling layer further reduces the size of the feature map;
16×16 patch: the last layer is a 16×16 patch layer that processes the discriminator's output. Each patch can be viewed as a unit for dividing and evaluating the input data, and each patch represents the discriminator's judgment of the corresponding region. In a generative adversarial network, the task of the discriminator is to classify the input data and determine whether it is real data or generated data; discrimination results for the different local regions of the input data are obtained through the 16×16 patches.
The content in the method embodiment is applicable to the system embodiment, the functions specifically realized by the system embodiment are the same as those of the method embodiment, and the achieved beneficial effects are the same as those of the method embodiment.
While the preferred embodiment of the present application has been described in detail, the application is not limited to the embodiment, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the application, and these equivalent modifications and substitutions are intended to be included in the scope of the present application as defined in the appended claims.

Claims (9)

1. A cross-scale fusion adaptive underwater image generative adversarial enhancement method, comprising the following steps:
acquiring an underwater image sample data set, wherein the underwater image sample data set comprises underwater images to be enhanced and ground-truth images;
introducing an adaptive image physical-model enhancement module, and constructing an adaptive underwater image generative adversarial network model;
and performing alternating optimization training of the adaptive underwater image generative adversarial network model on the underwater image sample data set to generate an enhanced underwater image.
2. The cross-scale fusion adaptive underwater image generative adversarial enhancement method according to claim 1, wherein the step of introducing an adaptive image physical-model enhancement module and constructing an adaptive underwater image generative adversarial network model specifically comprises:
introducing an adaptive image physical-model enhancement module to construct an underwater image generation network, wherein the adaptive image physical-model enhancement module comprises a total variation reconstruction fusion generator network, an adaptive histogram equalization fusion generator network, a dark channel prior estimation fusion generator network and a sparse representation resolution reconstruction fusion generator network;
the underwater image generation network comprises the adaptive image physical-model enhancement module, a multi-color-space conversion module, a multi-scale feature extraction module, a CBAM attention module and a progressive fusion fully convolutional network module;
and combining an underwater image adversarial network to construct the adaptive underwater image generative adversarial network model, wherein the underwater image adversarial network is a PatchGAN discriminator.
3. The cross-scale fusion adaptive underwater image generative adversarial enhancement method according to claim 2, wherein the step of performing alternating optimization training of the adaptive underwater image generative adversarial network model on the underwater image sample data set to generate an enhanced underwater image specifically comprises:
inputting the underwater images to be enhanced in the underwater image sample data set into the underwater image generation network of the adaptive underwater image generative adversarial network model for image enhancement processing, and outputting a preliminary enhanced underwater image;
inputting the preliminary enhanced underwater image and the ground-truth image into the adaptive underwater image generative adversarial network model and optimizing the underwater image adversarial network, so that the optimized underwater image adversarial network can generate an enhanced underwater image result.
4. The cross-scale fusion adaptive underwater image generative adversarial enhancement method according to claim 3, wherein the step of inputting the underwater images to be enhanced in the underwater image sample data set into the underwater image generation network of the adaptive underwater image generative adversarial network model for image enhancement processing and outputting a preliminary enhanced underwater image specifically comprises:
inputting the underwater image to be enhanced into the underwater image generation network;
calculating a PSNR value of the underwater image to be enhanced, and selecting the corresponding adaptive image physical-model enhancement module according to the PSNR value through membership functions;
restoring the underwater image to be enhanced with the selected adaptive image physical-model enhancement module to obtain a restored underwater image;
performing color conversion on the underwater image to be enhanced with the multi-color-space conversion module to obtain underwater images in three color spaces, wherein the three color spaces comprise the RGB color space, the Lab color space and the HSV color space;
performing multi-scale feature extraction and fusion on the restored underwater image and the underwater images in the three color spaces with the multi-scale feature extraction module to obtain underwater image results fused at three feature scales;
performing extraction and fusion of spatial features and channel features on the underwater image results fused at three scales with the CBAM attention module to obtain an attention-feature underwater image;
and performing deconvolution on the attention-feature underwater image with the progressive fusion fully convolutional network module to obtain the preliminary enhanced underwater image.
5. The cross-scale fusion adaptive underwater image generative adversarial enhancement method according to claim 4, wherein the step of calculating a PSNR value of the underwater image to be enhanced and selecting the corresponding adaptive image physical-model enhancement module according to the PSNR value through membership functions specifically comprises:
calculating the PSNR value of the underwater image to be enhanced, and assigning the obtained PSNR value to a variable psnr_value;
calculating the membership degree of psnr_value in each fuzzy set with the fuzz.interp_membership function;
judging the quality level of the underwater image to be enhanced according to the membership degrees, wherein the quality level of the underwater image to be enhanced comprises poor, fair, average and good;
when the level of the underwater image to be enhanced is poor, selecting the total variation reconstruction fusion generator network as the adaptive image physical-model enhancement module;
when the level of the underwater image to be enhanced is fair, selecting the adaptive histogram equalization fusion generator network as the adaptive image physical-model enhancement module;
when the level of the underwater image to be enhanced is average, selecting the dark channel prior estimation fusion generator network as the adaptive image physical-model enhancement module;
and when the level of the underwater image to be enhanced is good, selecting the sparse representation resolution reconstruction fusion generator network as the adaptive image physical-model enhancement module.
6. The cross-scale fusion adaptive underwater image generative adversarial enhancement method according to claim 5, wherein the step of restoring the underwater image to be enhanced with the selected adaptive image physical-model enhancement module to obtain a restored underwater image specifically comprises:
restoring the underwater image to be enhanced with the total variation reconstruction fusion generator network by adjusting its pixel values so as to minimize the total variation of the underwater image to be enhanced;
restoring the underwater image to be enhanced with the adaptive histogram equalization fusion generator network by performing histogram equalization within each local block of the underwater image to be enhanced through adaptive histogram equalization, increasing its local contrast;
restoring the underwater image to be enhanced with the dark channel prior estimation fusion generator network by detecting dark channel regions in the underwater image to be enhanced to obtain the extent of atmospheric scattering, removing the haze effect according to the extent of atmospheric scattering, and restoring the detail and contrast of the underwater image to be enhanced;
and restoring the underwater image to be enhanced with the sparse representation resolution reconstruction fusion generator network, namely extracting features from the low-resolution image among the underwater images to be enhanced during sparse representation resolution reconstruction, calculating sparse coefficients from the extraction results with a sparse representation method, reconstructing from the sparse coefficients and a sparse dictionary, obtaining an estimate of the high-resolution image, and restoring the detail and clarity of the underwater image to be enhanced.
7. The cross-scale fusion adaptive underwater image generative adversarial enhancement method according to claim 6, wherein the step of performing spatial feature and channel feature extraction and fusion processing on the three-scale feature-fused underwater image result based on the CBAM attention module to obtain the attention-feature underwater image specifically comprises the following steps:
inputting the three-scale feature-fused underwater image result to the CBAM attention module;
the channel attention block of the CBAM attention module performs average pooling and maximum pooling on the input three-scale feature-fused underwater image result, feeds each pooled descriptor into a fully connected layer followed by a ReLU activation for nonlinear transformation, fuses the transformed results by concatenation, and outputs the channel attention vector;
the spatial attention block of the CBAM attention module performs average pooling and maximum pooling on the channel attention vector and the three-scale feature-fused underwater image result, concatenates the average pooling result with the maximum pooling result, and outputs the spatial attention map through a sigmoid activation function;
multiplying the channel attention vector with the three-scale feature-fused underwater image result to obtain a weighted feature map;
broadcasting the spatial attention map along the channel dimension to obtain the broadcast spatial attention map;
and multiplying the weighted feature map by the broadcast spatial attention map element by element to obtain the attention-feature underwater image.
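A compact PyTorch sketch of this attention step is given below. It follows the standard CBAM formulation (a shared MLP over the two pooled channel descriptors, a 7x7 convolution for the spatial map); the reduction ratio r and the kernel size are conventional assumptions, and where the claim describes a concat fusion after the ReLU transformation, this sketch uses the usual summation variant (a concatenation followed by a linear projection would be the literal reading).

    import torch
    import torch.nn as nn

    class CBAMBlock(nn.Module):
        def __init__(self, channels, r=16):
            super().__init__()
            # Shared MLP for the channel attention branch.
            self.mlp = nn.Sequential(
                nn.Linear(channels, channels // r),
                nn.ReLU(inplace=True),
                nn.Linear(channels // r, channels),
            )
            # Spatial attention: two pooled maps -> one attention map.
            self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

        def forward(self, x):  # x: (B, C, H, W), the three-scale fused result
            b, c, _, _ = x.shape
            # Channel attention vector from average- and max-pooled descriptors.
            avg = self.mlp(x.mean(dim=(2, 3)))
            mx = self.mlp(x.amax(dim=(2, 3)))
            ca = torch.sigmoid(avg + mx).view(b, c, 1, 1)
            weighted = x * ca  # weighted feature map
            # Spatial attention map from channel-wise mean and max maps.
            s = torch.cat([weighted.mean(dim=1, keepdim=True),
                           weighted.amax(dim=1, keepdim=True)], dim=1)
            sa = torch.sigmoid(self.spatial(s))  # broadcast over channels
            return weighted * sa  # attention-feature underwater image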
8. The cross-scale fusion adaptive underwater image generative adversarial enhancement method according to claim 7, wherein the step of performing deconvolution processing on the attention-feature underwater image based on the progressive fusion fully convolutional network module to obtain the preliminary enhanced underwater image specifically comprises the following steps:
inputting the attention-feature underwater images into the progressive fusion fully convolutional network module, wherein the attention-feature underwater images comprise a maximum-scale, an intermediate-scale and a minimum-scale attention-feature underwater image, and the progressive fusion fully convolutional network module comprises a first, a second, a third, a fourth and a fifth deconvolution layer;
performing up-sampling on the maximum-scale attention-feature underwater image through the first deconvolution layer to obtain the first deconvolution layer output;
fusing the first deconvolution layer output with the intermediate-scale attention-feature underwater image through the second deconvolution layer to obtain the second deconvolution layer output;
fusing the second deconvolution layer output with the minimum-scale attention-feature underwater image through the third deconvolution layer to obtain the third deconvolution layer output;
performing up-sampling on the third deconvolution layer output through the fourth deconvolution layer to obtain the fourth deconvolution layer output;
and performing up-sampling on the fourth deconvolution layer output through the fifth deconvolution layer to obtain the preliminary enhanced underwater image.
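The five-layer progressive fusion can be sketched as a transposed convolution decoder, as in the PyTorch example below. Here "maximum scale" is read as the deepest (most downsampled) feature map, the skip fusion is done by channel concatenation, and all channel widths and the final sigmoid are illustrative assumptions not fixed by the claim.

    import torch
    import torch.nn as nn

    class ProgressiveFusionDecoder(nn.Module):
        # Assumes f_max, f_mid, f_min at 1/32, 1/16 and 1/8 of the input
        # resolution with c_max, c_mid and c_min channels respectively.
        def __init__(self, c_max=256, c_mid=128, c_min=64):
            super().__init__()
            def up(cin, cout):  # 2x upsampling transposed convolution
                return nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1)
            self.deconv1 = up(c_max, c_mid)          # upsample deepest features
            self.deconv2 = up(c_mid + c_mid, c_min)  # fuse with mid-scale input
            self.deconv3 = up(c_min + c_min, c_min)  # fuse with min-scale input
            self.deconv4 = up(c_min, c_min // 2)
            self.deconv5 = up(c_min // 2, 3)         # back to a 3-channel image

        def forward(self, f_max, f_mid, f_min):
            y1 = self.deconv1(f_max)
            y2 = self.deconv2(torch.cat([y1, f_mid], dim=1))
            y3 = self.deconv3(torch.cat([y2, f_min], dim=1))
            y4 = self.deconv4(y3)
            return torch.sigmoid(self.deconv5(y4))   # preliminary enhanced image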
9. The cross-scale fusion adaptive underwater image generative adversarial enhancement method according to claim 8, wherein the step of inputting the preliminary enhanced underwater image and the ground-truth image into the underwater image adversarial network of the adaptive underwater image generative adversarial network model for optimization specifically comprises the following steps:
inputting the preliminary enhanced underwater image and the ground-truth image into the underwater image adversarial network, wherein the underwater image adversarial network comprises at least one discriminator block, a 4x4 convolution layer, a batch normalization layer, a spectral normalization layer, a LeakyReLU function, a 1x1 convolution layer, an average pooling layer and a network patch layer;
the 4x4 convolution layer, the batch normalization layer, the spectral normalization layer and the LeakyReLU function are used to enhance the training stability of the underwater image adversarial network;
performing local pattern and feature capture on the input preliminary enhanced underwater image and ground-truth image through the discriminator block to obtain the discriminator block output;
adjusting the channel number and feature dimension of the discriminator block output through the 1x1 convolution layer, and outputting the 1x1 convolution result;
reducing the spatial size of the 1x1 convolution result through the average pooling layer, and outputting the average pooling result;
and performing classification evaluation on the average pooling result through the network patch layer, and outputting the enhanced underwater image.
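The discriminator of claim 9 can be sketched as a spectrally normalized patch discriminator. In the PyTorch sketch below, the number of discriminator blocks, the channel widths, and the channel-wise concatenation of the preliminary enhanced and ground-truth images are assumptions; the layer inventory (4x4 convolutions, batch and spectral normalization, LeakyReLU, 1x1 convolution, average pooling, patch-level classification) follows the claim.

    import torch
    import torch.nn as nn
    from torch.nn.utils import spectral_norm

    class PatchDiscriminator(nn.Module):
        def __init__(self, in_ch=6, base=64):  # 6 = enhanced + ground-truth stacked
            super().__init__()
            def block(cin, cout):
                # 4x4 conv with spectral norm, batch norm and LeakyReLU,
                # the stabilizing combination listed in the claim.
                return nn.Sequential(
                    spectral_norm(nn.Conv2d(cin, cout, 4, stride=2, padding=1)),
                    nn.BatchNorm2d(cout),
                    nn.LeakyReLU(0.2, inplace=True),
                )
            self.blocks = nn.Sequential(
                block(in_ch, base), block(base, base * 2), block(base * 2, base * 4))
            self.proj = nn.Conv2d(base * 4, 1, kernel_size=1)  # channel adjustment
            self.pool = nn.AvgPool2d(2)                        # spatial reduction

        def forward(self, enhanced, ground_truth):
            x = torch.cat([enhanced, ground_truth], dim=1)
            x = self.blocks(x)
            return self.pool(self.proj(x))  # patch map of real/fake scores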
Priority Applications (1)

Application Number: CN202310967540.6A
Priority / Filing Date: 2023-08-03
Title: Cross-scale fusion self-adaptive underwater image generation countermeasure enhancement method
Status: Active

Publications (2)

CN116681627A, published 2023-09-01
CN116681627B, granted 2023-11-24

Family ID: 87785903
Country: CN

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110910323A (en) * 2019-11-19 2020-03-24 常州工学院 Underwater image enhancement method based on self-adaptive fractional order multi-scale entropy fusion
CN112070767A (en) * 2020-09-10 2020-12-11 哈尔滨理工大学 Micro-vessel segmentation method in microscopic image based on generating type countermeasure network
CN113139907A (en) * 2021-05-18 2021-07-20 广东奥普特科技股份有限公司 Generation method, system, device and storage medium for visual resolution enhancement
CN113408423A (en) * 2021-06-21 2021-09-17 西安工业大学 Aquatic product target real-time detection method suitable for TX2 embedded platform
CN113689517A (en) * 2021-09-08 2021-11-23 云南大学 Image texture synthesis method and system of multi-scale channel attention network
CN114663297A (en) * 2022-02-21 2022-06-24 南京信息工程大学 Underwater image enhancement method based on multi-scale intensive generation countermeasure network and training method of network model
CN114584675A (en) * 2022-05-06 2022-06-03 中国科学院深圳先进技术研究院 Self-adaptive video enhancement method and device
CN115565056A (en) * 2022-09-27 2023-01-03 中国农业大学 Underwater image enhancement method and system based on condition generation countermeasure network
CN115965544A (en) * 2022-11-29 2023-04-14 国网安徽省电力有限公司淮南供电公司 Image enhancement method and system for self-adaptive brightness adjustment
CN116309107A (en) * 2022-12-30 2023-06-23 合肥学院 Underwater image enhancement method based on Transformer and generated type countermeasure network
CN116402709A (en) * 2023-03-22 2023-07-07 大连海事大学 Image enhancement method for generating countermeasure network based on underwater attention
CN116029947A (en) * 2023-03-30 2023-04-28 之江实验室 Complex optical image enhancement method, device and medium for severe environment

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Codruta O. Ancuti et al.: "Color Balance and Fusion for Underwater Image Enhancement", IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 379-393, XP055692952, DOI: 10.1109/TIP.2017.2759252 *
Yuanhao Zhong et al.: "SCAUIE-Net: Underwater Image Enhancement Method Based on Spatial and Channel Attention", IEEE Access, pp. 72172-72185 *
Zhu Wenbo et al.: "TDNN-LSTM Model Based on Attention Mechanism and Its Application", Technical Acoustics, vol. 40, no. 4, pp. 508-514 *
Li Guanghao et al.: "Underwater Non-uniform Illumination Image Enhancement Algorithm Based on Image Fusion", Computer Simulation, vol. 40, no. 4, pp. 330-335 *
Zhao Xingyun et al.: "Low-Light Image Enhancement Fusing Attention Mechanism and Contextual Information", Journal of Image and Graphics, vol. 27, no. 5, pp. 1565-1576 *
Chen Haixiu et al.: "Underwater Image Enhancement Method Based on Improved Generative Adversarial Network", China Measurement & Test, pp. 1-11 *

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant