CN114612347B - Multi-module cascade underwater image enhancement method - Google Patents
- Publication number: CN114612347B
- Application number: CN202210506856.0A
- Authority: CN (China)
- Prior art keywords: network, image, channel, underwater, module
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
- G06T5/90—Dynamic range modification of images or parts thereof
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
- G06T2207/20—Special algorithmic details
- G06T2207/20004—Adaptive image processing
Abstract
The invention provides a multi-module cascaded underwater image enhancement method, belonging to the technical field of computer vision. The method comprises the following steps: cascading an existing air image enhancement network with a color correction network to construct a multi-module cascade enhancement network, wherein the air image enhancement network addresses the types of degradation in an underwater image that also occur in air images, and the color correction network corrects the color cast in the underwater image; acquiring a paired underwater image dataset, and training the multi-module cascade enhancement network with the acquired paired underwater image dataset; and acquiring an underwater image to be enhanced and feeding it into the trained multi-module cascade enhancement network to obtain the enhanced underwater image. The method can address the different types of degradation that arise in underwater imaging.
Description
Technical Field
The invention relates to the technical field of computer vision, and in particular to a multi-module cascaded underwater image enhancement method.
Background
In recent years, underwater image enhancement, a major problem in image enhancement research, has received increasing attention from researchers. As an important carrier of ocean information, underwater images play a vital role in exploring the ocean environment and in reasonably developing and utilizing ocean resources. However, owing to the complexity of the underwater imaging environment, captured underwater images are often accompanied by degradations such as blurring, low contrast, color distortion, and poor visibility, which seriously impair the performance of underwater-vision-based tasks. Improving the quality of underwater images is therefore an urgent need.
Over the past decades, many methods have been proposed to improve the quality of underwater images; they can be broadly divided into non-learning methods and deep-learning-based methods. Among the non-learning methods, one line applies classical air image enhancement methods or their variants (such as histogram equalization and white balance) directly to underwater images; the other consists of algorithms specially designed around the imaging characteristics of underwater images or combined with a physical underwater imaging model, such as Retinex-based, fusion-based, and GDCP-based methods. Although these methods improve the quality of underwater images, they are sensitive to the type of degradation and generalize poorly, owing to the uncertainty in estimating the physical model parameters and the inaccuracy of the prior knowledge. With the development of deep learning, researchers have proposed a series of deep-learning-based underwater image enhancement methods, such as Water-Net, UIEC^2-Net, and Ucolor, which directly model the mapping from degraded images to clear images, alleviate the ill-posedness of estimating model parameters, and greatly improve the quality of underwater images. However, these methods do not account for the wavelength-dependent attenuation differences among the R, G, and B channels, so color cast remains in the enhanced images; they also remain limited by the degradation type of the underwater image and cannot address the multiple degradation problems present in an underwater image at the same time. Solving the various degradation problems that coexist in underwater images with a single network remains a significant challenge.
Disclosure of Invention
The embodiment of the invention provides a multi-module cascaded underwater image enhancement method, which can solve the degradation problems of different types in an underwater image. The technical scheme is as follows:
the embodiment of the invention provides a multi-module cascaded underwater image enhancement method, which comprises the following steps:
cascading an existing air image enhancement network and a color correction network to construct a multi-module cascade enhancement network, wherein the air image enhancement network is used for solving the degradation problem similar to an air image in an underwater image, and the color correction network is used for correcting color cast in the underwater image;
acquiring paired underwater image data sets, and training the multi-module cascade enhancement network by using the acquired paired underwater image data sets;
and acquiring an underwater image to be enhanced, and sending the underwater image to be enhanced into the trained multi-module cascade enhancement network to obtain the enhanced underwater image.
Further, the step of cascading the existing air image enhancement network with the color correction network to construct a multi-module cascade enhancement network includes:
selecting an existing air image enhancement network as a first-stage enhancement network E1;
the color correction network is taken as a second-stage enhancement network E2;
and connecting the E1 and the E2 in a residual error mode to obtain the multi-module cascade enhanced network E.
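The residual cascade described above can be sketched numerically as follows. The stand-in stage functions `e1` and `e2` and the 0.1 scaling are illustrative placeholders for the real networks, not the patent's implementation:

```python
import numpy as np

def cascade_enhance(image, e1, e2):
    """Multi-module cascade E: the first-stage output is added back to
    the input (residual connection) before entering the second stage."""
    residual_in = image + e1(image)  # residual connection between E1 and E2
    return e2(residual_in)

# Stand-in stages (placeholders for the real enhancement networks).
e1 = lambda x: 0.1 * x               # pretend air-image enhancement
e2 = lambda x: np.clip(x, 0.0, 1.0)  # pretend color correction

img = np.full((4, 4, 3), 0.5)
out = cascade_enhance(img, e1, e2)   # each value: clip(0.5 + 0.05) = 0.55
```

Because E1 contributes a residual rather than replacing the input, the second stage always sees the original image content plus the first stage's correction.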
Further, the processing steps of the color correction network comprise:

A1, connecting the output image I_E1 of the first-stage enhancement network E1 with the input image I through a residual structure to obtain the input image I_in = I + I_E1 of the second-stage enhancement network E2, and then extracting its red channel image I_R, green channel image I_G and blue channel image I_B, wherein I_R, I_G, I_B ∈ R^(H×W), H and W respectively represent the height and width of the image, and R is the dimension symbol;

A2, carrying out a convolution operation on each of the three channel images obtained in step A1 to obtain the red channel feature map F_R, green channel feature map F_G and blue channel feature map F_B:

F_R = Conv_R(I_R), F_G = Conv_G(I_G), F_B = Conv_B(I_B)

wherein Conv_R(·), Conv_G(·) and Conv_B(·) all denote convolution operations with convolution layers;

A3, using the green channel feature map to compensate the information of the red channel feature map and the blue channel feature map respectively, obtaining the compensated red channel feature map F̂_R, green channel feature map F̂_G and blue channel feature map F̂_B:

F̂_R = F_R + α_R·F_G, F̂_B = F_B + α_B·F_G, F̂_G = Conv([F̂_R, F_G, F̂_B])

wherein α_R and α_B represent compensation parameters, and [·] represents the channel-wise splicing operation;

A4, sending the compensated feature maps obtained in step A3 into a channel-space attention module to further extract and refine the features, obtaining the refined red channel feature map F̃_R, green channel feature map F̃_G and blue channel feature map F̃_B:

F̃_K = CSA(F̂_K), K ∈ {R, G, B}

wherein CSA(·) represents the channel-space attention module;

A5, for the feature maps obtained in step A4, using the green channel feature map to compensate the information of the other two channel feature maps, obtaining the color-corrected feature maps F′_R, F′_G and F′_B:

F′_R = F̃_R + β_R·F̃_G, F′_B = F̃_B + β_B·F̃_G, F′_G = Conv([F′_R, F̃_G, F′_B])

wherein β_R and β_B represent compensation parameters, and F′_G is the color-corrected green channel feature map;

A6, changing the color-corrected feature maps F′_R, F′_G and F′_B into single-channel feature maps respectively, and splicing them by channel to obtain the color feature map F_C:

F_C = [Conv(F′_R), Conv(F′_G), Conv(F′_B)]

A7, sending the color feature map F_C into a convolution module to reconstruct a clear underwater image, i.e., the final enhanced underwater image Î:

Î = CB(F_C)

wherein Conv(·) represents a convolution operation and CB(·) represents a convolution block.
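A minimal numerical sketch of the green-channel compensation in steps A3 and A5 follows. The additive blend and the scalar compensation parameters `alpha_r` / `alpha_b` are assumptions for illustration; in the patent the compensation parameters are part of the learned network and the feature maps come from convolution layers:

```python
import numpy as np

def compensate_with_green(f_r, f_g, f_b, alpha_r=0.5, alpha_b=0.5):
    """Blend the well-preserved green features into the attenuated red
    and blue features (illustrative additive form with scalar alphas)."""
    f_r_hat = f_r + alpha_r * f_g
    f_b_hat = f_b + alpha_b * f_g
    return f_r_hat, f_g, f_b_hat

f_r = np.zeros((8, 8))  # heavily attenuated red features
f_g = np.ones((8, 8))   # less attenuated green features
f_b = np.zeros((8, 8))
f_r_hat, f_g_hat, f_b_hat = compensate_with_green(f_r, f_g, f_b)
```

The effect is that information from the least-attenuated channel is injected into the channels that lost the most, which is exactly the rationale stated for the color correction network.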
Further, the processing steps of the channel-space attention module comprise:

A41, summing the compensated feature maps F̂_R, F̂_G and F̂_B obtained in step A3 element by element to obtain the feature map F_0:

F_0 = F̂_R ⊕ F̂_G ⊕ F̂_B

wherein ⊕ represents element-wise summation;

A42, sending the feature map F_0 obtained in step A41 into a channel attention branch CA_branch and a spatial attention branch SA_branch respectively to obtain a channel feature descriptor D_C and a spatial feature descriptor D_S, then multiplying the feature map F_0 element by element with D_C and D_S respectively to obtain the outputs F_CA and F_SA of CA_branch and SA_branch:

F_CA = F_0 ⊗ D_C, F_SA = F_0 ⊗ D_S

wherein ⊗ represents element-wise multiplication;

A43, splicing the outputs of CA_branch and SA_branch in step A42 by channel and processing the result with a convolution operation to obtain the final output feature map F_out:

F_out = Conv([F_CA, F_SA])
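The channel-space attention steps can be sketched as below. Fixed average pooling stands in for the learned descriptor branches, and a plain average stands in for the channel-wise concatenation plus fusing convolution; both substitutions are assumptions of this sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_space_attention(f):
    """f: feature map of shape (C, H, W).
    CA branch: per-channel descriptor from global average pooling.
    SA branch: per-pixel descriptor from a channel-wise mean.
    Each descriptor gates the input by element-wise multiplication."""
    c, h, w = f.shape
    d_c = sigmoid(f.mean(axis=(1, 2))).reshape(c, 1, 1)  # channel descriptor
    d_s = sigmoid(f.mean(axis=0)).reshape(1, h, w)       # spatial descriptor
    f_ca = f * d_c  # broadcast over H and W
    f_sa = f * d_s  # broadcast over C
    # The patent concatenates f_ca and f_sa channel-wise and fuses them
    # with a convolution; a simple average stands in for that fusion here.
    return 0.5 * (f_ca + f_sa)

out = channel_space_attention(np.ones((3, 4, 4)))
```

The two branches weight the same input along complementary axes: the channel descriptor rescales whole feature maps, while the spatial descriptor rescales individual pixel positions.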
Further, training the multi-module cascade enhancement network comprises:

determining the loss function of the multi-module cascade enhancement network E:

L = L_E1 + λ·L_per

wherein L_E1 represents the loss function originally used by the first-stage enhancement network E1, L_per represents the perceptual loss function, and λ represents the weight of the perceptual loss function L_per;

determining the initial learning rates of the multi-module cascade enhancement network E, wherein the initial learning rate of the first-stage enhancement network E1 is at least one order of magnitude smaller than the initial learning rate set in the original air image enhancement network, and the initial learning rate of the second-stage enhancement network E2 is the initial learning rate set in the original air image enhancement network;

and training the multi-module cascade enhancement network E with the acquired paired underwater image dataset.
The technical scheme provided by the embodiment of the invention has at least the following beneficial effects:
1) Considering the degradation of different underwater scenes, the complicated underwater degradation problem is decomposed into different sub-problems, and the different types of degradation in underwater imaging are handled by cascading different air image enhancement networks.
2) To address the difference in attenuation among the R, G and B channels, the color correction network uses the G channel, whose information is attenuated less, to adaptively compensate the R and B channels, whose information is attenuated more severely, thereby correcting the color of the underwater image.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flow chart of an underwater image enhancement method of multi-module cascade connection according to an embodiment of the present invention;
fig. 2 is a schematic view of a workflow of a multi-module cascade enhanced network according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a color correction network according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an enhanced underwater image according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Unlike for underwater images, deep-learning-based enhancement algorithms designed for air images are relatively mature (for example, image defogging algorithms and low-illumination image enhancement algorithms). On this basis, in order to make full use of existing research results, reduce the difficulty of information processing in the network, and solve the various coexisting degradation problems in underwater images, the embodiment of the invention provides an end-to-end multi-module cascaded underwater image enhancement method built on a multi-module cascade enhancement network.
As shown in fig. 1 and fig. 2, an embodiment of the present invention provides an underwater image enhancement method with multiple modules cascaded, including:
s101, cascading an existing air image enhancement network and a color correction network to construct a multi-module cascade enhancement network, wherein the air image enhancement network is used for solving the degradation problem similar to an air image in an underwater image, and the color correction network is used for correcting color cast in the underwater image; the method specifically comprises the following steps:
selecting an existing air image enhancement network as a first-stage enhancement network E1, wherein candidate air image enhancement networks include the defogging network GridDehazeNet, the low-illumination enhancement network MIRNet, and the like, with their network weight parameters preloaded;
the color correction network is taken as a second-stage enhancement network E2; wherein, the color correction network is proposed in this embodiment;
and connecting the E1 and the E2 in a residual error mode to obtain the multi-module cascade enhanced network E.
In this embodiment, the multi-module cascade enhancement network E includes two parts, the former is the existing air image enhancement network; the latter is a color correction network designed in consideration of the difference of R, G, B channel attenuation, in which a G channel with relatively small information attenuation is used to adaptively compensate an R channel and a B channel with relatively serious information attenuation, thereby correcting the color of the underwater image.
As shown in fig. 3, the processing steps of the color correction network include:

A1, connecting the output image I_E1 of the first-stage enhancement network E1 with the input image I through a residual structure to obtain the input image I_in = I + I_E1 of the second-stage enhancement network E2, and then extracting its red channel image I_R, green channel image I_G and blue channel image I_B, wherein I_R, I_G, I_B ∈ R^(H×W), H and W respectively represent the height and width of the image, and R is the dimension symbol;

A2, carrying out a convolution operation on each of the three channel images obtained in step A1 to obtain the red channel feature map F_R, green channel feature map F_G and blue channel feature map F_B:

F_R = Conv_R(I_R), F_G = Conv_G(I_G), F_B = Conv_B(I_B)

wherein Conv_R(·), Conv_G(·) and Conv_B(·) all denote convolution operations with convolution layers;

A3, using the green channel feature map to compensate the information of the red channel feature map and the blue channel feature map respectively, obtaining the compensated red channel feature map F̂_R, green channel feature map F̂_G and blue channel feature map F̂_B:

F̂_R = F_R + α_R·F_G, F̂_B = F_B + α_B·F_G, F̂_G = Conv([F̂_R, F_G, F̂_B])

wherein α_R and α_B represent compensation parameters, and [·] represents the channel-wise splicing operation;

A4, sending the compensated feature maps obtained in step A3 into a channel-space attention module to further extract and refine the features, obtaining the refined red channel feature map F̃_R, green channel feature map F̃_G and blue channel feature map F̃_B:

F̃_K = CSA(F̂_K), K ∈ {R, G, B}

wherein CSA(·) represents the channel-space attention module, whose processing steps include:

A41, summing the compensated feature maps F̂_R, F̂_G and F̂_B obtained in step A3 element by element to obtain the feature map F_0 = F̂_R ⊕ F̂_G ⊕ F̂_B, wherein ⊕ represents element-wise summation;

A42, sending the feature map F_0 obtained in step A41 into a channel attention branch CA_branch and a spatial attention branch SA_branch respectively to obtain a channel feature descriptor D_C and a spatial feature descriptor D_S, then multiplying the feature map F_0 element by element with D_C and D_S respectively to obtain the outputs F_CA = F_0 ⊗ D_C and F_SA = F_0 ⊗ D_S of CA_branch and SA_branch, wherein ⊗ represents element-wise multiplication;

A43, splicing the outputs of CA_branch and SA_branch in step A42 by channel and processing the result with a convolution operation to obtain the final output feature map F_out = Conv([F_CA, F_SA]);

A5, for the feature maps obtained in step A4, using the green channel feature map to compensate the information of the other two channel feature maps, obtaining the color-corrected feature maps F′_R, F′_G and F′_B:

F′_R = F̃_R + β_R·F̃_G, F′_B = F̃_B + β_B·F̃_G, F′_G = Conv([F′_R, F̃_G, F′_B])

wherein β_R and β_B represent compensation parameters, and F′_G is the color-corrected green channel feature map;

A6, changing the color-corrected feature maps F′_R, F′_G and F′_B into single-channel feature maps respectively, and splicing them by channel to obtain the color feature map F_C:

F_C = [Conv(F′_R), Conv(F′_G), Conv(F′_B)]

A7, sending the color feature map F_C into a convolution module to reconstruct a clear underwater image, i.e., the final enhanced underwater image Î:

Î = CB(F_C)

wherein Conv(·) represents a convolution operation and CB(·) represents a convolution block.
S102, acquiring paired underwater image data sets, and training the multi-module cascade enhancement network by using the acquired paired underwater image data sets, specifically including the following steps:
b1, acquiring paired underwater image data sets; wherein each pair of underwater images comprises: a degraded underwater image and its corresponding reference image;
in this embodiment, a paired underwater image dataset for training the multi-module cascade enhancement network E is constructed from the existing disclosed underwater dataset.
B2, determining the loss function of the multi-module cascade enhancement network E:

L = L_E1 + λ·L_per

wherein L_E1 represents the loss function originally used by the first-stage enhancement network E1, and λ represents the weight of the perceptual loss function L_per and is set to 0.04; the perceptual loss function L_per is expressed as:

L_per = Σ_j (1 / (C_j·H_j·W_j)) · ‖φ_j(Î) − φ_j(I*)‖

wherein C_j, H_j and W_j respectively represent the channel number, height and width of the feature map, Î and I* respectively represent the enhanced underwater image and the corresponding reference image, and φ_j(·) represents the feature map of the image at a given layer of VGG-19; in the embodiment of the invention, layers Conv1_2, Conv2_2 and Conv3_3 of VGG-19 are selected for feature extraction.

In this embodiment, the loss function consists of two parts: one part is the loss function L_E1 originally used by E1; the other part is the perceptual loss function L_per, which pushes the image generated by the multi-module cascade enhancement network E as close as possible to the reference image in feature space.
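The combined objective can be sketched as below with the stated weight of 0.04. An L1 distance between feature maps is assumed here (the patent does not reproduce the norm), and small arrays stand in for the VGG-19 features:

```python
import numpy as np

def perceptual_loss(feats_out, feats_ref):
    """Feature distance averaged over C*H*W per layer and summed over
    the selected layers; an L1 form is assumed for illustration."""
    total = 0.0
    for fo, fr in zip(feats_out, feats_ref):
        c, h, w = fo.shape
        total += np.abs(fo - fr).sum() / (c * h * w)
    return total

def total_loss(l_e1, feats_out, feats_ref, lam=0.04):
    """L = L_E1 + lam * L_per, with lam = 0.04 per the embodiment."""
    return l_e1 + lam * perceptual_loss(feats_out, feats_ref)

# Stand-ins for two layers of VGG-style features of output and reference.
feats_out = [np.ones((2, 4, 4)), np.ones((4, 2, 2))]
feats_ref = [np.zeros((2, 4, 4)), np.zeros((4, 2, 2))]
loss = total_loss(0.5, feats_out, feats_ref)  # 0.5 + 0.04 * (1 + 1) = 0.58
```

Keeping E1's original loss preserves the behavior the pretrained stage was tuned for, while the small perceptual term nudges the cascade's output toward the reference in feature space.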
B3, determining the initial learning rates of the multi-module cascade enhancement network E, wherein the initial learning rate of the first-stage enhancement network E1 is at least one order of magnitude smaller than the initial learning rate set in the original air image enhancement network, and the initial learning rate of the second-stage enhancement network E2 is the initial learning rate set in the original air image enhancement network;
and B4, training the multi-module cascade enhancement network E by using the acquired paired underwater image data sets.
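The learning-rate policy of step B3 can be expressed as follows; the factor 0.1 is merely one choice satisfying the "at least one order of magnitude smaller" condition and is an assumption of this sketch:

```python
def stage_learning_rates(base_lr, e1_factor=0.1):
    """Step B3: the pretrained first stage E1 is fine-tuned at a rate at
    least an order of magnitude below its original setting, while the
    second stage E2 keeps the original initial learning rate."""
    assert e1_factor <= 0.1, "E1 must train at least 10x slower than E2"
    return {"E1": base_lr * e1_factor, "E2": base_lr}

lrs = stage_learning_rates(1e-4)
```

The asymmetry reflects the design: E1 arrives pretrained on air images and only needs gentle adaptation, whereas the color correction network E2 is trained from scratch.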
S103, acquiring an underwater image to be enhanced, and sending the underwater image to be enhanced into a trained multi-module cascade enhancement network to obtain the enhanced underwater image, wherein the method specifically comprises the following steps:
in this embodiment, an underwater image to be enhanced is acquiredInputting the underwater image to be enhanced into the trained multi-module cascade enhancement network to obtain the final enhanced underwater imageFig. 4 shows a schematic diagram of the enhanced underwater image. Therefore, the research result of the existing image enhancement is fully utilized, the proposed color correction network is cascaded with different air image enhancement networks, and different underwater image enhancement tasks are realized, such as the defogging of an underwater image, the enhancement of an underwater low-illumination image and the color correction of the underwater image are realized; meanwhile, the method has stronger generalization capability, can be used for various underwater images with different degradation types, has strong universality and obtains more ideal enhancement effect.
The multi-module cascade underwater image enhancement method provided by the embodiment of the invention at least has the following beneficial effects:
1) Considering the degradation of different underwater scenes, the complicated underwater degradation problem is decomposed into different sub-problems, and the different types of degradation in underwater imaging are handled by cascading different air image enhancement networks.
2) To address the difference in attenuation among the R, G and B channels, the color correction network uses the G channel, whose information is attenuated less, to adaptively compensate the R and B channels, whose information is attenuated more severely, thereby correcting the color of the underwater image.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (3)
1. An underwater image enhancement method based on multi-module cascade is characterized by comprising the following steps:
cascading an existing air image enhancement network and a color correction network to construct a multi-module cascade enhancement network, wherein the air image enhancement network is used for solving the degradation problem similar to an air image in an underwater image, and the color correction network is used for correcting color cast in the underwater image;
wherein, the cascade connection of the existing air image enhancement network and the color correction network to construct a multi-module cascade enhancement network comprises:
selecting an existing air image enhancement network as a first-stage enhancement network E1;
taking the color correction network as a second-stage enhancement network E2;
connecting the E1 and the E2 in a residual error mode to obtain a multi-module cascade enhanced network E;
acquiring paired underwater image data sets, and training the multi-module cascade enhancement network by using the acquired paired underwater image data sets;
acquiring an underwater image to be enhanced, and sending the underwater image to be enhanced into a trained multi-module cascade enhancement network to obtain an enhanced underwater image;
wherein the processing steps of the color correction network comprise:

A1, connecting the output image I_E1 of the first-stage enhancement network E1 with the input image I through a residual structure to obtain the input image I_in = I + I_E1 of the second-stage enhancement network E2, and then extracting its red channel image I_R, green channel image I_G and blue channel image I_B, wherein I_R, I_G, I_B ∈ R^(H×W), H and W respectively represent the height and width of the image, and R is the dimension symbol;

A2, carrying out a convolution operation on each of the three channel images obtained in step A1 to obtain the red channel feature map F_R, green channel feature map F_G and blue channel feature map F_B:

F_R = Conv_R(I_R), F_G = Conv_G(I_G), F_B = Conv_B(I_B)

wherein Conv_R(·), Conv_G(·) and Conv_B(·) all denote convolution operations with convolution layers;

A3, using the green channel feature map to compensate the information of the red channel feature map and the blue channel feature map respectively, obtaining the compensated red channel feature map F̂_R and blue channel feature map F̂_B, and carrying out convolution and splicing operations on the green channel feature map to obtain the compensated green channel feature map F̂_G:

F̂_R = F_R + α_R·F_G, F̂_B = F_B + α_B·F_G, F̂_G = Conv([F̂_R, F_G, F̂_B])

wherein α_R and α_B represent compensation parameters, and [·] represents the channel-wise splicing operation;

A4, sending the compensated feature maps obtained in step A3 into a channel-space attention module to further extract and refine the features, obtaining the refined red channel feature map F̃_R, green channel feature map F̃_G and blue channel feature map F̃_B:

F̃_K = CSA(F̂_K), K ∈ {R, G, B}

wherein CSA(·) represents the channel-space attention module;

A5, for the feature maps obtained in step A4, using the green channel feature map to compensate the information of the other two channel feature maps, obtaining the color-corrected feature maps F′_R, F′_G and F′_B:

F′_R = F̃_R + β_R·F̃_G, F′_B = F̃_B + β_B·F̃_G, F′_G = Conv([F′_R, F̃_G, F′_B])

wherein β_R and β_B all represent compensation parameters, and F′_G is the feature map of the color-corrected green channel;

A6, changing the color-corrected feature maps F′_R, F′_G and F′_B into single-channel feature maps respectively, and splicing them by channel to obtain the color feature map F_C:

F_C = [Conv(F′_R), Conv(F′_G), Conv(F′_B)]

A7, sending the color feature map F_C into a convolution module to reconstruct a clear underwater image, i.e., the final enhanced underwater image Î:

Î = CB(F_C)

wherein Conv(·) represents a convolution operation and CB(·) represents a convolution block.
2. The multi-module cascaded underwater image enhancement method according to claim 1, wherein the processing steps of the channel-space attention module comprise:

A41, summing the compensated feature maps F̂_R, F̂_G and F̂_B obtained in step A3 element by element to obtain the feature map F_0:

F_0 = F̂_R ⊕ F̂_G ⊕ F̂_B

wherein ⊕ represents element-wise summation;

A42, sending the feature map F_0 obtained in step A41 into a channel attention branch CA_branch and a spatial attention branch SA_branch respectively to obtain a channel feature descriptor D_C and a spatial feature descriptor D_S, then multiplying the feature map F_0 element by element with D_C and D_S respectively to obtain the outputs F_CA and F_SA of CA_branch and SA_branch:

F_CA = F_0 ⊗ D_C, F_SA = F_0 ⊗ D_S

wherein ⊗ represents element-wise multiplication;

A43, splicing the outputs of CA_branch and SA_branch in step A42 by channel and processing the result with a convolution operation to obtain the final output feature map F_out:

F_out = Conv([F_CA, F_SA])
3. The method of claim 1, wherein the training of the multi-module cascade enhancement network comprises:

determining the loss function of the multi-module cascade enhancement network E:

L = L_E1 + λ·L_per

wherein L_E1 represents the loss function originally used by the first-stage enhancement network E1, L_per represents the perceptual loss function, and λ represents the weight of the perceptual loss function L_per;

determining the initial learning rates of the multi-module cascade enhancement network E, wherein the initial learning rate of the first-stage enhancement network E1 is at least one order of magnitude smaller than the initial learning rate set in the original air image enhancement network, and the initial learning rate of the second-stage enhancement network E2 is the initial learning rate set in the original air image enhancement network;

and training the multi-module cascade enhancement network E with the acquired paired underwater image dataset.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210506856.0A CN114612347B (en) | 2022-05-11 | 2022-05-11 | Multi-module cascade underwater image enhancement method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114612347A (en) | 2022-06-10 |
CN114612347B (en) | 2022-08-16 |
Family
ID=81870440
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210506856.0A | Multi-module cascade underwater image enhancement method | 2022-05-11 | 2022-05-11 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114612347B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115170443B (en) * | 2022-09-08 | 2023-01-13 | 荣耀终端有限公司 | Image processing method, shooting method and electronic equipment |
CN116797471A (en) * | 2022-12-20 | 2023-09-22 | 慧之安信息技术股份有限公司 | Underwater target image detection method and system based on deep learning |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2020100175A4 (en) * | 2020-02-04 | 2020-04-09 | Hu, Wei MR | Retinex-based progressive image enhancement method |
CN111415304A (en) * | 2020-02-26 | 2020-07-14 | 中国农业大学 | Underwater vision enhancement method and device based on cascade deep network |
CN112508812A (en) * | 2020-12-01 | 2021-03-16 | 厦门美图之家科技有限公司 | Image color cast correction method, model training method, device and equipment |
CN113034391B (en) * | 2021-03-19 | 2023-08-08 | 西安电子科技大学 | Multi-mode fusion underwater image enhancement method, system and application |
CN113256528B (en) * | 2021-06-03 | 2022-05-27 | 中国人民解放军国防科技大学 | Low-illumination video enhancement method based on multi-scale cascade depth residual error network |
CN113920021A (en) * | 2021-09-27 | 2022-01-11 | 海南大学 | Underwater image enhancement method based on two-step residual error network |
Also Published As
Publication number | Publication date |
---|---|
CN114612347A (en) | 2022-06-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114612347B (en) | Multi-module cascade underwater image enhancement method | |
CN112233038B (en) | True image denoising method based on multi-scale fusion and edge enhancement | |
CN111898701B (en) | Model training, frame image generation and frame insertion methods, devices, equipment and media | |
WO2022105638A1 (en) | Image degradation processing method and apparatus, and storage medium and electronic device | |
WO2018188260A1 (en) | Image display control method and device, and display screen control system | |
CN111127336A (en) | Image signal processing method based on self-adaptive selection module | |
CN110189260B (en) | Image noise reduction method based on multi-scale parallel gated neural network | |
CN101466046A (en) | Method and apparatus for removing color noise of image signal | |
CN111127331A (en) | Image denoising method based on pixel-level global noise estimation coding and decoding network | |
CN116416561A (en) | Video image processing method and device | |
EP3451294B1 (en) | Luminance-normalised colour spaces | |
CN112508812A (en) | Image color cast correction method, model training method, device and equipment | |
US11823352B2 (en) | Processing video frames via convolutional neural network using previous frame statistics | |
CN110717864B (en) | Image enhancement method, device, terminal equipment and computer readable medium | |
CN113781318A (en) | Image color mapping method and device, terminal equipment and storage medium | |
CN110047038B (en) | Single-image super-resolution reconstruction method based on hierarchical progressive network | |
CN103516959B (en) | Image processing method and equipment | |
US20210044303A1 (en) | Neural network acceleration device and method | |
CN114862711B (en) | Low-illumination image enhancement and denoising method based on dual complementary prior constraints | |
CN116433525A (en) | Underwater image defogging method based on edge detection function variation model | |
TWI736112B (en) | Pixel value calibrationmethod and pixel value calibration device | |
CN115841523A (en) | Double-branch HDR video reconstruction algorithm based on Raw domain | |
CN111754412A (en) | Method and device for constructing data pairs and terminal equipment | |
CN104284207B (en) | Information transmission method based on video image | |
CN107392871A (en) | Image defogging method, device, mobile terminal and computer-readable recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||