CN116563145B - Underwater image enhancement method and system based on color feature fusion


Info

Publication number: CN116563145B (granted); earlier published as CN116563145A
Application number: CN202310463009.5A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: features, information, channel, channels, image
Inventors: 白慧慧 (Bai Huihui), 周洋 (Zhou Yang)
Assignee (original and current): Beijing Jiaotong University
Application filed by Beijing Jiaotong University
Legal status: Active; application granted (the legal status is an assumption and not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)

Classifications

    • G06T5/90 — Image enhancement or restoration: dynamic range modification of images or parts thereof
    • G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/90 — Image analysis: determination of colour characteristics
    • G06T2207/10024 — Image acquisition modality: color image
    • G06T2207/20221 — Image combination: image fusion; image merging
    • Y02A90/30 — Technologies for adaptation to climate change: assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an underwater image enhancement method and system based on color feature fusion, belonging to the technical field of underwater image processing. Different dropout rates are set for the information of different color channels, channel residuals are generated, and the feature information of the three channels is fused; deformable convolution then extracts more diversified feature information, and a residual connection fuses the features of ordinary and deformable convolution. Introducing the information of each color channel of the original input image makes model training more stable. By allocating receptive fields of different sizes to the information of different color channels, the invention better learns the global and local features of the image; a channel-spatial attention mechanism further refines the information of each channel of the underwater image; and the flexibility of deformable convolution is exploited to capture richer feature information, preventing texture damage, over-smoothing artifacts and the like in the enhancement result and improving the robustness and reliability of the algorithm.

Description

Underwater image enhancement method and system based on color feature fusion
Technical Field
The invention relates to the technical field of underwater image processing, in particular to an underwater image enhancement method and an underwater image enhancement system based on color feature fusion.
Background
Affected by the special physical and chemical environment of water, images captured directly underwater often suffer from blurred details, low contrast, color distortion and the like. These problems greatly hinder ocean exploration work such as marine archaeology, marine biology research, and unmanned underwater navigation.
Conventional image processing methods can achieve some enhancement of underwater images, but they often require estimating a large number of parameters and rely heavily on prior assumptions. In recent years, with the continuous development of deep learning, underwater image enhancement based on deep learning has also been widely studied. However, many underwater image enhancement models simply apply generic deep network structures to the underwater scene, ignoring the uniqueness of underwater imagery. This leaves the robustness and generalization ability of such models extremely limited, and satisfactory results are not obtained.
Disclosure of Invention
The invention aims to provide an underwater image enhancement method and system based on color feature fusion that exploit the flexibility of deformable convolution to capture richer feature information, thereby preventing texture damage, over-smoothing artifacts and the like in the enhancement result and improving the robustness and reliability of the algorithm, so as to solve at least one technical problem in the background art.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
in one aspect, the invention provides an underwater image enhancement method based on color feature fusion, which comprises the following steps:
acquiring an image to be enhanced;
processing the acquired image to be enhanced by using the trained model to obtain an image enhancement result; wherein the training of the trained model comprises:
extracting detail features and global features of the underwater image, and setting channel-specific dropout rates for the information of different channels;
generating residuals for each channel from the extracted detail and global features, and fusing the feature information of the three channels according to these residuals;
extracting more diversified feature information from the obtained fused features with deformable convolution, and fusing the ordinary-convolution and deformable-convolution features through a residual connection;
and introducing the information of each color channel of the original input image through a global residual connection, so that model training is more stable.
Preferably, extracting detail features and global features of the underwater image and setting channel-specific dropout rates for the information of different channels comprises: splitting the input image into the information of three color channels to be processed separately, and allocating a different receptive field to each channel; replacing the BN layer in each convolution block with a dropout layer, with channel-specific dropout strategies of 0.4, 0.3 and 0.2 for the blue, green and red channels respectively; and finally using PReLU as the activation layer, adaptively learning the parameters of the rectified linear units, and fusing the features of the channels.
Preferably, the red channel uses a smaller 3×3 convolution kernel to extract features, while larger 5×5 and 7×7 convolution kernels extract the green and blue channel features, respectively.
Preferably, the obtained fused features are processed with a multi-path strategy: three convolution layers with receptive fields of different sizes, followed by a dropout layer with rate 0.1 and a PReLU, extract the detail and global features of the underwater image to generate corresponding feature residuals, and a CBAM module processes the residual information of each channel.
Preferably, deformable convolution is introduced, and its capability to characterize irregular features is exploited to obtain more diversified feature information; meanwhile, a residual connection fuses the features of ordinary and deformable convolution.
Preferably, the information of each channel of the original image is introduced through a residual connection; split-path feature processing increases the effectiveness of the residual connection, the original channel features are combined with the corresponding channel features, and the three paths of features are fused, their dimensions adjusted, and the final result output.
In a second aspect, the present invention provides an underwater image enhancement system based on color feature fusion, comprising:
the acquisition module is used for acquiring the image to be enhanced;
the processing module is used for processing the acquired image to be enhanced by using the trained model to obtain an image enhancement result; wherein the training of the trained model comprises:
extracting detail features and global features of the underwater image, and setting channel-specific dropout rates for the information of different channels;
generating residuals for each channel from the extracted detail and global features, and fusing the feature information of the three channels according to these residuals;
extracting more diversified feature information from the obtained fused features with deformable convolution, and fusing the ordinary-convolution and deformable-convolution features through a residual connection;
and introducing the information of each color channel of the original input image through a global residual connection, so that model training is more stable.
In a third aspect, the present invention provides a non-transitory computer readable storage medium for storing computer instructions which, when executed by a processor, implement an underwater image enhancement method based on color feature fusion as described above.
In a fourth aspect, the present invention provides a computer program product comprising a computer program for implementing an underwater image enhancement method based on color feature fusion as described above when run on one or more processors.
In a fifth aspect, the present invention provides an electronic device, comprising: a processor, a memory, and a computer program; wherein the processor is connected to the memory, and the computer program is stored in the memory, and when the electronic device is running, the processor executes the computer program stored in the memory, so that the electronic device executes the instructions for implementing the underwater image enhancement method based on color feature fusion as described above.
The invention has the beneficial effects that: the method combines the specific characteristics of the underwater scene, and provides a multi-channel characteristic extraction module, a residual error enhancement module combined with an attention mechanism and a dynamic characteristic enhancement module, so that the quality of underwater image enhancement is improved.
The advantages of additional aspects of the invention will be set forth in part in the description which follows, or may be learned by practice of the invention.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a flow frame of an underwater image enhancement algorithm based on color feature fusion according to an embodiment of the present invention.
Fig. 2 is a functional block diagram of a multi-channel feature extraction module according to an embodiment of the invention.
Fig. 3 is a functional block diagram of a residual enhancement module with attention mechanism according to an embodiment of the present invention.
Fig. 4 is a functional block diagram of a dynamic feature enhancement module according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements throughout or elements having like or similar functionality. The embodiments described below by way of the drawings are exemplary only and should not be construed as limiting the invention.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, and/or groups thereof.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
In order that the invention may be readily understood, a further description of the invention will be rendered by reference to specific embodiments that are illustrated in the appended drawings and are not to be construed as limiting embodiments of the invention.
It will be appreciated by those skilled in the art that the drawings are merely schematic representations of examples and that the elements of the drawings are not necessarily required to practice the invention.
Example 1
In this embodiment 1, there is provided an underwater image enhancement system based on color feature fusion, the system comprising: an acquisition module for acquiring the image to be enhanced; and a processing module for processing the acquired image with the trained model to obtain an image enhancement result. The training of the trained model comprises: extracting detail features and global features of the underwater image, and setting channel-specific dropout rates for the information of different channels; generating residuals for each channel from the extracted detail and global features, and fusing the feature information of the three channels according to these residuals; extracting more diversified feature information from the obtained fused features with deformable convolution, and fusing the ordinary-convolution and deformable-convolution features through a residual connection; and introducing the information of each color channel of the original input image through a global residual connection, so that model training is more stable.
In this embodiment 1, an underwater image enhancement method based on color feature fusion is implemented by using the system described above, including:
acquiring the image to be enhanced with the acquisition module; and processing the acquired image with the trained model in the processing module to obtain the image enhancement result. The training of the trained model comprises: extracting detail features and global features of the underwater image, and setting channel-specific dropout rates for the information of different channels; generating residuals for each channel from the extracted detail and global features, and fusing the feature information of the three channels according to these residuals; extracting more diversified feature information from the obtained fused features with deformable convolution, and fusing the ordinary-convolution and deformable-convolution features through a residual connection; and introducing the information of each color channel of the original input image through a global residual connection, so that model training is more stable.
Extracting detail features and global features of the underwater image and setting channel-specific dropout rates for the information of different channels comprises: splitting the input image into the information of three color channels to be processed separately, and allocating a different receptive field to each channel; replacing the BN layer in each convolution block with a dropout layer, with channel-specific dropout rates of 0.4, 0.3 and 0.2 for the blue, green and red channels respectively; and finally using PReLU as the activation layer, adaptively learning the parameters of the rectified linear units, and fusing the features of the channels. The red channel uses a smaller 3×3 convolution kernel to extract features, while larger 5×5 and 7×7 convolution kernels extract the green and blue channel features, respectively.
The obtained fused features are processed with a multi-path strategy: three convolution layers with receptive fields of different sizes, followed by a dropout layer with rate 0.1 and a PReLU, extract the detail and global features of the underwater image to generate corresponding feature residuals, and a CBAM module processes the residual information of each channel.
Deformable convolution is introduced, and its capability to characterize irregular features is exploited to obtain more diversified feature information; meanwhile, a residual connection fuses the features of ordinary and deformable convolution.
The information of each channel of the original image is introduced through a residual connection; split-path feature processing increases the effectiveness of the residual connection, the original channel features are combined with the corresponding channel features, and the three paths of features are fused, their dimensions adjusted, and the final result output.
Example 2
In this embodiment 2, an enhancement processing method for underwater images is provided that combines the uniqueness of light propagation in water with the characteristics of the underwater scene and the advantages of deep learning networks. First, a multi-channel feature extraction module allocates receptive fields of different sizes to the information of different color channels, so as to better learn the global and local features of the image, while dropout replaces the common Batch Normalization (BN) layer. A residual enhancement module combined with the channel-spatial attention mechanism of the Convolutional Block Attention Module (CBAM) then refines the features. Finally, exploiting the advantage of deformable convolution (Deformable Convolution) for irregular feature extraction, a dynamic feature enhancement module is provided to capture richer feature information, and a global residual connection ensures the stability of training.
An underwater image enhancement algorithm based on color feature fusion comprises the following steps:
step (1): firstly, an image is input and sent to a multi-channel feature extraction module for feature extraction.
Step (2): combining the CBAM module and residual connection, and further refining the result of the step (1).
Step (3): and inputting the refined fusion characteristics into a dynamic characteristic enhancement module.
Step (4): and the residual connection is combined to introduce the characteristics of each color channel of the original image, so that the stability of model training is ensured.
Deep-learning-based underwater image enhancement algorithms generally adopt end-to-end training: the input data are low-quality underwater images captured directly by a camera, and the network output is the enhanced high-quality underwater image. Training the whole network is the process by which the neural network learns the mapping between low-quality and high-quality underwater images.
In step (1), a new multi-channel feature extraction module is provided. First, according to the propagation characteristics of light underwater, the input image is split into the information of three color channels, each processed separately with its own receptive field. Since red light attenuates severely in water, the red channel contains more local features of the underwater image, so a smaller 3×3 convolution kernel extracts its features; blue and green light travel farther in water and carry more global features of the image, so larger 5×5 and 7×7 convolution kernels extract the green and blue channel features, respectively. In addition, dropout has been shown in recent years to enhance model generalization and improve the feature representations of intermediate layers, so a dropout layer replaces the common BN layer in each convolution block, with channel-specific dropout rates of 0.4, 0.3 and 0.2 for the blue, green and red channels respectively. Finally, PReLU is used as the activation layer to adaptively learn the parameters of the rectified linear units, and the features of the channels are fused to obtain the output of the first stage.
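The first-stage module described above can be sketched in PyTorch as follows. The channel-specific kernel sizes (3×3/5×5/7×7) and dropout rates (0.2/0.3/0.4 for red/green/blue) come from the text; the branch width `feat` and the 1×1 fusion convolution are illustrative assumptions, not the patented implementation.

```python
import torch
import torch.nn as nn

class MultiChannelFeatureExtraction(nn.Module):
    """Sketch of step (1): one branch per color channel, dropout instead of BN."""
    def __init__(self, feat: int = 16):
        super().__init__()
        def branch(kernel: int, drop: float) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(1, feat, kernel, padding=kernel // 2),
                nn.Dropout2d(drop),   # dropout layer replaces the usual BN layer
                nn.PReLU(feat),       # activation with adaptively learned parameters
            )
        # Smaller receptive field for red (local features), larger for green/blue.
        self.red = branch(3, 0.2)
        self.green = branch(5, 0.3)
        self.blue = branch(7, 0.4)
        self.fuse = nn.Conv2d(3 * feat, feat, 1)  # fuse the three branches (assumed 1x1 conv)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        r, g, b = x[:, 0:1], x[:, 1:2], x[:, 2:3]  # split the RGB input
        feats = torch.cat([self.red(r), self.green(g), self.blue(b)], dim=1)
        return self.fuse(feats)
```

The spatial size is preserved because each branch pads by half its (odd) kernel size.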
In step (2), the fused features obtained by the simple strategy of the first stage are refined further, continuing the multi-path processing strategy. Three convolution layers with receptive fields of different sizes process the features of each channel, followed uniformly by a dropout layer with rate 0.1 and a PReLU. The features from the first stage are used to generate corresponding feature residuals; a CBAM module processes the residual information of each channel, further distinguishing the primary and secondary features on each branch, and the three paths of information are fused to obtain the result of the second stage.
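A minimal sketch of this residual refinement, assuming the standard CBAM formulation (channel attention from average- and max-pooled descriptors through a shared MLP, then spatial attention from a 7×7 convolution). Only the dropout rate 0.1, the PReLU, and the residual structure come from the text; the reduction ratio and layer widths are assumptions.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Channel attention followed by spatial attention (standard formulation)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention: shared MLP over avg- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: 7x7 conv over channel-wise avg/max maps.
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

class ResidualCBAMBranch(nn.Module):
    """One branch of step (2): conv -> dropout(0.1) -> PReLU, CBAM-refined residual."""
    def __init__(self, channels: int, kernel: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel, padding=kernel // 2),
            nn.Dropout2d(0.1),   # dropout rate 0.1 as stated in the text
            nn.PReLU(channels),
        )
        self.cbam = CBAM(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.cbam(self.conv(x))  # add the refined residual back
```

Three such branches with kernels 3, 5 and 7 would then be fused, mirroring the first stage.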
In step (3), to extract richer underwater scene information, deformable convolution is introduced. Its strong capability to characterize irregular features yields more diversified feature information and avoids the texture damage or over-smoothing artifacts that can arise in the final result from always using convolution kernels of the same size; meanwhile, a residual connection fuses the ordinary-convolution and deformable-convolution features.
In step (4), to improve training efficiency, the information of each channel of the original image is introduced through a residual connection, and split-path feature processing again increases the effectiveness of the residual connection: the original channel features are combined with the channel features after step (3), ensuring that the whole training process is more stable and does not deviate greatly. The three paths of features are fused, their dimensions adjusted, and the final result output.
Comparative test
(1) Training and testing process
The experiments were run on an NVIDIA RTX A4000 GPU under the PyTorch framework. The model was optimized with the Adam optimizer, with a learning rate of 0.0002, a batch size of 4, and 100 training epochs.
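Under those settings (Adam, learning rate 0.0002, batch size 4, 100 epochs), a single training step could look like the following sketch. The stand-in model and the L1 reconstruction loss are assumptions; the text does not specify the loss function.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the full enhancement network described above.
model = nn.Conv2d(3, 3, 3, padding=1)
# Optimizer settings from the text: Adam with learning rate 0.0002.
optimizer = torch.optim.Adam(model.parameters(), lr=0.0002)
loss_fn = nn.L1Loss()  # assumed reconstruction loss (not stated in the text)

def train_step(low_quality: torch.Tensor, reference: torch.Tensor) -> float:
    """One end-to-end step: low-quality input -> enhanced output vs. reference."""
    optimizer.zero_grad()
    loss = loss_fn(model(low_quality), reference)
    loss.backward()
    optimizer.step()
    return loss.item()

# The text trains with batch size 4 for 100 epochs over the paired dataset.
```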
To better evaluate the proposed underwater image enhancement algorithm, the experiments use three different metrics to assess the resulting enhanced images: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and the Underwater Image Quality Measure (UIQM).
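Of these metrics, PSNR has a simple closed form: 10·log10(peak²/MSE) between the enhanced image and its reference. A reference implementation for 8-bit images (peak value 255) is sketched below; SSIM and UIQM involve more elaborate statistics and are omitted here.

```python
import numpy as np

def psnr(enhanced: np.ndarray, reference: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    diff = enhanced.astype(np.float64) - reference.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * np.log10(peak ** 2 / mse)
```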
Tables 1 and 2 show the PSNR, SSIM and UIQM values of the proposed method and the comparison algorithms tested on the EUVP and UIEB datasets, respectively. Comparing the two tables shows that the proposed underwater image enhancement algorithm based on color feature fusion outperforms the aforementioned comparison methods on both datasets; its performance is particularly outstanding on the real underwater dataset UIEB, fully demonstrating the effectiveness of the color feature fusion strategy for processing underwater images.
TABLE 1 quality assessment results for different algorithms on EUVP test set
Table 2 results of quality assessment of different algorithms on UIEB test sets
To further verify the robustness of the proposed solution, an experiment was performed on the target detection task in high-level underwater vision. The Detecting Underwater Objects (DUO) dataset, proposed in 2021, was selected; it was obtained from the URPC series datasets provided by the Underwater Robot Picking Contest after eliminating duplicate and extremely similar pictures, and is stored in COCO format.
The experiments were based on the MMDetection framework, with a ResNet-50 backbone and Faster R-CNN as the target detection network. For precision evaluation, the mean average precision (mAP) of the standard COCO metrics was selected; in addition, to reduce the influence of the long-tail effect of the data, mAP values under different Intersection over Union (IoU) thresholds and different object sizes are also provided. The specific detection results are shown in Table 3.
Table 3 shows that the underwater images enhanced by the proposed scheme bring a corresponding improvement on every metric, fully demonstrating that the proposed underwater image enhancement algorithm can serve as preprocessing for high-level underwater vision tasks and has good robustness.
TABLE 3 results of underwater target detection experiments
Example 3
Embodiment 3 provides a non-transitory computer readable storage medium for storing computer instructions which, when executed by a processor, implement the underwater image enhancement method based on color feature fusion as described above, the method comprising:
acquiring an image to be enhanced;
processing the acquired image to be enhanced by using the trained model to obtain an image enhancement result; wherein the training of the trained model comprises:
extracting detail features and global features of the underwater image, and setting channel-specific dropout rates for the information of different channels;
generating residuals for each channel from the extracted detail and global features, and fusing the feature information of the three channels according to these residuals;
extracting more diversified feature information from the obtained fused features with deformable convolution, and fusing the ordinary-convolution and deformable-convolution features through a residual connection;
and introducing the information of each color channel of the original input image through a global residual connection, so that model training is more stable.
Example 4
This embodiment 4 provides a computer program product comprising a computer program for implementing a color feature fusion based underwater image enhancement method as described above when run on one or more processors, the method comprising:
acquiring an image to be enhanced;
processing the acquired image to be enhanced by using the trained model to obtain an image enhancement result; wherein the training of the trained model comprises:
extracting detail features and global features of the underwater image, and setting channel-specific dropout rates for the information of different channels;
generating residuals for each channel from the extracted detail and global features, and fusing the feature information of the three channels according to these residuals;
extracting more diversified feature information from the obtained fused features with deformable convolution, and fusing the ordinary-convolution and deformable-convolution features through a residual connection;
and introducing the information of each color channel of the original input image through a global residual connection, so that model training is more stable.
Example 5
Embodiment 5 provides an electronic device, including: a processor, a memory, and a computer program; the processor is connected to the memory, the computer program is stored in the memory, and when the electronic device runs, the processor executes the computer program stored in the memory to cause the electronic device to perform the underwater image enhancement method based on color feature fusion as described above, the method comprising:
acquiring an image to be enhanced;
processing the acquired image to be enhanced by using the trained model to obtain an image enhancement result; wherein the training of the trained model comprises:
extracting detail features and global features of the underwater image, and setting different dropout ratios for the information of different channels;
generating residuals for each channel from the extracted detail features and global features, and fusing the feature information of the three channels according to the residuals;
extracting more diverse feature information from the resulting fused features by using deformable convolution, and fusing the ordinary-convolution and deformable-convolution features through a residual connection;
and utilizing a global residual connection to introduce the information of each color channel of the original input image, so that the training of the model is more stable.
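The global residual connection in the last step can be sketched as below (the stand-in `body` network and the output clamp are assumptions for illustration; the point is that the network learns a residual added back onto the original input channels, which tends to stabilize training):

```python
import torch
import torch.nn as nn

class GlobalResidual(nn.Module):
    """Sketch of the global residual connection: the enhancement network
    predicts a residual that is added back to the original color channels
    of the input image, so the identity mapping is the starting point."""
    def __init__(self, body: nn.Module):
        super().__init__()
        self.body = body

    def forward(self, x):
        # clamp keeps the enhanced image in the valid [0, 1] range
        return torch.clamp(x + self.body(x), 0.0, 1.0)

# a hypothetical stand-in body; the real network is the one described above
body = nn.Conv2d(3, 3, 3, padding=1)
x = torch.rand(1, 3, 32, 32)
out = GlobalResidual(body)(x)
print(out.shape)  # torch.Size([1, 3, 32, 32])
```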
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the embodiments of the present invention have been described above in conjunction with the drawings, the description is not intended to limit the scope of the invention; various changes and modifications that a person skilled in the art could make without inventive effort also fall within the scope of the invention.

Claims (7)

1. An underwater image enhancement method based on color feature fusion is characterized by comprising the following steps:
acquiring an image to be enhanced;
processing the acquired image to be enhanced by using the trained model to obtain an image enhancement result; wherein the training of the trained model comprises:
extracting detail features and global features of the underwater image, and setting different dropout ratios for the information of different channels; comprising: splitting the input image into three channels of different color information to be processed separately, and assigning a different receptive field to the information of each channel; a dropout layer replaces the BN layer in the convolution block, and different dropout strategies are set for channels of different colors, with dropout parameters of 0.4, 0.3 and 0.2 set for the blue, green and red channels, respectively; finally, PReLU is used as the activation layer, the parameters of the rectified linear units are adaptively learned, and the features of each channel are fused; wherein the detail features are the red channel features and the global features are the blue and green channel features;
generating residuals for each channel from the extracted detail features and global features, and fusing the feature information of the three channels according to the residuals;
extracting more diverse feature information from the resulting fused features by using deformable convolution, and fusing the ordinary-convolution and deformable-convolution features through a residual connection;
and utilizing a global residual connection to introduce the information of each color channel of the original input image, so that the training of the model is more stable.
2. The underwater image enhancement method based on color feature fusion of claim 1, wherein the red channel uses a 3 x 3 convolution kernel to extract features and the green and blue channel features use 5 x 5 and 7 x 7 convolution kernels, respectively.
3. The underwater image enhancement method based on color feature fusion according to claim 1, wherein, for the obtained fused features, a multi-channel information processing strategy is adopted: three convolution layers with receptive fields of different sizes are used for processing, followed by a dropout layer with a dropout parameter of 0.1 and a PReLU; the detail features and global features of the underwater image are extracted to generate corresponding feature residuals, and a CBAM module processes the residual information of each channel.
4. The underwater image enhancement method based on color feature fusion according to claim 1, wherein the residual connection is used to introduce the information of each channel of the original image; split-path feature processing increases the effectiveness of the residual connection, the original channel features are combined with the features of each channel, the three features are fused, and the dimensions are adjusted to obtain the final result.
5. An underwater image enhancement system based on color feature fusion, comprising:
the acquisition module is used for acquiring the image to be enhanced;
the processing module is used for processing the acquired image to be enhanced by using the trained model to obtain an image enhancement result; wherein the training of the trained model comprises:
extracting detail features and global features of the underwater image, and setting different dropout ratios for the information of different channels; comprising: splitting the input image into three channels of different color information to be processed separately, and assigning a different receptive field to the information of each channel; a dropout layer replaces the BN layer in the convolution block, and different dropout strategies are set for channels of different colors, with dropout parameters of 0.4, 0.3 and 0.2 set for the blue, green and red channels, respectively; finally, PReLU is used as the activation layer, the parameters of the rectified linear units are adaptively learned, and the features of each channel are fused; wherein the detail features are the red channel features and the global features are the blue and green channel features;
generating residuals for each channel from the extracted detail features and global features, and fusing the feature information of the three channels according to the residuals;
extracting more diverse feature information from the resulting fused features by using deformable convolution, and fusing the ordinary-convolution and deformable-convolution features through a residual connection;
and utilizing a global residual connection to introduce the information of each color channel of the original input image, so that the training of the model is more stable.
6. A non-transitory computer readable storage medium storing computer instructions which, when executed by a processor, implement the color feature fusion-based underwater image enhancement method of any of claims 1-4.
7. An electronic device, comprising: a processor, a memory, and a computer program; wherein the processor is connected to the memory, the computer program is stored in the memory, and when the electronic device runs, the processor executes the computer program stored in the memory to cause the electronic device to perform the underwater image enhancement method based on color feature fusion according to any of claims 1-4.
CN202310463009.5A 2023-04-26 2023-04-26 Underwater image enhancement method and system based on color feature fusion Active CN116563145B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310463009.5A CN116563145B (en) 2023-04-26 2023-04-26 Underwater image enhancement method and system based on color feature fusion


Publications (2)

Publication Number Publication Date
CN116563145A CN116563145A (en) 2023-08-08
CN116563145B true CN116563145B (en) 2024-04-05

Family

ID=87499301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310463009.5A Active CN116563145B (en) 2023-04-26 2023-04-26 Underwater image enhancement method and system based on color feature fusion

Country Status (1)

Country Link
CN (1) CN116563145B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313644A (en) * 2021-05-26 2021-08-27 西安理工大学 Underwater image enhancement method based on residual double attention network
CN115511705A (en) * 2021-06-04 2022-12-23 天津城建大学 Image super-resolution reconstruction method based on deformable residual convolution neural network
CN115564676A (en) * 2022-09-30 2023-01-03 华东理工大学 Underwater image enhancement method and system and readable storage medium
CN115713469A (en) * 2022-11-08 2023-02-24 大连海事大学 Underwater image enhancement method for generating countermeasure network based on channel attention and deformation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070511B (en) * 2019-04-30 2022-01-28 北京市商汤科技开发有限公司 Image processing method and device, electronic device and storage medium



Similar Documents

Publication Publication Date Title
CN112288658B (en) Underwater image enhancement method based on multi-residual joint learning
Wang et al. An experimental-based review of image enhancement and image restoration methods for underwater imaging
CN112233038B (en) True image denoising method based on multi-scale fusion and edge enhancement
CN111127331B (en) Image denoising method based on pixel-level global noise estimation coding and decoding network
CN110189260B (en) Image noise reduction method based on multi-scale parallel gated neural network
CN109509248B (en) Photon mapping rendering method and system based on neural network
CN112422870B (en) Deep learning video frame insertion method based on knowledge distillation
CN110827312A (en) Learning method based on cooperative visual attention neural network
CN110223251A (en) Suitable for manually with the convolutional neural networks underwater image restoration method of lamp
CN111882516B (en) Image quality evaluation method based on visual saliency and deep neural network
CN116402709A (en) Image enhancement method for generating countermeasure network based on underwater attention
CN116563693A (en) Underwater image color restoration method based on lightweight attention mechanism
CN111915589A (en) Stereo image quality evaluation method based on hole convolution
CN113658091A (en) Image evaluation method, storage medium and terminal equipment
US11783454B2 (en) Saliency map generation method and image processing system using the same
CN115131229A (en) Image noise reduction and filtering data processing method and device and computer equipment
Wang et al. Image super-resolution via lightweight attention-directed feature aggregation network
CN106683044B (en) Image splicing method and device of multi-channel optical detection system
CN116563145B (en) Underwater image enhancement method and system based on color feature fusion
CN111476739B (en) Underwater image enhancement method, system and storage medium
CN116863320A (en) Underwater image enhancement method and system based on physical model
CN113362251B (en) Anti-network image defogging method based on double discriminators and improved loss function
CN115170921A (en) Binocular stereo matching method based on bilateral grid learning and edge loss
CN112016456B (en) Video super-resolution method and system based on adaptive back projection depth learning
CN114648800A (en) Face image detection model training method, face image detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant