CN115272072A - Underwater image super-resolution method based on multi-feature image fusion - Google Patents

Underwater image super-resolution method based on multi-feature image fusion

Info

Publication number
CN115272072A
CN115272072A (Application CN202210799113.7A)
Authority
CN
China
Prior art keywords
image
feature
underwater
resolution
super
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210799113.7A
Other languages
Chinese (zh)
Inventor
Fu Xianping
Wang Xining
Yao Bing
Jiang Guangqi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University
Priority to CN202210799113.7A
Publication of CN115272072A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 — Geometric image transformations in the plane of the image
    • G06T3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 — Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/08 — Learning methods
    • G06N3/084 — Backpropagation, e.g. using gradient descent
    • G06T5/00 — Image enhancement or restoration
    • G06T5/40 — Image enhancement or restoration using histogram techniques
    • G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/90 — Dynamic range modification of images or parts thereof
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10024 — Color image (image acquisition modality)
    • G06T2207/20081 — Training; Learning (special algorithmic details)
    • G06T2207/20084 — Artificial neural networks [ANN]
    • G06T2207/20221 — Image fusion; Image merging (image combination)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an underwater image super-resolution method based on multi-feature image fusion, which comprises the following steps: acquiring an underwater image and preprocessing it separately with three methods (white balance, adaptive histogram equalization and dark channel prior); constructing a feature fusion super-resolution network model, inputting the three preprocessed underwater images together with the original image into the model for training, and obtaining confidence information for each preprocessed image; performing feature optimization on the original image and the three preprocessed images to obtain three feature-optimized images; multiplying the three feature-optimized images by their respective confidences to obtain three underwater images to be fused, and summing them to obtain the feature-fused underwater image; and performing super-resolution reconstruction on the feature-fused underwater image to obtain a super-resolution reconstructed image. The method can better extract useful detail information from the preprocessed images while reducing artifacts.

Description

Underwater image super-resolution method based on multi-feature image fusion
Technical Field
The invention relates to the technical field of underwater image processing, in particular to an underwater image super-resolution method based on multi-feature image fusion.
Background
In recent years, as humanity's exploitation of ocean resources has deepened, progress in underwater machine vision has come to play an important role in marine exploration. However, because underwater environments are complex and varied, because large amounts of suspended matter in seawater attenuate illumination, and because forward and backward scattering in the water further distort it, images captured underwater commonly suffer from color distortion, blurred details, low contrast and low resolution. Improving the contrast and resolution of underwater images with super-resolution reconstruction, so as to address their lack of detail texture and their blurriness, is therefore of significant research interest.
The image super-resolution reconstruction technology is a technology for restoring low-quality and low-resolution images into high-quality and high-resolution images by using related knowledge in the fields of digital image processing, computer vision and the like. High resolution images typically contain greater pixel density, richer texture details, and higher trustworthiness than low resolution images. The high-resolution image is very helpful for improving the recognition capability and the recognition accuracy of the image by human beings. The image super-resolution reconstruction method comprises a traditional super-resolution reconstruction algorithm and a super-resolution reconstruction algorithm based on deep learning. The traditional super-resolution reconstruction algorithm is mainly used for reconstructing images through a digital image processing technology, and mainly comprises an interpolation-based method, a degradation model-based method, a learning-based method and the like. With the rise of deep learning algorithms, more and more super-resolution reconstruction methods based on deep learning begin to appear.
Compared with image super-resolution reconstruction on land, relatively little research exists in the underwater field. Because underwater imaging generally suffers from poor visibility and from light absorption and scattering, underwater images lack important details and their regions of interest are not salient, which makes super-resolution reconstruction of underwater images very difficult. In recent years researchers have made attempts at super-resolution reconstruction of underwater images, with the emphasis on reconstructing better-quality underwater images from noisy or blurred ones. An image super-resolution method based on feature fusion offers a good approach to this problem. The feature fusion module extracts the most significant features from several feature images and merges them into a single feature image with stronger discriminative power. The super-resolution module adopts a deep residual network structure and uses skip connections to retain identity mappings across the repeated convolutional blocks, which helps train very deep network models stably. The skip connections inside the dense residual blocks combine hierarchical features from every layer, improving the performance of super-resolution reconstruction. An image super-resolution method based on feature fusion can therefore address the difficulty of extracting underwater image features, their weak detail features and their low resolution.
Disclosure of Invention
According to the problems in the prior art, the invention discloses an underwater image super-resolution method based on multi-feature image fusion, which specifically comprises the following steps:
acquiring an underwater image, and respectively preprocessing the underwater image by adopting three methods of white balance, adaptive histogram equalization and dark channel prior;
constructing a feature fusion super-resolution network model, inputting the three preprocessed underwater images and the original image into the feature fusion super-resolution network model for training, and obtaining confidence information of each preprocessed image;
respectively carrying out feature optimization on the original image and the three preprocessed images to obtain three feature optimized images;
multiplying the three feature-optimized images by their respective confidences to obtain three underwater images to be fused, and summing the three underwater images to be fused element-wise for feature fusion to obtain the feature-fused underwater image;
and performing super-resolution reconstruction on the feature-fused underwater image to obtain a super-resolution reconstructed image.
Further, the white balance method comprises: first performing color cast correction on the underwater image to obtain accurate color restoration. The adaptive histogram equalization method comprises: first computing a local histogram of the underwater image, then redistributing brightness to improve its local contrast and obtain more image detail. The dark channel prior method comprises: computing the transmissivity and the atmospheric light component of the underwater image, performing Gaussian low-pass filtering on the original image and then inverting it to obtain a transmissivity map, from which a locally defogged underwater image is obtained.
Further, the adaptive-histogram-equalization feature-optimized image R_HE, the white-balance feature-optimized image R_WB and the dark-channel-prior feature-optimized image R_DCP are multiplied by their respective confidences D_HE, D_WB and D_DCP to obtain three underwater images to be fused; these are summed element-wise for feature fusion to obtain the feature-fused underwater image I_ER. The process is expressed as:
I_ER = D_HE × R_HE + D_WB × R_WB + D_DCP × R_DCP    (1)
In the process of feature fusion of the underwater image, the mapping function of the feature-fused underwater image is learned by minimizing a perceptual loss function, which reduces the artifacts caused by pixel-level loss functions. The perceptual loss measuring the difference between the fused image I_ER and a reference image I_RF is:
L_P(I_ER, I_RF) = (1/N) Σ_k [1/(W_k·H_k·C_k)] · ||φ_k(I_ER) - φ_k(I_RF)||²    (2)
where N is the batch size during training; W_k, H_k and C_k are the width, height and number of channels of the feature map of the k-th convolutional layer in the network model; and φ_k(x) denotes the activated output of the k-th layer of the pre-trained network model.
Further, when performing super-resolution reconstruction on the feature-fused underwater image, the method comprises reconstructing low-level feature information and high-level feature information in the underwater image:
wherein when reconstructing the low-level feature information: firstly, extracting low-level feature information in an underwater image, and recovering the low-level feature information in the underwater image;
when the high-level characteristic information is reconstructed: firstly, extracting high-level feature information of an underwater image, extracting the high-level feature information of the high-resolution underwater image by using a pre-trained network model by adopting a transfer learning method, comparing the high-level feature information with the high-level feature information extracted in the previous step, and reducing the error of the high-level feature information and the high-level feature information by adopting a minimum mean square error method;
and performing super-resolution reconstruction on the recovered underwater image low-level characteristic information and high-level characteristic information by adopting a deconvolution method.
Furthermore, when performing super-resolution reconstruction on the feature-fused underwater image, the degree of difference between the extracted underwater image features and the features extracted by the network model is first measured, so that the reconstructed high-resolution underwater image stays close to reality; the similarity between images is measured with a content loss function:
L_C(F) = E_{h,l}[ ||θ(h) - θ(F(l))||_2 ]    (3)
where F: {l} → h denotes the learned function or mapping, l denotes the low-resolution image domain, and h denotes the high-resolution image domain. To regulate the perceptual loss (2) and the content loss (3) so that both remain small, loss hyperparameters λ_k and λ_C are introduced; the target loss function of the feature-fusion super-resolution network is then expressed as:
L = λ_k · L_P + λ_C · L_C(F)    (4)
due to the adoption of the technical scheme, the underwater image super-resolution method based on multi-feature image fusion can well solve the problems of low contrast ratio, low resolution ratio and the like of an underwater image. The method also has certain help to solve the problem of color cast of the underwater image; the method utilizes a feature fusion network, and can better extract useful detail information of a preprocessed image while reducing artifacts; according to the method, a depth residual block structure is used, low-level feature details of the underwater image can be well extracted and combined, and a transfer learning method is used for supplementing high-level feature details of the underwater image so as to reconstruct the high-quality and high-resolution underwater image at a super-resolution ratio.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments described in the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of an underwater image super-resolution method based on multi-feature image fusion, disclosed by the invention;
FIG. 2 is a general framework structure diagram of the underwater image super-resolution method based on multi-feature image fusion.
Detailed Description
In order to make the technical solutions and advantages of the present invention clearer, the following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the drawings in the embodiments of the present invention:
the method comprises the steps of firstly obtaining underwater images of different sea areas and different depths, and respectively processing an original image by adopting three algorithms of white balance, adaptive histogram equalization and dark channel prior to construct an underwater feature map with various discrimination information. Then, the three feature images are input into a feature confidence network together with the original image, and the confidence information of each feature image is obtained. Furthermore, the feature map is processed using a feature optimization unit to reduce the redundant color mapping and artifacts introduced by the pre-processing algorithm. And respectively multiplying the optimized feature images by respective confidence coefficients to obtain underwater images to be fused, and then fusing the different types of images to obtain feature fusion images. Due to scattering of light by a water body and the like, extraction of underwater image features is difficult, and feature fusion can extract target essential features from underwater feature maps with various discrimination information and combine the target essential features into a feature image with stronger discrimination capability. Then, the feature fusion image is input into a super-resolution module, low-layer feature information from each convolution layer is combined by utilizing a dense residual block, high-layer feature information of the image is supplemented by using a transfer learning method, and finally the resolution of the image is amplified by using deconvolution operation. The method solves the problems of low contrast and low resolution of the underwater image, effectively enhances the detail texture characteristics of the underwater image, and improves the resolution of the underwater image. The method comprises the following specific steps:
s1: acquiring an underwater image, and respectively preprocessing the underwater image by adopting three methods of white balance, adaptive histogram equalization and dark channel prior, wherein the following methods are specifically adopted:
S11: underwater images are acquired and integrated into a data set, which is divided, in a fixed ratio, into a training data set and a test data set.
S12: the underwater image is preprocessed with a white balance method. Since underwater images tend toward blue-green, the proportion of the red channel is increased and that of the green and blue channels is reduced, correcting the color cast and yielding accurate color restoration. Specifically, the histogram of each RGB channel is computed; the bottom 0.25% of pixels are assigned 0, the top 0.25% are assigned 255, and the remainder are mapped linearly between 0 and 255, which is equivalent to performing one histogram stretch per channel. The values of each channel are thereby distributed more uniformly, achieving color balance.
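The per-channel stretch described in S12 can be sketched in numpy. The 0.25% clip fractions come from the text; the function name and everything else here are plain implementation choices, not the patent's code:

```python
import numpy as np

def white_balance(img: np.ndarray, clip: float = 0.25) -> np.ndarray:
    """Per-channel percentile stretch, as described in S12.

    The bottom `clip` percent of each channel's histogram is mapped to 0,
    the top `clip` percent to 255, and the rest linearly in between --
    effectively one histogram stretch per RGB channel.
    """
    out = np.empty_like(img, dtype=np.uint8)
    for c in range(img.shape[2]):
        ch = img[..., c].astype(np.float64)
        lo = np.percentile(ch, clip)          # 0.25th percentile -> 0
        hi = np.percentile(ch, 100.0 - clip)  # 99.75th percentile -> 255
        if hi <= lo:                          # flat channel: leave unchanged
            out[..., c] = img[..., c]
            continue
        stretched = (ch - lo) / (hi - lo) * 255.0
        out[..., c] = np.clip(stretched, 0, 255).astype(np.uint8)
    return out
```

Boosting the red channel relative to green and blue, as the text also suggests, would be a separate gain step applied before or after this stretch.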
S13: the underwater image is preprocessed with an adaptive histogram equalization method. A local histogram of the underwater image is computed, and brightness is then redistributed to improve local contrast and recover more image detail. Specifically, each local region of the underwater image is processed by histogram specification, and adjacent regions are then blended with bilinear interpolation to eliminate artificially introduced boundary effects; in particular, the contrast of uniform-brightness regions can be limited so that noise is not amplified.
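A simplified numpy sketch of the S13 idea follows. The bilinear blending between tiles that full CLAHE performs is omitted for brevity, and the tile count and clip limit are illustrative assumptions; only the per-tile clipped equalization (which limits contrast in uniform regions so noise is not amplified) is shown:

```python
import numpy as np

def clahe_simplified(gray: np.ndarray, tiles: int = 4, clip_limit: float = 4.0) -> np.ndarray:
    """Simplified sketch of adaptive histogram equalization (S13).

    The image is split into `tiles` x `tiles` local regions (dimensions are
    assumed divisible by `tiles`); each region's histogram is clipped and
    equalized independently.  Production CLAHE additionally blends
    neighbouring tiles with bilinear interpolation to hide tile borders.
    """
    h, w = gray.shape
    out = np.empty_like(gray)
    th, tw = h // tiles, w // tiles
    for i in range(tiles):
        for j in range(tiles):
            ys, xs = slice(i * th, (i + 1) * th), slice(j * tw, (j + 1) * tw)
            region = gray[ys, xs]
            hist = np.bincount(region.ravel(), minlength=256).astype(np.float64)
            # Clip histogram peaks and redistribute the excess uniformly,
            # which caps how much contrast a uniform area can gain.
            limit = clip_limit * region.size / 256.0
            excess = np.maximum(hist - limit, 0).sum()
            hist = np.minimum(hist, limit) + excess / 256.0
            cdf = hist.cumsum()
            cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12) * 255.0
            out[ys, xs] = cdf[region].astype(gray.dtype)
    return out
```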
S14: and preprocessing the underwater image by adopting a dark channel prior method. The method comprises the steps of firstly calculating the transmissivity of an underwater image and the atmospheric light component value, then carrying out Gaussian low-pass filtering on an original image, carrying out inversion to obtain a transmissivity image, and then obtaining a partially defogged underwater image from the transmissivity image, so that the brightness of a dark area in the underwater image is improved.
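A rough numpy sketch of the S14 pipeline under the standard haze model J = (I - A)/t + A. The patch size, the omega factor, the box-shaped minimum filter and the top-0.1% atmospheric-light estimate are assumptions borrowed from common dark-channel-prior practice, and the Gaussian smoothing of the transmission map mentioned in the text is omitted for brevity:

```python
import numpy as np

def dark_channel_defog(img: np.ndarray, patch: int = 7, omega: float = 0.9) -> np.ndarray:
    """Sketch of dark-channel-prior pre-processing (S14)."""
    img_f = img.astype(np.float64) / 255.0
    h, w, _ = img_f.shape
    # Dark channel: per-pixel minimum over channels, then over a local patch.
    min_c = img_f.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_c, pad, mode='edge')
    dark = np.empty_like(min_c)
    for y in range(h):
        for x in range(w):
            dark[y, x] = padded[y:y + patch, x:x + patch].min()
    # Atmospheric light A: mean colour of the brightest 0.1% dark-channel pixels.
    n = max(1, int(h * w * 0.001))
    idx = np.argsort(dark.ravel())[-n:]
    A = img_f.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission map: inverted (scaled) dark channel, floored to avoid blow-up.
    t = 1.0 - omega * dark / max(A.max(), 1e-6)
    t = np.clip(t, 0.1, 1.0)[..., None]
    # Recover the scene radiance and brighten dark regions.
    J = (img_f - A) / t + A
    return np.clip(np.rint(J * 255.0), 0, 255).astype(np.uint8)
```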
S2: constructing a feature fusion super-resolution network model, specifically adopting the following method:
S21: a deep learning framework is built using PyTorch.
S22: the model is compiled to complete the fitting and training process. The confidence module is designed with a gated fusion network architecture; the feature optimization module uses multilayer convolutions and rectified linear units; and the super-resolution module uses dense residual blocks and deconvolution operations.
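A hedged PyTorch sketch of the confidence module just described: a gated-fusion style convolutional stack over the original image and its three pre-processed variants, ending in a softmax so the three per-pixel confidence maps sum to one. Channel widths and layer counts are guesses, since the patent does not specify them:

```python
import torch
import torch.nn as nn

class ConfidenceModule(nn.Module):
    """Gated-fusion confidence module sketch (S22/S32).

    Takes the original image plus its WB/HE/DCP variants (4 x 3 = 12 input
    channels) and emits one confidence map per pre-processed branch; the
    softmax over the branch dimension makes the maps sum to 1 per pixel.
    """
    def __init__(self, feats: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(12, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, 3, 3, padding=1),   # one logit map per branch
        )

    def forward(self, orig, wb, he, dcp):
        logits = self.net(torch.cat([orig, wb, he, dcp], dim=1))
        return torch.softmax(logits, dim=1)      # D_WB, D_HE, D_DCP
```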
S23: the model was fitted on the acquired training data set, with epoch set to 400, batch_size set to 16, and the learning rate set to 0.001.
S24: and evaluating and predicting the model, and testing the model by using variable data and random tests to verify the effectiveness of the training model and whether the performance efficiency of the training model meets the expected requirement.
S3: inputting the three preprocessed images and the original image into a feature fusion super-resolution network for training, and obtaining the confidence coefficient of each preprocessed image through a feature confidence coefficient module, wherein the following method is specifically adopted:
s31: and inputting the three preprocessed images and the original image into a feature confidence coefficient module in the feature fusion super-resolution network in parallel.
S32: the network is trained using multilayer convolution and a softmax (normalized exponential) function to obtain the confidence information of each preprocessed image.
S33: the most significant features in the feature fusion images are determined from the learned confidence information, the three preprocessing feature images are combined into a feature image which comprehensively utilizes multiple image features and has stronger discrimination capability, the advantage complementation of multiple features is realized, and a more robust and significant result is obtained.
S4: the original underwater image is preprocessed by three image processing algorithms, and although the detail characteristics of the underwater image can be enhanced from different angles, redundant background noise is added to the image. The characteristic optimization module can reduce color mapping and artifacts introduced into the original image by three image processing algorithms as much as possible, and improve the image quality. Inputting the original image and the three preprocessed images into a feature optimization module respectively, performing feature optimization to obtain three feature optimized images, and specifically adopting the following modes:
s41: and the original image and the three preprocessed images are respectively input into a feature optimization module in the feature fusion super-resolution network.
S42: for more efficient gradient descent and backpropagation, and to avoid exploding and vanishing gradients, the network is trained with multilayer convolution and rectified linear units (ReLU). The feature information of the input images is extracted and combined, reducing the color mapping and artifacts introduced into the original image by the three preprocessing algorithms and improving image quality; at the same time, the sparsity of ReLU activations reduces the overall computational cost of the neural network.
S5: due to scattering of light by a water body and the like, extraction of underwater image features is difficult, and feature fusion can extract target essential features from underwater feature maps with various discrimination information and combine the target essential features into a feature image with stronger discrimination capability. And multiplying the three feature optimization images by respective confidence degrees to obtain the underwater image to be fused. And integrating and adding the three underwater images to be fused for feature fusion to obtain the feature-fused underwater image, wherein the following method is specifically adopted:
s51: and multiplying the three feature optimization images obtained by the feature optimization module by respective confidence information by adopting an element-wise multiplication method to obtain three underwater images to be fused.
S52: and integrating and adding the three underwater images to be fused by adopting an element-wise addition method to perform feature fusion to obtain the feature-fused underwater image.
S53: the feature fusion process is expressed as a function to facilitate computing the loss cost. The histogram-equalization feature-optimized image R_HE, the white-balance feature-optimized image R_WB and the dark-channel-prior feature-optimized image R_DCP are multiplied by their respective confidences D_HE, D_WB and D_DCP to obtain three underwater images to be fused; these are summed element-wise for feature fusion to obtain the feature-fused underwater image I_ER. The process is expressed as:
I_ER = D_HE × R_HE + D_WB × R_WB + D_DCP × R_DCP    (1)
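The fusion of S51–S53, equation (1), reduces to an element-wise multiply-and-add. A minimal numpy sketch with hypothetical argument names, where each confidence map is broadcast over the colour channels:

```python
import numpy as np

def fuse(R_HE, R_WB, R_DCP, D_HE, D_WB, D_DCP):
    """Equation (1): I_ER = D_HE*R_HE + D_WB*R_WB + D_DCP*R_DCP.

    R_* are (H, W, 3) feature-optimized images; D_* are (H, W) per-pixel
    confidence maps (e.g. softmax outputs, so they sum to 1 per pixel).
    """
    return (D_HE[..., None] * R_HE
            + D_WB[..., None] * R_WB
            + D_DCP[..., None] * R_DCP)
```

When the confidences sum to one at every pixel, the fused image is a convex combination of the three branches, so its values stay within the input range.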
S54: during feature fusion, the mapping function of the feature-fused underwater image is learned by minimizing a perceptual loss function, which reduces the artifacts caused by pixel-level loss functions. The perceptual loss measuring the difference between the fused image I_ER and a reference image I_RF is:
L_P(I_ER, I_RF) = (1/N) Σ_k [1/(W_k·H_k·C_k)] · ||φ_k(I_ER) - φ_k(I_RF)||²    (2)
where N is the batch size during training; W_k, H_k and C_k are the width, height and number of channels of the feature map of the k-th convolutional layer in the network model; and φ_k(x) denotes the activated output of the k-th layer of the pre-trained network model.
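Equation (2) can be transcribed directly into numpy. The sketch below assumes the activated feature maps φ_k have already been extracted by some pre-trained backbone (the patent does not name which network), each with shape (N, C_k, H_k, W_k):

```python
import numpy as np

def perceptual_loss(feats_er, feats_rf):
    """Numpy transcription of equation (2).

    `feats_er` / `feats_rf` are lists of activated feature maps
    phi_k(I_ER) and phi_k(I_RF), one pair per chosen layer k, each of
    shape (N, C_k, H_k, W_k).  Each layer's squared L2 difference is
    normalised by N * W_k * H_k * C_k, then the layers are summed.
    """
    total = 0.0
    for f_er, f_rf in zip(feats_er, feats_rf):
        n, c, h, w = f_er.shape
        total += np.sum((f_er - f_rf) ** 2) / (n * w * h * c)
    return total
```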
S6: inputting the underwater image with the fused features into a super-resolution module for super-resolution reconstruction to obtain a super-resolution reconstructed image, wherein the following method is specifically adopted:
s61: extracting low-level feature information in the underwater image by adopting a multi-layer convolution and batch standardization method, combining the level features and the identification mapping from each convolution layer by adopting a dense residual jump connection mode, and recovering the low-level feature information in the underwater image.
S62: and extracting and storing high-level characteristic information of the underwater image by adopting a multi-layer convolution and batch standardization method.
S63: and (3) obtaining high-level characteristic information of the pre-trained high-resolution underwater image extracted from the pre-trained network model by adopting a transfer learning method.
S64: the high-level feature information stored in S62 is compared with that obtained in S63, the error between the two is reduced with a minimum mean square error method, and the high-level feature information of the underwater image is recovered.
S65: and performing super-resolution reconstruction on the recovered low-level characteristic information and high-level characteristic information of the underwater image by adopting a deconvolution method to obtain a super-resolution underwater image.
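The dense residual block of S61 and the deconvolution of S65 can be sketched in PyTorch as follows. The growth rate, depth and the x2 upscaling factor are illustrative assumptions; the patent does not give the actual configuration:

```python
import torch
import torch.nn as nn

class DenseResidualBlock(nn.Module):
    """Dense residual block sketch (S61): each conv layer receives the
    concatenation of all previous feature maps (dense skip connections),
    and an identity shortcut is added so low-level details survive."""
    def __init__(self, channels: int = 64, growth: int = 32, layers: int = 3):
        super().__init__()
        self.convs = nn.ModuleList()
        in_ch = channels
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, 3, padding=1), nn.ReLU(inplace=True)))
            in_ch += growth
        self.fuse = nn.Conv2d(in_ch, channels, 1)  # 1x1 conv back to `channels`

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))  # identity shortcut

class Upsampler(nn.Module):
    """Deconvolution (transposed convolution) doubling the spatial
    resolution, as used for the reconstruction step of S65."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)

    def forward(self, x):
        return self.deconv(x)
```

With kernel 4, stride 2 and padding 1, the transposed convolution maps an H x W feature map to exactly 2H x 2W.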
S66: when performing super-resolution reconstruction on the feature-fused underwater image, the degree of difference between the extracted underwater image features and those obtained by the transfer learning method is measured, so that the reconstructed high-resolution underwater image stays close to reality; the similarity between images is measured with a content loss function:
L_C(F) = E_{h,l}[ ||θ(h) - θ(F(l))||_2 ]    (3)
where F: {l} → h denotes the learned function or mapping, l denotes the low-resolution image domain, and h denotes the high-resolution image domain. To regulate the perceptual loss (2) and the content loss (3) so that both remain small, loss hyperparameters λ_k and λ_C are introduced; the target loss function of the feature-fusion super-resolution network is then expressed as:
L = λ_k · L_P + λ_C · L_C(F)    (4)
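A minimal numpy sketch of the content loss (3) and of combining it with the perceptual loss into the target objective (4). The function names, the batch layout of the feature maps, and the default weights are illustrative, not from the patent; λ_k and λ_C would be tuned hyperparameters:

```python
import numpy as np

def content_loss(theta_h, theta_fl):
    """Equation (3): expected L2 distance between the feature maps theta(h)
    of the real high-resolution image and theta(F(l)) of the reconstruction.
    Inputs are batched arrays of shape (N, ...); the expectation over (h, l)
    is approximated by the batch mean."""
    diff = (theta_h - theta_fl).reshape(theta_h.shape[0], -1)
    return np.mean(np.linalg.norm(diff, axis=1))

def total_loss(l_p, l_c, lam_k=1.0, lam_c=1.0):
    """Equation (4): weighted sum of the perceptual loss (2) and the
    content loss (3)."""
    return lam_k * l_p + lam_c * l_c
```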
the underwater image super-resolution method based on the feature fusion network can well solve the problems of low contrast ratio, low resolution ratio and the like of an underwater image. The method also has certain help to solve the problem of color cast of the underwater image. By applying the feature fusion network, useful detail information of the preprocessed image can be better extracted while artifacts are reduced. By using the depth residual block structure, the low-level feature details of the underwater image can be well extracted and combined, and the high-level feature details of the underwater image are supplemented by using a transfer learning method, so that the high-quality and high-resolution underwater image is reconstructed by using super-resolution.
The above description is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any equivalent substitution or change that a person skilled in the art could readily conceive within the technical scope disclosed herein, according to the technical solutions and the inventive concept of the present invention, shall be covered by the protection scope of the present invention.

Claims (5)

1. An underwater image super-resolution method based on multi-feature image fusion is characterized by comprising the following steps:
acquiring an underwater image, and respectively preprocessing the underwater image by adopting three methods of white balance, adaptive histogram equalization and dark channel prior;
constructing a feature fusion super-resolution network model, inputting the three preprocessed underwater images and the original image into the feature fusion super-resolution network model for training, and obtaining confidence information of each preprocessed image;
respectively carrying out feature optimization on the original image and the three preprocessed images to obtain three feature optimized images;
multiplying the three feature-optimized images by their respective confidences to obtain three underwater images to be fused, and summing the three underwater images to be fused to perform feature fusion, thereby obtaining a feature-fused underwater image;
and performing super-resolution reconstruction on the feature-fused underwater image to obtain a super-resolution reconstructed image.
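The five steps of claim 1 can be sketched as a pipeline skeleton (a hypothetical illustration; every `*_fn` argument below is a placeholder standing in for a component that the patent realizes with the trained feature fusion network):

```python
import numpy as np

def super_resolve(img, preprocess_fns, confidence_fn, optimize_fn, reconstruct_fn):
    """Skeleton of the claimed pipeline; the *_fn callables stand in for the
    white-balance/CLAHE/dark-channel branches and the trained network parts."""
    branches = [fn(img) for fn in preprocess_fns]               # step 1: three preprocessed images
    confidences = confidence_fn(img, branches)                  # step 2: per-branch confidence
    optimized = [optimize_fn(b) for b in branches]              # step 3: feature optimization
    fused = sum(d * r for d, r in zip(confidences, optimized))  # step 4: confidence-weighted fusion
    return reconstruct_fn(fused)                                # step 5: super-resolution reconstruction

# Identity placeholders just to show the data flow end to end.
img = np.full((4, 4, 3), 0.5)
out = super_resolve(
    img,
    preprocess_fns=[lambda x: x] * 3,
    confidence_fn=lambda x, bs: [1.0 / len(bs)] * len(bs),
    optimize_fn=lambda x: x,
    reconstruct_fn=lambda x: x,
)
```

With identity placeholders the output equals the input, which makes the data flow easy to verify before real components are substituted in.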
2. The underwater image super-resolution method based on multi-feature image fusion of claim 1, characterized in that: the white balance method comprises: first performing color cast correction on the underwater image to obtain accurate color restoration; the adaptive histogram equalization method comprises: first computing a local histogram of the underwater image, then redistributing brightness to improve the local contrast of the underwater image and so recover more image detail; the dark channel prior method comprises: computing the transmittance and atmospheric light component of the underwater image, applying Gaussian low-pass filtering to the original image and then inverting it to obtain a transmittance map, and obtaining a locally defogged underwater image from the transmittance map.
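Two of the three preprocessing branches can be illustrated with minimal NumPy sketches (a gray-world white balance and a naive dark channel; these are standard textbook formulations rather than the patent's exact implementations, and adaptive histogram equalization is omitted for brevity):

```python
import numpy as np

def gray_world_white_balance(img):
    """Correct color cast by scaling each channel so its mean matches the
    global mean (gray-world assumption); img is HxWx3 in [0, 1]."""
    means = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img * (means.mean() / means), 0.0, 1.0)

def dark_channel(img, patch=3):
    """Per-pixel minimum over channels followed by a local minimum filter,
    as used to estimate transmittance in dark channel prior dehazing."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

A practical pipeline would typically use library routines (e.g. OpenCV's CLAHE) instead of these loops, but the sketches show what each branch computes.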
3. The underwater image super-resolution method based on multi-feature image fusion of claim 1, characterized in that: the adaptive-histogram-equalization feature-optimized image RHE, the white-balance feature-optimized image RWB and the dark-channel-prior feature-optimized image RDCP are multiplied by their respective confidences DHE, DWB and DDCP to obtain three underwater images to be fused; the three images to be fused are then summed to perform feature fusion, yielding the feature-fused underwater image IER. The process is expressed as:
IER=DHE×RHE+DWB×RWB+DDCP×RDCP (1)
in the process of performing feature fusion on the underwater image, the mapping function of the feature-fused underwater image is learned by minimizing a perceptual loss function, which reduces the artifacts caused by pixel-level loss functions; the perceptual loss function measuring the difference between the feature-fused image IER and the reference image IRF is:
LP = (1/N) Σn=1..N (1/(WkHkCk)) ||φk(IER) − φk(IRF)||² (2)
wherein N represents the batch size during training; Wk, Hk and Ck respectively represent the width, height and number of channels of the feature map of the k-th convolutional layer in the network model; and φk(x) represents the activated output of the k-th layer of the pre-trained network model.
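Equation (1) and the batch-averaged perceptual loss of equation (2) can be sketched as follows (the feature extractor φk is replaced here by precomputed feature arrays, since the pre-trained model is not specified in this claim):

```python
import numpy as np

def fuse(branches, confidences):
    """Eq. (1): IER = DHE*RHE + DWB*RWB + DDCP*RDCP."""
    return sum(d * r for d, r in zip(confidences, branches))

def perceptual_loss(feat_fused, feat_ref):
    """Eq. (2): mean over the batch of the squared L2 distance between
    feature maps, normalized by Wk*Hk*Ck (the feature-map size)."""
    n = feat_fused.shape[0]
    diff = (feat_fused - feat_ref).reshape(n, -1)
    return ((diff ** 2).sum(axis=1) / diff.shape[1]).mean()
```

In practice `feat_fused` and `feat_ref` would be the activations φk(IER) and φk(IRF) from a frozen pre-trained network (e.g. a VGG-style backbone); here any matching-shape arrays demonstrate the arithmetic.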
4. The underwater image super-resolution method based on multi-feature image fusion of claim 1, characterized in that: when the super-resolution reconstruction is carried out on the underwater image with the fused characteristics, the reconstruction comprises the reconstruction of low-layer characteristic information and high-layer characteristic information in the underwater image:
wherein when reconstructing the low-level feature information: firstly, extracting low-level feature information in an underwater image, and recovering the low-level feature information in the underwater image;
when reconstructing the high-level feature information: extracting the high-level information of the underwater image; using a transfer learning approach, extracting the high-level information of the high-resolution underwater image with a pre-trained network model, comparing it with the high-level information extracted in the previous step, and reducing the error between the two by minimizing the mean square error;
and performing super-resolution reconstruction on the recovered underwater image low-level characteristic information and high-level characteristic information by adopting a deconvolution method.
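The deconvolution (transposed convolution) used for the final upsampling step can be illustrated with a minimal single-channel sketch (a real model would use a learned layer such as PyTorch's `ConvTranspose2d`; the fixed kernel here is only a placeholder):

```python
import numpy as np

def transposed_conv2d(x, kernel, stride=2):
    """Single-channel transposed convolution: each input pixel 'stamps' the
    kernel, scaled by its value, onto a larger output grid."""
    h, w = x.shape
    kh, kw = kernel.shape
    out = np.zeros(((h - 1) * stride + kh, (w - 1) * stride + kw))
    for i in range(h):
        for j in range(w):
            out[i * stride:i * stride + kh, j * stride:j * stride + kw] += x[i, j] * kernel
    return out
```

With a 2x2 kernel and stride 2 the stamped patches tile the output exactly, doubling the spatial resolution, which is why this operation is a common choice for the reconstruction head of super-resolution networks.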
5. The underwater image super-resolution method based on multi-feature image fusion of claim 4, characterized in that: when performing super-resolution reconstruction on the feature-fused underwater image, the degree of difference between the extracted underwater image features and the features extracted by the pre-trained network model is first measured in order to obtain a high-resolution underwater image close to reality; the similarity between images is measured with a content loss function:
LC(F) = Eh,l[||θ(h) − θ(F(l))||2] (3)
wherein F: {l} → h represents the learned mapping, l represents the low-resolution image domain, and h represents the high-resolution image domain; to balance the perceptual loss function of formula (2) and the content loss function of formula (3) so that both losses remain small, hyperparameters λk and λC associated with the loss functions are introduced, and the target loss function of the feature fusion super-resolution network is then expressed as:
L(F) = λkLP(F) + λCLC(F)
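Equation (3) and the combined target loss can be sketched as follows (the λ values below are illustrative placeholders, since the patent does not state concrete hyperparameter settings):

```python
import numpy as np

def content_loss(theta_h, theta_sr):
    """Eq. (3): expectation over samples of the L2 distance between features
    theta(h) of the real high-resolution image and theta(F(l)) of the
    reconstruction."""
    diff = (theta_h - theta_sr).reshape(theta_h.shape[0], -1)
    return np.linalg.norm(diff, axis=1).mean()

def total_loss(l_p, l_c, lam_k=1.0, lam_c=0.01):
    """Combined target loss: L = lam_k * L_P + lam_c * L_C, where the
    lambdas trade off the perceptual loss (2) against the content loss (3)."""
    return lam_k * l_p + lam_c * l_c
```

Tuning the two λ weights is what lets both losses stay small simultaneously during training of the feature fusion super-resolution network.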
CN202210799113.7A 2022-07-06 2022-07-06 Underwater image super-resolution method based on multi-feature image fusion Pending CN115272072A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210799113.7A CN115272072A (en) 2022-07-06 2022-07-06 Underwater image super-resolution method based on multi-feature image fusion


Publications (1)

Publication Number Publication Date
CN115272072A true CN115272072A (en) 2022-11-01

Family

ID=83766457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210799113.7A Pending CN115272072A (en) 2022-07-06 2022-07-06 Underwater image super-resolution method based on multi-feature image fusion

Country Status (1)

Country Link
CN (1) CN115272072A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117082362A * 2023-08-25 2023-11-17 山东中清智能科技股份有限公司 Underwater imaging method and device
CN117082362B * 2023-08-25 2024-05-28 山东中清智能科技股份有限公司 Underwater imaging method and device
CN117689760A * 2024-02-02 2024-03-12 山东大学 OCT axial super-resolution method and system based on histogram information network
CN117689760B * 2024-02-02 2024-05-03 山东大学 OCT axial super-resolution method and system based on histogram information network

Similar Documents

Publication Publication Date Title
Wang et al. An experimental-based review of image enhancement and image restoration methods for underwater imaging
CN111275637B (en) Attention model-based non-uniform motion blurred image self-adaptive restoration method
CN111127336B (en) Image signal processing method based on self-adaptive selection module
CN111709895A (en) Image blind deblurring method and system based on attention mechanism
CN111028163A (en) Convolution neural network-based combined image denoising and weak light enhancement method
Wang et al. MAGAN: Unsupervised low-light image enhancement guided by mixed-attention
CN115272072A (en) Underwater image super-resolution method based on multi-feature image fusion
CN113284061B (en) Underwater image enhancement method based on gradient network
CN111738948A (en) Underwater image enhancement method based on double U-nets
CN116523794A (en) Low-light image enhancement method based on convolutional neural network
CN115170915A (en) Infrared and visible light image fusion method based on end-to-end attention network
CN111476739B (en) Underwater image enhancement method, system and storage medium
Han et al. UIEGAN: Adversarial learning-based photorealistic image enhancement for intelligent underwater environment perception
CN117408924A (en) Low-light image enhancement method based on multiple semantic feature fusion network
CN115861094A (en) Lightweight GAN underwater image enhancement model fused with attention mechanism
CN115035010A (en) Underwater image enhancement method based on convolutional network guided model mapping
CN114881879A (en) Underwater image enhancement method based on brightness compensation residual error network
CN118247174A (en) Method and device for training turbid underwater image enhancement model, medium and equipment
Kumar et al. Underwater image enhancement using deep learning
Wang et al. Underwater image quality optimization: Researches, challenges, and future trends
CN117974459A (en) Low-illumination image enhancement method integrating physical model and priori
CN114140361A (en) Generation type anti-network image defogging method fusing multi-stage features
CN117392036A (en) Low-light image enhancement method based on illumination amplitude
CN117351340A (en) Underwater image enhancement algorithm based on double-color space
CN117115411A (en) Multi-frequency double-branch underwater image enhancement method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination