CN111489303A - Maritime affairs image enhancement method under low-illumination environment - Google Patents

Maritime affairs image enhancement method under low-illumination environment

Info

Publication number
CN111489303A
CN111489303A (application CN202010231309.7A)
Authority
CN
China
Prior art keywords
image
noise
network
component
optimized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010231309.7A
Other languages
Chinese (zh)
Inventor
刘�文
郭彧
卢南华
孙睿涵
胡旭东
崔振华
马全党
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Technology WUT filed Critical Wuhan University of Technology WUT
Priority to CN202010231309.7A priority Critical patent/CN111489303A/en
Publication of CN111489303A publication Critical patent/CN111489303A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a maritime video image enhancement method for low-illumination environments, which comprises the following steps: first, an initial brightness component of the input low-illumination image is estimated with the Max-RGB method and optimized by guided filtering; the optimized brightness component is then contrast-corrected by Gamma transformation; the reflection component of the original image is separated according to Retinex theory, and a convolutional blind denoising network removes the noise in the reflection component while retaining the detail and color information of the image; finally, the corrected brightness component and the optimized reflection component are multiplied element by element to obtain the enhanced image. Analysis of maritime images shows that their texture structure differs from that of ordinary images, and research on maritime image enhancement remains scarce.

Description

Maritime affairs image enhancement method under low-illumination environment
Technical Field
The invention relates to a computer vision image enhancement technology, in particular to a maritime affairs image enhancement method in a low-illumination environment.
Background
In order to complete the maritime supervision task, maritime workers install video acquisition equipment in bridge areas and on maritime unmanned aerial vehicles to monitor waterway traffic. However, the low contrast of images captured in low-light environments severely degrades the effectiveness of computer-vision-based techniques. To make the hidden information visible, the low-light image needs to be enhanced. Traditional histogram equalization and its improved variants can obtain a satisfactory enhancement effect; however, because these methods are built on raising image contrast, they easily cause over-enhancement or under-enhancement. Gamma transformation is another conventional image processing method that applies a nonlinear operation to the image; however, it does not sufficiently consider the relationship between pixels and easily causes image distortion.
Retinex theory decomposes an input image into a luminance component and a reflection component. The reflection component contains texture and color information; the luminance component contains brightness information and is assumed to be spatially smooth. Early Retinex-based attempts, such as single-scale Retinex (SSR) and multi-scale Retinex (MSR), take the reflection component as the enhancement result, which leads to image distortion and over-enhancement. Multi-scale Retinex with color restoration (MSRCR) employs a color correction step to improve the quality of the result, but it still fails to solve the over-enhancement problem. Kimmel et al. proposed a variational framework to estimate the luminance component, but this method does not take the reflection component into account. Methods that adjust the luminance component, or that estimate the luminance and reflection components simultaneously, have also been proposed for the low-illumination enhancement problem. Both kinds of methods can obtain a smooth luminance component, but the reflection component still suffers from residual noise.
Many scholars have also proposed neural networks to solve the low-light image enhancement problem, such as LLNet, Learning to See in the Dark, and RetinexNet. However, these machine-learning-based low-illumination enhancement methods depend heavily on the type and volume of the data set, and maritime images differ from ordinary images in aspects such as structural information and texture, so these methods cannot be used directly to enhance maritime images.
Although much research exists on low-illumination image enhancement algorithms, low-illumination maritime images differ greatly from ordinary images in structure, texture, and similar information, and low-illumination enhancement methods designed for ordinary images cannot enhance maritime images robustly. A method for enhancing maritime video images in low-illumination environments therefore has important practical significance.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a method for enhancing a maritime video image in a low-illumination environment, aiming at the defects in the prior art.
The technical scheme adopted by the invention for solving the technical problems is as follows: a maritime video image enhancement method in a low-light environment comprises the following steps:
1) acquiring a maritime affairs image data set in a low-illumination environment;
2) obtaining an initial brightness component of the image data set through Max-RGB, and obtaining an optimized brightness component through guided filtering;
3) dividing the input low-illumination image by the brightness component optimized by guided filtering to obtain a reflection component;
4) eliminating noise in the reflection component by adopting a convolution blind denoising network, and keeping the details and color information of the image;
5) and performing contrast correction on the optimized brightness component by adopting Gamma transformation, and multiplying the corrected brightness component and the optimized reflection component element by element to obtain an enhanced image.
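Stripped to its data flow, the five steps above can be sketched in a few lines of NumPy. This is an illustrative sketch only: the fixed gamma of 0.5 and the pass-through `denoise` stub are assumptions, since the patent uses a convolutional blind denoising network for step 4 and does not specify a gamma value here.

```python
import numpy as np

def enhance_low_light(img, gamma=0.5, eps=1e-6, denoise=lambda r: r):
    """Sketch of the claimed pipeline on a float RGB image in [0, 1].

    `gamma` and the pass-through `denoise` stub are illustrative
    assumptions, not values fixed by the patent.
    """
    # Step 2: initial luminance via Max-RGB (guided filtering omitted here).
    lum = img.max(axis=2, keepdims=True)
    # Step 3: Retinex decomposition -- reflectance = image / luminance.
    refl = img / (lum + eps)
    # Step 4: remove noise from the reflectance (stubbed out here).
    refl = denoise(refl)
    # Step 5: gamma-correct the luminance and recombine element-wise.
    lum_corr = np.power(lum, gamma)
    return np.clip(lum_corr * refl, 0.0, 1.0)

img = np.random.rand(8, 8, 3) * 0.2  # a dark image
out = enhance_low_light(img)
print(out.shape)  # (8, 8, 3)
```

With gamma < 1 the corrected luminance exceeds the original for values below 1, so dark regions are brightened while the (denoised) reflectance preserves texture and color.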
According to the scheme, the initial brightness component in step 2) is obtained through Max-RGB, using the following formula:

$$\hat{L}(x,y)=\max_{c\in\{R,G,B\}} I_c(x,y)$$

where I is the maritime low-illumination image and $\hat{L}$ is the initial luminance component; the three channels R, G, B share the same luminance component.
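In code, the Max-RGB estimate is simply a per-pixel maximum over the color channels:

```python
import numpy as np

def max_rgb_luminance(img):
    """Initial luminance: per-pixel maximum over the R, G, B channels,
    shared by all three channels as the formula states."""
    return img.max(axis=2)

img = np.zeros((2, 2, 3))
img[0, 0] = [0.1, 0.4, 0.2]
lum = max_rgb_luminance(img)
print(lum[0, 0])  # 0.4
```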
According to the scheme, the optimized brightness component in step 2) is obtained through guided filtering, using the following formula:

$$\tilde{L}(x_i,y_j)=\sum_{(x_k,y_l)\in\omega} W_{kl}(M)\,\hat{L}(x_k,y_l)$$

where ω denotes the neighborhood of $(x_i,y_j)$, $\hat{L}$ denotes the initial luminance component, $\tilde{L}$ denotes the optimized luminance component, M denotes the grayscale image corresponding to the input image, and W is the filter kernel determined by M.
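A minimal single-channel guided filter in the standard box-window formulation (He et al.) would look like the following. The window radius `r` and regularizer `eps` are illustrative choices, not values taken from the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, r=4, eps=1e-3):
    """Smooth `src` (e.g. the Max-RGB luminance) using the grayscale
    guide M, as in the patent's W(M)-weighted neighborhood average.
    `r` and `eps` are illustrative, not from the patent."""
    size = 2 * r + 1
    mean = lambda x: uniform_filter(x, size=size, mode="reflect")
    mean_g, mean_s = mean(guide), mean(src)
    cov_gs = mean(guide * src) - mean_g * mean_s
    var_g = mean(guide * guide) - mean_g * mean_g
    a = cov_gs / (var_g + eps)          # local linear coefficients
    b = mean_s - a * mean_g
    return mean(a) * guide + mean(b)    # optimized luminance component

rng = np.random.default_rng(0)
lum = rng.random((32, 32))
smooth = guided_filter(lum, lum)
print(smooth.shape)  # (32, 32)
```

Using the grayscale image as its own guide yields an edge-preserving smoothing of the luminance, which is exactly the role the optimized component plays in the decomposition.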
According to the scheme, the convolution blind denoising network in the step 4) is a residual convolution neural network.
According to the scheme, the convolution blind denoising network in step 4) comprises a noise estimation network and a noise removal network, structured as follows:
a) Noise estimation network (E-Net)
Since the noisy input image g and the noise level estimation map $\hat{\sigma}$ have the same size, the noise estimation sub-network (E-Net) adopts a 7-layer fully convolutional network to estimate the noise level of the input image and obtain the noise level estimation map $\hat{\sigma}$. This network contains only convolutional layers (Conv) and ReLU activation functions; the number of feature channels of each convolutional layer is set to 32, all filters are of size 3 × 3, and a ReLU activation function is placed after each convolutional layer;
b) Noise removal network (D-Net)
The noise removal network adopts residual learning: it takes the noisy image g and the noise level estimation map $\hat{\sigma}$ as input and estimates a residual map $\hat{v}$; finally, the noisy image g and the residual map $\hat{v}$ are added element by element to obtain the noise-free estimated image $\hat{f}=g+\hat{v}$. The noise removal network adopts a 16-layer U-Net structure, which enlarges the receptive field with symmetric skip connections and transposed convolutions and exploits multi-scale information. All filters in D-Net are of size 3 × 3, and a ReLU activation function is placed after each Conv layer except the last.
According to the scheme, the loss function adopted by the convolution blind denoising network in step 4) is a hybrid loss function containing three sub-loss functions, which constrain the noise level estimation map $\hat{\sigma}$ and the noise-free estimated image $\hat{f}$. An asymmetric MSE loss function $\mathcal{L}_{\text{asymm}}$ and a total variation regularization loss function $\mathcal{L}_{\text{TV}}$ constrain the estimated noise level map $\hat{\sigma}$, expressed mathematically as:

$$\mathcal{L}_{\text{asymm}}=\sum_{i\in\Omega}\left|\alpha-\beta_i\right|\left(\hat{\sigma}(i)-\sigma(i)\right)^2$$

$$\mathcal{L}_{\text{TV}}=\left\|\nabla_h\hat{\sigma}\right\|_2^2+\left\|\nabla_v\hat{\sigma}\right\|_2^2$$

where Ω denotes the image domain, σ denotes the true noise level map, $\nabla_h$ and $\nabla_v$ denote the horizontal and vertical gradient operators, and α and β are penalty factors of the loss function. α is set empirically to 0.3; $\beta_i=1$ when $\hat{\sigma}(i)-\sigma(i)<0$, and $\beta_i=0$ otherwise.
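Following the asymmetric and total-variation formulas above (with β as the indicator of under-estimation), the two noise-level losses can be sketched in NumPy:

```python
import numpy as np

def asymmetric_mse(sigma_hat, sigma, alpha=0.3):
    """L_asymm: penalize under-estimation (sigma_hat < sigma) more
    heavily. beta is 1 where the estimate falls below the true level,
    so the weight |alpha - beta| is 0.7 there and 0.3 elsewhere."""
    beta = (sigma_hat - sigma < 0).astype(float)
    return float(np.sum(np.abs(alpha - beta) * (sigma_hat - sigma) ** 2))

def total_variation(sigma_hat):
    """L_TV: squared L2 norm of the horizontal and vertical gradients
    of the estimated noise-level map."""
    dh = np.diff(sigma_hat, axis=1)
    dv = np.diff(sigma_hat, axis=0)
    return float(np.sum(dh ** 2) + np.sum(dv ** 2))

sigma = np.full((4, 4), 0.2)
under = sigma - 0.1   # under-estimate everywhere -> weight 0.7
over = sigma + 0.1    # over-estimate everywhere  -> weight 0.3
print(asymmetric_mse(under, sigma) > asymmetric_mse(over, sigma))  # True
```

The asymmetry matters because, as the detailed description notes, denoising degrades when the noise level is under-estimated but tolerates over-estimation.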
The noise-free estimated image $\hat{f}$ is constrained by a structural similarity loss function $\mathcal{L}_{\text{SSIM}}$, with the following mathematical formula:

$$\mathcal{L}_{\text{SSIM}}=1-\mathrm{SSIM}(\hat{f},f)$$

where f represents the true noise-free image, and SSIM can be expressed as:

$$\mathrm{SSIM}(f,\hat{f})=\frac{(2\mu_f\mu_{\hat{f}}+c_1)(2\sigma_{f\hat{f}}+c_2)}{(\mu_f^2+\mu_{\hat{f}}^2+c_1)(\sigma_f^2+\sigma_{\hat{f}}^2+c_2)}$$

where $\mu_f$ and $\mu_{\hat{f}}$ denote the image means, $\sigma_f^2$ and $\sigma_{\hat{f}}^2$ denote the image variances, $\sigma_{f\hat{f}}$ denotes the covariance between the two images, and $c_1$, $c_2$ are small stabilizing constants. The loss function of the entire network is expressed as:

$$\mathcal{L}=\mathcal{L}_{\text{SSIM}}+\lambda_{\text{asymm}}\mathcal{L}_{\text{asymm}}+\lambda_{\text{TV}}\mathcal{L}_{\text{TV}}$$
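A single-window (global) version of the SSIM loss just described can be sketched as follows; practical SSIM implementations average the statistic over local windows, so this is a deliberate simplification, and the stabilizers `C1`/`C2` are the conventional values for images in [0, 1].

```python
import numpy as np

C1, C2 = 0.01 ** 2, 0.03 ** 2  # conventional SSIM stabilizers for [0, 1] images

def ssim(f, f_hat):
    """Global (single-window) SSIM between two images in [0, 1]."""
    mu_f, mu_g = f.mean(), f_hat.mean()
    var_f, var_g = f.var(), f_hat.var()
    cov = ((f - mu_f) * (f_hat - mu_g)).mean()
    return ((2 * mu_f * mu_g + C1) * (2 * cov + C2)) / \
           ((mu_f ** 2 + mu_g ** 2 + C1) * (var_f + var_g + C2))

def ssim_loss(f, f_hat):
    """L_SSIM = 1 - SSIM(f_hat, f): zero for identical images."""
    return 1.0 - ssim(f, f_hat)

img = np.random.rand(16, 16)
print(abs(ssim_loss(img, img)) < 1e-9)  # loss vanishes for identical images
```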
according to the scheme, the contrast correction of the optimized brightness component in the step 5) is performed by adopting Gamma transformation.
The invention has the following beneficial effects:
the blind denoising network is introduced into low-illumination enhancement, the noise of the reflection component in the maritime low-illumination image can be well eliminated, the enhanced image is obtained through the reflection component and the brightness component after the reflection component denoising processing, and meanwhile, the convolution blind denoising network is provided, the noise of the image can be well eliminated, and the details are kept.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flow chart of a method of an embodiment of the present invention;
fig. 2 is a schematic diagram of a blind denoising network according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, a method for enhancing a maritime video image in a low-light environment includes the following steps:
Firstly, a data set of 10000 images is collected, and each image is cropped to a size of 512 × 512;
the 10000 images are collected low-illumination, noise-free maritime images containing various common maritime objects such as the sea surface, reefs, bridge piers, ships, and ocean platforms.
Secondly, obtaining an initial brightness component of the data set through Max-RGB, and obtaining an optimized brightness component through guided filtering;
the initial luminance component obtained by the Max-RGB method conforms to the following formula:
Figure BDA0002429371560000071
wherein I is a maritime low-illumination image,
Figure BDA0002429371560000072
is the original luminance component and the three channels R, G, B share the same luminance component.
The guided filtering obtains the optimized luminance component according to the following formula:

$$\tilde{L}(x_i,y_j)=\sum_{(x_k,y_l)\in\omega} W_{kl}(M)\,\hat{L}(x_k,y_l)$$

where ω denotes the neighborhood of $(x_i,y_j)$, $\hat{L}$ denotes the initial luminance component, $\tilde{L}$ denotes the optimized luminance component, M denotes the grayscale image corresponding to the input image, and W is the filter kernel determined by M.
Thirdly, dividing the input low-illumination image by the brightness component optimized by guided filtering to obtain a reflection component;
fourthly, training and learning the processed reflection component data set by using a residual convolution-based neural network framework;
the residual convolutional neural network framework specifically comprises the following steps:
a. Noise estimation network (E-Net)
Since the noisy input image g and the noise level estimation map $\hat{\sigma}$ have the same size, the noise estimation sub-network (E-Net) adopts a 7-layer fully convolutional network to estimate the noise level of the input image and obtain the noise level estimation map $\hat{\sigma}$. This network contains only convolutional layers (Conv) and ReLU activation functions; the number of feature channels of each convolutional layer is set to 32, all filters are of size 3 × 3, and a ReLU activation function is placed after each convolutional layer.
b. Noise removal network (D-Net)
The noise removal network adopts residual learning: it takes the noisy image g and the noise level estimation map $\hat{\sigma}$ as input and estimates a residual map $\hat{v}$; finally, the noisy image g and the residual map $\hat{v}$ are added element by element to obtain the noise-free estimated image $\hat{f}=g+\hat{v}$. This network adopts a 16-layer U-Net structure, which enlarges the receptive field with symmetric skip connections and transposed convolutions and exploits multi-scale information. All filters in D-Net are of size 3 × 3, and a ReLU activation function is placed after each Conv layer except the last.
c. Loss function
In training, the invention adopts a hybrid loss function to measure image similarity; through this loss function, noise can be effectively estimated and removed. Specifically:
the hybrid loss function comprises three sub-loss functions to constrain a noise level estimate map
Figure BDA0002429371560000095
And noise-free estimated image
Figure BDA0002429371560000096
According to the research on the non-blind noise reduction network, when
Figure BDA0002429371560000097
When the network denoising effect is not good, the network denoising effect is not good
Figure BDA0002429371560000098
And the network denoising effect is satisfactory. Therefore, to reliably estimate the noise level, an asymmetric MSE loss function is employed
Figure BDA0002429371560000099
Sum total variation regularization term loss function
Figure BDA00024293715600000910
Constrained estimated noise level map
Figure BDA00024293715600000911
The mathematical formula can be expressed as:
Figure BDA00024293715600000912
Figure BDA00024293715600000913
where Ω represents the image domain, σ represents the true noise level map,
Figure BDA00024293715600000914
and
Figure BDA00024293715600000915
operators representing horizontal and vertical gradients, α and β represent penalty factors, α is empirically set to 0.3 when
Figure BDA00024293715600000916
Figure BDA00024293715600000917
When β is 1, otherwise β is 0, estimate the image for no noise
Figure BDA00024293715600000918
Using structurally similar loss functions
Figure BDA0002429371560000101
It is constrained by the following formula:
Figure BDA0002429371560000102
where f represents the true noise-free image, SSIM can be expressed as:
Figure BDA0002429371560000103
in the formula, mufAnd
Figure BDA0002429371560000105
respectively representing the mean, σ, of the imagefAnd
Figure BDA0002429371560000106
respectively represent the variance of the image,
Figure BDA0002429371560000107
representing the covariance between the two images, the loss function of the entire network can be expressed as:
Figure BDA0002429371560000104
wherein λ isasymmPenalty term coefficient, λ, for asymmetric MSE loss functionTVFor the penalty coefficient of the total variation regularization term loss function, it can take lambdaasymmIs 0.5, lambdaTVIs 0.005.
And fifthly, obtaining training parameters, and testing the noise removal effect by adopting the reflection component containing the noise.
And sixthly, performing contrast correction on the optimized brightness component, and multiplying the corrected brightness component and the optimized reflection component element by element to obtain an enhanced image.
It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.

Claims (9)

1. A maritime video image enhancement method under a low-illumination environment is characterized by comprising the following steps:
1) acquiring a maritime affairs image data set in a low-illumination environment;
2) obtaining an initial brightness component of an image data set through Max-RGB, and obtaining an optimized brightness component through guided filtering;
3) dividing the input low-illumination image by the brightness component optimized by guided filtering to obtain a reflection component;
4) eliminating noise in the reflection component by adopting a convolution blind denoising network, and keeping the details and color information of the image;
5) and carrying out contrast correction on the optimized brightness component, and multiplying the corrected brightness component and the optimized reflection component element by element to obtain an enhanced image.
2. The method for enhancing maritime video images under a low-illumination environment according to claim 1, wherein the initial luminance component is obtained through Max-RGB in step 2), using the following formula:

$$\hat{L}(x,y)=\max_{c\in\{R,G,B\}} I_c(x,y)$$

where I is the maritime low-illumination image and $\hat{L}$ is the initial luminance component; the three channels R, G, B share the same luminance component.
3. The method for enhancing maritime video images under a low-illumination environment according to claim 1, wherein the optimized luminance component is obtained by the guided filtering in step 2), using the following formula:

$$\tilde{L}(x_i,y_j)=\sum_{(x_k,y_l)\in\omega} W_{kl}(M)\,\hat{L}(x_k,y_l)$$

where ω denotes the neighborhood of $(x_i,y_j)$, $\hat{L}$ denotes the initial luminance component, $\tilde{L}$ denotes the optimized luminance component, M denotes the grayscale image corresponding to the input image, and W is the filter kernel determined by M.
4. The method for enhancing maritime video images under low-illumination environment according to claim 1, wherein the convolution blind denoising network in the step 4) is a residual convolution neural network.
5. The method for enhancing maritime video images under a low-illumination environment according to claim 1, wherein the convolution blind denoising network in step 4) comprises a noise estimation network and a noise removal network, structured as follows:
a) Noise estimation network
Since the noisy input image g and the noise level estimation map $\hat{\sigma}$ have the same size, the noise estimation sub-network adopts a 7-layer fully convolutional network to estimate the noise level of the input image and obtain the noise level estimation map $\hat{\sigma}$; this network contains only convolutional layers and ReLU activation functions;
b) Noise removal network
The noise removal network adopts residual learning: it takes the noisy image g and the noise level estimation map $\hat{\sigma}$ as input and estimates a residual map $\hat{v}$; finally, the noisy image g and the residual map $\hat{v}$ are added element by element to obtain the noise-free estimated image $\hat{f}=g+\hat{v}$; the noise removal network adopts a 16-layer U-Net structure.
6. The method for enhancing maritime video images under a low-illumination environment according to claim 5, wherein the number of feature channels of each convolutional layer of the noise estimation network in step 4) is set to 32, all filters are of size 3 × 3, and a ReLU activation function is placed after each convolutional layer.
7. The method for enhancing maritime video images under a low-illumination environment according to claim 5, wherein the noise removal network in step 4) enlarges the receptive field with symmetric skip connections and transposed convolutions and exploits multi-scale information; all filters in D-Net are of size 3 × 3, and a ReLU activation function is placed after each Conv layer except the last.
8. The method as claimed in claim 5, wherein the loss function adopted by the convolution blind denoising network in step 4) is a hybrid loss function containing three sub-loss functions, which constrain the noise level estimation map $\hat{\sigma}$ and the noise-free estimated image $\hat{f}$, as follows: an asymmetric MSE loss function $\mathcal{L}_{\text{asymm}}$ and a total variation regularization loss function $\mathcal{L}_{\text{TV}}$ constrain the estimated noise level map, expressed mathematically as:

$$\mathcal{L}_{\text{asymm}}=\sum_{i\in\Omega}\left|\alpha-\beta_i\right|\left(\hat{\sigma}(i)-\sigma(i)\right)^2$$

$$\mathcal{L}_{\text{TV}}=\left\|\nabla_h\hat{\sigma}\right\|_2^2+\left\|\nabla_v\hat{\sigma}\right\|_2^2$$

where Ω denotes the image domain, σ denotes the true noise level map, $\nabla_h$ and $\nabla_v$ denote the horizontal and vertical gradient operators, and α and β are penalty coefficients of the loss function;
the noise-free estimated image $\hat{f}$ is constrained by a structural similarity loss function $\mathcal{L}_{\text{SSIM}}$, with the following mathematical formula:

$$\mathcal{L}_{\text{SSIM}}=1-\mathrm{SSIM}(\hat{f},f)$$

where f represents the true noise-free image, and SSIM can be expressed as:

$$\mathrm{SSIM}(f,\hat{f})=\frac{(2\mu_f\mu_{\hat{f}}+c_1)(2\sigma_{f\hat{f}}+c_2)}{(\mu_f^2+\mu_{\hat{f}}^2+c_1)(\sigma_f^2+\sigma_{\hat{f}}^2+c_2)}$$

where $\mu_f$ and $\mu_{\hat{f}}$ denote the image means, $\sigma_f^2$ and $\sigma_{\hat{f}}^2$ denote the image variances, and $\sigma_{f\hat{f}}$ denotes the covariance between the two images; the loss function of the entire network is expressed as:

$$\mathcal{L}=\mathcal{L}_{\text{SSIM}}+\lambda_{\text{asymm}}\mathcal{L}_{\text{asymm}}+\lambda_{\text{TV}}\mathcal{L}_{\text{TV}}$$

where $\lambda_{\text{asymm}}$ is the penalty coefficient of the asymmetric MSE loss function and $\lambda_{\text{TV}}$ is the penalty coefficient of the total variation regularization loss function.
9. The method for enhancing maritime video images under a low-illumination environment according to claim 1, wherein in step 5), the contrast correction of the optimized luminance component is performed by using Gamma transformation.
CN202010231309.7A 2020-03-27 2020-03-27 Maritime affairs image enhancement method under low-illumination environment Pending CN111489303A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010231309.7A CN111489303A (en) 2020-03-27 2020-03-27 Maritime affairs image enhancement method under low-illumination environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010231309.7A CN111489303A (en) 2020-03-27 2020-03-27 Maritime affairs image enhancement method under low-illumination environment

Publications (1)

Publication Number Publication Date
CN111489303A true CN111489303A (en) 2020-08-04

Family

ID=71797562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010231309.7A Pending CN111489303A (en) 2020-03-27 2020-03-27 Maritime affairs image enhancement method under low-illumination environment

Country Status (1)

Country Link
CN (1) CN111489303A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200750A (en) * 2020-10-21 2021-01-08 华中科技大学 Ultrasonic image denoising model establishing method and ultrasonic image denoising method
CN112215767A (en) * 2020-09-28 2021-01-12 电子科技大学 Anti-blocking effect image video enhancement method
CN112308803A (en) * 2020-11-25 2021-02-02 哈尔滨工业大学 Self-supervision low-illumination image enhancement and denoising method based on deep learning
CN112580672A (en) * 2020-12-28 2021-03-30 安徽创世科技股份有限公司 License plate recognition preprocessing method and device suitable for dark environment and storage medium
CN112614063A (en) * 2020-12-18 2021-04-06 武汉科技大学 Image enhancement and noise self-adaptive removal method for low-illumination environment in building
CN113344804A (en) * 2021-05-11 2021-09-03 湖北工业大学 Training method of low-light image enhancement model and low-light image enhancement method
CN113450366A (en) * 2021-07-16 2021-09-28 桂林电子科技大学 AdaptGAN-based low-illumination semantic segmentation method
CN113643202A (en) * 2021-07-29 2021-11-12 西安理工大学 Low-light-level image enhancement method based on noise attention map guidance
CN113808036A (en) * 2021-08-31 2021-12-17 西安理工大学 Low-illumination image enhancement and denoising method based on Retinex model
WO2022217476A1 (en) * 2021-04-14 2022-10-20 深圳市大疆创新科技有限公司 Image processing method and apparatus, electronic device, and storage medium
CN115797225A (en) * 2023-01-06 2023-03-14 山东环宇地理信息工程有限公司 Unmanned ship acquisition image enhancement method for underwater topography measurement
CN116128768A (en) * 2023-04-17 2023-05-16 中国石油大学(华东) Unsupervised image low-illumination enhancement method with denoising module

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163818A (en) * 2019-04-28 2019-08-23 武汉理工大学 A kind of low illumination level video image enhancement for maritime affairs unmanned plane

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163818A (en) * 2019-04-28 2019-08-23 武汉理工大学 A kind of low illumination level video image enhancement for maritime affairs unmanned plane

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
S. Guo et al., "Toward convolutional blind denoising of real photographs," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. *
X. Guo et al., "LIME: Low-light image enhancement via illumination map estimation," IEEE Trans. Image Process. *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215767A (en) * 2020-09-28 2021-01-12 电子科技大学 Anti-blocking effect image video enhancement method
CN112200750A (en) * 2020-10-21 2021-01-08 华中科技大学 Ultrasonic image denoising model establishing method and ultrasonic image denoising method
CN112200750B (en) * 2020-10-21 2022-08-05 华中科技大学 Ultrasonic image denoising model establishing method and ultrasonic image denoising method
CN112308803A (en) * 2020-11-25 2021-02-02 哈尔滨工业大学 Self-supervision low-illumination image enhancement and denoising method based on deep learning
CN112614063A (en) * 2020-12-18 2021-04-06 武汉科技大学 Image enhancement and noise self-adaptive removal method for low-illumination environment in building
CN112614063B (en) * 2020-12-18 2022-07-01 武汉科技大学 Image enhancement and noise self-adaptive removal method for low-illumination environment in building
CN112580672A (en) * 2020-12-28 2021-03-30 安徽创世科技股份有限公司 License plate recognition preprocessing method and device suitable for dark environment and storage medium
WO2022217476A1 (en) * 2021-04-14 2022-10-20 深圳市大疆创新科技有限公司 Image processing method and apparatus, electronic device, and storage medium
CN113344804A (en) * 2021-05-11 2021-09-03 湖北工业大学 Training method of low-light image enhancement model and low-light image enhancement method
CN113450366A (en) * 2021-07-16 2021-09-28 桂林电子科技大学 AdaptGAN-based low-illumination semantic segmentation method
CN113450366B (en) * 2021-07-16 2022-08-30 桂林电子科技大学 AdaptGAN-based low-illumination semantic segmentation method
CN113643202A (en) * 2021-07-29 2021-11-12 西安理工大学 Low-light-level image enhancement method based on noise attention map guidance
CN113808036A (en) * 2021-08-31 2021-12-17 西安理工大学 Low-illumination image enhancement and denoising method based on Retinex model
CN113808036B (en) * 2021-08-31 2023-02-24 西安理工大学 Low-illumination image enhancement and denoising method based on Retinex model
CN115797225A (en) * 2023-01-06 2023-03-14 山东环宇地理信息工程有限公司 Unmanned ship acquisition image enhancement method for underwater topography measurement
CN116128768A (en) * 2023-04-17 2023-05-16 中国石油大学(华东) Unsupervised image low-illumination enhancement method with denoising module

Similar Documents

Publication Publication Date Title
CN111489303A (en) Maritime affairs image enhancement method under low-illumination environment
Wang et al. An experimental-based review of image enhancement and image restoration methods for underwater imaging
Li et al. Rain streak removal using layer priors
CN111047530A (en) Underwater image color correction and contrast enhancement method based on multi-feature fusion
CN108564597B (en) Video foreground object extraction method fusing Gaussian mixture model and H-S optical flow method
CN110675340A (en) Single image defogging method and medium based on improved non-local prior
CN113284061B (en) Underwater image enhancement method based on gradient network
CN116596792B (en) Inland river foggy scene recovery method, system and equipment for intelligent ship
CN111145102A (en) Synthetic aperture radar image denoising method based on convolutional neural network
CN114219722A (en) Low-illumination image enhancement method by utilizing time-frequency domain hierarchical processing
Yu et al. Image and video dehazing using view-based cluster segmentation
Zhou et al. Multicolor light attenuation modeling for underwater image restoration
CN116188325A (en) Image denoising method based on deep learning and image color space characteristics
CN112070688A (en) Single image defogging method for generating countermeasure network based on context guidance
CN114693548B (en) Dark channel defogging method based on bright area detection
Huang et al. Underwater image enhancement based on color restoration and dual image wavelet fusion
CN115272072A (en) Underwater image super-resolution method based on multi-feature image fusion
CN115115549A (en) Image enhancement model, method, equipment and storage medium of multi-branch fusion attention mechanism
CN114119383A (en) Underwater image restoration method based on multi-feature fusion
CN115409872B (en) Image optimization method for underwater camera
CN115760640A (en) Coal mine low-illumination image enhancement method based on noise-containing Retinex model
CN116612032A (en) Sonar image denoising method and device based on self-adaptive wiener filtering and 2D-VMD
CN113269763B (en) Underwater image definition recovery method based on depth map restoration and brightness estimation
CN115249211A (en) Image restoration method based on underwater non-uniform incident light model
CN114549342A (en) Underwater image restoration method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200804