CN111489303A - Maritime affairs image enhancement method under low-illumination environment - Google Patents
- Publication number
- CN111489303A (application CN202010231309.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- noise
- network
- component
- optimized
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention discloses a maritime video image enhancement method for low-illumination environments, comprising the following steps: first, an initial luminance component of the input low-illumination image is estimated with the Max-RGB method and refined by guided filtering to obtain an optimized luminance component; the optimized luminance component is then contrast-corrected by Gamma transformation. The reflection component of the original image is separated according to Retinex theory, and a convolutional blind denoising network eliminates the noise in the reflection component while preserving the detail and color information of the image. Finally, the corrected luminance component and the denoised reflection component are multiplied element by element to obtain the enhanced image. Analysis of maritime images shows that their texture structure differs from that of ordinary images, and research on maritime image enhancement remains scarce; the method addresses this gap.
Description
Technical Field
The invention relates to computer-vision image enhancement technology, and in particular to a maritime image enhancement method for low-illumination environments.
Background
To carry out maritime supervision tasks, maritime authorities install video acquisition equipment in bridge areas and on maritime unmanned aerial vehicles to monitor waterway traffic. However, the low contrast of images captured in low-light environments severely degrades the effectiveness of computer-vision-based techniques, so low-light images must be enhanced to make the hidden information visible. Traditional histogram equalization and its refinements can produce satisfactory enhancement, but because these methods operate purely on image contrast, they easily cause over-enhancement or under-enhancement. Gamma transformation, another conventional image processing method, applies a nonlinear mapping to the image, but it does not adequately consider the relationships between pixels and easily causes image distortion.
Retinex theory decomposes an input image into a luminance component and a reflection component. The reflection component contains texture and color information; the luminance component contains brightness information, which is assumed to be spatially smooth. Early Retinex-based methods, such as single-scale Retinex (SSR) and multi-scale Retinex (MSR), take the reflection component directly as the enhancement result, which leads to image distortion and over-enhancement. Multi-scale Retinex with color restoration (MSRCR) adds a color-correction step to improve result quality, but it still fails to solve the over-enhancement problem. Kimmel et al. proposed a variational framework to estimate the luminance component, but this method does not account for the reflection component. Methods that adjust the luminance component, or that estimate the luminance and reflection components simultaneously, have also been proposed for the low-illumination enhancement problem. Both can obtain a smooth luminance component, but residual noise remains in the reflection component.
Many scholars have also proposed neural networks for the low-light image enhancement problem, such as LLNet, Learning to See in the Dark, and RetinexNet. However, these machine-learning-based low-illumination enhancement methods depend heavily on the type and volume of the training data, and since maritime images differ from ordinary images in structural information and texture, they cannot be used directly to enhance maritime images.
Although many low-illumination image enhancement algorithms exist, low-illumination maritime images differ greatly from ordinary images in structure, texture, and related information, so enhancement methods designed for ordinary images cannot robustly enhance maritime images. A method for enhancing maritime video images in low-illumination environments therefore has important practical significance.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a method for enhancing a maritime video image in a low-illumination environment, aiming at the defects in the prior art.
The technical scheme adopted by the invention for solving the technical problems is as follows: a maritime video image enhancement method in a low-light environment comprises the following steps:
1) acquiring a maritime image data set in a low-illumination environment;
2) obtaining the initial luminance component of the image data set through Max-RGB, and obtaining the optimized luminance component through guided filtering;
3) dividing the input low-illumination image by the guided-filtering-optimized luminance component to obtain the reflection component;
4) eliminating the noise in the reflection component with a convolutional blind denoising network while preserving the detail and color information of the image;
5) performing contrast correction on the optimized luminance component with a Gamma transformation, and multiplying the corrected luminance component element by element with the optimized reflection component to obtain the enhanced image.
According to the scheme, the initial luminance component in step 2) is obtained through Max-RGB using the following formula:

L̂(x, y) = max_{c ∈ {R,G,B}} I_c(x, y)

where I is the maritime low-illumination image and L̂ is the original luminance component; the three channels R, G, B share the same luminance component.
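As a concrete illustration, the Max-RGB estimate above can be sketched in a few lines of NumPy (a minimal hypothetical example, not the patented implementation):

```python
import numpy as np

def max_rgb_luminance(img):
    """Initial luminance estimate: per-pixel maximum over R, G, B.

    img is an H x W x 3 array scaled to [0, 1]; the three channels
    share the single-channel luminance map returned here."""
    return img.max(axis=2)

# Toy 2 x 2 low-light image.
img = np.array([[[0.10, 0.20, 0.05], [0.30, 0.10, 0.20]],
                [[0.00, 0.00, 0.40], [0.25, 0.25, 0.25]]])
L0 = max_rgb_luminance(img)
assert L0[0, 0] == 0.20 and L0[1, 1] == 0.25  # channel-wise maxima
```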
According to the scheme, the optimized luminance component in step 2) is obtained by guided filtering using the following formula:

L̃(x_i, y_j) = Σ_{(x_k, y_l) ∈ ω} W_{kl}(M) · L̂(x_k, y_l)

where ω is the neighborhood of (x_i, y_j), L̂ is the original luminance component, L̃ is the optimized luminance component, M is the grayscale image corresponding to the input image, and W is the filter kernel associated with M.
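A self-contained NumPy sketch of a guided filter in He et al.'s formulation, using a cumulative-sum box filter; the radius and eps values are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def box_filter(x, r):
    """Mean over a (2r+1) x (2r+1) window, edge-padded."""
    p = np.pad(x, r, mode='edge')
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # leading zero row/column
    H, W = x.shape
    k = 2 * r + 1
    s = (c[k:k + H, k:k + W] - c[:H, k:k + W]
         - c[k:k + H, :W] + c[:H, :W])
    return s / k ** 2

def guided_filter(I, p, r=2, eps=1e-3):
    """Edge-preserving smoothing of p under guide image I."""
    mI, mp = box_filter(I, r), box_filter(p, r)
    cov = box_filter(I * p, r) - mI * mp     # covariance of guide and input
    var = box_filter(I * I, r) - mI * mI     # variance of the guide
    a = cov / (var + eps)
    b = mp - a * mI
    return box_filter(a, r) * I + box_filter(b, r)
```

In the method above, the guide M would be the grayscale version of the input image and p the initial Max-RGB luminance component.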
According to the scheme, the convolutional blind denoising network in step 4) is a residual convolutional neural network.
According to the scheme, the convolution blind denoising network in the step 4) comprises a noise estimation network and a noise removal network, and the structure is as follows:
a) noise estimation network (E-Net)
Since the noise level estimation map σ̂(g) has the same size as the noisy input image g, the noise estimation sub-network (E-Net) adopts a 7-layer fully convolutional network to estimate the noise level of the input image and obtain the noise level estimation map σ̂(g). This network contains only convolutional layers (Conv) and ReLU activation functions; the feature channels of each convolutional layer are set to 32, all filters are of size 3 × 3, and a ReLU activation function is placed after each convolutional layer;
b) noise removing network (D-Net)
The noise removal network adopts residual learning: it takes the noisy image g and the noise level estimation map σ̂(g) as input and estimates a residual map R̂(g). Finally, the noisy image g and the residual map R̂(g) are added element by element to obtain the noise-free estimated image f̂ = g + R̂(g). The noise removal network adopts a 16-layer U-Net structure that expands the receptive field with symmetric skip connections and transposed convolutions and exploits multi-scale information. All filters in D-Net are of size 3 × 3, and a ReLU activation function is placed after each Conv layer except the last.
According to the scheme, the convolutional blind denoising network in step 4) adopts a hybrid loss function containing three sub-loss functions to constrain the noise level estimation map σ̂(g) and the noise-free estimated image f̂, comprising: an asymmetric MSE loss L_asymm and a total variation regularization loss L_TV to constrain the estimated noise level map σ̂(g), expressed mathematically as:

L_asymm = Σ_{i∈Ω} |α − β_i| · (σ̂_i − σ_i)²

L_TV = ‖∇_h σ̂(g)‖₂² + ‖∇_v σ̂(g)‖₂²

where Ω is the image domain, σ is the true noise level map, ∇_h and ∇_v are the horizontal and vertical gradient operators, and α and β are penalty factors of the loss function; α is set empirically to 0.3, and β_i = 1 when σ̂_i − σ_i < 0 and β_i = 0 otherwise.
The noise-free estimated image f̂ is constrained with a structural similarity loss function L_SSIM:

L_SSIM = 1 − SSIM(f̂, f)

where f is the true noise-free image and SSIM can be expressed as:

SSIM(f, f̂) = (2 μ_f μ_f̂ + c₁)(2 σ_{f f̂} + c₂) / ((μ_f² + μ_f̂² + c₁)(σ_f² + σ_f̂² + c₂))

where μ_f and μ_f̂ are the image means, σ_f² and σ_f̂² are the image variances, σ_{f f̂} is the covariance between the two images, and c₁ and c₂ are small stabilizing constants. The loss function of the entire network is expressed as:

L = L_SSIM + λ_asymm · L_asymm + λ_TV · L_TV
according to the scheme, the contrast correction of the optimized brightness component in the step 5) is performed by adopting Gamma transformation.
The invention has the following beneficial effects:
the blind denoising network is introduced into low-illumination enhancement, the noise of the reflection component in the maritime low-illumination image can be well eliminated, the enhanced image is obtained through the reflection component and the brightness component after the reflection component denoising processing, and meanwhile, the convolution blind denoising network is provided, the noise of the image can be well eliminated, and the details are kept.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flow chart of a method of an embodiment of the present invention;
fig. 2 is a schematic diagram of a blind denoising network according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, a method for enhancing a maritime video image in a low-light environment includes the following steps:
firstly, collecting a data set for making 10000 images, and cutting the images into 10000 × 512 large data sets;
10000 images are collected low-illumination noiseless marine images containing various common marine objects such as sea, reef, bridge column, ship, ocean platform and the like.
Secondly, obtaining an initial brightness component of the data set through Max-RGB, and obtaining an optimized brightness component through guide filtering;
the initial luminance component obtained by the Max-RGB method conforms to the following formula:
wherein I is a maritime low-illumination image,is the original luminance component and the three channels R, G, B share the same luminance component.
The guided filtering that yields the optimized luminance component conforms to the following formula:

L̃(x_i, y_j) = Σ_{(x_k, y_l) ∈ ω} W_{kl}(M) · L̂(x_k, y_l)

where ω is the neighborhood of (x_i, y_j), L̂ is the original luminance component, L̃ is the optimized luminance component, M is the grayscale image corresponding to the input image, and W is the filter kernel associated with M.
Thirdly, the input low-illumination image is divided element by element by the guided-filtering-optimized luminance component to obtain the reflection component;
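The Retinex decomposition in this step reduces to an element-wise division, guarded against near-zero luminance (a minimal sketch; the epsilon floor is an assumption not stated in the patent):

```python
import numpy as np

def reflection_component(img, L, eps=1e-4):
    """Retinex: I = R * L, so R = I / L per channel, with a floor on L."""
    return img / np.maximum(L, eps)[..., None]

img = np.array([[[0.10, 0.20, 0.05]]])   # 1 x 1 x 3 toy image
L = np.array([[0.20]])                   # optimized luminance component
R = reflection_component(img, L)
# Multiplying back reconstructs the input wherever L > eps.
assert np.allclose(R * L[..., None], img)
```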
fourthly, training and learning the processed reflection component data set by using a residual convolution-based neural network framework;
the residual convolutional neural network framework specifically comprises the following steps:
a. noise estimation network (E-Net)
Since the noise level estimation map σ̂(g) has the same size as the noisy input image g, the noise estimation sub-network (E-Net) adopts a 7-layer fully convolutional network to estimate the noise level of the input image and obtain the noise level estimation map σ̂(g). The network contains only convolutional layers (Conv) and ReLU activation functions; the feature channels of each convolutional layer are set to 32, all filters are of size 3 × 3, and a ReLU activation function is placed after each convolutional layer.
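Because every layer is a 3 × 3 "same" convolution, the noise level map keeps the input's spatial size. A shape-level NumPy sketch with random, untrained weights illustrates this behavior; it is not the trained network:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv3x3(x, w):
    """3x3 'same' convolution. x: (C_in, H, W); w: (C_out, C_in, 3, 3)."""
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    win = sliding_window_view(xp, (3, 3), axis=(1, 2))  # (C_in, H, W, 3, 3)
    return np.einsum('cijkl,ockl->oij', win, w)

def e_net(g, weights):
    """7 conv layers, 32 feature channels, ReLU after each layer."""
    x = g
    for w in weights:
        x = np.maximum(conv3x3(x, w), 0.0)
    return x  # noise level map sigma_hat(g), same H x W as the input

rng = np.random.default_rng(0)
ws = [rng.normal(0, 0.1, (32, 3, 3, 3))]                  # RGB -> 32 channels
ws += [rng.normal(0, 0.1, (32, 32, 3, 3)) for _ in range(5)]
ws += [rng.normal(0, 0.1, (3, 32, 3, 3))]                 # 32 -> 3-channel map
g = rng.random((3, 8, 8))
sigma_hat = e_net(g, ws)
assert sigma_hat.shape == g.shape   # same size as the noisy input
```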
b. Noise removing network (D-Net)
The noise removal network adopts residual learning: it takes the noisy image g and the noise level estimation map σ̂(g) as input and estimates a residual map R̂(g). Finally, the noisy image g and the residual map R̂(g) are added element by element to obtain the noise-free estimated image f̂ = g + R̂(g). This network adopts a 16-layer U-Net architecture that expands the receptive field with symmetric skip connections and transposed convolutions and exploits multi-scale information. All filters in D-Net are of size 3 × 3, and a ReLU activation function is placed after each Conv layer except the last.
c. Loss Function
In training, the invention adopts a hybrid loss function to measure image similarity; through this loss function, noise can be effectively estimated and removed.

The hybrid loss function comprises three sub-loss functions that constrain the noise level estimation map σ̂(g) and the noise-free estimated image f̂. Studies of non-blind denoising networks show that when the noise level is under-estimated (σ̂ < σ) the denoising result is poor, whereas when it is over-estimated (σ̂ > σ) the result is still satisfactory. Therefore, to estimate the noise level reliably, an asymmetric MSE loss L_asymm and a total variation regularization loss L_TV constrain the estimated noise level map σ̂(g):

L_asymm = Σ_{i∈Ω} |α − β_i| · (σ̂_i − σ_i)²

L_TV = ‖∇_h σ̂(g)‖₂² + ‖∇_v σ̂(g)‖₂²

where Ω is the image domain, σ is the true noise level map, ∇_h and ∇_v are the horizontal and vertical gradient operators, and α and β are penalty factors; α is set empirically to 0.3, and β_i = 1 when σ̂_i − σ_i < 0 and β_i = 0 otherwise. The noise-free estimated image f̂ is constrained with a structural similarity loss function:
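The two noise-level losses can be written directly in NumPy (a sketch of the formulas above; the toy values below only demonstrate the asymmetry toward under-estimation):

```python
import numpy as np

def asymmetric_mse(sigma_hat, sigma, alpha=0.3):
    """Penalize under-estimation (sigma_hat < sigma) more heavily."""
    beta = (sigma_hat - sigma < 0).astype(float)  # 1 where under-estimated
    return np.sum(np.abs(alpha - beta) * (sigma_hat - sigma) ** 2)

def tv_loss(sigma_hat):
    """Total variation: squared horizontal and vertical gradients."""
    dh = np.diff(sigma_hat, axis=1)
    dv = np.diff(sigma_hat, axis=0)
    return np.sum(dh ** 2) + np.sum(dv ** 2)

sigma = np.array([[0.2]])
under = asymmetric_mse(np.array([[0.1]]), sigma)  # |0.3 - 1| * 0.01 = 0.007
over = asymmetric_mse(np.array([[0.3]]), sigma)   # |0.3 - 0| * 0.01 = 0.003
assert under > over   # same absolute error, larger penalty when under-estimated
```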
L_SSIM = 1 − SSIM(f̂, f)

where f is the true noise-free image and SSIM can be expressed as:

SSIM(f, f̂) = (2 μ_f μ_f̂ + c₁)(2 σ_{f f̂} + c₂) / ((μ_f² + μ_f̂² + c₁)(σ_f² + σ_f̂² + c₂))

where μ_f and μ_f̂ are the image means, σ_f² and σ_f̂² are the image variances, σ_{f f̂} is the covariance between the two images, and c₁ and c₂ are small stabilizing constants. The loss function of the entire network can be expressed as:

L = L_SSIM + λ_asymm · L_asymm + λ_TV · L_TV
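A single-window SSIM over global image statistics can sketch this loss (a simplification: standard SSIM is computed over local windows, and the constants c1, c2 here are conventional assumptions, not patent values):

```python
import numpy as np

def ssim(f, f_hat, c1=1e-4, c2=9e-4):
    """Simplified SSIM from global means, variances, and covariance."""
    mu_f, mu_g = f.mean(), f_hat.mean()
    cov = ((f - mu_f) * (f_hat - mu_g)).mean()
    return ((2 * mu_f * mu_g + c1) * (2 * cov + c2)
            / ((mu_f ** 2 + mu_g ** 2 + c1) * (f.var() + f_hat.var() + c2)))

def ssim_loss(f_hat, f):
    """L_SSIM = 1 - SSIM: zero for a perfect reconstruction."""
    return 1.0 - ssim(f, f_hat)

rng = np.random.default_rng(1)
f = rng.random((16, 16))
assert abs(ssim_loss(f, f)) < 1e-9               # identical images -> ~0 loss
assert ssim_loss(1.0 - f, f) > ssim_loss(f, f)   # mismatched images cost more
```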
where λ_asymm is the penalty coefficient of the asymmetric MSE loss function and λ_TV is the penalty coefficient of the total variation regularization loss function; empirically, λ_asymm = 0.5 and λ_TV = 0.005.
Fifthly, the trained parameters are obtained, and the noise removal effect is tested with reflection components containing noise.
Sixthly, contrast correction is performed on the optimized luminance component, and the corrected luminance component is multiplied element by element with the optimized reflection component to obtain the enhanced image.
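The final step can be sketched as below; the gamma value 2.2 is an illustrative assumption, as the embodiment does not fix a specific value:

```python
import numpy as np

def enhance(L_opt, R_denoised, gamma=2.2):
    """Gamma-correct the luminance, then recombine element by element."""
    L_corr = np.power(L_opt, 1.0 / gamma)   # exponent < 1 brightens shadows
    return np.clip(L_corr[..., None] * R_denoised, 0.0, 1.0)

L_opt = np.array([[0.25]])     # dark optimized luminance
R = np.ones((1, 1, 3))         # neutral denoised reflection component
out = enhance(L_opt, R)
assert out[0, 0, 0] > 0.25     # the dark region is brightened
```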
It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.
Claims (9)
1. A maritime video image enhancement method under a low-illumination environment is characterized by comprising the following steps:
1) acquiring a maritime image data set in a low-illumination environment;
2) obtaining the initial luminance component of the image data set through Max-RGB, and obtaining the optimized luminance component through guided filtering;
3) dividing the input low-illumination image by the guided-filtering-optimized luminance component to obtain the reflection component;
4) eliminating the noise in the reflection component with a convolutional blind denoising network while preserving the detail and color information of the image;
5) performing contrast correction on the optimized luminance component, and multiplying the corrected luminance component element by element with the optimized reflection component to obtain the enhanced image.
2. The method for enhancing maritime video images in a low-illumination environment according to claim 1, wherein the initial luminance component in step 2) is obtained through Max-RGB using the following formula:

L̂(x, y) = max_{c ∈ {R,G,B}} I_c(x, y)
3. The method for enhancing maritime video images in a low-illumination environment according to claim 1, wherein the optimized luminance component in step 2) is obtained by guided filtering using the following formula:

L̃(x_i, y_j) = Σ_{(x_k, y_l) ∈ ω} W_{kl}(M) · L̂(x_k, y_l)
4. The method for enhancing maritime video images in a low-illumination environment according to claim 1, wherein the convolutional blind denoising network in step 4) is a residual convolutional neural network.
5. The method for enhancing maritime video images under low-illumination environment according to claim 1, wherein the convolution blind denoising network in the step 4) includes a noise estimation network and a noise removal network, and the structure is as follows:
a) noise estimation network
Since the noise level estimation map σ̂(g) has the same size as the noisy input image g, the noise estimation sub-network adopts a 7-layer fully convolutional network to estimate the noise level of the input image and obtain the noise level estimation map σ̂(g); this network contains only convolutional layers and ReLU activation functions;
b) noise-removing network
The noise removal network adopts residual learning: it takes the noisy image g and the noise level estimation map σ̂(g) as input and estimates a residual map R̂(g); finally, the noisy image g and the residual map R̂(g) are added element by element to obtain the noise-free estimated image f̂ = g + R̂(g); the noise removal network adopts a 16-layer U-Net structure.
6. The method for enhancing maritime video images in a low-illumination environment according to claim 5, wherein the feature channels of each convolutional layer of the noise estimation network in step 4) are set to 32, all filters are of size 3 × 3, and a ReLU activation function is placed after each convolutional layer.
7. The method for enhancing maritime video images in a low-illumination environment according to claim 5, wherein the noise removal network in step 4) expands the receptive field with symmetric skip connections and transposed convolutions and exploits multi-scale information; all filters in D-Net are of size 3 × 3, and a ReLU activation function is placed after each Conv layer except the last.
8. The method as claimed in claim 5, wherein the convolutional blind denoising network in step 4) adopts a hybrid loss function containing three sub-loss functions to constrain the noise level estimation map σ̂(g) and the noise-free estimated image f̂, comprising: an asymmetric MSE loss L_asymm and a total variation regularization loss L_TV to constrain the estimated noise level map σ̂(g), expressed mathematically as:

L_asymm = Σ_{i∈Ω} |α − β_i| · (σ̂_i − σ_i)²

L_TV = ‖∇_h σ̂(g)‖₂² + ‖∇_v σ̂(g)‖₂²

where Ω is the image domain, σ is the true noise level map, ∇_h and ∇_v are the horizontal and vertical gradient operators, and α and β are penalty coefficients of the loss function;
the noise-free estimated image f̂ is constrained with a structural similarity loss function L_SSIM:

L_SSIM = 1 − SSIM(f̂, f)

where f is the true noise-free image and SSIM can be expressed as:

SSIM(f, f̂) = (2 μ_f μ_f̂ + c₁)(2 σ_{f f̂} + c₂) / ((μ_f² + μ_f̂² + c₁)(σ_f² + σ_f̂² + c₂))

where μ_f and μ_f̂ are the image means, σ_f² and σ_f̂² are the image variances, and σ_{f f̂} is the covariance between the two images; the loss function of the entire network is expressed as:

L = L_SSIM + λ_asymm · L_asymm + λ_TV · L_TV
where λ_asymm is the penalty coefficient of the asymmetric MSE loss function and λ_TV is the penalty coefficient of the total variation regularization loss function.
9. The method for enhancing maritime video images in a low-illumination environment according to claim 1, wherein the contrast correction of the optimized luminance component in step 5) is performed using a Gamma transformation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010231309.7A CN111489303A (en) | 2020-03-27 | 2020-03-27 | Maritime affairs image enhancement method under low-illumination environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010231309.7A CN111489303A (en) | 2020-03-27 | 2020-03-27 | Maritime affairs image enhancement method under low-illumination environment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111489303A true CN111489303A (en) | 2020-08-04 |
Family
ID=71797562
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010231309.7A Pending CN111489303A (en) | 2020-03-27 | 2020-03-27 | Maritime affairs image enhancement method under low-illumination environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111489303A (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112200750A (en) * | 2020-10-21 | 2021-01-08 | 华中科技大学 | Ultrasonic image denoising model establishing method and ultrasonic image denoising method |
CN112215767A (en) * | 2020-09-28 | 2021-01-12 | 电子科技大学 | Anti-blocking effect image video enhancement method |
CN112308803A (en) * | 2020-11-25 | 2021-02-02 | 哈尔滨工业大学 | Self-supervision low-illumination image enhancement and denoising method based on deep learning |
CN112580672A (en) * | 2020-12-28 | 2021-03-30 | 安徽创世科技股份有限公司 | License plate recognition preprocessing method and device suitable for dark environment and storage medium |
CN112614063A (en) * | 2020-12-18 | 2021-04-06 | 武汉科技大学 | Image enhancement and noise self-adaptive removal method for low-illumination environment in building |
CN113344804A (en) * | 2021-05-11 | 2021-09-03 | 湖北工业大学 | Training method of low-light image enhancement model and low-light image enhancement method |
CN113450366A (en) * | 2021-07-16 | 2021-09-28 | 桂林电子科技大学 | AdaptGAN-based low-illumination semantic segmentation method |
CN113643202A (en) * | 2021-07-29 | 2021-11-12 | 西安理工大学 | Low-light-level image enhancement method based on noise attention map guidance |
CN113808036A (en) * | 2021-08-31 | 2021-12-17 | 西安理工大学 | Low-illumination image enhancement and denoising method based on Retinex model |
WO2022217476A1 (en) * | 2021-04-14 | 2022-10-20 | 深圳市大疆创新科技有限公司 | Image processing method and apparatus, electronic device, and storage medium |
CN115797225A (en) * | 2023-01-06 | 2023-03-14 | 山东环宇地理信息工程有限公司 | Unmanned ship acquisition image enhancement method for underwater topography measurement |
CN116128768A (en) * | 2023-04-17 | 2023-05-16 | 中国石油大学(华东) | Unsupervised image low-illumination enhancement method with denoising module |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110163818A (en) * | 2019-04-28 | 2019-08-23 | 武汉理工大学 | A kind of low illumination level video image enhancement for maritime affairs unmanned plane |
-
2020
- 2020-03-27 CN CN202010231309.7A patent/CN111489303A/en active Pending
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110163818A (en) * | 2019-04-28 | 2019-08-23 | 武汉理工大学 | A kind of low illumination level video image enhancement for maritime affairs unmanned plane |
Non-Patent Citations (2)
Title |
---|
S. Guo et al., "Toward convolutional blind denoising of real photographs", 《IN PROC. IEEE/CVF CONF. COMPUT. VIS. PATTERN RECOGNIT.》 *
X. Guo et al., "LIME: Low-light image enhancement via illumination map estimation", 《IEEE TRANS. IMAGE PROCESS》 *
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112215767A (en) * | 2020-09-28 | 2021-01-12 | 电子科技大学 | Anti-blocking effect image video enhancement method |
CN112200750A (en) * | 2020-10-21 | 2021-01-08 | 华中科技大学 | Ultrasonic image denoising model establishing method and ultrasonic image denoising method |
CN112200750B (en) * | 2020-10-21 | 2022-08-05 | 华中科技大学 | Ultrasonic image denoising model establishing method and ultrasonic image denoising method |
CN112308803A (en) * | 2020-11-25 | 2021-02-02 | 哈尔滨工业大学 | Self-supervision low-illumination image enhancement and denoising method based on deep learning |
CN112614063B (en) * | 2020-12-18 | 2022-07-01 | 武汉科技大学 | Image enhancement and noise self-adaptive removal method for low-illumination environment in building |
CN112614063A (en) * | 2020-12-18 | 2021-04-06 | 武汉科技大学 | Image enhancement and noise self-adaptive removal method for low-illumination environment in building |
CN112580672A (en) * | 2020-12-28 | 2021-03-30 | 安徽创世科技股份有限公司 | License plate recognition preprocessing method and device suitable for dark environment and storage medium |
WO2022217476A1 (en) * | 2021-04-14 | 2022-10-20 | 深圳市大疆创新科技有限公司 | Image processing method and apparatus, electronic device, and storage medium |
CN113344804A (en) * | 2021-05-11 | 2021-09-03 | 湖北工业大学 | Training method of low-light image enhancement model and low-light image enhancement method |
CN113450366A (en) * | 2021-07-16 | 2021-09-28 | 桂林电子科技大学 | AdaptGAN-based low-illumination semantic segmentation method |
CN113450366B (en) * | 2021-07-16 | 2022-08-30 | 桂林电子科技大学 | AdaptGAN-based low-illumination semantic segmentation method |
CN113643202A (en) * | 2021-07-29 | 2021-11-12 | 西安理工大学 | Low-light-level image enhancement method based on noise attention map guidance |
CN113808036A (en) * | 2021-08-31 | 2021-12-17 | 西安理工大学 | Low-illumination image enhancement and denoising method based on Retinex model |
CN113808036B (en) * | 2021-08-31 | 2023-02-24 | 西安理工大学 | Low-illumination image enhancement and denoising method based on Retinex model |
CN115797225A (en) * | 2023-01-06 | 2023-03-14 | 山东环宇地理信息工程有限公司 | Unmanned ship acquisition image enhancement method for underwater topography measurement |
CN116128768A (en) * | 2023-04-17 | 2023-05-16 | 中国石油大学(华东) | Unsupervised image low-illumination enhancement method with denoising module |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111489303A (en) | Maritime affairs image enhancement method under low-illumination environment | |
Wang et al. | An experimental-based review of image enhancement and image restoration methods for underwater imaging | |
Li et al. | Rain streak removal using layer priors | |
CN108564597B (en) | Video foreground object extraction method fusing Gaussian mixture model and H-S optical flow method | |
Zhou et al. | Multicolor light attenuation modeling for underwater image restoration | |
CN113284061B (en) | Underwater image enhancement method based on gradient network | |
CN116596792B (en) | Inland river foggy scene recovery method, system and equipment for intelligent ship | |
CN115409872B (en) | Image optimization method for underwater camera | |
CN111145102A (en) | Synthetic aperture radar image denoising method based on convolutional neural network | |
CN112070688A (en) | Single image defogging method for generating countermeasure network based on context guidance | |
CN114219722A (en) | Low-illumination image enhancement method by utilizing time-frequency domain hierarchical processing | |
Yu et al. | Image and video dehazing using view-based cluster segmentation | |
CN116188325A (en) | Image denoising method based on deep learning and image color space characteristics | |
CN114693548B (en) | Dark channel defogging method based on bright area detection | |
Huang et al. | Underwater image enhancement based on color restoration and dual image wavelet fusion | |
CN115249211A (en) | Image restoration method based on underwater non-uniform incident light model | |
CN117994167A (en) | Diffusion model defogging method integrating parallel multi-convolution attention | |
CN114119383A (en) | Underwater image restoration method based on multi-feature fusion | |
CN114549342B (en) | Restoration method for underwater image | |
CN115760640A (en) | Coal mine low-illumination image enhancement method based on noise-containing Retinex model | |
CN116612032A (en) | Sonar image denoising method and device based on self-adaptive wiener filtering and 2D-VMD | |
CN111489302A (en) | Maritime image enhancement method in fog environment | |
Du et al. | Recursive image dehazing via perceptually optimized generative adversarial network (POGAN) | |
CN113269763B (en) | Underwater image definition recovery method based on depth map restoration and brightness estimation | |
CN115619674A (en) | Low-illumination video enhancement method, system and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20200804 |