CN115760630A - Low-illumination image enhancement method - Google Patents
- Publication number
- CN115760630A (application number CN202211495205.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- component
- illumination
- network
- decomposition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Processing (AREA)
Abstract
The invention discloses a low-illumination image enhancement method comprising the following steps: (a1) decompose the original image into a reflection component, an illumination component and a characteristic component; (a2) perform brightness enhancement on the illumination component obtained by decomposition in step (a1); (a3) apply weighted fusion to the illumination component enhanced in step (a2) and the reflection component and characteristic component obtained by decomposition in step (a1) to obtain an enhanced image. Using the principle of image decomposition, the invention splits the image into a reflection component, an illumination component and a characteristic component and adds noise-reduction processing to the decomposition, thereby reducing the influence of noise on the image to a certain extent. At the same time, the image characteristic component is extracted independently, which avoids the detail loss caused by noise reduction and, to a certain extent, overcomes shortcomings of traditional methods such as weak brightness improvement, high image noise and loss of detail.
Description
Technical Field
The invention belongs to the field of image processing, and particularly relates to a low-illumination image enhancement method.
Background
In the field of image processing, low-illumination image enhancement is an important branch that can be widely applied to high-level vision tasks such as military, surveillance and security applications. However, because of the limitations of camera equipment and environmental factors, in particular the severe degradation of nighttime images and video, much image information is lost in high-level vision processing tasks. Although prolonging the exposure time improves image brightness to some extent, it is impractical in most scenarios. The purpose of low-light image enhancement is to highlight the useful features of the image while attenuating or eliminating noise and improving contrast, producing a result better suited to human vision.
Many researchers have invested in research on low-illumination image enhancement, and many effective methods have been proposed. Traditional low-illumination image enhancement methods include gray-level transformation, histogram equalization (HE), Retinex models, frequency-domain processing, image fusion models, defogging models and so on.
Deep learning has continued to grow in popularity in recent years, and its strong feature-representation and nonlinear-mapping capabilities have found many highly successful applications in the image field. At present, the field of low-illumination image enhancement is dominated by deep-learning schemes, with many innovations in network models and model construction. The basic approach is to use a CNN to extract image feature information and then process the component information. For example, Zhu et al. proposed RRDNet, a three-branch fully convolutional neural network that decomposes an input image into three components (illumination, reflectance and noise) and, by iterating a loss function, effectively estimates the noise and restores the illumination, explicitly predicting the noise in order to denoise. Li et al. proposed LightenNet, a trainable convolutional neural network (CNN) for weak-illumination image enhancement, which takes a weakly illuminated image as input and outputs its illumination map, from which an enhanced image is obtained based on the Retinex model. Guo et al. presented a novel approach, zero-reference deep curve estimation (Zero-DCE), which formulates light enhancement as the task of image-specific curve estimation with a deep network. Lim et al. proposed DSLR, a deep Laplacian-based restorer for low-illumination image enhancement, which introduces a multi-scale Laplacian residual block that makes the training phase more efficient through rich concatenation of higher-order residuals defined in a multi-scale structure embedded in the feature space. Kin Gwn Lore et al. proposed LLNet, a deep autoencoder-based approach that identifies signal features in low-light images and adaptively brightens them without over-amplifying or saturating the brighter portions of images with high dynamic range.
Although the above methods can address the brightness problem of low-illumination images, some problems remain to be solved. For example, LightenNet relies on paired low-light/normal-brightness image datasets, which are not only limited in number but also prone to overfitting. RRDNet avoids the dependence on a dataset, but its brightness improvement is not obvious and still needs to be strengthened. In general, most existing methods exhibit color distortion, detail loss and similar phenomena that degrade the appearance of the enhancement result, and uneven exposure remains the biggest problem.
Disclosure of Invention
The invention aims to address the defects of the prior art by providing a low-illumination image enhancement method that enhances the original low-illumination color image, highlights the useful features of the image, improves contrast, avoids color distortion and enriches detail, so that the enhanced image better matches the visual perception of the human eye.
In order to achieve the technical purpose, the technical scheme adopted by the invention is as follows:
a low-illumination image enhancement method, comprising the steps of:
(a1) Decomposing the original image (namely the original low-illumination color image) to obtain a reflection component, an illumination component and a characteristic component;
(a2) Performing brightness enhancement on the illumination component obtained by decomposition in the step (a 1);
(a3) Carrying out weighted fusion of the illumination component enhanced in step (a2) with the reflection component and the characteristic component obtained by decomposition in step (a1) to obtain an enhanced image.
As a further improved technical scheme of the invention, the method specifically comprises the following steps:
(b1) Establishing a CNN network, wherein the CNN network comprises a decomposition network and an enhancement network;
(b2) Decomposing the original image through a decomposition network to obtain a reflection component, an illumination component and a characteristic component;
(b3) Performing brightness enhancement on the illumination component obtained by decomposition in the step (b 2) through an enhancement network;
(b4) Carrying out weighted fusion of the illumination component enhanced in step (b3) with the reflection component and the characteristic component obtained by decomposition in step (b2) to obtain an enhanced image.
As a further improved technical solution of the present invention, in the step (b 2), the decomposition network includes three branch networks, which are a first branch network, a second branch network and a third branch network; the first branch network and the second branch network have the same structure and sequentially comprise a convolution layer, a ReLU activation function, a maximum pooling layer, a convolution layer, a ReLU activation function, an upsampling layer, a convolution layer, a ReLU activation function, a convolution layer and a Sigmoid function; the third branch network comprises a convolution layer, a ReLU activation function, a convolution layer and a Tanh function in sequence;
the step (b 2) is specifically as follows:
inputting an original image into a first branch network to obtain an image with the channel number of 3 as a reflection component; inputting an original image into a second branch network to obtain an image with the channel number of 1 as an illumination component; and inputting the original image into a third branch network to obtain an image with the channel number of 3 as a characteristic component.
As a further improved technical scheme of the invention, in the step (b 3), the enhancement network sequentially comprises a convolution layer, a ReLU activation function, a reverse convolution layer, a ReLU activation function, a convolution layer and a Sigmoid function;
the step (b 3) is specifically as follows:
inputting the illumination component obtained by decomposition in step (b2) into the enhancement network; after passing sequentially through a convolution layer, a ReLU activation function, a reverse convolution layer, a ReLU activation function, a convolution layer, a ReLU activation function and a convolution layer, the result is spliced with the illumination component obtained by decomposition in step (b2) and input into a Sigmoid function to obtain the enhanced illumination component.
As a further improved technical solution of the present invention, the loss function of the CNN network is:
L = L_recon + λ₁·L_s + λ₂·L_t (1);
wherein L_recon is the reconstruction loss function, L_s the smoothness loss function and L_t the feature estimation loss function; λ₁ and λ₂ are weight factors;
the reconstruction loss is
L_recon = ‖(R + P)·I − S‖₁ + ‖I − Ŝ‖₁ (2);
wherein S represents the original image, Ŝ = max_c S_c is the maximum value over the color channels of S, I is the illumination component obtained by decomposition and R is the reflection component obtained by decomposition;
the smoothness loss is
L_s = (1 / (H·W·C)) Σ ( |Δ_x I| + |Δ_y I| + |Δ_x R| + |Δ_y R| ) (3);
wherein H denotes the height of the image, W the width of the image, C the channels of the image, Δ_x the horizontal gradient operation and Δ_y the vertical gradient operation;
the feature estimation loss is
L_t = ‖β·α_x P‖_F + ‖β·α_y P‖_F (4);
wherein ‖·‖_F denotes the Frobenius norm of a matrix, P represents the characteristic component obtained by decomposition, α_x represents the horizontal gradient operation, α_y the vertical gradient operation and β the illumination guidance weight, given by:
β = normalize[ I·(α_x R)²·(α_y R)² ]⁻¹ (5);
wherein normalize denotes min-max normalization.
The invention has the beneficial effects that:
the invention decomposes the image into a reflection component, an illumination component and a characteristic component by utilizing the principle of image decomposition, and adds noise reduction treatment in the first branch network and the second branch network, thereby reducing the influence of noise on the image effect to a certain extent. Meanwhile, the image characteristic components are independently extracted through the third branch network, so that the defect of detail loss caused by noise reduction is avoided, and the defects of unobvious image brightness improvement, high image noise, detail loss and the like in the traditional method are overcome to a certain extent; the loss of color distortion details, the obvious reduction of image color contrast, artifact phenomena and the like are effectively avoided, the overall brightness of the enhanced image effect is improved, the detail features of the image are effectively recovered, and the phenomena of overexposure, underexposure or uneven exposure are avoided.
The zero-shot low-illumination image enhancement method provided by the invention greatly improves the illumination component of the image, avoids the problem of image color distortion, and can effectively recover the detail features of the image. The decomposition principle is used to process the reflection component and the illumination component of the image separately, so that the color of the image is not affected while the illumination component is enhanced. Through the independent decomposition of texture features, the loss of image detail is avoided when the reflection and illumination components are denoised, improving the overall visual quality of the image.
Drawings
Fig. 1 is a schematic diagram of an algorithm in an embodiment of the invention.
Fig. 2 is a schematic diagram of a decomposed network structure in an embodiment of the present invention.
Fig. 3 is a schematic diagram of an enhanced network structure in an embodiment of the present invention.
FIG. 4 is a visual comparison of the results of a first set of experiments in an embodiment of the present invention.
FIG. 5 is a visual comparison of the results of a second set of experiments in an example of the invention.
FIG. 6 is a visual comparison of the results of a third set of experiments in an example of the present invention.
FIG. 7 is a graph showing a visual comparison of the results of the fourth set of experiments in the example of the present invention.
Detailed Description
The following further description of embodiments of the invention is made with reference to the accompanying drawings:
the embodiment provides a low-illumination image enhancement method, which comprises the following steps:
(b1) Establishing a CNN network, wherein the CNN network comprises a decomposition network and an enhancement network.
As shown in fig. 1, the CNN network designed in this embodiment is a CNN low-illumination image restoration network based on zero sample learning, and includes a decomposition network and an enhancement network.
(b2) Decomposing the original image (namely the original low-illumination color image) through the decomposition network to obtain a reflection component, an illumination component and a characteristic component.
(b3) Performing brightness enhancement on the illumination component obtained by decomposition in step (b2) through the enhancement network, so as to improve the overall brightness of the image.
(b4) Carrying out weighted fusion of the illumination component enhanced in step (b3) with the reflection component and the characteristic component obtained by decomposition in step (b2) to obtain an enhanced image.
As shown in fig. 2, the decomposition network is divided into three branch networks and can accurately predict the reflection component, illumination component and feature component of the input image. Its distinguishing feature is that a pooling layer is added in the decomposition process, so that the illumination map and the reflection map are denoised while being decomposed. Sigmoid activation functions are used for the illumination map and the reflection map to keep the output within (0, 1). A Tanh activation function, whose output lies in (-1, 1), is used for the feature map, making model convergence faster. When the set number of iterations is reached, the image is decomposed.
Specifically, in step (b2), the decomposition network includes three branch networks, namely a first branch network, a second branch network and a third branch network; the first branch network and the second branch network have the same structure and sequentially comprise a 3 × 3 convolutional layer, a ReLU activation function, a maximum pooling layer, a 3 × 3 convolutional layer, a ReLU activation function, an upsampling layer, a 3 × 3 convolutional layer, a ReLU activation function, a 3 × 3 convolutional layer and a Sigmoid function; the third branch network comprises a 3 × 3 convolutional layer, a ReLU activation function, a 3 × 3 convolutional layer and a Tanh function in sequence;
the step (b 2) is specifically as follows:
inputting the original image into the first branch network: features are extracted sequentially through a 3 × 3 convolutional layer, a ReLU activation function, a maximum pooling layer, a 3 × 3 convolutional layer and a ReLU activation function; then an upsampling layer, a 3 × 3 convolutional layer, a ReLU activation function, a 3 × 3 convolutional layer and a Sigmoid function produce an image with 3 channels as the reflection component. Similarly, inputting the original image into the second branch network yields an image with 1 channel as the illumination component. Inputting the original image into the third branch network, mapping sequentially through a 3 × 3 convolutional layer, a ReLU activation function, a 3 × 3 convolutional layer and a Tanh function, yields an image with 3 channels as the characteristic component.
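The three-branch decomposition network of step (b2) can be sketched in PyTorch as follows. The hidden channel width (16) is an illustrative assumption, since the patent does not specify layer widths:

```python
import torch
import torch.nn as nn

class DecompositionNet(nn.Module):
    """Three-branch decomposition sketch for step (b2).

    Branches 1/2 (reflection, illumination): conv-ReLU, max-pool, conv-ReLU,
    upsample, conv-ReLU, conv, Sigmoid. Branch 3 (feature): conv-ReLU, conv, Tanh.
    The hidden width (16) is assumed, not given in the patent.
    """

    def __init__(self, width=16):
        super().__init__()

        def pooled_branch(out_channels):
            return nn.Sequential(
                nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),                       # pooling performs the denoising
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Upsample(scale_factor=2, mode="nearest"),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, out_channels, 3, padding=1),
                nn.Sigmoid(),                          # output constrained to (0, 1)
            )

        self.reflection = pooled_branch(3)    # first branch: 3-channel reflection R
        self.illumination = pooled_branch(1)  # second branch: 1-channel illumination I
        self.feature = nn.Sequential(         # third branch: 3-channel feature P
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, 3, padding=1),
            nn.Tanh(),                         # output in (-1, 1)
        )

    def forward(self, s):
        return self.reflection(s), self.illumination(s), self.feature(s)
```

The Sigmoid and Tanh output layers match the activation choices stated for the illumination/reflection and feature maps respectively.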
As shown in fig. 3, the input to the enhancement network is the illumination component output by the decomposition network. The enhancement network consists of eight convolutional layers, of which six are ordinary convolutional layers and two are reverse (transposed) convolutional layers, which can effectively acquire the illumination information of the illumination map. To compensate for effective information of the illumination map that may be lost in the process, the input illumination component is finally spliced to the last layer, and the result is output as the enhanced illumination component.
Specifically, in step (b 3), the enhancement network sequentially includes a convolutional layer, a ReLU activation function, a reverse convolutional layer, a ReLU activation function, a convolutional layer, a ReLU activation function, a convolutional layer, and a Sigmoid function;
the step (b 3) is specifically as follows:
inputting the illumination component obtained by decomposition in step (b2) into the enhancement network; after passing sequentially through a convolution layer, a ReLU activation function, a reverse convolution layer, a ReLU activation function, a convolution layer, a ReLU activation function and a convolution layer, the result is spliced with the input illumination component and passed through a Sigmoid function to obtain the enhanced illumination component. The enhancement network uses 3 × 3 convolution kernels with ReLU activation functions across seven layers, which is sufficient to acquire the illumination information of the illumination component; finally, to avoid the loss of effective information, the input is spliced to the last layer and a Sigmoid function constrains the values to [0, 1].
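A PyTorch sketch of the enhancement network of step (b3). The channel width (16) is an illustrative assumption, and all layers are kept size-preserving for simplicity, since the patent does not give strides:

```python
import torch
import torch.nn as nn

class EnhancementNet(nn.Module):
    """Enhancement network sketch for step (b3): six conv + two transposed-conv
    layers with ReLU, then the input illumination map is concatenated back in
    ("spliced to the last layer") before a final conv + Sigmoid.
    Width 16 and size-preserving layers are assumptions."""

    def __init__(self, width=16):
        super().__init__()
        self.stages = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )
        # the last layer sees the stage features plus the original illumination map
        self.last = nn.Conv2d(width + 1, 1, 3, padding=1)

    def forward(self, illumination):
        x = self.stages(illumination)
        x = torch.cat([x, illumination], dim=1)  # splice the input back in
        return torch.sigmoid(self.last(x))       # constrain output to [0, 1]
```

The concatenation before the final convolution mirrors the skip connection the text describes for recovering illumination information that the intermediate layers may lose.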
The relationship between the original image, i.e., the original low-illumination image S, the reflection component R, the illumination component I, and the feature component P, is:
S = (R + P)·I (1);
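As a quick sanity check of relation (1), here is a toy per-pixel computation in Python (the scalar pixel values are purely illustrative):

```python
# S = (R + P) * I at a single pixel: reflection R, feature P, illumination I.
def synthesize(r, p, i):
    return (r + p) * i

# Recombining the decomposed components must reproduce the observed pixel value.
s = synthesize(r=0.5, p=0.1, i=0.4)
assert abs(s - 0.24) < 1e-9

# Enhancing only the illumination (0.4 -> 0.9) brightens the pixel while the
# reflection and feature components, hence colors and details, are unchanged.
enhanced = synthesize(r=0.5, p=0.1, i=0.9)
assert enhanced > s
```

This is exactly why the method enhances I alone in step (b3): brightness changes without touching the color-carrying reflection component.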
further, in order to better update the network weight, the embodiment designs a loss function to evaluate the decomposition of the network, leading to the generation of a more accurate network. The loss function is expressed as:
L = L_recon + λ₁·L_s + λ₂·L_t (2);
wherein L_recon is the reconstruction loss function, L_s the smoothness loss function and L_t the feature estimation loss function; λ₁ and λ₂ are weighting factors.
L_recon is the reconstruction loss, acting as a constraint on reflection and illumination so that the decomposed components reproduce the input:
L_recon = ‖(R + P)·I − S‖₁ + ‖I − Ŝ‖₁ (3);
wherein S represents the original image, Ŝ = max_c S_c is the maximum value over the color channels of S, I is the decomposed illumination component and R the decomposed reflection component; the L₁ norm is used to guide all the loss functions of this embodiment.
L_s is the smoothness loss: the image should be as smooth as possible on the reflectance and illumination maps so that amplified noise does not affect the enhancement effect:
L_s = (1 / (H·W·C)) Σ ( |Δ_x I| + |Δ_y I| + |Δ_x R| + |Δ_y R| ) (4);
where H denotes the height of the image, W the width of the image, C the channels of the image, Δ_x the horizontal gradient operation and Δ_y the vertical gradient operation.
L_t is the feature estimation loss; image feature extraction is guided by a weighting based on the estimated illumination map:
L_t = ‖β·α_x P‖_F + ‖β·α_y P‖_F (5);
wherein ‖·‖_F denotes the Frobenius norm of a matrix, P represents the characteristic component obtained by decomposition, α_x represents the horizontal gradient operation, α_y the vertical gradient operation and β the illumination guidance weight, given by:
β = normalize[ I·(α_x R)²·(α_y R)² ]⁻¹ (6);
where normalize denotes min-max normalization.
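A minimal numpy sketch of the total loss of equation (2). Because the individual term definitions are only partially legible in the source, the exact forms of the reconstruction, smoothness and feature terms below (including the max-channel illumination target and the min-max normalization of β) are assumptions, not the patent's verbatim formulas:

```python
import numpy as np

def grad_x(img):  # horizontal finite difference
    return img[:, 1:, ...] - img[:, :-1, ...]

def grad_y(img):  # vertical finite difference
    return img[1:, :, ...] - img[:-1, :, ...]

def total_loss(S, R, I, P, lam1=0.5, lam2=5000.0):
    """Sketch of L = L_recon + lam1*L_s + lam2*L_t for HxWx3 inputs S, R, P and
    HxWx1 illumination I. Term definitions are reconstructed assumptions."""
    eps = 1e-6
    # reconstruction: (R+P)*I should reproduce S; I should track the max channel
    recon = np.abs((R + P) * I - S).mean() + np.abs(I[..., 0] - S.max(axis=2)).mean()
    # smoothness: small gradients on the reflection and illumination maps
    smooth = (np.abs(grad_x(R)).mean() + np.abs(grad_y(R)).mean()
              + np.abs(grad_x(I)).mean() + np.abs(grad_y(I)).mean())
    # feature estimation: Frobenius norm of illumination-weighted feature gradients
    gx = np.pad(grad_x(R), ((0, 0), (0, 1), (0, 0)))
    gy = np.pad(grad_y(R), ((0, 1), (0, 0), (0, 0)))
    beta = 1.0 / (I * gx ** 2 * gy ** 2 + eps)
    beta = (beta - beta.min()) / (beta.max() - beta.min() + eps)  # min-max normalize
    feat = (np.linalg.norm(beta * np.pad(grad_x(P), ((0, 0), (0, 1), (0, 0))))
            + np.linalg.norm(beta * np.pad(grad_y(P), ((0, 1), (0, 0), (0, 0)))))
    return recon + lam1 * smooth + lam2 * feat

# illustrative random inputs
rng = np.random.default_rng(0)
S = rng.random((8, 8, 3)); R = rng.random((8, 8, 3))
I = rng.random((8, 8, 1)); P = 0.1 * rng.random((8, 8, 3))
loss_value = total_loss(S, R, I, P)
```

The defaults lam1=0.5 and lam2=5000 follow the values stated in the experimental section.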
When the set number of iterations of the loss-driven computation is reached, the final CNN network, i.e., the final decomposition network and enhancement network, is obtained; steps (b1)-(b4) are then executed to decompose and enhance the original low-illumination color image with the final decomposition and enhancement networks, producing the enhanced image.
The present invention will be described with reference to specific examples.
The experimental environment is as follows:
in all experiments we set up a lambda empirically 1 =0.5,λ 2 =5000. The research use was performed in the same configuration environment, training environment: intel i7-8700 CPU, 32GB RAM and NVIDIA GeForce RTX 2080Ti GPU, pyTorch framework, pyCharm software under 32GB environment, anaconda python3.7 interpreter built a network framework. Tests were performed using paired LOL datasets and unpaired 5K datasets. Where the LOL dataset contains 15 pairs of test sets (each pair comprising an original low light image and a luminance image corresponding to the original low light image), 5000 pictures (i.e., 5000 low light images) of a 5K dataset taken by a single lens reflex camera, and all 5000 pictures were decorated using software dedicated for photo adjustment (Adobe Lightroom).
Evaluation metrics:
The enhanced images were evaluated objectively using four indicators: peak signal-to-noise ratio (PSNR), structural similarity (SSIM), natural image quality evaluator (NIQE) and lightness order error (LOE). Among them, PSNR is mainly used to evaluate the difference between images and is widely applied to the quality evaluation of low-level image processing tasks such as image defogging, image noise reduction and image enhancement.
PSNR is expressed as:
PSNR = 10·log₁₀( MaxValue² / MSE ) (7);
where MSE is the mean square error between the two images and MaxValue is the maximum pixel value of the images (255 for 8-bit images). MSE is expressed as:
MSE = (1 / (H·W)) Σ_{x,y} ( g(x, y) − ĝ(x, y) )² (8);
where H denotes the image height and W the image width; g(x, y) and ĝ(x, y) are, respectively, the normal-brightness image corresponding to the original low-illumination picture and the enhanced image.
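The PSNR and MSE definitions above take only a few lines of pure Python (for 8-bit images MaxValue = 255):

```python
import math

def mse(g, g_hat):
    """Mean squared error between two equal-size grayscale images (lists of rows)."""
    h, w = len(g), len(g[0])
    return sum((g[y][x] - g_hat[y][x]) ** 2 for y in range(h) for x in range(w)) / (h * w)

def psnr(g, g_hat, max_value=255.0):
    """Peak signal-to-noise ratio in dB; higher means the images are closer."""
    m = mse(g, g_hat)
    if m == 0:
        return float('inf')  # identical images
    return 10.0 * math.log10(max_value ** 2 / m)

reference = [[100, 110], [120, 130]]
noisy = [[101, 109], [121, 129]]  # off by 1 everywhere -> MSE = 1
assert mse(reference, noisy) == 1.0
assert abs(psnr(reference, noisy) - 48.13) < 0.01  # 10*log10(255^2) is about 48.13 dB
assert psnr(reference, reference) == float('inf')
```

For the LOL evaluation, g would be the paired normal-brightness image and ĝ the method's output.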
SSIM measures the similarity in luminance, contrast and structure between two images (namely the normal-brightness image corresponding to the original low-illumination image and the enhanced image); its value ranges from 0 to 1, and the closer the value is to 1, the more similar the two images are. Assuming that a and b are the two input images, the formula is:
SSIM=[l(a,b)] α [C(a,b)] β [S(a,b)] γ (9);
where l(a, b) is the luminance comparison, C(a, b) the contrast comparison and S(a, b) the structure comparison; α, β, γ > 0 adjust the relative weights of the three parts. The three terms are:
l(a, b) = (2·μ_a·μ_b + c₁) / (μ_a² + μ_b² + c₁);
C(a, b) = (2·σ_a·σ_b + c₂) / (σ_a² + σ_b² + c₂);
S(a, b) = (σ_ab + c₃) / (σ_a·σ_b + c₃);
wherein μ_a and μ_b represent the means of the two images' pixels, σ_a and σ_b their standard deviations, and σ_ab the covariance of the two images' pixels. The constants c₁, c₂ and c₃ avoid a denominator of 0.
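A minimal global (single-window) SSIM following equation (9). The simplification α = β = γ = 1 with c₃ = c₂/2, and the constants c₁ = (0.01·MaxValue)² and c₂ = (0.03·MaxValue)², are common conventions assumed here, not values stated in the source:

```python
def ssim_global(a, b, max_value=255.0):
    """Single-window SSIM over whole images a, b (flat lists of pixel values).
    Assumes alpha = beta = gamma = 1 and c3 = c2/2 (common simplification)."""
    n = len(a)
    mu_a = sum(a) / n
    mu_b = sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / n
    var_b = sum((x - mu_b) ** 2 for x in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    c1 = (0.01 * max_value) ** 2  # avoids a zero denominator in the luminance term
    c2 = (0.03 * max_value) ** 2  # avoids a zero denominator in the contrast term
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

img = [10, 50, 90, 130, 170, 210]
assert abs(ssim_global(img, img) - 1.0) < 1e-9         # identical images score 1
assert ssim_global(img, [255 - v for v in img]) < 1.0  # dissimilar images score lower
```

Practical SSIM implementations average this computation over local sliding windows; the global version above only illustrates the formula.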
NIQE is based on a set of "quality-aware" features fitted to a multivariate Gaussian (MVG) model. The quality-aware features originate from a simple but highly regularized natural scene statistics (NSS) model. The NIQE index of a given test image is then the distance between the MVG model of NSS features extracted from the test image and the MVG model of quality-aware features extracted from a corpus of natural images:
NIQE = √( (v₁ − v₂)ᵀ · ((m₁ + m₂) / 2)⁻¹ · (v₁ − v₂) ) (10);
wherein v₁ and m₁ represent the mean vector and covariance matrix of the natural-image MVG model, and v₂ and m₂ the mean vector and covariance matrix of the distorted-image MVG model.
LOE is the lightness order error of the image: the change in illumination is evaluated through the change in the relative order of image brightness within a neighborhood, so LOE reflects how naturally the image is preserved. A smaller value indicates that the image keeps a better lightness order and looks more natural:
LOE = (1 / (M·N)) Σ_{i,j} RD(i, j) (11);
where M is the height of the image, N the width of the image, and RD(i, j) the difference in relative brightness order between the original image and the enhanced image at pixel (i, j).
Different data sets were analyzed using different objective evaluation indices. Paired LOL datasets were evaluated quantitatively from PSNR and SSIM. Unpaired 5K datasets were quantitatively evaluated according to NIQE and LOE. The results are shown in fig. 4, 5, 6 and 7.
The comparison methods include HE, Retinex, RRDNet, LightenNet, Zero-DCE, DSLR and LLNet. The results of all comparison methods were obtained from the official code releases.
Fig. 4 and 5 belong to the LOL dataset. In fig. 4, HE significantly improves image brightness by raising contrast, but the image as a whole is severely distorted. Retinex gives the best visual brightness enhancement in fig. 4, but the clothing color is distorted and significant noise is visible in the magnified crop. RRDNet performs well on originals that are already fairly bright but relatively poorly on darker images, where the brightness improvement is not obvious, so a good visual effect is difficult to achieve. LightenNet generally improves brightness among the comparison methods, but the image shows white blocking artifacts. Zero-DCE retains the detail features of the image well, but the brightness improvement is not obvious and the color contrast of the image is clearly reduced. The enhancement produced by DSLR exhibits obvious blocking and artifacts overall, particularly in the wardrobe region. LLNet, as can be seen from the hanger and the magnified crop, suffers serious detail loss, and the overall enhancement result is blurred. Compared with the other methods, the brightness improvement of the proposed method may not be optimal, but it effectively avoids the other problems, such as color distortion, loss of detail, obvious reduction of image color contrast and artifacts; it raises the overall brightness while avoiding over-exposure and under-exposure. Fig. 6 and 7 are from the 5K dataset. In fig. 6 the most visibly bright result is the Retinex method, but the whole image is over-exposed with serious loss of visible detail; the enhancement of HE is washed out overall; and the brightness enhancement of RRDNet and DSLR is not obvious, which greatly affects the visual effect.
Objective evaluation of global image enhancement by the different algorithms:
to verify the performance of each algorithm, the LOL dataset was analyzed using PSNR and SSIM indices as shown in table 1. The 5K data set was analyzed using NIQE and LOE indices as shown in table 2. Two decimal places are reserved as a result. Results are shown bold in the first three.
Table 1:
table 2:
| Method     | NIQE↓ | LOE↓   |
|------------|-------|--------|
| Input      | 28.12 | 0      |
| HE         | 30.76 | 254.87 |
| Retinex    | 23.33 | 291.14 |
| RRDNet     | 18.47 | 251.37 |
| LightenNet | 20.97 | 305.50 |
| Zero-DCE   | 21.50 | 351.37 |
| DSLR       | 18.40 | 272.58 |
| LLNet      | 26.35 | 302.76 |
| Ours       | 18.02 | 249.25 |
As can be seen from Tables 1 and 2, no single method obtains the optimal value on every image quality index. On the LOL data set, the method of the invention achieves the best PSNR, and its SSIM is superior to that of most methods. On the 5K data set, the method of the invention obtains the optimal LOE value and also achieves the best NIQE performance. These results further demonstrate the effectiveness and applicability of the method of the invention.
The working principle and process of the invention are as follows: the low-illumination image is decomposed into a reflection component, an illumination component and a characteristic component, and the illumination component is enhanced separately. To avoid the influence of noise, the image is denoised during decomposition (via a max pooling layer followed by an upsampling layer), and the characteristic component of the image is extracted separately, so that image details are not lost during processing. Finally, the enhanced illumination component, the original reflection component and the characteristic component are fused to generate the enhanced result.
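The denoising step described above (max pooling followed by upsampling) can be sketched in NumPy. The 2×2 pool size and nearest-neighbour upsampling are assumptions for illustration; the patent does not specify them:

```python
import numpy as np

def max_pool_2x2(img: np.ndarray) -> np.ndarray:
    """2x2 max pooling on an (H, W) array; H and W are assumed even."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample_2x(img: np.ndarray) -> np.ndarray:
    """Nearest-neighbour upsampling back to the original resolution."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

# A flat patch with one isolated dark "noise" pixel.
patch = np.full((4, 4), 0.8)
patch[1, 1] = 0.1
smoothed = upsample_2x(max_pool_2x2(patch))
print(smoothed.shape)  # (4, 4): resolution is preserved
print(smoothed[1, 1])  # 0.8: the dark speck is suppressed by the max pool
```

Note that max pooling of this kind suppresses isolated dark outliers at the cost of fine spatial detail, which is why the patent extracts a separate characteristic component to preserve detail.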
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to help the reader understand the principles of the invention, and the invention is not limited to the specifically recited embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from its spirit, and such changes and combinations fall within the scope of the invention.
Claims (5)
1. A low-illumination image enhancement method, comprising the steps of:
(a1) Decomposing the original image to obtain a reflection component, an illumination component and a characteristic component;
(a2) Performing brightness enhancement on the illumination component obtained by decomposition in step (a1);
(a3) Carrying out weighted fusion of the illumination component enhanced in step (a2) with the reflection component and the characteristic component obtained by decomposition in step (a1) to obtain an enhanced image.
2. The low-illuminance image enhancement method according to claim 1, specifically comprising the steps of:
(b1) Establishing a CNN network, wherein the CNN network comprises a decomposition network and an enhancement network;
(b2) Decomposing the original image through a decomposition network to obtain a reflection component, an illumination component and a characteristic component;
(b3) Performing brightness enhancement, through the enhancement network, on the illumination component obtained by decomposition in step (b2);
(b4) Carrying out weighted fusion of the illumination component enhanced in step (b3) with the reflection component and the characteristic component obtained by decomposition in step (b2) to obtain an enhanced image.
3. The low-illuminance image enhancement method according to claim 2, wherein in step (b2) the decomposition network comprises three branch networks, namely a first branch network, a second branch network and a third branch network; the first branch network and the second branch network have the same structure and sequentially comprise a convolution layer, a ReLU activation function, a max pooling layer, a convolution layer, a ReLU activation function, an upsampling layer, a convolution layer, a ReLU activation function, a convolution layer and a Sigmoid function; the third branch network sequentially comprises a convolution layer, a ReLU activation function, a convolution layer and a Tanh function;
step (b2) is specifically as follows:
inputting the original image into the first branch network to obtain an image with 3 channels as the reflection component; inputting the original image into the second branch network to obtain an image with 1 channel as the illumination component; and inputting the original image into the third branch network to obtain an image with 3 channels as the characteristic component.
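The three branch outputs can be illustrated with a shape-level NumPy sketch. To keep the example short, 1×1 convolutions (plain channel-mixing matrices) stand in for the patent's full convolution layers, and the pooling/upsampling stages of the first two branches are omitted; all layer widths and weights here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, out_ch):
    """Pointwise 'convolution': an (H, W, Cin) -> (H, W, out_ch) channel mix."""
    w = rng.standard_normal((x.shape[-1], out_ch)) * 0.1
    return x @ w

relu = lambda x: np.maximum(x, 0.0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def branch(x, out_ch, final_activation):
    """Stand-in for one branch: a conv/ReLU stack ending in the stated activation."""
    h = relu(conv1x1(x, 16))
    h = relu(conv1x1(h, 16))
    return final_activation(conv1x1(h, out_ch))

s = rng.random((32, 32, 3))       # original low-light image, values in [0, 1]
r = branch(s, 3, sigmoid)         # reflection component: 3 channels in (0, 1)
i = branch(s, 1, sigmoid)         # illumination component: 1 channel in (0, 1)
p = branch(s, 3, np.tanh)         # characteristic component: 3 channels in (-1, 1)
print(r.shape, i.shape, p.shape)  # (32, 32, 3) (32, 32, 1) (32, 32, 3)
```

The choice of final activation matches the claim: Sigmoid bounds the reflection and illumination components to (0, 1), while Tanh lets the characteristic component carry signed detail corrections.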
4. The low-illumination image enhancement method according to claim 3, wherein in step (b3) the enhancement network sequentially comprises a convolution layer, a ReLU activation function, a deconvolution layer, a ReLU activation function, a convolution layer, a ReLU activation function, a convolution layer and a Sigmoid function;
step (b3) is specifically as follows:
inputting the illumination component obtained by decomposition in step (b2) into the enhancement network; after passing sequentially through the convolution layer, ReLU activation function, deconvolution layer, ReLU activation function, convolution layer, ReLU activation function and convolution layer of the enhancement network, the result is spliced with the illumination component obtained by decomposition in step (b2) and input into the Sigmoid function to obtain the enhanced illumination component.
5. The low-illuminance image enhancement method according to claim 4, characterized in that: the loss function of the CNN network is:
L = L_recon + λ_1 L_s + λ_2 L_t (1);
wherein L_recon is the reconstruction loss function, L_s is the smoothness loss function, and L_t is the feature estimation loss function; λ_1 and λ_2 are weight factors;
wherein S represents the original image, Ŝ represents the enhanced image, I is the illumination component obtained by decomposition, S_max is the maximum value over the color channels, and R is the reflection component obtained by decomposition;
where H denotes the height of the image, W denotes the width of the image, C denotes the number of channels of the image, Δ_x represents the horizontal gradient operation, and Δ_y represents the vertical gradient operation;
wherein ‖·‖_F represents the Frobenius norm of a matrix, P represents the characteristic component obtained by decomposition, α_x represents the horizontal gradient operation, α_y represents the vertical gradient operation, and β represents the illumination steering weight, given by:
β = normalize[ I · (α_x R)² · (α_y R)² ]⁻¹ (5);
where normalize denotes minimum normalization.
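The horizontal and vertical gradient operations used in the smoothness and feature estimation losses can be realised with forward differences. A NumPy sketch (the forward-difference form and zero-padded last row/column are assumptions, since the explicit loss expressions are not reproduced above):

```python
import numpy as np

def grad_x(img: np.ndarray) -> np.ndarray:
    """Horizontal forward difference, zero-padded so the shape is preserved."""
    g = np.zeros_like(img, dtype=np.float64)
    g[:, :-1] = img[:, 1:] - img[:, :-1]
    return g

def grad_y(img: np.ndarray) -> np.ndarray:
    """Vertical forward difference, zero-padded so the shape is preserved."""
    g = np.zeros_like(img, dtype=np.float64)
    g[:-1, :] = img[1:, :] - img[:-1, :]
    return g

# A vertical step edge: only the horizontal gradient responds.
img = np.zeros((4, 4))
img[:, 2:] = 1.0
print(np.abs(grad_x(img)).sum())  # 4.0: one unit step per row
print(np.abs(grad_y(img)).sum())  # 0.0: constant along each column
```

Penalising the magnitude of such gradients on the illumination component I encourages piecewise-smooth illumination while the reflection component R retains edges, which is the usual role of a smoothness term in Retinex-style decompositions.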
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211495205.2A CN115760630A (en) | 2022-11-26 | 2022-11-26 | Low-illumination image enhancement method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115760630A true CN115760630A (en) | 2023-03-07 |
Family
ID=85338540
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117095133A (en) * | 2023-10-18 | 2023-11-21 | 华侨大学 | Building three-dimensional information acquisition method and system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111489321A (en) * | 2020-03-09 | 2020-08-04 | 淮阴工学院 | Depth network image enhancement method and system based on derivative graph and Retinex |
CN112561838A (en) * | 2020-12-02 | 2021-03-26 | 西安电子科技大学 | Image enhancement method based on residual self-attention and generation countermeasure network |
CN113256510A (en) * | 2021-04-21 | 2021-08-13 | 浙江工业大学 | CNN-based low-illumination image enhancement method with color restoration and edge sharpening effects |
CN114862711A (en) * | 2022-04-29 | 2022-08-05 | 西安理工大学 | Low-illumination image enhancement and denoising method based on dual complementary prior constraints |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20230307 |