CN113129236B - Single low-light image enhancement method and system based on Retinex and convolutional neural network - Google Patents
- Publication number
- CN113129236B (application CN202110449727.8A)
- Authority
- CN
- China
- Prior art keywords
- component
- loss
- image
- neural network
- illumination
- Prior art date: 2021-04-25
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T5/00—Image enhancement or restoration (G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general)
- G06N3/045—Combinations of networks (G06N: Computing arrangements based on specific computational models; G06N3/00: Computing arrangements based on biological models; G06N3/02: Neural networks; G06N3/04: Architecture, e.g. interconnection topology)
- G06N3/08—Learning methods
- G06T7/90—Determination of colour characteristics (G06T7/00: Image analysis)
- G06T2207/10024—Color image (G06T2207/00: Indexing scheme for image analysis or image enhancement; G06T2207/10: Image acquisition modality)
- G06T2207/20172—Image enhancement details (G06T2207/20: Special algorithmic details)
Abstract
The invention belongs to the technical field of computer vision, and provides a single low-light image enhancement method and system based on Retinex and a convolutional neural network. The method comprises: acquiring an image, preprocessing it, and separating the three channels to obtain the hue, saturation, and lightness components; obtaining the illumination component from the lightness component using a trained deep convolutional neural network model; calculating the reflection component from the lightness and illumination components using Retinex theory; and recombining the reflection component with the hue and saturation components to obtain a three-channel image in the HSV color space.
Description
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a single low-light image enhancement method and system based on Retinex and a convolutional neural network.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
With the development of science and technology, acquiring information from images has become commonplace, and computer vision algorithms such as object detection, recognition, and tracking are increasingly widespread. However, not all images acquired by sensors can be used directly by these algorithms. For example, images acquired in low-light environments suffer degraded quality: low visibility, low contrast, color distortion, high noise, and the like. Humans can extract very little information from such photographs, and applying them directly to computer vision algorithms degrades algorithm performance. How to improve the quality of images acquired in complex environments has therefore been a research focus in the field of computer vision in recent years.
Low-light images generally refer to poor-quality images acquired in low-light environments. The main purpose of low-light image enhancement is to increase the brightness and contrast of a low-light (or underexposed) image and highlight its main information, realized chiefly through software. The enhanced image can be better used for computer vision tasks and can provide valuable information. Low-light image enhancement therefore has broad application prospects in fields such as surveillance and autonomous driving.
Traditional low-light image enhancement methods fall into four main categories. (1) Spatial-domain image enhancement methods, which mainly change the distribution range of image pixel values to achieve enhancement; most are based on histogram equalization, gamma correction, and fuzzy logic transformations, such as histogram equalization and contrast-limited adaptive histogram equalization. (2) Transform-domain image enhancement methods, which convert the image into the frequency domain and enhance it there with a suitable filter function; most are based on the frequency domain or the wavelet transform domain, the filter functions used mainly include low-pass, band-pass, and high-pass filters, and the wavelet transform method is representative. (3) Fusion-based image enhancement methods, which achieve enhancement by fusing multiple different images of the same scene, combining the parts with good visual effect in each image into one high-brightness image; high dynamic range rendering is a representative method. (4) Image enhancement methods based on Retinex theory. Retinex theory holds that an image is formed by the joint action of illumination and an object, and can be expressed as the product of an illumination component and a reflection component, where the reflection component is an inherent property of the object and remains invariant under different illumination conditions. Most of these methods decompose the original image into an illumination component and a reflection component. Representative methods include single-scale Retinex, which estimates the illumination component of the image with Gaussian filtering; multi-scale Retinex with color restoration, which applies Gaussian filtering at multiple scales and adds a color restoration factor; and LIME, which estimates the illumination component with a structural prior and solves for the reflection component via Retinex theory as the final result.
These methods can achieve good results on some images, but they are limited by their models, lack generalization capability, and are difficult to apply to a wider range of scenes.
Convolutional neural networks have great advantages in image tasks and have been applied to various computer vision tasks with great success. In the field of low-light image processing there are also many methods based on convolutional neural networks: LLNet builds an autoencoder to enhance the image; MSR-net learns the mapping between light and dark images to realize enhancement; Retinex-Net builds a decomposition network and an enhancement network and trains them on a paired image dataset; and so on. These methods have strong generalization capability and can adapt to many scenes, but their effectiveness depends heavily on the paired dataset used to train the network model: building a paired light/dark image dataset is difficult work, and the brightness of a real (i.e., normal) image has no specific standard.
Disclosure of Invention
In order to solve the technical problems in the background art, the invention provides a method and system for enhancing a single low-light image based on Retinex and a convolutional neural network, which can effectively enhance a single low-light image without introducing color distortion, preserves the texture details of the original image, and generalizes well across different datasets.
In order to achieve the purpose, the invention adopts the following technical scheme:
A first aspect of the invention provides a single low-light image enhancement method based on Retinex and a convolutional neural network.
A single low-light image enhancement method based on Retinex and a convolutional neural network comprises the following steps:
acquiring an image, preprocessing it, and separating the three channels to obtain the hue, saturation, and lightness components;
obtaining the illumination component from the lightness component using a trained deep convolutional neural network model;
calculating the reflection component from the lightness and illumination components using Retinex theory;
recombining the reflection component with the hue and saturation components to obtain a three-channel image in the HSV color space;
and converting the three-channel image in the HSV color space into the RGB color space to obtain the enhanced low-light image.
Further, the process of obtaining the enhanced low-light image comprises: and converting the three-channel image in the HSV color space into an RGB color space, and adjusting the pixel range of the three-channel image from [0,1] to [0,255] to obtain the enhanced low-illumination image.
Further, the process of obtaining the hue, saturation, and lightness components includes: normalizing the pixels of the image to [0,1], converting the normalized image from the RGB color space to the HSV color space, and separating the three channels to obtain the hue, saturation, and lightness components.
Further, the process of training the deep convolutional neural network model includes:
constructing a deep convolutional neural network model, and establishing a target loss function of the neural network based on Retinex theory and prior assumptions;
obtaining the log-lightness component by taking the logarithm of the lightness component and normalizing it, and obtaining the bright channel prior component from the lightness component;
and (3) bringing the illumination component, the lightness component and the bright channel prior component into a target loss function, calculating an error, and realizing gradient updating on the weight and the parameters of the neural network by using an Adam optimization algorithm through the target loss function until the error is smaller than a set threshold or iteration reaches a preset number of times, and ending the model training.
The target loss function is:

E = Loss_is + λ1·Loss_r + λ2·Loss_rs + λ3·Loss_lc

where Loss_is denotes the illumination smoothness loss function, Loss_r the reflection loss function, Loss_rs the reflection smoothness loss function, Loss_lc the bright channel prior loss function, and λ1, λ2, λ3 the weights of the reflection loss, reflection component smoothness loss, and bright channel prior loss, respectively.
As one embodiment, the three weights may take the values λ1 = 0.05, λ2 = 0.1, λ3 = 0.5. It should be noted that these values are only one implementation of the present invention and should not be construed as limiting it.
The second aspect of the invention provides a single low-light image enhancement system based on Retinex and a convolutional neural network.
A single low-light image enhancement system based on Retinex and a convolutional neural network comprises:
an acquisition and pre-processing module configured to: acquiring an image, preprocessing the image, separating three channels and acquiring a hue component, a saturation component and a brightness component;
an illumination component obtaining module configured to: obtaining an illumination component according to the lightness component by adopting the trained deep convolution neural network model;
a reflection component obtaining module configured to: calculate the reflection component from the lightness and illumination components using Retinex theory;
a recombination module configured to: recombine the reflection component with the hue and saturation components to obtain a three-channel image in the HSV color space.
An output module configured to: and converting the three-channel image in the HSV color space into an RGB color space to obtain an enhanced low-illumination image.
A third aspect of the invention provides a computer-readable storage medium.
A computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the single low-light image enhancement method based on Retinex and a convolutional neural network as described in the first aspect above.
A fourth aspect of the invention provides a computer apparatus.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the Retinex and convolutional neural network-based single low-light image enhancement method according to the first aspect when executing the program.
Compared with the prior art, the invention has the beneficial effects that:
1. According to the invention, the low-light image is converted from the RGB color space to the HSV color space and the channels are separated; only the lightness component is enhanced, while the color information is kept in the hue (H) and saturation (S) components, avoiding color distortion during enhancement.
2. The invention establishes the objective function of the deep neural network through Retinex theory and a series of priors, estimates the illumination component from the lightness component of a single image, and further estimates the reflection component to obtain the enhanced image; the enhancement process is thus unsupervised and has strong generalization capability.
3. The invention uses fractional-order differentiation in the objective function of the neural network, improving the retention of detail in the enhanced image.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and together with the description serve to explain the invention without limiting it.
FIG. 1 is a schematic flow chart of a single low-light image enhancement method based on Retinex and a convolutional neural network;
FIG. 2 is a diagram of a deep convolutional network architecture in an embodiment of the present invention;
FIG. 3 is a schematic illustration of an object imaging process of the present invention;
FIG. 4(a) is an input low-light image;
FIG. 4(b) is a visualization effect diagram after the image enhancement processing of the DICM data set by the BIMEF algorithm;
FIG. 4(c) is a visualization effect diagram after the image enhancement processing of the DICM data set of the DONG algorithm;
FIG. 4(d) is a visualization effect diagram after the image enhancement processing of the DICM data set of the LIME algorithm;
FIG. 4(e) is a visualization effect diagram after the image enhancement processing of the DICM data set by the MF algorithm;
FIG. 4(f) is a visualization effect diagram after the image enhancement processing of the DICM data set of the NPE algorithm;
FIG. 4(g) is a visualization effect diagram after the image enhancement processing of the DICM data set of the SRIE algorithm;
FIG. 4(h) is a visualization effect diagram after ULE algorithm DICM data set image enhancement processing;
FIG. 4(i) is a visualization effect diagram after the image enhancement processing of the Retinex-Net algorithm DICM data set;
FIG. 4(j) is a visualization effect diagram after image enhancement processing of the KinD algorithm DICM data set;
fig. 4(k) is a visualization effect diagram after the DICM dataset image enhancement processing according to the method of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit exemplary embodiments according to the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; it should also be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
Example one
As shown in fig. 1, the embodiment provides a single low-light image enhancement method based on Retinex and a convolutional neural network. The embodiment is illustrated by applying the method to a server; it is understood that the method may also be applied to a terminal, or to a system including a terminal and a server, implemented through interaction between the terminal and the server. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and big data and artificial intelligence platforms. The terminal may be, but is not limited to, a smartphone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, or a smart watch. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited herein. In this embodiment, the method includes the following steps:
step 1: acquiring an image, preprocessing the image, separating three channels and acquiring a hue component, a saturation component and a brightness component;
the method comprises the following steps of preprocessing an acquired low-illumination image:
step 1.1, in the stage of reading in the low-illumination image, the pixel value range of the read image is changed into [0,1] by using a special read-in function; or create a normalization function to normalize pixels of the low-light image from [0,255] to [0,1 ].
Step 1.2: converting the image obtained in the step 1.1 from a color space RGB to a color space HSV, and extracting three channel components: hue (H), saturation (S), lightness (V). The specific implementation method comprises the following steps:
step 1.2.1: calculating the maximum channel (c) of the imagemax) Minimum channel (c)min) And a contrast (Δ), the calculation formula being: c. Cmax=max(R,G,B),cmin=min(R,G,B),Δ=cmax-cmin(where R, G, B are the three channels of the color space RGB, whose values lie in [0,1]]In (d) of the first and second groups;
step 1.2.2: hue (H), saturation (S), lightness (V) are calculated according to the following formula:
V=cmax。
step 1.3: the luminance component is logarithmized using a function y ═ log (x +1), and the result is normalized to obtain a logarithmized luminance component (v).
Step 1.4: calculating to obtain a prior component (v) of a bright channel by a formulalight) The formula is as follows:wherein Ω is a k × k region centered on (i, j),and vp,qRepresenting the pixel at the specified location.
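For illustration, the following is a minimal NumPy sketch of this preprocessing (steps 1.1-1.4). The function names, the window size k = 15, and the use of SciPy's maximum_filter are assumptions made for the sketch, not details taken from the patent; hue is returned normalized to [0, 1) rather than in degrees:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def rgb_to_hsv(img):
    """Steps 1.1-1.2: img is an H x W x 3 RGB array normalized to [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    cmax = img.max(axis=-1)            # maximum channel
    cmin = img.min(axis=-1)            # minimum channel
    delta = cmax - cmin                # contrast

    # Hue: piecewise on which channel attains the maximum, scaled to [0, 1)
    h = np.zeros_like(cmax)
    nz = delta > 0
    m = nz & (cmax == r); h[m] = ((g - b)[m] / delta[m]) % 6
    m = nz & (cmax == g); h[m] = (b - r)[m] / delta[m] + 2
    m = nz & (cmax == b); h[m] = (r - g)[m] / delta[m] + 4
    h /= 6.0

    s = np.where(cmax > 0, delta / np.maximum(cmax, 1e-12), 0.0)  # saturation
    v = cmax                                                      # lightness
    return h, s, v

def log_lightness(v):
    """Step 1.3: y = log(x + 1), rescaled back into [0, 1]."""
    return np.log(v + 1.0) / np.log(2.0)   # log(2) is the maximum for v in [0, 1]

def bright_channel(v, k=15):
    """Step 1.4: max of v over a k x k window centered at each (i, j)."""
    return maximum_filter(v, size=k, mode='nearest')
```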
Step 2: obtaining an illumination component according to the lightness component by adopting the trained deep convolution neural network model;
the deep convolution neural network model is constructed by the following specific steps:
step 2.1: and constructing a deep convolutional network. The network structure of the invention is shown in fig. 2, the input is the lightness component (v) obtained in step 1.3, the output is the illumination component (l), the first 4 layers of the network model are composed of a convolution operation and a ReLU function, the latter layer has only 1 convolution operation, and finally a sigmoid layer is connected. The specific information of each layer of the network model is as follows:
TABLE 1 deep convolutional network
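As a concrete illustration, the PyTorch sketch below builds a network with this layout; since the per-layer details of Table 1 are not reproduced in this text, the 3×3 kernels and 64 feature channels are assumed values:

```python
import torch.nn as nn

class IlluminationNet(nn.Module):
    """Sketch of the 5-layer illumination estimation network described above."""
    def __init__(self, width=64):
        super().__init__()
        layers, in_ch = [], 1               # input: the single lightness channel
        for _ in range(4):                  # first 4 layers: convolution + ReLU
            layers += [nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True)]
            in_ch = width
        layers += [nn.Conv2d(in_ch, 1, 3, padding=1),  # 5th layer: convolution only
                   nn.Sigmoid()]                       # keeps the output l in (0, 1)
        self.net = nn.Sequential(*layers)

    def forward(self, v):                   # v: (B, 1, H, W) log-lightness
        return self.net(v)                  # returns the illumination component l
```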
Step 2.2: establish the objective function of the neural network based on Retinex theory and prior assumptions. Fig. 3 shows the imaging process of an object, which Retinex theory expresses as S = R ⊙ L, where S represents the acquired (or observed) image, R is the reflection component of the object, L is the illumination component of the environment, and ⊙ denotes element-wise multiplication. Under the assumption that the spatial variation of the illumination is smooth, an illumination smoothness loss Loss_is is proposed that penalizes the fractional-order gradient of the illumination component l output by the network: here D^{v1} denotes the v1-th order derivative (v1 positive), v2 is a positive exponent, and N is the total number of pixels of the image. By constraining the fractional-order gradient of the illumination component, the illumination component obtained by the neural network is made smooth in its spatial variation.
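The fractional-order operator itself is not reproduced in this text; a common discrete approximation of a fractional derivative is the Grünwald-Letnikov difference, sketched below along the width axis (the order alpha, standing in for v1, and the truncation depth K are assumed values):

```python
import torch
import torch.nn.functional as F

def gl_frac_diff(x, alpha=0.5, K=5):
    """Grünwald-Letnikov alpha-order difference of a (B, C, H, W) tensor along
    the width axis; the height axis is handled analogously with pad (0,0,k,0)."""
    out, c = x.clone(), 1.0                           # k = 0 term, w_0 = 1
    for k in range(1, K + 1):
        c *= (k - 1 - alpha) / k                      # w_k = w_{k-1}*(k-1-alpha)/k
        shifted = F.pad(x, (k, 0))[..., :x.size(-1)]  # x[..., i-k], zero-padded
        out = out + c * shifted
    return out
```

For alpha = 1 the coefficients reduce to (1, −1, 0, ...), i.e. the ordinary first-order difference, so the operator interpolates smoothly between integer-order gradients.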
To keep the obtained illumination image from differing too much from the original image, a reflection loss Loss_r is designed that penalizes the difference between the illumination component l output by the network and the input lightness component v, with v2 again a positive exponent.
To make the spatial variation of the reflection component as smooth as possible, guaranteeing its clarity and visual effect, a reflection smoothness loss Loss_rs is proposed that applies the same fractional-order gradient penalty to the reflection component.
in order to have the value of the reflection component (R) between [0,1], it is necessary to guarantee that the illumination component (L) is greater than (V), thus proposing a bright channel prior loss:
wherein v islightIs the bright channel prior component calculated from the luma component v,
in the above loss, the present invention uses fractional order gradient and fractional order differential to improve the retention of image texture details by the model.
Synthesizing the loss functions of all parts, the target loss function of the model is:

E = Loss_is + λ1·Loss_r + λ2·Loss_rs + λ3·Loss_lc

where λ1, λ2, λ3 are the weights of the reflection loss, reflection component smoothness loss, and bright channel prior loss, taking the values λ1 = 0.05, λ2 = 0.1, λ3 = 0.5, respectively. Under these 4 loss constraints, the resulting illumination component is spatially smooth and texture details are preserved.
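The individual loss formulas appear as images in the original publication and are not reproduced here; the sketch below therefore uses plausible stand-ins that match the prose descriptions (first-order differences in place of the fractional-order operators, a squared error for the reflection loss, and a hinge on the bright channel prior), and shows only how the four weighted terms combine:

```python
import torch

def total_loss(l, v, v_light, lam1=0.05, lam2=0.1, lam3=0.5):
    """E = Loss_is + lam1*Loss_r + lam2*Loss_rs + lam3*Loss_lc (assumed forms)."""
    def smooth(x):  # mean absolute first-order gradient along H and W
        return ((x[..., 1:, :] - x[..., :-1, :]).abs().mean()
                + (x[..., :, 1:] - x[..., :, :-1]).abs().mean())

    r = v / l.clamp(min=1e-4)                  # reflection estimate, R = V / L
    loss_is = smooth(l)                        # illumination varies smoothly
    loss_r  = torch.mean((l - v) ** 2)         # illumination close to lightness
    loss_rs = smooth(r)                        # reflection varies smoothly
    loss_lc = torch.relu(v_light - l).mean()   # encourage l >= bright channel prior
    return loss_is + lam1 * loss_r + lam2 * loss_rs + lam3 * loss_lc
```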
Step 2.3: randomly initializing a network weight, and determining a weight optimization algorithm as an Adam optimization algorithm.
The neural network is then trained: the log-lightness component (v) obtained in step 1.3 is input into the convolutional neural network; the resulting illumination component (l), the lightness component (v), and the bright channel prior component (v_light) are brought into the objective function of step 2.2 to calculate the error, and the Adam optimization algorithm updates the weights and parameters of the network by gradient descent; iteration stops when the error meets the requirement or a preset number of iterations is reached.
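A minimal training-loop sketch, reusing the IlluminationNet and total_loss sketches above; v_log and v_light_arr stand for the arrays produced in steps 1.3 and 1.4, the learning rate and iteration count follow the experiment settings reported below, and the stopping threshold is an assumed value:

```python
import torch

net = IlluminationNet()
opt = torch.optim.Adam(net.parameters(), lr=0.001)        # Adam, as specified

v_t  = torch.from_numpy(v_log).float()[None, None]        # (1, 1, H, W)
vl_t = torch.from_numpy(v_light_arr).float()[None, None]

for it in range(1000):                 # total number of iterations
    opt.zero_grad()
    err = total_loss(net(v_t), v_t, vl_t)
    err.backward()
    opt.step()
    if err.item() < 1e-4:              # error threshold (assumed value)
        break
```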
Step 3: calculating the reflection component from the lightness and illumination components using Retinex theory;
Step 3.1: input the log-lightness component (v) obtained in step 1.3 into the deep convolutional neural network trained in step 2 to obtain the output illumination component (l).
Step 3.2: apply gamma correction to the illumination component obtained in step 3.1, then exponentiate the log-domain lightness component from step 1.3 and the corrected illumination component to obtain the exponentiated lightness component (V) and illumination component (L). According to Retinex theory, the reflection component is given by R = V / L, from which the reflection component (R) is calculated.
Step 4: recombine the reflection component with the hue and saturation components to obtain a three-channel image in the HSV color space, and convert the three-channel image in the HSV color space into the RGB color space to obtain the enhanced low-light image.
Specifically, the reflection component (R) is used as the new lightness component (V) and combined with the hue (H) and saturation (S) obtained in step 1.2 to restore the enhanced image in the HSV color space; the image is then converted from the HSV color space to the RGB color space, and its value range is changed from [0,1] to [0,255] by a writing function or a created mapping function.
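A sketch of the recombination, using matplotlib's HSV-to-RGB conversion (any equivalent conversion works; hue is assumed normalized to [0, 1) as in the preprocessing sketch):

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def recombine(h, s, r):
    """Use the reflection R as the new lightness V, rebuild HSV, map to [0, 255]."""
    hsv = np.stack([h, s, r], axis=-1)      # three-channel HSV image
    rgb = hsv_to_rgb(hsv)                   # HSV -> RGB, both in [0, 1]
    return (np.clip(rgb, 0.0, 1.0) * 255.0).astype(np.uint8)
```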
Through the steps, the enhanced image can be obtained.
We performed tests on the DICM low-light image dataset and evaluated network performance using the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM). We also compared the method visually and quantitatively with current state-of-the-art algorithms, including the naturalness-preserving enhancement algorithm NPE, the dehazing-based DONG, the fusion-based MF, the illumination-estimation-based LIME, BIMEF (based on illumination component estimation and multi-exposure fusion), SRIE (based on joint reflection and illumination component estimation), and the deep-learning-based ULE, Retinex-Net, and KinD.
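For reference, both metrics can be computed with scikit-image as below; the availability of reference images is an assumption, since the evaluation protocol is not detailed here:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# enhanced and reference are uint8 RGB arrays of identical shape
psnr = peak_signal_noise_ratio(reference, enhanced)
ssim = structural_similarity(reference, enhanced, channel_axis=-1)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```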
The experimental environment is an Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40 GHz processor, 128 GB of memory, and an Nvidia GeForce GTX TITAN X graphics card with 12 GB of video memory. The software stack is Windows 10, the PyTorch deep learning framework, Python 3.8, CUDA 10.0, and cuDNN 7.4. The development software used was PyCharm 2020 and Matlab 2020a.
The learning rate in this experiment is set to 0.001, and the total number of iterations is 1000.
Figs. 4(a)-(k) show the visual comparison of the different algorithms on the DICM dataset. Fig. 4(a) is the input low-light image, figs. 4(b)-4(j) are the results of the other methods, and fig. 4(k) is the method proposed by the invention. BIMEF (fig. 4(b)), LIME (fig. 4(d)), MF (fig. 4(e)), NPE (fig. 4(f)), ULE (fig. 4(h)), and Retinex-Net (fig. 4(i)) over-enhance the image, and the enhanced images exhibit color distortion. The results of DONG (fig. 4(c)) and SRIE (fig. 4(g)) appear unnatural, with over-enhancement in some places. As the image in fig. 4(k) shows, the method of this embodiment retains the details of the image, makes it look more natural, and produces neither color distortion nor over-enhancement. By comparison, the method of the invention performs better overall, recovering fine color and structure more faithfully.
The quantitative test results on the DICM dataset are as follows:
Table 2: Quantitative comparison on the DICM dataset
Example two
The embodiment provides a single low-light image enhancement system based on Retinex and a convolutional neural network.
A single low-light image enhancement system based on Retinex and a convolutional neural network comprises:
an acquisition and pre-processing module configured to: acquiring an image, preprocessing the image, separating three channels and acquiring a hue component, a saturation component and a brightness component;
an illumination component obtaining module configured to: obtaining an illumination component according to the lightness component by adopting the trained deep convolution neural network model;
a reflection component obtaining module configured to: calculate the reflection component from the lightness and illumination components using Retinex theory;
a recombination module configured to: recombine the reflection component with the hue and saturation components to obtain a three-channel image in the HSV color space.
An output module configured to: and converting the three-channel image in the HSV color space into an RGB color space to obtain an enhanced low-illumination image.
Example three
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the single low-light image enhancement method based on Retinex and a convolutional neural network as described in the first embodiment above.
Example four
The embodiment provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor executes the program to implement the steps in the single-low-light image enhancement method based on Retinex and convolutional neural network as described in the first embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (8)
1. A single low-light image enhancement method based on Retinex and a convolutional neural network is characterized by comprising the following steps:
acquiring an image, preprocessing the image, separating three channels and acquiring a hue component, a saturation component and a brightness component;
obtaining an illumination component according to the lightness component by adopting the trained deep convolution neural network model;
calculating the reflection component from the lightness and illumination components by utilizing Retinex theory;
recombining the reflection component with the hue component and the saturation component to obtain a three-channel image in an HSV color space;
the process of training the deep convolutional neural network model comprises the following steps:
constructing a deep convolutional neural network model, and establishing a target loss function of the neural network based on Retinex theory and prior assumptions;
obtaining the log-lightness component by taking the logarithm of the lightness component and normalizing it, and obtaining the bright channel prior component from the lightness component;
bringing the illumination component, the lightness component, and the bright channel prior component into the target loss function, calculating the error, and using the Adam optimization algorithm to realize gradient updating of the weights and parameters of the neural network through the target loss function, until the error is smaller than a set threshold or iteration reaches a preset number of times, at which point model training ends;
the target loss function is:

E = Loss_is + λ1·Loss_r + λ2·Loss_rs + λ3·Loss_lc

where Loss_is denotes the illumination smoothness loss function, Loss_r the reflection loss function, Loss_rs the reflection smoothness loss function, and Loss_lc the bright channel prior loss function; λ1, λ2, λ3 are the weights of the reflection loss, reflection component smoothness loss, and bright channel prior loss, taking the values λ1 = 0.05, λ2 = 0.1, λ3 = 0.5; l is the illumination component output by the network, D^{v1} is the v1-th order derivative with v1 a positive number, v2 is a positive number, N is the total number of pixels of the image, v is the input lightness component, and v_light is the bright channel prior component calculated from the lightness component v.
2. The single low-light image enhancement method based on Retinex and a convolutional neural network as claimed in claim 1, wherein the three-channel image in the HSV color space is converted into the RGB color space to obtain the enhanced low-light image.
3. The single low-light image enhancement method based on Retinex and a convolutional neural network as claimed in claim 2, wherein the process of obtaining the enhanced low-light image comprises: converting the three-channel image in the HSV color space into the RGB color space, and adjusting the pixel range of the three-channel image from [0,1] to [0,255] to obtain the enhanced low-light image.
4. The single low-light image enhancement method based on Retinex and a convolutional neural network as claimed in claim 1, wherein the process of obtaining the hue, saturation, and lightness components comprises: normalizing the pixels of the image to [0,1], converting the normalized image from the RGB color space to the HSV color space, and separating the three channels to obtain the hue, saturation, and lightness components.
5. The single low-light image enhancement method based on Retinex and a convolutional neural network as claimed in claim 1, wherein the deep convolutional neural network model comprises five convolutional layers and one sigmoid activation layer.
6. A single low-light image enhancement system based on Retinex and a convolutional neural network, characterized by comprising:
an acquisition and pre-processing module configured to: acquiring an image, preprocessing the image, separating three channels and acquiring a hue component, a saturation component and a brightness component;
an illumination component obtaining module configured to: obtaining an illumination component according to the lightness component by adopting the trained deep convolution neural network model;
a reflection component obtaining module configured to: calculate the reflection component from the lightness and illumination components using Retinex theory;
a recombination module configured to: recombine the reflection component with the hue and saturation components to obtain a three-channel image in the HSV color space;
an output module configured to: converting the three-channel image in the HSV color space into an RGB color space to obtain an enhanced low-illumination image;
the deep convolutional neural network model training process comprises the following steps:
constructing a deep convolutional neural network model, and establishing a target loss function of the neural network based on Retinex theory and prior assumptions;
obtaining the log-lightness component by taking the logarithm of the lightness component and normalizing it, and obtaining the bright channel prior component from the lightness component;
bringing the illumination component, the lightness component, and the bright channel prior component into the target loss function, calculating the error, and using the Adam optimization algorithm to realize gradient updating of the weights and parameters of the neural network through the target loss function, until the error is smaller than a set threshold or iteration reaches a preset number of times, at which point model training ends;
the target loss function is:

E = Loss_is + λ1·Loss_r + λ2·Loss_rs + λ3·Loss_lc

where Loss_is denotes the illumination smoothness loss function, Loss_r the reflection loss function, Loss_rs the reflection smoothness loss function, and Loss_lc the bright channel prior loss function; λ1, λ2, λ3 are the weights of the reflection loss, reflection component smoothness loss, and bright channel prior loss, taking the values λ1 = 0.05, λ2 = 0.1, λ3 = 0.5; l is the illumination component output by the network, D^{v1} is the v1-th order derivative with v1 a positive number, v2 is a positive number, N is the total number of pixels of the image, v is the input lightness component, and v_light is the bright channel prior component calculated from the lightness component v.
7. A computer-readable storage medium, on which a computer program is stored, which when executed by a processor carries out the steps of the single low-light image enhancement method based on Retinex and a convolutional neural network according to any one of claims 1-5.
8. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the single low-light image enhancement method based on Retinex and a convolutional neural network according to any one of claims 1-5 when executing the program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110449727.8A | 2021-04-25 | 2021-04-25 | Single low-light image enhancement method and system based on Retinex and convolutional neural network
Publications (2)
Publication Number | Publication Date |
---|---|
CN113129236A CN113129236A (en) | 2021-07-16 |
CN113129236B true CN113129236B (en) | 2022-07-12 |
Family
ID=76779830
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110449727.8A | Single low-light image enhancement method and system based on Retinex and convolutional neural network | 2021-04-25 | 2021-04-25
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113129236B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114202475A (en) * | 2021-11-24 | 2022-03-18 | 北京理工大学 | Adaptive image enhancement method and system |
CN114283288B (en) * | 2021-12-24 | 2022-07-12 | 合肥工业大学智能制造技术研究院 | Method, system, equipment and storage medium for enhancing night vehicle image |
CN116993636B (en) * | 2023-07-10 | 2024-02-13 | 中国地质大学(武汉) | Image enhancement method and device for underground low-illumination deep stratum empty area |
CN116824511A (en) * | 2023-08-03 | 2023-09-29 | 行为科技(北京)有限公司 | Tool identification method and device based on deep learning and color space |
CN117853783A (en) * | 2023-12-12 | 2024-04-09 | 济南大学 | Single board defect identification method and system based on deep learning |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106600564A (en) * | 2016-12-23 | 2017-04-26 | 潘敏 | Novel image enhancement method |
CN110298796A (en) * | 2019-05-22 | 2019-10-01 | 中山大学 | Based on the enhancement method of low-illumination image for improving Retinex and Logarithmic image processing |
CN112465727A (en) * | 2020-12-07 | 2021-03-09 | 北京邮电大学 | Low-illumination image enhancement method without normal illumination reference based on HSV color space and Retinex theory |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100879536B1 (en) * | 2006-10-30 | 2009-01-22 | 삼성전자주식회사 | Method And System For Image Enhancement |
CN110211049A (en) * | 2018-06-28 | 2019-09-06 | 京东方科技集团股份有限公司 | Image enchancing method, device and equipment based on Retinex theory |
CN110930341A (en) * | 2019-10-17 | 2020-03-27 | 杭州电子科技大学 | Low-illumination image enhancement method based on image fusion |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | GR01 | Patent grant |