WO2023236445A1 - Low-illumination image enhancement method using long-exposure compensation - Google Patents
- Publication number: WO2023236445A1 (application PCT/CN2022/131018)
- Authority: WIPO (PCT)
Classifications
- G06V10/774 — Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06T5/00 — Image enhancement or restoration
- G06T5/70 — Denoising; Smoothing
- G06V10/761 — Proximity, similarity or dissimilarity measures
- G06V10/806 — Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
- G06V10/82 — Image or video recognition or understanding using neural networks
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
Definitions
- the invention belongs to the field of low-light image enhancement of digital images, and relates to a low-light image enhancement method using long exposure compensation.
- Low light is a common form of image degradation, usually caused by a dark shooting environment, camera failure or incorrect parameter settings. The enhancement of low-light images has long attracted attention from industry and academia.
- Low-light image enhancement methods can be divided into three categories. Methods based on uniform brightness adjustment brighten the low-light image by uniformly adjusting its global brightness. Methods based on Retinex theory decompose the image into a reflectance layer and an illumination layer, and use prior knowledge to manually set constraints for the adjustment. Deep-learning-based methods design a data-driven convolutional model trained end-to-end on a large data set; inference requires only a single forward pass over the low-light image.
- each training sample in the low-light training data set includes a low-light image and a normal-illumination image of the same scene; a group of corresponding short-exposure, long-exposure and real-illumination images is generated from each training sample to obtain a synthetic data set S;
- the low-light enhancement model includes M-1 feature alignment modules and M-1 brightening modules; for the long-exposure image I_long and the short-exposure image I_short of the same image group in the synthetic data set S, the model maps each image to the feature space to obtain the corresponding short-exposure and long-exposure features, which are input into the first feature alignment module;
- the low-light enhancement model also includes a detail removal module, which eliminates the detail features of the long-exposure image I_long before it is mapped to the feature space to obtain the long-exposure feature.
- a real-shot data set R is obtained; each image group in R includes three images of the same scene, namely a short-exposure image, a long-exposure image and a real-illumination image. The real-shot data set R is used to evaluate the trained low-light enhancement model.
- the long-exposure images in the synthetic data set S are synthesized from the normal-illumination images of the training samples; the short-exposure images in S are the low-light images of the training samples, and the real-illumination images in S are the normal-illumination images of the training samples.
- L_rec = ‖I_normal - I_GT‖; L_a = ‖I_assist - I_GT‖.
- φ_l(I_normal) denotes the l-th-layer feature of the image I_normal extracted by the VGG network; H_l and W_l denote the width and height of φ_l(I_normal), respectively.
- the present invention also provides a server including a memory and a processor; the memory stores a computer program configured to be executed by the processor, and the computer program includes instructions for executing each step of the above method.
- the present invention also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the above method are implemented.
- since long-exposure images are used as illumination compensation, the alignment module and the brightening module in Step 2 of the detailed implementation below can effectively exploit long-exposure features to realize feature interaction between long-exposure and short-exposure images, so that the brightening operation on the low-light image has a more clearly defined brightness target. This design alleviates the ill-defined brightening target encountered by other low-light enhancement techniques and yields a significant performance improvement.
- this invention significantly improves low-light image enhancement performance; on the LEC-LOL-Real benchmark it raises the peak signal-to-noise ratio (PSNR) of the general low-light enhancement model AGLLNet from 14.93 to 25.15.
- Figure 1 shows the training framework of the low-light image enhancement network using long-exposure compensation.
- Figure 2 shows the framework of the feature alignment sub-module.
- Figure 3 shows the framework of the brightening sub-module.
- Figure 4 shows a comparison before and after enhancement by the method of the present invention, in which (1) is the low-light image, (2) is the long-exposure blurred image, and (3) is the enhanced image.
- this embodiment discloses a low-light enhancement method with long-exposure compensation; the specific description is as follows:
- Step 1: collect the low-light training data set. For each low-light/normal-light image pair, use the blur-kernel-space model to synthesize multiple different long-exposure blurred images, building a synthetic data set S of normal-light/short-exposure/synthetic-long-exposure image groups for network training and testing; the short-exposure and real-illumination images are taken directly from the collected low-light/normal-light pairs, i.e. the collected low-light image is the short-exposure image in the set S and the collected normal-illumination image is the real-illumination image in the set S.
- the synthetic data set S collected in this step can be used for the training of this method and other subsequent methods, and the real shot data set R collected can be used for the evaluation and comparison of various low-light enhancement methods.
- the following introduces the network framework using the long-exposure image I_long and the short-exposure image I_short of one image pair in the data set S as an example.
- I_long and I_short each first pass through a convolution layer, followed by a normalization layer and a rectified linear unit (ReLU), mapping the images to the feature space to obtain the initial short-exposure and long-exposure features.
- because the long-exposure picture provides brightness and illumination information, its detail information should not interfere with the model; therefore, before the long-exposure input enters the long-exposure feature decoding module, a 16× downsampling and 16× upsampling module DRP is added to eliminate detail features. This operation enhances the robustness of the method and allows it to adapt to the various blur forms of long-exposure images.
- the feature alignment module S2L is added to the model; its structure is shown in Figure 2.
- the short-exposure feature first passes through a convolution layer to obtain an attention map A_i; this attention map is then applied to the long-exposure feature in a soft-threshold filtering operation, where "⊙" denotes element-wise multiplication. This operation selectively exploits, in the spatial dimension, the features extracted from the long-exposure image, effectively mitigating the influence of interference information in the long-exposure image. Afterwards, the filtered long-exposure feature and the short-exposure feature are jointly downsampled and passed into a convolution layer to predict the next long-exposure feature, while the short-exposure feature is separately downsampled and passed into a convolution layer to predict the next short-exposure feature. Each successive feature is half the size of the previous one in the spatial dimension but has twice as many channels.
- Step 3: use the constructed data set S to train the model, taking the model's output from the short-exposure decoding module as the optimization target I_normal and the output of the long-exposure decoding module as the auxiliary output I_assist.
- the total loss function of the low-light image enhancement model using long-exposure compensation is L = L_rec + λ_SSIM·L_SSIM + λ_LPIPS·L_LPIPS + λ_a·L_a.
- ⁇ SSIM , ⁇ LPIPS and ⁇ a are weight items.
- ⁇ SSIM is set to 0.4
- ⁇ LPIPS is set to 1
- ⁇ a is set to 1.
- during backpropagation, gradient values are truncated to the interval [-0.1, 0.1].
- 256×256-pixel patches are randomly cropped and a two-stage training strategy is used: the first stage trains 1.5×10^5 iterations without the attention mechanism of the feature alignment module S2L; the attention mechanism is then added and a further 3×10^4 iterations are trained with an initial learning rate of 1×10^-5.
- L_rec is the mean absolute error loss between the optimization target I_normal and the ground truth I_GT under normal lighting; L_a is the mean absolute error loss between the auxiliary output I_assist and I_GT.
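The training objective above can be sketched numerically. The snippet below is an illustrative NumPy sketch, not the patent's implementation: the SSIM and LPIPS terms are passed in as precomputed scalars because each requires its own model or windowing scheme.

```python
import numpy as np

# Weight terms as given in the text.
LAMBDA_SSIM, LAMBDA_LPIPS, LAMBDA_A = 0.4, 1.0, 1.0

def mae(x, y):
    """Mean absolute error, used for both L_rec and L_a."""
    return float(np.abs(x - y).mean())

def total_loss(i_normal, i_assist, i_gt, l_ssim, l_lpips):
    """L = L_rec + λ_SSIM·L_SSIM + λ_LPIPS·L_LPIPS + λ_a·L_a."""
    l_rec = mae(i_normal, i_gt)  # optimization target vs. ground truth
    l_a = mae(i_assist, i_gt)    # auxiliary output vs. ground truth
    return l_rec + LAMBDA_SSIM * l_ssim + LAMBDA_LPIPS * l_lpips + LAMBDA_A * l_a
```

In an actual training loop the gradients of L would additionally be clipped to [-0.1, 0.1] as described above.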
Abstract
Disclosed in the present invention is a low-illumination image enhancement method using long-exposure compensation. The method comprises: 1) collecting a low-illumination training data set, wherein each training sample in the low-illumination training data set comprises a low-illumination image and a normal-illumination image of the same scene, and generating, according to each training sample, a group of a short-exposure image, a long-exposure image and a real-illumination image, which correspond to each other, so as to obtain a synthetic data set S; 2) training a low-illumination enhancement model by using the synthetic data set S, wherein the low-illumination enhancement model comprises M-1 feature alignment modules and M-1 brightening modules; and 3) inputting a short-exposure image to be brightened and a corresponding blurred long-exposure image into the trained low-illumination enhancement model, so as to obtain a corresponding low-illumination enhanced image. The present invention can significantly improve the performance of low-illumination image enhancement.
Description
The invention belongs to the field of low-light image enhancement of digital images, and relates to a low-light image enhancement method using long-exposure compensation.
Low light is a common form of image degradation, usually caused by a dark shooting environment, camera failure or incorrect parameter settings. The enhancement of low-light images has long attracted attention from industry and academia.
Traditional low-light image enhancement methods can be divided into three categories. Methods based on uniform brightness adjustment brighten the low-light image by uniformly adjusting its global brightness. Methods based on Retinex theory decompose the image into a reflectance layer and an illumination layer, and use prior knowledge to manually set constraints for the adjustment. Deep-learning-based methods design a data-driven convolutional model trained end-to-end on a large data set; inference requires only a single forward pass over the low-light image.
However, low-light image enhancement is an ill-posed problem: one low-light image can correspond to multiple plausible normal-light images, and this uncertainty of the optimization target makes accurate and flexible enhancement challenging. Methods based on uniform brightness adjustment cannot handle local overexposure and signal noise; Retinex-based methods cannot meet the requirements of general-purpose automation; and deep-learning methods are difficult to generalize to images captured under diverse lighting conditions. Traditional methods therefore struggle to handle low-light images under various lighting conditions and cannot meet the needs of practical applications.
Summary of the invention
In view of the above problems, the purpose of the present invention is to provide a low-light image enhancement method using long-exposure compensation. By introducing an easily obtained blurred long-exposure image, the brightness, color and other information of that image is used to add constraints to the low-light enhancement problem, reducing its uncertainty, making the optimization target clearer, and comprehensively improving low-light enhancement performance.
The technical solution adopted by the present invention is as follows:
A low-light image enhancement method using long-exposure compensation, the steps of which include:
1) Collect a low-light training data set, in which each training sample includes a low-light image and a normal-illumination image of the same scene; generate from each training sample a corresponding group of a short-exposure image, a long-exposure image and a real-illumination image, obtaining a synthetic data set S.

2) Use the synthetic data set S to train a low-light enhancement model that includes M-1 feature alignment modules and M-1 brightening modules. For the long-exposure image I_long and the short-exposure image I_short of the same image group in S, the model maps each image to the feature space to obtain the corresponding initial short-exposure feature and long-exposure feature, which are input into the first feature alignment module.
3) The i-th feature alignment module (i = 1 to M-1) aligns the input i-th-scale long-exposure feature with the i-th-scale short-exposure feature. The module convolves the i-th-scale short-exposure feature to obtain an attention map A_i, and applies A_i to the i-th-scale long-exposure feature in a soft-threshold filtering operation, where "⊙" denotes element-wise multiplication. The filtered long-exposure feature and the short-exposure feature are then jointly downsampled and passed into a convolution layer to predict the (i+1)-th-scale long-exposure feature, and the short-exposure feature is separately downsampled and passed into a convolution layer to predict the (i+1)-th-scale short-exposure feature. The M-th-scale long-exposure feature and the M-th-scale short-exposure feature predicted by the (M-1)-th feature alignment module are concatenated to form the (M+1)-th-scale long-exposure feature and the (M+1)-th-scale short-exposure feature.
4) The i-th brightening module concatenates the (M+i)-th-scale long-exposure feature and the (M+i)-th-scale short-exposure feature and upsamples the concatenated feature; the upsampled feature is connected with the (M-i)-th-scale short-exposure feature and passed through a convolution layer to obtain the (M+i+1)-th-scale short-exposure feature, and likewise connected with the (M-i)-th-scale long-exposure feature and passed through a convolution layer to obtain the (M+i+1)-th-scale long-exposure feature. The 2M-th-scale short-exposure feature output by the (M-1)-th brightening module is taken as the optimization target I_normal and the 2M-th-scale long-exposure feature as the auxiliary output I_assist, and the low-light enhancement model is optimized accordingly. The total loss function for training is L = L_rec + λ_SSIM·L_SSIM + λ_LPIPS·L_LPIPS + λ_a·L_a, where λ_SSIM, λ_LPIPS and λ_a are weight terms; L_rec is the mean absolute error loss between the optimization target I_normal and the ground truth I_GT under normal lighting; L_SSIM is the structural similarity loss between I_normal and I_GT; L_LPIPS is the learned perceptual image patch similarity loss; and L_a is the mean absolute error loss between the auxiliary output I_assist and I_GT.
5) Input the short-exposure image to be brightened and the corresponding blurred long-exposure image into the trained low-light enhancement model to obtain the corresponding low-light-enhanced image.
Further, the low-light enhancement model also includes a detail removal module, which eliminates the detail features of the long-exposure image I_long before it is mapped to the feature space to obtain the long-exposure feature.
Further, a real-shot data set R is obtained; each image group in R includes three images of the same scene, namely a short-exposure image, a long-exposure image and a real-illumination image. The real-shot data set R is used to evaluate the trained low-light enhancement model.
Further, the long-exposure images in the synthetic data set S are synthesized from the normal-illumination images of the training samples; the short-exposure images in S are the low-light images of the training samples, and the real-illumination images in S are the normal-illumination images of the training samples.
Further, a blur-kernel-space model is used to process the normal-illumination images in the training samples to obtain the long-exposure images in the synthetic data set S.
Further, L_rec = ‖I_normal - I_GT‖ and L_a = ‖I_assist - I_GT‖.
Further, L_SSIM = 1 - SSIM(I_normal, I_GT), where SSIM(x, y) denotes the structural similarity of the two images x and y:

SSIM(x, y) = ((2 μ_x μ_y + c_1)(2 σ_xy + c_2)) / ((μ_x² + μ_y² + c_1)(σ_x² + σ_y² + c_2))

Here μ_x is the mean of x, μ_y is the mean of y, σ_x² is the variance of x, σ_y² is the variance of y, σ_xy is the covariance of x and y, and c_1 and c_2 are constants used to maintain stability.
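For reference, the SSIM term can be computed directly in its single-window (global) form; practical implementations evaluate it over local sliding windows. The constants below are the common defaults for images scaled to [0, 1], which is an assumption rather than a value from the patent.

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM(x, y) for images scaled to [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def l_ssim(i_normal, i_gt):
    """L_SSIM = 1 - SSIM(I_normal, I_GT)."""
    return 1.0 - ssim_global(i_normal, i_gt)
```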
Further, L_LPIPS = Σ_l (1/(H_l W_l)) Σ_{h,w} ‖φ_l(I_normal)_{h,w} - φ_l(I_GT)_{h,w}‖²₂, where φ_l(I_normal) denotes the l-th-layer feature of the image I_normal extracted by the VGG network, and H_l and W_l denote the width and height of φ_l(I_normal), respectively.
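L_LPIPS compares deep features layer by layer and averages spatially. The sketch below keeps that arithmetic shape but substitutes a fixed random projection for the VGG features φ_l, since the real loss needs a pretrained network; everything about the stand-in extractor is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Fixed random per-layer projections standing in for VGG filters (3 layers, C = 3 input).
_CHANNELS = [3, 4, 4]
_W = [rng.standard_normal((4, c)) for c in _CHANNELS]

def fake_features(img):
    """Stand-in for φ_l(img); img is a (3, H, W) array."""
    feats, x = [], img
    for w in _W:
        x = np.einsum("oc,chw->ohw", w, x)  # 1x1 "convolution"
        x = x[:, ::2, ::2]                  # downsample, like deeper VGG layers
        feats.append(x)
    return feats

def lpips_like(x, y):
    """Per-layer, spatially averaged squared feature differences."""
    total = 0.0
    for fx, fy in zip(fake_features(x), fake_features(y)):
        diff = fx - fy
        total += float((diff ** 2).sum(axis=0).mean())  # (1/(H_l W_l)) Σ ‖·‖²₂
    return total
```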
The present invention also provides a server, including a memory and a processor; the memory stores a computer program configured to be executed by the processor, and the computer program includes instructions for executing each step of the above method.
The present invention also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above method are implemented.
Compared with the prior art, the positive effects of the present invention are as follows:

Since the present invention uses a long-exposure image as illumination compensation, the alignment module and the brightening module in Step 2 of the detailed implementation below can effectively exploit the long-exposure features to realize feature interaction between long-exposure and short-exposure images, so that the brightening operation on the low-light image has a more clearly defined brightness target. This design alleviates the ill-defined brightening target encountered by other low-light enhancement techniques and gives the present invention a significant performance improvement.
The present invention significantly improves low-light image enhancement performance: on the LEC-LOL-Real low-light enhancement benchmark data set, it raises the peak signal-to-noise ratio (PSNR) of the general low-light enhancement model AGLLNet from 14.93 to 25.15.
Figure 1 is the training framework diagram of the low-light image enhancement network using long-exposure compensation.

Figure 2 is the framework diagram of the feature alignment sub-module.

Figure 3 is the framework diagram of the brightening sub-module.

Figure 4 is a comparison before and after enhancement by the method of the present invention, in which (1) is the low-light image, (2) is the long-exposure blurred image, and (3) is the enhanced image.
In order to make the above features and advantages of the present invention more apparent and understandable, embodiments are given below and described in detail with reference to the accompanying drawings. It should be noted that the specific numbers of layers, modules and functions and the settings of certain layers given in the following embodiments are only a preferred implementation and are not intended to be limiting; those skilled in the art may select the numbers and settings of certain layers according to actual needs.
This embodiment discloses a low-light enhancement method with long-exposure compensation, described as follows:
Step 1: Collect a low-light training data set. For each low-light/normal-light image pair, use the blur-kernel-space model to synthesize multiple different long-exposure blurred images, building a long-exposure-compensated low-light enhancement synthetic data set S composed of normal-light/short-exposure/synthetic-long-exposure image groups for network training and testing. The short-exposure and real-illumination images are taken directly from the collected low-light/normal-light pairs: the collected low-light image is the short-exposure image in S, and the collected normal-illumination image is the real-illumination image in S. For scenes different from those in the training data set S, capture three images of the same scene (short exposure, long exposure and normal illumination) to form data groups, building a long-exposure-compensated low-light enhancement real-shot data set R for network testing. The synthetic data set S collected in this step can be used to train this method and subsequent methods, and the real-shot data set R can be used to evaluate and compare various low-light enhancement methods.
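The internals of the blur-kernel-space model are not detailed in the text; as a hedged illustration, a synthetic long-exposure frame can be approximated by convolving the normal-light image with a sampled blur kernel. The horizontal motion kernel below is purely an example stand-in, not the patent's kernel model.

```python
import numpy as np

def motion_blur_kernel(length=9):
    """Horizontal motion-blur kernel (illustrative stand-in for a sampled kernel space)."""
    k = np.zeros((length, length))
    k[length // 2, :] = 1.0 / length  # normalized so brightness is preserved
    return k

def synth_long_exposure(img, kernel):
    """Convolve a (H, W) normal-light image with a blur kernel (edge-padded)."""
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out
```

Sampling several different kernels per normal-light image would yield the "multiple different long-exposure blurred images" described above.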
Step 2: Build the low-light enhancement training framework.
The structure of the network is shown in Figure 1; it includes the feature alignment module S2L, the brightening module L2S and the detail removal module DRP.
Taking a long-exposure image I_long and a short-exposure image I_short from a photo pair in dataset S as an example, the network framework operates as follows. I_long and I_short each first pass through a convolution layer, followed by a normalization layer and a rectified linear unit (ReLU), which maps the images into feature space and yields the initial short-exposure feature F_short^1 and long-exposure feature F_long^1. Notably, because the long-exposure image provides brightness and illumination information, its detail content should not interfere with the model; therefore, before the long-exposure input enters the long-exposure feature decoding module, a 16x-downsampling and 16x-upsampling module DRP is added to remove detail features. This operation improves the robustness of the method, allowing it to handle long-exposure images with various forms of blur.
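The effect of the DRP round trip can be sketched without the learned layers: average-pooling by a factor of 16 and upsampling back discards fine detail while preserving coarse brightness. A minimal numpy illustration (not the patent's exact convolutional module):

```python
import numpy as np

def detail_removal(feat: np.ndarray, factor: int = 16) -> np.ndarray:
    """Sketch of the DRP idea: average-pool by `factor`, then
    nearest-neighbour upsample back to the original size, keeping
    only coarse brightness structure."""
    h, w = feat.shape
    assert h % factor == 0 and w % factor == 0
    pooled = feat.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(pooled, factor, axis=0), factor, axis=1)

# A fine checkerboard (pure detail) collapses to its mean brightness.
detail = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
smooth = detail_removal(detail)
```

High-frequency content (the alternating 0/1 pattern) is averaged away, while the overall brightness level survives, which is exactly the information the long-exposure branch is meant to contribute.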
To align the brightness feature F_long^i of the long-exposure image at each layer with the detail feature F_short^i of the short-exposure image and enable effective feature interaction, a feature alignment module S2L is added to the model. Its structure is shown in Figure 2. The short-exposure feature F_short^i first passes through a convolution layer to produce an attention map A_i; this map is then applied to the long-exposure feature F_long^i in a soft-threshold filtering operation, yielding A_i ⊙ F_long^i, where "⊙" denotes element-wise multiplication. This operation selectively exploits, in the spatial dimension, the features extracted from the long-exposure image, effectively mitigating the influence of interference in the long-exposure image on the method. Afterwards, A_i ⊙ F_long^i and F_short^i are jointly downsampled and passed into a convolution layer to predict the next long-exposure feature F_long^(i+1), while F_short^i is downsampled separately and passed into a convolution layer to predict the next short-exposure feature F_short^(i+1). Each successive feature has half the spatial size and twice the channel count of the previous one.
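The core of the S2L step is the attention-gated, element-wise filtering of the long-exposure feature. A minimal numpy sketch, in which a scalar weight `w` stands in for the learned convolution (an assumption for illustration) and a sigmoid keeps the attention map in (0, 1):

```python
import numpy as np

def s2l_gate(f_short: np.ndarray, f_long: np.ndarray, w: float = 1.0) -> np.ndarray:
    """Produce an attention map A_i from the short-exposure feature and
    apply it to the long-exposure feature element-wise, i.e. the
    soft-threshold filter A_i ⊙ F_long."""
    attention = 1.0 / (1.0 + np.exp(-w * f_short))  # A_i in (0, 1)
    return attention * f_long                        # A_i ⊙ F_long

f_short = np.array([[4.0, -4.0],
                    [0.0,  4.0]])
f_long = np.ones((2, 2))
gated = s2l_gate(f_short, f_long)
```

Where the short-exposure feature is strongly positive the long-exposure information passes nearly unchanged; where it is strongly negative the long-exposure contribution is suppressed, which is the spatially selective behavior described above.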
After (M-1) feature alignment modules, multi-scale short-exposure features F_short^1, ..., F_short^M and long-exposure features F_long^1, ..., F_long^M are obtained, where M is an integer greater than 2. The model then decodes them from feature space back to image space in a feature decoding stage. In this stage the guidance runs in the opposite direction: the long-exposure features guide the decoding of the short-exposure features, because brightness features are needed to guide the enhancement of the short-exposure image.
Analogous to the feature alignment module S2L, a brightening module L2S is also added to the model; its structure is shown in Figure 3. The L2S module takes as input the long-exposure feature and short-exposure feature of the previous scale, together with the encoder-stage long- and short-exposure features of matching size connected via skip connections. More specifically, the long-exposure feature is first concatenated with the short-exposure feature; the concatenated feature is upsampled, joined with the skip-connected short-exposure feature, and passed through a convolution layer to obtain the next-scale short-exposure feature. The long-exposure feature is upsampled separately, joined with the skip-connected long-exposure feature, and passed through a convolution layer to obtain the next-scale long-exposure feature. Compared with the previous scale, each next-scale feature has twice the spatial size and half the channel count.
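One L2S decoding step reduces to a pattern of concatenation, upsampling, and skip joining. The numpy sketch below uses a (channels, height, width) layout and omits the convolution layers that would halve the channel count after each concatenation; the shapes are illustrative assumptions, not the patent's actual channel widths:

```python
import numpy as np

def up2(feat: np.ndarray) -> np.ndarray:
    """Nearest-neighbour 2x spatial upsampling of a (C, H, W) feature."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def l2s_step(f_long, f_short, skip_long, skip_short):
    """Sketch of one brightening step: concat(long, short) -> upsample ->
    join the skip-connected short feature (this tensor would feed the conv
    predicting the next short-exposure feature); the long feature is
    upsampled separately and joined with the skip-connected long feature
    (feeding the conv predicting the next long-exposure feature)."""
    guided = np.concatenate([f_long, f_short], axis=0)
    to_short_conv = np.concatenate([up2(guided), skip_short], axis=0)
    to_long_conv = np.concatenate([up2(f_long), skip_long], axis=0)
    return to_long_conv, to_short_conv

c, h = 8, 4
f_long, f_short = np.zeros((c, h, h)), np.zeros((c, h, h))
skip_long = np.zeros((c // 2, 2 * h, 2 * h))
skip_short = np.zeros((c // 2, 2 * h, 2 * h))
nxt_long, nxt_short = l2s_step(f_long, f_short, skip_long, skip_short)
```

The spatial size doubles at each step, matching the description; in the real network the trailing convolutions would then reduce the concatenated channels to half the previous scale's count.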
Step 3: Train the model on the constructed dataset S, taking the output of the short-exposure decoding module as the optimization target I_normal and the output of the long-exposure decoding module as the auxiliary output I_assist. The total loss function of the long-exposure-compensated low-light image enhancement model is:
L = L_rec + λ_SSIM·L_SSIM + λ_LPIPS·L_LPIPS + λ_a·L_a
where λ_SSIM, λ_LPIPS, and λ_a are weight terms, typically set to λ_SSIM = 0.4, λ_LPIPS = 1, and λ_a = 1. The model is trained with a batch size of 16 using the Adam optimizer, with an initial learning rate of 1×10^-4, optimizer hyperparameters β_1 = 0.9 and β_2 = 0.999, and a weight decay of 1×10^-4. In addition, to avoid gradient explosion, gradient values are truncated to the interval [-0.1, 0.1] during backpropagation. During training, 256×256-pixel patches are randomly cropped and a two-stage strategy is used: the first stage trains for 1.5×10^5 iterations without the attention mechanism of the feature alignment module S2L; the attention mechanism is then added and training continues for 3×10^4 iterations with an initial learning rate of 1×10^-5.
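The weighted loss combination and the gradient clipping described above are straightforward to express. The sketch below uses the stated weights (λ_SSIM = 0.4, λ_LPIPS = 1, λ_a = 1) and clipping bound, with the individual loss terms assumed to be precomputed scalars:

```python
import numpy as np

def total_loss(l_rec, l_ssim, l_lpips, l_a,
               lam_ssim=0.4, lam_lpips=1.0, lam_a=1.0):
    """L = L_rec + λ_SSIM·L_SSIM + λ_LPIPS·L_LPIPS + λ_a·L_a."""
    return l_rec + lam_ssim * l_ssim + lam_lpips * l_lpips + lam_a * l_a

def clip_gradients(grads, bound=0.1):
    """Truncate gradient values to [-bound, bound] to avoid explosion,
    as done during backpropagation in the training recipe above."""
    return [np.clip(g, -bound, bound) for g in grads]

loss = total_loss(1.0, 0.5, 0.2, 0.3)
clipped = clip_gradients([np.array([-0.5, 0.05, 3.0])])
```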
1) L_rec is the mean absolute error loss between the optimization target I_normal and the normal-light ground truth I_GT:
L_rec = ‖I_normal - I_GT‖,
2) L_SSIM is the structural similarity loss between the optimization target I_normal and the normal-light ground truth I_GT:
L_SSIM = 1 - SSIM(I_normal, I_GT),
where SSIM(x, y) denotes the structural similarity of two images x and y, computed as follows:
SSIM(x, y) = ((2μ_x·μ_y + c_1)(2σ_xy + c_2)) / ((μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2))
where μ_x is the mean of x, μ_y is the mean of y, σ_x^2 is the variance of x, σ_y^2 is the variance of y, and σ_xy is the covariance of x and y. c_1 = (k_1·L)^2 and c_2 = (k_2·L)^2 are constants used to maintain stability, L is the dynamic range of the pixel values, k_1 = 0.01, and k_2 = 0.03.
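The SSIM formula can be applied over a whole image in one shot. Standard implementations average the statistic over local (typically Gaussian-weighted) windows, but a single global window already illustrates the computation:

```python
import numpy as np

def ssim_global(x, y, L=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM following the formula above. Production SSIM
    averages this statistic over local windows; this sketch applies it
    once over the whole image."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

x = np.linspace(0.0, 1.0, 64).reshape(8, 8)
perfect = ssim_global(x, x)          # identical images score 1
degraded = ssim_global(x, 1.0 - x)   # anti-correlated images score lower
```

The loss 1 - SSIM(I_normal, I_GT) is therefore zero for a perfect reconstruction and grows as structural agreement degrades.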
3) L_LPIPS is the learned perceptual image patch similarity loss function:
L_LPIPS = Σ_l (1 / (H_l·W_l)) · ‖φ_l(I_normal) - φ_l(I_GT)‖_2^2
The image features used to compute L_LPIPS are extracted with a VGG network pretrained on ImageNet: the network output I_normal and the normal-light image I_GT of the pair are each fed into the pretrained VGG model to obtain their features. Here φ_l(I) denotes the layer-l feature of image I extracted by the VGG network, and H_l and W_l denote the width and height of φ_l(I), respectively. Together, the three loss functions above constrain the brightening result in terms of absolute error, image structure, and perceptual similarity, so that the result matches user expectations.
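With the VGG extractor treated as given, the per-layer comparison reduces to a squared difference of channel-normalized feature maps, averaged over the H_l × W_l grid and summed over layers. In the numpy sketch below, random arrays stand in for real VGG features, and the learned per-channel weights of full LPIPS are omitted (both are stated assumptions):

```python
import numpy as np

def lpips_like(feats_x, feats_y, eps=1e-10):
    """Sum over layers l of the mean (over the H_l x W_l grid) squared
    distance between channel-normalized feature maps, each shaped
    (C_l, H_l, W_l)."""
    total = 0.0
    for fx, fy in zip(feats_x, feats_y):
        fx = fx / (np.linalg.norm(fx, axis=0, keepdims=True) + eps)
        fy = fy / (np.linalg.norm(fy, axis=0, keepdims=True) + eps)
        total += ((fx - fy) ** 2).sum(axis=0).mean()
    return total

rng = np.random.default_rng(0)
feats = [rng.normal(size=(16, 8, 8)), rng.normal(size=(32, 4, 4))]
same = lpips_like(feats, feats)                 # identical features
other = [rng.normal(size=f.shape) for f in feats]
diff = lpips_like(feats, other)                 # different features
```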
4) L_a is the mean absolute error loss between the auxiliary output I_assist and the normal-light ground truth I_GT:
L_a = ‖I_assist - I_GT‖.
This loss function provides a shortcut for parameter optimization within the network, making parameter optimization during training more efficient.
Step 4: Evaluate the trained low-light enhancement model on the real-capture dataset R.
Step 5: In the inference stage, input the short-exposure image to be brightened and the corresponding blurred long-exposure image; the model outputs the desired low-light enhancement result.
In practical applications, to capture images in a poorly lit indoor or nighttime outdoor environment, one can first take a short-exposure image with the camera's short-exposure mode (the low-light image shown in Figure 4(1)) and then a long-exposure image with the camera's long-exposure mode. The short-exposure image suffers from insufficient brightness, while the long-exposure image suffers from blur (the long-exposure blurred image shown in Figure 4(2)). With the technical solution proposed by the present invention, inputting the short-exposure image to be brightened together with the corresponding blurred long-exposure image yields a clear low-light enhancement result (the enhanced image shown in Figure 4(3)).
The above embodiments merely illustrate the technical solution of the present invention and do not limit it. Those of ordinary skill in the art may modify or equivalently substitute the technical solution without departing from the spirit and scope of the present invention; the scope of protection shall be determined by the claims.
Claims (10)
- A low-light image enhancement method using long-exposure compensation, comprising the steps of: 1) collecting a low-light training dataset, wherein each training sample in the dataset comprises a low-light image and a normal-light image of the same scene, and generating from each training sample a corresponding group of short-exposure, long-exposure, and ground-truth images to obtain a synthetic dataset S; 2) training a low-light enhancement model with the synthetic dataset S, the model comprising M-1 feature alignment modules and M-1 brightening modules, wherein, for the long-exposure image I_long and short-exposure image I_short of the same group of images in S, the model maps I_long and I_short separately into feature space to obtain the corresponding short-exposure feature F_short^1 and long-exposure feature F_long^1, which are input into the first feature alignment module; 3) aligning, by the i-th feature alignment module, the input i-th-scale long-exposure feature F_long^i with the i-th-scale short-exposure feature F_short^i, wherein the i-th feature alignment module convolves F_short^i to obtain an attention map A_i, then applies A_i to F_long^i in a soft-threshold filtering operation to obtain A_i ⊙ F_long^i, where "⊙" denotes element-wise multiplication; A_i ⊙ F_long^i and F_short^i are then jointly downsampled and passed into a convolution layer to predict the (i+1)-th-scale long-exposure feature F_long^(i+1), while F_short^i is downsampled separately and passed into a convolution layer to predict the (i+1)-th-scale short-exposure feature F_short^(i+1); the M-th-scale long-exposure feature F_long^M and M-th-scale short-exposure feature F_short^M predicted by the (M-1)-th feature alignment module are concatenated to serve as the (M+1)-th-scale long-exposure feature F_long^(M+1) and (M+1)-th-scale short-exposure feature F_short^(M+1), where i = 1 to M-1; 4) concatenating, by the i-th brightening module, the (M+i)-th-scale long-exposure feature F_long^(M+i) and the (M+i)-th-scale short-exposure feature F_short^(M+i), upsampling the concatenated feature, connecting the upsampled feature with the (M-i)-th-scale short-exposure feature F_short^(M-i), and passing the result through a convolution layer to obtain the (M+i+1)-th-scale short-exposure feature F_short^(M+i+1); upsampling F_long^(M+i) separately, connecting the result with the (M-i)-th-scale long-exposure feature F_long^(M-i), and passing it through a convolution layer to obtain the (M+i+1)-th-scale long-exposure feature F_long^(M+i+1); and optimizing the low-light enhancement model with the 2M-th-scale short-exposure feature output by the (M-1)-th brightening module as the optimization target I_normal and the 2M-th-scale long-exposure feature as the auxiliary output I_assist, wherein the total loss function for training and optimizing the model is L = L_rec + λ_SSIM·L_SSIM + λ_LPIPS·L_LPIPS + λ_a·L_a, in which λ_SSIM, λ_LPIPS, and λ_a are weight terms, L_rec is the mean absolute error loss between the optimization target I_normal and the normal-light ground truth I_GT, L_SSIM is the structural similarity loss between I_normal and I_GT, L_LPIPS is the learned perceptual image patch similarity loss, and L_a is the mean absolute error loss between the auxiliary output I_assist and I_GT; 5) inputting the short-exposure image to be brightened and the corresponding blurred long-exposure image into the trained low-light enhancement model to obtain the corresponding low-light-enhanced image.
- The method according to claim 1, wherein the low-light enhancement model further comprises a detail removal module for removing the detail features of the long-exposure image I_long before mapping it into feature space to obtain the long-exposure feature F_long^1.
- The method according to claim 1, further comprising: acquiring a real-capture dataset R, wherein each group of images in R comprises three images captured of the same scene, namely a short-exposure image, a long-exposure image, and a ground-truth illumination image; and evaluating the trained low-light enhancement model on the real-capture dataset R.
- The method according to claim 1, 2, or 3, wherein the long-exposure images in the synthetic dataset S are synthesized from the normal-light images of the training samples; the short-exposure images in S are the low-light images of the training samples, and the ground-truth images in S are the normal-light images of the training samples.
- The method according to claim 1, 2, or 3, wherein the long-exposure images in the synthetic dataset S are obtained by processing the normal-light images of the training samples with a blur-kernel-space model.
- The method according to claim 1, 2, or 3, wherein L_rec = ‖I_normal - I_GT‖ and L_a = ‖I_assist - I_GT‖.
- The method according to claim 1, wherein L_SSIM = 1 - SSIM(I_normal, I_GT), where SSIM(x, y) = ((2μ_x·μ_y + c_1)(2σ_xy + c_2)) / ((μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2)) denotes the structural similarity of two images x and y; μ_x is the mean of x, μ_y is the mean of y, σ_x^2 is the variance of x, σ_y^2 is the variance of y, σ_xy is the covariance of x and y, and c_1 and c_2 are constants used to maintain stability.
- The method according to claim 1, wherein L_LPIPS = Σ_l (1 / (H_l·W_l)) · ‖φ_l(I_normal) - φ_l(I_GT)‖_2^2, where φ_l(I_normal) denotes the layer-l feature of the image I_normal extracted by a VGG network, and H_l and W_l denote the width and height of φ_l(I_normal), respectively.
- A server, comprising a memory and a processor, the memory storing a computer program configured to be executed by the processor, the computer program comprising instructions for performing the steps of the method of any one of claims 1 to 8.
- A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 8.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210651629.7A CN115240022A (en) | 2022-06-09 | 2022-06-09 | Low-illumination image enhancement method using long exposure compensation |
CN202210651629.7 | 2022-06-09 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023236445A1 true WO2023236445A1 (en) | 2023-12-14 |
Family
ID=83669630
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/131018 WO2023236445A1 (en) | 2022-06-09 | 2022-11-10 | Low-illumination image enhancement method using long-exposure compensation |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115240022A (en) |
WO (1) | WO2023236445A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115240022A (en) * | 2022-06-09 | 2022-10-25 | 北京大学 | Low-illumination image enhancement method using long exposure compensation |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190333198A1 (en) * | 2018-04-25 | 2019-10-31 | Adobe Inc. | Training and utilizing an image exposure transformation neural network to generate a long-exposure image from a single short-exposure image |
CN111064904A (en) * | 2019-12-26 | 2020-04-24 | 深圳深知未来智能有限公司 | Dark light image enhancement method |
CN111798400A (en) * | 2020-07-20 | 2020-10-20 | 福州大学 | Non-reference low-illumination image enhancement method and system based on generation countermeasure network |
CN111915526A (en) * | 2020-08-05 | 2020-11-10 | 湖北工业大学 | Photographing method based on brightness attention mechanism low-illumination image enhancement algorithm |
CN115240022A (en) * | 2022-06-09 | 2022-10-25 | 北京大学 | Low-illumination image enhancement method using long exposure compensation |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117499779A (en) * | 2023-12-27 | 2024-02-02 | 荣耀终端有限公司 | Image preview method, device and storage medium |
CN117499779B (en) * | 2023-12-27 | 2024-05-10 | 荣耀终端有限公司 | Image preview method, device and storage medium |
CN117635478A (en) * | 2024-01-23 | 2024-03-01 | 中国科学技术大学 | Low-light image enhancement method based on spatial channel attention |
CN117635478B (en) * | 2024-01-23 | 2024-05-17 | 中国科学技术大学 | Low-light image enhancement method based on spatial channel attention |
CN117611486A (en) * | 2024-01-24 | 2024-02-27 | 深圳大学 | Irregular self-supervision low-light image enhancement method |
CN117611486B (en) * | 2024-01-24 | 2024-04-02 | 深圳大学 | Irregular self-supervision low-light image enhancement method |
Also Published As
Publication number | Publication date |
---|---|
CN115240022A (en) | 2022-10-25 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22945574 Country of ref document: EP Kind code of ref document: A1 |