CN115880177A - Full-resolution low-light image enhancement method for aggregating context and enhancing details

Publication number: CN115880177A
Application number: CN202211600774.9A
Applicant and current assignee: Fuzhou University
Inventors: 牛玉贞, 林晓锋, 兰杰
Original language: Chinese (zh)
Legal status: Pending
Classification: Image Processing

Abstract

The invention provides a full-resolution low-light image enhancement method that aggregates context and enhances details, comprising the following steps: performing data preprocessing, including data pairing, random cropping, and data augmentation, to obtain a training data set; designing a full-resolution low-light image enhancement network that aggregates context and enhances details, the network consisting of a full-resolution detail extraction module, a frequency-spatial context information attention module, and a feature aggregation and enhancement module; designing a loss function to guide the parameter optimization of the network designed in step B; and training the network of step B with the training data set obtained in step A until it converges to a Nash equilibrium, obtaining a trained full-resolution low-light image enhancement model that aggregates context and enhances details. The invention can enhance low-light images and address problems such as loss of detail, color distortion, and insufficient brightness.

Description

Full-resolution low-light image enhancement method for aggregating context and enhancing details

Technical Field

The invention belongs to the technical field of image processing and computer vision, and in particular relates to a full-resolution low-light image enhancement method that aggregates context and enhances details.

Background

Low-light image enhancement is an important branch of image enhancement. Owing to environmental factors such as insufficient lighting, non-uniform lighting, and backlight, as well as interference during camera imaging, low-light images suffer from degradations such as low brightness, heavy noise, and loss of color and detail information. Low-light image enhancement algorithms therefore need to process such images so that, while useful detail information is retained, useful information such as color is restored and noise is removed, yielding normal-illumination images that satisfy human visual perception or are better suited to downstream analysis tasks.

Low-light image enhancement technology has broad application prospects. It can improve visibility for nighttime inspection or surveillance. Cameras are installed in public places for better monitoring and recording, yet images captured by surveillance cameras at night are mostly dark and unclear under poor lighting and cannot provide strong evidence for some events; after processing with low-light image enhancement, the clearer the video, the stronger the support it provides for judging and deciding on an event. Low-light image enhancement can also improve the quality of human visual perception: when the ambient lighting of a captured picture is unsatisfactory, it can be quickly brought to a satisfactory level by this technology without re-shooting the scene. In addition, low-light image enhancement can improve the performance of downstream tasks. Many downstream technologies, such as face recognition and human keypoint detection, place relatively high demands on input image quality; whether an algorithm can correctly locate faces and bodies depends on whether the input image is sufficiently clear. When the input image is dark or blurred, recognizing faces or body contours is extremely challenging, and low-light image enhancement can improve image quality and thereby significantly improve recognition and detection accuracy.

Early methods were mainly based on histogram equalization and Retinex theory. Histogram equalization achieves enhancement by expanding the dynamic range of image pixels; it improves contrast well, but its lack of local consideration easily leads to over- and under-exposure. Retinex theory holds that an image can be described as the product of a reflectance component R and an illumination component I; it requires prior knowledge, and poor priors produce unrealistic enhancement with severe color shifts and amplified noise. In recent years, many deep learning methods have been proposed. Some combine Retinex theory with convolutional neural networks and directly take the reflectance component as the enhanced image, causing loss of detail and color deviation; some directly transfer mainstream network architectures from other fields to the low-light task without considering the characteristics of low-light images; still others independently address only some aspects of the insufficient-illumination, heavy-noise, and color/detail-loss problems of low-light images, ignoring the correlations between these problems. However, low-light images contain both low-level information such as detail and noise and high-level information such as color and scene, and the two kinds of features are not unrelated: processing scene and illumination helps recover detail, and recovering detail in turn promotes restoration of the overall scene.

Existing methods give low-level information such as detail and noise and high-level information such as color and scene equal status and process them independently. Low-light image enhancement, however, is a relatively fine-grained image processing task that requires paying closer attention to detail first and then fusing in the processing of high-level information such as color and scene. Moreover, a connection should be established between the two kinds of features so that they are enhanced jointly rather than processed independently.

Summary of the Invention

In view of the defects and deficiencies of the prior art, the object of the present invention is to provide a full-resolution low-light image enhancement method that aggregates context and enhances details. The method aggregates detail features and context features for collaborative enhancement, which helps to significantly improve the performance of low-light image enhancement.

The present invention designs a full-resolution low-light image enhancement method that aggregates context and enhances details. First, a full-resolution detail extraction module is designed to extract detail features; then a frequency-spatial context information attention module is designed to extract contextual features such as color and scene in the frequency and spatial domains, with an attention module learning the importance of the frequency-domain and spatial-domain features; finally, a feature aggregation and enhancement module is designed to aggregate the detail features and context features and enhance them collaboratively.

The method comprises: performing data preprocessing, including data pairing, random cropping, and data augmentation, to obtain a training data set; designing a full-resolution low-light image enhancement network that aggregates context and enhances details, consisting of a full-resolution detail extraction module, a frequency-spatial context information attention module, and a feature aggregation and enhancement module; designing a loss function to guide the parameter optimization of the network designed in step B; training the network of step B with the training data set obtained in step A until it converges to a Nash equilibrium, obtaining a trained full-resolution low-light image enhancement model that aggregates context and enhances details; and feeding a low-light image under test into the trained model to output an enhanced normal-illumination image. The invention can enhance low-light images and address problems such as loss of detail, color distortion, and insufficient brightness.

The technical solution adopted by the present invention to solve the above technical problem is:

A full-resolution low-light image enhancement method that aggregates context and enhances details, characterized by:

Step A: Perform data preprocessing, including data pairing, random cropping, and data augmentation, to obtain a training data set;

Step B: Design a full-resolution low-light image enhancement network that aggregates context and enhances details, comprising a full-resolution detail extraction module, a frequency-spatial context information attention module, and a feature aggregation and enhancement module;

Step C: Design a loss function to guide the parameter optimization of the network designed in step B;

Step D: Train the network of step B with the training data set obtained in step A until it converges to a Nash equilibrium, obtaining a trained full-resolution low-light image enhancement model that aggregates context and enhances details;

Step E: Feed the low-light image under test into the trained model and output the enhanced normal-illumination image.

Further, step A is implemented as follows:

Step A1: Pair each low-light image with its corresponding label image;

Step A2: Randomly crop each low-light image of size h×w×3 to size p×p×3, applying the same random crop to its corresponding label image, where h and w are the height and width of the low-light and label images and p is the height and width of the cropped image;

Step A3: Randomly apply one of the following eight augmentation modes to each training image pair, as sketched below: keep the original image; flip vertically; rotate 90 degrees; rotate 90 degrees then flip vertically; rotate 180 degrees; rotate 180 degrees then flip vertically; rotate 270 degrees; rotate 270 degrees then flip vertically.
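For illustration, the paired cropping of step A2 and the eight augmentation modes of step A3 could be sketched as follows (a minimal NumPy sketch; the function names and the way the modes are enumerated are assumptions, not part of the patent):

```python
import numpy as np

def paired_random_crop(low: np.ndarray, label: np.ndarray, p: int):
    """Crop the same random p x p window from a low-light image and its label (step A2)."""
    h, w, _ = low.shape
    top = np.random.randint(0, h - p + 1)
    left = np.random.randint(0, w - p + 1)
    return low[top:top + p, left:left + p], label[top:top + p, left:left + p]

def paired_augment(low: np.ndarray, label: np.ndarray):
    """Apply one of the 8 modes of step A3 to both images: rotation by
    0/90/180/270 degrees, each optionally followed by a vertical flip."""
    mode = np.random.randint(8)
    k, flip = mode // 2, mode % 2 == 1   # k quarter-turns, then optional flip
    out = []
    for img in (low, label):
        img = np.rot90(img, k)
        if flip:
            img = np.flipud(img)
        out.append(np.ascontiguousarray(img))
    return out[0], out[1]
```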

Further, step B is implemented as follows:

Step B1: Construct the full-resolution detail extraction module, composed of a shallow feature extraction submodule, a CBAM-based attention submodule, and a frequency-domain transform submodule, and use the designed network to extract detail features;

Step B2: Design the frequency-spatial context information attention module, composed of a multi-scale feature extraction submodule and a frequency-spatial feature fusion submodule, and use the designed network to extract context features;

Step B3: Design the feature aggregation and enhancement module, composed of a feature-aggregation convolution block and a collaborative enhancement submodule, to aggregate the detail features extracted in step B1 and the context features extracted in step B2 and enhance the two kinds of features jointly;

Step B4: Design the full-resolution low-light image enhancement network that aggregates context and enhances details, comprising the full-resolution detail extraction module, the frequency-spatial context information attention module, and the feature aggregation and enhancement module.

Further, step B1 is implemented as follows:

Step B11: Design the shallow feature extraction submodule. The input is a low-light image I. A 3×3 convolution first produces the initial feature map F_ori, which then enters three branches: the first branch contains one 3×3 convolution, the second contains two serial 3×3 convolutions, and the third contains three serial 3×3 convolutions. The branch outputs F_B1, F_B2, and F_B3 are concatenated along the channel dimension and passed through a 3×3 convolution to give the submodule output F_low. The formulas are as follows:

F_ori = Conv3(I)

F_B1 = Conv3(F_ori)

F_B2 = Conv3(Conv3(F_ori))

F_B3 = Conv3(Conv3(Conv3(F_ori)))

F_low = Conv3(Concat(F_B1, F_B2, F_B3))

where Conv3 is a 3×3 convolution and Concat is concatenation along the channel dimension;
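As a concrete reading of step B11, a minimal PyTorch sketch of the three-branch submodule might look as follows; the channel width c is an assumption, since the patent does not fix it:

```python
import torch
import torch.nn as nn

class ShallowFeatureExtraction(nn.Module):
    """Step-B11 sketch: an initial 3x3 conv, three parallel branches of
    1/2/3 serial 3x3 convs, channel concatenation, and a fusing 3x3 conv."""
    def __init__(self, c: int = 32):
        super().__init__()
        def conv3(i, o):
            return nn.Conv2d(i, o, kernel_size=3, padding=1)
        self.head = conv3(3, c)                                         # F_ori
        self.b1 = conv3(c, c)                                           # F_B1
        self.b2 = nn.Sequential(conv3(c, c), conv3(c, c))               # F_B2
        self.b3 = nn.Sequential(conv3(c, c), conv3(c, c), conv3(c, c))  # F_B3
        self.fuse = conv3(3 * c, c)                                     # F_low

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f_ori = self.head(x)
        branches = [self.b1(f_ori), self.b2(f_ori), self.b3(f_ori)]
        return self.fuse(torch.cat(branches, dim=1))
```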

Step B12: Construct the CBAM-based attention submodule, composed of channel attention Att_c and spatial attention Att_s connected in series. Its input is the feature map F_low obtained in step B11, and its output is the feature map F_spa:

F_spa = Att_s(Att_c(F_low))

where Att_c is attention over the channel dimension and Att_s is attention over the spatial dimension;

Step B13: Design the frequency-domain transform submodule. Its input is the feature map F_spa obtained in step B12. A Fourier transform converts it from the spatial domain to the frequency domain; the result passes successively through a 3×3 convolution, a normalization layer, and a ReLU activation, and an inverse Fourier transform converts it back to the spatial domain, giving the output feature map F_fre:

F_fre = idft(ReLU(BN(Conv3(dft(F_spa)))))

where dft is the Fourier transform, idft is the inverse Fourier transform, ReLU is the ReLU activation function, BN is a batch normalization layer, and Conv3 is a 3×3 convolution;
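A hedged PyTorch sketch of the step-B13 transform follows. The patent does not specify how the complex spectrum is fed to a real-valued 3×3 convolution; stacking the real and imaginary parts as channels, as below, is one common choice and is an assumption here:

```python
import torch
import torch.nn as nn

class FrequencyTransform(nn.Module):
    """Step-B13 sketch: dft -> 3x3 conv + BN + ReLU -> idft, with the
    spectrum's real and imaginary parts stacked as channels."""
    def __init__(self, c: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2 * c, 2 * c, kernel_size=3, padding=1),
            nn.BatchNorm2d(2 * c),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        spec = torch.fft.rfft2(x, norm="ortho")           # dft: spatial -> frequency
        z = torch.cat([spec.real, spec.imag], dim=1)
        z = self.body(z)                                  # Conv3 + BN + ReLU on the spectrum
        re, im = torch.chunk(z, 2, dim=1)
        return torch.fft.irfft2(torch.complex(re, im), s=(h, w), norm="ortho")  # idft
```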

Step B14: Construct the full-resolution detail extraction module from the shallow feature extraction submodule, the CBAM-based attention submodule, and the frequency-domain transform submodule. Let the input be the low-light image I processed in step A; processing it successively with the three submodules yields the feature maps F_low, F_spa, and F_fre.

Further, step B2 is implemented as follows:

Step B21: Design the multi-scale feature extraction submodule. Denote its input feature map as F ∈ R^(H×W×C), where H, W, and C are the height, width, and number of channels of F. F first passes through an average pooling layer with kernel size 2×2 and stride 2, and then successively through a 1×1 convolution, a ReLU activation, a 1×1 convolution, and a ReLU activation for dimensionality reduction, giving the intermediate feature map F_1. The flow then splits into two branches. The upper branch further reduces dimensionality with a 1×1 convolution and passes through an upsampling layer, giving the upper-branch output F_11 ∈ R^(H×W×a), where a is the number of channels after dimensionality reduction. The other branch passes through another average pooling layer with kernel size 2×2 and stride 2 and then successively through a 1×1 convolution, a ReLU activation, a 1×1 convolution, and a ReLU activation for dimensionality reduction, giving the intermediate feature map F_121; F_121 then passes successively through an upsampling layer, a 1×1 convolution, a ReLU activation, and another upsampling layer, giving the lower-branch output F_12 ∈ R^(H×W×a). F_11 and F_12 are added, concatenated with F along the channel dimension, passed through an SE module, and finally through a 1×1 convolution that adjusts the channels, giving the submodule output F_m. The formulas are as follows:

F_1 = ReLU(Conv1(ReLU(Conv1(Avgpooling(F)))))

F_11 = Upsampling(Conv1(F_1))

F_121 = ReLU(Conv1(ReLU(Conv1(Avgpooling(F_1)))))

F_12 = Upsampling(ReLU(Conv1(Upsampling(F_121))))

F_m = Conv1(SE(Concat(F_11 + F_12, F)))

where ReLU is the activation function, Conv1 is a 1×1 convolution, SE(·) is the SE module, Avgpooling is an average pooling layer with kernel size 2×2 and stride 2, Upsampling is 2× nearest-neighbor upsampling, and Concat is concatenation along the channel dimension;
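The two-branch pooling pyramid of step B21 could be sketched as follows (PyTorch; the input width c, the reduced width a, and the SE reduction ratio are assumptions):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation block (the reduction ratio r is an assumption)."""
    def __init__(self, c: int, r: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c, c // r, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c // r, c, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)

class MultiScaleExtraction(nn.Module):
    """Step-B21 sketch: two pooled branches restored to full resolution,
    added, concatenated with the input, and gated by an SE module."""
    def __init__(self, c: int = 32, a: int = 16):
        super().__init__()
        def pool():
            return nn.AvgPool2d(kernel_size=2, stride=2)
        def reduce_dim(i, o):
            return nn.Sequential(nn.Conv2d(i, o, 1), nn.ReLU(inplace=True),
                                 nn.Conv2d(o, o, 1), nn.ReLU(inplace=True))
        def up():
            return nn.Upsample(scale_factor=2, mode="nearest")
        self.down1 = nn.Sequential(pool(), reduce_dim(c, a))               # F_1
        self.upper = nn.Sequential(nn.Conv2d(a, a, 1), up())               # F_11
        self.down2 = nn.Sequential(pool(), reduce_dim(a, a))               # F_121
        self.lower = nn.Sequential(up(), nn.Conv2d(a, a, 1),
                                   nn.ReLU(inplace=True), up())            # F_12
        self.se = SEBlock(a + c)
        self.out = nn.Conv2d(a + c, c, 1)                                  # F_m

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        f1 = self.down1(f)
        f11 = self.upper(f1)
        f12 = self.lower(self.down2(f1))
        return self.out(self.se(torch.cat([f11 + f12, f], dim=1)))
```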

Step B22: Design the frequency-spatial feature fusion submodule, composed of channel attention and spatial attention connected in series;

Step B23: Design the frequency-spatial context information attention module, composed of three multi-scale feature extraction submodules and the frequency-spatial feature fusion submodule. The inputs of the three multi-scale feature extraction submodules are the three feature maps F_low, F_spa, and F_fre obtained in step B1; after each is processed by the multi-scale feature extraction submodule designed in step B21, the feature maps F_low_m, F_spa_m, and F_fre_m carrying context information are obtained, and these then pass through the frequency-spatial feature fusion submodule designed in step B22 to give the module output F_f.

Further, step B22 is implemented as follows:

Step B221: Design the channel attention. Its inputs are the feature maps F_low_m, F_spa_m, F_fre_m ∈ R^(H×W×C) obtained in step B23. Each of the three feature maps undergoes global average pooling over the spatial dimensions to give a vector of size 1×1×C, and the three vectors are concatenated along the channel dimension to give the intermediate feature map F_c ∈ R^(1×1×3C). F_c passes successively through a 1×1 convolution, a ReLU activation, a 1×1 convolution, a ReLU activation, and a 1×1 convolution for dimensionality reduction and restoration, and then through a Sigmoid activation to give the channel weights F_W1 ∈ R^(1×1×3C). F_W1 is split along the channel dimension into three vectors F_W10, F_W11, and F_W12 of size 1×1×C, which are multiplied with the fusion submodule inputs F_low_m, F_spa_m, and F_fre_m respectively, giving the channel-attention outputs F_low_c, F_spa_c, and F_fre_c. The formulas are as follows:

F_c = Concat(Avgpooling_s(F_low_m), Avgpooling_s(F_spa_m), Avgpooling_s(F_fre_m))

F_W1 = Sigmoid(Conv1(ReLU(Conv1(ReLU(Conv1(F_c))))))

F_low_c = F_W10 × F_low_m

F_spa_c = F_W11 × F_spa_m

F_fre_c = F_W12 × F_fre_m

where Concat is concatenation along the channel dimension, Avgpooling_s is global average pooling over the spatial dimensions, ReLU is the activation function, Conv1 is a 1×1 convolution, and Sigmoid is the Sigmoid activation function;

Step B222: Design the spatial attention. Its inputs are the three feature maps F_low_c, F_spa_c, and F_fre_c obtained in step B221. Each feature map undergoes average pooling over the channel dimension to give a feature map of size H×W×1, and the three maps are concatenated along the channel dimension to give the intermediate feature map F_s ∈ R^(H×W×3). F_s passes successively through an average pooling layer with kernel size 2×2 and stride 2, a ReLU activation, and an upsampling layer, and then through a Sigmoid activation to give the spatial weights F_W2 ∈ R^(H×W×3). F_W2 is split into three feature maps F_W20, F_W21, and F_W22 of size H×W×1, which are multiplied with the spatial-attention inputs F_low_c, F_spa_c, and F_fre_c respectively, giving the spatial-attention outputs F_low_s, F_spa_s, and F_fre_s. The formulas are as follows:

F_s = Concat(Avgpooling_c(F_low_c), Avgpooling_c(F_spa_c), Avgpooling_c(F_fre_c))

F_W2 = Sigmoid(Upsampling(ReLU(Avgpooling(F_s))))

F_low_s = F_W20 × F_low_c

F_spa_s = F_W21 × F_spa_c

F_fre_s = F_W22 × F_fre_c

where Concat is concatenation along the channel dimension, Avgpooling_c is average pooling over the channel dimension, ReLU is the activation function, Sigmoid is the Sigmoid activation function, Avgpooling is an average pooling layer with kernel size 2×2 and stride 2, and Upsampling is 2× nearest-neighbor upsampling;

Step B223: Design the frequency-spatial feature fusion submodule. Its inputs are the feature maps F_low_m, F_spa_m, and F_fre_m obtained in step B23. The three feature maps first pass through the channel attention of step B221, giving F_low_c, F_spa_c, and F_fre_c, then through the spatial attention of step B222, giving F_low_s, F_spa_s, and F_fre_s; the three results are added to give the final output F_f:

F_f = F_low_s + F_spa_s + F_fre_s.
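Putting steps B221-B223 together, the fusion submodule might be sketched as below (PyTorch; the channel-reduction ratio r of the weight-producing MLP is an assumption, and the crop size p is assumed even so that the 2× pool/upsample pair in the spatial branch restores the original resolution):

```python
import torch
import torch.nn as nn

class FreqSpatialFusion(nn.Module):
    """Steps B221-B223 sketch: channel attention, then spatial attention,
    over the three streams F_low_m, F_spa_m, F_fre_m; outputs F_f."""
    def __init__(self, c: int = 32, r: int = 4):
        super().__init__()
        self.channel_mlp = nn.Sequential(            # produces F_W1 from F_c
            nn.Conv2d(3 * c, 3 * c // r, 1), nn.ReLU(inplace=True),
            nn.Conv2d(3 * c // r, 3 * c // r, 1), nn.ReLU(inplace=True),
            nn.Conv2d(3 * c // r, 3 * c, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(                # produces F_W2 from F_s
            nn.AvgPool2d(kernel_size=2, stride=2), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="nearest"), nn.Sigmoid(),
        )

    def forward(self, f_low, f_spa, f_fre):
        streams = [f_low, f_spa, f_fre]
        # B221: spatially pool each stream, concatenate, derive channel weights.
        fc = torch.cat([s.mean(dim=(2, 3), keepdim=True) for s in streams], dim=1)
        w1 = torch.chunk(self.channel_mlp(fc), 3, dim=1)
        streams = [w * s for w, s in zip(w1, streams)]
        # B222: channel-average each stream, concatenate, derive spatial weights.
        fs = torch.cat([s.mean(dim=1, keepdim=True) for s in streams], dim=1)
        w2 = torch.chunk(self.spatial(fs), 3, dim=1)
        streams = [w * s for w, s in zip(w2, streams)]
        return streams[0] + streams[1] + streams[2]  # B223: F_f
```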

Further, step B3 is implemented as follows:

Step B31: Design the feature-aggregation convolution block to fuse detail information and context information. Its inputs are the feature map F_fre obtained in step B1 and the feature map F_f obtained in step B2; the two are concatenated along the channel dimension and passed through a 3×3 convolution, giving the output feature map F_conv:

F_conv = Conv3(Concat(F_fre, F_f))

where Conv3 is a 3×3 convolution and Concat is concatenation along the channel dimension;

Step B32: Design the collaborative enhancement submodule to jointly enhance the fused detail and context information. Its input is the feature map F_conv obtained in step B31. F_conv passes successively through a 1×1 convolution, a ReLU6 activation, a dropout layer, a 1×1 convolution, and another dropout layer, and the result is added to F_conv to give the intermediate feature map F_mid; F_mid then passes through a LeakyReLU activation, is concatenated with F_conv along the channel dimension, and passes through a 3×3 convolution, giving the output feature map F_co:

F_mid = Dropout(Conv1(Dropout(ReLU6(Conv1(F_conv))))) + F_conv

F_co = Conv3(Concat(LeakyReLU(F_mid), F_conv))

where Conv1 is a 1×1 convolution, Conv3 is a 3×3 convolution, Concat is concatenation along the channel dimension, Dropout is a dropout layer, ReLU6 is the ReLU6 activation function, and LeakyReLU is the LeakyReLU activation function;
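A sketch of the step-B3 module (B31 followed by B32) in PyTorch; the channel width c, the LeakyReLU slope, and the dropout rate are assumptions:

```python
import torch
import torch.nn as nn

class AggregationEnhancement(nn.Module):
    """Step-B3 sketch: the feature-aggregation conv block (B31) followed by
    the collaborative enhancement submodule (B32)."""
    def __init__(self, c: int = 32, p_drop: float = 0.1):
        super().__init__()
        self.aggregate = nn.Conv2d(2 * c, c, kernel_size=3, padding=1)   # F_conv
        self.residual = nn.Sequential(                                   # branch of F_mid
            nn.Conv2d(c, c, 1), nn.ReLU6(inplace=True), nn.Dropout(p_drop),
            nn.Conv2d(c, c, 1), nn.Dropout(p_drop),
        )
        self.act = nn.LeakyReLU(0.2, inplace=True)
        self.out = nn.Conv2d(2 * c, c, kernel_size=3, padding=1)         # F_co

    def forward(self, f_fre: torch.Tensor, f_f: torch.Tensor) -> torch.Tensor:
        f_conv = self.aggregate(torch.cat([f_fre, f_f], dim=1))
        f_mid = self.residual(f_conv) + f_conv
        return self.out(torch.cat([self.act(f_mid), f_conv], dim=1))
```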

Step B33: Design the feature aggregation and enhancement module from the feature-aggregation convolution block and the collaborative enhancement submodule. Its inputs are the feature map F_fre obtained in step B1 and the feature map F_f obtained in step B2; the feature-aggregation convolution block produces the feature map F_conv, and the collaborative enhancement submodule then produces the feature map F_co.

Further, step B4 is implemented as follows:

Step B4: Design the full-resolution low-light image enhancement network that aggregates context and enhances details by integrating the full-resolution detail extraction module, the frequency-spatial context information attention module, and the feature aggregation and enhancement module. An input low-light image I passes through the full-resolution detail extraction module of step B1 to give the three feature maps F_low, F_spa, and F_fre, then through the frequency-spatial context information attention module to give the feature map F_f, and then through the feature aggregation and enhancement module to give the feature map F_co. Finally, F_co is concatenated with the feature map F_low of step B1 along the channel dimension and passed through a 3×3 convolution, giving the final enhanced image I_out:

I_out = Conv3(Concat(F_co, F_low))

where Conv3 is a 3×3 convolution and Concat is concatenation along the channel dimension.
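For orientation, the overall wiring of step B4 might look as follows, reusing the hypothetical module classes sketched above; the CBAM block of step B12 is elided (f_spa is taken equal to f_low), so this is a structural sketch rather than the patented network:

```python
import torch
import torch.nn as nn

class FullResolutionEnhancementNet(nn.Module):
    """Step-B4 wiring sketch, reusing the hypothetical modules defined above."""
    def __init__(self, c: int = 32):
        super().__init__()
        self.shallow = ShallowFeatureExtraction(c)        # B11 -> F_low
        # A CBAM block (B12) belongs here; it is elided in this sketch.
        self.freq = FrequencyTransform(c)                 # B13 -> F_fre
        self.context = nn.ModuleList([MultiScaleExtraction(c) for _ in range(3)])  # B21 x3
        self.fusion = FreqSpatialFusion(c)                # B22 -> F_f
        self.enhance = AggregationEnhancement(c)          # B3  -> F_co
        self.tail = nn.Conv2d(2 * c, 3, kernel_size=3, padding=1)  # final conv -> I_out

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        f_low = self.shallow(img)
        f_spa = f_low                                     # placeholder for the CBAM output
        f_fre = self.freq(f_spa)
        f_low_m = self.context[0](f_low)
        f_spa_m = self.context[1](f_spa)
        f_fre_m = self.context[2](f_fre)
        f_f = self.fusion(f_low_m, f_spa_m, f_fre_m)
        f_co = self.enhance(f_fre, f_f)
        return self.tail(torch.cat([f_co, f_low], dim=1))
```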

Further, step C is implemented as follows:

Step C: Design the loss function, composed of an L2 loss and a VGG perceptual loss. The total objective loss function of the network is:

l = ω_1||I_out - G||_2 + ω_2||Φ(I_out) - Φ(G)||_1

where Φ(·) denotes extracting Conv4-1 layer features with a VGG-16 classification model pretrained on the ImageNet data set, I_out denotes the enhanced image of the low-light image I, G denotes the label image corresponding to I, ||·||_1 denotes the L1 loss, ||·||_2 denotes the L2 loss, and ω_1 and ω_2 are weights.
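A possible PyTorch reading of the step-C objective is sketched below. Taking torchvision's pretrained VGG-16 truncated after conv4_1 (features[:18]) as Φ, using mean-reduced MSE/L1 as practical stand-ins for the norms, and the weight values are all assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class EnhancementLoss(nn.Module):
    """Step-C sketch: l = w1 * L2(I_out, G) + w2 * L1(Phi(I_out), Phi(G))."""
    def __init__(self, w1: float = 1.0, w2: float = 0.1):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        # features[:18] ends with the conv4_1 layer; ImageNet input
        # normalization is omitted here for brevity.
        self.phi = vgg.features[:18].eval()
        for p in self.phi.parameters():
            p.requires_grad_(False)
        self.w1, self.w2 = w1, w2

    def forward(self, i_out: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        l2 = F.mse_loss(i_out, g)                             # ||I_out - G||_2 term
        perceptual = F.l1_loss(self.phi(i_out), self.phi(g))  # ||Phi(I_out) - Phi(G)||_1 term
        return self.w1 * l2 + self.w2 * perceptual
```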

Further, step D is implemented as follows:

Step D1: Randomly divide the training data set obtained in step A into several batches, each containing N image pairs;

Step D2: Feed a low-light image I through the full-resolution low-light image enhancement network of step B to obtain the enhanced image I_out, and compute the loss l with the formula of step C;

Step D3: Compute the gradients of the network parameters from the loss by backpropagation, and update the parameters with the Adam optimizer;

Step D4: Repeat steps D1 to D3 batch by batch until the value of the network's objective loss function converges to a Nash equilibrium; save the network parameters to obtain the full-resolution low-light image enhancement model that aggregates context and enhances details.
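Steps D1-D4 then amount to a standard supervised training loop, sketched below with the hypothetical network and loss classes above; the batch size, learning rate, and a fixed epoch budget in place of an explicit Nash-equilibrium test are assumptions:

```python
import torch
from torch.utils.data import DataLoader

def train(model, dataset, epochs: int = 100, batch_size: int = 8, lr: float = 1e-4):
    """Minimal training loop for steps D1-D4."""
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)  # D1: random batches of N pairs
    criterion = EnhancementLoss()                                      # step-C loss (sketch above)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for low, label in loader:
            i_out = model(low)               # D2: forward pass through the network
            loss = criterion(i_out, label)   # D2: loss l
            optimizer.zero_grad()
            loss.backward()                  # D3: backpropagate gradients
            optimizer.step()                 # D3: Adam parameter update
    torch.save(model.state_dict(), "enhancement_model.pth")           # D4: save the trained model
```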

Compared with the prior art, the present invention and its preferred embodiments extract detail features at full resolution, extract context features in the frequency and spatial domains, and aggregate the two kinds of features for joint enhancement, which extracts both kinds of information better and learns the relationship between them during enhancement. A full-resolution low-light image enhancement network that aggregates context and enhances details is designed, in which the full-resolution detail extraction module, the frequency-spatial context information attention module, and the feature aggregation and enhancement module respectively extract detail features, extract frequency-spatial context features, and aggregate and collaboratively enhance the two. Unlike other methods that address the problems of low-light images independently, the present invention can better extract detail and context features and achieve collaborative enhancement.

Brief Description of the Drawings

The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments:

Fig. 1 is a flow chart of the implementation of the method of an embodiment of the present invention.

Fig. 2 is a structural diagram of the full-resolution low-light image enhancement network that aggregates context and enhances details in an embodiment of the present invention.

Fig. 3 is a structural diagram of the multi-scale feature extraction submodule in an embodiment of the present invention.

Fig. 4 is a structural diagram of the frequency-spatial feature fusion submodule in an embodiment of the present invention.

Fig. 5 is a structural diagram of the collaborative enhancement submodule in an embodiment of the present invention.

Detailed Description

To make the features and advantages of this patent more comprehensible, embodiments are described in detail below:

It should be pointed out that the following detailed description is exemplary and is intended to provide further explanation of the present application. Unless otherwise specified, all technical and scientific terms used in this specification have the same meanings as commonly understood by those of ordinary skill in the art to which this application belongs.

It should be noted that the terminology used here is only for describing specific embodiments and is not intended to limit the exemplary embodiments of the present application. As used herein, unless the context clearly indicates otherwise, the singular forms are intended to include the plural forms as well; furthermore, it should be understood that the terms "comprising" and/or "including", when used in this specification, indicate the presence of features, steps, operations, devices, components, and/or combinations thereof.

The scheme of this embodiment is further described below with reference to the accompanying drawings:

The present invention provides a full-resolution low-light image enhancement method that aggregates context and enhances details, as shown in Figs. 1-5, comprising the following steps:

Step A: Perform data preprocessing, including data pairing, random cropping, and data augmentation, to obtain a training data set;

Step B: Design a full-resolution low-light image enhancement network that aggregates context and enhances details, consisting of a full-resolution detail extraction module, a frequency-spatial context information attention module, and a feature aggregation and enhancement module;

Step C: Design a loss function to guide the parameter optimization of the network designed in step B;

Step D: Train the network of step B with the training data set obtained in step A until it converges to a Nash equilibrium, obtaining a trained full-resolution low-light image enhancement model that aggregates context and enhances details;

Step E: Feed the low-light image under test into the trained model and output the enhanced normal-illumination image.

Further, step A comprises the following steps:

Step A1: Pair each low-light image with its corresponding label image;

Step A2: Randomly crop each low-light image of size h×w×3 to size p×p×3, applying the same random crop to its corresponding label image, where h and w are the height and width of the low-light and label images and p is the height and width of the cropped image;

Step A3: Randomly apply one of the following eight augmentation modes to each training image pair: keep the original image; flip vertically; rotate 90 degrees; rotate 90 degrees then flip vertically; rotate 180 degrees; rotate 180 degrees then flip vertically; rotate 270 degrees; rotate 270 degrees then flip vertically.

Further, step B comprises the following steps:

Step B1: Construct the full-resolution detail extraction module, composed of a shallow feature extraction submodule, a CBAM-based attention submodule, and a frequency-domain transform submodule, and use the designed network to extract detail features;

Step B2: Design the frequency-spatial context information attention module, composed of a multi-scale feature extraction submodule and a frequency-spatial feature fusion submodule, and use the designed network to extract context features;

Step B3: Design the feature aggregation and enhancement module, composed of a feature-aggregation convolution block and a collaborative enhancement submodule, to aggregate the detail features extracted in step B1 and the context features extracted in step B2 and enhance the two kinds of features jointly;

Step B4: Design the full-resolution low-light image enhancement network that aggregates context and enhances details, comprising the full-resolution detail extraction module, the frequency-spatial context information attention module, and the feature aggregation and enhancement module.

Further, step B1 comprises the following steps:

Step B11: Design the shallow feature extraction submodule. The input is a low-light image I. A 3×3 convolution first produces the initial feature map F_ori, which then enters three branches: the first branch contains one 3×3 convolution, the second contains two serial 3×3 convolutions, and the third contains three serial 3×3 convolutions. The branch outputs F_B1, F_B2, and F_B3 are concatenated along the channel dimension and passed through a 3×3 convolution to give the submodule output F_low. The formulas are as follows:

F_ori = Conv3(I)

F_B1 = Conv3(F_ori)

F_B2 = Conv3(Conv3(F_ori))

F_B3 = Conv3(Conv3(Conv3(F_ori)))

F_low = Conv3(Concat(F_B1, F_B2, F_B3))

where Conv3 is a 3×3 convolution and Concat is concatenation along the channel dimension;

Step B12: Construct the CBAM-based attention submodule, composed of channel attention Att_c and spatial attention Att_s connected in series. Its input is the feature map F_low obtained in step B11, and its output is the feature map F_spa:

F_spa = Att_s(Att_c(F_low))

where Att_c is attention over the channel dimension and Att_s is attention over the spatial dimension.

Step B13: Design the frequency-domain transform submodule. Its input is the feature map F_spa obtained in step B12. A Fourier transform converts it from the spatial domain to the frequency domain; the result passes successively through a 3×3 convolution, a normalization layer, and a ReLU activation, and an inverse Fourier transform converts it back to the spatial domain, giving the output feature map F_fre:

F_fre = idft(ReLU(BN(Conv3(dft(F_spa)))))

where dft is the Fourier transform, idft is the inverse Fourier transform, ReLU is the ReLU activation function, BN is a batch normalization layer, and Conv3 is a 3×3 convolution.

Step B14: Construct the full-resolution detail extraction module from the shallow feature extraction submodule, the CBAM-based attention submodule, and the frequency-domain transform submodule. Let the input be the low-light image I processed in step A; processing it successively with the three submodules yields the feature maps F_low, F_spa, and F_fre.

Further, step B2 comprises the following steps:

Step B21: Design the multi-scale feature extraction submodule. Denote its input feature map as F ∈ R^(H×W×C), where H, W, and C are the height, width, and number of channels of F. F first passes through an average pooling layer with kernel size 2×2 and stride 2, and then successively through a 1×1 convolution, a ReLU activation, a 1×1 convolution, and a ReLU activation for dimensionality reduction, giving the intermediate feature map F_1. The flow then splits into two branches. The upper branch further reduces dimensionality with a 1×1 convolution and passes through an upsampling layer, giving the upper-branch output F_11 ∈ R^(H×W×a), where a is the number of channels after dimensionality reduction. The other branch passes through another average pooling layer with kernel size 2×2 and stride 2 and then successively through a 1×1 convolution, a ReLU activation, a 1×1 convolution, and a ReLU activation for dimensionality reduction, giving the intermediate feature map F_121; F_121 then passes successively through an upsampling layer, a 1×1 convolution, a ReLU activation, and another upsampling layer, giving the lower-branch output F_12 ∈ R^(H×W×a). F_11 and F_12 are added, concatenated with F along the channel dimension, passed through an SE module, and finally through a 1×1 convolution that adjusts the channels, giving the submodule output F_m. The formulas are as follows:

F_1 = ReLU(Conv1(ReLU(Conv1(Avgpooling(F)))))

F_11 = Upsampling(Conv1(F_1))

F_121 = ReLU(Conv1(ReLU(Conv1(Avgpooling(F_1)))))

F_12 = Upsampling(ReLU(Conv1(Upsampling(F_121))))

F_m = Conv1(SE(Concat(F_11 + F_12, F)))

where ReLU is the activation function, Conv1 is a 1×1 convolution, SE(·) is the SE module, Avgpooling is an average pooling layer with kernel size 2×2 and stride 2, Upsampling is 2× nearest-neighbor upsampling, and Concat is concatenation along the channel dimension;

Step B22: Design the frequency-spatial feature fusion submodule, composed of channel attention and spatial attention connected in series;

Step B23: Design the frequency-spatial context information attention module, composed of three multi-scale feature extraction submodules and the frequency-spatial feature fusion submodule. The inputs of the three multi-scale feature extraction submodules are the three feature maps F_low, F_spa, and F_fre obtained in step B1; after each is processed by the multi-scale feature extraction submodule designed in step B21, the feature maps F_low_m, F_spa_m, and F_fre_m carrying context information are obtained, and these then pass through the frequency-spatial feature fusion submodule designed in step B22 to give the module output F_f.

Further, step B22 comprises the following steps:

Step B221: Design the channel attention. Its inputs are the feature maps F_low_m, F_spa_m, F_fre_m ∈ R^(H×W×C) obtained in step B23. Each of the three feature maps undergoes global average pooling over the spatial dimensions to give a vector of size 1×1×C, and the three vectors are concatenated along the channel dimension to give the intermediate feature map F_c ∈ R^(1×1×3C). F_c passes successively through a 1×1 convolution, a ReLU activation, a 1×1 convolution, a ReLU activation, and a 1×1 convolution for dimensionality reduction and restoration, and then through a Sigmoid activation to give the channel weights F_W1 ∈ R^(1×1×3C). F_W1 is split along the channel dimension into three vectors F_W10, F_W11, and F_W12 of size 1×1×C, which are multiplied with the fusion submodule inputs F_low_m, F_spa_m, and F_fre_m respectively, giving the channel-attention outputs F_low_c, F_spa_c, and F_fre_c. The formulas are as follows:

F_c = Concat(Avgpooling_s(F_low_m), Avgpooling_s(F_spa_m), Avgpooling_s(F_fre_m))

F_W1 = Sigmoid(Conv1(ReLU(Conv1(ReLU(Conv1(F_c))))))

F_low_c = F_W10 × F_low_m

F_spa_c = F_W11 × F_spa_m

F_fre_c = F_W12 × F_fre_m

where Concat is concatenation along the channel dimension, Avgpooling_s is global average pooling over the spatial dimensions, ReLU is the activation function, Conv1 is a 1×1 convolution, and Sigmoid is the Sigmoid activation function;

Step B222: Design the spatial attention. Its inputs are the three feature maps F_low_c, F_spa_c, and F_fre_c obtained in step B221. Each feature map undergoes average pooling over the channel dimension to give a feature map of size H×W×1, and the three maps are concatenated along the channel dimension to give the intermediate feature map F_s ∈ R^(H×W×3). F_s passes successively through an average pooling layer with kernel size 2×2 and stride 2, a ReLU activation, and an upsampling layer, and then through a Sigmoid activation to give the spatial weights F_W2 ∈ R^(H×W×3). F_W2 is split into three feature maps F_W20, F_W21, and F_W22 of size H×W×1, which are multiplied with the spatial-attention inputs F_low_c, F_spa_c, and F_fre_c respectively, giving the spatial-attention outputs F_low_s, F_spa_s, and F_fre_s. The formulas are as follows:

F_s = Concat(Avgpooling_c(F_low_c), Avgpooling_c(F_spa_c), Avgpooling_c(F_fre_c))

F_W2 = Sigmoid(Upsampling(ReLU(Avgpooling(F_s))))

F_low_s = F_W20 × F_low_c

F_spa_s = F_W21 × F_spa_c

F_fre_s = F_W22 × F_fre_c

where Concat is concatenation along the channel dimension, Avgpooling_c is average pooling over the channel dimension, ReLU is the activation function, Sigmoid is the Sigmoid activation function, Avgpooling is an average pooling layer with kernel size 2×2 and stride 2, and Upsampling is 2× nearest-neighbor upsampling;

Step B223: Design the frequency-spatial feature fusion submodule. Its inputs are the feature maps F_low_m, F_spa_m, and F_fre_m obtained in step B23. The three feature maps first pass through the channel attention of step B221, giving F_low_c, F_spa_c, and F_fre_c, then through the spatial attention of step B222, giving F_low_s, F_spa_s, and F_fre_s; the three results are added to give the final output F_f:

F_f = F_low_s + F_spa_s + F_fre_s

进一步地,步骤B3实现如下:Further, step B3 is implemented as follows:

步骤B31、设计特征聚合卷积块,实现细节信息和上下文信息的融合。输入特征图为步骤B1得到特征图Ffre和步骤B2得到的特征图Ff,将两者沿通道维度拼接后经过一个3×3卷积,得到输出特征图Fconv。具体公式表示如下:Step B31, designing a feature aggregation convolution block to realize the fusion of detail information and context information. The input feature map is the feature map F fre obtained in step B1 and the feature map F f obtained in step B2. The two are concatenated along the channel dimension and then undergo a 3×3 convolution to obtain the output feature map F conv . The specific formula is expressed as follows:

Fconv=Conv3(Concat(Ffre,Ff))F conv =Conv3(Concat(F fre ,F f ))

其中,Conv3是3×3卷积,Concat是沿通道维度拼接操作;Among them, Conv3 is a 3×3 convolution, and Concat is a splicing operation along the channel dimension;

步骤B32、设计协同增强子模块,对细节信息和上下文信息的融合信息协同增强。输入特征图为步骤B31中得到的Fconv,将Fconv依次通过1×1卷积、ReLU6激活函数、Dropout随机失活层、1×1卷积、Dropout随机失活层后,与Fconv相加,得到中间特征图Fmid,然后通过LeakyReLU激活函数,与Fconv沿通道维度拼接后经过一个3×3卷积,得到输出特征图Fco。具体公式表示如下:Step B32, designing a synergistic enhancement sub-module to synergistically enhance the fusion information of the detail information and the context information. The input feature map is the F conv obtained in step B31, and the F conv is sequentially passed through 1×1 convolution, ReLU6 activation function, Dropout random inactivation layer, 1×1 convolution, Dropout random inactivation layer, and then compared with F conv plus, get the intermediate feature map F mid , and then use the LeakyReLU activation function to concatenate with F conv along the channel dimension and then go through a 3×3 convolution to get the output feature map F co . The specific formula is expressed as follows:

F_mid = Dropout(Conv1(Dropout(ReLU6(Conv1(F_conv))))) + F_conv

F_co = Conv3(Concat(LeakyReLU(F_mid), F_conv))

where Conv1 is a 1×1 convolution, Conv3 is a 3×3 convolution, Concat denotes concatenation along the channel dimension, Dropout is a dropout layer, ReLU6 is the ReLU6 activation function, and LeakyReLU is the LeakyReLU activation function;
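A sketch of this submodule under the same channel-count assumption; the dropout probability is an assumed hyper-parameter, and element-wise nn.Dropout is one plausible reading of the dropout layer in the disclosure:

class CollaborativeEnhancementB32(nn.Module):
    # Step B32: residual 1x1-conv branch, then LeakyReLU, concat with the input,
    # and a final 3x3 convolution.
    def __init__(self, channels: int, p_drop: float = 0.1):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.ReLU6(inplace=True),
            nn.Dropout(p_drop),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Dropout(p_drop),
        )
        self.leaky_relu = nn.LeakyReLU(inplace=True)
        self.conv3 = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, f_conv):
        f_mid = self.branch(f_conv) + f_conv              # residual connection -> F_mid
        fused = torch.cat([self.leaky_relu(f_mid), f_conv], dim=1)
        return self.conv3(fused)                          # F_co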

Step B33: design the feature aggregation and enhancement module, composed of the feature aggregation convolution block and the collaborative enhancement submodule. Its inputs are the feature map F_fre obtained in step B1 and the feature map F_f obtained in step B2; the feature aggregation convolution block produces the feature map F_conv, and the collaborative enhancement submodule then produces the feature map F_co.
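Composing the two sketches above, the whole step-B33 module is a short pipeline (class name illustrative):

class AggregationEnhancementB33(nn.Module):
    # Step B33: the aggregation block followed by the collaborative enhancement submodule.
    def __init__(self, channels: int):
        super().__init__()
        self.aggregate = FeatureAggregationB31(channels)
        self.enhance = CollaborativeEnhancementB32(channels)

    def forward(self, f_fre, f_f):
        return self.enhance(self.aggregate(f_fre, f_f))  # F_co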

Further, step B4 is implemented as follows:

Step B4: design the full-resolution low-light image enhancement network for aggregating context and enhancing details by integrating the full-resolution detail extraction module, the frequency-spatial context information attention module, and the feature aggregation and enhancement module. The input low-light image I is passed through the full-resolution detail extraction module of step B1 to obtain the three feature maps F_low, F_spa, F_fre, through the frequency-spatial context information attention module to obtain the feature map F_f, and finally through the feature aggregation and enhancement module to obtain the feature map F_co. F_co is then concatenated with the feature map F_low from step B1 along the channel dimension and passed through a 3×3 convolution to obtain the final enhanced image I_out. The formula is as follows:

I_out = Conv3(Concat(F_co, F_low))

where Conv3 is a 3×3 convolution and Concat denotes concatenation along the channel dimension.
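An end-to-end sketch of the step-B4 network, assuming the three modules are implemented per steps B1 to B3 and exposed through the illustrative interfaces below; the 3-channel RGB output is also an assumption:

class AggregateContextEnhanceDetailNet(nn.Module):
    # Step B4: detail extraction -> context attention -> aggregation/enhancement,
    # then a skip concatenation with F_low and a final 3x3 convolution.
    def __init__(self, detail_extractor: nn.Module, context_attention: nn.Module,
                 aggregation: nn.Module, channels: int):
        super().__init__()
        self.detail_extractor = detail_extractor    # step B1: I -> F_low, F_spa, F_fre
        self.context_attention = context_attention  # step B2: -> F_f
        self.aggregation = aggregation              # step B3: -> F_co
        self.out_conv = nn.Conv2d(2 * channels, 3, kernel_size=3, padding=1)

    def forward(self, image):
        f_low, f_spa, f_fre = self.detail_extractor(image)
        f_f = self.context_attention(f_low, f_spa, f_fre)
        f_co = self.aggregation(f_fre, f_f)
        return self.out_conv(torch.cat([f_co, f_low], dim=1))  # I_out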

Further, step C is implemented as follows:

Step C: design the loss function, composed of an L2 loss and a VGG perceptual loss. The overall objective loss function of the network is:

l = ω_1 ||I_out - G||_2 + ω_2 ||Φ(I_out) - Φ(G)||_1

where Φ(·) denotes extracting the Conv4-1 layer features with a VGG-16 classification model pre-trained on the ImageNet dataset, I_out denotes the enhanced image of the low-light image I, G denotes the label image corresponding to the low-light image I, ||·||_1 denotes the L1 loss, ||·||_2 denotes the L2 loss, and ω_1, ω_2 are weights.
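A sketch of this loss, assuming torchvision's pre-trained VGG-16. The feature-slice index for Conv4-1, the mean reductions, and the default weights are editorial assumptions:

from torchvision.models import vgg16


class EnhancementLoss(nn.Module):
    # Weighted L2 reconstruction term plus a VGG-16 perceptual term.
    def __init__(self, w1: float = 1.0, w2: float = 0.1):
        super().__init__()
        features = vgg16(weights="IMAGENET1K_V1").features
        # features[:18] ends just after the first convolution of block 4
        # ("Conv4-1" in the patent's numbering); the index is assumed.
        self.vgg_slice = features[:18].eval()
        for p in self.vgg_slice.parameters():
            p.requires_grad_(False)
        self.w1, self.w2 = w1, w2

    def forward(self, i_out, g):
        l2 = F.mse_loss(i_out, g)  # the ||.||_2 term; mean reduction assumed
        perceptual = F.l1_loss(self.vgg_slice(i_out), self.vgg_slice(g))
        return self.w1 * l2 + self.w2 * perceptual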

Further, step D is implemented as follows:

Step D1: randomly divide the training dataset obtained in step A into several batches, each containing N image pairs;

Step D2: input a low-light image I and obtain the enhanced image I_out from the full-resolution low-light image enhancement network of step B; compute the loss l with the formula in step C;

Step D3: compute the gradients of the network parameters from the loss by backpropagation, and update the parameters with the Adam optimizer;

Step D4: repeat steps D1 to D3 batch by batch until the objective loss function of the network converges to a Nash equilibrium; save the network parameters to obtain the trained full-resolution low-light image enhancement model for aggregating context and enhancing details.
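A minimal training-loop sketch for step D. The dataset is assumed to yield (low-light, label) tensor pairs; the epoch count, batch size, and learning rate are illustrative, and the convergence check is simplified to a fixed number of epochs:

from torch.utils.data import DataLoader


def train(model, criterion, dataset, epochs=100, batch_size=8, lr=1e-4):
    # Step D1: the DataLoader shuffles and splits the paired dataset into batches.
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):               # step D4: repeat until the loss converges
        for low, label in loader:
            out = model(low)              # step D2: forward pass -> I_out
            loss = criterion(out, label)  # step D2: compute l
            optimizer.zero_grad()
            loss.backward()               # step D3: backpropagation
            optimizer.step()              # step D3: Adam parameter update
    torch.save(model.state_dict(), "model.pth")  # save the trained parameters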

Those skilled in the art should understand that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.

The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

The above is only a preferred embodiment of the present invention and does not limit the present invention to other forms. Any person skilled in the art may use the technical content disclosed above to make changes or modifications into equivalent embodiments. However, any simple modification, equivalent change, or adaptation made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical solution of the present invention, still falls within the protection scope of the technical solution of the present invention.

This patent is not limited to the preferred implementation described above. Under the inspiration of this patent, anyone may derive other full-resolution low-light image enhancement methods for aggregating context and enhancing details in various forms; all equivalent changes and modifications made within the scope of the claims of the present invention shall fall within the coverage of this patent.

Claims (10)

1. A full-resolution low-light image enhancement method for aggregating context and enhancing details, characterized in that it comprises the following steps:

Step A: performing data preprocessing, including data pairing, random cropping, and data augmentation, to obtain a training dataset;

Step B: designing a full-resolution low-light image enhancement network for aggregating context and enhancing details, comprising a full-resolution detail extraction module, a frequency-spatial context information attention module, and a feature aggregation and enhancement module;

Step C: designing a loss function to guide the parameter optimization of the network designed in step B;

Step D: training the full-resolution low-light image enhancement network of step B with the training dataset obtained in step A until it converges to a Nash equilibrium, obtaining a trained full-resolution low-light image enhancement model for aggregating context and enhancing details;

Step E: inputting a low-light image to be tested into the trained model and outputting the enhanced normal-illumination image.

2. The full-resolution low-light image enhancement method for aggregating context and enhancing details according to claim 1, wherein step A is implemented as follows:

Step A1: pairing each low-light image with its corresponding label image;

Step A2: randomly cropping each low-light image of size h×w×3 into an image of size p×p×3, and applying the same random crop to the corresponding label image, where h and w are the height and width of the low-light image and label image, and p is the height and width of the cropped image;

Step A3: randomly applying one of the following eight augmentations to each training image pair: keeping the original image, vertical flipping, rotating 90 degrees, rotating 90 degrees then vertical flipping, rotating 180 degrees, rotating 180 degrees then vertical flipping, rotating 270 degrees, and rotating 270 degrees then vertical flipping.

3. The full-resolution low-light image enhancement method for aggregating context and enhancing details according to claim 1, wherein step B is implemented as follows:

Step B1: building the full-resolution detail extraction module, composed of a shallow feature extraction submodule, a CBAM-based attention submodule, and a frequency-domain transform submodule, and extracting detail features with the designed network;

Step B2: designing the frequency-spatial context information attention module, composed of a multi-scale feature extraction submodule and a frequency-spatial feature fusion submodule, and extracting context features with the designed network;

Step B3: designing the feature aggregation and enhancement module, composed of a feature aggregation convolution block and a collaborative enhancement submodule, to aggregate the detail features extracted in step B1 and the context features extracted in step B2 and jointly enhance the two types of features;

Step B4: designing the full-resolution low-light image enhancement network for aggregating context and enhancing details, comprising the full-resolution detail extraction module, the frequency-spatial context information attention module, and the feature aggregation and enhancement module.

4. The full-resolution low-light image enhancement method for aggregating context and enhancing details according to claim 3, wherein step B1 is implemented as follows:

Step B11: designing the shallow feature extraction submodule. Its input is the low-light image I; after a 3×3 convolution produces the initial feature map F_ori, the flow enters three branches: the first branch contains one 3×3 convolution, the second branch contains two serial 3×3 convolutions, and the third branch contains three serial 3×3 convolutions. The branch outputs F_B1, F_B2, F_B3 are concatenated along the channel dimension and passed through one 3×3 convolution to obtain the feature map F_low output by the shallow feature extraction submodule. The formulas are as follows:

F_ori = Conv3(I)

F_B1 = Conv3(F_ori)

F_B2 = Conv3(Conv3(F_ori))

F_B3 = Conv3(Conv3(Conv3(F_ori)))

F_low = Conv3(Concat(F_B1, F_B2, F_B3))

where Conv3 is a 3×3 convolution and Concat denotes concatenation along the channel dimension;

Step B12: building the CBAM-based attention submodule, composed of a channel-dimension attention Att_c and a spatial-dimension attention Att_s connected in series. Its input is the feature map F_low obtained in step B11, and its output feature map is F_spa:

F_spa = Att_s(Att_c(F_low))

where Att_c is the channel-dimension attention and Att_s is the spatial-dimension attention;

Step B13: designing the frequency-domain transform submodule. Its input is the feature map F_spa obtained in step B12; a Fourier transform converts the spatial domain to the frequency domain, followed by a 3×3 convolution, a normalization layer, and the ReLU activation function, after which an inverse Fourier transform converts the frequency domain back to the spatial domain, yielding the output feature map F_fre:

F_fre = idft(ReLU(BN(Conv3(dft(F_spa)))))

where dft is the Fourier transform, idft is the inverse Fourier transform, ReLU is the ReLU activation function, BN is a batch normalization layer, and Conv3 is a 3×3 convolution;

Step B14: building the full-resolution detail extraction module from the shallow feature extraction submodule, the CBAM-based attention submodule, and the frequency-domain transform submodule. With the low-light image I processed in step A as input, the three submodules successively produce the feature maps F_low, F_spa, F_fre.

5. The full-resolution low-light image enhancement method for aggregating context and enhancing details according to claim 4, wherein step B2 is implemented as follows:

Step B21: designing the multi-scale feature extraction submodule. Denote the input feature map by F ∈ R^(H×W×C), where H, W, C are the height, width, and number of channels of F. After an average pooling layer with kernel size 2×2 and stride 2, F passes through a 1×1 convolution, ReLU, a 1×1 convolution, and ReLU for dimensionality reduction, giving the intermediate feature map F_1. The flow then splits into two branches: the upper branch further reduces the dimensionality with a 1×1 convolution and produces the upper-branch output F_11 ∈ R^(H×W×a) through an upsampling layer, where a is the number of channels after dimensionality reduction; the other branch passes through an average pooling layer with kernel size 2×2 and stride 2, then a 1×1 convolution, ReLU, a 1×1 convolution, and ReLU for dimensionality reduction, giving the intermediate feature map F_121, which then passes through an upsampling layer, a 1×1 convolution, ReLU, and another upsampling layer to produce the lower-branch output F_12 ∈ R^(H×W×a). F_11 and F_12 are added, concatenated with F along the channel dimension, passed through an SE module, and the channels are adjusted with a 1×1 convolution, producing the feature map F_m ∈ R^(H×W×C) output by the multi-scale feature extraction submodule. The formulas are as follows:

F_1 = ReLU(Conv1(ReLU(Conv1(Avgpooling(F)))))

F_11 = Upsampling(Conv1(F_1))

F_121 = ReLU(Conv1(ReLU(Conv1(Avgpooling(F_1)))))

F_12 = Upsampling(ReLU(Conv1(Upsampling(F_121))))

F_m = Conv1(SE(Concat(F_11 + F_12, F)))

where ReLU is the activation function, Conv1 is a 1×1 convolution, SE(·) is the SE module, Avgpooling is an average pooling layer with kernel size 2×2 and stride 2, Upsampling is a 2× nearest-neighbor upsampling layer, and Concat denotes concatenation along the channel dimension;

Step B22: designing the frequency-spatial feature fusion submodule, composed of channel attention and spatial attention connected in series;

Step B23: designing the frequency-spatial context information attention module, composed of three multi-scale feature extraction submodules and the frequency-spatial feature fusion submodule. The inputs of the three multi-scale feature extraction submodules are the three feature maps F_low, F_spa, F_fre obtained in step B1; after being processed by the multi-scale feature extraction submodule designed in step B21, they become the feature maps F_low_m, F_spa_m, F_fre_m carrying context information, which then pass through the frequency-spatial feature fusion submodule designed in step B22 to produce the feature map F_f output by the frequency-spatial context information attention module.
6. The full-resolution low-light image enhancement method for aggregating context and enhancing details according to claim 5, wherein step B22 is implemented as follows:

Step B221: designing the channel attention. Its inputs are the feature maps F_low_m, F_spa_m, F_fre_m ∈ R^(H×W×C) obtained in step B23. Each of the three feature maps undergoes global average pooling over the spatial dimensions, giving three vectors of size 1×1×C, which are concatenated along the channel dimension into the intermediate feature map F_c ∈ R^(1×1×3C). F_c is passed sequentially through a 1×1 convolution, ReLU, a 1×1 convolution, ReLU, and a 1×1 convolution for dimensionality reduction and expansion, and then through the Sigmoid activation function to obtain the channel-dimension weights F_W1 ∈ R^(1×1×3C). F_W1 is decomposed along the channel dimension into three vectors F_W10, F_W11, F_W12 of size 1×1×C, which are multiplied with the input feature maps F_low_m, F_spa_m, F_fre_m of the frequency-spatial feature fusion submodule, respectively, to obtain the channel-attention output feature maps F_low_c, F_spa_c, F_fre_c ∈ R^(H×W×C). The formulas are as follows:

F_c = Concat(Avgpooling_s(F_low_m), Avgpooling_s(F_spa_m), Avgpooling_s(F_fre_m))

F_W1 = Sigmoid(Conv1(ReLU(Conv1(ReLU(Conv1(F_c))))))

F_low_c = F_W10 × F_low_m

F_spa_c = F_W11 × F_spa_m

F_fre_c = F_W12 × F_fre_m

where Concat denotes concatenation along the channel dimension, Avgpooling_s is global average pooling over the spatial dimensions, ReLU is the activation function, Conv1 is a 1×1 convolution, and Sigmoid is the Sigmoid activation function;

Step B222: designing the spatial attention. Its inputs are the three feature maps F_low_c, F_spa_c, F_fre_c obtained in step B221. Each feature map undergoes average pooling over the channel dimension, giving three feature maps of size H×W×1, which are concatenated along the channel dimension into the intermediate feature map F_s ∈ R^(H×W×3). F_s is passed sequentially through an average pooling layer with kernel size 2×2 and stride 2, the ReLU activation function, and an upsampling layer, and then through the Sigmoid activation function to obtain the spatial-dimension weights F_W2 ∈ R^(H×W×3). F_W2 is decomposed into three feature maps F_W20, F_W21, F_W22 of size H×W×1, which are multiplied with the spatial-attention input feature maps F_low_c, F_spa_c, F_fre_c, respectively, to obtain the spatial-attention output feature maps F_low_s, F_spa_s, F_fre_s ∈ R^(H×W×C). The formulas are as follows:

F_s = Concat(Avgpooling_c(F_low_c), Avgpooling_c(F_spa_c), Avgpooling_c(F_fre_c))

F_W2 = Sigmoid(Upsampling(ReLU(Avgpooling(F_s))))

F_low_s = F_W20 × F_low_c

F_spa_s = F_W21 × F_spa_c

F_fre_s = F_W22 × F_fre_c

where Concat denotes concatenation along the channel dimension, Avgpooling_c is average pooling over the channel dimension, ReLU is the activation function, Sigmoid is the Sigmoid activation function, Avgpooling is an average pooling layer with kernel size 2×2 and stride 2, and Upsampling is a 2× nearest-neighbor upsampling layer;

Step B223: designing the frequency-spatial feature fusion submodule. Its inputs are the feature maps F_low_m, F_spa_m, F_fre_m obtained in step B23; the three feature maps first pass through the channel attention of step B221 to give F_low_c, F_spa_c, F_fre_c, then through the spatial attention of step B222 to give F_low_s, F_spa_s, F_fre_s, and the three results are summed to produce the final output F_f:

F_f = F_low_s + F_spa_s + F_fre_s.
7. The full-resolution low-light image enhancement method for aggregating context and enhancing details according to claim 6, wherein step B3 is implemented as follows:

Step B31: designing the feature aggregation convolution block, which fuses detail information and context information. Its inputs are the feature map F_fre obtained in step B1 and the feature map F_f obtained in step B2; the two are concatenated along the channel dimension and passed through one 3×3 convolution to obtain the output feature map F_conv:

F_conv = Conv3(Concat(F_fre, F_f))

where Conv3 is a 3×3 convolution and Concat denotes concatenation along the channel dimension;

Step B32: designing the collaborative enhancement submodule, which jointly enhances the fused detail and context information. Its input is the feature map F_conv obtained in step B31. F_conv is passed sequentially through a 1×1 convolution, the ReLU6 activation function, a Dropout layer, a 1×1 convolution, and a Dropout layer, and the result is added to F_conv to obtain the intermediate feature map F_mid; F_mid is then passed through the LeakyReLU activation function, concatenated with F_conv along the channel dimension, and passed through one 3×3 convolution to obtain the output feature map F_co:

F_mid = Dropout(Conv1(Dropout(ReLU6(Conv1(F_conv))))) + F_conv

F_co = Conv3(Concat(LeakyReLU(F_mid), F_conv))

where Conv1 is a 1×1 convolution, Conv3 is a 3×3 convolution, Concat denotes concatenation along the channel dimension, Dropout is a dropout layer, ReLU6 is the ReLU6 activation function, and LeakyReLU is the LeakyReLU activation function;

Step B33: designing the feature aggregation and enhancement module, composed of the feature aggregation convolution block and the collaborative enhancement submodule. Its inputs are the feature map F_fre obtained in step B1 and the feature map F_f obtained in step B2; the feature aggregation convolution block produces the feature map F_conv, and the collaborative enhancement submodule then produces the feature map F_co.

8. The full-resolution low-light image enhancement method for aggregating context and enhancing details according to claim 7, wherein step B4 is implemented as follows:

Step B4: designing the full-resolution low-light image enhancement network for aggregating context and enhancing details by integrating the full-resolution detail extraction module, the frequency-spatial context information attention module, and the feature aggregation and enhancement module. The input low-light image I passes through the full-resolution detail extraction module of step B1 to obtain the three feature maps F_low, F_spa, F_fre, through the frequency-spatial context information attention module to obtain the feature map F_f, and then through the feature aggregation and enhancement module to obtain the feature map F_co; F_co is concatenated with the feature map F_low from step B1 along the channel dimension and passed through a 3×3 convolution to obtain the final enhanced image I_out:

I_out = Conv3(Concat(F_co, F_low))

where Conv3 is a 3×3 convolution and Concat denotes concatenation along the channel dimension.

9. The full-resolution low-light image enhancement method for aggregating context and enhancing details according to claim 1, wherein step C is implemented as follows:

Step C: designing the loss function, composed of an L2 loss and a VGG perceptual loss; the overall objective loss function of the network is:

l = ω_1 ||I_out - G||_2 + ω_2 ||Φ(I_out) - Φ(G)||_1

where Φ(·) denotes extracting the Conv4-1 layer features with a VGG-16 classification model pre-trained on the ImageNet dataset, I_out denotes the enhanced image of the low-light image I, G denotes the label image corresponding to the low-light image I, ||·||_1 denotes the L1 loss, ||·||_2 denotes the L2 loss, and ω_1, ω_2 are weights.

10. The full-resolution low-light image enhancement method for aggregating context and enhancing details according to claim 1, wherein step D is implemented as follows:

Step D1: randomly dividing the training dataset obtained in step A into several batches, each containing N image pairs;

Step D2: inputting a low-light image I and obtaining the enhanced image I_out from the full-resolution low-light image enhancement network of step B; computing the loss l with the formula in step C;

Step D3: computing the gradients of the network parameters from the loss by backpropagation, and updating the parameters with the Adam optimizer;

Step D4: repeating steps D1 to D3 batch by batch until the objective loss function of the network converges to a Nash equilibrium; saving the network parameters to obtain the trained full-resolution low-light image enhancement model for aggregating context and enhancing details.
CN202211600774.9A 2022-12-12 2022-12-12 Full-resolution low-light image enhancement method for aggregating context and enhancing details Pending CN115880177A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211600774.9A CN115880177A (en) 2022-12-12 2022-12-12 Full-resolution low-light image enhancement method for aggregating context and enhancing details

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211600774.9A CN115880177A (en) 2022-12-12 2022-12-12 Full-resolution low-light image enhancement method for aggregating context and enhancing details

Publications (1)

Publication Number Publication Date
CN115880177A true CN115880177A (en) 2023-03-31

Family

ID=85767341

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211600774.9A Pending CN115880177A (en) 2022-12-12 2022-12-12 Full-resolution low-light image enhancement method for aggregating context and enhancing details

Country Status (1)

Country Link
CN (1) CN115880177A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116137023A (en) * 2023-04-20 2023-05-19 中国民用航空飞行学院 Low-illumination image enhancement method based on background modeling and detail enhancement
CN117152019A (en) * 2023-09-15 2023-12-01 河北师范大学 A low-light image enhancement method and system based on dual-branch feature processing
CN118883440A (en) * 2024-07-09 2024-11-01 南通磊晟浆纱有限公司 A textile fiber identification and composition detection system

Similar Documents

Publication Publication Date Title
WO2023092813A1 (en) Swin-transformer image denoising method and system based on channel attention
WO2021164234A1 (en) Image processing method and image processing device
CN115880177A (en) Full-resolution low-light image enhancement method for aggregating context and enhancing details
CN110675328B (en) Low-illumination image enhancement method and device based on condition generation countermeasure network
CN106920221B (en) Take into account the exposure fusion method that Luminance Distribution and details are presented
CN112233038A (en) True image denoising method based on multi-scale fusion and edge enhancement
CN112669242A (en) Night scene restoration method based on improved image enhancement algorithm and generation countermeasure network
Chen et al. THFuse: An infrared and visible image fusion network using transformer and hybrid feature extractor
CN113129236B (en) Single low-light image enhancement method and system based on Retinex and convolutional neural network
CN108520504A (en) An End-to-End Blind Restoration Method for Blurred Images Based on Generative Adversarial Networks
CN111915530A (en) End-to-end-based haze concentration self-adaptive neural network image defogging method
Luan et al. Fast single image dehazing based on a regression model
CN112465727A (en) Low-illumination image enhancement method without normal illumination reference based on HSV color space and Retinex theory
CN112348747A (en) Image enhancement method, device and storage medium
CN111462019A (en) Image deblurring method and system based on deep neural network parameter estimation
CN110189260B (en) An Image Noise Reduction Method Based on Multi-scale Parallel Gated Neural Network
CN116137023B (en) Low-light image enhancement method based on background modeling and detail enhancement
CN109544487A (en) A kind of infrared image enhancing method based on convolutional neural networks
CN112733929A (en) Improved method for detecting small target and shielded target of Yolo underwater image
CN112001843A (en) Infrared image super-resolution reconstruction method based on deep learning
CN109978789A (en) A kind of image enchancing method based on Retinex algorithm and guiding filtering
CN116433518A (en) A fire image smoke removal method based on improved Cycle-Dehaze neural network
CN115063331A (en) Ghost-free multi-exposure image fusion algorithm based on multi-scale block LBP operator
CN114627034A (en) Image enhancement method, training method of image enhancement model and related equipment
CN115775376A (en) A Crowd Counting Method Based on Low Light Image Enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination