CN117994161B - RAW format weak light image enhancement method and device


Info

Publication number
CN117994161B
Authority
CN
China
Prior art keywords
image
representing
network
fusion
feature
Prior art date
Legal status
Active
Application number
CN202410397743.0A
Other languages
Chinese (zh)
Other versions
CN117994161A (en)
Inventor
赵谦
刘心怡
谢琦
孟德宇
王红
Current Assignee
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN202410397743.0A
Publication of CN117994161A
Application granted
Publication of CN117994161B


Abstract

The application discloses a method and a device for enhancing a RAW format dim light image. The method comprises the following steps: preprocessing an image data set to obtain a training data set; constructing a multi-scale feature extraction sub-network, an image domain sub-fusion network and a feature domain sub-fusion network; extracting multi-scale features of the training data set; inputting the multi-scale features, the guide images and the guided images into the image domain sub-fusion network and the feature domain sub-fusion network to obtain an image fusion result and a feature fusion result; integrating the image fusion result and the feature fusion result to obtain an enhancement result, and establishing a RAW format weak light image enhancement network; and training and iteratively optimizing the RAW format weak light image enhancement network to obtain a training model. The method addresses the problem that existing RAW format weak light image enhancement techniques do not effectively utilize image information, which limits network performance and yields lower image quality. In the enhancement process, the method and the device reduce image noise while restoring more image details.

Description

RAW format weak light image enhancement method and device
Technical Field
The application relates to the technical field of image processing, in particular to a method and a device for enhancing a RAW format dim light image.
Background
An image in RAW (RAW Image Format) format is an unprocessed picture captured directly by an image pickup apparatus. The RAW file records the raw information of the digital camera sensor together with metadata generated at shooting time (e.g., the ISO setting, shutter speed, aperture value and white balance).
Currently, existing RAW format low-light image enhancement techniques are mainly deep learning methods. Among them are top-down self-guided architectures, which make better use of multi-scale features; frequency-based decomposition-and-enhancement structures, which are adopted for sufficient noise removal; and multi-exposure frames generated from the raw sensor image, which are fused for higher contrast while a pre-trained edge detection network is used to better preserve detail. In these methods, the main design clue is to fuse the multi-modal source images generated from the original RAW image, typically using feature-domain fusion in which the aggregated features (e.g., concatenations or averages) extracted from the different source images are fed directly into the subsequent network modules. This approach does not fully explore the physical correlation between source images and does not effectively exploit the complementary information of different source images, resulting in limited network performance, particularly in terms of preserving detail and reducing noise.
Disclosure of Invention
By providing a RAW format weak light image enhancement method and device, the embodiments of the application solve the problems that existing RAW format weak light image enhancement techniques do not effectively utilize the complementary information of different source images, so that network performance is limited and the quality of the processed image is low.
In a first aspect, an embodiment of the present application provides a method for enhancing a RAW format low-light image, including: preprocessing an image data set to obtain a training data set, wherein the training data set comprises a guide image, a guided image and an sRGB format normal illumination image; constructing a multi-scale feature extraction sub-network, an image domain sub-fusion network and a feature domain sub-fusion network according to the image information of the training data set; extracting multi-scale features of the training data set through the multi-scale feature extraction sub-network, wherein the multi-scale features include a guiding feature, a guided feature and a shared feature; inputting the multi-scale features, the guide image and the guided image into the image domain sub-fusion network and the feature domain sub-fusion network to obtain an image fusion result and a feature fusion result; integrating the image fusion result and the feature fusion result to obtain an enhancement result, and establishing a RAW format weak light image enhancement network according to the enhancement result; and training the RAW format weak light image enhancement network by using the training data set, and iteratively optimizing the RAW format weak light image enhancement network to obtain a training model.
With reference to the first aspect, in one possible implementation manner, before the preprocessing the image dataset to obtain the training dataset, the method further includes: and performing content matching on the image data to obtain a pair of RAW format normal illumination images and RAW format dim light images to form the image data set.
With reference to the first aspect, in a possible implementation manner, the preprocessing the image dataset to obtain a training dataset includes: processing the RAW format normal illumination image into an sRGB format normal illumination image; multiplying the RAW format dim light image by illumination multiplying power, and carrying out color channel rearrangement on the RAW format dim light image to obtain the guide image; filtering the guide map to obtain a guided map; and forming the training data set by groups of the sRGB format normal illumination image, the guide image and the guided image.
With reference to the first aspect, in one possible implementation manner, after the preprocessing the image dataset to obtain the training dataset, the method further includes: the training data set is normalized to have its pixel values ranging between 0 and 1.
With reference to the first aspect, in a possible implementation manner, the extracting, by the multi-scale feature extraction sub-network, multi-scale features of the training dataset includes: inputting the guide map and the guided map into the multi-scale feature extraction sub-network for initialization to obtain an initial guide map and an initial guided map; and extracting features of the initial guide map and the initial guided map to obtain the guiding features, the guided features and the shared features.
With reference to the first aspect, in one possible implementation manner, the image fusion result is as follows:
O_img = W_A(A) ⊙ I + W_B(B), wherein A = f_θA(P, F_P^1, F_I^1) and B = f_θB(P, F_P^1, F_I^1); in the formula, O_img represents the image fusion result, W_A and W_B respectively represent different depthwise convolution layers in the convolutional neural network, A and B respectively represent the results learned with the parameters of the depthwise convolution layers W_A and W_B, I represents the guide map, F_P^1 represents the first of said guiding features, F_I^1 represents the first of said guided features, P represents the guided map, f_θA and f_θB represent the convolutional neural networks with θ_A and θ_B as parameters, and θ_A and θ_B represent the parameters of the convolutional neural networks.
With reference to the first aspect, in one possible implementation manner, the feature fusion result is as follows:
O_fea^s = Up_φ(F^s), F^s = f_θ(Att(A^s ⊙ F_P^s + B^s)); in the formula, O_fea^s represents the feature fusion result, i.e. the upsampled F^s, F^s represents the output result of each sub-module in the feature domain sub-fusion network, f_θ represents the convolutional neural network with θ as parameter, Up_φ represents the up-sampling module, consisting of a convolutional neural network and a bilinear interpolation filter, with φ as parameter, θ represents the parameters of the convolutional neural network, Att(·) represents the parallel operation combining the channel and spatial attention layers, A^s represents the parameters learned by the convolutional neural network, F_P^s represents the guiding feature, B^s represents the parameters calculated by the guided filter, and s represents a sub-module in the feature domain sub-fusion network.
With reference to the first aspect, in one possible implementation manner, the parameters calculated by the guided filter are as follows:
B^s = f_mean(F_I^s − A^s ⊙ F_P^s); in the formula, B^s represents the parameters calculated by the guided filter, f_mean represents an average filter, F_I^s represents the guided features, F_P^s represents the guiding features, A^s represents the parameters learned by the convolutional neural network, and s represents a sub-module in the feature domain sub-fusion network.
With reference to the first aspect, in one possible implementation manner, the image fusion result and the feature fusion result are integrated to obtain the enhancement result, specifically as follows:
O = PS(f_θo([O_img, O_fea^1])); in the formula, O represents the enhancement result, θ_o represents the parameters of the convolutional neural network, f_θo represents the convolutional neural network with θ_o as parameter, PS(·) represents the pixel rearrangement operation, [·,·] represents the parallel (concatenation) operation, O_img represents the fusion result of the image domain sub-fusion network, and O_fea^1 represents the fusion result of the first sub-module in the feature domain sub-fusion network.
In a second aspect, an embodiment of the present application provides a RAW format low-light image enhancement apparatus, including: a preprocessing module, configured to preprocess an image data set to obtain a training data set, wherein the training data set comprises a guide image, a guided image and an sRGB format normal illumination image; a construction module, configured to construct a multi-scale feature extraction sub-network, an image domain sub-fusion network and a feature domain sub-fusion network according to the image information of the training data set; an extraction module, configured to extract the multi-scale features of the training data set through the multi-scale feature extraction sub-network, wherein the multi-scale features include a guiding feature, a guided feature and a shared feature; an input module, configured to input the multi-scale features, the guide image and the guided image into the image domain sub-fusion network and the feature domain sub-fusion network to obtain an image fusion result and a feature fusion result; an integration module, configured to integrate the image fusion result and the feature fusion result to obtain an enhancement result and to establish a RAW format dim light image enhancement network according to the enhancement result; and an optimization module, configured to train the RAW format weak light image enhancement network by using the training data set and to iteratively optimize the RAW format weak light image enhancement network to obtain a training model.
One or more technical solutions provided in the embodiments of the present application at least have the following technical effects or advantages:
the embodiments of the application can make full use of the image information by extracting the multi-scale features of the training data set and fuse this information effectively, thereby improving detail preservation and noise reduction during processing, and effectively solving the problems of limited network performance and low processed image quality caused by the failure of existing RAW format weak light image enhancement techniques to exploit the complementary information of different source images. As a result, image noise is reduced while more image details are recovered when processing the RAW format weak light image, yielding a better image enhancement result.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that other drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
Fig. 1 is a flowchart of a method for enhancing a low-light image in a RAW format according to an embodiment of the present application;
FIG. 2 is a flowchart of preprocessing an image dataset to obtain a training dataset according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a RAW format weak light image enhancement device according to an embodiment of the present application;
FIG. 4 is a graph showing an example of a comparative experiment on a SID-Fuji dataset provided by an embodiment of the present application;
FIG. 5 is a graph of an example of a comparative experiment on a SID-Sony dataset provided by an embodiment of the present application;
fig. 6 is a schematic structural diagram of a RAW format weak light image enhancement network according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a feature domain sub-fusion network according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a multi-scale feature extraction sub-network according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Some of the techniques involved in the embodiments of the present application are described below to aid understanding, and they should be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the application. Also, for the sake of clarity and conciseness, descriptions of well-known functions and constructions are omitted in the following description.
The sRGB format is a widely adopted and supported international standard. Images in this format can be opened and edited directly in most image editing software and operating systems, and previewed directly on most devices and in most software without additional conversion or processing; the file size is also relatively small, which makes storage and transmission convenient.
The RAW format is a losslessly compressed or unprocessed file format, and it typically varies between camera brands and models. Special software is needed to process, view and edit RAW files, which is more complex for ordinary users, and the image files are larger and occupy more storage space.
Fig. 1 is a flowchart of a RAW format dim light image enhancement method according to an embodiment of the present application, including steps 101 to 106. Fig. 1 shows only one execution order in the embodiment of the present application and does not represent the only execution order of the RAW format low-light image enhancement method; the steps shown in fig. 1 may be executed in parallel or in a different order, provided that the final result can be achieved.
Step 101: the image dataset is preprocessed to obtain the training dataset. In the embodiment of the application, the image data comprises a plurality of RAW format normal illumination images and RAW format weak light images, and before the image data set is obtained, the image data is required to be subjected to content matching, so that the image data set is formed by the pair of RAW format normal illumination images and RAW format weak light images. Specifically, the image content characteristics of the RAW format normal illumination image and the RAW format weak light image are respectively extracted, and a group of the RAW format normal illumination image and the RAW format weak light image which are matched are obtained to form an image data set.
It should be noted that in the image dataset, the group of RAW format normal light images and RAW format dim light images may be in a one-to-one relationship, or may be in a one-to-many, many-to-one, or many-to-many relationship.
In one embodiment of the present application, the image dataset may be a pair of a normal illumination image in RAW format and a dim illumination image in RAW format that are sorted in advance, so that the step of content matching is omitted when the method of the present application is performed.
In the embodiment of the present application, the steps of preprocessing the image data set to obtain the training data set are shown in fig. 2, and include steps 201 to 204, which are specifically as follows.
Step 201: and processing the RAW format normal illumination image into an sRGB format normal illumination image. In the embodiment of the application, the RAW format normal illumination image is processed into the sRGB format normal illumination image by utilizing an image processing technology. For example, if two or more RAW format normal illumination images exist in each group of images in the training dataset, one image with better effect is obtained and processed.
Step 202: multiplying the RAW format dim light image by the illumination multiplying power, and carrying out color channel rearrangement on the RAW format dim light image to obtain a guide image. In the embodiment of the present application, the brightness of the RAW format low-light image is enhanced by multiplying each pixel value by the illumination magnification (here, the illumination magnification is a value greater than 1).
In addition, in order to ensure that each pixel value of the RAW format weak light image remains in the range [0,255], the application performs a modulo operation after multiplying the RAW format weak light image by the illumination magnification, i.e., the pixel value multiplied by the illumination magnification is taken modulo 256, so that the pixel value stays within [0,255].
The RAW format dim light image multiplied by the illumination magnification is rearranged by color channel according to the RGGB (red, green, green, blue) pattern to obtain the guide map. The guide map is written as P ∈ ℝ^{H×W×C}, where P represents the guide map, ℝ represents the set of real numbers, and H, W and C represent the height, width and number of channels of the guide map, respectively.
It should be appreciated by those skilled in the art that the above-mentioned implementation of color channel rearrangement using the RGGB format is only one embodiment of the present application, and is not intended to limit the scope of the present application, and those skilled in the art may also implement color channel rearrangement using other formats, such as GBRG, GRBG, BGGR, etc., according to practical situations.
Step 203: filtering the guide map to obtain the guided map. In the embodiment of the application, the guided map is obtained by filtering the guide map with bicubic filtering. The guided map is written as I ∈ ℝ^{H×W×C}, where I represents the guided map, ℝ represents the set of real numbers, and H, W and C represent the height, width and number of channels of the guided map, respectively.
In one embodiment of the present application, if the group of RAW format normal illumination images and the RAW format low-light image in the image dataset are in a one-to-many or many-to-many relationship, then part of the RAW format low-light image in each group may be processed into a guide map, and the other part may be processed into a guided map.
For example, if a group of image data in the image dataset includes one RAW format normal illumination image and two RAW format weak light images, the two RAW format weak light images are both multiplied by the illumination magnification and subjected to color channel rearrangement, and then one of them is filtered; the RAW format weak light image that is not filtered serves as the guide map, and the filtered one serves as the guided map.
Step 204: the set of sRGB format normal illumination images, the guide map and the guided map constitute a training dataset. Specifically, the sRGB format normal light image, the guidance map and the guided map, which are processed in steps 201 to 203, respectively, form a training data set.
Furthermore, the training data set is normalized such that its pixel value ranges between 0 and 1.
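For illustration only, the following Python sketch shows one way steps 201 to 204 could be realized; it is a minimal sketch, assuming a Bayer RGGB sensor layout, a NumPy array raw_dim holding the RAW dim light frame and a user-chosen illumination magnification ratio, with bicubic filtering approximated by a bicubic down/up-sampling round trip, and with hypothetical helper names that are not taken from the patent.

import numpy as np
import torch
import torch.nn.functional as F

def pack_rggb(raw):
    # Rearrange a single-channel Bayer RGGB mosaic into a 4-channel (R, G, G, B) image,
    # halving the spatial resolution, as described for the guide map.
    return np.stack([raw[0::2, 0::2],   # R
                     raw[0::2, 1::2],   # G
                     raw[1::2, 0::2],   # G
                     raw[1::2, 1::2]],  # B
                    axis=-1)

def make_training_pair(raw_dim, ratio):
    # Guide map: amplified, channel-rearranged dim image, kept in [0, 255] by the modulo step.
    guide = np.mod(pack_rggb(raw_dim.astype(np.float32)) * ratio, 256.0)
    # Guided map: smoothed version of the guide map (bicubic filtering approximated here).
    t = torch.from_numpy(guide).permute(2, 0, 1).unsqueeze(0)
    h, w = t.shape[-2], t.shape[-1]
    small = F.interpolate(t, scale_factor=0.5, mode='bicubic', align_corners=False)
    guided = F.interpolate(small, size=(h, w), mode='bicubic', align_corners=False)
    guided = guided.squeeze(0).permute(1, 2, 0).numpy()
    # Normalize both maps to [0, 1] before training.
    return guide / 255.0, guided / 255.0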
Step 102: and constructing a multi-scale feature extraction sub-network, an image domain sub-fusion network and a feature domain sub-fusion network according to the image information of the training data set. In the embodiment of the application, a multi-scale feature extraction sub-network is constructed according to the image information of the guide image and the guided image which are input in pairs, and an image domain sub-fusion network and a feature domain sub-fusion network are constructed by combining the principle of a guide filter.
Specifically, the image domain sub-fusion network and the feature domain sub-fusion network are built on the principle of the guided filter, which is as follows:
q_j = ā_j I_j + b̄_j, with q_j = a_k I_j + b_k for every window ω_k that contains pixel j. In the formula, q_j represents the output result of the guided filtering, I represents the guided map, ω_k represents a window centered on pixel k in I, a_k and b_k are the coefficients of window ω_k for pixel j, j denotes a pixel in ω_k, I_j represents the value of pixel j in the guided map I, and ā_j and b̄_j represent the averages, about pixel j, of the coefficients a_k and b_k, respectively.
The guided filter principle in matrix form is as follows:
Q = Ā ⊙ I + B̄. In the formula, Q represents the matrix of the guided filtering output, I represents the guided map, Ā and B̄ are composed of ā_j and b̄_j respectively, i.e., the coefficients ā and b̄ in matrix form, and ā_j and b̄_j represent the averages, about pixel j, of the coefficients a_k and b_k, respectively.
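As a concrete reference for the guided filter principle above, the following NumPy/SciPy sketch implements the classical guided filter with a box window; the function name, the window radius r and the regularization constant eps are illustrative choices rather than values specified by the patent.

import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-3):
    # I: guide image, p: image to be filtered; both 2-D float arrays in [0, 1].
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)        # window coefficients a_k
    b = mean_p - a * mean_I           # window coefficients b_k
    a_bar = uniform_filter(a, size)   # average of a over all windows covering each pixel
    b_bar = uniform_filter(b, size)   # average of b over all windows covering each pixel
    return a_bar * I + b_bar          # Q = A_bar ⊙ I + B_bar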
In addition, the feature domain sub-fusion network has four sub-modules; the number of output channels of each sub-module is twice that of the previous sub-module, and the numbers of input channels are 128, 256 and 512, respectively. The specific structure is shown in fig. 7.
In the embodiment of the application, the image information comprises private information and shared information. Where private information refers to specific data directly related to a specific image or set of images that can be used to understand and interpret the image or set of images. Such data is only meaningful for a particular image or set of images, and is not meaningful or understandable for other images or sets of images. For example, shooting time, place, setting, etc. of an image. Shared information refers to generic data that is related to all images or collections of images. These generic data are meaningful to all images or sets of images and can be shared and compared across different images or sets of images. Such as color, brightness, contrast, size, etc. of the image.
Step 103: extracting the multi-scale features of the training data set through the multi-scale feature extraction sub-network. Fig. 8 is a schematic structural diagram of the multi-scale feature extraction sub-network according to the present application. In the embodiment of the application, the multi-scale feature extraction sub-network consists of sub-modules of four scales with the same structure but different channel numbers, and the different sub-modules are distinguished by the subscript s.
Illustratively, in FIG. 8, the convolutional layer has 64 input channels and 64 output channels. The number of input channels of the sub-module 1 is 64, and the number of output channels is 128. The number of input channels of the sub-module 2 is 128, and the number of output channels is 128. The number of input channels of the sub-module 3 is 256, and the number of output channels is 256. The number of input channels of the sub-module 4 is 512, and the number of output channels is 512. There is a downsampling layer between each sub-module, and the number of output channels of the downsampling layer is 128, 256 and 512 in sequence.
The guide map and the guided map are input into the multi-scale feature extraction sub-network and initialized to obtain an initial guide map and an initial guided map. Specifically, the paired guide map and guided map input into the multi-scale feature extraction sub-network are passed through initial convolution layers, and the obtained features are recorded as the initial guide map and the initial guided map, respectively.
Features are then extracted from the initial guide map and the initial guided map to obtain the guiding features, the guided features and the shared features. Specifically, feature extraction is performed on the initial guide map and the initial guided map using convolution kernels whose parameters are not shared, yielding the guiding features and the guided features; shared feature extraction is performed on the initial guide map and the initial guided map using convolution kernels with shared parameters, yielding the shared features.
As shown in fig. 8, one group of convolution kernels whose parameters are not shared is associated with the guide map, one group of convolution kernels has shared parameters, and another group of convolution kernels whose parameters are not shared is associated with the guided map. F_P^1 and F_I^1 denote the guiding feature and the guided feature output by sub-module 1, F_P^2 and F_I^2 denote the guiding feature and the guided feature output by sub-module 2, F_P^3 and F_I^3 denote the guiding feature and the guided feature output by sub-module 3, and F_P^4 and F_I^4 denote the guiding feature and the guided feature output by sub-module 4.
In addition, those skilled in the art may also concatenate the guiding feature, the guided feature and the shared feature and then perform downsampling through a convolution layer and a max pooling operation.
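For illustration, the following PyTorch sketch shows one scale of a feature extractor with branch-private and parameter-shared convolution kernels; it is a minimal sketch in which the class name, the kernel sizes and the way the shared-kernel output is combined with each branch are assumptions not specified by the patent, while the channel configuration follows the description of fig. 8.

import torch
import torch.nn as nn

class ScaleSubModule(nn.Module):
    # One scale s of the extractor: convolution kernels private to the guide branch,
    # kernels private to the guided branch, and kernels whose parameters are shared.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv_guide = nn.Conv2d(in_ch, out_ch, 3, padding=1)    # unshared, guide branch
        self.conv_guided = nn.Conv2d(in_ch, out_ch, 3, padding=1)   # unshared, guided branch
        self.conv_shared = nn.Conv2d(in_ch, out_ch, 3, padding=1)   # parameters shared by both branches
        self.act = nn.ReLU(inplace=True)

    def forward(self, f_guide, f_guided):
        # Combining the shared-kernel output with each private branch by addition is an assumption.
        fp = self.act(self.conv_guide(f_guide) + self.conv_shared(f_guide))      # guiding feature F_P^s
        fi = self.act(self.conv_guided(f_guided) + self.conv_shared(f_guided))   # guided feature F_I^s
        return fp, fi

# Channel configuration as described for fig. 8: initial convolution 64 -> 64, sub-modules
# 64 -> 128, 128 -> 128, 256 -> 256 and 512 -> 512, with downsampling layers producing
# 128, 256 and 512 channels between consecutive sub-modules.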
Step 104: inputting the multi-scale features, the guide map and the guided map into the image domain sub-fusion network and the feature domain sub-fusion network to obtain the image fusion result and the feature fusion result. In the embodiment of the application, the image fusion result is as follows:
O_img = W_A(A) ⊙ I + W_B(B), wherein A = f_θA(P, F_P^1, F_I^1) and B = f_θB(P, F_P^1, F_I^1); in the formula, O_img represents the image fusion result, W_A and W_B respectively represent different depthwise convolution layers in the convolutional neural network, A and B respectively represent the results learned with the parameters of the depthwise convolution layers W_A and W_B, I represents the guide map, F_P^1 represents the first guiding feature, F_I^1 represents the first guided feature, P represents the guided map, f_θA and f_θB represent the convolutional neural networks with θ_A and θ_B as parameters, and θ_A and θ_B represent the parameters of the convolutional neural networks.
The feature fusion result is as follows:
O_fea^s = Up_φ(F^s), F^s = f_θ(Att(A^s ⊙ F_P^s + B^s)); in the formula, O_fea^s represents the feature fusion result, i.e. the upsampled F^s, F^s represents the output result of each sub-module in the feature domain sub-fusion network, f_θ represents the convolutional neural network with θ as parameter, Up_φ represents the up-sampling module, consisting of a convolutional neural network and a bilinear interpolation filter, with φ as parameter, θ represents the parameters of the convolutional neural network, Att(·) represents the parallel operation combining the channel and spatial attention layers, A^s represents the parameters learned by the convolutional neural network, F_P^s represents the guiding feature, B^s represents the parameters calculated by the guided filter, and s indexes the sub-modules in the feature domain sub-fusion network.
In addition, B^s = f_mean(F_I^s − A^s ⊙ F_P^s), where B^s represents the parameters calculated by the guided filter, f_mean represents an average filter, F_I^s represents the guided feature, F_P^s represents the guiding feature, A^s represents the parameters learned by the convolutional neural network, and s indexes the sub-modules in the feature domain sub-fusion network.
A^s = W^s ⊙ f_θ^s(LN([F_I^s, F_P^s])), where W^s represents parameters to be learned by the convolutional neural network, f_θ^s represents the convolutional neural network with θ^s as parameter, θ^s represents the parameters of the convolutional neural network, LN represents layer normalization, F_I^s represents the guided feature, F_P^s represents the guiding feature, s indexes the sub-modules in the feature domain sub-fusion network, and A^s denotes the resulting coefficients.
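The feature-domain fusion reconstructed above can be sketched as follows; this is an illustrative PyTorch sketch only, in which the class and function names, the kernel sizes, the window radius of the average filter and the sigmoid bound on A^s are assumptions, and the exact form of the patent's formulas may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

def box_mean(x, r=2):
    # Average filter f_mean over a (2r+1) x (2r+1) window, applied per channel.
    k = 2 * r + 1
    return F.avg_pool2d(x, kernel_size=k, stride=1, padding=r)

class FeatureGuidedFusion(nn.Module):
    # Feature-domain fusion at one scale s: a learned coefficient map A^s and a
    # guided-filter-style term B^s = f_mean(F_I^s - A^s * F_P^s), fused as A^s * F_P^s + B^s.
    def __init__(self, ch):
        super().__init__()
        self.ln = nn.GroupNorm(1, 2 * ch)                 # stand-in for the layer normalization LN
        self.coef = nn.Conv2d(2 * ch, ch, 3, padding=1)   # predicts A^s from [F_P^s, F_I^s]
        self.fuse = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, f_p, f_i):
        a = torch.sigmoid(self.coef(self.ln(torch.cat([f_p, f_i], dim=1))))  # coefficients A^s (bound is an assumption)
        b = box_mean(f_i - a * f_p)                                          # offsets B^s
        return self.fuse(a * f_p + b)                                        # fused feature F^s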
Step 105: integrating the image fusion result and the feature fusion result to obtain the enhancement result, and establishing the RAW format dim light image enhancement network based on the enhancement result. The structure of the RAW format low-light image enhancement network is shown in fig. 6. In the embodiment of the present application, the enhancement result is obtained as follows:
O = PS(f_θo([O_img, O_fea^1])); in the formula, O represents the enhancement result, θ_o represents the parameters of the convolutional neural network, f_θo represents the convolutional neural network with θ_o as parameter, PS(·) represents the pixel rearrangement operation, [·,·] represents the parallel (concatenation) operation, O_img represents the fusion result of the image domain sub-fusion network, and O_fea^1 represents the fusion result of the first sub-module in the feature domain sub-fusion network.
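A minimal sketch of such an integration head is given below; the class name, the channel counts and the upscale factor of 2 are illustrative assumptions rather than values specified by the patent.

import torch
import torch.nn as nn

class EnhanceHead(nn.Module):
    # Integrates the image-domain result O_img and the first feature-domain result O_fea^1
    # by concatenation, a convolution, and pixel shuffle to produce the sRGB output.
    def __init__(self, img_ch=4, fea_ch=128, out_ch=3, scale=2):
        super().__init__()
        self.conv = nn.Conv2d(img_ch + fea_ch, out_ch * scale * scale, 3, padding=1)
        self.ps = nn.PixelShuffle(scale)  # pixel rearrangement operation PS

    def forward(self, o_img, o_fea1):
        # o_img and o_fea1 are assumed to have the same spatial size.
        return self.ps(self.conv(torch.cat([o_img, o_fea1], dim=1)))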
Step 106: training the RAW format weak light image enhancement network by using the training data set, and iteratively optimizing the RAW format weak light image enhancement network to obtain the training model. Specifically, the guide map and the guided map are input into the RAW format weak light image enhancement network in pairs to obtain the enhanced normal light image in sRGB format, and the total loss function of the RAW format weak light image enhancement network at that time is calculated. The loss function is as follows:
L = ‖Ŷ − Y‖₁, where L represents the loss value of the loss function, Ŷ represents the enhanced normal light image in sRGB format, and Y represents the sRGB format normal illumination image.
A stochastic gradient descent algorithm is used to update the network parameters of the RAW format weak light image enhancement network so as to optimize the loss function, and a back-propagation algorithm is used to iteratively optimize the RAW format weak light image enhancement network so that the output of the network gradually approximates the sRGB format normal illumination image. When the number of optimization iterations of the RAW format weak light image enhancement network reaches the set number of iterations, training is terminated, and the parameters of the RAW format weak light image enhancement network at that moment are saved to obtain the training model.
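A minimal training-loop sketch under the reconstruction above is given below; the SGD optimizer, the L1-style reconstruction loss, the iteration count and the file name are assumptions, and model and loader are hypothetical placeholders for the enhancement network and the training data loader.

import torch
import torch.nn as nn
import torch.optim as optim

def train(model, loader, num_iters=100000, lr=1e-4, device='cuda'):
    # loader yields (guide, guided, target_srgb) batches built from the training data set.
    model = model.to(device).train()
    opt = optim.SGD(model.parameters(), lr=lr)
    criterion = nn.L1Loss()  # reconstruction loss between enhanced output and sRGB ground truth
    it = 0
    while it < num_iters:
        for guide, guided, target in loader:
            guide, guided, target = guide.to(device), guided.to(device), target.to(device)
            pred = model(guide, guided)       # enhanced sRGB-format normal light image
            loss = criterion(pred, target)
            opt.zero_grad()
            loss.backward()                   # back-propagation
            opt.step()                        # stochastic gradient descent update
            it += 1
            if it >= num_iters:
                break
    torch.save(model.state_dict(), 'raw_lowlight_enhancer.pth')  # save the trained parameters
    return model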
In one embodiment of the application, the performance of the training model may also be tested by RAW format low-light images.
In the embodiment of the application, seven other RAW format low-light image enhancement methods are selected and compared with the method of the application on the SID-Sony dataset and the SID-Fuji dataset, and the results are shown in tables 1 and 2 below. SID-Sony and SID-Fuji are two datasets acquired by different cameras.
Table 1: Comparative experiment results on the SID-Fuji dataset
Table 2: Comparative experiment results on the SID-Sony dataset
In the above tables, GF (Guided Filter) is the guided filtering method, FGF (Fast Guided Filter) is the fast guided filtering method, SGN (Self-Guided Network for Fast Image Denoising) is a fast denoising neural network algorithm, SID (See In the Dark) is a RAW format weak light image enhancement method, EEMEFN (Edge-Enhanced Multi-Exposure Fusion Network) is an edge-enhanced multi-exposure fusion network, LRDE (Learning to Restore Low-Light Images via Decomposition-and-Enhancement) is a two-stage optimization algorithm, and DBLE (Abandoning the Bayer-Filter to See in the Dark) is a RAW format weak light image enhancement network based on multi-image fusion.
PSNR is the peak signal-to-noise ratio, an image quality evaluation index; a larger value means less distortion and better image quality. SSIM is the structural similarity index, used to evaluate the similarity between images; its value range is [-1, 1], and the closer it is to 1, the more similar the images are. LPIPS is the learned perceptual image patch similarity, used to evaluate the visual similarity of images; the smaller the value, the more similar the two images. ΔE is the color difference, used to represent the difference between two colors; the smaller its value, the closer the two colors.
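For reference, PSNR can be computed as in the following NumPy sketch; the function name and the assumption that both images are normalized to [0, max_val] are illustrative.

import numpy as np

def psnr(pred, target, max_val=1.0):
    # Peak signal-to-noise ratio between two images with values in [0, max_val].
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10((max_val ** 2) / mse)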
As can be seen from the data in table 1, the training model of the application achieves the highest peak signal-to-noise ratio and structural similarity and the smallest color difference, while its learned perceptual image patch similarity is only slightly larger than that of the EEMEFN network; in table 2, the data of the training model of the application are superior to those of the other methods on all indices. Overall, the enhancement effect of the training model on RAW format weak light images is superior to that of the other methods. Fig. 4 and fig. 5 are example diagrams of the comparison experiments on the SID-Fuji and SID-Sony datasets according to embodiments of the present application, where (a) is the input RAW format dim light image, (j) is the sRGB format normal light image, and (b)-(i) are the results of the 7 groups of comparison experiments and of the training model of the present application.
Although the application provides method operation steps as described in the embodiments or flowcharts, more or fewer operation steps may be included based on conventional or non-inventive labor. The order of steps recited in this embodiment is only one of many possible execution orders and does not represent a unique execution order. When an actual device or client product is implemented, the methods of this embodiment or of the accompanying drawings may be executed sequentially or in parallel (e.g., in a parallel processor or multithreaded environment).
As shown in fig. 3, the embodiment of the application further provides a RAW format low-light image enhancement device 300. The device comprises a preprocessing module 301, a construction module 302, an extraction module 303, an input module 304, an integration module 305 and an optimization module 306, which are described in detail below.
The preprocessing module 301 is configured to preprocess the image data set to obtain the training data set. The training data set comprises a guide image, a guided image and an sRGB format normal illumination image. The image data comprise a plurality of RAW format normal illumination images and RAW format dim light images; before the image data set is obtained, the preprocessing module 301 performs content matching on the image data to obtain pairs of RAW format normal illumination images and RAW format dim light images that form the image data set. Specifically, the image content features of the RAW format normal illumination images and the RAW format weak light images are extracted respectively, and matched groups of RAW format normal illumination images and RAW format weak light images are obtained to form the image data set.
It should be noted that in the image dataset, the group of RAW format normal light images and RAW format dim light images may be in a one-to-one relationship, or may be in a one-to-many, many-to-one, or many-to-many relationship.
In one embodiment of the present application, the image dataset may be a pair of a normal illumination image in RAW format and a dim illumination image in RAW format that are sorted in advance, so that the step of content matching is omitted when the method of the present application is performed.
In the embodiment of the present application, the step of preprocessing the image dataset to obtain the training dataset is shown in fig. 2, and is specifically as follows.
And processing the RAW format normal illumination image into an sRGB format normal illumination image. In the embodiment of the application, the RAW format normal illumination image is processed into the sRGB format normal illumination image by utilizing an image processing technology. For example, if two or more RAW format normal illumination images exist in each group of images in the training dataset, one image with better effect is obtained and processed.
Multiplying the RAW format dim light image by the illumination multiplying power, and carrying out color channel rearrangement on the RAW format dim light image to obtain a guide image. In the embodiment of the present application, the brightness of the RAW format low-light image is enhanced by multiplying each pixel value by the illumination magnification (here, the illumination magnification is a value greater than 1).
In addition, in order to ensure that each pixel value of the RAW format weak light image remains in the range [0,255], the application performs a modulo operation after multiplying the RAW format weak light image by the illumination magnification, i.e., the pixel value multiplied by the illumination magnification is taken modulo 256, so that the pixel value stays within [0,255].
The RAW format dim light image multiplied by the illumination magnification is rearranged by color channel according to the RGGB (red, green, green, blue) pattern to obtain the guide map. The guide map is written as P ∈ ℝ^{H×W×C}, where P represents the guide map, ℝ represents the set of real numbers, and H, W and C represent the height, width and number of channels of the guide map, respectively.
It should be appreciated by those skilled in the art that the above-mentioned implementation of color channel rearrangement using the RGGB format is only one embodiment of the present application, and is not intended to limit the scope of the present application, and those skilled in the art may also implement color channel rearrangement using other formats, such as GBRG, GRBG, BGGR, etc., according to practical situations.
The guide map is filtered to obtain the guided map. In the embodiment of the application, the guided map is obtained by filtering the guide map with bicubic filtering. The guided map is written as I ∈ ℝ^{H×W×C}, where I represents the guided map, ℝ represents the set of real numbers, and H, W and C represent the height, width and number of channels of the guided map, respectively.
In one embodiment of the present application, if the group of RAW format normal illumination images and the RAW format low-light image in the image dataset are in a one-to-many or many-to-many relationship, then part of the RAW format low-light image in each group may be processed into a guide map, and the other part may be processed into a guided map.
For example, if a group of image data in the image dataset includes one RAW format normal illumination image and two RAW format weak light images, the two RAW format weak light images are both multiplied by the illumination magnification and subjected to color channel rearrangement, and then one of them is filtered; the RAW format weak light image that is not filtered serves as the guide map, and the filtered one serves as the guided map.
The set of sRGB format normal illumination images, the guide map and the guided map constitute a training dataset. Specifically, the sRGB format normal light image, the guidance map and the guided map, which are processed in steps 201 to 203, respectively, form a training data set.
Furthermore, the training data set is normalized such that its pixel value ranges between 0 and 1.
The construction module 302 is configured to construct a multi-scale feature extraction sub-network, an image domain sub-fusion network, and a feature domain sub-fusion network according to image information of the training data set. The construction module 302 is specifically configured to construct a multi-scale feature extraction sub-network according to the image information of the guide map and the guided map, and construct an image domain sub-fusion network and a feature domain sub-fusion network by combining the principle of the guide filter.
Specifically, the image domain sub-fusion network and the feature domain sub-fusion network are built on the principle of the guided filter, which is as follows:
q_j = ā_j I_j + b̄_j, with q_j = a_k I_j + b_k for every window ω_k that contains pixel j. In the formula, q_j represents the output result of the guided filtering, I represents the guided map, ω_k represents a window centered on pixel k in I, a_k and b_k are the coefficients of window ω_k for pixel j, j denotes a pixel in ω_k, I_j represents the value of pixel j in the guided map I, and ā_j and b̄_j represent the averages, about pixel j, of the coefficients a_k and b_k, respectively.
The guided filter principle in matrix form is as follows:
Q = Ā ⊙ I + B̄. In the formula, Q represents the matrix of the guided filtering output, I represents the guided map, Ā and B̄ are composed of ā_j and b̄_j respectively, i.e., the coefficients ā and b̄ in matrix form, and ā_j and b̄_j represent the averages, about pixel j, of the coefficients a_k and b_k, respectively.
In addition, the feature domain sub-fusion network has four sub-modules; the number of output channels of each sub-module is twice that of the previous sub-module, and the numbers of input channels are 128, 256 and 512, respectively. The specific structure is shown in fig. 7.
In the embodiment of the application, the image information comprises private information and shared information. Where private information refers to specific data directly related to a specific image or set of images that can be used to understand and interpret the image or set of images. Such data is only meaningful for a particular image or set of images, and is not meaningful or understandable for other images or sets of images. For example, shooting time, place, setting, etc. of an image. Shared information refers to generic data that is related to all images or collections of images. These generic data are meaningful to all images or sets of images and can be shared and compared across different images or sets of images. Such as color, brightness, contrast, size, etc. of the image.
The extraction module 303 is configured to extract the multi-scale features of the training data set through the multi-scale feature extraction sub-network. The multi-scale features include a guiding feature, a guided feature and a shared feature. Specifically, the multi-scale feature extraction sub-network consists of sub-modules of four scales with the same structure but different channel numbers, and the different sub-modules are distinguished by the subscript s.
Illustratively, in FIG. 8, the convolutional layer has 64 input channels and 64 output channels. The number of input channels of the sub-module 1 is 64, and the number of output channels is 128. The number of input channels of the sub-module 2 is 128, and the number of output channels is 128. The number of input channels of the sub-module 3 is 256, and the number of output channels is 256. The number of input channels of the sub-module 4 is 512, and the number of output channels is 512. There is a downsampling layer between each sub-module, and the number of output channels of the downsampling layer is 128, 256 and 512 in sequence.
The guide map and the guided map are input into the multi-scale feature extraction sub-network and initialized to obtain an initial guide map and an initial guided map. Specifically, the paired guide map and guided map input into the multi-scale feature extraction sub-network are passed through initial convolution layers, and the obtained features are recorded as the initial guide map and the initial guided map, respectively.
Features are then extracted from the initial guide map and the initial guided map to obtain the guiding features, the guided features and the shared features. Specifically, feature extraction is performed on the initial guide map and the initial guided map using convolution kernels whose parameters are not shared, yielding the guiding features and the guided features; shared feature extraction is performed on the initial guide map and the initial guided map using convolution kernels with shared parameters, yielding the shared features.
As shown in fig. 8, one group of convolution kernels whose parameters are not shared is associated with the guide map, one group of convolution kernels has shared parameters, and another group of convolution kernels whose parameters are not shared is associated with the guided map. F_P^1 and F_I^1 denote the guiding feature and the guided feature output by sub-module 1, F_P^2 and F_I^2 denote the guiding feature and the guided feature output by sub-module 2, F_P^3 and F_I^3 denote the guiding feature and the guided feature output by sub-module 3, and F_P^4 and F_I^4 denote the guiding feature and the guided feature output by sub-module 4.
In addition, those skilled in the art may also concatenate the guiding feature, the guided feature and the shared feature and then perform downsampling through a convolution layer and a max pooling operation.
The input module 304 is configured to input the multi-scale features, the guide map and the guided map into the image domain sub-fusion network and the feature domain sub-fusion network to obtain the image fusion result and the feature fusion result. Specifically, the image fusion result is as follows:
O_img = W_A(A) ⊙ I + W_B(B), wherein A = f_θA(P, F_P^1, F_I^1) and B = f_θB(P, F_P^1, F_I^1); in the formula, O_img represents the image fusion result, W_A and W_B respectively represent different depthwise convolution layers in the convolutional neural network, A and B respectively represent the results learned with the parameters of the depthwise convolution layers W_A and W_B, I represents the guide map, F_P^1 represents the first guiding feature, F_I^1 represents the first guided feature, P represents the guided map, f_θA and f_θB represent the convolutional neural networks with θ_A and θ_B as parameters, and θ_A and θ_B represent the parameters of the convolutional neural networks.
The feature fusion results are as follows:
O_fea^s = Up_φ(F^s), F^s = f_θ(Att(A^s ⊙ F_P^s + B^s)); in the formula, O_fea^s represents the feature fusion result, i.e. the upsampled F^s, F^s represents the output result of each sub-module in the feature domain sub-fusion network, f_θ represents the convolutional neural network with θ as parameter, Up_φ represents the up-sampling module, consisting of a convolutional neural network and a bilinear interpolation filter, with φ as parameter, θ represents the parameters of the convolutional neural network, Att(·) represents the parallel operation combining the channel and spatial attention layers, A^s represents the parameters learned by the convolutional neural network, F_P^s represents the guiding feature, B^s represents the parameters calculated by the guided filter, and s indexes the sub-modules in the feature domain sub-fusion network.
In addition, B^s = f_mean(F_I^s − A^s ⊙ F_P^s), where B^s represents the parameters calculated by the guided filter, f_mean represents an average filter, F_I^s represents the guided feature, F_P^s represents the guiding feature, A^s represents the parameters learned by the convolutional neural network, and s indexes the sub-modules in the feature domain sub-fusion network.
A^s = W^s ⊙ f_θ^s(LN([F_I^s, F_P^s])), where W^s represents parameters to be learned by the convolutional neural network, f_θ^s represents the convolutional neural network with θ^s as parameter, θ^s represents the parameters of the convolutional neural network, LN represents layer normalization, F_I^s represents the guided feature, F_P^s represents the guiding feature, s indexes the sub-modules in the feature domain sub-fusion network, and A^s denotes the resulting coefficients.
The integration module 305 is configured to integrate the image fusion result and the feature fusion result to obtain the enhancement result and thereby establish the RAW format dim light image enhancement network. Specifically, the enhancement result is obtained as follows:
O = PS(f_θo([O_img, O_fea^1])); in the formula, O represents the enhancement result, θ_o represents the parameters of the convolutional neural network, f_θo represents the convolutional neural network with θ_o as parameter, PS(·) represents the pixel rearrangement operation, [·,·] represents the parallel (concatenation) operation, O_img represents the fusion result of the image domain sub-fusion network, and O_fea^1 represents the fusion result of the first sub-module in the feature domain sub-fusion network.
The optimization module 306 is configured to train the RAW format weak light image enhancement network by using the training data set and to iteratively optimize the RAW format weak light image enhancement network to obtain the training model. Specifically, the guide map and the guided map are input into the RAW format weak light image enhancement network in pairs to obtain the enhanced normal light image in sRGB format, and the total loss function of the RAW format weak light image enhancement network at that time is calculated. The loss function is as follows:
L = ‖Ŷ − Y‖₁, where L represents the loss value of the loss function, Ŷ represents the enhanced normal light image in sRGB format, and Y represents the sRGB format normal illumination image.
A stochastic gradient descent algorithm is used to update the network parameters of the RAW format weak light image enhancement network so as to optimize the loss function, and a back-propagation algorithm is used to iteratively optimize the RAW format weak light image enhancement network so that the output of the network gradually approximates the sRGB format normal illumination image. When the number of optimization iterations of the RAW format weak light image enhancement network reaches the set number of iterations, training is terminated, and the parameters of the RAW format weak light image enhancement network at that moment are saved to obtain the training model.
Some of the modules of the apparatus of the present application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, classes, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The apparatus or module set forth in the embodiments of the application may be implemented in particular by a computer chip or entity, or by a product having a certain function. For convenience of description, the above devices are described as being functionally divided into various modules, respectively. The functions of each module may be implemented in the same piece or pieces of software and/or hardware when implementing the embodiments of the present application. Of course, a module that implements a certain function may be implemented by a plurality of sub-modules or a combination of sub-units.
The methods, apparatus or modules described in this application may be implemented by computer readable program code in any suitable manner; for example, the controller may take the form of a microprocessor or processor and a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (ASIC), a programmable logic controller or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely in computer readable program code, it is entirely possible to implement the same functionality by logically programming the method steps so that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the means for implementing the various functions included therein can also be regarded as structures within the hardware component, or even as both software modules implementing the methods and structures within the hardware component.
The embodiment of the application also provides equipment, which comprises: a processor; a memory for storing processor-executable instructions; the processor, when executing the executable instructions, implements the method according to the embodiments of the present application.
Embodiments of the present application also provide a non-transitory computer readable storage medium having stored thereon a computer program or instructions which, when executed, cause a method as described in embodiments of the present application to be implemented.
In addition, each functional module in the embodiments of the present invention may be integrated into one processing module, each module may exist alone, or two or more modules may be integrated into one module.
The storage medium includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a cache, a hard disk drive (HDD) or a memory card. The memory may be used to store computer program instructions.
From the above description of embodiments, it will be apparent to those skilled in the art that the present application may be implemented in software plus necessary hardware. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product or may be embodied in the implementation of data migration. The computer software product may be stored on a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., comprising instructions for causing a computer device (which may be a personal computer, mobile terminal, server, or network device, etc.) to perform the methods described in the various embodiments or portions of the embodiments of the application.
In this specification, the embodiments are described progressively: identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. All or part of the present application is operational in numerous general-purpose or special-purpose computer system environments or configurations, for example: personal computers, server computers, hand-held or portable devices, tablet devices, mobile communication terminals, multiprocessor systems, microprocessor-based systems, programmable electronic devices, network PCs, minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices.
The above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced with equivalents; such modifications and substitutions do not depart from the spirit of the application.

Claims (6)

1. A RAW format low-light image enhancement method, comprising:
Preprocessing the image data set to obtain a training data set; the training data set comprises a guide image, a guided image and an sRGB format normal illumination image; the preprocessing of the image data set to obtain a training data set comprises the following steps:
processing the RAW format normal illumination image into an sRGB format normal illumination image;
multiplying the RAW format low-light image by an illumination ratio and rearranging the color channels of the RAW format low-light image to obtain the guide image;
filtering the guide image to obtain the guided image;
grouping the sRGB format normal illumination image, the guide image and the guided image to form the training data set;
constructing a multi-scale feature extraction sub-network, an image domain sub-fusion network and a feature domain sub-fusion network according to the image information of the training data set;
extracting multi-scale features of the training data set through the multi-scale feature extraction sub-network; wherein the multi-scale features include guide features, guided features and shared features;
inputting the multi-scale features, the guide image and the guided image into the image domain sub-fusion network and the feature domain sub-fusion network to obtain an image fusion result and a feature fusion result; wherein, the image fusion result is as follows:
(the image fusion formula is presented as an image in the original publication and is not reproduced here); in the formula, one symbol represents the image fusion result; two symbols represent different depth convolution layers in the convolutional neural network, and two further symbols represent the results learned with the parameters of those depth convolution layers; I represents the guide image; further symbols represent the first guide feature, the first guided feature and the guided image; and the remaining symbols denote two convolutional neural networks together with their respective parameters;
The feature fusion results are as follows:
(the feature fusion formula is presented as an image in the original publication and is not reproduced here); in the formula, one symbol represents the feature fusion result, i.e. the upsampled output; one symbol represents the output result of each sub-module in the feature domain sub-fusion network; one symbol represents a convolutional neural network with its corresponding parameters; one symbol represents an up-sampling module, consisting of a convolutional neural network and a bilinear interpolation filter, with its corresponding parameters; one symbol represents a parallel operation combining channel and spatial attention layers; one symbol represents parameters learned by a convolutional neural network; one symbol represents the guide feature; one symbol represents parameters obtained through calculation of the guided filter; and s represents a sub-module in the feature domain sub-fusion network;
integrating the image fusion result and the feature fusion result to obtain an enhancement result, and establishing a RAW format low-light image enhancement network according to the enhancement result; wherein the enhancement result obtained by integrating the image fusion result and the feature fusion result is as follows:
(the enhancement result formula is presented as an image in the original publication and is not reproduced here); in the formula, one symbol represents the enhancement result; one symbol represents the parameters of a convolutional neural network and another represents the convolutional neural network with those parameters; one symbol represents a pixel rearrangement operation; one symbol represents a parallel operation; one symbol represents the fusion result of the image domain sub-fusion network; and one symbol represents the fusion result of the first sub-module in the feature domain sub-fusion network;
and training the RAW format low-light image enhancement network by using the training data set, and iteratively optimizing the RAW format low-light image enhancement network to obtain a training model.
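For readability, the following is a minimal sketch, in PyTorch-style Python, of the preprocessing and image-domain fusion steps recited in claim 1. It is illustrative only: all identifiers (pack_raw, make_guided, ImageDomainFusion, the channel counts and kernel sizes) are assumptions, and the fusion form A * I + B * P is an assumed guided-filter-style reading of the claim, since the claimed formulas are published as images and are not reproduced in the text.

# Illustrative sketch only; names, shapes and the A*I + B*P fusion form are
# assumptions, not the patent's published formulas (those appear as images).
import torch
import torch.nn as nn
import torch.nn.functional as F


def pack_raw(raw_bayer: torch.Tensor, ratio: float) -> torch.Tensor:
    """Multiply a RAW low-light Bayer frame (N, H, W) by an illumination
    ratio and rearrange it into 4 color channels (RGGB layout assumed)."""
    raw = raw_bayer * ratio                          # brightness amplification
    r  = raw[..., 0::2, 0::2]
    g1 = raw[..., 0::2, 1::2]
    g2 = raw[..., 1::2, 0::2]
    b  = raw[..., 1::2, 1::2]
    return torch.stack([r, g1, g2, b], dim=-3)       # guide image, (N, 4, H/2, W/2)


def make_guided(guide_img: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Derive the guided image by filtering the guide image (mean filter assumed)."""
    return F.avg_pool2d(guide_img, kernel_size=k, stride=1, padding=k // 2)


class ImageDomainFusion(nn.Module):
    """Assumed guided-filter-style image-domain fusion: Y = A * I + B * P,
    where the coefficient maps A and B are predicted by two convolution
    branches from the first-scale guide and guided features."""

    def __init__(self, feat_ch: int = 32, img_ch: int = 4):
        super().__init__()
        self.coef_a = nn.Conv2d(2 * feat_ch, img_ch, 3, padding=1)
        self.coef_b = nn.Conv2d(2 * feat_ch, img_ch, 3, padding=1)

    def forward(self, guide_img, guided_img, guide_feat1, guided_feat1):
        feats = torch.cat([guide_feat1, guided_feat1], dim=1)
        a = self.coef_a(feats)                       # coefficient map A
        b = self.coef_b(feats)                       # coefficient map B
        return a * guide_img + b * guided_img        # image-domain fusion result

In the claimed method, this image-domain output would then be combined with the feature-domain output (for example through concatenation, a convolution and a pixel rearrangement) to form the final enhancement result; the exact integration operator is again only specified in the image-rendered formula.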
2. The method of claim 1, wherein, before the preprocessing of the image data set to obtain the training data set, the method further comprises:
performing content matching on the image data to obtain pairs of RAW format normal illumination images and RAW format low-light images, which form the image data set.
3. The method of claim 1, wherein the preprocessing of the image data set to obtain the training data set further comprises:
normalizing the training data set so that its pixel values range between 0 and 1.
4. The method of claim 1, wherein the extracting multi-scale features of the training data set through the multi-scale feature extraction sub-network comprises:
inputting the guide image and the guided image into the multi-scale feature extraction sub-network for initialization, to obtain an initial guide image and an initial guided image;
extracting features of the initial guide image and the initial guided image to obtain the guide features, the guided features and the shared features.
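A minimal sketch of a multi-scale feature extraction sub-network consistent with claim 4 follows, again in PyTorch-style Python. The branch layout, the shared-weight stages and the way the shared features are formed (here, an average of the two branches) are assumptions for illustration; the patent text does not publish the exact architecture.

# Illustrative sketch only; the real sub-network in the patent may differ.
import torch
import torch.nn as nn


class MultiScaleExtractor(nn.Module):
    """Produces guide features, guided features and shared features at
    several scales from the guide image and the guided image."""

    def __init__(self, in_ch: int = 4, base_ch: int = 32, scales: int = 3):
        super().__init__()
        self.init_guide = nn.Conv2d(in_ch, base_ch, 3, padding=1)    # initialization
        self.init_guided = nn.Conv2d(in_ch, base_ch, 3, padding=1)
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(base_ch, base_ch, 3, stride=2 if s > 0 else 1, padding=1),
                nn.ReLU(inplace=True),
            )
            for s in range(scales)
        ])

    def forward(self, guide_img, guided_img):
        g, p = self.init_guide(guide_img), self.init_guided(guided_img)
        guide_feats, guided_feats, shared_feats = [], [], []
        for stage in self.stages:                  # same weights applied to both inputs
            g, p = stage(g), stage(p)
            guide_feats.append(g)
            guided_feats.append(p)
            shared_feats.append(0.5 * (g + p))     # one possible notion of "shared"
        return guide_feats, guided_feats, shared_feats

The three lists returned per scale correspond to the guide features, guided features and shared features referenced in claims 1 and 4.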
5. The method according to claim 1, wherein the parameters calculated by the guided filter are as follows:
(the formula is presented as an image in the original publication and is not reproduced here); in the formula, one symbol represents the parameters calculated by the guided filter; one symbol represents an average filter; two symbols represent the guided features and the guide features; one symbol is a parameter learned by a convolutional neural network; and s represents a sub-module in the feature domain sub-fusion network.
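A minimal sketch of a per-sub-module guided-filter coefficient consistent with the legend of claim 5 is given below. It assumes the classical guided-filter form with a learned regularizer eps_s; the exact claimed expression is only published as an image and may differ, and all identifiers are illustrative.

# Illustrative sketch only; assumes the classical guided-filter coefficient.
import torch
import torch.nn.functional as F


def guided_filter_coeff(guide_feat: torch.Tensor,
                        guided_feat: torch.Tensor,
                        eps_s: torch.Tensor,
                        k: int = 5) -> torch.Tensor:
    """A_s = (mean(G * P) - mean(G) * mean(P)) / (var(G) + eps_s)."""
    mean = lambda x: F.avg_pool2d(x, k, stride=1, padding=k // 2)  # average filter
    mean_g, mean_p = mean(guide_feat), mean(guided_feat)
    cov_gp = mean(guide_feat * guided_feat) - mean_g * mean_p      # local covariance
    var_g = mean(guide_feat * guide_feat) - mean_g * mean_g        # local variance
    return cov_gp / (var_g + eps_s)                                # coefficient A_s

Under this reading, the resulting coefficient map would modulate the guide features inside each feature-domain sub-module; this, too, is an assumption rather than the patent's published formulation.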
6. A RAW format low-light image enhancement apparatus, comprising:
The preprocessing module is used for preprocessing the image data set to obtain a training data set; the training data set comprises a guide image, a guided image and an sRGB format normal illumination image; the preprocessing of the image data set to obtain a training data set comprises the following steps:
processing the RAW format normal illumination image into an sRGB format normal illumination image;
multiplying the RAW format low-light image by an illumination ratio and rearranging the color channels of the RAW format low-light image to obtain the guide image;
filtering the guide image to obtain the guided image;
grouping the sRGB format normal illumination image, the guide image and the guided image to form the training data set;
the construction module is used for constructing a multi-scale feature extraction sub-network, an image domain sub-fusion network and a feature domain sub-fusion network according to the image information of the training data set;
The extraction module is used for extracting the multi-scale features of the training data set through the multi-scale feature extraction sub-network; wherein the multi-scale features include guide features, guided features and shared features;
The input module is used for inputting the multi-scale features, the guide image and the guided image into the image domain sub-fusion network and the feature domain sub-fusion network to obtain an image fusion result and a feature fusion result; wherein, the image fusion result is as follows:
(the image fusion formula is presented as an image in the original publication and is not reproduced here); in the formula, one symbol represents the image fusion result; two symbols represent different depth convolution layers in the convolutional neural network, and two further symbols represent the results learned with the parameters of those depth convolution layers; I represents the guide image; further symbols represent the first guide feature, the first guided feature and the guided image; and the remaining symbols denote two convolutional neural networks together with their respective parameters;
The feature fusion results are as follows:
(the feature fusion formula is presented as an image in the original publication and is not reproduced here); in the formula, one symbol represents the feature fusion result, i.e. the upsampled output; one symbol represents the output result of each sub-module in the feature domain sub-fusion network; one symbol represents a convolutional neural network with its corresponding parameters; one symbol represents an up-sampling module, consisting of a convolutional neural network and a bilinear interpolation filter, with its corresponding parameters; one symbol represents a parallel operation combining channel and spatial attention layers; one symbol represents parameters learned by a convolutional neural network; one symbol represents the guide feature; one symbol represents parameters obtained through calculation of the guided filter; and s represents a sub-module in the feature domain sub-fusion network;
the integration module is used for integrating the image fusion result and the feature fusion result to obtain an enhancement result, and establishing a RAW format low-light image enhancement network according to the enhancement result; wherein the enhancement result obtained by integrating the image fusion result and the feature fusion result is as follows:
(the enhancement result formula is presented as an image in the original publication and is not reproduced here); in the formula, one symbol represents the enhancement result; one symbol represents the parameters of a convolutional neural network and another represents the convolutional neural network with those parameters; one symbol represents a pixel rearrangement operation; one symbol represents a parallel operation; one symbol represents the fusion result of the image domain sub-fusion network; and one symbol represents the fusion result of the first sub-module in the feature domain sub-fusion network;
and the optimization module is used for training the RAW format low-light image enhancement network by utilizing the training data set and iteratively optimizing the RAW format low-light image enhancement network to obtain a training model.
CN202410397743.0A 2024-04-03 RAW format weak light image enhancement method and device Active CN117994161B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410397743.0A CN117994161B (en) 2024-04-03 RAW format weak light image enhancement method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410397743.0A CN117994161B (en) 2024-04-03 RAW format weak light image enhancement method and device

Publications (2)

Publication Number Publication Date
CN117994161A CN117994161A (en) 2024-05-07
CN117994161B true CN117994161B (en) 2024-06-21


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309116A (en) * 2023-02-06 2023-06-23 北京理工大学 Low-dim-light image enhancement method and device based on RAW image
CN116579940A (en) * 2023-04-29 2023-08-11 中国人民解放军海军特色医学中心 Real-time low-illumination image enhancement method based on convolutional neural network



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant