CN115456903B - Deep learning-based full-color night vision enhancement method and system - Google Patents
- Publication number
- CN115456903B (application CN202211166825.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- network
- loss function
- representing
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Biophysics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
Abstract
The invention provides a full-color night vision enhancement method and system based on deep learning. The method comprises the following steps: S1, acquiring RAW-format image-sequence information under various ambient illuminance levels; S2, preprocessing the RAW-format image sequence to obtain a pixel-fused RGB-format image sequence; S3, obtaining a black-level image and removing the black level; S4, linearly brightening according to the brightness of typical regions of the image; S5, acquiring a denoised image sequence through a denoising network with a gated recurrent unit; S6, restoring the initial brightness; and S7, adaptively adjusting the brightness of the image sequence through a self-supervised recurrent convolutional neural network. The invention uses long-time-sequence information to denoise the image sequence, can effectively remove image noise collected in environments of around 10⁻³ Lux, and improves the signal-to-noise ratio of the image.
Description
Technical Field
The invention relates to a full-color night vision enhancement method and system based on deep learning, and belongs to the field of computer vision.
Background
Night vision imaging enhancement is a back-end image-processing technique: after an infrared or low-light image is captured at night, algorithmic processing removes noise and color deviation to form a clear, easily observable enhanced image. Infrared image enhancement generally uses filtering algorithms to suppress noise and background and highlight the primary targets. Enhancement of black-and-white low-light images is similar, mainly using adaptive filtering and related methods to raise the signal-to-noise ratio. In addition, some enhancement algorithms impart pseudo-color information to black-and-white low-light images to highlight the main targets in the scene and improve the viewing experience. Compared with these night vision techniques, acquiring full-color low-light images and improving their quality with deep learning can yield images with accurate color, high signal-to-noise ratio and balanced brightness, improving the night vision experience.
Existing artificial-intelligence low-light enhancement techniques train a network on paired or unpaired low-light images and normal-illumination images to learn the mapping from low light to normal illumination, and achieve good enhancement results. However, full-color night vision enhancement at starlight or atmospheric-glow illuminance levels is lacking: these methods can neither raise the signal-to-noise ratio nor adjust brightness, and when the ambient illuminance falls to about 10⁻³ Lux or below, existing deep-learning-based low-light enhancement algorithms cannot effectively improve image quality.
Disclosure of Invention
In order to solve the technical problems in the prior art at ambient illuminance as low as about 10⁻³ Lux, the invention provides a deep-learning-based full-color night vision enhancement method and system in which denoising and brightness adjustment are performed separately.
The specific technical scheme of the method is as follows:
A full-color night vision enhancement method based on deep learning comprises the following steps:
S1: Collect a low-light image in RAW format and record the image information as X_RAW;
S2: Perform pixel fusion on the image information X_RAW and convert it into an RGB-format image, denoted X_RGB;
S3: Using the same acquisition parameters as in step S1 and the processing method of step S2, obtain N dark-field images in RGB format, and take the average of the N dark-field images as the black-level information, denoted X_BLACK;
S4: Select image blocks of M×N resolution at five typical positions (around the periphery and at the center) of the image information X_RAW, calculate their mean value, and from it the linear brightness-enhancement coefficient Ratio; the denoising-network input is X_IN1 = Ratio × (X_RGB − X_BLACK);
S5: Input X_IN1 into the denoising network to obtain the denoised image X_OUT1;
S6: Using the coefficient Ratio obtained in step S4, restore the denoised image X_OUT1 to its initial brightness as the input of the adaptive brightness-adjustment network: X_IN2 = X_OUT1 / Ratio;
S7: Input X_IN2 into the adaptive brightness-adjustment network to obtain the final output image sequence.
The invention has the following beneficial effects:
(1) Aiming at the imaging characteristics of low-light images in extremely dark environments, the invention uses preprocessing methods such as pixel fusion and black-level subtraction to remove part of the noise and color deviation in advance, improving the quality of the low-light image.
(2) The low-light enhancement method is split into two steps, denoising and adaptive brightness adjustment, each realized by its own convolutional neural network; the adaptive brightness-adjustment network processes the already-denoised low-light image, which effectively improves noise removal for extremely-low-illuminance images while raising image brightness.
(3) A gated recurrent unit (GRU) is used in the denoising network to exploit image time-sequence information, effectively raising the signal-to-noise ratio and removing image noise collected in environments of around 10⁻³ Lux. Upsampling is done with the pixel-rearrangement method PixelShuffle, avoiding the image blurring and checkerboard artifacts introduced by deconvolution.
(4) The recurrent convolutional neural network is trained by self-supervised learning; reusing weight parameters reduces memory occupation, and self-supervision effectively improves the robustness of adaptive brightness adjustment, so the output images have uniform, stable overall brightness, small color deviation and improved display consistency.
Drawings
FIG. 1 is a schematic diagram of a full color night vision enhancement system of the present invention;
FIG. 2 is a flow chart of the full color night vision enhancement method of the present invention;
FIG. 3 is a schematic diagram of the structure of the denoising network DenoiseNet of the present invention;
fig. 4 is a schematic structural diagram of an adaptive brightness adjustment network LightNet according to the present invention;
FIG. 5 is a schematic view of a GRU unit structure used in the present invention.
Detailed Description
The following describes the scheme of the invention in detail with reference to the accompanying drawings.
As shown in fig. 1, the present embodiment provides a full-color night vision enhancement system based on deep learning, including:
the low-light image-acquisition module, used to acquire low-light images in RAW format; a full-color low-light camera is generally used, which can acquire visible-light information and image effectively under starlight-level ambient illuminance; the acquired image format is RAW and the Bayer arrangement is RGGB;
the preprocessing module, used to preprocess the RAW-format image acquired by the low-light image-acquisition module: BIN2 pixel fusion is performed on the RAW image, i.e. each group of four adjacent pixels is fused, the image is converted to RGB format, and the black-level information is subtracted, where the black-level information is the average of N RGB-format dark-field images obtained by the low-light image-acquisition module;
the denoising module, used to remove the noise of the preprocessed RGB image through a denoising network: the RGB image is linearly brightened, input into the denoising network, and then linearly restored to its initial brightness distribution to serve as the input of the adaptive brightness-adjustment network;
the adaptive brightness-adjustment module, used to process the denoised RGB image through the trained adaptive brightness-adjustment network and adjust its brightness distribution;
and the encoding output module, used to encode the RGB image enhanced by the adaptive brightness-adjustment module into a video signal, which is stored on a local storage medium or transmitted to a display.
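The BIN2 pixel fusion performed by the preprocessing module can be sketched in NumPy as follows. This is a minimal illustration assuming an RGGB Bayer layout: each 2×2 cell (R, G, G, B) collapses into one RGB pixel, with the two green samples averaged; the patent does not specify the exact fusion arithmetic, so the green averaging is an assumption.

```python
import numpy as np

def bin2_rggb_to_rgb(raw):
    # Slice the four Bayer sub-lattices of an RGGB RAW frame.
    r  = raw[0::2, 0::2]   # red samples
    g1 = raw[0::2, 1::2]   # first green sample of each 2x2 cell
    g2 = raw[1::2, 0::2]   # second green sample of each 2x2 cell
    b  = raw[1::2, 1::2]   # blue samples
    # Fuse each 2x2 cell into one RGB pixel (greens averaged).
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

raw = np.arange(16.0).reshape(4, 4)   # toy 4x4 RAW frame
rgb = bin2_rggb_to_rgb(raw)           # shape (2, 2, 3): half resolution
```

The fused image has half the RAW resolution in each dimension but collects the signal of four photosites per output pixel, which is what makes the method useful at extremely low illuminance.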
As shown in FIG. 2, the method for enhancing full-color night vision based on deep learning provided in this embodiment includes the following steps:
S1: Use a low-light camera to collect a RAW-format image with fixed camera parameters (exposure time, gain, aperture size, etc.), and record the initial image information as X_RAW;
S2: Perform BIN2 fusion on X_RAW, i.e. fuse each group of four adjacent pixels, then convert the image into an RGB-format image, denoted X_RGB;
S3: Using the same acquisition parameters as in step S1, collect N dark-field images with the low-light camera, convert them to RGB format by the method of step S2, and take the average of the N RGB-format dark-field images as the camera's black-level information, denoted X_BLACK;
S4: Select image blocks of M×N resolution at five typical positions (around the periphery and at the center) of the image information X_RAW from step S1, calculate their mean value, and from it the linear brightness-enhancement coefficient Ratio; the denoising-network input is X_IN1 = Ratio × (X_RGB − X_BLACK);
S5: Input X_IN1 into the denoising network DenoiseNet to obtain the denoised image X_OUT1;
The specific structure of DenoiseNet is shown in FIG. 3. It comprises an encoder, a feature-mapping unit and a decoder. The encoder encodes an image of size H×W×3 into a feature map of size H/8×W/8×192, where H and W denote the height and width of the input image; the decoder uses the pixel-rearrangement method PixelShuffle to rearrange the H/8×W/8×192 feature-map data into an output image of size H×W×3; between encoder and decoder, a feature-mapping unit composed of a residual network (ResNet) and a gated recurrent unit (GRU) completes the mapping from noisy features to noise-free features. The encoder comprises three 3×3 convolutions with stride 2, padding 1 and ReLU activation, and outputs the feature map F_encode of size H/8×W/8×192. The feature-mapping unit comprises two ResNet blocks and one GRU unit, the GRU structure being shown in FIG. 5. F_encode is first split into F_1 and F_2, each of size H/8×W/8×96; each passes through four 3×3 convolutions with stride 1, padding 1 and ReLU activation, yielding F_3. A 3×3 convolution layer compresses the channels of F_3 from 192 to 64, giving F_4, which is input into the GRU unit. The GRU unit simultaneously receives the hidden feature layer H_{t-1} obtained from processing the previous frame and outputs the hidden feature layer of the current frame, H_t. A 3×3 convolution restores the channel count of H_t to 192, and a ResNet block of the same structure as above completes the mapping from noisy to noise-free features; its result is denoted F_5, of size H/8×W/8×192. The decoder consists of a single PixelShuffle layer with upsampling factor 8: the 192 channels are reduced by a factor of 64 to 3 channels while the height and width are each increased 8-fold, giving an H×W×3 output. At the initial frame, an all-zero array is used as the hidden feature layer of the gated recurrence.
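The PixelShuffle rearrangement used by the decoder can be sketched in pure NumPy: a feature map of shape (C·r², H, W) becomes (C, H·r, W·r), so with r = 8 the 192-channel 1/8-resolution map becomes a 3-channel full-resolution image. This mirrors the standard channel-to-space layout (as in PyTorch's `nn.PixelShuffle`); it is illustrative only, not the patent's implementation.

```python
import numpy as np

def pixel_shuffle(x, r):
    # x: (C*r*r, H, W) feature map -> (C, H*r, W*r) image.
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)          # expose the r x r sub-pixel grid
    x = x.transpose(0, 3, 1, 4, 2)        # (c, h, r, w, r): interleave
    return x.reshape(c, h * r, w * r)

feat = np.random.default_rng(1).normal(size=(192, 4, 4))  # H/8 x W/8 map
out = pixel_shuffle(feat, 8)                              # (3, 32, 32)
```

Because the upsampling is a pure rearrangement of learned channel values, it cannot produce the checkerboard artifacts that strided deconvolution is prone to.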
In particular, DenoiseNet is trained by supervised learning; during training, the average loss over the image sequence is used as the error for back-propagation. First, the noise distribution of the acquired and preprocessed RAW-format images is analyzed and modeled as a mixture of Gaussian noise, Poisson noise, dynamic stripe noise and color-degradation noise; a noisy data set is then constructed from a noise-free image sequence, and the loss function is designed as:

L_DN = L_pixel + L_ssim + α1·L_tv + α2·L_lpips

where L_pixel = (1/N)·Σ_i |DN(x_i) − y_i|, with N the number of pixels, x_i the pixel value of the input image at point i, y_i the pixel value of the label image at point i, and DN(x_i) the corresponding pixel value of the denoised output; this term is the per-pixel absolute error between the output image and the real image. L_ssim = 1 − (2·μ_x·μ_y + C1)(2·σ_xy + C2) / ((μ_x² + μ_y² + C1)(σ_x² + σ_y² + C2)), where μ_x and μ_y are the means, σ_x² and σ_y² the variances, and σ_xy the covariance of the two images being compared (the denoised output and the label image), and C1 and C2 are constants; this term characterizes the structural-similarity error between the output image and the real image. L_tv = (1/N)·Σ_i (|∇_x DN(x_i)| + |∇_y DN(x_i)|), where ∇_x and ∇_y denote the gradients of the output image in the x and y directions; this term characterizes the noise error. L_lpips is the consistency error between the feature vectors of the output image and the real image after feature extraction by a VGG16 network; this term characterizes the consistency of high-dimensional features between the two images. α1 and α2 are adjustable parameters.
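Two of the four DenoiseNet loss terms, the per-pixel L1 term and the total-variation term, can be sketched directly in NumPy; L_ssim and L_lpips would additionally require an SSIM routine and a pretrained VGG16 and are omitted. The weight alpha1 is an adjustable parameter as in the patent; its value here is arbitrary.

```python
import numpy as np

def l_pixel(dn_x, y):
    # Per-pixel absolute error between denoised output and label image.
    return np.mean(np.abs(dn_x - y))

def l_tv(dn_x):
    # Total-variation term: mean absolute gradients in x and y,
    # penalizing residual high-frequency noise in the output.
    gx = np.abs(np.diff(dn_x, axis=1))
    gy = np.abs(np.diff(dn_x, axis=0))
    return gx.mean() + gy.mean()

out = np.ones((8, 8))     # a perfectly flat "denoised" output
label = np.ones((8, 8))   # matching flat label
alpha1 = 0.1              # arbitrary illustrative weight
loss = l_pixel(out, label) + alpha1 * l_tv(out)
```

For a flat output that matches its label both terms vanish, which is the behavior the supervised training drives toward on noise-free regions.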
S6: restoring the initial brightness distribution of the denoised image by the Ratio obtained in the step S4 as the input X of the self-adaptive brightness adjustment network IN2 =X OUT1 /Ratio;
S7: x is to be IN2 Inputting a self-adaptive brightness adjustment network (LightNet) to obtain a final output image sequence;
the self-adaptive brightness adjustment network comprises a coder-decoder and a gating circulation unit, X IN2 After the input of the self-adaptive brightness adjustment network, the increment output delta is obtained i And hidden layer output H of the gated loop unit i Transferring the output back to the input, adding the output to the input, and performing second enhancement, namely X IN2 +Δ i Inputting into a self-adaptive brightness adjustment network, and H i And inputting the input data into a gating circulation unit to realize circulation. In this embodiment, the specific structure of the adaptive brightness adjustment network LightNet is shown in fig. 4, and includes an encoder composed of 3 convolution layers, 1 gate control circulation unit GRU, and a decoder composed of 3 deconvolution layers, where the output of each GRU and the output of the network are sent back to the input of the GRU and the input of the network, so as to implement circulation; the encoder comprises three 3 x 3 convolutional layers, step size 2, padding 1, and mapping a feature map of input size H x W x 3 to sizeAs input to the GRU; the GRU unit realizes characteristic transmission of each cycle, and the structure of the GRU unit is shown in fig. 5; the decoder comprises three 3 x 3 deconvolution layers, step size 2, padding 1, input +.>Is characterized by (a) feature mapThe output is H×W×3, and after 8 cycles, the final output result is obtained.
In particular, the adaptive brightness-adjustment network LightNet is trained by self-supervised learning; no paired low-illumination/normal-illumination image data set is needed. Training uses a large-scale low-illumination image data set together with real low-illumination images, and the loss function is designed as:

L_LN = L_light + β1·L_contrast + β2·L_color

Here, the image is first divided into M pixel blocks of 16×16, and the loss terms are computed over these blocks. Y[·] computes the average gray value of a pixel block processed by the adaptive brightness-adjustment network LN, and L_light constrains the overall brightness of the output image toward 0.5. L_contrast compares, between the output and input blocks, the sum of the means of the absolute gradients of each pixel block in the x and y directions; this term constrains the output image to have contrast similar to the input. L_color compares μ_i, the mean values of the R, G and B channels of the pixel blocks, between output and input; this term constrains the output image to be consistent in color with the input. β1 and β2 are adjustable parameters.
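The block-wise brightness term can be sketched as follows. The split into 16×16 blocks and the 0.5 brightness target come from the patent; the exact penalty (mean absolute deviation here) is an assumption for illustration.

```python
import numpy as np

def l_light(img, block=16):
    # Split a (H, W) gray image into non-overlapping block x block tiles,
    # take each tile's average gray value, and penalize deviation from 0.5.
    h, w = img.shape
    blocks = img.reshape(h // block, block, w // block, block)
    means = blocks.mean(axis=(1, 3))      # per-block average gray value
    return np.abs(means - 0.5).mean()

img = np.full((32, 32), 0.5)              # already at the target brightness
assert l_light(img) == 0.0
loss = l_light(np.full((32, 32), 0.3))    # a darker image is penalized
```

Because the target is a fixed statistic of the output rather than a reference image, the term needs no ground-truth normal-illumination labels, which is what makes the training self-supervised.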
S8: the encoded output is either saved as video or output to a display.
In summary, the method and system provided in this embodiment cascade a denoising network DenoiseNet and an adaptive brightness-adjustment network LightNet: a noise model of the low-light image is established, RAW-format data are collected and preprocessed, a gated recurrent unit (GRU) removes noise along the time sequence, the signal-to-noise ratio of the full-color night vision image is increased, and the brightness distribution of the output image is optimized, so that a clear night vision image can be presented under extremely low illuminance.
The above description is only a specific embodiment of the present invention and is not intended to limit it in any way. It should be noted that the low-light image-capture device used does not limit the present invention, the image resolution does not limit the present invention, and the image content does not limit the present invention. The scope of the present invention is not limited to this embodiment; any variation or substitution that a person skilled in the art can readily conceive within the technical scope of the present invention is intended to fall within that scope.
Claims (6)
1. The full-color night vision enhancement method based on deep learning is characterized by comprising the following steps:
S1: Collect a low-light image in RAW format and record the image information as X_RAW;
S2: Perform pixel fusion on the image information X_RAW and convert it into an RGB-format image, denoted X_RGB;
S3: Using the same acquisition parameters as in step S1 and the processing method of step S2, obtain N dark-field images in RGB format, and take the average of the N dark-field images as the black-level information, denoted X_BLACK;
S4: Select image blocks of M×N resolution at five typical positions (around the periphery and at the center) of the image information X_RAW, calculate their mean value, and from it the linear brightness-enhancement coefficient Ratio; the denoising-network input is X_IN1 = Ratio × (X_RGB − X_BLACK);
S5: x is to be IN1 Inputting into a denoising network to obtain an image X after removing noise OUT1 The method comprises the steps of carrying out a first treatment on the surface of the The denoising network comprises an encoder, a feature mapping unit and a decoder; the characteristic mapping unit comprises a residual error network and a gating circulation unit and is used for finishing the mapping from the noise characteristic to the noiseless characteristic; the loss function of the denoising network is as follows:
L DN =L pixel +L ssim +α 1 L tv +α 2 L lpips
wherein the method comprises the steps ofN represents the number of pixels, x i Representing the pixel value of the input image at point i, y i Pixel value representing the label image at point i, DN (x i ) Representing pixel values of an image of the input image after denoising by a denoising network, wherein the loss function represents an absolute value error of each pixel between the output image and the real image; />μ x Sum mu y Representing the mean of the input image and the mean, sigma, of the output image xy Representing covariance between input image and output image, < >>Andrepresenting the variance of the input image and the output image, C1 and C2 being constants, the loss function characterizing the structural similarity error of the output image with the real image; /> And->Representing outputGradient of the image in both x and y directions, the loss function characterizing noise error; />After the characteristics of the output image and the real image are extracted through a convolutional neural network, the consistency error between the characteristic vectors is represented, and the loss function represents the consistency of the high-dimensional characteristics between the two images; α1 and α2 are adjustable parameters;
s6: recovering the denoised image X from the coefficient Ratio obtained in step S4 OUT1 As input X to an adaptive brightness adjustment network IN2 =X OUT1 /Ratio;
S7: x is to be IN2 Inputting the image sequence into a self-adaptive brightness adjustment network to obtain a final output image sequence; the self-adaptive brightness adjustment network comprises a coder and decoder and a gating circulation unit; the self-adaptive brightness adjustment network is trained by using a self-supervision learning method, and the loss function is as follows:
L LN =L light +β 1 L contrast +β 2 L color
wherein the image is first divided into 16×16M pixel blocks, and the above-mentioned loss function is calculated for these pixel blocks
Y[]The method comprises the steps of calculating average gray values of M pixel blocks processed by an adaptive brightness adjustment network LN, wherein the loss function constrains the overall brightness of an output image to be 0.5; wherein->Calculating the sum of the mean values of the absolute values of the gradients of the pixel blocks in both the x and y directions, the loss function constraining the output image and the outputThe incoming images have similar contrast;wherein mu i And->Representing the average value of three channels of pixel points RGB, wherein the loss function constrains the output image to be consistent with the input image in color; beta 1 And beta 2 Is an adjustable parameter.
3. The deep-learning-based full-color night vision enhancement method according to claim 2, wherein the feature-mapping unit consists of two residual networks and a gated recurrent unit, and the specific implementation comprises: first, the feature map of size H/8×W/8×192 is split into two sub-maps of size H/8×W/8×96; the two residual networks respectively extract the effective features of the two sub-maps, which are concatenated to obtain the input G_IN of the gated recurrent unit; the gated recurrent unit simultaneously receives the hidden feature layer H_{t-1} obtained from processing the previous frame and outputs the hidden feature layer H_t of the current frame; at the initial frame, an all-zero array is used as the hidden feature layer of the gated recurrent unit; the residual network is then used again to map the hidden feature layer H_t to noise-free features.
4. The deep-learning-based full-color night vision enhancement method according to claim 1, wherein the denoising network is trained by a supervised learning method; the training data is a simulated data set with artificially added noise, the noise is modeled as a mixture of Gaussian noise, Poisson noise, dynamic stripe noise and color-degradation noise, and the average loss over the image sequence is back-propagated as the error during training.
5. The deep-learning-based full-color night vision enhancement method according to claim 1, wherein in step S7, X_IN2 is input into the adaptive brightness-adjustment network to obtain the incremental output Δ_i and the hidden-layer output H_i of the gated recurrent unit; the output is fed back and added to the input for a second enhancement, i.e. X_IN2 + Δ_i is input into the adaptive brightness-adjustment network while H_i is input into the gated recurrent unit, realizing the recurrence.
6. A deep learning-based full-color night vision enhancement system, the system comprising:
the low-light image acquisition module is used for acquiring low-light images in a RAW format;
the preprocessing module is used for preprocessing the RAW format image acquired by the low-light image acquisition module;
the denoising module, used to remove the noise of the RGB image obtained by the preprocessing module through a denoising network; the denoising network comprises an encoder, a feature-mapping unit and a decoder; the feature-mapping unit comprises a residual network and a gated recurrent unit and completes the mapping from noisy features to noise-free features; the loss function of the denoising network is:

L_DN = L_pixel + L_ssim + α1·L_tv + α2·L_lpips

where L_pixel = (1/N)·Σ_i |DN(x_i) − y_i|, with N the number of pixels, x_i the pixel value of the input image at point i, y_i the pixel value of the label image at point i, and DN(x_i) the corresponding pixel value of the denoised output; this term is the per-pixel absolute error between the output image and the real image. L_ssim = 1 − (2·μ_x·μ_y + C1)(2·σ_xy + C2) / ((μ_x² + μ_y² + C1)(σ_x² + σ_y² + C2)), where μ_x and μ_y are the means, σ_x² and σ_y² the variances, and σ_xy the covariance of the two images being compared (the denoised output and the label image), and C1 and C2 are constants; this term characterizes the structural-similarity error between the output image and the real image. L_tv = (1/N)·Σ_i (|∇_x DN(x_i)| + |∇_y DN(x_i)|), where ∇_x and ∇_y denote the gradients of the output image in the x and y directions; this term characterizes the noise error. L_lpips is the consistency error between the feature vectors of the output image and the real image after feature extraction by a convolutional neural network; this term characterizes the consistency of high-dimensional features between the two images. α1 and α2 are adjustable parameters;
the adaptive brightness-adjustment module, used to process the RGB image denoised by the denoising module through the adaptive brightness-adjustment network and adjust the brightness distribution of the image; the adaptive brightness-adjustment network comprises an encoder-decoder and a gated recurrent unit and is trained by a self-supervised learning method; its loss function is:

L_LN = L_light + β1·L_contrast + β2·L_color

where the image is first divided into M pixel blocks of 16×16 and the loss terms are computed over these blocks: Y[·] computes the average gray value of a pixel block processed by the adaptive brightness-adjustment network LN, and L_light constrains the overall brightness of the output image toward 0.5; L_contrast compares, between output and input, the sum of the means of the absolute gradients of each pixel block in the x and y directions, constraining the output image to have contrast similar to the input; L_color compares μ_i, the mean values of the R, G and B channels of the pixel blocks, constraining the output image to be consistent in color with the input; β1 and β2 are adjustable parameters;
and the coding output module is used for coding the RGB image which is processed and enhanced by the adaptive brightness adjustment module into a video signal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211166825.1A CN115456903B (en) | 2022-09-23 | 2022-09-23 | Deep learning-based full-color night vision enhancement method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211166825.1A CN115456903B (en) | 2022-09-23 | 2022-09-23 | Deep learning-based full-color night vision enhancement method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115456903A CN115456903A (en) | 2022-12-09 |
CN115456903B (en) | 2023-05-09
Family
ID=84307312
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211166825.1A Active CN115456903B (en) | 2022-09-23 | 2022-09-23 | Deep learning-based full-color night vision enhancement method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115456903B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110223253A (en) * | 2019-06-10 | 2019-09-10 | 江苏科技大学 | A kind of defogging method based on image enhancement |
CN110880163A (en) * | 2018-09-05 | 2020-03-13 | 南京大学 | Low-light color imaging method based on deep learning |
CN112614061A (en) * | 2020-12-08 | 2021-04-06 | 北京邮电大学 | Low-illumination image brightness enhancement and super-resolution method based on double-channel coder-decoder |
CN113643202A (en) * | 2021-07-29 | 2021-11-12 | 西安理工大学 | Low-light-level image enhancement method based on noise attention map guidance |
CN113822830A (en) * | 2021-08-30 | 2021-12-21 | 天津大学 | Multi-exposure image fusion method based on depth perception enhancement |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10929955B2 (en) * | 2017-06-05 | 2021-02-23 | Adasky, Ltd. | Scene-based nonuniformity correction using a convolutional recurrent neural network |
- 2022-09-23: CN application CN202211166825.1A filed; granted as patent CN115456903B/en, legal status Active
Non-Patent Citations (2)
Title |
---|
Deep Residual Convolutional Network for Natural Image Denoising and Brightness Enhancement; Wenjie Xu et al.; 2018 International Conference on Platform Technology and Service; 1-6 *
Low-Light Image Enhancement via Progressive-Recursive Network; Jinjiang Li et al.; IEEE Transactions on Circuits and Systems for Video Technology; Vol. 31, No. 11; 4227-4240 *
Also Published As
Publication number | Publication date |
---|---|
CN115456903A (en) | 2022-12-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107123089B (en) | Remote sensing image super-resolution reconstruction method and system based on depth convolution network | |
CN111739082A (en) | Stereo vision unsupervised depth estimation method based on convolutional neural network | |
Wang et al. | Joint iterative color correction and dehazing for underwater image enhancement | |
CN115393227B (en) | Low-light full-color video image self-adaptive enhancement method and system based on deep learning | |
CN112116601A (en) | Compressive sensing sampling reconstruction method and system based on linear sampling network and generation countermeasure residual error network | |
CN113658057A (en) | Swin transform low-light-level image enhancement method | |
CN114170286B (en) | Monocular depth estimation method based on unsupervised deep learning | |
CN111553856B (en) | Image defogging method based on depth estimation assistance | |
CN115209119B (en) | Video automatic coloring method based on deep neural network | |
CN113034413A (en) | Low-illumination image enhancement method based on multi-scale fusion residual error codec | |
CN115953321A (en) | Low-illumination image enhancement method based on zero-time learning | |
CN113379606B (en) | Face super-resolution method based on pre-training generation model | |
CN115035011A (en) | Low-illumination image enhancement method for self-adaptive RetinexNet under fusion strategy | |
CN117422653A (en) | Low-light image enhancement method based on weight sharing and iterative data optimization | |
CN117611467A (en) | Low-light image enhancement method capable of balancing details and brightness of different areas simultaneously | |
CN115456903B (en) | Deep learning-based full-color night vision enhancement method and system | |
CN116703750A (en) | Image defogging method and system based on edge attention and multi-order differential loss | |
CN114638764B (en) | Multi-exposure image fusion method and system based on artificial intelligence | |
CN116208812A (en) | Video frame inserting method and system based on stereo event and intensity camera | |
CN115861113A (en) | Semi-supervised defogging method based on fusion of depth map and feature mask | |
CN115841523A (en) | Double-branch HDR video reconstruction algorithm based on Raw domain | |
CN114549343A (en) | Defogging method based on dual-branch residual error feature fusion | |
CN114549386A (en) | Multi-exposure image fusion method based on self-adaptive illumination consistency | |
CN113240589A (en) | Image defogging method and system based on multi-scale feature fusion | |
Xie et al. | Just noticeable visual redundancy forecasting: a deep multimodal-driven approach |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||