CN113034392A - HDR denoising and deblurring method based on U-net - Google Patents
HDR denoising and deblurring method based on U-net

Info
- Publication number
- CN113034392A (application CN202110302616.4A)
- Authority
- CN
- China
- Prior art keywords
- model
- noise
- test
- image
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T5/70
- G06N3/045—Combinations of networks
- G06N3/048—Activation functions
- G06N3/08—Learning methods
- G06T3/4038—Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
- G06T5/73
- G06T5/90
- G06T9/002—Image coding using neural networks
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
- G06T2207/10016—Video; Image sequence
- G06T2207/20201—Motion blur correction
Abstract
The invention relates to the technical field of image processing, and in particular to an HDR denoising and deblurring method based on U-net, comprising: S1, constructing an original data set; S2, applying blur processing and noise processing to the constructed original data set through a motion blur model, a pixel noise model and a row/column noise model to form a training set; S3, obtaining test images by camera shooting to form a test set; S4, constructing and training a U-Net network model; S5, model test: denoising and deblurring the images in the test set with the trained U-Net network model, and fine-tuning the relevant parameters. By jointly processing the low-exposure and high-exposure images and exploiting their perfect spatial and temporal registration, the invention solves the sensor's inherent problems of correlated noise, spatially varying blur, interlacing, and reduced spatial resolution.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to an HDR denoising and deblurring method based on U-net.
Background
Common cameras can capture only a limited range of luminance values (LDR), but most display and editing tasks require a higher range of luminance values (HDR). Without loss of generality, assume each even row is captured at low exposure and each odd row at high exposure. This causes a certain distortion: the pixel noise in the image no longer follows a single model but is strongly correlated with the rows, and different exposures produce different noise, resulting in a blurred HDR image.
Problems or disadvantages of the prior art: present dual-exposure sensors for reconstructing sharp, noise-free High Dynamic Range (HDR) video record different Low Dynamic Range (LDR) information in different pixel columns: the odd columns provide low-exposure, sharp but noisy information, while the even columns provide higher-exposure information with less noise. Image processing is now usually performed with a deep neural network in order to remove image distortion (deblurring and denoising), but capturing readings from such a warped sensor for the current methods is very time-consuming, and the deep neural network models also lack clean HDR data.
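The column-interleaved readout described above can be sketched in a few lines of NumPy. This is only an illustration; the exact column parity (odd = low exposure, even = high exposure) follows the description above but is otherwise an assumption:

```python
import numpy as np

def split_dual_exposure(raw):
    """Split an interleaved dual-exposure RAW frame (H x W) into its two
    exposures. Column parity (odd = low, even = high) is an assumption."""
    low = raw[:, 1::2]   # odd columns: short exposure, sharp but noisy
    high = raw[:, 0::2]  # even columns: long exposure, cleaner but blur-prone
    return low, high

frame = np.arange(16, dtype=np.float32).reshape(4, 4)
low, high = split_dual_exposure(frame)
# Each half keeps every other column, so horizontal resolution is halved.
```

Note how each sub-image retains only half the horizontal resolution, which is exactly the resolution loss that the joint processing later has to compensate.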
Therefore, there is a need for improvements in the prior art.
Disclosure of Invention
In order to overcome the defects of the prior art, a U-net-based HDR denoising and deblurring method is provided for HDR images obtained by a single-lens dual-exposure sensor.
In order to solve the technical problems, the invention adopts the technical scheme that:
An HDR denoising and deblurring method based on U-net comprises the following steps:
s1, data acquisition: acquiring video data from a high-speed video data set, and constructing to form an original data set;
s2, constructing a training set: performing blur processing and noise processing on the constructed original data set through a motion blur model, a pixel noise model and a row/column noise model to form a training set;
s3, constructing a test set: shooting by a camera to obtain a test image, and forming a test set;
s4, constructing a U-Net network model and training: the U-Net network model comprises an encoding part and a decoding part, wherein the encoding part is used for acquiring context information, and the decoding part is used for outputting a prediction graph; training the U-Net network model by using a training set;
s5, model test: and (3) carrying out denoising and deblurring processing on the images in the test set by adopting the trained U-Net network model, and finely adjusting related parameters.
Further, in S2, constructing the training set includes:
S21, forming a motion-blurred image I_MB from the video data in the original data set through the motion blur model, with the processing formula:

I_MB = clamp(γ × E_{t∈{0,1,2,3}}[I_L(t)]),

where γ denotes the blur index, I_L(t) denotes the t-th of the four low-exposure frames, E_{t∈{0,1,2,3}} denotes the mean over those four frames, and clamp restricts values to the valid intensity range;

S22, synthesizing noise on I_MB with the pixel noise model to form the simulated image I_PN: for each pixel of I_MB, each channel c and each exposure e, the ground-truth value y is obtained, a random number ξ_{c,e} is drawn from the distribution ξ_{c,e}|y, and the corresponding cumulative histogram C_{c,e} is used to generate a simulated sensor value x; then, starting from the image I_PN containing MB and pixel noise, the row/column noise model iterates over each row/column, channel and exposure, computes the row/column mean ȳ_{c,e}, and draws another random number ξ_{c,e} from ξ_{c,e}|ȳ; the difference between the means is added to the row/column so that the row/column mean matches the desired mean, yielding the training set.
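The S21 blur synthesis can be written directly from the formula above. This is a minimal sketch, assuming γ acts as a plain scale factor and the clamp range is [0, 1]; the patent pins down neither:

```python
import numpy as np

def simulate_motion_blur(frames, gamma=1.0):
    """I_MB = clamp(gamma * E_{t in {0,1,2,3}}[I_L(t)]): average four
    consecutive low-exposure frames, scale by the blur index, then clamp."""
    assert len(frames) == 4
    blurred = gamma * np.mean(np.stack(frames), axis=0)
    return np.clip(blurred, 0.0, 1.0)

frames = [np.full((2, 2), v, dtype=np.float32) for v in (0.2, 0.4, 0.6, 0.8)]
blurred = simulate_motion_blur(frames, gamma=1.0)  # mean of the four frames
```

With a larger blur index the scaled mean saturates at the clamp boundary, which mimics clipped highlights in a real long exposure.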
Further, in S3, the method further includes: the acquired test image is subjected to gamma correction and photographic tone mapping.
Further, the encoding part is used to obtain context information and consists of repeated 3 × 3 convolutions and 2 × 2 max-pooling layers; the activation function is ReLU, given by ReLU(x) = max(0, x);
each downsampling step thereafter doubles the number of feature channels.
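The two encoder building blocks named above can be sketched in NumPy. This is an illustrative re-implementation, not the patent's code, and the (H, W, C) tensor layout is an assumption:

```python
import numpy as np

def relu(x):
    """ReLU(x) = max(0, x), the encoder's activation function."""
    return np.maximum(0.0, x)

def max_pool_2x2(x):
    """2 x 2 max pooling with stride 2 over an (H, W, C) feature map."""
    h, w, c = x.shape
    x = x[: h // 2 * 2, : w // 2 * 2]             # drop odd remainders
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

fmap = np.array([[-1.0, 2.0], [3.0, -4.0]]).reshape(2, 2, 1)
pooled = max_pool_2x2(relu(fmap))  # one 2x2 window -> one value per channel
```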
Further, the decoding section is configured to output a prediction map, and includes:
each upsampling step uses deconvolution to halve the number of feature channels, the deconvolution result is concatenated with the corresponding feature map of the encoding stage, and the concatenated feature map undergoes two 3 × 3 convolutions; the last layer of the decoding stage uses a 1 × 1 convolution kernel that maps each feature vector to the network's output; in the residual-connected U-Net model, a residual module is added to the U-Net network, with the residual connection formula:
F(x)=H(x)-x,
where H(x) is the output of the residual network and F(x) is the output after the convolution operation.
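Rearranged as H(x) = F(x) + x, the identity shortcut takes only a few lines; `conv` here is a hypothetical stand-in for the block's convolution layers, not a function from the patent:

```python
def residual_block(x, conv):
    """H(x) = F(x) + x: the convolution path only has to learn the
    residual F(x) = H(x) - x rather than the full mapping."""
    return conv(x) + x

# Hypothetical convolution path that merely scales its input by 0.1.
out = residual_block(1.0, lambda x: 0.1 * x)  # 0.1 * 1.0 + 1.0
```

Because the identity path always passes the input through, gradients can flow around the convolution stack, which is why deeper residual U-Nets remain trainable.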
Further, the method also comprises the following steps:
S6, verifying the U-Net network model: check whether the model's loss function is still decreasing; if so, the model is not yet optimal and training continues; if not, the model is saved.
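The stopping rule in S6 can be sketched as a small helper. The patience window and tolerance below are assumptions, since the patent only says to check whether the loss keeps decreasing:

```python
def should_save_model(losses, patience=3, tol=1e-4):
    """Return True once the loss has stopped improving by more than `tol`
    for `patience` consecutive epochs, i.e. training can stop and the
    model can be saved."""
    if len(losses) <= patience:
        return False
    best_before = min(losses[:-patience])
    recent_best = min(losses[-patience:])
    return recent_best > best_before - tol
```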
Further, in S5, the model test includes:
the deblurring and denoising effects of the model are evaluated by using SSIM, and the evaluation formula is as follows:
wherein, muxIs the average value of x; mu.syIs the average value of y; deltaxIs the variance of x; deltayIs the variance of y; deltaxyIs the covariance of x and y; c1 ═ k1L)2, c2 ═ k2L)2 are two variables which remain stable; l is the dynamic range of the pixel, k 1-0.01 and k 2-0.03 are default values.
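A single-window (global) version of the evaluation formula can be sketched in NumPy. The patent does not say whether SSIM is computed globally or over sliding windows, so the global form here is an assumption:

```python
import numpy as np

def ssim_global(x, y, L=1.0, k1=0.01, k2=0.03):
    """SSIM(x, y) = ((2*mu_x*mu_y + c1)(2*sigma_xy + c2)) /
    ((mu_x^2 + mu_y^2 + c1)(sigma_x^2 + sigma_y^2 + c2))."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
# Identical images score 1; a shifted copy scores strictly less.
```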
Compared with the prior art, the invention has the following beneficial effects:
1. The method can be applied to HDR images obtained with a single-lens dual-exposure sensor. By jointly processing the low-exposure and high-exposure images and exploiting their perfect spatial and temporal registration, it solves the sensor's inherent problems of correlated noise, spatially varying blur, interlacing, and reduced spatial resolution.
2. The present invention generates synthetic training data by capturing a limited amount of data specific to the sensor and using a simple histogram to represent noise statistics, thereby yielding better denoising and deblurring quality results than the state-of-the-art techniques.
Drawings
The following will explain embodiments of the present invention in further detail through the accompanying drawings.
FIG. 1 is a flow chart of a U-net based HDR denoising and deblurring method.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example (b):
as shown in fig. 1, a HDR denoising and deblurring method based on U-net includes the following steps:
S1, a high-speed video data set containing 123 videos is collected from Adobe and used to construct the original data set.
S2, blur processing and noise processing are performed on the constructed original data set through the motion blur model, the pixel noise model and the row/column noise model to form the training set; test images are obtained with an Axiom-beta camera to form the test set.
For the training set, different exposures lie in different columns, so their motion blur (MB) also differs. For example, with an exposure ratio r = 4, the MB is also 4 times longer, and the image is a mixture of sharp and blurred columns. Since reference data without MB (especially HDR) is difficult to acquire, multi-exposure MB is simulated from existing LDR high-speed video material. First, the 123 videos in the collected original data set are turned into motion-blurred images I_MB with the motion blur model:

I_MB = clamp(γ × E_{t∈{0,1,2,3}}[I_L(t)]),

where γ denotes the blur index, I_L(t) denotes the t-th of the four low-exposure frames, E_{t∈{0,1,2,3}} denotes the mean over those four frames, and clamp restricts values to the valid intensity range. I_MB is then noise-synthesized with the pixel noise model to form the simulated image I_PN: each pixel and channel of I_MB is iterated to obtain its ground-truth value y; a random number ξ_{c,e} drawn from ξ_{c,e}|y selects from the corresponding cumulative histogram C_{c,e}, which generates a simulated sensor value x. Combining all pixels, channels and exposures yields the image I_PN containing MB and pixel noise. Finally, starting from I_PN and using the row/column noise model, each row/column, channel and exposure is iterated to compute the row/column mean ȳ_{c,e}, and another random number ξ_{c,e} is drawn from ξ_{c,e}|ȳ; the difference between the means is added to the row/column so that its mean matches the desired mean, giving the final training set for training the model.
For the test set, the test images were taken with an Axiom-beta camera equipped with a CMOSIS CMV12000 sensor and a Canon EF-S 18-135 mm lens at a resolution of 4096 × 3072 RAW, using an exposure ratio of 4 and the (low) exposure time; the acquired test images were then gamma-corrected and tone-mapped with a photographic operator.
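The two post-processing steps for the test images can be sketched as follows. The Reinhard-style global operator x/(1 + x) and γ = 2.2 are assumptions: the patent names the operators but not their parameters:

```python
import numpy as np

def preprocess_test_image(hdr, gamma=2.2):
    """Photographic (Reinhard-style global) tone mapping followed by
    gamma correction; both parameter choices are assumptions."""
    tone_mapped = hdr / (1.0 + hdr)          # compress HDR range into [0, 1)
    return np.power(tone_mapped, 1.0 / gamma)

hdr = np.array([0.0, 1.0, 10.0])
ldr = preprocess_test_image(hdr)  # monotone, bounded below 1
```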
S3, constructing a U-Net network model, wherein the U-Net network model comprises an encoding part and a decoding part, the encoding part is used for acquiring context information, and the decoding part is used for outputting a prediction graph; and training the U-Net network model by using a training set.
Specifically, a residual-connected U-Net model is constructed: in the denoising and deblurring model, a 128 × 64 × 8 input map is output as a 128 × 128 × 8 prediction map through residual connections and sub-pixel convolution. Sub-pixel convolution is an elegant way to upscale images and feature maps, and it reduces artifacts when converting a low-resolution image into a high-resolution one. U-Net is a fully convolutional network improved from the FCN; its structure resembles a "U", it needs fewer training samples, and it achieves high segmentation accuracy. The network consists of an encoding part and a decoding part. The encoding stage acquires context information and consists of repeated 3 × 3 convolutions and 2 × 2 max-pooling layers; the activation function is ReLU, given by ReLU(x) = max(0, x).
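The upscaling step of sub-pixel convolution (often called pixel shuffle) can be sketched in NumPy; the (H, W, C·r²) tensor layout is an assumption:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange an (H, W, C*r*r) feature map into (H*r, W*r, C) by
    interleaving groups of channels into space (sub-pixel upscaling)."""
    h, w, crr = x.shape
    c = crr // (r * r)
    x = x.reshape(h, w, r, r, c)
    x = x.transpose(0, 2, 1, 3, 4)           # (h, r, w, r, c)
    return x.reshape(h * r, w * r, c)

feat = np.arange(16, dtype=np.float32).reshape(2, 2, 4)
up = pixel_shuffle(feat, 2)  # (2, 2, 4) -> (4, 4, 1)
```

Since the rearrangement is a pure memory permutation, the preceding convolution does all the learning and the upscaling itself adds no parameters, which is why it avoids the checkerboard artifacts of naive deconvolution.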
then, downsampling is carried out to double the characteristic channels, the encoding stage is used for outputting a prediction graph, each time, deconvolution is used for halving the characteristic channels, then the deconvolution result is spliced with the characteristic graph of the corresponding encoding stage, the spliced characteristic graph is subjected to 3 × 3 convolution for 2 times, the last layer of the decoding stage adopts a 1 × 1 convolution kernel, each 2-bit characteristic vector is mapped to an input layer of a network, a residual module is added into a U-Net network based on a U-Net network model based on residual connection, and the residual connection formula is as follows:
F(x)=H(x)-x,
where H(x) is the output of the residual network and F(x) is the output after the convolution operation.
This structure effectively alleviates the gradient dispersion caused by deepening the network, and the new residual learning units are easier to train than the previous U-Net model. Training therefore speeds up considerably, the network reaches a smaller parameter count, and the model's deblurring and denoising performance improves further.
S5, model test: the trained U-Net network model is used to denoise and deblur the images in the test set, and the relevant parameters are fine-tuned.
The deblurring and denoising effect of the model is evaluated with SSIM, whose evaluation formula is:

SSIM(x, y) = ((2μ_x μ_y + c1)(2σ_xy + c2)) / ((μ_x² + μ_y² + c1)(σ_x² + σ_y² + c2)),

where μ_x is the mean of x; μ_y is the mean of y; σ_x² is the variance of x; σ_y² is the variance of y; σ_xy is the covariance of x and y; c1 = (k1·L)² and c2 = (k2·L)² are two variables that keep the division stable; L is the dynamic range of the pixels; k1 = 0.01 and k2 = 0.03 are the default values.
S6, verifying the U-Net network model:
and checking whether the model loss function continuously descends, if so, indicating that the model is not optimal, continuously training the model, and if not, storing the model.
In this embodiment, 123 videos (about 8,000 frames in total, with no or negligible inherent MB) are collected from an Adobe high-speed video data set. To obtain the model's input data, the corresponding images are extracted from the 123 videos and processed, e.g. by adding noise and blur, to construct an HDR data set; this data set is then fed into the constructed improved U-net model to train it, and once the model's loss function stops decreasing, the model is saved and its construction is complete. HDR images obtained with a single-lens dual-exposure sensor suffer from a series of serious problems inherent to such sensors, such as correlated noise and spatially varying blur, as well as interlacing and reduced spatial resolution; these are solved by jointly processing the low-exposure and high-exposure images and using their perfect spatial and temporal registration. By capturing a limited amount of sensor-specific data and representing the noise statistics with a simple histogram, synthetic training data is generated that yields better denoising and deblurring quality than the prior art.
Although only the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art, and all changes are encompassed in the scope of the present invention.
Claims (7)
1. An HDR denoising and deblurring method based on U-net, characterized by comprising the following steps:
s1, data acquisition: acquiring video data from a high-speed video data set, and constructing to form an original data set;
s2, constructing a training set: performing blur processing and noise processing on the constructed original data set through a motion blur model, a pixel noise model and a row/column noise model to form a training set;
s3, constructing a test set: shooting by a camera to obtain a test image, and forming a test set;
s4, constructing a U-Net network model and training: the U-Net network model comprises an encoding part and a decoding part, wherein the encoding part is used for acquiring context information, and the decoding part is used for outputting a prediction graph; training the U-Net network model by using a training set;
s5, model test: and (3) carrying out denoising and deblurring processing on the images in the test set by adopting the trained U-Net network model, and finely adjusting related parameters.
2. The method as claimed in claim 1, wherein in S2, constructing the training set comprises:

S21, forming a motion-blurred image I_MB from the video data in the original data set through the motion blur model, with the processing formula:

I_MB = clamp(γ × E_{t∈{0,1,2,3}}[I_L(t)]);

S22, synthesizing noise on I_MB with the pixel noise model to form the simulated image I_PN: for each pixel of I_MB, each channel c and each exposure e, the ground-truth value y is obtained, a random number ξ_{c,e} is drawn from the distribution ξ_{c,e}|y, and the corresponding cumulative histogram C_{c,e} is used to generate a simulated sensor value x; then, starting from the image I_PN containing MB and pixel noise, the row/column noise model iterates over each row/column, channel and exposure, computes the row/column mean ȳ_{c,e}, and draws another random number ξ_{c,e} from ξ_{c,e}|ȳ; the difference between the means is added to the row/column so that the row/column mean matches the desired mean, yielding the training set.
3. The method for HDR denoising and deblurring based on U-net according to claim 1, wherein in S3, further comprising: the acquired test image is subjected to gamma correction and photographic tone mapping.
4. The method as claimed in claim 1, wherein the encoding part is used to obtain context information and consists of repeated 3 × 3 convolutions and 2 × 2 max-pooling layers; the activation function is ReLU, given by ReLU(x) = max(0, x);
each downsampling step thereafter doubles the number of feature channels.
5. The method as claimed in claim 1, wherein the decoding part is used for outputting the prediction map, and comprises:
each upsampling step uses deconvolution to halve the number of feature channels, the deconvolution result is concatenated with the corresponding feature map of the encoding stage, and the concatenated feature map undergoes two 3 × 3 convolutions; the last layer of the decoding stage uses a 1 × 1 convolution kernel that maps each feature vector to the network's output;
in the residual-connected U-Net model, a residual module is added to the U-Net network, with the residual connection formula:
F(x)=H(x)-x,
where H(x) is the output of the residual network and F(x) is the output after the convolution operation.
6. The method of claim 1, further comprising:
S6, verifying the U-Net network model: check whether the model's loss function is still decreasing; if so, the model is not yet optimal and training continues; if not, the model is saved.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110302616.4A CN113034392A (en) | 2021-03-22 | 2021-03-22 | HDR denoising and deblurring method based on U-net |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110302616.4A CN113034392A (en) | 2021-03-22 | 2021-03-22 | HDR denoising and deblurring method based on U-net |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113034392A true CN113034392A (en) | 2021-06-25 |
Family
ID=76472312
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110302616.4A Pending CN113034392A (en) | 2021-03-22 | 2021-03-22 | HDR denoising and deblurring method based on U-net |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113034392A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114018250A (en) * | 2021-10-18 | 2022-02-08 | 杭州鸿泉物联网技术股份有限公司 | Inertial navigation method, electronic device, storage medium, and computer program product |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108961186A (en) * | 2018-06-29 | 2018-12-07 | 赵岩 | A kind of old film reparation recasting method based on deep learning |
CN110097106A (en) * | 2019-04-22 | 2019-08-06 | 苏州千视通视觉科技股份有限公司 | The low-light-level imaging algorithm and device of U-net network based on deep learning |
US20200265567A1 (en) * | 2019-02-18 | 2020-08-20 | Samsung Electronics Co., Ltd. | Techniques for convolutional neural network-based multi-exposure fusion of multiple image frames and for deblurring multiple image frames |
CN111787300A (en) * | 2020-07-29 | 2020-10-16 | 北京金山云网络技术有限公司 | VR video processing method and device and electronic equipment |
2021
- 2021-03-22 CN CN202110302616.4A patent/CN113034392A/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108961186A (en) * | 2018-06-29 | 2018-12-07 | 赵岩 | A kind of old film reparation recasting method based on deep learning |
US20200265567A1 (en) * | 2019-02-18 | 2020-08-20 | Samsung Electronics Co., Ltd. | Techniques for convolutional neural network-based multi-exposure fusion of multiple image frames and for deblurring multiple image frames |
CN110097106A (en) * | 2019-04-22 | 2019-08-06 | 苏州千视通视觉科技股份有限公司 | The low-light-level imaging algorithm and device of U-net network based on deep learning |
CN111787300A (en) * | 2020-07-29 | 2020-10-16 | 北京金山云网络技术有限公司 | VR video processing method and device and electronic equipment |
Non-Patent Citations (1)
Title |
---|
张承志: "极低照度下图像去噪方法研究", 《中国优秀博硕士学位论文全文数据库(硕士)信息科技辑》 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114018250A (en) * | 2021-10-18 | 2022-02-08 | 杭州鸿泉物联网技术股份有限公司 | Inertial navigation method, electronic device, storage medium, and computer program product |
CN114018250B (en) * | 2021-10-18 | 2024-05-03 | 杭州鸿泉物联网技术股份有限公司 | Inertial navigation method, electronic device, storage medium and computer program product |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111028177B (en) | Edge-based deep learning image motion blur removing method | |
US8315474B2 (en) | Image processing device and method, and image sensing apparatus | |
CN101485193B (en) | Image generating device and image generating method | |
CN104144298B (en) | A kind of wide dynamic images synthetic method | |
CN103413286B (en) | United reestablishing method of high dynamic range and high-definition pictures based on learning | |
JP2021526248A (en) | HDR image generation from a single shot HDR color image sensor | |
US20100061642A1 (en) | Prediction coefficient operation device and method, image data operation device and method, program, and recording medium | |
CN111369466B (en) | Image distortion correction enhancement method of convolutional neural network based on deformable convolution | |
Chang et al. | Low-light image restoration with short-and long-exposure raw pairs | |
Pu et al. | Robust high dynamic range (hdr) imaging with complex motion and parallax | |
CN111986084A (en) | Multi-camera low-illumination image quality enhancement method based on multi-task fusion | |
CN112837245A (en) | Dynamic scene deblurring method based on multi-mode fusion | |
CN110225260B (en) | Three-dimensional high dynamic range imaging method based on generation countermeasure network | |
CN113096029A (en) | High dynamic range image generation method based on multi-branch codec neural network | |
CN111986106A (en) | High dynamic image reconstruction method based on neural network | |
Seybold et al. | Towards an evaluation of denoising algorithms with respect to realistic camera noise | |
CN111652815B (en) | Mask plate camera image restoration method based on deep learning | |
CN114170286A (en) | Monocular depth estimation method based on unsupervised depth learning | |
CN111369443A (en) | Zero-order learning super-resolution method for optical field cross-scale | |
CN116389912B (en) | Method for reconstructing high-frame-rate high-dynamic-range video by fusing pulse camera with common camera | |
CN115082341A (en) | Low-light image enhancement method based on event camera | |
CN113034392A (en) | HDR denoising and deblurring method based on U-net | |
Liu et al. | Joint hdr denoising and fusion: A real-world mobile hdr image dataset | |
Chang et al. | Beyond camera motion blur removing: How to handle outliers in deblurring | |
CN112750092A (en) | Training data acquisition method, image quality enhancement model and method and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20210625 |
|