CN110246105B - Video denoising method based on actual camera noise modeling - Google Patents
Video denoising method based on actual camera noise modeling
- Publication number
- CN110246105B (application CN201910518690.2A)
- Authority
- CN
- China
- Prior art keywords
- noise
- video
- denoising
- actual
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/21—Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a video denoising method based on actual camera noise modeling. The method comprises the following specific steps: (1) exploring the physical causes of the main noise in the imaging process and establishing a mathematical model of the noise distribution; (2) calibrating the model parameters based on the established noise model to generate a realistic noisy-video training set; (3) designing a video denoising and enhancement neural network that combines spatial and temporal information to suppress noise; (4) training and optimizing the neural network, and verifying the practicality of the method on synthesized and actually captured video datasets. The method is suited to low-light video denoising, which has very important applications in national defense and the military, security surveillance, scientific research, environmental protection, and related fields.
Description
Technical Field
The invention relates to the fields of computational photography and deep learning, and in particular to low-light video denoising based on actual camera noise modeling.
Background
Under very low light, heavy noise significantly degrades image quality, making low-light video imaging a challenging problem. Many video denoising and enhancement algorithms have been proposed to solve this problem, but most of their noise models rest on simple independent and identically distributed assumptions, such as additive white Gaussian noise, Poisson noise, or a Gaussian-Poisson mixture. In practice, the noise in video is very complex; especially under low light, factors that are usually ignored, such as dynamic streak noise, inter-channel noise relationships, and the truncation effect, become major problems.
Deep-learning-based approaches have made significant progress on many image processing tasks. By constructing a neural network model with multiple hidden layers and training it on massive data, deep learning learns more useful characteristics of the noise and ultimately improves the denoising result. Through layer-by-layer feature transformation, the representation of a sample in its original space is mapped to a new feature space in which image reconstruction becomes easier.
Therefore, how to accurately model the noise distribution of video mathematically and use a neural network to recover high-quality low-light video is a current research direction.
Disclosure of Invention
Aiming at the defects of existing video denoising methods, the invention provides a video denoising method based on actual camera noise modeling.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a low-light video denoising method based on actual camera noise modeling, comprising the following steps:
step 1, establishing a mathematical model of the actual noise in a low-light environment, the model covering dynamic streak noise, the inter-channel noise relationship, and the truncation effect;
step 2, calibrating the parameters of the mathematical model of step 1 using the noise of an actual camera, and generating a noisy-video training set that matches real conditions;
step 3, constructing a video denoising and enhancement neural network that combines spatial and temporal information to suppress noise;
step 4, training and optimizing the neural network using the noisy-video training set generated in step 2 together with an actually acquired low-light video dataset.
The invention considers the physical causes of the main noise in actual camera imaging. It first establishes a mathematical model of the noise distribution and generates data that better matches real conditions to train a video denoising and enhancement network based on long short-term memory (LSTM). The designed network accepts low-light video with an arbitrary number of frames as input and outputs high-quality video frames frame by frame.
The invention has the following advantages: (1) the proposed actual noise model is grounded in the physical imaging process and the hardware characteristics of the sensor, and accounts for three non-negligible noise sources under low light: dynamic streak noise, the correlation between noise channels, and the truncation effect; the model therefore handles the complex noise of real scenes well, especially videos captured under low-light imaging;
(2) an estimation method for the noise model is provided and a realistic noisy-video training set is synthesized, so that little training data needs to be captured; this is very attractive for video denoising, where paired noisy and clean video frames are difficult to acquire simultaneously;
(3) a video denoising and enhancement neural network based on long short-term memory (LSTM) is designed; the network structure can memorize and exploit information from up to the previous 20 frames to restore the current frame;
(4) the low-light video denoising method has very important application prospects in national defense and the military, security surveillance, scientific research, environmental protection, and related fields.
Drawings
FIG. 1 is a flowchart of a denoising method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a noise source in an imaging process according to an embodiment of the present invention.
FIG. 3 shows the 2D lookup tables in an embodiment of the present invention: (a) the truncation-effect mean correction and (b) the truncation-effect variance correction.
Fig. 4 is a schematic diagram of a network architecture designed in the embodiment of the present invention.
Detailed Description
The invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
Referring to fig. 1, the video denoising method based on actual camera noise modeling of this embodiment includes the following specific steps:
Step 1, exploring the physical causes of the main noise in the imaging process and establishing a mathematical model of the noise distribution. In low-light imaging, high-sensitivity camera settings turn otherwise slight noise into an important noise component. Fig. 2 shows the common noise sources across the whole imaging process, and a basic noise model comprising three main noise sources is established based on the physical imaging process. The entire model assumes that the camera follows a globally consistent linear response curve with gain K. The measured value y_i is obtained by the following formula:

y_i = K(S_i + D_i + R_i)   (1)

where i is the pixel index and y_i is the acquired pixel value; S_i ~ P(N_e) denotes the shot noise, where P(·) is a Poisson distribution and N_e is the number of photoelectrons at pixel i; D_i ~ P(N_d) denotes the dark current, where N_d is the number of dark-current electrons per pixel; and R_i ~ N(0, σ_r²) denotes the readout noise, where N(·,·) is a Gaussian distribution and σ_r² its variance.
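As an illustration, the following minimal Python sketch samples from the base model of equation (1); all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def base_noise_model(n_e, n_d, sigma_r, gain_k):
    """Sample y_i = K * (S_i + D_i + R_i) per equation (1).

    n_e     -- expected photoelectrons per pixel (N_e, array)
    n_d     -- expected dark-current electrons per pixel (N_d)
    sigma_r -- standard deviation of the Gaussian readout noise
    gain_k  -- globally consistent system gain K
    """
    s = rng.poisson(n_e)                          # shot noise S_i ~ P(N_e)
    d = rng.poisson(n_d, size=n_e.shape)          # dark current D_i ~ P(N_d)
    r = rng.normal(0.0, sigma_r, size=n_e.shape)  # readout noise R_i ~ N(0, sigma_r^2)
    return gain_k * (s + d + r)

# toy usage: a 4x4 patch with about 20 expected photoelectrons per pixel
y = base_noise_model(np.full((4, 4), 20.0), n_d=2.0, sigma_r=1.5, gain_k=4.0)
```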
In addition to the above widely known noise categories, the present invention addresses some particular sources of noise during low light imaging.
Dynamic streak noise: compared with conventional fixed-pattern noise, dynamic streak noise varies continuously and cannot be eliminated by direct subtraction. In video it typically appears as row-wise streaks. The phenomenon exists not only in rolling-shutter cameras but also in global-shutter cameras, though with slightly different characteristics. By exploring the imaging process of the imaging chip, two physical causes are identified: circuit fluctuation and asynchronous triggering. Both slightly perturb the per-row gain, so K in equation (1) is replaced with the row gain K_r:

K_r = K(1 + λG_c + (1 − λ)G_w)   (2)

where K is the globally consistent system gain, G_c is the disturbance caused by circuit fluctuation and follows a colored (1/f) Gaussian distribution, G_w is the disturbance caused by asynchronous triggering and follows a white Gaussian distribution, and λ weighs the contribution of circuit fluctuation against that of asynchronous triggering. Because G_c and G_w both follow zero-mean Gaussian distributions, the expectation of K_r is exactly K. To simplify the calibration of the parameters, equation (2) reduces to:
K_r = Kβ_r   (3)
where β_r is the correction parameter for the dynamic streak noise: in a rolling-shutter camera β_r follows a colored (1/f) Gaussian distribution with mean 1, while in a global-shutter camera it follows a white Gaussian distribution with mean 1.
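A sketch of how such a per-row gain could be simulated follows; sigma_beta is a hypothetical calibration parameter, and the 1/f spectral shaping is one simple way to realize the colored case:

```python
import numpy as np

rng = np.random.default_rng(0)

def row_gain(num_rows, gain_k, sigma_beta, shutter="rolling"):
    """Simulate the per-row gain K_r = K * beta_r of equations (2)-(3).

    beta_r has mean 1; per the text it is colored (1/f) Gaussian for a
    rolling shutter and white Gaussian for a global shutter.
    """
    white = rng.normal(0.0, sigma_beta, size=num_rows)
    if shutter == "rolling":
        # shape the white noise to an approximate 1/f (colored) spectrum
        spec = np.fft.rfft(white)
        freq = np.fft.rfftfreq(num_rows)
        freq[0] = freq[1]                                # avoid dividing by zero at DC
        colored = np.fft.irfft(spec / np.sqrt(freq), n=num_rows)
        colored *= sigma_beta / (colored.std() + 1e-12)  # restore the target power
        beta = 1.0 + colored
    else:  # global shutter
        beta = 1.0 + white
    return gain_k * beta  # one gain value per image row

k_r = row_gain(num_rows=480, gain_k=4.0, sigma_beta=0.01)
```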
Inter-channel noise relationship: the invention models the noise relationship between channels by exploring the physical characteristics of the color sensor, while taking pixel uniformity and channel differences into account. Typically, both rolling-shutter and global-shutter cameras acquire three-channel images by overlaying a color filter array on a silicon sensor. In practice the noise of the three channels is not uniform, mainly for two reasons: the three channels have different system gains, and their dark-pattern-noise (DPN) fluctuations differ significantly. Accordingly, K_r of equation (3) is modified to obtain:

K_r = K_c β_{r,c} | c ∈ {r,g,b}   (4)

where β_{r,c} is the channel-dependent correction parameter of the dynamic streak noise and K_c is the channel-dependent global system gain.
Truncation effect: the output of a digital sensor is typically non-negative, yet the readout noise follows a zero-mean Gaussian distribution and therefore takes negative values. Especially in low-light environments, many signals are even weaker than the readout noise, which leads to many negative values; these are truncated to 0 before the result is output. Mathematically, the truncation operation ⌊·⌋₀ can be expressed as:

⌊x⌋₀ = max(x, 0)
in conclusion, the final form of the actual noise model of the low-light environment provided by the invention is as follows:
Step 2, calibrating the model parameters based on the established noise model to generate a realistic noisy-video training set. The proposed noise model supports parameter calibration against actual camera noise. To simplify the derivation, truncation is not considered at first; the bias that truncation introduces into the expectation and variance is then corrected with 2D lookup tables, as shown in fig. 3.
Step 21, calibrating β_{r,c}. Video frames are acquired under dark-field conditions, where the shot-noise term vanishes. For each row, β_{r,c} is estimated as the row pixel mean divided by the global pixel mean. After β_{r,c} is obtained, the dynamic streak noise is removed by dividing each row's pixel values by its estimated gain, yielding the streak-corrected formula: y_i = K_c(D_i + R_i) | c ∈ {r,g,b}. Depending on the camera type, rolling shutter or global shutter, the distribution of the dynamic streak noise is determined to be colored or white Gaussian noise, respectively.
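A minimal sketch of this per-row estimate on a single-channel dark-field frame:

```python
import numpy as np

def estimate_row_gain(dark_frame):
    """Estimate beta_{r,c} for one channel of a dark-field frame (step 21):
    each row mean divided by the global mean, then divide the streaks out."""
    beta = dark_frame.mean(axis=1, keepdims=True) / dark_frame.mean()
    corrected = dark_frame / beta   # leaves y_i = K_c * (D_i + R_i)
    return beta, corrected
```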
Step 22, calibrating K_c. With the dark current D_i ~ P(N_d) and the readout noise R_i ~ N(0, σ_r²), the expectation and variance of the streak-corrected dark-field value y' can be expressed as:

E[y'] = K_c N_d, D[y'] = K_c² N_d + K_c² σ_r²   (6)

Using N_d = E[y']/K_c to substitute N_d, the following equation is obtained:

D[y'] = K_c E[y'] + K_c² σ_r²   (7)

Because the dark current grows with the exposure time, dark-field video frames with different exposures are used, which eliminates the constant K_c² σ_r² and yields the final formula:

ΔD[y'] = K_c ΔE[y']   (8)

where ΔE[y'] = E[y']|_{t₁} − E[y']|_{t₂}, ΔD[y'] = D[y']|_{t₁} − D[y']|_{t₂}, and t₁, t₂ represent different exposure times. In practice, E[y'] is estimated by mean(y') and D[y'] by var(y'). A series of dark-field videos is captured at different exposure times, from which a set of points (ΔE[y'], ΔD[y']) is computed, so K_c can be obtained by a linear fit over these points.
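A sketch of this calibration, with mean(y') and var(y') standing in for E[y'] and D[y']:

```python
import numpy as np

def calibrate_gain(dark_stacks):
    """Fit K_c from streak-corrected dark-field stacks taken at different
    exposure times, using equation (8): Delta D[y'] = K_c * Delta E[y']."""
    means = np.array([s.mean() for s in dark_stacks])   # mean(y') estimates E[y']
    varis = np.array([s.var() for s in dark_stacks])    # var(y') estimates D[y']
    d_e = (means[:, None] - means[None, :]).ravel()     # all pairwise Delta E
    d_d = (varis[:, None] - varis[None, :]).ravel()     # all pairwise Delta D
    # least-squares slope through the origin gives K_c
    return (d_e * d_d).sum() / (d_e ** 2).sum()
```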
Step 23, calibrating N_d and σ_r². After K_c is calibrated, N_d and σ_r² can be computed from equation (6): N_d = E[y']/K_c and σ_r² = D[y']/K_c² − N_d.
Step 24, lookup-table correction of the truncation error. The mean and variance after the truncation operation differ greatly from the untruncated mean(x) and var(x), and in actual calculations the effect of truncation is difficult to compute without prior knowledge of the pixel x. In the invention, all random variables entering the truncation are divided into two parts: a Poisson-distributed part and a zero-mean Gaussian part. Using Matlab, a large number of pixels x with different expectations and variances are generated, and 2D tables mapping the truncated mean and variance to the true mean(x) and variance var(x) are built. The values for the real data are then obtained by table lookup.
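The patent builds these tables in Matlab; the sketch below reproduces the same Monte Carlo idea in Python, with illustrative grid ranges:

```python
import numpy as np

rng = np.random.default_rng(0)

def truncation_tables(lam_grid, sigma_grid, n_samples=100_000):
    """Monte Carlo 2D lookup tables for the truncation correction (step 24).

    For each (Poisson mean, Gaussian sigma) cell, record the mean and
    variance of max(x, 0); inverting the table maps truncated statistics
    back to true ones. The grid ranges below are illustrative assumptions.
    """
    t_mean = np.empty((len(lam_grid), len(sigma_grid)))
    t_var = np.empty_like(t_mean)
    for i, lam in enumerate(lam_grid):
        for j, sig in enumerate(sigma_grid):
            x = rng.poisson(lam, n_samples) + rng.normal(0.0, sig, n_samples)
            xt = np.maximum(x, 0.0)  # the truncation operation
            t_mean[i, j], t_var[i, j] = xt.mean(), xt.var()
    return t_mean, t_var

# example grids (hypothetical ranges)
t_mean, t_var = truncation_tables(np.linspace(0, 20, 21), np.linspace(0.5, 5, 10))
```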
Step 25, training data synthesis. From clean video sequences, a noisy training dataset can be synthesized according to equation (5) and the camera parameter calibration steps above. The invention first derives the expected number of photoelectrons E[N_e] (equation (9)) from the area of a single pixel S_pixel, the transfer constant from luminous flux to radiant intensity C_lum2radiant, the energy of a single photon E_p, and the equivalent system quantum efficiency Q_e of the camera. The average value of the image is adjusted to the desired number of photoelectrons E[N_e], so that the expected photon numbers follow from the image pixel values; finally, the noisy training dataset is synthesized according to equation (5) by Monte Carlo simulation.
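A minimal sketch of the rescaling step, with target_mean_ne standing in for the full physical derivation of E[N_e] from S_pixel, C_lum2radiant, E_p, and Q_e:

```python
import numpy as np

def clean_to_photoelectrons(clean_rgb, target_mean_ne):
    """Rescale a clean frame so that its mean matches the desired expected
    photoelectron count E[N_e] (step 25)."""
    return clean_rgb * (target_mean_ne / clean_rgb.mean())

# each rescaled frame is then passed through the equation (5) model (see
# synthesize_noisy_frame above) to produce a paired noisy/clean sample
```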
Step 3, designing the video denoising and enhancement neural network, which combines spatial and temporal information to suppress noise. The invention proposes an LSTM-based video denoising and enhancement network whose input is noisy video shot by an actual camera under low light and whose output is bright, clear video frames. Fig. 4 shows the network architecture designed in this embodiment. To adaptively extract both short-term and long-term dependencies from the video, the network employs spatiotemporal LSTM (ST-LSTM) units, which model spatial and temporal representations in one unified memory cell and transfer memory vertically across layers and horizontally across states. The network comprises two convolutional layers and four ST-LSTM layers: the first convolutional layer extracts features of the incoming frames and passes them to the ST-LSTM layers; a skip connection is added for spatial correlation; and the last layer combines the reconstruction information learned by the previous layers back into sRGB space. The first and fourth ST-LSTM layers use 3×3 convolution kernels, the second and third use 5×5 kernels, and each layer has 64 feature channels. Zero padding ensures dimensional consistency between input and output.
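The ST-LSTM unit itself is not given in code in the patent; the sketch below uses a plain convolutional LSTM cell as a simplified stand-in, keeping the stated layer count, kernel sizes, feature width, skip connection, and zero padding:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Simplified convolutional LSTM cell standing in for the patent's
    ST-LSTM unit (which additionally shares a spatiotemporal memory
    vertically across layers)."""
    def __init__(self, in_ch, hid_ch, kernel):
        super().__init__()
        pad = kernel // 2  # zero padding keeps spatial dimensions equal
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel, padding=pad)

    def forward(self, x, h, c):
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class DenoiseNet(nn.Module):
    """Two convolutional layers around four recurrent layers with 3x3, 5x5,
    5x5, 3x3 kernels and 64 features each, plus a skip connection, per the
    architecture described above."""
    def __init__(self, feat=64):
        super().__init__()
        self.feat = feat
        self.head = nn.Conv2d(3, feat, 3, padding=1)
        self.cells = nn.ModuleList(
            ConvLSTMCell(feat, feat, k) for k in (3, 5, 5, 3))
        self.tail = nn.Conv2d(feat, 3, 3, padding=1)

    def forward(self, frames):  # frames: (T, B, 3, H, W), any frame count T
        _, b, _, hgt, wid = frames.shape
        zeros = lambda: frames.new_zeros(b, self.feat, hgt, wid)
        state = [(zeros(), zeros()) for _ in self.cells]
        outs = []
        for x in frames:                      # process the video frame by frame
            feat = self.head(x)
            h = feat
            for idx, cell in enumerate(self.cells):
                h, c = cell(h, *state[idx])
                state[idx] = (h, c)
            outs.append(self.tail(h + feat))  # skip connection, back to sRGB
        return torch.stack(outs)
```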
Step 4, training and optimizing the neural network, and verifying the practicality of the method using the noisy training dataset synthesized in step 25 and the actually acquired low-light video dataset. The network is trained by minimizing the loss function of equation (10) between the frames I output by the network and the corresponding ground-truth frames I*.
The fundamental loss is defined as a weighted average of the ℓ₂ and ℓ₁ losses: the ℓ₂ distance and the ℓ₁ term both measure the consistency of pixel intensities, the former making the output smooth and the latter making it finer. To further improve perceptual quality, a perceptual loss L_percep is introduced, which constrains the network output against the ground truth on high-level features extracted by a pre-trained visual geometry group (VGG) network. In addition, a total-variation regularizer L_TV is added to the loss function as a smoothing term. The overall loss is:

L = (1/N) Σ_{t=1}^{N} [ α‖I_t − I*_t‖₂ + β‖I_t − I*_t‖₁ + γL_percep(I_t, I*_t) + δL_TV(I_t) ]   (10)

where α, β, γ, δ are hyper-parameters of the training process and N is the number of video frames used in training. In the training of the invention, α = 5, β = 1, γ = 0.06, δ = 2×10⁻⁶, and N = 8.
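A sketch of equation (10) in PyTorch, assuming vgg_features is a frozen pre-trained feature extractor (e.g. a truncated VGG network):

```python
import torch
import torch.nn.functional as F

def total_variation(x):
    """Anisotropic total variation of a batch of images (B, C, H, W)."""
    return (x[..., :, 1:] - x[..., :, :-1]).abs().mean() + \
           (x[..., 1:, :] - x[..., :-1, :]).abs().mean()

def training_loss(out, gt, vgg_features, alpha=5.0, beta=1.0,
                  gamma=0.06, delta=2e-6):
    """Combined loss of equation (10) with the stated hyper-parameters."""
    l2 = F.mse_loss(out, gt)                    # smooths the output
    l1 = F.l1_loss(out, gt)                     # sharpens fine detail
    percep = F.mse_loss(vgg_features(out), vgg_features(gt))
    return alpha * l2 + beta * l1 + gamma * percep + delta * total_variation(out)
```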
The deep learning network is implemented in PyTorch, the optimizer used for training is Adam, and the learning rate is 1×10⁻⁶. The training set contains a large number of clean videos; approximately 900 sequences with very rich motion are selected. Considering that each camera has a unique set of noise parameters, the network is trained with different training data for each camera.
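A minimal training-loop sketch under these settings, reusing DenoiseNet and training_loss from the sketches above; dataset and vgg_features are placeholders:

```python
import torch

# Adam optimizer, learning rate 1e-6, clips of N = 8 frames per the text
net = DenoiseNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-6)
for noisy, clean in dataset:          # paired clips of shape (8, B, 3, H, W)
    out = net(noisy)
    # merge the time axis into the batch for the per-frame losses
    loss = training_loss(out.flatten(0, 1), clean.flatten(0, 1), vgg_features)
    opt.zero_grad()
    loss.backward()
    opt.step()
```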
Claims (4)
1. A video denoising method based on actual camera noise modeling is characterized by comprising the following steps:
step 1, establishing a mathematical model of the actual noise in a low-light environment, the model covering dynamic streak noise, the inter-channel noise relationship, and the truncation effect; the actual noise mathematical model is specifically:

y_i = ⌊K_c β_{r,c}(S_i + D_i + R_i)⌋₀ | c ∈ {r,g,b}

where i is the pixel index and y_i is the acquired pixel value; ⌊·⌋₀ is the truncation operation; K_c is the channel-dependent global system gain and β_{r,c} is the fluctuation factor of the dynamic streak noise; S_i ~ P(N_e) denotes the shot noise, where P(·) is a Poisson distribution and N_e is the number of photoelectrons at pixel i; D_i ~ P(N_d) denotes the dark current, where N_d is the number of dark-current electrons per pixel; R_i ~ N(0, σ_r²) denotes the readout noise, where N(·,·) is a Gaussian distribution and σ_r² its variance; and c ∈ {r,g,b} indexes the three color channels;
step 2, calibrating the parameters of the mathematical model of step 1 using the noise of an actual camera, and generating a noisy-video training set that matches real conditions;
step 3, constructing a video denoising and enhancement neural network that combines spatial and temporal information to suppress noise;
step 4, training and optimizing the neural network using the noisy-video training set generated in step 2 together with an actually acquired low-light video dataset.
2. The video denoising method based on actual camera noise modeling according to claim 1, characterized in that in step 2, a series of pixels x with different expectations and variances is generated, two-dimensional tables mapping the truncated mean and variance to the true mean(x) and variance var(x) are built, and the values for real data are obtained by table lookup.
3. The video denoising method based on actual camera noise modeling according to claim 1, characterized in that in step 3, the input of the video denoising and enhancement neural network is noisy video shot by an actual camera under low light, and the output is bright, clear video frames.
4. The video denoising method based on actual camera noise modeling according to claim 1, characterized in that in step 4, the network is trained by minimizing a loss function between the frames I output by the network and the corresponding ground-truth frames I*, the loss combining weighted ℓ₂ and ℓ₁ distances, a VGG-based perceptual loss, and a total-variation regularization term.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910518690.2A CN110246105B (en) | 2019-06-15 | 2019-06-15 | Video denoising method based on actual camera noise modeling |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110246105A CN110246105A (en) | 2019-09-17 |
CN110246105B true CN110246105B (en) | 2023-03-28 |
Family
ID=67887384
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910518690.2A Active CN110246105B (en) | 2019-06-15 | 2019-06-15 | Video denoising method based on actual camera noise modeling |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110246105B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110807812B (en) * | 2019-09-29 | 2022-04-05 | 浙江大学 | Digital image sensor system error calibration method based on prior noise model |
CN111260579B (en) * | 2020-01-17 | 2021-08-03 | 北京理工大学 | Low-light-level image denoising and enhancing method based on physical noise generation model |
CN111724317A (en) * | 2020-05-20 | 2020-09-29 | 天津大学 | Method for constructing Raw domain video denoising supervision data set |
CN112381731B (en) * | 2020-11-12 | 2021-08-10 | 四川大学 | Single-frame stripe image phase analysis method and system based on image denoising |
CN112686828B (en) * | 2021-03-16 | 2021-07-02 | 腾讯科技(深圳)有限公司 | Video denoising method, device, equipment and storage medium |
CN114219820B (en) * | 2021-12-08 | 2024-09-06 | 苏州工业园区智在天下科技有限公司 | Neural network generation method, denoising method and device thereof |
CN114418073B (en) * | 2022-03-30 | 2022-06-21 | 深圳时识科技有限公司 | Impulse neural network training method, storage medium, chip and electronic product |
CN114897729B (en) * | 2022-05-11 | 2024-06-04 | 北京理工大学 | Filtering array type spectral image denoising enhancement method and system based on physical modeling |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7652788B2 (en) * | 2006-06-23 | 2010-01-26 | Nokia Corporation | Apparatus, method, mobile station and computer program product for noise estimation, modeling and filtering of a digital image |
CN107424176A (en) * | 2017-07-24 | 2017-12-01 | 福州智联敏睿科技有限公司 | A kind of real-time tracking extracting method of weld bead feature points |
CN109214990A (en) * | 2018-07-02 | 2019-01-15 | 广东工业大学 | A kind of depth convolutional neural networks image de-noising method based on Inception model |
- 2019-06-15 CN CN201910518690.2A patent/CN110246105B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN110246105A (en) | 2019-09-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110246105B (en) | Video denoising method based on actual camera noise modeling | |
CN110211056B (en) | Self-adaptive infrared image de-striping algorithm based on local median histogram | |
CN108492262B (en) | No-ghost high-dynamic-range imaging method based on gradient structure similarity | |
Wang et al. | Low-light image enhancement based on virtual exposure | |
Wang et al. | Joint iterative color correction and dehazing for underwater image enhancement | |
CN111986084A (en) | Multi-camera low-illumination image quality enhancement method based on multi-task fusion | |
CN114998141B (en) | Space environment high dynamic range imaging method based on multi-branch network | |
Zhao et al. | End-to-end denoising of dark burst images using recurrent fully convolutional networks | |
Ley et al. | Reconstructing white walls: Multi-view, multi-shot 3d reconstruction of textureless surfaces | |
Rong et al. | Infrared fix pattern noise reduction method based on shearlet transform | |
CN114240767A (en) | Image wide dynamic range processing method and device based on exposure fusion | |
Wang et al. | End-to-end exposure fusion using convolutional neural network | |
CN111652815A (en) | Mask camera image restoration method based on deep learning | |
Tan et al. | High dynamic range imaging for dynamic scenes with large-scale motions and severe saturation | |
CN113935917A (en) | Optical remote sensing image thin cloud removing method based on cloud picture operation and multi-scale generation countermeasure network | |
CN111798484B (en) | Continuous dense optical flow estimation method and system based on event camera | |
CN116389912B (en) | Method for reconstructing high-frame-rate high-dynamic-range video by fusing pulse camera with common camera | |
CN111932478A (en) | Self-adaptive non-uniform correction method for uncooled infrared focal plane | |
Yu et al. | An improved retina-like nonuniformity correction for infrared focal-plane array | |
Teutsch et al. | An evaluation of objective image quality assessment for thermal infrared video tone mapping | |
CN114998173A (en) | High dynamic range imaging method for space environment based on local area brightness adjustment | |
De Neve et al. | An improved HDR image synthesis algorithm | |
CN113096033A (en) | Low-illumination image enhancement method based on Retinex model self-adaptive structure | |
Kinoshita et al. | Deep inverse tone mapping using LDR based learning for estimating HDR images with absolute luminance | |
Cao et al. | A Perceptually Optimized and Self-Calibrated Tone Mapping Operator |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||