CN110246105B - Video denoising method based on actual camera noise modeling

Video denoising method based on actual camera noise modeling

Info

Publication number
CN110246105B
CN110246105B
Authority
CN
China
Prior art keywords
noise
video
denoising
actual
neural network
Prior art date
Legal status
Active
Application number
CN201910518690.2A
Other languages
Chinese (zh)
Other versions
CN110246105A (en)
Inventor
Wang Wei
Chen Xin
Current Assignee
Nanjing University
Original Assignee
Nanjing University
Priority date
Filing date
Publication date
Application filed by Nanjing University
Priority to CN201910518690.2A
Publication of CN110246105A
Application granted
Publication of CN110246105B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/21Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a video denoising method based on actual camera noise modeling. The method comprises the following specific steps: (1) exploring the physical causes of the main noise in the imaging process and establishing a mathematical model of the noise distribution; (2) calibrating the model parameters based on the established noise model to generate a noise video training set that accords with reality; (3) designing a video denoising and enhancing neural network that combines spatial and temporal information to suppress and weaken noise; (4) training the optimized neural network and verifying the practicability of the method with both synthesized and actually acquired video data sets. The denoising method is suitable for low-light video and has very important applications in fields such as national defense and military, security monitoring, scientific research, and environmental protection.

Description

Video denoising method based on actual camera noise modeling
Technical Field
The invention relates to the fields of computational photography and deep learning, in particular to low-light video denoising based on actual camera noise modeling.
Background
In very low light conditions, a large amount of noise significantly degrades image quality, which makes low-light video imaging a challenging problem. Many video denoising or video enhancement algorithms have been proposed to solve this problem; however, most of them rely on simple independent identically distributed noise assumptions, such as additive white Gaussian noise, Poisson noise, or a mixture of Gaussian and Poisson noise. In practice, the noise in video is very complex, and especially in low light, factors that are usually ignored, such as dynamic stripe noise, the inter-channel noise relationship, and the truncation effect, become major problems.
Deep-learning-based approaches have made significant progress on many image processing tasks. By constructing a neural network model with multiple hidden layers and training it on massive data, deep learning learns more useful characteristics of the noise and ultimately improves the denoising result. Through layer-by-layer feature transformation, the representation of a sample in the original space is transformed into a new feature space, making image reconstruction easier.
Therefore, how to accurately establish a mathematical model of the video noise distribution and use a neural network to recover high-quality low-light video is a current research direction.
Disclosure of Invention
Aiming at the defects of existing video denoising methods, the invention aims to provide a video denoising method based on actual camera noise modeling.
In order to achieve this purpose, the technical scheme adopted by the invention is as follows:
A low-light video denoising method based on actual camera noise modeling, comprising:
step 1, establishing an actual noise mathematical model of the weak-light environment, wherein the model covers dynamic stripe noise, the inter-channel noise relationship and the truncation influence;
step 2, calibrating parameters of the mathematical model in the step 1 by using noise of an actual camera to generate a noise video training set which accords with an actual situation;
step 3, constructing a video denoising and enhancing neural network, and combining spatial and temporal information to suppress and weaken noise;
and 4, training and optimizing the neural network by using the noise video training set generated in the step 2 and the actually acquired low-light video data set.
The invention considers the physical causes of the main noise in actual camera imaging. It first establishes a mathematical model of the noise distribution and generates data that better conforms to practical situations to train a video denoising and enhancing network based on long short-term memory (LSTM); the designed network can take a weak-light video with any number of frames as input and outputs high-quality video frames frame by frame.
The invention has the following advantages: (1) The proposed actual noise model is based on the physical imaging process and the hardware characteristics of the sensor, and considers three non-negligible noise sources under weak light: dynamic stripe noise, the correlation among noise channels, and the truncation influence; the model can therefore handle well the complex noise of actual situations, particularly video under weak-light imaging;
(2) An estimation method for the noise model is provided and a noise video training set that accords with reality is synthesized, so that little real training data needs to be acquired; this is very attractive for video denoising, where paired noisy and clean video frames are difficult to acquire simultaneously;
(3) A video denoising and enhancing neural network based on long short-term memory (LSTM) is designed; the network structure of the invention can memorize and utilize information from up to the previous 20 frames to process and recover the current frame image;
(4) The low-light video denoising method has very important application prospects in fields such as national defense and military, security monitoring, scientific research, and environmental protection.
Drawings
FIG. 1 is a flowchart of a denoising method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a noise source in an imaging process according to an embodiment of the present invention.
FIG. 3 shows the 2D lookup tables in an embodiment of the present invention: (a) truncation-effect mean correction; (b) truncation-effect variance correction.
Fig. 4 is a schematic diagram of a network architecture designed in the embodiment of the present invention.
Detailed Description
The invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
Referring to fig. 1, a video denoising method for actual camera noise modeling according to this embodiment includes the following specific steps:
step 1, exploring the physical cause of main noise in the imaging process and establishing a mathematical model of noise distribution. In low-light imaging, the high-sensitivity camera settings make some slight noise an important noise component in low light. Fig. 2 shows common noise sources in the whole imaging process, and a basic noise model mainly comprising three noise sources is established based on the physical imaging process. The entire model assumes that the camera follows a globally consistent linear response curve with a gain K. The measured value y is obtained by the following formula i
y i =K(S i +D i +R i ) (1)
Where i pixel index, y i Is the pixel acquiredThe value of the sum of the values,
Figure BDA0002095905920000021
indicates shot noise, and>
Figure BDA0002095905920000022
is a poisson distribution, <' > based on>
Figure BDA0002095905920000023
Is the number of photoelectrons at pixel i, < >>
Figure BDA0002095905920000024
Representing dark current, N d Is the number of dark current electrons per pixel value,
Figure BDA0002095905920000025
represents a read noise, <' >>
Figure BDA0002095905920000026
Is a Gaussian distribution sign>
Figure BDA0002095905920000027
Variance of gaussian distribution.
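
For concreteness, a minimal numerical sketch of sampling equation (1) follows. It only illustrates how the three noise terms are drawn; all parameter values (photoelectron counts, gain, noise levels) are illustrative assumptions, not calibrated camera values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_basic_model(n_e, n_d, sigma_r, K):
    """Sample eq. (1): y = K * (shot + dark + read) per pixel.

    n_e     -- expected photoelectron count per pixel, array (H, W)
    n_d     -- expected dark-current electrons per pixel (scalar)
    sigma_r -- std. dev. of the Gaussian readout noise (scalar)
    K       -- globally consistent system gain (scalar)
    """
    S = rng.poisson(n_e)                              # shot noise, Poisson(N_e^i)
    D = rng.poisson(n_d, size=n_e.shape)              # dark current, Poisson(N_d)
    R = rng.normal(0.0, sigma_r, size=n_e.shape)      # readout noise, N(0, sigma_r^2)
    return K * (S + D + R)

# illustrative usage with assumed values
y = sample_basic_model(n_e=np.full((4, 4), 20.0), n_d=2.0, sigma_r=1.5, K=0.8)
```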
In addition to the above widely known noise categories, the present invention addresses some particular sources of noise during low light imaging.
Dynamic streak noise: compared with conventional fixed pattern noise, dynamic streak noise drifts continuously and cannot be eliminated by direct subtraction. In video it typically occurs in the form of line stripes. This phenomenon exists not only in rolling-shutter cameras but also in global-shutter cameras, with slightly different characteristics. By exploring the imaging process of the imaging chip, two physical causes are identified: circuit fluctuation and asynchronous triggering. Both slightly perturb the per-row gain, so K in equation (1) is replaced with the row gain $K_r$:

$$K_r = K\left(1 + \lambda N_{1/f} + (1-\lambda) N_w\right) \qquad (2)$$

where K is the globally consistent system gain, $N_{1/f}$ is the disturbance caused by circuit fluctuation and follows a colored (1/f) Gaussian distribution, $N_w$ is the disturbance caused by asynchronous triggering and follows a white Gaussian distribution, and λ weights the two causes. Because $N_{1/f}$ and $N_w$ both follow zero-mean Gaussian distributions, the expectation of $K_r$ is exactly K. To simplify the calibration of the parameters, equation (2) reduces to:

$$K_r = K\beta_r \qquad (3)$$

where $\beta_r$ is the correction parameter for dynamic streak noise: in a rolling-shutter camera $\beta_r$ follows a colored (1/f) Gaussian distribution $\mathcal{N}_{1/f}(1, \sigma_{\beta}^2)$, and in a global-shutter camera it follows a white Gaussian distribution $\mathcal{N}(1, \sigma_{\beta}^2)$.
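
As an illustration of equation (3), the sketch below draws per-row gains $K_r = K\beta_r$. The white-Gaussian case (global shutter) is direct; the colored 1/f case (rolling shutter) is approximated here by spectrally shaping white noise, which is an assumption of this sketch rather than the patent's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def row_gains(n_rows, K, sigma_beta, shutter="global"):
    """Draw per-row gains K_r = K * beta_r (eq. 3), beta_r ~ N(1, sigma_beta^2)."""
    w = rng.normal(0.0, 1.0, n_rows)
    if shutter == "rolling":
        # crude 1/f coloring: scale the spectrum of white noise by 1/sqrt(f)
        W = np.fft.rfft(w)
        f = np.fft.rfftfreq(n_rows)
        f[0] = f[1]                        # avoid division by zero at DC
        w = np.fft.irfft(W / np.sqrt(f), n=n_rows)
        w /= w.std()                       # renormalize to unit variance
    beta_r = 1.0 + sigma_beta * w
    return K * beta_r                      # shape (n_rows,)

gains = row_gains(n_rows=480, K=0.8, sigma_beta=0.02, shutter="rolling")
```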
Noise inter-channel relationship: the invention simulates the noise relationship between channels by exploring the physical characteristics of the color sensor while taking pixel uniformity and channel differences into account. Rolling-shutter and global-shutter cameras typically acquire three-channel images by overlaying a color filter array on a silicon sensor. In practice, the noise of the three channels is not uniform, mainly for two reasons: the three channels have different system gains, and the three channels show significant dynamic pattern noise (DPN) fluctuations. Accordingly, $K_r$ of formula (3) is modified to obtain:

$$K_r^c = K_c \beta_r^c, \quad c \in \{r, g, b\} \qquad (4)$$

where $\beta_r^c$ is the channel-dependent correction parameter of dynamic streak noise and $K_c$ is the channel-dependent global system gain.

Truncation influence: digital sensor outputs are typically non-negative, whereas readout noise follows a zero-mean Gaussian distribution and can take negative values. Especially in low-light environments, many signals are even weaker than the readout noise, which leads to the appearance of many negative values; these are truncated to 0 before the result is output. Mathematically, the truncation operation $\lceil \cdot \rceil_0$ can be expressed as:

$$\lceil x \rceil_0 = \max(x, 0)$$

In conclusion, the final form of the actual noise model of the low-light environment provided by the invention is:

$$y_i = \Big\lceil K_c\,\beta_r^c\,(S_i + D_i + R_i) \Big\rceil_0, \quad c \in \{r, g, b\} \qquad (5)$$

In the case of a global-shutter camera, $\beta_r^c$ is the same for the three channels.
Step 2, calibrating the model parameters based on the established noise model to generate a noise video training set that accords with reality. The noise model of the invention allows parameter calibration against actual camera noise. To simplify the derivation, the truncation operation is not considered at first; the invention proposes to correct the bias that truncation introduces into the expectation and variance with 2D lookup tables, as in Fig. 3.

Step 21, calibrating $\beta_r^c$. Video frames are acquired under dark-field conditions, and $\beta_r^c$ is estimated as the pixel mean of each row divided by the global pixel mean:

$$\beta_r^c = \frac{\operatorname{mean}_{row}(y)}{\operatorname{mean}(y)}$$

Once $\beta_r^c$ is obtained, the dynamic pattern noise can be removed by dividing the pixel values of each row by it, which gives the dynamic-pattern-noise-corrected formula: $y_i' = K_c(D_i + R_i)|_{c\in\{r,g,b\}}$. According to the camera type, rolling shutter or global shutter, the distribution characteristic of the dynamic pattern noise is determined to be colored or white Gaussian noise.
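
A minimal sketch of the step-21 estimator, assuming a stack of dark-field frames of one color channel is available: $\beta_r$ of each row is the row mean divided by the global mean, and dividing each row by its $\beta_r$ removes the dynamic pattern noise.

```python
import numpy as np

def calibrate_beta_r(dark_frames):
    """dark_frames: (T, H, W) dark-field frames of one color channel."""
    row_mean = dark_frames.mean(axis=(0, 2))          # mean of each row, (H,)
    beta_r = row_mean / dark_frames.mean()            # row mean / global mean
    corrected = dark_frames / beta_r[None, :, None]   # remove dynamic pattern noise
    return beta_r, corrected
```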
Step 22, calibrating $K_c$. With dark current $D_i \sim \mathcal{P}(N_d)$ and readout noise $R_i \sim \mathcal{N}(0, \sigma_r^2)$, the expectation and variance of the corrected measurement $y'$ can be expressed as:

$$E[y'] = K_c N_d, \qquad D[y'] = K_c^2 N_d + K_c^2 \sigma_r^2 \qquad (6)$$

Substituting $N_d = E[y']/K_c$ yields the following equation:

$$D[y'] = K_c E[y'] + K_c^2 \sigma_r^2 \qquad (7)$$

Since dark current changes with exposure time, dark-field video frames with different exposures are used, which eliminates the constant $K_c^2\sigma_r^2$ and gives the final formula:

$$\Delta D[y'] = K_c\,\Delta E[y'] \qquad (8)$$

where $\Delta E[y'] = E[y']_{t_1} - E[y']_{t_2}$, $\Delta D[y'] = D[y']_{t_1} - D[y']_{t_2}$, and $t_1, t_2$ denote different exposure times. In practice, $E[y']$ is estimated by mean(y') and $D[y']$ by var(y'). A series of dark-field videos are taken at different exposure times, a set of points $(\Delta E[y'], \Delta D[y'])$ is computed, and $K_c$ is obtained by a linear fit over these points.
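
The step-22 calibration reduces to fitting a straight line through the origin over the points $(\Delta E[y'], \Delta D[y'])$; a sketch, assuming corrected dark-field stacks captured at several exposure times:

```python
import numpy as np

def calibrate_K_c(dark_stacks):
    """dark_stacks: list of (T, H, W) corrected dark-field stacks, one per
    exposure time. Returns the fitted channel gain K_c (eq. 8)."""
    E = np.array([s.mean() for s in dark_stacks])   # E[y'] per exposure
    D = np.array([s.var() for s in dark_stacks])    # D[y'] per exposure
    dE = E[1:] - E[:-1]                             # delta E[y']
    dD = D[1:] - D[:-1]                             # delta D[y']
    return float(dE @ dD / (dE @ dE))               # least-squares slope through 0
```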
Step 23, calibrating $N_d$ and $\sigma_r^2$. After $K_c$ has been calibrated, $N_d$ and $\sigma_r^2$ can be calculated from equation (6).
Step 24, truncation-error lookup-table correction. The mean $\hat{m}(x)$ and variance $\hat{v}(x)$ after the truncation operation differ greatly from the untruncated mean(x) and var(x). In actual calculation, the effect of truncation is difficult to evaluate without prior knowledge of the pixel x. In the present invention, every random variable entering the truncation is split into two parts: a Poisson-distributed part and a zero-mean Gaussian part. Matlab is used to generate a large number of pixels x with different expectations and variances and to build 2D tables mapping the truncated mean $\hat{m}(x)$ and variance $\hat{v}(x)$ to the true mean(x) and var(x). By looking up these tables, the values for the real data can be obtained.
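
The original builds the tables in Matlab; the following Python sketch implements the same Monte Carlo idea, with grid ranges and sample counts chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_truncation_tables(lams, sigmas, n=100_000):
    """Monte Carlo 2D tables: for each (Poisson mean, Gaussian std) cell,
    record the mean/variance of max(x, 0), x = Poisson + zero-mean Gaussian."""
    m_hat = np.zeros((len(lams), len(sigmas)))
    v_hat = np.zeros_like(m_hat)
    for i, lam in enumerate(lams):
        for j, sig in enumerate(sigmas):
            x = rng.poisson(lam, n) + rng.normal(0.0, sig, n)
            t = np.maximum(x, 0.0)                  # truncation operation
            m_hat[i, j], v_hat[i, j] = t.mean(), t.var()
    return m_hat, v_hat   # look these up to map truncated stats to true ones

m_hat, v_hat = build_truncation_tables(np.linspace(0, 10, 21), np.linspace(0.1, 5, 20))
```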
Step 25, training data synthesis. From a clean video sequence, a noisy training data set can be synthesized according to equation (5) and the camera-parameter calibration steps above. The invention first derives the expected number of photoelectrons $N_e$:

$$E[N_e] = \frac{S_{pixel}\, C_{lum2radiant}\, Q_e}{E_p}\, \Phi\, t \qquad (9)$$

where $S_{pixel}$ is the area of a single pixel, $C_{lum2radiant}$ is the conversion constant from luminous flux to radiant intensity, $E_p$ is the energy of a single photon, $Q_e$ is the equivalent system quantum efficiency of the camera, and $\Phi$ and $t$ denote the incident luminous flux and the exposure time. Adjusting the average value of the image to the desired number of photoelectrons $E[N_e]$, the expected photon number can be derived from the image pixel values, and the noisy training data set is finally synthesized according to equation (5) by Monte Carlo simulation.
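
A hedged sketch of the step-25 synthesis: a clean frame is rescaled so that its mean matches a target $E[N_e]$ and is then passed through the equation-(5) sampler from the earlier sketch. The simple rescaling stands in for the full radiometric conversion of equation (9) and is an assumption of this sketch.

```python
import numpy as np

def synthesize_noisy_frame(clean, target_mean_Ne, n_d, sigma_r, K_c, beta_r):
    """clean: (H, W, 3) image in [0, 1]; returns one noisy sample per eq. (5).
    Relies on sample_full_model() from the sketch after equation (5)."""
    n_e = clean * (target_mean_Ne / max(clean.mean(), 1e-8))  # scale to E[N_e]
    return sample_full_model(n_e, n_d, sigma_r, K_c, beta_r)  # Monte Carlo draw
```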
Step 3, designing a video denoising and enhancing neural network that combines spatial and temporal information to suppress and weaken noise. The invention provides an LSTM-based video denoising and enhancing network: its input is noisy video shot by an actual camera under weak light, and its output is bright and clear video frames. Fig. 4 shows the network architecture designed in this embodiment. In order to adaptively extract both short-term and long-term dependencies from the video, a spatiotemporal associative memory unit, ST-LSTM, is employed in the network. ST-LSTM can model spatial and temporal representations in one unified memory cell and transfer memory vertically across layers and horizontally across states. The network architecture of the invention comprises two convolutional layers and four ST-LSTM layers: first a convolutional layer extracts features of the incoming frames and passes them to the ST-LSTM layers; a skip connection is added for spatial correlation; and the last layer maps the reconstruction information learned by the previous layers back into sRGB space. The first and fourth ST-LSTM layers use 3x3 convolution kernels, the second and third use 5x5 kernels, and each layer has 64 features. Zero padding is used to keep the input and output dimensions consistent.
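
The following PyTorch sketch reproduces only the stated topology (two convolutional layers, four recurrent layers with kernel sizes 3/5/5/3, 64 features per layer, zero padding, a skip connection, frame-by-frame output); a plain ConvLSTM cell stands in for the ST-LSTM unit, whose spatiotemporal memory transfer is not reproduced here.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Plain ConvLSTM cell used as a simplified stand-in for ST-LSTM."""
    def __init__(self, in_ch, hid_ch, ksize):
        super().__init__()
        # zero padding keeps spatial dimensions consistent, as in the patent
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, ksize, padding=ksize // 2)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

class DenoiseNet(nn.Module):
    """2 conv layers + 4 recurrent layers (kernels 3/5/5/3, 64 features)."""
    def __init__(self, feat=64):
        super().__init__()
        self.head = nn.Conv2d(3, feat, 3, padding=1)   # feature extraction
        self.cells = nn.ModuleList(
            ConvLSTMCell(feat, feat, k) for k in (3, 5, 5, 3))
        self.tail = nn.Conv2d(feat, 3, 3, padding=1)   # back to sRGB

    def forward(self, frames):
        # frames: (T, B, 3, H, W); T may be any number of input frames
        states = [None] * len(self.cells)
        outputs = []
        for x in frames:
            h = feat = self.head(x)
            for idx, cell in enumerate(self.cells):
                if states[idx] is None:
                    states[idx] = (torch.zeros_like(h), torch.zeros_like(h))
                h, states[idx] = cell(h, states[idx])
            outputs.append(self.tail(h + feat))        # skip connection
        return torch.stack(outputs)                    # frame-by-frame output

# illustrative usage: an 8-frame clip, batch size 1
out = DenoiseNet()(torch.rand(8, 1, 3, 64, 64))
```

Because the recurrent state is carried across the loop over frames, this sketch accepts clips of any length, matching the statement that the network input can be a weak-light video with any number of frames.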
Step 4, training the optimized neural network and verifying the practicability of the method with the noise training data set synthesized in step 25 and the actually acquired low-light video data set. The network is trained by minimizing a loss function (equation (10)) between the frame $I$ output by the network and the corresponding actual real frame $I^*$:

$$\mathcal{L} = \frac{1}{N}\sum_{t=1}^{N}\Big(\alpha\,\mathcal{L}_1(I_t, I_t^*) + \beta\,\mathcal{L}_2(I_t, I_t^*) + \gamma\,\mathcal{L}_{percep}(I_t, I_t^*) + \delta\,\mathcal{L}_{tv}(I_t)\Big) \qquad (10)$$

The fundamental loss is defined as the weighted average of the $\mathcal{L}_1$ and $\mathcal{L}_2$ losses; the $\mathcal{L}_1$ and $\mathcal{L}_2$ distances both measure the consistency of pixel intensities, the former making the output smooth and the latter making it finer. To further improve perceptual quality, a perceptual loss $\mathcal{L}_{percep}$ is introduced, which extracts high-level features through a pre-trained Visual Geometry Group (VGG) network to constrain the network output against the real values. In addition, a total-variation regularizer $\mathcal{L}_{tv}$ is added to the loss function as a smoothing regularization term. Here α, β, γ, δ are hyper-parameters of the training process and N is the number of video frames used in training. In the training of the invention, α = 5, β = 1, γ = 0.06, δ = 2×10⁻⁶, and N = 8.
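
A sketch of the loss of equation (10) in PyTorch. The patent only states that a pre-trained VGG network supplies the perceptual features, so the particular torchvision VGG16 layer depth used below is an assumption.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# perceptual-feature extractor; the layer depth [:16] is an assumed choice,
# and ImageNet input normalization is omitted here for brevity
_vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def total_variation(x):
    # anisotropic total variation as a smoothing regularizer
    return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
           (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

def loss_fn(out, gt, a=5.0, b=1.0, g=0.06, d=2e-6):
    """Equation (10) for one frame batch; out, gt: (B, 3, H, W)."""
    l1 = F.l1_loss(out, gt)                 # L1: pixel-intensity consistency
    l2 = F.mse_loss(out, gt)                # L2: pixel-intensity consistency
    lp = F.mse_loss(_vgg(out), _vgg(gt))    # perceptual (VGG-feature) loss
    lt = total_variation(out)               # total-variation regularizer
    return a * l1 + b * l2 + g * lp + d * lt
```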
The network is implemented in PyTorch, and the optimizer adopted in training is Adam with a learning rate of 1×10⁻⁶. The training set contains a large number of clean videos; approximately 900 sequences, rich in moving scenes, are selected. Considering that each camera has a unique set of noise parameters, the network is trained with different training data for different cameras.
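
A minimal training-loop sketch consistent with the stated setup (PyTorch, Adam, learning rate 1×10⁻⁶, clips of N = 8 frames); the data loader below is a dummy stand-in for the real paired dataset.

```python
import torch

model = DenoiseNet()                                  # from the architecture sketch
opt = torch.optim.Adam(model.parameters(), lr=1e-6)   # Adam, learning rate 1e-6

# dummy stand-in for the real paired (noisy, clean) clip dataset
train_loader = [(torch.rand(8, 1, 3, 64, 64), torch.rand(8, 1, 3, 64, 64))]

for noisy_clip, clean_clip in train_loader:
    out = model(noisy_clip)                           # (T=8, B, 3, H, W)
    loss = sum(loss_fn(o, g) for o, g in zip(out, clean_clip)) / len(out)
    opt.zero_grad()
    loss.backward()
    opt.step()
```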

Claims (4)

1. A video denoising method based on actual camera noise modeling is characterized by comprising the following steps:
step 1, establishing an actual noise mathematical model of a weak-light environment, the model covering dynamic stripe noise, the inter-channel noise relationship and the truncation influence, the actual noise mathematical model specifically being:

$$y_i = \Big\lceil K_c\,\beta_r^c\,(S_i + D_i + R_i) \Big\rceil_0, \quad c \in \{r, g, b\}$$

where $i$ is the pixel index, $y_i$ is the acquired pixel value, $\lceil\cdot\rceil_0$ is the truncation operation, $K_c$ is the channel-dependent global system gain, and $\beta_r^c$ is the fluctuation factor of the dynamic streak noise; $S_i \sim \mathcal{P}(N_e^i)$ denotes shot noise, where $\mathcal{P}$ is the Poisson distribution and $N_e^i$ is the number of photoelectrons at pixel $i$; $D_i \sim \mathcal{P}(N_d)$ denotes dark current, where $N_d$ is the number of dark-current electrons per pixel; $R_i \sim \mathcal{N}(0, \sigma_r^2)$ denotes readout noise, where $\mathcal{N}$ is the Gaussian distribution and $\sigma_r^2$ its variance; and $c \in \{r, g, b\}$ indexes the three color channels;
step 2, calibrating parameters of the mathematical model in the step 1 by using noise of an actual camera to generate a noise video training set which accords with an actual situation;
step 3, constructing a video denoising and enhancing neural network, and combining spatial and temporal information to suppress and weaken noise;
and 4, training and optimizing the neural network by using the noise video training set generated in the step 2 and the actually acquired low-light video data set.
2. The video denoising method based on actual camera noise modeling according to claim 1, wherein in step 2, a series of pixels x with different means and variances is generated, two-dimensional tables mapping the truncated mean $\hat{m}(x)$ and variance $\hat{v}(x)$ to the true mean(x) and var(x) are built, and by looking up these tables the values for the real data can be obtained.
3. The video denoising method based on actual camera noise modeling according to claim 1, wherein in step 3, the input of the video denoising and enhancing neural network is noisy video shot by an actual camera under weak light, and the output is bright and clear video frames.
4. The video denoising method based on actual camera noise modeling according to claim 1, wherein in step 4, the network is trained by minimizing the following loss function between the frame $I$ output by the network and the corresponding actual real frame $I^*$:

$$\mathcal{L} = \frac{1}{N}\sum_{t=1}^{N}\Big(\alpha\,\mathcal{L}_1 + \beta\,\mathcal{L}_2 + \gamma\,\mathcal{L}_{percep} + \delta\,\mathcal{L}_{tv}\Big)$$

where $\mathcal{L}$ is the final loss function, $\mathcal{L}_1$ is the absolute-value error function, $\mathcal{L}_2$ is the mean-square error function, $\mathcal{L}_{percep}$ is the perceptual loss function, $\mathcal{L}_{tv}$ is the total-variation regularization function, and α, β, γ and δ are all hyper-parameters.
CN201910518690.2A 2019-06-15 2019-06-15 Video denoising method based on actual camera noise modeling Active CN110246105B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910518690.2A CN110246105B (en) 2019-06-15 2019-06-15 Video denoising method based on actual camera noise modeling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910518690.2A CN110246105B (en) 2019-06-15 2019-06-15 Video denoising method based on actual camera noise modeling

Publications (2)

Publication Number Publication Date
CN110246105A CN110246105A (en) 2019-09-17
CN110246105B true CN110246105B (en) 2023-03-28

Family

ID=67887384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910518690.2A Active CN110246105B (en) 2019-06-15 2019-06-15 Video denoising method based on actual camera noise modeling

Country Status (1)

Country Link
CN (1) CN110246105B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110807812B (en) * 2019-09-29 2022-04-05 浙江大学 Digital image sensor system error calibration method based on prior noise model
CN111260579B (en) * 2020-01-17 2021-08-03 北京理工大学 Low-light-level image denoising and enhancing method based on physical noise generation model
CN111724317A (en) * 2020-05-20 2020-09-29 天津大学 Method for constructing Raw domain video denoising supervision data set
CN112381731B (en) * 2020-11-12 2021-08-10 四川大学 Single-frame stripe image phase analysis method and system based on image denoising
CN112686828B (en) * 2021-03-16 2021-07-02 腾讯科技(深圳)有限公司 Video denoising method, device, equipment and storage medium
CN114219820A (en) * 2021-12-08 2022-03-22 苏州工业园区智在天下科技有限公司 Neural network generation method, denoising method and device
CN114418073B (en) * 2022-03-30 2022-06-21 深圳时识科技有限公司 Impulse neural network training method, storage medium, chip and electronic product
CN114897729B (en) * 2022-05-11 2024-06-04 北京理工大学 Filtering array type spectral image denoising enhancement method and system based on physical modeling

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7652788B2 (en) * 2006-06-23 2010-01-26 Nokia Corporation Apparatus, method, mobile station and computer program product for noise estimation, modeling and filtering of a digital image
CN107424176A (en) * 2017-07-24 2017-12-01 福州智联敏睿科技有限公司 A kind of real-time tracking extracting method of weld bead feature points
CN109214990A (en) * 2018-07-02 2019-01-15 广东工业大学 A kind of depth convolutional neural networks image de-noising method based on Inception model

Also Published As

Publication number Publication date
CN110246105A (en) 2019-09-17

Similar Documents

Publication Publication Date Title
CN110246105B (en) Video denoising method based on actual camera noise modeling
Wang et al. Enhancing low light videos by exploring high sensitivity camera noise
CN108492262B (en) No-ghost high-dynamic-range imaging method based on gradient structure similarity
CN110211056B (en) Self-adaptive infrared image de-striping algorithm based on local median histogram
Liu et al. Noise estimation from a single image
CN105318971B (en) The adaptive nonuniformity correction method of image registration is used to infrared video sequence
Wang et al. Joint iterative color correction and dehazing for underwater image enhancement
CN111986084A (en) Multi-camera low-illumination image quality enhancement method based on multi-task fusion
CN106197690B (en) Image calibrating method and system under the conditions of a kind of wide temperature range
CN114998141B (en) Space environment high dynamic range imaging method based on multi-branch network
CN111652815B (en) Mask plate camera image restoration method based on deep learning
Rong et al. Infrared fix pattern noise reduction method based on shearlet transform
CN115393227A (en) Self-adaptive enhancing method and system for low-light-level full-color video image based on deep learning
Wang et al. End-to-end exposure fusion using convolutional neural network
Ley et al. Reconstructing white walls: Multi-view, multi-shot 3d reconstruction of textureless surfaces
Tan et al. High dynamic range imaging for dynamic scenes with large-scale motions and severe saturation
CN114240767A (en) Image wide dynamic range processing method and device based on exposure fusion
Ye et al. LFIENet: light field image enhancement network by fusing exposures of LF-DSLR image pairs
CN111932478A (en) Self-adaptive non-uniform correction method for uncooled infrared focal plane
Yu et al. An improved retina-like nonuniformity correction for infrared focal-plane array
Teutsch et al. An evaluation of objective image quality assessment for thermal infrared video tone mapping
CN114998173A (en) High dynamic range imaging method for space environment based on local area brightness adjustment
CN113096033A (en) Low-illumination image enhancement method based on Retinex model self-adaptive structure
De Neve et al. An improved HDR image synthesis algorithm
Ma et al. Image Dehazing Based on Improved Color Channel Transfer and Multiexposure Fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant