CN108961186A - A deep-learning-based old film restoration and remastering method - Google Patents
A deep-learning-based old film restoration and remastering method — Download PDF / Info
- Publication number: CN108961186A (application CN201810699895.0A)
- Authority: CN (China)
- Prior art keywords: network, training, video, block, frame
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T5/70
- G06T3/00 Geometric image transformation in the plane of the image → G06T3/40 Scaling the whole image or part thereof → G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution (G — Physics; G06 — Computing; G06T — Image data processing or generation, in general)
- G06T5/73
- G06T2207/00 Indexing scheme for image analysis or image enhancement → G06T2207/10 Image acquisition modality → G06T2207/10016 Video; image sequence
Abstract
The present invention discloses a deep-learning-based method for restoring and remastering old films, comprising the following steps. Step 1: extract frames from the video with ffmpeg and build training datasets for the deinterlacing model, the frame-interpolation model, the deblurring network and the super-resolution model. Step 2: train the deinterlacing network model. Step 3: train the frame-interpolation network model. Step 4: train the deblurring network. Step 5: train the super-resolution network. Step 6: train the denoising network. Using deep learning, the invention applies deinterlacing, denoising, deblurring, frame interpolation and super-resolution to old films. Compared with manual restoration it is more stable, faster and more accurate; the processed images are well restored and sharp, and the method is easy to use and low in cost.
Description
Technical field
The present invention relates to deep learning and computer vision, and in particular to a deep-learning-based method for restoring and remastering old films.
Background technique
A nation's film heritage is a precious record of its people, an important part of its intangible cultural heritage, and an excellent carrier for bringing modern Chinese national culture to the world. Classic revolutionary films and films reflecting the striving spirit of modern China can be restored and presented more fully with modern technology. Owing to the limitations of earlier filming techniques, however, a large number of old films no longer satisfy viewers' expectations of high-definition imagery.
The number of film reels in China awaiting restoration is enormous: existing feature films alone number twenty to thirty thousand, yet currently only about sixty old reels can be restored each year. At the present nationwide pace, many prints will "die" before they can be repaired. The state has recognized the seriousness of the situation and is vigorously supporting and promoting the old-film restoration industry, but only about two hundred classic films have so far received fine restoration. Restoring very old, badly damaged films requires image-reconstruction techniques to "manufacture" the detail that has disappeared from each frame, together with image deblurring, super-resolution and related processing. With manual fine restoration, one worker can typically finish only 100 to 200 frames per day, while a 90-minute film contains roughly 129,600 frames. Frame-by-frame fine restoration of a single film therefore takes at least several months and costs on the order of millions of yuan.
Summary of the invention
The purpose of the present invention is to provide a deep-learning-based method for restoring and remastering old films.
The technical solution adopted by the present invention is as follows.
A deep-learning-based old-film restoration and remastering method comprises the following steps:
Step 1: extract frames from the video with ffmpeg and build the training datasets of the deinterlacing model, the frame-interpolation model, the deblurring network and the super-resolution model;
Step 2: train the deinterlacing network model; input an interlaced odd-field and even-field image block and obtain the deinterlaced prediction;
Step 2.1: the deinterlacing network consists of a feature-extraction module, a non-linear mapping module and a reconstruction module; the feature-extraction and non-linear mapping modules are stacks of convolutional layers connected in series, each convolutional layer followed by a ReLU activation function:
f(x) = max(0, x);
Step 2.2: use the MSE-1 function as the loss for training the deinterlacing network model. With I the training target image block, Î the predicted image block output by the network, and N the number of pixels per block, MSE-1 is the mean squared error:
MSE-1 = (1/N) Σᵢ (I(i) − Î(i))²;
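The MSE losses used throughout (MSE-1, MSE-2, MSE-3) all share the same per-pixel form. A minimal NumPy sketch (the function name is chosen here, not taken from the patent):

```python
import numpy as np

def mse_loss(target, pred):
    """Per-pixel mean squared error between a training target image
    block and the image block predicted by the network."""
    target = np.asarray(target, dtype=np.float64)
    pred = np.asarray(pred, dtype=np.float64)
    return np.mean((target - pred) ** 2)
```

The same function serves every stage of the pipeline that trains with an MSE objective; only the (input, target) pairs change.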
Step 3: train the frame-interpolation network model; input three consecutive video frames I_{t-1}, I_t, I_{t+1}, denoting the previous, current and next frames, and obtain the prediction I_t' of the current frame I_t, which is the output of the interpolation network;
Step 3.1: the non-linear mapping module of the frame-interpolation network adopts the U-Net architecture, which consists of an encoder and a decoder; the encoder comprises serial convolutional layers and an average-pooling layer; the average-pooling layer downsamples the output feature maps, removing unimportant samples and thereby further reducing the parameter count; the decoder comprises serial convolutional layers followed by an upsampling layer;
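The encoder's average pooling and the decoder's upsampling can be sketched as follows; the 2×2 pooling window and nearest-neighbour upsampling are assumptions made here, since the patent fixes neither the factor nor the interpolation:

```python
import numpy as np

def avg_pool2x2(x):
    """2x2 average pooling: the encoder's downsampling step."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2x2(x):
    """Nearest-neighbour upsampling: a stand-in for the decoder's
    up-sampling layer."""
    return x.repeat(2, axis=0).repeat(2, axis=1)
```

Chaining the two restores the original spatial size, which is what lets the U-Net decoder emit a frame the same size as its input.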
Step 3.2: use the MSE-2 function as the training loss of the frame-interpolation network:
MSE-2 = (1/N) Σᵢ (I_t(i) − I_t'(i))²
where MSE-2 is the loss, I_t is the training target image block, I_t' is the predicted image block output by the network, and N is the number of pixels per block;
Step 4: train the deblurring network;
Step 4.1: normalize the sub-image blocks in the training dataset and extract the Y-channel data;
Step 4.2: feed the processed blurred sub-image blocks to a residual network that performs feature extraction, residual convolution and reconstruction to obtain the deblurred sub-image blocks;
Step 4.3: use the MSE-3 function as the loss of the deblurring network:
MSE-3 = (1/N) Σᵢ (I(i) − Î(i))²
where MSE-3 is the loss, I is the training target image block, Î is the predicted image block output by the network, and N is the number of pixels per block;
Step 5: train the super-resolution network;
Step 5.1: normalize the sub-image blocks in the training dataset and extract the Y-channel data;
Step 5.2: feed the processed downsampled sub-image blocks to the super-resolution network, which performs feature extraction, non-linear mapping and reconstruction to obtain the network output;
Step 5.3: use the Charbonnier function as the loss of the super-resolution network;
Step 6: train the denoising network, using the dataset provided by NTIRE 2018;
Step 6.1: feed the input noisy images to the denoising network, which performs feature extraction and non-linear mapping to obtain the denoised output;
Step 6.2: use the Charbonnier function as the loss of the denoising network.
Further, step 1 specifically includes the following steps.
Step 1.1: form the training dataset of the deinterlacing model:
Step 1.1.1: extract every frame of the video with ffmpeg, then apply even-field and odd-field scanning to each frame to obtain an interlaced training dataset, with the original frames as training targets;
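The even/odd field scanning of step 1.1.1 can be simulated by splitting each progressive frame into its two fields; a minimal sketch (the exact field layout is not specified by the patent, so top-field-first is an assumption):

```python
import numpy as np

def split_fields(frame):
    """Split a progressive frame into its even-row and odd-row fields,
    simulating interlaced inputs; the original frame is the target."""
    even = frame[0::2, :]  # rows 0, 2, 4, ...
    odd = frame[1::2, :]   # rows 1, 3, 5, ...
    return even, odd
```

Each field has half the vertical resolution of the source frame, which is exactly the information the deinterlacing network must learn to recombine.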
Step 1.1.2: each time, take a sub-video frame from the interlaced dataset together with its corresponding training target and cut out d × d sub-image blocks, forming a set of paired image blocks;
Step 1.1.3: randomly shuffle the order of the sub-image blocks in the paired set to obtain the training dataset of the deinterlacing model;
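Steps 1.1.2-1.1.3, cutting aligned d × d blocks and shuffling the pairs, can be sketched as follows; the non-overlapping stride (= d) and the seeded shuffle are assumptions made here, not fixed by the patent:

```python
import random
import numpy as np

def make_patch_pairs(inputs, targets, d, seed=0):
    """Cut aligned d x d sub-image blocks from each (input, target)
    frame pair, then shuffle the list of pairs."""
    pairs = []
    for x, y in zip(inputs, targets):
        h, w = x.shape
        for i in range(0, h - d + 1, d):
            for j in range(0, w - d + 1, d):
                pairs.append((x[i:i+d, j:j+d], y[i:i+d, j:j+d]))
    random.Random(seed).shuffle(pairs)  # step 1.1.3: random order
    return pairs
```

The same pairing routine applies unchanged to the deblurring and super-resolution datasets of steps 1.3 and 1.4.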
Step 1.2: form the training dataset of the frame-interpolation model:
Step 1.2.1: extract every frame of the video with ffmpeg as training data; each time, take three consecutive frames as one group of training video frames, with the second frame of each group as the network's training target;
Step 1.2.2: from each group of images, cut out d × d sub-image blocks I_{t-1}, I_t, I_{t+1}, forming a set of paired sub-image blocks {I_{t-1}, I_t, I_{t+1}};
Step 1.2.3: randomly shuffle the order of the sub-image blocks in the paired set to obtain the training dataset of the frame-interpolation model;
Step 1.3: form the training dataset of the deblurring network:
Step 1.3.1: apply the image-blur model
B(x, y) = (k * I)(x, y) + G(x, y)
where B, I and k are the blurred image, the original image and the blur kernel, * denotes convolution, and G is the noise; the width and height of the blur kernel k are drawn at random from (0, 5) and the white Gaussian noise variance from (0, 100), so that every high-definition video has a counterpart blurred to a varying degree;
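The degradation B(x, y) = (k * I)(x, y) + G(x, y) of step 1.3.1 can be sketched with a uniform box kernel standing in for the random blur kernel; the patent fixes only the kernel's size range, not its shape, so the box kernel and edge padding are assumptions of this sketch:

```python
import numpy as np

def synthesize_blur(img, k_h, k_w, noise_var, rng):
    """Degrade a sharp frame per B = (k * I) + G: convolve with a
    uniform k_h x k_w kernel and add Gaussian noise of the given
    variance."""
    kernel = np.ones((k_h, k_w)) / (k_h * k_w)
    h, w = img.shape
    pad_h, pad_w = k_h // 2, k_w // 2
    padded = np.pad(img, ((pad_h, pad_h), (pad_w, pad_w)), mode="edge")
    out = np.zeros((h, w), dtype=np.float64)
    for i in range(h):          # direct 2-D convolution
        for j in range(w):
            out[i, j] = np.sum(padded[i:i+k_h, j:j+k_w] * kernel)
    noise = rng.normal(0.0, np.sqrt(noise_var), size=img.shape)
    return out + noise
```

Sampling k_h, k_w and noise_var per video, as the patent describes, yields the paired sharp/blurred data for step 1.3.2.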
Step 1.3.2: extract frames from the high-definition videos and the blurred videos to obtain a high-definition dataset and a corresponding blurred dataset;
Step 1.3.3: each time, cut d × d sub-image blocks from a video frame in the blurred dataset and perform the same operation on the corresponding frame in the high-definition dataset, forming a set of paired sub-image blocks;
Step 1.3.4: randomly shuffle the order of the sub-image blocks in the paired set to obtain the training dataset of the deblurring network;
Step 1.4: form the training dataset of the super-resolution model:
Step 1.4.1: extract every frame of the video with ffmpeg and downsample the frames to form low-resolution video frames, with the original high-resolution frames as training targets;
Step 1.4.2: each time, take a low-resolution frame from the low-resolution video dataset together with its target high-resolution frame and cut out d × d sub-image blocks, forming a set of paired sub-image blocks;
Step 1.4.3: randomly shuffle the order of the sub-image blocks in the paired set to obtain the training dataset of the super-resolution model.
Further, the specific steps of extracting the Y-channel data in step 4.1 are as follows:
Step 4.1.1: the pixel values of an image block lie in the range [0, 255]; divide every pixel value in the block by 255 so that each value lies in [0, 1], yielding the normalized image;
Step 4.1.2: convert the normalized RGB image block to YCbCr format according to
Y = (0.256789 × R + 0.504129 × G + 0.097906 × B) + 16.0
Cb = (−0.148223 × R − 0.290992 × G + 0.439215 × B) + 128.0
Cr = (0.439215 × R − 0.367789 × G − 0.071426 × B) + 128.0
and split the resulting YCbCr image block into channels to obtain the Y-channel data.
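The Y-channel extraction of step 4.1.2 reduces to one line. Note that with r, g, b on the 0-255 scale these coefficients give the usual BT.601 studio-range luma (16-235), while the patent applies them after normalizing to [0, 1]; the sketch below simply implements the formula as written:

```python
def rgb_to_y(r, g, b):
    """Y channel with the step 4.1.2 coefficients."""
    return 0.256789 * r + 0.504129 * g + 0.097906 * b + 16.0
```

The Cb and Cr expressions follow the same pattern with their respective coefficients and a +128.0 offset.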
Further, the feature-extraction stage in steps 4.2, 5.2 and 6.1 consists of one convolutional layer and a non-linear activation layer, which learn the low-level features F₁:
F₁ = max(0, W₁ * X + B₁)
where W₁ and B₁ are the weight and bias parameters of the initial convolutional layer, X is the input, and * denotes the convolution operation;
Further, each residual convolution module of the residual-convolution stage in step 4.2 consists of, in order, a convolutional layer, a non-linear activation layer, a convolutional layer, and a skip connection; the skip connection adds the input feature F_{2k−1} of the residual block to the output feature of the block's second convolutional layer, i.e.:
F_{2k+1} = (W_{2k+1} * F_{2k} + b_{2k+1}) + F_{2k−1}
where k is the index of the residual block, F_{2k} is the output of the block's first convolutional layer and non-linear activation layer, W_{2k+1} and b_{2k+1} are the weight and bias of the block's second convolutional layer, and F_{2k−1} is the block's input.
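The skip connection above can be illustrated with scalar stand-ins for the convolutions (plain per-element scaling, an assumption made only to keep the sketch free of padding logic):

```python
import numpy as np

def residual_block(x, w1, b1, w2, b2):
    """One residual convolution module: conv -> ReLU -> conv, then a
    skip connection adding the block input, per
    F_out = (W2 * F + b2) + x."""
    f = np.maximum(0.0, w1 * x + b1)   # first conv + ReLU
    return (w2 * f + b2) + x           # second conv + identity skip
```

With the second "conv" zeroed out the block reduces to the identity, which is the property that makes residual stacks easy to optimize.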
Further, in steps 5.2 and 6.1, each magnification level of the non-linear mapping stage is given five deep memory modules, and the non-linear activation after every convolutional layer is the Leaky ReLU function. A deep memory module is a stack built from a residual sub-module and a dense-connection sub-module.
Each deep memory module operates as follows:
Step S1: the module first extracts a feature, denoted f₁; f₁ passes through three convolutional layers and is added back to f₁, and the output of this operation is denoted r₁;
Step S2: f₁ also passes through four densely connected convolutional layers, and the output is denoted d₁; r₁, d₁ and f₁ are then concatenated, and the resulting output feature is denoted f₂;
Step S3: f₂ passes through two convolutional layers and is added back to f₂, giving r₂; meanwhile f₂ passes through four densely connected convolutional layers, giving d₂;
Step S4: r₂, d₂ and f₂ are concatenated.
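The dataflow of steps S1-S2 (residual branch, dense branch, concatenation) can be sketched with the real three- and four-layer convolution stacks abstracted as callables; the branch functions and the channel axis are assumptions of this sketch:

```python
import numpy as np

def deep_memory_module(f1, residual_path, dense_path):
    """One deep memory module, steps S1-S2: a residual branch
    r1 = residual_path(f1) + f1, a dense branch d1 = dense_path(f1),
    then concatenation of r1, d1 and f1 along the channel axis."""
    r1 = residual_path(f1) + f1                # S1: conv stack + skip add
    d1 = dense_path(f1)                        # S2: densely connected convs
    f2 = np.concatenate([r1, d1, f1], axis=0)  # channel concatenation
    return f2
```

Steps S3-S4 repeat the same pattern on f₂, so the module's channel count grows with each concatenation, which is the "memory" the later layers can draw on.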
Further, the reconstruction layer of the regeneration stage in step 5.2 is a deconvolution layer, which upsamples the output of the previous layer so that the super-resolved output image matches the size of the training target.
Further, the Charbonnier function in steps 5.3 and 6.2 is:
L = Σᵢ √( (I(i) − Î(i))² + ε² )
where I is the training target image block, Î is the predicted image block output by the network, and ε is set to 0.001; the Charbonnier loss is minimized with the Adam optimization method.
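The Charbonnier loss, a differentiable variant of the L1 loss, follows directly from the formula above:

```python
import numpy as np

def charbonnier_loss(target, pred, eps=0.001):
    """Charbonnier loss: sum over the block of
    sqrt((I - I_hat)^2 + eps^2), with eps = 0.001 as in the patent."""
    diff = np.asarray(target, np.float64) - np.asarray(pred, np.float64)
    return np.sum(np.sqrt(diff ** 2 + eps ** 2))
```

Unlike plain L1, the ε term keeps the gradient well defined at zero error, which is why it is preferred for super-resolution training.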
With the above technical scheme, the invention uses deep learning to apply deinterlacing, denoising, deblurring, frame interpolation and super-resolution to old films. Compared with manual restoration it is more stable and faster while consuming less computer memory. It effectively solves the noise problems of existing restoration algorithms, improves the accuracy of image restoration and increases the sharpness of the restored images, thereby improving the overall repair quality. The processed images are well restored and sharp, and the method is easy to use and low in cost.
Description of the drawings
The present invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is a flow diagram of the deep-learning-based old-film restoration and remastering method of the present invention;
Fig. 2 is the network-structure diagram of the super-resolution network of the method;
Fig. 3 is the structure diagram of the deep memory module of the method.
Specific embodiment
As shown in Figs. 1-3, the invention proposes a deep-learning-based old-film restoration and remastering method. The restoration pipeline mainly comprises deinterlacing, denoising, deblurring, frame interpolation and super-resolution; the overall flow is shown in Fig. 1. All convolutional layers in the invention use 3 × 3 kernels. The specific steps are as follows.
Step 1: extract frames from the video with ffmpeg and build the training datasets of the deinterlacing model, the frame-interpolation model, the deblurring network and the super-resolution model.
Step 1.1: form the training dataset of the deinterlacing model (model1):
Step 1.1.1: extract every frame of the video with ffmpeg, then apply even-field and odd-field scanning to each frame to obtain an interlaced training dataset, with the original frames as training targets;
Step 1.1.2: each time, take a sub-video frame from the interlaced dataset together with its corresponding training target and cut out d × d sub-image blocks, forming a set of paired image blocks;
Step 1.1.3: randomly shuffle the order of the sub-image blocks in the paired set to obtain the training dataset of the deinterlacing model (model1);
Step 1.2: form the training dataset of the frame-interpolation model (model2):
Step 1.2.1: extract every frame of the video with ffmpeg as training data; each time, take three consecutive frames as one group of training video frames, with the second frame of each group as the network's training target;
Step 1.2.2: from each group of images, cut out d × d sub-image blocks I_{t-1}, I_t, I_{t+1}, forming a set of paired sub-image blocks {I_{t-1}, I_t, I_{t+1}};
Step 1.2.3: randomly shuffle the order of the sub-image blocks in the paired set to obtain the training dataset of the frame-interpolation model (model2);
Step 1.3: form the training dataset of the deblurring network (model3):
Step 1.3.1: apply the image-blur model
B(x, y) = (k * I)(x, y) + G(x, y)
where B, I and k are the blurred image, the original image and the blur kernel, * denotes convolution, and G is the noise; the width and height of the blur kernel k are drawn at random from (0, 5) and the white Gaussian noise variance from (0, 100), so that every high-definition video has a counterpart blurred to a varying degree;
Step 1.3.2: extract frames from the high-definition videos and the blurred videos to obtain a high-definition dataset and a corresponding blurred dataset;
Step 1.3.3: each time, cut d × d sub-image blocks from a video frame in the blurred dataset and perform the same operation on the corresponding frame in the high-definition dataset, forming a set of paired sub-image blocks;
Step 1.3.4: randomly shuffle the order of the sub-image blocks in the paired set to obtain the training dataset of the deblurring network (model3);
Step 1.4: form the training dataset of the super-resolution model (model4):
Step 1.4.1: extract every frame of the video with ffmpeg and downsample the frames to form low-resolution video frames, with the original high-resolution frames as training targets;
Step 1.4.2: each time, take a low-resolution frame from the low-resolution video dataset together with its target high-resolution frame and cut out d × d sub-image blocks, forming a set of paired sub-image blocks;
Step 1.4.3: randomly shuffle the order of the sub-image blocks in the paired set to obtain the training dataset of the super-resolution model (model4).
Step 2: train the deinterlacing network model (model1).
Step 2.1: input an interlaced odd-field and even-field image block and obtain the deinterlaced prediction, which is the output of the deinterlacing network. The deinterlacing network consists of a feature-extraction module, a non-linear mapping module and a reconstruction module. The feature-extraction and non-linear mapping modules are stacks of convolutional layers connected in series, each convolutional layer followed by a rectified linear unit (ReLU) activation:
f(x) = max(0, x)
Step 2.2: use the MSE function as the loss between the training target image block I and the predicted image block Î output by the network:
MSE = (1/N) Σᵢ (I(i) − Î(i))²
Step 3: train the frame-interpolation network model (model2).
Step 3.1: input three consecutive video frames I_{t-1}, I_t, I_{t+1} (the previous, current and next frames) and obtain the prediction I_t' of the current frame I_t, which is the output of the interpolation network. The non-linear mapping module of the frame-interpolation network adopts the U-Net [1] architecture: the encoder comprises serial convolutional layers and an average-pooling layer, which downsamples the output feature maps, removing unimportant samples and further reducing the parameter count; the decoder comprises serial convolutional layers followed by an upsampling layer.
Step 3.2: use the MSE function as the loss between the training target image block I_t and the predicted image block I_t' output by the network:
MSE = (1/N) Σᵢ (I_t(i) − I_t'(i))²
Step 4: train the deblurring network (model3).
Step 4.1: normalize the sub-image blocks in the training dataset and extract the Y-channel data.
Step 4.2: feed the processed blurred sub-image blocks to a residual network that performs feature extraction, residual convolution and reconstruction to obtain the deblurred sub-image blocks.
Further, the feature-extraction stage in step 4.2 consists of one convolutional layer and a non-linear activation layer, which learn the low-level features F₁:
F₁ = max(0, W₁ * X + B₁)
where W₁ and B₁ are the weight and bias parameters of the initial convolutional layer, X is the input, and * denotes the convolution operation.
Further, each residual convolution module of the residual-convolution stage in step 4.2 consists of, in order, a convolutional layer, a non-linear activation layer, a convolutional layer, and a skip connection; the skip connection adds the input feature F_{2k−1} of the residual block to the output feature of the block's second convolutional layer, i.e.:
F_{2k+1} = (W_{2k+1} * F_{2k} + b_{2k+1}) + F_{2k−1}
where k is the index of the residual block, F_{2k} is the output of the block's first convolutional layer and non-linear activation layer, W_{2k+1} and b_{2k+1} are the weight and bias of the block's second convolutional layer, and F_{2k−1} is the block's input.
Further, the reconstruction layer of the regeneration stage in step 4.2 is a convolutional layer, whose reconstruction yields the deblurred image block.
Step 4.3: use the MSE-3 function as the loss of the deblurring network:
MSE-3 = (1/N) Σᵢ (I(i) − Î(i))²
where MSE-3 is the loss, I is the training target image block, and Î is the predicted image block output by the network.
Step 5: train the super-resolution network (model4). The super-resolution network consists of a feature-extraction module, a non-linear mapping module and a reconstruction module; its structure is shown in Fig. 2.
Step 5.1: normalize the sub-image blocks in the training dataset and extract the Y-channel data.
Step 5.2: feed the processed downsampled sub-image blocks to the super-resolution network, which performs feature extraction, non-linear mapping and reconstruction to obtain the network output.
Further, the feature-extraction stage in step 5.2 consists of one convolutional layer and a non-linear activation layer, which learn the low-level features F₁:
F₁ = max(0, W₁ * X + B₁)
where W₁ and B₁ are the weight and bias parameters of the initial convolutional layer, X is the input, and * denotes the convolution operation.
Further, in steps 5.2 and 6.1, each magnification level of the non-linear mapping stage is given five deep memory modules, and the non-linear activation after every convolutional layer is the Leaky ReLU function. A deep memory module is a stack built from a residual sub-module and a dense-connection sub-module.
Each deep memory module operates as follows:
Step S1: the module first extracts a feature, denoted f₁; f₁ passes through three convolutional layers and is added back to f₁, and the output of this operation is denoted r₁;
Step S2: f₁ also passes through four densely connected (concatenated) convolutional layers, and the output is denoted d₁; r₁, d₁ and f₁ are then concatenated, and the resulting output feature is denoted f₂;
Step S3: f₂ passes through two convolutional layers and is added back to f₂, giving r₂; meanwhile f₂ passes through four densely connected convolutional layers, giving d₂;
Step S4: r₂, d₂ and f₂ are concatenated.
Further, the reconstruction layer of the regeneration stage in step 5.2 is a deconvolution layer, which upsamples the output of the previous layer so that the super-resolved output image matches the size of the training target.
Step 5.3: use the Charbonnier function as the loss of the super-resolution network:
L = Σᵢ √( (I(i) − Î(i))² + ε² )
Normally ε is set to 0.001, and the loss is minimized with the Adam optimization method.
Step 6: train the denoising network (model5), using the dataset provided by NTIRE 2018.
Step 6.1: feed the input noisy images to the denoising network, which performs feature extraction and non-linear mapping to obtain the denoised output.
Further, the feature-extraction stage in step 6.1 consists of one convolutional layer and a non-linear activation layer, which learn the low-level features F₁:
F₁ = max(0, W₁ * X + B₁)
where W₁ and B₁ are the weight and bias parameters of the initial convolutional layer, X is the input, and * denotes the convolution operation.
Further, in step 6.1, each magnification level of the non-linear mapping stage is given five deep memory modules, and the non-linear activation after every convolutional layer is the Leaky ReLU function. A deep memory module is a stack built from a residual sub-module and a dense-connection sub-module.
Each deep memory module operates as follows:
Step S1: the module first extracts a feature, denoted f₁; f₁ passes through three convolutional layers and is added back to f₁, and the output of this operation is denoted r₁;
Step S2: f₁ also passes through four densely connected convolutional layers, and the output is denoted d₁; r₁, d₁ and f₁ are then concatenated, and the resulting output feature is denoted f₂;
Step S3: f₂ passes through two convolutional layers and is added back to f₂, giving r₂; meanwhile f₂ passes through four densely connected convolutional layers, giving d₂;
Step S4: r₂, d₂ and f₂ are concatenated.
Step 6.2: use the Charbonnier function as the loss of the denoising network:
L = Σᵢ √( (I(i) − Î(i))² + ε² )
Normally ε is set to 0.001, and the loss is minimized with the Adam optimization method.
With the above technical scheme, the invention uses deep learning to apply deinterlacing, denoising, deblurring, frame interpolation and super-resolution to old films. Compared with manual restoration it is more stable and faster while consuming less computer memory. It effectively solves the noise problems of existing restoration algorithms, improves the accuracy of image restoration and increases the sharpness of the restored images, thereby improving the overall repair quality. The processed images are well restored and sharp, and the method is easy to use and low in cost.
Bibliography
[1] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2015: 234-241.
[2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, et al. Deep Residual Learning for Image Recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778.
[3] Gao Huang, Zhuang Liu, Laurens van der Maaten, et al. Densely Connected Convolutional Networks[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 4700-4708.
[4] Wei-Sheng Lai, Jia-Bin Huang, Narendra Ahuja, et al. Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 624-632.
Claims (8)
1. An old-film repair and remastering method based on deep learning, characterized in that it comprises the following steps:
Step 1: extract frames from the video with ffmpeg, and form the training dataset of the deinterlacing model, the training dataset of the video frame-interpolation model, the training dataset of the deblurring network and the training dataset of the super-resolution model respectively;
Step 2: train the deinterlacing network model: input the interlaced odd-field and even-field image blocks and obtain the progressive-scan prediction;
Step 2.1: the deinterlacing network comprises a feature-extraction module, a nonlinear-mapping module and a reconstruction module; the feature-extraction and nonlinear-mapping modules of the deinterlacing network are stacks of simply cascaded convolutional layers, each followed by a ReLU activation function:
F(x) = max(0, x);
Step 2.2: use the MSE-1 function, the mean squared error between the training target image block and the prediction image block output by the network, as the loss function for training the deinterlacing network model;
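A minimal PyTorch sketch of such a network (the layer count and channel widths are my assumptions; the claim specifies only cascaded convolutions with ReLU activations and an MSE training loss):

```python
import torch
import torch.nn as nn

class DeinterlaceNet(nn.Module):
    """Cascaded conv + ReLU stages: feature extraction, nonlinear mapping,
    and a final reconstruction convolution (depths/widths are illustrative)."""
    def __init__(self, channels=1, width=32, depth=4):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU()]  # feature extraction
        for _ in range(depth):                                          # nonlinear mapping
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers.append(nn.Conv2d(width, channels, 3, padding=1))         # reconstruction
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

net = DeinterlaceNet()
fields = torch.randn(1, 1, 64, 64)        # interlaced input block
pred = net(fields)                        # progressive-scan prediction
loss = nn.functional.mse_loss(pred, torch.randn_like(pred))  # MSE-1
print(pred.shape)  # torch.Size([1, 1, 64, 64])
```

Because all convolutions use padding 1 with 3x3 kernels, the prediction keeps the spatial size of the input block, so the MSE can be computed directly against the training target.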
Step 3: train the video frame-interpolation network model: input three consecutive video frames It-1, It, It+1, denoting the previous, current and next frame, and obtain the prediction It′ of the current frame It, which is the output of the interpolation network;
Step 3.1: the nonlinear-mapping module of the video frame-interpolation network model adopts the U-Net structure, which consists of an encoding module and a decoding module; the encoding module contains cascaded convolutional layers and an average pooling layer; the role of the average pooling layer is to downsample the output feature maps, further reducing the parameter count by discarding unimportant samples from the feature maps; the decoding module in turn contains cascaded convolutional layers and an upsampling layer;
Step 3.2: use the MSE-2 function, the mean squared error between the training target image block It and the prediction image block It′ output by the network, as the training loss of the video frame-interpolation network;
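A one-level PyTorch sketch of the encoder/decoder layout described in step 3.1 (channel widths, the frame-stacking convention and the single-level depth are my assumptions; the original U-Net uses several levels with skip connections):

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """One-level sketch of the claimed structure: an encoder of cascaded
    convolutions plus an average-pooling layer (downsampling), and a decoder
    of cascaded convolutions plus an upsampling layer. Widths are illustrative."""
    def __init__(self, in_ch=3, width=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.AvgPool2d(2),                              # downsample feature maps
        )
        self.dec = nn.Sequential(
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode='nearest'),  # restore resolution
            nn.Conv2d(width, 1, 3, padding=1),            # predicted middle frame
        )

    def forward(self, x):
        return self.dec(self.enc(x))

# Three consecutive grayscale frames stacked on the channel axis.
frames = torch.randn(1, 3, 64, 64)
out = TinyUNet(in_ch=3)(frames)
print(out.shape)  # torch.Size([1, 1, 64, 64])
```

The pooling/upsampling pair is why the decoder can recover the input resolution: every 2x downsample in the encoder is mirrored by a 2x upsample in the decoder.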
Step 4: train the deblurring network;
Step 4.1: normalize the sub-image blocks in the training dataset and extract the Y-channel data;
Step 4.2: pass the preprocessed blurred sub-image blocks through the feature-extraction, residual-convolution and reconstruction stages of the residual network model to obtain the deblurred sub-image blocks;
Step 4.3: use the MSE-3 function, the mean squared error between the training target image block and the prediction image block output by the network, as the loss function of the deblurring network;
Step 5: train the super-resolution network;
Step 5.1: normalize the sub-image blocks in the training dataset and extract the Y-channel data;
Step 5.2: feed the preprocessed downsampled sub-image blocks into the super-resolution network model, which produces the network output through feature extraction, nonlinear mapping and reconstruction;
Step 5.3: use the Charbonnier function as the loss function of the super-resolution network;
Step 6: train the denoising network, using the dataset provided by NTIRE 2018;
Step 6.1: feed the input noisy images into the denoising network model, which produces the denoised output through feature extraction and nonlinear mapping;
Step 6.2: use the Charbonnier function as the loss function of the denoising network.
2. The deep-learning-based old-film repair and remastering method according to claim 1, characterized in that step 1 specifically comprises the following steps:
Step 1.1: form the training dataset of the deinterlacing model:
Step 1.1.1: extract every frame of the video with ffmpeg, then apply even-field and odd-field scanning to the extracted frames to obtain the interlaced training data, with the original images as the training targets;
Step 1.1.2: each time, take a sub-video frame from the interlaced dataset together with its corresponding training target and crop sub-image blocks of size d × d, forming a paired set of image blocks;
Step 1.1.3: randomly shuffle the order of the sub-image blocks in the paired set to obtain the training dataset of the deinterlacing model;
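The field-splitting of step 1.1.1 can be sketched as follows (the ffmpeg command shown in the comment and the row-parity convention are assumptions; the claim only states that frames are extracted with ffmpeg and scanned into even and odd fields):

```python
import numpy as np

# Frames could first be dumped from the video with, e.g.:
#   ffmpeg -i input.mp4 frames/%06d.png

def split_fields(frame):
    """Split a progressive frame into its two scan fields (every other row),
    as used to synthesize interlaced training data from clean frames."""
    even_field = frame[0::2, :]   # rows 0, 2, 4, ...
    odd_field = frame[1::2, :]    # rows 1, 3, 5, ...
    return even_field, odd_field

frame = np.arange(16).reshape(4, 4)
even, odd = split_fields(frame)
print(even.shape, odd.shape)  # (2, 4) (2, 4)
```

Pairing each pair of fields with the original progressive frame gives exactly the (input, target) blocks the deinterlacing network is trained on.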
Step 1.2: form the training dataset of the video frame-interpolation model:
Step 1.2.1: extract every frame of the video with ffmpeg as training data; each time take three consecutive frames as one group of training video frames, with the second frame of each group serving as the target for training the network;
Step 1.2.2: crop sub-image blocks It-1, It, It+1 of size d × d from every group of images to form a paired set of sub-image blocks {It-1, It, It+1};
Step 1.2.3: randomly shuffle the order of the sub-image blocks in the paired set to obtain the training dataset of the video frame-interpolation model;
Step 1.3.1: according to image blurring formula:
B (x, y)=(k × I)=(x, y)+G (x, y)
Wherein b, I, k are expressed as blurred picture, original image, fuzzy core, and G represents noise;The width of fuzzy core k size and high difference
The random value from (0,5), white Gaussian noise variance G, from (0,100) interior random value, so that each HD video has
The corresponding video obscured in various degree;
Step 1.3.2: carrying out pumping frame to HD video and fuzzy video respectively, obtains high-definition data collection and corresponding fuzzy data
Collection;
Step 1.3.3: the video frame in fuzzy data set is taken to intercept subimage block by d × d size every timeSimultaneously in high definition number
It takes corresponding video frame to execute same operation according to concentration, obtains subimage blockForm the pairing set of several subimage blocks
Step 1.3.4: upset the sequence of the subimage block in pairing set at random, obtain the training dataset of FUZZY NETWORK;
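A sketch of the degradation model b = k * I + G from step 1.3.1 (the box kernel, the fixed noise level and the helper name are illustrative assumptions; the patent draws the kernel size and noise variance at random):

```python
import numpy as np

def synthesize_blur(image, ksize=5, noise_std=5.0, rng=None):
    """Degrade a sharp grayscale image: convolve with a normalized blur
    kernel k, then add Gaussian white noise G, i.e. b = k * I + G.
    A uniform (box) kernel stands in for the patent's random kernel."""
    rng = rng or np.random.default_rng(0)
    k = np.full((ksize, ksize), 1.0 / (ksize * ksize))  # normalized box kernel
    h, w = image.shape
    pad = ksize // 2
    padded = np.pad(image, pad, mode='edge')
    blurred = np.zeros_like(image, dtype=float)
    for i in range(h):                       # direct 2-D convolution
        for j in range(w):
            blurred[i, j] = np.sum(padded[i:i + ksize, j:j + ksize] * k)
    return blurred + rng.normal(0.0, noise_std, image.shape)  # + G(x, y)

sharp = np.ones((16, 16)) * 100.0
blurry = synthesize_blur(sharp)
print(blurry.shape)  # (16, 16)
```

Applying this to every frame of an HD video yields the paired sharp/blurred frames from which the d x d training blocks are cropped.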
Step 1.4: form the training dataset of the super-resolution model:
Step 1.4.1: extract every frame of the video with ffmpeg, downsample the extracted frames to form low-resolution video frames, with the original high-resolution video frames as the training targets;
Step 1.4.2: each time, take a low-resolution video frame from the low-resolution dataset together with the video frame of its corresponding training target and crop sub-image blocks of size d × d, forming a paired set of sub-image blocks;
Step 1.4.3: randomly shuffle the order of the sub-image blocks in the paired set to obtain the training dataset of the super-resolution model.
3. The deep-learning-based old-film repair and remastering method according to claim 1, characterized in that the specific steps of extracting the Y-channel data in step 4.1 are:
Step 4.1.1: the pixel values of an image block lie in the range [0, 255]; divide every pixel value in the block by 255 so that each pixel value lies in [0, 1], yielding the normalized image;
Step 4.1.2: convert the normalized RGB image block to YCbCr format according to
Y = (0.256789 × R + 0.504129 × G + 0.097906 × B) + 16.0
Cb = (-0.148223 × R - 0.290992 × G + 0.439215 × B) + 128.0
Cr = (0.439215 × R - 0.367789 × G - 0.071426 × B) + 128.0
then separate the channels of the resulting YCbCr image block to obtain the Y-channel data.
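A NumPy sketch of this conversion. Note the +16/+128 offsets in the claimed coefficients assume 0-255 inputs, so this sketch applies the formula at that scale (the claim's order of normalizing first would require rescaling the offsets, an ambiguity I resolve here by assumption):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """BT.601 'studio-swing' RGB -> YCbCr with the coefficients from the
    claim. Inputs are assumed to lie in [0, 255]; the Y channel can then
    be separated and normalized for training."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.256789 * r + 0.504129 * g + 0.097906 * b + 16.0
    cb = -0.148223 * r - 0.290992 * g + 0.439215 * b + 128.0
    cr = 0.439215 * r - 0.367789 * g - 0.071426 * b + 128.0
    return y, cb, cr

# Pure white maps to the top of the studio-swing luma range (Y = 235)
# with neutral chroma (Cb = Cr = 128).
white = np.full((2, 2, 3), 255.0)
y, cb, cr = rgb_to_ycbcr(white)
print(round(float(y[0, 0]), 2))  # 235.0
```

Training only on the Y channel is a common choice for restoration networks, since luma carries most of the perceptually relevant detail.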
4. The deep-learning-based old-film repair and remastering method according to claim 1, characterized in that the feature-extraction stage in steps 4.2, 5.2 and 6.1 comprises one convolutional layer and one nonlinear activation layer, which learn the low-level features F1:
F1 = σ(W1 * X + B1)
where σ is the activation function, X is the input, W1 and B1 are the weight and bias parameters of the first convolutional layer, and * denotes the convolution operation.
5. The deep-learning-based old-film repair and remastering method according to claim 1, characterized in that each residual convolution module of the residual convolution stage in step 4.2 comprises, in sequence, a convolutional layer, a nonlinear activation layer, a convolutional layer and a skip connection; the skip connection adds the input feature F2k-1 of the residual convolution block to the output feature of the second convolutional layer of the block, that is:
F2k+1 = (W2k+1 * Fk + b2k+1) + F2k-1
where k is the index of the residual block, Fk is the output of the first convolutional layer and nonlinear activation layer of the block, W2k+1 and b2k+1 are the weight and bias of the second convolutional layer of the block, and F2k-1 is the input of the block.
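A PyTorch sketch of one such residual block (the 3x3 kernels and channel width are assumptions; the skip connection matches the formula above):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """conv -> ReLU -> conv, plus a skip connection that adds the block
    input to the second convolution's output: out = conv2(relu(conv1(x))) + x."""
    def __init__(self, width=16):
        super().__init__()
        self.conv1 = nn.Conv2d(width, width, 3, padding=1)
        self.conv2 = nn.Conv2d(width, width, 3, padding=1)

    def forward(self, x):
        return self.conv2(torch.relu(self.conv1(x))) + x

block = ResidualBlock()
# With all weights and biases zeroed, the residual path vanishes and the
# block reduces to the identity, illustrating why residual learning is easy
# to optimize: the block only needs to learn a correction to its input.
for p in block.parameters():
    nn.init.zeros_(p)
x = torch.randn(1, 16, 8, 8)
print(torch.allclose(block(x), x))  # True
```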
6. The deep-learning-based old-film repair and remastering method according to claim 1, characterized in that each magnification level of the nonlinear-mapping stage in steps 5.2 and 6.1 contains 5 deep memory modules, and every convolutional layer is followed by a leaky-ReLU nonlinear activation layer; a deep memory module is stacked from residual modules and densely connected module units;
The concrete operation of each deep memory module is as follows:
Step S1: the deep memory module first extracts a feature, denoted f1, which passes through a three-layer convolution operation and is added to f1; the output of this operation is denoted r1;
Step S2: the feature f1 passes through four densely connected convolutional layers; the output of this operation is denoted d1; then r1 and d1 are concatenated with f1, and the resulting feature is denoted f2;
Step S3: the feature f2 passes through a two-layer convolution operation and is added to f2; the output is denoted r2; meanwhile, f2 passes through four densely connected convolutional layers; the output is denoted b2;
Step S4: r2, b2 and the feature f2 are concatenated.
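A PyTorch sketch of steps S1 and S2 above (S3 and S4 repeat the same pattern on f2). Channel widths and kernel sizes are assumptions; the claim specifies only the residual path, the four-layer dense path, and the concatenation:

```python
import torch
import torch.nn as nn

class MemoryModuleStage(nn.Module):
    """First stage of a deep memory module: a 3-conv residual path (r1),
    a 4-conv densely connected path (d1), then concatenation of r1, d1
    and the input feature f1 into f2."""
    def __init__(self, width=16):
        super().__init__()
        act = nn.LeakyReLU(0.2)
        self.residual = nn.Sequential(
            nn.Conv2d(width, width, 3, padding=1), act,
            nn.Conv2d(width, width, 3, padding=1), act,
            nn.Conv2d(width, width, 3, padding=1), act,
        )
        # Dense path: layer i sees the concatenation of f1 and all earlier outputs.
        self.dense = nn.ModuleList(
            nn.Conv2d(width * (i + 1), width, 3, padding=1) for i in range(4)
        )
        self.act = act

    def forward(self, f1):
        r1 = self.residual(f1) + f1              # S1: residual addition
        feats = [f1]
        for conv in self.dense:                  # S2: dense connections
            feats.append(self.act(conv(torch.cat(feats, dim=1))))
        d1 = feats[-1]
        return torch.cat([r1, d1, f1], dim=1)    # concatenate r1, d1, f1 -> f2

f1 = torch.randn(1, 16, 8, 8)
f2 = MemoryModuleStage(16)(f1)
print(f2.shape)  # torch.Size([1, 48, 8, 8])
```

The concatenation triples the channel count, so a real module would typically follow it with a fusion convolution before the next stage; that detail is not fixed by the claim.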
7. The deep-learning-based old-film repair and remastering method according to claim 1, characterized in that the reconstruction layer of the reconstruction stage in step 5.2 is a deconvolution (transposed convolution) layer; the deconvolution layer upsamples the output of the preceding layer so that the output super-resolution image is equal in size to the training target.
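A one-line PyTorch illustration of such an upsampling layer (the stride, kernel size and channel counts are assumptions; a stride-2 transposed convolution doubles the spatial resolution):

```python
import torch
import torch.nn as nn

# A stride-2 transposed convolution maps H x W feature maps to 2H x 2W:
# out = (H - 1) * stride - 2 * padding + kernel_size = 31 * 2 - 2 + 4 = 64.
deconv = nn.ConvTranspose2d(in_channels=16, out_channels=1,
                            kernel_size=4, stride=2, padding=1)
features = torch.randn(1, 16, 32, 32)
up = deconv(features)
print(up.shape)  # torch.Size([1, 1, 64, 64])
```

Unlike fixed interpolation, the transposed convolution's weights are learned, so the upsampling itself is trained jointly with the rest of the super-resolution network.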
8. The deep-learning-based old-film repair and remastering method according to claim 1, characterized in that the Charbonnier function in steps 5.3 and 6.2 is the square root of the sum of the squared difference between the training target image block and the prediction image block output by the network and the constant ε²; ε is set to 0.001, and the Charbonnier loss function is minimized with the Adam optimization method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810699895.0A CN108961186B (en) | 2018-06-29 | 2018-06-29 | Old film repairing and reproducing method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108961186A true CN108961186A (en) | 2018-12-07 |
CN108961186B CN108961186B (en) | 2022-02-15 |
Family
ID=64484635
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810699895.0A Active CN108961186B (en) | 2018-06-29 | 2018-06-29 | Old film repairing and reproducing method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108961186B (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109559290A (en) * | 2018-12-14 | 2019-04-02 | China University of Petroleum (East China) | Image denoising method with deep asymmetric skip connections
CN109785249A (en) * | 2018-12-22 | 2019-05-21 | Kunming University of Science and Technology | Efficient image denoising method based on a persistent-memory dense network
CN109816620A (en) * | 2019-01-31 | 2019-05-28 | Shenzhen SenseTime Technology Co., Ltd. | Image processing method and device, electronic equipment and storage medium
CN110276739A (en) * | 2019-07-24 | 2019-09-24 | University of Science and Technology of China | Video de-jittering method based on deep learning
CN110378860A (en) * | 2019-07-30 | 2019-10-25 | Tencent Technology (Shenzhen) Co., Ltd. | Method, apparatus, computer equipment and storage medium for restoring video
CN110428382A (en) * | 2019-08-07 | 2019-11-08 | Hangzhou Weizhen Information Technology Co., Ltd. | Efficient video enhancement method and device for mobile terminals, and storage medium
CN110490817A (en) * | 2019-07-22 | 2019-11-22 | Wuhan University | Image noise suppression method based on mask learning
CN110751597A (en) * | 2019-10-12 | 2020-02-04 | Xidian University | Video super-resolution method based on coding-damage repair
CN111524068A (en) * | 2020-04-14 | 2020-08-11 | Chang'an University | Variable-length-input super-resolution video reconstruction method based on deep learning
CN111738951A (en) * | 2020-06-22 | 2020-10-02 | Beijing ByteDance Network Technology Co., Ltd. | Image processing method and device
CN111757087A (en) * | 2020-06-30 | 2020-10-09 | Beijing Kingsoft Cloud Network Technology Co., Ltd. | VR video processing method and device, and electronic equipment
CN112188236A (en) * | 2019-07-01 | 2021-01-05 | Beijing Xintang Sichuang Education Technology Co., Ltd. | Video frame-interpolation model training method, video frame-interpolation generation method and related devices
CN112686811A (en) * | 2020-11-27 | 2021-04-20 | DeepBlue Technology (Shanghai) Co., Ltd. | Video processing method, video processing apparatus, electronic device, and storage medium
CN113034392A (en) * | 2021-03-22 | 2021-06-25 | Shanxi Sanyouhe Smart Information Technology Co., Ltd. | U-net-based HDR denoising and deblurring method
CN113554058A (en) * | 2021-06-23 | 2021-10-26 | Guangdong OPT Technology Co., Ltd. | Method, system, device and storage medium for enhancing the resolution of a visual target image
CN114286126A (en) * | 2020-09-28 | 2022-04-05 | Alibaba Group Holding Limited | Video processing method and device
CN114697709A (en) * | 2020-12-25 | 2022-07-01 | Huawei Technologies Co., Ltd. | Video transmission method and device
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070071362A1 (en) * | 2004-12-16 | 2007-03-29 | Peyman Milanfar | Dynamic reconstruction of high-resolution video from color-filtered low-resolution video-to-video super-resolution
CN101231693A (en) * | 2007-01-24 | 2008-07-30 | General Electric Company | System and method for reconstructing restored facial images from video
US20090060373A1 (en) * | 2007-08-24 | 2009-03-05 | General Electric Company | Methods and computer readable medium for displaying a restored image
CN102496165A (en) * | 2011-12-07 | 2012-06-13 | Sichuan Jiuzhou Electric Group Co., Ltd. | Method for comprehensively processing video based on motion detection and feature extraction
CN104616257A (en) * | 2015-01-26 | 2015-05-13 | Shandong Computer Science Center (National Supercomputer Center in Jinan) | Restoration and forensics method for blurred and degraded digital images in judicial evidence
JP2015095702A (en) * | 2013-11-11 | 2015-05-18 | FOR-A Co., Ltd. | One-pass video super-resolution processing method and video processor performing such video processing
US9218648B2 (en) * | 2009-10-27 | 2015-12-22 | Honeywell International Inc. | Fourier domain blur estimation method and system
CN106251289A (en) * | 2016-07-21 | 2016-12-21 | Beijing University of Posts and Telecommunications | Video super-resolution reconstruction method based on deep learning and self-similarity
CN106683067A (en) * | 2017-01-20 | 2017-05-17 | Fujian Imperial Vision Information Technology Co., Ltd. | Deep-learning super-resolution reconstruction method based on residual sub-images
CN107274347A (en) * | 2017-07-11 | 2017-10-20 | Fujian Imperial Vision Information Technology Co., Ltd. | Video super-resolution reconstruction method based on a deep residual network
CN108109109A (en) * | 2017-12-22 | 2018-06-01 | Zhejiang Dahua Technology Co., Ltd. | Super-resolution image reconstruction method, apparatus, medium and computing device
Non-Patent Citations (3)
Title |
---|
Yuki Matsushita et al.: "Simultaneous deblur and super-resolution technique for video sequence captured by hand-held video camera", 2014 IEEE International Conference on Image Processing (ICIP) *
Pan Hao: "Research on restoration methods for digital video", China Doctoral Dissertations Full-text Database, Information Science and Technology *
Jia Sujuan: "Research on video image super-resolution reconstruction algorithms", China Master's Theses Full-text Database, Information Science and Technology *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108961186A (en) | Old-film repair and remastering method based on deep learning | |
CN109741260B (en) | Efficient super-resolution method based on depth back projection network | |
Anwar et al. | Densely residual laplacian super-resolution | |
Dong et al. | Multi-scale boosted dehazing network with dense feature fusion | |
Liu et al. | Video super-resolution based on deep learning: a comprehensive survey | |
Zhang et al. | Residual non-local attention networks for image restoration | |
Liu et al. | Learning temporal dynamics for video super-resolution: A deep learning approach | |
CN110782399B (en) | Image deblurring method based on multitasking CNN | |
CN108921786A (en) | Image super-resolution reconstructing method based on residual error convolutional neural networks | |
Deng et al. | Lau-net: Latitude adaptive upscaling network for omnidirectional image super-resolution | |
CN109087273A (en) | Image recovery method, storage medium and the system of neural network based on enhancing | |
Yu et al. | Memory-augmented non-local attention for video super-resolution | |
Guan et al. | Srdgan: learning the noise prior for super resolution with dual generative adversarial networks | |
López-Tapia et al. | A single video super-resolution GAN for multiple downsampling operators based on pseudo-inverse image formation models | |
Choi et al. | Wavelet attention embedding networks for video super-resolution | |
Wang et al. | Medical image super-resolution analysis with sparse representation | |
CN113240581A (en) | Real world image super-resolution method for unknown fuzzy kernel | |
CN116797456A (en) | Image super-resolution reconstruction method, system, device and storage medium | |
Kim et al. | Artifacts Reduction Using Multi-Scale Feature Attention Network in Compressed Medical Images. | |
Choi et al. | Group-based bi-directional recurrent wavelet neural network for efficient video super-resolution (VSR) | |
Sun et al. | Distilling with residual network for single image super resolution | |
Wu et al. | Infrared and visible light dual-camera super-resolution imaging with texture transfer network | |
CN114549314A (en) | Method for improving image resolution | |
Wang et al. | Boosting light field image super resolution learnt from single-image prior | |
Wang et al. | RGNAM: recurrent grid network with an attention mechanism for single-image dehazing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| TA01 | Transfer of patent application right | Effective date of registration: 2019-07-16. Address after: Area B, 5th Floor, Building 2, Yunzu, 528 Xihong Road, Gulou District, Fuzhou City, Fujian Province, 350000. Applicant after: Fujian Imperial Vision Information Technology Co., Ltd. Address before: Unit 5, Unit 14, Comprehensive Dormitory Building, Guangming Lane News Center, New District, Hohhot City, Inner Mongolia Autonomous Region, 010000. Applicant before: Zhao Yan |
| GR01 | Patent grant | |