CN109685072A - High-quality reconstruction method for composite degraded images based on a generative adversarial network - Google Patents

High-quality reconstruction method for composite degraded images based on a generative adversarial network

Info

Publication number
CN109685072A
Authority
CN
China
Prior art keywords
image
size
network
layer
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811575838.8A
Other languages
Chinese (zh)
Other versions
CN109685072B (en)
Inventor
李嘉锋
王珂
卓力
张菁
马春杰
贾童谣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201811575838.8A priority Critical patent/CN109685072B/en
Publication of CN109685072A publication Critical patent/CN109685072A/en
Application granted granted Critical
Publication of CN109685072B publication Critical patent/CN109685072B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/32 Normalisation of the pattern dimensions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a high-quality reconstruction method for composite degraded images based on a generative adversarial network, aimed at low-quality images that simultaneously suffer from multiple degradations such as haze, system noise, low illumination, and compression artifacts. First, approaching the problem from the angle of reconstruction under multi-factor degradation, the invention establishes a GAN-based high-quality reconstruction method for composite degraded images that can reconstruct images jointly degraded by haze, low illumination, compression, system noise, optical blur, and similar factors. Second, the invention uses an asymmetric generator network, which greatly reduces the number of model parameters and makes the model easy to train and use. Third, it adopts an end-to-end design that simplifies the architecture of the reconstruction system and eliminates pre-processing and post-processing. Finally, the generator network is fully convolutional, so composite degraded images of arbitrary size can be input and reconstructed.

Description

High-quality reconstruction method for composite degraded images based on a generative adversarial network
Technical field
The invention belongs to the field of digital image/video signal processing, and in particular relates to a high-quality reconstruction method for composite degraded images based on a generative adversarial network.
Background technique
In video surveillance, intelligent transportation, military imaging reconnaissance, precision missile guidance, remote sensing, aerial mapping, and similar applications, outdoor vision systems are vulnerable to many factors such as haze, system noise, low illumination, optical blur, and compression. These factors mix randomly and in complex ways, causing severe degradation of image quality: loss of image detail, reduced contrast, color distortion, compression blocking artifacts, and the like, so that the subjective visual quality of the image becomes very poor. At the same time, degraded image quality seriously affects the effectiveness of outdoor vision systems; in the worst case, it renders subsequent intelligent analysis such as moving-object detection, tracking, and recognition entirely ineffective.
At present, many researchers have studied single degradation factors such as system noise, haze, compression blocking artifacts, and optical blur in isolation. Because only a single degradation factor is considered during algorithm design, such methods usually cannot also remove the influence of the other degradation factors. When reconstructing composite degraded images, researchers therefore often apply several single-factor methods in series, for example dehazing, then denoising, then deblurring, then compression-artifact removal. This can restore the image to some extent, but during the repeated reconstruction some image detail is inevitably lost at each stage, and the lost detail affects the result of the next stage; moreover, treating the stages independently fails to account for the interrelation of the degradation problems, so the final reconstruction result is often unsatisfactory. How to remove the influence of multiple degradation factors simultaneously within a unified framework is therefore a problem worth in-depth study for the high-quality reconstruction of low-quality images, and is significant for improving the performance of practical application systems.
In 2012, deep neural networks achieved great success in the ImageNet competition, delivering performance far beyond that of conventional methods. Researchers then began applying deep neural networks to image reconstruction tasks, using pairs of low-quality and high-quality image samples to learn a mapping network model from low quality to high quality, achieving good high-quality image reconstruction results.
Generative adversarial networks (GANs, Generative Adversarial Nets) are a new network framework proposed by Goodfellow et al. in 2014. In a GAN framework, two deep neural networks are set up simultaneously, a "generator network" and a "discriminator network", and a non-cooperative game relationship is established between them; through alternating iterative updates the framework reaches a Nash equilibrium, thereby training an optimal network model. The advent of GANs provides a new line of thinking and new means for high-quality image reconstruction.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art. For low-quality images that simultaneously contain multiple degradations such as haze, system noise, low illumination, and compression artifacts, a high-quality reconstruction method for composite degraded images based on the generative adversarial network architecture is proposed, which can solve the reconstruction of low-quality images simultaneously affected by haze, system noise, low illumination, compression artifacts, and similar problems.
The invention is realized by the following technical means: a high-quality reconstruction method for composite degraded images based on a generative adversarial network, mainly comprising an overall flow, an offline part, and an online part.
Overall flow: first, the processing flow of composite degraded image reconstruction is designed; then the generator network structure and the discriminator network structure are designed according to this flow; finally, the feature-map size of each stage of the generator network is adjusted, completing the mapping from composite degraded image to reconstructed image;
Offline part: mainly comprises 2 steps: training sample database generation, and network training and model acquisition. Training sample database generation includes 5 sample-generation processes; network training and model acquisition include the choice of discriminator network, loss functions, and gradient-descent method;
Online part: mainly comprises 3 steps: feature extraction; feature fusion; image reconstruction.
The specific steps of the overall flow are as follows:
(1) The composite degraded image reconstruction process mainly comprises image reconstruction and real/fake image discrimination; accordingly, the network structure comprises two parts: an image generation network and an image discrimination network.
In image reconstruction, the input is an image affected by haze, system noise, low illumination, compression artifacts, and similar factors; feature extraction, feature fusion, and image reconstruction are performed on it, and the reconstructed high-quality image is output.
In real/fake image discrimination, the input is either an image reconstruction result or a real high-quality image. The discriminator judges whether the input is an expected high-quality image and outputs the probability that the input meets that expectation; the discriminator network supervises the quality and realism of the images reconstructed by the generator network.
The image generation network is used for image reconstruction, and the image discrimination network is used for image discrimination. The offline part trains both the image generation network and the image discrimination network simultaneously; online, only the generator network is used to perform image reconstruction.
(2) Each layer of the network structure corresponds to a step of the image reconstruction flow and has a distinct role. As shown in Fig. 2, the network structure comprises two parts: the generator network and the discriminator network.
The generator network comprises 14 convolutional layers, 6 deconvolutional layers, and 16 ELU activation modules. In the feature extraction step, 14 convolutional layers and 11 ELU activation modules form the feature extraction sub-module, responsible for extracting high-level semantic feature information from the input image. In the image reconstruction step, 6 deconvolutional layers and 5 ELU activation modules form the image reconstruction sub-module, responsible for reconstructing the composite degraded image from the input feature information; the image reconstruction sub-module contains no pooling or fully connected layers, so that the input and output feature-map sizes remain consistent. In the feature fusion step, the outputs of convolutional layers 3, 6, 11, 12, and 13 of the feature extraction sub-module are connected by skip connections to the outputs of deconvolutional layers 5, 4, 3, 2, and 1 of the image reconstruction sub-module, respectively, forming the feature fusion sub-module, which prevents vanishing gradients, retains high-level semantic information, and accelerates network training. The discriminator network comprises 5 convolutional layers, 3 batch normalization layers, 4 LeakyReLU activation modules, and 1 fully connected layer, and is responsible for estimating the probability that the input is an expected high-quality image.
The size of a convolution kernel is described by W × H, where W and H denote the kernel's width and height; the size of an image is described by C × W × H, where C, W, and H denote the image's channel count, width, and height.
(3) During image restoration, the input and output feature maps of the convolutional layers in each network structure change as follows:
In the feature extraction stage of the generator network, the input image size is 3 × 128 × 128, and the feature maps evolve as follows:
Layer 1: 32 convolution kernels of size 3 × 3, then an ELU activation module; output feature map 32 × 128 × 128.
Layer 2: input 32 × 128 × 128; 32 kernels of 3 × 3, then ELU; output 32 × 128 × 128.
Layer 3: input 32 × 128 × 128; 32 kernels of 3 × 3 with stride 2; output 32 × 64 × 64.
Layer 4: input 32 × 64 × 64; 64 kernels of 3 × 3, then ELU; output 64 × 64 × 64.
Layer 5: input 64 × 64 × 64; 64 kernels of 3 × 3, then ELU; output 64 × 64 × 64.
Layer 6: input 64 × 64 × 64; 64 kernels of 3 × 3 with stride 2; output 64 × 32 × 32.
Layer 7: input 64 × 32 × 32; 128 kernels of 3 × 3, then ELU; output 128 × 32 × 32.
Layer 8: input 128 × 32 × 32; 128 kernels of 3 × 3, then ELU; output 128 × 32 × 32.
Layer 9: input 128 × 32 × 32; 128 kernels of 3 × 3, then ELU; output 128 × 32 × 32.
Layer 10: input 128 × 32 × 32; 128 kernels of 3 × 3, then ELU; output 128 × 32 × 32.
Layer 11: input 128 × 32 × 32; 128 kernels of 3 × 3 with stride 2; output 128 × 16 × 16.
Layer 12: input 128 × 16 × 16; 128 kernels of 3 × 3 with stride 2; output 128 × 8 × 8.
Layer 13: input 128 × 8 × 8; 128 kernels of 3 × 3 with stride 2, then ELU; output 128 × 4 × 4.
Layer 14: input 128 × 4 × 4; 128 kernels of 3 × 3 with stride 2, then ELU; final output 128 × 1 × 1.
In the image reconstruction stage of the generator network, the input feature-map size is 128 × 1 × 1:
Deconvolutional layer 1: 128 kernels of size 3 × 3 with stride 2, then an ELU activation module; output reconstruction feature map 128 × 4 × 4.
Deconvolutional layer 2: input fused feature map 256 × 4 × 4; 128 kernels of 3 × 3 with stride 2, then ELU; output 128 × 8 × 8.
Deconvolutional layer 3: input fused feature map 256 × 8 × 8; 128 kernels of 3 × 3 with stride 2, then ELU; output 128 × 16 × 16.
Deconvolutional layer 4: input fused feature map 256 × 16 × 16; 64 kernels of 3 × 3 with stride 2, then ELU; output 64 × 32 × 32.
Deconvolutional layer 5: input fused feature map 128 × 32 × 32; 32 kernels of 3 × 3 with stride 2, then ELU; output 32 × 64 × 64.
Deconvolutional layer 6: input fused feature map 64 × 64 × 64; 3 kernels of 3 × 3 with stride 2, then ELU; final output the reconstructed restored image 3 × 128 × 128.
During the feature fusion of the generator network: the output of convolutional layer 13 of the feature extraction stage is fused with the output of deconvolutional layer 1 of the image reconstruction stage to form the input of deconvolutional layer 2; the output of convolutional layer 12 is fused with the output of deconvolutional layer 2 to form the input of deconvolutional layer 3; the output of convolutional layer 11 is fused with the output of deconvolutional layer 3 to form the input of deconvolutional layer 4; the output of convolutional layer 6 is fused with the output of deconvolutional layer 4 to form the input of deconvolutional layer 5; and the output of convolutional layer 3 is fused with the output of deconvolutional layer 5 to form the input of deconvolutional layer 6.
When the discriminator network discriminates an image, the input image size is 3 × 128 × 128:
Convolutional layer 1: 64 kernels of size 4 × 4 with stride 2, then a LeakyReLU activation module; output discrimination feature map 64 × 64 × 64.
Convolutional layer 2: 128 kernels of 4 × 4 with stride 2, then a batch normalization layer and LeakyReLU; output 128 × 32 × 32.
Convolutional layer 3: 256 kernels of 4 × 4 with stride 2, then batch normalization and LeakyReLU; output 256 × 16 × 16.
Convolutional layer 4: 512 kernels of 4 × 4 with stride 2, then batch normalization and LeakyReLU; output 512 × 8 × 8.
Convolutional layer 5: 1 kernel of 4 × 4 with stride 1; output discrimination feature map 1 × 4 × 4.
Finally, the output of convolutional layer 5 is reshaped to 1 × 16 and used as the input of the fully connected layer, which outputs a 1 × 1 feature vector indicating the reconstruction quality of the input image, i.e., the realism of the image produced by the generator network.
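The layer specification above maps directly onto a fully convolutional encoder-decoder. Below is a minimal PyTorch sketch of that layout, not the authors' code: layer counts, channel widths, strides, and skip connections follow the text, while paddings are assumptions chosen so the stated feature-map sizes hold for a 3 × 128 × 128 input. Transposed convolutions use 4 × 4 kernels so the stated output sizes are met exactly (the text specifies 3 × 3 deconvolution kernels but not their padding scheme), and the final discriminator convolution uses stride 2, since the stated stride 1 cannot map 512 × 8 × 8 to 1 × 4 × 4 with a 4 × 4 kernel:

```python
import torch
import torch.nn as nn

def conv(cin, cout, stride=1, pad=1, act=True):
    # 3x3 convolution, optionally followed by an ELU activation module
    layers = [nn.Conv2d(cin, cout, 3, stride, pad)]
    if act:
        layers.append(nn.ELU())
    return nn.Sequential(*layers)

def deconv(cin, cout, pad=1):
    # stride-2 transposed convolution + ELU; kernel 4 is an assumption (see above)
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, pad), nn.ELU())

class Generator(nn.Module):
    """Asymmetric generator: 14-conv feature extraction, 6-deconv reconstruction."""
    def __init__(self):
        super().__init__()
        self.e1, self.e2 = conv(3, 32), conv(32, 32)            # 32x128x128
        self.e3 = conv(32, 32, stride=2, act=False)             # 32x64x64  (skip)
        self.e4, self.e5 = conv(32, 64), conv(64, 64)           # 64x64x64
        self.e6 = conv(64, 64, stride=2, act=False)             # 64x32x32  (skip)
        self.e7 = conv(64, 128)                                 # 128x32x32
        self.e8, self.e9, self.e10 = conv(128, 128), conv(128, 128), conv(128, 128)
        self.e11 = conv(128, 128, stride=2, act=False)          # 128x16x16 (skip)
        self.e12 = conv(128, 128, stride=2, act=False)          # 128x8x8   (skip)
        self.e13 = conv(128, 128, stride=2)                     # 128x4x4   (skip)
        self.e14 = conv(128, 128, stride=2, pad=0)              # 128x1x1
        self.d1 = deconv(128, 128, pad=0)                       # 128x4x4
        self.d2, self.d3 = deconv(256, 128), deconv(256, 128)   # 128x8x8, 128x16x16
        self.d4, self.d5 = deconv(256, 64), deconv(128, 32)     # 64x32x32, 32x64x64
        self.d6 = deconv(64, 3)                                 # 3x128x128

    def forward(self, x):
        f3 = self.e3(self.e2(self.e1(x)))
        f6 = self.e6(self.e5(self.e4(f3)))
        f11 = self.e11(self.e10(self.e9(self.e8(self.e7(f6)))))
        f12 = self.e12(f11)
        f13 = self.e13(f12)
        r = self.d1(self.e14(f13))
        r = self.d2(torch.cat([r, f13], 1))  # feature fusion by concatenation
        r = self.d3(torch.cat([r, f12], 1))
        r = self.d4(torch.cat([r, f11], 1))
        r = self.d5(torch.cat([r, f6], 1))
        return self.d6(torch.cat([r, f3], 1))

class Discriminator(nn.Module):
    """5 convs, 3 batch-norm layers, 4 LeakyReLU modules, 1 fully connected layer."""
    def __init__(self):
        super().__init__()
        def block(cin, cout, bn=True):
            layers = [nn.Conv2d(cin, cout, 4, 2, 1)]
            if bn:
                layers.append(nn.BatchNorm2d(cout))
            layers.append(nn.LeakyReLU(0.2))
            return layers
        self.features = nn.Sequential(
            *block(3, 64, bn=False),     # 64x64x64
            *block(64, 128),             # 128x32x32
            *block(128, 256),            # 256x16x16
            *block(256, 512),            # 512x8x8
            nn.Conv2d(512, 1, 4, 2, 1),  # 1x4x4 (stride 2 assumed, see above)
        )
        self.fc = nn.Linear(16, 1)       # flattened 1x16 -> realism score

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))
```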
The specific steps of the offline part are as follows:
(1) Training sample database generation: the generation of the training sample database mainly comprises the generation of haze training samples, low-illumination training samples, compression training samples, noise training samples, and optical blur training samples.
Generation of haze training samples: according to atmospheric light scattering theory, the atmospheric scattering model widely used in computer vision and graphics takes the form shown in formula (1). For a given haze-free image J(x), the haze training sample image is obtained by randomly assigning a transmission rate t(x), where A is the atmospheric light value.
I(x) = J(x)t(x) + A(1 − t(x)) (1)
Generation of low-illumination training samples: the given image is converted from the RGB color space to the HSV color space, and the brightness channel V is randomly reduced within a certain range to obtain the low-illumination training sample image.
Generation of compression-artifact training samples: the given image is passed through a JPEG compression method Q with different compression quality (CQ) parameters, yielding training sample images with different degrees of compression artifacts.
Generation of noise training samples: zero-mean Gaussian noise of varying strength is added to the given image to obtain the noise training sample images.
Generation of optical blur training samples: Gaussian blur of varying strength is applied to the given image to obtain the optical blur training sample images.
(2) Network training: first, loss supervision is established separately for the generator network and the discriminator network, and gradient descent is used to minimize the loss functions. The generator loss comprises two parts: a reconstruction loss and a generation (adversarial) loss; the reconstruction loss is composed of an MSE loss function and a feature loss function, the generation loss uses the generator loss proposed in LSGAN, and the discriminator loss uses the discriminator loss proposed in LSGAN. Both the generator and the discriminator are trained with the Adam gradient-descent algorithm, with momentum set to 0.9; the learning rate of the generator is 0.0005 and that of the discriminator is 0.00001, and every 50 training rounds both learning rates are multiplied by 0.9. Training iterates until the preset maximum number of iterations (100,000) is reached, finally yielding the image reconstruction model.
The specific steps of the online part are as follows:
(1) Feature extraction on the input image: the composite degraded image is fed into the feature extraction sub-module, and after the 14 convolutional layers the high-level semantic feature information of the input image is obtained.
(2) Feature fusion on the features of the input image: the outputs of convolutional layers 3, 6, 11, 12, and 13 of the feature extraction sub-module are joined by skip connections to the outputs of deconvolutional layers 5, 4, 3, 2, and 1 of the image reconstruction sub-module, respectively, forming the fused features used by the subsequent image reconstruction.
(3) Reconstruction from the fused feature maps: the fused features are input to the image reconstruction sub-module, the feature maps are reconstructed, and after the 6 deconvolutional layers the reconstructed high-quality image is obtained.
First, approaching the problem from the angle of reconstruction under multi-factor degradation, the invention establishes a high-quality reconstruction method for composite degraded images based on a generative adversarial network that can reconstruct images jointly degraded by haze, low illumination, compression, system noise, optical blur, and similar factors. Second, the invention uses an asymmetric generator network, which greatly reduces the number of model parameters and makes the model easy to train and use. Third, the invention adopts an end-to-end design, simplifying the architecture of the reconstruction system and removing pre-processing and post-processing. Finally, the generator network of the invention is fully convolutional, so composite degraded images of arbitrary size can be input and reconstructed.
Brief description of the drawings
Fig. 1: flow chart of the composite degraded image high-quality reconstruction method based on a generative adversarial network;
Fig. 2: network structure of the method of the invention; (a) generator network structure; (b) discriminator network structure;
Fig. 3: flow charts of the offline and online parts of the method of the invention;
Fig. 4: low-quality image and reconstruction result under haze, noise, and compression artifacts; (a) low-quality image under haze, noise, and compression artifacts; (b) reconstruction result;
Fig. 5: low-quality image and reconstruction result under haze, blur, compression artifacts, and low illumination; (a) low-quality image under haze, blur, compression artifacts, and low illumination; (b) reconstruction result;
Fig. 6: low-quality image and reconstruction result under haze, blur, noise, compression artifacts, and low illumination; (a) low-quality image under haze, blur, noise, compression artifacts, and low illumination; (b) reconstruction result;
Detailed description of the embodiments
Embodiments of the invention are described in detail below with reference to the accompanying drawings:
In the high-quality reconstruction method for composite degraded images based on a generative adversarial network, the overall flow is shown in Fig. 1. The algorithm is divided into an offline part and an online part, whose flow charts are shown in Fig. 3. In the offline part, training sample sets are built according to the different degradation factors: an image of size M × N is first scaled to 128 × 128 pixels, and then the haze, low-illumination, compression, random-noise, and optical-blur degradation factors are applied separately to obtain training sample images; the original image and each training sample image form a training sample pair. During network training, training sample pairs are drawn at random. The online part avoids image pre-processing and post-processing: the composite degraded image is passed through the network model to predict the reconstructed image, further improving reconstruction performance.
The offline part is divided into 2 steps, as follows:
Step (1), training sample database generation: the training dataset consists of 50,000 images collected from the Internet. According to formula (1), for a given image J(x), the atmospheric light value is fixed at A = 1 and the transmission rate t(x) is assigned at random with t(x) ∈ (0, 1), yielding the haze training sample image. The given image is converted from the RGB color space to the HSV color space, and the brightness channel of the converted image is multiplied by a random low-illumination factor α with α ∈ (0.6, 0.9), yielding the low-illumination training sample image. The given image is JPEG-compressed with compression quality (CQ) values set to (10, 20, 30, 40), yielding the compression-artifact training sample images. Zero-mean Gaussian noise N(0, σ) with σ set to (1, 0.5, 0.1, 0.01) is added to the given image, yielding the noise training sample images. Gaussian blur with kernel size 25 × 25 and variance σ set to (0.5, 1.0, 1.5) is applied to the given image, yielding the optical blur training sample images.
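As an illustration of step (1), the five degradations can be synthesized with OpenCV and NumPy roughly as follows. This is a sketch under the parameter settings stated above; the function names and per-call sampling are illustrative, not the authors' code:

```python
import cv2
import numpy as np

def add_haze(img, A=1.0):
    # atmospheric scattering model, formula (1): I = J*t + A*(1 - t)
    J = img.astype(np.float32) / 255.0
    t = np.random.uniform(0.0, 1.0)  # random transmission t(x) in (0, 1)
    return (np.clip(J * t + A * (1.0 - t), 0, 1) * 255).astype(np.uint8)

def add_low_light(img):
    # scale the V channel in HSV by a random factor alpha in (0.6, 0.9)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 2] *= np.random.uniform(0.6, 0.9)
    return cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR)

def add_jpeg_artifacts(img, cq=10):
    # JPEG-compress with quality CQ in {10, 20, 30, 40}, then decode again
    _, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, cq])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)

def add_gaussian_noise(img, sigma=0.1):
    # zero-mean Gaussian noise, sigma in {1, 0.5, 0.1, 0.01} (image scaled to [0, 1])
    J = img.astype(np.float32) / 255.0
    noisy = J + np.random.normal(0.0, sigma, J.shape)
    return (np.clip(noisy, 0, 1) * 255).astype(np.uint8)

def add_optical_blur(img, sigma=1.0):
    # 25x25 Gaussian blur kernel, sigma in {0.5, 1.0, 1.5}
    return cv2.GaussianBlur(img, (25, 25), sigma)
```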
Step (2), network training and model acquisition: the generator network and the discriminator network are trained within the same framework, and their mapping relations are obtained by end-to-end learning.
During training of the whole network, the input is a sample pair {X_i, Y_i}, where X_i is the synthesized sample image with degradation factors and Y_i is the true high-quality image.
In the generator network, as shown in formula (2), the degraded sample image X_i is taken as the input of the generator network, and the output after the generator network is the reconstructed image Z_i; the generator output Z_i and the true high-quality image Y_i then form the image pair {Z_i, Y_i} for subsequent use by the discriminator network. In formula (2), G denotes the generator network.
Z_i = G(X_i) (2)
In the generator network, the generator loss mainly comprises two parts: the reconstruction loss and the adversarial loss. The reconstruction loss is shown in formula (3), where W and H denote the width and height of the image. The adversarial loss is shown in formula (4); it pushes the generated image closer to an actual high-quality image at a higher level, making it more real and natural. In formula (4), D denotes the discriminator network.
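Formulas (3) and (4) appeared as images in the original publication and are missing from this rendering. A plausible reconstruction in the notation of formula (2), assuming the per-pixel MSE term the text names for the reconstruction loss and the standard LSGAN generator loss (the feature-loss term mentioned in the training step is omitted because its exact form is not given):

L_rec = (1 / (W·H)) · Σ_x Σ_y (Y(x, y) − G(X)(x, y))² (3)

L_adv = (1/2) · E[(D(G(X)) − 1)²] (4)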
In the discriminator network, the discriminator loss is used to judge the realism of the reconstructed image Z_i produced by the generator. The input of the discriminator network is the image pair {Z_i, Y_i}, with corresponding labels {0, 1} indicating the realism of the image. The discriminator loss is shown in formula (5), where Z and Y satisfy {Z, Y} ∈ {Z_i, Y_i}.
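Formula (5) is likewise missing; assuming the standard LSGAN discriminator loss, with the real image Y labeled 1 and the reconstruction Z labeled 0:

L_D = (1/2) · E[(D(Y) − 1)²] + (1/2) · E[D(Z)²] (5)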
During network training, the generator network and the discriminator network are trained alternately. First the generator is fixed and the discriminator is trained according to the discriminator loss L_D; then the discriminator is fixed and the generator is trained according to the overall generator loss. The final optimization objective is shown in formula (6), where λ_G and λ_D take the values 1 and 0.01, respectively. Training iterates until the preset maximum number of iterations (100,000) is reached, yielding the generator network used for image reconstruction.
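Formula (6) is also missing from this rendering; given the weights named in the text, a plausible form is L = λ_G·L_rec + λ_D·L_adv with λ_G = 1 and λ_D = 0.01. Below is a minimal PyTorch sketch of the alternating scheme under that assumption, using the stated hyperparameters (Adam with momentum 0.9, generator learning rate 0.0005, discriminator learning rate 0.00001, LSGAN-style losses); the Generator and Discriminator classes come from the earlier sketch, and `loader`, an iterator over {X_i, Y_i} pairs, is assumed:

```python
import torch
import torch.nn.functional as F

G, D = Generator(), Discriminator()          # from the earlier sketch
opt_g = torch.optim.Adam(G.parameters(), lr=5e-4, betas=(0.9, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-5, betas=(0.9, 0.999))
lam_g, lam_d = 1.0, 0.01                     # weights assumed for formula (6)

for step, (x, y) in enumerate(loader):       # {X_i, Y_i} sample pairs
    # 1) fix G, train D with the LSGAN discriminator loss, formula (5)
    z = G(x).detach()
    loss_d = 0.5 * ((D(y) - 1).pow(2).mean() + D(z).pow(2).mean())
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) fix D, train G with reconstruction loss (3) + adversarial loss (4)
    z = G(x)
    loss_g = lam_g * F.mse_loss(z, y) + lam_d * 0.5 * (D(z) - 1).pow(2).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    if step >= 100_000:                      # preset maximum iterations
        break
```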
The online part is divided into 3 steps, as follows:
(1) Feature extraction on the input image: a convolutional neural network (CNN) performs bottom-up feature extraction and representation. The input is a composite degraded image X to be processed; after it enters the feature extraction sub-module, effective edge information of the image is extracted, the convolved image is then processed with a nonlinear activation function to mine the latent features of the image, and the high-level semantic feature information of the input image is finally obtained through layer-by-layer feature transformation. The activation function used in the invention is the Exponential Linear Unit (ELU), as shown in formula (7). Compared with the sigmoid, tanh, and ReLU functions, ELU converges faster under stochastic gradient descent and does not require a large amount of complex computation. Here α is a non-zero number, set to 1.0.
ELU(x) = max(0, x) + min(0, α · (exp(x) − 1)) (7)
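As a quick check of formula (7), a direct NumPy transcription (illustrative only):

```python
import numpy as np

def elu(x, alpha=1.0):
    # formula (7): identity for x > 0, alpha*(exp(x) - 1) for x <= 0
    return np.maximum(0, x) + np.minimum(0, alpha * (np.exp(x) - 1))

print(elu(np.array([-2.0, 0.0, 3.0])))  # [-0.86466472  0.  3.]
```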
When features are extracted layer by layer, each convolutional layer extracts features according to formula (8): the input of the i-th convolutional layer is x_i, and after the convolutional layer and the ELU activation function, the final convolution result of that layer, i.e., its feature map, is obtained. The high-level semantic features of the input image at layer l are given by formula (9); during feature extraction, l satisfies l ∈ {3, 6, 11, 12, 13, 14}, where fea_14 serves directly as the input of the reconstruction stage and the remaining features are used for feature fusion.
F_i(x_i) = ELU(Conv_i(x_i)) (8)
fea_l = F_l(F_{l−1}(F_{l−2}(...F_1(x)))) (9)
(2) Feature fusion on the features of the input image: the feature fusion process is given by formula (10), where the concat operator denotes matrix concatenation; for example, concatenating a feature map of size 128 × 16 × 16 with a reconstruction feature map of size 128 × 16 × 16 yields a fused feature map of size 256 × 16 × 16. Feature fusion of the input image proceeds simultaneously with reconstruction; the specific fusion details are given in step (3).
m = concat(fea, rec) (10)
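Formula (10) is plain channel-wise concatenation; in PyTorch terms, with the sizes from the example above (illustrative only):

```python
import torch

fea = torch.randn(1, 128, 16, 16)  # feature map from the extraction sub-module
rec = torch.randn(1, 128, 16, 16)  # reconstruction feature map
m = torch.cat([fea, rec], dim=1)   # fused feature map
print(m.shape)                     # torch.Size([1, 256, 16, 16])
```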
(3) Reconstruction from the fused feature maps: the deconvolution step of the reconstruction process is given by formula (11); the input of the i-th deconvolutional layer is x_i, and after the deconvolutional layer and the ELU activation function the reconstruction feature map is obtained. The output of the l-th layer of the reconstruction process is given by formula (12).
R_i(x_i) = ELU(DeConv_i(x_i)) (11)
rec_l = R_l(R_{l−1}(R_{l−2}(...R_1(x)))) (12)
During feature reconstruction, the layer-14 output of the feature extraction sub-module in step (1), i.e., fea_14, is used as the input of the feature reconstruction sub-module to obtain the reconstruction feature map rec_1. The output rec_1 of the first reconstruction layer is fused with the layer-13 output fea_13 of the feature extraction stage to obtain the fused feature m_1; m_1 serves as the input of the next reconstruction layer, and the second reconstruction layer yields rec_2. The output rec_2 of the second reconstruction layer is fused with the layer-12 output fea_12 to obtain m_2; m_2 serves as the input of the next reconstruction layer, and the third reconstruction layer yields rec_3. The output rec_3 of the third reconstruction layer is fused with the layer-11 output fea_11 to obtain m_3; m_3 serves as the input of the next reconstruction layer, and the fourth reconstruction layer yields rec_4. The layer-4 output rec_4 is fused with the layer-6 output fea_6 of the extraction stage to obtain m_4; m_4 serves as the input of the next reconstruction layer, and the fifth reconstruction layer yields rec_5. The layer-5 output rec_5 is fused with the layer-3 output fea_3 of the extraction stage to obtain m_5; m_5 serves as the input of the next reconstruction layer, and the sixth reconstruction layer finally yields the reconstructed high-quality image.
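Online use therefore reduces to a single forward pass through the trained generator. A minimal sketch, assuming the Generator class from the earlier sketch and a hypothetical checkpoint file name:

```python
import torch

G = Generator()
G.load_state_dict(torch.load("generator.pth"))  # hypothetical checkpoint path
G.eval()

with torch.no_grad():
    x = torch.randn(1, 3, 128, 128)             # composite degraded input image
    restored = G(x)                             # reconstructed 1x3x128x128 image
```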

Claims (7)

1. A high-quality reconstruction method for composite degraded images based on a generative adversarial network, characterized in that the method comprises an overall flow, an offline part, and an online part;
overall flow: first, the processing flow of composite degraded image reconstruction is designed; then the generator network structure and the discriminator network structure are designed according to this flow; finally, the feature-map size of each stage of the generator network is adjusted, completing the mapping from composite degraded image to reconstructed image;
offline part: mainly comprising 2 steps: training sample database generation, and network training and model acquisition; wherein training sample database generation includes 5 sample-generation processes, and network training and model acquisition include the choice of discriminator network, loss functions, and gradient-descent method;
online part: mainly comprising 3 steps: feature extraction; feature fusion; image reconstruction.
2. The high-quality reconstruction method for composite degraded images based on a generative adversarial network according to claim 1, characterized in that the specific steps of the overall flow are as follows:
(1) the composite degraded image reconstruction process mainly comprises image reconstruction and real/fake image discrimination; accordingly, the network structure comprises two parts: an image generation network and an image discrimination network;
in image reconstruction, the input is an image affected by haze, system noise, low illumination, compression artifacts, and similar factors; feature extraction, feature fusion, and image reconstruction are performed on it, and the reconstructed high-quality image is output;
in real/fake image discrimination, the input is an image reconstruction result or a real high-quality image; the input is discriminated to judge whether it is an expected high-quality image, and the probability that the input meets that expectation is output; the discriminator network supervises the quality and realism of the images reconstructed by the generator network;
the image generation network is used for image reconstruction and the image discrimination network for image discrimination; the offline part trains both the image generation network and the image discrimination network simultaneously, while online only the generator network is used for image reconstruction;
(2) each layer of the network structure corresponds to a step of the image reconstruction flow and has a distinct role; as shown in Fig. 2, the network structure comprises two parts: the generator network and the discriminator network;
the generator network comprises 14 convolutional layers, 6 deconvolutional layers, and 16 ELU activation modules; in the feature extraction step, 14 convolutional layers and 11 ELU activation modules form the feature extraction sub-module, responsible for extracting high-level semantic feature information from the input image; in the image reconstruction step, 6 deconvolutional layers and 5 ELU activation modules form the image reconstruction sub-module, responsible for reconstructing the composite degraded image from the input feature information; the image reconstruction sub-module contains no pooling or fully connected layers, so that the input and output feature-map sizes remain consistent; in the feature fusion step, the outputs of convolutional layers 3, 6, 11, 12, and 13 of the feature extraction sub-module are connected by skip connections to the outputs of deconvolutional layers 5, 4, 3, 2, and 1 of the image reconstruction sub-module, respectively, forming the feature fusion sub-module, which prevents vanishing gradients, retains high-level semantic information, and accelerates network training; the discriminator network comprises 5 convolutional layers, 3 batch normalization layers, 4 LeakyReLU activation modules, and 1 fully connected layer, and is responsible for estimating the probability that the input is an expected high-quality image;
the size of a convolution kernel is described by W × H, where W and H denote the kernel's width and height; the size of an image is described by C × W × H, where C, W, and H denote the image's channel count, width, and height;
(3) during image restoration, the input and output feature maps of the convolutional layers in each network structure change as follows:
in the feature extraction process of the generator network, the input image size is 3 × 128 × 128; in convolutional layer 1, the input first passes through 32 convolution kernels of size 3 × 3 and then an ELU activation module, giving a 32 × 128 × 128 feature map; in layer 2, the 32 × 128 × 128 input passes through 32 kernels of 3 × 3 and an ELU module, giving a 32 × 128 × 128 feature map; in layer 3, the 32 × 128 × 128 input passes through 32 kernels of 3 × 3 with stride 2, giving a 32 × 64 × 64 feature map; in layer 4, the 32 × 64 × 64 input passes through 64 kernels of 3 × 3 and an ELU module, giving a 64 × 64 × 64 feature map; in layer 5, the 64 × 64 × 64 input passes through 64 kernels of 3 × 3 and an ELU module, giving a 64 × 64 × 64 feature map; in layer 6, the 64 × 64 × 64 input passes through 64 kernels of 3 × 3 with stride 2, giving a 64 × 32 × 32 feature map; in layer 7, the 64 × 32 × 32 input passes through 128 kernels of 3 × 3 and an ELU module, giving a 128 × 32 × 32 feature map; in layers 8, 9, and 10, the 128 × 32 × 32 input passes through 128 kernels of 3 × 3 and an ELU module, keeping the size 128 × 32 × 32; in layer 11, the 128 × 32 × 32 input passes through 128 kernels of 3 × 3 with stride 2, giving a 128 × 16 × 16 feature map; in layer 12, the 128 × 16 × 16 input passes through 128 kernels of 3 × 3 with stride 2, giving a 128 × 8 × 8 feature map; in layer 13, the 128 × 8 × 8 input passes through 128 kernels of 3 × 3 with stride 2 and an ELU module, giving a 128 × 4 × 4 feature map; in layer 14, the 128 × 4 × 4 input passes through 128 kernels of 3 × 3 with stride 2 and an ELU module, finally giving a 128 × 1 × 1 feature map.
3. The high-quality reconstruction method for composite degraded images based on a generative adversarial network according to claim 2, characterized in that, in the image reconstruction process of the generator network, the input feature-map size is 128 × 1 × 1; in deconvolutional layer 1, the input first passes through 128 kernels of size 3 × 3 with stride 2 and then an ELU activation module, giving a 128 × 4 × 4 reconstruction feature map; in deconvolutional layer 2, the 256 × 4 × 4 fused feature map passes through 128 kernels of 3 × 3 with stride 2 and an ELU module, giving a 128 × 8 × 8 reconstruction feature map; in deconvolutional layer 3, the 256 × 8 × 8 fused feature map passes through 128 kernels of 3 × 3 with stride 2 and an ELU module, giving a 128 × 16 × 16 reconstruction feature map; in deconvolutional layer 4, the 256 × 16 × 16 fused feature map passes through 64 kernels of 3 × 3 with stride 2 and an ELU module, giving a 64 × 32 × 32 reconstruction feature map; in deconvolutional layer 5, the 128 × 32 × 32 fused feature map passes through 32 kernels of 3 × 3 with stride 2 and an ELU module, giving a 32 × 64 × 64 reconstruction feature map; in deconvolutional layer 6, the 64 × 64 × 64 fused feature map passes through 3 kernels of 3 × 3 with stride 2 and an ELU module, finally giving the 3 × 128 × 128 reconstructed restored image.
4. The high-quality reconstruction method for composite degraded images based on a generative adversarial network according to claim 2, characterized in that, during the feature fusion of the generator network, the output of convolutional layer 13 of the feature extraction stage is fused with the output of deconvolutional layer 1 of the image reconstruction stage to form the input of deconvolutional layer 2; the output of convolutional layer 12 is fused with the output of deconvolutional layer 2 to form the input of deconvolutional layer 3; the output of convolutional layer 11 is fused with the output of deconvolutional layer 3 to form the input of deconvolutional layer 4; the output of convolutional layer 6 is fused with the output of deconvolutional layer 4 to form the input of deconvolutional layer 5; and the output of convolutional layer 3 is fused with the output of deconvolutional layer 5 to form the input of deconvolutional layer 6.
5. The high-quality reconstruction method for composite degraded images based on a generative adversarial network according to claim 2, characterized in that, when the discriminator network discriminates an image, the input image size is 3 × 128 × 128; in convolutional layer 1, the input first passes through 64 kernels of size 4 × 4 with stride 2 and then a LeakyReLU activation module, giving a 64 × 64 × 64 discrimination feature map; in layer 2, 128 kernels of 4 × 4 with stride 2 are applied, followed by a batch normalization layer and a LeakyReLU module, giving a 128 × 32 × 32 discrimination feature map; in layer 3, 256 kernels of 4 × 4 with stride 2 are applied, followed by batch normalization and LeakyReLU, giving a 256 × 16 × 16 discrimination feature map; in layer 4, 512 kernels of 4 × 4 with stride 2 are applied, followed by batch normalization and LeakyReLU, giving a 512 × 8 × 8 discrimination feature map; in layer 5, 1 kernel of 4 × 4 with stride 1 is applied, giving a 1 × 4 × 4 discrimination feature map; finally, the output of convolutional layer 5 is reshaped to 1 × 16 as the input of the fully connected layer, which outputs a 1 × 1 feature vector indicating the reconstruction quality of the input image, i.e., the realism of the image produced by the generator network.
6. The high-quality reconstruction method for composite degraded images based on a generative adversarial network according to claim 1, characterized in that the specific steps of the offline part are as follows:
(1) training sample database generation: the generation of the training sample database mainly comprises the generation of haze training samples, low-illumination training samples, compression training samples, noise training samples, and optical blur training samples;
generation of haze training samples: according to atmospheric light scattering theory, the atmospheric scattering model widely used in computer vision and graphics takes the form shown in formula (1); for a given haze-free image J(x), the haze training sample image is obtained by randomly assigning a transmission rate t(x), where A is the atmospheric light value;
I(x) = J(x)t(x) + A(1 − t(x)) (1)
generation of low-illumination training samples: the given image is converted from the RGB color space to the HSV color space, and the brightness channel V is randomly reduced within a certain range to obtain the low-illumination training sample image;
generation of compression-artifact training samples: the given image is passed through a JPEG compression method Q with different compression quality parameters, yielding training sample images with different degrees of compression artifacts;
generation of noise training samples: zero-mean Gaussian noise of varying strength is added to the given image to obtain the noise training sample images;
generation of optical blur training samples: Gaussian blur of varying strength is applied to the given image to obtain the optical blur training sample images;
(2) network training: first, loss supervision is established separately for the generator network and the discriminator network, and gradient descent is used to minimize the loss functions; the generator loss comprises two parts, a reconstruction loss and a generation loss; the reconstruction loss is composed of an MSE loss function and a feature loss function, the generation loss uses the generator loss proposed in LSGAN, and the discriminator loss uses the discriminator loss proposed in LSGAN; both the generator and the discriminator are trained with the Adam gradient-descent algorithm with momentum set to 0.9; the learning rate of the generator is 0.0005 and that of the discriminator is 0.00001, and every 50 training rounds both learning rates are multiplied by 0.9; training iterates until the preset maximum number of iterations is reached, finally yielding the image reconstruction model.
7. The high-quality reconstruction method for composite degraded images based on a generative adversarial network according to claim 1, characterized in that the specific steps of the online part are as follows:
(1) feature extraction on the input image: the composite degraded image is fed into the feature extraction sub-module, and after the 14 convolutional layers the high-level semantic feature information of the input image is obtained;
(2) feature fusion on the features of the input image: the outputs of convolutional layers 3, 6, 11, 12, and 13 of the feature extraction sub-module are joined by skip connections to the outputs of deconvolutional layers 5, 4, 3, 2, and 1 of the image reconstruction sub-module, respectively, forming the fused features used by the subsequent image reconstruction;
(3) reconstruction from the fused feature maps: the fused features are input to the image reconstruction sub-module, the feature maps are reconstructed, and after the 6 deconvolutional layers the reconstructed high-quality image is obtained.
CN201811575838.8A 2018-12-22 2018-12-22 High-quality reconstruction method for composite degraded images based on a generative adversarial network Active CN109685072B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811575838.8A CN109685072B (en) 2018-12-22 2018-12-22 High-quality reconstruction method for composite degraded images based on a generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811575838.8A CN109685072B (en) 2018-12-22 2018-12-22 High-quality reconstruction method for composite degraded images based on a generative adversarial network

Publications (2)

Publication Number Publication Date
CN109685072A true CN109685072A (en) 2019-04-26
CN109685072B CN109685072B (en) 2021-05-14

Family

ID=66188107

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811575838.8A Active CN109685072B (en) 2018-12-22 2018-12-22 High-quality reconstruction method for composite degraded images based on a generative adversarial network

Country Status (1)

Country Link
CN (1) CN109685072B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107977932A (en) * 2017-12-28 2018-05-01 北京工业大学 Face image super-resolution reconstruction method based on a generative adversarial network with discriminable attribute constraints
CN108460391A (en) * 2018-03-09 2018-08-28 西安电子科技大学 Unsupervised feature extraction method for hyperspectral images based on a generative adversarial network
CN108520503A (en) * 2018-04-13 2018-09-11 湘潭大学 Method for restoring incomplete face images based on an autoencoder and a generative adversarial network
CN108615226A (en) * 2018-04-18 2018-10-02 南京信息工程大学 Image dehazing method based on a generative adversarial network
CN108765319A (en) * 2018-05-09 2018-11-06 大连理工大学 Image denoising method based on a generative adversarial network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MAO X J et al.: "Image restoration using convolutional auto-encoders with symmetric skip connections", arXiv:1606.08921 *
QQ_23304241: "Activation functions ReLU, Leaky ReLU, PReLU and RReLU", CSDN blog, https://blog.csdn.net/qq_23304241/article/details/80300149 *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110033416A (en) * 2019-04-08 2019-07-19 重庆邮电大学 Internet-of-Vehicles image restoration method combining multiple granularities
WO2020228536A1 (en) * 2019-05-15 2020-11-19 京东方科技集团股份有限公司 Icon generation method and apparatus, method for acquiring icon, electronic device, and storage medium
WO2020233475A1 (en) * 2019-05-22 2020-11-26 腾讯科技(深圳)有限公司 Image processing method and apparatus, computer device and storage medium
CN110363716A (en) * 2019-06-25 2019-10-22 北京工业大学 High-quality reconstruction method for compound degraded images based on a conditional generative adversarial network
CN110363716B (en) * 2019-06-25 2021-11-19 北京工业大学 High-quality reconstruction method for compound degraded images based on a conditional generative adversarial network
CN110473154A (en) * 2019-07-31 2019-11-19 西安理工大学 Image denoising method based on a generative adversarial network
CN110473154B (en) * 2019-07-31 2021-11-16 西安理工大学 Image denoising method based on a generative adversarial network
CN110414620B (en) * 2019-08-06 2021-08-31 厦门大学 Semantic segmentation model training method, computer equipment and storage medium
CN110414620A (en) * 2019-08-06 2019-11-05 厦门大学 Semantic segmentation model training method, computer equipment and storage medium
CN110443244B (en) * 2019-08-12 2023-12-05 深圳市捷顺科技实业股份有限公司 Graphics processing method and related device
CN110443244A (en) * 2019-08-12 2019-11-12 深圳市捷顺科技实业股份有限公司 Graphics processing method and related apparatus
CN110738626A (en) * 2019-10-24 2020-01-31 广东三维家信息科技有限公司 Rendering graph optimization method and device and electronic equipment
CN111291839A (en) * 2020-05-09 2020-06-16 创新奇智(南京)科技有限公司 Sample data generation method, device and equipment
CN111488865A (en) * 2020-06-28 2020-08-04 腾讯科技(深圳)有限公司 Image optimization method and device, computer storage medium and electronic equipment
CN112529828A (en) * 2020-12-25 2021-03-19 西北大学 Method for constructing a reference-data-insensitive spatio-temporal fusion model for remote sensing images
CN112529828B (en) * 2020-12-25 2023-01-31 西北大学 Method for constructing a reference-data-insensitive spatio-temporal fusion model for remote sensing images
CN112767279B (en) * 2021-02-01 2022-06-14 福州大学 Underwater image enhancement method based on a discrete-wavelet-integrated generative adversarial network
CN112767279A (en) * 2021-02-01 2021-05-07 福州大学 Underwater image enhancement method based on a discrete-wavelet-integrated generative adversarial network
CN113034381B (en) * 2021-02-08 2022-06-21 浙江大学 Single-image denoising method and device based on a dilated kernel prediction network
CN113034381A (en) * 2021-02-08 2021-06-25 浙江大学 Single-image denoising method and device based on a dilated kernel prediction network
CN112950507A (en) * 2021-03-08 2021-06-11 四川大学 Deep-learning-based method for improving single-pixel color imaging performance in scattering environments
CN115131218A (en) * 2021-03-25 2022-09-30 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer readable medium and electronic equipment
CN113284046A (en) * 2021-05-26 2021-08-20 中国电子科技集团公司第五十四研究所 Remote sensing image enhancement and restoration method and network requiring no high-resolution reference image
CN113284046B (en) * 2021-05-26 2023-04-07 中国电子科技集团公司第五十四研究所 Remote sensing image enhancement and restoration method and network requiring no high-resolution reference image
CN114266709A (en) * 2021-12-14 2022-04-01 北京工业大学 Composite degraded image decoupling analysis and restoration method based on cross-branch connection network
CN114266709B (en) * 2021-12-14 2024-04-02 北京工业大学 Composite degraded image decoupling analysis and restoration method based on cross-branch connection network
CN114331922A (en) * 2022-03-10 2022-04-12 武汉工程大学 Multi-scale self-calibration method and system for restoring images degraded by aero-optical turbulence effects
CN114331922B (en) * 2022-03-10 2022-07-19 武汉工程大学 Multi-scale self-calibration method and system for restoring images degraded by aero-optical turbulence effects

Also Published As

Publication number Publication date
CN109685072B (en) 2021-05-14

Similar Documents

Publication Publication Date Title
CN109685072A High-quality reconstruction method for compound degraded images based on a generative adversarial network
Yang et al. Proximal dehaze-net: A prior learning-based deep network for single image dehazing
CN111709902B (en) Infrared and visible light image fusion method based on self-attention mechanism
Ram Prabhakar et al. Deepfuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs
CN110363716A High-quality reconstruction method for compound degraded images based on a conditional generative adversarial network
DE102018130924A1 (en) Systems and methods for dynamic facial analysis using a recurrent neural network
CN109671023A Two-stage reconstruction method for face image super-resolution
CN110378288A Multi-stage spatio-temporal moving object detection method based on deep learning
CN106204499A Single-image rain removal method based on a convolutional neural network
CN108510451A License plate reconstruction method based on a two-layer convolutional neural network
CN109389569B (en) Monitoring video real-time defogging method based on improved DehazeNet
CN107292804A Direct multi-exposure fusion parallel acceleration method based on OpenCL
CN111597920B Fully convolutional single-stage human instance segmentation method for natural scenes
CN109801232A Single-image dehazing method based on deep learning
CN110211052A Single-image dehazing method based on feature learning
DE102019121200A1 (en) MOTION-ADAPTIVE RENDERING BY SHADING WITH A VARIABLE RATE
CN110223240A Image dehazing method, system and storage medium based on the color attenuation prior
CN109903373A High-quality face generation method based on a multi-scale residual network
CN114549297A Unsupervised monocular depth estimation method based on uncertainty analysis
CN115393191A Method, device and equipment for lightweight remote sensing image super-resolution reconstruction
CN109671044B Multi-exposure image fusion method based on variational image decomposition
CN114627269A Virtual reality security monitoring platform based on deep learning target detection
CN115115549A Image enhancement model, method, equipment and storage medium with a multi-branch fusion attention mechanism
Yan et al. MMP-net: a multi-scale feature multiple parallel fusion network for single image haze removal
Yang et al. A model-driven deep dehazing approach by learning deep priors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant