CN110363716A - High-quality reconstruction method for compound degraded images based on a conditional generative adversarial network - Google Patents
High-quality reconstruction method for compound degraded images based on a conditional generative adversarial network
- Publication number
- CN110363716A CN110363716A CN201910552748.5A CN201910552748A CN110363716A CN 110363716 A CN110363716 A CN 110363716A CN 201910552748 A CN201910552748 A CN 201910552748A CN 110363716 A CN110363716 A CN 110363716A
- Authority
- CN
- China
- Prior art keywords
- network
- image
- parameter
- prediction
- compression
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention discloses a high-quality reconstruction method for compound degraded images based on a conditional generative adversarial network. The method uses a conditional generative adversarial network to perform high-quality reconstruction of compound degraded images from outdoor vision systems such as UAVs, video surveillance and intelligent transportation, and comprises an overall pipeline, the construction of a compound degraded image sample database, network model construction and training, and high-quality reconstruction of compound degraded images. A conditional generative adversarial network performs unified high-quality reconstruction of the compound degraded images acquired by such outdoor vision systems. The invention proposes a scheme for building a sample database of corresponding clear/compound-degraded image pairs; using a conditional generative adversarial network, it establishes a high-quality reconstruction method that can uniformly restore compound degraded images affected by haze, blur and compression artifacts; and by using lightweight networks it not only increases image reconstruction speed but also makes the method easier to apply in practice.
Description
Technical field
The invention belongs to the field of digital image processing, and in particular relates to a high-quality reconstruction method, based on a conditional generative adversarial network (cGAN, Conditional Generative Adversarial Nets), for compound degraded images from outdoor vision systems.
Background art
Images acquired by outdoor vision systems such as UAVs, video surveillance and intelligent transportation are usually affected by several degradation factors at once, e.g. hazy weather, blur and compression artifacts. These factors combine randomly and in complex ways, severely degrading image quality; this not only harms the subjective visual quality for human observers, but also greatly hinders the outdoor vision system from fully performing its function.
The classical atmospheric scattering model for haze image formation is

I(x) = J(x) t(x) + A (1 − t(x))

where I(x) is the hazy image, J(x) is the clear image, t(x) = e^(−ρ d(x)) is the transmission map (ρ is the scattering coefficient and d(x) the scene depth), and A is the global atmospheric light. If the transmission map t(x) and the atmospheric light A are estimated separately, the reconstructed image usually contains color distortion or artifacts, as shown in Fig. 1; the separate estimation of t(x) and A can therefore be folded, following the model above, into the estimation of a single atmospheric light scattering parameter K.
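As an illustration (not part of the patent text), the forward scattering model above can be sketched in a few lines of Python; the function names and parameter defaults are assumptions for this example only:

```python
import numpy as np

def add_haze(J, d, rho=1.0, A=0.9):
    """Synthesize a hazy image from a clear image J via the atmospheric
    scattering model I = J*t + A*(1-t), with transmission t = exp(-rho*d)."""
    t = np.exp(-rho * d)            # transmission map from scene depth d
    return J * t + A * (1.0 - t)

# toy example: uniform depth produces a uniform haze veil
J = np.full((4, 4), 0.2)            # dark, clear image
I = add_haze(J, d=np.full((4, 4), 2.0), rho=0.5, A=0.9)
```

With uniform depth the hazy image is a fixed blend of the clear image and the atmospheric light, which is why estimating t(x) and A jointly (as one parameter K) is attractive.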
The general model of a blurred image is

g(x) = q(x) * f(x) + n(x)

where g(x) is the blurred image, f(x) is the clear image, q(x) is the blur kernel, * is the convolution operator, and n(x) is additive noise. There are three main categories of blur: motion blur, defocus blur and atmospheric turbulence blur.
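A minimal sketch of the blur model, assuming a horizontal motion-blur kernel and zero padding (both choices are this example's, not the patent's):

```python
import numpy as np

def motion_blur_kernel(length=5):
    """Horizontal linear-motion blur kernel q(x): a normalized 1 x L row."""
    return np.full((1, length), 1.0 / length)

def convolve2d_same(f, q):
    """Plain 'same'-size 2-D convolution implementing g = q * f
    (the additive-noise term n is omitted here)."""
    H, W = f.shape
    kh, kw = q.shape
    ph, pw = kh // 2, kw // 2
    fp = np.pad(f, ((ph, ph), (pw, pw)))
    g = np.zeros_like(f, dtype=float)
    for i in range(H):
        for j in range(W):
            # flip the kernel to perform true convolution, not correlation
            g[i, j] = np.sum(fp[i:i + kh, j:j + kw] * q[::-1, ::-1])
    return g

# impulse response: a single bright pixel spreads along the motion direction
f = np.zeros((3, 7)); f[1, 3] = 1.0
g = convolve2d_same(f, motion_blur_kernel(5))
```

The impulse response reproduces the kernel itself, and the total energy is preserved, which is the defining behaviour of a normalized blur kernel.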
In outdoor vision systems such as UAVs, video surveillance and intelligent transportation, the acquired images usually have to be compressed for storage. Common compression methods fall into two classes: lossless and lossy. Although lossless compression provides the best visual experience for the user, lossy compression achieves much higher compression ratios. Lossy compression is therefore used in practice, and lossy-compressed images commonly exhibit degradation phenomena such as ringing and blocking artifacts.
To date, researchers have studied degradation factors such as haze, blur and compression artifacts separately, but these algorithms usually focus on one specific degradation factor and cannot perform unified high-quality reconstruction of images affected by several factors at once. Fig. 2 shows a low-quality image in which two degradation factors, haze and compression, coexist: if only dehazing is applied, the dehazing result is poor and the blocking artifacts caused by compression become conspicuously worse, so the subjective visual quality of the reconstructed image is very poor. Some researchers have proposed passing a compound degraded image sequentially through reconstruction systems for the individual degradation factors. In principle such a cascade can reconstruct compound degraded images, but because errors accumulate over the repeated processing steps, and because the randomness of the degradation combinations and their mutual interaction are not taken into account, the reconstruction results are often unsatisfactory. How to reconstruct compound degraded images with high quality within a unified framework is therefore an urgent problem for outdoor vision acquisition systems such as UAVs.
Compared with traditional methods, deep neural networks have achieved good results in image reconstruction. In particular, the generative adversarial network (GAN, Generative Adversarial Networks) proposed by Goodfellow et al. in 2014, whose structure is shown in Fig. 3, has the objective function

min_G max_D V(D, G) = E_{x∼p_data(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))]

Its basic principle is that the discriminator D assists the generator G in producing fake data whose distribution matches that of the real data. The model input is a random noise signal z, which the generator G maps to a new data space, producing the generated data G(z). The discriminator D then outputs a probability for the real data x and for the generated data G(z), expressing its confidence that the input is real rather than generated; this score judges the quality of the generator G. When D can no longer distinguish the real data x from the generated data G(z), the generator G is considered optimal: G tries to make the discriminator's response to its output, D(G(z)), as consistent as possible with its response to real data, D(x), so that D cannot tell generated data from real data. The design of a generative adversarial network establishes a non-cooperative game between generator and discriminator; through alternating iterative updates a Nash equilibrium is reached and an optimal network model is trained, which offers new ideas and means for high-quality image reconstruction.
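As an illustration of the GAN objective (not part of the patent), a Monte-Carlo estimate of V(D, G) from a batch of discriminator scores can be written as:

```python
import numpy as np

def gan_value(D_real, D_fake):
    """Monte-Carlo estimate of the GAN objective
    V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]
    given discriminator scores on real and generated batches."""
    return np.mean(np.log(D_real)) + np.mean(np.log(1.0 - D_fake))

# at the optimum the discriminator outputs 0.5 everywhere,
# giving the well-known value V* = -log 4
v_star = gan_value(np.full(8, 0.5), np.full(8, 0.5))
```

This also makes the equilibrium concrete: when D cannot distinguish real from generated data, both scores sit at 0.5 and V(D, G) = −log 4.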
However, the original GAN is an unconditional generative model that imposes no prior structure, so the generator is uncontrollable; for high-resolution images the classical GAN cannot control the modes of the generated data, which greatly hinders training. The main idea of the conditional generative adversarial network (cGAN) is to use additional information y as supervision to guide the data generation process, where y can be any kind of auxiliary information, such as class labels or data from other modalities. A typical cGAN is illustrated in Fig. 4, with objective function

min_G max_D V(D, G) = E_{x∼p_data(x)}[log D(x | y)] + E_{z∼p_z(z)}[log(1 − D(G(z | y) | y))]

The inputs of the generative model are the random noise signal z and the condition y; the generator G maps z and y to a new data space, producing the generated data G(z | y). The discriminator D then outputs a probability for the real data x and for the generated data G(z | y), each paired with the condition y, expressing its confidence that the input is real, and this score judges the quality of the data generated by G. When D can no longer distinguish the real data x from the generated data G(z | y), the generator G is considered optimal. In recent years researchers have applied conditional generative adversarial networks to image super-resolution, image inpainting, style transfer and other tasks, with good results.
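The conditioning mechanism can be sketched with toy functions (everything below is a made-up illustration, not the patent's networks): both G and D simply receive the condition y alongside their usual input.

```python
import numpy as np

rng = np.random.default_rng(0)

def G(z, y):
    # toy conditional generator: output depends on both noise and condition
    return np.tanh(z + y)

def D(x, y):
    # toy conditional discriminator: probability that x is real given y
    return 1.0 / (1.0 + np.exp(-(x * y).sum(axis=-1)))

z = rng.normal(size=(4, 8))     # noise batch
y = np.ones((4, 8))             # condition (e.g. an embedded label)
fake = G(z, y)                  # G(z | y)
p_fake = D(fake, y)             # D(G(z | y) | y), one probability per sample
```

In the invention the condition y is the triple of degradation parameters (K, Bn, CQ) rather than a class label.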
Summary of the invention
The object of the invention is to use a conditional generative adversarial network (cGAN) to perform unified high-quality reconstruction, within a single framework, of compound degraded images affected by haze, blur and compression artifacts in outdoor vision systems such as UAVs, video surveillance and intelligent transportation.
The invention is realized by the following technical scheme: a conditional generative adversarial network performs high-quality reconstruction of compound degraded images in outdoor vision systems such as UAVs, video surveillance and intelligent transportation. The scheme mainly comprises an overall pipeline, the construction of a compound degraded image sample database, network model construction and training, and high-quality reconstruction of compound degraded images.
The pipeline for reconstructing compound degraded images begins with the construction of the compound degraded image sample database. An image atmospheric light scattering parameter (K) prediction network, an image blur parameter (Bn) prediction network, an image compression parameter (CQ) prediction network, and the generator G and discriminator D of the conditional generative adversarial network are then built, and these networks are trained with the synthesized clear/compound-degraded image pairs; the atmospheric light scattering parameter K, blur parameter Bn and compression parameter CQ serve as conditions guiding the adversarial training. To reconstruct a real compound degraded image L, L is fed to the three parameter prediction networks to obtain its atmospheric light scattering parameter K, blur parameter Bn and compression parameter CQ; K, Bn and CQ are then fed, as conditions together with the compound degraded image, into the generator to obtain the high-quality reconstructed image.
Construction of the compound degraded image sample database comprises three steps: acquisition of high-quality images, addition of degradation factors, and generation of the clear/compound-degraded image sample database.
Network model construction and training comprises two steps: building the network frameworks, and network training and model acquisition. Building the frameworks covers the image atmospheric light scattering parameter (K) prediction network, the image blur parameter (Bn) prediction network, the image compression parameter (CQ) prediction network, and the generator G and discriminator D of the conditional generative adversarial network; the training and model acquisition stage covers the loss functions and the training strategy.
High-quality reconstruction of a compound degraded image comprises two steps: predicting the atmospheric light scattering parameter K, blur category parameter Bn and compression parameter CQ of the compound degraded image, and feeding the compound degraded image together with its predicted parameters into the generator for image reconstruction.
The overall pipeline, shown in Fig. 5, proceeds as follows:
S1: Obtain high-quality images and apply compound degradation to them, producing a clear/compound-degraded image data set.
S2: Build the image atmospheric light scattering parameter prediction network, the image blur parameter prediction network, the image compression parameter prediction network, and the generator G and discriminator D of the conditional generative adversarial network, and train these five networks with the data set obtained in S1. The atmospheric light scattering parameter K, blur category parameter Bn and compression parameter CQ produced by the three prediction networks serve as conditions guiding the adversarial training, which stops when the conditional generative adversarial network (cGAN) reaches a Nash equilibrium or the maximum number of iterations is reached.
S3: To perform high-quality reconstruction with the networks trained in S2, first feed the real compound degraded image to be reconstructed into the three parameter prediction networks to obtain its parameters K, Bn and CQ, then feed the image together with the obtained parameters into the generator G to obtain the reconstructed image.
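The reconstruction step S3 reduces to a short composition of the trained networks. The sketch below (an illustration only; the callables stand in for the trained models) makes the data flow explicit:

```python
import numpy as np

def reconstruct(L, predict_K, predict_Bn, predict_CQ, G):
    """Step S3: predict the three degradation parameters of L, then feed
    the image plus parameters into the generator G as conditions."""
    K  = predict_K(L)
    Bn = predict_Bn(L)
    CQ = predict_CQ(L)
    return G(L, K, Bn, CQ)

# stub callables standing in for the trained networks
L_img = np.zeros((8, 8))
Z = reconstruct(L_img,
                predict_K=lambda x: 0.5,
                predict_Bn=lambda x: 0,
                predict_CQ=lambda x: 20,
                G=lambda x, K, Bn, CQ: x + K)
```

The point of this structure is that the generator never sees the degradation parameters as part of the image; they enter separately, as cGAN conditions.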
The compound degraded image sample database is built as follows. The construction comprises the selection of high-quality images, the selection of degradation parameters, and the generation of the clear/compound-degraded sample pairs.
S1.1: Select high-quality images from existing indoor and outdoor data sets.
S1.2: For each high-quality image J(x) from S1.1, randomly assign a transmission map t(x) and atmospheric light A to obtain a haze-degraded image.
S1.3: Randomly add one kind of blur degradation to the image obtained in S1.2 to obtain a blurred degraded image; the blur types comprise motion blur, defocus blur and atmospheric turbulence blur.
S1.4: Compress the image obtained in S1.3 with the JPEG method at different compression quality parameters (CQ), obtaining compound degraded images I(x) with different degrees of compression artifacts.
S1.5: Pair each high-quality image J(x) from S1.1 with the corresponding compound degraded image I(x) from S1.4 to form a clear/compound-degraded image pair, and store the pairs together with the corresponding degradation parameters K, Bn and CQ in the sample database.
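Steps S1.2 through S1.5 can be sketched as one sampling routine. The three degradation operators are passed in as assumed helpers (the parameter ranges here are placeholders; the patent's own ranges are given in the embodiment section):

```python
import numpy as np

def compound_degrade(J, add_haze, add_blur, jpeg_compress, rng):
    """Steps S1.2-S1.4: haze, then one random blur, then JPEG-style
    compression, each with randomly drawn parameters. Returns the
    degraded image and the degradation labels (S1.5)."""
    t, A = rng.uniform(0.1, 0.9), rng.uniform(0.8, 1.0)
    I = add_haze(J, t, A)                       # S1.2: haze degradation
    Bn = int(rng.integers(0, 3))                # 0=motion, 1=defocus, 2=turbulence
    I = add_blur(I, Bn)                         # S1.3: one random blur type
    CQ = int(rng.choice([0, 10, 20, 30, 40]))   # S1.4: compression quality
    I = jpeg_compress(I, CQ)
    return I, (t, A, Bn, CQ)

rng = np.random.default_rng(0)
J = np.ones((4, 4))
# identity stubs stand in for the real degradation operators
I, (t, A, Bn, CQ) = compound_degrade(
    J, lambda x, t, A: x, lambda x, b: x, lambda x, q: x, rng)
```

Each sample carries its labels (t, A, Bn, CQ), which is what later lets the parameter prediction networks be trained with supervision.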
Network model construction and training proceeds as follows.
S2: Building the image atmospheric light scattering parameter prediction network. A convolutional network predicts the atmospheric light scattering parameter K of the image; the prediction K serves as a condition guiding the learning of the conditional generative adversarial network. Fig. 6 shows the basic structure of this network: 5 convolutional layers and 3 feature fusion layers, with parameters as given in Table 1. The compound degraded image is fed into the network; the feature maps from conv1 and conv2 are fused and the result is the input of conv3; the feature maps from conv1, conv2 and conv3 are fused and the result is the input of conv4; the feature maps from conv1, conv2, conv3 and conv4 are fused and the result is the input of conv5. After the fifth convolution the network outputs the final atmospheric light scattering parameter prediction K. This structure preserves image features at all levels to the greatest possible extent and yields better predictions.
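The dense fusion pattern of Fig. 6 can be sketched with stand-in layers. For brevity each "convolution" below is just a 1×1 channel mix and fusion is channel concatenation; the real network's kernel sizes and channel counts are in Table 1, not here:

```python
import numpy as np

def conv1x1(x, w):
    # stand-in for a conv layer: w has shape (out_ch, in_ch),
    # x has shape (in_ch, H, W)
    return np.tensordot(w, x, axes=(1, 0))

def k_net_forward(img, w):
    """Dense feature fusion of Fig. 6: each later conv takes the
    concatenation of all earlier feature maps."""
    f1 = conv1x1(img, w[0])
    f2 = conv1x1(f1, w[1])
    f3 = conv1x1(np.concatenate([f1, f2]), w[2])          # fuse conv1+conv2
    f4 = conv1x1(np.concatenate([f1, f2, f3]), w[3])      # fuse conv1..conv3
    f5 = conv1x1(np.concatenate([f1, f2, f3, f4]), w[4])  # fuse conv1..conv4
    return f5                                             # K prediction map

# toy weights: 3 input channels, 4 channels per stage, 1 output channel
w = [np.ones((4, 3)), np.ones((4, 4)), np.ones((4, 8)),
     np.ones((4, 12)), np.ones((1, 16))]
out = k_net_forward(np.ones((3, 5, 5)), w)
```

The concatenations are what let low-level features reach the deepest layer directly, which is the stated reason for the structure.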
S2.1: Building the image blur parameter prediction network. Image blur is mainly motion blur, defocus blur or atmospheric turbulence blur. A convolutional network predicts the blur category of the image; the prediction Bn serves as a condition guiding the learning of the conditional generative adversarial network.
The network adopts the structure of the classical image classification network AlexNet, with parameters as given in Table 2. Fig. 7 shows the structure of the blur category prediction network: mainly 3 convolutional layers and 3 fully connected layers, each followed by a ReLU nonlinear activation function. To speed up computation and prevent overfitting, a max pooling layer downsamples the width and height of the feature maps after each convolution and activation. A Local Response Normalization (LRN) operation after the first two pooling layers improves the accuracy of the network. The final network output is a 3 × 1 vector whose entries represent the blur parameters for the three blur types present in the image: motion blur, defocus blur and atmospheric turbulence blur.
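For reference, the AlexNet-style LRN mentioned above normalizes each channel by the activity of its neighbouring channels. A minimal numpy sketch, with AlexNet's usual hyperparameters as assumed defaults:

```python
import numpy as np

def local_response_norm(x, n=5, k=2.0, alpha=1e-4, beta=0.75):
    """Across-channel LRN: b_c = a_c / (k + alpha * sum_j a_j^2)^beta,
    where j runs over the n channels nearest to c. x has shape (C, H, W)."""
    C = x.shape[0]
    half = n // 2
    out = np.empty_like(x, dtype=float)
    for c in range(C):
        lo, hi = max(0, c - half), min(C, c + half + 1)
        denom = (k + alpha * np.sum(x[lo:hi] ** 2, axis=0)) ** beta
        out[c] = x[c] / denom
    return out

y = local_response_norm(np.ones((3, 2, 2)))
```

The effect is a mild competition between adjacent channels: uniformly active channels are all damped, while an isolated strong channel keeps more of its activation.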
S2.2: Building the image compression parameter prediction network. Compression methods fall into two classes: lossless and lossy. Although lossless methods provide the best visual experience for the user, lossy methods achieve much higher compression ratios, so outdoor vision systems usually use lossy JPEG compression. The invention uses a convolutional network to predict the compression parameter of the image; the prediction CQ serves as a condition guiding the learning of the conditional generative adversarial network. Fig. 8 shows the basic structure of the compression parameter prediction network: mainly 3 convolutional layers, 2 fully connected layers and 1 softmax (normalized exponential) layer, with parameters as given in Table 3. Every convolutional and fully connected layer is followed by a ReLU nonlinear activation function. To reduce feature dimensionality and speed up computation, a max pooling layer performs feature selection after each convolution and activation. The final network output is a 5 × 1 vector whose entries represent the probabilities that the compression parameter of the image equals 0, 10, 20, 30 or 40; the final compression parameter prediction CQ is the weighted sum of the five compression parameters and their corresponding predicted probabilities.
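The final weighted-sum step is simple enough to state exactly (the function name is this sketch's, not the patent's):

```python
import numpy as np

def predict_cq(probs, levels=(0, 10, 20, 30, 40)):
    """CQ = sum of the five candidate compression levels weighted by the
    softmax probabilities (the network's 5 x 1 output vector)."""
    return float(np.dot(np.asarray(probs, dtype=float), levels))

# a confident network output collapses to one level; an uncertain one
# interpolates between levels
cq_sure = predict_cq([0.0, 0.0, 1.0, 0.0, 0.0])
cq_mix  = predict_cq([0.5, 0.5, 0.0, 0.0, 0.0])
```

Taking the expectation rather than the argmax gives a continuous CQ value, which is a more informative condition for the generator.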
S2.3: The conditional generative adversarial network.
Building the generator: the generator G tries to make the discriminator's response to its generated data as consistent as possible with its response to real data. The generator structure used is shown in Fig. 9, with parameters as given in Table 4. It consists of a symmetric encoder-decoder: the encoding path is based on downsampling operations and provides feature maps to the corresponding layers of the decoding path, while the decoding path mainly uses upsampling operations and nonlinear spatial transformations. The encoder uses 8 convolutional layers to extract features at different levels; the decoder reconstructs the image with 6 deconvolution (transposed convolution) layers, 1 convolutional layer and 1 Tanh activation layer. To make better use of the features of each stage and break through the information bottleneck of the decoding path, skip connections are added to the network.
Building the discriminator: the discriminator D tries to accurately distinguish whether an image is real or is the output of the generator G, thereby guiding the learning of the generator so that its final output approaches the true image. The discriminator structure used by the invention is shown in Fig. 10, with parameters as given in Table 5. It consists of 5 convolutional layers, all with 3 × 3 kernels, and finally obtains the image score through a sigmoid function.
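The encoder-decoder-with-skips shape can be illustrated with trivial stand-ins (stride-2 subsampling for the encoder, nearest-neighbour repetition for the decoder; the real layers are convolutions as listed in Table 4):

```python
import numpy as np

def downsample(x):
    # stride-2 "encoder" step
    return x[::2, ::2]

def upsample(x):
    # nearest-neighbour "decoder" step
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def unet_like(x):
    """Sketch of the symmetric encode/decode path with one skip connection:
    the decoder feature is fused with the matching encoder feature."""
    e1 = downsample(x)        # encoder level 1
    e2 = downsample(e1)       # encoder level 2 (bottleneck)
    d1 = upsample(e2)         # decoder level 1
    return upsample(d1 + e1)  # skip connection from e1, then final upsample

out = unet_like(np.ones((8, 8)))
```

Without the skip, fine spatial detail would have to squeeze through the bottleneck e2; the additive path lets it bypass the bottleneck entirely, which is exactly the stated motivation.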
S2.4: Network training.
In the training stage, the image atmospheric light scattering parameter (K) prediction network, the image blur parameter (Bn) prediction network, the image compression parameter (CQ) prediction network and the conditional generative adversarial network are each trained with the compound degraded image sample database. The trained parameter prediction networks then produce, for each compound degraded image I_i, its atmospheric light scattering parameter K_i, blur parameter Bn_i and compression parameter CQ_i, which serve as the conditions of the adversarial network. I_i, K_i, Bn_i and CQ_i are fed into the generator G to obtain the reconstructed image Z_i. The discriminator D is trained with the high-quality image J_i corresponding to I_i, the reconstructed image Z_i and the parameters K_i, Bn_i, CQ_i. The generator G and discriminator D are trained alternately until the conditional generative adversarial network reaches a Nash equilibrium or the maximum number of iterations. The three trained parameter prediction networks together with the generator G form the final compound degraded image reconstruction network. The atmospheric light scattering parameter, blur parameter and compression parameter prediction networks all use a mean squared error (MSE) loss function; the generator loss comprises an adversarial loss, a perceptual loss and a pixel-level loss; the discriminator uses a discrimination loss function. All the above networks are trained with the Adam gradient descent method, with momentum set to 0.9 and learning rate 0.0002, decayed to 0.9 of its value every 100 training steps. Training iterates until the loss function is minimized or the preset maximum number of iterations is reached, yielding the final image reconstruction model.
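The three-term generator loss can be sketched as follows; the weights lam_adv and lam_perc are illustrative placeholders (the patent does not specify them), and feat_Z/feat_J stand for features from an assumed perceptual network:

```python
import numpy as np

def generator_loss(D_fake, Z, J, feat_Z, feat_J,
                   lam_adv=1e-3, lam_perc=1e-2):
    """Generator loss as described in S2.4: adversarial + perceptual +
    pixel-level terms. Weights are illustrative, not from the patent."""
    adv  = -np.mean(np.log(D_fake + 1e-12))   # adversarial loss
    perc = np.mean((feat_Z - feat_J) ** 2)    # perceptual loss (feature MSE)
    pix  = np.mean((Z - J) ** 2)              # pixel-level MSE loss
    return lam_adv * adv + lam_perc * perc + pix

# perfect reconstruction with a fully fooled discriminator -> near-zero loss
z = np.zeros(4)
loss_ideal = generator_loss(np.ones(4), z, z, z, z)
```

The pixel term anchors the reconstruction to the ground truth, the perceptual term preserves structure, and the adversarial term pushes outputs toward the natural-image distribution.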
High-quality reconstruction of a compound degraded image proceeds as follows.
S3.1: Parameter prediction. First, the real compound degraded image L to be reconstructed, of size M × N pixels, is fed into the atmospheric light scattering parameter prediction network to obtain its atmospheric light scattering parameter K. L is then rescaled to 256 × 256 pixels and fed into the image blur parameter prediction network and the image compression parameter prediction network, yielding the blur parameter Bn and compression parameter CQ. These predictions serve as the conditions of the generator.
S3.2: Generator reconstruction. The real compound degraded image L to be reconstructed is fed, together with the atmospheric light scattering parameter K, blur category parameter Bn and compression parameter CQ obtained in S3.1, into the generator; the output is the high-quality reconstructed image Z.
Compared with the prior art, the invention uses a conditional generative adversarial network to perform unified high-quality reconstruction of compound degraded images acquired by outdoor vision systems such as UAVs, video surveillance and intelligent transportation. First, for this task the invention proposes a scheme for building a database of corresponding clear/compound-degraded image pairs. Second, using a conditional generative adversarial network (cGAN) it establishes a high-quality reconstruction method that can uniformly restore compound degraded images affected by haze, blur and compression artifacts. Furthermore, the invention uses lightweight networks, which not only increases image reconstruction speed but also makes the method easier to apply in practice.
Brief description of the drawings
Fig. 1 Dehazing results when the transmission map and global atmospheric light are predicted separately.
Fig. 2 Dehazing result on a compound degraded image.
Fig. 3 Structure of the classical generative adversarial network.
Fig. 4 Example of a conditional generative adversarial network.
Fig. 5 Overall pipeline.
Fig. 6 Atmospheric light scattering parameter prediction network.
Fig. 7 Image blur parameter prediction network.
Fig. 8 Image compression parameter prediction network.
Fig. 9 Generator model.
Fig. 10 Discriminator model.
Table 1 Structure and parameters of the atmospheric light scattering parameter prediction network
Table 2 Structure and parameters of the blur category prediction network
Table 3 Structure and parameters of the compression parameter prediction network
Table 4 Structure and parameters of the generator
Table 5 Structure and parameters of the discriminator
Specific embodiment
Below in conjunction with Figure of description, embodiment of the invention is described in detail:
A high-quality reconstruction method for compound degraded images from outdoor vision systems, based on a conditional generative adversarial network (cGAN). The overall flow is shown in Fig. 5 and mainly comprises three parts: building the compound degraded image sample database, network model construction and training, and high-quality image reconstruction. The atmospheric light scattering parameter (K) prediction network is shown in Fig. 6, the image blur parameter (Bn) prediction network in Fig. 7, and the image compression parameter (CQ) prediction network in Fig. 8; the generator model of the conditional adversarial network is shown in Fig. 9 and the discriminator model in Fig. 10. For each high-quality image, degradation factors such as haze, blur, and compression are added at random to obtain a clear/compound-degraded image sample database. The parameter prediction networks and the conditional GAN are trained on this database. For a real compound degraded image L, the three parameter prediction networks yield the image's atmospheric light scattering parameter K, blur category parameter Bn, and compression parameter CQ; these three predicted parameters are fed, together with the compound degraded image, into the generator to obtain the final high-quality reconstructed image Z.
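As an illustration of this data flow only (not the patent's implementation), the reconstruction pipeline can be sketched with hypothetical stand-in predictors and a toy identity generator; the real versions are the CNNs of Figs. 6-10:

```python
import numpy as np

# Hypothetical stand-ins for the three trained parameter-prediction networks;
# in the description these are convolutional networks (Figs. 6-8).
def predict_K(img):
    return np.full(img.shape, 0.8)      # atmospheric-light scattering parameter map
def predict_Bn(img):
    return np.array([15.0, 0.0, 0.0])   # blur parameters (motion, defocus, turbulence)
def predict_CQ(img):
    return 20.0                         # JPEG compression quality parameter

def reconstruct(L, generator):
    """Predict the three degradation parameters of L, then feed them,
    together with L, to the generator: Z = G(L | K, Bn, CQ)."""
    K, Bn, CQ = predict_K(L), predict_Bn(L), predict_CQ(L)
    return generator(L, K, Bn, CQ)

# Toy generator (identity) just to show the data flow.
Z = reconstruct(np.zeros((8, 8, 3)), lambda L, K, Bn, CQ: L)
```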
Building the compound degraded image sample database is divided into 3 steps, as follows:
(1) The existing NYU Depth Dataset V2 contains 1449 indoor images, and the Make3D dataset contains 1000 outdoor images. The image depth information included in these two datasets is advantageous for high-quality image reconstruction. 2400 high-quality images J are chosen at random from the two datasets.
(2) To simulate how degradation factors accumulate during real image acquisition, the following three degradation factors are added at random to the high-quality images J obtained in step (1), yielding the corresponding compound degraded images I.
Haze degradation: the haze degradation factor is added to the image according to the formula, using a randomly given transmission map ti(x) and global atmospheric light Ai to generate the degraded image Ihi, where t(x) ∈ (0,1) and Ai = [n1, n2, n3] with n ∈ [0.8, 1.0]. The label Ki is computed by the formula from the given transmission map ti(x) and atmospheric light parameter Ai.
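The haze formula itself is not reproduced in this text; assuming it is the standard atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)), a minimal NumPy sketch with the stated ranges for t and A is:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_haze(J, t, A):
    """Standard atmospheric scattering model (assumed):
    I(x) = J(x) * t(x) + A * (1 - t(x))."""
    return J * t[..., None] + A * (1.0 - t[..., None])

J = rng.random((64, 64, 3))           # clear image, values in [0, 1)
t = rng.uniform(0.2, 0.9, (64, 64))   # random transmission map, t(x) in (0, 1)
A = rng.uniform(0.8, 1.0, 3)          # global atmospheric light, n in [0.8, 1.0]
I_hazy = add_haze(J, t, A)
```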
Blur degradation: one of three categories of blur degradation is added at random, according to the formula, to the haze-degraded image Ihi, giving the blurred image Ihbi. The selection ranges of the blur kernel parameters are as follows: motion blur is caused mainly by linear camera movement and, as given by the formula, uses randomly given M ∈ [0, 20] and ω ∈ [0, 180]; for defocus blur, as given by the formula, the blur radius r is proportional to the degree of defocus, with r ∈ [0, 25] given at random; the point spread function of atmospheric turbulence can be approximated with a Gaussian blur, as given by the formula, where σ is the blur radius, R ∈ [−3σ, 3σ], and σ ∈ [0, 5] is given at random. The label Bni consists of the motion blur angle ω, the defocus blur radius r, and the atmospheric turbulence blur radius σ.
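The kernel formulas are not reproduced in this text; assuming the standard line, disk, and truncated-Gaussian point spread functions for the three blur categories, the kernels can be sketched as:

```python
import numpy as np

def defocus_kernel(r):
    """Uniform disk PSF of integer radius r (defocus blur)."""
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = (x**2 + y**2 <= r**2).astype(float)
    return k / k.sum()

def gaussian_kernel(sigma):
    """Gaussian PSF truncated at +/- 3*sigma (atmospheric turbulence), sigma > 0."""
    R = int(np.ceil(3 * sigma))
    y, x = np.mgrid[-R:R + 1, -R:R + 1]
    k = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return k / k.sum()

def motion_kernel(M, omega_deg):
    """Length-M line PSF at angle omega (linear camera motion), rasterized."""
    if M < 1:
        return np.ones((1, 1))
    size = M if M % 2 == 1 else M + 1
    k = np.zeros((size, size))
    c = size // 2
    th = np.deg2rad(omega_deg)
    for s in np.linspace(-M / 2, M / 2, 4 * M):
        i, j = int(round(c + s * np.sin(th))), int(round(c + s * np.cos(th)))
        if 0 <= i < size and 0 <= j < size:
            k[i, j] = 1.0
    return k / k.sum()
```

Each kernel is normalized to sum to 1 so that convolving with it preserves image brightness.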
Compression degradation: the image Ihbi, to which haze and blur have been added, is compressed with the JPEG method at randomly chosen compression parameter (CQ) values, yielding the compound degraded image Ii; CQ is taken from (0, 10, 20, 30, 40). The label CQi is the compression parameter value used for the image.
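The text does not define how CQ maps to quantization strength; assuming it follows the common IJG libjpeg quality convention, the quality-to-quantization-table scaling can be sketched as follows (smaller CQ means coarser quantization, hence stronger artifacts):

```python
import numpy as np

# Example luminance quantization table from the JPEG standard (Annex K).
BASE_Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]])

def quant_table(quality):
    """Scale the base table for a quality factor, IJG-libjpeg convention
    (assumed here); CQ = 0 is clamped to 1."""
    q = max(1, min(100, quality))
    scale = 5000 // q if q < 50 else 200 - 2 * q
    return np.clip((BASE_Q * scale + 50) // 100, 1, 255)
```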
(3) The high-quality image Ji obtained in step (1) and the corresponding compound degraded image Ii obtained in step (2) form a clear/compound-degraded image pair {Ji, Ii}; together with the corresponding atmospheric light scattering parameter Ki, blur category Bni, and compression quality parameter CQi, these pairs constitute the clear/compound-degraded image sample database.
Network model construction and training is divided into 2 steps, as follows:
(1) Network pre-training: the clear/compound-degraded image sample database is first used to pre-train the assembled image atmospheric light scattering parameter prediction network, image blur parameter prediction network, image compression parameter prediction network, and the generator network G and adversarial (discriminator) network D of the conditional GAN.
The atmospheric light scattering parameter prediction network is shown in Fig. 6; its input is a clear/compound-degraded image pair {Ji, Ii} and its output is the predicted atmospheric light scattering parameter Ki′. The atmospheric light scattering parameter prediction loss given by the formula drives the network's prediction closer to the true value.
The image blur parameter prediction network is shown in Fig. 7; its input is a clear/compound-degraded image pair {Ji, Ii} and its output is the predicted image blur parameter Bn′. The blur parameter prediction loss given by the formula improves the network's accuracy in predicting the image blur parameter.
The image compression parameter prediction network is shown in Fig. 8; its input is a clear/compound-degraded image pair {Ji, Ii} and its output is the predicted image compression parameter CQ′. The image compression parameter prediction loss LCQ given by the formula improves the network's accuracy in predicting the compression parameter CQ.
The generator network structure is shown in Fig. 9. In the generating function below, G denotes the generator network, Li is the input real compound degraded image, Ki the image atmospheric light scattering parameter, Bni the image blur parameter, CQi the image compression parameter, and Zi the output reconstructed image. The generator's output image is paired with the corresponding high-quality image into the image pair {Zi, Ji} for use by the subsequent discriminator network.
Zi=G (Li|(Ki,Bni,CQi))
Within the generator, the loss function mainly comprises four parts: the adversarial loss, the perceptual loss, the pixel-level loss, and the gradient loss.
The adversarial loss lGen pushes the output toward real high-quality images at a higher level, making it look more realistic and natural. Its expression is given by the formula, where D denotes the discriminator network.
The perceptual loss lfea,j is added to better recover fine details in the reconstructed image; its expression is given by the formula, where W and H are the width and height of the corresponding feature map and Фj denotes the feature map output by layer j of a VGG network pre-trained on ImageNet.
To keep the reconstructed image consistent with the original at the pixel level, the pixel-level loss lMSE is added, commonly the mean squared error (MSE); its expression is given by the formula.
To remove artifacts from the reconstructed image while retaining more fine details, the gradient loss ltv is added; its expression is given by the formula, where the two gradient terms denote the gradient total variation of the image in the vertical and horizontal directions, respectively.
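The formula itself is not reproduced in this text; assuming the usual squared-difference form of the gradient total variation, the loss can be sketched as:

```python
import numpy as np

def tv_loss(Z):
    """Gradient (total-variation) loss: sum of squared differences between
    neighbouring pixels in the vertical and horizontal directions (assumed form)."""
    dh = Z[1:, :] - Z[:-1, :]   # vertical gradients
    dw = Z[:, 1:] - Z[:, :-1]   # horizontal gradients
    return float((dh**2).sum() + (dw**2).sum())
```

A constant image has zero gradient loss; high-frequency artifacts increase it, which is why minimizing it suppresses them.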
The total generator loss lG is given below; the generator is trained by minimizing lG. Here a, b, c, d are positive weights, empirically set during training to a = 1, b = 100, c = 100, d = 0.5.
lG = a·lGen + b·lfea,j + c·lMSE + d·ltv
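With the reported weights, combining the four components is a plain weighted sum, sketched as:

```python
def generator_loss(l_gen, l_fea, l_mse, l_tv, a=1.0, b=100.0, c=100.0, d=0.5):
    """Total generator loss lG = a*lGen + b*lfea,j + c*lMSE + d*ltv,
    with the weights the description reports (a=1, b=100, c=100, d=0.5)."""
    return a * l_gen + b * l_fea + c * l_mse + d * l_tv
```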
In the discriminator, the generator's reconstruction Zi and the high-quality image Ji, together with the corresponding Ki, Bni, and CQi, are fed into the discriminator network, which outputs a decision between reconstructed and real images; the discriminator is updated according to the formula.
(2) Joint network training
The three pre-trained parameter networks predict the parameters of the compound degraded image Ii, and the resulting atmospheric light scattering parameter Ki, image blur parameter Bni, and image compression parameter CQi guide the training of the conditional GAN as conditions. Through alternating iterations, training stops when the conditional GAN reaches a Nash equilibrium or a preset maximum number of iterations (100,000) is reached, at which point the parameter prediction networks and generator required for image reconstruction are obtained.
High-quality reconstruction is divided into 2 steps, as follows:
(1) Parameter prediction: for the compound degraded image L to be reconstructed, the image atmospheric light scattering parameter prediction network, the image blur parameter prediction network, and the image compression parameter prediction network yield the corresponding parameters K, Bn, and CQ, which serve as the conditions for image reconstruction.
The image to be reconstructed is input into the image atmospheric light scattering parameter prediction network shown in Fig. 6 to predict the parameter K. The concat operation in the figure denotes matrix (channel) concatenation, as given by the formula below. The M × N × 3 feature maps obtained after the first and second convolutional layers undergo the concat1 matrix merge to give an M × N × 6 fused feature map k2 that serves as the input of the third convolutional layer; the M × N × 3 feature maps obtained after the first, second, and third convolutional layers undergo the concat2 operation to give an M × N × 9 fused feature map k3 that serves as the input of the fourth convolutional layer; the M × N × 3 feature maps obtained after the first through fourth convolutional layers undergo the concat3 operation to give an M × N × 12 fused feature map k4 that serves as the input of the fifth convolutional layer; the fifth convolution finally outputs the image atmospheric light scattering prediction parameter K.
kn=concat (k1,...,kn)
The image L to be reconstructed is scaled to 256 × 256 pixels and input into the image blur category prediction network shown in Fig. 7; through its convolutional layers, pooling layers, and fully connected layers it outputs the image blur parameter Bn.
The image L to be reconstructed is scaled to 256 × 256 pixels and input into the image compression parameter prediction network shown in Fig. 8; through its convolutional layers, pooling layers, fully connected layers, and softmax layer it outputs the image compression parameter CQ.
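Elsewhere the document describes the final CQ as the weighted sum of the five candidate levels (0, 10, 20, 30, 40) and their softmax probabilities; a sketch of that readout (the logits are illustrative inputs, not real network outputs):

```python
import numpy as np

CQ_LEVELS = np.array([0.0, 10.0, 20.0, 30.0, 40.0])

def predict_cq(logits):
    """Softmax over the five candidate compression levels, then the
    expected value: the weighted sum of levels by predicted probability."""
    e = np.exp(logits - logits.max())   # numerically stable softmax
    p = e / e.sum()
    return float(p @ CQ_LEVELS)
```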
(2) Generator reconstruction
The compound degraded image L to be reconstructed and the parameters K, Bn, and CQ predicted in step (1) are fed together into the generator network, whose output is the reconstructed image Z.
The generator uses a symmetric encoder-decoder structure. The encoder, which performs image feature extraction, consists of 8 convolutional layers; the compound degraded image L passes through the n-th (n < 9) convolutional layer and activation function to obtain the corresponding feature map En. The activation function is LeakyReLU (ai = 10), whose expression is given by the formula; its functional form is simple, and it avoids the problem, seen with ReLU, of neurons ceasing to learn once their inputs fall into the negative range.
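The formula is not reproduced in this text; assuming the common parameterization f(x) = x for x ≥ 0 and x/a otherwise, with ai = 10 as stated, the activation can be sketched as:

```python
import numpy as np

def leaky_relu(x, a=10.0):
    """LeakyReLU with slope 1/a on the negative side (ai = 10 in the text):
    negative inputs keep a small nonzero gradient, so those neurons still learn."""
    return np.where(x >= 0, x, x / a)

y = leaky_relu(np.array([-10.0, 0.0, 5.0]))
```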
The decoder, which performs image reconstruction, comprises 6 deconvolutional layers, 1 convolutional layer, and a Tanh activation layer; the output of the m-th (m < 8) deconvolutional or convolutional layer is Dm. Skip connections are added: the feature map E7 output by encoder layer conv7 and the feature map D1 output by decoder layer uconv1 are combined by the element-sum operation to form the input of uconv2; the feature map E6 output by conv6 and the feature map D2 output by uconv2 are element-summed to form the input of uconv3; the feature map E5 output by conv5 and the feature map D3 output by uconv3 form the input of uconv4; the feature map E4 output by conv4 and the feature map D4 output by uconv4 form the input of uconv5; and the feature map E3 output by conv3 and the feature map D5 output by uconv5 form the input of uconv6. Here element-sum denotes the feature fusion operation that sums the elements at corresponding positions.
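The element-sum fusion is plain element-wise addition of same-shaped feature maps; a toy sketch (the channel count 512 is illustrative, not stated in the text):

```python
import numpy as np

def element_sum(enc_feat, dec_feat):
    """'element-sum' skip connection: element-wise addition of an encoder
    feature map and the same-sized decoder feature map."""
    assert enc_feat.shape == dec_feat.shape
    return enc_feat + dec_feat

E7 = np.ones((4, 4, 512))    # toy conv7 output from the encoder
D1 = np.ones((4, 4, 512))    # toy uconv1 output from the decoder
u2_in = element_sum(E7, D1)  # input to uconv2
```

Unlike concat fusion, element-sum keeps the channel count unchanged, which keeps the decoder lightweight.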
Claims (5)
1. A high-quality reconstruction method for compound degraded images based on a conditional generative adversarial network, characterized in that:
the method comprises an overall flow, the building of a compound degraded image sample database, network model construction and training, and compound degraded image high-quality reconstruction;
the overall flow of compound degraded image reconstruction includes building the compound degraded image sample database; constructing the image atmospheric light scattering parameter K prediction network, the image blur parameter Bn prediction network, the image compression parameter CQ prediction network, and the generator network G and discriminator network D of the conditional generative adversarial network; and training the above networks with the synthesized clear/compound-degraded image samples, where the atmospheric light scattering parameter K, image blur parameter Bn, and image compression parameter CQ guide the training of the adversarial network as conditions; when reconstructing a real compound degraded image L, L is fed into the three parameter prediction networks to obtain the corresponding atmospheric light scattering parameter K, image blur parameter Bn, and image compression parameter CQ; K, Bn, and CQ are then fed, as conditions, together with the compound degraded image into the generator network to obtain the high-quality reconstructed image;
building the compound degraded image sample database comprises three steps: acquiring high-quality images, adding degradation factors, and generating the clear/compound-degraded image sample library;
network model construction and training comprises two steps: building the network framework, and network training and model acquisition; the framework building covers the image atmospheric light scattering parameter K prediction network, the image blur parameter Bn prediction network, the image compression parameter CQ prediction network, and the generator network G and discriminator network D of the conditional generative adversarial network; the network training and model acquisition stage covers the use of the loss functions and the training strategy;
compound degraded image high-quality reconstruction comprises two steps: predicting the atmospheric light scattering parameter K, the blur category parameter Bn, and the compression parameter CQ of the compound degraded image; and feeding the compound degraded image and its predicted parameters into the generator for image reconstruction.
2. The conditional-generative-adversarial-network-based high-quality reconstruction method for compound degraded images according to claim 1, characterized in that the overall flow comprises the following steps:
S1: high-quality images are first obtained and subjected to compound degradation processing, yielding a clear/compound-degraded image dataset;
S2: the image atmospheric light scattering parameter prediction network, the image blur parameter prediction network, the image compression parameter prediction network, and the generator network G and discriminator network D of the conditional generative adversarial network are built, and the five networks are trained on the dataset obtained in S1, where the atmospheric light scattering parameter K, image blur category parameter Bn, and image compression parameter CQ obtained by the three prediction networks guide the training of the adversarial network as conditions, until the conditional generative adversarial network (cGAN) reaches a Nash equilibrium or the maximum number of iterations, at which point training stops;
S3: for high-quality image reconstruction, the networks trained in S2 are used: the real compound degraded image to be reconstructed is first fed into the three parameter prediction networks to obtain the corresponding parameters K, Bn, and CQ, and the image and the obtained parameters are then fed together into the generator network G to obtain the reconstructed image.
3. The conditional-generative-adversarial-network-based high-quality reconstruction method for compound degraded images according to claim 2, characterized in that the compound degraded image sample database is built as follows:
building the compound degraded image sample database comprises selecting high-quality images, selecting degradation parameters, and generating the clear/compound-degraded image sample library;
S1.1: high-quality images are chosen from existing indoor and outdoor datasets;
S1.2: for each high-quality image J(x) from S1.1, a haze-degraded image is obtained with a randomly given transmission t(x) and atmospheric light A;
S1.3: one type of blur degradation, chosen at random from motion blur, defocus blur, and atmospheric turbulence, is added to the image obtained in S1.2, yielding a blur-degraded image;
S1.4: the image obtained in S1.3 is compressed with the JPEG method at different compression quality parameter (CQ) values, yielding compound degraded images I(x) with different degrees of compression artifacts;
S1.5: the high-quality images J(x) obtained in S1.1 and the corresponding compound degraded images I(x) obtained in S1.4 form clear/compound-degraded image pairs, which, together with the corresponding degradation parameters K, Bn, and CQ, constitute the sample database.
4. The conditional-generative-adversarial-network-based high-quality reconstruction method for compound degraded images according to claim 1, characterized in that network model construction and training proceeds as follows:
S2.1 Building the image atmospheric light scattering parameter prediction network
A convolutional network predicts the image atmospheric light scattering parameter K, and the prediction result K serves as a condition guiding the learning of the conditional generative adversarial network;
the basic topology of the image atmospheric light scattering parameter prediction network comprises 5 convolutional layers and 3 feature fusion layers; the compound degraded image is fed into the network, the feature maps produced by conv1 and conv2 are extracted and fused, and the result serves as the input of conv3; the feature maps of conv1, conv2, and conv3 are extracted and fused, and the result serves as the input of conv4; the feature maps of conv1, conv2, conv3, and conv4 are extracted and fused, and the result serves as the input of conv5; after the fifth convolution, the network outputs the final atmospheric light scattering parameter prediction K; this structure retains features at every level of the image to the greatest extent, giving better prediction results;
S2.2 Building the image blur parameter prediction network
Image blur covers motion blur, defocus blur, and atmospheric turbulence blur; a convolutional network predicts the image blur category, and the prediction result Bn serves as a condition guiding the learning of the conditional generative adversarial network;
the image blur category prediction network adopts the structure of the classic image classification network AlexNet, comprising 3 convolutional layers and 3 fully connected layers; a ReLU nonlinear activation function follows every convolutional and fully connected layer; to speed up computation and prevent overfitting, max pooling layers downsample the image width and height after the convolution and activation functions; local response normalization (LRN) is added after the first two pooling layers to improve the network's accuracy; the final network output is a 3 × 1 vector whose entries represent the blur parameters of the three blur types present in the image: motion blur, defocus blur, and atmospheric turbulence blur;
S2.3 Building the image compression parameter prediction network
Compression methods fall into two classes, lossless and lossy; although lossless methods provide the best visual experience for the user, lossy methods achieve higher compression ratios, so outdoor vision systems often use JPEG lossy compression; the present invention predicts the image compression parameter with a convolutional network, and the prediction result CQ serves as a condition guiding the learning of the conditional generative adversarial network; the basic structure of the image compression parameter prediction network comprises 3 convolutional layers, 2 fully connected layers, and 1 normalized exponential function (softmax) layer; a ReLU nonlinear activation function follows every convolutional and fully connected layer; to reduce feature dimensionality and speed up computation, max pooling performs feature selection after the convolution and activation functions; the final network output is a 5 × 1 vector whose entries are the probabilities that the image's compression parameter equals 0, 10, 20, 30, or 40; the final compression parameter prediction CQ of the image equals the weighted sum of the 5 compression parameters and their corresponding predicted probabilities;
S2.4 Conditional generative adversarial network
Generator construction: the generator G aims to make the generated data perform as consistently as possible with real data on the discriminator D; the generator structure used comprises a symmetric encoder-decoder, where the encoding stage is based on downsampling operations and provides feature maps to the corresponding layers of the decoding stage, while the decoding stage uses upsampling operations and nonlinear spatial transformations; the encoder's 8 convolutional layers extract features at different levels, and the decoder's 6 deconvolutional layers, 1 convolutional layer, and 1 Tanh activation layer reconstruct the image; meanwhile, to make better use of the features from every stage and break through the information bottleneck of the decoding process, skip connections are added to the network;
discriminator construction: the discriminator D aims to accurately distinguish whether an image is real or a generator G output, and thereby guides the generator's learning so that its final output images approach real images; the discriminator structure used comprises 5 convolutional layers, all with 3 × 3 kernels, followed by a sigmoid function that produces the image score;
S2.5 Network training
In the network training stage, the compound degraded image sample database first serves to train, separately, the image atmospheric light scattering parameter K prediction network, the image blur parameter Bn prediction network, the image compression parameter CQ prediction network, and the conditional generative adversarial network; the trained parameter prediction networks then yield the atmospheric light scattering parameter Ki, image blur parameter Bni, and image compression parameter CQi of each compound degraded image Ii, which serve as the conditions of the adversarial network; Ii, Ki, Bni, and CQi are fed into the generator network G to obtain the corresponding reconstructed image Zi; the high-quality image Ji corresponding to Ii, the reconstructed image Zi, and the parameters Ki, Bni, CQi train the adversarial network D; the generator network G and adversarial network D are trained alternately until the conditional generative adversarial network reaches a Nash equilibrium or the maximum number of iterations; the trained three parameter prediction networks and generator network G constitute the final compound degraded image reconstruction network; the image atmospheric light scattering parameter prediction network, blur parameter prediction network, and compression parameter prediction network all use a mean squared error (MSE) loss function; the generator's loss function comprises the adversarial loss, the perceptual loss, and the pixel-level loss, and the discriminator network uses a discrimination loss function; all of the above networks are trained with the Adam gradient descent method with momentum set to 0.9; the learning rate starts at 0.0002 and is multiplied by 0.9 every 100 training iterations; through repeated iteration, training stops when the loss function reaches its minimum or the preset maximum number of iterations is reached, yielding the final image reconstruction model.
5. The conditional-generative-adversarial-network-based high-quality reconstruction method for compound degraded images according to claim 1, characterized in that compound degraded image high-quality reconstruction proceeds as follows:
S3.1 Parameter prediction
First, the real compound degraded image L to be reconstructed, of size M × N pixels, is fed into the image atmospheric light scattering parameter prediction network to obtain the corresponding atmospheric light scattering parameter K; L is then scaled to 256 × 256 pixels and fed into the image blur parameter prediction network and the image compression parameter prediction network, respectively, yielding the corresponding blur parameter Bn and compression parameter CQ, which serve as the generator's conditions;
S3.2 Generator reconstruction
The real compound degraded image L to be reconstructed and the atmospheric light scattering parameter K, image blur category parameter Bn, and compression parameter CQ obtained in S3.1 are fed together into the generator network, whose output is the high-quality reconstructed image Z.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910552748.5A CN110363716B (en) | 2019-06-25 | 2019-06-25 | High-quality reconstruction method for generating confrontation network composite degraded image based on conditions |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110363716A true CN110363716A (en) | 2019-10-22 |
CN110363716B CN110363716B (en) | 2021-11-19 |
Family
ID=68216980
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108230325A (en) * | 2018-02-06 | 2018-06-29 | 浙江师范大学 | The compound degraded image quality evaluating method and system decomposed based on cartoon texture |
CN108537746A (en) * | 2018-03-21 | 2018-09-14 | 华南理工大学 | A kind of fuzzy variable method for blindly restoring image based on depth convolutional network |
CN109272455A (en) * | 2018-05-17 | 2019-01-25 | 西安电子科技大学 | Based on the Weakly supervised image defogging method for generating confrontation network |
CN109584188A (en) * | 2019-01-15 | 2019-04-05 | 东北大学 | A kind of image defogging method based on convolutional neural networks |
CN109658348A (en) * | 2018-11-16 | 2019-04-19 | 天津大学 | The estimation of joint noise and image de-noising method based on deep learning |
CN109685072A (en) * | 2018-12-22 | 2019-04-26 | 北京工业大学 | A kind of compound degraded image high quality method for reconstructing based on generation confrontation network |
CN109831664A (en) * | 2019-01-15 | 2019-05-31 | 天津大学 | Fast Compression three-dimensional video quality evaluation method based on deep learning |
Non-Patent Citations (2)
Title |
---|
GUANG YANG ET AL: "DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction", 《IEEE TRANSACTIONS ON MEDICAL IMAGING》 * |
WEN Changbao et al.: "Theory and Applications of Artificial Neural Networks" (《人工神经网络理论及应用》), 31 March 2019 * |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112785496A (en) * | 2019-11-05 | 2021-05-11 | 四零四科技股份有限公司 | Device and method for processing image super-resolution |
CN111047088A (en) * | 2019-12-09 | 2020-04-21 | 上海眼控科技股份有限公司 | Prediction image acquisition method and device, computer equipment and storage medium |
CN111028174A (en) * | 2019-12-10 | 2020-04-17 | 深圳先进技术研究院 | Multi-dimensional image restoration method and equipment based on residual connection |
WO2021115053A1 (en) * | 2019-12-10 | 2021-06-17 | 深圳先进技术研究院 | Residual connection-based multi-dimensional image restoration method and device |
CN111028174B (en) * | 2019-12-10 | 2023-08-04 | 深圳先进技术研究院 | Multi-dimensional image restoration method and device based on residual connection |
CN111080528B (en) * | 2019-12-20 | 2023-11-07 | 北京金山云网络技术有限公司 | Image super-resolution and model training method and device, electronic equipment and medium |
CN111080527B (en) * | 2019-12-20 | 2023-12-05 | 北京金山云网络技术有限公司 | Image super-resolution method and device, electronic equipment and storage medium |
CN111080527A (en) * | 2019-12-20 | 2020-04-28 | 北京金山云网络技术有限公司 | Image super-resolution method and device, electronic equipment and storage medium |
CN111080528A (en) * | 2019-12-20 | 2020-04-28 | 北京金山云网络技术有限公司 | Image super-resolution and model training method, device, electronic equipment and medium |
CN111080729B (en) * | 2019-12-24 | 2023-06-13 | 山东浪潮科学研究院有限公司 | Training picture compression network construction method and system based on Attention mechanism |
CN111080729A (en) * | 2019-12-24 | 2020-04-28 | 山东浪潮人工智能研究院有限公司 | Method and system for constructing training picture compression network based on Attention mechanism |
CN111369451A (en) * | 2020-02-24 | 2020-07-03 | 西华大学 | Image restoration model, method and equipment based on complex task decomposition regularization |
CN111369451B (en) * | 2020-02-24 | 2023-08-01 | 黑蜂智造(深圳)科技有限公司 | Image restoration model, method and device based on complex task decomposition regularization |
CN111369548A (en) * | 2020-03-10 | 2020-07-03 | 江南大学 | No-reference video quality evaluation method and device based on generative adversarial networks |
WO2021179851A1 (en) * | 2020-03-12 | 2021-09-16 | Oppo广东移动通信有限公司 | Image processing method and device, and terminal and storage medium |
CN111489301A (en) * | 2020-03-19 | 2020-08-04 | 山西大学 | Transfer-learning image dehazing method guided by image depth information |
CN111489301B (en) * | 2020-03-19 | 2022-05-31 | 山西大学 | Transfer-learning image dehazing method guided by image depth information |
CN111652813A (en) * | 2020-05-22 | 2020-09-11 | 中国科学技术大学 | Method and device for processing cross section of transverse beam |
CN112257800B (en) * | 2020-10-30 | 2024-05-31 | 南京大学 | Visual identification method based on deep convolutional neural network model-regeneration network |
CN112257800A (en) * | 2020-10-30 | 2021-01-22 | 南京大学 | Visual identification method based on deep convolutional neural network model-regeneration network |
CN112669240B (en) * | 2021-01-22 | 2024-05-10 | 深圳市格灵人工智能与机器人研究院有限公司 | High-definition image restoration method and device, electronic equipment and storage medium |
CN112669240A (en) * | 2021-01-22 | 2021-04-16 | 深圳市格灵人工智能与机器人研究院有限公司 | High-definition image restoration method and device, electronic equipment and storage medium |
CN113487476A (en) * | 2021-05-21 | 2021-10-08 | 中国科学院自动化研究所 | Online-updating image blind super-resolution reconstruction method and device |
CN114092520B (en) * | 2021-11-19 | 2023-12-26 | 电子科技大学长三角研究院(湖州) | Ground moving target refocusing method and system based on generative adversarial networks |
CN114092520A (en) * | 2021-11-19 | 2022-02-25 | 电子科技大学长三角研究院(湖州) | Ground moving target refocusing method and system based on generative adversarial networks |
CN114612362A (en) * | 2022-03-18 | 2022-06-10 | 四川大学 | Large-depth-of-field imaging method and system based on multi-point spread functions and generative adversarial networks |
WO2023245927A1 (en) * | 2022-06-23 | 2023-12-28 | 中国科学院自动化研究所 | Image generator training method and apparatus, and electronic device and readable storage medium |
CN117340280A (en) * | 2023-12-05 | 2024-01-05 | 成都斐正能达科技有限责任公司 | LPBF additive manufacturing process monitoring method |
CN117340280B (en) * | 2023-12-05 | 2024-02-13 | 成都斐正能达科技有限责任公司 | LPBF additive manufacturing process monitoring method |
Also Published As
Publication number | Publication date |
---|---|
CN110363716B (en) | 2021-11-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110363716A (en) | A high-quality reconstruction method for composite degraded images based on conditional generative adversarial networks | |
CN111259905B (en) | Feature fusion remote sensing image semantic segmentation method based on downsampling | |
CN109685072A (en) | A high-quality reconstruction method for composite degraded images based on generative adversarial networks | |
CN110276354B (en) | High-resolution streetscape picture semantic segmentation training and real-time segmentation method | |
CN108765279A (en) | A pedestrian face super-resolution reconstruction method for surveillance scenes | |
CN109711413A (en) | Image semantic segmentation method based on deep learning | |
CN109829855A (en) | A super-resolution reconstruction method based on fused multi-level feature maps | |
CN107833183A (en) | A method for simultaneous super-resolution and colorization of satellite images based on multi-task deep neural networks | |
CN107133935A (en) | A fine single-image rain removal method based on deep convolutional neural networks | |
CN108986058A (en) | Image fusion method with lightness consistency learning | |
CN110322416A (en) | Image processing method, device and computer readable storage medium | |
CN110634108A (en) | Composite degraded live-streaming video enhancement method based on a meta cycle-consistency adversarial network | |
CN108648197A (en) | An object candidate region extraction method based on image background masks | |
CN110136062A (en) | A super-resolution reconstruction method combined with semantic segmentation | |
CN107784628A (en) | A super-resolution implementation method based on reconstruction optimization and deep neural networks | |
CN110490804A (en) | A method for generating super-resolution images based on generative adversarial networks | |
CN107977930A (en) | An image super-resolution method and system | |
CN110136075A (en) | A remote sensing image dehazing method using an edge-sharpening cycle generative adversarial network | |
CN112070040A (en) | Text line detection method for video subtitles | |
Koh et al. | Simple and effective synthesis of indoor 3d scenes | |
Xu et al. | Learning self-supervised space-time CNN for fast video style transfer | |
CN115330620A (en) | Image dehazing method based on cycle generative adversarial networks | |
CN115830165A (en) | Chinese painting process generation method, device and equipment based on generative adversarial networks | |
CN108924528A (en) | A binocular stylized real-time rendering method based on deep learning | |
CN109377498B (en) | Interactive matting method based on recurrent neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||