CN109559287A - A semantic image inpainting method based on a DenseNet generative adversarial network - Google Patents
A semantic image inpainting method based on a DenseNet generative adversarial network
- Publication number
- CN109559287A CN109559287A CN201811386418.5A CN201811386418A CN109559287A CN 109559287 A CN109559287 A CN 109559287A CN 201811386418 A CN201811386418 A CN 201811386418A CN 109559287 A CN109559287 A CN 109559287A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/94—Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The present invention discloses a semantic image inpainting method based on a DenseNet generative adversarial network. The method comprises: collecting and preprocessing images to construct training and test data sets; constructing a DenseNet generative adversarial network; training the DenseNet generative adversarial network; and finally using the trained network to repair incomplete images. Within the generative adversarial network framework, the invention introduces a DenseNet structure and constructs a new loss function to optimize the network. This not only alleviates gradient vanishing and reduces the number of network parameters, but also improves feature propagation and reuse, thereby improving the similarity and visual quality of inpainting results for images with large regions of missing semantic information. Examples show that the invention can repair face images with severe information loss and that, compared with other existing methods, the repaired results better match human visual perception.
Description
Technical field
The present invention relates to a semantic image inpainting method based on a DenseNet generative adversarial network, and belongs to the fields of deep learning and image processing.
Background technique
Image inpainting uses the known regions of an image to repair missing information, so that the repaired image restores the original as faithfully as possible and conforms to human visual perception. Image inpainting first appeared in the Renaissance, where it was used to restore medieval artworks. In recent years, computation-based digital image inpainting has attracted attention and developed rapidly. Existing inpainting algorithms handle problems such as scratch-like scars and small-region background filling well, but they remain considerably limited for images with severe loss of semantic information. Semantic image inpainting requires predicting the missing information effectively on the basis of image understanding, and is a problem that still demands study.
In recent years, deep learning research has made remarkable progress, achieving striking results in high-level vision tasks such as image classification and object detection. Inpainting algorithms based on deep learning have become a hot topic in image restoration research. In particular, the generative adversarial network (GAN), proposed in 2014, has opened a brand-new direction for inpainting research. Pathak, Yang, et al. combined autoencoder networks with GANs for semantic inpainting of large missing regions. Although such models can learn the basic semantic information of an image, the accuracy and visual quality of the repaired region still need improvement.
Summary of the invention
The present invention provides a semantic image inpainting method based on a DenseNet generative adversarial network. The core idea is, within the generative adversarial network framework, to introduce a DenseNet structure to mitigate gradient vanishing, improve feature propagation and reuse, and reduce the number of network parameters, while constructing a new loss function to optimize the network, thereby improving the similarity and visual quality of inpainting for images with large regions of missing semantic information.
At present, inpainting methods based on generative adversarial networks face two key unsolved problems: (1) gradient vanishing — as network depth increases, the gradient vanishing problem becomes more severe and can interrupt training; and (2) artifacts and low similarity in the repaired image — current results for semantically missing regions contain artifacts, their similarity to the original is low, and the visual experience leaves much room for improvement. To address these problems, the present invention proposes a new inpainting scheme based on a DenseNet generative adversarial network that can solve both. With the number of layers unchanged, it effectively reduces network parameters, alleviates gradient vanishing, and improves training stability, while increasing inter-layer feature reuse so that features are used more effectively; combined with the new network loss function, it further improves the inpainting quality for images with missing semantic information.
The technical solution adopted by the invention is as follows:
A semantic image inpainting method based on a DenseNet generative adversarial network, comprising the following steps:
Step 1, data preprocessing. Collect images as a data set, preprocess the collected images, and crop them to a set size to construct the training data set and the test data set.
Step 2, construct the DenseNet generative adversarial network. The model consists of a DenseNet generator network G and a discriminator network D. The generator G uses an encoder-decoder structure, in which the encoder is composed of DenseNet modules and convolutional layers and the decoder is composed of DenseNet modules and deconvolution (transposed convolution) layers; skip connections (skip-connection) link the encoder and decoder. The encoder downsamples the input image to extract abstract features; the decoder upsamples the encoded features by deconvolution so that the output size matches the input image size. The discriminator D is a binary classification network composed of convolutional layers and fully connected layers that judges whether an input image is real.
Step 3, train the DenseNet generative adversarial network. Using the preprocessed images as the data set, randomly select a batch of images from the training set and add a mask to each image to simulate the loss of semantic information, producing the incomplete images to be repaired that serve as the network input; the weights of the generator G and the discriminator D are trained separately. During training, the loss is computed from the reconstruction loss, the adversarial loss, and the TV (Total Variation) loss; network parameter gradients are computed by back-propagation, and the parameters are updated iteratively with the set learning rate. The two networks are trained alternately, and training finishes when the set number of iterations is reached.
Step 4, use the trained generator G to repair images with missing semantic information.
Preferably, the data set in Step 1 may consist of images collected by the user or of a public data set; images of varying sizes are preprocessed and converted into training images of the set size.
Preferably, constructing the DenseNet generative adversarial network in Step 2 proceeds as follows:
The model consists of a DenseNet generator network G and a discriminator network D. The generator G contains an encoder and a decoder, both built from the dense convolution blocks (dense blocks) and transition layers of the DenseNet model. The encoder contains one convolutional layer, four dense blocks, and four transition layers. It first extracts features from the input image with 4*4 convolution kernels of stride 2, then downsamples through the dense blocks and transition layers to extract abstract features. The decoder mirrors the encoder and is composed of dense blocks and deconvolution transition layers; it upsamples and decodes the abstract features to output a generated image of the same size as the input. Skip connections (skip-connection) link corresponding feature levels of the encoder and decoder, enabling multi-level feature reuse and improving the accuracy of the generator's repaired images. The discriminator is a binary classification network composed of four convolutional layers and one fully connected layer that judges whether an input image is real or fake.
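As a rough sanity check of the encoder-decoder geometry described above, the spatial sizes can be traced through the stride-2 stages. This is only a sketch: the stage count follows the text (one stride-2 convolution plus four dense-block/transition stages, mirrored in the decoder), while the exact placement of each stride-2 operation in the patent's Fig. 3 is an assumption.

```python
def down(size, stride=2):
    # a stride-2 stage halves the spatial size (assumes "same" padding)
    return size // stride

def up(size, stride=2):
    # a stride-2 deconvolution doubles the spatial size
    return size * stride

size = 64                      # image size used in the patent's example
trace = [size]
size = down(size)              # initial 4x4, stride-2 convolution
trace.append(size)
for _ in range(4):             # four dense blocks, each followed by a transition layer
    size = down(size)          # transition layer: convolution + average pooling
    trace.append(size)

bottleneck = size
for _ in range(5):             # decoder mirrors the five stride-2 stages
    size = up(size)

print(trace, bottleneck, size)  # encoder trace, bottleneck size, output size
```

Under these assumptions the output size returns to 64, matching the requirement that the generated image be the same size as the input.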
Preferably, training the DenseNet generative adversarial network with the data set established in Step 1 proceeds as follows:
Each training iteration selects a batch of images from the data set and adds a mask to the central region of each image to simulate the loss of semantic information. The masked images are used as the network input, and the weights of the generator G and the discriminator D are trained separately. The generative adversarial network minimizes the loss function by gradient descent, adjusting the parameters of G and D layer by layer through back-propagation; iterative training improves the precision of the network so that the generator produces realistic images resembling the training set.
For a basic generative adversarial network, the input of the generator G is a noise vector z and the output is the generated (simulated) data G(z). The discriminator D takes real data x or generated data G(z) as input and judges the source of the input. The training objective is to find the parameters that maximize the classification accuracy of the discriminator D, while also finding the parameters of the generator G that maximally deceive D. The objective function is:
min_G max_D V(D, G) = E_{x~p_r}[log D(x)] + E_{z~p_z}[log(1 - D(G(z)))]   (1)
where V(D, G) is the objective to be optimized in the generative adversarial network; x ~ p_r means that x follows the image distribution p_r of the data set; E[·] denotes the mathematical expectation; z ~ p_z means that z follows the prior distribution p_z, which is a uniform or Gaussian distribution, and z is a randomly sampled vector.
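The objective above can be evaluated numerically for toy discriminator outputs. In this sketch the expectations are replaced by averages over a tiny made-up batch, which is an illustrative assumption only:

```python
import math

def gan_objective(d_real, d_fake):
    """V(D, G) with expectations replaced by batch averages of log-probabilities."""
    real_term = sum(math.log(p) for p in d_real) / len(d_real)
    fake_term = sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)
    return real_term + fake_term

# A well-trained D pushes D(x) toward 1 and D(G(z)) toward 0,
# which drives V(D, G) toward 0 from below.
good_d = gan_objective(d_real=[0.9, 0.95], d_fake=[0.05, 0.1])

# An undecided D (all outputs 0.5) gives V = log(1/2) + log(1/2) = -2 log 2.
undecided = gan_objective(d_real=[0.5, 0.5], d_fake=[0.5, 0.5])

print(good_d, undecided)
```

This illustrates the max_D direction of the objective: a discriminator that separates real from generated data achieves a larger V(D, G) than one that cannot.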
In the present invention a new loss function L is constructed for training and optimizing the DenseNet generative adversarial network, to improve the stability and repair ability of the network. L is a weighted combination of the adversarial loss L_adv, the reconstruction loss L_rec, and the TV loss L_TV:
L = λ_rec L_rec + λ_adv L_adv + λ_TV L_TV   (2)
where λ_rec, λ_adv, and λ_TV are weighting coefficients balancing the loss terms.
Since the basic structure of the invention is a generative adversarial network, the adversarial loss L_adv is:
L_adv = E_{x~p_r}[log D(x) + log(1 - D(y))]   (3)
where y = G(M ⊙ x) is the generated image; M is a binary mask whose value is 1 in the retained region and 0 in the missing region, with the same size as the real input x; ⊙ denotes element-wise multiplication; and D(y) is the discriminator D's probability estimate of whether the image y produced by the generator G is real.
The reconstruction loss L_rec measures the difference between the generated image y and the real data x, and is defined as:
L_rec = ||x - y||_2   (4)
where ||·||_2 denotes the l2 norm.
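The reconstruction term can be computed directly. A sketch on flattened toy images follows; whether the patent takes ||·||_2 over the whole image or normalizes per pixel is not specified, so the plain Euclidean norm is used here as an assumption:

```python
import math

def rec_loss(x, y):
    """l2 norm of the difference between real image x and generated image y
    (both given as flat lists of pixel values)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

x = [1.0, 2.0, 2.0]   # toy "real" pixels
y = [1.0, 0.0, 2.0]   # toy "generated" pixels
print(rec_loss(x, y))  # sqrt(0 + 4 + 0) = 2.0
```

The loss is zero only when the generated image reproduces the real one exactly, which is why it anchors the generator to the ground-truth content.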
In the present invention the TV loss L_TV is introduced to constrain the smoothness of the generated image y, eliminating artifacts at the edges of the repaired region and enhancing the visual quality of the result. It is defined as:
L_TV = Σ_{i,j} ((y_{i+1,j} - y_{i,j})² + (y_{i,j+1} - y_{i,j})²)   (5)
where i and j are the row and column coordinates of a pixel, y is the image produced by the generator G, y_{i+1,j} - y_{i,j} is the difference between adjacent pixel values along the row direction, and y_{i,j+1} - y_{i,j} is the difference between adjacent pixel values along the column direction.
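The TV loss can be written directly from the definition above. A minimal sketch on a nested-list grayscale image follows; the summation range, which stops one pixel short of each border, is an assumption about how the boundary is handled:

```python
def tv_loss(y):
    """Total-variation loss: sum of squared differences between
    adjacent pixels of image y (a 2-D list) in both directions."""
    h, w = len(y), len(y[0])
    loss = 0.0
    for i in range(h):
        for j in range(w):
            if i + 1 < h:                      # row-direction neighbor
                loss += (y[i + 1][j] - y[i][j]) ** 2
            if j + 1 < w:                      # column-direction neighbor
                loss += (y[i][j + 1] - y[i][j]) ** 2
    return loss

flat = [[0.5, 0.5], [0.5, 0.5]]    # a constant image has zero TV loss
noisy = [[0.0, 1.0], [1.0, 0.0]]   # a checkerboard maximizes local differences
print(tv_loss(flat), tv_loss(noisy))
```

Minimizing this term penalizes abrupt pixel-to-pixel jumps, which is how it suppresses edge artifacts around the repaired region.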
Preferably, in Step 4 the trained generator G is applied to the image with missing semantic information. The final repaired image x̂ fills the missing region with the generated content, with results as shown in Fig. 4; the formula is:
x̂ = M ⊙ x + (1 - M) ⊙ y
where y is the image generated by G, (1 - M) ⊙ y extracts the filled-in part of the generated image, M ⊙ x is the simulated incomplete image, and the repaired image is the combination of the incomplete image and the filled-in part.
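The compositing step can be sketched with plain Python lists standing in for image tensors; the 64*64 three-channel shapes of the example are dropped in favor of a tiny flattened illustration, which is an assumption for brevity:

```python
def composite(x, y, m):
    """Repaired image: keep x where mask m == 1, take generated y where m == 0."""
    return [m_i * x_i + (1 - m_i) * y_i for x_i, y_i, m_i in zip(x, y, m)]

x = [0.2, 0.4, 0.6, 0.8]   # real image (flattened toy example)
m = [1, 1, 0, 0]           # binary mask: 1 = retained region, 0 = missing region
y = [0.9, 0.1, 0.5, 0.7]   # generator output for the masked input

x_hat = composite(x, y, m)
print(x_hat)  # retained pixels come from x, missing pixels from y
```

Because the known pixels are copied straight from x, only the masked-out region is ever replaced by generated content, regardless of what the generator produces elsewhere.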
Brief description of the drawings
Fig. 1 is a flow diagram of semantic image inpainting with the generative adversarial network;
Fig. 2 is a schematic diagram of the semantic image inpainting model of the generative adversarial network;
Fig. 3 is a schematic diagram of the DenseNet-based generator network structure of the present invention;
Fig. 4 shows image inpainting results.
Specific embodiment
To make the purpose, technical solution, and advantages of the method clearer, the present invention is explained below with reference to the drawings; the description is not intended to limit the invention.
As shown in Fig. 1, the present invention provides a semantic image inpainting method based on a DenseNet generative adversarial network, comprising the following steps:
Step 1, data preprocessing. Resize the collected image data to obtain images of the size set for training.
Step 2, construct the DenseNet-based generative adversarial network model. The model comprises a DenseNet generator network and a discriminator network; the generator is an encoder-decoder built from dense blocks, and the discriminator is composed of convolutional layers and fully connected layers.
Step 3, train the DenseNet generative adversarial network. The generator G and the discriminator D are trained with gradient descent and back-propagation; the two networks are trained alternately until convergence.
Step 4, repair incomplete images with the trained generator network.
The preprocessing of the collected images in Step 1 is as follows:
The existing CelebA data set is used. CelebA is a face data set containing 202,599 celebrity face images; 200,000 of them are used for training and the remaining 2,599 for testing. Face recognition is performed with openface to extract facial landmarks such as the tip of the chin, the outer corners of the eyes, and the inner edges of the eyebrows; according to these landmarks, the collected images are cropped into face training images of the set size so that the eyes and mouth are centered. A second data set is a collected car data set of 16,185 vehicle images, of which 14,000 are used as training images and the rest as test images; these images are likewise cropped to the set size. In this example the image size in the data sets is 64*64.
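The cropping step can be sketched as a crop to the set size. The landmark-based alignment with openface described above is not reproduced here, so centering on the image midpoint is a simplifying assumption:

```python
def center_crop(img, size):
    """Center-crop a 2-D image (list of rows) to size x size."""
    h, w = len(img), len(img[0])
    top, left = (h - size) // 2, (w - size) // 2
    return [row[left:left + size] for row in img[top:top + size]]

# a 6x6 toy "image" whose pixel value encodes its (row, col) position
img = [[(r, c) for c in range(6)] for r in range(6)]
crop = center_crop(img, 4)
print(len(crop), len(crop[0]), crop[0][0])  # 4 4 (1, 1)
```

A real pipeline would replace the midpoint with the landmark-derived center (e.g. between the eyes and mouth) before cropping to 64*64.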
Constructing the DenseNet-based generative adversarial network model in Step 2 specifically includes the structure of each layer, the convolution kernel sizes and strides, the loss function of each network layer, the model optimization algorithm, and the initialization of the learning rate and model parameters.
The model consists of two deep neural networks, the generator G and the discriminator D, as shown in Fig. 2. The structure of the generator network of the invention is shown in Fig. 3: the first layer of the encoder is a convolutional layer using 3*3 kernels with stride 2, followed by four dense blocks connected by convolutional transition layers. Each dense block consists of four convolutional layers and serves to reduce parameters and fuse the features of each channel; a transition layer, composed of a convolutional layer and an average pooling layer, connects two dense blocks. The decoder mirrors the encoder and is composed of dense blocks and deconvolution transition layers: the dense blocks are the same as in the encoder, and the deconvolution transition layers use 3*3 kernels with stride 2, using deconvolution to make the output size consistent with the input image size. Within the generator, the features of each encoder dense block are also connected to the corresponding decoder deconvolution block, realizing multi-level feature reuse. The discriminator D of the invention is composed of four convolutional layers and a fully connected layer, with structural parameters as shown in Fig. 2; the four convolutional layers use 4*4 kernels with stride 2 for downsampling.
In the experiments the batch size is 64, the maximum usable under the given hardware memory limit; the number of iterations is set to 100; the optimization algorithm is Adam, with the learning rate set to 0.0002; λ_rec is 0.999, λ_adv is 0.001, and λ_TV is 1e-6.
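With the weighting coefficients above, the combined loss of equation (2) is dominated by the reconstruction term. A sketch with made-up per-term loss values (not measured results) shows the relative contributions:

```python
LAMBDA_REC, LAMBDA_ADV, LAMBDA_TV = 0.999, 0.001, 1e-6  # weights from the text

def total_loss(l_rec, l_adv, l_tv):
    """Combined loss L = λ_rec·L_rec + λ_adv·L_adv + λ_TV·L_TV."""
    return LAMBDA_REC * l_rec + LAMBDA_ADV * l_adv + LAMBDA_TV * l_tv

# illustrative raw loss values, chosen only to show the weighting
l = total_loss(l_rec=0.5, l_adv=2.0, l_tv=100.0)
print(l)  # 0.999*0.5 + 0.001*2.0 + 1e-6*100.0 = 0.5016
```

Even a large raw TV loss contributes only a tiny fraction of L, consistent with its role as a mild smoothness regularizer rather than a primary objective.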
Training the generative adversarial network in Step 3 proceeds as follows:
To obtain a high-performance generative adversarial network, it is trained on the CelebA and car data sets until the network weights converge to the optimum. The training process is as follows:
Each training iteration selects a batch of images from the data set, adds a mask to each image to simulate the loss of semantic information, and feeds the processed images into the generative adversarial network; the two networks are trained with batch gradient descent and back-propagation until the network converges. The generator G and the discriminator D are trained alternately, and training finishes when the set number of iterations is reached.
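The alternating schedule described above can be sketched as a skeleton loop in which the two update functions are stubs; a real implementation would replace them with a discriminator gradient step and a generator gradient step. The one-D-step/one-G-step ratio per iteration is an assumption, as the text does not state it:

```python
def train(num_iterations, step_d, step_g):
    """Alternate one discriminator update and one generator update per iteration."""
    for _ in range(num_iterations):
        step_d()   # update D with G frozen
        step_g()   # update G with D frozen

counts = {"d": 0, "g": 0}
train(100,
      step_d=lambda: counts.__setitem__("d", counts["d"] + 1),
      step_g=lambda: counts.__setitem__("g", counts["g"] + 1))
print(counts)  # both networks receive the same number of updates
```

The stopping condition here is simply the fixed iteration count from the text; convergence monitoring would be layered on top in practice.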
For a basic generative adversarial network, the input of the generator G is a noise vector z and the output is the generated (simulated) data G(z). The discriminator D takes real data x or generated data G(z) as input and judges the source of the input. The training objective is to find the parameters that maximize the classification accuracy of the discriminator D, while also finding the parameters of the generator G that maximally deceive D. The objective function is:
min_G max_D V(D, G) = E_{x~p_r}[log D(x)] + E_{z~p_z}[log(1 - D(G(z)))]   (1)
where V(D, G) is the objective to be optimized in the generative adversarial network; x ~ p_r means that x follows the image distribution p_r of the data set; E[·] denotes the mathematical expectation; z ~ p_z means that z follows the prior distribution p_z, which is a uniform or Gaussian distribution, and z is a randomly sampled vector. D(x) is the discriminator D's probability estimate of whether the real data x is genuine, and D(G(z)) is its probability estimate for the generated image G(z).
In the present invention a new loss function L is constructed for training and optimizing the DenseNet generative adversarial network, to improve the stability and repair ability of the network. L is a weighted combination of the adversarial loss L_adv, the reconstruction loss L_rec, and the TV loss L_TV:
L = λ_rec L_rec + λ_adv L_adv + λ_TV L_TV   (2)
where λ_rec, λ_adv, and λ_TV are weighting coefficients balancing the loss terms, determined by training.
Since the basic structure of the invention is a generative adversarial network, the adversarial loss L_adv is:
L_adv = E_{x~p_r}[log D(x) + log(1 - D(y))]   (3)
where y = G(M ⊙ x) is the generated image; M is a binary mask whose value is 1 in the retained region and 0 in the missing region, with the same size as the real input x; ⊙ denotes element-wise multiplication; and D(y) is the discriminator D's probability estimate of whether the image y produced by the generator G is real.
The reconstruction loss L_rec measures the difference between the generated image y and the real data x, and is defined as:
L_rec = ||x - y||_2   (4)
where ||·||_2 denotes the l2 norm.
In the present invention the TV loss L_TV is introduced to constrain the smoothness of the generated image y, eliminating artifacts at the edges of the repaired region and enhancing the visual quality of the result. It is defined as:
L_TV = Σ_{i,j} ((y_{i+1,j} - y_{i,j})² + (y_{i,j+1} - y_{i,j})²)   (5)
where i and j are the row and column coordinates of a pixel, y is the image produced by the generator G, y_{i+1,j} - y_{i,j} is the difference between adjacent pixel values along the row direction, and y_{i,j+1} - y_{i,j} is the difference between adjacent pixel values along the column direction.
Preferably, repairing images with missing semantic information with the trained generator G in Step 4 proceeds as follows:
The final repaired image x̂ fills the corresponding missing region with the generated content, with results as shown in Fig. 4; the formula is:
x̂ = M ⊙ x + (1 - M) ⊙ y
where y is the image generated by G, (1 - M) ⊙ y extracts the filled-in part of the generated image, M ⊙ x is the simulated incomplete image, and the repaired image is the combination of the incomplete image and the corresponding generated part.
Embodiment 1
The method of the present invention includes the following steps:
Step 1, data preprocessing. Resize the collected data to the size needed for training.
Step 2, construct the generative adversarial network. The model consists of a generator network G and a discriminator network D. The generator G uses an encoder-decoder structure, in which the encoder is composed of DenseNet modules and convolutional layers and the decoder of DenseNet modules and deconvolution layers; skip connections (skip-connection) link the encoder and decoder. The encoder downsamples the input image to extract abstract features; the decoder upsamples the encoded features by deconvolution so that the output size matches the input image size. The discriminator, composed of convolutional layers and fully connected layers, judges whether an input image is real.
Step 3, train the generative adversarial network. A batch of images is chosen from the data set, a mask is added to the central region of each image to simulate the loss of semantic information, and the processed images are fed into the network. The network minimizes the loss function by gradient descent, adjusting the parameters of G and D layer by layer through back-propagation; iterative training improves the precision of the network so that the generator produces images resembling the training set.
A new loss function L is constructed in the present invention for training and optimizing the DenseNet generative adversarial network, to improve the stability and repair ability of the network. L is a weighted combination of the adversarial loss L_adv, the reconstruction loss L_rec, and the TV loss L_TV:
L = λ_rec L_rec + λ_adv L_adv + λ_TV L_TV   (1)
where λ_rec, λ_adv, and λ_TV are weighting coefficients balancing the loss terms, determined by training.
Since the basic structure of the invention is a generative adversarial network, the adversarial loss L_adv is:
L_adv = E_{x~p_r}[log D(x) + log(1 - D(y))]   (2)
The reconstruction loss L_rec measures the difference between the generated image y and the real data x, and is defined as:
L_rec = ||x - y||_2   (3)
where ||·||_2 denotes the l2 norm.
The TV loss L_TV constrains the smoothness of the generated image y and is defined as:
L_TV = Σ_{i,j} ((y_{i+1,j} - y_{i,j})² + (y_{i,j+1} - y_{i,j})²)   (4)
where i and j are the row and column coordinates of a pixel.
Step 4, repair images with missing semantic information with the trained generator G, as follows:
The final repaired image fills the corresponding missing region with the generated content; the formula is:
x̂ = M ⊙ x + (1 - M) ⊙ y
where y is the image generated by G, (1 - M) ⊙ y extracts the filled-in part of the generated image, M ⊙ x is the simulated incomplete image, and the repaired image is the combination of the incomplete image and the corresponding generated part.
The specific implementation of the present invention has been described in detail above. It should be understood that the details are not limited to the specific embodiments described; those skilled in the art can make various deformations or amendments within the scope of the claims without affecting the essence of the invention.
Claims (5)
1. A semantic image inpainting method based on a DenseNet generative adversarial network, characterized by comprising the following steps:
Step 1, data preprocessing: preprocess the collected image set and crop it into training images of a set size;
Step 2, construct the DenseNet generative adversarial network: the model consists of a DenseNet generator network G and a discriminator network D; the generator G uses an encoder-decoder structure, in which the encoder is composed of DenseNet modules and convolutional layers and the decoder of DenseNet modules and deconvolution layers; skip connections link the encoder and decoder; the encoder downsamples the input image to extract abstract features; the decoder upsamples the encoded features by deconvolution so that the output size matches the input image size; the discriminator is a binary classification network composed of convolutional layers and fully connected layers that judges whether an input image is real;
Step 3, train the DenseNet generative adversarial network: using the preprocessed images as the data set, randomly select a batch of images from the training set and add a mask to each image to simulate the loss of semantic information, producing the incomplete images to be repaired that serve as the network input; train the weights of the generator G and the discriminator D separately; during training, compute the loss from the reconstruction loss, the adversarial loss, and the TV loss, compute network parameter gradients by back-propagation, and update the parameters iteratively with the set learning rate; the two networks are trained alternately, and training finishes when the set number of iterations is reached;
Step 4, repair images with missing semantic information with the trained generator G.
2. The DenseNet-based semantic image inpainting method of claim 1, characterized in that the data set in step 1 consists of self-collected images or a publicly available data set, and images of varying sizes are preprocessed and converted into training images of a set size.
3. The DenseNet-based semantic image inpainting method of claim 1, characterized in that building the generative adversarial network in step 2 is as follows:
the generative adversarial network model consists of a generation network G and a discrimination network D; the generation network G comprises an encoder-decoder, whose encoding and decoding structures use the dense convolution blocks and transition layers of the DenseNet model; the encoder comprises one convolutional layer, 4 dense convolution blocks and 4 transition layers; features are first extracted from the input image with a 4×4 convolution kernel of stride 2, and downsampling is then performed with the dense convolution blocks and transition layers to extract abstract features; the structure of the decoder mirrors the encoder and is composed of dense convolution blocks and deconvolution transition layers; the decoder upsamples and decodes the abstract features of the encoding, outputting a generated image of the same size as the input; the network also connects the hierarchical features of the encoder and decoder with skip connections, realizing the reuse of multi-level features and improving the accuracy of the restored image; the discrimination network is composed of 4 convolutional layers and one fully connected layer and discriminates whether the input image is real or fake.
4. The DenseNet-based semantic image inpainting method of claim 1, characterized in that training the DenseNet generative adversarial network in step 3 is as follows:
each training iteration selects a batch of images from the data set and adds a mask to the central region of each image to simulate the loss of semantic information; the masked images serve as the network input, and the weights of the generation network G and the discrimination network D are trained respectively; the generative adversarial network successively adjusts the parameters of G and D backwards by gradient descent to minimize the loss function, iteratively training the network to improve its precision so that the generation network produces images resembling the real images of the training set; the two networks are trained alternately, and training is complete when the set number of iterations is reached;
for a basic generative adversarial network, the input of the generation network G is noise data z and the output is the simulated data G(z); the discrimination network D takes either the real data x or the generated data G(z) as input and judges its source; the training goal of the generative adversarial network is to obtain parameters of the discrimination network D that maximize its classification accuracy, and parameters of the generation network G that maximally deceive D, with the objective function:

min_G max_D V(D, G) = E_{x~p_r}[log D(x)] + E_{z~p_z}[log(1 − D(G(z)))]    (1)

where V(D, G) is the objective function to be optimized in the generative adversarial network; x~p_r means that x obeys the image distribution p_r of the data set, and E[·] denotes the mathematical expectation; z~p_z means that z obeys the prior distribution p_z, which is a uniform or Gaussian distribution, and z is a randomly sampled vector; D(x) is the probability estimate output by the discrimination network D when discriminating the authenticity of the real data x, and D(G(z)) is the probability estimate output by D when discriminating the image G(z) produced by the generation network G;
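An empirical estimate of V(D, G) from discriminator outputs can be sketched as follows; the probability values are illustrative assumptions:

```python
import math

def gan_value(d_real, d_fake):
    """Monte-Carlo estimate of V(D,G) = E[log D(x)] + E[log(1 - D(G(z)))]
    from discriminator probabilities on real and generated samples."""
    term_real = sum(math.log(p) for p in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)
    return term_real + term_fake

# a perfectly confused discriminator outputs 0.5 everywhere,
# giving the well-known equilibrium value -2 log 2
print(gan_value([0.5, 0.5], [0.5, 0.5]))
```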
a new loss function L is constructed for the training and optimization of the DenseNet generative adversarial network, improving the stability and repair ability of the network; the loss function L is a weighted combination of the adversarial loss L_adv, the reconstruction loss L_rec and the TV loss L_TV, i.e.

L = λ_rec L_rec + λ_adv L_adv + λ_TV L_TV    (2)

where λ_rec, λ_adv and λ_TV are weighting coefficients balancing the loss terms; λ_rec is 0.999, λ_adv is 0.001, and λ_TV is 1e-6;
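With the stated coefficients, the combined loss is a plain weighted sum; the individual loss values below are illustrative assumptions:

```python
LAMBDA_REC, LAMBDA_ADV, LAMBDA_TV = 0.999, 0.001, 1e-6

def total_loss(l_rec, l_adv, l_tv):
    """L = lambda_rec*L_rec + lambda_adv*L_adv + lambda_TV*L_TV."""
    return LAMBDA_REC * l_rec + LAMBDA_ADV * l_adv + LAMBDA_TV * l_tv

print(total_loss(l_rec=0.2, l_adv=0.7, l_tv=1000.0))
# 0.999*0.2 + 0.001*0.7 + 1e-6*1000 = 0.2015
```

The tiny λ_TV reflects that the raw TV sum grows with image area, so even a small weight suffices to smooth the restored region.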
the adversarial loss L_adv is as follows:

L_adv = −log D(y)    (3)

where y is the generated image, y = G(M ⊙ x); M is a binary mask in which a value of 1 marks the retained region and a value of 0 marks the missing region, and its size matches the real data x; ⊙ denotes element-wise multiplication; D(y) is the probability estimate output by the discrimination network D when discriminating the image y produced by the generation network G;
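Building the binary mask and the masked input M ⊙ x can be sketched as follows; the image size and hole size are illustrative assumptions:

```python
import numpy as np

def make_center_mask(h, w, hole):
    """Binary mask: 1 = retained region, 0 = missing central hole."""
    M = np.ones((h, w))
    top, left = (h - hole) // 2, (w - hole) // 2
    M[top:top + hole, left:left + hole] = 0.0
    return M

x = np.random.rand(8, 8)           # stand-in for a real training image
M = make_center_mask(8, 8, 4)
masked = M * x                     # incomplete image fed to the generator
print(int(M.sum()))                # 48 retained pixels out of 64
```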
the reconstruction loss L_rec measures the difference between the image y produced by the generation network and the real data x, and is defined as:

L_rec = ||x − y||_2    (4)

where ||·||_2 denotes the l2 norm;
the TV loss L_TV is introduced to constrain the smoothness of the generated image y, eliminating artifacts at the edge of the restored region and enhancing the visual effect of the repaired image; the TV loss L_TV is defined as:

L_TV = Σ_{i,j} [(y_{i+1,j} − y_{i,j})² + (y_{i,j+1} − y_{i,j})²]    (5)

where i and j denote the row coordinate and column coordinate of a pixel, y is the image generated by the generation network G, y_{i+1,j} − y_{i,j} is the pixel-value difference between horizontally adjacent pixels along the row direction, and y_{i,j+1} − y_{i,j} is the pixel-value difference between vertically adjacent pixels along the column direction.
5. The DenseNet-based semantic image inpainting method of claim 1, characterized in that using the trained generation network G in step 4 to perform image repair on images with missing semantic information is as follows:
the final repaired image x̂ is obtained by filling the generated content into the corresponding missing region, with the specific formula:

x̂ = M ⊙ x + (1 − M) ⊙ y

where y is the image generated by the generation network G, (1 − M) ⊙ y takes out the completed part of the generated image, M ⊙ x is the simulated incomplete image, and combining the incomplete image with the corresponding generated part yields the repaired image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811386418.5A CN109559287A (en) | 2018-11-20 | 2018-11-20 | A kind of semantic image restorative procedure generating confrontation network based on DenseNet |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109559287A true CN109559287A (en) | 2019-04-02 |
Family
ID=65866684
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109559287A (en) |
Cited By (60)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110060216A (en) * | 2019-04-17 | 2019-07-26 | 广东工业大学 | A kind of image repair method, device and equipment based on generation confrontation network |
CN110188776A (en) * | 2019-05-30 | 2019-08-30 | 京东方科技集团股份有限公司 | Image processing method and device, the training method of neural network, storage medium |
CN110223254A (en) * | 2019-06-10 | 2019-09-10 | 大连民族大学 | A kind of image de-noising method generating network based on confrontation |
CN110222628A (en) * | 2019-06-03 | 2019-09-10 | 电子科技大学 | A kind of face restorative procedure based on production confrontation network |
CN110276728A (en) * | 2019-05-28 | 2019-09-24 | 河海大学 | A kind of face video Enhancement Method based on Residual Generation confrontation network |
CN110335337A (en) * | 2019-04-28 | 2019-10-15 | 厦门大学 | A method of based on the end-to-end semi-supervised visual odometry for generating confrontation network |
CN110415309A (en) * | 2019-06-26 | 2019-11-05 | 公安部第三研究所 | The method automatically generated based on generation confrontation network implementations fingerprint picture |
CN110473160A (en) * | 2019-08-21 | 2019-11-19 | 西安工程大学 | A kind of damaged textile fabric image repair method of ancient times based on SSGAN |
CN110473151A (en) * | 2019-07-04 | 2019-11-19 | 北京航空航天大学 | Dual-stage image completion method and system based on the association loss of subregion convolution sum |
CN110689499A (en) * | 2019-09-27 | 2020-01-14 | 北京工业大学 | Face image restoration method based on dense expansion convolution self-coding countermeasure network |
CN110705353A (en) * | 2019-08-29 | 2020-01-17 | 北京影谱科技股份有限公司 | Method and device for identifying face to be shielded based on attention mechanism |
CN110738161A (en) * | 2019-10-12 | 2020-01-31 | 电子科技大学 | face image correction method based on improved generation type confrontation network |
CN110738231A (en) * | 2019-07-25 | 2020-01-31 | 太原理工大学 | Method for classifying mammary gland X-ray images by improving S-DNet neural network model |
CN110880175A (en) * | 2019-11-15 | 2020-03-13 | 广东工业大学 | Welding spot defect detection method, system and equipment |
CN110889811A (en) * | 2019-12-05 | 2020-03-17 | 中南大学 | Photo repair system construction method, photo repair method and system |
CN111028167A (en) * | 2019-12-05 | 2020-04-17 | 广州市久邦数码科技有限公司 | Image restoration method based on deep learning |
CN111179189A (en) * | 2019-12-15 | 2020-05-19 | 深圳先进技术研究院 | Image processing method and device based on generation countermeasure network GAN, electronic equipment and storage medium |
CN111179219A (en) * | 2019-12-09 | 2020-05-19 | 中国科学院深圳先进技术研究院 | Copy-move counterfeiting detection method based on generation of countermeasure network |
CN111241614A (en) * | 2019-12-30 | 2020-06-05 | 浙江大学 | Engineering structure load inversion method based on condition generation confrontation network model |
CN111241725A (en) * | 2019-12-30 | 2020-06-05 | 浙江大学 | Structure response reconstruction method for generating countermeasure network based on conditions |
CN111340122A (en) * | 2020-02-29 | 2020-06-26 | 复旦大学 | Multi-modal feature fusion text-guided image restoration method |
CN111341438A (en) * | 2020-02-25 | 2020-06-26 | 中国科学技术大学 | Image processing apparatus, electronic device, and medium |
CN111445426A (en) * | 2020-05-09 | 2020-07-24 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Target garment image processing method based on generation countermeasure network model |
CN111476866A (en) * | 2020-04-09 | 2020-07-31 | 咪咕文化科技有限公司 | Video optimization and playing method and system, electronic equipment and storage medium |
CN111523411A (en) * | 2020-04-10 | 2020-08-11 | 陕西师范大学 | Synthetic aperture imaging method based on semantic patching |
CN111612721A (en) * | 2020-05-22 | 2020-09-01 | 哈尔滨工业大学(深圳) | Image restoration model training method and device and satellite image restoration method and device |
CN111640076A (en) * | 2020-05-29 | 2020-09-08 | 北京金山云网络技术有限公司 | Image completion method and device and electronic equipment |
CN111652813A (en) * | 2020-05-22 | 2020-09-11 | 中国科学技术大学 | Method and device for processing cross section of transverse beam |
CN111784602A (en) * | 2020-06-28 | 2020-10-16 | 江西理工大学 | Method for generating countermeasure network for image restoration |
CN111787187A (en) * | 2020-07-29 | 2020-10-16 | 上海大学 | Method, system and terminal for repairing video by utilizing deep convolutional neural network |
CN111787242A (en) * | 2019-07-17 | 2020-10-16 | 北京京东尚科信息技术有限公司 | Method and apparatus for virtual fitting |
CN111815523A (en) * | 2020-06-08 | 2020-10-23 | 天津中科智能识别产业技术研究院有限公司 | Image restoration method based on generation countermeasure network |
CN111861945A (en) * | 2020-09-21 | 2020-10-30 | 浙江大学 | Text-guided image restoration method and system |
CN111899191A (en) * | 2020-07-21 | 2020-11-06 | 武汉工程大学 | Text image restoration method and device and storage medium |
CN111915522A (en) * | 2020-07-31 | 2020-11-10 | 天津中科智能识别产业技术研究院有限公司 | Image restoration method based on attention mechanism |
CN112001427A (en) * | 2020-08-04 | 2020-11-27 | 中国科学院信息工程研究所 | Image conversion method and device based on analogy learning |
CN112055249A (en) * | 2020-09-17 | 2020-12-08 | 京东方科技集团股份有限公司 | Video frame interpolation method and device |
CN112102306A (en) * | 2020-09-25 | 2020-12-18 | 西安交通大学 | Dual-GAN-based defect detection method for edge repair feature fusion |
CN112184585A (en) * | 2020-09-29 | 2021-01-05 | 中科方寸知微(南京)科技有限公司 | Image completion method and system based on semantic edge fusion |
CN112396674A (en) * | 2020-10-21 | 2021-02-23 | 浙江工业大学 | Rapid event image filling method and system based on lightweight generation countermeasure network |
CN112465718A (en) * | 2020-11-27 | 2021-03-09 | 东北大学秦皇岛分校 | Two-stage image restoration method based on generation of countermeasure network |
CN112767226A (en) * | 2021-01-15 | 2021-05-07 | 南京信息工程大学 | Image steganography method and system based on GAN network structure automatic learning distortion |
CN112766366A (en) * | 2021-01-18 | 2021-05-07 | 深圳前海微众银行股份有限公司 | Training method for resisting generation network and image processing method and device thereof |
CN112785661A (en) * | 2021-01-12 | 2021-05-11 | 山东师范大学 | Depth semantic segmentation image compression method and system based on fusion perception loss |
CN112801183A (en) * | 2021-01-28 | 2021-05-14 | 哈尔滨理工大学 | Multi-scale target detection method based on YOLO v3 |
CN112950749A (en) * | 2021-03-17 | 2021-06-11 | 西北大学 | Calligraphy picture generation method based on generation of confrontation network |
CN113240613A (en) * | 2021-06-07 | 2021-08-10 | 北京航空航天大学 | Image restoration method based on edge information reconstruction |
CN113379036A (en) * | 2021-06-18 | 2021-09-10 | 西安石油大学 | Oil-gas image desensitization method based on context encoder |
CN113393550A (en) * | 2021-06-15 | 2021-09-14 | 杭州电子科技大学 | Fashion garment design synthesis method guided by postures and textures |
CN113449756A (en) * | 2020-03-26 | 2021-09-28 | 太原理工大学 | Improved DenseNet-based multi-scale image identification method and device |
CN113538608A (en) * | 2021-01-25 | 2021-10-22 | 哈尔滨工业大学(深圳) | Controllable character image generation method based on generation countermeasure network |
CN113808003A (en) * | 2020-06-17 | 2021-12-17 | 北京达佳互联信息技术有限公司 | Training method of image processing model, image processing method and device |
CN113837953A (en) * | 2021-06-11 | 2021-12-24 | 西安工业大学 | Image restoration method based on generation countermeasure network |
CN114387190A (en) * | 2022-03-23 | 2022-04-22 | 山东省计算中心(国家超级计算济南中心) | Adaptive image enhancement method and system based on complex environment |
CN114565526A (en) * | 2022-02-23 | 2022-05-31 | 杭州电子科技大学 | Deep learning image restoration method based on gradient direction and edge guide |
CN114693565A (en) * | 2022-04-25 | 2022-07-01 | 杭州电子科技大学 | Method for repairing GAN image based on jump connection multi-scale fusion |
CN114782291A (en) * | 2022-06-23 | 2022-07-22 | 中国科学院自动化研究所 | Training method and device of image generator, electronic equipment and readable storage medium |
CN115131218A (en) * | 2021-03-25 | 2022-09-30 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, computer readable medium and electronic equipment |
CN115345970A (en) * | 2022-08-15 | 2022-11-15 | 哈尔滨工业大学(深圳) | Multi-modal input video condition generation method based on generation countermeasure network |
CN115456913A (en) * | 2022-11-07 | 2022-12-09 | 四川大学 | Method and device for defogging night fog map |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107945118A (en) * | 2017-10-30 | 2018-04-20 | 南京邮电大学 | A kind of facial image restorative procedure based on production confrontation network |
CN107945140A (en) * | 2017-12-20 | 2018-04-20 | 中国科学院深圳先进技术研究院 | A kind of image repair method, device and equipment |
CN108171701A (en) * | 2018-01-15 | 2018-06-15 | 复旦大学 | Conspicuousness detection method based on U networks and confrontation study |
CN108257100A (en) * | 2018-01-12 | 2018-07-06 | 北京奇安信科技有限公司 | A kind of image repair method and server |
CN108460830A (en) * | 2018-05-09 | 2018-08-28 | 厦门美图之家科技有限公司 | Image repair method, device and image processing equipment |
CN108520503A (en) * | 2018-04-13 | 2018-09-11 | 湘潭大学 | A method of based on self-encoding encoder and generating confrontation network restoration face Incomplete image |
Non-Patent Citations (2)
Title |
---|
ARAVINDH MAHENDRAN et al.: "Understanding Deep Image Representations by Inverting Them", arXiv * |
XING DI et al.: "GP-GAN: Gender Preserving GAN for Synthesizing Faces from Landmarks", arXiv * |
Cited By (96)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110060216A (en) * | 2019-04-17 | 2019-07-26 | 广东工业大学 | A kind of image repair method, device and equipment based on generation confrontation network |
CN110335337B (en) * | 2019-04-28 | 2021-11-05 | 厦门大学 | Method for generating visual odometer of antagonistic network based on end-to-end semi-supervision |
CN110335337A (en) * | 2019-04-28 | 2019-10-15 | 厦门大学 | A method of based on the end-to-end semi-supervised visual odometry for generating confrontation network |
CN110276728A (en) * | 2019-05-28 | 2019-09-24 | 河海大学 | A kind of face video Enhancement Method based on Residual Generation confrontation network |
CN110276728B (en) * | 2019-05-28 | 2022-08-05 | 河海大学 | Human face video enhancement method based on residual error generation countermeasure network |
CN110188776A (en) * | 2019-05-30 | 2019-08-30 | 京东方科技集团股份有限公司 | Image processing method and device, the training method of neural network, storage medium |
US11908102B2 (en) | 2019-05-30 | 2024-02-20 | Boe Technology Group Co., Ltd. | Image processing method and device, training method of neural network, and storage medium |
CN110222628A (en) * | 2019-06-03 | 2019-09-10 | 电子科技大学 | A kind of face restorative procedure based on production confrontation network |
CN110223254A (en) * | 2019-06-10 | 2019-09-10 | 大连民族大学 | A kind of image de-noising method generating network based on confrontation |
CN110415309A (en) * | 2019-06-26 | 2019-11-05 | 公安部第三研究所 | The method automatically generated based on generation confrontation network implementations fingerprint picture |
CN110415309B (en) * | 2019-06-26 | 2023-09-08 | 公安部第三研究所 | Method for automatically generating fingerprint pictures based on generation countermeasure network |
CN110473151A (en) * | 2019-07-04 | 2019-11-19 | 北京航空航天大学 | Dual-stage image completion method and system based on the association loss of subregion convolution sum |
CN111787242B (en) * | 2019-07-17 | 2021-12-07 | 北京京东尚科信息技术有限公司 | Method and apparatus for virtual fitting |
US11935167B2 (en) | 2019-07-17 | 2024-03-19 | Reling Jingdong Shangke Information Technology Co., Ltd. | Method and apparatus for virtual fitting |
CN111787242A (en) * | 2019-07-17 | 2020-10-16 | 北京京东尚科信息技术有限公司 | Method and apparatus for virtual fitting |
CN110738231A (en) * | 2019-07-25 | 2020-01-31 | 太原理工大学 | Method for classifying mammary gland X-ray images by improving S-DNet neural network model |
CN110738231B (en) * | 2019-07-25 | 2022-12-27 | 太原理工大学 | Method for classifying mammary gland X-ray images by improving S-DNet neural network model |
CN110473160A (en) * | 2019-08-21 | 2019-11-19 | 西安工程大学 | A kind of damaged textile fabric image repair method of ancient times based on SSGAN |
CN110705353A (en) * | 2019-08-29 | 2020-01-17 | 北京影谱科技股份有限公司 | Method and device for identifying face to be shielded based on attention mechanism |
CN110689499A (en) * | 2019-09-27 | 2020-01-14 | 北京工业大学 | Face image restoration method based on dense expansion convolution self-coding countermeasure network |
CN110689499B (en) * | 2019-09-27 | 2023-04-25 | 北京工业大学 | Face image restoration method based on dense expansion convolution self-coding countermeasure network |
CN110738161A (en) * | 2019-10-12 | 2020-01-31 | 电子科技大学 | face image correction method based on improved generation type confrontation network |
CN110880175A (en) * | 2019-11-15 | 2020-03-13 | 广东工业大学 | Welding spot defect detection method, system and equipment |
CN110880175B (en) * | 2019-11-15 | 2023-05-05 | 广东工业大学 | Welding spot defect detection method, system and equipment |
CN110889811B (en) * | 2019-12-05 | 2023-06-09 | 中南大学 | Photo restoration system construction method, photo restoration method and system |
CN110889811A (en) * | 2019-12-05 | 2020-03-17 | 中南大学 | Photo repair system construction method, photo repair method and system |
CN111028167A (en) * | 2019-12-05 | 2020-04-17 | 广州市久邦数码科技有限公司 | Image restoration method based on deep learning |
CN111179219A (en) * | 2019-12-09 | 2020-05-19 | 中国科学院深圳先进技术研究院 | Copy-move counterfeiting detection method based on generation of countermeasure network |
CN111179189B (en) * | 2019-12-15 | 2023-05-23 | 深圳先进技术研究院 | Image processing method and device based on generation of countermeasure network GAN, electronic equipment and storage medium |
CN111179189A (en) * | 2019-12-15 | 2020-05-19 | 深圳先进技术研究院 | Image processing method and device based on generation countermeasure network GAN, electronic equipment and storage medium |
CN111241614B (en) * | 2019-12-30 | 2022-08-23 | 浙江大学 | Engineering structure load inversion method based on condition generation confrontation network model |
CN111241725B (en) * | 2019-12-30 | 2022-08-23 | 浙江大学 | Structure response reconstruction method for generating countermeasure network based on conditions |
CN111241614A (en) * | 2019-12-30 | 2020-06-05 | 浙江大学 | Engineering structure load inversion method based on condition generation confrontation network model |
CN111241725A (en) * | 2019-12-30 | 2020-06-05 | 浙江大学 | Structure response reconstruction method for generating countermeasure network based on conditions |
CN111341438A (en) * | 2020-02-25 | 2020-06-26 | 中国科学技术大学 | Image processing apparatus, electronic device, and medium |
CN111340122A (en) * | 2020-02-29 | 2020-06-26 | 复旦大学 | Multi-modal feature fusion text-guided image restoration method |
CN111340122B (en) * | 2020-02-29 | 2022-04-12 | 复旦大学 | Multi-modal feature fusion text-guided image restoration method |
CN113449756B (en) * | 2020-03-26 | 2022-08-16 | 太原理工大学 | Improved DenseNet-based multi-scale image identification method and device |
CN113449756A (en) * | 2020-03-26 | 2021-09-28 | 太原理工大学 | Improved DenseNet-based multi-scale image identification method and device |
CN111476866A (en) * | 2020-04-09 | 2020-07-31 | 咪咕文化科技有限公司 | Video optimization and playing method and system, electronic equipment and storage medium |
CN111476866B (en) * | 2020-04-09 | 2024-03-12 | 咪咕文化科技有限公司 | Video optimization and playing method, system, electronic equipment and storage medium |
CN111523411A (en) * | 2020-04-10 | 2020-08-11 | 陕西师范大学 | Synthetic aperture imaging method based on semantic patching |
CN111523411B (en) * | 2020-04-10 | 2023-02-28 | 陕西师范大学 | Synthetic aperture imaging method based on semantic patching |
CN111445426B (en) * | 2020-05-09 | 2023-09-08 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Target clothing image processing method based on generation of countermeasure network model |
CN111445426A (en) * | 2020-05-09 | 2020-07-24 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Target garment image processing method based on generation countermeasure network model |
CN111612721A (en) * | 2020-05-22 | 2020-09-01 | 哈尔滨工业大学(深圳) | Image restoration model training method and device and satellite image restoration method and device |
CN111612721B (en) * | 2020-05-22 | 2023-09-22 | 哈尔滨工业大学(深圳) | Image restoration model training method and device and satellite image restoration method and device |
CN111652813A (en) * | 2020-05-22 | 2020-09-11 | 中国科学技术大学 | Method and device for processing cross section of transverse beam |
CN111640076A (en) * | 2020-05-29 | 2020-09-08 | 北京金山云网络技术有限公司 | Image completion method and device and electronic equipment |
CN111640076B (en) * | 2020-05-29 | 2023-10-10 | 北京金山云网络技术有限公司 | Image complement method and device and electronic equipment |
CN111815523A (en) * | 2020-06-08 | 2020-10-23 | 天津中科智能识别产业技术研究院有限公司 | Image restoration method based on generation countermeasure network |
CN113808003B (en) * | 2020-06-17 | 2024-02-09 | 北京达佳互联信息技术有限公司 | Training method of image processing model, image processing method and device |
CN113808003A (en) * | 2020-06-17 | 2021-12-17 | 北京达佳互联信息技术有限公司 | Training method of image processing model, image processing method and device |
CN111784602A (en) * | 2020-06-28 | 2020-10-16 | 江西理工大学 | Method for generating countermeasure network for image restoration |
CN111784602B (en) * | 2020-06-28 | 2022-09-23 | 江西理工大学 | Method for generating countermeasure network for image restoration |
CN111899191A (en) * | 2020-07-21 | 2020-11-06 | 武汉工程大学 | Text image restoration method and device and storage medium |
CN111899191B (en) * | 2020-07-21 | 2024-01-26 | 武汉工程大学 | Text image restoration method, device and storage medium |
CN111787187A (en) * | 2020-07-29 | 2020-10-16 | 上海大学 | Method, system and terminal for repairing video by utilizing deep convolutional neural network |
CN111915522A (en) * | 2020-07-31 | 2020-11-10 | 天津中科智能识别产业技术研究院有限公司 | Image restoration method based on attention mechanism |
CN112001427A (en) * | 2020-08-04 | 2020-11-27 | 中国科学院信息工程研究所 | Image conversion method and device based on analogy learning |
CN112055249A (en) * | 2020-09-17 | 2020-12-08 | 京东方科技集团股份有限公司 | Video frame interpolation method and device |
CN112055249B (en) * | 2020-09-17 | 2022-07-08 | 京东方科技集团股份有限公司 | Video frame interpolation method and device |
CN111861945B (en) * | 2020-09-21 | 2020-12-18 | 浙江大学 | Text-guided image restoration method and system |
CN111861945A (en) * | 2020-09-21 | 2020-10-30 | 浙江大学 | Text-guided image restoration method and system |
CN112102306A (en) * | 2020-09-25 | 2020-12-18 | 西安交通大学 | Dual-GAN-based defect detection method for edge repair feature fusion |
CN112102306B (en) * | 2020-09-25 | 2022-10-25 | 西安交通大学 | Dual-GAN-based defect detection method for edge repair feature fusion |
CN112184585A (en) * | 2020-09-29 | 2021-01-05 | 中科方寸知微(南京)科技有限公司 | Image completion method and system based on semantic edge fusion |
CN112184585B (en) * | 2020-09-29 | 2024-03-29 | 中科方寸知微(南京)科技有限公司 | Image completion method and system based on semantic edge fusion |
CN112396674A (en) * | 2020-10-21 | 2021-02-23 | 浙江工业大学 | Rapid event image filling method and system based on lightweight generation countermeasure network |
CN112465718A (en) * | 2020-11-27 | 2021-03-09 | 东北大学秦皇岛分校 | Two-stage image restoration method based on generation of countermeasure network |
CN112785661A (en) * | 2021-01-12 | 2021-05-11 | 山东师范大学 | Depth semantic segmentation image compression method and system based on fusion perception loss |
CN112767226A (en) * | 2021-01-15 | 2021-05-07 | 南京信息工程大学 | Image steganography method and system based on GAN network structure automatic learning distortion |
CN112767226B (en) * | 2021-01-15 | 2023-09-12 | 南京信息工程大学 | Image steganography method and system based on automatic learning distortion of GAN network structure |
CN112766366A (en) * | 2021-01-18 | 2021-05-07 | 深圳前海微众银行股份有限公司 | Training method for resisting generation network and image processing method and device thereof |
CN113538608A (en) * | 2021-01-25 | 2021-10-22 | 哈尔滨工业大学(深圳) | Controllable character image generation method based on generation countermeasure network |
CN113538608B (en) * | 2021-01-25 | 2023-08-01 | 哈尔滨工业大学(深圳) | Controllable figure image generation method based on generation countermeasure network |
CN112801183A (en) * | 2021-01-28 | 2021-05-14 | 哈尔滨理工大学 | Multi-scale target detection method based on YOLO v3 |
CN112801183B (en) * | 2021-01-28 | 2023-09-08 | 哈尔滨理工大学 | YOLO v 3-based multi-scale target detection method |
CN112950749A (en) * | 2021-03-17 | 2021-06-11 | 西北大学 | Calligraphy picture generation method based on generation of confrontation network |
CN112950749B (en) * | 2021-03-17 | 2023-10-27 | 西北大学 | Handwriting picture generation method based on generation countermeasure network |
CN115131218A (en) * | 2021-03-25 | 2022-09-30 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, computer readable medium and electronic equipment |
CN113240613A (en) * | 2021-06-07 | 2021-08-10 | 北京航空航天大学 | Image restoration method based on edge information reconstruction |
CN113837953B (en) * | 2021-06-11 | 2024-04-12 | 西安工业大学 | Image restoration method based on generation countermeasure network |
CN113837953A (en) * | 2021-06-11 | 2021-12-24 | 西安工业大学 | Image restoration method based on generation countermeasure network |
CN113393550A (en) * | 2021-06-15 | 2021-09-14 | 杭州电子科技大学 | Fashion garment design synthesis method guided by postures and textures |
CN113379036B (en) * | 2021-06-18 | 2023-04-07 | 西安石油大学 | Oil-gas image desensitization method based on context encoder |
CN113379036A (en) * | 2021-06-18 | 2021-09-10 | 西安石油大学 | Oil-gas image desensitization method based on context encoder |
CN114565526A (en) * | 2022-02-23 | 2022-05-31 | 杭州电子科技大学 | Deep learning image restoration method based on gradient direction and edge guide |
CN114387190B (en) * | 2022-03-23 | 2022-08-16 | 山东省计算中心(国家超级计算济南中心) | Adaptive image enhancement method and system for complex environments |
CN114387190A (en) * | 2022-03-23 | 2022-04-22 | 山东省计算中心(国家超级计算济南中心) | Adaptive image enhancement method and system for complex environments |
CN114693565A (en) * | 2022-04-25 | 2022-07-01 | 杭州电子科技大学 | GAN image restoration method based on skip-connection multi-scale fusion |
CN114693565B (en) * | 2022-04-25 | 2024-05-03 | 杭州电子科技大学 | GAN image restoration method based on skip-connection multi-scale fusion |
CN114782291B (en) * | 2022-06-23 | 2022-09-06 | 中国科学院自动化研究所 | Training method and device of image generator, electronic equipment and readable storage medium |
CN114782291A (en) * | 2022-06-23 | 2022-07-22 | 中国科学院自动化研究所 | Training method and device of image generator, electronic equipment and readable storage medium |
CN115345970A (en) * | 2022-08-15 | 2022-11-15 | 哈尔滨工业大学(深圳) | Multi-modal-input conditional video generation method based on a generative adversarial network |
CN115456913A (en) * | 2022-11-07 | 2022-12-09 | 四川大学 | Method and device for defogging nighttime foggy images |
Similar Documents
Publication | Publication Date | Title |
---|---|---
CN109559287A (en) | 2019-04-02 | Semantic image inpainting method based on a DenseNet generative adversarial network |
CN109523463A (en) | | Face aging method based on a conditional generative adversarial network |
CN113469356B (en) | | Pig identity recognition method using an improved VGG16 network based on transfer learning |
CN104361363B (en) | | Deep deconvolution feature learning network, generation method, and image classification method |
CN109615582A (en) | | Face image super-resolution reconstruction method based on an attribute-description generative adversarial network |
CN109919122A (en) | | Temporal action detection method based on 3D human body keypoints |
CN110222628A (en) | | Face restoration method based on a generative adversarial network |
CN108765319A (en) | | Image denoising method based on a generative adversarial network |
CN108765279A (en) | | Pedestrian face super-resolution reconstruction method for surveillance scenes |
CN108416755A (en) | | Image denoising method and system based on deep learning |
CN107767413A (en) | | Image depth estimation method based on a convolutional neural network |
CN108038420A (en) | | Human behavior recognition method based on depth video |
CN110211045A (en) | | Face image super-resolution method based on the SRGAN network |
CN107748895A (en) | | UAV landing landform image classification method based on a DCT-CNN model |
CN111462230B (en) | | Typhoon center positioning method based on deep reinforcement learning |
CN109801292A (en) | | Asphalt highway crack image segmentation method based on a generative adversarial network |
CN110443768A (en) | | Single-frame image super-resolution reconstruction method based on multiple-difference consistency constraints and a symmetric redundant network |
CN108921019A (en) | | Gait recognition method based on GEI and TripletLoss-DenseNet |
CN108520202A (en) | | Adversarially robust image feature extraction method based on variational spherical projection |
CN109685724A (en) | | Symmetry-aware face image completion method based on deep learning |
CN113012172A (en) | | AS-UNet-based medical image segmentation method and system |
CN110458060A (en) | | Vehicle image optimization method and system based on adversarial learning |
CN110223304A (en) | | Image segmentation method and device based on multipath aggregation, and computer-readable storage medium |
CN110175248A (en) | | Face image retrieval method and device based on deep learning and hashing |
CN109272487A (en) | | Video-based crowd counting method for public areas |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190402 |