CN107945204A - A pixel-level portrait matting method based on generative adversarial networks - Google Patents

A pixel-level portrait matting method based on generative adversarial networks

Info

Publication number
CN107945204A
CN107945204A CN201711022184.1A
Authority
CN
China
Prior art keywords
network
generation
portrait
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711022184.1A
Other languages
Chinese (zh)
Other versions
CN107945204B (en)
Inventor
王伟
周红丽
王晨吉
方凌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201711022184.1A priority Critical patent/CN107945204B/en
Publication of CN107945204A publication Critical patent/CN107945204A/en
Application granted granted Critical
Publication of CN107945204B publication Critical patent/CN107945204B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/194: Segmentation; Edge detection involving foreground-background segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/143: Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20076: Probabilistic image processing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]

Abstract

The invention discloses a pixel-level portrait matting method based on a generative adversarial network, solving the problem that machine matting requires massive, costly-to-produce datasets for training and optimizing the network. The invention pre-sets a generator network and a discriminator network in an adversarial learning mode, the generator being a deep neural network with skip connections. A real image containing a portrait is fed into the generator, which outputs a person-background segmentation image. The first and second image pairs are separately fed into the discriminator, which outputs discrimination probabilities used to determine the loss functions of the generator and the discriminator. The configuration parameters of the two networks are adjusted by minimizing the two loss function values, completing the training of the generator. A test image is fed into the trained generator to produce a person-background segmentation image, which is converted into a probability matrix and fed into a conditional random field for further optimization. The invention greatly reduces the number of training images and improves efficiency and segmentation accuracy.

Description

A pixel-level portrait matting method based on generative adversarial networks
Technical field
The present invention relates to the technical field of computer vision, and more particularly to pixel-level portrait matting methods; it is specifically a pixel-level portrait matting method based on a generative adversarial network, used for separating a portrait from its background.
Background technology
Portrait matting has always been a hot topic in the field of computer vision. Pixel-level portrait matting requires accurately extracting the foreground of an object from its background, and belongs to the finer-grained problem of two-class semantic segmentation.
With the rapid development of e-commerce, portrait matting has broad application scenarios. For example, more and more people choose to buy clothes online, so e-commerce "search by image" functions have emerged; accurately finding similar clothing is very difficult, which makes it necessary to segment the portrait in a picture. Likewise, with the rise of various portrait-beautification applications, their background-blurring functions also require accurately distinguishing the portrait from the background. Again, in surveillance and investigation, portraits obtained from monitoring are pre-processed so that search targets can be located quickly. In most images, however, the background is complex, and techniques for accurately separating foreground from background still need improvement.
Before the deep learning era, related semantic segmentation problems were solved mainly with pixel-based clustering methods and with "graph partitioning" (Graph Partitioning) algorithms. Traditional graph-partition-based semantic segmentation methods abstract the image as a graph (Graph) and then segment it using algorithms from graph theory. The main shortcoming of these methods is that when the background is complex, or the background and the target are very similar, the segmentation quality is poor, and the methods are very slow.
With the arrival of the "AI 2.0" era, the development of deep learning technology, the growth of computing power, and the emergence of big data have established a good environment for the development of semantic segmentation technology. Many deep-neural-network models are now used for semantic segmentation, such as the fully convolutional network (FCN), which obtains dense end-to-end predictions by using upsampling deconvolution layers. Because its network structure is simple, however, it generally requires very many training-set images. In addition, datasets for pixel-level portrait matting are scarce, and their labeling and production cost is huge: annotating a single training image takes about half an hour of manual work, yet an FCN-based segmentation model needs tens of thousands of training images to achieve good results. Obtaining usable datasets by manual annotation is therefore time-consuming and laborious.
Summary of the invention
The purpose of the present invention is, in view of the above shortcomings of the prior art, to propose a pixel-level portrait matting method based on a generative adversarial network that obtains good segmentation results with fewer training-set images.
The present invention is a pixel-level portrait matting method based on a generative adversarial network. For a real image containing a portrait, a labeled image in which the portrait is separated from the background is obtained by manual annotation. The method is characterized by comprising the following steps:
(1) Pre-set the networks: pre-set a generator network and a discriminator network, and configure the two networks in an adversarial learning mode. Both the generator and the discriminator are deep neural networks; the generator is a deep neural network with skip connections.
(2) Generate a segmentation image: feed the real image containing a portrait into the generator, which outputs a segmentation image separating portrait from background, referred to as the generated person-background segmentation image.
(3) Compute the loss function values: take the real portrait image together with the generated person-background segmentation image as the first image pair, and the real portrait image together with the labeled image as the second image pair, and feed each pair into the discriminator. The discriminator computes the "fake" discrimination probability and the "true" discrimination probability; using the loss function formulas of the discriminator and the generator, the discriminator loss value and the generator loss value are obtained.
(4) Update the network parameters: minimize the discriminator loss value and the generator loss value respectively, and iteratively update the parameter values of the generator and the discriminator using the back-propagation algorithm for deep neural networks, completing the training of the generator and the discriminator.
(5) Generate test-set person-background segmentation images: after generator training is complete, the real image containing the portrait to be segmented is fed into the trained generator, which, through iterative computation, outputs the generated person-background segmentation image of the test set.
(6) Optimize the segmentation image and complete the matting: convert the generated person-background segmentation image output for the test set into probabilities, take the resulting person-background probability matrix of the test set as the input of a conditional random field, and use the conditional random field to further refine the segmentation image, completing the GAN-based portrait matting.
Compared with the prior art, the technical advantages of the invention are:
First, the present invention is based on a generative adversarial network. Compared with a fully convolutional network, the structure of a GAN is more complex and has more parameters, so it more easily learns detailed features of the portrait and the background, such as shape and color, and its segmentation accuracy is therefore higher. An FCN needs tens of thousands of training-set images to obtain good portrait-background separation, whereas the number of training-set images the present invention uses is up to two orders of magnitude smaller, improving training speed. The present invention provides a new method for portrait-background segmentation.
Second, when training the generator, the deep neural network with skip connections helps the network converge quickly and obtain good segmentation within a shorter time. Dropout is added in the decoder layers of the generator, preventing the over-fitting caused by an excessive number of parameters and an overly complex network structure. Both mechanisms enhance the robustness of the adversarial network and improve training speed.
Third, by applying the technical scheme of the present invention, after the generator and the discriminator are pre-set, the two networks learn by competing against each other, and the generated image is further optimized by feeding it into a conditional random field, improving the smoothness of the segmentation along the portrait edges.
Brief description of the drawings
Fig. 1 is a structure diagram of a generator network of the present invention;
Fig. 2 is a schematic diagram of an encoder structure of the present invention;
Fig. 3 is a schematic diagram of a decoder structure of the present invention;
Fig. 4 is a structure diagram of a discriminator network of the present invention;
Fig. 5 is the flow chart of the conditional-random-field optimization of the generated person-background segmentation image of the present invention;
Fig. 6 is the segmentation flow diagram of the pixel-level portrait matting of the present invention;
Fig. 7 is the segmentation result of portrait-background separation disclosed in the specific embodiment of the invention.
Detailed description of the embodiments
The present invention is explained in detail below with reference to the attached drawings.
Embodiment 1
Existing portrait matting methods need massive training-set images, which makes them inefficient. Addressing this problem, the present invention develops research and innovation and proposes a pixel-level portrait matting method based on a generative adversarial network. For a real image containing a portrait, like FCN-based training models, the present invention also obtains a labeled image of portrait-background separation by manual annotation, but the number of training-set images required is much smaller. Referring to Fig. 7b, this is the manually labeled portrait-background separation image. Referring to Fig. 6, the portrait matting method of the invention comprises the following steps:
(1) Pre-set the networks: pre-set a generator and a discriminator and configure the two networks in an adversarial learning mode, i.e., the generator's loss function is obtained through the discriminator's loss function. Both the generator and the discriminator are deep neural networks; the generator is a deep neural network with skip connections. The generator is mainly composed of a convolutional layer, a group of encoder layers, a group of decoder layers, and a deconvolution layer, referring to Fig. 1. In the direction of information flow it connects, in turn, a portrait-image input layer, a convolutional layer, the encoder layer group, the decoder layer group, a deconvolution layer, and a segmentation-image output layer. The encoder layers and decoder layers correspond one to one, and each encoder layer has a skip connection to its corresponding decoder layer. The adversarial (discriminator) network is composed of 5 convolutional layers with activation functions, referring to Fig. 4.
(2) Generate a segmentation image: before the real image containing a portrait is fed into the generator, the input image must be pre-processed, e.g., image-resolution modification, crop normalization, image flipping, etc. Through the generator's computation, a segmentation image separating portrait from background is output, referred to as the generated person-background segmentation image.
(3) Compute the loss function values: take the real portrait image together with the generated person-background segmentation image output by the generator as the first image pair, and the real portrait image together with the labeled image as the second image pair, and feed each pair into the discriminator. The discriminator weights compute the "fake" discrimination probability and the "true" discrimination probability; using the discriminator loss formula and the generator loss formula, the discriminator loss value and the generator loss value are obtained.
(4) Update the network parameters: minimize the discriminator loss value and the generator loss value respectively, and iteratively update the parameters of the generator and the discriminator using the deep-neural-network back-propagation algorithm, completing the training of both networks. The trained generator is used to generate the test person-background segmentation images; the discriminator is not used at test time.
(5) Generate test-set person-background segmentation images: after training is complete, the generator receives the real image containing the portrait to be segmented, i.e., the test image. Through iterative computation in the network, the trained generator segments the test image into person and background, outputting the generated person-background segmentation image of the test set.
(6) Optimize the segmentation image and complete the matting: convert the test-set generated person-background segmentation image into probabilities, take the resulting person-background probability matrix of the test set as the input of a conditional random field, and use the conditional random field to further refine the segmentation image, so that a more accurate portrait-background segmentation image is obtained with a smaller dataset, completing the GAN-based portrait matting.
The present invention trains a generative adversarial network (Generative Adversarial Network, GAN) with a smaller training-set and obtains better results. The method pre-sets a generator network and a discriminator network, both of which are deep neural networks. In addition, the present invention adds conditional-random-field optimization at the test-time output of the segmentation image, making the segmentation result finer.
Embodiment 2
The overall technical scheme of the GAN-based pixel-level portrait matting method is the same as in Embodiment 1. The generator described in step 1 is a deep neural network with skip connections: the skip connections (Skip Connection) between the N concatenated encoder layers and the N concatenated decoder layers of the generator form gradient-propagation paths and at the same time act as identity mappings. Referring to Fig. 1, encoder layers 3-8 are specifically connected to decoder layers 11-16; for example, the output of encoder layer 3 is fed simultaneously to encoder layer 4 and to decoder layer 16, where the path from layer 3 to layer 4 is the ordinary output and the path from layer 3 to decoder layer 16 is the result of the skip connection, and so on, forming the skip-connected neural-network structure of the present invention. Its purpose is to prevent the gradient-vanishing and degradation problems caused by increasing the depth of the generator.
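The pairing described here (encoder layers 3-8 feeding decoder layers 16-11) follows one arithmetic rule: encoder layer k skips to decoder layer 19 - k. A minimal Python sketch of that bookkeeping; the function name is ours, not the patent's:

```python
def skip_target(encoder_layer):
    """Return the decoder layer that receives the skip connection from the
    given encoder layer, per the example wiring (layer 3 -> 16, ... 8 -> 11)."""
    assert 3 <= encoder_layer <= 8, "skip connections exist only for encoder layers 3-8"
    return 19 - encoder_layer

# Full skip-connection wiring of the example generator
pairs = {k: skip_target(k) for k in range(3, 9)}
```

Each pair carries both the forward feature map (concatenated into the decoder, U-Net style) and a short gradient path back to the encoder.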
Embodiment 3
The overall technical scheme of the GAN-based pixel-level portrait matting method is the same as in Embodiments 1-2. In the present invention, dropout (random inactivation, Dropout) is introduced in the decoder layers of the generator: before its final output, each decoder layer of the generator randomly discards some of its parameters, specifically setting the values output by the randomly selected part of the parameters to 0. This removes much unnecessary computation and ensures the robustness of the network structure.
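Dropout as described here amounts to zeroing a random subset of a decoder layer's outputs during training. A small illustrative Python sketch: the 0.5 drop ratio comes from the embodiments below, while the 1/(1 - rate) rescaling of surviving activations is our assumption (standard "inverted dropout", not spelled out in the text):

```python
import random

def dropout(values, rate=0.5, rng=None):
    """Zero each activation with probability `rate`; scale survivors by
    1/(1 - rate) so the expected activation is unchanged (our assumption)."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    keep = 1.0 - rate
    return [v / keep if rng.random() < keep else 0.0 for v in values]
```

At test time the layer is simply left untouched, which is why the trained generator needs no special handling in step (5).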
Embodiment 4
The overall technical scheme of the GAN-based pixel-level portrait matting method is the same as in Embodiments 1-3, regarding the loss functions referred to in steps (3) and (4). The smaller the generator's loss function, the more valid the portrait-background segmentation image output by the generator. The generator loss is obtained by weighting two losses with two coefficients; the specific generator loss formula is:
Gen_loss = GAN_weight × GAN_loss + L1_weight × L1_loss
Wherein GAN_weight represents the weight of the adversarial loss function GAN_loss, with value range 0-50; in this example its value is 10. L1_weight represents the weight of the L1-norm loss function between the generated person-background segmentation image output by the generator and the labeled image, with value range 0-100; in this example its value is 90. GAN_loss is the adversarial loss function, and its calculation formula is as follows:

GAN_loss = mean( -log( P_fake + EPS ) )

In the formula, P_fake is the probability matrix generated by the discriminator from the real image containing a portrait and the generated person-background segmentation image output by the generator, i.e., the probability that the discriminator judges the generator's output to be real: the "fake" discrimination probability computed by the discriminator weights. To avoid the argument of the log function being 0 and causing an error, a very small number EPS is added in the formula as a simple modification; the value range of EPS is 10^-13 to 10^-11, and in this example EPS is 10^-13.
L1_loss is the L1-norm loss, also called the pixel-level regularization loss function; it describes the degree of difference between the generated person-background segmentation image output by the generator and the labeled image, i.e., the pixel-level regularization loss is determined by the difference between the person-background segmentation image generated by the generator and the labeled image.
The calculation formula of the L1-norm loss L1_loss is as follows:

L1_loss = (1 / N^2) × Σ_i Σ_j | targets_ij - output_ij |

In the formula, targets_ij represents the labeled image, output_ij represents the generated person-background segmentation image output by the generator, and N is the side length in pixels of the real image containing a portrait, with value range 256-1024; in this example N is 256.
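The generator loss of this embodiment can be sketched directly in Python. The weights and EPS take the example values from the text; averaging the adversarial term over the discriminator's output map is our reading of the formula, since the original equation was lost in extraction:

```python
import math

EPS = 1e-13          # small constant keeping log() finite (example value from the text)
GAN_WEIGHT = 10      # weight of the adversarial term (range 0-50, example value 10)
L1_WEIGHT = 90       # weight of the L1 term (range 0-100, example value 90)

def gan_loss(p_fake):
    """Adversarial part: -log of the discriminator's 'real' score on generated output."""
    flat = [p for row in p_fake for p in row]
    return sum(-math.log(p + EPS) for p in flat) / len(flat)

def l1_loss(targets, output):
    """Pixel-level regularization: mean absolute difference to the labeled image."""
    n = len(targets)
    return sum(abs(targets[i][j] - output[i][j])
               for i in range(n) for j in range(n)) / (n * n)

def gen_loss(p_fake, targets, output):
    """Gen_loss = GAN_weight * GAN_loss + L1_weight * L1_loss."""
    return GAN_WEIGHT * gan_loss(p_fake) + L1_WEIGHT * l1_loss(targets, output)
```

When the discriminator is fully fooled (p_fake near 1) and the generated image matches the label, both terms go to zero, which is the training target of the generator.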
Embodiment 5
The overall technical scheme of the GAN-based pixel-level portrait matting method is the same as in Embodiments 1-4, regarding the loss functions referred to in steps (3) and (4). The smaller the discriminator's loss function, the higher the accuracy of the discrimination probabilities output by the discriminator. The discriminator loss calculation formula is:

Dis_loss = mean( -( log( P_real + EPS ) + log( 1 - P_fake + EPS ) ) )

Wherein EPS, as in Embodiment 4, denotes a very small constant with range 10^-13 to 10^-11. P_real is the probability that the discriminator judges the labeled image to be real; specifically, it is the probability matrix generated from the second image pair, the real image containing a portrait together with the labeled image, i.e., the "true" discrimination probability computed by the discriminator weights.
By letting the generator and the discriminator learn by competing against each other, the present invention uses the deep-neural-network back-propagation algorithm to iteratively update the parameters of the generator and the discriminator during propagation, minimizing the discriminator loss and the generator loss. This achieves fast network convergence and thereby enhances the generative capacity of the generator.
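Under the same reading as the generator loss, the discriminator loss pushes the "true" probability P_real toward 1 and the "fake" probability P_fake toward 0, as in a standard pix2pix-style discriminator. A hedged Python sketch (the mean over the output map is our assumption):

```python
import math

EPS = 1e-13  # same small constant as in the generator loss

def dis_loss(p_real, p_fake):
    """Discriminator loss sketch: reward high scores on (image, label) pairs
    and low scores on (image, generated) pairs."""
    flat_r = [p for row in p_real for p in row]
    flat_f = [p for row in p_fake for p in row]
    real_term = sum(-math.log(p + EPS) for p in flat_r) / len(flat_r)
    fake_term = sum(-math.log(1 - p + EPS) for p in flat_f) / len(flat_f)
    return real_term + fake_term
```

A perfect discriminator (p_real = 1, p_fake = 0) drives this loss to zero; a confused one (both at 0.5) sits at 2·log 2, the adversarial equilibrium the training aims for.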
Embodiment 6
The overall technical scheme of the GAN-based pixel-level portrait matting method is the same as in Embodiments 1-5. In step (6), the conditional random field is used to further refine the generated person-background segmentation image after its conversion to probabilities. After the real image containing a portrait has been fed into the generator and the person-background segmentation image resulting from binary semantic segmentation has been output, the generated person-background segmentation image is converted into probabilities: the whole image is divided by 255, converting the values into the range 0-1, which serves as the probability matrix of the portrait part. The probability matrix of the background part is obtained by subtracting the portrait probability matrix from 1. This provides the conditional random field with the probability matrices used to compute its potential functions.
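The probability conversion of this embodiment is a single elementwise operation. A minimal Python sketch, assuming the generator's output is an 8-bit grayscale map:

```python
def to_probabilities(gray_image):
    """Convert the generator's 8-bit person-background map into per-pixel
    probabilities: divide by 255 for the portrait part, 1 - p for background."""
    fg = [[p / 255.0 for p in row] for row in gray_image]
    bg = [[1.0 - p for p in row] for row in fg]
    return fg, bg
```

The two matrices sum to 1 at every pixel, so together they form the binary class distribution the CRF's potential functions expect.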
Embodiment 7
The overall technical scheme of the GAN-based pixel-level portrait matting method is the same as in Embodiments 1-6. When performing the conditional-random-field optimization involved in the present invention, third-party libraries, such as the Python-based denseCRF library, can be called. Referring to Fig. 5, the specific optimization process can be expressed as:
(1) Convert the generated person-background segmentation image output by the generator for the test set into a probability matrix.
(2) Obtain the position-correlation features of the portrait and the background in the image according to the conditional-random-field algorithm, which makes the segmentation boundary between portrait and background smoother.
(3) Obtain the color correlation of the portrait and the background in the image according to the conditional-random-field algorithm, and finally output the optimized portrait segmentation image, completing the GAN-based pixel-level portrait matting.
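Dense-CRF libraries of the kind mentioned here take the class probabilities as negative-log-probability unary potentials before adding the position (smoothness) and color (appearance) pairwise terms. A minimal Python sketch of that conversion; the eps guard is our addition:

```python
import math

def unary_from_probs(fg, bg, eps=1e-12):
    """Turn foreground/background probability matrices into -log(p) unary
    potentials, the form a dense-CRF library expects; eps avoids log(0)."""
    u_fg = [[-math.log(p + eps) for p in row] for row in fg]
    u_bg = [[-math.log(p + eps) for p in row] for row in bg]
    return u_fg, u_bg
```

Confident pixels (p near 1) get near-zero unary cost, so the CRF's pairwise terms mainly rework the uncertain pixels along the portrait edges.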
The GAN-based pixel-level portrait matting method of the present invention solves the problem that machine matting requires massive, costly-to-produce datasets for training and optimizing the network. Its realization includes: pre-setting a generator and a discriminator in an adversarial learning mode, the generator being a deep neural network with skip connections; training the generator to output person-background segmentation images; computing the loss function values; updating the network parameters; generating the test-set person-background segmentation images; and optimizing the segmentation image to complete the portrait matting. The present invention adjusts the configuration parameters of the generator and the discriminator by minimizing the generator loss value and the discriminator loss value, and feeds the generated person-background segmentation image, after conversion to probabilities, into a conditional random field to further optimize the segmentation result. The training process of the present invention greatly reduces the number of training-set images and improves efficiency and segmentation accuracy. It is used for separating a portrait from its background.
A more detailed example is given below; with reference to the attached drawings and the portrait matting process, the present invention is further described:
Embodiment 8
The overall technical scheme of the GAN-based pixel-level portrait matting method is the same as in Embodiments 1-7. The pixel-level portrait matting method based on a generative adversarial network proposed by the present invention builds, for a single real image containing a portrait, an adversarial network framework comprising two deep neural networks, one being the generator and the other the discriminator; the two networks are trained simultaneously, learning from and competing with each other. The purpose of the generator is to generate a person-background segmentation image from the real image containing a portrait; the purpose of the discriminator is to distinguish the labeled image from the person-background segmentation image generated by the generator. Through adversarial training, the generator is made to produce output as close as possible to the labeled image, i.e., the true segmentation result. In addition, the present invention adds conditional-random-field optimization at the test-time output of the generated person-background segmentation image, making the segmentation result finer. A more complete specific embodiment of the present invention follows:
First, the network structure of the present invention is introduced:
As shown in Fig. 1, Fig. 1 is the structure diagram of the generator in this specific embodiment. In this example the present invention uses a U-Net-based architecture; the specific meaning of each layer of the generator structure is as follows:
Layer 1 is the portrait-image input layer; its operations mainly include portrait-photo resolution modification and crop normalization;
Layer 2 is a convolutional layer with stride 2, kernel size 4x4, and 64 kernels;
Layers 3-9 are encoder layers. As shown in Fig. 2, each encoder layer contains three sublayers: a convolutional layer, a ReLU (Rectified Linear Unit) layer, and a batch normalization (Batch Normalization) layer. Each convolutional layer has stride 2 and twice as many kernels as the layer above it; the ReLU layer realizes the nonlinear transformation; the batch normalization layer normalizes the layer's output (i.e., the next layer's input);
Layers 10-16 are decoder layers. As shown in Fig. 3, each decoder layer contains three sublayers: a deconvolution layer, a ReLU layer, and a batch normalization layer. Each deconvolution layer has stride 2 in both height and width, i.e., the output is twice the input in each spatial direction. The ReLU and batch normalization layers act as described above. In addition, dropout (random inactivation) is introduced in the decoder layers; during training, the drop ratio is 0.5;
Layer 17 is a deconvolution layer with output size 256x256x1;
Layer 18 is the segmentation-image output layer; its main operation is image cropping, and it finally outputs the generated person-background segmentation image;
It should be noted that, in the above description, skip connections (Skip Connection) between encoder layers and decoder layers serve as gradient-propagation paths.
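The layer sizes listed above can be checked with simple stride arithmetic: one initial stride-2 convolution plus seven encoder layers halve the 256-pixel input eight times down to 1x1, and seven decoder layers plus the final deconvolution double it back to the stated 256x256 output. A Python sketch of this bookkeeping, assuming "same"-style padding:

```python
def generator_shapes(n=256):
    """Spatial-size trace of the example generator: 8 halvings (layer 2 plus
    encoder layers 3-9), then 8 doublings (decoder layers 10-16 plus layer 17)."""
    sizes = [n]
    for _ in range(8):                 # 1 conv layer + 7 encoder layers
        sizes.append(sizes[-1] // 2)
    for _ in range(8):                 # 7 decoder layers + 1 deconvolution layer
        sizes.append(sizes[-1] * 2)
    return sizes
```

The 1x1 bottleneck in the middle is what makes the skip connections essential: without them, all spatial detail would have to pass through a single activation column.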
The foregoing describes the specific implementation of the generator in the present invention. It should be noted that, within the field of GAN-based portrait segmentation and under the condition that the input is a real image containing a portrait, other modifications made to this application by other professionals in this field, such as parameter adjustments, belong to the protection scope of this application.
As shown in Fig. 4, which is a schematic diagram of the discrimination network structure of the present invention, the meaning of each layer of the discrimination network is as follows:
The 1st layer is the discrimination-image input layer; its main operation is to concatenate the generated person-background segmentation image output by the generation network with the label image;
The 2nd layer is a convolutional layer with stride 2, kernel size 4*4, and 64 kernels;
The 3rd layer is a convolutional layer with stride 2, kernel size 4*4, and 128 kernels;
The 4th layer is a convolutional layer with stride 2, kernel size 4*4, and 256 kernels;
The 5th layer is a convolutional layer with stride 1, kernel size 4*4, and 512 kernels;
The 6th layer is a convolutional layer with stride 1, kernel size 4*4, and 1 kernel;
The 7th layer is the discrimination-network output layer; its basic operation is activation by the Sigmoid function (a common activation function in deep learning), and it outputs a probability map of size 30*30.
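The 30*30 output size follows from the convolution strides listed above. With 4*4 kernels and an assumed padding of 1 (the padding is not stated in the text), the spatial size evolves as:

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    # output spatial size of one convolution (pad=1 is an assumed choice)
    return (size + 2 * pad - kernel) // stride + 1

size = 256                       # the concatenated 256x256 input pair
for stride in (2, 2, 2, 1, 1):   # strides of layers 2-6 of the discrimination network
    size = conv_out(size, stride=stride)
print(size)  # 30: each entry of the 30x30 map judges one patch of the input
```

The sequence is 256 -> 128 -> 64 -> 32 -> 31 -> 30, matching the 30*30 probability map the Sigmoid layer outputs.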
The foregoing describes the implementation of the discrimination network in this embodiment. It should be noted that the method is based on a generative adversarial network applied to portrait segmentation, with a real image containing a portrait as input; other modifications made to the present application by those skilled in the art, such as parameter adjustments, fall within the protection scope of the present application.
The process by which the present invention optimizes the segmentation image using a conditional random field is introduced next:
As shown in Fig. 5, which is the flow chart for optimizing the generated person-background segmentation image based on a conditional random field, the optimization process mainly comprises the following steps:
Step 1: the output probabilities of the generation network are used to compute the probability matrices that supply the potential functions of the conditional random field.
Step 2: picture-position features are added; the conditional random field algorithm derives position-related features of the portrait and background in the image, which makes the boundary between portrait and background smoother.
Step 3: picture-colour features are added; the conditional random field algorithm derives colour-related features of the portrait and background in the image.
Step 4: the model is solved iteratively.
Step 5: the optimized person-background segmentation image is output.
Based on the generation network, discrimination network, and conditional random field described above, the pixel-level portrait matting and segmentation flow proposed by the present invention is shown in Fig. 6 and comprises the following steps:
S301: a real image containing a portrait is input into the generation network, which outputs a segmentation image separating the portrait from the background. As usual, some preprocessing, such as scaling and normalization, is applied to the real image containing a portrait before it is input into the network.
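The preprocessing mentioned in S301 is not specified beyond scaling and normalization; the sketch below shows one plausible choice (nearest-neighbour resize to 256*256 and scaling to [-1, 1]) and is an assumption, not the patented procedure.

```python
import numpy as np

def preprocess(image, target=256):
    # nearest-neighbour resize to target x target, then map pixel values to [-1, 1];
    # both choices are illustrative assumptions, not taken from the text
    h, w = image.shape[:2]
    rows = np.arange(target) * h // target
    cols = np.arange(target) * w // target
    resized = image[rows][:, cols]
    return resized.astype(np.float32) / 127.5 - 1.0

img = np.random.randint(0, 256, (300, 400), dtype=np.uint8)
x = preprocess(img)
print(x.shape, float(x.min()) >= -1.0, float(x.max()) <= 1.0)
```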
S302: the real image containing a portrait together with the generated person-background segmentation image output by the generation network forms the first image pair, and the real image containing a portrait together with the label image forms the second image pair; the two pairs are input into the discrimination network separately. The discrimination network computes the real discrimination probability P_real and the fake discrimination probability P_fake, from which the discrimination network loss disscrim_loss and the generation network loss gen_loss are obtained.
In this embodiment, the training-set image labels are annotated manually: portrait pixels have value 255 and background pixels have value 0; such an annotated picture is referred to as the label image in this application. The discrimination network loss function disscrim_loss is computed as:
disscrim_loss = -(1/N²) Σ_{i=1}^{N} Σ_{j=1}^{N} [log(P_real^{ij} + EPS) + log(1 - P_fake^{ij} + EPS)]
where EPS is a very small constant in the range 10^-13 to 10^-11; in this example its value is 10^-12.
In this embodiment, the real image containing a portrait together with the portrait-background segmentation image output by the generation network forms the first image pair, and the real image containing a portrait together with the label image forms the second image pair; the two pairs are input into the discrimination network separately. Each time, the discrimination network outputs a probability matrix representing the similarity between the corresponding parts of the two input images. The probability matrix produced from the real image containing a portrait and the generated person-background segmentation image is the probability that the discrimination network considers the generation network's output to be real, referred to in this application as P_fake; likewise, the probability matrix produced from the second image pair, the real image containing a portrait and the label image, is the probability that the discrimination network considers the label image to be real, referred to in this application as P_real. By minimizing the discrimination network loss, the discrimination network's ability to distinguish the generation network's output from the label image is improved.
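The discrimination loss above can be written directly in NumPy; the 30*30 map size matches the discrimination network's output, and the example probabilities are illustrative only.

```python
import numpy as np

EPS = 1e-12  # value used in this example; the stated range is 1e-13 to 1e-11

def disscrim_loss(p_real, p_fake):
    # mean over the N x N probability maps of the two discriminator log terms
    return -np.mean(np.log(p_real + EPS) + np.log(1.0 - p_fake + EPS))

p_real = np.full((30, 30), 0.9)  # discriminator fairly sure the label pair is real
p_fake = np.full((30, 30), 0.1)  # and fairly sure the generated pair is fake
print(round(float(disscrim_loss(p_real, p_fake)), 4))  # 0.2107
```

A well-trained discriminator drives P_real toward 1 and P_fake toward 0, sending this loss toward 0.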
In this embodiment, the generation network loss consists of two parts. One is the adversarial loss GAN_loss, computed as:
GAN_loss = -(1/N²) Σ_{i=1}^{N} Σ_{j=1}^{N} log(P_fake^{ij} + EPS)
The other is the L1 norm loss, which describes the difference between the generated person-background segmentation image output by the generation network and the label image:
L1_loss = (1/N²) Σ_{i=1}^{N} Σ_{j=1}^{N} |targets^{ij} - output^{ij}|
where targets^{ij} denotes the label image and output^{ij} denotes the portrait-background segmentation image output by the generation network.
The two losses above are weighted by two coefficients, giving the final generation network loss:
gen_loss = GAN_weight × GAN_loss + L1_weight × L1_loss
where GAN_weight is the weight of the adversarial loss, with value 1 in this example, and L1_weight is the weight of the L1 norm loss between the generation network's output and the label image, with value 100 in this example.
By minimizing the generation network loss, the generation network's ability to produce correct portrait-background segmentation images is improved.
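With the weights of this example (GAN_weight = 1, L1_weight = 100), the generation loss can be sketched as follows; the example tensors are illustrative only.

```python
import numpy as np

EPS = 1e-12
GAN_weight, L1_weight = 1.0, 100.0  # weights used in this example

def gen_loss(p_fake, targets, output):
    gan = -np.mean(np.log(p_fake + EPS))    # adversarial term over the 30x30 map
    l1 = np.mean(np.abs(targets - output))  # pixel-wise L1 term vs. the label image
    return GAN_weight * gan + L1_weight * l1

p_fake = np.full((30, 30), 0.5)        # discriminator undecided about the fake pair
targets = np.ones((256, 256))          # label image (scaled to 0/1 here)
output = np.full((256, 256), 0.99)     # near-perfect generator output
print(round(float(gen_loss(p_fake, targets, output)), 4))  # 1.6931
```

Here the adversarial term contributes -log(0.5) ≈ 0.6931 and the weighted L1 term contributes 100 × 0.01 = 1.0.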
S303: the discrimination network loss and the generation network loss are minimized separately, and the parameters of the generation network and the discrimination network are updated iteratively using the neural-network back-propagation algorithm.
It should be noted that the parameter adjustment described herein is based on the neural-network back-propagation algorithm; the adjustment process and adjustment strategies fall within the protection scope of the present application.
S304: after parameter adjustment is completed, the real image containing a portrait to be segmented is received as input to the generation network, and the generation network's output is obtained.
In this embodiment, once the parameters of the generation network and discrimination network have been tuned through steps S301-S303, the real image containing a portrait to be segmented can, after preprocessing, be input into the generation network, which outputs a segmentation image. Pixel values in the generated person-background segmentation image are continuously distributed; the closer a value is to 255, the more likely the network considers that pixel to belong to the portrait.
S305: the generation network's output probabilities are taken as input, and a conditional random field is used to further refine the output.
In this embodiment, to further refine the segmentation result and smooth the portrait edges of the person-background segmentation image, the generation network's output is fed into the conditional random field. Since the conditional random field algorithm takes probability matrices as input, while the generation network outputs pixel values, the output image is first converted to probabilities: the whole image is divided by 255, mapping values into the 0-1 range, which gives the probability matrix for the portrait part; the background probability matrix is 1 minus the portrait probability matrix. Using the denseCRF library for Python, the position and colour features of the image are extracted and the model is solved iteratively, yielding a finer segmentation image.
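The probability conversion in S305 is simple enough to sketch. The dense-CRF step itself relies on the Python denseCRF library mentioned in the text, whose exact API is not specified here, so only the conversion is shown; the toy output array is illustrative.

```python
import numpy as np

def to_crf_probs(gen_output):
    # portrait probability = pixel value / 255; background probability = 1 - portrait
    p_portrait = gen_output.astype(np.float64) / 255.0
    return np.stack([1.0 - p_portrait, p_portrait])  # (2, H, W): background, portrait

out = np.array([[0, 128, 255]], dtype=np.uint8)  # toy 1x3 generator output
probs = to_crf_probs(out)
# a dense-CRF implementation would take -log(probs) as unary potentials and add
# the position and colour pairwise features before solving iteratively
print(probs[1].round(3).tolist())  # [[0.0, 0.502, 1.0]]
```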
The technical effect of the present invention is further explained below through simulation:
Embodiment 9
The overall technical scheme of the pixel-level portrait matting method based on a generative adversarial network is the same as in embodiments 1-8. In this embodiment, the ACC index is used to measure the actual generation performance of the generative adversarial network; the specific portrait-background segmentation results are shown in Fig. 7.
Fig. 7a is a real test-set image containing a portrait; Fig. 7b is the corresponding label image; Fig. 7c is the portrait-background segmentation image after adversarial training and optimization. By visual inspection, the generative adversarial network reproduces the torso contour of the portrait and achieves a good segmentation result. According to the ACC computation rule, the more accurate the generated result, the smaller its ACC value. Computed by the ACC formula, the ACC value for Fig. 7 is 0.0138. In this example 500 images were tested; the ACC values of all images are on the order of 10^-2, and at the lower end of that order, which shows that the generation results of the network of the invention have high precision.
We now live in the information age, in which image information accounts for a large share of all information; acquiring, processing, and using image information has become the mainstream. Although there is still some gap between the generated person-background segmentation images of the present invention and the label images, manual annotation is time-consuming and laborious and not a long-term solution; machine annotation is the direction of future development. In the prior art, segmentation based on fully convolutional networks requires about 100,000 training images to obtain good segmentation results. The present invention needs only 1,000 training images to obtain good person-background segmentation images, two orders of magnitude fewer than FCN, greatly improving the speed and precision of image segmentation and favouring the adoption of portrait segmentation technology in engineering applications.
In brief, the pixel-level portrait matting method based on a generative adversarial network disclosed by the invention solves the problem that machine matting requires training and optimizing networks on massive, expensively produced datasets. The invention presets a generation network and a discrimination network in an adversarial learning configuration, the generation network being a deep neural network with skip connections; inputs the real image containing a portrait into the generation network to output a person-background segmentation image; inputs the first image pair and the second image pair into the discrimination network separately to output discrimination probabilities, and determines the loss functions of the generation network and the discrimination network from the real and fake discrimination probabilities; adjusts the configuration parameters of the two networks by minimizing the two loss function values, completing the training of the generation network; inputs a test image into the trained generation network to generate a person-background segmentation image; and converts the generated person-background segmentation image to probabilities and feeds the probability matrices into a conditional random field for further optimization. The invention greatly reduces the number of images needed for network training, improves efficiency and segmentation precision, and provides an effective new approach to pixel-level portrait matting.

Claims (6)

1. A pixel-level portrait matting method based on a generative adversarial network, wherein for a real image containing a portrait, a label image in which the real portrait is separated from the background is obtained by manual annotation, characterized by comprising the following steps:
(1) presetting the networks: a generation network and a discrimination network are preset and configured in an adversarial learning mode; both networks are deep neural networks, wherein the deep neural network of the generation network is a deep neural network with skip connections, also referred to as the generation network;
(2) generating the segmentation image: the real image containing a portrait is input into the generation network, which outputs a segmentation image separating portrait from background, referred to as the generated person-background segmentation image;
(3) computing the loss function values: the real image containing a portrait together with the generated person-background segmentation image output by the generation network forms the first image pair, and the real image containing a portrait together with the label image forms the second image pair; the two pairs are input into the discrimination network separately; the discrimination network computes the fake discrimination probability and the real discrimination probability, and the loss function value of the discrimination network and the loss function value of the generation network are obtained from their respective loss function formulas;
(4) updating the network parameters: the loss function value of the discrimination network and the loss function value of the generation network are minimized separately, and the parameters of the generation network and the discrimination network are updated iteratively using the deep-neural-network back-propagation algorithm, completing the training of the generation network and the discrimination network;
(5) generating person-background segmentation images for the test set: after training of the generation network is completed, the trained generation network receives the real image containing a portrait to be segmented and, by iterative computation within the network, outputs the generated person-background segmentation image of the test image;
(6) optimizing the segmentation image to complete portrait matting: the generated person-background segmentation image of the test image is converted to probabilities, giving the portrait probability matrix and the background probability matrix of the generated person-background segmentation image of the test image; the portrait probability matrix and the background probability matrix are used as input to a conditional random field, which further refines the converted generated person-background segmentation image of the test image, giving a more accurate image separating portrait from background, thereby completing pixel-level portrait matting based on a generative adversarial network.
2. The pixel-level portrait matting method based on a generative adversarial network of claim 1, characterized in that the deep neural network with skip connections described in step (1) comprises gradient propagation paths formed by skip connections between the N concatenated encoder layers and the N concatenated decoder layers that make up the generation network.
3. The pixel-level portrait matting method based on a generative adversarial network of claim 2, characterized in that random dropout is introduced in the decoder layers of the generation network, specifically by randomly discarding (dropping out) activations before the final output of each decoder layer of the generation network.
4. The pixel-level portrait matting method based on a generative adversarial network of claim 1, characterized in that, for the loss functions referred to in step (3) and step (4), the smaller the loss function of the generation network, the higher the validity of the portrait-background segmentation image output by the generation network; the generation network loss function is obtained by weighting two losses with two coefficients, the specific generation network loss gen_loss being:
gen_loss = GAN_weight × GAN_loss + L1_weight × L1_loss
where GAN_weight is the weight of the adversarial loss GAN_loss, with value 0-50; L1_weight is the weight of the L1 norm loss between the generated person-background segmentation image output by the generation network and the label image, with value 0-100; GAN_loss is the adversarial loss, computed as:
GAN_loss = -(1/N²) Σ_{i=1}^{N} Σ_{j=1}^{N} log(P_fake^{ij} + EPS)
where P_fake is the probability matrix produced from the first image pair, the real image containing a portrait and the generated person-background segmentation image output by the generation network, i.e., the probability that the discrimination network considers the generation network's output to be real; EPS is a very small constant in the range 10^-13 to 10^-11;
the L1 norm loss L1_loss describes the difference between the generated person-background segmentation image output by the generation network and the label image, computed as:
L1_loss = (1/N²) Σ_{i=1}^{N} Σ_{j=1}^{N} |targets^{ij} - output^{ij}|
where targets^{ij} denotes the label image and output^{ij} denotes the generated person-background segmentation image output by the generation network.
5. The pixel-level portrait matting method based on a generative adversarial network of claim 1, characterized in that, for the loss functions referred to in step (3) and step (4), the smaller the loss function of the discrimination network, the higher the accuracy of the discrimination probabilities output by the discrimination network; the discrimination network loss disscrim_loss is:
disscrim_loss = -(1/N²) Σ_{i=1}^{N} Σ_{j=1}^{N} [log(P_real^{ij} + EPS) + log(1 - P_fake^{ij} + EPS)]
where EPS is a very small constant in the range 10^-13 to 10^-11; P_real is the probability matrix produced from the second image pair, the real image containing a portrait and the label image, i.e., the probability that the discrimination network considers the label image to be real.
6. The pixel-level portrait matting method based on a generative adversarial network of claim 1, characterized in that the further refinement, by the conditional random field, of the converted generated person-background segmentation image of the test image described in step (6) comprises: after the real image containing a portrait is input into the generation network to output the segmentation image separating portrait from background following two-class semantic segmentation, the generated person-background segmentation image of the test image is converted to probabilities, providing the conditional random field with the probability matrices used to compute its potential functions.
Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090189830A1 (en) * 2008-01-23 2009-07-30 Deering Michael F Eye Mounted Displays
CN104272344A (en) * 2012-10-24 2015-01-07 株式会社摩如富 Image processing device, image processing method, image processing program, and recording medium
CN105160310A (en) * 2015-08-25 2015-12-16 西安电子科技大学 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN106157319A (en) * 2016-07-28 2016-11-23 哈尔滨工业大学 The significance detection method that region based on convolutional neural networks and Pixel-level merge
CN106683048A (en) * 2016-11-30 2017-05-17 浙江宇视科技有限公司 Image super-resolution method and image super-resolution equipment
CN106845471A (en) * 2017-02-20 2017-06-13 深圳市唯特视科技有限公司 A kind of vision significance Forecasting Methodology based on generation confrontation network
CN107154023A (en) * 2017-05-17 2017-09-12 电子科技大学 Face super-resolution reconstruction method based on generation confrontation network and sub-pix convolution
CN107274358A (en) * 2017-05-23 2017-10-20 广东工业大学 Image Super-resolution recovery technology based on cGAN algorithms

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090189830A1 (en) * 2008-01-23 2009-07-30 Deering Michael F Eye Mounted Displays
CN104272344A (en) * 2012-10-24 2015-01-07 株式会社摩如富 Image processing device, image processing method, image processing program, and recording medium
CN105160310A (en) * 2015-08-25 2015-12-16 西安电子科技大学 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN106157319A (en) * 2016-07-28 2016-11-23 哈尔滨工业大学 The significance detection method that region based on convolutional neural networks and Pixel-level merge
CN106683048A (en) * 2016-11-30 2017-05-17 浙江宇视科技有限公司 Image super-resolution method and image super-resolution equipment
CN106845471A (en) * 2017-02-20 2017-06-13 深圳市唯特视科技有限公司 A kind of vision significance Forecasting Methodology based on generation confrontation network
CN107154023A (en) * 2017-05-17 2017-09-12 电子科技大学 Face super-resolution reconstruction method based on generation confrontation network and sub-pix convolution
CN107274358A (en) * 2017-05-23 2017-10-20 广东工业大学 Image Super-resolution recovery technology based on cGAN algorithms

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JUN CHU ET AL: "Learnable contextual regularization for semantic segmentation of indoor scene images", 《2017 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP)》 *
SRIVASTAVA N ET AL: "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", 《JOURNAL OF MACHINE LEARNING RESEARCH》 *
ZHANG WEI ET AL: "Face Recognition Development Based on Generative Adversarial Networks", 《电子世界》 *
XU YIFENG ET AL: "A Survey of Generative Adversarial Network Theoretical Models and Applications", 《金华职业技术学院学报》 *

Cited By (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805789B (en) * 2018-05-29 2022-06-03 厦门市美亚柏科信息股份有限公司 Method, device and equipment for removing watermark based on antagonistic neural network and readable medium
CN108805789A (en) * 2018-05-29 2018-11-13 厦门市美亚柏科信息股份有限公司 A method, apparatus, device and readable medium for removing watermarks based on adversarial neural network
CN108830209A (en) * 2018-06-08 2018-11-16 西安电子科技大学 Remote sensing image road extraction method based on generative adversarial network
CN108830209B (en) * 2018-06-08 2021-12-17 西安电子科技大学 Remote sensing image road extraction method based on generation countermeasure network
WO2019237342A1 (en) * 2018-06-15 2019-12-19 富士通株式会社 Training method and apparatus for classification neural network for semantic segmentation, and electronic device
CN109035267A (en) * 2018-06-22 2018-12-18 华东师范大学 An image object matting method based on deep learning
CN109035253A (en) * 2018-07-04 2018-12-18 长沙全度影像科技有限公司 A deep-learning automatic image matting method guided by semantic segmentation information
CN109034162A (en) * 2018-07-13 2018-12-18 南京邮电大学 An image semantic segmentation method
CN109034162B (en) * 2018-07-13 2022-07-26 南京邮电大学 Image semantic segmentation method
CN110728626A (en) * 2018-07-16 2020-01-24 宁波舜宇光电信息有限公司 Image deblurring method and apparatus and training thereof
CN109242000A (en) * 2018-08-09 2019-01-18 百度在线网络技术(北京)有限公司 Image processing method, device, equipment and computer readable storage medium
CN109166126A (en) * 2018-08-13 2019-01-08 苏州比格威医疗科技有限公司 Method for segmenting paint cracks on ICGA images based on a conditional generative adversarial network
CN109166126B (en) * 2018-08-13 2022-02-18 苏州比格威医疗科技有限公司 Method for segmenting paint cracks on ICGA images based on a conditional generative adversarial network
CN109214973A (en) * 2018-08-24 2019-01-15 中国科学技术大学 Adversarial security barrier generation method against steganalysis neural networks
CN110880001A (en) * 2018-09-06 2020-03-13 银河水滴科技(北京)有限公司 Training method, device and storage medium for semantic segmentation neural network
CN109190707A (en) * 2018-09-12 2019-01-11 深圳市唯特视科技有限公司 A domain-adaptive image semantic segmentation method based on adversarial learning
CN110909754A (en) * 2018-09-14 2020-03-24 哈尔滨工业大学(深圳) Attribute generative adversarial network and matching clothing generation method based thereon
CN110909754B (en) * 2018-09-14 2023-04-07 哈尔滨工业大学(深圳) Attribute generative adversarial network and matching clothing generation method based thereon
CN109544652A (en) * 2018-10-18 2019-03-29 江苏大学 Nuclear magnetic resonance weighted imaging method based on deep generative adversarial neural network
CN109523523B (en) * 2018-11-01 2020-05-05 郑宇铄 Vertebral body positioning, identification and segmentation method based on FCN neural network and adversarial learning
CN109448006B (en) * 2018-11-01 2022-01-28 江西理工大学 Attention-based U-shaped densely connected retinal vessel segmentation method
CN109523523A (en) * 2018-11-01 2019-03-26 郑宇铄 Vertebral body positioning, identification and segmentation method based on FCN neural network and adversarial learning
CN109448006A (en) * 2018-11-01 2019-03-08 江西理工大学 Attention-based U-shaped densely connected retinal vessel segmentation method
WO2020107264A1 (en) * 2018-11-28 2020-06-04 深圳市大疆创新科技有限公司 Neural network architecture search method and apparatus
CN111406263A (en) * 2018-11-28 2020-07-10 深圳市大疆创新科技有限公司 Method and device for searching neural network architecture
CN109754403A (en) * 2018-11-29 2019-05-14 中国科学院深圳先进技术研究院 Automatic tumor segmentation method and system in CT images
CN109543827A (en) * 2018-12-02 2019-03-29 清华大学 Generative adversarial network device and training method
US11574199B2 (en) 2018-12-02 2023-02-07 Tsinghua University Generative adversarial network device and training method thereof
CN109543827B (en) * 2018-12-02 2020-12-29 清华大学 Generative adversarial network device and training method
CN109859113A (en) * 2018-12-25 2019-06-07 北京奇艺世纪科技有限公司 Model generation method, image enhancement method, device and computer-readable storage medium
US11270446B2 (en) 2018-12-28 2022-03-08 Shanghai United Imaging Intelligence Co., Ltd. Systems and methods for image processing
US11948314B2 (en) 2018-12-28 2024-04-02 Shanghai United Imaging Intelligence Co., Ltd. Systems and methods for image processing
CN109754447A (en) * 2018-12-28 2019-05-14 上海联影智能医疗科技有限公司 Image generating method, device, equipment and storage medium
CN109639710B (en) * 2018-12-29 2021-02-26 浙江工业大学 Network attack defense method based on adversarial training
CN109639710A (en) * 2018-12-29 2019-04-16 浙江工业大学 Network attack defense method based on adversarial training
CN111462162B (en) * 2019-01-18 2023-07-21 上海大学 Foreground segmentation algorithm for specific class pictures
CN111462162A (en) * 2019-01-18 2020-07-28 上海大学 Foreground segmentation algorithm for specific class of pictures
CN109840561A (en) * 2019-01-25 2019-06-04 湘潭大学 An automatic garbage-image generation method usable for garbage classification
CN111582278A (en) * 2019-02-19 2020-08-25 北京嘀嘀无限科技发展有限公司 Portrait segmentation method and device and electronic equipment
CN111582278B (en) * 2019-02-19 2023-12-08 北京嘀嘀无限科技发展有限公司 Portrait segmentation method and device and electronic equipment
CN110188760A (en) * 2019-04-01 2019-08-30 上海卫莎网络科技有限公司 An image processing model training method, image processing method and electronic device
CN110334805A (en) * 2019-05-05 2019-10-15 中山大学 JPEG-domain image steganography method and system based on generative adversarial network
CN110334805B (en) * 2019-05-05 2022-10-25 中山大学 JPEG-domain image steganography method and system based on generative adversarial network
CN110222722A (en) * 2019-05-14 2019-09-10 华南理工大学 Interactive image stylization processing method, system, computing device and storage medium
CN110276745A (en) * 2019-05-22 2019-09-24 南京航空航天大学 A pathological image detection algorithm based on generative adversarial network
CN110189341B (en) * 2019-06-05 2021-08-10 北京青燕祥云科技有限公司 Image segmentation model training method, image segmentation method and device
CN110189341A (en) * 2019-06-05 2019-08-30 北京青燕祥云科技有限公司 Image segmentation model training method, image segmentation method and device
CN110287851A (en) * 2019-06-20 2019-09-27 厦门市美亚柏科信息股份有限公司 A target image localization method, device, system and storage medium
CN110458904B (en) * 2019-08-06 2023-11-10 苏州瑞派宁科技有限公司 Method and device for generating capsule endoscope image and computer storage medium
CN110458904A (en) * 2019-08-06 2019-11-15 苏州瑞派宁科技有限公司 Capsule endoscope image generation method, device and computer storage medium
CN110610509B (en) * 2019-09-18 2023-07-21 上海大学 Optimized matting method and system with specifiable categories
CN110610509A (en) * 2019-09-18 2019-12-24 上海大学 Optimized matting method and system with specifiable categories
WO2021102697A1 (en) * 2019-11-26 2021-06-03 驭势(上海)汽车科技有限公司 Method and system for training generative adversarial network, and electronic device and storage medium
CN111080592A (en) * 2019-12-06 2020-04-28 广州柏视医疗科技有限公司 Rib extraction method and device based on deep learning
CN111161272B (en) * 2019-12-31 2022-02-08 北京理工大学 Embryo tissue segmentation method based on generative adversarial network
CN111222440A (en) * 2019-12-31 2020-06-02 江西开心玉米网络科技有限公司 Portrait background separation method, device, server and storage medium
CN111161272A (en) * 2019-12-31 2020-05-15 北京理工大学 Embryo tissue segmentation method based on generative adversarial network
CN111278085A (en) * 2020-02-24 2020-06-12 北京百度网讯科技有限公司 Method and device for acquiring target network
CN111311485A (en) * 2020-03-17 2020-06-19 Oppo广东移动通信有限公司 Image processing method and related device
CN111311485B (en) * 2020-03-17 2023-07-04 Oppo广东移动通信有限公司 Image processing method and related device
CN111524060B (en) * 2020-03-31 2023-04-14 厦门亿联网络技术股份有限公司 System, method, storage medium and device for blurring portrait background in real time
CN111524060A (en) * 2020-03-31 2020-08-11 厦门亿联网络技术股份有限公司 System, method, storage medium and device for blurring portrait background in real time
CN111899203B (en) * 2020-07-10 2023-06-20 贵州大学 Real image generation method based on label map under unsupervised training, and storage medium
CN111899203A (en) * 2020-07-10 2020-11-06 贵州大学 Real image generation method based on label map under unsupervised training, and storage medium
CN112100908B (en) * 2020-08-31 2024-03-22 西安工程大学 Garment design method based on multi-condition deep convolutional generative adversarial network
CN112100908A (en) * 2020-08-31 2020-12-18 西安工程大学 Garment design method based on multi-condition deep convolutional generative adversarial network
CN112215868B (en) * 2020-09-10 2023-12-26 湖北医药学院 Method for removing gesture image background based on generative adversarial network
CN112215868A (en) * 2020-09-10 2021-01-12 湖北医药学院 Method for removing gesture image background based on generative adversarial network
CN112489056A (en) * 2020-12-01 2021-03-12 叠境数字科技(上海)有限公司 Real-time human body matting method suitable for mobile terminal
CN112529929A (en) * 2020-12-07 2021-03-19 北京邮电大学 Portrait matting method based on fully convolutional dense network
CN113160358A (en) * 2021-05-21 2021-07-23 上海随幻智能科技有限公司 Green-screen-free matting and rendering method
CN114187668B (en) * 2021-12-15 2024-03-26 长讯通信服务有限公司 Face silent liveness detection method and device based on positive-sample training
CN114187668A (en) * 2021-12-15 2022-03-15 长讯通信服务有限公司 Face silent liveness detection method and device based on positive-sample training

Also Published As

Publication number Publication date
CN107945204B (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN107945204A (en) A pixel-level portrait matting method based on generative adversarial network
US11581130B2 (en) Internal thermal fault diagnosis method of oil-immersed transformer based on deep convolutional neural network and image segmentation
CN110020676A (en) Text detection method, system, device and medium based on multi-receptive-field deep features
CN110263912A (en) An image question answering method based on multi-target association deep reasoning
CN107358293A (en) A neural network training method and device
CN108921198A (en) Commodity image classification method, server and system based on deep learning
CN109800821A (en) Neural network training method, image processing method, device, equipment and medium
CN107464210A (en) An image style transfer method based on generative adversarial network
CN108182454A (en) Security inspection recognition system and control method thereof
CN112990296B (en) Image-text matching model compression and acceleration method and system based on orthogonal similarity distillation
CN109165645A (en) An image processing method, device and related equipment
CN107239733A (en) Continuous handwritten character recognition method and system
CN106339753A (en) A method for effectively enhancing the robustness of convolutional neural networks
CA3098286A1 (en) Method for distinguishing a real three-dimensional object from a two-dimensional spoof of the real object
CN110084221A (en) A sequential facial keypoint detection method with relay supervision based on deep learning
CN110516095A (en) Weakly supervised deep hashing social image retrieval method and system based on semantic transfer
Wang et al. FE-YOLOv5: Feature enhancement network based on YOLOv5 for small object detection
CN108280451A (en) Semantic segmentation and network training method and device, equipment, medium, program
CN115222946B (en) Single-stage instance image segmentation method and device and computer equipment
CN111046928B (en) Single-stage real-time universal target detector and method with accurate positioning
CN116051574A (en) Semi-supervised segmentation model construction and image analysis method, device and system
CN114842208A (en) Power grid harmful bird species target detection method based on deep learning
CN111739037B (en) Semantic segmentation method for indoor scene RGB-D image
CN115131698B (en) Video attribute determining method, device, equipment and storage medium
CN110096991A (en) A sign language recognition method based on convolutional neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant