CN107480788A - Training method and training system for a deep convolutional generative adversarial network - Google Patents

Training method and training system for a deep convolutional generative adversarial network

Info

Publication number
CN107480788A
CN107480788A (application number CN201710685760.4A)
Authority
CN
China
Prior art keywords
convolution
dictionary
coefficient
obtains
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710685760.4A
Other languages
Chinese (zh)
Inventor
刘怡俊
林裕鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN201710685760.4A
Publication of CN107480788A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

The present application discloses a training method for a deep convolutional generative adversarial network (DCGAN). The method comprises: performing a dictionary training operation on a parameter set according to the correlations between parameters to obtain a dictionary D, an index I and a coefficient C; expressing the dictionary D, the index I and the coefficient C as a convolution kernel W, and performing the convolution operation with the convolution kernel W to obtain a convolution formula; and performing a generative adversarial network learning operation with the convolution formula to obtain a generator G. By expressing the parameter space as a series of similar weight vectors, the convolution computation can be simplified, so that the DCGAN can learn new classes from a small number of samples; a small number of training iterations therefore yields higher accuracy, and training efficiency is also improved. The present application further discloses a training system for a deep convolutional generative adversarial network.

Description

Training method and training system for a deep convolutional generative adversarial network
Technical field
The present application relates to the field of computer technology, and in particular to a training method and training system for a deep convolutional generative adversarial network.
Background art
With the development of artificial intelligence, AI techniques applied in the field of computer vision have become increasingly mature, especially in image classification, object detection and object recognition, and the performance of AI learning and training methods keeps improving. AI learning methods are generally divided into supervised learning and unsupervised learning; in the unsupervised setting, generative adversarial networks (GANs) can be used for learning, and applying them to image classification, object detection and object recognition yields good results.
However, GAN training is highly unstable and often causes the generator to produce meaningless output. GANs have therefore been improved by combining the convolutional neural networks of supervised learning with the generative adversarial networks of unsupervised learning, imposing a series of constraints on the topology of the convolutional neural network so that the adversarial network can be trained stably.
Moreover, the number of parameters in a deep convolutional GAN is very large, placing high demands on memory and computation; the amount of computation during learning even exceeds the capacity of most computing platforms, so that training slows down and efficiency drops.
How to improve the training efficiency of a deep convolutional generative adversarial network is therefore a problem of interest to those skilled in the art.
Summary of the invention
The purpose of the present application is to provide a training method for a deep convolutional generative adversarial network. By expressing the parameter space as a series of similar weight vectors, the convolution computation can be simplified, so that the DCGAN can learn new classes from a small number of samples; a small number of training iterations therefore yields higher accuracy, and training efficiency is also improved. The present application also discloses a training system for a deep convolutional generative adversarial network.
To solve the above technical problem, the present application provides a training method for a deep convolutional generative adversarial network, comprising:
performing a dictionary training operation on a parameter set according to the correlations between parameters to obtain a dictionary D, an index I and a coefficient C;
expressing the dictionary D, the index I and the coefficient C as a convolution kernel W, and performing the convolution operation with the convolution kernel W to obtain a convolution formula;
performing a generative adversarial network learning operation with the convolution formula to obtain a generator G.
Optionally, performing the dictionary training operation on the parameter set according to the correlations between parameters to obtain the dictionary, index and coefficient comprises:
performing a DCGAN operation on the parameter set to obtain an actual output;
performing an association operation according to the correlations between the parameters of the parameter set to obtain an original dictionary, an original index and an original coefficient;
performing a DCGAN operation on the original dictionary, original index and original coefficient to obtain an original output;
performing an error calculation operation on the actual output and the original output to obtain an adjustment value;
performing an adjustment operation on the original dictionary, original index and original coefficient according to the adjustment value to obtain the dictionary D, the index I and the coefficient C.
Optionally, expressing the dictionary D, the index I and the coefficient C as the convolution kernel W and performing the convolution operation with the convolution kernel W to obtain the convolution formula comprises:
performing a convolution-kernel representation operation on the dictionary D, the index I and the coefficient C to obtain W[:, r, c] = Σ_{t=1}^{s} C[t, r, c] · D[I[t, r, c], :];
performing a conversion operation on X * W with this representation to obtain X * W = Σ_{r,c} Σ_{t=1}^{s} C[t, r, c] · shift_{r,c}(X * D[I[t, r, c], :]);
substituting S[i, :, :] = X * D[i, :] to obtain X * W = Σ_{r,c} Σ_{t=1}^{s} C[t, r, c] · shift_{r,c}(S[I[t, r, c], :, :]) as the convolution formula, where shift_{r,c} aligns the channel-wise convolution result with kernel position (r, c).
Optionally, performing the generative adversarial network learning operation with the convolution formula to obtain the generator G comprises:
performing a generation operation on an initial generator according to the convolution formula to obtain pseudo data;
performing a discrimination operation with a discriminator according to real data and the pseudo data to obtain a discrimination result;
performing an iterative optimization operation on the initial generator according to the discrimination result to obtain the generator G.
The present application also provides a training system for a deep convolutional generative adversarial network, comprising:
a dictionary acquisition module, which performs a dictionary training operation on a parameter set according to the correlations between parameters to obtain a dictionary D, an index I and a coefficient C;
a convolution conversion module, which expresses the dictionary D, the index I and the coefficient C as a convolution kernel W and performs the convolution operation with the convolution kernel W to obtain a convolution formula;
an adversarial generation module, which performs a generative adversarial network learning operation with the convolution formula to obtain a generator G.
Optionally, the dictionary acquisition module comprises:
an actual-output acquisition unit, which performs a DCGAN operation on the parameter set to obtain an actual output;
an original-dictionary acquisition unit, which performs an association operation according to the correlations between the parameters of the parameter set to obtain an original dictionary, an original index and an original coefficient;
an original-output acquisition unit, which performs a DCGAN operation on the original dictionary, original index and original coefficient to obtain an original output;
an error-adjustment-value acquisition unit, which performs an error calculation operation on the actual output and the original output to obtain an error adjustment value;
an adjustment operation unit, which performs an adjustment operation on the original dictionary, original index and original coefficient according to the adjustment value to obtain the dictionary D, the index I and the coefficient C.
Optionally, the convolution conversion module comprises:
a convolution-kernel representation unit, which performs a convolution-kernel representation operation on the dictionary D, the index I and the coefficient C to obtain W[:, r, c] = Σ_{t=1}^{s} C[t, r, c] · D[I[t, r, c], :];
a conversion operation unit, which performs a conversion operation on X * W with this representation to obtain X * W = Σ_{r,c} Σ_{t=1}^{s} C[t, r, c] · shift_{r,c}(X * D[I[t, r, c], :]);
a convolution-formula acquisition unit, which substitutes S[i, :, :] = X * D[i, :] to obtain X * W = Σ_{r,c} Σ_{t=1}^{s} C[t, r, c] · shift_{r,c}(S[I[t, r, c], :, :]) as the convolution formula.
Optionally, the adversarial generation module comprises:
a pseudo-data generation unit, which performs a generation operation on an initial generator according to the convolution formula to obtain pseudo data;
a discrimination-result acquisition unit, which performs a discrimination operation with a discriminator according to real data and the pseudo data to obtain a discrimination result;
an iterative optimization processing unit, which performs an iterative optimization operation on the initial generator according to the discrimination result to obtain the generator G.
The training method for a deep convolutional generative adversarial network provided herein comprises: performing a dictionary training operation on a parameter set according to the correlations between parameters to obtain a dictionary D, an index I and a coefficient C; expressing the dictionary D, the index I and the coefficient C as a convolution kernel W, and performing the convolution operation with the convolution kernel W to obtain a convolution formula; and performing a generative adversarial network learning operation with the convolution formula to obtain a generator G.
In a deep convolutional GAN, many parameters are strongly correlated. By analysing the correlations within the parameter set, a dictionary, an index and a coefficient are obtained, with which the weight vectors can be represented; that is, the parameter space is expressed as a series of similar weight vectors. This simplifies the convolution computation and allows the DCGAN to learn new classes from a small number of samples, so that a small number of training iterations yields higher accuracy and training efficiency is also improved. The present application also provides a training system for a deep convolutional generative adversarial network, which has the above beneficial effects and is not described again here.
Brief description of the drawings
To explain the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of a training method for a deep convolutional generative adversarial network provided by an embodiment of the present application;
Fig. 2 is a computation flowchart of the generative adversarial network of the training method provided by an embodiment of the present application;
Fig. 3 is a flowchart of the dictionary training operation of the training method provided by an embodiment of the present application;
Fig. 4 is a flowchart of the generative adversarial network learning operation of the training method provided by an embodiment of the present application;
Fig. 5 is a structural diagram of a training system for a deep convolutional generative adversarial network provided by an embodiment of the present application.
Detailed description of the embodiments
The core of the present application is to provide a training method and training system for a deep convolutional generative adversarial network. By expressing the parameter space as a series of similar weight vectors, the convolution computation can be simplified, so that the DCGAN can learn new classes from a small number of samples; a small number of training iterations therefore yields higher accuracy, and training efficiency is also improved.
To make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art without creative effort based on the embodiments in the present application fall within the scope of protection of the present application.
Referring to Fig. 1, Fig. 1 is a flowchart of a training method for a deep convolutional generative adversarial network provided by an embodiment of the present application.
The training method provided by this embodiment may comprise:
S101: performing a dictionary training operation on a parameter set according to the correlations between parameters to obtain a dictionary D, an index I and a coefficient C.
This step is intended to extract, from the associations between parameters, a dictionary D, an index I and a coefficient C that can represent the parameter set.
Here, the dictionary D, index I and coefficient C are vectors and matrices that can be combined to express the parameter space; in the convolution formula the parameter space is called the convolution kernel. The detailed process of representing the convolution kernel W with the dictionary D, index I and coefficient C is as follows:
The dictionary matrix is denoted D ∈ R^{k×m} and is shared among the weight vectors of the layer; the dictionary contains k dictionary vectors, each of length m. Corresponding to the dictionary D are a lookup index matrix I ∈ {1, …, k}^{s×kw×kh} and a coefficient matrix C ∈ R^{s×kw×kh}, where s indicates that each m-dimensional vector in the convolution kernel W is represented as a linear combination of s dictionary vectors. The convolution kernel W can therefore be expressed as:
W[:, r, c] = Σ_{t=1}^{s} C[t, r, c] · D[I[t, r, c], :], for r = 1, …, kw and c = 1, …, kh.
Expressing the dictionary D, index I and coefficient C as the convolution kernel W is also the key operation of the next step: the obtained kernel can be used in the convolution computation, with each convolution kernel W replaced in the computation by D, I and C. When the dictionary size k and the number of reconstruction vectors s are reduced, the number of parameters in the convolutional layer is reduced.
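The representation above can be sketched directly. The notation (k, m, s, kw, kh) follows the patent, but the concrete sizes below are illustrative assumptions only; the parameter-count comparison at the end shows why shrinking k and s reduces the per-kernel storage once the dictionary is shared across a layer.

```python
import numpy as np

# Illustrative sizes only; the patent fixes the notation, not these values.
k, m, s, kw, kh = 8, 16, 2, 3, 3

rng = np.random.default_rng(0)
D = rng.standard_normal((k, m))        # dictionary: k vectors of length m
I = rng.integers(0, k, (s, kw, kh))    # lookup index matrix
C = rng.standard_normal((s, kw, kh))   # coefficient matrix

# Each m-dimensional column W[:, r, c] of the kernel is a linear
# combination of s dictionary vectors selected by the index matrix.
W = np.zeros((m, kw, kh))
for r in range(kw):
    for c in range(kh):
        for t in range(s):
            W[:, r, c] += C[t, r, c] * D[I[t, r, c]]

# With many kernels per layer the shared dictionary D is amortized, so
# the per-kernel cost drops from m*kw*kh dense weights to the 2*s*kw*kh
# entries of I and C.
dense_params_per_kernel = m * kw * kh        # 144 here
lookup_params_per_kernel = 2 * s * kw * kh   # 36 here
```

Smaller k shrinks the shared dictionary itself, and smaller s shrinks I and C, at the cost of a coarser approximation of the original kernels.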
Of course, in the process of extracting the dictionary D, index I and coefficient C from the parameter set, some other processing operations are inevitably involved; these may be changed according to which kind of dictionary D, index I and coefficient C is specifically desired, and are not limited here.
S102: expressing the dictionary D, the index I and the coefficient C as a convolution kernel W, and performing the convolution operation with the convolution kernel W to obtain a convolution formula.
Based on step S101, this step expresses the extracted dictionary D, index I and coefficient C as the convolution kernel W and carries out the convolution computation with this kernel, obtaining the simplified convolution formula. Through this step, the traditional convolution computation can be reduced to dictionary convolution, index lookup and linear combination, which reduces the parameter count and the computational burden.
Here, the convolution operation refers to the computation in a convolutional neural network (CNN), a feed-forward neural network whose artificial neurons respond to surrounding units within a partial coverage area and which performs outstandingly in large-scale image processing. A CNN comprises convolutional layers and pooling layers. A convolutional layer performs the convolution computation and comprises four parts: (1) an input vector X ∈ R^{m×w×h}, where m, w and h are the number of input channels, the width and the height, respectively; (2) the weights of n convolution kernels, each kernel being W ∈ R^{m×kw×kh}, where kw and kh are the width and height of the kernel; (3) a scalar bias parameter b; and (4) an output vector Y, computed as Y = W * X + b, where * is the convolution operator.
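The four-part convolutional layer just described can be sketched as follows. A single kernel and 'valid' padding are simplifying assumptions made here for brevity; the patent does not fix either.

```python
import numpy as np

def conv_layer(X, W, b):
    """Compute Y = W * X + b for one kernel, with 'valid' padding.

    X: input of shape (m, w, h); W: kernel of shape (m, kw, kh);
    b: scalar bias. Returns Y of shape (w - kw + 1, h - kh + 1).
    """
    m, w, h = X.shape
    _, kw, kh = W.shape
    Y = np.zeros((w - kw + 1, h - kh + 1))
    for r in range(Y.shape[0]):
        for c in range(Y.shape[1]):
            # correlate the kernel with the local receptive field
            Y[r, c] = np.sum(X[:, r:r + kw, c:c + kh] * W) + b
    return Y
```

A full convolutional layer would apply n such kernels and stack the n output planes into a channel dimension.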
It should be noted that in the basic two-layer structure of a CNN, the input of each neuron of the feature extraction layer is connected to a local receptive field of the previous layer, from which the local feature is extracted; once the local feature is extracted, its positional relationship to the other features is also determined. In the feature mapping layer, each computational layer of the network consists of multiple feature maps; each feature map is a plane, and all neurons in the plane share equal weights. The feature mapping structure uses a sigmoid (S-shaped) function with a small influence-function kernel as the activation function of the convolutional network, so that the feature maps are shift-invariant.
Furthermore, since the neurons on one mapping plane share weights, the number of free parameters of the network is reduced. In a CNN, each convolutional layer is followed by a computational layer for local averaging and secondary extraction; this particular two-stage feature extraction structure reduces the feature resolution.
When the convolution kernel W represented by the dictionary D, index I and coefficient C is substituted into the convolutional layer of the above deep CNN, it can be found that the parameters used by the convolution computation in the convolutional layer are reduced; that is, each convolution kernel can be represented with a smaller number of dictionary vectors, so that the CNN uses fewer parameters in the computation while reaching a more accurate learning and training effect.
Of course, other computational steps of the CNN also need to be handled. These other steps change correspondingly with the simplification of the convolution computation, and further adaptive or customized modifications may be needed; this depends on the specific case and is not limited here.
S103: performing a generative adversarial network learning operation with the convolution formula to obtain a generator G.
Based on step S102, this step adds the dictionary-simplified CNN into the generative adversarial network, performs the learning, and obtains the result of the GAN, namely the generator. The two basic elements of a GAN are the generator and the discriminator, both of which are built from differentiable functions, and the addition in this step inserts the computation and learning process of the CNN into the differentiable functions of the generator and the discriminator. Through the simplified convolution computation, the amount of computed parameters of the DCGAN can be reduced, the computational burden is lowered, the speed of the forward computation and of backward training is raised, and an effective performance improvement is obtained.
For a generative adversarial network, the core idea derives from the Nash equilibrium of game theory. The two parties of the game are the basic elements of the GAN, namely the generator and the discriminator: the purpose of the generator is to learn the real data distribution as well as possible, while the purpose of the discriminator is to decide as correctly as possible whether its input comes from the real data or from the generator. The generator and discriminator can be seen as competitors: to win the competition, the two parties of this game must continuously optimize, each improving its own generative or discriminative ability, and this process of optimization and learning is exactly the search for a Nash equilibrium between them.
Referring to Fig. 2, Fig. 2 is a computation flowchart of the generative adversarial network of the training method provided by an embodiment of the present application.
In general, the computation flow and structure of a generative adversarial network (GAN) are shown in the figure above, where any differentiable function can be used to represent the generator and discriminator of the GAN. We therefore represent the discriminator and the generator with differentiable functions D and G, whose inputs are the real data x and random noise z, respectively. G(z) is a sample generated by G that follows the real data distribution Pdata as closely as possible. If the input of the discriminator comes from the real data, it is labelled 1; if the input sample is G(z), it is labelled 0. The goal of D here is binary classification of the data source: real, i.e. from the distribution of the real data x, or fake, i.e. the pseudo data G(z) from the generator. The goal of G is to make the performance D(G(z)) of its generated pseudo data on D consistent with the performance D(x) of the real data x on D. The two confront each other and iteratively optimize, which continuously improves the performance of both D and G; when the discriminative ability of D has improved to a certain level and it can no longer correctly decide the data source, the generator G can be considered to have learned the distribution of the real data.
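The adversarial process described above is conventionally formalized as a minimax game. The formula below is the well-known standard GAN objective, supplied here for reference since the patent text describes it only in words:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim P_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim P_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

Labelling real inputs 1 and generated inputs 0, as above, corresponds to D maximizing this value while G minimizes it; at equilibrium the generator has matched Pdata and the discriminator can do no better than chance.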
Here, the deep convolutional GAN is what is formed after the convolutional neural network is added. In a DCGAN, the network structure of the generator needs to perform upsampling, so transposed convolutional (deconvolution) layers can be used, while the discriminator performs downsampling with convolutional layers, so that the network trains stably and the image features are well expressed. Through the simplified convolution computation, namely dictionary convolution, index lookup and linear combination, the amount of computed parameters of the DCGAN can be reduced, the computational burden is lowered, the speed of the forward computation and of backward training is raised, and an effective performance improvement is obtained.
Referring to Fig. 3, Fig. 3 is a flowchart of the dictionary training operation of the training method provided by an embodiment of the present application.
Following the previous embodiment, this embodiment mainly explains how to perform the dictionary training operation of the previous embodiment; the other parts are essentially the same as in the previous embodiment, to which the common parts refer and which are not repeated here.
This embodiment may comprise:
S201: performing a DCGAN operation on the parameter set to obtain an actual output.
This step is intended to run the DCGAN on the original parameter set, without using dictionary vectors, to obtain an output. In order to train a more accurate dictionary, this output must be compared with the output obtained with the dictionary, so as to obtain the direction of parameter adjustment and thus adjust the dictionary parameters better.
S202: performing an association operation according to the correlations between the parameters of the parameter set to obtain an original dictionary, an original index and an original coefficient.
This step is intended to obtain preliminarily, from the parameter set, the dictionary and related parameters in their original state. In order to obtain the adjustment value later, the output of the dictionary is needed, so the dictionary and related parameters must first be set up in their original state.
S203: performing a DCGAN operation on the original dictionary, original index and original coefficient to obtain an original output.
Based on step S202, this step runs the DCGAN processing with the original dictionary and related parameters to obtain the original output, for comparison with the actual output.
S204: performing an error calculation operation on the actual output and the original output to obtain an adjustment value.
Based on steps S201 and S203, this step compares the actual output with the original output to obtain the adjustment value for the dictionary. Through this comparison, the direction and magnitude of the change needed by the dictionary become known, and thus the adjustment value for the dictionary and related parameters is obtained.
S205: performing an adjustment operation on the original dictionary, original index and original coefficient according to the adjustment value to obtain the dictionary D, the index I and the coefficient C.
Based on step S204, this step adjusts the original dictionary and related parameters with the above adjustment value, obtaining the dictionary, index and coefficient after training ends.
In a DCGAN, with this dictionary, index and coefficient, a more accurate learning and training operation can be carried out with a smaller amount of data.
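The patent leaves the concrete adjustment rule of S204-S205 open. The sketch below assumes the simplest choice: the "actual output" is stood in for by a target kernel, the "original output" by its dictionary reconstruction, and the adjustment is a gradient step on the coefficients C that shrinks the error between the two, with the index I held fixed. All names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, kw, kh, k, s = 4, 3, 3, 6, 2

W_actual = rng.standard_normal((m, kw, kh))  # stands in for the "actual output"
D = rng.standard_normal((k, m))              # original dictionary
I = rng.integers(0, k, (s, kw, kh))          # original index (held fixed here)
C = np.zeros((s, kw, kh))                    # original coefficients

def reconstruct(D, I, C):
    """Kernel rebuilt from the dictionary representation (the 'original output')."""
    W_hat = np.zeros((D.shape[1], I.shape[1], I.shape[2]))
    for t in range(I.shape[0]):
        for r in range(I.shape[1]):
            for c in range(I.shape[2]):
                W_hat[:, r, c] += C[t, r, c] * D[I[t, r, c]]
    return W_hat

initial_error = np.linalg.norm(reconstruct(D, I, C) - W_actual)

lr = 0.05
for _ in range(200):                          # S203-S205 repeated
    err = reconstruct(D, I, C) - W_actual     # S204: error between the two outputs
    for t in range(s):                        # S205: adjust each coefficient
        for r in range(kw):
            for c in range(kh):
                # gradient of 0.5*||err||^2 with respect to C[t, r, c]
                C[t, r, c] -= lr * err[:, r, c] @ D[I[t, r, c]]

final_error = np.linalg.norm(reconstruct(D, I, C) - W_actual)
```

In practice the dictionary D (and potentially I) would be adjusted as well, and the error would be measured on the network output rather than on the kernels directly; the loop above only illustrates the compare-and-adjust structure of Fig. 3.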
Following the previous embodiment, this embodiment mainly explains how to perform the convolution operation of the previous embodiment; the other parts are essentially the same as in the previous embodiment, to which the common parts refer and which are not repeated here.
This embodiment may comprise:
S301: performing a convolution-kernel representation operation on the dictionary D, the index I and the coefficient C to obtain the representation of W.
This step is intended to express the dictionary D, index I and coefficient C as the convolution kernel W. The convolution kernel is:
W ∈ R^{m×kw×kh}
where kw and kh are the width and height of the convolution kernel, respectively.
The dictionary matrix is:
D ∈ R^{k×m}
containing k dictionary vectors, each of length m. Corresponding to the dictionary D are a lookup index matrix I ∈ {1, …, k}^{s×kw×kh} and a coefficient matrix C ∈ R^{s×kw×kh}, where s means that each m-dimensional vector in the convolution kernel is linearly expressed with s dictionary vectors.
The final convolution kernel W is therefore:
W[:, r, c] = Σ_{t=1}^{s} C[t, r, c] · D[I[t, r, c], :]
With this formula, the parameter space of the convolution computation can be represented with a smaller number of dictionary parameters.
S302: performing a conversion operation on the convolution X * W with the above representation.
Based on step S301, this step is intended to bring the convolution kernel W into the convolution formula and perform the conversion, giving:
X * W = Σ_{r,c} Σ_{t=1}^{s} C[t, r, c] · shift_{r,c}(X * D[I[t, r, c], :])
S303: substituting S[i, :, :] = X * D[i, :] to obtain the convolution formula.
Based on step S302, the formula obtained by the above operation is transformed further. All the dictionary vectors in the dictionary D can be convolved with the input X, and the convolution of X with the whole dictionary D is denoted S, i.e.:
S[i, :, :] = X * D[i, :]
During computation, S can be computed once and then reused: according to the lookup table, the entries of S corresponding to the convolution are found through the index I, and the linear combination is carried out with the coefficient C, as follows:
X * W = Σ_{r,c} Σ_{t=1}^{s} C[t, r, c] · shift_{r,c}(S[I[t, r, c], :, :])
where shift_{r,c} aligns the channel-wise convolution result with kernel position (r, c). This is the convolution formula. Therefore, the parameters of D can be reduced by reducing the size of k, so that the amount of computation when computing S is reduced and a speed-up of the computation is realized.
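Steps S302-S303 can be checked numerically: the lookup-and-combine form reproduces the dense convolution exactly. A minimal sketch, with illustrative sizes and 'valid' padding assumed:

```python
import numpy as np

rng = np.random.default_rng(2)
m, w, h = 3, 5, 5                 # input channels / width / height
k, s, kw, kh = 4, 2, 2, 2         # illustrative sizes

X = rng.standard_normal((m, w, h))
D = rng.standard_normal((k, m))
I = rng.integers(0, k, (s, kw, kh))
C = rng.standard_normal((s, kw, kh))

# Step 1: S[i, :, :] = X * D[i, :] -- a 1x1 convolution of the input with
# every dictionary vector, computed once and shared by all kernels.
S = np.einsum('im,mwh->iwh', D, X)

# Step 2: index lookup + linear combination replaces the dense convolution.
out_w, out_h = w - kw + 1, h - kh + 1
Y = np.zeros((out_w, out_h))
for p in range(kw):
    for q in range(kh):
        for t in range(s):
            # shift_{p,q}: align S with kernel position (p, q)
            Y += C[t, p, q] * S[I[t, p, q], p:p + out_w, q:q + out_h]

# Reference: rebuild the dense kernel from D, I, C and convolve directly.
W = np.einsum('tpq,tpqm->mpq', C, D[I])
Y_ref = np.zeros((out_w, out_h))
for r in range(out_w):
    for c in range(out_h):
        Y_ref[r, c] = np.sum(X[:, r:r + kw, c:c + kh] * W)
```

Since S depends only on the input and the dictionary, every kernel in the layer reuses the same S; only the cheap lookup-and-combine step is per-kernel.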
Specifically, the computation of S is a dense matrix product, and fast matrix operations can also be carried out through a series of acceleration libraries.
Optionally, the acceleration library may be OpenBLAS, which can speed up matrix operations. BLAS (Basic Linear Algebra Subprograms) denotes the basic linear algebra subprograms, which mainly cover matrix-matrix, matrix-vector and vector-vector operations and form one of the underlying mathematics libraries of scientific and engineering computing.
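The dense-matrix-product form of S can be made explicit: flattening the spatial dimensions turns the whole computation into one matrix multiplication, which NumPy dispatches to the linked BLAS (e.g. OpenBLAS) as a single GEMM call. Sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
k, m, w, h = 6, 3, 8, 8           # illustrative sizes

D = rng.standard_normal((k, m))
X = rng.standard_normal((m, w, h))

# One (k x m) @ (m x w*h) product computes S for all k dictionary
# vectors at once; reshaping restores the spatial layout.
S = (D @ X.reshape(m, w * h)).reshape(k, w, h)
```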
Referring to Fig. 4, Fig. 4 is a flowchart of the generative adversarial network learning operation of the training method provided by an embodiment of the present application.
Following the previous embodiment, this embodiment mainly explains how to perform the generative adversarial network learning operation of the previous embodiment; the other parts are essentially the same as in the previous embodiment, to which the common parts refer and which are not repeated here.
This embodiment may comprise:
S401, performing a generation operation with the initial generator according to the convolution formula, obtaining fake data;
Based on the above embodiments, this is the explanation of the generative adversarial network part. The concrete operation of this step is to perform the adversarial generation operation according to the convolution formula obtained after adding the dictionary, yielding the fake data produced by the generator. The fake data are used to train the discriminator. Early in training, the fake data inevitably differ considerably from the real data, so the generator is corrected according to the discrimination results of the discriminator: it keeps generating fake data for the discriminator to judge and is corrected according to the discrimination results, until the discriminator can no longer distinguish the fake data.
Of course, other processing operations also occur during training; which ones are selected depends on the concrete situation and is not limited here.
S402, performing a discrimination operation with the discriminator according to the real data and the fake data, obtaining a discrimination result;
On the basis of step S401, this step trains the discriminator on the real data and then judges whether the fake data are real. Each discrimination of the fake data yields a discrimination result, and the generator can use that result to adjust itself, i.e., iterative optimisation.
S403, performing an iterative optimisation operation on the initial generator according to the discrimination result, obtaining the generator G.
On the basis of step S402, this step iteratively optimises the generator according to the discrimination result. The optimisation may run over many iterations: after each iteration the generator again generates fake data, the discriminator judges them to produce a discrimination result, and optimisation continues according to that result, until the discriminator can no longer correctly identify the fake data.
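The alternating S401–S403 loop can be caricatured numerically. The sketch below is deliberately minimal and is not the patent's method: the generator is reduced to a single location parameter, and the discriminator's role is collapsed into a mean-difference feedback signal, purely so the generate / discriminate / iteratively-optimise cycle is visible.

```python
import numpy as np

rng = np.random.default_rng(2)
real_mean, real_std = 3.0, 1.0
theta = 0.0                         # the toy generator's only parameter

def generator(n):
    # G maps noise to fake samples centred on theta
    return theta + rng.normal(size=n)

def discriminator_feedback(real, fake):
    # stand-in for the discrimination result: the direction the fake
    # data must move to become indistinguishable from the real data
    return real.mean() - fake.mean()

for step in range(300):
    real = real_mean + real_std * rng.normal(size=64)
    fake = generator(64)                         # S401: generate fake data
    result = discriminator_feedback(real, fake)  # S402: discriminate
    theta += 0.05 * result                       # S403: iterative optimisation

# theta drifts toward real_mean; once the fake and real distributions
# coincide, the feedback vanishes and training settles
```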
In the training method for a deep convolutional generative adversarial network provided by the embodiments of the present application, the parameter-set space is expressed as a series of compact weight vectors, which simplifies the computation of convolution and enables the deep convolutional generative adversarial network to learn new types from a small number of samples. A small number of training iterations therefore achieves high precision, and training efficiency is improved as well. Also disclosed herein is a training system for a deep convolutional generative adversarial network.
A training system for a deep convolutional generative adversarial network provided by an embodiment of the present application is introduced below. The training system described below and the training method described above may be cross-referenced with each other.
Please refer to Fig. 5, which is a block diagram of a training system for a deep convolutional generative adversarial network provided by an embodiment of the present application.
This training system can include:
a dictionary acquisition module 100, configured to perform a dictionary training operation on a parameter set according to the relevance between parameters, obtaining a dictionary D, an index I and a coefficient C;
a convolution conversion module 200, configured to express the dictionary D, the index I and the coefficient C as a convolution kernel W and to perform a convolution conversion operation using the convolution kernel W, obtaining a convolution formula;
an adversarial generation module 300, configured to perform a generative adversarial network learning operation using the convolution formula, obtaining a generator G.
Optionally, the dictionary acquisition module 100 can include:
an actual-output acquiring unit, configured to perform a deep convolutional generative adversarial network operation on the parameter set, obtaining an actual output;
an original-dictionary acquiring unit, configured to perform an association operation according to the relevance between the parameters of the parameter set, obtaining an original dictionary, an original index and an original coefficient;
an original-output acquiring unit, configured to perform a deep convolutional generative adversarial network operation on the original dictionary, the original index and the original coefficient, obtaining an original output;
an error-adjustment-value acquiring unit, configured to perform an error calculation operation on the actual output and the original output, obtaining an error adjustment value;
an adjustment operating unit, configured to perform an adjustment operation on the original dictionary, the original index and the original coefficient according to the adjustment value, obtaining the dictionary D, the index I and the coefficient C.
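The unit chain above (actual output, original output, error adjustment value, adjustment) resembles one round of dictionary learning. The following sketch is a hypothetical reconstruction with a single-atom assignment per filter: it associates each parameter vector with a dictionary atom (index I) and a coefficient C, measures the reconstruction error, and adjusts the dictionary by a gradient step on that error. All names, sizes, and step sizes are illustrative assumptions, not the patent's procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
n_filters, dim, m = 16, 8, 4
W = rng.normal(size=(n_filters, dim))   # parameter set to be approximated
D = rng.normal(size=(m, dim))           # original dictionary (random init)

def assign(W, D):
    # association step: best single atom (index I) and coefficient C per filter
    norms = np.linalg.norm(D, axis=1) ** 2
    corr = (W @ D.T) / norms            # optimal coefficient against each atom
    score = corr ** 2 * norms           # residual reduction offered by each atom
    I = score.argmax(axis=1)
    C = corr[np.arange(len(W)), I]
    return I, C

def recon_error(W, D, I, C):
    # the "error adjustment value": gap between W and its dictionary version
    return float(np.linalg.norm(W - C[:, None] * D[I]))

I, C = assign(W, D)
err_before = recon_error(W, D, I, C)

for _ in range(30):
    I, C = assign(W, D)                       # original index / coefficient
    resid = W - C[:, None] * D[I]             # error between the two outputs
    grad = np.zeros_like(D)
    np.add.at(grad, I, -2.0 * C[:, None] * resid)
    D -= 0.05 * grad                          # adjustment operation on D

err_after = recon_error(W, D, *assign(W, D))
print(err_after < err_before)
```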
Optionally, the convolution conversion module 200 can include:
a convolution kernel representation unit, configured to perform a convolution kernel representation operation on the dictionary D, the index I and the coefficient C, obtaining W[v,:,:] = Σ_{t=1}^{k} C[t,v] · D[I[t,v],:];
a transform operation unit, configured to transform W[v,:,:] = Σ_{t=1}^{k} C[t,v] · D[I[t,v],:] with the input X, obtaining X * W[v,:,:] = Σ_{t=1}^{k} C[t,v] · (X * D[I[t,v],:]);
a convolution formula acquiring unit, configured to perform a substitution on X * W[v,:,:] = Σ_{t=1}^{k} C[t,v] · (X * D[I[t,v],:]) according to S[i,:,:] = X * D[i,:], obtaining X * W[v,:,:] = Σ_{t=1}^{k} C[t,v] · S[I[t,v],:,:] as the convolution formula.
Optionally, the adversarial generation module 300 can include:
a fake data generating unit, configured to perform a generation operation with the initial generator according to the convolution formula, obtaining fake data;
a discrimination result acquiring unit, configured to perform a discrimination operation with the discriminator according to the real data and the fake data, obtaining a discrimination result;
an iterative optimisation processing unit, configured to perform an iterative optimisation operation on the initial generator according to the discrimination result, obtaining the generator G.
The embodiments in this specification are described progressively; each embodiment focuses on its differences from the others, and for the identical or similar parts the embodiments may be cross-referenced. Since the devices disclosed in the embodiments correspond to the methods disclosed in the embodiments, their description is relatively brief; for the relevant parts, refer to the description of the methods.
Those skilled in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are executed in hardware or in software depends on the specific application and the design constraints of the technical solution. A skilled artisan may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
The principles and embodiments of the present application are set forth herein with specific examples; the description of the above embodiments is only intended to help understand the method of the present application and its core idea. It should be pointed out that, for those of ordinary skill in the art, improvements and modifications can also be made to the present application without departing from its principles, and these improvements and modifications also fall within the scope of the claims of the present application.
It should also be noted that, in this specification, relational terms such as first and second are used merely to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the phrase "including a..." does not exclude the presence of other identical elements in the process, method, article or device that includes the element.

Claims (8)

  1. A training method for a deep convolutional generative adversarial network, characterised by comprising:
    performing a dictionary training operation on a parameter set according to the relevance between parameters, obtaining a dictionary D, an index I and a coefficient C;
    expressing the dictionary D, the index I and the coefficient C as a convolution kernel W, and performing a convolution conversion operation using the convolution kernel W, obtaining a convolution formula;
    performing a generative adversarial network learning operation using the convolution formula, obtaining a generator G.
  2. The method according to claim 1, characterised in that performing the dictionary training operation on the parameter set according to the relevance between parameters to obtain the dictionary, the index and the coefficient comprises:
    performing a deep convolutional generative adversarial network operation on the parameter set, obtaining an actual output;
    performing an association operation according to the relevance between the parameters of the parameter set, obtaining an original dictionary, an original index and an original coefficient;
    performing a deep convolutional generative adversarial network operation on the original dictionary, the original index and the original coefficient, obtaining an original output;
    performing an error calculation operation on the actual output and the original output, obtaining an adjustment value;
    performing an adjustment operation on the original dictionary, the original index and the original coefficient according to the adjustment value, obtaining the dictionary D, the index I and the coefficient C.
  3. The method according to claim 2, characterised in that expressing the dictionary D, the index I and the coefficient C as the convolution kernel W and performing the convolution conversion operation using the convolution kernel W to obtain the convolution formula comprises:
    performing a convolution kernel representation operation on the dictionary D, the index I and the coefficient C, obtaining W[v,:,:] = Σ_{t=1}^{k} C[t,v] · D[I[t,v],:];
    transforming W[v,:,:] = Σ_{t=1}^{k} C[t,v] · D[I[t,v],:] with the input X, obtaining X * W[v,:,:] = Σ_{t=1}^{k} C[t,v] · (X * D[I[t,v],:]);
    performing a substitution on X * W[v,:,:] = Σ_{t=1}^{k} C[t,v] · (X * D[I[t,v],:]) according to S[i,:,:] = X * D[i,:], obtaining X * W[v,:,:] = Σ_{t=1}^{k} C[t,v] · S[I[t,v],:,:] as the convolution formula.
  4. The method according to claim 3, characterised in that performing the generative adversarial network learning operation using the convolution formula to obtain the generator G comprises:
    performing a generation operation with an initial generator according to the convolution formula, obtaining fake data;
    performing a discrimination operation with a discriminator according to real data and the fake data, obtaining a discrimination result;
    performing an iterative optimisation operation on the initial generator according to the discrimination result, obtaining the generator G.
  5. A training system for a deep convolutional generative adversarial network, characterised by comprising:
    a dictionary acquisition module, configured to perform a dictionary training operation on a parameter set according to the relevance between parameters, obtaining a dictionary D, an index I and a coefficient C;
    a convolution conversion module, configured to express the dictionary D, the index I and the coefficient C as a convolution kernel W and to perform a convolution conversion operation using the convolution kernel W, obtaining a convolution formula;
    an adversarial generation module, configured to perform a generative adversarial network learning operation using the convolution formula, obtaining a generator G.
  6. The system according to claim 5, characterised in that the dictionary acquisition module includes:
    an actual-output acquiring unit, configured to perform a deep convolutional generative adversarial network operation on the parameter set, obtaining an actual output;
    an original-dictionary acquiring unit, configured to perform an association operation according to the relevance between the parameters of the parameter set, obtaining an original dictionary, an original index and an original coefficient;
    an original-output acquiring unit, configured to perform a deep convolutional generative adversarial network operation on the original dictionary, the original index and the original coefficient, obtaining an original output;
    an error-adjustment-value acquiring unit, configured to perform an error calculation operation on the actual output and the original output, obtaining an error adjustment value;
    an adjustment operating unit, configured to perform an adjustment operation on the original dictionary, the original index and the original coefficient according to the adjustment value, obtaining the dictionary D, the index I and the coefficient C.
  7. The system according to claim 6, characterised in that the convolution conversion module includes:
    a convolution kernel representation unit, configured to perform a convolution kernel representation operation on the dictionary D, the index I and the coefficient C, obtaining W[v,:,:] = Σ_{t=1}^{k} C[t,v] · D[I[t,v],:];
    a transform operation unit, configured to transform W[v,:,:] = Σ_{t=1}^{k} C[t,v] · D[I[t,v],:] with the input X, obtaining X * W[v,:,:] = Σ_{t=1}^{k} C[t,v] · (X * D[I[t,v],:]);
    a convolution formula acquiring unit, configured to perform a substitution on X * W[v,:,:] = Σ_{t=1}^{k} C[t,v] · (X * D[I[t,v],:]) according to S[i,:,:] = X * D[i,:], obtaining X * W[v,:,:] = Σ_{t=1}^{k} C[t,v] · S[I[t,v],:,:] as the convolution formula.
  8. The system according to claim 7, characterised in that the adversarial generation module includes:
    a fake data generating unit, configured to perform a generation operation with an initial generator according to the convolution formula, obtaining fake data;
    a discrimination result acquiring unit, configured to perform a discrimination operation with a discriminator according to real data and the fake data, obtaining a discrimination result;
    an iterative optimisation processing unit, configured to perform an iterative optimisation operation on the initial generator according to the discrimination result, obtaining the generator G.
CN201710685760.4A 2017-08-11 2017-08-11 A kind of training method and training system of depth convolution confrontation generation network Pending CN107480788A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710685760.4A CN107480788A (en) 2017-08-11 2017-08-11 A kind of training method and training system of depth convolution confrontation generation network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710685760.4A CN107480788A (en) 2017-08-11 2017-08-11 A kind of training method and training system of depth convolution confrontation generation network

Publications (1)

Publication Number Publication Date
CN107480788A true CN107480788A (en) 2017-12-15

Family

ID=60600274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710685760.4A Pending CN107480788A (en) 2017-08-11 2017-08-11 A kind of training method and training system of depth convolution confrontation generation network

Country Status (1)

Country Link
CN (1) CN107480788A (en)


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460391A (en) * 2018-03-09 2018-08-28 西安电子科技大学 Based on the unsupervised feature extracting method of high spectrum image for generating confrontation network
CN108460391B (en) * 2018-03-09 2022-03-22 西安电子科技大学 Hyperspectral image unsupervised feature extraction method based on generation countermeasure network
CN108802041A (en) * 2018-03-16 2018-11-13 浙江大学 A kind of method that the small sample set of screen detection is quickly remodeled
CN110263909A (en) * 2018-03-30 2019-09-20 腾讯科技(深圳)有限公司 Image-recognizing method and device
CN110263909B (en) * 2018-03-30 2022-10-28 腾讯科技(深圳)有限公司 Image recognition method and device
CN108537761A (en) * 2018-04-18 2018-09-14 广东工业大学 A kind of image goes training method, device and the image rain removing method of rain model
CN108596261A (en) * 2018-04-28 2018-09-28 重庆青山工业有限责任公司 Based on the gear parameter oversampler method for generating confrontation network model
CN109947931A (en) * 2019-03-20 2019-06-28 华南理工大学 Text automatic abstracting method, system, equipment and medium based on unsupervised learning
CN109947931B (en) * 2019-03-20 2021-05-14 华南理工大学 Method, system, device and medium for automatically abstracting text based on unsupervised learning

Similar Documents

Publication Publication Date Title
CN107480788A (en) A kind of training method and training system of depth convolution confrontation generation network
Grathwohl et al. Scalable reversible generative models with free-form continuous dynamics
Wu et al. Exponential passivity of memristive neural networks with time delays
CN110048827B (en) Class template attack method based on deep learning convolutional neural network
CN107871014A (en) A kind of big data cross-module state search method and system based on depth integration Hash
CN107247989A (en) A kind of neural network training method and device
CN108717568A (en) A kind of image characteristics extraction and training method based on Three dimensional convolution neural network
CN107657204A (en) The construction method and facial expression recognizing method and system of deep layer network model
CN114049513A (en) Knowledge distillation method and system based on multi-student discussion
CN108446676B (en) Face image age discrimination method based on ordered coding and multilayer random projection
CN106909945A (en) The feature visualization and model evaluation method of deep learning
CN107563430A (en) A kind of convolutional neural networks algorithm optimization method based on sparse autocoder and gray scale correlation fractal dimension
CN112818764A (en) Low-resolution image facial expression recognition method based on feature reconstruction model
CN113157919B (en) Sentence text aspect-level emotion classification method and sentence text aspect-level emotion classification system
Krempser et al. Differential evolution assisted by surrogate models for structural optimization problems
Cheng et al. Remote sensing image classification based on optimized support vector machine
CN114170659A (en) Facial emotion recognition method based on attention mechanism
Cheng et al. Off-policy deep reinforcement learning based on Steffensen value iteration
CN114332565A (en) Method for generating image by generating confrontation network text based on distribution estimation condition
CN107590538A (en) A kind of dangerous source discrimination based on online Sequence Learning machine
CN107563496A (en) A kind of deep learning mode identification method of vectorial core convolutional neural networks
Li et al. A new grey forecasting model based on BP neural network and Markov chain
Molani et al. A new approach to software project cost estimation using a hybrid model of radial basis function neural network and genetic algorithm
CN109241284A (en) Document classification method and device
CN108460426A (en) A kind of image classification method based on histograms of oriented gradients combination pseudoinverse learning training storehouse self-encoding encoder

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171215

RJ01 Rejection of invention patent application after publication