Summary of the invention
Embodiments of this specification provide an image processing method, apparatus and electronic device based on a generative adversarial network, so as to improve the quality of processed pictures.
To solve the above technical problem, the embodiments of this specification are implemented as follows.
In a first aspect, an image processing method based on a generative adversarial network is proposed, comprising:
obtaining an initial picture to be processed;
inputting the initial picture into a generative model of a generative adversarial network, and obtaining the picture output by the generative model; and
taking the picture output by the generative model as a processed target picture having a preset style, wherein the generative adversarial network is trained on sample pictures having the same size as the initial picture, and the target picture has the same size as the initial picture.
In a second aspect, an advertising-picture processing method based on a generative adversarial network is proposed, comprising:
obtaining an initial advertising picture to be processed;
inputting the initial advertising picture into a generative model of a generative adversarial network, and obtaining the advertising picture output by the generative model; and
taking the advertising picture output by the generative model as a processed target advertising picture having an advertising style, wherein the generative adversarial network is trained on sample advertising pictures having the same size as the initial advertising picture, and the target advertising picture has the same size as the initial advertising picture.
In a third aspect, a picture processing apparatus based on a generative adversarial network is proposed, comprising:
a first obtaining module, configured to obtain an initial picture to be processed;
a first input module, configured to input the initial picture into a generative model of a generative adversarial network and obtain the picture output by the generative model; and
a first determining module, configured to take the picture output by the generative model as a processed target picture having a preset style, wherein the generative adversarial network is trained on sample pictures having the same size as the initial picture, and the target picture has the same size as the initial picture.
In a fourth aspect, an advertising-picture processing apparatus based on a generative adversarial network is proposed, comprising:
a second obtaining module, configured to obtain an initial advertising picture to be processed;
a second input module, configured to input the initial advertising picture into a generative model of a generative adversarial network and obtain the advertising picture output by the generative model; and
a second determining module, configured to take the advertising picture output by the generative model as a processed target advertising picture having an advertising style, wherein the generative adversarial network is trained on sample advertising pictures having the same size as the initial advertising picture, and the target advertising picture has the same size as the initial advertising picture.
In a fifth aspect, an electronic device is proposed, comprising:
a processor; and
a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the following operations:
obtaining an initial picture to be processed;
inputting the initial picture into a generative model of a generative adversarial network, and obtaining the picture output by the generative model; and
taking the picture output by the generative model as a processed target picture having a preset style, wherein the generative adversarial network is trained on sample pictures having the same size as the initial picture, and the target picture has the same size as the initial picture.
In a sixth aspect, a computer-readable storage medium is proposed, the computer-readable storage medium storing one or more programs which, when executed by an electronic device comprising a plurality of application programs, cause the electronic device to perform the following operations:
obtaining an initial picture to be processed;
inputting the initial picture into a generative model of a generative adversarial network, and obtaining the picture output by the generative model; and
taking the picture output by the generative model as a processed target picture having a preset style, wherein the generative adversarial network is trained on sample pictures having the same size as the initial picture, and the target picture has the same size as the initial picture.
In a seventh aspect, an electronic device is proposed, comprising:
a processor; and
a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the following operations:
obtaining an initial advertising picture to be processed;
inputting the initial advertising picture into a generative model of a generative adversarial network, and obtaining the advertising picture output by the generative model; and
taking the advertising picture output by the generative model as a processed target advertising picture having an advertising style, wherein the generative adversarial network is trained on sample advertising pictures having the same size as the initial advertising picture, and the target advertising picture has the same size as the initial advertising picture.
In an eighth aspect, a computer-readable storage medium is proposed, the computer-readable storage medium storing one or more programs which, when executed by an electronic device comprising a plurality of application programs, cause the electronic device to perform the following operations:
obtaining an initial advertising picture to be processed;
inputting the initial advertising picture into a generative model of a generative adversarial network, and obtaining the advertising picture output by the generative model; and
taking the advertising picture output by the generative model as a processed target advertising picture having an advertising style, wherein the generative adversarial network is trained on sample advertising pictures having the same size as the initial advertising picture, and the target advertising picture has the same size as the initial advertising picture.
As can be seen from the technical solutions provided by the embodiments of this specification above, the solutions provided by the embodiments of this specification have at least the following technical effect: because the initial picture to be processed is input into the generative model of a trained generative adversarial network to obtain a target picture having a preset style, rather than the initial picture to be processed being processed according to simple rules, the quality of the resulting target picture can be improved.
Detailed description of the embodiments
To make the purposes, technical solutions and advantages of this application clearer, the technical solutions of this application are described clearly and completely below with reference to specific embodiments of this application and the corresponding drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in this application without creative effort shall fall within the protection scope of this application.
To improve the quality of pictures after style conversion, the embodiments of this specification provide an image processing method and apparatus based on a generative adversarial network. The image processing method and apparatus based on a generative adversarial network provided by the embodiments of this specification can be executed by an electronic device, such as a terminal device or a server device; in other words, the method can be executed by software or hardware installed on a terminal device or a server device. The server side includes but is not limited to: a single server, a server cluster, a cloud server, a cloud server cluster, and the like.
Fig. 1 is a schematic flowchart of an image processing method based on a generative adversarial network provided by an embodiment of this specification. As shown in Fig. 1, the method may include:
Step 102: obtain an initial picture to be processed.
The initial picture to be processed may be a pre-saved picture that needs style conversion. Such initial pictures may be produced manually based on user experience design (User Experience Design, UED) — for example, the initial picture to be processed may be an initial advertising draft picture designed manually in advance based on UED — or may be obtained in other ways, for example, as a photo shot by an image capture device.
Step 104: input the initial picture into the generative model of a generative adversarial network, and obtain the picture output by the generative model.
Fig. 2 shows a schematic structural diagram of a generative adversarial network. As shown in Fig. 2, the generative adversarial network includes a discriminative model (Discriminative, D) and a generative model (Generative, G). The generative model G learns the joint distribution of the observed data, while the discriminative model D learns a conditional probability distribution, that is, the distribution of the unobserved variables given the observed variables.
Here, z is the input of the generative model G; under normal circumstances it is noise sample data generated from a Gaussian random distribution (in the embodiments of this specification, it is a sample picture). The figure shows the generative model G with 3 layers and several nodes; the number of layers and the number of nodes per layer of the generative model G can be adjusted according to actual needs. The output of the generative model G is G(z).
When training the discriminative model, the label of the real data X can be 1, and the label of the output G(z) from the generative model G can be 0. After the real data X and the output G(z) of the generative model G are mixed and input into the discriminative model D, the discriminative model D can be regarded as a binary classifier.
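The label scheme above can be sketched as follows. This is an illustration, not code from the specification: the array sizes and random stand-ins for the pictures are invented for the example, and the key point is only that real data X gets label 1, generator output G(z) gets label 0, and the shuffled mixture is what the discriminative model D sees as a binary classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
real_x = rng.normal(size=(4, 16))     # stands in for real pictures X
fake_gz = rng.normal(size=(4, 16))    # stands in for generator outputs G(z)

# real data X -> label 1, generated data G(z) -> label 0
inputs = np.concatenate([real_x, fake_gz])
labels = np.concatenate([np.ones(len(real_x)), np.zeros(len(fake_gz))])

# mix real and generated samples before feeding the discriminative model D
perm = rng.permutation(len(inputs))
inputs, labels = inputs[perm], labels[perm]
```

A discriminator trained on `(inputs, labels)` is then an ordinary supervised binary classifier.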
In the embodiments of this specification, the discriminative model D is used to determine whether an input picture is a real initial picture or a virtual target picture generated by the generative model G; its output is a probability. Specifically, when the output of the discriminative model is 1, the input is a real initial picture; when the output of the discriminative model is 0, the input is a virtual target picture. The discriminative model D can be constructed based on convolutional neural networks (Convolutional Neural Networks, CNN) and a residual network model (Residual Networks, ResNet).
Fig. 3 shows a schematic structural diagram of a residual network model. As shown in Fig. 3, the residual network model includes at least one residual module (Fig. 3 shows only one). The residual module includes a first convolution module 31 and a second convolution module 32, which are connected by an activation function (Rectified Linear Unit, ReLU). When the input of the residual module is X, it is processed by the first convolution module 31 and the second convolution module 32 to obtain F(X); X is then added to F(X) and passed to the next residual module, to which this residual module is also connected via a ReLU.
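The data flow of one residual module can be sketched as below. This is a minimal illustration under assumptions: the two convolution modules are replaced by toy linear maps (`W1`, `W2`), invented here only to show the X + F(X) skip connection; the real modules 31 and 32 would be convolution layers.

```python
import numpy as np

def relu(x):
    # activation function (Rectified Linear Unit)
    return np.maximum(x, 0.0)

def residual_module(x, conv1, conv2):
    # F(X): first convolution module -> ReLU -> second convolution module
    f_x = conv2(relu(conv1(x)))
    # skip connection: X is added to F(X), then passed on through a ReLU
    return relu(x + f_x)

# toy linear maps standing in for convolution modules 31 and 32
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
y = residual_module(rng.normal(size=8), lambda v: W1 @ v, lambda v: W2 @ v)
```

The output `y` has the same shape as the input, which is what lets residual modules be chained one after another.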
In the embodiments of this specification, the generative model G is used to output, after receiving an input initial picture z (a real picture), a style-converted target picture G(z) (a virtual picture). The network structure of the generative model G can be as shown in Fig. 4. Referring to Fig. 4, after the initial picture z is input into the generative model G, inside the generative model G the input initial picture z successively goes through projection and reshaping (project and reshape) and convolution operations (CONV) CONV1, CONV2, CONV3 and CONV4, and the target picture G(z) is finally output. Of course, the network structure of the generative model G can also be a structure other than that shown in Fig. 4; the embodiments of this specification do not limit this.
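The shape bookkeeping of such a generator pipeline can be sketched as follows. This is not the patented network: the layer sizes are invented for illustration, and the four CONV stages are replaced by simple nearest-neighbour upsampling stand-ins, only to show how "project and reshape" turns the input vector into a small feature map that the convolution stages then grow into the output picture G(z).

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=100)                 # input to the generative model G

# project and reshape: a dense projection, reshaped to 8 channels of 4x4
W = rng.normal(size=(100, 4 * 4 * 8))    # hypothetical projection weights
feature_map = (z @ W).reshape(8, 4, 4)

# stand-ins for CONV1..CONV4: each stage doubles the spatial resolution
for _ in range(4):
    feature_map = feature_map.repeat(2, axis=1).repeat(2, axis=2)

g_z = feature_map[0]                     # one channel of the final 64x64 map
```

Four doubling stages take the 4x4 map to 64x64, matching the idea that the output picture has a fixed size determined by the network.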
Step 106: take the picture output by the generative model as a processed target picture having a preset style.
Here, the generative adversarial network is trained on sample pictures having the same size as the initial picture, and the target picture has the same size as the initial picture.
In the embodiments of this specification, the generative model of the generative adversarial network is used to convert the style of the input initial picture while keeping the core content of the input initial picture unchanged.
The preset style can be an arbitrary style. For example, if the initial picture to be processed is an initial advertising draft picture, the preset style can be an advertising-promotion style; for another example, if the picture to be processed is a photo of a landscape shot in summer, the preset style can be a winter style, and so on.
In the embodiments of this specification, for the generative model G of the generative adversarial network, the target is to generate target pictures that look almost genuine, so as to deceive the discriminative model D, making it difficult for the discriminative model D to determine whether they are real or fake. For the discriminative model D of the generative adversarial network, the target is to avoid being deceived by the generative model G. Then, when the discriminative model D and the generative model G reach Nash equilibrium, the pictures output by the generative model G look genuine and the generative model G has successfully deceived the discriminative model D; at this time, the picture output by the generative model G can be taken as the target picture with the preset style corresponding to the initial picture.
In the embodiments of this specification, because the initial picture to be processed is input into the generative model of a trained generative adversarial network to obtain a target picture having a preset style, rather than the initial picture to be processed being processed according to simple rules, the quality of the resulting target picture can be improved.
As shown in Fig. 5, in another embodiment of this specification, before the above step 102, the image processing method based on a generative adversarial network may further include:
Step 108: train the generative model and the discriminative model of the generative adversarial network.
The idea of a generative adversarial network is continuous adversarial training between the generative model and the discriminative model, so that the two models improve each other. As can be seen from Fig. 3 and Fig. 4, the generative model G and the discriminative model D of the generative adversarial network are two mutually independent models. However, the error when training the generative model G comes from the discriminative model D; therefore, the two are trained separately and alternately — for example, the following sub-step 602 and sub-step 604 are performed separately and alternately until the discriminative model and the generative model reach Nash equilibrium.
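The alternating scheme above can be sketched as a training skeleton. The three callables are placeholders for sub-steps 602, 604 and 606 described below — not code from the specification — and the loop bound `max_rounds` is an assumed safety limit.

```python
def train_gan(train_discriminator, train_generator, reached_nash,
              max_rounds=1000):
    # sub-steps 602 and 604 are performed separately and alternately
    # until the equilibrium test (sub-step 606) succeeds
    for round_count in range(1, max_rounds + 1):
        train_discriminator()   # sub-step 602: G fixed, update D
        train_generator()       # sub-step 604: D fixed, update G
        if reached_nash():      # sub-step 606: Nash equilibrium test
            return round_count
    return None                 # equilibrium not reached within max_rounds

# usage sketch with counting stubs in place of real training steps
calls = {"d": 0, "g": 0}
def step_d(): calls["d"] += 1
def step_g(): calls["g"] += 1
rounds = train_gan(step_d, step_g, lambda: calls["d"] >= 3)
```

Here the stub "equilibrium" fires after three rounds, so both models are updated exactly three times.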
Referring to Fig. 6, training the generative model and the discriminative model of the generative adversarial network may specifically include:
Sub-step 602: train the discriminative model of the generative adversarial network.
In detail, sub-step 602 may include: fixing the parameters of the generative model, inputting the sample pictures into the generative model, and obtaining sample output pictures output by the generative model; and training the discriminative model with the sample pictures and the sample output pictures as positive and negative samples, respectively.
In the process of training the discriminative model, taking the sample pictures as positive samples means setting the labels of the sample pictures to 1, and taking the sample output pictures as negative samples means setting the labels of the sample output pictures to 0. In this way, training the discriminative model is supervised training of a binary classification model.
It can be understood that, in the first training of the discriminative model, the generative model used to generate the sample output pictures may be an initially constructed, untrained generative model; in the N-th training of the discriminative model (N is an integer greater than 1), the generative model used to generate the sample output pictures is the generative model obtained after the training of the following sub-step 604.
Sub-step 604: train the generative model of the generative adversarial network.
In detail, sub-step 604 may include: fixing the parameters of the discriminative model, inputting the sample pictures into the generative model, and obtaining sample output pictures output by the generative model; and inputting the sample output pictures into the discriminative model as positive samples and obtaining the discrimination results of the discriminative model, where a discrimination result includes the probability that a sample output picture is discriminated as a sample picture.
Sub-step 606: judge whether the discriminative model and the generative model have reached Nash equilibrium; if so, execute step 110, otherwise execute sub-step 610.
In the embodiments of this specification, the way of judging whether the discriminative model and the generative model have reached Nash equilibrium may be to judge whether the discrimination results of the discriminative model converge to 0.5; that is, when the discriminative model can no longer discriminate whether a sample output picture generated by the generative model is a real sample picture or a virtual picture generated by the generative model, it can be considered that the discriminative model and the generative model have reached Nash equilibrium.
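The convergence test can be sketched as below. This is an illustrative check, not part of the specification: the tolerance `tol` is an assumed hyperparameter, and the criterion is simply that the discriminative model's average output on generated samples sits near 0.5.

```python
from statistics import fmean

def reached_nash_equilibrium(d_outputs, tol=0.05):
    # d_outputs: discriminative model outputs (probabilities) on a batch of
    # generated sample output pictures; near 0.5 means D can no longer tell
    # real sample pictures from virtual generated pictures
    return abs(fmean(d_outputs) - 0.5) <= tol
```

For example, outputs like `[0.49, 0.51, 0.50]` would count as converged, while `[0.9, 0.95, 0.88]` would not.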
Sub-step 610: optimize the generative model.
The target of optimizing the generative model includes: compared with inputting the sample output pictures of the generative model before optimization into the discriminative model, when the sample output pictures of the generative model after optimization are input into the discriminative model, the output result of the discriminative model increases. That is, compared with inputting the sample output pictures of the generative model before optimization into the discriminative model, when the sample output pictures of the generative model after optimization are input into the discriminative model, the output result of the discriminative model is closer to 0.5.
Sub-step 612: optimize the discriminative model.
The optimization target of the discriminative model includes: compared with inputting the sample pictures into the discriminative model before optimization, when the sample pictures are input into the discriminative model after optimization, the output result of the discriminative model increases; and, compared with inputting the sample output pictures of the generative model into the discriminative model before optimization, when the sample output pictures of the generative model are input into the discriminative model after optimization, the output result of the discriminative model decreases. That is, compared with inputting the sample pictures into the discriminative model before optimization, when the sample pictures are input into the discriminative model after optimization, the output result of the discriminative model is closer to 1; and compared with inputting the sample output pictures of the generative model into the discriminative model before optimization, when the sample output pictures of the generative model are input into the discriminative model after optimization, the output result of the discriminative model is further from 0.5.
Step 110: save the generative model.
The saved generative model can be used in step 104 above to generate the target picture corresponding to an initial picture.
The embodiments of this specification provide an image processing method based on a generative adversarial network. Through continuous optimization and training, a generative model that reaches Nash equilibrium with the discriminative model is obtained, so that style conversion can be performed on an initial picture to be processed based on this generative model, and the quality of the converted picture can thus be improved.
Based on the same technical concept as the above image processing method based on a generative adversarial network, the embodiments of this specification further provide an advertising-picture processing method based on a generative adversarial network, so as to achieve the goal of converting an initial advertising draft picture into an advertising picture with an obvious advertising-promotion style.
As shown in Fig. 7, an advertising-picture processing method based on a generative adversarial network provided by an embodiment of this specification may include:
Step 702: obtain an initial advertising picture to be processed.
The initial advertising picture to be processed may be an initial advertising draft picture designed manually in advance based on UED.
Step 704: input the initial advertising picture into the generative model of a generative adversarial network, and obtain the advertising picture output by the generative model.
The generative adversarial network includes a discriminative model (Discriminative, D) and a generative model (Generative, G).
In the embodiments of this specification, the generative model G is used to output, after receiving an input initial advertising picture z (a real picture), a style-converted target advertising picture G(z) (a virtual picture).
In the embodiments of this specification, the discriminative model D is used to determine whether an input picture is a real initial advertising picture or a virtual target advertising picture generated by the generative model; its output is a probability. Specifically, when the output of the discriminative model is 1, the input is a real initial advertising picture; when the output of the discriminative model is 0, the input is a virtual target advertising picture.
Step 706: take the advertising picture output by the generative model as a processed target advertising picture having an advertising style.
Here, the generative adversarial network is trained on sample advertising pictures having the same size as the initial advertising picture, and the target advertising picture has the same size as the initial advertising picture.
In the embodiments of this specification, the generative model of the generative adversarial network is used to convert the style of the input initial advertising picture while keeping the core content of the input initial advertising picture unchanged.
In the embodiments of this specification, for the generative model G of the generative adversarial network, the target is to generate target advertising pictures that look almost genuine, so as to deceive the discriminative model D, making it difficult for the discriminative model to determine whether they are real or fake. For the discriminative model D of the generative adversarial network, the target is to avoid being deceived by the generative model G. Then, when the discriminative model D and the generative model G reach Nash equilibrium, the pictures output by the generative model G look genuine and the generative model G has successfully deceived the discriminative model D; at this time, the picture output by the generative model G can be taken as the target advertising picture with the preset style corresponding to the initial advertising picture.
In the embodiments of this specification, because the initial advertising picture to be processed is input into the generative model of a trained generative adversarial network to obtain a target picture having an advertising style, rather than the initial advertising picture to be processed being processed according to simple rules, the quality of the resulting target advertising picture can be improved, and problems such as uneven color and chaotic patterns do not occur. Moreover, experiments show that after advertising pictures processed with the method provided by the embodiments of this specification are put into delivery, the click conversion rate is improved by 30% or more.
Optionally, before step 702, the method shown in Fig. 7 may further include:
training the generative model and the discriminative model of the generative adversarial network.
The difference between this specific training process and the training process described in the embodiment shown in Fig. 5 above is that the sample pictures used are initial advertising pictures; the rest of the process is similar. The specific training process can therefore refer to the above and is not described repeatedly here.
The above is the description of the method embodiments provided by this specification; the electronic device provided by this specification is introduced below.
Fig. 8 is a schematic structural diagram of an electronic device provided by an embodiment of this specification. Referring to Fig. 8, at the hardware level, the electronic device includes a processor, and optionally further includes an internal bus, a network interface and a memory. The memory may include internal memory, such as high-speed random-access memory (Random-Access Memory, RAM), and may also include non-volatile memory, for example, at least one magnetic disk memory. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface and the memory can be connected to each other through the internal bus, which can be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus can be divided into an address bus, a data bus, a control bus, etc. For ease of representation, only one double-headed arrow is used in Fig. 8, but this does not mean that there is only one bus or one type of bus.
The memory is used for storing programs. Specifically, a program may include program code, and the program code includes computer operation instructions. The memory may include internal memory and non-volatile memory, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the internal memory and then runs it, forming at the logical level a picture processing apparatus based on a generative adversarial network. The processor executes the program stored in the memory and is specifically used to perform the following operations:
obtaining an initial picture to be processed;
inputting the initial picture into a generative model of a generative adversarial network, and obtaining the picture output by the generative model; and
taking the picture output by the generative model as a processed target picture having a preset style, wherein the generative adversarial network is trained on sample pictures having the same size as the initial picture, and the target picture has the same size as the initial picture.
Alternatively, the processor executes the program stored in the memory and is specifically used to perform the following operations:
obtaining an initial advertising picture to be processed;
inputting the initial advertising picture into a generative model of a generative adversarial network, and obtaining the advertising picture output by the generative model; and
taking the advertising picture output by the generative model as a processed target advertising picture having an advertising style, wherein the generative adversarial network is trained on sample advertising pictures having the same size as the initial advertising picture, and the target advertising picture has the same size as the initial advertising picture.
The image processing method based on a generative adversarial network disclosed in the embodiment shown in Fig. 1 or Fig. 5 of this specification can be applied to a processor or implemented by a processor. The processor may be an integrated circuit chip with signal processing capability. In implementation, each step of the above method can be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The above processor can be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it can also be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps and logic diagrams disclosed in one or more embodiments of this specification. The general-purpose processor can be a microprocessor, or the processor can also be any conventional processor, etc. The steps of the method disclosed in combination with one or more embodiments of this specification can be directly embodied as being completed by a hardware decoding processor, or completed by a combination of hardware and software modules in a decoding processor. The software module can be located in a storage medium mature in this field, such as random-access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device can also execute the image processing method based on a generative adversarial network of Fig. 1 or Fig. 5, which is not described repeatedly here.
Of course, in addition to the software implementation, the electronic device of this specification does not exclude other implementations, such as a logic device or a combination of software and hardware; that is to say, the executing subject of the following processing flow is not limited to individual logic units, and can also be hardware or a logic device.
The embodiments of this specification also propose a computer-readable storage medium storing one or more programs. The one or more programs include instructions which, when executed by a portable electronic device comprising a plurality of application programs, enable the portable electronic device to execute the method of the embodiment shown in Fig. 1, and are specifically used to perform the following operations:
obtaining an initial picture to be processed;
inputting the initial picture into a generative model of a generative adversarial network, and obtaining the picture output by the generative model; and
taking the picture output by the generative model as a processed target picture having a preset style, wherein the generative adversarial network is trained on sample pictures having the same size as the initial picture, and the target picture has the same size as the initial picture.
The embodiments of this specification also propose a computer-readable storage medium storing one or more programs. The one or more programs include instructions which, when executed by a portable electronic device comprising a plurality of application programs, enable the portable electronic device to execute the method of the embodiment shown in Fig. 7, and are specifically used to perform the following operations:
obtaining an initial advertising picture to be processed;
inputting the initial advertising picture into a generative model of a generative adversarial network, and obtaining the advertising picture output by the generative model; and
taking the advertising picture output by the generative model as a processed target advertising picture having an advertising style, wherein the generative adversarial network is trained on sample advertising pictures having the same size as the initial advertising picture, and the target advertising picture has the same size as the initial advertising picture.
The picture processing apparatus based on a generative adversarial network provided by this specification is described below.
As shown in Fig. 9, an embodiment of this specification provides a picture processing apparatus based on a generative adversarial network. In a software implementation, the picture processing apparatus 900 based on a generative adversarial network may include: a first obtaining module 901, a first input module 902, and a first determining module 903.
The first obtaining module 901 is configured to obtain an initial picture to be processed.
The first input module 902 is configured to input the initial picture into a generation model in a generative adversarial network and obtain a picture output by the generation model.
A generative adversarial network includes a discrimination model D and a generation model G. In this embodiment of this specification, the generation model G is configured to receive an input initial picture z (a real picture) and output a style-converted target picture G(z) (a virtual picture); the discrimination model D is configured to discriminate whether an input picture is a real initial picture or a virtual target picture produced by the generation model, and its output is a probability. The discrimination model may be constructed based on a convolutional neural network and a residual network model.
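The division of roles above can be illustrated with a minimal, self-contained sketch. The following Python code is purely illustrative: it stands in for the generation model G and the discrimination model D with simple NumPy transforms (a linear map and logistic regression) rather than the convolutional/residual networks described above; all names and sizes here are assumptions, not part of the embodiments.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 8, 8  # toy picture size; the real models are convolutional

# Stand-in for the generation model G: maps an initial picture z to a
# "styled" picture G(z) of the SAME size (here a fixed linear transform).
W_g = rng.normal(scale=0.1, size=(H * W, H * W))

def generate(z):
    """Return a styled picture with the same shape as the input."""
    return np.tanh(W_g @ z.ravel()).reshape(H, W)

# Stand-in for the discrimination model D: maps a picture to the
# probability that it is a real initial picture, not a generated one.
w_d = rng.normal(scale=0.1, size=H * W)

def discriminate(x):
    """Probability in (0, 1) that picture x is real."""
    return 1.0 / (1.0 + np.exp(-(w_d @ x.ravel())))

z = rng.normal(size=(H, W))     # initial picture z (real picture)
target = generate(z)            # target picture G(z) (virtual picture)
assert target.shape == z.shape  # output size matches the input size
p = discriminate(target)
assert 0.0 < p < 1.0            # D outputs a probability
```

Note that the shape assertion mirrors the requirement above that the target picture has the same size as the initial picture.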
The first determining module 903 is configured to take the picture output by the generation model as a processed target picture having a preset style.
The generative adversarial network is obtained by training on sample pictures having the same size as the initial picture, and the target picture has the same size as the initial picture.
In this embodiment of this specification, the generation model in the generative adversarial network is configured to convert the style of the input initial picture while keeping the core content of the input initial picture unchanged.
In this embodiment of this specification, the goal of the generation model G in the generative adversarial network is to generate target pictures that look almost genuine, so as to deceive the discrimination model D and make it difficult for the discrimination model to determine whether they are real or fake; the goal of the discrimination model D in the generative adversarial network is to avoid being deceived by the generation model G. Therefore, when the discrimination model D and the generation model G reach a Nash equilibrium, the pictures output by the generation model G look genuine and the generation model G has successfully deceived the discrimination model D; at this time, a picture output by the generation model G can be taken as the target picture with the preset style corresponding to the initial picture.
In this embodiment of this specification, a target picture with a preset style is obtained by inputting the initial picture to be processed into the trained generation model of the generative adversarial network, rather than by processing the initial picture with simple rules; the quality of the resulting target picture can therefore be improved.
As shown in Fig. 10, the picture processing apparatus 900 based on a generative adversarial network provided by another embodiment of this specification may further include:
a training module 904, configured to train the generation model and the discrimination model in the generative adversarial network before the initial picture to be processed is obtained.
The idea of a generative adversarial network is that the generation model and the discrimination model are continuously trained against each other, so that the two models improve each other. The generation model G and the discrimination model D in the generative adversarial network are two mutually independent models, but the error used to train the generation model G comes from the discrimination model D; their training processes are therefore carried out separately and alternately. For example, the following first training submodule 111 and second training submodule 112 are triggered separately and alternately until the discrimination model and the generation model reach a Nash equilibrium.
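The alternating triggering of the two training submodules can be sketched as follows. This toy Python loop uses a linear generation model and a logistic discrimination model on flattened toy pictures; the hand-derived gradient steps, the learning rate, and the fixed step count are illustrative assumptions, not the training procedure of the embodiments.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16  # size of a flattened toy picture

def sigmoid(t):
    # Clip to keep np.exp well-behaved.
    return 1.0 / (1.0 + np.exp(-np.clip(t, -60.0, 60.0)))

# Toy stand-ins: G is a linear map, D is logistic regression.
A = rng.normal(scale=0.1, size=(d, d))  # parameters of generation model G
w = rng.normal(scale=0.1, size=d)       # parameters of discrimination model D
b = 0.0
lr = 0.05

for step in range(200):
    x = rng.normal(loc=1.0, size=d)     # a sample picture (real)

    # First training submodule: update D with the sample picture as a
    # positive sample and G's output as a negative sample.
    fake = A @ x                        # sample output picture G(x)
    p_real = sigmoid(w @ x + b)
    p_fake = sigmoid(w @ fake + b)
    w -= lr * ((p_real - 1.0) * x + p_fake * fake)
    b -= lr * ((p_real - 1.0) + p_fake)

    # Second training submodule: update G so that D scores G's output as
    # real (the sample output picture is fed to D as a positive sample).
    fake = A @ x
    p_fake = sigmoid(w @ fake + b)
    A -= lr * np.outer((p_fake - 1.0) * w, x)

# After the alternating updates, D still outputs a probability.
p = sigmoid(w @ (A @ np.ones(d)) + b)
assert 0.0 <= p <= 1.0
```

In practice the two phases may be interleaved at other ratios; the one-step alternation above is only one common scheme.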
In detail, as shown in Fig. 11, the training module 904 includes:
a first training submodule 111, configured to train the discrimination model in the generative adversarial network.
In detail, the first training submodule 111 may be configured to input the sample pictures into the generation model to obtain sample output pictures output by the generation model, and to train the discrimination model using the sample pictures and the sample output pictures as positive and negative samples, respectively.
A second training submodule 112 is configured to train the generation model in the generative adversarial network.
In detail, the second training submodule 112 may be configured to input the sample pictures into the generation model to obtain sample output pictures output by the generation model, input the sample output pictures into the discrimination model as positive samples, and obtain a discrimination result of the discrimination model, the discrimination result including the probability that a sample output picture is determined to be a sample picture.
A judging submodule 113 is configured to judge whether the discrimination model and the generation model have reached a Nash equilibrium; if so, the following preserving module 905 is triggered; otherwise, the following first optimization submodule 114 is triggered.
A first optimization submodule 114 is configured to optimize the generation model after the discrimination result of the discrimination model is obtained, when the discrimination model and the generation model have not reached a Nash equilibrium.
The optimization goal of the generation model includes: compared with inputting the sample output pictures of the generation model before optimization into the discrimination model, when the sample output pictures of the generation model after optimization are input into the discrimination model, the output result of the discrimination model increases.
A second optimization submodule 115 is configured to optimize the discrimination model.
The optimization goal of the discrimination model includes: compared with inputting the sample pictures into the discrimination model before optimization, when the sample pictures are input into the discrimination model after optimization, the output result of the discrimination model increases; and compared with inputting the sample output pictures of the generation model into the discrimination model before optimization, when the sample output pictures of the generation model are input into the discrimination model after optimization, the output result of the discrimination model decreases.
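Under the binary cross-entropy formulation commonly used for generative adversarial networks (an assumption here; the embodiments do not fix a particular loss), the two optimization goals above correspond to losses that decrease exactly when the stated output results move in the stated directions:

```python
import numpy as np

def d_loss(p_real, p_fake):
    """Discrimination model objective: raise p_real, lower p_fake."""
    return -np.log(p_real) - np.log(1.0 - p_fake)

def g_loss(p_fake):
    """Generation model objective: raise D's output on G's pictures."""
    return -np.log(p_fake)

# A successful D update raises its output on sample pictures and lowers
# it on sample output pictures, so its loss drops:
assert d_loss(0.9, 0.1) < d_loss(0.6, 0.4)
# A successful G update raises D's output on G's pictures:
assert g_loss(0.7) < g_loss(0.3)
```

The Nash equilibrium checked by the judging submodule 113 is the point where neither loss can be further reduced unilaterally.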
A preserving module 905 is configured to save the generation model.
The saved generation model can be used by the first input module 902 to generate the target picture corresponding to an initial picture.
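As a rough sketch of the preserving module and its later use by the first input module, the following Python code saves toy generation-model parameters and reloads them to style an initial picture. The file name, the serialization format, and the linear "generation model" are illustrative assumptions only.

```python
import os
import pickle
import tempfile

import numpy as np

rng = np.random.default_rng(2)
H, W = 8, 8
A = rng.normal(scale=0.1, size=(H * W, H * W))  # "trained" parameters of G

# Preserving module: save only the generation model once training ends;
# the discrimination model is not needed at serving time.
path = os.path.join(tempfile.mkdtemp(), "generator.pkl")
with open(path, "wb") as f:
    pickle.dump(A, f)

# First input module: load the saved G and style an initial picture.
with open(path, "rb") as f:
    A_loaded = pickle.load(f)

initial = rng.normal(size=(H, W))               # initial picture
target = np.tanh(A_loaded @ initial.ravel()).reshape(H, W)
assert target.shape == initial.shape            # size is preserved
```

Saving only the generator reflects the design above: after a Nash equilibrium is reached, the discrimination model has served its purpose.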
This embodiment of this specification provides a picture processing apparatus based on a generative adversarial network. Through continuous optimization training, a generation model that reaches a Nash equilibrium with the discrimination model is obtained, so that style conversion can be performed on the initial picture to be processed based on this generation model, and the quality of the converted picture can thus be improved.
It should be noted that the picture processing apparatus 900 based on a generative adversarial network can implement the method of the method embodiment of Fig. 1; for details, refer to the picture processing method based on a generative adversarial network of the embodiment shown in Fig. 1, which is not repeated here.
As shown in Fig. 12, a further embodiment of this specification additionally provides an advertising picture processing apparatus 120 based on a generative adversarial network, including: a second obtaining module 121, a second input module 122, and a second determining module 123.
The second obtaining module 121 is configured to obtain an initial advertising picture to be processed.
The second input module 122 is configured to input the initial advertising picture into a generation model in a generative adversarial network and obtain an advertising picture output by the generation model.
The second determining module 123 is configured to take the advertising picture output by the generation model as a processed target advertising picture having an advertising style, wherein the generative adversarial network is obtained by training on sample advertising pictures having the same size as the initial advertising picture, and the target advertising picture has the same size as the initial advertising picture.
In this embodiment of this specification, the goal of the generation model G in the generative adversarial network is to generate target advertising pictures that look almost genuine, so as to deceive the discrimination model D and make it difficult for the discrimination model to determine whether they are real or fake; the goal of the discrimination model D in the generative adversarial network is to avoid being deceived by the generation model G. Therefore, when the discrimination model D and the generation model G reach a Nash equilibrium, the pictures output by the generation model G look genuine and the generation model G has successfully deceived the discrimination model D; at this time, a picture output by the generation model G can be taken as the target advertising picture with the preset style corresponding to the initial advertising picture.
In this embodiment of this specification, a target advertising picture with an advertising style is obtained by inputting the initial advertising picture to be processed into the trained generation model of the generative adversarial network, rather than by processing the initial advertising picture with simple rules; the quality of the resulting target advertising picture can therefore be improved, and problems such as uneven color and chaotic patterns do not occur. Moreover, experiments show that after advertising pictures processed by the apparatus provided by this embodiment of this specification are launched, the click conversion rate is improved by more than 30%.
Optionally, the apparatus shown in Fig. 12 may further include: a training module, configured to train the generation model and the discrimination model in the generative adversarial network.
The specific training process differs from the training process described in the embodiment shown in Fig. 5 above in that the sample pictures used are initial advertising pictures; the rest of the process is similar. For the specific training process, refer to the description above, which is not repeated here.
It should be noted that the advertising picture processing apparatus 120 based on a generative adversarial network can implement the method of the method embodiment of Fig. 7; for details, refer to the advertising picture processing method based on a generative adversarial network of the embodiment shown in Fig. 7, which is not repeated here.
Specific embodiments of this specification have been described above. Other embodiments fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that in the embodiments and still achieve desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve desired results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The embodiments in this specification are described in a progressive manner; for the same or similar parts among the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the apparatus embodiments are substantially similar to the method embodiments, they are described relatively simply; for relevant parts, refer to the description of the method embodiments.
In short, the above descriptions are merely preferred embodiments of this specification and are not intended to limit the protection scope of this specification. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of one or more embodiments of this specification shall fall within the protection scope of one or more embodiments of this specification.
The systems, apparatuses, modules, or units illustrated in the above embodiments may be specifically implemented by computer chips or entities, or by products having certain functions. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or any combination of these devices.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, and can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, commodity, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, commodity, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, commodity, or device that includes the element.
The embodiments in this specification are described in a progressive manner; for the same or similar parts among the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, they are described relatively simply; for relevant parts, refer to the description of the method embodiments.