CN110443746A - Image processing method and apparatus based on generative adversarial network, and electronic device - Google Patents


Info

Publication number
CN110443746A
Authority
CN
China
Prior art keywords
picture
model
generation
initial
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910675265.4A
Other languages
Chinese (zh)
Other versions
CN110443746B (en)
Inventor
刘弘一
陈若田
熊军
李若鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Advantageous New Technologies Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201910675265.4A priority Critical patent/CN110443746B/en
Publication of CN110443746A publication Critical patent/CN110443746A/en
Application granted granted Critical
Publication of CN110443746B publication Critical patent/CN110443746B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The embodiments of this specification disclose an image processing method and apparatus based on a generative adversarial network, and an electronic device. The method obtains a target picture with a preset style by inputting an initial picture into the generation model of a pre-trained generative adversarial network.

Description

Image processing method and apparatus based on generative adversarial network, and electronic device
Technical field
This application relates to the field of computer technology, and in particular to an image processing method and apparatus based on a generative adversarial network, and an electronic device.
Background art
In everyday work and life, people often need to convert the style of an initial picture. For example, before an advertiser places advertising pictures in an advertising slot, the initial advertising-creative pictures produced by designers need to undergo style conversion to obtain target advertising pictures with a distinct marketing and promotion style, so that the advertisement can achieve a better marketing effect (a higher click conversion rate) after placement.
However, with existing image processing methods that convert picture style automatically, the resulting target pictures often suffer from uneven color, confused patterns, and poor quality.
Summary of the invention
The embodiments of this specification provide an image processing method and apparatus based on a generative adversarial network, and an electronic device, so as to improve the quality of the processed picture.
To solve the above technical problem, the embodiments of this specification are implemented as follows.
In a first aspect, an image processing method based on a generative adversarial network is proposed, comprising:
obtaining an initial picture to be processed;
inputting the initial picture into the generation model of the generative adversarial network, and obtaining the picture output by the generation model; and
taking the picture output by the generation model as the processed target picture with a preset style, wherein the generative adversarial network is trained on sample pictures of the same size as the initial picture, and the target picture has the same size as the initial picture.
In a second aspect, an advertising-picture processing method based on a generative adversarial network is proposed, comprising:
obtaining an initial advertising picture to be processed;
inputting the initial advertising picture into the generation model of the generative adversarial network, and obtaining the advertising picture output by the generation model; and
taking the advertising picture output by the generation model as the processed target advertising picture with an advertising style, wherein the generative adversarial network is trained on sample advertising pictures of the same size as the initial advertising picture, and the target advertising picture has the same size as the initial advertising picture.
In a third aspect, a picture processing apparatus based on a generative adversarial network is proposed, comprising:
a first obtaining module, configured to obtain an initial picture to be processed;
a first input module, configured to input the initial picture into the generation model of the generative adversarial network and obtain the picture output by the generation model; and
a first determining module, configured to take the picture output by the generation model as the processed target picture with a preset style, wherein the generative adversarial network is trained on sample pictures of the same size as the initial picture, and the target picture has the same size as the initial picture.
In a fourth aspect, an advertising-picture processing apparatus based on a generative adversarial network is proposed, comprising:
a second obtaining module, configured to obtain an initial advertising picture to be processed;
a second input module, configured to input the initial advertising picture into the generation model of the generative adversarial network and obtain the advertising picture output by the generation model; and
a second determining module, configured to take the advertising picture output by the generation model as the processed target advertising picture with an advertising style, wherein the generative adversarial network is trained on sample advertising pictures of the same size as the initial advertising picture, and the target advertising picture has the same size as the initial advertising picture.
In a fifth aspect, an electronic device is proposed, comprising:
a processor; and
a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the following operations:
obtaining an initial picture to be processed;
inputting the initial picture into the generation model of the generative adversarial network, and obtaining the picture output by the generation model; and
taking the picture output by the generation model as the processed target picture with a preset style, wherein the generative adversarial network is trained on sample pictures of the same size as the initial picture, and the target picture has the same size as the initial picture.
In a sixth aspect, a computer-readable storage medium is proposed. The computer-readable storage medium stores one or more programs which, when executed by an electronic device comprising a plurality of application programs, cause the electronic device to perform the following operations:
obtaining an initial picture to be processed;
inputting the initial picture into the generation model of the generative adversarial network, and obtaining the picture output by the generation model; and
taking the picture output by the generation model as the processed target picture with a preset style, wherein the generative adversarial network is trained on sample pictures of the same size as the initial picture, and the target picture has the same size as the initial picture.
In a seventh aspect, an electronic device is proposed, comprising:
a processor; and
a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the following operations:
obtaining an initial advertising picture to be processed;
inputting the initial advertising picture into the generation model of the generative adversarial network, and obtaining the advertising picture output by the generation model; and
taking the advertising picture output by the generation model as the processed target advertising picture with an advertising style, wherein the generative adversarial network is trained on sample advertising pictures of the same size as the initial advertising picture, and the target advertising picture has the same size as the initial advertising picture.
In an eighth aspect, a computer-readable storage medium is proposed. The computer-readable storage medium stores one or more programs which, when executed by an electronic device comprising a plurality of application programs, cause the electronic device to perform the following operations:
obtaining an initial advertising picture to be processed;
inputting the initial advertising picture into the generation model of the generative adversarial network, and obtaining the advertising picture output by the generation model; and
taking the advertising picture output by the generation model as the processed target advertising picture with an advertising style, wherein the generative adversarial network is trained on sample advertising pictures of the same size as the initial advertising picture, and the target advertising picture has the same size as the initial advertising picture.
As can be seen from the technical solutions above, the solutions provided by the embodiments of this specification have at least the following technical effect: because the target picture with the preset style is obtained by inputting the initial picture to be processed into the generation model of a trained generative adversarial network, rather than by processing the initial picture with simple rules, the quality of the resulting target picture can be improved.
Brief description of the drawings
The drawings described here are provided for a further understanding of this application and constitute a part of it. The illustrative embodiments of this application and their description are used to explain this application and do not constitute an undue limitation on it. In the drawings:
Fig. 1 is the first flow diagram of an image processing method based on a generative adversarial network provided by an embodiment of this specification.
Fig. 2 is a structural diagram of the generative adversarial network provided by an embodiment of this specification.
Fig. 3 is a structural diagram of the residual network model provided by an embodiment of this specification.
Fig. 4 is a structural diagram of the generation model in the generative adversarial network provided by an embodiment of this specification.
Fig. 5 is the second flow diagram of an image processing method based on a generative adversarial network provided by an embodiment of this specification.
Fig. 6 is a detailed flow diagram of step 108 in Fig. 5.
Fig. 7 is a flow diagram of an advertising-picture processing method based on a generative adversarial network provided by an embodiment of this specification.
Fig. 8 is a structural diagram of an electronic device provided by an embodiment of this specification.
Fig. 9 is the first structural diagram of a picture processing apparatus based on a generative adversarial network provided by an embodiment of this specification.
Fig. 10 is the second structural diagram of a picture processing apparatus based on a generative adversarial network provided by an embodiment of this specification.
Fig. 11 is a detailed structural diagram of module 904 in Fig. 10.
Fig. 12 is a structural diagram of an advertising-picture processing apparatus based on a generative adversarial network provided by an embodiment of this specification.
Detailed description of the embodiments
To make the purposes, technical solutions, and advantages of this application clearer, the technical solutions of this application are described clearly and completely below in conjunction with specific embodiments and the corresponding drawings. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this application without creative effort shall fall within the protection scope of this application.
To improve the quality of the picture after style conversion, the embodiments of this specification provide an image processing method and apparatus based on a generative adversarial network. The method and apparatus can be executed by an electronic device, such as a terminal device or a server; in other words, the method can be executed by software or hardware installed on a terminal device or a server. The server side includes, but is not limited to, a single server, a server cluster, a cloud server, a cloud server cluster, and the like.
Fig. 1 is a flow diagram of the image processing method based on a generative adversarial network provided by an embodiment of this specification. As shown in Fig. 1, the method may include:
Step 102: obtain an initial picture to be processed.
The initial picture to be processed may be a pre-saved picture whose style needs to be converted. Such initial pictures may be designed manually based on user experience design (User Experience Design, UED) — for example, an initial advertising-creative picture designed manually in advance based on UED — or obtained in other ways, for example, a photo shot by an image-capturing device.
Step 104: input the initial picture into the generation model of the generative adversarial network, and obtain the picture output by the generation model.
Fig. 2 shows the structure of a generative adversarial network. As shown in Fig. 2, the generative adversarial network includes a discrimination model (Discriminative model, D) and a generation model (Generative model, G), where the generation model G learns the joint distribution of the observed data, and the discrimination model D learns a conditional probability distribution, that is, the distribution of the unobserved variables given the observed variables.
Here, z is the input of the generation model G. In the usual case it is noise sample data drawn from a Gaussian distribution (in the embodiments of this specification it is a sample picture). The figure shows three layers and several nodes inside the generation model G; the number of layers and the number of nodes per layer can be adjusted as needed. The output of the generation model G is G(z).
When training the discrimination model, the label of the real data X can be 1, and the label of the output G(z) from the generation model G can be 0. After the real data X and the output G(z) of the generation model G are mixed, they are input to the discrimination model D, which can be regarded as a binary classifier.
In the embodiments of this specification, the discrimination model D is used to discriminate whether the input picture is a real initial picture or a virtual target picture generated by the generation model G; its output is a probability. Specifically, when the output of the discrimination model is 1, the input is a real initial picture; when the output of the discrimination model is 0, the input is a virtual target picture. The discrimination model D can be constructed based on convolutional neural networks (Convolutional Neural Networks, CNN) and a residual network model (Residual Networks, ResNet).
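The discrimination model's role as a probability-emitting binary classifier can be sketched as follows. This is a minimal stand-in, assuming a single linear layer with a sigmoid in place of the CNN-plus-ResNet architecture the embodiment actually describes; the names `discriminator`, `w`, and `b` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator(picture, w, b):
    # Output is a probability: near 1 -> judged a real initial picture,
    # near 0 -> judged a virtual picture produced by the generation model.
    return sigmoid(picture.ravel() @ w + b)

# Hypothetical 8x8 grayscale pictures and randomly initialized weights.
w = rng.normal(scale=0.1, size=64)
b = 0.0
p = discriminator(rng.random((8, 8)), w, b)
assert 0.0 < p < 1.0  # a probability, as the embodiment describes
```

An untrained discriminator like this is exactly the starting point of the alternating training described later.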
Fig. 3 shows the structure of a residual network model. As shown in Fig. 3, the residual network model includes at least one residual module (Fig. 3 shows only one). The residual module includes a first-layer convolution module 31 and a second-layer convolution module 32, connected by an activation function (Rectified Linear Unit, ReLU). When the input of the residual module is X, the processing of the first-layer convolution module 31 and the second-layer convolution module 32 yields F(X); X is then added to F(X) and passed to the next residual module, to which this residual module is also connected through a ReLU.
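The residual module just described can be sketched in a few lines. This is a simplified illustration that assumes dense matrix multiplications in place of the two convolution modules 31 and 32; only the skip-connection arithmetic X + F(X) with ReLU activations is taken from the description.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # F(X): first "convolution" module -> ReLU -> second "convolution" module.
    f = relu(x @ w1) @ w2
    # Skip connection X + F(X), then ReLU on the way to the next module.
    return relu(x + f)

rng = np.random.default_rng(1)
x = rng.random((4, 16))
w1 = rng.normal(scale=0.1, size=(16, 16))
w2 = rng.normal(scale=0.1, size=(16, 16))
y = residual_block(x, w1, w2)
assert y.shape == x.shape  # the skip connection requires matching shapes
```

The skip connection is what lets errors flow past the two inner layers during training, which is the usual motivation for using ResNet blocks inside the discriminator.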
In the embodiments of this specification, the generation model G is used to output the style-converted target picture G(z) (a virtual picture) after receiving the input initial picture z (a real picture). The network structure of the generation model G can be as shown in Fig. 4. With reference to Fig. 4, after the initial picture z is input into the generation model G, the model successively applies mapping and reconstruction (project and reshape) and the convolution operations (CONV) CONV1, CONV2, CONV3, and CONV4, and finally outputs the target picture G(z). Of course, the network structure of the generation model G can also be a structure other than that shown in Fig. 4; the embodiments of this specification do not limit this.
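The size-preserving behavior the generation model needs (the target picture must match the initial picture's dimensions) can be illustrated with "same"-padded convolutions. This is a hypothetical sketch: the real CONV1..CONV4 layers and the project-and-reshape stage are not specified in detail here, so a stack of 3x3 same-padding convolutions with ReLU stands in for them.

```python
import numpy as np

def conv2d_same(img, kernel):
    # "Same"-padded 2D convolution: output size equals input size.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def generator(img, kernels):
    # Hypothetical generator body: a stack of same-size convolutions
    # (standing in for CONV1..CONV4) with ReLU between them.
    out = img
    for k in kernels:
        out = np.maximum(conv2d_same(out, k), 0.0)
    return out

rng = np.random.default_rng(2)
img = rng.random((16, 16))
kernels = [rng.normal(scale=0.1, size=(3, 3)) for _ in range(4)]
styled = generator(img, kernels)
assert styled.shape == img.shape  # target picture keeps the initial size
```

Because every layer preserves spatial size, the output G(z) automatically satisfies the claim that the target picture and the initial picture have identical dimensions.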
Step 106: take the picture output by the generation model as the processed target picture with the preset style.
Here, the generative adversarial network is trained on sample pictures of the same size as the initial picture, and the target picture has the same size as the initial picture.
In the embodiments of this specification, the generation model in the generative adversarial network is used to convert the style of the input initial picture while keeping the core content of the input initial picture unchanged.
The preset style can be an arbitrary style. For example, if the initial picture to be processed is an initial advertising-creative picture, the preset style can be an advertising-promotion style; for another example, if the picture to be processed is a photo of a landscape shot in summer, the preset style can be a winter style, and so on.
In the embodiments of this specification, the goal of the generation model G in the generative adversarial network is to generate target pictures so lifelike that they deceive the discrimination model D, making it difficult for D to judge whether they are real or fake. The goal of the discrimination model D, in turn, is to avoid being deceived by the generation model G. Therefore, when the discrimination model D and the generation model G reach a Nash equilibrium, the pictures output by the generation model G are lifelike enough that G has successfully deceived D; at this point the picture output by the generation model G can be taken as the target picture with the preset style corresponding to the initial picture.
In the embodiments of this specification, because the target picture with the preset style is obtained through the generation model of a trained generative adversarial network into which the initial picture to be processed is input, rather than by processing the initial picture with simple rules, the quality of the resulting target picture can be improved.
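The three steps above (obtain the initial picture, feed it to the trained generation model, take the output as the target picture) can be sketched as a tiny pipeline. The generation model here is a hypothetical size-preserving stand-in; the explicit check mirrors the claim that the target picture and the initial picture have identical sizes.

```python
import numpy as np

def process_picture(initial_picture, generation_model):
    # Step 102: the initial picture has been obtained by the caller.
    # Step 104: feed it to the trained generation model.
    target_picture = generation_model(initial_picture)
    # Step 106: the output is the target picture; sizes must match.
    assert target_picture.shape == initial_picture.shape
    return target_picture

# Hypothetical stand-in for a trained, size-preserving generation model.
style_stub = lambda img: np.clip(0.8 * img + 0.1, 0.0, 1.0)

initial = np.random.default_rng(5).random((32, 32))
target = process_picture(initial, style_stub)
assert target.shape == initial.shape
```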
As shown in Fig. 5, in another embodiment of this specification, before step 102 above, the image processing method based on a generative adversarial network may further include:
Step 108: train the generation model and the discrimination model of the generative adversarial network.
The idea of a generative adversarial network is continual adversarial training between the generation model and the discrimination model, so that the two models improve each other. From Fig. 3 and Fig. 4 it can be seen that the generation model G and the discrimination model D are two mutually independent models, but the error when training the generation model G comes from the discrimination model D. Therefore, the two are trained separately and alternately; for example, the following sub-steps 602 and 604 are performed alternately until the discrimination model and the generation model reach a Nash equilibrium.
With reference to Fig. 6, training the generation model and the discrimination model of the generative adversarial network may specifically include:
Sub-step 602: train the discrimination model of the generative adversarial network.
In detail, sub-step 602 may include: fixing the parameters of the generation model, inputting the sample pictures into the generation model, and obtaining the sample output pictures output by the generation model; then training the discrimination model using the sample pictures and the sample output pictures as positive and negative samples, respectively.
In the course of training the discrimination model, taking the sample pictures as positive samples means setting the label of the sample pictures to 1, and taking the sample output pictures as negative samples means setting the label of the sample output pictures to 0. In this way, the training of the discrimination model is supervised two-class model training.
It can be understood that in the first round of training the discrimination model, the generation model used to produce the sample output pictures can be an initially constructed, untrained generation model; in the N-th round of training the discrimination model (N being an integer greater than 1), the generation model used to produce the sample output pictures is the generation model after the training of sub-step 604 below.
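The supervised two-class training described here can be sketched with logistic regression standing in for the discrimination model: sample pictures carry label 1, generator outputs carry label 0, and gradient steps on binary cross-entropy lower the loss. The linear model, the learning rate, and the synthetic features are assumptions for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(p, label):
    # Binary cross-entropy for one example.
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

def train_discriminator_step(w, b, pictures, labels, lr=0.1):
    # One supervised step: positives (label 1) are sample pictures,
    # negatives (label 0) are sample output pictures from the generator.
    grad_w = np.zeros_like(w)
    grad_b = 0.0
    for x, y in zip(pictures, labels):
        p = sigmoid(x @ w + b)
        grad_w += (p - y) * x      # gradient of binary cross-entropy
        grad_b += (p - y)
    n = len(pictures)
    return w - lr * grad_w / n, b - lr * grad_b / n

rng = np.random.default_rng(4)
real = [rng.random(16) + 1.0 for _ in range(8)]   # stand-ins for sample pictures
fake = [rng.random(16) - 1.0 for _ in range(8)]   # stand-ins for generator output
pictures = real + fake
labels = [1.0] * 8 + [0.0] * 8
w, b = np.zeros(16), 0.0
loss_before = np.mean([bce(sigmoid(x @ w + b), y) for x, y in zip(pictures, labels)])
for _ in range(50):
    w, b = train_discriminator_step(w, b, pictures, labels)
loss_after = np.mean([bce(sigmoid(x @ w + b), y) for x, y in zip(pictures, labels)])
assert loss_after < loss_before  # supervised training improves the classifier
```

In the real embodiment the generator's parameters stay fixed during this sub-step; here that is implicit because the fake features are precomputed.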
Sub-step 604: train the generation model of the generative adversarial network.
In detail, sub-step 604 may include: fixing the parameters of the discrimination model, inputting the sample pictures into the generation model, and obtaining the sample output pictures output by the generation model; then inputting the sample output pictures into the discrimination model as positive samples and obtaining the discrimination result of the discrimination model, the discrimination result including the probability of judging a sample output picture to be a sample picture.
Sub-step 606: judge whether the discrimination model and the generation model have reached a Nash equilibrium; if so, execute step 110, otherwise execute sub-step 610.
In the embodiments of this specification, one way of judging whether the discrimination model and the generation model have reached a Nash equilibrium is to judge whether the discrimination result of the discrimination model converges to 0.5; that is, when the discrimination model can no longer tell whether a sample output picture produced by the generation model is a real sample picture or a virtual picture generated by the generation model, the discrimination model and the generation model can be considered to have reached a Nash equilibrium.
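The convergence test just described reduces to checking whether the discrimination model's verdicts on generator output cluster around 0.5. A minimal sketch, with a hypothetical tolerance:

```python
import numpy as np

def reached_nash_equilibrium(d_outputs, tol=0.05):
    # Stop when the discriminator's mean verdict on generator output has
    # converged to 0.5: it can no longer tell sample pictures from
    # generated ones, so the two models are in equilibrium.
    return abs(float(np.mean(d_outputs)) - 0.5) < tol

assert not reached_nash_equilibrium([0.9, 0.95, 0.88])  # D still wins: keep training
assert reached_nash_equilibrium([0.49, 0.52, 0.50])     # D is fooled: save the model
```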
Sub-step 610: optimize the generation model.
The target of optimizing the generation model includes: compared with inputting the sample output pictures of the generation model before optimization into the discrimination model, when the sample output pictures of the optimized generation model are input into the discrimination model, the output result of the discrimination model increases. That is, compared with the sample output pictures of the generation model before optimization, when the sample output pictures of the generation model after optimization are input into the discrimination model, the output result of the discrimination model is closer to 0.5.
Sub-step 612: optimize the discrimination model.
The optimization target of the discrimination model includes: compared with inputting the sample pictures into the discrimination model before optimization, when the sample pictures are input into the discrimination model after optimization, the output result of the discrimination model increases; and, compared with inputting the sample output pictures of the generation model into the discrimination model before optimization, when the sample output pictures of the generation model are input into the discrimination model after optimization, the output result of the discrimination model decreases. That is, for the sample pictures, the output result of the discrimination model after optimization is closer to 0.5 than before optimization; for the sample output pictures of the generation model, the output result of the discrimination model after optimization is further from 0.5 than before optimization.
Step 110: save the generation model.
The saved generation model can be used in step 104 to generate the target picture corresponding to the initial picture.
The embodiments of this specification provide an image processing method based on a generative adversarial network. Through continual optimization training, a generation model that reaches a Nash equilibrium with the discrimination model is obtained, so that style conversion can be performed on the initial picture to be processed based on this generation model, thereby improving the quality of the converted picture.
Based on the same technical concept as the above image processing method based on a generative adversarial network, the embodiments of this specification also provide an advertising-picture processing method based on a generative adversarial network, to achieve the goal of converting an initial advertising-creative picture into an advertising picture with a distinct advertising-promotion style.
As shown in Fig. 7, the advertising-picture processing method based on a generative adversarial network provided by an embodiment of this specification may include:
Step 702: obtain an initial advertising picture to be processed.
The initial advertising picture to be processed may be an initial advertising-creative picture designed manually in advance based on UED.
Step 704: input the initial advertising picture into the generation model of the generative adversarial network, and obtain the advertising picture output by the generation model.
The generative adversarial network includes a discrimination model (Discriminative model, D) and a generation model (Generative model, G).
In the embodiments of this specification, the generation model G is used to output the style-converted target advertising picture G(z) (a virtual picture) after receiving the input initial advertising picture z (a real picture).
In the embodiments of this specification, the discrimination model D is used to discriminate whether the input picture is a real initial advertising picture or a virtual target advertising picture generated by the generation model; its output is a probability. Specifically, when the output of the discrimination model is 1, the input is a real initial advertising picture; when the output of the discrimination model is 0, the input is a virtual target advertising picture.
Step 706: take the advertising picture output by the generation model as the processed target advertising picture with the advertising style.
Here, the generative adversarial network is trained on sample advertising pictures of the same size as the initial advertising picture, and the target advertising picture has the same size as the initial advertising picture.
In the embodiments of this specification, the generation model in the generative adversarial network is used to convert the style of the input initial advertising picture while keeping the core content of the input initial advertising picture unchanged.
In the embodiments of this specification, the goal of the generation model G in the generative adversarial network is to generate target advertising pictures so lifelike that they deceive the discrimination model D, making it difficult for D to judge whether they are real or fake. The goal of the discrimination model D, in turn, is to avoid being deceived by the generation model G. Therefore, when the discrimination model D and the generation model G reach a Nash equilibrium, the pictures output by the generation model G are lifelike enough that G has successfully deceived D; at this point the picture output by the generation model G can be taken as the target advertising picture with the preset style corresponding to the initial advertising picture.
In the embodiments of this specification, because the target advertising picture with the advertising style is obtained through the generation model of a trained generative adversarial network into which the initial advertising picture to be processed is input, rather than by processing the initial advertising picture with simple rules, the quality of the resulting target advertising picture can be improved, and problems such as uneven color and confused patterns do not occur. Moreover, experiments show that after advertising pictures processed by the method provided by the embodiments of this specification are placed, the click conversion rate improves by 30% or more.
Optionally, before step 702, the method shown in Fig. 7 may further include:
training the generation model and the discrimination model of the generative adversarial network.
The specific training process differs from the training process described in the embodiment shown in Fig. 5 above only in that the sample pictures used are initial advertising pictures; the rest of the process is similar, so the specific training process can refer to the description above and is not repeated here.
The above describes the method embodiments provided by this specification; the electronic device provided by this specification is introduced below.
Fig. 8 is a schematic structural diagram of an electronic device provided by an embodiment of this specification. Referring to Fig. 8, at the hardware level, the electronic device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include volatile memory, such as high-speed random access memory (RAM), and may also include non-volatile memory, for example at least one disk storage. Of course, the electronic device may also include hardware required by other services.
The processor, the network interface, and the memory may be interconnected by the internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one double-headed arrow is used in Fig. 8, but this does not mean there is only one bus or one type of bus.
Memory, for storing programs. Specifically, a program may include program code, and the program code includes computer operation instructions. The memory may include volatile and non-volatile memory, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the memory and runs it, forming the picture processing apparatus based on a generative adversarial network at the logical level. The processor executes the program stored in the memory and is specifically configured to perform the following operations:
obtaining an initial picture to be processed;
inputting the initial picture into the generative model in the generative adversarial network, and obtaining the picture output by the generative model; and
taking the picture output by the generative model as the processed target picture with the preset style, wherein the generative adversarial network is trained on sample pictures having the same size as the initial picture, and the target picture has the same size as the initial picture.
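The size constraint stated above (the target picture has the same size as the initial picture) is what a fully convolutional generator with "same" padding provides: every layer preserves spatial dimensions, so the output automatically matches the input. A toy sketch in plain Python, with a single 3x3 same-padded filter standing in for the generator (the real model and its learned weights are not disclosed by this embodiment):

```python
def same_pad_conv(image, kernel):
    """Apply a 3x3 kernel with zero 'same' padding, preserving height and width."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:  # zero padding outside the border
                        acc += image[ii][jj] * kernel[di + 1][dj + 1]
            out[i][j] = acc
    return out

initial_picture = [[float(i + j) for j in range(6)] for i in range(4)]  # a 4x6 "picture"
blur = [[1 / 9.0] * 3 for _ in range(3)]  # stand-in for a learned style filter
target_picture = same_pad_conv(initial_picture, blur)

# The "target picture" has exactly the size of the "initial picture".
assert len(target_picture) == 4 and len(target_picture[0]) == 6
```

This is also why training requires sample pictures of the same size as the initial picture: a convolutional generator trained at one resolution transforms pictures of that resolution without cropping or rescaling.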
Alternatively, the processor executes the program stored in the memory and is specifically configured to perform the following operations:
obtaining an initial advertisement picture to be processed;
inputting the initial advertisement picture into the generative model in the generative adversarial network, and obtaining the advertisement picture output by the generative model; and
taking the advertisement picture output by the generative model as the processed target advertisement picture with the advertisement style, wherein the generative adversarial network is trained on sample advertisement pictures having the same size as the initial advertisement picture, and the target advertisement picture has the same size as the initial advertisement picture.
The picture processing method based on a generative adversarial network disclosed in the embodiments shown in Fig. 1 or Fig. 5 of this specification may be applied in, or implemented by, a processor. The processor may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and can implement or execute the methods, steps, and logical block diagrams disclosed in one or more embodiments of this specification. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in one or more embodiments of this specification may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device can also execute the picture processing method based on a generative adversarial network of Fig. 1 or Fig. 5, which is not repeated here.
Of course, in addition to the software implementation, the electronic device of this specification does not exclude other implementations, such as logic devices or a combination of software and hardware. That is, the execution subject of the following processing flow is not limited to logic units, and may also be hardware or logic devices.
An embodiment of this specification further provides a computer-readable storage medium storing one or more programs. The one or more programs include instructions that, when executed by a portable electronic device including multiple application programs, enable the portable electronic device to execute the method of the embodiment shown in Fig. 1, and specifically to perform the following operations:
obtaining an initial picture to be processed;
inputting the initial picture into the generative model in the generative adversarial network, and obtaining the picture output by the generative model; and
taking the picture output by the generative model as the processed target picture with the preset style, wherein the generative adversarial network is trained on sample pictures having the same size as the initial picture, and the target picture has the same size as the initial picture.
An embodiment of this specification further provides a computer-readable storage medium storing one or more programs. The one or more programs include instructions that, when executed by a portable electronic device including multiple application programs, enable the portable electronic device to execute the method of the embodiment shown in Fig. 7, and specifically to perform the following operations:
obtaining an initial advertisement picture to be processed;
inputting the initial advertisement picture into the generative model in the generative adversarial network, and obtaining the advertisement picture output by the generative model; and
taking the advertisement picture output by the generative model as the processed target advertisement picture with the advertisement style, wherein the generative adversarial network is trained on sample advertisement pictures having the same size as the initial advertisement picture, and the target advertisement picture has the same size as the initial advertisement picture.
The picture processing apparatus based on a generative adversarial network provided by this specification is described below.
As shown in Fig. 9, an embodiment of this specification provides a picture processing apparatus based on a generative adversarial network. In a software implementation, the picture processing apparatus 900 based on a generative adversarial network may include: a first obtaining module 901, a first input module 902, and a first determining module 903.
The first obtaining module 901 is configured to obtain an initial picture to be processed.
The first input module 902 is configured to input the initial picture into the generative model in the generative adversarial network and to obtain the picture output by the generative model.
The generative adversarial network includes a discriminative model D and a generative model G. In the embodiments of this specification, the generative model G is configured to receive an input initial picture z (a real picture) and to output a style-converted target picture G(z) (a generated picture); the discriminative model D is configured to determine whether an input picture is a real initial picture or a generated target picture produced by the generative model, and its output is a probability. The discriminative model may be constructed based on a convolutional neural network and a residual network model.
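A discriminative model built on "a convolutional neural network and a residual network model" typically stacks residual blocks (output = input + learned transform) and ends with a probability head. The following is a toy, dependency-free sketch of that structure only; the actual layer sizes, convolutions, and weights are not specified by this embodiment, and the dense layers here merely stand in for convolutional features:

```python
import math

def relu(v):
    return [max(0.0, x) for x in v]

def linear(v, weight, bias):
    """Dense layer: one output per row of `weight`."""
    return [sum(wi * xi for wi, xi in zip(row, v)) + b for row, b in zip(weight, bias)]

def residual_block(v, weight, bias):
    """y = x + F(x): the skip connection characteristic of a residual network."""
    return [x + f for x, f in zip(v, relu(linear(v, weight, bias)))]

def discriminator(features, blocks, head_w, head_b):
    """Stack residual blocks, then squash to a probability with a sigmoid head."""
    v = features
    for weight, bias in blocks:
        v = residual_block(v, weight, bias)
    logit = sum(w * x for w, x in zip(head_w, v)) + head_b
    return 1.0 / (1.0 + math.exp(-logit))

# A 4-dimensional feature vector standing in for flattened picture features.
x = [0.2, -0.5, 1.0, 0.3]
identity_block = ([[0.1] * 4 for _ in range(4)], [0.0] * 4)
p = discriminator(x, [identity_block, identity_block], [0.5] * 4, 0.0)
assert 0.0 < p < 1.0  # the output is a probability, as the text states
```

The skip connection lets gradients flow around each block, which is why residual structures are a common choice for deeper discriminators.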
The first determining module 903 is configured to take the picture output by the generative model as the processed target picture with the preset style.
Wherein, the generative adversarial network is trained on sample pictures having the same size as the initial picture, and the target picture has the same size as the initial picture.
In the embodiments of this specification, the generative model in the generative adversarial network is used to convert the style of the input initial picture while keeping the core content of the input initial picture unchanged.
In the embodiments of this specification, the goal of the generative model G in the generative adversarial network is to generate target pictures realistic enough to pass for genuine, so as to fool the discriminative model D, making it difficult for D to judge whether a picture is real or generated. Conversely, the goal of the discriminative model D is to avoid being fooled by the generative model G. Therefore, when D and G reach a Nash equilibrium, the pictures output by G are sufficiently realistic that G has successfully fooled D, and the picture output by G can then be taken as the target picture with the preset style corresponding to the initial picture.
In the embodiments of this specification, the initial picture to be processed is input into the trained generative model of the generative adversarial network to obtain a target picture with the preset style, instead of processing the initial picture with simple rules; the quality of the resulting target picture is therefore improved.
As shown in Fig. 10, another embodiment of this specification provides a picture processing apparatus 900 based on a generative adversarial network that may further include:
A training module 904, configured to train the generative model and the discriminative model in the generative adversarial network before the initial picture to be processed is obtained.
The idea of a generative adversarial network is to have the generative model and the discriminative model continuously train against each other, so that the two models improve each other. The generative model G and the discriminative model D in the generative adversarial network are two mutually independent models, but the training error of the generative model G comes from the discriminative model D. The two are therefore trained separately and alternately; for example, the following first training submodule 111 and second training submodule 112 are triggered separately and alternately until the discriminative model and the generative model reach a Nash equilibrium.
In detail, as shown in Fig. 11, the training module 904 includes:
A first training submodule 111, configured to train the discriminative model in the generative adversarial network.
In detail, the first training submodule 111 may be configured to input the sample pictures into the generative model to obtain the sample output pictures output by the generative model, and to train the discriminative model with the sample pictures and the sample output pictures as positive and negative samples respectively.
A second training submodule 112, configured to train the generative model in the generative adversarial network.
In detail, the second training submodule 112 may be configured to input the sample pictures into the generative model to obtain the sample output pictures output by the generative model, to input the sample output pictures into the discriminative model as positive samples, and to obtain the discrimination result of the discriminative model, the discrimination result including the probability that a sample output picture is judged to be a sample picture.
A judging submodule 113, configured to judge whether the discriminative model and the generative model have reached a Nash equilibrium; if so, the following saving module 905 is triggered, otherwise the following first optimization submodule 114 is triggered.
A first optimization submodule 114, configured to optimize the generative model after the discrimination result of the discriminative model is obtained, when the discriminative model and the generative model have not reached a Nash equilibrium.
Wherein, the optimization target of the generative model includes: compared with inputting the sample output pictures of the generative model before optimization into the discriminative model, when the sample output pictures of the generative model after optimization are input into the discriminative model, the output result of the discriminative model increases.
A second optimization submodule 115, configured to optimize the discriminative model.
Wherein, the optimization target of the discriminative model includes: compared with inputting the sample pictures into the discriminative model before optimization, when the sample pictures are input into the discriminative model after optimization, the output result of the discriminative model increases; and, compared with inputting the sample output pictures of the generative model into the discriminative model before optimization, when the sample output pictures of the generative model are input into the discriminative model after optimization, the output result of the discriminative model decreases.
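The alternating optimization targets above (raise D's output on real samples and lower it on generated samples, then raise D's output on generated samples by updating G) can be demonstrated on a one-dimensional toy problem. Here the "picture" is a single number, G is a constant it can shift, and D is a logistic classifier; all values, names, and learning rates are illustrative and not the setup of this specification:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

real = 1.0       # the "sample picture"
theta = -1.0     # G's output, the "sample output picture"
w, b = 0.0, 0.0  # D's parameters: D(x) = sigmoid(w * x + b)
lr = 0.05

for _ in range(2000):
    # --- discriminative step: push D(real) up and D(theta) down ---
    s_real = sigmoid(w * real + b)
    s_fake = sigmoid(w * theta + b)
    grad_w = (s_real - 1.0) * real + s_fake * theta  # d(BCE)/dw
    grad_b = (s_real - 1.0) + s_fake
    w -= lr * grad_w
    b -= lr * grad_b
    # --- generative step: move theta so that D(theta) increases ---
    s_fake = sigmoid(w * theta + b)
    grad_theta = (s_fake - 1.0) * w  # d(-log D(theta))/d(theta)
    theta -= lr * grad_theta

# After alternating training, G's output has moved toward the real data,
# i.e. the generated "picture" is harder for D to tell apart from the real one.
assert abs(theta - real) < 1.0  # started at distance 2.0
```

When neither gradient can improve its own objective any further, the two players are at the Nash equilibrium the judging submodule 113 tests for, and the generative model is then saved.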
A saving module 905, configured to save the generative model.
The saved generative model can be used by the first input module 902 to generate the target picture corresponding to an initial picture.
This embodiment of this specification provides a picture processing apparatus based on a generative adversarial network. Through continuous optimization training, a generative model that reaches a Nash equilibrium with the discriminative model is obtained, so that style conversion can be performed on the initial picture to be processed based on this generative model, thereby improving the quality of the converted picture.
It should be noted that the picture processing apparatus 900 based on a generative adversarial network can implement the method of the embodiment of Fig. 1; for details, refer to the picture processing method based on a generative adversarial network of the embodiment shown in Fig. 1, which is not repeated here.
As shown in Fig. 12, another embodiment of this specification further provides an advertisement picture processing apparatus 120 based on a generative adversarial network, including: a second obtaining module 121, a second input module 122, and a second determining module 123.
The second obtaining module 121 is configured to obtain an initial advertisement picture to be processed.
The second input module 122 is configured to input the initial advertisement picture into the generative model in the generative adversarial network and to obtain the advertisement picture output by the generative model.
The second determining module 123 is configured to take the advertisement picture output by the generative model as the processed target advertisement picture with the advertisement style, wherein the generative adversarial network is trained on sample advertisement pictures having the same size as the initial advertisement picture, and the target advertisement picture has the same size as the initial advertisement picture.
In the embodiments of this specification, the goal of the generative model G in the generative adversarial network is to generate target advertisement pictures realistic enough to pass for genuine, so as to fool the discriminative model D, making it difficult for D to judge whether a picture is real or generated. Conversely, the goal of the discriminative model D is to avoid being fooled by the generative model G. Therefore, when D and G reach a Nash equilibrium, the pictures output by G are sufficiently realistic that G has successfully fooled D, and the picture output by G can then be taken as the target advertisement picture with the preset style corresponding to the initial advertisement picture.
In the embodiments of this specification, the initial advertisement picture to be processed is input into the trained generative model of the generative adversarial network to obtain a target picture with the advertisement style, instead of processing the initial advertisement picture with simple rules. The quality of the resulting target advertisement picture is therefore improved, avoiding problems such as uneven color and chaotic patterns. Experiments show that after advertisement pictures processed by the apparatus provided in the embodiments of this specification are put into delivery, the click conversion rate improves by more than 30%.
Optionally, the apparatus shown in Fig. 12 may further include a training module configured to train the generative model and the discriminative model in the generative adversarial network.
The specific training process differs from that described in the embodiment shown in Fig. 5 only in that the sample pictures used are initial advertisement pictures; the rest of the process is similar. The specific training process can therefore be found above and is not repeated here.
It should be noted that the advertisement picture processing apparatus 120 based on a generative adversarial network can implement the method of the embodiment of Fig. 7; for details, refer to the advertisement picture processing method based on a generative adversarial network of the embodiment shown in Fig. 7, which is not repeated here.
Specific embodiments of this specification have been described above; other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired result. In addition, the processes depicted in the drawings do not necessarily require the specific order shown, or a sequential order, to achieve the desired result. In some implementations, multitasking and parallel processing are also possible or may be advantageous.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on what differs from the other embodiments. In particular, since the apparatus embodiments are substantially similar to the method embodiments, their description is relatively simple, and reference may be made to the description of the method embodiments for relevant parts.
In short, the above is merely a preferred embodiment of this specification and is not intended to limit its protection scope. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of one or more embodiments of this specification shall be included within the protection scope of one or more embodiments of this specification.
The system, apparatus, module, or unit illustrated in the above embodiments may be implemented by a computer chip or entity, or by a product with certain functions. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an e-mail device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on what differs from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively simple, and reference may be made to the description of the method embodiments for relevant parts.

Claims (20)

1. A picture processing method based on a generative adversarial network, comprising:
obtaining an initial picture to be processed;
inputting the initial picture into a generative model in the generative adversarial network, and obtaining a picture output by the generative model; and
taking the picture output by the generative model as a processed target picture with a preset style, wherein the generative adversarial network is trained on sample pictures having the same size as the initial picture, and the target picture has the same size as the initial picture.
2. The method according to claim 1, further comprising, before obtaining the initial picture to be processed:
training the generative model and a discriminative model in the generative adversarial network.
3. The method according to claim 2,
wherein training the discriminative model in the generative adversarial network comprises:
inputting the sample pictures into the generative model to obtain sample output pictures output by the generative model; and
training the discriminative model with the sample pictures and the sample output pictures as positive and negative samples respectively.
4. The method according to claim 2,
wherein training the generative model in the generative adversarial network comprises:
inputting the sample pictures into the generative model to obtain sample output pictures output by the generative model; and
inputting the sample output pictures into the discriminative model as positive samples, and obtaining a discrimination result of the discriminative model, the discrimination result comprising a probability that a sample output picture is judged to be a sample picture.
5. The method according to claim 4, further comprising, after obtaining the discrimination result of the discriminative model:
saving the generative model when the discriminative model and the generative model reach a Nash equilibrium.
6. The method according to claim 4, further comprising, after obtaining the discrimination result of the discriminative model:
optimizing the generative model when the discriminative model and the generative model have not reached a Nash equilibrium, wherein the optimization target of the generative model comprises: compared with inputting the sample output pictures of the generative model before optimization into the discriminative model, when the sample output pictures of the generative model after optimization are input into the discriminative model, the output result of the discriminative model increases.
7. The method according to claim 6, further comprising, after optimizing the generative model:
optimizing the discriminative model, wherein the optimization target of the discriminative model comprises: compared with inputting the sample pictures into the discriminative model before optimization, when the sample pictures are input into the discriminative model after optimization, the output result of the discriminative model increases; and, compared with inputting the sample output pictures of the generative model into the discriminative model before optimization, when the sample output pictures of the generative model are input into the discriminative model after optimization, the output result of the discriminative model decreases.
8. The method according to claim 2, wherein
the discriminative model is constructed based on a convolutional neural network and a residual network model.
9. An advertisement picture processing method based on a generative adversarial network, comprising:
obtaining an initial advertisement picture to be processed;
inputting the initial advertisement picture into a generative model in the generative adversarial network, and obtaining an advertisement picture output by the generative model; and
taking the advertisement picture output by the generative model as a processed target advertisement picture with an advertisement style, wherein the generative adversarial network is trained on sample advertisement pictures having the same size as the initial advertisement picture, and the target advertisement picture has the same size as the initial advertisement picture.
10. A picture processing apparatus based on a generative adversarial network, comprising:
a first obtaining module, configured to obtain an initial picture to be processed;
a first input module, configured to input the initial picture into a generative model in the generative adversarial network, and to obtain a picture output by the generative model; and
a first determining module, configured to take the picture output by the generative model as a processed target picture with a preset style, wherein the generative adversarial network is trained on sample pictures having the same size as the initial picture, and the target picture has the same size as the initial picture.
11. The apparatus according to claim 10, further comprising:
a training module, configured to train the generative model and a discriminative model in the generative adversarial network before the initial picture to be processed is obtained.
12. The apparatus according to claim 11, wherein the training module comprises:
a first training submodule, configured to input the sample pictures into the generative model to obtain sample output pictures output by the generative model, and to train the discriminative model with the sample pictures and the sample output pictures as positive and negative samples respectively.
13. The apparatus according to claim 11, wherein the training module comprises:
a second training submodule, configured to input the sample pictures into the generative model to obtain sample output pictures output by the generative model, to input the sample output pictures into the discriminative model as positive samples, and to obtain a discrimination result of the discriminative model, the discrimination result comprising a probability that a sample output picture is judged to be a sample picture.
14. The apparatus according to claim 13, further comprising:
a saving module, configured to save the generative model after the discrimination result of the discriminative model is obtained, when the discriminative model and the generative model reach a Nash equilibrium.
15. The apparatus according to claim 13, wherein the training module further comprises:
a first optimization submodule, configured to optimize the generative model after the discrimination result of the discriminative model is obtained, when the discriminative model and the generative model have not reached a Nash equilibrium, wherein the optimization target of the generative model comprises: compared with inputting the sample output pictures of the generative model before optimization into the discriminative model, when the sample output pictures of the generative model after optimization are input into the discriminative model, the output result of the discriminative model increases.
16. The apparatus according to claim 14, wherein the training module further comprises:
a second optimization submodule, configured to optimize the discriminative model, wherein the optimization target of the discriminative model comprises: compared with inputting the sample pictures into the discriminative model before optimization, when the sample pictures are input into the discriminative model after optimization, the output result of the discriminative model increases; and, compared with inputting the sample output pictures of the generative model into the discriminative model before optimization, when the sample output pictures of the generative model are input into the discriminative model after optimization, the output result of the discriminative model decreases.
17. The apparatus according to any one of claims 11-16, wherein
the discriminative model is constructed based on a convolutional neural network and a residual network model.
18. An advertisement picture processing apparatus based on a generative adversarial network, comprising:
a second obtaining module, configured to obtain an initial advertisement picture to be processed;
a second input module, configured to input the initial advertisement picture into a generative model in the generative adversarial network, and to obtain an advertisement picture output by the generative model; and
a second determining module, configured to take the advertisement picture output by the generative model as a processed target advertisement picture with an advertisement style, wherein the generative adversarial network is trained on sample advertisement pictures having the same size as the initial advertisement picture, and the target advertisement picture has the same size as the initial advertisement picture.
19. An electronic device, comprising:
a processor; and
a memory arranged to store computer-executable instructions, wherein the executable instructions, when executed, cause the processor to perform the following operations:
acquiring an initial picture to be processed;
inputting the initial picture into the generation model in a generative adversarial network, and obtaining the picture output by the generation model;
taking the picture output by the generation model as a processed target picture having a preset style, wherein the generative adversarial network is trained based on sample pictures having the same size as the initial picture, and the target picture has the same size as the initial picture.
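Claim 19 ties the target picture's size to the initial picture's size. A generator built only from 'same'-padded convolutions satisfies this constraint by construction; the sketch below is a toy single-channel illustration (the 3x3 kernel values and two-layer depth are arbitrary assumptions, not the patent's model):

```python
def conv2d_same(img, kernel):
    # zero-padded 2-D convolution: output height/width equal input height/width
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = 0.0
            for di in range(kh):
                for dj in range(kw):
                    y, x = i + di - ph, j + dj - pw
                    if 0 <= y < h and 0 <= x < w:
                        s += kernel[di][dj] * img[y][x]
            out[i][j] = s
    return out

def generator(img):
    # two stacked same-padded 3x3 convolutions: however many such layers are
    # stacked, the output keeps the input picture's height and width
    k = [[0.0, 0.1, 0.0], [0.1, 0.6, 0.1], [0.0, 0.1, 0.0]]
    return conv2d_same(conv2d_same(img, k), k)

picture = [[float(i + j) for j in range(5)] for i in range(4)]  # 4x5 "picture"
styled = generator(picture)
```

`styled` is again 4x5, mirroring the claim's requirement that the target picture and the initial picture have the same size.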
20. An electronic device, comprising:
a processor; and
a memory arranged to store computer-executable instructions, wherein the executable instructions, when executed, cause the processor to perform the following operations:
acquiring an initial advertisement picture to be processed;
inputting the initial advertisement picture into the generation model in a generative adversarial network, and obtaining the advertisement picture output by the generation model;
taking the advertisement picture output by the generation model as a processed target advertisement picture having an advertisement style, wherein the generative adversarial network is trained based on sample advertisement pictures having the same size as the initial advertisement picture, and the target advertisement picture has the same size as the initial advertisement picture.
CN201910675265.4A 2019-07-25 2019-07-25 Image processing method and device based on generative adversarial network, and electronic equipment Active CN110443746B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910675265.4A CN110443746B (en) 2019-07-25 2019-07-25 Image processing method and device based on generative adversarial network, and electronic equipment


Publications (2)

Publication Number Publication Date
CN110443746A true CN110443746A (en) 2019-11-12
CN110443746B CN110443746B (en) 2023-07-14

Family

ID=68431470

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910675265.4A Active CN110443746B (en) 2019-07-25 2019-07-25 Picture processing method and device based on generation countermeasure network and electronic equipment

Country Status (1)

Country Link
CN (1) CN110443746B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107464210A (en) * 2017-07-06 2017-12-12 浙江工业大学 A kind of image Style Transfer method based on production confrontation network
CN108961174A (en) * 2018-05-24 2018-12-07 北京飞搜科技有限公司 A kind of image repair method, device and electronic equipment
WO2019015466A1 (en) * 2017-07-17 2019-01-24 广州广电运通金融电子股份有限公司 Method and apparatus for verifying person and certificate
CN109671018A (en) * 2018-12-12 2019-04-23 华东交通大学 A kind of image conversion method and system based on production confrontation network and ResNets technology


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
白海娟 et al.: "Font style transfer method based on generative adversarial networks", Journal of Dalian Minzu University *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112401928A (en) * 2020-11-17 2021-02-26 深圳度影医疗科技有限公司 Acquisition method of pelvic floor levator ani section, storage medium and terminal device
CN112401928B (en) * 2020-11-17 2023-11-28 深圳度影医疗科技有限公司 Method for acquiring pelvic floor levator ani muscle tangential plane, storage medium and terminal equipment

Also Published As

Publication number Publication date
CN110443746B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
Tranheden et al. DACS: Domain adaptation via cross-domain mixed sampling
Qiu et al. Learning spatio-temporal representation with pseudo-3d residual networks
CN107341679A Method and device for obtaining a user portrait
CN110378350A Method, device and system for text recognition
TWI750651B Method, device and electronic equipment for protecting privacy information based on adversarial samples
CN111274981B Target detection network construction method and device and target detection method
CN109688428B Video comment generation method and device
CN108563680A Resource recommendation method and device
CN109635953A Feature extraction method, device and electronic equipment
CN110675183B Marketing object determination method, marketing promotion method and related devices
CN109271587A Page generation method and device
CN109064217A Method, device and electronic equipment for determining an identity-verification strategy based on user level
CN109598414A Risk assessment model training method, risk assessment method, device and electronic equipment
CN110175201A Business data processing method, system, device and electronic equipment
CN112132606B Dynamic price adjustment method and system based on a graph attention algorithm
CN109857984A Regression method and device for a boiler load rate-efficiency curve
CN109726755A Picture annotation method, device and electronic equipment
CN110032931A Generative adversarial network training and reticulate pattern removal method, device and electronic equipment
CN114694005A Target detection model training method and device, and target detection method and device
CN110443746A Image processing method and device based on generative adversarial network, and electronic equipment
CN109255073A Personalized recommendation method, device and electronic equipment
CN107770603A Video image processing method and device
CN110046340A Training method and device for a text classification model
CN109903073A Shopping guide method and device based on face recognition, and computer equipment
CN108932525A Behavior prediction method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200923

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, Cayman Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: Fourth Floor, P.O. Box 847, Capital Building, Grand Cayman, Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

Effective date of registration: 20200923

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, Cayman Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, Cayman Islands

Applicant before: Advanced innovation technology Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant