CN109949213A - Method and apparatus for generating image - Google Patents
Method and apparatus for generating image
- Publication number
- CN109949213A (application CN201910199011.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- model
- type information
- face organ
- initial pictures
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Embodiments of the disclosure disclose a method and apparatus for generating an image. One specific embodiment of the method includes: obtaining an initial image input by a user and image type information selected by the user from a predetermined set of image type information, where each piece of image type information in the set corresponds one-to-one to a pre-trained image generation model in a set of image generation models, and each image generation model is used to generate a target image of the image type characterized by the corresponding image type information; determining, from the set of image generation models, the image generation model corresponding to the obtained image type information; and inputting the initial image into the determined image generation model to generate the target image. This embodiment enriches the ways in which images can be generated and can produce different types of images according to different user needs.
Description
Technical field
Embodiments of the disclosure relate to the field of computer technology, and in particular to a method and apparatus for generating an image.
Background technique
In the prior art, because each user's needs differ, the images they like generally differ as well. However, some users cannot design images they are satisfied with by themselves. There is therefore a need in the prior art to perform type conversion on an initial image selected by a user.
Existing technical solutions typically only apply image processing such as beautification or filters to the initial image.
Summary of the invention
The present disclosure proposes a method and apparatus for generating an image.
In a first aspect, an embodiment of the disclosure provides a method for generating an image. The method includes: obtaining an initial image input by a user and image type information selected by the user from a predetermined set of image type information, where each piece of image type information in the set corresponds one-to-one to a pre-trained image generation model in a set of image generation models, and each image generation model is used to generate a target image of the image type characterized by the corresponding image type information; determining, from the set of image generation models, the image generation model corresponding to the obtained image type information; and inputting the initial image into the determined image generation model to generate the target image.
In some embodiments, for each piece of image type information in the set, the corresponding image generation model is trained as follows: obtain a training sample set, where each training sample includes a sample initial image and a corresponding sample target image, the sample target image being an image of the image type characterized by that image type information; then, using a machine learning algorithm, train the image generation model by taking the sample initial image included in each training sample as input and the corresponding sample target image as the desired output.
In some embodiments, the initial image is a face image, and the method further includes: determining the region size of at least one facial organ image region included in the initial image input by the user; and, for each of the at least one facial organ image region, in response to determining that the region size of the facial organ image region is greater than or equal to a size threshold predetermined for that region, performing deformation processing on the facial organ image located in that region.
In some embodiments, the method further includes: obtaining facial organ information selected by the user from a predetermined set of facial organ information, and deformation information selected by the user from a set of deformation information determined for the facial organ information set; determining a facial organ deformation model from a set of pre-trained facial organ deformation models according to the selected facial organ information and deformation information; and inputting the generated target image into the determined facial organ deformation model to obtain a deformed image of the initial image.
In some embodiments, the image generation model is a generative adversarial network.
In a second aspect, an embodiment of the disclosure provides an apparatus for generating an image. The apparatus includes: a first obtaining unit configured to obtain an initial image input by a user and image type information selected by the user from a predetermined set of image type information, where each piece of image type information in the set corresponds one-to-one to a pre-trained image generation model in a set of image generation models, and each image generation model is used to generate a target image of the image type characterized by the corresponding image type information; a first determination unit configured to determine, from the set of image generation models, the image generation model corresponding to the obtained image type information; and a generation unit configured to input the initial image into the determined image generation model to generate the target image.
In some embodiments, wherein for the picture type information in picture type information set, image type letter
It ceases corresponding image and generates that model is trained as follows to be obtained: obtaining training sample set, wherein training sample packet
Sample initial pictures, and sample object image corresponding with sample initial pictures are included, sample object image is the image type
The image of the image type of information representation;Using machine learning algorithm, the sample for including by the training sample in training sample set
This initial pictures is as input, using sample object image corresponding with the sample initial pictures of input as desired output, training
It obtains image and generates model.
In some embodiments, the initial image is a face image, and the apparatus further includes: a second determination unit configured to determine the region size of at least one facial organ image region included in the initial image input by the user; and a processing unit configured to, for each of the at least one facial organ image region, in response to determining that the region size of the facial organ image region is greater than or equal to a size threshold predetermined for that region, perform deformation processing on the facial organ image located in that region.
In some embodiments, the apparatus further includes: a second obtaining unit configured to obtain facial organ information selected by the user from a predetermined set of facial organ information, and deformation information selected by the user from a set of deformation information determined for the facial organ information set; a third determination unit configured to determine a facial organ deformation model from a set of pre-trained facial organ deformation models according to the selected facial organ information and deformation information; and an input unit configured to input the generated target image into the determined facial organ deformation model to obtain a deformed image of the initial image.
In some embodiments, the image generation model is a generative adversarial network.
In a third aspect, an embodiment of the disclosure provides an electronic device for generating an image, including: one or more processors; and a storage device on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method of any of the above embodiments of the method for generating an image.
In a fourth aspect, an embodiment of the disclosure provides a computer-readable medium for generating an image, on which a computer program is stored that, when executed by a processor, implements the method of any of the above embodiments of the method for generating an image.
The method and apparatus for generating an image provided by embodiments of the disclosure obtain an initial image input by a user and image type information selected by the user from a predetermined set of image type information, where each piece of image type information in the set corresponds one-to-one to a pre-trained image generation model in a set of image generation models and each image generation model is used to generate a target image of the image type characterized by the corresponding image type information; then determine, from the set of image generation models, the image generation model corresponding to the obtained image type information; and finally input the initial image into the determined image generation model to generate the target image. This enriches the ways in which images can be generated and makes it possible to produce different types of images according to different user needs.
Brief description of the drawings
Other features, objects, and advantages of the disclosure will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the disclosure may be applied;
Fig. 2 is a flowchart of one embodiment of the method for generating an image according to the disclosure;
Fig. 3 is a schematic diagram of an application scenario of the method for generating an image according to the disclosure;
Fig. 4 is a flowchart of another embodiment of the method for generating an image according to the disclosure;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for generating an image according to the disclosure;
Fig. 6 is a structural schematic diagram of a computer system adapted to implement an electronic device of an embodiment of the disclosure.
Detailed description of the embodiments
The disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the related invention and do not limit it. It should also be noted that, for convenience of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments in the disclosure and the features in the embodiments may be combined with each other. The disclosure is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which an embodiment of the method for generating an image or the apparatus for generating an image of the disclosure may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as image processing applications, web browser applications, shopping applications, search applications, instant messaging tools, email clients, and social platform software.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices with a display screen and image processing capability, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, and desktop computers. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above. They may be implemented as multiple pieces of software or software modules (for example, software or software modules for providing distributed services), or as a single piece of software or software module. No specific limitation is imposed here.
The server 105 may be a server providing various services, for example a background server that processes (for example, performs style transfer on) initial images sent by the terminal devices 101, 102, 103. The background server may perform processing such as style transfer on received data such as initial images, thereby generating a processing result (for example, the image obtained after style transfer is performed on the initial image).
It should be noted that the method for generating an image provided by embodiments of the disclosure may be executed by the server 105 or by the terminal devices 101, 102, 103. Correspondingly, the apparatus for generating an image may be disposed in the server 105 or in the terminal devices 101, 102, 103. Optionally, the method for generating an image provided by embodiments of the disclosure may also be executed by the server and a terminal device cooperating with each other, and the units included in the apparatus for generating an image may accordingly be disposed in the server and the terminal device respectively.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, software or software modules for providing distributed services), or as a single piece of software or software module. No specific limitation is imposed here.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely schematic. Any number of terminal devices, networks, and servers may be provided according to implementation needs. When the electronic device on which the method for generating an image runs does not need to transmit data with other electronic devices, the system architecture may include only the electronic device on which the method runs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for generating an image according to the disclosure is shown. The method for generating an image includes the following steps:
Step 201: obtain an initial image input by a user and image type information selected by the user from a predetermined set of image type information.
In the present embodiment, the executing body of the method for generating an image (for example, the server shown in Fig. 1) may obtain the initial image input by the user from another electronic device, or locally, through a wired or wireless connection. Similarly, the executing body may also obtain, from another electronic device or locally, the image type information selected by the user from the predetermined set of image type information.
Each piece of image type information in the set corresponds one-to-one to a pre-trained image generation model in the set of image generation models. An image generation model is used to generate a target image of the image type characterized by its corresponding image type information.
Here, the initial image may be an image on which style transfer is to be performed. For example, the initial image may be, but is not limited to, a face image, a human body image, a landscape image, an animal image, a plant image, and so on. Image type information may be used to characterize an image type. Illustratively, the image type information in the set may characterize, but is not limited to, any of the following image types: anime style, cartoon style, painting style, abstractionist style, British style, etc. It can be understood that style transfer is a technique for converting the type of an image. For example, the initial image may be a selfie image of the user; by performing style transfer on this image, a selfie image in anime style, cartoon style, painting style, abstractionist style, British style, and so on can be obtained.
In practice, the user may input the initial image to the terminal device they use by means such as shooting or downloading. Thus, when the executing body is a terminal device, it may obtain the user's initial image locally; when the executing body is a server, it may be communicatively connected to the terminal device used by the user and obtain the user's initial image from that terminal device.
As an example, the executing body may obtain the image type information selected by the user from the predetermined set as follows: the terminal device used by the user may present each piece of image type information in the set, and the executing body may then obtain the selected image type information by detecting the user's selection operation (for example, checking, clicking, long-pressing, or inputting image type information).
It can be understood that the executing body, or an electronic device communicatively connected to it, may store the image type information in the set in association with the image generation models in the model set, thereby establishing the correspondence between the two.
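The associated storage described above amounts to a lookup table from image type information to image generation models. A minimal sketch follows; the type labels and checkpoint paths are hypothetical, chosen only for illustration, and stand in for whatever model handles the patent's embodiments actually use.

```python
# Hypothetical registry associating each image type label with its
# pre-trained image generation model; all names here are illustrative.
MODEL_REGISTRY = {
    "anime_style": "models/anime_gan.ckpt",
    "cartoon_style": "models/cartoon_gan.ckpt",
    "painting_style": "models/painting_gan.ckpt",
}

def select_model(image_type_info: str) -> str:
    """Return the image generation model associated with the selected
    image type information (the lookup performed in steps 201-202)."""
    try:
        return MODEL_REGISTRY[image_type_info]
    except KeyError:
        raise ValueError(f"no image generation model for {image_type_info!r}")
```

Because the correspondence is one-to-one, a plain dictionary keyed on the type label is enough; an unknown label is rejected rather than silently mapped to a default model.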
In some optional implementations of the present embodiment, for each piece of image type information in the set, the corresponding image generation model may be trained by the executing body, or by an electronic device communicatively connected to it, as follows:
First, obtain a training sample set, where each training sample includes a sample initial image and a corresponding sample target image. The sample target image is an image of the image type characterized by the image type information.
Here, the sample initial image may be the image, before style transfer, used to train the image generation model corresponding to the image type information. The sample target image corresponding to a sample initial image may be the image obtained after performing, on that sample initial image, style transfer to the image type characterized by the image type information. For example, if the image type characterized by the image type information is "anime style", the sample target image corresponding to a sample initial image may be that sample initial image in anime style.
Second, using a machine learning algorithm, train the image generation model by taking the sample initial image included in each training sample as input and the corresponding sample target image as the desired output. In practice, the sample target image may be drawn by an artist who converts the image type of the initial image.
Specifically, the executing body may take the sample initial image included in a training sample as the input of an initial model (for example, a convolutional neural network) and obtain the actual output corresponding to that sample initial image. It then determines whether the initial model satisfies a predetermined training termination condition. If it does, the initial model satisfying the condition is determined to be the trained image generation model. If it does not, the parameters of the initial model are adjusted based on the actual output and the desired output using back propagation and gradient descent, and the adjusted model is used for the next round of training. The training termination condition may include, but is not limited to, at least one of the following: the training time exceeds a preset duration; the number of training iterations exceeds a preset number; the value obtained by inputting the actual output and the desired output into a predetermined loss function is less than a preset threshold.
It can be understood that this optional implementation can use a supervised training method to train one image generation model for each piece of image type information in the set.
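The iterate-until-termination loop described above can be illustrated on a toy one-parameter model. This is a sketch only: plain gradient descent on a least-squares objective stands in for the convolutional model and back propagation, while keeping the two termination conditions the text names (loss below a preset threshold, iteration count exceeding a preset maximum).

```python
import numpy as np

def train(x, y, lr=0.05, max_steps=1000, loss_threshold=1e-4):
    """Toy supervised training loop: fit desired outputs y ~ w*x by
    gradient descent, stopping when the loss falls below a preset
    threshold or the number of iterations exceeds a preset maximum."""
    w = 0.0
    loss = np.mean((w * x - y) ** 2)
    for _ in range(max_steps):
        if loss < loss_threshold:
            break  # termination: loss function value below threshold
        grad = np.mean(2.0 * (w * x - y) * x)  # d(loss)/dw
        w -= lr * grad                         # parameter adjustment
        loss = np.mean((w * x - y) ** 2)
    return w, float(loss)
```

With sample inputs x and desired outputs y = 2x, the loop recovers w close to 2 after a handful of iterations; the actual image generation model replaces the single parameter w with the weights of a deep network, but the control flow is the same.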
In some optional implementations of the present embodiment, the image generation model may be a generative adversarial network. The generative adversarial network includes a generative network and a discriminative network: the generative network characterizes the correspondence between initial images and target images, and the discriminative network determines whether an input target image is a generated target image or a real target image. As an example, the generative adversarial network may be a cycle-consistent generative adversarial network (CycleGAN). Here, a real target image is an image of the image type characterized by the image type information corresponding to the image generation model.
It can be understood that when the image generation model is a generative adversarial network, this optional implementation can use an unsupervised training method to train one image generation model for each piece of image type information in the set.
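The division of labor between the generative and discriminative networks can be made concrete with the standard adversarial objective, shown numerically below. This is only the generic GAN loss, given as a sketch of the general idea; it is not the CycleGAN objective (which adds cycle-consistency terms) and not a training procedure specified by the patent.

```python
import numpy as np

def gan_losses(d_real: float, d_fake: float):
    """d_real: discriminator's probability that a real target image is real;
    d_fake: its probability that a generated target image is real.
    The discriminator pushes d_real toward 1 and d_fake toward 0; the
    generator pushes d_fake toward 1, i.e. tries to fool the discriminator."""
    d_loss = -np.log(d_real) - np.log(1.0 - d_fake)  # discriminator objective
    g_loss = -np.log(d_fake)                         # non-saturating generator objective
    return float(d_loss), float(g_loss)
```

At the point where the discriminator can no longer tell generated target images from real ones (d_real = d_fake = 0.5), both losses sit at their equilibrium values (2 ln 2 and ln 2), which is exactly the state the unsupervised training aims for.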
Step 202: determine, from the set of image generation models, the image generation model corresponding to the obtained image type information.
In the present embodiment, the executing body may determine, from the set of image generation models, the image generation model corresponding to the obtained image type information. It can be understood that the executing body may determine, from the model set, the image generation model for which an association with the obtained image type information has been pre-established, and use it as the corresponding image generation model.
Step 203: input the initial image into the determined image generation model to generate the target image.
In the present embodiment, the executing body may input the initial image obtained in step 201 into the image generation model determined in step 202 to generate the target image. The target image may be an image of the image type characterized by the image type information corresponding to the image generation model.
Thus, the present embodiment can perform style transfer on the initial image input by the user according to the user's needs, thereby obtaining a target image that meets those needs (the image obtained after performing style transfer on the initial image).
In some optional implementations of the present embodiment, the initial image is a face image. The executing body may thus further perform the following steps:
Step 1: determine the region size of at least one facial organ image region included in the initial image input by the user. The facial organs include, but are not limited to, at least one of the following: eyes, nose, forehead, ears, mouth, eyebrows, eyelashes, cheekbones, etc.
As an example, the executing body may determine the region size of each facial organ image region among the at least one facial organ image region included in the initial image input by the user. As another example, the executing body may determine the region size of the facial organ image region chosen by the user among those regions.
As an example, the region size may be characterized by the number of pixels the facial organ image region includes, or by the ratio of the area of the facial organ image region to the area of the initial image input by the user.
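Both characterizations of region size mentioned above (pixel count and area ratio) can be computed directly from a binary mask of the organ region. A minimal sketch, assuming the region is given as a boolean mask over the image; how the mask itself is obtained (e.g. by facial landmark detection) is outside this snippet.

```python
import numpy as np

def region_size(organ_mask: np.ndarray):
    """organ_mask: boolean array, True where the facial organ region lies.
    Returns (pixel_count, area_ratio) -- the two characterizations of
    region size described in the text."""
    pixel_count = int(organ_mask.sum())           # number of pixels in region
    area_ratio = pixel_count / organ_mask.size    # region area / image area
    return pixel_count, area_ratio
```

Either quantity can then be compared against the per-organ size threshold; the ratio has the advantage of being independent of image resolution.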
Step 2: for each of the at least one facial organ image region, in response to determining that the region size of the facial organ image region is greater than or equal to the size threshold predetermined for that region, perform deformation processing on the facial organ image located in that region.
Here, the executing body may use a triangle affine transformation algorithm to perform deformation processing on the facial organ image, or may input the facial organ image into a pre-trained deformation model to perform the deformation processing. The deformation model is used to perform deformation processing on the input image; for example, it may be a convolutional neural network model trained with a machine learning algorithm.
A size threshold may be set in advance by a technician for each facial organ the face includes. As an example, a technician may set the size threshold of the facial organ "eyes" to the size of any one eye, or to the mean of the sizes of several eyes. The deformation processing may include, but is not limited to: enlarging, shrinking, translation, rotation, distortion, etc.
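The threshold check and one simple kind of deformation (enlarging an organ about its centroid) can be sketched on landmark points. This scaling stands in for the triangle affine transformation or the learned deformation model, neither of which is specified in detail; the scale factor is an illustrative assumption.

```python
import numpy as np

def maybe_deform(landmarks: np.ndarray, region_size: int,
                 size_threshold: int, scale: float = 1.2) -> np.ndarray:
    """Deform (here: enlarge) an organ's landmark points about their
    centroid, but only when the region size meets the preset threshold."""
    if region_size < size_threshold:
        return landmarks  # below threshold: leave the organ unchanged
    centroid = landmarks.mean(axis=0)
    # Affine scaling about the centroid; a full implementation would warp
    # the organ's pixels, e.g. triangle by triangle.
    return centroid + (landmarks - centroid) * scale
```

Scaling about the centroid leaves the organ's position unchanged while altering its size, which matches the "enlarging"/"shrinking" deformations named above.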
In some optional implementations of the present embodiment, the executing body may further perform the following steps:
Step 1: obtain the facial organ information selected by the user from a predetermined set of facial organ information, and obtain the deformation information selected by the user from a set of deformation information determined for the facial organ information set.
The facial organ information is used to indicate a facial organ. Here, facial organ information may be characterized by text; for example, it may be "nose". Optionally, facial organ information may also be characterized in other forms, for example by an image. Deformation information may be used to indicate a specific deformation process. Here, deformation information may be characterized by text, for example "enlarge". Optionally, deformation information may also be characterized in other forms, for example by an image (such as an image of enlarged eyes).
It should be noted that the technician may set one or more pieces of corresponding deformation information for each piece of face organ information in the face organ information set, or may set the face organ information set and the deformation information set separately, so that any face organ information in the face organ information set corresponds to any deformation information in the deformation information set.
Here, when the above execution body is a terminal device, it may present each piece of face organ information in the face organ information set, so that the user can select face organ information from the presented set and the execution body obtains the user's selection. When the above execution body is a server, it may send the face organ information set to a communicatively connected terminal device, so that the terminal device presents each piece of face organ information in the received set; the user then selects face organ information from the presented set, and the terminal device sends the selected face organ information back to the execution body, so that the execution body obtains the face organ information selected by the user.
It can be understood that the execution body may obtain the deformation information in a manner similar to that for the face organ information, which is not repeated here.
Step 2: according to the face organ information and deformation information selected by the user, determine a face organ deformation model from a pre-trained face organ deformation model set.
The face organ deformation model may be used to perform deformation processing on a face organ in the initial image. As an example, the face organ deformation model may be a neural network model trained using a machine learning algorithm, or may be a triangular affine transformation algorithm.
Step 3: input the generated target image into the determined face organ deformation model to obtain a deformed image of the initial image.
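The selection in Steps 1-3 can be read as a lookup keyed by the pair of user selections. The sketch below is hypothetical: the "models" are stand-in functions, and all names are invented; a real system would store trained networks or affine-transform routines:

```python
# Hypothetical sketch of Steps 1-3: (face organ information, deformation
# information) keys a lookup into a set of pre-trained deformation models.

def enlarge_nose(image):    # stand-in for a trained nose-enlargement model
    return {"image": image, "op": "nose+enlarge"}

def shrink_mouth(image):    # stand-in for a trained mouth-reduction model
    return {"image": image, "op": "mouth+shrink"}

deformation_model_set = {
    ("nose", "enlarge"): enlarge_nose,
    ("mouth", "shrink"): shrink_mouth,
}

organ_info = "nose"            # user's pick from the face organ information set
deformation_info = "enlarge"   # user's pick from the deformation information set

model = deformation_model_set[(organ_info, deformation_info)]
deformed = model("target_image")
print(deformed["op"])  # -> nose+enlarge
```

The generated target image, rather than the raw initial image, is what gets fed to the determined deformation model, matching Step 3.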
With continued reference to Fig. 3A-Fig. 3C, Fig. 3A-Fig. 3C are schematic diagrams of an application scenario of the method for generating an image according to this embodiment. In Fig. 3A, a mobile phone obtains an initial image 301 input by a user and image type information 3020 selected by the user from a predetermined image type information set 302. The image type information in the image type information set 302 corresponds one-to-one to the image generation models in a pre-trained image generation model set, and an image generation model is used to generate a target image of the image type characterized by the corresponding image type information. Then, referring to Fig. 3B, the mobile phone determines, from the image generation model set (for example, including image generation models corresponding to the image type information "oil painting", "cartoon", "British", "Gothic", "sand painting", and "black and white" respectively), the image generation model 304 corresponding to the obtained image type information 3020. Finally, the mobile phone inputs the initial image 301 into the determined image generation model 304 to generate a target image 303. Optionally, referring to Fig. 3C, the mobile phone presents on its screen the target image 303 generated by performing painting-style transfer on the initial image 301.
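The one-to-one correspondence in the Fig. 3A-3B scenario amounts to indexing a mapping from image type information to generation models. The sketch below uses placeholder callables in place of real style-transfer networks:

```python
# Minimal sketch of the Fig. 3A-3B scenario: the user's selected image type
# information indexes a one-to-one mapping onto image generation models.
# The model entries are placeholders, not trained style-transfer networks.

def make_model(style):
    def generate(initial_image):
        return f"{style}({initial_image})"
    return generate

image_generation_model_set = {
    style: make_model(style)
    for style in ["oil painting", "cartoon", "British", "Gothic",
                  "sand painting", "black and white"]
}

selected_type_info = "oil painting"               # image type information 3020
model_304 = image_generation_model_set[selected_type_info]
target_image_303 = model_304("initial_image_301")
print(target_image_303)  # -> oil painting(initial_image_301)
```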
In the prior art, since each user's needs often differ, the avatars they require also generally differ. However, most users cannot design a satisfactory avatar by themselves. It can thus be seen that in the prior art there is a demand for performing type conversion on an initial image selected by a user.
The method provided by the above embodiment of the present disclosure obtains an initial image input by a user and image type information selected by the user from a predetermined image type information set, then determines, from an image generation model set, the image generation model corresponding to the obtained image type information, and finally inputs the initial image into the determined image generation model to generate a target image. Style transfer can thus be performed on the user's initial image according to the user's needs, obtaining an image that meets those needs and thereby enriching the ways in which images can be generated.
With further reference to Fig. 4, a process 400 of another embodiment of the method for generating an image is illustrated. The process 400 of the method for generating an image includes the following steps:
Step 401: obtain an initial image input by a user and image type information selected by the user from a predetermined image type information set.
In this embodiment, the execution body of the method for generating an image (for example, the server or the terminal device shown in Fig. 1) may obtain the initial image input by the user from another electronic device or locally, through a wired or wireless connection. Similarly, the execution body may also obtain, through a wired or wireless connection, from another electronic device or locally, the image type information selected by the user from the predetermined image type information set. The initial image is a face image.
The image type information in the image type information set corresponds one-to-one to the image generation models in a pre-trained image generation model set, and an image generation model is used to generate a target image of the image type characterized by the corresponding image type information.
Step 402: from the image generation model set, determine the image generation model corresponding to the obtained image type information.
In this embodiment, the execution body may determine, from the above image generation model set, the image generation model corresponding to the obtained image type information.
Step 403: input the initial image into the determined image generation model to generate a target image.
In this embodiment, the execution body may input the initial image obtained in step 401 into the image generation model determined in step 402, thereby generating a target image. The target image may be an image of the image type characterized by the image type information corresponding to the image generation model.
Here, in addition to the content described above, steps 401-403 may also include content substantially consistent with steps 201-203 in the embodiment corresponding to Fig. 2, which is not repeated here.
Step 404: determine the area size of at least one face organ image region included in the initial image input by the user.
In this embodiment, the execution body may determine the area size of at least one face organ image region included in the initial image input by the user. A face organ includes, but is not limited to, at least one of the following: eyes, nose, forehead, ears, mouth, eyebrows, eyelashes, cheekbones, and the like.
As an example, the execution body may determine, for each face organ image region among the at least one face organ image region included in the initial image input by the user, the area size of that face organ image region. As another example, the execution body may determine, among the at least one face organ image region included in the initial image input by the user, the area size of a face organ image region chosen by the user.
As an example, the area size may be characterized by the number of pixels included in the face organ image region, or by the ratio of the area of the face organ image region to the area of the initial image input by the user.
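The two size characterizations named above (pixel count and area ratio) can be sketched as follows, assuming the organ region is given as a boolean mask over the initial image; the dimensions here are invented for illustration:

```python
# Sketch of the two region-size characterizations from Step 404, computed
# from a boolean mask (nested list) marking a face-organ region.
HEIGHT, WIDTH = 100, 80                       # initial image dimensions
organ_mask = [[40 <= r < 60 and 30 <= c < 50  # a 20x20 organ region
               for c in range(WIDTH)]
              for r in range(HEIGHT)]

pixel_count = sum(sum(row) for row in organ_mask)  # size as number of pixels
area_ratio = pixel_count / (HEIGHT * WIDTH)        # size relative to the image

print(pixel_count)  # -> 400
print(area_ratio)   # -> 0.05
```

Either measure can then be compared against the per-organ size threshold of step 405.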
Step 405: for the area size of the at least one face organ image region, in response to determining that the area size of a face organ image region is greater than or equal to the size threshold determined in advance for that face organ image region, perform deformation processing on the face organ image located in that region.
In this embodiment, for the area size of the at least one face organ image region, the execution body may, in response to determining that the area size of a face organ image region is greater than or equal to the size threshold determined in advance for that region, perform deformation processing on the face organ image located in that region.
Here, the execution body may use a triangular affine transformation algorithm to perform deformation processing on the face organ image, or may input the face organ image into a pre-trained deformation model to perform deformation processing on it. The deformation model is used to perform deformation processing on an input image; for example, it may be a convolutional neural network model trained using a machine learning algorithm.
A technician may set a size threshold in advance for each face organ included in a face. As an example, the technician may set the size threshold of the face organ "eyes" to the size of any one eye, or to the mean of the sizes of any number of eyes. The deformation processing may include, but is not limited to: enlargement, reduction, translation, rotation, distortion, and the like.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the process 400 of the method for generating an image in this embodiment highlights the step of performing deformation processing on the face organ image. The scheme described in this embodiment can therefore, on the basis of performing style transfer on the face image, further deform the face organ images in the face image. For example, if the size of the image region where the nose characterized by the face image is located is greater than or equal to a predetermined nose size threshold (for example, the mean nose size of multiple people), the scheme described in this embodiment can perform deformation processing (for example, enlargement) on the nose of the face image, obtaining a stylized (for example, animation-style) face image whose nose has undergone deformation processing (for example, enlargement). The personalized needs of users can thereby be further satisfied, further enriching the ways in which images can be generated.
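The nose-enlargement example above can be sketched as a toy deformation. This is a stand-in for the triangular affine transform or trained deformation model the embodiment actually describes; the tiny 2x2 "nose" and the threshold value are invented:

```python
# Toy enlargement deformation: a region whose pixel count meets the assumed
# nose size threshold is scaled up by nearest-neighbour resampling.

def enlarge_region(region, factor):
    """Nearest-neighbour upscale of a 2-D region (list of lists) by an
    integer factor, repeating each row and each value `factor` times."""
    return [[value for value in row for _ in range(factor)]
            for row in region for _ in range(factor)]

nose = [[1, 2],
        [3, 4]]
nose_threshold = 4                      # assumed predetermined pixel-count threshold
nose_size = len(nose) * len(nose[0])    # region size as pixel count
if nose_size >= nose_threshold:
    nose = enlarge_region(nose, 2)

print(len(nose), len(nose[0]))  # -> 4 4
```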
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for generating an image. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and in addition to the features described below, the apparatus embodiment may also include features identical or corresponding to those of the method embodiment shown in Fig. 2. The apparatus may be applied in various electronic devices.
As shown in Fig. 5, the apparatus 500 for generating an image of this embodiment includes: a first obtaining unit 501, a first determination unit 502, and a generation unit 503. The first obtaining unit 501 is configured to obtain an initial image input by a user and image type information selected by the user from a predetermined image type information set, where the image type information in the image type information set corresponds one-to-one to the image generation models in a pre-trained image generation model set, and an image generation model is used to generate a target image of the image type characterized by the corresponding image type information. The first determination unit 502 is configured to determine, from the image generation model set, the image generation model corresponding to the obtained image type information. The generation unit 503 is configured to input the initial image into the determined image generation model to generate a target image.
In this embodiment, the first obtaining unit 501 of the apparatus 500 for generating an image may obtain the initial image input by the user from another electronic device or locally, through a wired or wireless connection. Similarly, the apparatus 500 may also obtain, through a wired or wireless connection, from another electronic device or locally, the image type information selected by the user from the predetermined image type information set.
The image type information in the image type information set corresponds one-to-one to the image generation models in the pre-trained image generation model set. An image generation model is used to generate a target image of the image type characterized by the corresponding image type information.
In this embodiment, the first determination unit 502 may determine, from the above image generation model set, the image generation model corresponding to the obtained image type information.
In this embodiment, the generation unit 503 may input the initial image obtained by the first obtaining unit 501 into the image generation model determined by the first determination unit 502, thereby generating a target image. The target image may be an image of the image type characterized by the image type information corresponding to the image generation model.
In some optional implementations of this embodiment, for the image type information in the image type information set, the image generation model corresponding to that image type information is obtained by training as follows:
First, obtain a training sample set, where a training sample includes a sample initial image and a sample target image corresponding to the sample initial image, the sample target image being an image of the image type characterized by the image type information.
Then, using a machine learning algorithm, take the sample initial images included in the training samples of the training sample set as input and the sample target images corresponding to the input sample initial images as desired output, and train to obtain the image generation model.
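The paired input/desired-output training described above can be illustrated with a deliberately tiny model. The sketch below swaps the convolutional network for gradient descent on a single scaling parameter, purely so the shape of the procedure (sample initial images in, sample target images as desired output) is visible; the data and learning rate are invented:

```python
# Toy version of the training procedure: each sample pairs a sample initial
# image with a sample target image; gradient descent fits the model
# output_pixel = scale * input_pixel to the desired outputs.
training_samples = [([1.0, 2.0], [2.0, 4.0]),   # (sample initial, sample target)
                    ([3.0, 1.0], [6.0, 2.0]),
                    ([2.0, 2.0], [4.0, 4.0])]

scale = 0.0                          # the model's single trainable parameter
for _ in range(200):                 # training iterations
    grad = 0.0
    for initial, target in training_samples:
        for x, y in zip(initial, target):
            grad += 2 * (scale * x - y) * x   # d/dscale of squared error
    scale -= 0.01 * grad             # gradient-descent update

print(round(scale, 3))  # -> 2.0  (the targets are exactly 2x the inputs)
```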
In some optional implementations of this embodiment, the initial image is a face image, and the apparatus 500 further includes: a second determination unit (not shown) configured to determine the area size of at least one face organ image region included in the initial image input by the user; and a processing unit (not shown) configured to, for the area size of the at least one face organ image region, in response to determining that the area size of a face organ image region is greater than or equal to the size threshold determined in advance for that face organ image region, perform deformation processing on the face organ image located in that region.
In some optional implementations of this embodiment, the apparatus 500 further includes: a second obtaining unit (not shown) configured to obtain face organ information selected by the user from a predetermined face organ information set, and deformation information selected by the user from a deformation information set determined for the face organ information set; a third determination unit (not shown) configured to determine, according to the face organ information and deformation information selected by the user, a face organ deformation model from a pre-trained face organ deformation model set; and an input unit (not shown) configured to input the generated target image into the determined face organ deformation model to obtain a deformed image of the initial image.
In some optional implementations of this embodiment, the image generation model is a generative adversarial network.
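Where the image generation model is a generative adversarial network, its training can be read as the standard GAN minimax game between a generator G and a discriminator D. This formula is general background rather than something recited in the disclosure:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right]
```

In the paired setting described above, G would map a sample initial image toward the corresponding sample target image while D learns to distinguish generated images from real target-style images.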
In the apparatus provided by the above embodiment of the present disclosure, the first obtaining unit 501 obtains an initial image input by a user and image type information selected by the user from a predetermined image type information set, where the image type information in the image type information set corresponds one-to-one to the image generation models in a pre-trained image generation model set, and an image generation model is used to generate a target image of the image type characterized by the corresponding image type information; then the first determination unit 502 determines, from the image generation model set, the image generation model corresponding to the obtained image type information; finally, the generation unit 503 inputs the initial image into the determined image generation model to generate a target image. Style transfer can thus be performed on the user's initial image according to the user's needs, obtaining an image that meets those needs and thereby enriching the ways in which images can be generated.
Referring now to Fig. 6, a structural schematic diagram of a computer system 600 of an electronic device suitable for implementing embodiments of the present disclosure is illustrated. The electronic device shown in Fig. 6 is merely an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present disclosure.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, ROM 602, and RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, etc.; an output portion 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., and a speaker, etc.; a storage portion 608 including a hard disk, etc.; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom can be installed into the storage portion 608 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-described functions defined in the methods of the present disclosure are executed.
It should be noted that the computer-readable medium described in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example (but not limited to), an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program, which may be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
The computer program code for executing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Python, Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each box in a flowchart or block diagram may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the boxes may occur in an order different from that noted in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes therein, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including a first obtaining unit, a first determination unit, and a generation unit. The names of these units do not in some cases limit the units themselves; for example, the first determination unit may also be described as "a unit that determines the image generation model corresponding to the obtained image type information".
As another aspect, the present disclosure also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain an initial image input by a user and image type information selected by the user from a predetermined image type information set, where the image type information in the image type information set corresponds one-to-one to the image generation models in a pre-trained image generation model set, and an image generation model is used to generate a target image of the image type characterized by the corresponding image type information; determine, from the image generation model set, the image generation model corresponding to the obtained image type information; and input the initial image into the determined image generation model to generate a target image.
The above description is merely a preferred embodiment of the present disclosure and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present disclosure is not limited to technical solutions formed by the specific combinations of the above technical features; it should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Claims (12)
1. A method for generating an image, comprising:
obtaining an initial image input by a user and image type information selected by the user from a predetermined image type information set, wherein the image type information in the image type information set corresponds one-to-one to image generation models in a pre-trained image generation model set, and an image generation model is used to generate a target image of the image type characterized by the corresponding image type information;
determining, from the image generation model set, the image generation model corresponding to the obtained image type information;
inputting the initial image into the determined image generation model to generate a target image.
2. The method according to claim 1, wherein, for the image type information in the image type information set, the image generation model corresponding to the image type information is obtained by training as follows:
obtaining a training sample set, wherein a training sample comprises a sample initial image and a sample target image corresponding to the sample initial image, the sample target image being an image of the image type characterized by the image type information;
using a machine learning algorithm, taking the sample initial images included in the training samples of the training sample set as input and the sample target images corresponding to the input sample initial images as desired output, and training to obtain the image generation model.
3. The method according to claim 1 or 2, wherein the initial image is a face image; and
the method further comprises:
determining the area size of at least one face organ image region included in the initial image input by the user;
for the area size of the at least one face organ image region, in response to determining that the area size of a face organ image region is greater than or equal to a size threshold determined in advance for that face organ image region, performing deformation processing on the face organ image located in that face organ image region.
4. The method according to claim 1 or 2, wherein the method further comprises:
obtaining face organ information selected by the user from a predetermined face organ information set and deformation information selected by the user from a deformation information set determined for the face organ information set;
determining, according to the face organ information and deformation information selected by the user, a face organ deformation model from a pre-trained face organ deformation model set;
inputting the generated target image into the determined face organ deformation model to obtain a deformed image of the initial image.
5. The method according to claim 1, wherein the image generation model is a generative adversarial network.
6. An apparatus for generating an image, comprising:
a first obtaining unit configured to obtain an initial image input by a user and image type information selected by the user from a predetermined image type information set, wherein the image type information in the image type information set corresponds one-to-one to image generation models in a pre-trained image generation model set, and an image generation model is used to generate a target image of the image type characterized by the corresponding image type information;
a first determination unit configured to determine, from the image generation model set, the image generation model corresponding to the obtained image type information;
a generation unit configured to input the initial image into the determined image generation model to generate a target image.
7. The apparatus according to claim 6, wherein, for the image type information in the image type information set, the image generation model corresponding to the image type information is obtained by training as follows:
obtaining a training sample set, wherein a training sample comprises a sample initial image and a sample target image corresponding to the sample initial image, the sample target image being an image of the image type characterized by the image type information;
using a machine learning algorithm, taking the sample initial images included in the training samples of the training sample set as input and the sample target images corresponding to the input sample initial images as desired output, and training to obtain the image generation model.
8. The apparatus according to claim 6 or 7, wherein the initial image is a face image; and
the apparatus further comprises:
a second determination unit, configured to determine the region size of at least one face organ image region included in the initial image input by the user;
a processing unit, configured to, for each of the at least one face organ image region, perform deformation processing on the face organ image located in the face organ image region in response to determining that the region size of the face organ image region is greater than or equal to a size threshold determined in advance for that face organ image region.
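The per-region size check in this claim amounts to comparing each organ region's area against a threshold fixed in advance for that organ, and deforming only when the region is large enough. The thresholds, organ names, and placeholder deformation below are hypothetical:

```python
# Sketch of claim 8's gating step: deform a face organ image only when its
# region size meets the threshold determined in advance for that region.
SIZE_THRESHOLDS = {"eye": 400, "mouth": 900}   # hypothetical, in pixels^2

def maybe_deform(organ, region_w, region_h, image):
    area = region_w * region_h
    if area >= SIZE_THRESHOLDS[organ]:         # greater than or equal: deform
        return f"deformed_{organ}({image})"
    return image                               # region too small: unchanged

print(maybe_deform("eye", 30, 20, "face.png"))    # 600 >= 400, deformed
print(maybe_deform("mouth", 20, 20, "face.png"))  # 400 < 900, unchanged
```

Gating on region size avoids applying deformation to organs too small for the result to be visually meaningful.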
9. The apparatus according to claim 6 or 7, wherein the apparatus further comprises:
a second acquisition unit, configured to acquire face organ information selected by the user from a predetermined face organ information set, and deformation data selected by the user from a deformation data set determined for the face organ information set;
a third determination unit, configured to determine, according to the face organ information and deformation data selected by the user, a face organ deformation model from a pre-trained face organ deformation model set;
an input unit, configured to input the generated target image into the determined face organ deformation model to obtain a deformed image of the initial image.
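The selection step in this claim pairs the user's chosen organ information with the chosen deformation data and uses the pair to pick one model from the pre-trained deformation model set. The keys and stub models below are hypothetical:

```python
# Sketch of claim 9: (face organ information, deformation data) together
# index into a set of pre-trained face organ deformation models.
deformation_models = {
    ("eye", "slight"):  lambda img: f"eye_slight({img})",
    ("eye", "strong"):  lambda img: f"eye_strong({img})",
    ("nose", "slight"): lambda img: f"nose_slight({img})",
}

def deform(target_image, organ_info, deformation_data):
    model = deformation_models[(organ_info, deformation_data)]  # determine model
    return model(target_image)          # image after deformation

print(deform("target.png", "eye", "strong"))  # eye_strong(target.png)
```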
10. The apparatus according to claim 6, wherein the image generation model is a generative adversarial network.
11. An electronic device, comprising:
one or more processors;
a storage device on which one or more programs are stored,
which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 5.
12. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910199011.XA CN109949213B (en) | 2019-03-15 | 2019-03-15 | Method and apparatus for generating image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109949213A true CN109949213A (en) | 2019-06-28 |
CN109949213B CN109949213B (en) | 2023-06-16 |
Family
ID=67010103
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910199011.XA Active CN109949213B (en) | 2019-03-15 | 2019-03-15 | Method and apparatus for generating image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109949213B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090257099A1 (en) * | 2008-04-14 | 2009-10-15 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method |
US20150262370A1 (en) * | 2014-03-14 | 2015-09-17 | Omron Corporation | Image processing device, image processing method, and image processing program |
JP2016051302A (en) * | 2014-08-29 | 2016-04-11 | カシオ計算機株式会社 | Image processor, imaging device, image processing method, and program |
CN107909540A (en) * | 2017-10-26 | 2018-04-13 | 深圳天珑无线科技有限公司 | Image processing method, device, mobile terminal and computer-readable recording medium |
CN108259769A (en) * | 2018-03-30 | 2018-07-06 | 广东欧珀移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
CN108401112A (en) * | 2018-04-23 | 2018-08-14 | Oppo广东移动通信有限公司 | Image processing method, device, terminal and storage medium |
CN108960093A (en) * | 2018-06-21 | 2018-12-07 | 阿里体育有限公司 | The recognition methods and equipment of face's rotational angle |
CN109215007A (en) * | 2018-09-21 | 2019-01-15 | 维沃移动通信有限公司 | A kind of image generating method and terminal device |
CN109376596A (en) * | 2018-09-14 | 2019-02-22 | 广州杰赛科技股份有限公司 | Face matching process, device, equipment and storage medium |
Non-Patent Citations (3)
Title |
---|
Zeng Bi et al., "Illumination normalization method for unpaired face images based on CycleGAN", Journal of Guangdong University of Technology, vol. 35, no. 05, 18 July 2018, pages 11-19 * |
Lin Weixun, Pan Gang, Wu Zhaohui, Pan Yunhe, "Facial feature localization methods", Journal of Image and Graphics, no. 08, page 11 * |
Zhao Zengshun et al., "Latest advances in generative adversarial networks: theoretical framework, derived models and applications", Journal of Chinese Computer Systems, vol. 39, no. 12, 11 December 2018, pages 2602-2606 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110503703A (en) * | 2019-08-27 | 2019-11-26 | 北京百度网讯科技有限公司 | Method and apparatus for generating image |
CN110503703B (en) * | 2019-08-27 | 2023-10-13 | 北京百度网讯科技有限公司 | Method and apparatus for generating image |
CN114331820A (en) * | 2021-12-29 | 2022-04-12 | 北京字跳网络技术有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109949213B (en) | 2023-06-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109816589A (en) | Method and apparatus for generating cartoon style transformation model | |
CN109086719A (en) | Method and apparatus for output data | |
CN108898185A (en) | Method and apparatus for generating image recognition model | |
CN107578017A (en) | Method and apparatus for generating image | |
CN108595628A (en) | Method and apparatus for pushed information | |
CN109800732A (en) | The method and apparatus for generating model for generating caricature head portrait | |
CN108446387A (en) | Method and apparatus for updating face registration library | |
CN108898186A (en) | Method and apparatus for extracting image | |
CN109858445A (en) | Method and apparatus for generating model | |
CN110110811A (en) | Method and apparatus for training pattern, the method and apparatus for predictive information | |
CN108985257A (en) | Method and apparatus for generating information | |
CN109829432A (en) | Method and apparatus for generating information | |
CN109815365A (en) | Method and apparatus for handling video | |
CN107910060A (en) | Method and apparatus for generating information | |
CN109299477A (en) | Method and apparatus for generating text header | |
CN109977839A (en) | Information processing method and device | |
CN110084317A (en) | The method and apparatus of image for identification | |
CN110009059A (en) | Method and apparatus for generating model | |
CN109918530A (en) | Method and apparatus for pushing image | |
CN109961032A (en) | Method and apparatus for generating disaggregated model | |
CN109241934A (en) | Method and apparatus for generating information | |
CN109754464A (en) | Method and apparatus for generating information | |
CN108521516A (en) | Control method and device for terminal device | |
CN108960110A (en) | Method and apparatus for generating information | |
CN110046571A (en) | The method and apparatus at age for identification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||

Effective date of registration: 2023-12-14
Address after: Building 3, No. 1 Yinzhu Road, Suzhou High-tech Zone, Suzhou City, Jiangsu Province, 215011
Patentee after: Suzhou Moxing Times Technology Co.,Ltd.
Address before: 100085 Baidu Building, No. 10 Shangdi Tenth Street, Haidian District, Beijing
Patentee before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.