CN110136054A - Image processing method and device - Google Patents
- Publication number
- CN110136054A (application number CN201910409754.5A / CN201910409754A)
- Authority
- CN
- China
- Prior art keywords
- image
- sample
- eyes
- processed
- eyes image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T3/04
- G06T5/77
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2207/00—Indexing scheme for image analysis or image enhancement
  - G06T2207/20—Special algorithmic details
    - G06T2207/20081—Training; Learning
    - G06T2207/20084—Artificial neural networks [ANN]
  - G06T2207/30—Subject of image; Context of image processing
    - G06T2207/30196—Human being; Person
      - G06T2207/30201—Face
Abstract
Embodiments of the disclosure provide an image processing method and apparatus. One embodiment of the method includes: determining a to-be-processed eye image from an acquired to-be-processed facial image; inputting the to-be-processed eye image into a pre-trained image processing model to obtain a processed eye image, where the image processing model is used to process the region of the to-be-processed eye image corresponding to a preset eye object; and replacing the to-be-processed eye image in the to-be-processed facial image with the processed eye image to generate a processed facial image. The embodiment makes it possible to selectively add a preset eye object (such as a double eyelid) to, or remove it from, the to-be-processed facial image.
Description
Technical field
Embodiments of the disclosure relate to the field of computer technology, and in particular to an image processing method and apparatus.
Background

In beauty and makeup applications (apps), the facial image uploaded by a user is usually retouched, for example by adding a double eyelid or a lying silkworm (the slight fullness just below the lower eyelid) to the facial image.

At present, adding a double eyelid or a lying silkworm to a facial image is mainly achieved with preset stickers: the position of the eyes in the facial image is determined first, and a preset sticker is then placed at the corresponding position in the facial image according to the position of the eyes.
Summary of the invention
Embodiments of the disclosure propose an image processing method and apparatus.

In a first aspect, embodiments of the disclosure provide an image processing method, comprising: determining a to-be-processed eye image from an acquired to-be-processed facial image; inputting the to-be-processed eye image into a pre-trained image processing model to obtain a processed eye image, where the image processing model is used to process the region of the to-be-processed eye image corresponding to a preset eye object; and replacing the to-be-processed eye image in the to-be-processed facial image with the processed eye image to generate a processed facial image.
In some embodiments, the preset eye object includes at least one of the following: a double eyelid, a lying silkworm.
In some embodiments, the image processing model is trained on a sample set. Each sample in the sample set is an image pair comprising a sample first image and a sample second image; in the sample first image, the region corresponding to the preset eye object does not contain the preset eye object, while in the sample second image, that region does contain the preset eye object.
In some embodiments, the image processing model is trained as follows: the sample first image included in a sample from the sample set is used as the input of an initial model, and the sample second image corresponding to the input sample first image is used as the desired output of the initial model; training then yields the image processing model.
In some embodiments, the image processing model is trained the other way around: the sample second image included in a sample from the sample set is used as the input of the initial model, and the sample first image corresponding to the input sample second image is used as the desired output; training then yields the image processing model.
In some embodiments, the samples in the sample set are obtained as follows: eye images are cropped from the first facial images in an acquired first facial image set to obtain a first eye image set; the first eye images in the first eye image set are flipped horizontally to obtain a second eye image set; eye images whose region corresponding to the preset eye object does not contain the preset eye object are selected from the first and second eye image sets to obtain a third eye image set; the third eye images in the third eye image set are input into a pre-trained sample processing model to obtain a fourth eye image set, where the sample processing model is used to add the preset eye object to a third eye image; and samples are generated based on fourth eye images selected from the fourth eye image set and the corresponding third eye images selected from the third eye image set.
In some embodiments, determining the to-be-processed eye image from the acquired to-be-processed facial image includes: cropping a to-be-processed eye image of a preset size from the facial image based on key points extracted from the facial image.
In some embodiments, the method further includes: sending the processed facial image to a communicatively connected terminal device so that the terminal device displays the processed facial image.
In a second aspect, embodiments of the disclosure provide an image processing apparatus, comprising: a determining unit configured to determine a to-be-processed eye image from an acquired to-be-processed facial image; a processing unit configured to input the to-be-processed eye image into a pre-trained image processing model to obtain a processed eye image, where the image processing model is used to process the region of the to-be-processed eye image corresponding to a preset eye object; and a generating unit configured to replace the to-be-processed eye image in the to-be-processed facial image with the processed eye image to generate a processed facial image.
In some embodiments, the preset eye object includes at least one of the following: a double eyelid, a lying silkworm.
In some embodiments, the image processing model is trained on a sample set. Each sample in the sample set is an image pair comprising a sample first image and a sample second image; in the sample first image, the region corresponding to the preset eye object does not contain the preset eye object, while in the sample second image, that region does contain the preset eye object.
In some embodiments, the image processing model is trained as follows: the sample first image included in a sample from the sample set is used as the input of an initial model, and the sample second image corresponding to the input sample first image is used as the desired output; training then yields the image processing model.

In some embodiments, the image processing model is instead trained with the sample second image as the input of the initial model and the corresponding sample first image as the desired output; training then yields the image processing model.
In some embodiments, the samples in the sample set are obtained as follows: eye images are cropped from the first facial images in an acquired first facial image set to obtain a first eye image set; the first eye images in the first eye image set are flipped horizontally to obtain a second eye image set; eye images whose region corresponding to the preset eye object does not contain the preset eye object are selected from the first and second eye image sets to obtain a third eye image set; the third eye images in the third eye image set are input into a pre-trained sample processing model to obtain a fourth eye image set, where the sample processing model is used to add the preset eye object to a third eye image; and samples are generated based on fourth eye images selected from the fourth eye image set and the corresponding third eye images selected from the third eye image set.
In some embodiments, the determining unit is further configured to crop a to-be-processed eye image of a preset size from the to-be-processed facial image based on key points extracted from the facial image.
In some embodiments, the apparatus further includes a sending unit configured to send the processed facial image to a communicatively connected terminal device so that the terminal device displays the processed facial image.
In a third aspect, embodiments of the disclosure provide a server comprising one or more processors and a storage device on which one or more programs are stored; when the one or more programs are executed by the one or more processors, they cause the one or more processors to implement the method described in any implementation of the first aspect.

In a fourth aspect, embodiments of the disclosure provide a computer-readable medium on which a computer program is stored; when the program is executed by a processor, the method described in any implementation of the first aspect is implemented.
With the image processing method and apparatus provided by embodiments of the disclosure, a to-be-processed facial image is first acquired; a to-be-processed eye image is then determined from the acquired facial image; the determined eye image is input into a pre-trained image processing model to obtain a processed eye image; and the to-be-processed eye image in the facial image is replaced with the obtained processed eye image to generate a processed facial image. This makes it possible to selectively add a preset eye object to, or remove it from, the to-be-processed facial image.
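As a concrete illustration, the crop, process, and paste-back flow summarized above can be sketched in a few lines of NumPy. This is only a sketch: the fixed crop box and the `process_eyes` callable (standing in for the trained image processing model) are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def edit_face(face, eye_box, process_eyes):
    """Crop the eye region, transform it, and paste it back.

    face:         H x W x 3 uint8 array (the to-be-processed facial image)
    eye_box:      (top, left, height, width) of the eye region
    process_eyes: callable standing in for the trained image processing model
    """
    t, l, h, w = eye_box
    eye_patch = face[t:t + h, l:l + w].copy()   # to-be-processed eye image
    edited = process_eyes(eye_patch)            # processed eye image
    out = face.copy()
    out[t:t + h, l:l + w] = edited              # replace the eye region
    return out                                  # processed facial image

# Toy usage: "processing" here merely brightens the eye region.
face = np.zeros((8, 8, 3), dtype=np.uint8)
result = edit_face(face, (2, 2, 3, 4), lambda p: p + 50)
```

Note that the original facial image is left untouched; the method produces a new processed facial image, matching the replace-to-generate step of the first aspect.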
Brief description of the drawings
Other features, objects, and advantages of the disclosure will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:

Fig. 1 is an exemplary system architecture diagram to which an embodiment of the disclosure may be applied;
Fig. 2 is a flowchart of one embodiment of the image processing method according to the disclosure;
Fig. 3 is a schematic diagram of an application scenario of the image processing method according to an embodiment of the disclosure;
Fig. 4 is a flowchart of another embodiment of the image processing method according to the disclosure;
Fig. 5 is a structural schematic diagram of one embodiment of the image processing apparatus according to the disclosure;
Fig. 6 is a structural schematic diagram of an electronic device suitable for implementing embodiments of the disclosure.
Detailed description of the embodiments
The disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention, not to limit it. It should also be noted that, for convenience of description, only the parts relevant to the related invention are shown in the drawings.

It should be noted that, in the absence of conflict, the embodiments of the disclosure and the features in the embodiments may be combined with each other. The disclosure is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary architecture 100 to which the image processing method or image processing apparatus of the disclosure may be applied.

As shown in Fig. 1, the system architecture 100 may include terminal devices 101 and 102, a network 103, and a server 104. The network 103 serves as a medium providing communication links between the terminal devices 101, 102 and the server 104, and may include various connection types, such as wired or wireless communication links or fiber optic cables.
The terminal devices 101 and 102 interact with the server 104 through the network 103 to receive or send messages. Various communication client applications, such as image processing applications, beauty applications, and browser applications, may be installed on the terminal devices 101 and 102.

The terminal devices 101 and 102 may be hardware or software. When they are hardware, they may be various electronic devices with a display screen and support for image processing, including but not limited to smartphones, tablet computers, laptop portable computers, and desktop computers. When they are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module. No specific limitation is made here.
The server 104 may be a server providing various services. As an example, the server 104 may be a back-end server for the beauty applications installed on the terminal devices 101 and 102. The server 104 may acquire the to-be-processed facial image sent by a terminal device 101 or 102, process the facial image, and send the generated processed facial image back to the terminal device.

The server 104 may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be noted that the image processing method provided by embodiments of the disclosure is generally executed by the server 104; accordingly, the image processing apparatus is generally provided in the server 104.

It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. Any number of terminal devices, networks, and servers may be provided according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the image processing method according to the disclosure is shown. The image processing method includes the following steps.
Step 201: determine a to-be-processed eye image from an acquired to-be-processed facial image.

In this embodiment, the executing body of the image processing method (e.g., the server 104 shown in Fig. 1) may acquire the to-be-processed facial image locally or from a communicatively connected terminal device (e.g., the terminal devices 101 and 102 shown in Fig. 1). Here, the to-be-processed facial image is usually an image showing a person's face.

After acquiring the to-be-processed facial image, the executing body may determine a to-be-processed eye image from the facial image. Here, the to-be-processed eye image is usually an image showing the eyes of the person shown in the facial image. It should be noted that the determined eye image may show only the person's left eye or right eye, or may show both the left eye and the right eye at the same time.
As an example, features extracted from an average eye image may be pre-stored in the executing body, where the average eye image may be obtained by averaging the pixel values of corresponding pixels over a large number of eye images. The executing body may first slide a preset sliding window over the to-be-processed facial image, then compute the similarity between features extracted from the region covered by the window and the pre-stored features, and finally determine the region with the highest similarity as the to-be-processed eye image.
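The sliding-window search described above can be sketched as follows. Negative mean squared error is used here as the similarity measure; this is an assumption, since the disclosure leaves the feature comparison unspecified.

```python
import numpy as np

def locate_eye(face, template, stride=1):
    """Slide a window over the face image and return the top-left corner
    of the patch most similar to the average-eye template.

    Similarity is negative mean squared difference, a stand-in for
    whatever feature comparison an implementation actually uses.
    """
    th, tw = template.shape
    best, best_pos = -np.inf, (0, 0)
    for i in range(0, face.shape[0] - th + 1, stride):
        for j in range(0, face.shape[1] - tw + 1, stride):
            patch = face[i:i + th, j:j + tw]
            sim = -np.mean((patch.astype(float) - template) ** 2)
            if sim > best:            # keep the most similar region
                best, best_pos = sim, (i, j)
    return best_pos

# Synthetic example: plant the template at (3, 4) and recover its position.
face = np.zeros((10, 10))
template = np.full((2, 3), 9.0)
face[3:5, 4:7] = 9.0
```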
In some optional implementations of this embodiment, the executing body may crop a to-be-processed eye image of a preset size from the facial image based on key points extracted from the acquired to-be-processed facial image.

First, the executing body may extract key points from the acquired to-be-processed facial image. In general, the extracted key points include key points extracted for the contours of the eyes shown in the facial image. Of course, the extracted key points may also include key points extracted for other parts of the face shown in the facial image (for example, the facial contour or the pupils).

Then, the executing body may determine the approximate position of the eyes in the facial image according to the extracted key points, and in turn crop an image of a preset size containing the eyes from the facial image as the to-be-processed eye image. Here, the preset size may be set according to actual needs and is not described in detail again.

In these implementations, using key points extracted for the eyes shown in the facial image makes it possible, on the one hand, to locate the eyes accurately and thus crop the to-be-processed eye image precisely, and, on the other hand, to avoid sliding the window repeatedly and extracting features for each window position, thereby shortening the time needed to determine the to-be-processed eye image.
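A minimal sketch of the key-point-based crop follows, assuming (row, col) landmark coordinates and a preset 32x64 crop; both the landmark format and the crop size are illustrative assumptions, since the disclosure only speaks of a preset size.

```python
import numpy as np

def crop_eye_by_keypoints(face, eye_keypoints, size=(32, 64)):
    """Centre a fixed-size crop on the mean of the eye contour key points.

    eye_keypoints: list of (row, col) landmarks on the eye contour.
    size:          (height, width) of the preset crop (an assumed value).
    """
    h, w = size
    pts = np.asarray(eye_keypoints, dtype=float)
    cy, cx = pts.mean(axis=0)                     # approximate eye centre
    top = int(round(float(cy))) - h // 2
    left = int(round(float(cx))) - w // 2
    top = max(0, min(top, face.shape[0] - h))     # clamp to image bounds
    left = max(0, min(left, face.shape[1] - w))
    return face[top:top + h, left:left + w], (top, left)

# Usage: two contour key points whose mean lies at row 60, column 70.
face = np.zeros((128, 128))
patch, corner = crop_eye_by_keypoints(face, [(58, 60), (62, 80)])
```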
Step 202: input the to-be-processed eye image into a pre-trained image processing model to obtain a processed eye image.

In this embodiment, after determining the to-be-processed eye image, the executing body may input the eye image into the pre-trained image processing model to obtain the processed eye image. The image processing model may be used to process the region of the to-be-processed eye image corresponding to a preset eye object.

The region corresponding to the preset eye object may be the region of the to-be-processed eye image in which the preset eye object is added or removed. Optionally, the preset eye object may include at least one of a double eyelid and a lying silkworm. Correspondingly, the region corresponding to the preset eye object may be the region of the eye image in which at least one of a double eyelid and a lying silkworm is added or removed. The region used for adding a double eyelid is usually the region close to the upper eyelid shown in the eye image, and the region used for adding a lying silkworm is usually the region close to the lower eyelid shown in the eye image.
The image processing model may be a correspondence table pre-stored in the executing body by a technician. The correspondence table may be obtained by processing a large number of pre-collected eye images: specifically, the technician may add the preset eye object to an eye image, or remove the preset eye object shown in an eye image. In practice, each eye image in the correspondence table corresponds one-to-one to the image obtained after it is processed. It can be understood that the processed image may be the image obtained after adding the preset eye object to the eye image, or the image obtained after removing the preset eye object shown in the eye image.

As an example, the executing body may match the determined to-be-processed eye image against the eye images in the correspondence table by similarity, and take the image corresponding to the most similar eye image as the processed eye image. It can be understood that, compared with the to-be-processed eye image, the processed eye image has the preset eye object added or removed.
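The correspondence-table variant amounts to a nearest-neighbour lookup, sketched below; the negative-MSE similarity and the toy table contents are assumptions for illustration.

```python
import numpy as np

def lookup_processed(eye, table):
    """Nearest-neighbour lookup in a pre-built correspondence table.

    table: list of (source_eye, processed_eye) array pairs prepared by
    hand, as described; similarity is negative MSE (an assumption).
    """
    best, best_out = -np.inf, None
    for src, out in table:
        sim = -np.mean((eye.astype(float) - src) ** 2)
        if sim > best:               # most similar table entry wins
            best, best_out = sim, out
    return best_out

# Toy table with two entries; the query is closest to the second.
table = [(np.zeros((2, 2)), np.ones((2, 2))),
         (np.full((2, 2), 9.0), np.full((2, 2), 5.0))]
out = lookup_processed(np.full((2, 2), 8.0), table)
```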
In some optional implementations of this embodiment, the image processing model may also be a machine learning model trained on a sample set. Each sample in the sample set may be an image pair comprising a sample first image and a sample second image. In the sample first image, the region corresponding to the preset eye object does not contain the preset eye object; in the sample second image, that region does contain the preset eye object.
In some optional implementations of this embodiment, on the basis of the above sample set, the sample first image included in a sample may be used as the input of an initial model, and the sample second image corresponding to the input sample first image as the desired output of the initial model; training then yields the image processing model.

The initial model may be a generative adversarial network (GAN) for processing eye images, for example a Pix2Pix model (Pix is short for pixel). It should be noted that, during training of the initial model, within the same sample, the sample first image used as input and the sample second image used as desired output differ very little outside the region corresponding to the preset eye object.
The training of the image processing model specifically includes the following steps S1 to S5.

Step S1: input the sample first image included in a sample chosen from the sample set into the initial model to obtain a processed eye image for the input sample first image.

Step S2: use a preset loss function to calculate the degree of difference between the obtained processed eye image and the sample second image corresponding to the input sample first image, and use a regularization term to calculate the complexity of the initial model.

The preset loss function may be at least one of the following, chosen according to actual needs: 0-1 loss, absolute error loss, quadratic loss, exponential loss, logarithmic loss, hinge loss, and so on. The regularization term may be any of the following norms, chosen according to actual needs: the L0 norm, L1 norm, L2 norm, trace norm, nuclear norm, and so on.

Step S3: adjust the structural parameters of the initial model according to the calculated degree of difference and model complexity. In practice, the structural parameters may be adjusted using any of the following algorithms: the BP (back propagation) algorithm, the GD (gradient descent) algorithm, and so on.

Step S4: in response to a preset training termination condition being reached, the executing body training the image processing model may determine that training of the initial model is complete, and take the trained initial model as the image processing model. The preset training termination condition may include at least one of the following: the training time exceeds a preset duration; the number of training iterations exceeds a preset number; the calculated degree of difference is less than a preset difference threshold.

Step S5: in response to the preset training termination condition not being reached, the executing body training the image processing model may choose a sample that has not yet been chosen from the sample set, use the adjusted initial model as the initial model, and continue to execute the training steps above.
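Steps S1 to S5 can be illustrated end to end with a toy model: a scalar linear map stands in for the Pix2Pix-style generator, a quadratic loss plus an L2 regularization term matches step S2, and plain gradient descent matches step S3. All numeric settings here are assumptions, not values from the disclosure.

```python
import numpy as np

def train(samples, lr=0.1, reg=1e-3, max_steps=500, tol=1e-4):
    """Steps S1-S5 on a toy linear 'generator' y = w * x.

    samples: list of (first_image, second_image) pairs, reduced to
    scalars; a real implementation would train a Pix2Pix-style network.
    """
    rng = np.random.default_rng(0)
    w = 0.0
    for _ in range(max_steps):                       # S5: keep drawing samples
        x, y = samples[rng.integers(len(samples))]   # S1: pick a sample
        pred = w * x                                 #     run the model
        loss = (pred - y) ** 2 + reg * w ** 2        # S2: loss + regularizer
        grad = 2 * (pred - y) * x + 2 * reg * w
        w -= lr * grad                               # S3: gradient descent step
        if loss < tol:                               # S4: difference threshold
            break                                    #     (or the step budget)
    return w

samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # target mapping: y = 2x
w = train(samples)
```

The learned weight converges to roughly 2 (slightly below, because the regularizer penalizes large weights), mirroring how the trained model reproduces the desired outputs.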
It should be noted that the executing body that trains the image processing model may be the same as, or different from, the executing body that performs image processing. If the two are the same, the executing body training the model may store the structural information and parameter values of the trained image processing model locally. If the two are different, the executing body training the model may send the structural information and parameter values of the trained image processing model to the executing body that performs image processing.
From the above analysis, the preset eye object is not shown in the sample first image but is shown in the sample second image, and during training of the image processing model, the sample first image included in a sample is the input of the initial model while the sample second image is its desired output. Therefore, when the preset eye object is not shown in the to-be-processed eye image, the trained image processing model can be used to add the preset eye object in the region of the eye image corresponding to the preset eye object. This realizes adding the preset eye object to the to-be-processed eye image (for example, adding at least one of a double eyelid and a lying silkworm) with a model obtained by machine learning.
In some optional implementations of this embodiment, on the basis of the above sample set, the sample second image included in a sample may instead be used as the input of the initial model, and the sample first image corresponding to the input sample second image as the desired output; training then yields the image processing model.

Here, the difference from the previous optional implementation is that the input and desired output of the initial model are swapped during training. As a result, the trained image processing model can be used to remove the preset eye object from the to-be-processed eye image. This realizes removing at least one of the shown preset eye objects (for example, removing a shown double eyelid or lying silkworm) with a model obtained by machine learning.
In some optional implementations of this embodiment, the samples used to train the image processing model may be obtained through the following steps.

First step: crop eye images from the first facial images in an acquired first facial image set to obtain a first eye image set.

The executing body training the image processing model may acquire the first facial image set locally or from a communicatively connected database. Here, a first facial image is usually an image showing a face. After acquiring the first facial image set, the executing body may crop eye images from each (or some) of the first facial images, thereby obtaining the first eye image set. It can be understood that a first eye image is an eye image cropped from a first facial image.
Second step carries out flip horizontal to the first eyes image in the first eye image collection, obtains the second eye figure
Image set closes.
After obtaining the first eye image collection, the executing subject that training image handles model can be to every or part
First eyes image carries out flip horizontal, and then obtains the second eyes image set.It is appreciated that the second eyes image is
Obtained image after one eye Image Reversal.
Here, horizontally flipping a first eye image generally means mirroring it about the vertical axis. For example, after a first eye image showing a person's left eye is flipped, a second eye image showing that person's right eye is obtained. It can be understood that flipping the first eye images increases the number of samples available for training the image processing model.
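The flipping step just described can be sketched with a few lines of array code. This is an illustrative sketch rather than the patent's implementation; it assumes each eye image is stored as a NumPy array.

```python
import numpy as np

def flip_eye_images(first_eye_images):
    """Mirror each eye image about the vertical axis (horizontal flip).

    A left-eye image becomes a right-eye image, which doubles the pool
    of candidate training samples.
    """
    return [np.fliplr(img) for img in first_eye_images]

# A 2x3 toy "image": after the flip, each row is reversed.
left_eye = np.array([[1, 2, 3],
                     [4, 5, 6]])
right_eye = flip_eye_images([left_eye])[0]
```

In practice the same call applies unchanged to H x W x 3 color arrays, since `np.fliplr` only reverses the second (column) axis.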
In a third step, eye images in which the preset eye object is absent from the region corresponding to the preset eye object are selected from the first eye image set and the second eye image set, to obtain a third eye image set.
In general, the execution body may use a pretrained image classification model to divide the first eye image set and the second eye image set into eye images that show the preset eye object and eye images that do not. The execution body may then randomly select eye images that do not show the preset eye object, to obtain the third eye image set. It can be understood that a third eye image is a selected eye image that does not show the preset eye object.
It should be noted that the above image classification model may be an image classification model built from a convolutional neural network, which is not described in detail here.
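The selection in the third step amounts to filtering the union of the two sets with the classifier as the predicate. A minimal sketch follows; the classifier here is a stand-in callable (in the patent it would be the pretrained CNN), and the image names are made up for illustration.

```python
def select_without_eye_object(eye_images, has_eye_object):
    """Keep only eye images the classifier says lack the preset eye object.

    has_eye_object is any binary classifier over an eye image; here it is
    a stand-in for the pretrained CNN classifier mentioned in the text.
    """
    return [img for img in eye_images if not has_eye_object(img)]

# Stand-in classifier: pretend names containing "lid" show a double eyelid.
classifier = lambda name: "lid" in name
third_set = select_without_eye_object(
    ["eye_plain_0", "eye_lid_1", "eye_plain_2"], classifier)
```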
In a fourth step, the third eye images in the third eye image set are input into a pretrained sample processing model, to obtain a fourth eye image set, where the sample processing model is used to add the preset eye object to a third eye image.
It can be understood that a fourth eye image is the image obtained after the preset eye object is added to a third eye image. That is, apart from the region corresponding to the preset eye object, a fourth eye image differs very little from its third eye image.
It should be noted that the initial model used to train the sample processing model may also be a generative adversarial network for processing eye images, such as CycleGAN (Cycle-Consistent Generative Adversarial Networks). Unlike the initial model of the image processing model, during its training the input image and the desired-output image within one sample usually differ considerably: the eyes they show usually come from different faces. The training process of the sample processing model is otherwise similar to that of the image processing model and is not repeated here.
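As a rough numerical illustration of what a CycleGAN-style sample processing model optimizes when its paired images come from different faces, the cycle-consistency term can be written out directly. This is a hedged sketch, not the patent's network: G (add the eye object) and F (remove it) are toy stand-in functions, and the 0.5/0.4 offsets are arbitrary.

```python
import numpy as np

def cycle_consistency_loss(G, F, x):
    """L1 cycle loss ||F(G(x)) - x||_1, averaged over pixels.

    G maps "no eye object" images toward "eye object" images and F maps
    back. Driving this loss toward zero lets unpaired eye images from
    different faces still supervise one another.
    """
    return float(np.mean(np.abs(F(G(x)) - x)))

# Toy generators: G brightens the region, F darkens it back imperfectly,
# leaving a residual of 0.1 per pixel.
G = lambda img: img + 0.5
F = lambda img: img - 0.4
x = np.zeros((4, 4))
loss = cycle_consistency_loss(G, F, x)
```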
In a fifth step, a sample in the sample set is generated based on a fourth eye image selected from the fourth eye image set and the corresponding third eye image selected from the third eye image set.
After obtaining the fourth eye image set, the execution body may select a fourth eye image from it at random, or according to a user operation. The execution body may also select, from the third eye image set, the third eye image corresponding to the selected fourth eye image. Here, the third eye image corresponding to the selected fourth eye image is the third eye image from which that fourth eye image can be obtained after processing by the sample processing model.
After selecting a fourth eye image and its corresponding third eye image, the execution body may combine the two into one sample. By repeated selection, the sample set can be obtained. This enriches the ways of generating samples for training the above image processing model. It should be noted that, in practice, eye images in which the preset eye object is present in the corresponding region may instead be selected from the first eye image set and the second eye image set, and the preset eye object shown in the selected eye images may then be removed by the sample processing model, so that samples for training the above image processing model are obtained by a method similar to that of the fifth step.
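The pairing in the fifth step amounts to zipping each third eye image with the fourth eye image the sample processing model produced from it. A minimal sketch, assuming the two sets are kept in parallel lists (the string names are illustrative placeholders for image arrays):

```python
def build_sample_set(third_eye_images, fourth_eye_images):
    """Pair each third eye image (no preset eye object) with the fourth
    eye image generated from it (preset eye object added).

    Each pair is one training sample:
    (sample first image, sample second image).
    """
    assert len(third_eye_images) == len(fourth_eye_images)
    return list(zip(third_eye_images, fourth_eye_images))

samples = build_sample_set(["eye_no_lid_0", "eye_no_lid_1"],
                           ["eye_lid_0", "eye_lid_1"])
```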
Step 203: replacing the to-be-processed eye image in the to-be-processed face image with the processed eye image, to generate a processed face image.
In this embodiment, after the processed eye image is obtained, the above execution body may replace the to-be-processed eye image in the to-be-processed face image with the processed eye image, thereby generating the processed face image. In general, the face shown in the to-be-processed face image has a certain deflection angle; correspondingly, there is an angular deviation between the to-be-processed eye image and the to-be-processed face image. The execution body may therefore first rotate the processed eye image by a certain angle through an affine transformation, and then replace the to-be-processed eye image in the to-be-processed face image with the processed eye image.
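The affine alignment mentioned above can be illustrated with a plain 2D rotation matrix applied to a point; real code would warp the whole patch (e.g. with an image-warping routine such as OpenCV's warpAffine), but the underlying geometry is the same. The angle and coordinates below are made-up values for illustration.

```python
import math

def rotate_point(x, y, angle_deg):
    """Rotate a point about the origin by angle_deg, counter-clockwise.

    Applying this transform to the corners of the processed eye image
    aligns it with the tilted face before it is pasted back.
    """
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# Rotating the corner (1, 0) by 90 degrees lands on (0, 1).
cx, cy = rotate_point(1.0, 0.0, 90.0)
```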
Continuing to refer to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the image processing method according to this embodiment. In the application scenario of Fig. 3, the server 301 may first acquire a to-be-processed face image 302 from a communicatively connected terminal device (not shown). Then, the server 301 may determine the to-be-processed eye images 303 and 304 from the to-be-processed face image 302. Afterwards, the server 301 may input the determined to-be-processed eye images 303 and 304 separately into the image processing model 305, obtaining the processed eye images 306 and 307 respectively. Then, the server 301 may replace the to-be-processed eye images in the to-be-processed face image 302 with the obtained processed eye images 306 and 307, thereby generating a processed face image.
At present, one prior-art approach to adding a double eyelid or lying silkworm to a face image, as described in the background section of this disclosure, relies on preset stickers. As can be seen, this approach is not targeted to different face images: the eyes shown in different face images generally differ considerably, and placing a preset sticker on a face image often produces a mismatch with the proportions of the eyes in that image. By contrast, the method provided by the above embodiment of this disclosure can process different to-be-processed face images individually through the pretrained image processing model, thereby adding a double eyelid or lying silkworm to the to-be-processed face image in a targeted manner and avoiding a mismatch between the added double eyelid or lying silkworm and the eyes in the to-be-processed face image. In addition, the method provided by the above embodiment of this disclosure can also remove, in a targeted manner, a double eyelid or lying silkworm shown in the to-be-processed face image.
With further reference to Fig. 4, a flow 400 of another embodiment of the image processing method is illustrated. The flow 400 of this image processing method includes the following steps.
Step 401: determining a to-be-processed eye image from an acquired to-be-processed face image.
In this embodiment, the execution body of the image processing method (for example, the server 104 shown in Fig. 1) may acquire a face image sent by a communicatively connected terminal device as the to-be-processed face image.
Step 402: inputting the to-be-processed eye image into a pretrained image processing model, to obtain a processed eye image.
Step 403: replacing the to-be-processed eye image in the to-be-processed face image with the processed eye image, to generate a processed face image.
The above steps 401, 402, and 403 may be performed in a manner similar to steps 201, 202, and 203 in the embodiment shown in Fig. 2, respectively; the descriptions of steps 201, 202, and 203 above also apply to steps 401, 402, and 403 and are not repeated here.
Step 404: sending the processed face image to a communicatively connected terminal device, so that the terminal device displays the processed face image.
In this embodiment, the above execution body may send the generated processed face image to the communicatively connected terminal device (for example, the terminal devices 101 and 102 shown in Fig. 1). In general, after receiving the processed face image sent by the execution body, the terminal device may display it.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the image processing method in this embodiment adds the step of sending the processed face image to a communicatively connected terminal device. With the scheme described in this embodiment, after a terminal device sends a face image stored locally or captured by its camera to the execution body, the execution body can return a processed face image generated for that face image. This makes it possible to add at least one of a double eyelid and a lying silkworm to a face image uploaded by a user, or to remove at least one of a double eyelid and a lying silkworm from a face image uploaded by a user.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, this disclosure provides an embodiment of an image processing apparatus. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied to various electronic devices.
As shown in Fig. 5, the image processing apparatus 500 provided by this embodiment includes a determination unit 501, a processing unit 502, and a generation unit 503. The determination unit 501 may be configured to determine a to-be-processed eye image from an acquired to-be-processed face image. The processing unit 502 may be configured to input the to-be-processed eye image into a pretrained image processing model to obtain a processed eye image, where the image processing model is used to process a region, in the to-be-processed eye image, corresponding to a preset eye object. The generation unit 503 may be configured to replace the to-be-processed eye image in the to-be-processed face image with the processed eye image, to generate a processed face image.
In this embodiment, for the specific processing of the determination unit 501, the processing unit 502, and the generation unit 503 in the image processing apparatus 500, and the technical effects they bring, reference may be made to the descriptions of steps 201, 202, and 203 in the embodiment corresponding to Fig. 2, which are not repeated here.
In some optional implementations of this embodiment, the above preset eye object may include at least one of the following: a double eyelid, a lying silkworm.
In some optional implementations of this embodiment, the above image processing model may be obtained by training with a sample set, where a sample in the sample set may be an image pair including a sample first image and a sample second image; the preset eye object is absent from the region corresponding to the preset eye object in the sample first image, and present in the corresponding region in the sample second image.
In some optional implementations of this embodiment, the above image processing model may be obtained by training as follows: the sample first image included in a sample in the sample set is used as the input of an initial model, the sample second image corresponding to the input sample first image is used as the desired output of the initial model, and the image processing model is obtained by training.
In some optional implementations of this embodiment, the above image processing model may be obtained by training as follows: the sample second image included in a sample in the sample set is used as the input of an initial model, the sample first image corresponding to the input sample second image is used as the desired output of the initial model, and the image processing model is obtained by training.
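The two optional training directions differ only in which side of each sample pair is treated as the input and which as the desired output. A hedged sketch of orienting the pairs either way (the helper name and placeholder strings are illustrative, not from the patent):

```python
def training_pairs(sample_set, direction="add"):
    """Orient sample pairs as (input, desired output) for training.

    Each sample is (first_image, second_image): the first image lacks
    the preset eye object and the second shows it. "add" trains a model
    that adds the object; "remove" trains one that removes it.
    """
    if direction == "add":
        return [(first, second) for first, second in sample_set]
    if direction == "remove":
        return [(second, first) for first, second in sample_set]
    raise ValueError("direction must be 'add' or 'remove'")

pairs = training_pairs([("plain_eye", "eye_with_lid")], direction="remove")
```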
In some optional implementations of this embodiment, a sample in the above sample set may be obtained as follows: cropping eye images from the first face images in an acquired first face image set, to obtain a first eye image set; horizontally flipping the first eye images in the first eye image set, to obtain a second eye image set; selecting, from the first eye image set and the second eye image set, eye images in which the preset eye object is absent from the region corresponding to the preset eye object, to obtain a third eye image set; inputting the third eye images in the third eye image set into a pretrained sample processing model to obtain a fourth eye image set, where the sample processing model is used to add the preset eye object to a third eye image; and generating the sample in the sample set based on a fourth eye image selected from the fourth eye image set and the corresponding third eye image selected from the third eye image set.
In some optional implementations of this embodiment, the above determination unit 501 may be further configured to: crop a to-be-processed eye image of a preset size from the to-be-processed face image, based on key points extracted from the acquired to-be-processed face image.
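The keypoint-based crop can be sketched as slicing a fixed-size window centered on the mean of the eye keypoints. This assumes a NumPy image array and (x, y) keypoint tuples; the 32 x 32 window and the keypoint coordinates are arbitrary illustrative choices, not values from the patent.

```python
import numpy as np

def crop_eye(face_image, eye_keypoints, size=32):
    """Cut a size x size eye patch centered on the eye keypoints.

    The center is the mean of the (x, y) keypoints; slicing uses
    row (y) first, then column (x).
    """
    xs = [p[0] for p in eye_keypoints]
    ys = [p[1] for p in eye_keypoints]
    cx, cy = int(sum(xs) / len(xs)), int(sum(ys) / len(ys))
    half = size // 2
    return face_image[cy - half:cy + half, cx - half:cx + half]

face = np.zeros((128, 128), dtype=np.uint8)
patch = crop_eye(face, [(60, 40), (80, 44)], size=32)
```

A production version would also clamp the window to the image bounds and handle color channels, omitted here for brevity.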
In some optional implementations of this embodiment, the above apparatus 500 may further include a sending unit (not shown in the figure). The sending unit may be configured to send the processed face image to a communicatively connected terminal device, so that the terminal device displays the processed face image.
With the apparatus provided by the above embodiment of this disclosure, the determination unit 501 may first determine a to-be-processed eye image from an acquired to-be-processed face image; the processing unit 502 may then input the determined to-be-processed eye image into a pretrained image processing model to obtain a processed eye image; and the generation unit 503 may then replace the to-be-processed eye image in the to-be-processed face image with the obtained processed eye image, to generate a processed face image. In this way, the preset eye object is added to or removed from the to-be-processed face image in a targeted manner.
Referring now to Fig. 6, it shows a structural schematic diagram of an electronic device (for example, the server shown in Fig. 1) 600 suitable for implementing embodiments of this disclosure. The server shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of this disclosure.
As shown in Fig. 6, the electronic device 600 may include a processing unit (such as a central processing unit or a graphics processor) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing unit 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following devices may be connected to the I/O interface 605: an input device 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, or gyroscope; an output device 607 including, for example, a liquid crystal display (LCD), a speaker, or a vibrator; a storage device 608 including, for example, a magnetic tape or hard disk; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate with other devices, by wire or wirelessly, to exchange data. Although Fig. 6 shows an electronic device 600 with various devices, it should be understood that it is not required to implement or possess all the devices shown; more or fewer devices may alternatively be implemented or possessed. Each block shown in Fig. 6 may represent one device or, as needed, multiple devices.
In particular, according to embodiments of this disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of this disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 609, installed from the storage device 608, or installed from the ROM 602. When the computer program is executed by the processing unit 601, the above functions defined in the methods of the embodiments of this disclosure are executed. It should be noted that the computer-readable medium described in the embodiments of this disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the embodiments of this disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program, where the program can be used by, or in combination with, an instruction execution system, apparatus, or device. In the embodiments of this disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
The above computer-readable medium may be included in the above server, or it may exist separately without being assembled into the server. The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the server to: determine a to-be-processed eye image from an acquired to-be-processed face image; input the to-be-processed eye image into a pretrained image processing model to obtain a processed eye image, where the image processing model is used to process a region, in the to-be-processed eye image, corresponding to a preset eye object; and replace the to-be-processed eye image in the to-be-processed face image with the processed eye image, to generate a processed face image.
Computer program code for executing the operations of the embodiments of this disclosure may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, and also conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of this disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of this disclosure may be implemented by software or by hardware. The described units may also be provided in a processor; for example, it may be described as: a processor including a determination unit, a processing unit, and a generation unit. The names of these units do not, in some cases, limit the units themselves; for example, the determination unit may also be described as "a unit that determines a to-be-processed eye image from an acquired to-be-processed face image".
The above description is only a preferred embodiment of this disclosure and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in this disclosure is not limited to technical solutions formed by the specific combination of the above technical features; without departing from the above inventive concept, it should also cover other technical solutions formed by any combination of the above technical features or their equivalents, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) this disclosure.
Claims (18)
1. An image processing method, comprising:
determining a to-be-processed eye image from an acquired to-be-processed face image;
inputting the to-be-processed eye image into a pretrained image processing model to obtain a processed eye image, wherein the image processing model is used to process a region, in the to-be-processed eye image, corresponding to a preset eye object; and
replacing the to-be-processed eye image in the to-be-processed face image with the processed eye image, to generate a processed face image.
2. The method according to claim 1, wherein the preset eye object includes at least one of the following: a double eyelid, a lying silkworm.
3. The method according to claim 1, wherein the image processing model is obtained by training with a sample set, a sample in the sample set being an image pair including a sample first image and a sample second image, the preset eye object being absent from the region corresponding to the preset eye object in the sample first image and present in the region corresponding to the preset eye object in the sample second image.
4. The method according to claim 3, wherein the image processing model is obtained by training as follows: using the sample first image included in a sample in the sample set as the input of an initial model, using the sample second image corresponding to the input sample first image as the desired output of the initial model, and training to obtain the image processing model.
5. The method according to claim 3, wherein the image processing model is obtained by training as follows: using the sample second image included in a sample in the sample set as the input of an initial model, using the sample first image corresponding to the input sample second image as the desired output of the initial model, and training to obtain the image processing model.
6. The method according to claim 3, wherein a sample in the sample set is obtained as follows:
cropping eye images from the first face images in an acquired first face image set, to obtain a first eye image set;
horizontally flipping the first eye images in the first eye image set, to obtain a second eye image set;
selecting, from the first eye image set and the second eye image set, eye images in which the preset eye object is absent from the region corresponding to the preset eye object, to obtain a third eye image set;
inputting the third eye images in the third eye image set into a pretrained sample processing model, to obtain a fourth eye image set, wherein the sample processing model is used to add the preset eye object to a third eye image; and
generating the sample in the sample set based on a fourth eye image selected from the fourth eye image set and the corresponding third eye image selected from the third eye image set.
7. The method according to claim 1, wherein the determining a to-be-processed eye image from an acquired to-be-processed face image comprises:
cropping a to-be-processed eye image of a preset size from the to-be-processed face image, based on key points extracted from the acquired to-be-processed face image.
8. The method according to any one of claims 1-7, wherein the method further comprises:
sending the processed face image to a communicatively connected terminal device, so that the terminal device displays the processed face image.
9. An image processing apparatus, comprising:
a determination unit, configured to determine a to-be-processed eye image from an acquired to-be-processed face image;
a processing unit, configured to input the to-be-processed eye image into a pretrained image processing model to obtain a processed eye image, wherein the image processing model is used to process a region, in the to-be-processed eye image, corresponding to a preset eye object; and
a generation unit, configured to replace the to-be-processed eye image in the to-be-processed face image with the processed eye image, to generate a processed face image.
10. The apparatus according to claim 9, wherein the preset eye object includes at least one of the following: a double eyelid, a lying silkworm.
11. The apparatus according to claim 9, wherein the image processing model is obtained by training with a sample set, a sample in the sample set being an image pair including a sample first image and a sample second image, the preset eye object being absent from the region corresponding to the preset eye object in the sample first image and present in the region corresponding to the preset eye object in the sample second image.
12. The apparatus according to claim 11, wherein the image processing model is obtained by training as follows: using the sample first image included in a sample in the sample set as the input of an initial model, using the sample second image corresponding to the input sample first image as the desired output of the initial model, and training to obtain the image processing model.
13. The apparatus according to claim 11, wherein the image processing model is obtained by training as follows: using the sample second image included in a sample in the sample set as the input of an initial model, using the sample first image corresponding to the input sample second image as the desired output of the initial model, and training to obtain the image processing model.
14. The apparatus according to claim 11, wherein a sample in the sample set is obtained as follows:
cropping eye images from the first face images in an acquired first face image set, to obtain a first eye image set;
horizontally flipping the first eye images in the first eye image set, to obtain a second eye image set;
selecting, from the first eye image set and the second eye image set, eye images in which the preset eye object is absent from the region corresponding to the preset eye object, to obtain a third eye image set;
inputting the third eye images in the third eye image set into a pretrained sample processing model, to obtain a fourth eye image set, wherein the sample processing model is used to add the preset eye object to a third eye image; and
generating the sample in the sample set based on a fourth eye image selected from the fourth eye image set and the corresponding third eye image selected from the third eye image set.
15. The apparatus according to claim 9, wherein the determination unit is further configured to:
crop a to-be-processed eye image of a preset size from the to-be-processed face image, based on key points extracted from the acquired to-be-processed face image.
16. The apparatus according to any one of claims 9-15, wherein the apparatus further comprises:
a sending unit, configured to send the processed face image to a communicatively connected terminal device, so that the terminal device displays the processed face image.
17. A server, comprising:
one or more processors; and
a storage device on which one or more programs are stored;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-8.
18. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910409754.5A CN110136054B (en) | 2019-05-17 | 2019-05-17 | Image processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910409754.5A CN110136054B (en) | 2019-05-17 | 2019-05-17 | Image processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110136054A true CN110136054A (en) | 2019-08-16 |
CN110136054B CN110136054B (en) | 2024-01-09 |
Family
ID=67574692
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910409754.5A Active CN110136054B (en) | 2019-05-17 | 2019-05-17 | Image processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110136054B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107153805A (en) * | 2016-03-02 | 2017-09-12 | 北京美到家科技有限公司 | Customize makeups servicing unit and method |
CN107993209A (en) * | 2017-11-30 | 2018-05-04 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and electronic equipment |
CN108021905A (en) * | 2017-12-21 | 2018-05-11 | 广东欧珀移动通信有限公司 | image processing method, device, terminal device and storage medium |
US20180173997A1 (en) * | 2016-12-15 | 2018-06-21 | Fujitsu Limited | Training device and training method for training image processing device |
CN108986016A (en) * | 2018-06-28 | 2018-12-11 | 北京微播视界科技有限公司 | Image beautification method, device and electronic equipment |
CN109241930A (en) * | 2018-09-20 | 2019-01-18 | 北京字节跳动网络技术有限公司 | Method and apparatus for handling supercilium image |
CN109584153A (en) * | 2018-12-06 | 2019-04-05 | 北京旷视科技有限公司 | Modify the methods, devices and systems of eye |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110580678A (en) * | 2019-09-10 | 2019-12-17 | 北京百度网讯科技有限公司 | image processing method and device |
CN110580678B (en) * | 2019-09-10 | 2023-06-20 | 北京百度网讯科技有限公司 | Image processing method and device |
CN110766631A (en) * | 2019-10-21 | 2020-02-07 | 北京旷视科技有限公司 | Face image modification method and device, electronic equipment and computer readable medium |
CN111462007A (en) * | 2020-03-31 | 2020-07-28 | 北京百度网讯科技有限公司 | Image processing method, device, equipment and computer storage medium |
CN112381709A (en) * | 2020-11-13 | 2021-02-19 | 北京字节跳动网络技术有限公司 | Image processing method, model training method, device, equipment and medium |
CN112381709B (en) * | 2020-11-13 | 2022-06-21 | 北京字节跳动网络技术有限公司 | Image processing method, model training method, device, equipment and medium |
CN112465717A (en) * | 2020-11-25 | 2021-03-09 | 北京字跳网络技术有限公司 | Face image processing model training method and device, electronic equipment and medium |
CN112489169A (en) * | 2020-12-17 | 2021-03-12 | 脸萌有限公司 | Portrait image processing method and device |
CN112489169B (en) * | 2020-12-17 | 2024-02-13 | 脸萌有限公司 | Portrait image processing method and device |
WO2023273697A1 (en) * | 2021-06-30 | 2023-01-05 | 北京字跳网络技术有限公司 | Image processing method and apparatus, model training method and apparatus, electronic device, and medium |
Also Published As
Publication number | Publication date |
---|---|
CN110136054B (en) | 2024-01-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110136054A (en) | Image processing method and device | |
CN105184249B (en) | Method and apparatus for face image processing | |
WO2019201042A1 (en) | Image object recognition method and device, storage medium, and electronic device | |
CN109816589A (en) | Method and apparatus for generating cartoon style transformation model | |
CN107644209A (en) | Method for detecting human face and device | |
CN110298319B (en) | Image synthesis method and device | |
CN108537152A (en) | Method and apparatus for detecting live body | |
US10992619B2 (en) | Messaging system with avatar generation | |
CN108985257A (en) | Method and apparatus for generating information | |
CN106846497A (en) | It is applied to the method and apparatus of the presentation three-dimensional map of terminal | |
CN109308681A (en) | Image processing method and device | |
CN109360028A (en) | Method and apparatus for pushed information | |
CN109815365A (en) | Method and apparatus for handling video | |
CN110009059A (en) | Method and apparatus for generating model | |
CN110472558A (en) | Image processing method and device | |
CN110288705A (en) | The method and apparatus for generating threedimensional model | |
CN110532983A (en) | Method for processing video frequency, device, medium and equipment | |
CN108388889A (en) | Method and apparatus for analyzing facial image | |
CN109754464A (en) | Method and apparatus for generating information | |
CN109271929A (en) | Detection method and device | |
CN109255814A (en) | Method and apparatus for handling image | |
CN108898604A (en) | Method and apparatus for handling image | |
CN110516598A (en) | Method and apparatus for generating image | |
CN109241930A (en) | Method and apparatus for handling supercilium image | |
CN107945139A (en) | A kind of image processing method, storage medium and intelligent terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||