CN108596839A - Face cartoon generation method and device based on deep learning - Google Patents
- Publication number: CN108596839A (application CN201810242803.6A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T3/60 — Rotation of a whole image or part thereof (G06T3/00 Geometric image transformation in the plane of the image)
- G06N3/084 — Backpropagation, e.g. using gradient descent (G06N3/02 Neural networks; G06N3/08 Learning methods)
- G06T3/4038 — Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
- G06T7/11 — Region-based segmentation (G06T7/10 Segmentation; Edge detection)
- G06V40/168 — Feature extraction; Face representation (G06V40/16 Human faces, e.g. facial parts, sketches or expressions)
- G06T2200/32 — Indexing scheme for image data processing or generation involving image mosaicing
Abstract
The present invention discloses a face cartoon generation method and device based on deep learning. The method includes: obtaining a portrait image and calibrating 77 facial feature points; estimating the face tilt angle and rotating the image to obtain an image containing the face in an upright standard pose; constructing a face rectangle from the image; enlarging the rectangle threefold in equal proportion, cropping it, and building a head image block; meanwhile cropping the image corresponding to the face rectangle, normalizing it, and feeding it in turn to a pre-trained face and facial-feature attribute multi-label deep classification convolutional neural network to obtain the gender and facial-feature attribute labels of the face; according to these labels, the head image block and the face rectangle image, selecting the corresponding materials from a cartoon material library and applying the corresponding deformation to obtain cartoon materials for the facial features, the hair and the face base; and then splicing these materials to finally obtain the cartoon image. Embodiments of the present invention realize automatic face cartoon drawing without any manual assistance.
Description
Technical field
The present invention relates to the field of computer non-photorealistic rendering, and in particular to a face cartoon generation method and device based on deep learning.
Background technology
In recent years, with the development of computer image processing technology, more and more face cartoon applications, such as FaceMeng, MomentCam and FaceU, have entered our lives. They all use computer image processing technology to render a real portrait into a lively cartoon character. Traditional cartoon generation systems can generally be divided into two classes. The first class creates the cartoon by assembling materials; a typical example is the mobile application FaceMeng. The second class is based on computer image processing: the real portrait is modified directly, and the modified portrait is then blended with previously prepared template materials to form the cartoon; a representative example is MomentCam. The technical disadvantages of FaceMeng include: (1) A key element of face cartoon generation is likeness: the generated cartoon should bear some degree of similarity to the target real face. In FaceMeng's technical solution, however, all the materials used to generate the cartoon must be selected manually by the user, which makes it difficult for a user to quickly assemble a cartoon that truly captures the spirit of the target face; such manual splicing of materials is also unsuitable for creating cartoons or animations in batches.
(2) In FaceMeng, although the cartoon materials can be replaced arbitrarily, the positions at which they are spliced are fixed, which reduces the fit between the generated cartoon and the real face.
(3) In FaceMeng, the materials that can be chosen from are limited, and a limited material set cannot satisfy every user's wish to create a cartoon image unique to themselves.
MomentCam, in turn, has the following disadvantages: (1) It treats the whole face as an image and generates the cartoon face purely by image processing; the generated cartoon face looks more like a gray photograph of the face, appears stiff, and lacks the artistic effect of a cartoon.
(2) The cartoon face deviates from the real face, and mismatches arise between it and the other artist-created optional materials, which makes the generated cartoon look unnatural.
(3) To make the cartoon face match the other materials, the cartoon generating device must carefully choose the cartoon materials manually to avoid visual discordance, which is why the cartoon styles supported by MomentCam are extremely limited.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art. The present invention provides a face cartoon generation method and device based on deep learning that obtain the required materials by deforming classified materials. The benefit of this approach is that it reduces the demand for cartoon materials on the one hand, and increases the fit between the cartoon materials and the real photograph on the other, realizing automatic face cartoon drawing without any manual assistance.
To solve the above problems, the present invention proposes a face cartoon generation method based on deep learning, the method comprising:
obtaining a portrait image, and calibrating 77 facial feature points in the photograph using an active shape model;
estimating the tilt angle of the face in the image from the calibrated left-eye feature point No. 36 and right-eye feature point No. 39, and rotating the original image by the computed tilt angle so that feature points No. 36 and No. 39 reach a horizontal position, thereby obtaining an image containing the face in an upright standard pose;
obtaining the image containing the upright standard-pose face, calibrating the 77 feature points in the image with the active shape model, taking from them the four feature points No. 2, No. 12, No. 15 and No. 17, and constructing the face rectangle;
enlarging the face rectangle threefold in equal proportion with its center fixed, and cropping the image region delimited by the enlarged rectangle to build the head image block;
obtaining the 77 feature points in the upright standard-pose face photograph, cropping the rectangular regions in which the facial features lie, normalizing them, and feeding them in turn to the pre-trained face and facial-feature attribute multi-label deep classification convolutional neural network, to obtain the gender and facial-feature attribute labels of the face;
selecting the corresponding facial-feature materials from the cartoon material library according to the gender and facial-feature attribute labels, and, with the corresponding regions of the input real image as reference, applying the corresponding deformation to the selected materials through a deep image analogy algorithm, to obtain the generated facial-feature cartoon materials;
computing and segmenting the head image block to obtain the fine hair segmentation region, selecting the corresponding hair material from the cartoon material library in combination with the gender attribute of the face, and, with the corresponding region of the input real image as reference, applying the corresponding deformation to the selected hair cartoon material through the deep image analogy algorithm, to obtain the generated hair cartoon material;
obtaining the face rectangle, selecting a face-base (skin) material from the corresponding cartoon material library in combination with the gender attribute, and deforming it through the deep image analogy algorithm, to obtain the generated face-base cartoon material;
combining the facial-feature, hair and face-base cartoon materials, and splicing them at the corresponding positions of the original real image by means of graph-cut image fusion, to finally obtain the cartoon image.
Preferably, the 77 facial feature points calibrated in the photograph by the active shape model have fixed positions and a fixed number.
Preferably, the facial features whose enclosing rectangular regions are cropped include: the eyebrows, the eyes, the mouth and the nose.
Preferably, the specific steps of cropping the rectangular regions enclosing the facial features include:
obtaining feature points No. 16 to No. 21 of the 77 feature points in the upright standard-pose face photograph, which form hexagon A with center J; expanding hexagon A 1.5 times in equal proportion with its center fixed, to obtain the left-eyebrow image block region (new hexagon A1);

obtaining feature points No. 22 to No. 27, which form hexagon B with center G; expanding hexagon B 1.5 times in equal proportion with its center fixed, to obtain the right-eyebrow image block region (new hexagon B1);

obtaining feature points No. 30 to No. 37, which form octagon C with center H; expanding octagon C 1.5 times in equal proportion with its center fixed, to obtain the left-eye image block region (new octagon C1);

obtaining feature points No. 40 to No. 47, which form octagon D with center K; expanding octagon D 1.5 times in equal proportion with its center fixed, to obtain the right-eye image block region (new octagon D1);

obtaining feature points No. 21, No. 59, No. 65 and No. 22, which form trapezoid E, to obtain the nose image block region;

obtaining feature points No. 59 to No. 65 and No. 72 to No. 76, which form dodecagon F with center L; expanding dodecagon F 1.2 times with its center fixed, to obtain the mouth image block region (new dodecagon F1).
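The repeated "expand N times with the center fixed" operation on these feature-point polygons can be sketched in one helper (a hypothetical illustration, not the patent's code):

```python
import numpy as np

def expand_polygon(points, scale):
    """Scale a polygon's vertices about its centroid, keeping the center fixed.

    The eyebrow and eye polygons above use scale 1.5; the mouth
    dodecagon uses scale 1.2.
    """
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    return center + scale * (pts - center)
```

The bounding rectangle of the expanded polygon then gives the image block region to crop.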
Preferably, the specific steps of obtaining the gender and facial-feature attribute labels of the face include:
assigning each face a male or female label according to gender; assigning the eyebrows a thick or sparse label according to their density; assigning the eyes an open or narrowed label according to their degree of opening, and a single-eyelid or double-eyelid label according to eyelid type; assigning the mouth a raised, flat or downturned label according to the curvature of its corners, an open or closed label according to whether it is open, and a showing-teeth or not-showing-teeth label according to whether the teeth are visible. Integrating all these attributes yields the facial-feature attribute label table, as shown in Figure 3.
building a training image database of classified real faces, and, in a convolutional neural network framework environment, performing multi-label training with a pre-trained 16-layer very deep convolutional neural network, to obtain the face and facial-feature attribute multi-label deep classification convolutional neural network, which outputs the class labels corresponding to an image block;
obtaining the image block corresponding to the face rectangle, normalizing it to 224×224, and performing prediction with the face and facial-feature attribute multi-label deep classification convolutional neural network, to obtain the gender attribute label of the face;

obtaining the left-eyebrow image block region (new hexagon A1), taking its bounding rectangle RA1, cropping RA1 from the original input portrait image, normalizing it to 224×224, and performing prediction with the network, to obtain the left-eyebrow attribute label;

obtaining the right-eyebrow image block region (new hexagon B1), taking its bounding rectangle RB1, cropping it from the original input portrait image, normalizing it to 224×224, and performing prediction with the network, to obtain the right-eyebrow attribute label;

obtaining the left-eye image block region (new octagon C1), taking its bounding rectangle RC1, cropping it from the original input portrait image, normalizing it to 224×224, and performing prediction with the network, to obtain the left-eye attribute label;

obtaining the right-eye image block region (new octagon D1), taking its bounding rectangle RD1, cropping it from the original input portrait image, normalizing it to 224×224, and performing prediction with the network, to obtain the right-eye attribute label;

obtaining the nose image block region (trapezoid E), taking its bounding rectangle RE, cropping it from the original input portrait image, normalizing it to 224×224, and performing prediction with the network, to obtain the nose attribute label;

obtaining the mouth image block region (new dodecagon F1), taking its bounding rectangle RF1, cropping it from the original input portrait image, normalizing it to 224×224, and performing prediction with the network, to obtain the mouth attribute label.
Preferably, while the facial-feature attribute label table is being compiled, the cartoon material library is constructed according to the label table.
Preferably, the training image database of classified real faces is used to fine-tune the pre-trained 16-layer very deep convolutional neural network. Each entry of the training image database contains the file path of a training image and its attribute labels, obtained according to which attributes of the facial-feature attribute label table are present.
Preferably, the specific steps of segmenting the head image block include:

obtaining the head image block, and computing its hair region with a deep-learning object recognition model to obtain the coarse hair segmentation region;

obtaining the coarse hair region, and segmenting it again with a graph cut algorithm to obtain the fine hair segmentation region.
Preferably, the specific steps of segmenting the hair region of the head image block with the deep-learning object recognition model include:

obtaining real portrait photographs; for each photograph, enlarging its face rectangle threefold in equal proportion with the center fixed, and cropping the image region delimited by the enlarged rectangle, to obtain real head image blocks;

obtaining the real head image blocks, normalizing them to 224×224, and annotating them at the pixel level with an image annotation tool: if a pixel of a head image block belongs to hair, its position is marked 1, otherwise 0.
obtaining the normalized real head image blocks; for each pixel position, cropping a 32×32 square image block centered on that position, which yields 224×224 square image blocks of side length 32; where a square image block extends beyond the original portrait photograph, the missing part is padded with white.
counting the hair pixels in each 32×32 square image block; if the proportion of hair pixels exceeds 40%, the block is judged to belong to the hair part of the real head image block and is given label 1, otherwise label 0;

normalizing each labeled 32×32 square image block to 64×64 and storing it, together with its label, in hair training image database G1;
obtaining the normalized real head image blocks; for each pixel position, cropping a 64×64 square real head image block centered on that position, which yields 224×224 square real head image blocks of side length 64; parts extending beyond the original portrait photograph are padded with white.
counting the hair pixels in each 64×64 square image block; if the proportion of hair pixels exceeds 40%, the block is judged to belong to the hair part of the real head image block and is given label 1, otherwise label 0;

storing each labeled 64×64 square image block in hair training image database G2;
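The 40% labeling rule used for both G1 and G2 can be sketched as follows (function names illustrative; the binary mask uses 1 for hair pixels as described above):

```python
import numpy as np

def label_hair_patch(mask_patch, threshold=0.4):
    """Label one square patch of the binary hair mask:
    1 if the fraction of hair pixels exceeds the threshold (40%), else 0."""
    frac = float(np.count_nonzero(mask_patch)) / mask_patch.size
    return 1 if frac > threshold else 0

def patches_with_labels(mask, size):
    """One patch per pixel position, centered on it; parts that fall
    outside the image are padded with non-hair (the patent pads the
    image with white, which the annotation marks 0)."""
    half = size // 2
    padded = np.pad(mask, half, mode="constant", constant_values=0)
    h, w = mask.shape
    for r in range(h):
        for c in range(w):
            yield (r, c), label_hair_patch(padded[r:r + size, c:c + size])
```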
using a convolutional neural network framework, loading the pre-trained deep convolutional neural network models ZF1 and ZF2, so that both network models perform binary classification of the input image: containing hair (1) or not containing hair (0);

obtaining the head image block and normalizing it, to obtain a 224×224 head image block;

constructing a hair likelihood statistical map M1 of size 224×224 with initial value 0, where the value at each position of M1 represents the number of votes for that position being hair;
obtaining the 224×224 square image blocks of side length 64 (the normalized 32×32 blocks), and performing prediction with deep convolutional neural network model ZF1 to obtain the label: containing hair (1) or not containing hair (0);

obtaining the labeled square image blocks of side length 64, finding for each the corresponding 32×32 square image block before normalization, and locating its corresponding region in hair likelihood statistical map M1 according to pixel position; if the label of the block is containing hair (1), the pixel values of that region are incremented by one, otherwise (not containing hair, 0) they are decremented by one; once the whole map M1 has been accumulated, each pixel p of M1 is set to p = 1 if p > 0, otherwise p = 0;
constructing a hair likelihood statistical map M2 of size 224×224 with initial value 0, where the value at each position of M2 represents the number of votes for that position being hair;
obtaining the 224×224 square image blocks of side length 64 described above, and performing prediction with deep convolutional neural network model ZF2 to obtain the label: containing hair (1) or not containing hair (0);

obtaining the labeled square image blocks of side length 64, and locating each block's corresponding region in hair likelihood statistical map M2 according to pixel position; if the label of the block is containing hair (1), the pixel values of that region are incremented by one, otherwise (not containing hair, 0) they are decremented by one; once the whole map M2 has been accumulated, each pixel p of M2 is set to p = 1 if p > 0, otherwise p = 0;
performing a matrix dot (elementwise) product of hair likelihood statistical maps M1 and M2 then yields the coarse segmentation mask map M3 of the hair region, which corresponds to the coarse hair segmentation region.
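The voting scheme for M1 and M2 can be sketched as follows (a hypothetical helper; blocks are identified by their top-left corner in the 224×224 head image block):

```python
import numpy as np

def hair_vote_map(shape, block_labels, size):
    """Accumulate per-block hair votes into a likelihood map, then binarize.

    block_labels maps each block's top-left (row, col) to its predicted
    label: containing hair (1) adds a vote to every pixel of the block's
    region, not containing hair (0) subtracts one; finally each pixel
    becomes 1 where its vote total is positive, as for M1 and M2.
    """
    m = np.zeros(shape, dtype=int)
    for (r, c), label in block_labels.items():
        m[r:r + size, c:c + size] += 1 if label == 1 else -1
    return (m > 0).astype(np.uint8)

# The coarse mask M3 is then the elementwise product of the two maps:
# m3 = hair_vote_map(shape, labels_zf1, 32) * hair_vote_map(shape, labels_zf2, 64)
```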
Preferably, the specific steps of segmenting again with the graph cut algorithm include:
accumulating and averaging all pixels within the range of the coarse segmentation mask map M3 of the hair region, to obtain the foreground pixel average of the coarse hair segmentation region;

accumulating and averaging all pixels outside the range of the coarse segmentation mask map M3, to obtain the background pixel average outside the coarse hair segmentation region;

according to the foreground pixel average inside the coarse hair segmentation region and the background pixel average outside it, segmenting the coarse hair segmentation region corresponding to M3 with the graph cut algorithm, to obtain the fine hair segmentation region.
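The two averages that seed the graph cut can be computed directly from the coarse mask M3 (a minimal sketch over a single-channel image; names illustrative):

```python
import numpy as np

def coarse_region_means(gray, m3):
    """Mean intensity inside (foreground) and outside (background)
    the coarse hair mask M3; these two averages seed the graph-cut
    re-segmentation described above."""
    fg_mean = float(gray[m3 > 0].mean())
    bg_mean = float(gray[m3 == 0].mean())
    return fg_mean, bg_mean
```

One concrete choice for the re-segmentation itself would be an off-the-shelf graph cut such as OpenCV's `cv2.grabCut` seeded with M3; that pairing is an assumption, since the patent does not name a specific implementation.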
Preferably, the real portrait photographs must contain a real face image and complete hair; each is enlarged threefold in equal proportion with the center fixed, and the image region delimited by the enlarged rectangle is cropped to obtain the real head image block, whose length and width are each no less than 224 pixels. A total of 100 qualifying real portrait photographs are collected: 50 with medium-length hair and 50 with bobbed hair.
Preferably, loading the pre-trained deep convolutional neural network models specifically comprises: feeding hair training image databases G1 and G2 as training data into deep convolutional neural network models ZF1 and ZF2 respectively and training them; the results are the trained models ZF1 and ZF2, which are loaded directly when prediction is performed.
Preferably, the specific steps of splicing at the corresponding positions of the original real image by means of graph-cut image fusion include:

obtaining the face-base cartoon material, and fusing the hair cartoon material with it under the foreground fusion mode, to obtain a semi-finished face-base cartoon material with hair;

obtaining the semi-finished face-base cartoon material with hair, and fusing the facial-feature cartoon materials with it under the foreground fusion mode, to obtain the final cartoon image.
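The translation does not pin down the exact algorithm behind the "foreground fusion mode". As a hedged stand-in, the sketch below pastes a foreground material into a base image inside a mask while matching the foreground's gradients (a gradient-domain composite solved by Jacobi iteration; this is an assumption, not necessarily the patent's own method):

```python
import numpy as np

def foreground_fuse(fg, base, mask, iters=400):
    """Fuse fg into base inside mask: interior pixels follow fg's
    gradients, boundary values come from base (gradient-domain sketch;
    edges use wrap-around, which is fine while the mask stays away
    from the image border)."""
    fg = fg.astype(float)
    out = base.astype(float).copy()
    # Discrete Laplacian of the foreground: the guidance field.
    lap = (4 * fg
           - np.roll(fg, 1, 0) - np.roll(fg, -1, 0)
           - np.roll(fg, 1, 1) - np.roll(fg, -1, 1))
    inside = mask > 0
    for _ in range(iters):
        nb = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
              + np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[inside] = (nb[inside] + lap[inside]) / 4.0
    return out
```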
Correspondingly, an embodiment of the present invention also discloses a face cartoon generation device based on deep learning, the device comprising:

a feature extraction module, for obtaining the facial feature points of a portrait image;

an alignment module, for rotating the face into the upright standard pose according to the obtained facial feature points;

a construction module, for building the face rectangle and the head image block;

a generation module, for generating the corresponding cartoon materials;

a splicing module, for splicing the corresponding cartoon materials into the personal cartoon image.
Preferably, the construction module is further used to generate the gender and facial-feature attribute labels of the face from the rectangular regions enclosing the facial features in the upright standard-pose face photograph.
In embodiments of the present invention, the required materials can be obtained by deforming classified materials, which on the one hand reduces the demand for cartoon materials and on the other hand increases the fit between the cartoon materials and the real photograph, realizing automatic face cartoon drawing without any manual assistance.
Description of the drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow diagram of a face cartoon generation method based on deep learning according to an embodiment of the present invention;

Fig. 2 is a schematic diagram of calibrating facial feature points with the active shape model in a face cartoon generation method based on deep learning according to an embodiment of the present invention;

Fig. 3 is a schematic diagram of the facial-feature attribute label table in a face cartoon generation method based on deep learning according to an embodiment of the present invention;

Fig. 4 is a schematic diagram of the structure of a face cartoon generation device based on deep learning according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Fig. 1 is a flow diagram of a face cartoon generation method based on deep learning according to an embodiment of the present invention. As shown in Fig. 1, the method includes:
S1: obtaining a portrait image, and calibrating 77 facial feature points in the photograph using an active shape model;
S2: estimating the tilt angle of the face in the image from the calibrated left-eye feature point No. 36 and right-eye feature point No. 39, and rotating the original image by the computed tilt angle so that feature points No. 36 and No. 39 reach a horizontal position, thereby obtaining an image containing the face in an upright standard pose;
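The tilt estimation in step S2 amounts to measuring the angle of the line through the two eye landmarks. A minimal sketch (the function name and the (x, y) coordinate convention are illustrative, not from the patent):

```python
import math

def eye_tilt_degrees(left_eye, right_eye):
    """Angle, in degrees, of the line through the two eye landmarks.

    Rotating the image by this angle about its center brings the
    left-eye point (No. 36) and right-eye point (No. 39) to a
    horizontal position.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```

With OpenCV, the rotation itself could then be carried out with `cv2.getRotationMatrix2D` and `cv2.warpAffine`, though the patent does not name a specific library.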
S3: obtaining the image containing the upright standard-pose face, calibrating its 77 feature points with the active shape model, taking from them the four feature points No. 2, No. 12, No. 15 and No. 17, and constructing the face rectangle;
S4: enlarging the face rectangle threefold in equal proportion with its center fixed, and cropping the image region delimited by the enlarged rectangle to build the head image block;
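Step S4's equal-proportion enlargement with the center fixed can be sketched as follows (rectangle given as x, y, width, height; the function name is illustrative):

```python
def enlarge_rect(x, y, w, h, factor=3.0):
    """Scale a rectangle about its own center, which stays fixed.

    Step S4 uses factor 3 on the face rectangle before cropping the
    head image block; the crop may then need clamping or padding
    where the enlarged rectangle leaves the image.
    """
    cx, cy = x + w / 2.0, y + h / 2.0
    nw, nh = w * factor, h * factor
    return cx - nw / 2.0, cy - nh / 2.0, nw, nh
```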
S5: obtaining the 77 feature points in the upright standard-pose face photograph, cropping the rectangular regions enclosing the facial features, normalizing them, and feeding them in turn to the pre-trained face and facial-feature attribute multi-label deep classification convolutional neural network, to obtain the gender and facial-feature attribute labels of the face;
S6: selecting the corresponding facial-feature materials from the cartoon material library according to the gender and facial-feature attribute labels, and, with the corresponding regions of the input real image as reference, applying the corresponding deformation to the selected materials through the deep image analogy algorithm, to obtain the generated facial-feature cartoon materials;
S7: computing and segmenting the head image block to obtain the fine hair segmentation region, selecting the corresponding hair material from the cartoon material library in combination with the gender attribute, and, with the corresponding region of the input real image as reference, deforming the selected hair cartoon material through the deep image analogy algorithm, to obtain the generated hair cartoon material;
S8: obtaining the face rectangle, selecting a face-base material from the corresponding cartoon material library in combination with the gender attribute, and deforming it through the deep image analogy algorithm, to obtain the generated face-base cartoon material;
S9: combine the facial-feature cartoon material, the hair cartoon material and the face-skin base cartoon material, and splice them by means of graph-cut image fusion at the positions corresponding to the original real photo, finally obtaining the cartoon image.
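The fixed-center threefold enlargement used in S4 (and again for the training photos in S7101) can be sketched as follows. This is a minimal sketch; the function names and the clamping of the crop to the image bounds are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def enlarge_rect(x, y, w, h, factor=3.0):
    """Enlarge a rectangle about its fixed center by an equal ratio (as in S4)."""
    cx, cy = x + w / 2.0, y + h / 2.0
    nw, nh = w * factor, h * factor
    return cx - nw / 2.0, cy - nh / 2.0, nw, nh

def crop_head_block(img, rect):
    """Crop the region delimited by the enlarged rectangle, clamped to the image."""
    x, y, w, h = rect
    H, W = img.shape[:2]
    x0, y0 = max(0, int(round(x))), max(0, int(round(y)))
    x1, y1 = min(W, int(round(x + w))), min(H, int(round(y + h)))
    return img[y0:y1, x0:x1]
```

A 50x60 face rectangle at (100, 100) becomes a 150x180 rectangle at (50, 40) with the same center.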
Wherein, as shown in Fig. 2, the 77 facial feature points calibrated in the photo with the active shape model described in S1 are of fixed position and fixed number.
Specifically, the rectangular regions where the facial features lie that are cropped in S5 cover the eyebrows, eyes, mouth and nose.
Further, the specific steps in S5 of cropping the rectangular regions where the facial features lie include:
S5101: obtain feature points No. 16 to No. 21 of the 77 feature points in the frontal standard-pose face photo, forming hexagon A with center J; expand hexagon A by 1.5 times in equal proportion about its fixed center, obtaining the left eyebrow image block region (new hexagon A1);
S5102: obtain feature points No. 22 to No. 27 of the 77 feature points in the frontal standard-pose face photo, forming hexagon B with center G; expand hexagon B by 1.5 times in equal proportion about its fixed center, obtaining the right eyebrow image block region (new hexagon B1);
S5103: obtain feature points No. 30 to No. 37 of the 77 feature points in the frontal standard-pose face photo, forming octagon C with center H; expand octagon C by 1.5 times in equal proportion about its fixed center, obtaining the left-eye image block region (new octagon C1);
S5104: obtain feature points No. 40 to No. 47 of the 77 feature points in the frontal standard-pose face photo, forming octagon D with center K; expand octagon D by 1.5 times in equal proportion about its fixed center, obtaining the right-eye image block region (new octagon D1);
S5105: obtain feature points No. 21, No. 59, No. 65 and No. 22 of the 77 feature points in the frontal standard-pose face photo, forming trapezoid E, and obtain the nose image block region;
S5106: obtain feature points No. 59 to No. 65 and No. 72 to No. 76 of the 77 feature points in the frontal standard-pose face photo, forming dodecagon F with center L; expand dodecagon F by 1.2 times in equal proportion about its fixed center, obtaining the mouth image block region (new dodecagon F1).
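The 1.5x (or 1.2x for dodecagon F) expansion about a fixed center used in S5101-S5106 is a similarity scaling about the polygon's centroid. A minimal sketch, with an illustrative function name:

```python
def expand_polygon(points, factor=1.5):
    """Scale a polygon about its centroid, keeping the center fixed
    (the equal-ratio expansion of S5101-S5106)."""
    n = float(len(points))
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    return [(cx + factor * (x - cx), cy + factor * (y - cy)) for (x, y) in points]
```

For example, the unit-centered square (0,0)-(2,2) expands to (-0.5,-0.5)-(2.5,2.5) at factor 1.5.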
Further, the specific steps in S5 of obtaining the gender and facial-feature attribute labels of the face include:
S5201: label the face male or female according to gender; label the eyebrows thick or sparse according to their density; label the eyes open or narrowed according to how wide they are open; label the eyes single-eyelid or double-eyelid according to eyelid type; label the mouth up-turned, flat or down-turned according to the curvature of the mouth corners; label the mouth open or closed according to whether it is open; label the mouth teeth-showing or not teeth-showing according to whether teeth are visible. Integrating all these attributes yields the facial attribute label table, as shown in Fig. 3.
S5202: build a training image database for real-face classification and, under a convolutional neural network framework environment, perform multi-label training with the pre-trained 16-layer very deep convolutional neural network, obtaining a face and facial-feature attribute multi-label classification deep convolutional neural prediction network capable of outputting the class labels corresponding to an image block;
S5203: obtain the image block corresponding to the face rectangle, normalize it to 224*224, and run it through the face and facial-feature attribute multi-label classification deep convolutional neural prediction network for prediction, obtaining the gender attribute label of the face;
S5204: obtain the left eyebrow image block region (new hexagon A1), take its bounding rectangle R_A1, crop R_A1 from the original input portrait image, normalize it to 224*224, and run it through the face and facial-feature attribute multi-label classification deep convolutional neural prediction network, obtaining the left eyebrow attribute label;
S5205: obtain the right eyebrow image block region (new hexagon B1), take its bounding rectangle R_B1, crop R_B1 from the original input portrait image, normalize it to 224*224, and run it through the prediction network, obtaining the right eyebrow attribute label;
S5206: obtain the left-eye image block region (new octagon C1), take its bounding rectangle R_C1, crop R_C1 from the original input portrait image, normalize it to 224*224, and run it through the prediction network, obtaining the left eye attribute label;
S5207: obtain the right-eye image block region (new octagon D1), take its bounding rectangle R_D1, crop R_D1 from the original input portrait image, normalize it to 224*224, and run it through the prediction network, obtaining the right eye attribute label;
S5208: obtain the nose image block region (trapezoid E), take its bounding rectangle R_E, crop R_E from the original input portrait image, normalize it to 224*224, and run it through the prediction network, obtaining the nose attribute label;
S5209: obtain the mouth image block region (new dodecagon F1), take its bounding rectangle R_F1, crop R_F1 from the original input portrait image, normalize it to 224*224, and run it through the prediction network, obtaining the mouth attribute label.
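Steps S5204-S5209 each take a region's bounding rectangle and normalize the crop to a fixed size before prediction. A minimal sketch, with a nearest-neighbour resize standing in for whatever resampling the authors used (function names are illustrative):

```python
import numpy as np

def bounding_rect(points):
    """Bounding rectangle of a polygonal region (e.g. R_A1 for new hexagon A1)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    return int(x0), int(y0), int(max(xs) - x0), int(max(ys) - y0)

def nn_resize(img, size=224):
    """Nearest-neighbour normalization of a crop to size*size, a stand-in for a
    library resize; the patent feeds 224*224 crops to the prediction network."""
    H, W = img.shape[:2]
    rows = np.arange(size) * H // size
    cols = np.arange(size) * W // size
    return img[rows][:, cols]
```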
Specifically, while the facial attribute label table of S5201 is compiled, the cartoon material library is constructed according to that label table.
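The facial attribute label table, and a material library keyed on it, could be encoded as a plain mapping. The attribute and value names below are illustrative guesses at the content of Fig. 3, not taken from the patent:

```python
# Hypothetical encoding of the facial attribute label table (names illustrative).
FACE_ATTRIBUTE_TABLE = {
    "gender":       ["male", "female"],
    "eyebrow":      ["thick", "sparse"],
    "eye_openness": ["open", "narrowed"],
    "eyelid":       ["single", "double"],
    "mouth_corner": ["up-turned", "flat", "down-turned"],
    "mouth_state":  ["open", "closed"],
    "teeth":        ["showing", "not showing"],
}

def label_vector(labels):
    """Turn one chosen value per attribute into index form, as a multi-label
    training target might be encoded."""
    return [FACE_ATTRIBUTE_TABLE[a].index(v) for a, v in sorted(labels.items())]
```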
Specifically, the real-face classification training image database of S5202 is used to fine-tune the pre-trained 16-layer very deep convolutional neural network. Each entry of the database contains the file path of a training image and the attribute labels derived from the presence or absence of each attribute of the facial attribute label table in that image.
Further, the specific steps in S7 of segmenting the head image block include:
S71: obtain the head image block and compute the hair region of the head image block with a deep-learning object recognition model, obtaining the coarse hair segmentation region;
S72: obtain the coarse hair region and segment it again with the graph cut algorithm, obtaining the fine hair segmentation region.
Specifically, the steps in S71 of segmenting the hair region of the head image block with the deep-learning object recognition model include:
S7101: obtain real-person portrait photos; enlarge each threefold in equal proportion about the fixed center, crop the image region delimited by the enlarged rectangle, and obtain real-person head image blocks;
S7102: obtain the real-person head image blocks and normalize them to the specification of 224*224; annotate them at pixel level with an image annotation tool: if a pixel of the head image block belongs to hair, label its position 1, otherwise label it 0.
S7103: for the normalized real-person head image block, crop a square image block of size 32*32 centered at each pixel position, obtaining 224*224 square image blocks of side 32; where a square image block extends beyond the original real-person portrait photo, the missing part is padded with white.
S7104: for every square image block of size 32*32, count the number of hair pixels it contains; if the proportion of hair pixels exceeds 40%, the image block is judged to belong to the hair portion of the real-person head image block and is given label 1, otherwise it is given label 0;
S7105: normalize each labeled square image block of size 32*32 to 64*64 and store it together with its label in hair training image database G1;
S7106: for the normalized real-person head image block, crop a square real-person head image block of size 64*64 centered at each pixel position, obtaining 224*224 square real-person head image blocks of side 64; where a square block extends beyond the original real-person portrait photo, the missing part is padded with white.
S7107: for every square image block of size 64*64, count the number of hair pixels it contains; if the proportion of hair pixels exceeds 40%, the image block is judged to belong to the hair portion of the real-person head image block and is given label 1, otherwise it is given label 0;
S7108: store each labeled square image block of size 64*64 in hair training image database G2;
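The construction of databases G1 and G2 (S7103-S7108) slides a window over every pixel position, pads with white beyond the photo, and labels a patch by its share of hair pixels. A compact sketch; the 40% threshold is from S7104/S7107, while the function name and the row-major traversal are illustrative:

```python
import numpy as np

def centered_patches(img, mask, k=32, hair_ratio=0.4):
    """For each pixel, crop a k*k patch centered there (white padding beyond the
    photo, per S7103) and label it 1 if the hair-pixel share exceeds hair_ratio."""
    H, W = img.shape[:2]
    r = k // 2
    padded = np.full((H + k, W + k) + img.shape[2:], 255, dtype=img.dtype)
    padded[r:r + H, r:r + W] = img
    pmask = np.zeros((H + k, W + k), dtype=mask.dtype)
    pmask[r:r + H, r:r + W] = mask
    patches, labels = [], []
    for y in range(H):          # row-major over all pixel positions
        for x in range(W):
            patches.append(padded[y:y + k, x:x + k])
            frac = pmask[y:y + k, x:x + k].mean()
            labels.append(1 if frac > hair_ratio else 0)
    return patches, labels
```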
S7109: under a convolutional neural network framework, load the pre-trained deep convolutional neural network models ZF1 and ZF2, enabling both network models to perform binary classification of an input image: containing hair (1) or not containing hair (0);
S7110: obtain the head image block described in S4 and normalize it, obtaining a head image block of size 224*224;
S7111: construct hair likelihood statistics map M1 of size 224*224 with initial value 0; the value at each position of M1 represents the number of votes for that position being hair.
S7112: obtain the 224*224 side-64 square image blocks (the side-32 blocks normalized to 64*64), run them through deep convolutional neural network model ZF1 for prediction, and obtain the labels: containing hair (1) or not containing hair (0);
S7113: for each labeled side-64 square image block, find the corresponding 32*32 square image block before normalization and locate its region in hair likelihood statistics map M1 by pixel position; if the label of the square image block is containing hair (1), add one to the pixel values of that region, otherwise (label not containing hair (0)) subtract one from them. When the tally over the whole of M1 is complete, set each pixel p of M1 to 1 if p>0 and to 0 otherwise.
S7114: construct hair likelihood statistics map M2 of size 224*224 with initial value 0; the value at each position of M2 represents the number of votes for that position being hair.
S7115: obtain the 224*224 side-64 square image blocks extracted as described in S7106, run them through deep convolutional neural network model ZF2 for prediction, and obtain the labels: containing hair (1) or not containing hair (0);
S7116: for each labeled side-64 square image block, locate its region in hair likelihood statistics map M2 by pixel position; if the label of the square image block is containing hair (1), add one to the pixel values of that region, otherwise subtract one from them. When the tally over the whole of M2 is complete, set each pixel p of M2 to 1 if p>0 and to 0 otherwise.
S7117: perform an element-wise matrix product of M1 and M2, obtaining the coarse segmentation mask map M3 of the hair region; the region it marks is the coarse hair segmentation region.
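The voting scheme of S7111-S7117 can be sketched as follows: each classified patch adds or subtracts one vote over its region, each tally map is binarized at p > 0, and the two binary maps are multiplied element-wise to give M3. Names and the (y0, y1, x0, x1) region convention are illustrative:

```python
import numpy as np

def vote_hair_map(size, regions, labels):
    """Hair likelihood statistics map (S7111-S7113): each classified patch votes
    +1 (hair) or -1 (no hair) over its region; the tally is binarized at p > 0."""
    m = np.zeros((size, size), dtype=np.int32)
    for (y0, y1, x0, x1), lab in zip(regions, labels):
        m[y0:y1, x0:x1] += 1 if lab == 1 else -1
    return (m > 0).astype(np.uint8)

def coarse_mask(m1, m2):
    """Coarse segmentation mask M3 as the element-wise product of M1 and M2 (S7117)."""
    return m1 * m2
```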
Further, the specific steps in S72 of segmenting again with the graph cut algorithm include:
S7201: accumulate and average all pixel points within the range of the coarse segmentation mask map M3 of the hair region, obtaining the pixel average inside the coarse hair segmentation region;
S7202: accumulate and average all pixel points outside the range of the coarse segmentation mask map M3, obtaining the pixel average outside the coarse hair segmentation region;
S7203: according to the pixel averages inside and outside the coarse hair segmentation region, apply the graph cut algorithm to the coarse hair segmentation region corresponding to the mask map M3 to segment it again, obtaining the fine hair segmentation region.
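The graph cut of S7203 needs a max-flow solver; as a self-contained stand-in, the sketch below uses only the data term implied by S7201/S7202, assigning each pixel to the nearer of the inside/outside averages. A real graph cut would add a pairwise smoothness term on top of this; the function name is illustrative:

```python
import numpy as np

def refine_by_means(gray, coarse_mask):
    """Simplified refinement: compute the pixel averages inside and outside the
    coarse mask (S7201/S7202) and assign each pixel to the nearer average.
    Stand-in for the graph cut's data term only; no smoothness term."""
    gray = gray.astype(np.float64)
    inside = gray[coarse_mask > 0]
    outside = gray[coarse_mask == 0]
    fg = inside.mean() if inside.size else 0.0
    bg = outside.mean() if outside.size else 255.0
    return (np.abs(gray - fg) < np.abs(gray - bg)).astype(np.uint8)
```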
Specifically, the real-person portrait photos of S7101 must contain the complete real face and complete hair; each is enlarged threefold in equal proportion about the fixed center and the image region delimited by the enlarged rectangle is cropped, obtaining the real-person head image block, whose length and width are each no less than 224 pixels. In total, 100 qualifying real-person portrait photos are collected: 50 with medium-length hair and 50 with short hair.
Specifically, the loading of the pre-trained deep convolutional neural network models in S7109 proceeds as follows: hair training image databases G1 and G2 are fed as training data into deep convolutional neural network models ZF1 and ZF2 respectively for training; the results are the trained models ZF1 and ZF2, which are loaded directly when predicting.
Further, the specific steps in S9 of splicing, by means of graph-cut image fusion, at the positions corresponding to the original real photo include:
S9101: obtain the face-skin base cartoon material and, under the foreground fusion mode of graph-cut image fusion, fuse the hair cartoon material with it, obtaining the semi-finished face-skin base cartoon material with hair;
S9102: obtain the semi-finished face-skin base cartoon material with hair and, under the foreground fusion mode of graph-cut image fusion, fuse the facial-feature cartoon material with it, obtaining the final cartoon image.
Correspondingly, an embodiment of the invention also discloses a human face cartoon generating apparatus based on deep learning; as shown in Fig. 4, the apparatus includes:
a feature extraction module for obtaining the facial feature points in a portrait image;
an alignment module for rotating the face into the frontal standard pose according to the obtained facial feature points;
a construction module for building the face rectangle and the head image block;
a generation module for generating the corresponding cartoon materials;
a splicing module for splicing the corresponding cartoon materials into a personal cartoon image.
Specifically, the construction module is further used to generate the gender and facial-feature attribute labels of the face from the rectangular regions where the facial features lie in the frontal standard-pose face photo.
Specifically, for the working principle of the functional modules of the apparatus of this embodiment of the invention, refer to the related description of the method embodiment, which is not repeated here.
In the embodiments of the present invention, the required materials can be obtained by deforming the materials within each category. The benefit is twofold: on the one hand the demand for cartoon materials is reduced; on the other hand the fit between cartoon material and real photo is increased, realizing intelligent face cartoon drawing without requiring any human assistance.
One of ordinary skill in the art will appreciate that all or part of the steps of the methods of the above embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, which may include: read-only memory (ROM), random access memory (RAM), magnetic disk, optical disc, and so on.
In addition, the human face cartoon generation method based on deep learning and its apparatus provided by the embodiments of the invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the invention, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and application scope according to the idea of the invention. In summary, the contents of this specification should not be construed as limiting the invention.
Claims (8)
1. A human face cartoon generation method based on deep learning, characterized in that the method comprises:
obtaining a portrait image and calibrating 77 facial feature points in the photo;
estimating the tilt angle of the face in the image from the calibrated left-eye feature point No. 36 and right-eye feature point No. 39, and rotating the original image by the computed tilt angle so that feature points No. 36 and No. 39 reach the horizontal position, thereby obtaining an image containing the face in the frontal standard pose;
obtaining the image containing the frontal standard-pose face, calibrating the 77 feature points in it, selecting the four feature points No. 2, No. 12, No. 15 and No. 17, and constructing the face rectangle;
enlarging the face rectangle threefold in equal proportion about its fixed center, cropping the image region delimited by the enlarged rectangle, and constructing the head image block;
obtaining the 77 feature points in the frontal standard-pose face photo, cropping out the rectangular regions where the facial features lie, normalizing them, and feeding them in turn to the pre-trained face and facial-feature attribute multi-label classification deep convolutional neural prediction network, obtaining the gender and facial-feature attribute labels of the face;
according to the gender and facial-feature attribute labels of the face, selecting the corresponding facial-feature material in the cartoon material library and, taking the corresponding input real image as reference, applying the corresponding deformation to the selected material, obtaining the generated facial-feature cartoon material;
computing and segmenting the head image block to obtain the fine hair segmentation region; combined with the face gender attribute, selecting the corresponding hair material in the cartoon material library and, taking the corresponding input real image as reference, applying the corresponding deformation to the selected hair cartoon material, obtaining the generated hair cartoon material;
obtaining the face rectangle and, combined with the face gender attribute, selecting face-skin base material in the corresponding cartoon material library and deforming it, obtaining the generated face-skin base cartoon material;
combining the facial-feature cartoon material, the hair cartoon material and the face-skin base cartoon material, and splicing them at the positions corresponding to the original real photo, finally obtaining the cartoon image.
2. The human face cartoon generation method based on deep learning according to claim 1, characterized in that the specific steps of obtaining the gender and facial-feature attribute labels of the face comprise:
labeling the face male or female according to gender; labeling the eyebrows thick or sparse according to their density; labeling the eyes open or narrowed according to how wide they are open; labeling the eyes single-eyelid or double-eyelid according to eyelid type; labeling the mouth up-turned, flat or down-turned according to the curvature of the mouth corners; labeling the mouth open or closed according to whether it is open; labeling the mouth teeth-showing or not teeth-showing according to whether teeth are visible; and integrating all these attributes to obtain the facial attribute label table;
building a training image database for real-face classification and, under a convolutional neural network framework environment, performing multi-label training with the pre-trained very deep convolutional neural network, obtaining a face and facial-feature attribute multi-label classification deep convolutional neural prediction network capable of outputting the class labels corresponding to an image block;
obtaining the image block corresponding to the face rectangle, normalizing it to 224*224, and running it through the face and facial-feature attribute multi-label classification deep convolutional neural prediction network, obtaining the gender attribute label of the face;
obtaining the left eyebrow image block region (new hexagon A1), taking its bounding rectangle R_A1, cropping R_A1 from the original input portrait image, normalizing it to 224*224, and running it through the prediction network, obtaining the left eyebrow attribute label;
obtaining the right eyebrow image block region (new hexagon B1), taking its bounding rectangle R_B1, cropping R_B1 from the original input portrait image, normalizing it to 224*224, and running it through the prediction network, obtaining the right eyebrow attribute label;
obtaining the left-eye image block region (new octagon C1), taking its bounding rectangle R_C1, cropping R_C1 from the original input portrait image, normalizing it to 224*224, and running it through the prediction network, obtaining the left eye attribute label;
obtaining the right-eye image block region (new octagon D1), taking its bounding rectangle R_D1, cropping R_D1 from the original input portrait image, normalizing it to 224*224, and running it through the prediction network, obtaining the right eye attribute label;
obtaining the nose image block region (trapezoid E), taking its bounding rectangle R_E, cropping R_E from the original input portrait image, normalizing it to 224*224, and running it through the prediction network, obtaining the nose attribute label;
obtaining the mouth image block region (new dodecagon F1), taking its bounding rectangle R_F1, cropping R_F1 from the original input portrait image, normalizing it to 224*224, and running it through the prediction network, obtaining the mouth attribute label.
3. The human face cartoon generation method based on deep learning according to claim 2, characterized in that while the facial attribute label table is compiled, the cartoon material library is constructed according to that label table.
4. The human face cartoon generation method based on deep learning according to claim 1, characterized in that the specific steps of segmenting the head image block comprise:
obtaining the head image block and computing the hair region of the head image block with a deep-learning object recognition model, obtaining the coarse hair segmentation region;
obtaining the coarse hair region and segmenting it again with the graph cut algorithm, obtaining the fine hair segmentation region.
5. The human face cartoon generation method based on deep learning according to claim 4, characterized in that the specific steps of segmenting again with the graph cut algorithm comprise:
accumulating and averaging all pixel points within the range of the coarse segmentation mask map M3 of the hair region, obtaining the pixel average inside the coarse hair segmentation region;
accumulating and averaging all pixel points outside the range of the coarse segmentation mask map M3, obtaining the pixel average outside the coarse hair segmentation region;
according to the pixel averages inside and outside the coarse hair segmentation region, applying the graph cut algorithm to the coarse hair segmentation region corresponding to the mask map M3 to segment it again, obtaining the fine hair segmentation region.
6. The human face cartoon generation method based on deep learning according to claim 1, characterized in that the specific steps of splicing at the positions corresponding to the original real photo comprise:
obtaining the face-skin base cartoon material and, under the foreground fusion mode of graph-cut image fusion, fusing the hair cartoon material with it, obtaining the semi-finished face-skin base cartoon material with hair;
obtaining the semi-finished face-skin base cartoon material with hair and, under the foreground fusion mode of graph-cut image fusion, fusing the facial-feature cartoon material with it, obtaining the final cartoon image.
7. A human face cartoon generating apparatus based on deep learning, characterized in that the apparatus comprises:
a feature extraction module for obtaining the facial feature points in a portrait image;
an alignment module for rotating the face into the frontal standard pose according to the obtained facial feature points;
a construction module for building the face rectangle and the head image block;
a generation module for generating the corresponding cartoon materials;
a splicing module for splicing the corresponding cartoon materials into a personal cartoon image.
8. The human face cartoon generating apparatus based on deep learning according to claim 7, characterized in that the construction module is further configured to generate the gender and facial-feature attribute labels of the face from the rectangular regions where the facial features lie in the frontal standard-pose face photo.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810242803.6A CN108596839A (en) | 2018-03-22 | 2018-03-22 | A kind of human-face cartoon generation method and its device based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108596839A true CN108596839A (en) | 2018-09-28 |
Family
ID=63627180
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810242803.6A Pending CN108596839A (en) | 2018-03-22 | 2018-03-22 | A kind of human-face cartoon generation method and its device based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108596839A (en) |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109308681A (en) * | 2018-09-29 | 2019-02-05 | 北京字节跳动网络技术有限公司 | Image processing method and device |
- 2018-03-22: Application CN201810242803.6A filed in China (publication CN108596839A/en); status: Pending
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1710608A (en) * | 2005-07-07 | 2005-12-21 | 上海交通大学 | Picture processing method for robot drawing human-face cartoon |
CN101350063A (en) * | 2008-09-03 | 2009-01-21 | 北京中星微电子有限公司 | Method and apparatus for locating human face characteristic point |
CN101383001A (en) * | 2008-10-17 | 2009-03-11 | 中山大学 | Fast and accurate frontal face discrimination method |
CN101477696A (en) * | 2009-01-09 | 2009-07-08 | 彭振云 | Human character cartoon image generating method and apparatus |
CN102436636A (en) * | 2010-09-29 | 2012-05-02 | 中国科学院计算技术研究所 | Method and system for segmenting hair automatically |
CN105868769A (en) * | 2015-01-23 | 2016-08-17 | 阿里巴巴集团控股有限公司 | Method and device for positioning face key points in image |
CN104637035A (en) * | 2015-02-15 | 2015-05-20 | 百度在线网络技术(北京)有限公司 | Method, device and system for generating cartoon face picture |
CN106815566A (en) * | 2016-12-29 | 2017-06-09 | 天津中科智能识别产业技术研究院有限公司 | A kind of face retrieval method based on multitask convolutional neural networks |
CN106874861A (en) * | 2017-01-22 | 2017-06-20 | 北京飞搜科技有限公司 | A face rectification method and system |
CN107123083A (en) * | 2017-05-02 | 2017-09-01 | 中国科学技术大学 | Face editing method |
CN107316333A (en) * | 2017-07-07 | 2017-11-03 | 华南理工大学 | A method for automatically generating anime-style portraits |
CN107729838A (en) * | 2017-10-12 | 2018-02-23 | 中科视拓(北京)科技有限公司 | A head pose estimation method based on deep learning |
CN107680071A (en) * | 2017-10-23 | 2018-02-09 | 深圳市云之梦科技有限公司 | A method and system for face and body fusion processing |
CN107784678A (en) * | 2017-11-08 | 2018-03-09 | 北京奇虎科技有限公司 | Generation method, device, and terminal for cartoon face images |
Non-Patent Citations (7)
Title |
---|
Saurav Jha et al.: "Bringing Cartoons to Life: Towards Improved Cartoon Face Detection and Recognition Systems", Computer Science * |
Feng Xiaofei et al.: "Face caricature generation based on feature deformation", 《浙江工业大学学报》 (Journal of Zhejiang University of Technology) * |
Liu Lihui: "A fast face detection system based on AdaBoost", 《科技风》 * |
Lü Miaoxian: "Research on face recognition algorithms based on convolutional neural networks", 《中国优秀硕士学位论文全文数据库(电子期刊)信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology series) * |
Li Hui et al.: "A face recognition algorithm based on convolutional neural networks", 《软件导刊》 (Software Guide) * |
Luan Xidao et al.: 《多媒体情报处理技术》 (Multimedia Intelligence Processing Technology), 31 May 2016 * |
Shen Xiangeng et al.: "An improved fast multi-pose facial feature point localization algorithm", 《中国科技论文》 (China Science Paper) * |
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109308681A (en) * | 2018-09-29 | 2019-02-05 | 北京字节跳动网络技术有限公司 | Image processing method and device |
CN109858382A (en) * | 2019-01-04 | 2019-06-07 | 广东智媒云图科技股份有限公司 | A method of drawing a portrait from dictation |
CN109800732A (en) * | 2019-01-30 | 2019-05-24 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating a model for generating cartoon avatars |
CN109800732B (en) * | 2019-01-30 | 2021-01-15 | 北京字节跳动网络技术有限公司 | Method and device for generating cartoon head portrait generation model |
CN109886418A (en) * | 2019-03-12 | 2019-06-14 | 深圳微品致远信息科技有限公司 | Method, system, and storage medium for intelligently generating design works based on machine learning |
CN110070483A (en) * | 2019-03-26 | 2019-07-30 | 中山大学 | A portrait cartoonization method based on a generative adversarial network |
CN110070483B (en) * | 2019-03-26 | 2023-10-20 | 中山大学 | Portrait cartoonization method based on a generative adversarial network |
CN110414345A (en) * | 2019-06-25 | 2019-11-05 | 北京汉迪移动互联网科技股份有限公司 | Cartoon image generation method, device, equipment and storage medium |
WO2021004322A1 (en) * | 2019-07-09 | 2021-01-14 | 北京字节跳动网络技术有限公司 | Head special effect processing method and apparatus, and storage medium |
CN110321865A (en) * | 2019-07-09 | 2019-10-11 | 北京字节跳动网络技术有限公司 | Head effect processing method and device, storage medium |
CN110414428A (en) * | 2019-07-26 | 2019-11-05 | 厦门美图之家科技有限公司 | A method of generating a face attribute information recognition model |
CN110598546A (en) * | 2019-08-06 | 2019-12-20 | 平安科技(深圳)有限公司 | Image-based target object generation method and related equipment |
CN110728255A (en) * | 2019-10-22 | 2020-01-24 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN111079549B (en) * | 2019-11-22 | 2023-09-22 | 杭州电子科技大学 | Method for carrying out cartoon face recognition by utilizing gating fusion discrimination characteristics |
CN111079549A (en) * | 2019-11-22 | 2020-04-28 | 杭州电子科技大学 | Method for recognizing cartoon face by using gating fusion discrimination features |
CN111027492B (en) * | 2019-12-12 | 2024-01-23 | 广东智媒云图科技股份有限公司 | Animal drawing method and device for connecting limb characteristic points |
CN111027492A (en) * | 2019-12-12 | 2020-04-17 | 广东智媒云图科技股份有限公司 | Animal drawing method and device for connecting limb characteristic points |
CN111080743B (en) * | 2019-12-12 | 2023-08-25 | 广东智媒云图科技股份有限公司 | Character drawing method and device for connecting head and limb characteristic points |
CN111028325A (en) * | 2019-12-12 | 2020-04-17 | 广东智媒云图科技股份有限公司 | Animal animation production method and device for limb characteristic point connecting line |
CN111080743A (en) * | 2019-12-12 | 2020-04-28 | 广东智媒云图科技股份有限公司 | Figure drawing method and device for connecting characteristic points of head and limbs |
CN111080754A (en) * | 2019-12-12 | 2020-04-28 | 广东智媒云图科技股份有限公司 | Character animation production method and device for connecting characteristic points of head and limbs |
CN111080754B (en) * | 2019-12-12 | 2023-08-11 | 广东智媒云图科技股份有限公司 | Character animation production method and device for connecting characteristic points of head and limbs |
CN111028325B (en) * | 2019-12-12 | 2023-08-11 | 广东智媒云图科技股份有限公司 | Animal animation production method and device for connecting limb characteristic points |
CN111223164B (en) * | 2020-01-08 | 2023-10-24 | 杭州未名信科科技有限公司 | Face simple drawing generation method and device |
CN111223164A (en) * | 2020-01-08 | 2020-06-02 | 浙江省北大信息技术高等研究院 | Face sketch generating method and device |
CN111260763A (en) * | 2020-01-21 | 2020-06-09 | 厦门美图之家科技有限公司 | Cartoon image generation method, device, equipment and storage medium based on portrait |
CN111340913A (en) * | 2020-02-24 | 2020-06-26 | 北京奇艺世纪科技有限公司 | Picture generation and model training method, device and storage medium |
CN111445384A (en) * | 2020-03-23 | 2020-07-24 | 杭州趣维科技有限公司 | Universal portrait photo cartoon stylization method |
CN111508048A (en) * | 2020-05-22 | 2020-08-07 | 南京大学 | Automatic generation method for human face cartoon with interactive arbitrary deformation style |
US11763511B2 (en) | 2020-10-28 | 2023-09-19 | Boe Technology Group Co., Ltd. | Methods and apparatuses of displaying preset animation effect image, electronic devices and storage media |
CN112346614A (en) * | 2020-10-28 | 2021-02-09 | 京东方科技集团股份有限公司 | Image display method and device, electronic device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108596839A (en) | A kind of human-face cartoon generation method and its device based on deep learning | |
CN103914699B (en) | A method of automatic lip-gloss image enhancement based on color space | |
CN107610209A (en) | Human facial expression synthesis method, device, storage medium, and computer equipment | |
CN105069400B (en) | Facial image gender identification system based on stacked sparse autoencoders | |
CN1691740B (en) | Magnified display apparatus and magnified display method | |
CN107506714A (en) | A kind of method of face image relighting | |
CN110930297A (en) | Method and device for migrating styles of face images, electronic equipment and storage medium | |
CN107924579A (en) | The method for generating personalization 3D head models or 3D body models | |
US11854247B2 (en) | Data processing method and device for generating face image and medium | |
WO2023050992A1 (en) | Network training method and apparatus for facial reconstruction, and device and storage medium | |
CN107392118A (en) | Enhanced face attribute recognition method and system based on multi-task generative adversarial networks | |
CN105184249A (en) | Method and device for processing face image | |
JP2018055470A (en) | Facial expression recognition method, facial expression recognition apparatus, computer program, and advertisement management system | |
CN106303233A (en) | A video privacy-protection method based on expression fusion | |
CN110427795A (en) | An attribute analysis method, system, and computer equipment based on head photographs | |
CN109359499A (en) | A method and apparatus for face classification | |
CN109242775A (en) | An attribute information migration method, device, equipment, and readable storage medium | |
CN109145871A (en) | Psychology and behavior recognition methods, device and storage medium | |
DE112019000040T5 (en) | DETECTING DETERMINATION MEASURES | |
CN108319937A (en) | Method for detecting human face and device | |
US20220358411A1 (en) | Apparatus and method for developing object analysis model based on data augmentation | |
CN117157673A (en) | Method and system for forming personalized 3D head and face models | |
CN111062899A (en) | Guidance-based blink video generation method using a generative adversarial network | |
CN109492540A (en) | Face exchange method, apparatus and electronic equipment in a kind of image | |
CN110363170B (en) | Video face changing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2018-09-28 |