CN110335332B - Automatic drawing method of human face cartoon - Google Patents

Automatic drawing method of human face cartoon

Info

Publication number
CN110335332B
CN110335332B (application CN201910395963.9A)
Authority
CN
China
Prior art keywords: layer, area, line, eye, base points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910395963.9A
Other languages
Chinese (zh)
Other versions
CN110335332A (en)
Inventor
郑进 (Zheng Jin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Huoshaoyun Technology Co ltd
Original Assignee
Hangzhou Huoshaoyun Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Huoshaoyun Technology Co ltd filed Critical Hangzhou Huoshaoyun Technology Co ltd
Priority to CN201910395963.9A
Publication of CN110335332A
Application granted
Publication of CN110335332B
Legal status: Active (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2213/00 Indexing scheme for animation
    • G06T2213/08 Animation software package

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an automatic drawing method of a human face cartoon, which comprises the following steps: acquiring a face image composed of a plurality of areas according to a plurality of feature base points; drawing the eye area, the eyebrow area, the nose area, the mouth area, the hairstyle area and the face area as independent modules; forming one or more of the eye area, the eyebrow area, the mouth area, the hairstyle area and the face area by fusing a plurality of layers; and fusing the separately drawn eye area, eyebrow area, nose area, mouth area, hairstyle area and face area with a cartoon background template. The drawing method depends little on designers, needs no material database and no matching process, involves relatively little work, and produces a strong cartoon feel while preserving likeness; the realistic, finely drawn cartoon is produced by code or a designed computation, so the effect is controllable; and the method is not limited by environmental factors, so illumination and occlusion by foreign objects cannot affect the final result.

Description

Automatic drawing method of human face cartoon
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an automatic drawing method of a human face cartoon.
Background
Face cartoons are currently widely used by the public in internet social networking. Drawing a cartoon usually requires a designer with professional skills and a certain amount of time, and the development of computer image processing technologies such as computer vision and computer graphics has made it possible to generate face cartoons through computer software.
At present, face cartoon generation technology in the industry falls into two main categories. 1) Facial feature similarity matching. Facial feature positions and image information of the target user are extracted by a facial feature localization technique and matched, based on point or image characteristics, against a facial feature material library prepared in advance by designers; the most similar features are selected, and the face cartoon is finally generated by recombining the matched feature materials. This technique requires a very large material library for the facial features, which means a huge workload for designers, and even a well-stocked library still cannot reflect the detailed characteristics and differences of every person's features, so the face cartoons generated this way generally have low similarity to the real-person photographs.
2) Face fusion. The idea is to superimpose the target user's image and a cartoon template image drawn by a designer through face fusion to generate the face cartoon. The face alignment, triangulation and other techniques used by face fusion require affine transformations at the image level between the target face and the template face, so when face fusion is used for the face cartoon scene the cartoon template must be drawn in a very realistic style, which weakens the fun and cute character of a face cartoon. In addition, face fusion is still alpha blending of image pixel values, so it is very difficult to deal with facial shadows caused by factors such as the photographing angle, the user's environment, and the user's glasses and bangs, and the requirements on the cartoon template are high.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides an automatic face cartoon drawing method that achieves high similarity to the real person and a strong cartoon feel, needs no database and no database matching process, involves little work, and generates images automatically and quickly.
The technical scheme adopted by the invention for solving the technical problem is as follows: a method for automatically drawing a human face cartoon comprises the following steps:
acquiring a face image consisting of a plurality of regions according to a plurality of characteristic base points, wherein the regions comprise an eye region, an eyebrow region, a nose region, a mouth region, a hairstyle region and a face region;
drawing by taking an eye area, an eyebrow area, a nose area, a mouth area, a hairstyle area and a face area as independent modules respectively;
one or more areas of the eye area, the eyebrow area, the mouth area, the hair style area and the face area are formed by fusing a plurality of layers respectively;
and fusing the eye area, the eyebrow area, the nose area, the mouth area, the hairstyle area and the face area which are respectively drawn with the cartoon background template.
Preferably, the eye area is divided into an eye shadow layer, an eyeball layer, an eye white layer, a transition line layer, a foreground line layer and a double eyelid layer.
Preferably, the eye shadow layer, the eyeball layer, the eye white layer, the transition line layer, the foreground line layer and the double eyelid layer are sequentially fused.
Preferably, the method for automatically drawing the double eyelid layer includes the following steps:
a) Judging whether the human face image has double eyelids by an image edge detection technology;
b) Translating the upper eye contour line acquired through the plurality of feature base points to acquire a first reference line of the double eyelid layer;
c) Defining the width of each position of the double eyelid layer as h1, and acquiring the distance between adjacent feature base points according to the widest position of the double eyelid layer and the number of the feature base points;
d) Taking any three adjacent feature base points and constructing a circle a1 passing through the three feature base points;
e) Taking the middle point of the three feature base points as a tangent point, and drawing a tangent line of the circle a1;
f) Moving the tangent point by h1/2 to each side along the direction perpendicular to the tangent line to obtain an up-shift point b and a down-shift point c;
g) Connecting the up-shift point b, the down-shift point c and the two end points of the first reference line in sequence, and filling the interior to obtain the double eyelid layer.
Preferably, in step f), the circle a1 is used as the boundary: a point located inside the circle a1 is a down-shift point, and a point located outside it is an up-shift point.
Preferably, the foreground line layer comprises an upper contour foreground line layer and a lower contour foreground line layer, and the automatic drawing method comprises the following steps:
1) Acquiring a second reference line of the upper contour foreground line layer through a plurality of feature base points;
2) Defining the width of each position of the upper contour foreground line layer as h2, and acquiring the distance between adjacent feature base points according to the widest position of the upper contour foreground line layer and the number of the feature base points;
3) Taking any three adjacent feature base points and constructing a circle a2 passing through the three feature base points;
4) Taking the middle point of the three feature base points as a tangent point, and drawing a tangent line of the circle a2;
5) Moving the tangent point by h2/2 to each side along the direction perpendicular to the tangent line to obtain an up-shift point d and a down-shift point e;
6) Connecting the up-shift point d, the down-shift point e and the two end points of the second reference line in sequence, and filling the interior to obtain a first part of the upper contour foreground line layer;
7) Taking X points near the highest point of the first part of the upper contour foreground line layer, where X is greater than or equal to 4;
8) Fitting a polynomial with the X points and Y degrees of freedom, and determining a plurality of fixed points according to the polynomial;
9) Translating the end point A1 of the first part near the eye tail to obtain an end point A2;
10) Filling the area enclosed by the boundary of the first part, the plurality of fixed points and the end point A2 to obtain the upper contour foreground line layer;
11) Applying an affine transformation to a fixed lower contour foreground line slice to obtain the lower contour foreground line layer;
12) Fusing the upper contour foreground line layer and the lower contour foreground line layer.
Preferably, the transition line layer includes an upper transition line layer and a lower transition line layer, and the automatic drawing method of the transition line layer comprises the following steps:
i. stroking, with a certain thickness, the upper eye contour line obtained through the plurality of feature base points;
ii. Gaussian-blurring the stroked upper eye contour line to obtain the upper transition line layer;
iii. stroking, with a certain thickness, a portion of the lower eye contour line obtained through the plurality of feature base points;
iv. Gaussian-blurring that portion of the lower eye contour line to obtain the lower transition line layer;
v. fusing the upper transition line layer and the lower transition line layer to obtain the transition line layer.
Preferably, the automatic drawing method of the eye shadow layer includes the following steps: enlarging the eye contour line obtained from the plurality of feature base points by a factor of N about its center, filling the enlarged region, applying Gaussian blur to the region, translating the whole region after processing, and then making the end of the region nearest the inner canthus coincide with the inner canthus of the eye contour line.
Preferably, the eyebrow area is divided into an eyebrow-shaped layer and an eyebrow arch layer; the eyebrow-shaped layer is obtained by constructing an ROI (region of interest) from the plurality of feature base points and filling it with a gradient color slice, the eyebrow arch layer is obtained by an affine transformation of a fixed eyebrow arch shape, and the eyebrow-shaped layer and the eyebrow arch layer are fused.
Preferably, the mouth area is divided into a lip-shaped layer, a middle lip line layer, a lip corner layer and a highlight layer of the lower lip; the lip-shaped layer is obtained by constructing an ROI from the plurality of feature base points and filling it with a gradient color slice; the middle lip line layer is obtained through the plurality of feature base points; a first conversion part and a second conversion part are intercepted at the two ends of the middle lip line, a rotation angle alpha is calculated from the slope of the end of the lip-shaped layer, and the first and second conversion parts are each rotated by the angle alpha about the corresponding end point of the middle lip line to obtain the lip corner layer; and the lip-shaped layer, the middle lip line layer, the lip corner layer and the highlight layer are fused in sequence.
Preferably, the method for automatically drawing the face region includes the steps of:
S1, scaling the face acquired through the plurality of feature base points to fit the cartoon background template;
S2, dividing the face along its center line, taking the thinner half and mirroring it for symmetry;
S3, correcting flaws of the face, on the basis of the feature base points, by translating the feature base points;
S4, constructing an ROI (region of interest) from the optimized feature base points;
and S5, fusing the ROI region with a fixed head shape.
Preferably, the hairstyle area is divided into a front hair layer, a back hair layer and a bangs layer; the front hair layer and the back hair layer each include a straight-hair attribute, a large-curl attribute, a small-curl attribute, a long-hair attribute and a short-hair attribute, and the bangs layer includes an air-bangs attribute, a side-swept-bangs attribute and a straight-bangs (Ji Liuhai) attribute; a plurality of deep learning models are constructed for these attributes, and the front hair layer, the back hair layer and the bangs layer are fused after the attributes are determined.
The invention provides an automatic cartoon drawing approach based on facial feature localization: at the programming-language level, a canvas tool library and a computer vision library are used to draw the facial features from the face landmark points provided by Face++; for the more difficult hairstyle part, deep learning is first used to finely classify the target user's hairstyle, and a similar hairstyle is then extracted from a prepared hairstyle library by a similarity matching technique.
This face cartoon generation idea fundamentally solves the problem that the material library in facial feature similarity matching is hard to make complete, and at the same time avoids the problems of the face fusion technique, namely poor fused image quality caused by illumination, occlusion and the like, and the inability to generate images with a strong cartoon feel.
The invention has the following beneficial effects: 1) it depends little on designers, needs no database and no matching process, involves relatively little work, and achieves a strong cartoon feel while preserving likeness; 2) the realistic, finely drawn cartoon is produced by code or a designed computation, so the effect is controllable and can be kept uniform; 3) the method is not limited by environmental factors, and the final result is not affected by illumination or occlusion by foreign objects.
Drawings
Fig. 1 is a drawing effect diagram of an eye region according to the present invention.
Fig. 2 is a drawing effect diagram of a double-eyelid layer, a foreground line layer, an eye shadow layer, a transition line layer, an eyeball layer and an eye white layer according to the present invention.
FIG. 3 is a drawing step effect diagram of a double eyelid layer.
Fig. 4 is a drawing effect diagram of the mouth region of the present invention.
FIG. 5 is a drawing diagram of the lip-shaped layer, the middle lip line layer, the lip corner layer, and the highlight layer of the lower lip of the present invention.
FIG. 6 is a schematic diagram of a process for drawing an upper contour foreground line layer in accordance with the present invention.
Fig. 7 is a simplified drawing of a hairstyle area according to the present invention.
Fig. 8 is a schematic view of the facial form before the facial form region is corrected according to the present invention.
Fig. 9 is a schematic view of the corrected face shape of the face area according to the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A method for automatically drawing a human face cartoon comprises the following steps:
performing face recognition based on the Face++ technology, and acquiring a real face image according to a plurality of feature base points, wherein the face image comprises a plurality of areas, specifically an eye area, an eyebrow area, a nose area, a mouth area, a hairstyle area and a face shape area;
the eye area is taken as an independent module and drawn separately: it is divided into a plurality of layers, each layer is drawn individually, and the layers are then alpha-fused in sequence to complete the drawing of the eye area;
the eyebrow area is taken as an independent module and drawn separately: it is divided into an eyebrow-shaped layer and an eyebrow arch layer, each layer is drawn individually, and the two layers are alpha-fused to complete the drawing of the eyebrow area;
the nose area is taken as an independent module and drawn separately: it is obtained as a fixed nose shape from the constructed deep learning model database;
the mouth area is taken as an independent module and drawn separately: it is divided into a plurality of layers, and after each layer is drawn the layers are alpha-fused to complete the drawing of the mouth area;
the hairstyle area is taken as an independent module and drawn separately: it is divided into a plurality of layers, each layer is drawn individually, and the layers are alpha-fused to complete the drawing of the hairstyle area;
the face shape area is taken as an independent module and drawn separately;
and the eye area, the eyebrow area, the nose area, the mouth area, the hairstyle area and the face shape area thus drawn are fused with the cartoon background template to obtain a cartoon image drawn from the human face image, as sketched below.
The specific drawing process of each region is described in detail next.
The eye area comprises an eye shadow image layer 13, an eyeball image layer 16, an eye white image layer 15, a transition line image layer 14, a foreground line image layer and a double eyelid image layer 11 which are sequentially fused, the foreground line image layer comprises an upper contour foreground line image layer 121 and a lower contour foreground line image layer 122, and the transition line image layer comprises an upper transition line image layer and a lower transition line image layer.
The automatic drawing method of the eye shadow layer comprises the following steps: based on the Face++ technology, the eye contour line obtained from the plurality of feature base points is enlarged about its center by a factor of N (1.5 in this embodiment); the enlarged region is filled with an appropriate color such as pink, and Gaussian blur is applied to it; after processing, the whole region is translated (toward the left ear for the left eye, toward the right ear for the right eye), and the end of the region nearest the inner canthus is then made to coincide with the inner canthus of the eye contour line, giving the eye shadow layer.
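A minimal sketch of this eye shadow construction follows (Python/OpenCV, assumed tooling); the 1.5x factor is the embodiment's value, while the colour, blur kernel and helper name are illustrative choices.

import cv2
import numpy as np

def eye_shadow_layer(canvas_shape, eye_contour, inner_corner, scale=1.5,
                     color=(203, 192, 255), blur_ksize=(15, 15)):
    """Enlarge the eye contour about its centroid, fill and blur it, then shift
    the region so its end nearest the inner canthus lands on the inner canthus."""
    pts = np.asarray(eye_contour, dtype=np.float32)
    centroid = pts.mean(axis=0)
    enlarged = (pts - centroid) * scale + centroid
    layer = np.zeros(canvas_shape, dtype=np.uint8)
    cv2.fillPoly(layer, [np.int32(enlarged)], color)
    layer = cv2.GaussianBlur(layer, blur_ksize, 0)
    # translate so the enlarged contour's point nearest the inner canthus
    # coincides with the inner canthus of the original contour
    nearest = enlarged[np.argmin(np.linalg.norm(enlarged - inner_corner, axis=1))]
    dx, dy = np.asarray(inner_corner, dtype=np.float32) - nearest
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(layer, M, (canvas_shape[1], canvas_shape[0]))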
The automatic drawing method of the eyeball layer and the eye white layer comprises the following steps: ROI regions are constructed for the eyeball and the eye white respectively from the plurality of feature base points using a masking method, and after construction the eye white layer is drawn over the eyeball layer to trim it.
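As a sketch of the masking step (same assumed Python/OpenCV tooling), each part is simply filled inside the polygon bounded by its feature base points; drawing the eye white layer after the eyeball layer then trims the eyeball at the contour, as described. The fill colours are illustrative.

import cv2
import numpy as np

def filled_roi_layer(canvas_shape, contour_pts, fill_color):
    """Fill the region bounded by the part's feature base points to form one
    layer; used here for both the eyeball layer and the eye white layer."""
    layer = np.zeros(canvas_shape, dtype=np.uint8)
    cv2.fillPoly(layer, [np.int32(contour_pts)], fill_color)
    return layer

# Drawing order: eyeball first, then the eye white layer on top, so the white
# covers any part of the eyeball that falls outside the eye contour.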
The automatic drawing method of the transition line layer comprises the following steps:
i. stroking, with a certain thickness, the upper eye contour line obtained from the plurality of feature base points;
ii. blurring the thickened upper eye contour line with a Gaussian blur to obtain the upper transition line layer;
iii. acquiring the lower eye contour line from the plurality of feature base points, taking the larger portion of it near the outer canthus, and stroking that portion with a certain thickness, for example 3 pixels;
iv. blurring the thickened lower eye contour portion with a Gaussian blur to obtain the lower transition line layer;
v. alpha-fusing the upper transition line layer and the lower transition line layer to obtain the transition line layer.
The automatic drawing method of the foreground line layer comprises the following steps:
1) Acquiring a second reference line of the upper contour foreground line layer through a plurality of feature base points;
2) Defining the width of each position of the upper contour foreground line layer as h2 (h2 is a variable), and obtaining the distance between adjacent feature base points from the widest position of the layer and the number of feature base points; specifically, if the widest position is 10 pixels and there are 10 feature base points, then 10/10/2 = 0.5, i.e. the distance between adjacent feature base points is 0.5 pixel;
3) Excluding the end points at the two ends of the second reference line, taking any three adjacent feature base points as a group (from left to right, from right to left, or from the center outward) and constructing a circle passing through the group of three feature base points, defined as circle a2;
4) Taking the middle point of the group of three feature base points as the tangent point, and drawing a tangent line of the circle a2;
5) Moving the tangent point by h2/2 to each side along the direction perpendicular to the tangent line to obtain an up-shift point d and a down-shift point e;
6) Repeating the circle construction, tangent drawing and symmetric translation for every group of three feature base points on the second reference line to obtain an array of up-shift points d and an array of down-shift points e; to tell them apart, the circle a2 serves as the boundary: a down-shift point e lies inside circle a2 and an up-shift point d lies outside it;
7) Connecting the up-shift point d array, the down-shift point e array and the end points of the two ends of the second reference line in sequence, and filling the interior of the connected region to obtain the first part1 of the upper contour foreground line layer;
8) Taking X points near the highest point of part1 of the upper contour foreground line layer, where X is greater than or equal to 4; five points are taken in this embodiment;
9) Fitting a polynomial with X = 5 points and Y = 3 degrees of freedom, and determining a plurality of fixed points from the polynomial (see the sketch after this list);
10) Translating the end point A1 of part1 nearest the eye tail by about 6 pixels to obtain an end point A2: for the left eye A1 is translated 6 pixels toward the left ear, and for the right eye 6 pixels toward the right ear; this step can also be performed before the fixed points are determined;
11) Filling the area enclosed by the upper boundary of part1 and the connecting lines among the plurality of fixed points, the end point A2 and the end point A1 to obtain the complete upper contour foreground line layer;
12) Slicing the fixed lower contour foreground line and applying an affine transformation to obtain the lower contour foreground line layer; when the layers are stacked, the eye white layer partially covers and trims the lower contour foreground line layer;
13) Fusing the upper contour foreground line layer and the lower contour foreground line layer to obtain the foreground line layer.
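Steps 8) and 9) amount to a least-squares polynomial fit through the crest points; a minimal sketch (Python/NumPy, assumed tooling) with an illustrative sample count is:

import numpy as np

def crest_fixed_points(points_xy, degree=3, n_samples=20):
    """Fit a polynomial through the X >= 4 crest points of part1 (degree=3 here,
    reading the embodiment's '3 degrees of freedom' as the polynomial degree),
    then sample it to get the fixed points used to close the upper region."""
    pts = np.asarray(points_xy, dtype=np.float64)
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], degree)
    xs = np.linspace(pts[:, 0].min(), pts[:, 0].max(), n_samples)
    return np.column_stack([xs, np.polyval(coeffs, xs)])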
The automatic drawing method of the double eyelid drawing layer comprises the following steps:
a) Judging whether the human face image has double eyelids through an image edge detection technology, if so, performing the following steps, and if not, skipping the drawing of the image layer;
b) Translating the upper eye contour line acquired from the plurality of feature base points to obtain a first reference line of the double eyelid layer; the contour line is translated upward and toward the nearer ear (toward the left ear for the left eye, toward the right ear for the right eye); as shown in fig. 3, for the right eye the contour is translated up by 4 pixels and to the right by 2 pixels;
c) Acquiring the number of feature base points of the first reference line;
d) Defining the width of each position of the double eyelid layer as h1 (h1 is a variable), and obtaining the distance between adjacent feature base points from the widest position of the double eyelid layer and the number of feature base points; specifically, if the widest position of the double eyelid layer is 10 pixels and there are 10 feature base points, then 10/10/2 = 0.5, i.e. the distance between adjacent feature base points is 0.5 pixel;
e) Excluding the end points at the two ends of the first reference line, taking any three adjacent feature base points as a group (from left to right, from right to left, or from the center outward) and constructing a circle passing through the group of three feature base points, defined as circle a1 (a geometric sketch of steps e) to h) follows this list);
f) Taking a middle point of the group of three characteristic base points as a tangent point, and making a tangent line of the circle a1;
g) Respectively moving the tangent point to two sides by h1/2 along the vertical direction of the tangent line to respectively obtain an upward moving point b and a downward moving point c;
h) Repeating the circle construction, tangent drawing and symmetric translation for every group of three feature base points on the first reference line to obtain an array of up-shift points b and an array of down-shift points c; to tell them apart, the circle a1 serves as the boundary: a down-shift point c lies inside circle a1 and an up-shift point b lies outside it;
i) Connecting the up-shift point b array, the down-shift point c array and the end points of the two ends of the first reference line in sequence, and filling the interior of the connected region to obtain the double eyelid layer.
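Steps e) to h) reduce to constructing the circle through three neighbouring base points and moving the middle point by h1/2 along the radius, i.e. perpendicular to the tangent at that point; a minimal geometric sketch (Python/NumPy, assumed tooling, helper names illustrative) is:

import numpy as np

def circle_through(p1, p2, p3):
    """Center and radius of the circle through three non-collinear points."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    return center, np.linalg.norm(center - np.asarray(p1, dtype=float))

def offset_points(p_prev, p_mid, p_next, h):
    """Move the middle base point by h/2 along the radial direction (which is
    perpendicular to the tangent at p_mid): outward gives the up-shift point,
    toward the center gives the down-shift point."""
    center, _ = circle_through(p_prev, p_mid, p_next)
    normal = np.asarray(p_mid, dtype=float) - center
    normal /= np.linalg.norm(normal)                 # unit radial direction
    up = np.asarray(p_mid, dtype=float) + normal * (h / 2)   # outside circle a1
    down = np.asarray(p_mid, dtype=float) - normal * (h / 2) # inside circle a1
    return up, down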
The eyebrow area comprises an eyebrow-shaped layer and an eyebrow arch layer that are alpha-fused.
The eyebrow-shaped layer is obtained by constructing an ROI region from the plurality of feature base points with a masking method and filling it with a gradient color slice;
the eyebrow arch layer is obtained by an affine transformation of a fixed eyebrow arch shape;
and the eyebrow-shaped layer and the eyebrow arch layer are alpha-fused.
The mouth area comprises a lip-shaped layer 21, a middle lip line layer 22, a lip corner layer 23 and a highlight layer 24 of the lower lip, which are alpha-fused with one another. The lip-shaped layer is obtained by constructing an ROI region from the plurality of feature base points with a masking method and filling it with a gradient color slice. The middle lip line layer is obtained through the plurality of feature base points. The automatic drawing method of the lip corner layer is as follows: a first transformation part and a second transformation part are intercepted at the two ends of the middle lip line; a rotation angle alpha is calculated from the slope of the end of the lip-shaped layer; the first transformation part is rotated by the angle alpha about one end point of the middle lip line, and the second transformation part is rotated by the angle alpha about the other end point; to achieve a better fit, the region block formed by rotating the first transformation part and the region block formed by rotating the second transformation part are each moved about 3 pixels toward each other, giving the lip corner layer; alternatively, the left lip corner can be obtained by mirroring the drawn right lip corner. The highlight layer of the lower lip is obtained by constructing an ROI region from the plurality of feature base points with a masking method and filling it with a gradient color slice. The separately drawn lip-shaped layer, middle lip line layer, lip corner layer and highlight layer are fused in sequence to obtain the mouth area.
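For the lip corner construction just described, the rotation of each intercepted end about a lip-line end point can be sketched as follows (Python/OpenCV, assumed tooling); the way the angle is taken from the end slope k and the helper names are assumptions consistent with the description.

import cv2
import numpy as np

def lip_corner_patch(patch, pivot_xy, end_slope_k):
    """Rotate one intercepted end of the middle lip line about the lip-line
    end point (pivot, in patch coordinates) by the angle alpha derived from the
    end slope of the lip-shaped layer; a small translation toward the lip
    centre (about 3 px in the embodiment) can then be applied for a tighter fit."""
    alpha_deg = float(np.degrees(np.arctan(end_slope_k)))   # rotation angle
    h, w = patch.shape[:2]
    M = cv2.getRotationMatrix2D(tuple(map(float, pivot_xy)), alpha_deg, 1.0)
    return cv2.warpAffine(patch, M, (w, h))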
The hairstyle area comprises a back hair layer, a front hair layer and a bangs layer that are alpha-fused in sequence. Three attributes are defined to characterize the hairstyle area: hair length (long hair, medium hair, bob and short hair), hair texture (large curls and small curls) and bangs (air bangs, side-swept bangs and straight bangs). The hair length and hair texture attributes are embodied in the front hair layer and the back hair layer, and the bangs attributes in the bangs layer. A deep learning model is constructed for each of the hair length and hair texture attributes, while several deep learning models are constructed for the finer-grained bangs attributes. For the bangs attributes, whether bangs are present in the face image is first judged by image edge detection; only if bangs are present is a deep learning model applied and a bangs layer matched. For the face image of any target user, the three hairstyle attributes are determined by the trained deep learning models, the corresponding back hair layer, front hair layer and bangs layer are then matched from the hairstyle material library, and the finished hairstyle is obtained by alpha fusion. Fig. 7 shows the hairstyle materials and the final synthesized hairstyle for the combination "long hair, small curls, air bangs".
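The classification itself is left to the trained models; once the three attribute labels are known, the matching and fusion step could look like the sketch below (Python/OpenCV, assumed tooling). The file naming scheme, the PNG-with-alpha materials and the simple "over" composite are illustrative assumptions; only the classify, match and fuse flow comes from the description above.

import cv2
import numpy as np

def over(dst_rgb, src_rgba):
    """Alpha-composite one RGBA material onto an RGB canvas ('over' operator)."""
    a = src_rgba[..., 3:4].astype(np.float32) / 255.0
    return (src_rgba[..., :3] * a + dst_rgb * (1.0 - a)).astype(np.uint8)

def compose_hairstyle(canvas_rgb, length_lbl, texture_lbl, bangs_lbl, lib="hairstyles"):
    """Look up back-hair, front-hair and bangs materials by the predicted
    attribute labels and fuse them back to front onto a working canvas."""
    names = [f"back_{length_lbl}_{texture_lbl}.png",
             f"front_{length_lbl}_{texture_lbl}.png"]
    if bangs_lbl:                                   # bangs layer only if detected
        names.append(f"bangs_{bangs_lbl}.png")
    for name in names:
        material = cv2.imread(f"{lib}/{name}", cv2.IMREAD_UNCHANGED)
        if material is not None:                    # materials assumed to carry alpha
            canvas_rgb = over(canvas_rgb, material)
    return canvas_rgb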
The automatic drawing method of the face area comprises the following steps:
S1, scaling the face acquired from the plurality of feature base points to fit a cartoon background template of fixed size;
The specific method of step S1 is a similar-triangle transformation that maps the obtained face feature base points into a uniform dimension. Different users photograph at different angles and with different habits, so their selfies are not at a standard scale: some faces are too large or too small, or tilted at an angle. As shown in fig. 8, the face positioning points recognized from a user selfie by the Face++ landmark detector are connected; A, B and C are three of these points, where points A and B are face end points, and a, b and c are the lengths of BC, AC and AB respectively. Fig. 9 shows the face shape mapped into the uniform dimension by the face shape correction algorithm, whose specific steps are as follows:
1. By the law of cosines, cos α = (b² + c² - a²) / (2bc). Since triangle ABC is similar to triangle A'B'C', α = α' and c/c' = b/b'; because A' and B' are both preset known fixed points, the distance c' = |A'B'| is known, and the distance b' from A' to C' can therefore be obtained;
2. Then dx = b'·cos α and dy² = b'² - dx², from which the coordinate C' of point C mapped to the standard dimension is obtained: the abscissa of C' is the abscissa of A' plus dx, and the ordinate of C' is the ordinate of A' plus dy;
3. Following steps 1 and 2, the standard mapping of the other positioning points is obtained in the same way, giving the standardized face-shape positioning points (a code sketch of this mapping follows the face-region steps below).
S2, dividing the face vertically along its center line, taking the thinner half and mirroring it to obtain a complete, symmetric face;
S3, correcting flaws of the face, on the basis of the feature base points, by translating the feature base points; flaws include an overly swollen or overly pointed chin, missing chin features, an overly swollen face, and the like; if the face is too swollen, a face-thinning operation is performed using the same arc-construction method as for the double eyelid layer, and if chin features are lost in the symmetric processing, the face center is relocated and the symmetry operation is performed again;
S4, constructing an ROI region from the optimized feature base points with a masking method;
S5, fusing the ROI region with the fixed head shape to obtain the face shape area;
S6, drawing shadows on the two sides of the face within the ROI region.
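The similar-triangle mapping in steps 1 to 3 of S1 can be written out as the short sketch below (plain Python); as the coordinate update above implies, it assumes the preset points A' and B' lie on a horizontal line.

import math

def map_to_standard(A, B, C, A_std, B_std):
    """Map landmark C into the standard (template) frame via the similar-triangle
    relation described in steps 1 and 2; A_std and B_std are the preset fixed
    points A' and B', assumed to lie on a horizontal line."""
    a = math.dist(B, C)   # |BC|
    b = math.dist(A, C)   # |AC|
    c = math.dist(A, B)   # |AB|
    cos_alpha = (b**2 + c**2 - a**2) / (2 * b * c)   # law of cosines, angle at A
    c_std = math.dist(A_std, B_std)                  # c' = |A'B'| is known
    b_std = b * c_std / c                            # similarity: b/b' = c/c'
    dx = b_std * cos_alpha
    dy = math.sqrt(max(b_std**2 - dx**2, 0.0))
    return (A_std[0] + dx, A_std[1] + dy)            # C' = A' + (dx, dy)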
The method may further comprise automatic drawing of a glasses layer: whether glasses are present in the face image is first judged by image edge detection; if glasses are present the glasses layer is drawn, otherwise it is skipped. The glasses layer includes a square attribute, a round attribute, a half-frame attribute and a full-frame attribute.
The foregoing detailed description is intended to illustrate rather than limit the invention; any changes and modifications that fall within the spirit and scope of the invention as defined by the appended claims are intended to be covered by those claims.

Claims (10)

1. A method for automatically drawing a human face cartoon is characterized by comprising the following steps:
acquiring a face image consisting of a plurality of regions according to a plurality of characteristic base points, wherein the regions comprise an eye region, an eyebrow region, a nose region, a mouth region, a hairstyle region and a face region;
drawing by taking an eye area, an eyebrow area, a nose area, a mouth area, a hairstyle area and a face area as independent modules respectively;
one or more areas of the eye area, the eyebrow area, the mouth area, the hair style area and the face area are formed by fusing a plurality of layers respectively;
fusing the eye area, the eyebrow area, the nose area, the mouth area, the hairstyle area and the face area which are respectively drawn with the cartoon background template;
the eye area is divided into an eye shadow layer, an eyeball layer, an eye white layer, a transition line layer, a foreground line layer and a double eyelid layer;
the automatic drawing method of the double eyelid drawing layer comprises the following steps:
a) Judging whether the human face image has double-fold eyelids or not by an image edge detection technology;
b) Translating the contour line on the eye acquired through the plurality of characteristic base points to acquire a first reference line of the double eyelid layer;
c) Defining the width of each position of the double-eyelid layer as h1, and acquiring the distance between adjacent characteristic base points according to the widest position of the double-eyelid layer and the number of the characteristic base points;
d) Taking any three adjacent characteristic base points and constructing a circle a1 passing through the three characteristic base points;
e) Taking the middle point of the three characteristic base points as a tangent point, and drawing a tangent line of the circle a1;
f) Moving the tangent point by h1/2 to each side along the direction perpendicular to the tangent line to obtain an up-shift point b and a down-shift point c;
g) Connecting the up-shift point b, the down-shift point c and the two end points of the first reference line in sequence, and filling the interior to obtain the double eyelid layer.
2. The automatic drawing method of the human face cartoon as claimed in claim 1, characterized in that: the eye shadow image layer, the eyeball image layer, the eye white image layer, the transition line image layer, the foreground line image layer and the double-eyelid image layer are sequentially fused.
3. The automatic drawing method of the human face cartoon as claimed in claim 1, characterized in that: in the step f), the circle a1 is used as the boundary, a down-shift point is positioned inside the circle a1, and an up-shift point is positioned outside the circle a1.
4. The automatic drawing method of the human face cartoon picture as claimed in claim 1, wherein the foreground line map layer comprises an upper contour foreground line map layer and a lower contour foreground line map layer, and the automatic drawing method comprises the following steps:
1) Acquiring a second reference line of the upper outline foreground line layer through a plurality of characteristic base points;
2) Defining the width of each part of an upper contour foreground line layer as h2, and acquiring the distance between adjacent characteristic base points according to the widest part of the upper contour foreground line layer and the number of the characteristic base points;
3) Taking any three adjacent characteristic base points and constructing a circle a2 passing through the three characteristic base points;
4) Taking the middle point of the three characteristic base points as a tangent point, and making a tangent line of the circle a2;
5) Moving the tangent point by h2/2 to each side along the direction perpendicular to the tangent line to obtain an up-shift point d and a down-shift point e;
6) Connecting the up-shift point d, the down-shift point e and the two end points of the second reference line in sequence, and filling the interior to obtain a first part of the upper contour foreground line layer;
7) Taking X points near the highest point of the first part of the upper contour foreground line layer, wherein X is greater than or equal to 4;
8) Fitting a polynomial with the X points and Y degrees of freedom, and determining a plurality of fixed points according to the polynomial;
9) Translating the end point A1 of the first part near the eye tail to obtain an end point A2;
10) Filling the area enclosed by the boundary of the first part, the plurality of fixed points and the end point A2 to obtain the upper contour foreground line layer;
11) Applying an affine transformation to a fixed lower contour foreground line slice to obtain the lower contour foreground line layer;
12) Fusing the upper contour foreground line layer and the lower contour foreground line layer.
5. The automatic drawing method of the human face cartoon according to claim 1, wherein the transition line layer comprises an upper transition line layer and a lower transition line layer, and the automatic drawing method of the transition line layer comprises the following steps:
i. delineating an upper eye contour line acquired through a plurality of feature base points;
ii. Gaussian-blurring the upper eye contour line to obtain an upper transition line layer;
iii. delineating a portion of a lower eye contour line acquired through a plurality of feature base points;
iv. Gaussian-blurring that portion of the lower eye contour line to obtain a lower transition line layer;
and v, fusing the upper transition line image layer and the lower transition line image layer to obtain a transition line image layer.
6. The automatic drawing method of the human face cartoon as claimed in claim 1, wherein the automatic drawing method of the eye shadow layer comprises the following steps: enlarging the eye contour line obtained from a plurality of characteristic base points by N times about its center, filling the enlarged region, applying Gaussian blur to the region, translating the whole region after processing, and then making the end of the region nearest the inner canthus coincide with the inner canthus of the eye contour line.
7. The automatic drawing method of the human face cartoon as claimed in claim 1, characterized in that: the eyebrow area is divided into an eyebrow-shaped layer and an eyebrow arch layer; the eyebrow-shaped layer is obtained by constructing an ROI (region of interest) region from a plurality of characteristic base points and filling it with a gradient color slice, the eyebrow arch layer is obtained by an affine transformation of a fixed eyebrow arch shape, and the eyebrow-shaped layer and the eyebrow arch layer are fused.
8. The automatic drawing method of the human face cartoon as claimed in claim 1, characterized in that: the mouth area is divided into a lip-shaped layer, a middle lip line layer, a lip corner layer and a highlight layer of the lower lip; the lip-shaped layer is obtained by constructing an ROI region from a plurality of characteristic base points and filling it with a gradient color slice; the middle lip line layer is obtained through a plurality of characteristic base points; a first conversion part and a second conversion part are intercepted at the two ends of the middle lip line, a rotation angle alpha is calculated from the slope of the end of the lip-shaped layer, and the first conversion part and the second conversion part are each rotated by the angle alpha about the corresponding end point of the middle lip line to obtain the lip corner layer; and the lip-shaped layer, the middle lip line layer, the lip corner layer and the highlight layer are fused in sequence.
9. The automatic drawing method of the human face cartoon as claimed in claim 1, wherein the automatic drawing method of the face area comprises the following steps:
s1, zooming the face acquired through a plurality of characteristic base points to adapt to a cartoon background template;
s2, dividing the face along its center line, taking the thinner half and mirroring it for symmetry;
s3, modifying the flaws of the face through translation of the characteristic base points on the basis of the characteristic base points;
s4, constructing an ROI (region of interest) by using the optimized feature base point;
and S5, fusing the ROI with a fixed head form.
10. The automatic drawing method of the human face cartoon as claimed in claim 1, characterized in that: the hairstyle area is divided into a front hair layer, a back hair layer and a bangs layer; the front hair layer and the back hair layer each include a straight-hair attribute, a large-curl attribute, a small-curl attribute, a long-hair attribute and a short-hair attribute; the bangs layer includes an air-bangs attribute, a side-swept-bangs attribute and a straight-bangs (Ji Liuhai) attribute; a plurality of deep learning models are constructed for the above attributes respectively, and the front hair layer, the back hair layer and the bangs layer are fused after the attributes are determined.
CN201910395963.9A 2019-05-14 2019-05-14 Automatic drawing method of human face cartoon Active CN110335332B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910395963.9A CN110335332B (en) 2019-05-14 2019-05-14 Automatic drawing method of human face cartoon

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910395963.9A CN110335332B (en) 2019-05-14 2019-05-14 Automatic drawing method of human face cartoon

Publications (2)

Publication Number Publication Date
CN110335332A CN110335332A (en) 2019-10-15
CN110335332B true CN110335332B (en) 2023-04-18

Family

ID=68139422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910395963.9A Active CN110335332B (en) 2019-05-14 2019-05-14 Automatic drawing method of human face cartoon

Country Status (1)

Country Link
CN (1) CN110335332B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111026492B (en) * 2019-12-10 2024-02-20 北京普源精电科技有限公司 Equipment and method for automatically managing interaction of display area
CN113409185B (en) * 2021-05-14 2024-03-05 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003016431A (en) * 2001-06-28 2003-01-17 Mitsubishi Electric Corp Generating device for portrait
CN103456010A (en) * 2013-09-02 2013-12-18 电子科技大学 Human face cartoon generation method based on feature point localization
CN105118023A (en) * 2015-08-31 2015-12-02 电子科技大学 Real-time video human face cartoonlization generating method based on human facial feature points
CN109376582A (en) * 2018-09-04 2019-02-22 电子科技大学 A kind of interactive human face cartoon method based on generation confrontation network

Also Published As

Publication number Publication date
CN110335332A (en) 2019-10-15

Similar Documents

Publication Publication Date Title
CN109376582B (en) Interactive face cartoon method based on generation of confrontation network
CN105844706B (en) A kind of full-automatic three-dimensional scalp electroacupuncture method based on single image
CN101853523B (en) Method for adopting rough drawings to establish three-dimensional human face molds
CN103456010B (en) A kind of human face cartoon generating method of feature based point location
CN112669447B (en) Model head portrait creation method and device, electronic equipment and storage medium
CN102800129B (en) A kind of scalp electroacupuncture based on single image and portrait edit methods
US11562536B2 (en) Methods and systems for personalized 3D head model deformation
KR102386642B1 (en) Image processing method and apparatus, electronic device and storage medium
CN111652123B (en) Image processing and image synthesizing method, device and storage medium
CN106652037B (en) Face mapping processing method and device
US20220292772A1 (en) Methods and systems for constructing facial position map
JP7462120B2 (en) Method, system and computer program for extracting color from two-dimensional (2D) facial images
CN110335332B (en) Automatic drawing method of human face cartoon
CN104157001A (en) Method and device for drawing head caricature
CN112221145A (en) Game face model generation method and device, storage medium and electronic equipment
CN117157673A (en) Method and system for forming personalized 3D head and face models
CN111243051A (en) Portrait photo-based stroke generating method, system and storage medium
CN109448093B (en) Method and device for generating style image
CN113724302A (en) Personalized glasses customizing method and customizing system
CN114663574A (en) Three-dimensional face automatic modeling method, system and device based on single-view photo
CN115409932A (en) Texture mapping and completion method of three-dimensional human head and face model
CN114972601A (en) Model generation method, face rendering device and electronic equipment
CN113870404A (en) Skin rendering method and device of 3D model
CN110544200A (en) method for realizing mouth interchange between human and cat in video
CN110060257B (en) RGBD hair segmentation method based on different hairstyles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant