CN107316333A - Method for automatically generating a Japanese manga-style portrait - Google Patents
Method for automatically generating a Japanese manga-style portrait Download PDF Info
- Publication number
- CN107316333A CN107316333A CN201710550145.2A CN201710550145A CN107316333A CN 107316333 A CN107316333 A CN 107316333A CN 201710550145 A CN201710550145 A CN 201710550145A CN 107316333 A CN107316333 A CN 107316333A
- Authority
- CN
- China
- Prior art keywords
- face
- hair
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/001—Texturing; Colouring; Generation of texture or colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/20—Drawing from basic elements, e.g. lines or circles
- G06T11/203—Drawing of straight lines or curves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Ophthalmology & Optometry (AREA)
- Image Analysis (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a method for automatically generating a Japanese manga-style portrait, comprising the steps of: 1. face detection, facial feature point detection, and segmentation of the facial parts; 2. matching the facial regions that carry the strongest manga recognition cues against the corresponding facial regions in a data set, where each facial region has a corresponding manga facial region in the data set; 3. generating the manga strokes of the other facial parts from the feature points and the stylistic features of Japanese manga; 4. finally composing the generated parts into a face manga according to the geometric relations between the facial parts. The method of the invention can generate the currently very popular two-dimensional (nijigen) cartoon image of Japanese manga from a real face, and has high entertainment and application value.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a method for automatically generating a Japanese manga-style portrait.
Background technology
In recent years, with the spread of digital media, Japanese manga and animation have reached an ever wider audience in China and are increasingly liked and pursued by the Chinese public, especially by young people, and the "two-dimensional" (nijigen) culture flourishes. One important reason is that Japanese manga has a drawing style and technique distinct from American, European and Chinese cartoons. However, drawing in the Japanese manga style requires a certain artistic gift and skill, and a person without a painting foundation who wants to draw a satisfying manga portrait generally needs to spend considerable time. In addition, with the wide variety of entertainment applications for beautifying photos on mobile terminals, the cartooning and sketching of faces is liked by more and more users, and research on the related techniques is growing accordingly.
Among related research, patent CN104077742A divides the face photo to be synthesised and the sketch samples into mutually non-overlapping pixel blocks, extracts the Gabor features of each facial part, computes the Stein divergence between pixel blocks, and synthesises each sketch pixel block by an optimal-weight method. Patent CN104123741A computes, in a binary edge map, the distance-transform values from non-edge pixels to edge pixels for the face image to be generated and for the same facial organ of the sketch face images in the training library, measures the similarity between them, and composes the sketch image from the face-sketch organs of each part. These methods, however, tend to retain the local features of the face while losing its overall topology, and what they generate are face sketches with a sketch-like or cartoon-like effect, so the style is relatively monotonous.
Summary of the invention
The object of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a method for automatically generating a Japanese manga-style portrait, which can generate the currently very popular two-dimensional cartoon image of Japanese manga from a real face, and which has high entertainment and application value.
To achieve the above object, the technical scheme provided by the present invention is a method for automatically generating a Japanese manga-style portrait, comprising the following steps:
S1. Detection of the face and facial feature points, and segmentation of the facial parts, as follows:
S1.1. Using a face classifier trained on Haar features, perform face detection on the picture from which the manga portrait is to be generated;
S1.2. Perform facial feature point detection on the face region detected in S1.1, obtaining 6 points for each eye, 5 points for each eyebrow, 9 points for the nose, 20 points for the mouth and 17 points for the facial contour;
S1.3. Using the feature points obtained in S1.2, crop out the left-eye and right-eye regions to obtain the eye regions of the face, and crop out the left-eyebrow and right-eyebrow regions to obtain the eyebrow regions of the face;
S1.4. Expand the face region detected in S1.1 to cover the hair region, and first judge the hair length: long hair or short hair is decided by whether there is hair below the jaw;
S1.5. On the basis of the long/short hair judgment in S1.4, construct an energy function and optimise it with the graph-cut method to segment and locate the hair region;
S1.6. From the feature points of the two eyes obtained in S1.2, find the centre point between the two eyes, and judge from the gradient change in the vertical direction at the centre point whether the face wears glasses.
S2. Match the facial regions that carry the strongest manga recognition cues, including the glasses and the hair, against the corresponding facial regions in the data set; each facial region has a corresponding manga facial region in the data set. The steps are as follows:
S2.1. For the eye region obtained in S1.3, compute the local gradient-orientation statistics of the image and extract its HOG features as a template; then extract the HOG features of the candidate eye images in the database, compute the distances one by one, and find the candidate eye image with the minimum distance; the manga eye corresponding to that candidate eye image is the matched manga image;
S2.2. For the hair mask segmented in S1.5 and the masks of the hair mangas in the data set, compute one by one the second- and third-order moments of each image, construct the 7-dimensional Hu invariant-moment feature vector, and then compute the Euclidean distances between them; the one with the minimum distance is the matching result.
S3. Generation of the manga strokes of the other facial parts, as follows:
S3.1. Using the eyebrow region obtained in S1.2, compute the gradient of the image in the vertical direction; the total vertical-gradient energy of a thick eyebrow is greater than that of a thin eyebrow, which distinguishes thick from thin eyebrows; then connect the eyebrow feature points with a B-spline curve to obtain the eyebrow stroke;
S3.2. According to the position of the nose obtained in S1.2, take two of the nose points and draw a short straight line between them to obtain the nose;
S3.3. From the correspondence between the mouth feature points, judge the open/closed state of the mouth, and then generate the mouth manga;
S3.4. From the feature points of the cheek contour obtained in S1.2, discard the two feature points on each side of the lowermost chin point and fit the remaining points with a B-spline curve, so that the chin of the resulting manga cheek contour is sharper.
S4. According to the geometric relations between the parts of the original face, combine the mangas matched and generated in S2 and S3 to obtain the final Japanese manga-style face manga.
In S1.4, the hair length is first judged, with long or short hair decided by whether there is hair below the jaw, as follows:
For the test face image, using the facial feature points obtained in S1.2, find three points of the face region, namely the two intersection points of the neck with the face and the bottom of the jaw, with coordinates P1(x1, y1), P2(x2, y2), P3(x3, y3), where x1, x2, x3 are abscissas and y1, y2, y3 are ordinates.
Convert the test face image to a grayscale image. Since the colour of hair is generally darker than the skin and the background, a gray threshold t can be determined from the training data through repeated experiments; binarise the image with this threshold, setting pixels below t to 0 and pixels above t to 1, so that the resulting binary image separates the dark hair regions from the other regions.
Count the number of pixels with value 0 in each row, so as to obtain the per-row count of 0-valued pixels, giving the hair statistics histogram hist_v. By checking whether any row between the jaw bottom and a set distance below it contains pixels of value 0, i.e. whether hist_v is greater than 0 in that band, one judges whether there is hair between the jaw bottom and the set distance below it, and hence the hair length.
In S1.5, on the basis of the long/short hair judgment in S1.4, an energy function is constructed and the hair region is located with the graph-cut method, as follows:
First, on a training set in which the hair regions have been annotated, the training data are divided into two kinds of pictures, long hair and short hair, and the prior distribution of the hair position in the face image is counted separately for each kind. The annotation result for pixel i is denoted P(l_i), where l_i is its label; the labelling of the whole image is assumed to be L = {l1, l2, ..., li, ..., lN}, with l_i = 0 denoting the hair region and 1 the non-hair region. P(l_i) is obtained as the ratio between the number of times this pixel position belongs to the hair region in the training images and the number of training images.
Then the energy function of the graph-cut method is constructed. It consists mainly of two parts, a region term R(L) and a boundary term B(L):
E(L) = aR(L) + B(L)
where
R(L) = Σ_i −ln P(l_i)
B(L) = Σ_{(p,q)∈N} B_<p,q> · δ(l_p, l_q), with B_<p,q> = exp(−(I_p − I_q)² / (2σ²)) and δ(l_p, l_q) = 1 if l_p ≠ l_q, 0 otherwise.
Here the region term R(L) is computed from the prior distribution of the training set; P(l_i) is the probability of assigning label l_i to pixel i, and each pixel of the image is assigned the label of its maximum probability; to make the energy minimal, the negative logarithm is taken. In the boundary term B(L), p and q are neighbouring pixels, I_p and I_q are the pixel values of p and q, N is the set of neighbouring pixel pairs of the image, δ(l_p, l_q) is the indicator function, l_p and l_q are the labels of p and q, B_<p,q> measures the similarity between p and q, and σ² is the variance of the pixel values; a smaller B_<p,q> indicates a larger difference between p and q.
The energy function E(L) of the graph-cut method is optimised with the max-flow method to obtain the final labelling of the whole image, and thus the segmented hair region.
In S1.6, the centre point between the two eyes is found, and whether the face wears glasses is judged from the gradient change in the vertical direction at the centre point, as follows:
Using the positions of the eye feature points obtained in S1.2, compute the centre position between the two eyes and take a rectangular region there. When glasses are present, the bridge of the frame between the eyes is darker than the skin, so the gradients along the horizontal line at the frame position are all relatively large.
Convert this rectangular region to a grayscale image and compute the gradient of the whole image in the vertical direction; then, for each column of the rectangle, take the two largest gradient values and record the ordinates of the largest and second-largest values as a_i ∈ A and b_i ∈ B, i = 1, 2, ..., n, where n is the width of the picture and A and B are coordinate sets with means μ_A and μ_B respectively; then compute the variances var(A) and var(B) of the pixel ordinates:
var(A) = (1/n) Σ_i (a_i − μ_A)², var(B) = (1/n) Σ_i (b_i − μ_B)²
If glasses are present, the gradient changes at the bridge of the frame concentrate near the same vertical coordinate, so these vertical coordinates are comparatively concentrated; hence, if the variances var(A) and var(B) of the vertical coordinate sets A and B are both smaller than the set threshold, it can be judged that the face wears glasses.
In S3.1, the gradient of the image in the vertical direction is computed from the eyebrow region obtained in S1.2; the total vertical-gradient energy of a thick eyebrow is greater than that of a thin eyebrow, which distinguishes thick from thin eyebrows; the feature points are then connected by a B-spline curve to obtain the eyebrow stroke. The curves generated for thick and thin eyebrows are as follows:
For a thin eyebrow, the five eyebrow points obtained in S1.2 are directly connected by a B-spline curve;
For a thick eyebrow, the five eyebrow points obtained in S1.2 are first fitted by a B-spline curve, and the first point is then reconnected to the fifth point.
In S3.4, from the feature points of the cheek contour obtained in S1.2, part of the feature points on both sides of the chin are discarded so that the chin of the resulting manga cheek contour is sharper, as follows:
Of the detected feature points, the two points on each side of the lowermost chin point are discarded and the remaining points are fitted with a B-spline curve, which yields the sharper chin effect of characters in Japanese manga.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. Distinctly different from existing cartoon-face generation, what the present invention generates is a Japanese manga-style face manga; Japanese manga is currently the most popular cartoon style, so the generation of manga-style faces has higher entertainment value.
2. A hair region is added to the generated animation portrait, making the result more lifelike.
3. Some indistinct and hard-to-discriminate features are simplified, and processing concentrates on the most salient facial parts, which simplifies the overall structure of the algorithm.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention.
Fig. 2 is the schematic diagram of the method of detecting whether the face wears glasses described in step S1.6 of the invention.
Fig. 3a is the schematic diagram of obtaining the thin-eyebrow manga stroke described in step S3.1 of the invention.
Fig. 3b is the schematic diagram of obtaining the thick-eyebrow manga stroke described in step S3.1 of the invention.
Fig. 4 is the schematic diagram of the cheek-contour drawing method described in step S3.4 of the invention.
Embodiment
The invention is further described below with reference to a specific embodiment.
As shown in Fig. 1, the method for automatically generating a Japanese manga-style portrait provided by this embodiment can be implemented with software programming, automatically carrying out the steps of generating the face manga. For an input face photo from which the manga is to be generated, the concrete steps are as follows:
S1. Face detection, facial feature point detection and segmentation of the facial parts:
S1.1. With a classifier based on Haar features, perform face detection on the input face photo to find the position of the face in the photo.
S1.2. Perform facial feature point detection on the face region detected in S1.1 using the regression local binary features method, obtaining 68 feature points: 6 points for each eye, 5 points for each eyebrow, 9 points for the nose, 20 points for the mouth and 17 points for the facial contour.
S1.3. Using the feature points obtained in S1.2, determine the position of the eye regions and crop out the left-eye and right-eye regions to obtain the eye regions of the face; likewise determine the position of the eyebrow regions and crop out the left-eyebrow and right-eyebrow regions to obtain the eyebrow regions of the face.
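The cropping in S1.3 can be sketched as follows. This is a minimal sketch, assuming the common 68-point landmark ordering (eyes at indices 36-47, brows at 17-26); the patent itself only states the per-part point counts, so this indexing and the 5-pixel margin are assumptions.

```python
import numpy as np

# Assumed 68-point landmark indexing; the patent only gives per-part counts
# (6 per eye, 5 per brow, 9 nose, 20 mouth, 17 contour).
REGIONS = {
    "left_brow": range(17, 22), "right_brow": range(22, 27),
    "left_eye": range(36, 42), "right_eye": range(42, 48),
}

def crop_region(landmarks, region, margin=5):
    """Bounding box (x0, y0, x1, y1) around one landmark group, padded by `margin`."""
    pts = np.asarray([landmarks[i] for i in REGIONS[region]])
    x0, y0 = pts.min(axis=0) - margin
    x1, y1 = pts.max(axis=0) + margin
    return int(x0), int(y0), int(x1), int(y1)
```

The returned box can then be used to slice the face image, e.g. `img[y0:y1, x0:x1]`, giving the eye or eyebrow region used in S2.1 and S3.1.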
S1.4. Expand the face region detected in S1.1 to a region that includes the hair, and first estimate the hair length: long hair or short hair is decided by whether there is hair below the jaw. For the test face image, using the facial feature points obtained in S1.2, find three points of the face region, as shown in Fig. 2: the two intersection points of the neck with the face and the bottom of the jaw, with coordinates P1(x1, y1), P2(x2, y2), P3(x3, y3), where x1, x2, x3 are abscissas and y1, y2, y3 are ordinates. Then convert the test face image to a grayscale image; since the colour of hair is generally darker than the skin and the background, the method determines a gray threshold t from the training data through repeated experiments and binarises the image with it, setting pixels below t to 0 and pixels above t to 1, so that the resulting binary image clearly separates the dark hair regions from the other regions. Then count the number of pixels with value 0 in each row, so as to obtain the per-row count of 0-valued pixels, giving the hair statistics histogram hist_v; by checking whether any row within a certain distance below the jaw bottom contains pixels of value 0, i.e. whether hist_v is greater than 0 in that band, the presence of hair below the jaw, and hence the hair length, is judged.
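The row-histogram test above can be sketched as a short function. The gray threshold `t` and the inspection band `band` below the jaw are tuning parameters that the patent says come from training data; the values used here are placeholders.

```python
import numpy as np

def is_long_hair(gray, jaw_y, t=80, band=40):
    """Decide long vs. short hair from the row histogram of dark pixels.

    gray  : 2-D uint8 face image, already expanded to include the hair area
    jaw_y : row index of the jaw bottom (landmark P3)
    t     : gray threshold separating dark hair from skin/background (assumed)
    band  : how many rows below the jaw to inspect (the patent's 'set distance')
    """
    binary = (gray >= t).astype(np.uint8)   # 0 = dark (candidate hair), 1 = light
    hist_v = (binary == 0).sum(axis=1)      # hist_v: dark-pixel count per row
    lo, hi = jaw_y, min(jaw_y + band, gray.shape[0])
    return bool(hist_v[lo:hi].sum() > 0)    # any dark pixel below the jaw -> long hair
```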
S1.5. On the basis of the long/short hair judgment in S1.4, construct an energy function and segment and locate the hair region with the graph-cut method:
First, on a training set in which the hair regions have been annotated, divide the training data into two kinds of pictures, long hair and short hair, and count the prior distribution of the hair position in the face image separately for each kind. The annotation result for pixel i is denoted P(l_i), where l_i is its label; the labelling of the whole image is assumed to be L = {l1, l2, ..., li, ..., lN}, with l_i = 0 denoting the hair region and 1 the non-hair region. P(l_i) is obtained as the ratio between the number of times this pixel position belongs to the hair region in the training images and the number of training images.
Then construct the energy function of the graph-cut method; it consists mainly of two parts, a region term R(L) and a boundary term B(L):
E(L) = aR(L) + B(L)
where
R(L) = Σ_i −ln P(l_i)
B(L) = Σ_{(p,q)∈N} B_<p,q> · δ(l_p, l_q), with B_<p,q> = exp(−(I_p − I_q)² / (2σ²)) and δ(l_p, l_q) = 1 if l_p ≠ l_q, 0 otherwise.
Here the region term R(L) is computed from the prior distribution of the training set; P(l_i) is the probability of assigning label l_i to pixel i, and we want each pixel of the image to be assigned the label of its maximum probability, which minimises the energy, hence the negative logarithm.
In the boundary term B(L), p and q are neighbouring pixels with pixel values I_p and I_q, N is the set of neighbouring pixel pairs of the image, δ(l_p, l_q) is the indicator function, l_p and l_q are the labels of p and q, B_<p,q> measures the similarity between p and q, and σ² is the variance of the pixel values; a smaller B_<p,q> indicates a larger difference between p and q. The boundary term B(L) expresses that if two neighbouring pixels differ very little, the possibility that they belong to the same target or the same background is large, whereas if they differ greatly, the two pixels probably lie on the edge between target and background, and the possibility of a segmentation boundary there is large.
Finally, for the energy function E(L) of the graph-cut method, the max-flow method can be used for optimisation, which yields the final labelling of the whole image and thus the segmented hair region.
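The energy E(L) = aR(L) + B(L) above can be sketched numerically. This is only an energy evaluator on a 4-connected grid, assuming the formulas as reconstructed; the actual optimisation in the patent is done by max-flow (e.g. a Boykov-Kolmogorov solver), which is not shown here.

```python
import numpy as np

def energy(labels, prior_hair, gray, sigma2=25.0, a=1.0):
    """E(L) = a*R(L) + B(L) on a 4-connected grid.

    labels     : (H, W) int array, 0 = hair, 1 = non-hair
    prior_hair : (H, W) prior probability that each pixel is hair, P(l_i = 0)
    gray       : (H, W) pixel intensities I
    sigma2     : variance sigma^2 in the boundary weight (assumed value)
    """
    eps = 1e-6
    p = np.clip(prior_hair, eps, 1 - eps)
    # Region term: -ln P(l_i) for the label actually assigned to each pixel.
    R = np.stack([-np.log(p), -np.log(1 - p)])        # shape (2, H, W)
    H, W = labels.shape
    rows, cols = np.arange(H)[:, None], np.arange(W)
    total = a * R[labels, rows, cols].sum()
    # Boundary term: B_<p,q> = exp(-(Ip-Iq)^2 / (2*sigma^2)), counted only
    # where neighbouring labels differ (delta(l_p, l_q) = 1).
    for dy, dx in ((0, 1), (1, 0)):
        lp, lq = labels[:H - dy, :W - dx], labels[dy:, dx:]
        ip = gray[:H - dy, :W - dx].astype(float)
        iq = gray[dy:, dx:].astype(float)
        b = np.exp(-(ip - iq) ** 2 / (2.0 * sigma2))
        total += b[lp != lq].sum()
    return float(total)
```

A labelling that agrees with the prior and cuts only across strong intensity edges yields a low E(L), which is exactly what the max-flow optimisation seeks.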
S1.6. From the feature points of the two eyes obtained in S1.2, compute the centre point between the eyes and take a rectangular region of size 10 × 30 there, as shown in Fig. 2. From Fig. 2 it can be seen that when glasses are present, the bridge of the frame between the eyes is generally darker than the skin, so the gradients along the horizontal line at the frame position are all relatively large.
Extract this rectangular region, convert it to a grayscale image, and compute the gradient values of the whole rectangular image in the vertical direction; then, for each column of the rectangle, take the two largest gradient values and record the ordinates of the largest and second-largest values as a_i ∈ A and b_i ∈ B, i = 1, 2, ..., n, where n is the width of the picture; then compute the variances var(A) and var(B) of the pixel ordinates:
var(A) = (1/n) Σ_i (a_i − μ_A)², var(B) = (1/n) Σ_i (b_i − μ_B)²
If glasses are present, the gradient changes at the bridge of the frame concentrate near the same vertical coordinate, so these vertical coordinates are comparatively concentrated; hence, if the variances var(A) and var(B) of the vertical coordinate sets A and B are both smaller than the set threshold, it can be judged that the face wears glasses.
S2. Match the facial regions with strong manga recognition cues against the corresponding facial regions in the data set; each facial region in the data set has a corresponding manga facial region. The concrete steps are:
S2.1. For the eye region obtained in S1.3, compute the local gradient-orientation statistics of the image and extract its HOG features as a template; then extract the HOG features of the candidate eye images in the database, compute the Euclidean distances one by one, and find the candidate eye image with the minimum distance; the manga eye corresponding to that candidate eye image is the matched eye manga image.
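The template matching of S2.1 can be sketched with a simplified descriptor. The patent uses full HOG features; the function below stands in with a single magnitude-weighted orientation histogram (an assumption made to keep the sketch self-contained), while the nearest-neighbour search over Euclidean distance is as described.

```python
import numpy as np

def orientation_histogram(gray, bins=9):
    """A minimal HOG-like descriptor: one global histogram of gradient
    orientations, weighted by gradient magnitude, then L2-normalised."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

def best_match(query, candidates):
    """Index of the candidate descriptor closest in Euclidean distance."""
    d = [np.linalg.norm(query - c) for c in candidates]
    return int(np.argmin(d))
```

In the real pipeline, `candidates` would hold the descriptors of the database eye images, and the matched index selects the corresponding manga eye.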
S2.2. For the hair mask segmented in S1.5, construct the 7 invariant moments from the second- and third-order normalised central moments of the image, and from these 7 invariant moments construct the 7-dimensional shape feature vector; construct the corresponding 7-dimensional Hu-moment feature vector in the same way for each hair in the data set; then compute one by one the Euclidean distances between the feature vector of the segmented hair mask and those of the hair manga masks in the data set; the one with the minimum distance is the matching result.
S3. Generation of the manga strokes of the other facial parts:
S3.1. Compute the gradient of the image in the vertical direction; the total vertical-gradient energy of a thick eyebrow is greater than that of a thin eyebrow, which distinguishes thick from thin eyebrows; then connect the feature points with a B-spline curve to obtain the eyebrow stroke. The curves generated for thick and thin eyebrows are specifically as follows:
For a thin eyebrow, the five eyebrow points obtained in S1.2 are directly connected by a B-spline curve, as shown in Fig. 3a;
For a thick eyebrow, the five eyebrow points obtained in S1.2 are first fitted by a B-spline curve, and the first point is then reconnected to the fifth point, as shown in Fig. 3b.
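S3.1 can be sketched in two parts: the thick/thin decision from vertical-gradient energy, and the stroke itself. The energy threshold is an assumed tuning value, and the uniform cubic B-spline below (with clamped endpoints) is one concrete choice of the B-spline the patent names.

```python
import numpy as np

def is_thick_brow(brow_gray, energy_threshold=5000.0):
    """Thick vs. thin from total vertical-gradient energy (threshold assumed)."""
    g = np.diff(brow_gray.astype(float), axis=0)
    return float((g ** 2).sum()) > energy_threshold

def bspline_curve(pts, samples=48):
    """Clamped uniform cubic B-spline through the first and last control point."""
    pts = np.asarray(pts, float)
    P = np.vstack([pts[0], pts[0], pts, pts[-1], pts[-1]])  # clamp the ends
    out = []
    per_seg = samples // (len(P) - 3) + 1
    for i in range(len(P) - 3):
        for t in np.linspace(0, 1, per_seg, endpoint=False):
            b = np.array([(1 - t) ** 3,
                          3 * t ** 3 - 6 * t ** 2 + 4,
                          -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
                          t ** 3]) / 6.0
            out.append(b @ P[i:i + 4])
    return np.asarray(out)

def brow_stroke(brow_pts, thick):
    """Thin brow: open spline through the 5 points; thick brow: outline closed
    by reconnecting the first point to the fifth, as in Fig. 3b."""
    curve = bspline_curve(brow_pts)
    if thick:
        curve = np.vstack([curve, curve[0]])   # close the outline
    return curve
```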
S3.2. Since the nose in Japanese manga is fairly simple and is essentially expressed by a single slanted stroke, according to the position of the nose obtained in S1.2, take the nose centre point and a point beside it, and connect the two points with a short straight line to obtain the nose.
S3.3. The generation of the mouth manga first requires judging whether the mouth is closed or open. The invention judges the mouth state from the relation between the mouth corners and the middle of the lips in the closed and open states; for an open mouth, the manga mouth is obtained by connecting the feature points of the upper and lower lips, while for a closed mouth the upper and lower lips coincide, so only the feature points of the upper lip are connected.
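The open/closed test of S3.3 can be sketched with the inner-lip landmarks. The 68-point indices (corners at 60/64, inner lips at 62/66) and the gap ratio are assumptions; the patent only states that the relation between the corners and the middle of the lips is used.

```python
def mouth_open(landmarks, gap_ratio=0.1):
    """Mouth state from inner-lip landmarks (68-point indexing assumed:
    62 = upper inner lip, 66 = lower inner lip, 60/64 = mouth corners)."""
    upper, lower = landmarks[62], landmarks[66]
    left, right = landmarks[60], landmarks[64]
    width = abs(right[0] - left[0])
    # Lip gap larger than a fraction of the mouth width -> mouth open.
    return (lower[1] - upper[1]) > gap_ratio * width
```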
S3.4. The generation of the cheek contour line must also consider the stylistic features of Japanese manga. From the feature points of the cheek contour obtained in S1.2, as shown in Fig. 4, the two detected points on each side of the lowermost chin point are discarded and the remaining points are fitted with a B-spline curve, which yields the sharper chin effect of characters in Japanese manga; this makes the chin of the resulting manga cheek contour sharper and more like the contour of a Japanese manga character, while retaining the basic shape features of the original face.
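The point selection of S3.4 can be sketched as follows; the remaining points would then be fed to the same B-spline fit as the eyebrows. Identifying the lowermost chin point as the contour point with the largest ordinate is an assumption (image coordinates with y growing downward).

```python
def chin_points_for_fit(contour_pts):
    """Drop the two points on each side of the lowermost chin point, per S3.4,
    so the spline fit over the remaining points yields a sharper chin."""
    bottom = max(range(len(contour_pts)), key=lambda i: contour_pts[i][1])
    drop = {bottom - 2, bottom - 1, bottom + 1, bottom + 2}
    return [p for i, p in enumerate(contour_pts) if i not in drop]
```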
S4. According to the relations between the parts of the original face, combine the mangas matched and generated in S2 and S3 following the positional relations between the regions of the original face, to obtain the final Japanese manga-style face manga.
The embodiment described above is only a preferred embodiment of the invention, and the scope of practice of the invention is not limited thereto; any change made according to the shapes and principles of the present invention shall be covered within the protection scope of the present invention.
Claims (6)
1. A method for automatically generating a Japanese manga-style portrait, characterised in that it comprises the following steps:
S1. Detection of the face and facial feature points, and segmentation of the facial parts, as follows:
S1.1. Using a face classifier trained on Haar features, perform face detection on the picture from which the manga portrait is to be generated;
S1.2. Perform facial feature point detection on the face region detected in S1.1, obtaining 6 points for each eye, 5 points for each eyebrow, 9 points for the nose, 20 points for the mouth and 17 points for the facial contour;
S1.3. Using the feature points obtained in S1.2, crop out the left-eye and right-eye regions to obtain the eye regions of the face, and crop out the left-eyebrow and right-eyebrow regions to obtain the eyebrow regions of the face;
S1.4. Expand the face region detected in S1.1 to cover the hair region, and first judge the hair length: long hair or short hair is decided by whether there is hair below the jaw;
S1.5. On the basis of the long/short hair judgment in S1.4, construct an energy function and optimise it with the graph-cut method to segment and locate the hair region;
S1.6. From the feature points of the two eyes obtained in S1.2, obtain the centre point between the two eyes, and judge from the gradient change in the vertical direction at the centre point whether the face wears glasses;
S2, by day in unrestrained identification reach the human face region of requirement, including glasses and hair, face area corresponding with data set
Domain is matched, and each human face region has its corresponding caricature human face region in data set, and step is as follows:
S2.1, the human eye area to being obtained in S1.3, the gradient direction local feature value of statistical picture, extract the Hog of image
Feature is as template, then extracts the Hog features of candidate's eye image in database, and Euclidean distance calculating is carried out one by one, is found out
The minimum candidate's eye image of distance, the corresponding caricature human eye of candidate's eye image is exactly the cartoon image that matching is obtained;
S2.2: for the hair mask segmented in S1.5 and the masks of the hair caricatures in the data set, computing the second- and third-order moments of each image, constructing the 7-dimensional Hu moment feature vectors, and computing the Euclidean distances between them; the candidate at the smallest distance is the matching result;
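The 7-dimensional shape descriptor of S2.2 can be sketched as the classical Hu moment invariants, built from normalized second- and third-order central moments of the binary hair mask. This is a textbook construction under the assumption that the mask is a 2-D list of 0/1 values; it is not claimed to match the patent's exact normalization:

```python
def hu_moments(mask):
    """The 7 Hu moment invariants of a binary mask (2-D list of 0/1)."""
    # Raw moments m_pq = sum over pixels of x^p * y^q * value
    def m(p, q):
        return sum((x ** p) * (y ** q) * v
                   for y, row in enumerate(mask) for x, v in enumerate(row))
    m00 = m(0, 0)
    xc, yc = m(1, 0) / m00, m(0, 1) / m00            # centroid
    # Normalized central moments eta_pq = mu_pq / m00^(1 + (p+q)/2)
    def eta(p, q):
        mu = sum(((x - xc) ** p) * ((y - yc) ** q) * v
                 for y, row in enumerate(mask) for x, v in enumerate(row))
        return mu / (m00 ** (1 + (p + q) / 2))
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return [
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
        (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
        (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
        + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
    ]
```

For a perfectly symmetric mask the second invariant and all third-order invariants vanish, which is a quick sanity check on the formulas.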
S3: generating the caricature strokes of the other facial parts, as follows:
S3.1: for the eyebrow region obtained in S1.2, computing the vertical gradient of the image to judge whether the eyebrow is thick or thin; the total vertical-gradient energy of a thick eyebrow exceeds that of a thin eyebrow, which distinguishes the two; the eyebrow feature points are then connected by B-spline curves to obtain the eyebrow stroke;
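The thick/thin test of S3.1 amounts to summing absolute vertical differences over the brow patch. A minimal sketch, assuming the patch is a 2-D list of gray values; the decision threshold itself would have to be learned from examples and is not given here:

```python
def vertical_gradient_energy(gray):
    """Total absolute vertical-gradient energy over a 2-D gray patch.
    A dense, dark brow yields more vertical-edge energy than a sparse one."""
    return sum(
        abs(gray[y + 1][x] - gray[y][x])      # difference between adjacent rows
        for y in range(len(gray) - 1)
        for x in range(len(gray[0]))
    )
```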
S3.2: according to the nose position obtained in S1.2, drawing a short straight line through two of the nose points to obtain the nose;
S3.3: judging the open or closed state of the mouth from the correspondence among the mouth feature points, then generating the caricature image of the mouth;
S3.4: among the cheek-contour feature points obtained in S1.2, discarding the two feature points on each side of the chin-bottom feature point and fitting the remaining points with B-spline curves, so that the chin of the connected caricature cheek contour has a sharp, pointed effect;
S4: according to the geometric relationships among the parts of the original face, combining the caricatures matched and generated in S2 and S3 to obtain the final Japanese-manga-style face caricature.
2. The method for automatically generating a Japanese-manga-style portrait according to claim 1, wherein the hair-length judgment in S1.4, classifying the hair as long or short according to whether hair is present below the jaw, is specifically as follows:
for the test face image, three points of the face region are located using the facial feature points obtained in S1.2: the junction points of the neck with the face on both sides and the bottom of the jaw, with coordinates P1(x1, y1), P2(x2, y2), P3(x3, y3), where x1, x2, x3 are abscissas and y1, y2, y3 are ordinates;
the test face image is converted to a gray-level image; since hair has the lowest gray values relative to skin and background, a gray threshold t is determined from the training data through repeated experiments; the image is binarized with this threshold, pixels below t set to 0 and pixels above t set to 1, which separates the hair region, whose gray values do not reach the threshold, from all other regions;
the number of pixels valued 0 in each row is counted, giving the hair statistics histogram hist_v; whether hair exists at the jaw bottom and within a fixed distance below it is judged by whether any row in that range contains 0-valued pixels, i.e. whether hist_v is greater than 0 there, and the hair length is determined accordingly.
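The row-histogram test of claim 2 can be sketched as follows, assuming the image is a 2-D list of gray values and the jaw-bottom row index is already known from the landmarks; the threshold t = 60 and the 30-row band below the jaw are illustrative values, not from the patent:

```python
def is_long_hair(gray, jaw_y, t=60, band=30):
    """Decide long vs. short hair: binarize against threshold t, build the
    per-row count of dark (hair) pixels, and report whether any row in a
    fixed band below the jaw line (row jaw_y) still contains hair."""
    # hist_v: number of pixels darker than t in each row
    hist_v = [sum(1 for v in row if v < t) for row in gray]
    # Hair below the jaw means long hair
    return any(hist_v[y] > 0
               for y in range(jaw_y, min(jaw_y + band, len(gray))))
```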
3. The method for automatically generating a Japanese-manga-style portrait according to claim 1, wherein, in S1.5, constructing the energy function on the basis of the long/short hair judgment of S1.4 and locating the hair region with the graph-cut method is specifically as follows:
first, in a training set in which the hair regions have been marked, the training data are divided into long-hair and short-hair pictures; for long hair and for short hair separately, the prior distribution of hair position over the face images is counted; the marking result for pixel i is denoted P(l_i), where l_i is its label; assuming the labeling of the entire image is L = {l_1, l_2, ..., l_i, ..., l_N}, l_i = 0 denotes the hair region and l_i = 1 the non-hair region; P(l_i) is obtained as the ratio between the number of times this pixel position belongs to a hair region in the training images and the number of training images;
then the energy function of the graph-cut method is constructed; it consists of two parts, a region term R(L) and a boundary term B(L):

$$E(L) = aR(L) + B(L)$$
$$R(L) = -\sum_{i \in N} \log P(l_i)$$
$$B(L) = \sum_{\{p,q\} \in N} B_{\langle p,q\rangle} \cdot \delta(l_p, l_q)$$
where

$$\delta(l_p, l_q) = \begin{cases} 0, & \text{if } l_p = l_q \\ 1, & \text{if } l_p \neq l_q \end{cases}$$
$$B_{\langle p,q\rangle} = \exp\!\left(-\frac{(I_p - I_q)^2}{2\sigma^2}\right)$$
where the region term R(L) is computed from the prior distribution in the training set, and P(l_i) is the probability that pixel i is assigned label l_i; each pixel in the image is given the label that maximizes its probability, and the energy function is built from this, the negative sign being taken so that the optimal labeling minimizes the energy; in the boundary term B(L), p and q are neighboring pixels, I_p and I_q are their pixel values, N is the set of all pixels of the image, δ(l_p, l_q) is an indicator function, l_p and l_q are the labels of pixels p and q, B_<p,q> measures the similarity between pixels p and q, and σ² is the pixel variance; a smaller B_<p,q> indicates a larger difference between p and q;
the graph-cut energy function E(L) is then optimized with the max-flow method, yielding the label result for the entire image and hence the segmented hair region.
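The two-term energy of claim 3 can be evaluated directly for a given labeling. The sketch below does this for a 1-D strip of pixels (neighbors are adjacent indices) rather than a full image graph, and it does not perform the max-flow optimization itself; prior[i][l] stands in for the learned P(l_i), and the parameters a and sigma are illustrative:

```python
import math

def graph_cut_energy(labels, prior, pixels, a=1.0, sigma=10.0):
    """Evaluate E(L) = a*R(L) + B(L) for one labeling of a pixel strip."""
    # Region term: R(L) = -sum_i log P(l_i)
    region = -sum(math.log(prior[i][l]) for i, l in enumerate(labels))
    # Boundary term: B_<p,q> = exp(-(I_p - I_q)^2 / (2 sigma^2)),
    # counted only where delta(l_p, l_q) = 1, i.e. the labels differ
    boundary = sum(
        math.exp(-(pixels[p] - pixels[p + 1]) ** 2 / (2 * sigma ** 2))
        for p in range(len(labels) - 1)
        if labels[p] != labels[p + 1]
    )
    return a * region + boundary
```

A max-flow/min-cut solver would search over all labelings L for the one minimizing this quantity; here only the objective is shown.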
4. The method for automatically generating a Japanese-manga-style portrait according to claim 1, wherein, in S1.6, finding the center of the two eyes and judging whether the face wears glasses from the gradient changes along the vertical direction through that center is specifically as follows:
using the positions of the eye feature points obtained in S1.2, the center position between the two eyes is computed and a rectangular region is taken there; when glasses are worn, the bridge of the frame between the eyes has smaller gray values than the skin, so the gradient values along the horizontal line at the frame position are larger than those at other positions;
this rectangular region is converted to a gray-level image and the vertical gradient of the whole region is computed; on each column of the rectangle the two largest gradient values are found, and the ordinates of the largest and the second-largest values are denoted a_i ∈ A and b_i ∈ B, i = 1, 2, ..., n, where n is the width of the rectangle in pixels, A and B are the coordinate sets, and μ_A and μ_B are their means; the variances var(A) and var(B) of the pixel ordinates are then computed:
$$\mathrm{var}(A) = \frac{1}{n-1}\sum_{i=1}^{n}(a_i - \mu_A)^2$$
$$\mathrm{var}(B) = \frac{1}{n-1}\sum_{i=1}^{n}(b_i - \mu_B)^2$$
if glasses are worn, the gradient changes at the bridge of the frame concentrate near the same vertical coordinate, so these vertical coordinates cluster tightly; therefore, when the variances var(A) and var(B) of the vertical-coordinate sets A and B are both smaller than a preset threshold, the face is judged to wear glasses.
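The column-wise statistics of claim 4 can be sketched as follows, assuming the bridge rectangle is a 2-D list of gray values; only the variances are computed, since the glasses threshold itself is a preset value not given in the claim:

```python
def column_gradient_variances(gray):
    """For each column of the patch, find the rows of the two largest
    vertical-gradient magnitudes, then return the sample variances of
    those two row-index sets (A: largest, B: second-largest)."""
    n_rows, n_cols = len(gray), len(gray[0])
    A, B = [], []
    for x in range(n_cols):
        # |vertical gradient| down this column
        g = [abs(gray[y + 1][x] - gray[y][x]) for y in range(n_rows - 1)]
        order = sorted(range(len(g)), key=lambda y: g[y], reverse=True)
        A.append(order[0])          # row of the largest gradient
        B.append(order[1])          # row of the second-largest gradient
    def var(xs):
        mu = sum(xs) / len(xs)
        return sum((v - mu) ** 2 for v in xs) / (len(xs) - 1)
    return var(A), var(B)
```

A dark horizontal frame produces the same two edge rows in every column, so both variances collapse toward zero; a frameless patch scatters the maxima and inflates them.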
5. The method for automatically generating a Japanese-manga-style portrait according to claim 1, wherein, in S3.1, computing the vertical gradient of the eyebrow region obtained in S1.2, distinguishing thick from thin eyebrows by the fact that the total vertical-gradient energy of a thick eyebrow exceeds that of a thin eyebrow, and connecting the feature points with B-spline curves to obtain the eyebrow stroke, with different curves generated for thick and for thin eyebrows, is specifically as follows:
for a thin eyebrow, the five eyebrow points obtained in S1.2 are directly connected by a B-spline curve;
for a thick eyebrow, the five eyebrow points obtained in S1.2 are first fitted by a B-spline curve, and the first and the fifth points are then connected to close the outline.
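The B-spline stroke of claim 5 can be sketched with one segment of a uniform cubic B-spline in basis-matrix form; sliding a four-point window over the five brow points and sweeping t over [0, 1] traces the curve. The formulation below is the standard uniform cubic basis, not necessarily the exact spline the patent uses:

```python
def bspline_point(p0, p1, p2, p3, t):
    """Evaluate one segment of a uniform cubic B-spline at t in [0, 1],
    given four consecutive control points (tuples of coordinates)."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0
    b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0
    b3 = t ** 3 / 6.0
    # The four basis weights always sum to 1 (affine invariance)
    return tuple(
        b0 * a + b1 * b + b2 * c + b3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )
```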
6. The method for automatically generating a Japanese-manga-style portrait according to claim 1, wherein, in S3.4, discarding the two feature points on each side of the chin-bottom feature point among the cheek-contour feature points obtained in S1.2, so that the chin of the connected caricature cheek contour is sharp, is specifically as follows:
among the detected feature points, the two points on each side of the lowest chin point are discarded and the remaining points are fitted with B-spline curves, which produces the sharp chin effect of Japanese-manga characters.
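The point pruning of claim 6 is a simple index filter over the contour points; the function name and index convention below are illustrative:

```python
def prune_chin_points(contour, chin_idx):
    """From the cheek-contour points, drop the two points on each side of
    the chin-bottom point (index chin_idx) before curve fitting, so the
    fitted chin comes to a sharper point."""
    dropped = {chin_idx - 2, chin_idx - 1, chin_idx + 1, chin_idx + 2}
    return [p for i, p in enumerate(contour) if i not in dropped]
```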
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710550145.2A CN107316333B (en) | 2017-07-07 | 2017-07-07 | A method of it automatically generates and day overflows portrait |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107316333A true CN107316333A (en) | 2017-11-03 |
CN107316333B CN107316333B (en) | 2019-10-18 |
Family
ID=60178469
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710550145.2A Active CN107316333B (en) | 2017-07-07 | 2017-07-07 | A method of it automatically generates and day overflows portrait |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107316333B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107945244A (en) * | 2017-12-29 | 2018-04-20 | 哈尔滨拓思科技有限公司 | A kind of simple picture generation method based on human face photo |
CN108009470A (en) * | 2017-10-20 | 2018-05-08 | 深圳市朗形网络科技有限公司 | A kind of method and apparatus of image zooming-out |
CN108109115A (en) * | 2017-12-07 | 2018-06-01 | 深圳大学 | Enhancement Method, device, equipment and the storage medium of character image |
CN108596839A (en) * | 2018-03-22 | 2018-09-28 | 中山大学 | A kind of human-face cartoon generation method and its device based on deep learning |
CN109829486A (en) * | 2019-01-11 | 2019-05-31 | 新华三技术有限公司 | Image processing method and device |
CN109902635A (en) * | 2019-03-04 | 2019-06-18 | 司法鉴定科学研究院 | A kind of portrait signature identification method based on example graph |
CN109919081A (en) * | 2019-03-04 | 2019-06-21 | 司法鉴定科学研究院 | A kind of automation auxiliary portrait signature identification method |
CN110276809A (en) * | 2018-03-15 | 2019-09-24 | 深圳市紫石文化传播有限公司 | Method and apparatus for face image processing |
CN110414345A (en) * | 2019-06-25 | 2019-11-05 | 北京汉迪移动互联网科技股份有限公司 | Cartoon image generation method, device, equipment and storage medium |
CN112907569A (en) * | 2021-03-24 | 2021-06-04 | 北京房江湖科技有限公司 | Head image area segmentation method and device, electronic equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101477696A (en) * | 2009-01-09 | 2009-07-08 | 彭振云 | Human character cartoon image generating method and apparatus |
CN101527049A (en) * | 2009-03-31 | 2009-09-09 | 西安交通大学 | Generating method of multiple-style face cartoon based on sample learning |
CN103218838A (en) * | 2013-05-11 | 2013-07-24 | 苏州华漫信息服务有限公司 | Automatic hair drawing method for human face cartoonlization |
CN103456010A (en) * | 2013-09-02 | 2013-12-18 | 电子科技大学 | Human face cartoon generation method based on feature point localization |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||