CN107153806A - Face detection method and apparatus - Google Patents
- Publication number
- CN107153806A (application number CN201610120358.7A)
- Authority
- CN
- China
- Prior art keywords
- face
- region
- window
- human face
- face region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a face detection method and apparatus. In the face detection method, an image to be processed is obtained; face region detection is performed on the image using M levels of windows of different sizes, M being an integer greater than 1; the confidence of each detected face region is determined; and facial contour point detection is performed on the face region with the highest confidence. Because the method performs facial contour detection only on the highest-confidence face region, a degree of robustness is guaranteed, which in turn ensures the accuracy of the detection result to a certain extent.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a face detection method and apparatus.
Background
With the development of computer technology, and of pattern recognition technology in particular, face detection has acquired important theoretical and practical value. Face detection refers to searching any given image, using some strategy, to determine whether it contains a face; if a face is detected, its position, size and pose can also be returned.
At present, face detection technology is widely used in three main areas: 1) automatic face recognition systems, which first detect whether a face is present in an image, determine its position and size if so, and then identify the face; 2) media and entertainment, where in virtual online worlds a great deal of entertainment value can be produced by transforming faces, and where face-based entertainment features are increasingly common in consumer electronics such as mobile phones and digital cameras; 3) image search, where search engines based on face image recognition have broad application prospects: with an image as the query, they can determine whether it contains a face and, if so, search for the face while searching for similar images.
Summary of the invention
The embodiments of the present invention provide a face detection method and apparatus.
A face detection method provided in an embodiment of the present invention includes:
obtaining an image to be processed;
performing face region detection on the image using M levels of windows of different sizes, M being an integer greater than 1;
determining the confidence of each detected face region; and
performing facial contour point detection on the face region with the highest confidence.
A face detection apparatus provided in an embodiment of the present invention includes:
an acquisition module for obtaining an image to be processed;
a first detection module for performing face region detection on the image using M levels of windows of different sizes, M being an integer greater than 1;
a determining module for determining the confidence of each detected face region; and
a second detection module for performing facial contour point detection on the face region with the highest confidence.
In the embodiments of the present invention, face region detection is performed on the image to be processed using M levels of windows of different sizes, the confidence of each detected face region is determined, and facial contour point detection is performed on the highest-confidence face region. Because contour detection operates only on the highest-confidence region, a degree of robustness is guaranteed, which in turn ensures the accuracy of the detection result to a certain extent.
Brief description of the drawings
Fig. 1 is a flow diagram of the face detection method provided in an embodiment of the present invention;
Fig. 2 is a flow diagram of face detection on a single candidate region;
Fig. 3 is a flow diagram of the window sliding strategy;
Fig. 4 is a schematic diagram of the window sliding strategy;
Fig. 5a to Fig. 5d are schematic diagrams of detection results obtained by the face detection method;
Fig. 6 is a flow diagram of the makeup method provided in an embodiment of the present invention;
Fig. 7 is a schematic diagram of eye contour points;
Fig. 8 shows the eye makeup effect;
Fig. 9 is a schematic diagram of lip contour points;
Fig. 10 shows the lip makeup effect;
Fig. 11 is a schematic diagram of beard contour points;
Fig. 12 shows the beard makeup effect;
Fig. 13a to Fig. 13d show face detection and makeup results;
Fig. 14 is a structural diagram of the face detection apparatus provided in an embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings. Clearly, the described embodiments are only some, rather than all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art on the basis of these embodiments, without creative effort, fall within the scope of protection of the invention.
Referring to Fig. 1, a flow diagram of a face detection method provided in an embodiment of the present invention, the flow may be performed by an electronic device or apparatus with image processing capability. As shown, the flow may include the following steps:
Step 101: Obtain the image to be processed.
Step 102: Perform face region detection on the image using M levels of windows of different sizes, M being an integer greater than 1.
In face detection, windows of different sizes are generally used to traverse the whole image so as to adapt to faces of different sizes, with face detection performed on each candidate region.
The window sizes used for face detection can be preset. In a preferred scheme, the first through N-th levels of windows are obtained from the size of an initial window and a preset amplification coefficient, where the size of the (j+1)-th level window is obtained by amplifying the size of the j-th level window by the amplification coefficient, 1 <= j <= N-1, and N is an integer. For example, with an amplification coefficient of 1.1 and a first window of size 10 × 10 (in pixels, likewise below), the length and width of the first window are each multiplied by the coefficient, so the second window is 11 × 11; the third window's length and width are each the second window's multiplied by 1.1; and so on.
Preferably, M levels of windows are selected from the N levels above for the subsequent face region detection. Normally N is fairly large (32 levels are common), the computation is huge, and it is time-consuming, so selecting only windows of some of the sizes for subsequent detection reduces the amount of computation, saves time, and lowers the hardware requirements (for example, the requirements on the processor and memory).
Preferably, the M levels are selected from the N levels at equal intervals. For example, among 32 levels of windows obtained from the initial window size and the preset amplification coefficient, ordered from smallest to largest as window 1, window 2, ..., window 32, the eight levels window 4, window 8, window 12, window 16, window 20, window 24, window 28 and window 32 are selected. Feature extraction is performed with these eight levels only, but because the eight selected levels are uniformly and discretely distributed among the 32 levels, the need to detect faces of different sizes can still be met. The fewer levels selected, the less computation and the faster the operation, but the lower the robustness; conversely, more levels mean more computation and slower operation but higher robustness. In a specific implementation, therefore, the number of window levels can be determined by the requirements of different scenarios on robustness and operation speed.
In other embodiments, the M levels selected from the N levels of windows of different sizes need not be uniformly distributed; for example, in a scenario where the face sizes in the images to be processed are fairly uniform, windows of the corresponding sizes can be chosen according to the likely face size.
As shown in Fig. 2, when performing face detection with one level of window, the window is slid over the image to be processed to obtain candidate regions, and the following steps are performed for each candidate region obtained:
Step 1021: Select a candidate region with this level of window and perform feature extraction on it using the feature templates corresponding to this level of window.
Step 1022: If the result computed by the cascade classifier on the extracted features exceeds the cascade classifier's threshold, the candidate region is judged to be a face region, where the threshold of the cascade classifier is obtained by turning down the threshold obtained from sample training.
In the above steps, the candidate region is determined to be a face region if every stage of the cascade classifier judges it to be one. Under certain conditions, some stages of the cascade can also be skipped, with face detection performed on the candidate region by the remaining stages only.
The threshold of the cascade classifier is obtained through sample training. Preferably, in the embodiments of the present invention, after the cascade classifier is trained on samples, its threshold is turned down. Appropriately lowering the threshold, for example to 94%–98% of the original in some scenarios, reduces the rejection rate of face detection (the probability that a face region is judged to be a non-face region). On the one hand, this improves robustness against interfering factors such as occlusion, lighting and glasses; on the other hand, it compensates for the effect of reducing the number of window sizes used for feature extraction. Lowering the rejection rate may, however, raise the false-acceptance rate (the probability that a non-face region is judged to be a face region), so the amount by which the threshold is lowered should balance the rejection rate against the false-acceptance rate in order to keep the system robust.
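The staged judgment with a turned-down threshold might look like the following sketch (an assumed interface, not the patent's implementation: each cascade stage contributes a scalar response that is compared against that stage's trained threshold scaled to 96%, inside the 94%–98% range mentioned above):

```python
def cascade_decision(stage_scores, trained_thresholds, scale=0.96):
    """Judge one candidate region: every stage's trained threshold is lowered to
    'scale' times its trained value, reducing the rejection rate. The region is
    a face only if it passes every stage of the cascade."""
    for score, threshold in zip(stage_scores, trained_thresholds):
        if score <= threshold * scale:  # rejected by this stage
            return False
    return True
```

A stage score of 3.9 against a trained threshold of 4.0 would be rejected at the trained threshold but accepted at 96% of it (3.84) — exactly the kind of borderline, partially occluded face the lowered threshold is meant to keep.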
In the flow shown in Fig. 2, each level of window may correspond to multiple feature templates. Multiple feature templates may be used in step 1021, and these may be Haar feature templates; correspondingly, in step 1022, detection is performed with an AdaBoost cascade classifier based on Haar features. Other feature templates may also be used; the embodiments of the present invention place no limitation on this.
In step 1021, the window of this level is slid in turn, left to right and top to bottom, according to a set horizontal step and vertical step, yielding multiple candidate regions; after each candidate region is selected, feature extraction is performed on it and the cascade classifier judges, from the extracted feature values, whether it is a face region. Normally the horizontal step equals the width of this level of window and the vertical step equals its height.
Preferably, the window sliding strategy can be adjusted according to the detection results on the candidate regions, to reduce computation and save time. A specific window sliding strategy may be as shown in Fig. 3:
If the current candidate region is judged to be a face region, the region within m−1 times the second step after the current candidate region in the second direction is marked as non-face, and the window is slid n times the first step in the first direction to obtain a region to be selected (step 301, step 305, step 306, step 307). This is normally done when the sliding distance between the current candidate region and the boundary of the image in the first direction is greater than or equal to n times the first step. When that distance is less than n times the first step, the window jumps, in the second direction, back to the starting position in the first direction to obtain the region to be selected (step 301, step 305, step 306, step 308). Here m and n are integers greater than 1, normally no more than 4.
If the obtained region to be selected has already been marked as non-face, and the sliding distance between it and the image boundary in the first direction is greater than or equal to the first step, the window is slid by the first step to obtain the next candidate region (step 304, step 302, step 303, step 304, step 309); if the region to be selected is not marked as non-face, it becomes the next candidate region (step 304, step 309).
If the current candidate region is judged to be a non-face region, the window is slid by the first step in the first direction, starting from the current candidate region, to obtain the region to be selected (step 301, step 302, step 303). This is normally done when the sliding distance between the current candidate region and the image boundary in the first direction is greater than or equal to the first step. When that distance is less than the first step, the window jumps, in the second direction, back to the starting position in the first direction to obtain the region to be selected (step 301, step 302, step 308).
If the obtained region to be selected has already been marked as non-face, and the sliding distance between it and the image boundary in the first direction is greater than or equal to the first step, the window is slid by the first step to obtain the next candidate region (step 304, step 302, step 303, step 304, step 309); if the region to be selected is not marked as non-face, it becomes the next candidate region (step 304, step 309).
If the first direction is horizontal and the second direction vertical, the first step is the window's width and the second step its height; if the first direction is vertical and the second direction horizontal, the first step is the window's height and the second step its width.
To explain the above flow more clearly, take n = m = 3 as an example with reference to Fig. 4. For the whole image to be processed shown in Fig. 4, the window is slid horizontally from left to right and vertically from top to bottom. Let the current candidate region be A. If region A is judged to be a face region, the window is slid horizontally from A with a step of 3 window widths, giving the region to be selected B; and the region within 2 window heights below A in the vertical direction, namely region C, is marked as non-face. When the window later slides into that region, no feature extraction or judgment is performed on it, since it is already marked non-face, and the window slides directly on to the next region to be selected. If the current candidate region is B, and region B is judged to be non-face while the sliding distance between B and the image boundary in the horizontal direction is less than the first step, the window jumps, in the second direction, back to the starting position in the first direction, giving the region to be selected D.
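Under the stated assumptions (square windows, step equal to the window side, n = m = 3, and a caller-supplied classifier standing in for the cascade), the skip logic of Figs. 3 and 4 can be sketched as:

```python
def scan(img_w, img_h, win, is_face, n=3, m=3):
    """Slide a win x win window with step 'win'. After a face hit, mark the m-1
    positions below it as non-face and jump n steps in the scan direction;
    marked positions are skipped without feature extraction or judgment."""
    faces, marked = [], set()
    y = 0
    while y + win <= img_h:
        x = 0
        while x + win <= img_w:
            if (x, y) in marked:          # previously marked non-face: skip
                x += win
            elif is_face(x, y):
                faces.append((x, y))
                for k in range(1, m):     # region below the hit (region C in Fig. 4)
                    marked.add((x, y + k * win))
                x += n * win              # jump n steps (toward region B in Fig. 4)
            else:
                x += win
        y += win
    return faces
```

On a 90 × 90 image with a 30-pixel window and a single face at the origin, only five of the nine grid positions are actually evaluated by the classifier.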
Step 103: Determine the confidence of each detected face region.
When face region detection is performed on the image with windows of different sizes, a face region may or may not be detected. The confidence of each region detected as a face is computed, and the face region with the highest confidence is taken as the finally detected face region — the rectangular boxes shown in Fig. 5a to Fig. 5b. Specifically, the confidence is computed as in formula (1):
conf = Σ_i (T_i − Tr_i) (1)
where conf denotes the confidence of the face region, T_i denotes the result computed by the i-th stage of the cascade classifier on the feature values of the image within the face region, and Tr_i denotes the threshold of the i-th stage; the larger the difference between T_i and Tr_i, the higher the confidence.
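Formula (1) and the selection of the highest-confidence region can be written out directly (a sketch; the per-stage scores and thresholds are assumed to be available from the cascade evaluation):

```python
def confidence(stage_scores, stage_thresholds):
    """Formula (1): conf = sum over stages i of (T_i - Tr_i)."""
    return sum(t - tr for t, tr in zip(stage_scores, stage_thresholds))

def best_region(candidates):
    """candidates: (region, stage_scores, stage_thresholds) tuples;
    keep only the highest-confidence face region for contour detection."""
    return max(candidates, key=lambda c: confidence(c[1], c[2]))[0]
```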
Step 104: Perform facial contour point detection on the face region with the highest confidence.
Preferably, a first ratio and a second ratio can be preset, each less than or equal to 1; the second ratio may be the same as the first, or different.
In step 102, the image to be processed may first be scaled by the first ratio to obtain a first image — that is, the resolution of the image is reduced by the first ratio — and face region detection is then performed on the first image with the M levels of windows of different sizes.
Correspondingly, in step 104, the image to be processed may first be scaled by the second ratio to obtain a second image, that is, its resolution reduced by the second ratio; the highest-confidence face region in the second image is then determined from the position of the highest-confidence face region in the first image; and the facial contour points in the image to be processed are finally determined from the positions of the contour points detected in the second image.
Preferably, the second ratio is greater than or equal to the first, i.e. the resolution of the second image is greater than or equal to that of the first. This is because contour point detection usually demands higher resolution than face region detection: if the image used for contour point detection has too low a resolution, the detected contour points are easily disturbed by interfering factors — for example, at low resolution the eye contour points are vulnerable to interference from glasses.
For example, with an image of resolution 1920 × 1080, the resolution may be reduced to 320 × 240 for face region detection and to 640 × 480 for contour point detection; the highest-confidence face region detected in the 320 × 240 image is then converted to a face region in the 640 × 480 image, contour point detection is performed there, and finally the positions of the contour points in the original image are determined from their positions in the 640 × 480 image. This procedure reduces the amount of computation, saves time, and improves adaptability to devices of different computing power, while still ensuring a certain precision.
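The two-resolution scheme amounts to bookkeeping of scale factors between images. A minimal sketch using the example resolutions from the text (1920 × 1080 original, 320 × 240 for region detection, 640 × 480 for contour points; the coordinate values are made up for illustration):

```python
def map_point(pt, src_wh, dst_wh):
    """Map a point between two resolutions of the same picture (per-axis scaling)."""
    (x, y), (sw, sh), (dw, dh) = pt, src_wh, dst_wh
    return (x * dw / sw, y * dh / sh)

# A face-region corner found in the 320x240 detection image is converted to the
# 640x480 landmark image; a contour point found there is mapped back to 1920x1080.
region_corner_lm = map_point((100, 60), (320, 240), (640, 480))
contour_pt_full = map_point((250, 200), (640, 480), (1920, 1080))
```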
Many algorithms can be used for contour point detection in step 104; preferably, the ASM (Active Shape Model) algorithm is used. ASM gives good results when aligning facial features, but has difficulty coping with tilted faces.
Therefore, to overcome ASM's poor results on tilted faces and obtain more robust contour points, it is preferable, before detecting the facial features with ASM, to first compute the face tilt angle within the highest-confidence face region and then determine the contour points in that region according to the tilt angle.
Specifically, the face tilt angle is computed as follows:
1) Determine the eye regions within the face region.
For example, the eye regions can be estimated from the classical "three courts and five eyes" facial proportions. The "three courts" are the vertical proportions of the face — from the hairline to the brow ridge, from the brow ridge to the base of the nose, and from the base of the nose to the chin, each one third of the face's length. The "five eyes" are the horizontal proportions: from the left hairline to the right hairline the width equals five eye-lengths, namely the two eyes themselves, the distance between them, and the distances from each outer eye corner to the hairline on that side. The detected face region can therefore be divided into a grid of three rows and five columns, with the eyes located in the second and fourth columns of the second row.
2) Determine the pupil positions of the eyes from the component values of the color space within the eye regions, where the component values may be one or more of luminance and chrominance.
Taking an image in the YCrCb color space as an example, the set of points within the estimated eye regions satisfying the following conditions can be taken as the eye reference regions:
- Y component within the interval [100, 125];
- Cb component within the interval [115, 130];
- Cr component within the interval [125, 145].
Within each eye's reference region, the position of the minimum Y value is taken as the pupil position. In practice, determining the pupil position from the Y component may be disturbed by hair or glasses, so after the position of the Y minimum is obtained it is checked: if it lies on the periphery of the reference region, it is not taken to be the pupil position; if it lies in the preset central area, it is taken as the pupil position.
3) Determine the face tilt angle from the pupil positions.
Assuming the pupil positions of the two eyes are (x1, y1) and (x2, y2), the face tilt angle θ can be obtained from formula (2):
θ = arctan[(y2 − y1)/(x2 − x1)] (2)
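Steps 2) and 3) can be sketched as follows (an illustration assuming the image's Y channel is a 2-D list and the eye reference regions are given as (x0, y0, x1, y1) boxes; the Cb/Cr filtering and the periphery check are omitted for brevity):

```python
import math

def pupil(Y, box):
    """Take the position of the minimum Y (luma) value inside the eye's
    reference region as the pupil position, as in step 2)."""
    x0, y0, x1, y1 = box
    return min(((Y[y][x], (x, y)) for y in range(y0, y1) for x in range(x0, x1)))[1]

def face_tilt(pupil_a, pupil_b):
    """Formula (2): theta = arctan[(y2 - y1)/(x2 - x1)], in radians."""
    (x1, y1), (x2, y2) = pupil_a, pupil_b
    return math.atan((y2 - y1) / (x2 - x1))
```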
Specifically, the contour points in the highest-confidence face region are determined from the face tilt angle as follows:
1) Rotate the image within the highest-confidence face region according to the face tilt angle, obtaining a rotated face region image.
2) Perform contour point detection on the rotated face region image.
3) Reverse-rotate the coordinates of the detected contour points according to the face tilt angle, obtaining the contour point coordinates within the highest-confidence face region.
Preferably, a threshold can be preset for the face tilt angle. If the computed tilt angle exceeds the threshold — meaning the face's tilt in the image would affect the accuracy of the detected contour points — the above method of rotating the face region before obtaining the contour points is used. If the computed tilt angle is below the threshold — meaning the tilt is small enough not to affect detection of the facial features — contour point detection can be performed on the face region directly, without rotation, reducing computation while maintaining detection accuracy.
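The rotate–detect–reverse-rotate sequence in steps 1)–3) reduces to rotating point coordinates about the region centre; the contour detector itself (e.g. ASM) is abstracted away in this sketch:

```python
import math

def rotate_points(points, angle, center=(0.0, 0.0)):
    """Rotate points by 'angle' radians about 'center'. The face image is
    deskewed by -tilt before detection; detected points are mapped back
    into the original, tilted region by rotating with +tilt."""
    cx, cy = center
    c, s = math.cos(angle), math.sin(angle)
    return [(cx + c * (x - cx) - s * (y - cy),
             cy + s * (x - cx) + c * (y - cy)) for (x, y) in points]
```

Deskewing by −θ and restoring by +θ compose to the identity, which is what guarantees the returned contour points line up with the original, tilted face region.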
As shown in Fig. 5a to Fig. 5d, the facial contour point sequence detected by the above method is X_face = {(x1, y1), (x2, y2), ..., (xn, yn)}.
The images shown in Fig. 5a to Fig. 5d were shot with a prototype device. Some of the faces wear glasses, some are turned partly sideways, some are shot from a raised or lowered viewing angle, and some are tilted, illustrating that the face detection method provided by the embodiments of the present invention has high robustness, can exclude many interfering factors, and outputs reliable and stable face detection and facial-feature detection results.
The face detection method provided by the embodiments of the present invention is applicable to different scenarios, and is especially suitable for entertainment-oriented makeup and photo-beautification applications on handheld devices of limited computing power: it reduces computation and raises operation speed while guaranteeing a certain robustness.
Further, after the contour points are detected, makeup can be applied to the detected facial features. Referring to Fig. 6, the makeup process includes the following steps:
Step 601: Process the facial-feature makeup template according to the contour points.
Specifically, in this step the tilt angle of the facial feature can be determined from its contour points, the rotation angle of the makeup template determined from that tilt angle, and the template rotated accordingly; the size of the facial feature can also be determined from the contour points and the template scaled accordingly.
The rotation matrix T_θ is as in formula (3) (reconstructed here as the standard 2-D rotation matrix, since the original figure is not reproduced):
T_θ = [cos θ, −sin θ; sin θ, cos θ] (3)
where θ denotes the rotation angle of the facial feature.
The scaling matrix T_s is as in formula (4) (likewise reconstructed as the standard diagonal scaling matrix):
T_s = [Sx, 0; 0, Sy] (4)
where Sx denotes the scaling parameter of the feature template along the X axis and Sy its scaling parameter along the Y axis.
In some embodiments, to achieve comic or exaggerated makeup effects, some makeup templates may only be rotated, or only scaled, or neither rotated nor scaled; the present invention places no limitation on this.
Step 602: Fit the processed makeup template onto the face region in the image to be processed.
When fitting the makeup template onto the face region, the template is aligned by its centre point. For example, when applying makeup to the eyes, the centre of the eye makeup template — after rotation and scaling — is aligned with the centre point of the eye in the image to be processed, and the rotated, scaled template is then fitted onto the image.
Preferably, in this step the transparency of the processed makeup template is set according to a preset transparency, and the template is blended onto the face region of the image accordingly, as in formula (5):
I_out(x, y) = (1 − α)·I_in(x, y) + α·[T_scale·T_θ·I_mask(x, y)] (5)
where I_out(x, y) denotes the output after makeup, I_in(x, y) the image to be processed, I_mask(x, y) the makeup template, and α the transparency.
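Per pixel, formula (5) is plain alpha blending of the transformed template over the face region. A sketch on grayscale values (rotation and scaling of the template are assumed already done, and centre alignment is reduced to a top-left offset computed by the caller):

```python
def blend(i_in, i_mask, alpha):
    """Formula (5), per pixel: I_out = (1 - alpha)*I_in + alpha*I_mask."""
    return (1.0 - alpha) * i_in + alpha * i_mask

def apply_template(image, template, top_left, alpha=0.5):
    """Blend a small template (2-D list of values) onto 'image' in place,
    anchored at 'top_left'."""
    ox, oy = top_left
    for ty, row in enumerate(template):
        for tx, value in enumerate(row):
            image[oy + ty][ox + tx] = blend(image[oy + ty][ox + tx], value, alpha)
    return image
```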
In some embodiments of the invention, the face region in the image to be processed is generally large. In that case, when choosing M levels of windows from the N levels of windows arranged from small to large in step 102, only the larger windows need be selected. Taking the aforementioned 32 windows as an example, only the six window levels 18, 20, 22, 24, 26 and 28 may be selected for feature extraction and face region detection, thereby reducing the amount of computation and increasing speed.
To illustrate the makeup process more clearly, eye, lip and beard makeup are described below as examples.
Embodiment One: eye makeup
Before applying makeup, the face region in the image to be processed is first detected, and facial-feature detection is performed on the face region to obtain the feature contour points; the specific detection process is described in the previous embodiments. The detected eye contour points are shown in Figure 7.
The tilt angle of the eyes is then determined from the eye contour points, taking into account both the tilt of the face and individual differences in eye contour. The face tilt angle θface can be obtained from the positions of the eye centers, either by the method of formula (2) or from the acquired feature contour points, as shown in formula (6):
Wherein, (xrce, yrce) and (xlce, ylce) represent the center coordinates of the right and left eye respectively, which can be obtained as shown in formulas (7) and (8).
Wherein, (xe_in, ye_in) represents the inner eye-corner coordinates and (xe_out, ye_out) the outer eye-corner coordinates. The eye centers are determined from the eye-corner coordinates because the inner and outer corner positions are relatively stable and less affected by interference such as glasses.
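Formulas (6)-(8) appear only as images in the source. A minimal sketch of the computation they describe, assuming each eye center is the midpoint of its two corners and the face tilt is the angle of the line through the eye centers (sign convention and coordinates are illustrative assumptions):

```python
import math

def eye_center(inner, outer):
    # Eye center as midpoint of the inner and outer eye-corner points,
    # which are stable against interference such as glasses
    return ((inner[0] + outer[0]) / 2.0, (inner[1] + outer[1]) / 2.0)

def face_tilt(left_center, right_center):
    # Face tilt angle (radians) from the line through the two eye centers
    dx = left_center[0] - right_center[0]
    dy = left_center[1] - right_center[1]
    return math.atan2(dy, dx)

# Hypothetical pixel coordinates for the four eye corners
lc = eye_center((60, 40), (80, 42))   # left eye -> (70.0, 41.0)
rc = eye_center((20, 38), (40, 40))   # right eye -> (30.0, 39.0)
theta_face = face_tilt(lc, rc)
```

`atan2` is used so the angle is quadrant-correct even for strongly tilted faces.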
In addition, individual differences between eyes (differences in the tilt of each eye itself) are considered: the left (right) eye tilt angle θle (θre) is obtained from the inner and outer eye-corner points, as shown in formula (9).
The final rotation angle θleye of the left-eye makeup template is given by formula (10), and the rotation angle θreye of the right-eye makeup template by formula (11).
The zoom factor Sx of the eye makeup template along the X axis is given by formula (12), and the zoom factor Sy along the Y axis by formula (13):
Wherein, dist(·,·) denotes the distance function; imge_in and imge_out represent the positions of the inner and outer eye corners in the image to be processed, maske_in and maske_out the corresponding positions in the eye makeup template, imge_up and imge_down the upper and lower contour extreme points of the eye in the image to be processed, and maske_up and maske_down the corresponding extreme points in the eye makeup template.
Since wearing glasses can sometimes distort the detected upper and lower eye contour points and make the computed Y-axis zoom factor Sy abnormal, the X-axis zoom factor Sx is taken as the primary scaling reference, because the eye-corner positions are generally stable and less prone to interference. Whether Sy is abnormal is judged from the height-to-width ratio of the eye contour: this ratio normally does not exceed 3/5, and a value above 3/5 indicates abnormal upper or lower contour points, in which case the Y-axis zoom factor Sy is set equal to the X-axis zoom factor Sx.
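The zoom-factor computation with the aspect-ratio fallback can be sketched as follows; the concrete point coordinates are illustrative assumptions, and the ratios follow formulas (12)-(13) as described above:

```python
import math

def dist(p, q):
    # Euclidean distance between two points
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_zoom_factors(img_in, img_out, img_up, img_down,
                     mask_in, mask_out, mask_up, mask_down,
                     max_hw_ratio=3.0 / 5.0):
    # Sx from the eye-corner distance ratio; Sy from the upper/lower
    # contour extreme points, falling back to Sx when the eye's
    # height-to-width ratio exceeds 3/5 (e.g. glasses interference)
    sx = dist(img_in, img_out) / dist(mask_in, mask_out)
    sy = dist(img_up, img_down) / dist(mask_up, mask_down)
    if dist(img_up, img_down) / dist(img_in, img_out) > max_hw_ratio:
        sy = sx  # abnormal vertical extent: trust the X-axis factor only
    return sx, sy

# Normal eye: width 30, height 10 in image; width 20, height 8 in template
sx, sy = eye_zoom_factors((0, 0), (30, 0), (15, -5), (15, 5),
                          (0, 0), (20, 0), (10, -4), (10, 4))
# Abnormal eye: height 30 (ratio 1.0 > 3/5), so Sy falls back to Sx
sx2, sy2 = eye_zoom_factors((0, 0), (30, 0), (15, -15), (15, 15),
                            (0, 0), (20, 0), (10, -4), (10, 4))
```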
The eye makeup template is rotated and scaled according to the determined tilt angle and zoom factors, and then fitted onto the image to be processed, as shown in Figure 8.
Embodiment Two: lip makeup
In this embodiment of the invention, lip makeup is divided into an upper-lip process and a lower-lip process. Both are similar to eye makeup: before applying makeup, the face region in the image to be processed is first detected, and facial-feature detection is performed on the face region to obtain the feature contour points; the lip tilt angle, the lip center position and the zoom factors in the image to be processed are then calculated from the lip contour points.
The lip tilt angle θm is obtained by formula (14):
Wherein, (xm_left, ym_left) and (xm_right, ym_right) represent the left and right mouth-corner coordinates respectively. The left and right mouth corners and the upper and lower contour extreme points of the upper and lower lips are shown in Figure 9.
The center position (xcmu, ycmu) of the upper lip is obtained by formulas (15) and (16):
Wherein, (xmu_up, ymu_up) and (xmu_down, ymu_down) represent the upper and lower contour extreme points of the upper lip respectively.
Similarly, the lower-lip center position (xcmd, ycmd) can be obtained by the same method.
The zoom factor Sx of the upper-lip makeup template along the X axis is given by formula (17), and the zoom factor Sy along the Y axis by formula (18):
Wherein, imgm_left and imgm_right represent the positions of the left and right mouth corners in the image to be processed, maskm_left and maskm_right the corresponding positions in the lip makeup template, imgmu_up and imgmu_down the upper and lower contour extreme points of the upper lip in the image to be processed, and maskmu_up and maskmu_down the corresponding extreme points in the upper-lip makeup template.
Similarly, the zoom factors of the lower lip along the X and Y axes can be obtained by the same method.
The upper- and lower-lip makeup templates are rotated and scaled according to the determined tilt angle and zoom factors, and then fitted onto the image to be processed, as shown in Figure 10.
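The lip geometry described above can be sketched as follows, assuming the tilt is the angle of the line through the mouth corners (per formula (14)) and the upper-lip center is the midpoint of its upper and lower contour extreme points (per formulas (15)-(16)); the coordinates are hypothetical:

```python
import math

def lip_tilt(left_corner, right_corner):
    # Lip tilt angle (radians) from the left and right mouth-corner points
    dy = right_corner[1] - left_corner[1]
    dx = right_corner[0] - left_corner[0]
    return math.atan2(dy, dx)

def lip_center(upper_extreme, lower_extreme):
    # Upper-lip center: midpoint of the upper/lower contour extreme points
    return ((upper_extreme[0] + lower_extreme[0]) / 2.0,
            (upper_extreme[1] + lower_extreme[1]) / 2.0)

theta_m = lip_tilt((40, 100), (80, 102))        # hypothetical mouth corners
center_u = lip_center((60, 95), (60, 105))      # -> (60.0, 100.0)
```

The lower-lip center is computed the same way from the lower lip's extreme points.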
Embodiment Three: beard makeup
Before applying beard makeup, the face region in the image to be processed is first detected, and facial-feature detection is performed on the face region to obtain the feature contour points. The upper contour extreme point of the upper lip and the lower contour extreme point of the nose are shown in Figure 11.
The beard center is determined by the upper contour extreme point of the upper lip and the lower contour extreme point of the nose, as shown in formulas (19) and (20):
Wherein, (xn_down, yn_down) represents the lower contour extreme point of the nose and (xmu_up, ymu_up) the upper contour extreme point of the upper lip. As the formulas show, the beard center in the image to be processed is the midpoint of the line between the nose's lower extreme point and the upper lip's upper extreme point.
The beard tilt angle is calculated in the same way as the face tilt angle described above, and the calculation is not repeated here.
The zoom factor Sx of the beard makeup template along the X axis is given by formula (21), and the zoom factor Sy along the Y axis by formula (22):
Wherein, imgrce and imglce represent the positions of the left and right pupils in the image to be processed, which can be obtained with reference to formulas (7) and (8); masklb and maskrb represent two preset points in the beard makeup template, whose spacing is set according to the default template size to match the distance between the centers of the left and right eyes; imgn_down and imgmu_up represent the lower contour extreme point of the nose and the upper contour extreme point of the upper lip in the image to be processed; and maskb_up and maskb_down represent the upper and lower contour extreme points of the beard in the beard makeup template.
The beard makeup template is rotated and scaled according to the determined tilt angle and zoom factors, and then fitted onto the image to be processed, as shown in Figure 12.
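The beard anchor point defined by formulas (19)-(20) is a simple midpoint; a minimal sketch with hypothetical coordinates:

```python
def beard_center(nose_down, mouth_up):
    # Beard template anchor: midpoint of the nose's lower contour extreme
    # point and the upper lip's upper contour extreme point
    return ((nose_down[0] + mouth_up[0]) / 2.0,
            (nose_down[1] + mouth_up[1]) / 2.0)

# Hypothetical pixel coordinates: nose bottom at (60, 88), lip top at (60, 94)
center_b = beard_center((60, 88), (60, 94))  # -> (60.0, 91.0)
```

Anchoring on these two extreme points keeps the beard template centered on the philtrum regardless of face scale.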
Referring to Figures 13a to 13d: Figure 13a is the classic Lena image, Figure 13b shows the rectangular face region detected, Figure 13c shows the detected feature contour points, and Figure 13d shows the makeup result. The results show that the system is stable and reliable, can overcome interference in multi-face facial-feature detection, outputs correct feature positions, and ensures that the makeup result is accurate and natural.
In the above embodiments of the invention, face region detection is performed on the image to be processed with M levels of windows of different sizes, and the confidence of each detected face region is calculated; feature contour point detection is then performed according to the face region with the highest confidence. Because facial-feature detection is based on the highest-confidence face region, the method is more robust and the detection results more accurate. In addition, using only a subset of the window levels for feature extraction improves efficiency, so the embodiments of the invention reduce computation and improve system efficiency while also improving robustness and accuracy. Makeup application is stable and reliable, interference in multi-face facial-feature detection can be overcome, and the makeup result is accurate and natural. The system is also fast: in actual tests it provides a good user experience on MIPS devices, overcoming the limited computing power and poorer image quality of handheld embedded devices, while remaining applicable to PC platforms with better runtime environments. This combination of stability and efficiency gives it considerable application and practical value.
Based on the same technical concept, an embodiment of the present invention further provides a face detection apparatus, as shown in Figure 14. The apparatus includes:
an acquisition module 1401, configured to obtain an image to be processed;
a first detection module 1402, configured to perform face region detection on the image to be processed with M levels of windows of different sizes respectively, M being an integer greater than 1;
a determining module 1403, configured to determine the confidence of each detected face region;
a second detection module 1404, configured to perform feature contour point detection according to the face region with the highest confidence.
Further, the apparatus also includes a selection module (not shown in the figure), configured to obtain the 1st to Nth window levels according to the size of an initial window and a preset amplification coefficient, wherein the size of the (j+1)th window level is obtained by amplifying the size of the jth window level by the amplification coefficient, 1 <= j <= N-1, and N is an integer greater than M; and then to choose M window levels from the N window levels.
Preferably, the selection module is specifically configured to choose the M window levels at equal intervals from the N window levels.
Specifically, the first detection module 1402 is configured to: when performing face region detection on the image to be processed with one window level of the M levels, choose candidate regions according to that window level and perform feature extraction on each candidate region using the feature template corresponding to that level; if the result of passing the extracted features through the cascade classifier exceeds the classifier's threshold, judge the current candidate region to be a face region; wherein the threshold of the cascade classifier is obtained by lowering the threshold obtained from sample training.
Preferably, the first detection module 1402 is specifically configured to: if a candidate region is judged to be a face region, mark the region m-1 second-step-lengths behind the candidate region in the second direction as non-face, and slide the window by n first-step-lengths in the first direction to obtain a region to be selected; if the candidate region is judged to be non-face, slide the window by one first-step-length in the first direction to obtain a region to be selected; wherein m and n are integers greater than 1.
If the region to be selected has already been marked as non-face, slide the window by one first-step-length in the first direction to obtain a new candidate region; otherwise, take the region to be selected as the candidate region.
The candidate region has the same size as the window. If the first direction is horizontal and the second direction vertical, the first step length is the width of the window and the second step length the height of the window; if the first direction is vertical and the second direction horizontal, the first step length is the height of the window and the second step length the width of the window.
Specifically, the determining module 1403 is configured to determine the confidence of a face region according to formula (1) above.
Preferably, the first detection module 1402 may first scale the image to be processed by a first ratio to obtain a first image, the first ratio being less than or equal to 1, and then perform face region detection on the first image with the M levels of windows of different sizes respectively. The second detection module 1404 may first scale the image to be processed by a second ratio to obtain a second image, the second ratio being less than or equal to 1 and greater than or equal to the first ratio; then determine the highest-confidence face region in the second image according to its position in the first image; then perform feature contour point detection according to the highest-confidence face region in the second image; and finally determine the feature contour points in the image to be processed according to the positions of the contour points detected in the second image.
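Mapping coordinates between the two scaled copies reduces to multiplying by a ratio of scale factors. A minimal sketch, with the concrete ratios and rectangle chosen for illustration:

```python
def map_region(region, from_ratio, to_ratio):
    # Map a rectangle (x, y, w, h) between two scaled copies of the same
    # image: e.g. from the detection image (first ratio) to the contour
    # image (second ratio), every coordinate is multiplied by to/from.
    s = to_ratio / from_ratio
    x, y, w, h = region
    return (x * s, y * s, w * s, h * s)

# Face found at 1/4 scale; contour detection runs at 1/2 scale
region_r1 = (10, 20, 40, 40)                       # in the 0.25x image
region_r2 = map_region(region_r1, 0.25, 0.5)       # -> (20, 40, 80, 80)
# Contour point found at 1/2 scale, mapped back to the original image
point_full = map_region((30, 50, 0, 0), 0.5, 1.0)  # -> (60, 100, 0, 0)
```

Detecting on the small first image and refining on the larger second image is what lets the scheme trade resolution for speed at each stage.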
Specifically, the second detection module 1404 is configured to determine the face tilt angle in the highest-confidence face region, and to determine the feature contour points in that region according to the face tilt angle.
Specifically, the second detection module 1404 calculates the face tilt angle as follows:
1) Determine the eye regions within the face region.
2) Determine the pupil positions of the eyes according to the color-space component values in the eye regions, wherein the color-space component values may be one or more of luminance and chrominance values.
3) Determine the face tilt angle according to the pupil positions of the eyes.
Specifically, the second detection module 1404 determines the feature contour points in the highest-confidence face region according to the face tilt angle as follows:
1) Rotate the image to be processed within the highest-confidence face region according to the face tilt angle, to obtain a rotated face region image.
2) Perform feature contour point detection on the rotated face region image.
3) Reverse-rotate the coordinates of the detected feature contour points according to the face tilt angle, to obtain the feature contour point coordinates in the highest-confidence face region.
Preferably, a threshold may be preset for the face tilt angle in the second detection module 1404. If the face tilt angle exceeds the preset threshold, the image to be processed within the highest-confidence face region is rotated according to the tilt angle to obtain a rotated face region image; feature contour point detection is performed on the rotated image; and the detected contour point coordinates are reverse-rotated according to the tilt angle to obtain the contour point coordinates in the highest-confidence face region. If the calculated tilt angle is below the threshold, the face tilt in the image is small enough not to affect detection accuracy, and contour point detection can be performed directly on the face region without rotation, reducing computation while maintaining detection accuracy.
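The conditional de-rotation step can be sketched as below. The 5-degree threshold is an assumed illustrative value, and rotation about the origin stands in for rotation about the face-region center:

```python
import math

def align_contour_points(points, theta, threshold=math.radians(5)):
    # If the measured tilt exceeds the threshold, contour points detected
    # in the rotated face image are rotated back by -theta; otherwise
    # detection is assumed to have run on the unrotated image and the
    # points pass through unchanged.
    if abs(theta) <= threshold:
        return list(points)
    c, s = math.cos(-theta), math.sin(-theta)
    return [(x * c - y * s, x * s + y * c) for x, y in points]

pts_small = align_contour_points([(1.0, 0.0)], 0.0)          # unchanged
pts_large = align_contour_points([(0.0, 1.0)], math.pi / 2)  # de-rotated
```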
Further, the face detection apparatus may also include:
a processing module (not shown in the figure), configured to process the makeup template according to the feature contour points;
a fitting module (not shown in the figure), configured to fit the processed makeup template onto the face region in the image to be processed.
Specifically, the processing module may determine the tilt angle of a facial feature according to the coordinates of the feature contour points and rotate the makeup template according to that tilt angle; it may also determine the size of the feature according to the coordinates of the contour points and scale the makeup template according to that size.
Specifically, when processing an eye makeup template, the processing module determines the left/right eye center coordinates and the left/right eye tilt angles according to the inner and outer eye-corner coordinates, as shown in formulas (8), (9) and (10); determines the face tilt angle according to the left/right eye center coordinates, as shown in formula (7); and determines the rotation angles of the left- and right-eye makeup templates according to formulas (11) and (12).
Specifically, when processing a beard makeup template, the processing module first determines the eye center coordinates according to the inner and outer eye-corner coordinates, as shown in formulas (8) and (9); then determines the face tilt angle according to the left/right eye center coordinates, as shown in formula (7); and takes the face tilt angle as the rotation angle of the beard makeup template.
Specifically, when processing a lip makeup template, the processing module determines the lip tilt angle according to the left and right mouth-corner coordinates, as shown in formula (15); determines the size of the upper lip according to the upper-lip contour point coordinates and scales the upper-lip makeup template accordingly, as shown in formulas (18) and (19); scales the lower-lip makeup template in the same way; and determines the center coordinates of the upper and lower lips according to their upper and lower contour extreme point coordinates respectively, as shown in formulas (16) and (17).
When fitting the lip templates onto the image to be processed, the fitting module fits the rotated and/or scaled upper-lip makeup template onto the image according to the upper-lip center coordinates, and fits the rotated and/or scaled lower-lip makeup template onto the image according to the lower-lip center coordinates.
Specifically, the fitting module is also configured to set the transparency of the processed makeup template and of the face region of the image to be processed according to a preset transparency value, and to fit the transparency-adjusted makeup template onto the transparency-adjusted face region of the image.
In the above embodiments of the invention, face region detection is performed on the image to be processed with M levels of windows of different sizes, and the confidence of each detected face region is calculated; feature contour point detection is then performed according to the face region with the highest confidence. Because facial-feature detection is based on the highest-confidence face region, the method is more robust and the detection results more accurate. In addition, using only a subset of the window sizes for feature extraction improves efficiency, so the embodiments of the invention reduce computation and improve system efficiency while also improving robustness and accuracy. Makeup application is stable and reliable, interference in multi-face facial-feature detection can be overcome, and the makeup result is accurate and natural. The system is also fast: in actual tests it provides a good user experience on MIPS devices, overcoming the limited computing power and poorer image quality of handheld embedded devices, while remaining applicable to PC platforms with better runtime environments, combining stability and efficiency with considerable application and practical value.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If such modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to include them.
Claims (24)
1. A face detection method, characterized by comprising:
obtaining an image to be processed;
performing face region detection on the image to be processed with M levels of windows of different sizes respectively, M being an integer greater than 1;
determining the confidence of each detected face region;
performing feature contour point detection according to the face region with the highest confidence.
2. The method according to claim 1, characterized in that, before performing face region detection on the image to be processed with M levels of windows of different sizes respectively, the method further comprises:
obtaining the 1st to Nth window levels according to the size of an initial window and a preset amplification coefficient, wherein the size of the (j+1)th window level is obtained by amplifying the size of the jth window level by the amplification coefficient, 1 <= j <= N-1, and N is an integer greater than M;
choosing M window levels from the N window levels.
3. The method according to claim 2, characterized in that choosing M window levels from the N window levels comprises: choosing M window levels at equal intervals from the N window levels.
4. The method according to claim 1, characterized in that performing face region detection on the image to be processed with one window level of the M levels of windows of different sizes comprises:
choosing candidate regions according to that window level, and performing feature extraction on each candidate region using the feature template corresponding to that window level;
if the result of passing the extracted features through a cascade classifier exceeds the threshold of the cascade classifier, judging the corresponding candidate region to be a face region; wherein the threshold of the cascade classifier is obtained by lowering the threshold obtained from sample training.
5. The method according to claim 4, characterized in that choosing candidate regions according to that window level comprises:
if the candidate region is judged to be a face region, marking the region m-1 second-step-lengths behind the candidate region in the second direction as non-face, and sliding the window by n first-step-lengths in the first direction to obtain a region to be selected; if the candidate region is judged to be non-face, sliding the window by one first-step-length in the first direction to obtain a region to be selected; wherein m and n are integers greater than 1;
if the region to be selected has been marked as non-face, sliding the window by one first-step-length in the first direction to obtain a candidate region; otherwise, taking the region to be selected as the candidate region;
wherein the candidate region has the same size as the window; if the first direction is horizontal and the second direction vertical, the first step length is the width of the window and the second step length the height of the window; if the first direction is vertical and the second direction horizontal, the first step length is the height of the window and the second step length the width of the window.
6. The method according to claim 1 or 4, characterized in that the confidence is determined according to the following formula:
conf = Σi (Ti − Tri)
wherein conf represents the confidence of the face region, Ti represents the result of passing the feature values of the face region through the i-th level of the cascade classifier, and Tri represents the threshold of the i-th level of the cascade classifier.
7. The method according to claim 1, characterized in that performing face region detection on the image to be processed with M levels of windows of different sizes respectively comprises:
scaling the image to be processed by a first ratio to obtain a first image, the first ratio being less than or equal to 1;
performing face region detection on the first image with M levels of windows of different sizes respectively;
and that performing feature contour point detection according to the face region with the highest confidence comprises:
scaling the image to be processed by a second ratio to obtain a second image, the second ratio being less than or equal to 1 and greater than or equal to the first ratio;
determining the highest-confidence face region in the second image according to the position of the highest-confidence face region in the first image;
performing feature contour point detection according to the highest-confidence face region in the second image;
determining the feature contour points in the image to be processed according to the positions of the feature contour points detected in the second image.
8. The method according to claim 1, characterized in that performing feature contour point detection according to the face region with the highest confidence comprises:
determining the face tilt angle in the highest-confidence face region;
determining the feature contour points in the highest-confidence face region according to the face tilt angle.
9. The method according to claim 8, characterized in that determining the feature contour points in the highest-confidence face region according to the face tilt angle comprises:
rotating the image to be processed within the highest-confidence face region according to the face tilt angle to obtain a rotated face region image; or, when the face tilt angle exceeds a preset threshold, rotating the image to be processed within the highest-confidence face region according to the face tilt angle to obtain a rotated face region image;
performing feature contour point detection on the rotated face region image;
reverse-rotating the coordinates of the detected feature contour points according to the face tilt angle to obtain the feature contour point coordinates in the highest-confidence face region.
10. The method according to claim 8, characterized in that determining the face tilt angle in the highest-confidence face region comprises:
determining the eye regions in the highest-confidence face region;
determining the pupil positions of the eyes according to the color-space component values in the eye regions, the color-space component values comprising luminance values and/or chrominance values;
determining the face tilt angle according to the pupil positions of the eyes.
11. The method according to any one of claims 1 to 5 and 7 to 10, characterized in that, after performing feature contour point detection, the method further comprises:
determining the tilt angle of a facial feature according to the coordinates of the feature contour points, determining the rotation angle of a makeup template according to the tilt angle of the feature, and rotating the makeup template by the rotation angle; and/or determining the size of the feature according to the coordinates of the feature contour points, and scaling the makeup template according to the size of the feature;
fitting the processed makeup template onto the face region in the image to be processed.
12. The method according to claim 11, characterized in that determining the tilt angle of the facial feature according to the coordinates of the feature contour points and determining the rotation angle of the makeup template according to the tilt angle of the feature comprises:
determining the left/right eye center coordinates and the left/right eye tilt angles according to the inner and outer eye-corner coordinates of the eyes;
determining the face tilt angle according to the left/right eye center coordinates;
determining the rotation angles of the left- and right-eye makeup templates according to the following formulas:
θleye = θface + (θle − θre)/2
θreye = θface − (θle − θre)/2
wherein θleye and θreye represent the rotation angles of the left- and right-eye makeup templates respectively; θface represents the face tilt angle; θle represents the left-eye tilt angle; and θre represents the right-eye tilt angle.
13. The method of claim 11, wherein determining the inclination angle of the facial feature according to the coordinates of the facial contour points, and determining the rotation angle of the facial-feature makeup template according to the inclination angle of the facial feature, comprises:
determining the eye center-point coordinates according to the inner and outer eye-corner coordinates;
determining the face inclination angle according to the center-point coordinates of the left and right eyes;
using the face inclination angle as the rotation angle of the beard makeup template.
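The face inclination angle derived from the two eye centers is conventionally the angle of the line joining them. A hedged Python sketch of that step (the image-coordinate convention, with y growing downward, is an assumption):

```python
import math

def face_inclination(left_eye, right_eye):
    """Face inclination angle in degrees from the two eye-center points.

    Each point is (x, y). With level eyes the angle is 0; the sign
    convention depends on the coordinate frame and is illustrative.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

# Level eyes give 0 degrees; a 45-degree tilt gives 45.0.
```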
14. The method of claim 11, wherein determining the inclination angle of the facial feature according to the coordinates of the facial contour points, and determining the rotation angle of the facial-feature makeup template according to the inclination angle of the facial feature, comprises: determining the inclination angle of the lips according to the coordinates of the left and right mouth corners;
wherein determining the size of the facial feature according to the coordinates of the facial contour points, and scaling the facial-feature makeup template according to the size of the facial feature, comprises:
determining the size of the upper lip according to the upper-lip contour-point coordinates, and scaling the upper-lip makeup template according to the size of the upper lip; determining the size of the lower lip according to the lower-lip contour-point coordinates, and scaling the lower-lip makeup template according to the size of the lower lip;
and wherein fitting the processed facial-feature makeup template onto the face region in the image to be processed comprises:
determining the center-point coordinates of the upper and lower lips according to the top and bottom contour extreme-point coordinates of the upper and lower lips, respectively;
fitting the rotated and/or scaled upper-lip makeup template onto the image to be processed according to the center-point coordinates of the upper lip; and fitting the rotated and/or scaled lower-lip makeup template onto the image to be processed according to the center-point coordinates of the lower lip.
15. A face detection apparatus, comprising:
an acquisition module configured to acquire an image to be processed;
a first detection module configured to perform face region detection on the image to be processed using each of M levels of windows of different sizes, where M is an integer greater than 1;
a determination module configured to determine the confidence of each detected face region; and
a second detection module configured to perform facial contour-point detection according to the face region with the highest confidence.
16. The apparatus of claim 15, further comprising:
a selection module configured to obtain level-1 to level-N windows according to the size of an initial window and a preset magnification coefficient, wherein the size of the level-(j+1) window is obtained by magnifying the size of the level-j window by the magnification coefficient, 1 <= j <= N−1, and N is an integer greater than M; and to select the M levels of windows from the N levels of windows.
17. The apparatus of claim 16, wherein the selection module is specifically configured to select the M levels of windows from the N levels of windows at equal intervals.
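Claims 16 and 17 together describe building an N-level window pyramid by repeated magnification, then keeping M of the N levels at equal index intervals. A minimal sketch under those assumptions (the rounding of window sizes is illustrative; the patent does not specify it):

```python
def build_window_levels(base_size, coeff, n_levels):
    """Level-1..N window sizes: each level magnifies the previous by coeff."""
    sizes = [base_size]
    for _ in range(n_levels - 1):
        sizes.append(round(sizes[-1] * coeff))
    return sizes

def pick_equal_intervals(levels, m):
    """Choose M of the N levels at (approximately) equal index spacing,
    keeping the first and last levels; requires m > 1."""
    n = len(levels)
    idx = [round(i * (n - 1) / (m - 1)) for i in range(m)]
    return [levels[i] for i in idx]

# build_window_levels(24, 1.25, 8) yields an 8-level pyramid from 24 px up;
# pick_equal_intervals(..., 4) keeps 4 of those 8 levels, ends included.
```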
18. The apparatus of claim 15, wherein the first detection module is specifically configured to:
when performing face region detection on the image to be processed with one level of window among the M levels of windows of different sizes, select candidate regions according to that level of window, and perform feature extraction on each candidate region using the feature template corresponding to that level of window; and
determine a candidate region to be a face region if the result computed by a cascade classifier from the extracted features is greater than the threshold of the cascade classifier, wherein the threshold of the cascade classifier is obtained by lowering the threshold obtained from sample training.
19. The apparatus of claim 18, wherein the first detection module is specifically configured to:
if a candidate region is determined to be a face region, mark the region within m−1 times the second step length after the candidate region in a second direction as a non-face region, and slide the window by n times the first step length in a first direction to obtain a region to be selected; if the candidate region is determined to be a non-face region, slide the window by the first step length in the first direction to obtain a region to be selected, where m and n are integers greater than 1; and
if the region to be selected has already been marked as a non-face region, slide the window by the first step length in the first direction to obtain a candidate region; otherwise, take the region to be selected as the candidate region;
wherein the size of the candidate region is the same as the size of the window; if the first direction is horizontal and the second direction is vertical, the first step length is the width of the window and the second step length is the height of the window; if the first direction is vertical and the second direction is horizontal, the first step length is the height of the window and the second step length is the width of the window.
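The marking-and-skipping scan of claim 19 can be sketched as follows, assuming a horizontal first direction and vertical second direction. `is_face(x, y)` stands in for the cascade-classifier decision of claim 18, and all names are illustrative; this is a simplified sketch, not the claimed implementation:

```python
def face_region_scan(img_w, img_h, win, step1, step2, m, n, is_face):
    """One level of the claim-19 scan.

    After a hit at (x, y): skip n first-step-lengths ahead horizontally,
    and pre-mark the m-1 positions below (second direction) as non-face
    so later rows skip them without running the classifier.
    """
    marked = set()          # top-left corners pre-marked as non-face
    faces = []
    y = 0
    while y + win <= img_h:
        x = 0
        while x + win <= img_w:
            if (x, y) not in marked and is_face(x, y):
                faces.append((x, y))
                for k in range(1, m):        # mark m-1 steps below
                    marked.add((x, y + k * step2))
                x += n * step1               # big jump after a hit
            else:
                x += step1                   # normal slide
        y += step2
    return faces
```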
20. The apparatus of claim 15 or 18, wherein the determination module is specifically configured to determine the confidence of a face region according to the following formula:

conf = Σi (Ti − Tri)

wherein conf denotes the confidence of the face region, Ti denotes the result computed by the level-i cascade classifier from the feature values of the face region, and Tri denotes the threshold of the level-i cascade classifier.
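The confidence formula sums, over all cascade stages, the margin by which the region's score exceeds each stage threshold, so a region that clears every stage comfortably scores higher than one that barely passes. A minimal sketch:

```python
def region_confidence(stage_scores, stage_thresholds):
    """conf = sum over stages i of (T_i - Tr_i).

    stage_scores     : per-stage classifier results T_i for the region
    stage_thresholds : per-stage thresholds Tr_i
    """
    return sum(t - tr for t, tr in zip(stage_scores, stage_thresholds))

# Margins of 2, 1 and 3 over three stage thresholds sum to 6:
# region_confidence([3, 2, 4], [1, 1, 1]) -> 6
```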
21. The apparatus of claim 15, wherein the second detection module is specifically configured to:
determine the face inclination angle in the face region with the highest confidence; and
determine the facial contour points in the face region with the highest confidence according to the face inclination angle.
22. The apparatus of claim 21, wherein the second detection module is specifically configured to:
rotate the image to be processed within the face region with the highest confidence according to the face inclination angle, to obtain a rotated face-region image; or, when the face inclination angle is greater than a preset threshold, rotate the image to be processed within the face region with the highest confidence according to the face inclination angle, to obtain a rotated face-region image;
perform facial contour-point detection on the rotated face-region image; and
reversely rotate the coordinates of the detected facial contour points according to the face inclination angle, to obtain the facial contour-point coordinates in the face region with the highest confidence.
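Mapping the contour points found on the rotated face crop back to the original image is a plain inverse rotation about the rotation center. A sketch of that last step (the center point and the angle-sign convention are assumptions; image frameworks differ on both):

```python
import math

def rotate_point(x, y, cx, cy, angle_deg):
    """Rotate (x, y) about center (cx, cy) by angle_deg, counter-clockwise
    in a y-up frame; flip the sign for typical image coordinates."""
    a = math.radians(angle_deg)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(a) - dy * math.sin(a),
            cy + dx * math.sin(a) + dy * math.cos(a))

def unrotate_points(points, center, angle_deg):
    """Reverse rotation: map contour points detected on the rotated
    face-region image back into the original image frame."""
    return [rotate_point(x, y, center[0], center[1], -angle_deg)
            for (x, y) in points]
```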
23. The apparatus of claim 21, wherein the second detection module is specifically configured to:
determine the eye regions in the face region with the highest confidence;
determine the pupil positions of the eyes according to the component values of a color space in the eye regions, the component values including luminance values and/or chrominance values; and
determine the face inclination angle according to the pupil positions of the eyes.
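As a simplified illustration of the pupil-location step: when only the luminance component is used, the pupil can be approximated as the darkest pixel in the eye region. This is a sketch of one plausible reading, not the claimed method, which may also use chrominance:

```python
def pupil_position(eye_lum):
    """Rough pupil locator over a 2-D list of luminance values.

    Returns the (x, y) of the darkest pixel, on the assumption that
    the pupil is the darkest spot in the eye region.
    """
    best = (0, 0)
    for y, row in enumerate(eye_lum):
        for x, v in enumerate(row):
            if v < eye_lum[best[1]][best[0]]:
                best = (x, y)
    return best

# In a bright patch with one dark pixel, that pixel wins:
# pupil_position([[9, 9, 9], [9, 1, 9], [9, 9, 9]]) -> (1, 1)
```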
24. The apparatus of any one of claims 15 to 19 and 21 to 23, further comprising:
a processing module configured to determine the inclination angle of a facial feature according to the coordinates of the facial contour points, determine the rotation angle of the facial-feature makeup template according to the inclination angle of the facial feature, and rotate the facial-feature makeup template according to the rotation angle; and/or determine the size of the facial feature according to the coordinates of the facial contour points and scale the facial-feature makeup template according to the size of the facial feature; and
a fitting module configured to fit the processed facial-feature makeup template onto the face region in the image to be processed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610120358.7A CN107153806B (en) | 2016-03-03 | 2016-03-03 | Face detection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107153806A true CN107153806A (en) | 2017-09-12 |
CN107153806B CN107153806B (en) | 2021-06-01 |
Family
ID=59792447
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610120358.7A Active CN107153806B (en) | 2016-03-03 | 2016-03-03 | Face detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107153806B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1731416A (en) * | 2005-08-04 | 2006-02-08 | 上海交通大学 | Method of quick and accurate human face feature point positioning |
US20120223956A1 (en) * | 2011-03-01 | 2012-09-06 | Mari Saito | Information processing apparatus, information processing method, and computer-readable storage medium |
CN102708575A (en) * | 2012-05-17 | 2012-10-03 | 彭强 | Daily makeup design method and system based on face feature region recognition |
CN102750527A (en) * | 2012-06-26 | 2012-10-24 | 浙江捷尚视觉科技有限公司 | Long-time stable human face detection and tracking method in bank scene and long-time stable human face detection and tracking device in bank scene |
CN103049733A (en) * | 2011-10-11 | 2013-04-17 | 株式会社理光 | Human face detection method and human face detection equipment |
CN103793693A (en) * | 2014-02-08 | 2014-05-14 | 厦门美图网科技有限公司 | Method for detecting face turning and facial form optimizing method with method for detecting face turning |
CN103902978A (en) * | 2014-04-01 | 2014-07-02 | 浙江大学 | Face detection and identification method |
CN104408462A (en) * | 2014-09-22 | 2015-03-11 | 广东工业大学 | Quick positioning method of facial feature points |
CN105046231A (en) * | 2015-07-27 | 2015-11-11 | 小米科技有限责任公司 | Face detection method and device |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108197593A (en) * | 2018-01-23 | 2018-06-22 | 深圳极视角科技有限公司 | More size face's expression recognition methods and device based on three-point positioning method |
CN108197593B (en) * | 2018-01-23 | 2022-02-18 | 深圳极视角科技有限公司 | Multi-size facial expression recognition method and device based on three-point positioning method |
CN109657587A (en) * | 2018-12-10 | 2019-04-19 | 南京甄视智能科技有限公司 | Side face method for evaluating quality and system for recognition of face |
CN111523414A (en) * | 2020-04-13 | 2020-08-11 | 绍兴埃瓦科技有限公司 | Face recognition method and device, computer equipment and storage medium |
CN111523414B (en) * | 2020-04-13 | 2023-10-24 | 绍兴埃瓦科技有限公司 | Face recognition method, device, computer equipment and storage medium |
CN116071804A (en) * | 2023-01-18 | 2023-05-05 | 北京六律科技有限责任公司 | Face recognition method and device and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106780906B (en) | A kind of testimony of a witness unification recognition methods and system based on depth convolutional neural networks | |
CN110232311B (en) | Method and device for segmenting hand image and computer equipment | |
CN104517104B (en) | A kind of face identification method and system based under monitoring scene | |
CN105205480B (en) | Human-eye positioning method and system in a kind of complex scene | |
CN110232713B (en) | Image target positioning correction method and related equipment | |
CN103577815B (en) | A kind of face alignment method and system | |
CN103810490B (en) | A kind of method and apparatus for the attribute for determining facial image | |
CN111507994A (en) | Portrait extraction method, portrait extraction device and mobile terminal | |
CN103218605B (en) | A kind of fast human-eye positioning method based on integral projection and rim detection | |
CN109598234A (en) | Critical point detection method and apparatus | |
CN104794693B (en) | A kind of portrait optimization method of face key area automatic detection masking-out | |
CN101339609A (en) | Image processing apparatus and image processing method | |
CN109086734A (en) | The method and device that pupil image is positioned in a kind of pair of eye image | |
CN109086724A (en) | A kind of method for detecting human face and storage medium of acceleration | |
CN107153806A (en) | A kind of method for detecting human face and device | |
CN108537143B (en) | A kind of face identification method and system based on key area aspect ratio pair | |
CN103218615B (en) | Face judgment method | |
Winarno et al. | Multi-view faces detection using Viola-Jones method | |
CN113837065A (en) | Image processing method and device | |
Chen et al. | Fast face detection algorithm based on improved skin-color model | |
CN108986105A (en) | A kind of image pre-processing method and system based on content | |
Heydarzadeh et al. | An efficient face detection method using adaboost and facial parts | |
CN104156689A (en) | Method and device for positioning feature information of target object | |
Yi et al. | Face detection method based on skin color segmentation and facial component localization | |
Vimal et al. | Face Detection’s Various Techniques and Approaches: A Review |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | Address after: 519085 High-tech Zone, Tangjiawan Town, Zhuhai City, Guangdong Province. Applicant after: ACTIONS TECHNOLOGY Co.,Ltd. Address before: 519085 High-tech Zone, Tangjiawan Town, Zhuhai City, Guangdong Province. Applicant before: ACTIONS (ZHUHAI) TECHNOLOGY Co.,Ltd. |
GR01 | Patent grant | ||