CN106485222A - Face detection method based on skin-color layering


Info

Publication number
CN106485222A
CN106485222A (application CN201610885523.8A)
Authority
CN
China
Prior art keywords
face
skin
region
colour
color
Prior art date
Legal status
Pending
Application number
CN201610885523.8A
Other languages
Chinese (zh)
Inventor
胡静
Current Assignee
Shanghai Dianji University
Original Assignee
Shanghai Dianji University
Priority date
Filing date
Publication date
Application filed by Shanghai Dianji University filed Critical Shanghai Dianji University
Priority to CN201610885523.8A
Publication of CN106485222A
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a face detection method based on skin-color layering. A face skin-color model is first established; skin-color regions are then segmented with this model, the face shape is approximated by an ellipse, and regions falling outside the elliptical range are treated as non-face regions; non-face regions are further removed according to texture complexity; the face candidate region is then rotated to normalize its direction; finally, connected-component analysis is performed with a face template. The invention pre-processes the image using the skin-color characteristics of the face, which greatly reduces the search range for faces. Moreover, the skin-color features of a face are cheap to compute and very robust to geometric changes such as rotation and scaling, so they are highly useful for face detection. The method provided by the invention is fast, accurate and robust, has a wide range of applications, and can be used in image recognition, speech recognition, data mining, machine vision and other fields.

Description

Face detection method based on skin-color layering
Technical field
The present invention relates to a face detection method, and in particular to a face detection method based on skin-color layering, belonging to the technical field of face recognition.
Background art
Face detection is the first step of an automatic face recognition system. Given a still image or a video sequence, the task is to determine whether it contains any faces and, if so, to separate them from the background and determine their positions and sizes in the image. In some situations the imaging conditions can be controlled; for example, when the police photograph a criminal they can ask the subject to place a certain part of the face close to a scale, and locating the face is then very simple. In other cases the position of the face in the image is not known in advance, for example in photographs taken against complex backgrounds, and face detection is then affected by the following factors: (1) the position, rotation angle and scale of the face in the image are not fixed; (2) hairstyle and make-up may occlude some features; (3) noise may appear in the image. The main applications of face detection are face-information processing systems (verification, identification, expression analysis, etc.), video conferencing, distance-education systems, surveillance and tracking, and content-based image and video retrieval.
Face detection methods can roughly be divided into two large classes: methods based on facial features and methods based on image content. In addition, color and facial motion information can be used as pre-processing for face detection. Feature-based methods exploit prior knowledge about faces and detect them from low-level features such as the facial contour, face edges, organ characteristics and template features; image-content-based methods treat the face region as a two-dimensional pixel matrix and cast detection as a two-class classification problem (face versus non-face), solved with a training-and-recognition scheme on sample data.
Research has found that the skin color of faces is relatively concentrated in color space, so color information can, to a certain extent, separate faces from most of the background. Many different color-space models have been proposed for different situations. Once a color model has been chosen, skin-color detection can be carried out first: possible face regions are segmented according to their chromatic similarity and spatial correlation, and the geometric or gray-level properties of each region are then used to verify whether it is a face.
Up to the mid-1990s, most face detection methods relied on extracting facial features for localization. For example, edge features can be extracted from a face image with operators such as Sobel, Marr-Hildreth and the Laplacian, and the extracted edges are matched against a predefined face edge model to infer whether a face is present. As another example, Bur first extracts facial features such as the eyes, nose, mouth and contour, integrates these features, and infers the presence of a face by statistical analysis. Active shape models have also been used in face detection: the main idea is to describe the face shape with a deformable template and to define an energy function; by continuously adjusting the model parameters so as to minimize the energy function, the face can be detected, and so on.
Essentially, feature-based face detection is usually converted into a search for facial feature points; for example, Hamouz uses Gabor filters to detect 10 facial features and infers from them whether a face is present. The advantage of feature-based detection methods is that facial features are insensitive to image brightness, occlusion, viewing angle and so on; in addition, the detected feature information can also be used by the subsequent face recognition module. The drawback, of course, is the complexity of such algorithms, in particular their computational complexity, and they also have difficulty with low-resolution images and multi-face detection.
Although color information is very useful for face detection, it can only segment skin-color regions, including the face and hands, from the background, thereby narrowing the search range for faces; color information alone is therefore not enough to detect faces, and further processing is needed to confirm whether a face is actually present in a skin-color region. Faces differ from person to person and are never identical; even the faces of twins show some differences. Although humans can detect and identify a given individual without difficulty even under large changes in expression, age or hairstyle, building a fully automatic face recognition system is very difficult. It involves knowledge from pattern recognition, image processing, computer vision, physiology, psychology and cognitive science, and is closely related to identity recognition based on other biometric features and to human-computer interaction. Compared with other biometric recognition systems based on fingerprints, the retina, the iris, genes or palm shape, however, a face recognition system is more direct and friendly, imposes no psychological barrier on the user, and, through analysis of facial expression and posture, can obtain information that other recognition systems cannot easily provide.
Summary of the invention
The technical problem to be solved by the present invention is to provide a fast, accurate and robust face detection method based on skin-color layering.
In order to solve the above technical problem, the technical solution of the present invention is to provide a face detection method based on skin-color layering, characterized in that the steps are:
Step 1: establish the face skin-color model
1.1 Color-space selection
The face skin-color model is built in the YCrCb color space;
1.2 Nonlinear transformation of the YCrCb color space
The chroma components of the YCrCb format are not completely independent of the luminance Y, and the skin-color cluster region changes nonlinearly with Y; the YCrCb format is therefore transformed nonlinearly so that the influence of Y on the chroma components Cr and Cb is as small as possible;
1.3 Ellipse fitting
The skin-color region is approximated by an ellipse and its analytical expression is computed, giving the face skin-color model after ellipse fitting;
Step 2: shape verification
A layered processing model is established for verifying face candidate regions, as follows:
1) skin-color regions are segmented using the face skin-color model;
2) to handle faces in different orientations, the face shape is approximated by an ellipse; the rotation angle of the face is derived from the angle of the major and minor axes of the ellipse, and the ratio of the major to the minor axis of a face candidate ellipse, as well as the sizes of the axes, must lie within set ranges; elliptical regions outside these ranges are treated as non-face regions;
Step 3: texture verification
Because the eyes, mouth, eyebrows and other facial features differ in color from the facial skin, the texture of a face region is more complex than that of other candidate regions such as the hand or neck; the texture complexity is therefore computed and used to further remove non-face regions;
Step 4: direction normalization
The face candidate region is rotated according to the angle of the major and minor axes of the ellipse so as to normalize its direction;
Step 5: connected-component analysis
The gray level and texture of the eye, mouth and eyebrow regions clearly differ from the other parts of the face, which is evident from the detected face regions;
the detected candidate face region is first converted to gray scale; based on an analysis of facial feature images, a gradient operation in the Y direction is applied to it, Y denoting the dark regions, and connected-component analysis is then carried out with a face template; when the sum of gray values in the corresponding connected components exceeds a certain threshold, the candidate face region is considered a face region; when it is below the threshold, the opposite holds.
Preferably, in step 1, the specific method of the nonlinear transformation of the YCrCb color space is:
piecewise linear fitting is applied to the boundary of the chroma components Cr and Cb, and the skin-color cluster region is bounded by these chroma boundaries, according to formulas (5) to (8) given below, where $C'_i$ is the transformed chroma component ($C'_b$ or $C'_r$), with $i \in \{b, r\}$; $\bar{C}_i(Y)$ is the center of the $C_i$ cluster at luminance Y; $W_{C_i}$ is the overall width of the $C_i$ cluster; $W_{C_i}(Y)$ is the cluster width at luminance Y; $WL_{C_i}$ and $WH_{C_i}$ are the cluster widths in the low- and high-luminance sections; $K_l$ and $K_h$ are the lower and upper luminance thresholds of the piecewise transformation; $Y_{max}$ and $Y_{min}$ are the maximum and minimum luminance values;
after the above nonlinear color transformation, the chroma is projected into the Cr'-Cb' two-dimensional space, which gives the face skin-color model.
Preferably, in step 1, the following values are estimated by training on face skin-color samples from the HHI image database, in the YCrCb space:
$W_{C_b} = 46.97$, $WL_{C_b} = 23$, $WH_{C_b} = 14$, $W_{C_r} = 38.76$, $WL_{C_r} = 20$, $WH_{C_r} = 10$, $K_l = 125$, $K_h = 188$, $Y_{min} = 16$, $Y_{max} = 235$.
Preferably, in step 1, the formulas of the ellipse fitting are:

$$\frac{(x - ec_x)^2}{a^2} + \frac{(y - ec_y)^2}{b^2} = 1 \qquad (9)$$

$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} C'_b - c_x \\ C'_r - c_y \end{bmatrix} \qquad (10)$$

where $ec_x$ and $ec_y$ are the coordinates of the ellipse center; a and b are the major and minor axes; θ is the rotation angle; after rotation by an arbitrary angle, the ellipse equation takes the form of formula (10); $C'_b - c_x$ and $C'_r - c_y$ are the projections in the chroma space along the x and y directions.
Preferably, in step 1, the constants in formulas (9) and (10) are:
$c_x = 109.38$, $c_y = 152.02$, $\theta = 2.53$ rad, $ec_x = 1.60$, $ec_y = 2.41$, $a = 25.39$, $b = 14.03$.
Preferably, in step 2, when the skin-color regions are segmented with the face skin-color model, median filtering and morphological operators are used to eliminate non-face regions.
Preferably, in step 3, the texture complexity is computed as the variance, and non-face regions are further removed according to the magnitude of the variance.
Since the skin color of faces is relatively concentrated in color space, color information can to a certain extent separate faces from most of the background. A suitable color model is therefore selected and the image is pre-processed with the skin-color characteristics of the face, which greatly reduces the search range for faces. Moreover, the skin-color features are cheap to compute and very robust to geometric changes such as rotation and scaling, so they are very useful for face detection.
The method provided by the present invention first uses the skin-color characteristics of the face as pre-processing for face detection: the color image is converted from RGB space to an improved YCrCb color space, and face skin-color regions are segmented by exploiting the concentration of face skin color in the Cr and Cb components; a layered face detection model is then applied to verify the candidate face regions. The present invention has a wide range of applications and can be used in image recognition, speech recognition, data mining, machine vision and other fields.
Description of the drawings
Fig. 1 is a block diagram of the layered face detection model.
Specific embodiments
The present invention is further explained below with reference to specific embodiments. It should be understood that these embodiments are intended only to illustrate the present invention and not to limit its scope. In addition, it should be understood that, after reading the teachings of the present invention, those skilled in the art can make various changes or modifications to the present invention, and such equivalents likewise fall within the scope defined by the claims appended to this application.
1. Color spaces
Before introducing the algorithm, several color spaces are briefly reviewed. A color space is a model that represents color attributes numerically. There is no single universal color system, because color can be expressed with different models and methods, and each color space has its own characteristics. When color quantization is considered, the first problem to be solved is the definition of a color space, and several different color spaces exist for different color attributes.
The most widely used visual color spaces were first developed by the CIE (Commission Internationale de l'Éclairage, the International Commission on Illumination) in the 1920s and have been extended since then. They do not correspond to real colors in nature; their advantage is that the colors we perceive can be represented numerically in terms of these primaries.
(1) RGB color space:
Defining light with wavelengths of 700 nm, 546.1 nm and 435.8 nm as the three primaries constitutes the RGB (red, green, blue) color space. It is an additive color space, because colors are produced by adding colored light, so a mixed color is always brighter than its primaries. Adding red, green and blue at maximum intensity produces white; adding equal values of red, green and blue produces a neutral gray, which is darker for lower values and brighter for higher ones. RGB is the color language commonly used by electronic input devices such as displays, scanners and digital cameras, which reproduce color by emitted or absorbed light rather than by reflected light. RGB is a space suited to machine processing rather than a color space based on human vision. Its main drawback is that it depends on the brightness values of the three primaries, so systems based on RGB are very sensitive to changes in brightness, shadows and non-uniform illumination. In addition, its gamut is rather narrow, and some visible colors cannot be represented in it.
(2) CMY space:
The CMY (cyan, magenta, yellow) space is a subtractive color space used in printing, where printed matter reproduces color by reflected light. Subtracting any one of the RGB primaries from white light yields the complementary colors cyan, magenta and yellow. Colors produced in the subtractive CMY space are not exactly the same as those produced in the additive RGB space; in particular, CMY cannot represent RGB colors by a simple conversion. When a neutral gray in RGB is converted to CMY for printing, it acquires a slight reddish-purple cast, which can be remedied by adding black (K). However, this fourth color destroys the simple equivalence between RGB and CMYK, making the correspondence between RGB and CMYK colors more complicated; there is no simple way to map them one to one.
(3) XYZ space:
The XYZ color space defines three imaginary primaries X, Y and Z, to which the rod and cone cells of the human eye are most sensitive. The characteristic of this space is that all colors are expressed in terms of these primaries; XYZ is obtained from RGB by a fixed linear transformation.
(4) YUV space:
The YUV color space is the basic color space of composite color video standards. It separates the primaries differently: YUV represents color with the luminance Y and the chroma components U and V, which correspond to hue and saturation. The U and V components are also called chroma components. YUV is obtained from RGB by a linear transformation.
(5) YCrCb space:
The YCrCb space is obtained from the YUV space by proportional scaling plus an offset; it is commonly used in image and video compression, such as JPEG, H.261 and MPEG, and is related to the RGB space by a linear transformation with an offset.
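The exact RGB-to-YCrCb relation is not reproduced in the text above. As a reference, the following sketch applies the standard full-range BT.601 (JPEG/JFIF) form of the transform; assuming this particular set of coefficients is an interpretation, not something the patent states explicitly.

```python
import numpy as np

def rgb_to_ycrcb(rgb):
    """Convert an 8-bit RGB image (H x W x 3) to YCrCb using the
    full-range BT.601 coefficients (the JPEG/JFIF convention)."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    return np.stack([y, cr, cb], axis=-1)
```

In practice, OpenCV's cv2.cvtColor with the COLOR_RGB2YCrCb flag applies essentially the same full-range conversion for 8-bit images.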
2. Algorithm
Research shows that the skin color of faces is relatively concentrated in color space, and color information can to a certain extent separate faces from most of the background. A suitable color model is therefore selected and the image is pre-processed with the skin-color characteristics of the face, greatly reducing the search range for faces. Moreover, the skin-color features are cheap to compute and very robust to geometric changes such as rotation and scaling, so they are very useful for face detection.
1. Color-space selection
This embodiment adopts the YCrCb color space, which has the following advantages:
(1) The YCrCb format has a composition principle similar to the human visual perception process.
(2) The YCrCb format is widely used in fields such as television display, and is the color representation commonly adopted in many video compression coding standards such as MPEG and JPEG.
(3) Like other formats such as HSI, the YCrCb format has the advantage of separating the luminance component of the color.
(4) Compared with some other formats such as HSI, the computation and the coordinate representation of the YCrCb format are fairly simple.
(5) Experimental results show that the clustering of skin color in the YCrCb space is relatively good.
2. Nonlinear transformation of the YCrCb color space
Since color spaces are sensitive to the illumination on the face surface, the skin-color model is built from the chroma components of the color space; Cr and Cb are chosen for this purpose. However, the YCrCb format is obtained directly from the RGB format by a linear transformation, so its chroma components are not completely independent of the luminance, and the skin-color cluster region changes nonlinearly with the luminance Y.
In the YCrCb space the skin-color cluster has a two-pointed spindle shape: at both large and small values of Y, the cluster region shrinks. It follows that, for different Y values, searching for the skin-color cluster region directly in the Cr-Cb sub-plane is not feasible; the influence of Y must be taken into account, i.e. the YCrCb format must be transformed nonlinearly so that the influence of Y on the chroma components Cr and Cb is as small as possible. Here, piecewise linear fitting is applied to the boundary of the chroma components.
Limiting the skin-color cluster region with the four Cr-Cb boundaries adapts very well to regions that are too bright or too dark, so that the robustness of the skin-color model is greatly improved. The specific formulas are:

$$C'_i(Y) = \begin{cases} \bigl(C_i(Y) - \bar{C}_i(Y)\bigr)\dfrac{W_{C_i}}{W_{C_i}(Y)} + \bar{C}_i(K_h), & Y < K_l \text{ or } K_h < Y \\ C_i(Y), & Y \in [K_l, K_h] \end{cases} \qquad (5)$$

$$W_{C_i}(Y) = \begin{cases} WL_{C_i} + \dfrac{(Y - Y_{min})\,(W_{C_i} - WL_{C_i})}{K_l - Y_{min}}, & Y < K_l \\ WH_{C_i} + \dfrac{(Y_{max} - Y)\,(W_{C_i} - WH_{C_i})}{Y_{max} - K_h}, & K_h < Y \end{cases} \qquad (6)$$

$$\bar{C}_b(Y) = \begin{cases} 108 + \dfrac{10\,(K_l - Y)}{K_l - Y_{min}}, & Y < K_l \\ 108 + \dfrac{10\,(Y - K_h)}{Y_{max} - K_h}, & K_h < Y \end{cases} \qquad (7)$$

$$\bar{C}_r(Y) = \begin{cases} 154 + \dfrac{10\,(K_l - Y)}{K_l - Y_{min}}, & Y < K_l \\ 154 + \dfrac{22\,(Y - K_h)}{Y_{max} - K_h}, & K_h < Y \end{cases} \qquad (8)$$

where $C'_i$ is the transformed chroma component ($C'_b$ or $C'_r$), with $i \in \{b, r\}$; $\bar{C}_i(Y)$ is the center of the $C_i$ cluster at luminance Y; $W_{C_i}$ is the overall width of the $C_i$ cluster; $W_{C_i}(Y)$ is the cluster width at luminance Y; $WL_{C_i}$ and $WH_{C_i}$ are the cluster widths in the low- and high-luminance sections; $K_l$ and $K_h$ are the lower and upper luminance thresholds of the piecewise transformation; $Y_{max}$ and $Y_{min}$ are the maximum and minimum luminance values.
$K_l = 125$ and $K_h = 188$; these values are estimated by training on face skin-color samples from the HHI image database; in the YCrCb space, $Y_{min} = 16$ and $Y_{max} = 235$. After this nonlinear color transformation, projecting into the Cr'-Cb' two-dimensional space yields a practical skin-color clustering model. Following the usual approach, this skin-color region can be approximated by an ellipse, whose analytical expression is:

$$\frac{(x - ec_x)^2}{a^2} + \frac{(y - ec_y)^2}{b^2} = 1 \qquad (9)$$

$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} C'_b - c_x \\ C'_r - c_y \end{bmatrix} \qquad (10)$$

where $ec_x$ and $ec_y$ are the coordinates of the ellipse center; a and b are the major and minor axes; θ is the rotation angle; after rotation by an arbitrary angle, the ellipse equation takes the form of formula (10); $C'_b - c_x$ and $C'_r - c_y$ are the projections in the chroma space along the x and y directions.
The constants in the analytical expression are:
$c_x = 109.38$, $c_y = 152.02$, $\theta = 2.53$ rad, $ec_x = 1.60$, $ec_y = 2.41$, $a = 25.39$, $b = 14.03$.
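The nonlinear transform of formulas (5) to (8) and the elliptical skin test of formulas (9) and (10) can be combined into a per-pixel skin classifier. The sketch below follows those formulas with the constants listed above; the function names, the channel-order convention and the vectorized structure are illustrative assumptions rather than part of the patent text.

```python
import numpy as np

# Constants estimated on the HHI database (see above).
KL, KH, YMIN, YMAX = 125.0, 188.0, 16.0, 235.0
WCB, WLCB, WHCB = 46.97, 23.0, 14.0
WCR, WLCR, WHCR = 38.76, 20.0, 10.0
CX, CY, THETA = 109.38, 152.02, 2.53
ECX, ECY, A, B = 1.60, 2.41, 25.39, 14.03

def _width(y, w, wl, wh):
    """Cluster width W_Ci(Y), formula (6)."""
    width = np.full_like(y, w)
    lo, hi = y < KL, y > KH
    width[lo] = wl + (y[lo] - YMIN) * (w - wl) / (KL - YMIN)
    width[hi] = wh + (YMAX - y[hi]) * (w - wh) / (YMAX - KH)
    return width

def _center(y, base, k_low, k_high):
    """Cluster center C_bar_i(Y), formulas (7) and (8)."""
    c = np.full_like(y, base)
    lo, hi = y < KL, y > KH
    c[lo] = base + k_low * (KL - y[lo]) / (KL - YMIN)
    c[hi] = base + k_high * (y[hi] - KH) / (YMAX - KH)
    return c

def _transform(ci, y, w, wl, wh, base, k_low, k_high):
    """Luminance-compensated chroma C'_i(Y), formula (5)."""
    center = _center(y, base, k_low, k_high)
    center_kh = base            # C_bar_i(K_h) reduces to the base value at Y = K_h
    out = ci.copy()
    mask = (y < KL) | (y > KH)
    out[mask] = (ci[mask] - center[mask]) * w / _width(y, w, wl, wh)[mask] + center_kh
    return out

def skin_mask(ycrcb):
    """Boolean skin mask from a float YCrCb image (H x W x 3, channel order Y, Cr, Cb,
    e.g. as produced by cv2.cvtColor(..., cv2.COLOR_BGR2YCrCb))."""
    y, cr, cb = [ycrcb[..., i].astype(np.float64) for i in range(3)]
    cb_t = _transform(cb, y, WCB, WLCB, WHCB, 108.0, 10.0, 10.0)
    cr_t = _transform(cr, y, WCR, WLCR, WHCR, 154.0, 10.0, 22.0)
    # Rotate (Cb', Cr') about (c_x, c_y) by theta and test ellipse membership, formulas (9)-(10).
    x = np.cos(THETA) * (cb_t - CX) + np.sin(THETA) * (cr_t - CY)
    yy = -np.sin(THETA) * (cb_t - CX) + np.cos(THETA) * (cr_t - CY)
    return ((x - ECX) ** 2 / A ** 2 + (yy - ECY) ** 2 / B ** 2) <= 1.0
```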
3. Verification
Although color information is very useful for face detection, it can only segment skin-color regions, including the face and hands, from the background, thereby narrowing the search range for faces; color information alone is therefore not enough to detect faces, and further processing is needed to confirm whether a face is present in a skin-color region. This embodiment provides a layered processing model for verifying face candidate regions, as shown in Fig. 1; the specific steps are as follows:
(1) First, skin-color regions are segmented with the face skin-color model above, and non-face regions are eliminated with median filtering, morphological processing operators and the like (a minimal sketch of this clean-up step is given after this list);
(2) Second, in order to handle faces in different orientations, the face shape is approximated by an ellipse: from the angle of the major and minor axes of the ellipse, the rotation angle of the face can be derived, and the ratio of the major to the minor axis of a face candidate ellipse, as well as the sizes of the axes, must lie within set ranges; elliptical regions outside these ranges are treated as non-face regions.
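A minimal sketch of the clean-up in item (1), assuming a uint8 binary skin mask (for instance produced by the classifier sketched earlier) and OpenCV; the kernel and filter sizes are illustrative assumptions, not values specified by the patent.

```python
import cv2

def clean_skin_mask(mask):
    """mask: uint8 binary skin mask (255 = skin). Suppress noise with median
    filtering and morphological opening/closing before extracting candidates."""
    mask = cv2.medianBlur(mask, 5)                              # remove salt-and-pepper noise
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)       # drop small specks
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)      # fill small holes
    return mask
```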
In this embodiment, the face candidate region is fitted with an ellipse fitting algorithm based on the least-squares method. Let $F(\mathbf{a}, \mathbf{x})$ be the ellipse curve to be determined, where $\mathbf{a} = [a, b, c, d, e, f]$ is the vector of ellipse parameters and $\mathbf{x} = [x^2, xy, y^2, x, y, 1]$ is the polynomial vector of the sample point coordinates, x and y being the coordinates of a sample point.
Define $F(\mathbf{a}, \mathbf{x}_i) = d_i$ as the algebraic distance from the sample point $\mathbf{x}_i$ to the curve $F(\mathbf{a}, \mathbf{x}) = 0$. Given N sample points, the ellipse parameter vector is obtained by minimizing the sum of the squared algebraic distances over the N points (a practical sketch follows).
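In practice the least-squares fit can be delegated to OpenCV's cv2.fitEllipse, which fits an ellipse to a point set in the least-squares sense. The axis-ratio and size bounds below are illustrative assumptions standing in for the "set ranges" mentioned in step 2, not values given by the patent.

```python
import cv2
import numpy as np

def ellipse_shape_check(region_points,
                        ratio_range=(1.1, 2.2),    # assumed bounds on major/minor axis ratio
                        axis_range=(20, 400)):     # assumed bounds on axis length (pixels)
    """Fit an ellipse to the boundary points of a candidate skin region and decide
    whether its shape is face-like. Returns (is_face_like, angle_in_degrees)."""
    pts = np.asarray(region_points, dtype=np.float32).reshape(-1, 1, 2)
    if len(pts) < 5:                               # fitEllipse needs at least 5 points
        return False, 0.0
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(pts)
    major, minor = max(d1, d2), min(d1, d2)
    ratio = major / max(minor, 1e-6)
    ok = (ratio_range[0] <= ratio <= ratio_range[1]
          and axis_range[0] <= minor and major <= axis_range[1])
    return ok, angle
```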
Because the eyes, mouth, eyebrows and other facial features differ in color from the facial skin, the texture of a face region is more complex than that of other face candidate regions such as the hand or neck; the variance can therefore be used to measure the texture complexity, and non-face regions are further removed according to the magnitude of the variance (see the sketch below).
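A minimal sketch of the texture check, assuming the gray-level variance inside the candidate region as the complexity measure; the threshold value is an illustrative assumption.

```python
import cv2

def texture_complexity(bgr_image, mask):
    """Gray-level variance over the masked candidate region (mask: uint8, 255 = region)."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    values = gray[mask > 0]
    return float(values.var()) if values.size else 0.0

def is_textured_enough(bgr_image, mask, threshold=200.0):   # threshold is an assumed value
    """Keep only regions whose texture is complex enough to contain facial features."""
    return texture_complexity(bgr_image, mask) >= threshold
```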
Subsequently, the face candidate region is rotated by the angle θ of the ellipse major and minor axes so as to normalize its direction, according to:

$$X_{rotated} = X\cos\theta + Y\sin\theta \qquad (13)$$

$$Y_{rotated} = Y\cos\theta - X\sin\theta \qquad (14)$$

where $X_{rotated}$ is the coordinate in the transverse (major-axis) direction after rotation by θ, and $Y_{rotated}$ is the coordinate in the minor-axis direction after rotation by θ.
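Formulas (13) and (14) rotate coordinates by the ellipse angle θ; applied to a whole candidate patch this amounts to rotating the image about the ellipse center, which OpenCV expresses with a 2x3 affine matrix. A sketch, with the interpolation choice as an assumption:

```python
import cv2
import numpy as np

def normalize_direction(image, center, theta_deg):
    """Rotate the image by theta (degrees) about the ellipse center so that the
    candidate region's direction is normalized (formulas (13)-(14) applied per pixel)."""
    h, w = image.shape[:2]
    rot = cv2.getRotationMatrix2D(center, theta_deg, 1.0)     # 2x3 rotation matrix
    return cv2.warpAffine(image, rot, (w, h), flags=cv2.INTER_LINEAR)

def rotate_point(x, y, theta_rad):
    """Point-wise form of formulas (13) and (14)."""
    x_rot = x * np.cos(theta_rad) + y * np.sin(theta_rad)
    y_rot = y * np.cos(theta_rad) - x * np.sin(theta_rad)
    return x_rot, y_rot
```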
Finally, connected-component analysis is performed. The gray level and texture of the eye, mouth and eyebrow regions clearly differ from the other parts of the face, as is evident from the detected face regions. The detected candidate face region is first converted to gray scale; based on an analysis of a large number of facial feature images, a gradient operation in the Y direction is applied, and connected-component analysis is then carried out with a face template; when the sum of gray values in the corresponding connected components exceeds a certain threshold, the candidate face region is considered a face region, and when it is below the threshold, the opposite holds.
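A minimal sketch of this final check, assuming the vertical-gradient magnitude as the feature map and OpenCV's connected-component statistics; the component-selection band and the gray-sum threshold are illustrative assumptions standing in for the face template and the "certain threshold" mentioned above.

```python
import cv2

def template_region_check(candidate_bgr, threshold=5000.0):   # threshold is an assumed value
    """Decide whether a direction-normalized candidate patch is a face by analysing
    high-gradient connected components (eyes, mouth, brows)."""
    gray = cv2.cvtColor(candidate_bgr, cv2.COLOR_BGR2GRAY)
    grad_y = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)        # gradient along the Y direction
    feature = cv2.convertScaleAbs(grad_y)
    _, binary = cv2.threshold(feature, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    h, w = gray.shape
    total = 0.0
    for i in range(1, n):                                      # label 0 is the background
        x, y, bw, bh, area = stats[i]
        # Keep components in the band where eyes, brows and mouth are expected
        # for an upright face template (assumed heuristic).
        if y < 0.8 * h and area > 0.001 * h * w:
            total += float(feature[labels == i].sum())
    return total > threshold
```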
Table 2-1 records the detection speed for single faces and multiple faces. A video sequence of 1000 frames was used as the test sample. The total time for single-face detection was 21258 ms, i.e. about 21 ms per frame, and the total time for multi-face detection was 35666 ms, i.e. about 36 ms per frame; the detection accuracies were 91.8% and 86.6% respectively. Since the experiment used 25 frame/s video, the real-time requirement is clearly met on a PC with an AMD 700 CPU and 128 MB of RAM. The experimental data show that the algorithm performs well.
                 Test frames    Detection time (ms)    Detection speed (ms/frame)    Detection accuracy
Single face          1000             21258                     21.258                     91.8%
Multiple faces       1000             35666                     35.666                     86.6%
The experimental results show that this model can detect not only frontal faces but also faces at arbitrary horizontal angles and with various poses and expressions, with equally good results. It can effectively detect multiple face targets under various poses, angles and distances, and when faces partially occlude one another.

Claims (8)

1. A face detection method based on skin-color layering, characterized in that the steps are:
Step 1: establish the face skin-color model
1.1 Color-space selection
The face skin-color model is built in the YCrCb color space;
1.2 Nonlinear transformation of the YCrCb color space
The chroma components of the YCrCb format are not completely independent of the luminance Y, and the skin-color cluster region changes nonlinearly with Y; the YCrCb format is therefore transformed nonlinearly so that the influence of Y on the chroma components Cr and Cb is as small as possible;
1.3 Ellipse fitting
The skin-color region is approximated by an ellipse and its analytical expression is computed, giving the face skin-color model after ellipse fitting;
Step 2: shape verification
A layered processing model is established for verifying face candidate regions, as follows:
1) skin-color regions are segmented using the face skin-color model;
2) to handle faces in different orientations, the face shape is approximated by an ellipse; the rotation angle of the face is derived from the angle of the major and minor axes of the ellipse, and the ratio of the major to the minor axis of a face candidate ellipse, as well as the sizes of the axes, must lie within set ranges; elliptical regions outside these ranges are treated as non-face regions;
Step 3: texture verification
Because the eyes, mouth, eyebrows and other facial features differ in color from the facial skin, the texture of a face region is more complex than that of other candidate regions such as the hand or neck; the texture complexity is therefore computed and used to further remove non-face regions;
Step 4: direction normalization
The face candidate region is rotated according to the angle of the major and minor axes of the ellipse so as to normalize its direction;
Step 5: connected-component analysis
The gray level and texture of the eye, mouth and eyebrow regions clearly differ from the other parts of the face, which is evident from the detected face regions;
the detected candidate face region is first converted to gray scale; based on an analysis of facial feature images, a gradient operation in the Y direction is applied to it, Y denoting the dark regions, and connected-component analysis is then carried out with a face template; when the sum of gray values in the corresponding connected components exceeds a certain threshold, the candidate face region is considered a face region; when it is below the threshold, the opposite holds.
2. The face detection method based on skin-color layering according to claim 1, characterized in that in step 1 the specific method of the nonlinear transformation of the YCrCb color space is:
piecewise linear fitting is applied to the boundary of the chroma components Cr and Cb, and the skin-color cluster region is bounded by these chroma boundaries, according to the following formulas:

$$C'_i(Y) = \begin{cases} \bigl(C_i(Y) - \bar{C}_i(Y)\bigr)\dfrac{W_{C_i}}{W_{C_i}(Y)} + \bar{C}_i(K_h), & Y < K_l \text{ or } K_h < Y \\ C_i(Y), & Y \in [K_l, K_h] \end{cases} \qquad (5)$$

$$W_{C_i}(Y) = \begin{cases} WL_{C_i} + \dfrac{(Y - Y_{min})\,(W_{C_i} - WL_{C_i})}{K_l - Y_{min}}, & Y < K_l \\ WH_{C_i} + \dfrac{(Y_{max} - Y)\,(W_{C_i} - WH_{C_i})}{Y_{max} - K_h}, & K_h < Y \end{cases} \qquad (6)$$

$$\bar{C}_b(Y) = \begin{cases} 108 + \dfrac{10\,(K_l - Y)}{K_l - Y_{min}}, & Y < K_l \\ 108 + \dfrac{10\,(Y - K_h)}{Y_{max} - K_h}, & K_h < Y \end{cases} \qquad (7)$$

$$\bar{C}_r(Y) = \begin{cases} 154 + \dfrac{10\,(K_l - Y)}{K_l - Y_{min}}, & Y < K_l \\ 154 + \dfrac{22\,(Y - K_h)}{Y_{max} - K_h}, & K_h < Y \end{cases} \qquad (8)$$

where $C'_i$ is the transformed chroma component ($C'_b$ or $C'_r$), with $i \in \{b, r\}$; $\bar{C}_i(Y)$ is the center of the $C_i$ cluster at luminance Y; $W_{C_i}$ is the overall width of the $C_i$ cluster; $W_{C_i}(Y)$ is the cluster width at luminance Y; $WL_{C_i}$ and $WH_{C_i}$ are the cluster widths in the low- and high-luminance sections; $K_l$ and $K_h$ are the lower and upper luminance thresholds of the piecewise transformation; $Y_{max}$ and $Y_{min}$ are the maximum and minimum luminance values;
after the above nonlinear color transformation, the chroma is projected into the Cr'-Cb' two-dimensional space, which gives the face skin-color model.
3. The face detection method based on skin-color layering according to claim 2, characterized in that in step 1 the following values are estimated by training on face skin-color samples from the HHI image database, in the YCrCb space:
$W_{C_b} = 46.97$, $WL_{C_b} = 23$, $WH_{C_b} = 14$, $W_{C_r} = 38.76$, $WL_{C_r} = 20$, $WH_{C_r} = 10$,
$K_l = 125$, $K_h = 188$, $Y_{min} = 16$, $Y_{max} = 235$.
4. The face detection method based on skin-color layering according to claim 2, characterized in that in step 1 the formulas of the ellipse fitting are:

$$\frac{(x - ec_x)^2}{a^2} + \frac{(y - ec_y)^2}{b^2} = 1 \qquad (9)$$

$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} C'_b - c_x \\ C'_r - c_y \end{bmatrix} \qquad (10)$$

where $ec_x$ and $ec_y$ are the coordinates of the ellipse center; a and b are the major and minor axes; θ is the rotation angle; after rotation by an arbitrary angle, the ellipse equation takes the form of formula (10); $C'_b - c_x$ and $C'_r - c_y$ are the projections in the chroma space along the x and y directions.
5. The face detection method based on skin-color layering according to claim 4, characterized in that in step 1 the constants in formulas (9) and (10) are:
$c_x = 109.38$, $c_y = 152.02$, $\theta = 2.53$ rad, $ec_x = 1.60$, $ec_y = 2.41$, $a = 25.39$, $b = 14.03$.
6. The face detection method based on skin-color layering according to claim 1, characterized in that in step 2, when the skin-color regions are segmented with the face skin-color model, median filtering and morphological operators are used to eliminate non-face regions.
7. The face detection method based on skin-color layering according to claim 1, characterized in that in step 2 the face candidate region is fitted with an ellipse fitting algorithm based on the least-squares method.
8. The face detection method based on skin-color layering according to claim 1, characterized in that in step 3 the texture complexity is computed as the variance, and non-face regions are further removed according to the magnitude of the variance.
CN201610885523.8A 2016-10-10 2016-10-10 Face detection method based on skin-color layering Pending CN106485222A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610885523.8A CN106485222A (en) 2016-10-10 2016-10-10 Face detection method based on skin-color layering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610885523.8A CN106485222A (en) 2016-10-10 2016-10-10 Face detection method based on skin-color layering

Publications (1)

Publication Number Publication Date
CN106485222A true CN106485222A (en) 2017-03-08

Family

ID=58269467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610885523.8A Pending CN106485222A (en) Face detection method based on skin-color layering

Country Status (1)

Country Link
CN (1) CN106485222A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107578000A (en) * 2017-08-25 2018-01-12 百度在线网络技术(北京)有限公司 For handling the method and device of image
CN108021881A (en) * 2017-12-01 2018-05-11 腾讯数码(天津)有限公司 A kind of skin color segmentation method, apparatus and storage medium
CN108509951A (en) * 2018-03-28 2018-09-07 韩劝劝 Image procossing formula opening operation platform
CN108564070A (en) * 2018-05-07 2018-09-21 京东方科技集团股份有限公司 Method for extracting gesture and its device
CN109977734A (en) * 2017-12-28 2019-07-05 华为技术有限公司 Image processing method and device
CN110188680A (en) * 2019-05-29 2019-08-30 南京林业大学 Tea tree tender shoots intelligent identification Method based on factor iteration
CN110513762A (en) * 2018-10-30 2019-11-29 永康市道可道科技有限公司 Super bath lamp body is automatically switched platform
CN110751078A (en) * 2019-10-15 2020-02-04 重庆灵翎互娱科技有限公司 Method and equipment for determining non-skin color area of three-dimensional face
CN111898470A (en) * 2020-07-09 2020-11-06 武汉华星光电技术有限公司 Device and method for extracting fingerprint outside screen and terminal
CN111914632A (en) * 2020-06-19 2020-11-10 广州杰赛科技股份有限公司 Face recognition method, face recognition device and storage medium
CN113269141A (en) * 2021-06-18 2021-08-17 浙江机电职业技术学院 Image processing method and device
CN113313093A (en) * 2021-07-29 2021-08-27 杭州魔点科技有限公司 Face identification method and system based on face part extraction and skin color editing
WO2022198751A1 (en) * 2021-03-25 2022-09-29 南京邮电大学 Rapid facial detection method based on multi-layer preprocessing

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1932847A (en) * 2006-10-12 2007-03-21 上海交通大学 Method for detecting colour image human face under complex background
CN102324025A (en) * 2011-09-06 2012-01-18 北京航空航天大学 Human face detection and tracking method based on Gaussian skin color model and feature analysis

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1932847A (en) * 2006-10-12 2007-03-21 上海交通大学 Method for detecting colour image human face under complex background
CN102324025A (en) * 2011-09-06 2012-01-18 北京航空航天大学 Human face detection and tracking method based on Gaussian skin color model and feature analysis

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
REIN-LIEN HSU et al.: "Face Detection in Color Images", IEEE Transactions on Pattern Analysis and Machine Intelligence *
何柯峰: "Online face recognition system" (在线人脸识别系统), China Master's Theses Full-text Database, Information Science and Technology (monthly) *
曾龙龙 et al.: "Face detection algorithm based on color centroid and layered filtering structure" (基于颜色重心和分层过滤结构的人脸检测算法), Journal of Zhejiang Sci-Tech University *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107578000B (en) * 2017-08-25 2023-10-31 百度在线网络技术(北京)有限公司 Method and device for processing image
CN107578000A (en) * 2017-08-25 2018-01-12 百度在线网络技术(北京)有限公司 For handling the method and device of image
CN108021881A (en) * 2017-12-01 2018-05-11 腾讯数码(天津)有限公司 A kind of skin color segmentation method, apparatus and storage medium
CN108021881B (en) * 2017-12-01 2023-09-01 腾讯数码(天津)有限公司 Skin color segmentation method, device and storage medium
CN109977734A (en) * 2017-12-28 2019-07-05 华为技术有限公司 Image processing method and device
CN108509951A (en) * 2018-03-28 2018-09-07 韩劝劝 Image procossing formula opening operation platform
CN108509951B (en) * 2018-03-28 2019-01-18 六安荣耀创新智能科技有限公司 Image procossing formula opening operation platform
CN108564070A (en) * 2018-05-07 2018-09-21 京东方科技集团股份有限公司 Method for extracting gesture and its device
CN110513762A (en) * 2018-10-30 2019-11-29 永康市道可道科技有限公司 Super bath lamp body is automatically switched platform
CN110513762B (en) * 2018-10-30 2021-04-23 新昌县馁侃农业开发有限公司 Automatic switch platform for bathroom heater lamp body
CN110188680A (en) * 2019-05-29 2019-08-30 南京林业大学 Tea tree tender shoots intelligent identification Method based on factor iteration
CN110751078A (en) * 2019-10-15 2020-02-04 重庆灵翎互娱科技有限公司 Method and equipment for determining non-skin color area of three-dimensional face
CN110751078B (en) * 2019-10-15 2023-06-20 重庆灵翎互娱科技有限公司 Method and equipment for determining non-skin color region of three-dimensional face
CN111914632A (en) * 2020-06-19 2020-11-10 广州杰赛科技股份有限公司 Face recognition method, face recognition device and storage medium
CN111914632B (en) * 2020-06-19 2024-01-05 广州杰赛科技股份有限公司 Face recognition method, device and storage medium
CN111898470A (en) * 2020-07-09 2020-11-06 武汉华星光电技术有限公司 Device and method for extracting fingerprint outside screen and terminal
CN111898470B (en) * 2020-07-09 2024-02-09 武汉华星光电技术有限公司 Off-screen fingerprint extraction device and method and terminal
WO2022198751A1 (en) * 2021-03-25 2022-09-29 南京邮电大学 Rapid facial detection method based on multi-layer preprocessing
CN113269141A (en) * 2021-06-18 2021-08-17 浙江机电职业技术学院 Image processing method and device
CN113269141B (en) * 2021-06-18 2023-09-22 浙江机电职业技术学院 Image processing method and device
CN113313093B (en) * 2021-07-29 2021-11-05 杭州魔点科技有限公司 Face identification method and system based on face part extraction and skin color editing
CN113313093A (en) * 2021-07-29 2021-08-27 杭州魔点科技有限公司 Face identification method and system based on face part extraction and skin color editing

Similar Documents

Publication Publication Date Title
CN106485222A (en) A kind of method for detecting human face being layered based on the colour of skin
CN102194108B (en) Smile face expression recognition method based on clustering linear discriminant analysis of feature selection
CN104732200B (en) A kind of recognition methods of skin type and skin problem
CN106446872A (en) Detection and recognition method of human face in video under low-light conditions
CN109858439A (en) A kind of biopsy method and device based on face
Le Meur et al. Relevance of a feed-forward model of visual attention for goal-oriented and free-viewing tasks
CN107545536A (en) The image processing method and image processing system of a kind of intelligent terminal
CN106650606A (en) Matching and processing method for face image and face image model construction system
CN106529494A (en) Human face recognition method based on multi-camera model
CN112906550B (en) Static gesture recognition method based on watershed transformation
Atharifard et al. Robust component-based face detection using color feature
CN106326823A (en) Method and system for acquiring head image in picture
CN106909883A (en) A kind of modularization hand region detection method and device based on ROS
CN106909884A (en) A kind of hand region detection method and device based on hierarchy and deformable part sub-model
Paul et al. PCA based geometric modeling for automatic face detection
CN108274476A (en) A kind of method of anthropomorphic robot crawl sphere
Rahman et al. An automatic face detection and gender classification from color images using support vector machine
Mohammed et al. Image segmentation for skin detection
Youlian et al. Face detection method using template feature and skin color feature in rgb color space
Parente et al. Assessing facial image accordance to ISO/ICAO requirements
Naji Human face detection from colour images based on multi-skin models, rule-based geometrical knowledge, and artificial neural network
Rao et al. Neural network approach for eye detection
Aminian et al. Face detection using color segmentation and RHT
Belaroussi et al. Fusion of multiple detectors for face and eyes localization
CN110991223A (en) Method and system for identifying beautiful pupil based on transfer learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170308