CN102236786B - Light adaptation human skin colour detection method - Google Patents


Info

Publication number
CN102236786B
CN102236786B · CN201110185739 · CN201110185739A
Authority
CN
China
Prior art keywords
skin
model
rgb
image
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201110185739
Other languages
Chinese (zh)
Other versions
CN102236786A (en)
Inventor
苗振江 (Miao Zhenjiang)
耿杰 (Geng Jie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiaotong University
Priority to CN 201110185739
Publication of CN102236786A
Application granted
Publication of CN102236786B
Status: Expired - Fee Related

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an illumination-adaptive human skin color detection method in the technical field of pattern recognition and image processing. The method comprises the following steps: collecting a training database; using the training database to train a basic skin color model and a set of illumination models; screening the pixels of the image to be detected with the basic skin color model; finding, among the illumination models, the one closest to the image to be detected; using that model to correct both the image to be detected and the basic skin color model; and outputting the corrected image after detection by the corrected skin color model. The invention markedly improves the illumination component of the skin region in the image to be detected, raises the detection precision and the accuracy of skin color detection, and reduces noise interference.

Description

An illumination-adaptive human skin color detection method
Technical field
The invention belongs to the technical field of pattern recognition and image processing, and in particular relates to an illumination-adaptive human skin color detection method.
Background art
Human skin color detection is widely used in many fields, such as gesture recognition, face recognition, and pornographic image filtering. Skin color detection usually serves as a preprocessing step in these fields, so its precision has a large impact on subsequent processing. The technique is required to detect as many human skin pixels as possible in an image while keeping the number of non-skin pixels low. A common approach is to compile a large number of human skin images, mark the skin pixels in them, build a skin color model from these pixels, and then classify the pixels of the image under test. However, skin color in an image changes significantly with illumination, so a basic skin color model cannot cope well with different lighting conditions. To make skin color detection adapt to different lighting environments, two approaches are common: the first re-balances the illumination of the image under test and keeps the skin color model unchanged (color constancy); the second dynamically adapts the skin color model itself (dynamic adaptation).
Both approaches have shortcomings. The first applies a single unified illumination model to correct every image under test; such a one-size-fits-all model does not suit all images, and the transformed image often exhibits color distortion. The second adapts the skin color model by analyzing the illumination of the image under test, but the illumination analysis easily introduces noise, and correcting the skin color model with a noisy illumination estimate easily biases the model.
Summary of the invention
To address the deficiencies of the existing methods mentioned in the background art, namely color distortion and susceptibility to noise during skin color detection, the present invention proposes an illumination-adaptive human skin color detection method.
The technical scheme of the present invention is an illumination-adaptive human skin color detection method, characterized in that the method comprises the following steps:
Step 1: collect data to form a database, and use the data in the database to train a basic skin color model and a set of illumination models;
Step 2: screen the pixels of the image to be detected with the basic skin color model;
Step 3: on the basis of step 2, perform illumination analysis on the image to be detected, and find among the illumination models the one closest to the image to be detected;
Step 4: use the illumination model closest to the image to be detected to correct both the image to be detected and the basic skin color model;
Step 5: detect the corrected image with the corrected basic skin color model and output the result.
The formula with which the basic skin color model screens the pixels of the image to be detected is:

$$\frac{P(rgb|skin)}{P(rgb|\neg skin)} \ge \Theta_1$$

wherein:
P(rgb|skin) is the probability that a skin pixel falls in color bin (r, g, b);
P(rgb|¬skin) is the probability that a non-skin pixel falls in color bin (r, g, b);
Θ₁ is the primary detection threshold.
The computing formula of P(rgb|skin) is:

$$P(rgb|skin) = \frac{s(rgb)}{T_s}$$

wherein:
s(rgb) is the number of skin pixels in the training database that fall into color bin (r, g, b);
T_s is the total number of skin pixels in the training database.
The computing formula of P(rgb|¬skin) is:

$$P(rgb|\neg skin) = \frac{n(rgb)}{T_n}$$

wherein:
n(rgb) is the number of non-skin pixels in the training database that fall into color bin (r, g, b);
T_n is the total number of non-skin pixels in the training database.
The method for correcting the image to be detected is skin region projection, whose formula is:

$$x_t = Ax_i + b$$

wherein:
x_t is a skin pixel after projection;
x_i is a 3 × 1 skin pixel of the original image;
A is a nonsingular matrix;
b is a 3 × 1 vector.
The method for correcting the basic skin color model is skin model fusion, whose formula is:

$$P'(rgb|skin) = P(rgb|skin) + u \cdot P_t(rgb|skin)$$

wherein:
P'(rgb|skin) is the probability, after model fusion, that a skin pixel falls in color bin (r, g, b);
u is the model fusion threshold;
P_t(rgb|skin) is the probability, under illumination model t, that a skin pixel falls in color bin (r, g, b).
The computing formula of P_t(rgb|skin) is:

$$P_t(rgb|skin) = \frac{s_t(rgb)}{T_{st}}$$

wherein:
s_t(rgb) is the number of skin pixels of illumination model t that fall into bin (r, g, b);
T_{st} is the total number of pixels in illumination model t.
The present invention mines, by clustering, the skin illumination conditions that may occur in the training database, and thereby generates multiple illumination models that effectively simulate different skin illumination information. Correcting the image to be detected with an illumination model markedly improves the illumination component of its skin region, making the skin illumination of the image consistent with the illumination model and thus raising detection precision, without noticeably distorting the image. Modifying the basic skin color model with the illumination model shifts the basic model toward the skin region of the image to be detected, which markedly improves the accuracy of skin color detection and reduces noise.
Brief description of the drawings
Fig. 1 is the training flowchart of the present invention;
Fig. 2 is the testing flowchart of the present invention;
Fig. 3 is a schematic diagram of the distance measure used in illumination model selection.
Detailed description of the embodiments
The preferred embodiment is described in detail below with reference to the accompanying drawings. It should be emphasized that the following description is merely exemplary and is not intended to limit the scope or application of the invention.
The present invention is divided into a training stage and a testing stage, shown in Fig. 1 and Fig. 2 respectively.
1. In the training stage, data are first collected to form a database, i.e., a large number of training images, including images that contain human skin and images that do not. The database is used to train the basic skin color model and the illumination models. The basic skin color model is trained as a histogram statistics model. Training the illumination models consists of two parts: image feature extraction and image clustering. The principle of the illumination models is to cluster the skin images in the training database that share identical or similar skin illumination, thereby constructing M different lighting environments that can accommodate images to be detected under different illumination conditions.
2. In the testing stage, an image to be detected first undergoes primary detection, which uses only the basic skin color model and applies conventional Bayesian estimation to obtain a rough skin color result. After primary detection, the detection result is subjected to illumination analysis, and the closest illumination model is picked from the trained models; this model is then used to correct both the image to be detected and the basic skin color model. The image is corrected by skin region projection, and the basic skin color model is corrected by skin model fusion. The corrected image is output after secondary detection by the corrected skin color model.
The concrete steps of the present invention are:
Step 1: collect data to form a database, and use the data in the database to train a basic skin color model and a set of illumination models;
Step 2: screen the pixels of the image to be detected with the basic skin color model;
Step 3: on the basis of step 2, perform illumination analysis on the image to be detected, and find among the illumination models the one closest to the image to be detected;
Step 4: use the illumination model closest to the image to be detected to correct both the image to be detected and the basic skin color model;
Step 5: detect the corrected image with the corrected basic skin color model and output the result.
The training flow of the system is explained in detail below with reference to Fig. 1:
1. Training the basic skin color model
The basic skin color model is trained as a histogram statistics model. Each channel of the RGB color space is divided into 32 equal parts, giving 32 × 32 × 32 color bins. One histogram is built for skin pixels and one for non-skin pixels, counting how many skin and non-skin pixels fall into each color bin. From these counts the probability that a skin or non-skin pixel falls in each color bin is computed, defined as follows:
$$P(rgb|skin) = \frac{s(rgb)}{T_s}, \qquad P(rgb|\neg skin) = \frac{n(rgb)}{T_n} \qquad (1)$$

wherein:
P(rgb|skin) is the probability that a skin pixel falls in color bin (r, g, b);
P(rgb|¬skin) is the probability that a non-skin pixel falls in color bin (r, g, b);
s(rgb) is the number of skin pixels in the training database that fall into color bin (r, g, b);
n(rgb) is the number of non-skin pixels in the training database that fall into color bin (r, g, b);
T_s is the total number of skin pixels in the training database;
T_n is the total number of non-skin pixels in the training database.
A sketch of this histogram training follows.
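As a concrete illustration of the histogram training, here is a minimal Python/NumPy sketch. The function and variable names are our own, and the bin width of 8 follows from the 32 equal divisions of each 0-255 channel; the patent prescribes only the counting itself, not this implementation.

```python
import numpy as np

BINS = 32  # 32 equal divisions per RGB channel -> 32 x 32 x 32 color bins

def train_basic_skin_model(pixels, is_skin):
    """Build the P(rgb|skin) and P(rgb|not skin) histograms of Eq. (1).

    pixels:  (N, 3) uint8 RGB values gathered from the training database.
    is_skin: (N,) boolean array, True where the pixel is hand-marked as skin.
    """
    b = (pixels // 8).astype(np.int64)                  # channel value -> bin index
    flat = (b[:, 0] * BINS + b[:, 1]) * BINS + b[:, 2]  # flatten to one bin id

    s = np.bincount(flat[is_skin], minlength=BINS**3)   # s(rgb)
    n = np.bincount(flat[~is_skin], minlength=BINS**3)  # n(rgb)

    p_skin = s / max(int(is_skin.sum()), 1)             # P(rgb|skin) = s(rgb)/T_s
    p_nonskin = n / max(int((~is_skin).sum()), 1)       # P(rgb|not skin) = n(rgb)/T_n
    return p_skin.reshape(BINS, BINS, BINS), p_nonskin.reshape(BINS, BINS, BINS)
```

The two returned arrays are P(rgb|skin) and P(rgb|¬skin) of formula (1), indexed by bin coordinates and ready for the ratio test of formula (4).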
2. Training the illumination models
Training the illumination models is divided into feature extraction and clustering.
First, features are extracted from each skin image in the training database: according to the image's annotation (skin pixels are marked by hand), statistics are computed over all skin pixels in the image. The feature vector is defined as:
$$\xi = [m(r), m(g), m(b), \delta(r), \delta(g), \delta(b)] \qquad (2)$$
wherein:
ξ is the feature vector of the skin image;
m(r), m(g), m(b) are the means of the skin pixels in the r, g, and b channels;
δ(r), δ(g), δ(b) are the standard deviations of the skin pixels in the r, g, and b channels.
After feature extraction, each skin image is represented by a single feature vector ξ.
Next, let the number of illumination models be M. All feature vectors in the training database are clustered with the k-means algorithm; the number of clusters equals the number of illumination models, M, so each cluster represents one kind of skin illumination. The skin pixels of each cluster, i.e., of one specific lighting environment, can be described by a multivariate Gaussian distribution, so an illumination model is expressed as a trivariate Gaussian distribution in RGB space, defined as follows:
$$p(x) = \frac{1}{(2\pi)^{3/2}|\Sigma_t|^{1/2}} \exp\left[-\frac{1}{2}(x-\mu_t)^T\Sigma_t^{-1}(x-\mu_t)\right] \qquad (3)$$

wherein:
p(x) is the probability of pixel x under the illumination model;
x is a three-dimensional pixel;
μ_t is the mean of the illumination model;
Σ_t is the covariance matrix of the illumination model; μ_t and Σ_t are obtained by maximum likelihood estimation.
An illumination model is thus represented by a Gaussian distribution (μ_t, Σ_t); a sketch of this training step follows.
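The following sketch illustrates this training step: per-image feature extraction (formula (2)) followed by k-means clustering and per-cluster Gaussian fitting. scikit-learn's KMeans stands in for the unspecified k-means implementation, and M = 8 is an arbitrary illustrative model count; the patent leaves M open.

```python
import numpy as np
from sklearn.cluster import KMeans

def image_feature(skin_pixels):
    """Eq. (2): xi = [m(r), m(g), m(b), d(r), d(g), d(b)] for one skin image.
    skin_pixels: (N, 3) array of that image's hand-marked skin pixels."""
    return np.concatenate([skin_pixels.mean(axis=0), skin_pixels.std(axis=0)])

def train_illumination_models(skin_pixels_per_image, M=8):
    """Cluster the per-image features with k-means, then fit one trivariate
    Gaussian (mu_t, Sigma_t) per cluster, as in Eq. (3)."""
    feats = np.array([image_feature(p) for p in skin_pixels_per_image])
    labels = KMeans(n_clusters=M, n_init=10).fit_predict(feats)

    models = []
    for t in range(M):
        # Pool the skin pixels of every image assigned to cluster t.
        pts = np.vstack([p for p, l in zip(skin_pixels_per_image, labels) if l == t])
        mu = pts.mean(axis=0)                    # estimated mean
        sigma = np.cov(pts, rowvar=False)        # sample covariance (ML up to N/(N-1))
        models.append((mu, sigma))
    return models  # list of (mu_t, Sigma_t), one per illumination model
```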
The testing flow of the system is explained in detail below with reference to Fig. 2:
1. Primary detection
An image to be detected first undergoes primary detection, which uses only the basic skin color model, to obtain a rough result. For each pixel of the image, first determine which color bin it falls into, then screen it with the following formula:
$$\frac{P(rgb|skin)}{P(rgb|\neg skin)} \ge \Theta_1, \qquad (\Theta_1 \ge 0) \qquad (4)$$

wherein:
Θ₁ is the primary detection threshold.
If a pixel satisfies formula (4), it is labeled a skin pixel; otherwise it is labeled a non-skin pixel. The larger Θ₁ is chosen, the stricter the requirement for skin pixels and the fewer skin pixels are obtained. The result of primary detection affects the subsequent skin region projection step; a sketch of primary detection follows.
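A sketch of the primary detection of formula (4), under the same binning assumption as above; the default Θ₁ = 1 and the small ε that guards against division by zero are our additions.

```python
import numpy as np

def primary_detection(image, p_skin, p_nonskin, theta1=1.0, eps=1e-12):
    """Eq. (4): mark a pixel as skin when P(rgb|skin)/P(rgb|not skin) >= theta1.

    image: (H, W, 3) uint8 RGB image; p_skin, p_nonskin: (32, 32, 32) histograms.
    Returns a boolean (H, W) skin mask.
    """
    b = (image // 8).astype(np.int64)                  # locate each pixel's color bin
    num = p_skin[b[..., 0], b[..., 1], b[..., 2]]
    den = p_nonskin[b[..., 0], b[..., 1], b[..., 2]] + eps
    return num / den >= theta1
```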
2. Illumination model selection
The result of primary detection is used to analyze the skin illumination of the image and to pick the closest of the trained illumination models, which then drives the subsequent skin region projection and skin model fusion steps.
The skin pixels obtained by primary detection are likewise described by a trivariate Gaussian in RGB space, i.e., a set of parameters (μ_i, Σ_i) describing the skin pixel distribution of the image to be detected. From the properties of the Gaussian distribution, points of equal probability lie on the surface of a hyperellipsoid, and most of the data fall inside the hyperellipsoid determined by (μ, Σ). This hyperellipsoid is:
$$\gamma^2 = (x-\mu)^T\Sigma^{-1}(x-\mu) \qquad (5)$$

wherein:
γ is the Mahalanobis distance;
μ is the mean of the Gaussian distribution;
Σ is the covariance matrix of the Gaussian distribution.
In the three-dimensional RGB color space the hyperellipsoid becomes a triaxial ellipsoid. Choosing γ = 1 gives two ellipsoids, the illumination model ellipsoid and the image ellipsoid, as shown in Fig. 3. A measure of the relation between the image to be detected and an illumination model is needed, so a distance is defined: each ellipsoid has 6 vertices, the vertices of the two ellipsoids are placed in correspondence, and the distance is the sum of the distances between the 6 vertex pairs. The vertices are obtained by eigenvalue decomposition of the two covariance matrices Σ_t and Σ_i. The distance between the image to be detected and every illumination model can then be computed, and the illumination model at minimum distance from the image is taken to represent the lighting condition of that image; a sketch of this selection follows.
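The patent defines the distance but leaves the vertex correspondence implicit. The sketch below, a minimal reading of Fig. 3, pairs the vertices by eigh's ascending eigenvalue order and fixes each eigenvector's sign convention; both choices are our assumptions.

```python
import numpy as np

def ellipsoid_vertices(mu, sigma):
    """Six vertices of the gamma = 1 ellipsoid (x-mu)^T Sigma^-1 (x-mu) = 1:
    mu +/- sqrt(lambda_k) u_k for each eigenpair (lambda_k, u_k) of Sigma."""
    lam, U = np.linalg.eigh(sigma)                        # ascending eigenvalues
    signs = np.sign(U[np.abs(U).argmax(axis=0), range(3)])
    U = U * signs                                         # fix eigenvector sign ambiguity
    axes = (np.sqrt(lam) * U).T                           # one semi-axis vector per row
    return np.concatenate([mu + axes, mu - axes])         # (6, 3) vertex array

def model_distance(mu_i, sigma_i, mu_t, sigma_t):
    """Sum of distances between the 6 corresponding vertex pairs (Fig. 3)."""
    v_img = ellipsoid_vertices(mu_i, sigma_i)
    v_mdl = ellipsoid_vertices(mu_t, sigma_t)
    return np.linalg.norm(v_img - v_mdl, axis=1).sum()

def select_illumination_model(mu_i, sigma_i, models):
    """Index of the trained illumination model (mu_t, Sigma_t) nearest the image."""
    return int(np.argmin([model_distance(mu_i, sigma_i, mu, s) for mu, s in models]))
```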
3. Skin color region projection
After an illumination model is selected, the image to be detected must be processed to correct its skin illumination, which improves the detection precision of the skin color model. This step is realized by skin region projection: the skin region of the image to be detected is projected into the trained illumination model to change its illumination condition. The previous step produced the illumination model ellipsoid (μ_t, Σ_t) and the image ellipsoid (μ_i, Σ_i), and we know that most of the data of a Gaussian distribution fall inside the ellipsoid determined by (μ, Σ), where μ represents the center of the ellipsoid and Σ its rotation and scaling. Therefore, projecting the image ellipsoid onto the illumination model ellipsoid (the target ellipsoid) suffices to change the lighting condition. The skin region projection is defined as an affine transformation:
$$x_t = Ax_i + b \qquad (6)$$

wherein:
x_t is a skin pixel after projection;
x_i is a 3 × 1 skin pixel of the original image;
A and b are the projection parameters: A is a nonsingular matrix and b is a 3 × 1 vector.
Projection means that a data point before projection and its image after projection have the same probability value, so we set γ_t = γ_i, from which the projection equation follows:

$$(x_t-\mu_t)^T\Sigma_t^{-1}(x_t-\mu_t) = (x_i-\mu_i)^T\Sigma_i^{-1}(x_i-\mu_i) \qquad (7)$$

Substituting formula (6) gives:

$$(Ax_i+b-\mu_t)^T\Sigma_t^{-1}(Ax_i+b-\mu_t) = (x_i-\mu_i)^T\Sigma_i^{-1}(x_i-\mu_i) \qquad (8)$$

$$[x_i-A^{-1}(\mu_t-b)]^T A^T\Sigma_t^{-1}A\,[x_i-A^{-1}(\mu_t-b)] = (x_i-\mu_i)^T\Sigma_i^{-1}(x_i-\mu_i) \qquad (9)$$

$$\Rightarrow\; A^{-1}(\mu_t-b)=\mu_i, \qquad A^T\Sigma_t^{-1}A=\Sigma_i^{-1} \qquad (10)$$

Σ_t and Σ_i are symmetric matrices, so Σ_t⁻¹ and Σ_i⁻¹ are also symmetric; eigenvalue decomposition of Σ_t⁻¹ and Σ_i⁻¹ yields:

$$\Sigma_t^{-1}=V_t D_t V_t^T; \qquad \Sigma_i^{-1}=V_i D_i V_i^T \qquad (11)$$

wherein:
V_t and V_i are 3 × 3 orthogonal matrices, with V_t⁻¹ = V_tᵀ and V_i⁻¹ = V_iᵀ;
D_t and D_i are 3 × 3 real diagonal matrices.
The second equation of (10) then becomes:

$$A^T V_t D_t V_t^T A = V_i D_i V_i^T \qquad (12)$$

$$\Rightarrow\; V_i^T A^T V_t D_t V_t^T A V_i = D_i \qquad (13)$$

Let $Y = V_t^T A V_i$; then:

$$Y^T D_t Y = D_i \qquad (14)$$

Taking Y to be diagonal, the diagonal matrices commute, so:

$$Y^2 D_t = D_i \;\Rightarrow\; Y = \sqrt{D_i/D_t} \qquad (15)$$

From formulas (10) and (15):

$$A = V_t Y V_i^T = V_t\sqrt{D_i/D_t}\,V_i^T \qquad (16)$$

$$b = \mu_t - A\mu_i \qquad (17)$$
The parameters (A, b) given by formulas (16) and (17) project the image ellipsoid fully onto the target ellipsoid (the illumination model ellipsoid). In general, however, a full projection distorts the image and fails to achieve the desired effect, so we usually move the image ellipsoid only part of the way toward the illumination model ellipsoid; that is, we change the position of the target ellipsoid. Here thresholds are used to control the position of the target ellipsoid, and the new target ellipsoid is defined as:

$$\mu_t' = w\mu_i + (1-w)\mu_t, \qquad \Sigma_t' = v\Sigma_i + (1-v)\Sigma_t, \qquad (w,v\in[0,1]) \qquad (18)$$
wherein:
μ′_t is the mean of the target ellipsoid;
Σ′_t is the covariance matrix of the target ellipsoid;
w is the mean control threshold of the target ellipsoid;
v is the covariance control threshold of the target ellipsoid.
This arrangement guarantees that (μ′_t, Σ′_t) lies between (μ_i, Σ_i) and (μ_t, Σ_t): w = v = 0 gives full projection, and w = v = 1 gives no projection. Replacing (μ_t, Σ_t) in formula (16) with (μ′_t, Σ′_t) yields:

$$A = V_t'\sqrt{D_i/D_t'}\,V_i^T, \qquad b = \mu_t' - A\mu_i \qquad (19)$$

wherein:

$$V_t'\cdot D_t'\cdot V_t'^T = \Sigma_t'^{-1}$$

The image to be detected can then be projected with formula (6); a sketch of this projection step follows.
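Purely as an illustration of formulas (18) and (19), the following Python/NumPy sketch computes the partial projection. The pairing of the two eigendecompositions in eigh's ascending eigenvalue order, and the defaults w = v = 0.5, are our own assumptions; the patent fixes neither.

```python
import numpy as np

def skin_region_projection(pixels, mu_i, sigma_i, mu_m, sigma_m, w=0.5, v=0.5):
    """Partially project image skin pixels onto the chosen illumination model.

    pixels:  (N, 3) array of RGB skin pixels x_i from the image to be detected.
    (mu_i, sigma_i): image Gaussian; (mu_m, sigma_m): illumination model Gaussian.
    w, v in [0, 1]: 0 = full projection, 1 = no projection (Eq. 18).
    """
    mu_t = w * mu_i + (1 - w) * mu_m             # blended target mean
    sigma_t = v * sigma_i + (1 - v) * sigma_m    # blended target covariance

    # Eigendecompositions of the inverse covariances (Eq. 11).
    D_t, V_t = np.linalg.eigh(np.linalg.inv(sigma_t))
    D_i, V_i = np.linalg.eigh(np.linalg.inv(sigma_i))

    Y = np.diag(np.sqrt(D_i / D_t))              # Eq. (15)
    A = V_t @ Y @ V_i.T                           # Eq. (19), first part
    b = mu_t - A @ mu_i                           # Eq. (19), second part
    return pixels @ A.T + b                       # Eq. (6): x_t = A x_i + b
```

Setting w = v = 0 reproduces the full projection of formulas (16) and (17).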
4. Skin model fusion and secondary detection
A fixed basic skin color model cannot cope well with different lighting environments, so the selected illumination model is likewise used to improve the basic skin color model, i.e., to perform skin model fusion. Since an illumination model is formed from a series of skin images, the probability that a skin pixel appears in the illumination model can also be computed, analogously to the basic skin color model and with the same 32 × 32 × 32 color bin arrangement. The probability of occurrence of a skin pixel in the illumination model is defined as:
$$P_t(rgb|skin) = \frac{s_t(rgb)}{T_{st}} \qquad (20)$$

wherein:
P_t(rgb|skin) is the probability, under illumination model t, that a skin pixel falls in color bin (r, g, b);
s_t(rgb) is the number of skin pixels of illumination model t that fall into bin (r, g, b);
T_{st} is the total number of pixels in illumination model t.
In skin model fusion, a threshold likewise controls the degree of influence of the illumination model on the basic skin color model. The fusion formula is:

$$P'(rgb|skin) = P(rgb|skin) + u \cdot P_t(rgb|skin) \qquad (21)$$
wherein:
P'(rgb|skin) is the probability, after model fusion, that a skin pixel falls in color bin (r, g, b);
u is the model fusion threshold.
The new skin color model is then used to run secondary detection on the projected image to be detected, in the same way as primary detection:

$$\frac{P'(rgb|skin)}{P(rgb|\neg skin)} \ge \Theta_2, \qquad (\Theta_2 \ge 0) \qquad (22)$$

As before, the pixels of the image that satisfy formula (22) are marked as skin pixels, and the image to be detected is output after secondary detection; a sketch of this final step follows.
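Last, a sketch of skin model fusion (formula (21)) and secondary detection (formula (22)). The defaults u = 0.5 and Θ₂ = 1, and the clipping of projected values back to [0, 255] before binning, are our assumptions rather than the patent's.

```python
import numpy as np

def fuse_and_detect(projected, p_skin, p_nonskin, p_skin_t,
                    u=0.5, theta2=1.0, eps=1e-12):
    """Eq. (21): P'(rgb|skin) = P(rgb|skin) + u * P_t(rgb|skin),
    then Eq. (22): mark pixels with P'(rgb|skin)/P(rgb|not skin) >= theta2.

    projected: (H, W, 3) image after skin region projection (may leave [0, 255]).
    """
    p_fused = p_skin + u * p_skin_t                     # corrected skin model

    b = np.clip(projected, 0, 255).astype(np.int64) // 8  # back to the 32^3 bins
    num = p_fused[b[..., 0], b[..., 1], b[..., 2]]
    den = p_nonskin[b[..., 0], b[..., 1], b[..., 2]] + eps
    return num / den >= theta2                          # final boolean skin mask
```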
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement that can readily occur to those skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (1)

1. An illumination-adaptive human skin color detection method, characterized in that the method comprises the following steps:
Step 1: collect data to form a database, and use the data in the database to train a basic skin color model and a set of illumination models;
Step 2: screen the pixels of the image to be detected with the basic skin color model;
the formula with which the basic skin color model screens the pixels of the image to be detected being:

$$\frac{P(rgb|skin)}{P(rgb|\neg skin)} \ge \Theta_1$$

wherein:
P(rgb|skin) is the probability that a skin pixel falls in color bin (r, g, b);
P(rgb|¬skin) is the probability that a non-skin pixel falls in color bin (r, g, b);
Θ₁ is the primary detection threshold;
the computing formula of P(rgb|skin) being:

$$P(rgb|skin) = \frac{s(rgb)}{T_s}$$

wherein:
s(rgb) is the number of skin pixels in the training database that fall into color bin (r, g, b);
T_s is the total number of skin pixels in the training database;
the computing formula of P(rgb|¬skin) being:

$$P(rgb|\neg skin) = \frac{n(rgb)}{T_n}$$

wherein:
n(rgb) is the number of non-skin pixels in the training database that fall into color bin (r, g, b);
T_n is the total number of non-skin pixels in the training database;
Step 3: on the basis of step 2, perform illumination analysis on the image to be detected, and find among the illumination models the one closest to the image to be detected;
Step 4: use the illumination model closest to the image to be detected to correct both the image to be detected and the basic skin color model;
the method for correcting the image to be detected being skin region projection, whose formula is:

$$x_t = Ax_i + b$$

wherein:
x_t is a skin pixel after projection;
x_i is a 3 × 1 skin pixel of the original image;
A is a nonsingular matrix;
b is a 3 × 1 vector;
the method for correcting the basic skin color model being skin model fusion, whose formula is:

$$P'(rgb|skin) = P(rgb|skin) + u \cdot P_t(rgb|skin)$$

wherein:
P'(rgb|skin) is the probability, after model fusion, that a skin pixel falls in color bin (r, g, b);
u is the model fusion threshold;
P_t(rgb|skin) is the probability, under illumination model t, that a skin pixel falls in color bin (r, g, b);
the computing formula of P_t(rgb|skin) being:

$$P_t(rgb|skin) = \frac{s_t(rgb)}{T_{st}}$$

wherein:
s_t(rgb) is the number of skin pixels of illumination model t that fall into bin (r, g, b);
T_{st} is the total number of pixels in illumination model t;
Step 5: detect the corrected image with the corrected basic skin color model and output the result.
CN 201110185739 2011-07-04 2011-07-04 Light adaptation human skin colour detection method Expired - Fee Related CN102236786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110185739 CN102236786B (en) 2011-07-04 2011-07-04 Light adaptation human skin colour detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110185739 CN102236786B (en) 2011-07-04 2011-07-04 Light adaptation human skin colour detection method

Publications (2)

Publication Number Publication Date
CN102236786A CN102236786A (en) 2011-11-09
CN102236786B (en) 2013-02-13

Family

ID=44887424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110185739 Expired - Fee Related CN102236786B (en) 2011-07-04 2011-07-04 Light adaptation human skin colour detection method

Country Status (1)

Country Link
CN (1) CN102236786B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103196550A (en) * 2012-01-09 2013-07-10 西安智意能电子科技有限公司 Method and equipment for screening and processing imaging information of launching light source
TWI520101B (en) * 2014-04-16 2016-02-01 鈺創科技股份有限公司 Method for making up skin tone of a human body in an image, device for making up skin tone of a human body in an image, method for adjusting skin tone luminance of a human body in an image, and device for adjusting skin tone luminance of a human body in
CN105224917B (en) * 2015-09-10 2019-06-21 成都品果科技有限公司 A kind of method and system using color space creation skin color probability map
CN105678813A (en) * 2015-11-26 2016-06-15 乐视致新电子科技(天津)有限公司 Skin color detection method and device
CN106897965B (en) * 2015-12-14 2020-04-28 国基电子(上海)有限公司 Color image processing system and color image processing method
CN106295608B (en) * 2016-08-22 2020-12-15 北京航空航天大学 Human skin color detection method
CN108510500B (en) * 2018-05-14 2021-02-26 深圳市云之梦科技有限公司 Method and system for processing hair image layer of virtual character image based on human face skin color detection

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7542600B2 (en) * 2004-10-21 2009-06-02 Microsoft Corporation Video image quality
CN101251898B (en) * 2008-03-25 2010-09-15 腾讯科技(深圳)有限公司 Skin color detection method and apparatus
CN101630363B (en) * 2009-07-13 2011-11-23 中国船舶重工集团公司第七〇九研究所 Rapid detection method of face in color image under complex background

Also Published As

Publication number Publication date
CN102236786A (en) 2011-11-09


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20130213
Termination date: 20140704
EXPY Termination of patent right or utility model