CN102637251B - Face recognition method based on reference features - Google Patents
- Publication number
- CN102637251B (application CN201210074224A)
- Authority
- CN
- China
- Prior art keywords
- facial image
- image
- similarity
- identified
- fixed reference
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a face recognition method based on reference features. The method comprises the following steps: scale-invariant features and local binary pattern features of the face image to be recognized are extracted; principal component analysis is used for dimensionality reduction to obtain the image features of the face image to be recognized; the similarity of the image features to each cluster center is computed to obtain the reference features of the face image to be recognized; and the similarity between the reference features of the face image to be recognized and the reference features of the training data set is computed to obtain the recognition result. The reference features of a face image combine its texture information and its structure information, so the method characterizes a face more comprehensively than prior-art methods that represent only the texture or only the structure of the face. The feature extraction process is simple and easy to implement, the recognition accuracy is high, and a high recognition rate is achieved across different facial poses of the same person.
Description
Technical field
The invention belongs to the field of computer vision and relates to a face recognition method based on reference features.
Background technology
As public safety attracts ever more attention, research on face recognition has received great attention from academia, industry and government. Face recognition is a comparatively young technology: research on it began in the late 1960s, only about five decades ago. It only gained real traction with the appearance of commercial face recognition systems in the late 1990s, which opened a new era of face recognition research and application. In China the research started later and commercial face recognition technology is not yet mature, but in recent years, partly because of the social impact of major public safety incidents, face recognition has successively gained the attention of government bodies, industry and academia, and its research has developed at an unprecedented pace. The use of face recognition at the 2008 Beijing Olympic Games and the 2011 Shenzhen Universiade achieved good public security results and a positive social effect; more and more companies and research institutes have added face recognition to their research agendas, and it has become a new strategic focus for major companies and institutes.
Compared with traditional biometric technologies, face recognition has great advantages. The first is its naturalness: it uses the same biological characteristic that humans (and even other animals) use for individual identification, namely observing and comparing faces to confirm identity, whereas fingerprint recognition, iris recognition and the like are not natural in this sense, because neither humans nor other animals distinguish individuals by such characteristics. The second is its imperceptibility, which matters because an unobtrusive method is less objectionable and harder to deceive, since it does not alert the subject. Face recognition has this property: it acquires face images entirely with visible light, unlike fingerprint recognition, which requires an electronic pressure sensor to capture the fingerprint, or iris recognition, which must capture the iris image at close range; such special acquisition procedures are easily noticed and therefore more likely to be defeated by impersonation. This property makes it especially suitable for tracking fugitives. The third is that it is contactless: face recognition can obtain the face image of the subject without any physical contact, which differs from traditional biometric technologies and also makes face recognition convenient.
The scale-invariant feature (SIFT) is a feature describing local image information. It is invariant to scale, translation and rotation, and is therefore widely used in local image description. Extracting scale-invariant features comprises three steps: determining the position and scale of a key point, determining the principal gradient direction of the key point's neighborhood, and constructing the descriptor. The local binary pattern (LBP) feature describes image texture. Its principle is to represent a pixel by the binary string generated from it and its neighborhood pixels: for a given pixel, take the set of neighborhood pixels lying on a circle of fixed radius centered on it, choose a starting pixel, and compare each neighborhood pixel with the center pixel in turn; a neighborhood pixel greater than the center is assigned 1 at the corresponding position of the binary string, and one less than the center is assigned 0. The resulting binary string is the local binary pattern feature of that pixel. Message-passing clustering is a widely used clustering method that analyzes unorganized input data with a message-passing model and finally yields cluster centers.
Summary of the invention
The object of the present invention is to provide a face recognition method based on reference features whose feature extraction process is simple and whose recognition accuracy is high.
A face recognition method based on reference features provided by the invention is characterized in that it comprises the following steps:
(1) Obtain the image features of the face image:
For the face image to be identified, first extract its scale-invariant features and local binary pattern features, then reduce their dimensionality with principal component analysis to obtain the image features of the face image to be identified;
(2) Obtain the reference features of the face image:
Using the image features obtained, compute the similarity of the image features to the cluster centers to obtain the reference features of the face image to be identified;
(3) Decision analysis
Analyze the reference features of the face image to be identified against the reference features of the training data set with a linear discriminant classifier to obtain the recognition result.
As an improvement of the technical scheme, the detailed process of obtaining the reference features of the face image in step (2) is:
(2.1) Compute the similarity of the image feature of the face to be identified to each cluster center of the training data set.
Denote the image feature of the face image to be identified by Y, and the cluster center set of the training data set by C, C = {C_1, C_2, ..., C_N}, where N is the number of cluster centers and ranges from 150 to 250. To compute the similarity between Y and C_1, first take Y as the positive sample and C-{C_1} as the negative samples, and train a linear discriminant classifier yClassifier_1:
yClassifier_1 = LDA(+: Y, -: C-{C_1})
Feed C_1 into the linear discriminant classifier yClassifier_1 to obtain the decision score yScore_1:
yScore_1 = yClassifier_1(C_1)
yScore_1 measures the similarity of C_1 to the image feature Y.
Then take C_1 as the positive sample and C-{C_1} as the negative samples, and train a discriminant classifier yClassifier_2:
yClassifier_2 = LDA(+: C_1, -: C-{C_1})
Feed Y into the linear discriminant classifier yClassifier_2 to obtain the decision score yScore_2:
yScore_2 = yClassifier_2(Y)
yScore_2 measures the similarity of the image feature Y to C_1.
The invention therefore defines the similarity of the image feature Y to the cluster center C_1 as
S(Y, C_1) = (yScore_1 + yScore_2)/2
Finally, compute the similarity of Y to every cluster center in the cluster center set C; the resulting similarity vector S(Y) is denoted
S(Y) = [S(Y, C_1), S(Y, C_2), ..., S(Y, C_N)]
(2.2) Normalize the similarity vector S(Y) and define the normalized vector S_N(Y) as the reference feature of the face image to be identified; S_N(Y) is thus the reference feature of the face image Y to be identified.
As a further improvement, the detailed process of obtaining the image features of the face image in step (1) is:
(3.1) Mark 13 key points on the face image to be identified, located at the two ends of each eyebrow, the two ends of each eye, the key region of the nose, and the corners of the mouth; marking simply records the two-dimensional coordinates of these key positions in left-to-right, top-to-bottom order;
(3.2) For the face image to be identified, extract the scale-invariant feature at each marked point;
(3.3) Extract the local binary pattern features of the face image to be identified;
(3.4) Use principal component analysis to separately reduce the dimensionality of the scale-invariant features and the local binary pattern features, then concatenate the reduced scale-invariant features and local binary pattern features; the result is the image feature of the face image to be identified.
The invention discloses a face recognition method based on reference features. Given a face image supplied by the user, the invention can identify the face in the image, and it does so on the basis of the reference features of the face image.
For a face image to be identified, first extract its scale-invariant features and local binary pattern features and reduce their dimensionality with principal component analysis to obtain its image features; then compute the similarity of the image features to the cluster centers and use these similarities to obtain the reference features of the face image to be identified; finally compute the similarity of the reference features of the face image to be identified to the reference features of the training data set and reach a decision.
In summary, compared with the prior art, the invention has the following technical effects:
1. The reference features of a face image contain both its texture information and its structure information, and thus characterize a face more comprehensively than existing methods that reflect only the texture or only the structure of the face;
2. The feature extraction process of the method is simple and easy to implement;
3. The recognition accuracy of the method is high;
4. The method achieves a high recognition rate across different facial poses of the same person.
The face images of the training data set used by the method can be obtained by the following process: first extract scale-invariant features and local binary pattern features, then reduce their dimensionality with principal component analysis; cluster the reduced image features of the training set with message-passing clustering to obtain the cluster centers; finally compute the similarity of each training face image to the cluster centers to obtain the reference features of the training face images.
Description of drawings
Fig. 1 is a schematic diagram of the face recognition method based on reference features.
Fig. 2 is a schematic diagram of the key point positions on a face image.
Fig. 3 is a schematic diagram of the equal partitioning of a face image.
Embodiment
The face recognition technique provided by the invention proposes a new feature, the reference feature of a face image, and uses it to represent face information; this is a new way of representing faces in face recognition. First, for the face images of the training data set, extract scale-invariant features (SIFT features) and local binary pattern features (LBP features), reduce their dimensionality with principal component analysis and concatenate them; then obtain the cluster centers of the training data set with message-passing clustering and derive the reference features of the face images; then extract the reference feature of the test face image as well, compute the similarity between the reference feature of the test face image and the reference features of the training database images, and analyze the result to reach the final conclusion. The invention identifies the person in an image on the basis of the reference features of the face image. This learning method differs from existing methods.
Training data set X: the training data set is a collection of face images. It is obtained by capturing frontal face images of each person with a high-definition camera. Unlike ordinary photographs, the captured images use a 1:1 aspect ratio, keep only the face region of the image, and keep the face region a consistent size by controlling the shooting distance of the camera. At least two photographs are captured per person to reduce errors caused by camera imperfections. The training data set X should preferably contain about 1000 images: an oversized data set makes the computation very expensive, while an undersized one gives unsatisfactory results. For an access control system, for example, capturing images of every employee suffices to form the data set X. Here we denote each image in the data set X by X_i, where the subscript i is the image's sequence number.
The method comprises two stages: the first stage is learning on the training set, and the second stage is recognition of the test image.
(1) Learning on the training set
1. Extract image features
1.1 Mark the face images in the training data set X
The face images of the training data set are captured by a high-definition camera. For each image X_i, mark 13 key points of the portrait, located at the two ends of each eyebrow, the two ends of each eye, the key region of the nose, and the corners of the mouth; see Fig. 2. Marking simply records the two-dimensional coordinates of these key positions in left-to-right, top-to-bottom order.
1.2 Extract the scale-invariant features at the marked points
Every image in the training data set has been marked; for a face image X_i, extract the scale-invariant feature at each of its marked points.
The extraction process is illustrated below for a marked point P; the process for the remaining points is identical. Centered on P, take the neighborhood pixels to obtain a region of 16 x 16 pixels. Divide this 16 x 16 region evenly into 16 parts, obtaining regions of 4 x 4 pixels. For each 4 x 4 region, compute the gradient magnitude and principal direction of each pixel. Divide the full angle 2*pi into 8 equal sectors, [0, pi/4), ..., [7*pi/4, 2*pi), giving eight mean lines. Within a 4 x 4 region, the gradient G of each pixel corresponds to a vector in the two-dimensional plane; compute the angle from this vector to its nearest mean line and then the length of its projection onto that mean line. Sum the projected lengths of the 16 gradient vectors along each of the eight directions; the result is a vector v of the projected-length sums over the 8 directions.
Then process the 16 regions of 4 x 4 pixels in top-to-bottom, left-to-right order, yielding 16 vectors in total, and concatenate these 16 vectors in the same order into a 128-dimensional vector V, where v_k (k = 1...16) denotes the vector of the k-th region in top-to-bottom, left-to-right order.
V is the scale-invariant feature vector of the key point, i.e. the scale-invariant feature of the marked point.
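The per-patch descriptor construction described above can be sketched as follows. This is an illustrative simplification, not the patented implementation: each gradient is binned into its nearest of the 8 angular sectors rather than projected onto the mean lines, and the 16 x 16 patch is assumed to be given as a grayscale array.

```python
import numpy as np

def sift_like_descriptor(patch):
    """128-dim descriptor for a 16x16 patch: for each of the 16 4x4 cells,
    accumulate gradient magnitudes into 8 orientation bins (pi/4-wide sectors),
    then concatenate the cell histograms top-to-bottom, left-to-right."""
    assert patch.shape == (16, 16)
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)      # orientation in [0, 2*pi)
    bins = (ang / (np.pi / 4)).astype(int) % 8  # nearest of the 8 sectors
    desc = []
    for r in range(0, 16, 4):
        for c in range(0, 16, 4):
            hist = np.zeros(8)
            np.add.at(hist, bins[r:r+4, c:c+4].ravel(),
                      mag[r:r+4, c:c+4].ravel())
            desc.append(hist)
    return np.concatenate(desc)  # shape (128,)
```

Applied to the 13 marked points of one face, this yields the feature set [v_1, ..., v_13] used later.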
1.3 Extract the local binary pattern features of the face image
Every face image of the training data set is composed of pixels, with the face at the center of the image. To compute the local binary pattern feature [H_1, H_2, ..., H_14] of a face image, first divide the face image into fourteen equal parts; see Fig. 3.
Suppose the pixel size of a face image X_i is w x h, where w is the width and h the height of the image. Bisect the image along the width, i.e. [1, w/2], [w/2+1, w], and divide it into seven equal parts along the height, i.e. [0, h/7], [h/7+1, 2h/7], ..., [6h/7+1, h]. Joining these division points with vertical and horizontal lines divides the image into 14 parts, ordered top-to-bottom, left-to-right. Take the first region Q_1 as an example; the remaining regions are processed in the same way. For each pixel q in the region, denote the values of the 8 neighborhood pixels centered on q by l_1, l_2, ..., l_8, where the starting pixel is defined as the pixel directly above q and the pixels are ordered clockwise. For a neighborhood pixel l_a (a = 1...8), if l_a >= q, the corresponding digit d_a of the binary string is 1; otherwise, if l_a < q, the corresponding digit d_a is 0.
Since the string is 8 bits long, the decimal range corresponding to the binary string is 0 to 255, so each pixel value can be represented by a number between 0 and 255, denoted Dec. Process every pixel in the region the same way, then compute the histogram H_1 over 0 to 255 of all pixels in the region:
H_1 = histogram({Dec}) (1.3-2)
That is, count the number of occurrences of each of the 256 values from 0 to 255, so Q_1 is represented by the 256-dimensional vector H_1. Process all regions Q_1, Q_2, ..., Q_14 the same way in top-to-bottom, left-to-right order. The matrix [H_1, H_2, ..., H_14] is the local binary pattern feature of the training image.
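A minimal sketch of the regional LBP histograms of section 1.3, assuming a grayscale image array. Only interior pixels are coded, since the one-pixel border lacks a full 8-neighborhood (the text does not specify how the border is handled).

```python
import numpy as np

def lbp_histograms(img, rows=7, cols=2):
    """Split the face into cols x rows = 14 regions; for each interior pixel,
    build its 8-bit code (neighbour >= centre -> 1, starting directly above,
    clockwise), then histogram the 0..255 codes per region."""
    img = img.astype(int)
    h, w = img.shape
    # clockwise 8-neighbour offsets, starting directly above the centre pixel
    offs = [(-1, 0), (-1, 1), (0, 1), (1, 1),
            (1, 0), (1, -1), (0, -1), (-1, -1)]
    centre = img[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for bit, (dy, dx) in enumerate(offs):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(int) << (7 - bit)
    hists = []
    for i in range(rows):               # top-to-bottom
        for j in range(cols):           # left-to-right
            r0, r1 = i * (h - 2) // rows, (i + 1) * (h - 2) // rows
            c0, c1 = j * (w - 2) // cols, (j + 1) * (w - 2) // cols
            hists.append(np.bincount(codes[r0:r1, c0:c1].ravel(),
                                     minlength=256))
    return np.array(hists)  # shape (14, 256), i.e. [H_1, ..., H_14]
```

Each row of the returned array is one regional histogram H_k of 256 bins.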
1.4 Extract the image features
Each image in the training data set now has a scale-invariant feature and a local binary pattern feature, denoted [v_1, v_2, ..., v_13] and [H_1, H_2, ..., H_14] respectively. For the scale-invariant features [v_1, v_2, ..., v_13], regard each column as a group of observations and each row as a variable. Describing a face image with scale-invariant and local binary pattern features produces description vectors whose dimensionality is too high, which sharply increases the computational cost of the clustering that is about to be performed. Moreover, the dimensions of a high-dimensional description vector are correlated, so information overlaps between dimensions, further increasing the complexity of the computation. Principal component analysis (PCA) is a mature and widely used data mining method with a dimensionality reduction effect. Use principal component analysis to reduce the scale-invariant features from 128 x 13 dimensions to 150 dimensions, denoted as the vector S:
S = princomp([v_1, v_2, ..., v_13]), dim = 150 (2-1)
Likewise, for the local binary pattern features [H_1, H_2, ..., H_14], use principal component analysis, again regarding each column as a group of observations and each row as a variable, to reduce the local binary pattern features from 256 x 14 dimensions to 150 dimensions, denoted as the vector H:
H = princomp([H_1, H_2, ..., H_14]), dim = 150 (2-2)
The reduced dimensionality of the scale-invariant and local binary pattern features is not limited to 150; it can be anywhere between 100 and 200 dimensions.
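The princomp reduction above can be sketched with a generic SVD-based PCA. Note one assumption: the patent reduces a per-image feature matrix with columns as observations, while this stand-in uses the usual one-observation-per-row convention.

```python
import numpy as np

def pca_reduce(X, dim=150):
    """Project the rows of X (one observation per row) onto the top `dim`
    principal components; a stand-in for princomp with dim fixed."""
    dim = min(dim, *X.shape)
    Xc = X - X.mean(axis=0)  # centre each variable
    # rows of Vt are the principal axes, ordered by decreasing variance
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:dim].T
```

Applying this separately to the SIFT-side and LBP-side features and concatenating the results gives the 300-dimensional image feature I = [S; H] used below.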
2. Extract reference features
Each image in the training data set now has a scale-invariant feature S and a local binary pattern feature H; denote the image feature by I, determined by I = [S; H].
For each image X_i of the training data set X, the extracted image feature is in essence a high-dimensional (300-dimensional) vector. Take all image features of the training data set X as input to the clustering algorithm and cluster them with the message-passing method (Message Passing Model) with the class number equal to 200 (the number of cluster classes is not limited to 200 and can be between 150 and 250), obtaining the cluster centers of the data set, also called the "central points". Denote the cluster centers by C, where C = {C_1, C_2, ..., C_200} and C_m is the central point of class m (m = 1, 2, ..., 200).
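The clustering step can be sketched as below. The patent names message-passing (affinity propagation) clustering; plain k-means with farthest-point initialisation is substituted here as a simpler stand-in that also yields a fixed number of cluster centres, which affinity propagation does not take as a direct parameter.

```python
import numpy as np

def kmeans_centres(X, n_clusters, iters=50, seed=0):
    """Cluster the image features (rows of X) and return the centre set C.
    Stand-in for the message-passing clustering named in the text."""
    rng = np.random.default_rng(seed)
    X = X.astype(float)
    # farthest-point initialisation keeps the initial centres spread out
    centres = [X[rng.integers(len(X))]]
    while len(centres) < n_clusters:
        d = np.min([((X - c) ** 2).sum(1) for c in centres], axis=0)
        centres.append(X[int(d.argmax())])
    centres = np.array(centres)
    for _ in range(iters):
        # assign every feature to its nearest centre, then recompute means
        labels = ((X[:, None, :] - centres[None]) ** 2).sum(-1).argmin(1)
        for k in range(n_clusters):
            pts = X[labels == k]
            if len(pts):
                centres[k] = pts.mean(0)
    return centres
```

The returned rows play the role of C_1, ..., C_200 in the similarity computations that follow.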
For each image X_t in the training data set X, compute its similarity to the cluster centers C. First compute the similarity of X_t to C_i. The similarity calculation of the invention is based on linear discriminant classifiers. First, take X_t as the positive sample and the set X-{X_t} as the negative samples, and train a linear discriminant classifier Classifier_1:
Classifier_1 = LDA(+: X_t, -: X-{X_t}) (3-2)
Set the input of the classifier to C_i, and denote the decision score of Classifier_1 by Score_1:
Score_1 = Classifier_1(C_i) (3-3)
Then take C_i as the positive sample and the set X-{X_t} as the negative samples, and train a linear discriminant classifier Classifier_2:
Classifier_2 = LDA(+: C_i, -: X-{X_t}) (3-4)
Set the input of the classifier to X_t, and denote the decision score of Classifier_2 by Score_2:
Score_2 = Classifier_2(X_t) (3-5)
The invention then defines the similarity of the face image X_t to the cluster center C_i as
S(X_t, C_i) = (Score_1 + Score_2)/2 (3-6)
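The symmetric, LDA-based similarity of Eq. (3-6) can be sketched with a minimal Fisher discriminant. The ridge regulariser `reg` is an assumption of this sketch: with a single positive sample the within-class scatter is singular, and the text does not say how that is handled.

```python
import numpy as np

def lda_score(pos, neg, query, reg=1e-3):
    """Fisher LDA with `pos` (n_pos x d) as the positive class and `neg`
    as the negative class; returns the projection score of `query`,
    centred so that 0 sits midway between the class means."""
    mu_p, mu_n = pos.mean(0), neg.mean(0)
    d = pos.shape[1]
    # pooled within-class scatter, regularised so it is invertible
    Sw = np.cov(neg.T) * (len(neg) - 1)
    if len(pos) > 1:
        Sw += np.cov(pos.T) * (len(pos) - 1)
    Sw += reg * np.eye(d)
    w = np.linalg.solve(Sw, mu_p - mu_n)  # Fisher direction
    return float(w @ (query - (mu_p + mu_n) / 2))

def symmetric_similarity(x, c, others):
    """S(x, c) = (score of c under LDA(+x, -others)
                  + score of x under LDA(+c, -others)) / 2, as in Eq. (3-6)."""
    s1 = lda_score(x[None, :], others, c)
    s2 = lda_score(c[None, :], others, x)
    return (s1 + s2) / 2
```

A feature close to a cluster centre (relative to the negative set) scores higher than a distant one, which is the behaviour the averaged decision scores are meant to capture.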
For each image X_t in the training data set X, compute by the above formula the similarity of X_t to each C_i (i = 1, 2, ..., 200); denote the result by S(X_t), where S(X_t) = [S(X_t, C_1), ..., S(X_t, C_200)]. Then normalize S(X_t) and denote the result by S_N(X_t).
The invention defines the normalized S_N(X_t) as the reference feature of the image X_t. The reference feature is obtained on the basis of the cluster centers of the training data set X. Finally, compute the reference feature of every image X_t in the training data set X, and denote the reference feature by R_t.
(2) Testing the image to be identified
1. Extract the image features, as in the training process.
2. Extract the reference feature; apart from there being no clustering step, this is the same as the training process.
The above two steps are the same as in the learning process: for the face image data supplied by the user, first densely extract scale-invariant features and local binary pattern features, then reduce their dimensionality with principal component analysis to obtain the image features. Having obtained the image features, compute their similarities to the cluster centers to obtain the reference feature.
3. Test the face image to be identified
Finally, compute the similarity between the reference feature of the test image and the reference features of the training data set, and reach the final conclusion.
For a face image Y captured by the high-definition camera, apply the above two steps and denote its reference feature by R_Y. Compare the reference feature R_Y with each reference feature R_t of the training data set, where X_t ∈ X. Define the similarity vector as
S(R_Y, R_t) = exp(-(R_Y - R_t)^2) (4-1)
This formula means that for each corresponding dimension of the vectors R_Y and R_t, compute the difference, square it, negate it, and take the exponential; this is the value of the corresponding element s of S(R_Y, R_t), so S(R_Y, R_t) is a vector of the same dimensionality as R_Y (or R_t).
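Eq. (4-1) applied elementwise is a one-liner; this sketch just makes the per-dimension behaviour explicit:

```python
import numpy as np

def reference_similarity(r_y, r_t):
    """Elementwise similarity of two reference features, Eq. (4-1):
    each dimension contributes exp(-(difference)^2), so identical
    dimensions score 1 and large differences decay towards 0."""
    r_y = np.asarray(r_y, dtype=float)
    r_t = np.asarray(r_t, dtype=float)
    return np.exp(-(r_y - r_t) ** 2)
```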
Sort the elements of the similarity vector S(R_Y, R_t) in descending order:
S(R_Y, R_t) = [s_1, s_2, ..., s_M], s_1 >= s_2 >= ... >= s_M (4-2)
s_1 is the maximum of all elements of the vector S(R_Y, R_t), s_2 the second largest value, and so on.
If s_1 > 1.5*s_2 holds, the identity of the person in the face image Y is judged to be the same as the person X corresponding to R_1: because the similarity of face image Y to face image X is far greater than the similarity of Y to any other face image, Y and X represent the same person, and the invention adopts this criterion to decide the person's identity. Conversely, if s_1 <= 1.5*s_2, the identity of the person in face image Y cannot be determined, because Y retains a certain similarity to all face images and no identity can be concluded; adopting this criterion improves the accuracy of the result.
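The margin-based decision rule above can be sketched as follows. One assumption is made explicit here: the rule is applied to one scalar score per gallery (training) image, e.g. obtained by summing the per-dimension similarities of Eq. (4-1), since the text does not spell out the reduction.

```python
import numpy as np

def decide(similarities, gallery_ids, margin=1.5):
    """Accept the best gallery match only when the top score exceeds
    `margin` times the runner-up (s_1 > 1.5*s_2 in the text); otherwise
    the identity cannot be determined and None is returned."""
    s = np.asarray(similarities, dtype=float)
    order = np.argsort(s)[::-1]  # indices sorted by descending score
    s1, s2 = s[order[0]], s[order[1]]
    return gallery_ids[order[0]] if s1 > margin * s2 else None
```

For example, scores [0.9, 0.2, 0.1] accept the first identity (0.9 > 1.5 * 0.2), while [0.5, 0.45, 0.1] are rejected as ambiguous.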
The present invention is not limited to the above embodiment; based on the disclosed content, persons skilled in the art can implement the invention in various other embodiments. Therefore, any simple variation or modification that adopts the idea of the invention falls within the scope of protection of the invention.
Claims (2)
1. A face recognition method based on reference features, characterized in that the method comprises the following steps:
(1) Obtain the image features of the face image:
For the face image to be identified, first extract its scale-invariant features and local binary pattern features, then reduce their dimensionality with principal component analysis to obtain the image features of the face image to be identified;
(2) Obtain the reference features of the face image:
Using the image features obtained, compute the similarity of the image features to the cluster centers to obtain the reference features of the face image to be identified;
(3) Decision analysis
Analyze the reference features of the face image to be identified against the reference features of the training data set with a linear discriminant classifier to obtain the recognition result;
The detailed process of obtaining the facial image fixed reference feature in the step (2) is:
(2.1) compute the similarity of the image features of the face to be identified to each cluster center of the training data set
For the facial image to be identified, denote its image features by Y, and denote the set of cluster centers of the training data set by C = {C_1, C_2, ..., C_N}, where N is the number of cluster centers, typically in the range 150 to 250. To compute the similarity between Y and C_1, first take Y as the positive sample and C−{C_1} as the negative samples, and train a linear discriminant classifier yClassifier_1:
yClassifier_1 = LDA(+: Y, −: C−{C_1})
Feeding C_1 into yClassifier_1 gives the decision score yScore_1:
yScore_1 = yClassifier_1(C_1)
yScore_1 measures the similarity of C_1 to the image features Y.
Then take C_1 as the positive sample and C−{C_1} as the negative samples, and train a linear discriminant classifier yClassifier_2:
yClassifier_2 = LDA(+: C_1, −: C−{C_1})
Feeding Y into yClassifier_2 gives the decision score yScore_2:
yScore_2 = yClassifier_2(Y)
yScore_2 measures the similarity of the image features Y to C_1.
The similarity of the image features Y to the cluster center C_1 is then defined as
S(Y, C_1) = (yScore_1 + yScore_2)/2
Finally, compute the similarity of Y to every cluster center in the set C in the same way; the resulting similarity vector is denoted
S(Y) = [S(Y, C_1), S(Y, C_2), ..., S(Y, C_N)]
(2.2) normalize the similarity vector S(Y); the normalized vector S_N(Y) is defined as the reference feature of the facial image to be identified, so S_N(Y) is the reference feature of the facial image Y to be identified;
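The paired-classifier similarity of steps (2.1) and (2.2) can be sketched as below, assuming scikit-learn's LDA. The patent trains each classifier with a single positive sample, which is reproduced literally here, and L2 normalization is an assumption, since the claim does not name the norm:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def lda_score(positive, negatives, query):
    """Train LDA with one positive sample against the negatives and
    return the decision score of `query` (signed distance to the boundary)."""
    X = np.vstack([positive[None, :], negatives])
    y = np.array([1] + [0] * len(negatives))
    return float(LinearDiscriminantAnalysis().fit(X, y)
                 .decision_function(query[None, :])[0])

def reference_feature(Y, C):
    """Reference feature of image features Y given cluster centers C (N x d)."""
    S = []
    for i in range(len(C)):
        others = np.delete(C, i, axis=0)        # C - {C_i}
        s1 = lda_score(Y, others, C[i])         # yScore_1 = yClassifier_1(C_i)
        s2 = lda_score(C[i], others, Y)         # yScore_2 = yClassifier_2(Y)
        S.append((s1 + s2) / 2.0)               # S(Y, C_i)
    S = np.asarray(S)
    return S / np.linalg.norm(S)                # normalization (L2 assumed)

rng = np.random.default_rng(0)
C = rng.normal(size=(6, 4))    # toy: 6 cluster centers of 4-dim features
Y = rng.normal(size=4)
f = reference_feature(Y, C)    # S_N(Y), one entry per cluster center
```

In practice a single-sample positive class works with scikit-learn's default SVD solver, but one might prefer several positive samples per classifier for stability.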
The reference features in the training data set described in step (3) are obtained by the following procedure:
(3.1) annotate the facial images in the training data set X
For each facial image in the training data set X, mark 13 key points as landmarks; the key points comprise the two ends of each eyebrow, the two corners of each eye, the key area of the nose, and the corners of the mouth; record the two-dimensional coordinates of these key points in order from left to right and from top to bottom;
(3.2) extract the scale-invariant features at the marked points;
(3.3) extract the local binary pattern features of the facial image;
(3.4) extract the facial image features
Using the principal component analysis method, reduce the dimensionality of the scale-invariant features and the local binary pattern features separately, then concatenate the dimension-reduced scale-invariant features and local binary pattern features to form the facial image features;
(3.5) extract the reference features
For each image X_i in the training data set X, extract its image features as in steps (3.2) to (3.4).
Take all the image features of the training data set X as the input of the clustering algorithm, cluster them with a message-passing clustering method with the number of classes equal to N, and obtain the cluster centers of the data set, denoted C, where C = {C_1, C_2, ..., C_N} and C_i is the center of the i-th class, i = 1, 2, ..., N. For each image X_t in the training data set X, compute its similarity to the cluster centers C. First compute the similarity of X_t to C_1; the similarity computation is based on a linear discriminant classifier. Take X_t as the positive sample and the data set X−{X_t} as the negative samples, and train a linear discriminant classifier Classifier_1:
Classifier_1 = LDA(+: X_t, −: X−{X_t})    (formula I)
Set the input of the classifier to C_1, and denote the decision score of Classifier_1 by Score_1:
Score_1 = Classifier_1(C_1)    (formula II)
Then take C_1 as the positive sample and the data set X−{X_t} as the negative samples, and train a linear discriminant classifier Classifier_2:
Classifier_2 = LDA(+: C_1, −: X−{X_t})    (formula III)
Set the input of the classifier to X_t, and denote the decision score of Classifier_2 by Score_2:
Score_2 = Classifier_2(X_t)    (formula IV)
The similarity of the facial image X_t to the cluster center C_1 is then defined as
S(X_t, C_1) = (Score_1 + Score_2)/2    (formula V)
For each image X_t in the training data set X, compute its similarity to each cluster center C_i according to formula V, i = 1, 2, ..., N, and denote the result by S(X_t), where S(X_t) = [S(X_t, C_1), ..., S(X_t, C_N)]. Then normalize S(X_t) and denote the result by S_N(X_t); S_N(X_t) is the reference feature of the image X_t, obtained from the cluster centers of the training data set X. Finally, compute the reference feature of every image X_t in the training data set X in this way.
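The clustering at the start of step (3.5) can be sketched as follows. The claim specifies a message-passing clustering method with the class count fixed at N (150 to 250 in practice); scikit-learn's AffinityPropagation does not accept a fixed cluster count, so KMeans with n_clusters=N is used here purely as a stand-in, on toy-sized random data:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(300, 16))   # toy stand-in for the training image features
N = 5                                   # toy class count; the patent uses 150 to 250

km = KMeans(n_clusters=N, n_init=10, random_state=0).fit(features)
C = km.cluster_centers_                 # C = {C_1, ..., C_N}, inputs to formulas I-V
```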
2. The face recognition method based on reference features according to claim 1, characterized in that the detailed process of obtaining the image features of the facial image in step (1) is:
(1.1) mark 13 key points of the facial image to be identified, located at the two ends of each eyebrow, the two corners of each eye, the key area of the nose, and the corners of the mouth; marking consists of recording the two-dimensional coordinates of these key positions in order from left to right and from top to bottom;
(1.2) for the facial image to be identified, extract the scale-invariant features at each marked point;
(1.3) extract the local binary pattern features of the facial image to be identified;
(1.4) using the principal component analysis method, reduce the dimensionality of the scale-invariant features and the local binary pattern features of the facial image to be identified separately, then concatenate the dimension-reduced scale-invariant features and local binary pattern features to form the image features of the facial image to be identified.
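The dimensionality reduction and concatenation in steps (1.4)/(3.4) can be sketched as below; the SIFT and LBP arrays are random placeholders standing in for the outputs of steps (1.2) and (1.3), and the component counts are illustrative assumptions, not values from the patent:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_images = 50
sift = rng.normal(size=(n_images, 13 * 128))  # 13 keypoints x 128-dim SIFT each
lbp = rng.normal(size=(n_images, 256))        # LBP histogram features

sift_r = PCA(n_components=20).fit_transform(sift)   # reduce each feature separately
lbp_r = PCA(n_components=20).fit_transform(lbp)
image_features = np.hstack([sift_r, lbp_r])         # concatenated image features
```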
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201210074224 CN102637251B (en) | 2012-03-20 | 2012-03-20 | Face recognition method based on reference features |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102637251A CN102637251A (en) | 2012-08-15 |
CN102637251B true CN102637251B (en) | 2013-10-30 |
Family
ID=46621640
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201210074224 Expired - Fee Related CN102637251B (en) | 2012-03-20 | 2012-03-20 | Face recognition method based on reference features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102637251B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10019622B2 (en) | 2014-08-22 | 2018-07-10 | Microsoft Technology Licensing, Llc | Face alignment with shape regression |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103034840B (en) * | 2012-12-05 | 2016-05-04 | 山东神思电子技术股份有限公司 | A kind of gender identification method |
CN103268653A (en) * | 2013-05-30 | 2013-08-28 | 苏州福丰科技有限公司 | Face identification method for access control system |
CN103955667B (en) * | 2013-05-31 | 2017-04-19 | 华北电力大学 | SIFT human face matching method based on geometrical constraint |
CN104036259B (en) * | 2014-06-27 | 2016-08-24 | 北京奇虎科技有限公司 | Human face similarity degree recognition methods and system |
CN104376312B (en) * | 2014-12-08 | 2019-03-01 | 广西大学 | Face identification method based on bag of words compressed sensing feature extraction |
CN106529377A (en) * | 2015-09-15 | 2017-03-22 | 北京文安智能技术股份有限公司 | Age estimating method, age estimating device and age estimating system based on image |
CN105335753A (en) * | 2015-10-29 | 2016-02-17 | 小米科技有限责任公司 | Image recognition method and device |
CN105320948A (en) * | 2015-11-19 | 2016-02-10 | 北京文安科技发展有限公司 | Image based gender identification method, apparatus and system |
CN105469059A (en) * | 2015-12-01 | 2016-04-06 | 上海电机学院 | Pedestrian recognition, positioning and counting method for video |
CN105740378B (en) * | 2016-01-27 | 2020-07-21 | 北京航空航天大学 | Digital pathology full-section image retrieval method |
CN105740808B (en) * | 2016-01-28 | 2019-08-09 | 北京旷视科技有限公司 | Face identification method and device |
CN105913050A (en) * | 2016-05-25 | 2016-08-31 | 苏州宾果智能科技有限公司 | Method and system for face recognition based on high-dimensional local binary pattern features |
CN107463865B (en) * | 2016-06-02 | 2020-11-13 | 北京陌上花科技有限公司 | Face detection model training method, face detection method and device |
CN106127170B (en) * | 2016-07-01 | 2019-05-21 | 重庆中科云从科技有限公司 | A kind of training method, recognition methods and system merging key feature points |
CN106250821A (en) * | 2016-07-20 | 2016-12-21 | 南京邮电大学 | The face identification method that a kind of cluster is classified again |
CN106314356A (en) * | 2016-08-22 | 2017-01-11 | 乐视控股(北京)有限公司 | Control method and control device of vehicle and vehicle |
CN107944431B (en) * | 2017-12-19 | 2019-04-26 | 天津天远天合科技有限公司 | A kind of intelligent identification Method based on motion change |
CN108197282B (en) * | 2018-01-10 | 2020-07-14 | 腾讯科技(深圳)有限公司 | File data classification method and device, terminal, server and storage medium |
CN108388141B (en) * | 2018-03-21 | 2019-04-26 | 特斯联(北京)科技有限公司 | A kind of wisdom home control system and method based on recognition of face |
CN108629283B (en) * | 2018-04-02 | 2022-04-08 | 北京小米移动软件有限公司 | Face tracking method, device, equipment and storage medium |
CN108416336B (en) * | 2018-04-18 | 2019-01-18 | 特斯联(北京)科技有限公司 | A kind of method and system of intelligence community recognition of face |
CN109815887B (en) * | 2019-01-21 | 2020-10-16 | 浙江工业大学 | Multi-agent cooperation-based face image classification method under complex illumination |
CN110334763B (en) * | 2019-07-04 | 2021-07-23 | 北京字节跳动网络技术有限公司 | Model data file generation method, model data file generation device, model data file identification device, model data file generation apparatus, model data file identification apparatus, and model data file identification medium |
CN111259822A (en) * | 2020-01-19 | 2020-06-09 | 杭州微洱网络科技有限公司 | Method for detecting key point of special neck in E-commerce image |
CN117373100B (en) * | 2023-12-08 | 2024-02-23 | 成都乐超人科技有限公司 | Face recognition method and system based on differential quantization local binary pattern |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8014572B2 (en) * | 2007-06-08 | 2011-09-06 | Microsoft Corporation | Face annotation framework with partial clustering and interactive labeling |
CN101840510B (en) * | 2010-05-27 | 2012-02-08 | 武汉华杰公共安全技术发展有限公司 | Adaptive enhancement face authentication method based on cost sensitivity |
CN102169581A (en) * | 2011-04-18 | 2011-08-31 | 北京航空航天大学 | Feature vector-based fast and high-precision robustness matching method |
- 2012
- 2012-03-20 CN CN 201210074224 patent/CN102637251B/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
CN102637251A (en) | 2012-08-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102637251B (en) | Face recognition method based on reference features | |
CN107273845B (en) | Facial expression recognition method based on confidence region and multi-feature weighted fusion | |
CN107609497B (en) | Real-time video face recognition method and system based on visual tracking technology | |
CN102938065B (en) | Face feature extraction method and face identification method based on large-scale image data | |
CN104866829B (en) | A kind of across age face verification method based on feature learning | |
CN103605972B (en) | Non-restricted environment face verification method based on block depth neural network | |
CN105512624A (en) | Smile face recognition method and device for human face image | |
CN105335732B (en) | Based on piecemeal and differentiate that Non-negative Matrix Factorization blocks face identification method | |
CN106203356B (en) | A kind of face identification method based on convolutional network feature extraction | |
CN102622590B (en) | Identity recognition method based on face-fingerprint cooperation | |
CN102902986A (en) | Automatic gender identification system and method | |
CN105956560A (en) | Vehicle model identification method based on pooling multi-scale depth convolution characteristics | |
CN102156887A (en) | Human face recognition method based on local feature learning | |
CN102902980B (en) | A kind of biometric image analysis based on linear programming model and recognition methods | |
Hasan | An application of pre-trained CNN for image classification | |
CN104573672B (en) | A kind of discriminating kept based on neighborhood is embedded in face identification method | |
CN104299003A (en) | Gait recognition method based on similar rule Gaussian kernel function classifier | |
CN105956570B (en) | Smiling face's recognition methods based on lip feature and deep learning | |
CN104008375A (en) | Integrated human face recognition mehtod based on feature fusion | |
CN107545243A (en) | Yellow race's face identification method based on depth convolution model | |
Wang et al. | Multi-scale feature extraction algorithm of ear image | |
CN104063721A (en) | Human behavior recognition method based on automatic semantic feature study and screening | |
CN106056074A (en) | Single training sample face identification method based on area sparse | |
Gao et al. | A novel face feature descriptor using adaptively weighted extended LBP pyramid | |
CN103632145A (en) | Fuzzy two-dimensional uncorrelated discriminant transformation based face recognition method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20131030 |