CN100416596C - Method for locating image feature points using a Bayesian network classifier

Info

Publication number
CN100416596C
CN100416596C CNB2006101170503A CN200610117050A
Authority
CN
China
Prior art keywords
feature point
point
network classifier
Bayesian network
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2006101170503A
Other languages
Chinese (zh)
Other versions
CN1936925A (en)
Inventor
杜春华
杨杰
张田昊
陈鲁
王华华
吴证
袁泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CNB2006101170503A
Publication of CN1936925A
Application granted
Publication of CN100416596C

Abstract

The method comprises the following steps: (1) build the ASM (Active Shape Model); (2) initialize the starting position of the ASM search by face detection and eye localization; (3) generate training samples for each facial feature point; (4) for each feature point, train a Bayesian network classifier from its corresponding samples; (5) starting from the initial ASM position, locate the feature points with the Bayesian network classifiers. Covering eye detection, classifier training, and ASM feature point localization, this facial feature point localization method achieves high precision and is applicable to face recognition, gender classification, expression recognition, and age estimation.

Description

Method for locating image feature points using a Bayesian network classifier
Technical field
The present invention relates to a facial feature point localization method in the field of face recognition, and specifically to a method for locating image feature points using a Bayesian network classifier.
Background technology
Face recognition is among the most practical of the many biometric recognition technologies. It encompasses expression recognition, gender recognition, age estimation, pose estimation, and so on, and facial feature point localization is the core technology of these research fields: the precision of the final face recognition depends to a great extent on the accuracy of feature point localization. Accurately locating a large number of facial feature points can therefore greatly improve the precision of face recognition. At present, the most practical facial feature localization methods are global feature localization methods. Among these, the ASM (Active Shape Model) method can locate many facial feature points simultaneously; it is fast and robust to variations in illumination and background, and is therefore widely used for feature point localization.
A search of the prior art literature found Cootes, T.F. et al., "Active shape models - their training and application", Computer Vision and Image Understanding, vol. 61, no. 1, 1995, p. 38, which proposed the Active Shape Model method. In that method, when searching for the new position of a feature point, a one-dimensional profile is sampled along the direction perpendicular to the line joining the two neighboring feature points, the center of the sub-profile that minimizes the Mahalanobis distance is found, and that center is set as the new position of the current feature point. Its shortcoming is that this operation rests on the assumption that the gray values of the pixels around a feature point follow a normal distribution. In practice, the number of training images cannot be large enough for the feature point profiles to follow a normal distribution exactly, and the gray values of the pixels around a feature point do not necessarily follow a normal distribution at all; in particular, with a complicated background or uneven illumination, the pixels around a feature point depart completely from a normal distribution, which seriously degrades the precision of feature point localization. The performance of this method is therefore unsatisfactory.
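For contrast with the classifier-based approach introduced below, the criticized profile-matching step of the original ASM can be sketched as follows. This is an editorial illustration rather than code from the patent; it assumes the mean profile g_mean and inverse covariance S_inv of each feature point's training profiles have already been estimated:

```python
import numpy as np

def mahalanobis_profile_search(candidates, g_mean, S_inv):
    """Classic ASM profile matching: among the candidate sub-profiles
    sampled along the normal direction, pick the one closest to the
    trained mean profile under the Mahalanobis distance.

    candidates : (num_positions, profile_len) normalized gray-level
                 derivative sub-profiles around the current point
    g_mean     : (profile_len,) mean training profile
    S_inv      : (profile_len, profile_len) inverse covariance matrix
    Returns the index of the best candidate position."""
    diffs = candidates - g_mean
    # d^2(g) = (g - g_mean)^T S_inv (g - g_mean), computed for every candidate
    d2 = np.einsum('ij,jk,ik->i', diffs, S_inv, diffs)
    return int(np.argmin(d2))
```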
Summary of the invention
Addressing defects of the ASM method such as the inaccuracy of its profile matching, the present invention proposes a method for locating image feature points using a Bayesian network classifier: the problem of finding the new position of a feature point is converted into a machine-learning classification problem, which improves the precision of feature point localization.
The present invention is achieved through the following technical solution, comprising the steps of:
(1) build the ASM model;
(2) initialize the starting position of the ASM search by face detection and eye localization;
(3) generate training samples for each facial feature point;
(4) for each feature point, train a Bayesian network classifier from its corresponding samples;
(5) starting from the initial ASM position, locate the feature points with the Bayesian network classifiers.
Step (1) means: first, the k facial feature points are marked by hand on each training sample image of the training set. The shape formed by these k feature points is represented by a vector x_i = [x_1, x_2, ..., x_k, y_1, y_2, ..., y_k], where feature points with the same index represent the same feature across images. The n training sample images thus yield n shape vectors, which are aligned so that the shapes they represent agree as closely as possible in size, orientation, and position. PCA (principal component analysis) is then applied to the n aligned shape vectors, after which any shape can be expressed as x = x̄ + P·b with b = Pᵀ(x − x̄), where b captures the t largest modes of variation. This completes the training of the ASM model.
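A minimal sketch of this model-building step in Python follows (an editorial illustration under the patent's definitions, not code from the patent; the shape vectors are assumed to be already aligned):

```python
import numpy as np

def build_asm_model(shapes, t):
    """Build the point distribution model from aligned shape vectors.
    shapes : (n, 2k) array, each row [x1..xk, y1..yk]
    t      : number of largest variation modes to keep
    Returns the mean shape x_bar and the (2k, t) mode matrix P."""
    x_bar = shapes.mean(axis=0)
    cov = np.cov(shapes - x_bar, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    P = eigvecs[:, ::-1][:, :t]              # keep the t largest modes
    return x_bar, P

def shape_params(x, x_bar, P):
    """b = P^T (x - x_bar); any shape is reconstructed as x_bar + P @ b."""
    return P.T @ (x - x_bar)
```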
Step (2) means: the face region is found in the image with the AdaBoost method, the two eyes are then located on the face image by template matching, and the midpoint of the two eyes is denoted [a, b]. For the mean shape model x̄ obtained above, the centers of the four feature points around the left and right eyeballs are computed as the left and right eye positions, giving the midpoint [c, d] of the two eyes in the model. The whole mean shape x̄ is then translated by [a − c, b − d], which yields the initial position of the ASM search.
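A sketch of this translation-based initialization (illustrative only; the eye midpoint [a, b] is assumed to come from the AdaBoost face detector and the template-matching eye locator, and the index arrays for the four feature points around each eyeball are model-specific assumptions):

```python
import numpy as np

def init_search_position(x_bar, k, eye_mid_img, left_idx, right_idx):
    """Translate the mean shape so the midpoint of its eye feature points
    coincides with the detected eye midpoint [a, b] in the image.
    x_bar       : (2k,) mean shape [x1..xk, y1..yk]
    eye_mid_img : (a, b), midpoint of the two detected eyes in the image
    left_idx, right_idx : indices of the four points around each eyeball"""
    xs, ys = x_bar[:k].copy(), x_bar[k:].copy()
    left = np.array([xs[left_idx].mean(), ys[left_idx].mean()])
    right = np.array([xs[right_idx].mean(), ys[right_idx].mean()])
    c, d = (left + right) / 2.0              # model eye midpoint [c, d]
    a, b = eye_mid_img
    return xs + (a - c), ys + (b - d)        # shift whole model by [a-c, b-d]
```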
Step (3) means: for each facial feature point, l pixels are selected on each side of the point along the direction perpendicular to the line joining its two neighboring feature points. The gray-value derivative of these 2l + 1 pixels is computed and normalized, yielding a one-dimensional vector. From this vector, sub-vectors of 2m consecutive elements (m < l) are extracted from front to back, giving 2(l − m) + 1 sub-vectors, which are labeled in order with the class numbers 1, 2, ..., 2(l − m) + 1. In this way, one feature point on one face image generates 2(l − m) + 1 training samples of distinct classes, so with n training images each feature point corresponds to n × (2(l − m) + 1) training samples.
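This sample generation can be sketched as follows (an editorial illustration; note that differentiating the 2l + 1 sampled gray values yields a vector of length 2l, from which exactly 2(l − m) + 1 windows of 2m elements can be taken, matching the count stated above):

```python
import numpy as np

def normalized_derivative(gray_values):
    """gray_values: the 2l+1 pixels sampled along the normal through a
    feature point. Returns the normalized gray-value derivative (length 2l)."""
    d = np.diff(gray_values)
    s = np.abs(d).sum()
    return d / s if s > 0 else d

def make_training_samples(profile, m):
    """profile: (2l,) normalized derivative vector. Slide a window of 2m
    elements from front to back; each window is one training sample,
    labeled with class numbers 1..2(l-m)+1 (class = window position),
    giving the 2(l-m)+1 samples per feature point and image stated above."""
    n_sub = len(profile) - 2 * m + 1     # = 2(l-m)+1
    samples = np.stack([profile[i:i + 2 * m] for i in range(n_sub)])
    labels = np.arange(1, n_sub + 1)
    return samples, labels
```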
Step (4) means: for each feature point, its Bayesian network classifier is trained with its corresponding n × (2(l − m) + 1) training samples, yielding k Bayesian network classifiers in total.
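The patent does not specify the structure of the Bayesian network, so the sketch below substitutes a Gaussian naive Bayes classifier — the simplest Bayesian network, with features conditionally independent given the class — using scikit-learn:

```python
from sklearn.naive_bayes import GaussianNB

def train_point_classifiers(samples_per_point, labels_per_point):
    """Train one classifier per feature point (k classifiers in total).
    samples_per_point[j] : (n * (2(l-m)+1), 2m) sub-vectors for point j
    labels_per_point[j]  : matching class numbers 1..2(l-m)+1"""
    classifiers = []
    for X, y in zip(samples_per_point, labels_per_point):
        clf = GaussianNB()   # naive Bayes as a stand-in Bayesian network
        clf.fit(X, y)
        classifiers.append(clf)
    return classifiers
```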
Step (5) means: starting from the initial position obtained in step (2), the target shape is searched in the new image; the search is realized mainly through an affine transformation and changes of the parameter b. Concretely, the following two steps are iterated:
1) Search for the new position of each feature point
Starting from the initial position obtained in step (2), for the j-th feature point in the model, l pixels are selected on each side of it (with the point as center) along the direction perpendicular to the line joining its two neighboring feature points. The gray-value derivative of these 2l + 1 pixels is computed and normalized, yielding a one-dimensional vector, from which sub-vectors of 2m elements (m < l) are extracted from front to back, giving 2(l − m) + 1 sub-vectors. These sub-vectors are fed into the j-th Bayesian network classifier, and the center of the sub-vector assigned the class number m + 1 is set as the new position of the j-th feature point; the position change dX_j of this feature point is computed at the same time. Carrying out this computation for every feature point gives k position changes dX_i, i = 1, 2, ..., k, which form a vector dX = (dX_1, dX_2, ..., dX_k).
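One classifier-driven search step for a single feature point might then look like the following sketch (illustrative; the candidate centers and the fallback rule for when no sub-vector — or more than one — is assigned the target class are our assumptions, as the patent does not specify them):

```python
import numpy as np

def search_new_position(profile, centers, clf, m):
    """One classifier-driven search step for feature point j.
    profile : (2l,) normalized derivative profile sampled at the current
              estimate of the point (see make_training_samples above)
    centers : (2(l-m)+1, 2) image coordinates of the sub-vector centers,
              i.e. the candidate new positions
    clf     : the trained classifier for this feature point
    Returns the candidate whose sub-vector is assigned class m+1; if
    several or none match, falls back to the candidate with the highest
    posterior for that class (our choice)."""
    n_sub = len(profile) - 2 * m + 1
    subs = np.stack([profile[i:i + 2 * m] for i in range(n_sub)])
    target = m + 1
    post = clf.predict_proba(subs)[:, list(clf.classes_).index(target)]
    hits = np.where(clf.predict(subs) == target)[0]
    best = hits[int(np.argmax(post[hits]))] if len(hits) else int(np.argmax(post))
    return centers[best]
```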
2) Update of the affine transformation parameters and b
From the formula X = M(s, θ)[x] + X_c it follows that M(s(1 + ds), θ + dθ)[x + dx] + (X_c + dX_c) = X + dX, i.e. M(s(1 + ds), θ + dθ)[x + dx] = M(s, θ)[x] + dX + X_c − (X_c + dX_c). At the same time, from the formula x = x̄ + P·b, we seek db such that x + dx = x̄ + P(b + db), which gives db = Pᵀ·dx. The parameters can then be updated as follows: X_c = X_c + w_t·dX_c, Y_c = Y_c + w_t·dY_c, θ = θ + w_θ·dθ, s = s(1 + w_s·ds), b = b + W_b·db, where w_t, w_θ, w_s, W_b are weights used to control the parameter changes. The new shape is then obtained from the formula x = x̄ + P·b.
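Written as code, the damped update of one iteration looks like this sketch (the scale update s = s(1 + w_s·ds) follows the standard ASM formulation, consistent with w_s appearing among the listed weights):

```python
def update_pose_and_shape(pose, b, d_pose, db,
                          w_t=1.0, w_s=1.0, w_theta=1.0, W_b=1.0):
    """Apply the damped parameter updates of one ASM iteration.
    pose   : (Xc, Yc, s, theta) affine pose parameters
    d_pose : (dXc, dYc, ds, dtheta) increments estimated from dX
    db     : shape parameter increment, db = P^T dx
    The new shape is then x = x_bar + P @ b."""
    Xc, Yc, s, theta = pose
    dXc, dYc, ds, dtheta = d_pose
    Xc += w_t * dXc
    Yc += w_t * dYc
    s *= (1.0 + w_s * ds)        # standard ASM scale update
    theta += w_theta * dtheta
    b = b + W_b * db             # W_b may also be a diagonal weight matrix
    return (Xc, Yc, s, theta), b
```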
The facial feature point localization method proposed by the present invention has very high precision. Comparing, on the collected face database, the proposed feature point localization method using Bayesian network classifiers with the original ASM method using one-dimensional profiles, the average feature point localization error is 2.8 pixels for the former and 4.5 pixels for the latter; the experiments show that the proposed method is a substantial improvement in precision over other facial feature point localization methods.
Description of drawings
Fig. 1 is a face image with the feature points marked.
Fig. 2 is the result of eye localization.
Fig. 3 shows the initial model and the result obtained by the ASM search with the Bayesian network classifiers.
Embodiment
The technical solution of the present invention is described in further detail below with a specific embodiment.
The images used in the embodiment are taken from the face image database of Shanghai Jiaotong University. The whole implementation process is as follows:
1. 600 face images with marked feature points are selected from the face database to build the ASM model; a face image with marked feature points is shown in Fig. 1. First, 60 feature points are selected on each training sample image of the training set; the shape formed by these 60 feature points is represented by a vector x_i = [x_1, x_2, ..., x_60, y_1, y_2, ..., y_60], where feature points with the same index represent the same feature in different images. The 600 training sample images thus yield 600 shape vectors, which are then aligned so that the shapes they represent agree as closely as possible in size, orientation, and position. PCA (principal component analysis) is applied to the 600 aligned shape vectors, after which any shape can be expressed as x = x̄ + P·b, with b = Pᵀ(x − x̄); the value of b captures the 26 largest modes of variation. This completes the training of the ASM model.
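With the embodiment's numbers, the model-building sketch given in the description above would be invoked roughly as follows (hypothetical usage; the random array stands in for the 600 aligned 60-point shape vectors):

```python
import numpy as np

shapes = np.random.rand(600, 120)          # stand-in for 600 aligned 60-point shapes
x_bar, P = build_asm_model(shapes, t=26)   # keep the 26 largest variation modes
b = shape_params(shapes[0], x_bar, P)      # 26 shape parameters for one face
```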
2. The face region is found in the image with the AdaBoost method and the two eyes are located by template matching, as shown in Fig. 2; the midpoint of the two eyes is found at (234, 251). The midpoint of the two eyes on the initial ASM model is (113, 145), so the model is translated by 121 (234−113) pixels in the X direction and 106 (251−145) pixels in the Y direction, which gives the initial position of the ASM model. For each of the 60 facial feature points, 16 pixels are selected on each side of the point along the direction perpendicular to the line joining its two neighboring feature points; the gray-value derivative of these 33 (2×16+1) pixels is computed and normalized, yielding a one-dimensional vector. From this vector, 16-element sub-vectors are extracted from front to back, giving 17 sub-vectors, which are labeled in order with the class numbers 1, 2, ..., 17. Each feature point on one face image thus generates 17 training samples of distinct classes, so with 600 training images each feature point corresponds to 10200 (600×17) training samples.
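Plugging the embodiment's numbers (l = 16, m = 8) into the sample-generation sketch above reproduces the counts stated here — 17 sub-vectors of 16 elements per feature point and image (hypothetical usage with random stand-in pixel values):

```python
import numpy as np

gray_values = np.random.rand(33)   # stand-in for the 2*16+1 sampled pixels
profile = normalized_derivative(gray_values)           # length 32
samples, labels = make_training_samples(profile, m=8)
assert samples.shape == (17, 16) and labels[-1] == 17  # 17 classes per point
```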
3. For each feature point, its Bayesian network classifier is trained with its corresponding 10200 training samples, yielding 60 classifiers.
4. Starting from the initial position of the model, the ASM search is carried out with the Bayesian network classifiers to obtain the positions of the facial feature points. The initial model is placed on the image; for the j-th feature point in the model, 16 pixels are selected on each side of it (with the point as center) along the direction perpendicular to the line joining its two neighboring feature points, giving a one-dimensional vector of length 33 (2×16+1); the gray-value derivative of these 33 pixels is computed and normalized. From the resulting vector, 16-element sub-vectors are extracted from front to back, giving 17 sub-vectors in total. These 17 sub-vectors are fed into the j-th classifier, and the center of the sub-vector assigned the class number 9 is set as the new position of the j-th feature point; the position change dX_j is computed at the same time. Carrying out this computation for every feature point gives 60 position changes dX_i, i = 1, 2, ..., 60, which form a vector dX = (dX_1, dX_2, ..., dX_60).
5. The changes of the affine transformation parameters and of b are then computed from the above dX; after 19 iterations the 60 feature points are finally located, as shown in Fig. 3.
As the embodiment shows, the facial feature point localization method proposed by the present invention involves face detection, eye detection, Bayesian network classifier training, and ASM feature point localization; it can further be applied to face recognition, gender recognition, expression recognition, age estimation, and related tasks, and achieves very high precision.

Claims (7)

1. A method for searching feature point positions with a Bayesian network classifier, characterized in that it comprises the steps of:
(1) build the ASM model;
(2) initialize the starting position of the ASM search by face detection and eye localization;
(3) generate training samples for each facial feature point;
(4) for each feature point, train a Bayesian network classifier from its corresponding samples;
(5) starting from the initial ASM position, locate the feature points with the Bayesian network classifiers.
2. The method for searching feature point positions with a Bayesian network classifier according to claim 1, characterized in that step (1) means: first, the k facial feature points are marked by hand on each training sample image of the training set; the shape formed by these k feature points is represented by a vector x_i = [x_1, x_2, ..., x_k, y_1, y_2, ..., y_k], where feature points with the same index represent the same feature in each image; the n training sample images thus correspond to n shape vectors, which are aligned so that the shapes they represent agree as closely as possible in size, orientation, and position; PCA is then applied to the n aligned shape vectors, after which any shape is expressed as x = x̄ + P·b with b = Pᵀ(x − x̄), where b captures the t largest modes of variation; this completes the training of the ASM model.
3. The method for searching feature point positions with a Bayesian network classifier according to claim 1, characterized in that step (2) means: the face region is found in the image with the AdaBoost method, the two eyes are then located on the face image by template matching, and the midpoint of the line joining the two eyes is set as [a, b]; for the mean shape model x̄ obtained above, the centers of the four feature points around the left and right eyeballs are computed as the left and right eye positions, giving the midpoint [c, d] of the line joining the two eyes in the model; the whole mean shape x̄ is then translated by [a − c, b − d], which yields the initial position of the ASM search.
4. The method for searching feature point positions with a Bayesian network classifier according to claim 1, characterized in that step (3) means: for each facial feature point, l pixels are selected on each side of the point along the direction perpendicular to the line joining its two neighboring feature points; the gray-value derivative of these 2 × l + 1 pixels is computed and normalized, yielding a one-dimensional vector; from this vector, sub-vectors of 2 × m consecutive elements (0 < m < l, m an integer) are extracted from front to back, giving 2 × (l − m) + 1 sub-vectors, which are labeled in order with the class numbers 1, 2, ..., 2 × (l − m) + 1; in this way one feature point on one face image generates 2 × (l − m) + 1 training samples of distinct classes, so with n training images each feature point corresponds to n × (2 × (l − m) + 1) training samples.
5. The method for searching feature point positions with a Bayesian network classifier according to claim 1, characterized in that step (4) means: for each feature point, its Bayesian network classifier is trained with its corresponding n × (2 × (l − m) + 1) training samples, yielding k Bayesian network classifiers.
6. The method for searching feature point positions with a Bayesian network classifier according to claim 1, characterized in that step (5) means: starting from the initial position of the model, the target shape is searched in the new image; the search is realized mainly through an affine transformation and changes of the parameter b.
7. The method for searching feature point positions with a Bayesian network classifier according to claim 6, characterized in that step (5) is concretely realized by iterating the following two steps:
1) Compute the new position of each feature point
The initial model is first placed on the image; for the j-th feature point in the model, l pixels are selected on each side of it, with the point as center, along the direction perpendicular to the line joining its two neighboring feature points; the gray-value derivative of these 2 × l + 1 pixels is computed and normalized, yielding a one-dimensional vector, from which sub-vectors of 2 × m elements (m < l) are extracted from front to back, giving 2 × (l − m) + 1 sub-vectors; these 2 × (l − m) + 1 sub-vectors are fed into the j-th classifier, and the center of the sub-vector assigned the class number m + 1 is set as the new position of the j-th feature point; the position change dX_j of this feature point is computed at the same time; carrying out this computation for every feature point gives k position changes dX_i, i = 1, 2, ..., k, which form a vector dX = (dX_1, dX_2, ..., dX_k);
2) Update of the affine transformation parameters and b
The parameters are updated as follows: X_c = X_c + w_t·dX_c, Y_c = Y_c + w_t·dY_c, θ = θ + w_θ·dθ, s = s(1 + w_s·ds), b = b + W_b·db, where w_t, w_θ, w_s, W_b are weights used to control the parameter changes; the new shape is then obtained from the formula x = x̄ + P·b.
CNB2006101170503A 2006-10-12 2006-10-12 Method for locating image feature points using a Bayesian network classifier Expired - Fee Related CN100416596C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006101170503A CN100416596C (en) 2006-10-12 2006-10-12 Method for locating image feature points using a Bayesian network classifier

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006101170503A CN100416596C (en) 2006-10-12 2006-10-12 Method for locating image feature points using a Bayesian network classifier

Publications (2)

Publication Number Publication Date
CN1936925A CN1936925A (en) 2007-03-28
CN100416596C true CN100416596C (en) 2008-09-03

Family

ID=37954420

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006101170503A Expired - Fee Related CN100416596C (en) 2006-10-12 2006-10-12 Method for locating image feature points using a Bayesian network classifier

Country Status (1)

Country Link
CN (1) CN100416596C (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100592692C (en) * 2007-09-27 2010-02-24 南京大学 Conditional mutual information based network intrusion classification method of double-layer semi-idleness Bayesian
CN101763500B (en) * 2008-12-24 2011-09-28 中国科学院半导体研究所 Method applied to palm shape extraction and feature positioning in high-freedom degree palm image
CN102741882B (en) * 2010-11-29 2015-11-25 松下电器(美国)知识产权公司 Image classification device, image classification method, integrated circuit, modeling apparatus
CN102163289B (en) * 2011-04-06 2016-08-24 北京中星微电子有限公司 The minimizing technology of glasses and device, usual method and device in facial image
CN103186760A (en) * 2011-12-28 2013-07-03 昌曜科技股份有限公司 Pedestrian identification, detection and statistic system
US8565486B2 (en) * 2012-01-05 2013-10-22 Gentex Corporation Bayesian classifier system using a non-linear probability function and method thereof
CN103390282B (en) * 2013-07-30 2016-04-13 百度在线网络技术(北京)有限公司 Image labeling method and device thereof
CN105205827A (en) * 2015-10-16 2015-12-30 中科院成都信息技术股份有限公司 Auxiliary feature point labeling method for statistical shape model
CN107798318A (en) * 2017-12-05 2018-03-13 四川文理学院 The method and its device of a kind of happy micro- expression of robot identification face
CN111353943B (en) * 2018-12-20 2023-12-26 杭州海康威视数字技术股份有限公司 Face image recovery method and device and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030206171A1 (en) * 2002-05-03 2003-11-06 Samsung Electronics Co., Ltd. Apparatus and method for creating three-dimensional caricature
CN1731416A (en) * 2005-08-04 2006-02-08 上海交通大学 Method of quick and accurate human face feature point positioning
CN1786980A (en) * 2005-12-08 2006-06-14 上海交通大学 Method for realizing searching new position of person's face feature point by two-dimensional profile
CN1797420A (en) * 2004-12-30 2006-07-05 中国科学院自动化研究所 Method for recognizing human face based on statistical texture analysis

Also Published As

Publication number Publication date
CN1936925A (en) 2007-03-28

Similar Documents

Publication Publication Date Title
CN100416596C (en) Method for locating image feature points using a Bayesian network classifier
CN100349173C (en) Method for searching new position of feature point using support vector machine multiclass classifier
CN109408653B (en) Human body hairstyle generation method based on multi-feature retrieval and deformation
Milborrow et al. Locating facial features with an extended active shape model
Ramanathan et al. Face verification across age progression
AU2002304495B2 (en) Object identification
CN101561874B (en) Method for recognizing face images
CN1731416A (en) Method of quick and accurate human face feature point positioning
CN108629336B (en) Face characteristic point identification-based color value calculation method
CN106980809B (en) Human face characteristic point detection method based on ASM
CN1786980A (en) Method for realizing searching new position of person's face feature point by two-dimensional profile
CN108710829A (en) A method of the expression classification based on deep learning and the detection of micro- expression
CN101819628A (en) Method for performing face recognition by combining rarefaction of shape characteristic
CN1687957A (en) Facial feature point positioning method combining local search and active appearance model
CN104036299B (en) Human eye contour tracking method based on local texture AAM
CN104794441B (en) Human face characteristic positioning method based on active shape model and POEM texture models under complex background
CN101833654A (en) Sparse representation face identification method based on constrained sampling
CN100383807C (en) Feature point positioning method combined with active shape model and quick active appearance model
Chen et al. Silhouette-based object phenotype recognition using 3D shape priors
CN111368762A (en) Robot gesture recognition method based on improved K-means clustering algorithm
KR20080035711A (en) Global feature extraction method for 3d face recognition
CN106682575A (en) Human eye point cloud feature location with ELM (Eye Landmark Model) algorithm
CN104732247B (en) Human face feature positioning method
JP2009093490A (en) Age estimation device and program
CN106778491B (en) The acquisition methods and equipment of face 3D characteristic information

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080903

Termination date: 20121012