CN102254180B - Geometrical feature-based human face aesthetics analyzing method - Google Patents

Geometrical feature-based human face aesthetics analyzing method

Info

Publication number
CN102254180B
CN102254180B (application CN201110177113.5A)
Authority
CN
China
Prior art keywords
sample
attractive
weak classifier
feature
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201110177113.5A
Other languages
Chinese (zh)
Other versions
CN102254180A (en)
Inventor
朱振峰 (Zhu Zhenfeng)
段红帅 (Duan Hongshuai)
赵耀 (Zhao Yao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiaotong University filed Critical Beijing Jiaotong University
Priority to CN201110177113.5A priority Critical patent/CN102254180B/en
Publication of CN102254180A publication Critical patent/CN102254180A/en
Application granted granted Critical
Publication of CN102254180B publication Critical patent/CN102254180B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention provides a geometric feature-based facial aesthetics analysis method. Local geometric features are combined under a combination strategy to form joint descriptions, weak classifiers are constructed with memory-based dynamically weighted kernel density estimation (MDWKDE), and an Adaboost ensemble learning mechanism integrates the features effectively to obtain an accurate aesthetic classification of a face. Unlike previous geometric feature-based facial aesthetics techniques, the method selects local geometric features that describe facial aesthetics from several viewpoints, such as Euclidean distance, slope and area, to form single-feature descriptions of facial aesthetics, combines these local descriptions into combined feature descriptions, and uses MDWKDE to construct the weak classifiers for Adaboost ensemble learning, yielding good classification results for arbitrarily input face images.

Description

Geometric feature-based facial aesthetics analysis method
Technical field
The present invention relates to the technical field of facial aesthetics analysis, and in particular to a geometric feature-based facial aesthetics analysis method.
Background art
Facial aesthetics is a unity of individuality and commonality. Individuality means that every face is different and has its own attractive traits; commonality means that although beauty manifests differently in different people, it follows certain rules. Since ancient times people have never stopped pursuing beauty, and philosophers, psychologists and aestheticians have tried to identify its essence, forming many aesthetic laws: the classical Chinese aesthetic standards that became an important part of Chinese thought, the "three courts and five eyes" rule of traditional Chinese aesthetics, the "golden ratio" admired since ancient Greece, and the new "golden ratio" recently reported by researchers.
Early on, beauty was regarded as an abstract, subjective concept ("beauty is in the eye of the beholder"), with judgements varying with the observer's sex, age, race, education and cultural environment; Voltaire likewise held that "beauty is relative". However, studies have found that when infants of 2-3 months and 6-8 months are shown pictures of women rated attractive or unattractive by adult standards, they gaze significantly longer at the attractive faces. A study in evolutionary psychology also found that Asian, Hispanic, Black and White raters show a correlation as high as 0.93 when scoring photographs of women. In other words, people across the world appear to share a similar set of aesthetic criteria.
Facial aesthetics analysis is attracting growing attention because of the large economic and social benefits it can bring. The American society of cosmetic surgery reports that the US cosmetic-surgery market grew 4.57-fold over ten years and that the industry's output reached about 13 billion dollars in 2007. Evolutionary psychologists have introduced the concept of "reproductive value" from the relation between female attractiveness and reproduction, explaining male aesthetic preferences for women from an evolutionary perspective. Faced with the massive amount of information on the Internet, personalized information retrieval from an aesthetic point of view is also becoming a mainstream approach of social networking sites. With continuing changes in lifestyle and range of activity, and under the long-term influence of race, region, climate and living habits, the physical constitution of populations around the world has changed markedly and their facial features have changed especially clearly; studying the facial characteristics of contemporary populations can therefore provide important baseline data for clinical aesthetic medicine, health care and nutritional assessment standards. The facial organs of attractive people follow certain proportions, and disproportion is one of the important indicators of deformity. By contrasting attractive people with the general population, scientists study the social influence of beauty, i.e. the degree to which appearance affects interpersonal relationships. Features of beauty also strongly influence the creative work of sculptors and artists, providing humanity with inestimable spiritual value.
Given the potential applications of facial aesthetics analysis in cosmetic surgery, cosmetics, social networking sites, entertainment software and scientific research, people have begun to ask what the essence of beauty is, which elements constitute beauty, and how they can be quantified.
In the 1980s and 1990s, building on Francis Galton's composite-photograph technique, researchers repeatedly extracted image features and used morphing software to synthesize an average face from many individual face images, leading to the "averageness hypothesis": only the average face is the most beautiful and attractive, and the closer a face is to the average, the more attractive it is. However, Perrett et al., using other synthesis methods, found that an average face synthesized only from beautiful faces is more attractive than one synthesized from all faces (both beautiful and plain), partially refuting the averageness hypothesis: an average face is indeed attractive, but an attractive face is not necessarily an average face.
In recent years, with the advance of science and technology, in particular computer technology, network technology and mass-storage technology, researchers have begun to study facial aesthetics from a data-mining perspective, using machine learning and pattern recognition. According to the features on which the aesthetic analysis algorithms are based, current mainstream facial aesthetics analysis techniques can be roughly divided into two classes:
1. Subspace-based methods
Subspace-based methods start from facial appearance information and take the gray-level face image as direct input, so fully automatic facial aesthetics analysis is possible; they have achieved considerable success in face recognition. The dimensionality of face images is usually high, the distribution of face images of the same class in such a high-dimensional space is not compact, which is unfavorable for classification, and the computational complexity is high. Subspace analysis projects the face image data from the original high-dimensional space onto a low-dimensional subspace, making the data distribution in the subspace more compact and more amenable to classification while greatly reducing the computational cost.
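As a minimal sketch of the subspace idea, the snippet below projects flattened gray-level face images onto a low-dimensional subspace with PCA; PCA is only one common choice of projection and is not prescribed by the patent, and the image size and dimensionality used here are assumptions.

```python
# Illustrative sketch only: PCA as one common subspace method; not the patent's method.
import numpy as np

def pca_subspace(face_vectors: np.ndarray, k: int):
    """Project flattened gray-level face images (n_samples x n_pixels) onto the
    k-dimensional subspace spanned by the top-k principal components."""
    mean = face_vectors.mean(axis=0)
    centered = face_vectors - mean
    # SVD of the centered data; rows of vt are the principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                      # k x n_pixels
    projected = centered @ basis.T      # n_samples x k low-dimensional codes
    return projected, mean, basis

# Usage: 100 hypothetical 64x64 face images flattened to 4096-dim vectors
faces = np.random.rand(100, 64 * 64)
codes, mean, basis = pca_subspace(faces, k=20)
```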
2. Geometric feature-based methods
Geometric feature-based methods use existing empirical knowledge to obtain geometric proportion relationships among facial organs that may describe facial aesthetics, and train a feature-description combination with strong classification or regression performance, which is then used to classify or score newly input face images. This approach is close to the way the human brain perceives facial aesthetics; it is easy to understand, requires little storage, and is insensitive to illumination changes.
However, because the feature descriptions that make up facial aesthetics are many-sided and their standards differ, the problem is very challenging and many issues remain. Since the above methods emphasize different aspects when extracting aesthetic feature descriptions, the analysis of facial aesthetics still faces many problems:
(1) Subspace-based algorithms require position-alignment preprocessing of the input face image and are affected by illumination, pose changes, image quality and so on.
(2) For geometric feature-based methods, first, robustness to strong expression and pose changes is poor; second, general geometric features describe only the basic shape and structural relations of the face and ignore fine local features such as texture, so some information is lost; in addition, the geometric relations must be annotated manually, the workload is large, the experiments require human intervention, and full automation is not achieved.
Therefore, a technical problem urgently to be solved by those skilled in the art is how to propose effective measures that overcome the defects of the prior art.
Summary of the invention
The technical problem to be solved by the present invention is to provide a geometric feature-based facial aesthetics analysis method that effectively integrates features and obtains an accurate aesthetic classification of a face.
To solve the above problem, the invention discloses a geometric feature-based facial aesthetics analysis method. The method comprises offline database construction and online detection, wherein the offline database construction part is divided into offline preprocessing and offline processing, and the online detection part is divided into online preprocessing and online processing.
The offline preprocessing comprises: annotating 41 feature points on the organs of every face image in a preselected picture database, the polyline of the feature points on each organ describing the contour of that organ; storing the coordinates of each feature point; and assembling single-feature descriptions and combined-feature descriptions, which are compared during online detection with the corresponding combined features selected after an unknown face image has been annotated.
The offline processing comprises taking the feature-combination descriptions formed from all images in the database and constructing weak classifiers with memory-based dynamically weighted kernel density estimation.
Constructing weak classifiers with memory-based dynamically weighted kernel density estimation comprises:
The input samples are described with geometric features, and one weak classifier is trained for each feature f.

Training one weak classifier for each feature f comprises normalizing all sample weights $w_{t,i}$, where $w^{Pos}_{t,i}$ denotes the weights of the "attractive" samples and $w^{Neg}_{t,i}$ the weights of the "unattractive" samples.

Each feature f corresponds to one weak classifier; for $f = 1, \ldots, nf$, where $nf$ is the number of feature-combination descriptions: for any point $x$ of feature f, the kernel density estimate of its distribution density function $p(x)$ is

$$\hat{p}_t(x) = \sum_{i=1}^{n} \bar{w}_{t,i} K_h(x - x_i)$$

where $\bar{w}_{t,i}$ is the mean of the dynamic weights of the i-th sample point over the previous $(t-1)$ rounds of the Adaboost algorithm, $K_h(\cdot)$ is the kernel function, and $h$ is the window width or bandwidth.

The kernel density estimates yield the weak classifier

$$h_{t,f}(x) = \mathrm{sgn}\big(\hat{p}^{Pos}_t(x) - \hat{p}^{Neg}_t(x)\big)$$

where $\hat{p}^{Pos}_t(x)$ is the probability density of feature f belonging to the "attractive" class and $\hat{p}^{Neg}_t(x)$ the probability density of feature f belonging to the "unattractive" class.

"Attractive" means that the reference score of the face image is greater than a preset value, and "unattractive" means that the reference score of the face image is less than the preset value; the reference score of each image is the mean of the grades given by more than 50 raters.
The online preprocessing comprises: for each input face image sample, manually annotating the positions of the 41 geometric feature points selected for the offline database, storing the coordinates of each point, and forming the single-feature description and the combined-feature description, which serve as the query vector.
The online processing comprises performing ensemble learning on the geometric feature description of the input image with the Adaboost algorithm, combining the weak classifiers into a strong classifier for classification, and obtaining the class label of the class to which the face image described by the query vector belongs.
The step of performing ensemble learning with the Adaboost algorithm comprises:

The geometric feature descriptions of the face samples are $(x_1, y_1), \ldots, (x_i, y_i), \ldots, (x_n, y_n)$, where $x_i$, $i = 1, \ldots, n$, is a single-feature or combined-feature description, the class label $y_i = 1, 0$ denotes "attractive" and "unattractive" respectively, and $n$ is the total number of samples.

Initialize the sample weights $w_{1,i}$: $w_{1,i} = \frac{1}{2m}$ for attractive samples and $w_{1,i} = \frac{1}{2l}$ for unattractive samples, where $m$ and $l$ are the numbers of positive and negative samples respectively and $n = m + l$.

For $t = 1, \ldots, T$, where $T$ is the number of weak classifiers to be chosen, process as follows:

Normalize the weights $w_{t,i}$:

$$w_{t,i} = \frac{w_{t,i}}{\sum_{j=1}^{n} w_{t,j}}$$

For each feature f, train one MDWKDE weak classifier $h_{t,f}$;

Compute the weighted ($w_t$) error rate of the weak classifier of each feature:

$$e_{t,f} = \sum_i \bar{w}_{t,i} \, |h_{t,f}(x_i) - y_i|$$

Choose the classifier with the minimum error rate ($e_t$) as the best weak classifier $h_t$, and readjust the weights according to the error rate of the best weak classifier:

$$w_{t+1,i} = \bar{w}_{t,i} \, \beta_t^{1 - e_i}$$

where $e_i = 0$ indicates that $x_i$ was classified correctly, $e_i = 1$ indicates that $x_i$ was misclassified, and $\beta_t = \frac{e_t}{1 - e_t}$.

The ensemble of the $T$ weak classifiers built on the facial geometric features f gives the final strong classifier:

$$h(x) = \begin{cases} 1 & \sum_{t=1}^{T} \alpha_t h_t(x) \ge \frac{1}{2}\sum_{t=1}^{T} \alpha_t \\ 0 & \text{otherwise} \end{cases}$$

The class of a sample can be judged from the value of $h(x)$.
Preferably, every face image in the preselected picture database carries class information, i.e. it is a positive sample or a negative sample, "attractive" or "unattractive".
Preferably, the method further comprises:
verifying the constructed weak classifiers according to the class information.
Preferably, the class information is obtained by having more than 50 people each give an objective aesthetic score and dividing the images by the size of the mean score.
Preferably, for each face image in the offline database, the Euclidean distances between any two of the 41 geometric feature points are computed and normalized by the interpupillary distance, 817 in total; the slopes between pairs of the 10 feature points describing the mandible contour on the two sides are computed, 20 in total; the triangle areas describing facial regions are computed and normalized by the area of the triangle formed by the two pupils and the nose tip, 8 in total. The 845-dimensional feature vector composed of the Euclidean distances, slopes and triangle areas serves as the single-feature description of the image. The 845-dimensional feature vectors of all face images in the offline database are stored so that, during online detection, they can be compared with the corresponding 845-dimensional feature vector selected after an unknown face image has been annotated.

For each face image in the offline database, after the offline processing has produced the classification performance of every one of the 845 feature dimensions, the dimensions are sorted by classification performance and the 100 best dimensions are combined pairwise to give a 4950-dimensional joint feature as the combined-feature description, which during online detection is compared with the corresponding combined-feature description selected after an unknown face image has been annotated.
Preferably, for each input face image, the 817 Euclidean distances between any two of the 41 geometric feature points are computed and normalized by the interpupillary distance, the 20 slopes between pairs of the 10 feature points describing the mandible contour on the two sides are computed, and the 8 triangle areas describing facial regions are computed and normalized by the area of the triangle formed by the two pupils and the nose tip; the 845-dimensional feature vector composed of the Euclidean distances, slopes and triangle areas is stored as the single-feature query vector.

For each online-preprocessed input face picture, according to the ranking of the classification performance of the 845 single-feature dimensions of the offline-database face images, the corresponding top 100 dimensions are chosen from the 845-dimensional description of the input image and combined pairwise to obtain a 4950-dimensional joint feature, which is stored as the combined-feature query vector.
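A minimal sketch of the pairwise combination step described above, assuming the offline ranking of the 845 dimensions is available as an index array; names such as combined_query_vector and top_idx are hypothetical.

```python
# Illustrative sketch only: pairwise combination of the 100 best single-feature
# dimensions into 4950 two-dimensional combined features.
import numpy as np
from itertools import combinations

def combined_query_vector(single_feat: np.ndarray, top_idx: np.ndarray) -> np.ndarray:
    """single_feat: 845-dim single-feature description of one face.
    top_idx: indices of the 100 dimensions ranked best offline.
    Returns a (4950, 2) array; each row is one two-dimensional combined feature."""
    pairs = list(combinations(top_idx, 2))           # C(100, 2) = 4950 pairs
    return np.array([[single_feat[a], single_feat[b]] for a, b in pairs])

# Usage with hypothetical data
feat = np.random.rand(845)
top100 = np.argsort(np.random.rand(845))[:100]       # placeholder ranking
query = combined_query_vector(feat, top100)          # shape (4950, 2)
```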
Compared with the prior art, the present invention has the following advantages:
The invention combines local geometric features under a combination strategy, constructs weak classifiers with memory-based dynamically weighted kernel density estimation (MDWKDE), and uses the Adaboost ensemble learning mechanism to integrate the features effectively, obtaining an accurate aesthetic classification of the face. Unlike previous geometric feature-based facial aesthetics techniques, the invention chooses local geometric features that describe facial aesthetics from several viewpoints, such as Euclidean distance, slope and area, to form single descriptions of facial aesthetics, combines the local descriptions into combined feature descriptions, and uses MDWKDE to construct the weak classifiers for Adaboost ensemble learning, so that good classification results are obtained for arbitrarily input face images.
Brief description of the drawings
Fig. 1 is a schematic diagram of the geometric feature-based facial aesthetics analysis method according to an embodiment of the invention;
Fig. 2 is a schematic diagram of the chosen positions of the geometric feature points according to an embodiment of the invention;
Fig. 3 is a schematic diagram of facial geometric feature extraction according to an embodiment of the invention;
Fig. 4 illustrates the facial slope feature description according to an embodiment of the invention;
Fig. 5 illustrates the triangle area feature description according to an embodiment of the invention;
Fig. 6 is a schematic diagram of facial feature combination according to an embodiment of the invention, where i = 1, 2, ..., 845 indexes the dimensions of the single-feature description, assumed to be sorted from high to low classification performance;
Fig. 7 is a schematic diagram of the Adaboost algorithm according to an embodiment of the invention;
Fig. 8 is a schematic diagram of non-parametric estimation according to an embodiment of the invention;
Fig. 9 is a schematic diagram of dynamically weighted kernel density estimation according to an embodiment of the invention;
Fig. 10 shows the DWKDE driven by sample misclassification according to an embodiment of the invention;
Fig. 11 shows the MDWKDE driven by sample misclassification according to an embodiment of the invention;
Fig. 12 is a schematic diagram of the classification results of single local feature descriptions according to an embodiment of the invention;
Fig. 13 is a schematic diagram of the ensemble learning classification results of single-feature descriptions according to an embodiment of the invention;
Fig. 14 is a schematic diagram of the classification results of combined-feature descriptions according to an embodiment of the invention;
Fig. 15 is a schematic diagram of the ensemble learning classification results of combined-feature descriptions according to an embodiment of the invention.
Detailed description of the embodiments
To make the above objects, features and advantages of the present invention clearer and easier to understand, the invention is described in further detail below with reference to the drawings and specific embodiments.
Referring to Fig. 1, which shows a schematic diagram of the geometric feature-based facial aesthetics analysis method of the invention, the specific embodiments of the invention are described in detail below.
1. Geometric landmark annotation
In the invention, considering the amount of manual annotation work, 41 feature points are chosen when annotating the geometric points of a face image; the polyline of the feature points on each organ approximately delineates the contour of that organ. Each input face image is annotated manually with the 41 predefined geometric feature points, and the coordinates of each point are stored. Fig. 2 shows the chosen positions of the geometric feature points in this scheme, and Fig. 3 is a schematic diagram of facial geometric feature extraction.
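A minimal sketch of one possible way to store the 41 annotated landmark coordinates per image; the point ordering and file layout are assumptions not fixed by the patent.

```python
# Illustrative sketch only: storing the manually annotated landmarks of one face image.
import numpy as np

N_LANDMARKS = 41

def store_landmarks(points_xy, path):
    """points_xy: iterable of 41 (x, y) pixel coordinates in a fixed organ order."""
    arr = np.asarray(points_xy, dtype=float)
    assert arr.shape == (N_LANDMARKS, 2), "expect exactly 41 (x, y) points"
    np.save(path, arr)            # one .npy file per annotated face image
    return arr

# Usage with hypothetical annotations
landmarks = store_landmarks(np.random.rand(41, 2) * 256, "face_0001_landmarks.npy")
```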
2. Feature combination
For each annotated face picture:
(1) Compute the Euclidean distances between any two points $A(x_1, y_1)$ and $B(x_2, y_2)$, 820 in total, and normalize them by the interpupillary distance; remove the interpupillary distance itself and the two distances from the midpoint between the lips to the upper lip and to the lower lip, leaving 817 distances;
(2) Compute the slopes between pairs of points $C(x_3, y_3)$ and $D(x_4, y_4)$ among the 10 feature points describing the mandible contour on the two sides, 20 in total; see Fig. 4 for the facial slope feature description;
(3) Compute the triangle areas describing facial regions, normalize them by the area of the triangle formed by the two pupils and the nose tip, and remove that reference triangle itself, leaving 8 areas; see Fig. 5 for the triangle area feature description;
(4) Combine the above 845 dimensions into the single-feature description, as shown in Fig. 6 (a sketch of this single-feature extraction follows this list);
(5) After the offline processing has produced the classification performance of every one of the 845 feature dimensions, sort the dimensions by classification performance and combine the 100 best dimensions pairwise to obtain a 4950-dimensional second-order combined feature as the combined-feature description, as shown in Fig. 6.
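The sketch below illustrates the single-feature extraction of items (1)-(4) under stated assumptions: the landmark indices for the pupils, nose tip, jaw contour and region triangles are hypothetical, and the exact 817/20/8 selections of the patent are not reproduced.

```python
# Illustrative sketch only: distance, slope and area features from 41 landmarks.
import numpy as np
from itertools import combinations

def single_feature_description(lm, pupil_l=0, pupil_r=1, nose_tip=2,
                               jaw_idx=tuple(range(3, 13)),
                               region_triangles=((0, 2, 3), (1, 2, 4))):
    lm = np.asarray(lm, dtype=float)
    ipd = np.linalg.norm(lm[pupil_l] - lm[pupil_r])   # interpupillary distance

    # All pairwise distances, normalized by the interpupillary distance
    # (the patent drops 3 of the 820 pairs to reach 817; omitted here).
    dists = [np.linalg.norm(lm[a] - lm[b]) / ipd
             for a, b in combinations(range(len(lm)), 2)]

    # Slopes between jaw-contour landmarks (the patent keeps 20 specific pairs;
    # this sketch simply takes all pairs of the assumed jaw indices).
    slopes = [(lm[b, 1] - lm[a, 1]) / (lm[b, 0] - lm[a, 0] + 1e-9)
              for a, b in combinations(jaw_idx, 2)]

    # Triangle areas of facial regions, normalized by the pupils-nose-tip triangle
    # (the patent uses 8 regions; two hypothetical triangles are shown here).
    def tri_area(p, q, r):
        return 0.5 * abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))

    ref = tri_area(lm[pupil_l], lm[pupil_r], lm[nose_tip])
    areas = [tri_area(lm[i], lm[j], lm[k]) / ref for i, j, k in region_triangles]

    return np.array(dists + slopes + areas)

features = single_feature_description(np.random.rand(41, 2) * 256)
```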
3. Adaboost ensemble learning
In 1995, Freund and Schapire proposed the Adaboost algorithm based on the online allocation algorithm; its full name is Adaptive Boosting, and it was named Adaboost to distinguish it from the earlier Boosting algorithm.
Adaboost is an iterative algorithm. Its core idea is to train different weak classifiers on different distributions of the same training set and then aggregate these weak classifiers into a stronger final classifier. The algorithm works by changing the data distribution: the weight of each sample is determined according to whether it was classified correctly in each training round and according to the overall accuracy of the previous round. The data set with the adjusted weights is passed to the next-level classifier for training, and the classifiers obtained in each round are finally fused into the final decision classifier. Fig. 7 is a schematic diagram of the Adaboost algorithm.
Using an Adaboost classifier puts the emphasis of classification on the data that are easy to misclassify, as shown in Fig. 10, where circles and dots denote the data to be classified, larger symbols denote higher weights, the solid line is the current classification result (obtained by fusing the first m classifiers), and the dashed line is the current tentative classification. The classifier pays more attention to the data misclassified in the previous round, i.e. the samples drawn with large symbols are less likely to be misclassified in the current round. When m = 150, the points of the two symbol types are essentially separated.
The Adaboost algorithm trains different weak classifiers for different weight distributions over the same training set. Initially every sample has the same weight, $1/n$, where $n$ is the number of samples, and a weak classifier is trained under this distribution. The weights of correctly classified samples are then reduced, so that misclassified samples stand out, giving a new sample distribution under which another weak classifier is trained. Repeating this for $T$ rounds yields $T$ weak classifiers, which are boosted, i.e. superposed with certain weights, into the desired strong classifier.
The concrete steps of the Adaboost ensemble algorithm are as follows:

The geometric feature descriptions of the face samples are $(x_1, y_1), \ldots, (x_n, y_n)$, where $x_i$, $i = 1, \ldots, n$, is a single-feature or combined-feature description, the class label $y_i = 1, 0$ denotes "attractive" and "unattractive" respectively, and $n$ is the total number of samples.

(1) Initialize the sample weights $w_{1,i}$: $w_{1,i} = \frac{1}{2m}$ for "attractive" samples and $w_{1,i} = \frac{1}{2l}$ for "unattractive" samples, where $m$ and $l$ are the numbers of positive and negative samples respectively and $n = m + l$;

(2) For $t = 1, \ldots, T$ ($T$ is the number of weak classifiers to be chosen):

1) Normalize the weights $w_{t,i}$:

$$w_{t,i} = \frac{w_{t,i}}{\sum_{j=1}^{n} w_{t,j}}$$

2) For each feature f, train one MDWKDE weak classifier $h_{t,f}$;

3) Compute the weighted ($w_t$) error rate of the weak classifier of each feature:

$$e_{t,f} = \sum_i \bar{w}_{t,i} \, |h_{t,f}(x_i) - y_i|$$

4) Choose the classifier with the minimum error rate ($e_t$) as the best weak classifier $h_t$;

5) Readjust the weights according to the error rate of the best weak classifier:

$$w_{t+1,i} = \bar{w}_{t,i} \, \beta_t^{1 - e_i}$$

where $e_i = 0$ indicates that $x_i$ was classified correctly, $e_i = 1$ indicates that $x_i$ was misclassified, and $\beta_t = \frac{e_t}{1 - e_t}$.

(3) The ensemble of the $T$ weak classifiers built on the facial geometric features f gives the final strong classifier:

$$h(x) = \begin{cases} 1 & \sum_{t=1}^{T} \alpha_t h_t(x) \ge \frac{1}{2}\sum_{t=1}^{T} \alpha_t \\ 0 & \text{otherwise} \end{cases}$$

The class of a sample can be judged from the value of $h(x)$.
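A minimal sketch of the Adaboost loop above. The weak-learner interface is a placeholder for the MDWKDE classifier, the memory-averaged weights $\bar{w}_{t,i}$ are folded into the weak learner for simplicity, and $\alpha_t = \log(1/\beta_t)$ is the standard Viola-Jones weighting, which the text itself does not spell out.

```python
# Illustrative sketch only: the per-feature Adaboost selection loop described above.
import numpy as np

def adaboost_train(X, y, train_weak_classifier, T=50):
    """X: (n_samples, n_features); y: labels in {0, 1}.
    train_weak_classifier(feature_values, y, w) must return a vectorized callable
    mapping feature values to predicted labels in {0, 1} (e.g. an MDWKDE classifier)."""
    n, n_feat = X.shape
    y = np.asarray(y, dtype=float)
    m, l = int(np.sum(y == 1)), int(np.sum(y == 0))
    w = np.where(y == 1, 1.0 / (2 * m), 1.0 / (2 * l))       # initial weights

    classifiers, alphas = [], []
    for t in range(T):
        w = w / w.sum()                                       # normalize weights
        best = None
        for f in range(n_feat):                               # one weak learner per feature
            h = train_weak_classifier(X[:, f], y, w)
            err = float(np.sum(w * np.abs(h(X[:, f]) - y)))   # weighted error e_{t,f}
            if best is None or err < best[0]:
                best = (err, f, h)
        e_t, f_t, h_t = best
        beta = e_t / (1.0 - e_t + 1e-12)
        miss = (h_t(X[:, f_t]) != y).astype(float)            # e_i = 1 if misclassified
        w = w * np.power(beta + 1e-12, 1.0 - miss)            # down-weight correct samples
        classifiers.append((f_t, h_t))
        alphas.append(np.log(1.0 / (beta + 1e-12)))           # assumed alpha_t = log(1/beta_t)

    def strong_classifier(x_row):
        votes = sum(a * float(h(x_row[f])) for a, (f, h) in zip(alphas, classifiers))
        return int(votes >= 0.5 * sum(alphas))
    return strong_classifier

# Usage with a trivial threshold-stump weak learner standing in for MDWKDE:
def stump_learner(xf, y, w):
    thr = np.average(xf, weights=w)
    above = xf > thr
    pol = 1 if np.sum(w[above] * y[above]) >= 0.5 * np.sum(w[above]) else 0
    return lambda v: np.where(np.asarray(v) > thr, pol, 1 - pol)

X, y = np.random.rand(200, 10), np.random.randint(0, 2, 200)
clf = adaboost_train(X, y, stump_learner, T=10)
print(clf(X[0]))
```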
4. Constructing the MDWKDE weak classifier
Each input feature corresponds to one weak classifier, and training with the Adaboost algorithm is the process of selecting the best weak classifiers and assigning them weights. A weak classifier classifies the samples by some strategy, so the choice of weak classifier is crucial for classification accuracy. To guarantee that the final strong classifier converges, the misclassification rate of each weak classifier must be below 0.5, i.e. at least slightly better than random guessing; only then will the training algorithm converge.
The weak classifier trained in the Adaboost algorithm can be any classifier, such as a decision tree, a neural network, a threshold classifier, or a probability density estimator. Common approaches for obtaining the distribution density function of a random variable from a given set of sample points are parametric estimation and non-parametric estimation. Parametric estimation treats the true distribution as a known parametric form (e.g. linear, transformable to linear, or exponential), searches for a specific solution within a family of objective functions, and determines the unknown parameters of the regression model. Experience and theory show that there is often a large gap between this basic assumption of a parametric model and the actual physical model, so such methods frequently fail to give satisfactory results. Non-parametric estimation, by contrast, does not rely on prior knowledge of the data distribution and estimates the true distribution without any prior assumptions; it works from the distribution characteristics of the data samples themselves and has therefore received great attention in both statistical theory and applications. In this scheme we take kernel density estimation, a non-parametric estimator, as an example; see Fig. 8.
Kernel density estimation, proposed by Rosenblatt (1955) and Emanuel Parzen (1962) and also known as Parzen window density estimation, is a classical probability density algorithm used in probability theory to estimate unknown density functions; it is one of the non-parametric test methods and a widely applied non-parametric estimation technique.
Given a sample set $(x_1, x_2, \ldots, x_n)$, for any point $x$ the kernel density estimate of its distribution function $p(x)$ can be defined as

$$\hat{p}(x) = \frac{1}{nh} \sum_{i=1}^{n} K_h(x - x_i) \quad (1)$$

where $K_h(\cdot)$ is the kernel function and $h$ is the window width or bandwidth. As the formula shows, ordinary kernel density estimation uses the kernel of the distance from the data point $x_i$ to $x$, $K_h(x - x_i)$, to judge the "contribution" of $x_i$ when estimating the probability density at $x$, and takes the weighted average of local functions centered at the sample points as the estimate of the probability density at that point, as shown in Fig. 11.
In this scheme we adopt the commonly used Gaussian kernel, so that formula (1) can be written as

$$\hat{p}(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{(2\pi)^{d/2}\sigma} \exp\left\{ -\frac{1}{2\sigma^2}(x - x_i)^T (x - x_i) \right\} \quad (2)$$

where $x = [x_1, x_2, \ldots, x_d]^T$ is a d-dimensional vector and $\sigma$ is the sample variance.
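A minimal sketch of the plain Gaussian kernel density estimate of formula (2); following the text, $\sigma$ is used directly as the bandwidth.

```python
# Illustrative sketch only: Gaussian KDE as in formula (2).
import numpy as np

def gaussian_kde(x, samples, sigma=1.0):
    """x: (d,) query point; samples: (n, d) data points; returns the density estimate."""
    x = np.atleast_1d(x).astype(float)
    samples = np.atleast_2d(samples).astype(float)
    n, d = samples.shape
    diff = samples - x                                     # (n, d)
    sq = np.sum(diff * diff, axis=1)                       # (x - x_i)^T (x - x_i)
    norm = 1.0 / ((2.0 * np.pi) ** (d / 2) * sigma)
    return float(np.mean(norm * np.exp(-sq / (2.0 * sigma ** 2))))

# Usage: density of a 1-D feature value under 200 hypothetical sample values
vals = np.random.randn(200, 1)
print(gaussian_kde(np.array([0.3]), vals, sigma=0.5))
```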
Formula (2) assumes that, when computing the probability density, all sample points have weight 1/n. In the Adaboost algorithm, however, the sample weights are readjusted after every training round, and after each round the weight of every sample is adjusted according to its classification error so as to produce a different training set. Combining this with Adaboost, this scheme proposes the dynamically weighted kernel density estimation (DWKDE) algorithm, shown in Fig. 9, which estimates the probability density with the dynamic sample weights, so that formula (2) can be rewritten as

$$\hat{p}_t(x) = \sum_{i=1}^{n} \frac{w_{t,i}}{(2\pi)^{d/2}\sigma} \exp\left\{ -\frac{1}{2\sigma^2}(x - x_i)^T (x - x_i) \right\} \quad (3)$$

where $w_{t,i}$, $i = 1, \ldots, n$, is the new sample weight distribution after the previous round of Adaboost training.
DWKDE, driven by the sample classification errors, fits the unknown probability density distribution of the samples better: the "contribution" of each sample point $x_i$ to the density at $x$ reflects not only the kernel of its distance, $K_h(x - x_i)$, but also the change of its weight. After Adaboost readjusts the weights, the weights of the samples misclassified in the previous round increase, their corresponding probability density increases, and the chance that they are misclassified in the next round decreases, as shown in Fig. 12.
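A minimal sketch of DWKDE as in formula (3): the uniform 1/n weights are replaced by the current round's Adaboost sample weights. Variable names are assumptions.

```python
# Illustrative sketch only: dynamically weighted Gaussian KDE (formula (3)).
import numpy as np

def dwkde(x, samples, weights, sigma=1.0):
    """weights: the (normalized) Adaboost sample weights of the current round, one per sample."""
    x = np.atleast_1d(x).astype(float)
    samples = np.atleast_2d(samples).astype(float)
    d = samples.shape[1]
    sq = np.sum((samples - x) ** 2, axis=1)
    kern = np.exp(-sq / (2.0 * sigma ** 2)) / ((2.0 * np.pi) ** (d / 2) * sigma)
    return float(np.dot(weights, kern))                    # sum_i w_{t,i} K(x - x_i)

# Usage: misclassified samples carry higher weight and contribute more density
vals = np.random.randn(100, 1)
w = np.full(100, 1.0 / 100)
w[:10] *= 3.0; w /= w.sum()                                # up-weight 10 "hard" samples
print(dwkde(np.array([0.0]), vals, w, sigma=0.5))
```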
As Fig. 13 shows, as the sample weights are redistributed, the probability density of the sample points indicated by the green line fluctuates strongly; because DWKDE adopts the weight redistribution of every round, it causes sharp changes in the probability density. To avoid the influence of such sharp weight changes on the density estimate, this scheme proposes, on the basis of dynamically weighted kernel density estimation, the memory-based dynamically weighted kernel density estimation (MDWKDE): according to the historical trend of the sample weights, the sample weight distribution of the t-th round is obtained statistically from the previous (t-1) weight distributions. From formula (3), the probability density estimate becomes

$$\hat{p}_t(x) = \sum_{i=1}^{n} \frac{\bar{w}_{t,i}}{(2\pi)^{d/2}\sigma} \exp\left\{ -\frac{1}{2\sigma^2}(x - x_i)^T (x - x_i) \right\} \quad (4)$$

where $\bar{w}_{t,i}$ is the mean of the dynamic weights of the i-th sample over the previous (t-1) rounds of the Adaboost algorithm. Using this weighted mean avoids the sharp changes in the density estimate caused by weight redistributions due to sample misclassification, and acts as a correction of the dynamically weighted kernel density estimate, as shown in Fig. 14: because the mean of the dynamic weights of the previous (t-1) rounds is used as the new sample weight distribution, the amplitude of the fluctuations is reduced, the sample probability density estimate increases or decreases only within a small range, and the stability and accuracy of the estimate are improved.
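A minimal sketch of MDWKDE as in formula (4): the weights fed to the weighted KDE are the mean of the previous rounds' Adaboost weights. The weight-history layout is an assumption.

```python
# Illustrative sketch only: memory-based dynamically weighted Gaussian KDE (formula (4)).
import numpy as np

def mdwkde(x, samples, weight_history, sigma=1.0):
    """weight_history: list of per-round weight vectors w_1, ..., w_{t-1};
    their mean gives the memory-averaged weights \\bar{w}_{t,i} of formula (4)."""
    w_bar = np.mean(np.asarray(weight_history), axis=0)
    x = np.atleast_1d(x).astype(float)
    samples = np.atleast_2d(samples).astype(float)
    d = samples.shape[1]
    sq = np.sum((samples - x) ** 2, axis=1)
    kern = np.exp(-sq / (2.0 * sigma ** 2)) / ((2.0 * np.pi) ** (d / 2) * sigma)
    return float(np.dot(w_bar, kern))

# Usage: three earlier rounds of (hypothetical) Adaboost weights
hist = [np.random.dirichlet(np.ones(100)) for _ in range(3)]
vals = np.random.randn(100, 1)
print(mdwkde(np.array([0.0]), vals, hist, sigma=0.5))
```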
In summary, the concrete steps for training the weak classifiers in the proposed Adaboost ensemble algorithm with the misclassification-driven memory-based dynamically weighted kernel density estimation are as follows:

Given the geometric feature descriptions of the input samples, train one weak classifier for each feature f:

(1) Normalize all sample weights $w_{t,i}$:

$$w^{Pos}_{t,i} = \frac{w^{Pos}_{t,i}}{\sum_{j=1}^{m} w^{Pos}_{t,j}}, \qquad w^{Neg}_{t,i} = \frac{w^{Neg}_{t,i}}{\sum_{j=1}^{l} w^{Neg}_{t,j}}$$

where $w^{Pos}_{t,i}$ are the "attractive" sample weights and $w^{Neg}_{t,i}$ the "unattractive" sample weights;

(2) Each feature f corresponds to one weak classifier; for $f = 1, \ldots, nf$ ($nf$ is the number of feature-combination descriptions):

1) For any point $x$ of feature f, the kernel density estimate of its distribution density function $p(x)$ is

$$\hat{p}_t(x) = \sum_{i=1}^{n} \bar{w}_{t,i} K_h(x - x_i)$$

where $\bar{w}_{t,i}$ is the mean of the dynamic weights of the i-th sample point over the previous (t-1) rounds of the Adaboost algorithm, $K_h(\cdot)$ is the kernel function, and $h$ is the window width or bandwidth. In this method we choose the most common Gaussian window:

$$K(x) = \frac{1}{(2\pi)^{d/2}\sigma} \exp\left\{ -\frac{1}{2\sigma^2}(x - x_i)^T (x - x_i) \right\}$$

where $x = [x_1, x_2, \ldots, x_d]^T$ is a d-dimensional vector (d = 1 when x is a single-feature description and d = 2 when x is a combined-feature description) and $\sigma$ is the sample variance;

2) Obtain the weak classifier from the kernel density estimates:

$$h_{t,f}(x) = \mathrm{sgn}\big(\hat{p}^{Pos}_t(x) - \hat{p}^{Neg}_t(x)\big)$$

where $\hat{p}^{Pos}_t(x)$ is the probability density of feature f belonging to the "attractive" class and $\hat{p}^{Neg}_t(x)$ the probability density of feature f belonging to the "unattractive" class.
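A minimal sketch of the MDWKDE weak classifier: one memory-averaged weighted KDE per class, classified by comparing the two densities. Helper names are hypothetical, and the sign output is mapped to the 0/1 labels used elsewhere in the text.

```python
# Illustrative sketch only: MDWKDE weak classifier h_{t,f}(x) = sgn(p_pos - p_neg).
import numpy as np

def _weighted_gauss_kde(x, samples, w_bar, sigma):
    d = samples.shape[1]
    sq = np.sum((samples - x) ** 2, axis=1)
    kern = np.exp(-sq / (2.0 * sigma ** 2)) / ((2.0 * np.pi) ** (d / 2) * sigma)
    return float(np.dot(w_bar, kern))

def mdwkde_weak_classifier(pos_samples, neg_samples, pos_w_hist, neg_w_hist, sigma=1.0):
    """pos_/neg_samples: (m, d) and (l, d) feature values of the two classes;
    *_w_hist: lists of per-round Adaboost weight vectors for each class."""
    w_pos = np.mean(np.asarray(pos_w_hist), axis=0); w_pos /= w_pos.sum()   # normalize
    w_neg = np.mean(np.asarray(neg_w_hist), axis=0); w_neg /= w_neg.sum()
    pos_samples = np.atleast_2d(pos_samples).astype(float)
    neg_samples = np.atleast_2d(neg_samples).astype(float)

    def classify(x):
        x = np.atleast_1d(x).astype(float)
        p_pos = _weighted_gauss_kde(x, pos_samples, w_pos, sigma)
        p_neg = _weighted_gauss_kde(x, neg_samples, w_neg, sigma)
        return 1 if p_pos >= p_neg else 0        # "attractive" vs "unattractive"
    return classify

# Usage on hypothetical 1-D feature values
pos = np.random.randn(60, 1) + 0.5; neg = np.random.randn(40, 1) - 0.5
h = mdwkde_weak_classifier(pos, neg, [np.ones(60) / 60], [np.ones(40) / 40], sigma=0.4)
print(h(np.array([0.7])))
```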
To describe the experiments further: the database used comes from the Hotornot website, a social networking site where people can rate pictures uploaded voluntarily. A sub-database of 1230 pictures of women was selected, and every picture carries the mean of the grades given by more than 50 raters, i.e. the reference score, with grades ranging from 1 to 10. Only the two-class case is considered in the experiments, "unattractive" and "attractive": the 601 pictures whose reference score is less than 8 form the first class, "unattractive", and the remaining 629 pictures, whose reference score is greater than 8, form the second class, "attractive".
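A minimal sketch of the labeling rule described above, assuming the per-image mean scores are already available.

```python
# Illustrative sketch only: thresholding the mean rating into the two experimental classes
# (mean score below 8 -> "unattractive" = 0, above 8 -> "attractive" = 1).
import numpy as np

def label_by_mean_score(mean_scores, threshold=8.0):
    """mean_scores: per-image mean of the >50 raters' grades on a 1-10 scale."""
    return (np.asarray(mean_scores, dtype=float) > threshold).astype(int)

labels = label_by_mean_score([7.2, 8.4, 9.1, 6.5])   # -> array([0, 1, 1, 0])
```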
To obtain the geometric features, the images were geometrically annotated with 41 points in total, and the distance, slope and triangle area information was extracted, 845 feature dimensions in all; this yields an 845-dimensional single local geometric feature description, and any two dimensions can be combined into one two-dimensional combined description feature.

Learning and classification were carried out separately on the single local geometric features and on the combined feature descriptions. The experimental results show that the classification performance of the combined feature descriptions is better than that of the single local geometric features, i.e. the combined descriptions capture the intrinsic character of facial aesthetics more effectively. The Adaboost model was then used to perform ensemble learning classification on both the single local geometric features and the combined feature descriptions (see Fig. 15); the results show that the Adaboost model extracts more effective intrinsic descriptions, and that the memory-based dynamically weighted kernel density estimation method for constructing weak classifiers proposed in this scheme performs better than plain kernel density estimation.
The geometric feature-based facial aesthetics analysis method provided by the present invention has been described in detail above. Specific examples have been used herein to explain the principle and the embodiments of the invention, and the above description of the embodiments is only intended to help in understanding the method of the invention and its core idea. At the same time, for a person of ordinary skill in the art, changes may be made to the specific embodiments and the scope of application in accordance with the idea of the invention. In summary, the contents of this description should not be construed as limiting the invention.

Claims (6)

1. A geometric feature-based facial aesthetics analysis method, characterized in that the method comprises: offline database construction and online detection, wherein the offline database construction part is divided into offline preprocessing and offline processing, and the online detection part is divided into online preprocessing and online processing;
the offline preprocessing comprises: annotating 41 feature points on the organs of every face image in a preselected picture database, the polyline of the feature points on each organ describing the contour of that organ; storing the coordinates of each feature point; and assembling single-feature descriptions and combined-feature descriptions, which are compared during online detection with the corresponding combined features selected after an unknown face image has been annotated;
the offline processing comprises taking the feature-combination descriptions formed from all images in the database and constructing weak classifiers with memory-based dynamically weighted kernel density estimation;
constructing weak classifiers with the memory-based dynamically weighted kernel density estimation comprises:
describing the input samples with geometric features and training one weak classifier for each feature f, wherein training one weak classifier for each feature f comprises normalizing all sample weights $w_{t,i}$, where $w^{Pos}_{t,i}$ denotes the "attractive" sample weights and $w^{Neg}_{t,i}$ the "unattractive" sample weights;
each feature f corresponds to one weak classifier; for $f = 1, \ldots, nf$, where $nf$ is the number of feature-combination descriptions: for any point $x$ of feature f, the kernel density estimate of its distribution density function $p(x)$ is $\hat{p}_t(x) = \sum_{i=1}^{n} \bar{w}_{t,i} K_h(x - x_i)$, where $\bar{w}_{t,i}$ is the mean of the dynamic weights of the i-th sample point over the previous (t-1) rounds of the Adaboost algorithm, $K_h(\cdot)$ is the kernel function and $h$ is the window width or bandwidth; the weak classifier constructed by the memory-based dynamically weighted kernel density estimation is $h_{t,f}(x) = \mathrm{sgn}\big(\hat{p}^{Pos}_t(x) - \hat{p}^{Neg}_t(x)\big)$, where $\hat{p}^{Pos}_t(x)$ is the probability density of feature f belonging to the attractive class and $\hat{p}^{Neg}_t(x)$ the probability density of feature f belonging to the unattractive class;
"attractive" means that the reference score of the face image is greater than a preset value, and "unattractive" means that the reference score of the face image is less than the preset value; the reference score of each image is the mean of the grades given by more than 50 raters;
the online preprocessing comprises: for each input face image sample, manually annotating the positions of the 41 geometric feature points selected for the offline database, storing the coordinates of each point, and forming the single-feature description and the combined-feature description as the query vector;
the online processing comprises performing ensemble learning on the geometric feature description of the input image with the Adaboost algorithm, combining the weak classifiers into a strong classifier for classification, and obtaining the class label of the class to which the face image described by the query vector belongs;
the step of performing ensemble learning with the Adaboost algorithm comprises:

the geometric feature descriptions of the face samples being $(x_1, y_1), \ldots, (x_i, y_i), \ldots, (x_n, y_n)$, where $x_i$, $i = 1, \ldots, n$, is a single-feature or combined-feature description, the class label $y_i = 1, 0$ denotes "attractive" and "unattractive" respectively, and $n$ is the total number of samples;

initializing the sample weights $w_{1,i}$: $w_{1,i} = \frac{1}{2m}$ for attractive samples and $w_{1,i} = \frac{1}{2l}$ for unattractive samples, where $m$ and $l$ are the numbers of positive and negative samples respectively and $n = m + l$;

for $t = 1, \ldots, T$, where $T$ is the number of weak classifiers to be chosen, processing as follows:

normalizing the weights $w_{t,i}$:

$$w_{t,i} = \frac{w_{t,i}}{\sum_{j=1}^{n} w_{t,j}}$$

for each feature f, training one MDWKDE weak classifier $h_{t,f}$;

computing the weighted ($w_t$) error rate of the weak classifier of each feature:

$$e_{t,f} = \sum_i \bar{w}_{t,i} \, |h_{t,f}(x_i) - y_i|$$

choosing the classifier with the minimum error rate ($e_t$) as the best weak classifier $h_t$, and readjusting the weights according to the error rate of the best weak classifier:

$$w_{t+1,i} = \bar{w}_{t,i} \, \beta_t^{1 - e_i}$$

where $e_i = 0$ indicates that $x_i$ was classified correctly, $e_i = 1$ indicates that $x_i$ was misclassified, and $\beta_t = \frac{e_t}{1 - e_t}$;

obtaining the final strong classifier from the ensemble of the $T$ weak classifiers built on the facial geometric features f:

$$h(x) = \begin{cases} 1 & \sum_{t=1}^{T} \alpha_t h_t(x) \ge \frac{1}{2}\sum_{t=1}^{T} \alpha_t \\ 0 & \text{otherwise} \end{cases}$$

wherein the class of a sample can be judged from the value of $h(x)$.
2. The method of claim 1, characterized in that:
every face image in the preselected picture database carries class information, i.e. it is a positive sample or a negative sample, "attractive" or "unattractive".
3. The method of claim 2, characterized in that the method further comprises:
verifying the constructed weak classifiers according to the class information.
4. The method of claim 2, characterized in that:
the class information is obtained by having more than 50 people each give an objective aesthetic score and dividing the images by the size of the mean score.
5. The method of claim 1, characterized in that:
for each face image in the offline database, the Euclidean distances between any two of the 41 geometric feature points are computed and normalized by the interpupillary distance, 817 in total; the slopes between pairs of the 10 feature points describing the mandible contour on the two sides are computed, 20 in total; the triangle areas describing facial regions are computed and normalized by the area of the triangle formed by the two pupils and the nose tip, 8 in total; the 845-dimensional feature vector composed of the Euclidean distances, slopes and triangle areas serves as the single-feature description; meanwhile, the 845-dimensional feature vectors of all face images in the offline database are stored so that, during online detection, they can be compared with the corresponding 845-dimensional feature vector selected after an unknown face image has been annotated;

for each face image in the offline database, after the offline processing has produced the classification performance of every one of the 845 feature dimensions, the dimensions are sorted by classification performance and the 100 best dimensions are combined pairwise to give a 4950-dimensional joint feature as the combined-feature description, which during online detection is compared with the corresponding combined-feature description selected after an unknown face image has been annotated.
6. The method of claim 1, characterized in that:
for each input face image, the 817 Euclidean distances between any two of the 41 geometric feature points are computed and normalized by the interpupillary distance, the 20 slopes between pairs of the 10 feature points describing the mandible contour on the two sides are computed, and the 8 triangle areas describing facial regions are computed and normalized by the area of the triangle formed by the two pupils and the nose tip; the 845-dimensional feature vector composed of the Euclidean distances, slopes and triangle areas is stored as the single-feature query vector;

for each online-preprocessed input face picture, according to the ranking of the classification performance of the 845 single-feature dimensions of the offline-database face images, the corresponding top 100 dimensions are chosen from the 845-dimensional description of the input image and combined pairwise to obtain a 4950-dimensional joint feature, which is stored as the combined-feature query vector.
CN201110177113.5A 2011-06-28 2011-06-28 Geometrical feature-based human face aesthetics analyzing method Expired - Fee Related CN102254180B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110177113.5A CN102254180B (en) 2011-06-28 2011-06-28 Geometrical feature-based human face aesthetics analyzing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110177113.5A CN102254180B (en) 2011-06-28 2011-06-28 Geometrical feature-based human face aesthetics analyzing method

Publications (2)

Publication Number Publication Date
CN102254180A CN102254180A (en) 2011-11-23
CN102254180B true CN102254180B (en) 2014-07-09

Family

ID=44981432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110177113.5A Expired - Fee Related CN102254180B (en) 2011-06-28 2011-06-28 Geometrical feature-based human face aesthetics analyzing method

Country Status (1)

Country Link
CN (1) CN102254180B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107101972A (en) * 2017-05-24 2017-08-29 福州大学 Near-infrared spectroscopy method for rapidly detecting the place of origin of Radix Tetrastigmae

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103632165B (en) * 2013-11-28 2017-07-04 小米科技有限责任公司 A kind of method of image procossing, device and terminal device
CN104765732B (en) * 2014-01-02 2019-05-24 腾讯科技(深圳)有限公司 Image parameters acquisition methods and image parameters acquisition device
CN104850820B (en) * 2014-02-19 2019-05-31 腾讯科技(深圳)有限公司 A kind of recognition algorithms and device
US10043112B2 (en) * 2014-03-07 2018-08-07 Qualcomm Incorporated Photo management
CN104021550B (en) * 2014-05-22 2017-01-18 西安理工大学 Automatic positioning and proportion determining method for proportion of human face
US10929774B2 (en) * 2015-02-12 2021-02-23 Koninklijke Philips N.V. Robust classifier
CN104834898B (en) * 2015-04-09 2018-05-15 华南理工大学 A kind of quality classification method of personage's photographs
CN106709411A (en) * 2015-11-17 2017-05-24 腾讯科技(深圳)有限公司 Appearance level acquisition method and device
CN106446793B (en) * 2016-08-31 2019-01-01 广州莱德璞检测技术有限公司 A kind of facial contour calculation method based on human face characteristic point
CN107833199B (en) * 2016-09-12 2020-03-27 南京大学 Method for analyzing quality of copy cartoon image
CN107169408A (en) * 2017-03-31 2017-09-15 北京奇艺世纪科技有限公司 A kind of face value decision method and device
CN107290305B (en) * 2017-07-19 2019-11-01 中国科学院合肥物质科学研究院 A kind of near infrared spectrum quantitative modeling method based on integrated study
CN108108715A (en) * 2017-12-31 2018-06-01 厦门大学 It is inspired based on biology and depth attribute learns the face aesthetic feeling Forecasting Methodology being combined
CN108596094B (en) * 2018-04-24 2021-02-05 杭州数为科技有限公司 Character style detection system, method, terminal and medium
CN111354478A (en) * 2018-12-24 2020-06-30 黄庆武整形医生集团(深圳)有限公司 Shaping simulation information processing method, shaping simulation terminal and shaping service terminal
CN111091040B (en) * 2019-10-15 2023-04-07 西北大学 Human face attractive force data processing method based on global contour and facial structure classification


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100375108C (en) * 2006-03-02 2008-03-12 复旦大学 Automatic positioning method for characteristic point of human faces
CN101604377A (en) * 2009-07-10 2009-12-16 华南理工大学 A kind of facial beauty classification method that adopts computing machine to carry out woman image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
E. Parzen. On the Estimation of a Probability Density Function and the Mode. Annals of Mathematical Statistics, 1962, 33(3): 1065-1076. *
Fan Xiaojiu et al. An improved AAM-based method for fast facial feature point localization. Journal of Electronics & Information Technology, 2009, 31(6): 1354-1358. *
Chen Yili et al. A novel facial beauty evaluation method. Proceedings of the 2008 Chinese Conference on Pattern Recognition, 2008: 282-286. *


Also Published As

Publication number Publication date
CN102254180A (en) 2011-11-23

Similar Documents

Publication Publication Date Title
CN102254180B (en) Geometrical feature-based human face aesthetics analyzing method
Yang et al. Learning face age progression: A pyramid architecture of gans
CN106650806B (en) A kind of cooperating type depth net model methodology for pedestrian detection
Lucey et al. Investigating spontaneous facial action recognition through aam representations of the face
US8457391B2 (en) Detecting device for specific subjects and learning device and learning method thereof
CN104680141B (en) Facial expression recognizing method and system based on moving cell layering
CN107766787A (en) Face character recognition methods, device, terminal and storage medium
CN109815826A (en) The generation method and device of face character model
Cai et al. Facial expression recognition method based on sparse batch normalization CNN
Xie et al. Facial expression recognition based on shape and texture
CN106778852A (en) A kind of picture material recognition methods for correcting erroneous judgement
CN104834941A (en) Offline handwriting recognition method of sparse autoencoder based on computer input
CN103824052A (en) Multilevel semantic feature-based face feature extraction method and recognition method
CN102609693A (en) Human face recognition method based on fuzzy two-dimensional kernel principal component analysis
Zhai et al. Asian female facial beauty prediction using deep neural networks via transfer learning and multi-channel feature fusion
Yi et al. Facial expression recognition of intercepted video sequences based on feature point movement trend and feature block texture variation
Balaji et al. Multi-level feature fusion for group-level emotion recognition
Tang et al. Facial expression recognition using AAM and local facial features
Chen et al. What if the Irresponsible Teachers Are Dominating?
Sun et al. Face recognition based on local gradient number pattern and fuzzy convex-concave partition
Kalayci et al. Automatic analysis of facial attractiveness from video
Shu et al. Computational face reader based on facial attribute estimation
Bekhouche Facial soft biometrics: extracting demographic traits
Liao Facial age feature extraction based on deep sparse representation
CN104156708A (en) Feature representation method based on dynamic facial expression sequence and K-order emotional intensity model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140709

Termination date: 20160628

CF01 Termination of patent right due to non-payment of annual fee