CN102254180A - Geometrical feature-based human face aesthetics analyzing method


Info

Publication number
CN102254180A
Authority
CN
China
Prior art keywords
feature
points
combined
human face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201110177113
Other languages
Chinese (zh)
Other versions
CN102254180B (en)
Inventor
朱振峰
段红帅
赵耀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiaotong University filed Critical Beijing Jiaotong University
Priority to CN201110177113.5A priority Critical patent/CN102254180B/en
Publication of CN102254180A publication Critical patent/CN102254180A/en
Application granted granted Critical
Publication of CN102254180B publication Critical patent/CN102254180B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a geometric feature-based human face aesthetics analysis method, which comprises: building combined descriptions from local geometric features under a combination strategy; constructing weak classifiers with memory-based dynamically weighted kernel density estimation (MDWKDE); and integrating the features effectively through the Adaboost ensemble learning mechanism, so as to obtain an accurate aesthetic classification of the face. Unlike traditional geometric feature-based face aesthetics analysis techniques, the method selects local geometric features that describe facial aesthetics from multiple angles, such as Euclidean distance, slope and area, to form single descriptions of facial aesthetics; it then combines these local descriptions to obtain combined feature descriptions and constructs the weak classifiers for Adaboost ensemble learning with MDWKDE, thereby obtaining good classification results for arbitrarily input face images.

Description

Human face aesthetic feeling analysis method based on geometric features
Technical Field
The invention relates to the technical field of face aesthetic feeling analysis methods, in particular to a face aesthetic feeling analysis method based on geometric features.
Background
Facial beauty is the unification of individuality and commonality: individuality means that every person's appearance is different and has its own characteristics of beauty, while commonality means that, despite different appearances, faces follow certain common rules. The pursuit of beauty has never stopped since ancient times; philosophers, psychologists, artists and others have tried to find the essence of beauty and have formed many aesthetic laws, for example the ancient Chinese aesthetic standards that became an important system of thought in Chinese culture, the traditional Chinese rule of "three sections and five eyes", the golden ratio advocated since ancient Greece, and the new golden ratio discovered more recently by researchers.
In the early days beauty was considered an abstract, subjective concept ("Beauty is in the eye of the beholder"), that is, the judgment of beauty varies with the sex, age, race, education level and cultural environment of the observer; Voltaire likewise held that beauty is relative. However, it has been found that when infants of 2-3 months and 6-8 months of age are shown images of female faces divided into beautiful and non-beautiful according to adult standards, the time the infants spend watching the beautiful faces is significantly longer than for the non-beautiful ones; meanwhile, a study in evolutionary psychology found a positive correlation as high as 0.93 among Asian, Hispanic, Black and White raters scoring photographs of women. That is, people all over the world apply a similar set of criteria for judging beauty.
Face aesthetics analysis technology is attracting more and more attention because of the great economic and social benefits it brings. The American Society for Cosmetic Surgery reports that the American cosmetic surgery market grew 4.57-fold over ten years, with an industry value of about 13 billion dollars in 2007. Evolutionary psychologists have introduced the concept of "reproductive value" from the relationship between female beauty and reproduction, explaining the male perception of female beauty from an evolutionary perspective. Faced with the massive information on the Internet, personalized information retrieval based on aesthetics is also becoming a mainstream method on social network sites. With the continuous change of human lifestyles, the expansion of the range of activities and the long-term influence of race, region, climate and living habits on national populations, the physical features of people all over the world have changed markedly, facial features most obviously of all; studying the facial features of contemporary people can therefore provide important basic data for clinical cosmetic medicine and for medical care and nutrition evaluation standards. The organs of faces regarded as beautiful follow certain proportions, and an imbalance of these proportions is one of the important indicators of deformity. Scientists study the social influence of beauty, that is, the degree to which appearance affects interpersonal relationships, by comparing good-looking people with ordinary people. Aesthetic features also deeply influence the artistic creation of sculptors, painters and the like, providing immeasurable spiritual value for human beings.
In view of the potential applications of face aesthetics analysis technology in cosmetic surgery, makeup, social network sites, entertainment software, scientific research and other fields, people have begun to consider the essence of beauty, what elements constitute it, and whether those elements can be quantified.
In the 1980s and 1990s, building on Francis Galton's composite photography technique, researchers used image deformation (morphing) software to synthesize an average face from many individual face images, giving rise to the average face theory (averageness hypothesis), which holds that the average face is the most beautiful and that the closer a face is to the average, the more attractive it is. However, Perrett et al., using other synthesis methods, showed that an average face synthesized only from beautiful faces is more attractive than an average face synthesized from all faces (both beautiful and non-beautiful), partially refuting the average face theory: an average face is indeed attractive, but attractive faces are not necessarily average.
In recent years, with the progress of science and information technology, especially the continuous development of computer, network and mass storage technology, people have begun to analyze and study facial aesthetics with machine learning and pattern recognition methods from a data mining perspective. According to the features on which the analysis algorithm is based, the current mainstream face aesthetics analysis techniques can be roughly divided into the following two categories:
1. Subspace-based methods
Subspace-based methods start from facial appearance information and take the grayscale face image directly as input, so fully automatic face aesthetics analysis can be realized; such methods have met with great success in face recognition applications. The dimensionality of a face image is usually high, and face images of the same class are not compactly distributed in such a high-dimensional space, which hinders classification and entails high computational complexity. Subspace analysis projects the face image data from the original high-dimensional space into a low-dimensional subspace, where the data distribution is more compact and easier to classify, while the computational complexity is greatly reduced.
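As a minimal illustration of the subspace idea just described (not the patent's own method), the following sketch projects flattened grayscale face images onto a low-dimensional PCA subspace; the image size, sample count and component count are arbitrary placeholders:

```python
# Illustrative only: compress face images from a high-dimensional
# pixel space into a 50-dimensional PCA subspace, where same-class
# faces distribute more compactly and classification is cheaper.
import numpy as np
from sklearn.decomposition import PCA

faces = np.random.rand(200, 64 * 64)   # stand-in for 200 flattened 64x64 images
pca = PCA(n_components=50)             # 4096 pixel dims -> 50 subspace dims
coords = pca.fit_transform(faces)
print(coords.shape)                    # (200, 50)
```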
2. Geometric feature-based methods
Geometric feature-based methods use existing empirical knowledge to obtain the geometric proportions of the facial organs that may describe facial aesthetics, and train a combination of feature descriptions with strong classification and regression performance for classifying or regressing new face images. This approach starts from the cognitive principles by which the human brain perceives facial beauty, and it is easy to understand, requires little storage and is insensitive to illumination changes.
However, the features that make up facial beauty are multifaceted and heterogeneous, so the approach faces great challenges and many problems. Because different methods emphasize different aspects when extracting aesthetic feature descriptions, several problems remain in face aesthetics analysis:
(1) subspace-based algorithms require position calibration preprocessing of the input face image and are susceptible to illumination, pose variation, image quality and the like;
(2) geometric feature-based methods, first, have poor robustness to strong expression and pose changes; second, general geometric features describe only the basic shape and structural relationships of the face and ignore fine local features such as texture, so part of the information is lost; in addition, the geometric relationships must be calibrated manually, which entails a heavy workload and manual intervention in the experimental process, so full automation cannot be achieved.
Therefore, one technical problem that urgently needs to be solved by those skilled in the art is how to provide an effective measure that overcomes the defects of the prior art.
Disclosure of Invention
The invention aims to provide a geometric feature-based human face aesthetics analysis method that effectively integrates the features and obtains an accurate aesthetic class for the face.
In order to solve the above problems, the invention discloses a geometric feature-based human face aesthetics analysis method, the method comprising: establishing an offline database and performing online monitoring, wherein the offline database establishment part comprises offline preprocessing and offline processing, and the online monitoring part comprises online preprocessing and online processing;
the offline preprocessing comprises calibrating 41 feature points on the organs of each face image in a preselected image database, wherein the connecting lines of the feature points on each organ describe the contour of the organ; storing the coordinates of the feature points; and forming the single feature descriptions and combined feature descriptions, the combined feature descriptions being compared during online monitoring with the corresponding combined features selected after an unknown face image is labeled;
the offline processing comprises forming the combined feature descriptions of all the images in the obtained database and constructing weak classifiers by memory-based dynamically weighted kernel density estimation;
the online preprocessing comprises manually calibrating, for each input face image sample, the positions of the 41 geometric feature points selected for the offline database, storing the coordinate values of each point, and forming the single feature description and combined feature description as query vectors;
the online processing comprises performing ensemble learning on the geometric feature descriptions of the input image through the Adaboost algorithm, integrating the weak classifiers into a strong classifier for classification, and obtaining the class label of the class corresponding to the face image described by the query vector.
Preferably, each face image in the preselected picture database carries classification information and is specifically labeled a positive or negative example, i.e., "attractive" or "unattractive".
Preferably, the method further comprises:
and checking the constructed weak classifier according to the classification information.
Preferably, the classification information is divided by the average of objective aesthetic scores given by at least 50 persons.
Preferably, for each face image in the offline database, the Euclidean distances between any two of the 41 geometric feature points are calculated and normalized by the interpupillary distance, 817 in total; the 20 slopes between point pairs on the two sides of the 10 feature points describing the mandible contour are calculated; the triangle areas describing the face region are calculated and normalized by the triangle formed by the two pupil points and the nose tip point, 8 in total; the 845-dimensional feature vector formed by the Euclidean distances, slopes and triangle areas serves as the single feature description, and the 845-dimensional feature vectors of all face images in the offline database are stored for comparison with the corresponding 845-dimensional feature vector selected after an unknown face image is labeled during online monitoring;
for each face image in the offline database, the classification performance of each of the 845 feature vector dimensions is obtained through offline processing; then, ranked by classification performance, the top 100 dimensions with the strongest classification performance are combined pairwise to obtain 4950-dimensional combined features as the combined feature description, which is compared with the corresponding combined feature description selected after an unknown face image is labeled during online monitoring.
Preferably, for each input face image, the 817 Euclidean distances between any two of the 41 geometric feature points are calculated and normalized by the interpupillary distance, the 20 slopes between point pairs on the two sides of the 10 feature points describing the mandible contour are calculated, and the 8 triangle areas describing the face region are calculated and normalized by the triangle formed by the two pupil points and the nose tip point; the 845-dimensional feature vector formed by the Euclidean distances, slopes and triangle areas is stored as the query vector of the single feature description;
for each face picture input to online preprocessing, according to the classification-performance ranking of the 845 single-feature dimensions obtained from the offline database, the corresponding top 100 dimensions are selected from the input picture's 845-dimensional feature description and combined pairwise to obtain 4950-dimensional combined features, which are stored as the query vector of its combined feature description.
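The dimension counts quoted above follow from simple combinatorics; the following is a quick illustrative check, not part of the claimed method:

```python
# Sanity check of the feature dimensions: C(41,2) point pairs minus
# the 3 removed distances = 817, plus 20 slopes and 8 triangle areas
# = 845 single features; pairing the top 100 gives C(100,2) = 4950.
from math import comb

n_distances = comb(41, 2) - 3      # 820 - 3 = 817
n_single = n_distances + 20 + 8    # 845
n_combined = comb(100, 2)          # 4950
print(n_single, n_combined)        # prints: 845 4950
```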
Compared with the prior art, the invention has the following advantages:
the invention carries out combined description by local geometric characteristics based on a combination strategy, adopts dynamic weighted kernel density estimation (MDKDE) based on memory to construct a weak classifier, and utilizes an Adaboost integrated learning mechanism to realize effective integration of characteristics so as to obtain accurate class of face aesthetic feeling. Different from the traditional human face aesthetic feeling analysis technology based on geometric characteristics, the invention selects local geometric characteristics for describing the human face aesthetic feeling from multiple angles such as Euclidean distance, slope, area and the like to form single description of the human face aesthetic feeling, combines the local geometric characteristic descriptions to obtain combined characteristic description, and adopts MDKDE to construct an Adaboost ensemble learning weak classifier, thereby obtaining a better classification result for human face images which are input randomly.
Drawings
Fig. 1 is a schematic diagram of a human face aesthetic analysis method based on geometric features according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a selected location of geometric feature points according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of face geometric feature extraction according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a slope characterization of a human face according to an embodiment of the present invention;
FIG. 5 is a triangular area characterization diagram according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of facial feature combination according to an embodiment of the present invention, where i = 1, 2, …, 845 indexes the single feature description values, assumed to be sorted from 1 to 845 by classification performance from high to low;
FIG. 7 is a schematic diagram of the Adaboost algorithm according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a non-parametric estimation process according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a dynamic weighted kernel density estimation according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of sample classification error-driven DWKDE according to an embodiment of the present invention;
fig. 11 is a schematic diagram of sample classification error-driven MDWKDE according to an embodiment of the invention;
FIG. 12 is a diagram illustrating a single local feature description classification result according to an embodiment of the present invention;
FIG. 13 is a diagram illustrating a classification result of single feature description ensemble learning according to an embodiment of the present invention;
FIG. 14 is a diagram illustrating combined feature description classification results according to an embodiment of the present invention;
fig. 15 is a diagram illustrating a classification result of combined feature description ensemble learning according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to fig. 1, a schematic diagram of a human face aesthetic analysis method based on geometric features according to the present invention is shown, and a detailed description of the present invention is provided below.
1. Geometric point labeling
In the invention, considering the amount of manual work, 41 feature points in total are selected when labeling the geometric points of a face image, and the connecting lines of the feature points on each organ approximately describe the contour of the organ. Each input face image is manually labeled according to the 41 preset geometric feature points, and the coordinate value of each point is stored. Fig. 2 shows the selected positions of the geometric feature points in this scheme, and fig. 3 is a schematic diagram of face geometric feature extraction.
2. Feature combination
For each labeled face picture:
(1) calculate the Euclidean distance between any two points $A(x_1, y_1)$ and $B(x_2, y_2)$,

$d(A, B) = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$,

820 distances in total; normalize by the interpupillary distance and remove the interpupillary distance itself and the two distances from the midpoint between the upper and lower lip to the upper and lower lip, leaving 817 distances;
(2) calculate the slope between any two points $C(x_3, y_3)$ and $D(x_4, y_4)$ on the two sides of the 10 points describing the mandible contour,

$k(C, D) = \frac{y_4 - y_3}{x_4 - x_3}$,

20 slopes in total; see the face slope feature description diagram given in fig. 4;
(3) calculate the triangle areas describing the face region, normalized by the area of the triangle formed by the two pupil points and the nose tip point, 8 areas in total; see the triangle area feature description diagram given in fig. 5;
(4) concatenate the 845 dimensions together as the single feature description, as shown in fig. 6;
(5) after the classification performance of each of the 845 feature dimensions is obtained through offline processing, select the top 100 dimensions with the strongest classification performance according to the ranking and combine them pairwise, obtaining 4950-dimensional second-order combined features as the combined feature description, as shown in fig. 6; a sketch of steps (1)-(4) is given below.
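The following is a minimal Python sketch of extraction steps (1)-(4). It assumes `pts` is the 41x2 array of manually labeled landmark coordinates; the index arguments (pupils, nose tip, mandible sides, triangle triples, removed pairs) are placeholders, since the patent specifies them only through its figures:

```python
# Sketch of single-feature extraction: 817 normalized distances,
# 20 mandible slopes and 8 normalized triangle areas = 845 dims.
import numpy as np
from itertools import combinations

def tri_area(a, b, c):
    # shoelace formula for the area of triangle (a, b, c)
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                     - (b[1] - a[1]) * (c[0] - a[0]))

def single_features(pts, left_pupil, right_pupil, nose_tip,
                    jaw_sides, triangles, removed_pairs):
    feats = []
    ipd = np.linalg.norm(pts[left_pupil] - pts[right_pupil])
    # step (1): pairwise distances normalized by the interpupillary
    # distance; dropping the 3 listed pairs leaves 817 of C(41,2)=820
    for i, j in combinations(range(len(pts)), 2):
        if (i, j) not in removed_pairs:
            feats.append(np.linalg.norm(pts[i] - pts[j]) / ipd)
    # step (2): 20 slopes on the two sides of the mandible contour
    # (one plausible reading: C(5,2) = 10 slopes per side)
    for side in jaw_sides:
        for i, j in combinations(side, 2):
            feats.append((pts[j][1] - pts[i][1]) / (pts[j][0] - pts[i][0]))
    # step (3): 8 triangle areas normalized by the pupils/nose-tip triangle
    ref = tri_area(pts[left_pupil], pts[right_pupil], pts[nose_tip])
    for a, b, c in triangles:
        feats.append(tri_area(pts[a], pts[b], pts[c]) / ref)
    return np.asarray(feats)   # step (4): 817 + 20 + 8 = 845 dims
```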
3. Adaboost ensemble learning
In 1995, Freund and Schapire proposed the Adaboost algorithm, whose full name is Adaptive Boosting; it was derived from an online allocation algorithm and named Adaboost to distinguish it from the Boosting algorithm.
Adaboost is an iterative algorithm whose core idea is to train different weak classifiers on different distributions of the same training set and then assemble these weak classifiers into a stronger final classifier. The algorithm works by changing the data distribution: it determines the weight of each sample according to whether the sample was classified correctly in each training round and the accuracy of the previous overall classification. The data set with modified weights is sent to the lower-layer classifier for training, and the classifiers obtained from each round of training are finally fused into the final decision classifier. Fig. 7 is a schematic diagram of the principle of the Adaboost algorithm.
Using the Adaboost classifier places the focus of classification on the data that are easily misclassified, as shown in fig. 10, where circles and dots represent the data to be classified and a larger symbol indicates a higher weight; the solid line represents the current classification result (obtained from the previous m combinations) and the dotted line the current pre-classification, which pays more attention to the data misclassified last time, i.e. the samples drawn large, so that their probability of being misclassified in the current round is reduced. When m = 150, the points of the two different symbols are essentially separated.
The Adaboost algorithm trains different weak classifiers for different weight distributions of the same training set. Initially, every sample has the same weight 1/n, where n is the number of samples, and a weak classifier is trained under this sample distribution. The weights of correctly classified samples are then reduced, so that the misclassified samples are highlighted and a new sample distribution is obtained, under which the next weak classifier is trained. Repeating this process T times yields T weak classifiers, which are superposed (boosted) with certain weights to obtain the final strong classifier.
The Adaboost integration algorithm flow comprises the following specific steps:
geometric feature description (x) of face samples1,y1),…,(xn,yn) Wherein x isiI-1, …, n being a single or combined feature description, class label y i1, 0 respectively means "attractive" and "unattractive"And n is the total number of samples.
(1) Initialized sample weight w1,iw 1,i1/2m (attractive for) or w 1,i1/2l (unattractive), where m and l are the number of positive and negative samples, respectively, and n is m + l;
(2) for $t = 1, \ldots, T$ (T is the number of selected weak classifiers):
1) normalize the weights:

$\bar{w}_{t,i} = \frac{w_{t,i}}{\sum_{j=1}^{n} w_{t,j}}$
2) train an MDWKDE weak classifier $h_{t,f}$ for each feature f;
3) calculate the weighted error rate of the weak classifier corresponding to each feature:

$e_{t,f} = \sum_i \bar{w}_{t,i} \left| h_{t,f}(x_i) - y_i \right|$
4) select the classifier with the minimum error rate $e_t$ as the optimal weak classifier $h_t$;
5) readjust the weights according to the error rate of the optimal weak classifier:

$w_{t+1,i} = \bar{w}_{t,i}\, \beta_t^{1 - e_i}$

where $e_i = 0$ if $x_i$ is classified correctly, $e_i = 1$ if $x_i$ is misclassified, and $\beta_t = \frac{e_t}{1 - e_t}$;
(3) ensemble the weak classifier group learned on the T facial geometric features f to obtain the final strong classifier:

$h(x) = \begin{cases} 1 & \sum_{t=1}^{T} \alpha_t h_t(x) \ge \frac{1}{2} \sum_{t=1}^{T} \alpha_t \\ 0 & \text{otherwise} \end{cases}$

where $\alpha_t = \log\frac{1}{\beta_t}$.
The sample class can be judged from the value of h(x).
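A compact sketch of the algorithm flow of steps (1)-(3) above. The helper `best_weak(X, y, w)` is assumed to train one MDWKDE weak classifier per feature under the weights w and return the minimum-error classifier together with its weighted error and its 0/1 predictions on X; all names are illustrative:

```python
# Sketch of the Adaboost loop with error-driven reweighting.
import numpy as np

def adaboost_train(X, y, best_weak, T):
    m, l = int(np.sum(y == 1)), int(np.sum(y == 0))
    # step (1): 1/2m for positive samples, 1/2l for negative samples
    w = np.where(y == 1, 1.0 / (2 * m), 1.0 / (2 * l))
    classifiers, alphas = [], []
    for _ in range(T):                      # step (2)
        w = w / w.sum()                     # 1) normalize the weights
        h, e_t, pred = best_weak(X, y, w)   # 2)-4) optimal weak classifier
        beta = e_t / (1.0 - e_t)
        # 5) correctly classified samples (|pred - y| = 0) are down-weighted
        w = w * beta ** (1.0 - np.abs(pred - y))
        classifiers.append(h)
        alphas.append(np.log(1.0 / beta))
    def strong(x):                          # step (3): strong classifier
        score = sum(a * h(x) for a, h in zip(alphas, classifiers))
        return int(score >= 0.5 * sum(alphas))
    return strong
```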
4. Construction of MDWKDE Weak classifier
Each input feature corresponds to a weak classifier, and training in the Adaboost algorithm is the process of selecting the optimal weak classifiers and assigning them weights. A weak classifier classifies the samples by some strategy, so the choice of weak classifier is crucial to the accuracy of sample classification; to guarantee that the final strong classifier converges, the error rate of a weak classifier on the samples must be below 0.5, i.e., at least slightly better than random guessing, so that the training algorithm can finally converge.
The weak classifier trained in the Adaboost algorithm may be any classifier, such as a decision tree, a neural network, a threshold classifier or a probability density estimator. Common methods for obtaining the distribution density function of a random variable from a given set of sample points are parametric estimation and non-parametric estimation. Parametric estimation treats the true distribution as a known parametric form, such as linear, piecewise linear or exponential, and then finds a specific solution within that family of objective functions, i.e., determines the unknown parameters of the regression model. Experience and theory show that there is often a large gap between this basic assumption of parametric models and the actual physical model, and such methods do not always achieve satisfactory results. Non-parametric estimation uses no prior knowledge about the data distribution and estimates the true distribution without making any prior assumption about it; it is a method of studying the distribution characteristics from the data sample itself, and therefore receives great attention in both statistical theory and applied fields. In this embodiment we take kernel density estimation as the non-parametric example; see fig. 8.
Kernel density estimation, proposed by Rosenblatt (1956) and Emanuel Parzen (1962) and also known as Parzen window density estimation, is a classical algorithm used in probability theory to estimate an unknown density function; it is one of the non-parametric methods and is applied in a wide range of fields.
Given a sample set $(x_1, x_2, \ldots, x_n)$, the kernel density estimate of the density function p(x) at any point x is defined as:

$\hat{p}(x) = \frac{1}{nh} \sum_{i=1}^{n} K_h(x - x_i) \qquad (1)$
where $K_h(\cdot)$ is the kernel function and h is the window width, or bandwidth. As the equation shows, a general kernel density estimate uses the kernelized distance $K_h(x - x_i)$ of each data point $x_i$ to x to determine the "contribution" of $x_i$ when estimating the probability density at x, and takes the weighted average of the local functions centered at the sample points as the estimate of the probability density function at that point, as shown in fig. 11.
In this scheme the commonly used Gaussian kernel function is adopted, so equation (1) can be written as:

$\hat{p}(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{(2\pi)^{d/2}\sigma} \exp\left\{ -\frac{1}{2\sigma^2} (x - x_i)^T (x - x_i) \right\} \qquad (2)$
where $x = [x_1, x_2, \ldots, x_d]^T$ is a d-dimensional column vector and σ is the sample standard deviation.
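A direct transcription of equation (2) as a sketch: an equally weighted Gaussian kernel density estimate at a single query point, with σ serving as the bandwidth:

```python
# Gaussian kernel density estimate of equation (2); `samples` is an
# (n, d) array of sample points and `x` a length-d query point.
import numpy as np

def kde_gauss(x, samples, sigma):
    x = np.atleast_1d(np.asarray(x, dtype=float))
    d = x.shape[0]
    sq = np.sum((samples - x) ** 2, axis=1)   # (x - x_i)^T (x - x_i)
    k = np.exp(-sq / (2.0 * sigma ** 2)) / ((2.0 * np.pi) ** (d / 2) * sigma)
    return k.mean()                           # the equally weighted 1/n average
```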
Equation (2) assumes, when estimating the probability density, that all sample points have the same weight 1/n at the query point. In the Adaboost algorithm, however, the sample weights are readjusted after each round of training. Borrowing the idea that after each round of the Adaboost training process the weight of each sample is adjusted according to the sample classification error rate to generate a different training set, this scheme proposes the dynamically weighted kernel density estimation (DWKDE) algorithm, shown in fig. 9: the probability density is estimated with the dynamic sample weights, and equation (2) can be rewritten as:
$\hat{p}_t(x) = \sum_{i=1}^{n} \frac{w_{t,i}}{(2\pi)^{d/2}\sigma} \exp\left\{ -\frac{1}{2\sigma^2} (x - x_i)^T (x - x_i) \right\} \qquad (3)$
where $w_{t,i}$, $i = 1, \ldots, n$, is the new sample weight distribution after the previous round of Adaboost training.
With this sample-classification-error-driven DWKDE, the unknown probability density distribution of the samples can be fitted better: the "contribution" of each sample point $x_i$ to the probability density at x includes not only its kernelized distance $K_h(x - x_i)$ but also the weight of $x_i$. After Adaboost readjusts the weights, the weights of the samples misclassified in the previous round are increased, the corresponding probability density rises, and the likelihood of their being misclassified in the next round is reduced, as shown in fig. 12.
As can be seen from fig. 13, as the sample weights are redistributed, the probability density at the sample point indicated by the green line changes greatly; a DWKDE that directly uses each round's redistributed weights inevitably causes drastic changes in the probability density. To avoid the influence of drastic sample weight changes on the probability density estimate, this scheme proposes memory-based dynamically weighted kernel density estimation (MDWKDE) on top of the dynamically weighted kernel density estimation: according to the historical trend of the sample weights, the sample weight distribution of round t is obtained statistically from the distribution weights of the previous (t-1) rounds. From equation (3), the following probability density estimate is obtained:
$\hat{p}_t(x) = \sum_{i=1}^{n} \frac{\bar{w}_{t,i}}{(2\pi)^{d/2}\sigma} \exp\left\{ -\frac{1}{2\sigma^2} (x - x_i)^T (x - x_i) \right\} \qquad (4)$
where

$\bar{w}_{t,i} = \frac{1}{t-1} \sum_{k=1}^{t-1} w_{k,i}$

is the average of the dynamic weights of the i-th sample over the previous (t-1) rounds of the Adaboost algorithm. Using the weight average avoids the drastic changes in the probability density estimate caused by the weight redistribution that follows sample misclassification, and corrects the dynamically weighted kernel density estimate to a certain extent. As shown in fig. 14, taking the average of the previous (t-1) rounds' dynamic weights as the new sample weight distribution reduces the magnitude of the changes, so the sample probability density estimate rises or falls within a small range, improving the stability and accuracy of the estimation.
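Equation (4) differs from equation (3) only in that the weight used at round t is the mean of each sample's weights over the previous (t-1) rounds; a sketch, assuming the weight history is kept as an array:

```python
# Memory-based dynamically weighted KDE of equation (4);
# weight_history is a (t-1, n) array of past rounds' sample weights.
import numpy as np

def mdwkde(x, samples, sigma, weight_history):
    w_bar = np.asarray(weight_history).mean(axis=0)  # memory-averaged weights
    x = np.atleast_1d(np.asarray(x, dtype=float))
    d = x.shape[0]
    sq = np.sum((samples - x) ** 2, axis=1)
    k = np.exp(-sq / (2.0 * sigma ** 2)) / ((2.0 * np.pi) ** (d / 2) * sigma)
    return float(np.sum(w_bar * k))                  # weighted, not 1/n averaged
```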
In summary, in the Adaboost integration algorithm proposed in this scheme, the weak classifiers are trained with sample-classification-error-driven, memory-based dynamically weighted kernel density estimation. The specific steps are as follows:
Given the geometric feature descriptions of the input samples, train a weak classifier for each feature f:
(1) normalize all sample weights $w_{t,i}$:

$\bar{w}^{Pos}_{t,i} = \frac{w^{Pos}_{t,i}}{\sum_{j=1}^{m} w^{Pos}_{t,j}}, \qquad \bar{w}^{Neg}_{t,i} = \frac{w^{Neg}_{t,i}}{\sum_{j=1}^{l} w^{Neg}_{t,j}}$
where $w^{Pos}_{t,i}$ is the weight of an "attractive" sample and $w^{Neg}_{t,i}$ the weight of an "unattractive" sample;
(2) each feature f corresponds to a weak classifier; for $f = 1, \ldots, n_f$ ($n_f$ is the number of feature descriptions):
1) for any point x of feature f, compute the kernel density estimate of the distribution density function p(x):

$\hat{p}_t(x) = \sum_{i=1}^{n} \bar{w}_{t,i}\, K_h(x - x_i)$
where $\bar{w}_{t,i}$ is the average of the dynamic weights of the i-th sample point over the previous (t-1) rounds of the Adaboost algorithm, $K_h(\cdot)$ is the kernel function, and h is the window width or bandwidth; in this method we select the most common Gaussian window:

$K(x) = \frac{1}{(2\pi)^{d/2}\sigma} \exp\left\{ -\frac{1}{2\sigma^2} (x - x_i)^T (x - x_i) \right\}$
where $x = [x_1, x_2, \ldots, x_d]^T$ is a d-dimensional column vector (x is a single feature description when d = 1 and a combined feature description when d = 2) and σ is the sample standard deviation;
2) form the weak classifier from the kernel density estimates:

$h_{t,f}(x) = \mathrm{sgn}\left( \hat{p}^{Pos}_t(x) - \hat{p}^{Neg}_t(x) \right)$
where $\hat{p}^{Pos}_t(x)$ is the probability density that feature f belongs to the "attractive" category and $\hat{p}^{Neg}_t(x)$ the probability density that feature f belongs to the "unattractive" category.
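Putting the two class-conditional density estimates together gives the weak classifier; the following sketch reuses the `mdwkde` helper from the previous listing and maps a non-negative sign to class 1 ("attractive"):

```python
# MDWKDE weak classifier of step 2): classify by the sign of the
# difference between the class-conditional density estimates.
def mdwkde_weak_classifier(x, pos_samples, neg_samples, sigma,
                           pos_weight_history, neg_weight_history):
    p_pos = mdwkde(x, pos_samples, sigma, pos_weight_history)
    p_neg = mdwkde(x, neg_samples, sigma, neg_weight_history)
    return 1 if p_pos >= p_neg else 0
```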
To illustrate further, the database used in the experiment comes from the Hotornot website, a social site that allows people to score voluntarily uploaded images. A sub-database of 1230 female pictures was selected; each picture carries a benchmark score, the average of ratings on a 1-to-10 scale given by more than 50 raters. The experiment considers only the two-class case, "Unattractive" and "Attractive": the 601 pictures with benchmark scores below 8 form the first class, "Unattractive", and the remaining 629 pictures with scores above 8 form the second class, "Attractive".
To obtain the geometric features, each image is displayed and geometrically labeled with 41 points in total; the distance, slope and triangle area information is then extracted, giving 845-dimensional features, i.e., 845 single local geometric features, any two of which can be combined into a two-dimensional combined description feature.
Learning and classification were carried out separately on the single local geometric features and on the combined feature descriptions; the experimental results show that the classification performance of the combined feature descriptions is superior to that of the single local geometric features, i.e., the combined descriptions capture the intrinsic description of facial aesthetics more effectively. The Adaboost model was then used for ensemble learning classification of both the single local geometric features and the combined feature descriptions (see fig. 15); the experimental results show that the Adaboost model extracts more effective intrinsic descriptions, and that constructing the weak classifiers with memory-based dynamically weighted kernel density estimation outperforms the plain kernel density estimation method.
The geometric feature-based human face aesthetics analysis method provided by the invention has been described in detail above. Specific examples have been used herein to explain the principle and implementation of the invention, and the description of the embodiments is only intended to help in understanding the method and its core idea. Meanwhile, for a person skilled in the art, the specific embodiments and the application scope may vary according to the idea of the invention. In summary, the content of this specification should not be construed as limiting the invention.

Claims (6)

1. A human face aesthetics analysis method based on geometric features, characterized by comprising: establishing an offline database and performing online monitoring, wherein the offline database establishment part comprises offline preprocessing and offline processing, and the online monitoring part comprises online preprocessing and online processing;
the offline preprocessing comprises calibrating 41 feature points on the organs of each face image in a preselected image database, wherein the connecting lines of the feature points on each organ describe the contour of the organ; storing the coordinates of the feature points; and forming the single feature descriptions and combined feature descriptions, the combined feature descriptions being compared during online monitoring with the corresponding combined features selected after an unknown face image is labeled;
the offline processing comprises forming the combined feature descriptions of all the images in the obtained database and constructing weak classifiers by memory-based dynamically weighted kernel density estimation;
the online preprocessing comprises manually calibrating, for each input face image sample, the positions of the 41 geometric feature points selected for the offline database, storing the coordinate values of each point, and forming the single feature description and combined feature description as query vectors;
the online processing comprises performing ensemble learning on the geometric feature descriptions of the input image through the Adaboost algorithm, integrating the weak classifiers into a strong classifier for classification, and obtaining the class label of the class corresponding to the face image described by the query vector.
2. The method of claim 1, wherein:
each face image in the preselected picture database carries classification information and is specifically labeled a positive or negative example, i.e., "attractive" or "unattractive".
3. The method of claim 2, wherein the method further comprises:
and checking the constructed weak classifier according to the classification information.
4. The method of claim 2, wherein:
the classification information is divided according to the average value by respectively giving objective aesthetic feeling scores for more than 50 persons.
5. The method of claim 1, wherein:
calculating, for each face image in the offline database, the Euclidean distances between any two of the 41 geometric feature points, normalized by the interpupillary distance, 817 in total; calculating the 20 slopes between point pairs on the two sides of the 10 feature points describing the mandible contour; calculating the triangle areas describing the face region, normalized by the triangle formed by the two pupil points and the nose tip point, 8 in total; taking the 845-dimensional feature vector formed by the Euclidean distances, slopes and triangle areas as the single feature description, and storing the 845-dimensional feature vectors of all face images in the offline database for comparison with the corresponding 845-dimensional feature vector selected after an unknown face image is labeled during online monitoring;
for each face image in the offline database, the classification performance of each of the 845 feature vector dimensions is obtained through offline processing; then, ranked by classification performance, the top 100 dimensions with the strongest classification performance are combined pairwise to obtain 4950-dimensional combined features as the combined feature description, which is compared with the corresponding combined feature description selected after an unknown face image is labeled during online monitoring.
6. The method of claim 1, wherein:
for each input human face image, calculating the 817 Euclidean distances between any two of the 41 geometric feature points, normalized by the interpupillary distance; calculating the 20 slopes between point pairs on the two sides of the 10 feature points describing the mandible contour; calculating the 8 triangle areas describing the face region, normalized by the triangle formed by the two pupil points and the nose tip point; and storing the 845-dimensional feature vector formed by the Euclidean distances, slopes and triangle areas as the query vector of its single feature description;
for each face picture input to online preprocessing, according to the classification-performance ranking of the 845 single-feature dimensions obtained from the offline database, the corresponding top 100 dimensions are selected from the input picture's 845-dimensional feature description and combined pairwise to obtain 4950-dimensional combined features, which are stored as the query vector of its combined feature description.
CN201110177113.5A 2011-06-28 2011-06-28 Geometrical feature-based human face aesthetics analyzing method Expired - Fee Related CN102254180B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110177113.5A CN102254180B (en) 2011-06-28 2011-06-28 Geometrical feature-based human face aesthetics analyzing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110177113.5A CN102254180B (en) 2011-06-28 2011-06-28 Geometrical feature-based human face aesthetics analyzing method

Publications (2)

Publication Number Publication Date
CN102254180A true CN102254180A (en) 2011-11-23
CN102254180B CN102254180B (en) 2014-07-09

Family

ID=44981432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110177113.5A Expired - Fee Related CN102254180B (en) 2011-06-28 2011-06-28 Geometrical feature-based human face aesthetics analyzing method

Country Status (1)

Country Link
CN (1) CN102254180B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103632165A (en) * 2013-11-28 2014-03-12 小米科技有限责任公司 Picture processing method, device and terminal equipment
CN104765732A (en) * 2014-01-02 2015-07-08 腾讯科技(深圳)有限公司 Picture parameter acquisition method and picture parameter acquisition device
CN104834898A (en) * 2015-04-09 2015-08-12 华南理工大学 Quality classification method for portrait photography image
CN104850820A (en) * 2014-02-19 2015-08-19 腾讯科技(深圳)有限公司 Face identification method and device
CN106104577A (en) * 2014-03-07 2016-11-09 高通股份有限公司 Photo management
CN104021550B (en) * 2014-05-22 2017-01-18 西安理工大学 Automatic positioning and proportion determining method for proportion of human face
CN106446793A (en) * 2016-08-31 2017-02-22 广州莱德璞检测技术有限公司 Face contour calculation method based on face feature points
CN106709411A (en) * 2015-11-17 2017-05-24 腾讯科技(深圳)有限公司 Appearance level acquisition method and device
CN107169408A (en) * 2017-03-31 2017-09-15 北京奇艺世纪科技有限公司 A kind of face value decision method and device
CN107249434A (en) * 2015-02-12 2017-10-13 皇家飞利浦有限公司 Robust classification device
CN107290305A (en) * 2017-07-19 2017-10-24 中国科学院合肥物质科学研究院 A kind of near infrared spectrum quantitative modeling method based on integrated study
CN107833199A (en) * 2016-09-12 2018-03-23 南京大学 A kind of method for copying cartoon image quality analysis
CN108108715A (en) * 2017-12-31 2018-06-01 厦门大学 It is inspired based on biology and depth attribute learns the face aesthetic feeling Forecasting Methodology being combined
CN108596094A (en) * 2018-04-24 2018-09-28 杭州数为科技有限公司 Personage's style detecting system, method, terminal and medium
CN111091040A (en) * 2019-10-15 2020-05-01 西北大学 Human face attractive force data processing method based on global contour and facial structure classification
WO2020135287A1 (en) * 2018-12-24 2020-07-02 甄选医美邦(杭州)网络科技有限公司 Plastic surgery simulation information processing method, plastic surgery simulation terminal and plastic surgery service terminal

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107101972B (en) * 2017-05-24 2019-10-15 福州大学 A kind of near infrared spectrum quickly detects radix tetrastigme place of production method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100375108C (en) * 2006-03-02 2008-03-12 复旦大学 Automatic positioning method for characteristic point of human faces
CN101604377A (en) * 2009-07-10 2009-12-16 华南理工大学 A kind of facial beauty classification method that adopts computing machine to carry out woman image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100375108C (en) * 2006-03-02 2008-03-12 复旦大学 Automatic positioning method for characteristic point of human faces
CN101604377A (en) * 2009-07-10 2009-12-16 华南理工大学 A kind of facial beauty classification method that adopts computing machine to carry out woman image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Chen Yili et al., "A Novel Facial Beauty Evaluation Method", Proceedings of the 2008 National Conference on Pattern Recognition, 2008-12-31, pp. 282-286; relevant to claims 1-6. *
E. Parzen, "On Estimation of a Probability Density Function and Mode", Annals of Mathematical Statistics, Vol. 33, No. 3, 1962-12-31, pp. 1065-1076; relevant to claims 1-6. *
Fan Xiaojiu et al., "An Improved Fast AAM Facial Feature Point Localization Method", Journal of Electronics & Information Technology, Vol. 31, No. 6, 2009-06-30, pp. 1354-1358; relevant to claims 1-6. *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103632165A (en) * 2013-11-28 2014-03-12 小米科技有限责任公司 Picture processing method, device and terminal equipment
US9652661B2 (en) 2013-11-28 2017-05-16 Xiaomi Inc. Method and terminal device for image processing
CN104765732A (en) * 2014-01-02 2015-07-08 腾讯科技(深圳)有限公司 Picture parameter acquisition method and picture parameter acquisition device
CN104765732B (en) * 2014-01-02 2019-05-24 腾讯科技(深圳)有限公司 Image parameters acquisition methods and image parameters acquisition device
CN104850820B (en) * 2014-02-19 2019-05-31 腾讯科技(深圳)有限公司 A kind of recognition algorithms and device
CN104850820A (en) * 2014-02-19 2015-08-19 腾讯科技(深圳)有限公司 Face identification method and device
CN106104577A (en) * 2014-03-07 2016-11-09 高通股份有限公司 Photo management
CN104021550B (en) * 2014-05-22 2017-01-18 西安理工大学 Automatic positioning and proportion determining method for proportion of human face
CN107249434A (en) * 2015-02-12 2017-10-13 皇家飞利浦有限公司 Robust classification device
CN107249434B (en) * 2015-02-12 2020-12-18 皇家飞利浦有限公司 Robust classifier
CN104834898A (en) * 2015-04-09 2015-08-12 华南理工大学 Quality classification method for portrait photography image
CN104834898B (en) * 2015-04-09 2018-05-15 华南理工大学 A kind of quality classification method of personage's photographs
CN106709411A (en) * 2015-11-17 2017-05-24 腾讯科技(深圳)有限公司 Appearance level acquisition method and device
CN106446793A (en) * 2016-08-31 2017-02-22 广州莱德璞检测技术有限公司 Face contour calculation method based on face feature points
CN107833199A (en) * 2016-09-12 2018-03-23 南京大学 A kind of method for copying cartoon image quality analysis
CN107833199B (en) * 2016-09-12 2020-03-27 南京大学 Method for analyzing quality of copy cartoon image
CN107169408A (en) * 2017-03-31 2017-09-15 北京奇艺世纪科技有限公司 A kind of face value decision method and device
CN107290305B (en) * 2017-07-19 2019-11-01 中国科学院合肥物质科学研究院 A kind of near infrared spectrum quantitative modeling method based on integrated study
CN107290305A (en) * 2017-07-19 2017-10-24 中国科学院合肥物质科学研究院 A kind of near infrared spectrum quantitative modeling method based on integrated study
CN108108715A (en) * 2017-12-31 2018-06-01 厦门大学 It is inspired based on biology and depth attribute learns the face aesthetic feeling Forecasting Methodology being combined
CN108596094A (en) * 2018-04-24 2018-09-28 杭州数为科技有限公司 Personage's style detecting system, method, terminal and medium
WO2020135287A1 (en) * 2018-12-24 2020-07-02 甄选医美邦(杭州)网络科技有限公司 Plastic surgery simulation information processing method, plastic surgery simulation terminal and plastic surgery service terminal
CN111091040A (en) * 2019-10-15 2020-05-01 西北大学 Human face attractive force data processing method based on global contour and facial structure classification

Also Published As

Publication number Publication date
CN102254180B (en) 2014-07-09

Similar Documents

Publication Publication Date Title
CN102254180B (en) Geometrical feature-based human face aesthetics analyzing method
Fu et al. Age synthesis and estimation via faces: A survey
CN109815826B (en) Method and device for generating face attribute model
Laurentini et al. Computer analysis of face beauty: A survey
Savran et al. Regression-based intensity estimation of facial action units
Zhang et al. Computer models for facial beauty analysis
Yuen et al. Human face image searching system using sketches
Sha et al. Feature level analysis for 3D facial expression recognition
Zhai et al. Asian female facial beauty prediction using deep neural networks via transfer learning and multi-channel feature fusion
Duan et al. Expression of Concern: Ethnic Features extraction and recognition of human faces
CN109670406A (en) A kind of contactless emotion identification method of combination heart rate and facial expression object game user
Tang et al. Facial expression recognition using AAM and local facial features
CN117195148A (en) Ore emotion recognition method based on expression, electroencephalogram and voice multi-mode fusion
Kalayci et al. Automatic analysis of facial attractiveness from video
CN103258186A (en) Integrated face recognition method based on image segmentation
CN104732247A (en) Human face feature positioning method
Izadpanahi et al. Human age classification with optimal geometric ratios and wrinkle analysis
Tin et al. Gender and age estimation based on facial images
Gunay et al. Facial age estimation based on decision level fusion of amm, lbp and gabor features
Liao Facial age feature extraction based on deep sparse representation
Zalewski et al. Synthesis and recognition of facial expressions in virtual 3d views
Liu et al. Multimodal face aging framework via learning disentangled representation
CN108108715A (en) It is inspired based on biology and depth attribute learns the face aesthetic feeling Forecasting Methodology being combined
Whitehill Automatic real-time facial expression recognition for signed language translation
Jan Deep learning based facial expression recognition and its applications

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140709

Termination date: 20160628

CF01 Termination of patent right due to non-payment of annual fee