WO2015078261A1 - Methods and systems for processing facial images - Google Patents

Methods and systems for processing facial images

Info

Publication number
WO2015078261A1
Authority
WO
WIPO (PCT)
Prior art keywords
eigenvalue
target
feature points
facial
surface area
Application number
PCT/CN2014/089885
Other languages
French (fr)
Inventor
Zhihao Zheng
Fang HOU
Yongjian Wu
Hui Ni
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2015078261A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis

Definitions

  • the present disclosure relates to image processing and, more particularly, to methods and systems for processing human facial images.
  • a user may lighten the skin tone or smooth out the skin texture of a human face in a photo to generate a more appealing image.
  • Technologies related to facial recognition provide users with various facial image models that can be used to edit facial images.
  • a user may obtain a facial image model that reflects the facial characteristics of a celebrity and edit photos using the facial image model.
  • an image processing system may apply various facial recognition methods. Such methods may determine the positions of the centers of the eyes, the nose, and the lips, and then calculate ratios of the distances between these center positions, such as the ratio of the distance between the center of the eyes and the nose over the distance between the nose and the lips. Based on the determined ratios, users may compare the calculated ratios to those of a facial image with the conventionally recognized “ideal” facial ratios to assess the degree of attractiveness of a face.
  • the current facial image processing systems often use a single position for each facial element (e.g., eyes, nose, etc.), and thus are not accurate and cannot dynamically assess the degree of attractiveness of facial images.
  • the disclosed method and system are directed to solve one or more problems set forth above and other problems.
  • Embodiments consistent with the present disclosure provide a method, system, mobile device, or a server for processing facial images.
  • One aspect of the present disclosure provides a method for processing facial images.
  • the method includes obtaining pre-selected feature points from an element of a target facial image; and determining a pre-selected feature eigenvalue (E) based on the feature points associated with the facial element.
  • the method further includes obtaining a positive eigenvalue (PE) corresponding to a positive sample facial element image; obtaining a negative eigenvalue (NE) corresponding to a negative sample facial element image; determining a standard deviation based on the determined eigenvalue (E) associated with the facial element and the positive and negative eigenvalues; and determining a target eigenvalue based on the standard deviation.
  • the method also includes applying a weight factor to the target eigenvalue; determining a result from processing the target facial image based on the weighted eigenvalue; and presenting the result to a user.
  • the system includes a facial element processing module configured to obtain pre-selected feature points from an element of a target facial image; and determine a pre-selected feature eigenvalue (E) based on the feature points associated with the facial element.
  • the system also includes an eigenvalue processing module configured to obtain a positive eigenvalue (PE) corresponding to a positive sample facial element image; obtain a negative eigenvalue (NE) corresponding to a negative sample facial element image; determine a standard deviation based on the determined eigenvalue (E) associated with the facial element and the positive and negative eigenvalues; and determine a target eigenvalue based on the standard deviation.
  • the system further includes a resulting image processing module configured to apply a weight factor to the target eigenvalue; determine a result from processing the target facial image based on the weighted eigenvalue; and present the result to a user.
  • Figure 1 is a flow chart of a method for processing facial images implemented by embodiments consistent with the present disclosure.
  • Figure 2 is a diagram showing the feature points of a facial image implemented by an embodiment consistent with the present disclosure.
  • Figure 3 is a block diagram illustrating a system for processing facial images consistent with the present disclosure.
  • Figure 4 is another block diagram showing the modules of a system for processing facial images consistent with the present disclosure.
  • devices used to process facial images include, but are not limited to, digital cameras, video cameras, smart phones, laptop computers, Personal Digital Assistants, and other terminals with cameras.
  • the system for processing facial images includes user terminals that a user may use to process facial image data.
  • a facial image may be any image that includes a human face.
  • the facial image may be recorded by a camera or a user terminal with a camera.
  • the facial image may also be a facial image that is extracted from other images, such as a photo of a street scene.
  • the system for processing facial images may implement a method with the following steps.
  • the system may use an off-line training module, such as an off-line training module of a facial recognition system, to collect a large number (e.g., more than 10,000) of facial and non-facial images.
  • the system may then extract the Haar-like features from the images.
  • the system may use an adaptive boosting (AdaBoost) classifier, which combines weak classifiers into a more accurate strong classifier, to select the optimal Haar-like features, the related threshold values, and weight factors.
  • the system may then implement a cascade classifier.
  • a user may submit an image to the system.
  • the system may decode the image data.
  • the system may then send the decoded image data to a facial recognition system.
  • the facial recognition system uses an online classifier to scan the decoded image using windows of various sizes at different positions of the image.
  • the facial recognition system may extract the Haar-like features.
  • the system may send the feature data in each search window to the cascade classifier to determine whether the window includes a facial image.
  • the system may consolidate all determination results based on each position of the window.
  • the system may then output the position and size of the human faces in the submitted image, and retrieve the facial images.
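  • As a rough sketch of the detection pipeline described above, the snippet below uses OpenCV's stock pre-trained frontal-face Haar cascade as a stand-in for the patent's own trained cascade; the model file and parameter values are OpenCV defaults, not values from the patent.

```python
# Sketch of the online detection stage: scan the decoded image with a
# Haar-feature cascade and return the position, size, and crop of each face.
import cv2

def detect_faces(image_path):
    image = cv2.imread(image_path)                  # decode the submitted image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # The cascade internally slides windows of various sizes over the image
    # and evaluates Haar-like features at each stage of the cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Each box is (x, y, w, h): the position and size of one detected face.
    return [((x, y, w, h), image[y:y + h, x:x + w]) for (x, y, w, h) in boxes]
```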
  • Embodiments consistent with the present disclosure provide methods and systems for processing facial data.
  • the system may compute the positions of the pre-set feature points of a facial element to determine an eigenvalue of the facial element.
  • the system may further compute the deviation between the eigenvalue of the facial element (based on pre-set points) and the positive/negative sample eigenvalues.
  • the system may further apply a weighting strategy to obtain the result of the facial image data process.
  • the system may further display the result of the facial image processing on a monitor.
  • Embodiments consistent with the present disclosure can improve the accuracy of facial image processing and assess the degree of attractiveness with flexibility.
  • Embodiments consistent with the present disclosure may be used in a system for evaluating facial images from photos.
  • Figure 1 and Figure 2 further describe the method for processing facial image data.
  • Figure 1 shows a flow chart of a method for processing facial images consistent with the present disclosure. The method includes steps S101-S103.
  • In step S101, the system for processing facial images may obtain the pre-set feature points of human face elements.
  • the system may compute an eigenvalue of the pre-set facial feature positions.
  • the facial elements may include a left eye, a right eye, a left eyebrow, a right eyebrow, a nose, a mouth, and an edge of a face.
  • the pre-set feature points of the elements of a human face may be determined by using a facial matching template to process the selected facial element.
  • the facial matching template may be determined by an Active Shape Model (ASM) .
  • the ASM is based on the Point Distribution Model (PDM) .
  • the ASMs are statistical models of the shape of objects, which deform to fit to an example of the object in a new image (e.g., target facial image) .
  • the shapes are constrained by the PDM to vary only in ways seen in a training set of shape examples.
  • the shape of an object is represented by a set of points (controlled by the shape model) .
  • the ASM algorithm aims to match the shape model to a new image.
  • the target image is a target human facial image.
  • users or software developers need to collect a large number of facial images (e.g., over 10,000 images) , then manually annotate and record the positions of a set of feature points on the facial images in the training set. Further, to prepare the training set, the system needs to calculate the eigenvalue vector of the feature points based on the gray scale model of the feature points.
  • a facial recognition system may first place a shape model onto a target facial image and then fit the shape model to the target image by adjusting the positions of the feature points.
  • the suggested positions of the feature points are determined based on the minimum value of local gray model Mahalanobis distance.
  • the system may determine a suggested shape.
  • the system may then fit the suggested shape to the target image.
  • the system may repeat such iterations until convergence is achieved.
  • the system may thus determine the shape of a target facial image based on the facial image templates (shapes) stored in the system.
  • the system may pre-set the number of feature points for each facial element of a facial image, such as 88, 99, or 155 points.
  • the number of feature points is determined by the feature points in the training set. If the shape template uses a shape model from the training set with 88 feature points, then the target facial image would have 88 pre-set feature points. In general, more feature points indicate a more accurate image recognition or assessment process.
  • the system for processing facial images provides a diagram to show the feature points on a facial image.
  • Figure 2(a) shows all 88 feature points on an exemplary facial image.
  • Figure 2(b) shows the feature points on the edge of the human face (feature point Nos. 68-88, 21 points).
  • Figure 2(c) shows the feature points of the left eyebrow (Nos. 1-8); Figure 2(d) shows the feature points of the right eyebrow (Nos. 9-16); Figure 2(e) shows the feature points of the left eye. There are 8 feature points, which are feature point Nos. 17-24. Figure 2(f) shows the feature points of the right eye.
  • There are 8 feature points, which are feature point Nos. 25-32. Figure 2(g) shows the feature points of the nose. There are 13 feature points, which are feature point Nos. 33-45. Figure 2(h) shows the feature points of the mouth. There are 22 feature points, which are feature point Nos. 46-67.
  • the system may calculate the eigenvalue of each facial element based on the feature points of the facial element. Specifically, the system may calculate the surface area, gray scale value, etc. based on the feature points.
  • the left eyebrow is defined by 8 feature points, which are feature point Nos. 1-8.
  • Feature point No. 8 is the top of the eyebrow, which forms triangles with any two of the feature point Nos. 1-7.
  • the system may calculate the area of each triangle and add the triangle areas together to determine the area of the left eyebrow.
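  • A minimal sketch of this triangle-fan area computation, assuming each feature point is an (x, y) pixel coordinate and reading the patent's "any two" pairing as consecutive pairs of points Nos. 1-7:

```python
def triangle_area(a, b, c):
    # Half the absolute cross product of the two edge vectors from a.
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def fan_area(apex, rim_points):
    # Sum the triangles formed by the apex (e.g., feature point No. 8) and
    # each consecutive pair of rim points (e.g., Nos. 1-7) to approximate
    # the surface area of the facial element.
    return sum(triangle_area(apex, rim_points[i], rim_points[i + 1])
               for i in range(len(rim_points) - 1))
```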
  • Figure 2(e) shows the 8 feature points for the left eye, feature point Nos. 17-24.
  • the system may calculate the gray scale values of the pixels along the straight line connecting feature point No. 17 to No. 21.
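  • The sampling step might look like the sketch below, which assumes a single-channel 0-255 gray-scale image and hypothetical (x, y) coordinates for the two endpoints:

```python
import numpy as np

def gray_profile(img_gray, p_start, p_end, n_samples=50):
    # Sample pixel values along the straight segment from p_start to p_end
    # (e.g., feature points No. 17 and No. 21 of the left eye).
    xs = np.linspace(p_start[0], p_end[0], n_samples).round().astype(int)
    ys = np.linspace(p_start[1], p_end[1], n_samples).round().astype(int)
    return img_gray[ys, xs]   # numpy indexes rows (y) first, then columns (x)
```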
  • In step S102, the system may obtain the eigenvalues of the positive and negative facial element sample images.
  • the system may further calculate the standard deviation between the eigenvalue determined in step S101 and the eigenvalues corresponding to the positive and negative facial element images.
  • the system may first obtain the eigenvalues of the positive and negative facial element sample images, where the facial element samples correspond to the selected facial elements. Specifically, the system may extract the positive/negative sample facial element images from a database of sample facial images. The system may further classify the facial elements to obtain the positive/negative sample facial element images.
  • Exemplary positive/negative facial element images may be a positive facial element (eye) image of a big eye, a negative facial element image of a small eye, a positive facial element image of a big nose, a negative facial element image of a small nose, etc.
  • the positive/negative eigenvalues may be determined by applying a facial template to the positive/negative facial element images.
  • the positive/negative eigenvalues may be the eigenvalue for a positive facial element image of a big eye, the eigenvalue of a negative facial element image of a small eye, the eigenvalue of a positive facial element image of a big nose, the eigenvalue of a negative facial element image of a small nose, etc.
  • the system may further calculate the standard deviation between the pre-determined facial feature’s eigenvalue and the eigenvalues corresponding to the positive/negative facial element images to determine the target eigenvalues.
  • the target eigenvalues may include the eigenvalues for eyes, eigenvalues for pupils, eigenvalues for eyebrows, eigenvalues for a nose, eigenvalues for a mouth, eigenvalues for a light skin tone, or eigenvalues for a smooth skin texture.
  • the target eigenvalue may be determined by: (pre-set facial element eigenvalue -eigenvalue of negative sample) / (eigenvalue of positive sample -eigenvalue of negative sample) .
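  • In code, this normalization is a one-liner; the sketch below names the arguments after the quantities in the formula:

```python
def target_eigenvalue(measured, positive, negative):
    # 0 means "like the negative sample", 1 means "like the positive sample";
    # values outside [0, 1] mean the target exceeds one of the samples.
    return (measured - negative) / (positive - negative)

# Example: the eye eigenvalue T01 = target_eigenvalue(D01, P10, P11).
```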
  • In step S103, the system may apply pre-set weights to the target eigenvalues to determine the result of the facial image processing.
  • the system may further present the results on a display.
  • the pre-determined weights may be based on the gender corresponding to the facial image, or based on pre-determined values.
  • the system may use a facial image display module to display the results.
  • the results may be a facial image, an assessment score, assessment scores for facial elements, etc.
  • the display of results may be: “Your face has a beauty score of XX (out of 100). You have big eyes and smooth skin. Your beauty ranking is at XX%,” etc.
  • the system may execute the following steps.
  • the system may determine the gender of a facial image based on a gender determination template.
  • the system may establish a gender determination template by pre-processing the training images (light compensation, image rotation, etc.) to extract Gabor features.
  • the system may then convert the two-dimensional matrix of the training sample information into a one-dimensional vector, which decreases the complexity of the process.
  • the system may then input the vectors into an SVM (Support Vector Machine) classifier to train the image recognition process and obtain the gender determination template.
  • Embodiments consistent with the present disclosure input a facial image to the facial recognition process and determine the gender of the facial image using the gender determination template.
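  • A sketch of this training pipeline, using OpenCV's Gabor kernels and scikit-learn's SVC; the kernel bank, its parameters, and the linear-kernel choice are illustrative assumptions, not values from the patent:

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_features(img_gray):
    # Filter the pre-processed face with a small bank of Gabor kernels and
    # flatten each 2-D response matrix into one long 1-D vector.
    responses = []
    for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
        responses.append(cv2.filter2D(img_gray, cv2.CV_32F, kernel).ravel())
    return np.concatenate(responses)

def train_gender_template(images, labels):
    # images: pre-processed gray-scale training faces; labels: 0/1 gender.
    # The fitted SVM plays the role of the "gender determination template".
    X = np.array([gabor_features(img) for img in images])
    return SVC(kernel="linear").fit(X, labels)
```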
  • the system may apply weights to target eigenvalues to determine the facial image processing results.
  • Pre-determined weights may include the weights for eigenvalues for eyes, pupils, eyebrows, a nose, a mouth, a light skin tone, or a smooth skin texture, etc.
  • the system may execute the following steps.
  • the system may extract features from the images of the sample facial image database, and classify the facial elements to obtain the positive/negative sample facial element image.
  • the system may use the facial matching template to extract features from sample images.
  • the positive/negative sample facial element images may be a positive facial element image of a big eye, a negative facial element image of a small eye, a positive facial element image of a big nose, a negative facial element image of a small nose, etc.
  • the system may update in real time the positive/negative sample images in the database. For example, if the system determines that a newly extracted big eye sample image 002 includes an eye bigger than that of a big eye sample image 001, the system may update the database and use big eye sample image 002 as the sample big eye image in subsequent processes.
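  • A hypothetical sketch of this update rule (the database layout and the eigenvalue_fn callback are assumptions for illustration):

```python
def maybe_update_sample(db, element, new_image, eigenvalue_fn):
    # db maps an element name (e.g., "big_eye") to a (image, eigenvalue) pair.
    # If a newly extracted sample (e.g., big-eye image 002) scores higher than
    # the stored one (image 001), it replaces the stored sample for use in
    # subsequent comparisons.
    new_value = eigenvalue_fn(new_image)
    current = db.get(element)
    if current is None or new_value > current[1]:
        db[element] = (new_image, new_value)
```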
  • Embodiments consistent with the present disclosure provide methods and systems for processing facial image data.
  • the system may determine the eigenvalues of pre-determined facial elements based on multiple pre-determined feature points of facial elements, each pre-determined facial element corresponding to multiple feature points. Instead of using the distances between different facial elements to assess facial features, the system determines the facial features by using the eigenvalue of the facial elements. As a result, the system improves the accuracy of the facial assessment process.
  • the system may compute the positions of the pre-set feature points of a facial element to determine an eigenvalue of the facial element.
  • the system may further compute the standard deviation between the eigenvalue of the facial element and the positive/negative sample eigenvalues of the same facial element.
  • the system may further apply a weighting strategy to obtain the results of the facial image process.
  • the system may further display the results of the facial image data process on a monitor. Embodiments consistent with the present disclosure improve the accuracy of facial image processing and assess the degree of attractiveness with flexibility.
  • Figure 2 shows an exemplary method for processing facial image data consistent with the present disclosure.
  • the system for processing facial images provides a diagram to show the feature points on a facial image.
  • Figure 2(a) shows all 88 feature points on a facial image.
  • Figure 2(b) shows the feature points on the edge of the human face.
  • Figure 2(c) shows the feature points of the left eyebrow, which are feature point Nos. 1-8.
  • Figure 2(d) shows the feature points of the right eyebrow. There are 8 feature points, which are feature point Nos. 9-16.
  • Figure 2(e) shows the feature points of the left eye (Nos. 17-24); Figure 2(f) shows the feature points of the right eye (Nos. 25-32).
  • the method for determining the eigenvalues of a pre-determined facial element may include the following steps. First, the system may calculate the left eye’s feature surface area and the right eye’s feature surface area based on the multiple feature points of the two eyes. The system may compare the feature surface area of the two eyes, and identify the eye with the larger feature surface area as the target eye. The system may calculate the ratio of the feature surface area of the target eye over the feature surface area of the face edge and determine a first eigenvalue.
  • the pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues.
  • an eigenvalue corresponding to a big eye may be a first positive sample eigenvalue
  • an eigenvalue corresponding to a small eye may be a first negative sample eigenvalue.
  • the eigenvalue of eyes may be determined by (first eigenvalue -first negative sample eigenvalue) / (first positive sample eigenvalue -first negative sample eigenvalue) .
  • Figure 2(b) shows the feature points on the edge of the human face.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 68-88 and record half of the surface area as S00. Figure 2(e) shows the feature points of the left eye.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 17-24, and record the surface area as S11. Figure 2(f) shows the feature points of the right eye.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 25-32, and record the surface area as S12. The ratio of the larger of S11 and S12 over S00 gives the first eigenvalue D01.
  • the first positive eigenvalue (corresponding to a big eye) is P10.
  • the first negative eigenvalue (corresponding to a small eye) is P11.
  • the eigenvalue of the eyes T01 = (D01 - P11) / (P10 - P11).
  • the system may calculate the first positive eigenvalue (corresponding to a big eye) P10 using the same method used to calculate the first eigenvalue D01.
  • the system may also calculate the first negative eigenvalue (corresponding to a small eye) P11 using the same method used to calculate the first eigenvalue D01.
  • the method for calculating a pre-determined facial element eigenvalue may further determine the second eigenvalue.
  • the system may calculate a left eyebrow feature area and a right eyebrow feature area, and a facial edge surface area.
  • the system may compare the surface areas of the two eyebrows.
  • the system may identify the eyebrow with the larger feature surface area as the target eyebrow.
  • the system may calculate the ratio of the feature surface area of the target eyebrow over the feature surface area of the face edge and determine a second eigenvalue.
  • the pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues.
  • an eigenvalue corresponding to a thick eyebrow may be a second positive sample eigenvalue; and an eigenvalue corresponding to a thin eyebrow may be a second negative sample eigenvalue.
  • the eigenvalue of eyebrows may be determined by (second eigenvalue -second negative sample eigenvalue) / (second positive sample eigenvalue -second negative sample eigenvalue) .
  • Figure 2(b) shows the feature points on the edge of the human face.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 68-88 and record half of the surface area as S00. Figure 2(c) shows the feature points of the left eyebrow.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 1-8, and record it as S21. Figure 2(d) shows the feature points of the right eyebrow.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 9-16, and record it as S22. The ratio of the larger of S21 and S22 over S00 gives the second eigenvalue D02.
  • the second positive eigenvalue (corresponding to a thick eyebrow) is P20.
  • the second negative eigenvalue (corresponding to a thin eyebrow) is P21. The eigenvalue of the eyebrows T02 = (D02 - P21) / (P20 - P21).
  • the system may calculate the second positive eigenvalue (corresponding to a thick eyebrow) P20 using the same method used to calculate the second eigenvalue D02.
  • the system may also calculate the second negative eigenvalue (corresponding to a thin eyebrow) P21 using the same method used to calculate the second eigenvalue D02.
  • the method for calculating a pre-determined facial element eigenvalue may further determine a third eigenvalue.
  • the system may calculate a nose feature area and a facial edge surface area.
  • the system may compare the surface area of the nose and the surface area of the face edge to determine the third eigenvalue.
  • the system may calculate the ratio of the feature surface area of the nose over the feature surface area of the face edge and determine the third eigenvalue.
  • the pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues.
  • an eigenvalue corresponding to a big nose may be a third positive sample eigenvalue; and an eigenvalue corresponding to a small nose may be a third negative sample eigenvalue.
  • the eigenvalue of the nose may be determined by (third eigenvalue - third negative sample eigenvalue) / (third positive sample eigenvalue - third negative sample eigenvalue).
  • Figure 2(b) shows the feature points on the edge of the human face.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 68-88 and record half of the surface area as S00. Figure 2(g) shows the feature points of the nose.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 33-45, and record it as S31. The ratio S31/S00 gives the third eigenvalue D03.
  • the third positive eigenvalue (corresponding to a big nose) is P30.
  • the third negative eigenvalue (corresponding to a small nose) is P31.
  • the eigenvalue of the nose T03 = (D03 - P31) / (P30 - P31).
  • the system may calculate the third positive eigenvalue (corresponding to a big nose) P30 using the same method used to calculate the third eigenvalue D03.
  • the system may also calculate the third negative eigenvalue (corresponding to a small nose) P31 using the same method used to calculate the third eigenvalue D03.
  • the method for determining the eigenvalues of pre-determined facial elements may include the following steps. First, the system may calculate the left eye’s feature surface area and the right eye’s feature surface area based on the multiple feature points of the two eyes. The system may compare the feature surface area of the two eyes, and identify the eye with the larger feature surface area as the target eye.
  • the system may calculate the ratio of the gray scale of the target eye over the gray scale of the pupil of the target eye and determine a fourth eigenvalue.
  • the pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues.
  • an eigenvalue corresponding to a big pupil may be a fourth positive sample eigenvalue; and an eigenvalue corresponding to a small pupil may be a fourth negative sample eigenvalue.
  • the eigenvalue of pupils may be determined by (fourth eigenvalue -fourth negative sample eigenvalue) / (fourth positive sample eigenvalue -fourth negative sample eigenvalue) .
  • Figure 2(b) shows the feature points on the edge of the human face.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 68-88 and record half of the surface area as S00. Figure 2(e) shows the feature points of the left eye.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 17-24, and record the surface area as S11. Figure 2(f) shows the feature points of the right eye.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 25-32, and record the surface area as S12.
  • the system may obtain the feature points of the left eye (point Nos. 17-21) .
  • the system may draw a straight line between point Nos. 17 and 21 and obtain pixels along the straight line.
  • the system may further convert the obtained pixels into gray scale values of 0-255. If the right eye is the target eye, the system may determine the gray scale values in the same manner.
  • the pupil of an eye usually has a smaller gray scale.
  • the system may determine that the area inside an eye with a gray scale of less than 50 is the pupil area.
  • the system may count the number of pixels obtained in the eye area as S41.
  • the system may count the number of pixels obtained in the pupil area (gray scale less than 50) as S42.
  • the fourth positive eigenvalue (corresponding to a big pupil) is P41.
  • the fourth negative eigenvalue (corresponding to a small pupil) is P42.
  • the eigenvalue of the pupils T04 = (D04 - P42) / (P41 - P42).
  • the system may calculate the fourth positive eigenvalue (corresponding to a big pupil) P41 using the same method used to calculate the fourth eigenvalue D04.
  • the system may also calculate the fourth negative eigenvalue (corresponding to a small pupil) P42 using the same method used to calculate the fourth eigenvalue D04.
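  • Putting the pupil steps together, a sketch under one plausible reading of the text, in which D04 is the fraction of sampled eye pixels dark enough to count as pupil (the patent does not spell out the ratio explicitly):

```python
import numpy as np

def pupil_eigenvalue(gray_values, p41, p42, threshold=50):
    # gray_values: 0-255 pixels sampled along the line through the target eye
    # (e.g., from feature point No. 17 to No. 21). Pixels darker than the
    # threshold are counted as pupil pixels (S42); all sampled pixels form
    # the eye count (S41).
    values = np.asarray(gray_values)
    s41 = values.size
    s42 = int(np.count_nonzero(values < threshold))
    d04 = s42 / s41
    return (d04 - p42) / (p41 - p42)   # normalize against P41 (big) and P42 (small)
```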
  • the method for determining the eigenvalues of pre-determined facial elements may include the following steps. First, the system may calculate the skin feature surface area. The system may then determine an average gray scale of the skin feature surface area to determine a fifth eigenvalue.
  • the pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues.
  • an eigenvalue corresponding to a light skin tone may be a fifth positive sample eigenvalue; and an eigenvalue corresponding to a darker skin tone may be a fifth negative sample eigenvalue.
  • the eigenvalue of skin may be determined by (fifth eigenvalue -fifth negative sample eigenvalue) / (fifth positive sample eigenvalue -fifth negative sample eigenvalue) .
  • the system may determine a skin feature surface area based on feature point No. 19 in Figure 2(e) and feature point No. 46 in Figure 2(h).
  • the system may draw a straight line between point No. 19 and No. 46, and obtain pixels along the straight line.
  • the system may also calculate the skin surface area based on point No. 27 in Figure 2(f) and point No. 52 in Figure 2(h).
  • the system may draw a straight line between point No. 27 and No. 52, and obtain pixels along the straight line.
  • the system may further convert the obtained pixels into a gray scale value of 0-255.
  • the system may calculate the average gray scale value of the skin feature area to determine the fifth eigenvalue D05.
  • the fifth positive eigenvalue (corresponding to a light skin tone) is P51.
  • the fifth negative eigenvalue (corresponding to a dark skin tone) is P52.
  • the eigenvalue of the skin tone T05 = (D05 - P52) / (P51 - P52).
  • the system may calculate the fifth positive eigenvalue (corresponding to a light skin tone) P51 using the same method used to calculate the fifth eigenvalue D05.
  • the system may also calculate the fifth negative eigenvalue (corresponding to a dark skin tone) P52 using the same method used to calculate the fifth eigenvalue D05.
  • the method for determining the eigenvalues of pre-determined facial elements may include the following steps. First, the system may calculate the face edge feature area. The system may then determine an average gray scale of the face edge surface area to determine a sixth eigenvalue.
  • the pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues.
  • an eigenvalue corresponding to a smooth skin texture may be a sixth positive sample eigenvalue; and an eigenvalue corresponding to a rough skin texture may be a sixth negative sample eigenvalue.
  • the eigenvalue of skin texture may be determined by (sixth eigenvalue -sixth negative sample eigenvalue) / (sixth positive sample eigenvalue -sixth negative sample eigenvalue) .
  • the system may also use an edge recognition system to detect the edges in the facial image. If the face has dark spots or rough patches, the edges of the spots can be detected.
  • the eyes, nose, mouth, eyebrows also have corresponding edges.
  • Figure 2(b) shows the feature points along the edge of the face, which are feature point Nos. 68-88 (21 points).
  • the system may use the edge recognition system to detect the edges between point Nos. 68-88. Then the system may take away the edges of the eyes, nose, mouth, and eyebrows.
  • the system may determine an edge feature surface area.
  • the system may convert the pixels of the edge surface area to gray scale values of 0-255.
  • the system may calculate the average gray scale of the face edge feature area to determine the sixth eigenvalue D06.
  • the sixth positive eigenvalue (corresponding to a smooth skin texture) is P61.
  • the sixth negative eigenvalue (corresponding to a rough skin texture) is P62.
  • the eigenvalue of the skin texture T06 = (D06 - P62) / (P61 - P62).
  • the system may calculate the sixth positive eigenvalue (corresponding to a smooth skin texture) P61 using the same method used to calculate the sixth eigenvalue D06.
  • the system may also calculate the sixth negative eigenvalue (corresponding to a rough skin texture) P62 using the same method used to calculate the sixth eigenvalue D06.
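  • A sketch of the skin texture measurement, substituting a Canny detector for the unnamed edge recognition system and assuming the eye/nose/mouth/eyebrow edges have already been masked out of face_mask:

```python
import cv2
import numpy as np

def skin_texture_eigenvalue(face_gray, face_mask, p61, p62):
    # face_gray: 8-bit gray-scale face crop; face_mask: nonzero inside the
    # face edge region with the facial elements (eyes, nose, ...) removed.
    edges = cv2.Canny(face_gray, 50, 150)            # stand-in edge detector
    edge_pixels = face_gray[(edges > 0) & (face_mask > 0)]
    # D06: average gray value over the remaining (blemish) edge pixels.
    d06 = float(edge_pixels.mean()) if edge_pixels.size else 0.0
    return (d06 - p62) / (p61 - p62)
```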
  • the method for determining the eigenvalues of pre-determined facial elements may include the following steps.
  • the system may calculate the distance from the left eye to the center of the two eyes and the distance from the right eye to the center of the two eyes.
  • the system may calculate the distance from the left corner of the mouth to the center of the mouth and the distance from the right corner of the mouth to the center of the mouth.
  • the system may then calculate the ratio of the distance between the two corners of the mouth over the distance between the two eyes and determine the seventh eigenvalue.
  • the pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues.
  • an eigenvalue corresponding to a small mouth may be a seventh positive sample eigenvalue; and an eigenvalue corresponding to a big mouth may be a seventh negative sample eigenvalue.
  • the eigenvalue of the mouth feature may be determined by (seventh eigenvalue -seventh negative sample eigenvalue) / (seventh positive sample eigenvalue -seventh negative sample eigenvalue) .
  • Figure 2(h) shows the feature points of the mouth.
  • the system may calculate the distance between the two corners of the mouth (between point No. 46 and point No. 52) to determine the mouth width and record it as L1. Figure 2(e) shows the feature points of the left eye.
  • the system may calculate the center of the left eye O1 based on point Nos. 17 and 21.
  • the system may calculate the center of the right eye O2 based on point Nos. 25 and 29.
  • the system may calculate the distance between the two eyes (between O1 and O2) to determine the eye width and record it as L2.
  • the seventh eigenvalue D07 = L1/L2.
  • the seventh positive eigenvalue corresponding to a small mouth is P71.
  • the seventh negative eigenvalue corresponding to a big mouth is P72.
  • the eigenvalue of the mouth T07 = (D07 - P72) / (P71 - P72).
  • the system may calculate the seventh positive eigenvalue (corresponding to a small mouth) P71 using the same method used to calculate the seventh eigenvalue D07.
  • the system may also calculate the seventh negative eigenvalue (corresponding to a big mouth) P72 using the same method used to calculate the seventh eigenvalue D07.
  • the method for determining the eigenvalues of pre-determined facial elements may include the following steps.
  • the system may calculate the distance from the center of the two eyes to the tip of the nose L3, from the tip of the nose to the center of the bottom lip L4, and from the center of the bottom lip to the bottom tip of the chin L5.
  • the system may then determine the eighth eigenvalue.
  • Figure 2(e) shows the feature points of the left eye.
  • There are 8 feature points, which are feature point Nos. 17-24.
  • Figure 2(f) shows the feature points of the right eye.
  • There are 8 feature points, which are feature point Nos. 25-32.
  • the system may calculate the center of the inner eye corners O3 based on point Nos. 21 and 29. Figure 2(g) shows the feature points of the nose.
  • There are 13 feature points, which are feature point Nos. 33-45.
  • the system may determine the distance L3 between the center of the inner eye corners O3 and the tip of the nose (point No. 33) .
  • 2 (h) shows the feature points of the mouth.
  • There are 22 feature points, which are feature point Nos. 46-67.
  • the system may determine the distance L4 between the tip of the nose (point No. 33) and the center of the bottom lip (point No. 60). Further, the system may determine the distance L5 between the center of the bottom lip (point No. 60) and the tip of the chin (point No. 78).
  • the eighth eigenvalue D08 is determined from the distances L3, L4, and L5.
  • the eighth positive eigenvalue corresponding to a well-proportioned face is P81.
  • the eighth negative eigenvalue corresponding to a poorly proportioned face is P82.
  • the eigenvalue of the face proportion T08 = (D08 - P82) / (P81 - P82).
  • the system may calculate the eighth positive eigenvalue (corresponding to a well-proportioned face) P81 using the same method used to calculate the eighth eigenvalue D08.
  • the system may also calculate the eighth negative eigenvalue (corresponding to a poorly proportioned face) P82 using the same method used to calculate the eighth eigenvalue D08.
  • the system may then determine the ninth eigenvalue.
  • Figure 2(b) shows the feature points on the edge of the human face.
  • the system may calculate the angle θ between point Nos. 68, 88, and 78 (point No. 78 being the vertex of the angle).
  • the ninth eigenvalue D09 = θ.
  • the ninth positive eigenvalue corresponding to a small face is P91.
  • the ninth negative eigenvalue corresponding to a large face is P92.
  • the eigenvalue of the face size T09 = (D09 - P92) / (P91 - P92).
  • the system may calculate the ninth positive eigenvalue (corresponding to a small face) P91 using the same method used to calculate the ninth eigenvalue D09.
  • the system may also calculate the ninth negative eigenvalue (corresponding to a large face) P92 using the same method used to calculate the ninth eigenvalue D09.
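  • The chin-angle measurement might be computed as below, with the three feature points given as (x, y) coordinates:

```python
import numpy as np

def face_size_eigenvalue(p68, p88, p78, p91, p92):
    # Angle at the chin tip (point No. 78) between the rays toward the two
    # uppermost face-edge points (Nos. 68 and 88); D09 is that angle.
    v1 = np.asarray(p68, dtype=float) - np.asarray(p78, dtype=float)
    v2 = np.asarray(p88, dtype=float) - np.asarray(p78, dtype=float)
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    d09 = float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))
    return (d09 - p92) / (p91 - p92)   # normalize against P91 (small), P92 (large)
```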
  • the system calculates the following target eigenvalues of the facial images: eye eigenvalue T01, eyebrow eigenvalue T02, nose eigenvalue T03, pupil eigenvalue T04, skin tone eigenvalue T05, skin texture eigenvalue T06, mouth eigenvalue T07, face proportion eigenvalue T08, and face size eigenvalue T09.
  • the eigenvalues generally fall between 0 and 1. The closer an eigenvalue is to 0, the closer the image is to the negative sample target image. The closer an eigenvalue is to 1, the closer the image is to the positive sample target image. For example, if an eye eigenvalue falls outside this range, the eye in the facial image may be even larger than the eye in the positive sample image (eigenvalue greater than 1) or even smaller than the eye in the negative sample image (eigenvalue less than 0).
  • the system for processing facial images may apply weights to the eigenvalues.
  • a pre-determined weight application selects, among the 9 eigenvalues (eye eigenvalue T01, eyebrow eigenvalue T02, nose eigenvalue T03, pupil eigenvalue T04, skin tone eigenvalue T05, skin texture eigenvalue T06, mouth eigenvalue T07, face proportion eigenvalue T08, and face size eigenvalue T09), the eigenvalues to which weight factors are applied.
  • An exemplary weight application is shown in the table below. In this table, y means a weight factor will be applied to that facial element eigenvalue; n means no weight factor will be applied to that facial element eigenvalue.
  • when a recognized facial image is a man’s facial image, the system applies the weight factors to eye eigenvalue T01, eyebrow eigenvalue T02, pupil eigenvalue T04, skin texture eigenvalue T06, face proportion eigenvalue T08, and face size eigenvalue T09.
  • the weight factors may be pre-determined.
  • the system may also use various factors and criteria to determine the weight factors.
  • G00 = 40 + min(T01, T02, T04, T06, T08, T09) * 30 + (sum(T01, T02, T04, T06, T08, T09) - min(T01, T02, T04, T06, T08, T09)) * 30.
  • G11 = 40 + min(T01, T03, T04, T05, T06, T07, T08, T09) * 30 + (sum(T01, T03, T04, T05, T06, T07, T08, T09) - min(T01, T03, T04, T05, T06, T07, T08, T09)) * 30.
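  • In code, both scores share one formula over the gender-specific set of weighted eigenvalues:

```python
def beauty_score(eigenvalues):
    # 40-point base, 30 points scaled by the weakest selected eigenvalue, and
    # 30 points scaled by the sum of the remaining ones, per the G00/G11
    # formulas above.
    worst = min(eigenvalues)
    return 40 + worst * 30 + (sum(eigenvalues) - worst) * 30

# Example: G00 = beauty_score((T01, T02, T04, T06, T08, T09)) for a male face.
```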
  • Embodiments consistent with the present disclosure provide methods and systems for processing facial image data.
  • the system may determine the eigenvalues of pre-determined facial elements based on multiple pre-determined feature points of facial elements, each pre-determined facial element corresponding to multiple feature points. Instead of using the distances between different facial elements to assess facial features, the system determines the facial features by using the eigenvalue of the facial elements. As a result, the system improves the accuracy of the facial assessment process.
  • the system may compute the positions of the pre-set feature points of a facial element to determine an eigenvalue of the facial element.
  • the system may further compute the standard deviation between the eigenvalue of the facial element and the positive/negative sample eigenvalues of the same facial element.
  • the system may further apply a weighting strategy to obtain the results of the facial image process.
  • the system may further display the results of the facial image data process on a monitor. Embodiments consistent with the present disclosure improve the accuracy of facial image processing and assess the degree of attractiveness with flexibility.
  • Figure 3 shows an exemplary system for processing facial images.
  • the embodiment shown in Figure 3 may be used to implement the method shown in Figure 1.
  • the various components of the system may also be understood in view of the descriptions related to Figure 1.
  • the system in Figure 3 includes a facial element processing module 301, a feature processing module 302, a resulting image processing module 303, a sample image processing module 304, and a gender determination module 305.
  • the facial element processing module 301 may compute an eigenvalue of the pre-set facial feature points.
  • the facial elements may include a left eye, a right eye, a left eyebrow, a right eyebrow, a nose, a mouth, and an edge of a face.
  • the pre-set feature points of the elements of a human face may be determined by using a facial matching template to process the pre-determined facial elements.
  • the facial matching template may be determined by the Active Shape Model (ASM) .
  • the ASM is based on the Point Distribution Model (PDM) .
  • the ASMs are statistical models of the shape of objects, which iteratively deform to fit to an example of the object in a new image (e.g., target facial image) .
  • the shapes are constrained by the PDM to vary only in ways seen in a training set of shape examples.
  • the shape of an object is represented by a set of points (controlled by the shape model) .
  • the ASM algorithm aims to match the shape model to a new image.
  • the target image is a target human facial image.
  • users or software developers need to collect a large number of facial images, then manually annotate and record the positions of a set of feature points on the facial images in the training set. Further, to prepare the training set, the system needs to calculate the eigenvalue vector of the feature points based on the gray scale model of the feature points.
  • a facial recognition system may first place a shape model onto a target facial image and then fit the shape model to the target image by adjusting the positions of the feature points.
  • the suggested positions of the feature points are determined based on the minimum value of local gray model Mahalanobis distance.
  • the system may determine a suggested shape.
  • the system may then fit the suggested shape to the target image.
  • the system may repeat iterations until convergence is achieved.
  • the system may thus determine the shape of a target facial image based on the facial image templates (shapes) in the system.
  • the system may pre-set the number of feature points for each facial element of a facial image, such as 88, 99, or 155 points.
  • the number of feature points is determined by the feature points in the training set. If the shape template uses a shape model from the training set with 88 feature points, then the target facial image would have 88 feature points. In general, more feature points indicate a more accurate recognition or assessment process.
  • the system for processing facial images provides a diagram to show the feature points on a facial image. Figure 2(a) shows all 88 feature points on a facial image.
  • the facial element processing module 301 may calculate the eigenvalue of each facial element based on the feature points of the facial element. Specifically, the system may calculate the surface area, gray scale value, etc. based on the feature points. For example, in Figure 2(c), the left eyebrow is defined by 8 feature points, which are feature point Nos. 1-8. Feature point No. 8 is the top of the eyebrow, which forms triangles with any two of feature point Nos. 1-7. The system may calculate the area of each triangle and add the areas together to determine the area of the left eyebrow. Figure 2(e) shows the 8 feature points for the left eye, feature point Nos. 17-24. The system may calculate the gray scale values along the straight line from feature point No. 17 to No. 21.
  • the feature processing module 302 may further calculate the standard deviation between the pre-set face element eigenvalue and the eigenvalues corresponding to the positive/negative facial element images.
  • the feature processing module 302 may extract the positive/negative sample facial element images from a database of sample facial images.
  • the system may further classify the facial elements to obtain the positive/negative sample facial element images.
  • Exemplary positive/negative facial element images may be a positive facial element image of a big eye, a negative facial element image of a small eye, a positive facial element image of a big nose, a negative facial element image of a small nose, etc.
  • the positive/negative eigenvalues may be determined by applying a facial template to the positive/negative facial element images.
  • the positive/negative eigenvalues may be the eigenvalue for a positive facial element image of a big eye, the eigenvalue of a negative facial element image of a small eye, the eigenvalue of a positive facial element image of a big nose, the eigenvalue of a negative facial element image of a small nose, etc.
  • the feature processing module 302 may further calculate the standard deviation between the pre-determined eigenvalue and the eigenvalues corresponding to the positive/negative facial element images to determine the facial element eigenvalues.
  • the eigenvalues may include the eigenvalues for eyes, eigenvalues for pupils, eigenvalues for eyebrows, eigenvalues for a nose, eigenvalues for a mouth, eigenvalues for light skin tone, or eigenvalues for smooth skin texture.
  • the target eigenvalue may be determined by (pre-set facial element eigenvalue -eigenvalue of negative sample) / (eigenvalue of positive sample -eigenvalue of negative sample) .
  • the resulting image processing module 303 may apply pre-set weights to the facial element eigenvalues to determine the result of the facial image processing.
  • the resulting image processing module 303 may further present the results on a display.
  • the pre-determined weights may be based on the gender corresponding to the facial image, or based on pre-determined values.
  • the system may use a facial image display module to display the results.
  • the results may be a facial image, an assessment score, assessment scores for facial elements, etc.
  • the display of results may be: “Your face has a beauty score of XX (out of 100). You have big eyes and smooth skin. Your beauty ranking is at XX%,” etc.
  • the sample image processing module 304 may extract features from the images of the facial image database, and classify the facial elements to obtain the positive/negative sample facial element image.
  • the sample image processing module 304 may use the facial matching template to extract features from sample images.
  • the positive/negative sample facial element images may be a positive facial element image of a big eye, a negative facial element image of a small eye, a positive facial element image of a big nose, a negative facial element image of a small nose, etc.
  • the sample image processing module 304 may update in real time the positive/negative sample images in the database. For example, if the system determines that a newly extracted big eye sample image 002 includes an eye bigger than that of a big eye sample image 001, the system may update the database and use big eye sample image 002 as the sample big eye image in subsequent processes.
  • the gender determination module 305 may execute the following steps.
  • the gender determination module 305 may determine the gender of a facial image based on a gender determination template.
  • the system may establish a gender determination template by pre-processing training images (light compensation, image rotation, etc.) to extract Gabor features.
  • the system may then convert the two-dimensional matrix of the training sample information into a one-dimensional vector, which decreases the complexity of the process.
  • the system may then input the vectors into an SVM (Support Vector Machine) classifier to train the image recognition process and obtain the gender determination template.
  • Embodiments consistent with the present disclosure input a facial image to the facial recognition process and determine the gender of the facial image using the gender determination template.
  • the resulting image processing module 303 may apply pre-determined weights to the facial element eigenvalues to determine the facial image processing results.
  • the pre-determined weights may include the weights for eigenvalues for eyes, eigenvalues for pupils, eigenvalues for eyebrows, eigenvalues for nose, eigenvalues for mouth, eigenvalues for light skin tone, or eigenvalues for smooth skin texture, etc.
  • Embodiments consistent with the present disclosure provide methods and systems for processing facial image data.
  • the system may determine the eigenvalues of pre-determined facial elements based on multiple pre-determined feature points of facial elements, each pre-determined facial element corresponding to multiple feature points. Instead of using the distances between different facial elements to calculate facial features, the system determines the facial features by using the eigenvalue of the facial elements. As a result, the system improves the accuracy of the facial recognition process.
  • the system may compute the positions of the pre-set feature points of a facial element to determine an eigenvalue of the facial element.
  • the system may further compute the standard deviation between the eigenvalue of the facial element and the positive/negative sample eigenvalues of the same facial element.
  • the system may further apply a weighting strategy to obtain the results of the facial image process.
  • the system may further display the results of the facial image data process on a monitor. Embodiments consistent with the present disclosure improve the accuracy of facial image processing and assess the degree of attractiveness with flexibility.
  • Figure 4 shows a detailed diagram of the exemplary facial element processing module 301.
  • Figure 4 is discussed in relation to Figure 2 below to illustrate the facial element image processing consistent with the present disclosure.
  • the system for processing facial images provides a diagram to show the feature points on a facial image.
  • Figure 2(a) shows all 88 feature points on a facial image.
  • Figure 2(b) shows the feature points on the edge of the human face.
  • Figure 2(c) shows the feature points of the left eyebrow (Nos. 1-8); Figure 2(d) shows the feature points of the right eyebrow (Nos. 9-16). Figure 2(e) shows the feature points of the left eye.
  • There are 8 feature points, which are feature point Nos. 17-24. Figure 2(f) shows the feature points of the right eye (Nos. 25-32). Figure 2(g) shows the feature points of the nose.
  • There are 13 feature points, which are feature point Nos. 33-45. Figure 2(h) shows the feature points of the mouth (Nos. 46-67).
  • the facial element processing module 301 includes a first surface determination unit 401, a target eye determination unit 402, and a first eigenvalue determination unit 403.
  • the first surface determination unit 401 may calculate the left eye’s feature surface area and the right eye’s feature surface area based on the multiple feature points of the two eyes.
  • the target eye determination unit 402 may compare the feature surface area of the two eyes, and identify the eye with the larger feature surface area as the target eye.
  • the first eigenvalue determination unit 403 may calculate the ratio of the feature surface area of the target eye over the feature surface area of the face edge and determine a first eigenvalue.
  • the pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues.
  • an eigenvalue corresponding to a big eye may be a first positive sample eigenvalue
  • an eigenvalue corresponding to a small eye may be a first negative sample eigenvalue.
  • the eigenvalue of eyes may be determined by (first eigenvalue - first negative sample eigenvalue) / (first positive sample eigenvalue -first negative sample eigenvalue) .
  • Figure 2(b) shows the feature points on the edge of the human face.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 68-88 and record half of the surface area as S00. Figure 2(e) shows the feature points of the left eye.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 17-24, and record the surface area as S11. Figure 2(f) shows the feature points of the right eye.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 25-32, and record the surface area as S12.
  • the first positive eigenvalue (corresponding to a big eye) is P10.
  • the first negative eigenvalue (corresponding to a small eye) is P11.
  • the eigenvalue of the eyes T01 = (D01 - P11) / (P10 - P11).
  • the system may calculate the first positive eigenvalue (corresponding to a big eye) P10 using the same method used to calculate the first eigenvalue D01.
  • the system may also calculate the first negative eigenvalue (corresponding to a small eye) P11 using the same method used to calculate the first eigenvalue D01.
  • the method for calculating a pre-determined facial element eigenvalue may further determine the second eigenvalue.
  • the system may calculate a left eyebrow feature area and a right eyebrow feature area, and a facial edge surface area.
  • the facial element processing module 301 may further include a second surface determination unit 404, a target eyebrow determination unit 405, and a second eigenvalue determination unit 406.
  • the method for calculating a pre-determined facial element eigenvalue may further determine the second eigenvalue.
  • the second surface determination unit 404 may calculate a left eyebrow feature area and a right eyebrow feature area, and a facial edge surface area.
  • the target eyebrow determination unit 405 may compare the surface areas of the two eyebrows.
  • the target eyebrow determination unit 405 may identify the eyebrow with the larger feature surface area as the target eyebrow.
  • the second eigenvalue determination unit 406 may calculate the ratio of the feature surface area of the target eyebrow over the feature surface area of the face edge and determine a second eigenvalue.
  • the pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues.
  • an eigenvalue corresponding to a thick eyebrow may be a second positive sample eigenvalue; and an eigenvalue corresponding to a thin eyebrow may be a second negative sample eigenvalue.
  • the eigenvalue of eyebrows may be determined by (second eigenvalue - second negative sample eigenvalue) / (second positive sample eigenvalue - second negative sample eigenvalue) .
  • 2 (b) shows the feature points on the edge of the human face.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 68-88 and record half of the surface area as S00. 2 (c) shows the feature points of the left eyebrow.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 1-8, and record it as S21. 2 (d) shows the feature points of the right eyebrow.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 9-16, and record it as S22.
  • the second eigenvalue is D02 = max (S21, S22) /S00.
  • the second positive eigenvalue (corresponding to a thick eyebrow) is P20.
  • the second negative eigenvalue (corresponding to a thin eyebrow) is P21.
  • the eigenvalue of the eyebrows is T02 = (D02-P21) / (P20-P21) .
  • the system may calculate the second positive eigenvalue (corresponding to a thick eyebrow) P20 using the same method used to calculate the second eigenvalue D02.
  • the system may also calculate the second negative eigenvalue (corresponding to a thin eyebrow) P21 using the same method used to calculate the second eigenvalue D02.
  • the facial element processing module 301 shown in Figure 4 may further include a third surface determination unit 407 and the third eigenvalue determination unit 408.
  • the third surface determination unit 407 may calculate a nose feature area and a facial edge surface area.
  • the third eigenvalue determination unit 408 may compare the surface area of the nose and the feature surface area of the face edge to determine the third eigenvalue.
  • the third eigenvalue determination unit 408 may calculate the ratio of the feature surface area of the nose over the feature surface area of the face edge and determine a third eigenvalue.
  • the pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues.
  • an eigenvalue corresponding to a big nose may be a third positive sample eigenvalue; and an eigenvalue corresponding to a small nose may be a third negative sample eigenvalue.
  • the eigenvalue of nose may be determined by (third eigenvalue - third negative sample eigenvalue) / (third positive sample eigenvalue - third negative sample eigenvalue) .
  • 2 (b) shows the feature points on the edge of the human face.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 68-88 and record half of the surface area as S00. 2 (g) shows the feature points of the nose.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 33-45, and record it as S31.
  • the third eigenvalue is D03 = S31/S00.
  • the third positive eigenvalue (corresponding to a big nose) is P30.
  • the third negative eigenvalue (corresponding to a small nose) is P31.
  • the eigenvalue of the nose is T03 = (D03-P31) / (P30-P31) .
  • the system may calculate the third positive eigenvalue (corresponding to a big nose) P30 using the same method used to calculate the third eigenvalue D03.
  • the system may also calculate the third negative eigenvalue (corresponding to a small nose) P31 using the same method used to calculate the third eigenvalue D03.
  • the facial element processing module 301 shown in Figure 4 may further include a fourth eigenvalue determination unit 409.
  • the first surface determination unit 401 may calculate the left eye’s feature surface area and the right eye’s feature surface area based on the multiple feature points of the two eyes.
  • the target eye determination unit 402 may compare the feature surface area of the two eyes, and identify the eye with the larger feature surface area as the target eye.
  • the fourth eigenvalue determination unit 409 may calculate the ratio of the gray scale value of the target eye over the gray scale value of the pupil of the target eye and determine a fourth eigenvalue.
  • the pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues.
  • an eigenvalue corresponding to a big pupil may be a fourth positive sample eigenvalue; and an eigenvalue corresponding to a small pupil may be a fourth negative sample eigenvalue.
  • the eigenvalue of pupils may be determined by (fourth eigenvalue - fourth negative sample eigenvalue) / (fourth positive sample eigenvalue - fourth negative sample eigenvalue) .
  • 2 (b) shows the feature points on the edge of the human face.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 68-88 and record half of the surface area as S00. 2 (e) shows the feature points of the left eye.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 17-24, and record the surface area as S11. 2 (f) shows the feature points of the right eye.
  • the system may calculate the surface area of the polygon formed by feature point Nos. 25-32, and record the surface area as S12.
  • the system may obtain the feature points of the left eye (point Nos. 17-21) .
  • the system may draw a straight line between point Nos. 17 and 21 and obtain pixels along the straight line.
  • the system may further convert the obtained pixels into a gray scale value of 0-255. If the right eye is the target eye, the system may determine the gray scale in the same manner.
  • the pupil of an eye usually has a smaller gray scale.
  • the system may determine that the area inside an eye with a gray scale of less than 50 is the pupil area.
  • the system may count the number of pixels obtained in the eye area as S41.
  • the system may count the number of pixels obtained in the pupil area (gray scale value < 50) as S42.
  • the fourth eigenvalue is D04 = S42/S41.
  • the fourth positive eigenvalue (corresponding to a big pupil) is P41.
  • the fourth negative eigenvalue (corresponding to a small pupil) is P42.
  • the eigenvalue of the pupils is T04 = (D04-P42) / (P41-P42) .
  • the system may calculate the fourth positive eigenvalue (corresponding to a big pupil) P41 using the same method used to calculate the fourth eigenvalue D04.
  • the system may also calculate the fourth negative eigenvalue (corresponding to a small pupil) P42 using the same method used to calculate the fourth eigenvalue D04.
  • the facial element processing module 301 shown in Figure 4 may further include a first obtaining unit 410 and a fifth eigenvalue determination unit 411.
  • the first obtaining unit 410 may calculate the skin feature surface area.
  • the fifth eigenvalue determination unit 411 may then compute the average gray scale of the skin feature surface area to determine a fifth eigenvalue.
  • the pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues.
  • an eigenvalue corresponding to a light skin tone may be a fifth positive sample eigenvalue; and an eigenvalue corresponding to a darker skin tone may be a fifth negative sample eigenvalue.
  • the eigenvalue of skin may be determined by (fifth eigenvalue - fifth negative sample eigenvalue) / (fifth positive sample eigenvalue - fifth negative sample eigenvalue) .
  • the system may determine a skin feature surface area based on feature point No. 19 in 2 (e) and feature point No. 46 in 2 (h) .
  • the system may draw a straight line between point No. 19 and No. 46, and obtain pixels along the straight line.
  • the system may also calculate the skin surface area based on point No. 27 in 2 (f) and point No. 52 in 2 (h) .
  • the system may draw a straight line between point No. 27 and No. 52, and obtain pixels along the straight line.
  • the system may further convert the obtained pixels into a gray scale value of 0-255.
  • the system may calculate the average gray scale of the skin feature area to determine the fifth eigenvalue D05.
  • the fifth positive eigenvalue (corresponding to a light skin tone) is P51.
  • the fifth negative eigenvalue (corresponding to a dark skin tone) is P52.
  • the eigenvalue of the skin tone is T05 = (D05-P52) / (P51-P52) .
  • the system may calculate the fifth positive eigenvalue (corresponding to a light skin tone) P51 using the same method used to calculate the fifth eigenvalue D05.
  • the system may also calculate the fifth negative eigenvalue (corresponding to a dark skin tone) P52 using the same method used to calculate the fifth eigenvalue D05.
  • the facial element processing module 301 shown in Figure 4 may further include a second obtaining unit 412 and a sixth eigenvalue determination unit 413.
  • the second obtaining unit 412 may calculate the face edge feature area.
  • the sixth eigenvalue determination unit 413 may then compute the average gray scale of the face edge surface area to determine a sixth eigenvalue.
  • the pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues.
  • an eigenvalue corresponding to a smooth skin texture may be a sixth positive sample eigenvalue; and an eigenvalue corresponding to a rough skin texture may be a sixth negative sample eigenvalue.
  • the eigenvalue of skin texture may be determined by (sixth eigenvalue - sixth negative sample eigenvalue) / (sixth positive sample eigenvalue - sixth negative sample eigenvalue) .
  • the system may also use an edge recognition system to detect the edges in the facial image. If the face has dark spots or rough spots, the edges of the spots can be detected.
  • the eyes, nose, mouth, and eyebrows also have corresponding edges.
  • 2 (b) shows the feature points along the edge of the face, which are feature point Nos. 68-88 (21 points) .
  • the system may use the edge recognition system to detect the edges between point Nos. 68-88. The system may then take away the edges of the eyes, nose, mouth, eyebrows, etc.
  • the system may determine an edge feature surface area.
  • the system may convert the pixels of the edge surface area into gray scale values of 0-255.
  • the system may calculate the average gray scale value of the face edge feature area to determine the sixth eigenvalue D06.
  • the sixth positive eigenvalue (corresponding to a smooth skin texture) is P61.
  • the sixth negative eigenvalue (corresponding to a rough skin texture) is P62.
  • the eigenvalue of the skin texture is T06 = (D06-P62) / (P61-P62) .
  • the system may calculate the sixth positive eigenvalue (corresponding to a smooth skin texture) P61 using the same method used to calculate the sixth eigenvalue D06.
  • the system may also calculate the sixth negative eigenvalue (corresponding to a rough skin texture) P62 using the same method used to calculate the sixth eigenvalue D06.
  • the facial element processing module 301 shown in Figure 4 may further include an eye distance determination unit 414, a mouth width determination unit 415, and a seventh eigenvalue determination unit 416.
  • the eye distance determination unit 414 may calculate the distance from the left eye to the center of the two eyes and the distance from the right eye to the center of the two eyes.
  • the mouth width determination unit 415 may calculate the distance from the left corner of the mouth to the center of the mouth and the distance from the right corner of the mouth to the center of the mouth.
  • the seventh eigenvalue determination unit 416 may then calculate the ratio of the distance between the two corners of the mouth over the distance between the two eyes and determine the seventh eigenvalue.
  • the pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues.
  • an eigenvalue corresponding to a small mouth may be a seventh positive sample eigenvalue; and an eigenvalue corresponding to a big mouth may be a seventh negative sample eigenvalue.
  • the eigenvalue of the mouth feature may be determined by (seventh eigenvalue - seventh negative sample eigenvalue) / (seventh positive sample eigenvalue - seventh negative sample eigenvalue) .
  • 2 (h) shows the feature points of the mouth.
  • the system may calculate the distance between the two corners of the mouth (between point No. 46 and point No. 52) to determine the mouth width and record it as L1. 2 (e) shows the feature points of the left eye.
  • the system may calculate the center of the left eye O1 based on point Nos. 17 and 21.
  • the system may calculate the center of the right eye O2 based on point Nos. 25 and 29.
  • the system may calculate the distance between the two eyes (between O1 and O2) to determine the eye width and record it as L2.
  • the seventh eigenvalue is D07 = L1/L2 (see the sketch below) .
  • the seventh positive eigenvalue corresponding to a small mouth is P71.
  • the seventh negative eigenvalue corresponding to a big mouth is P72.
  • the eigenvalue of the mouth is T07 = (D07-P72) / (P71-P72) .
  • the system may calculate the seventh positive eigenvalue (corresponding to a small mouth) P71 using the same method used to calculate the seventh eigenvalue D07.
  • the system may also calculate the seventh negative eigenvalue (corresponding to a big mouth) P72 using the same method used to calculate the seventh eigenvalue D07.
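As a concrete illustration of the seventh eigenvalue referenced above, the following is a minimal Python sketch. The patent specifies no implementation language; the `pts` dictionary mapping feature point numbers to (x, y) positions, the helper names, and the sample eigenvalues P71/P72 are assumptions for illustration.

```python
import math

def midpoint(a, b):
    """Midpoint of two (x, y) feature points."""
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

def dist(a, b):
    """Euclidean distance between two (x, y) feature points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mouth_eigenvalue(pts, P71, P72):
    """Seventh-eigenvalue sketch; `pts` maps feature point numbers to (x, y)."""
    L1 = dist(pts[46], pts[52])        # mouth width: corner to corner
    O1 = midpoint(pts[17], pts[21])    # center of the left eye
    O2 = midpoint(pts[25], pts[29])    # center of the right eye
    L2 = dist(O1, O2)                  # distance between the two eye centers
    D07 = L1 / L2                      # the seventh eigenvalue
    return (D07 - P72) / (P71 - P72)   # T07
```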
  • the facial element processing module 301 may calculate the distance from the center of the two eyes to the tip of the nose L3, from the tip of the nose to the center of the bottom lip L4, and from the center of the bottom lip to the bottom tip of the chin L5. The system may then determine the eighth eigenvalue.
  • 2 (e) shows the feature points of the left eye. There are 8 feature points, which are feature points Nos. 17-24.
  • 2 (f) shows the feature points of the right eye. There are 8 feature points, which are feature points Nos. 25-32.
  • the system may calculate the center of the inner eye corners O3 based on point Nos. 21 and 29. 2 (g) shows the feature points of the nose.
  • There are 13 feature points, which are feature points Nos. 33-45.
  • the system may determine the distance L3 between the center of the inner eye corners O3 and the tip of the nose (point No. 33) .
  • 2 (h) shows the feature points of the mouth.
  • There are 22 feature points, which are feature points Nos. 46-67.
  • the system may determine the distance L4 between the tip of the nose (point No. 33) and the center of the bottom lip (point No. 60) . Further, the system may determine the distance L5 between the center of the bottom lip (point No. 60) and the tip of the chin (point No. 78) .
  • the system may determine the eighth eigenvalue D08 based on the distances L3, L4, and L5.
  • the eighth positive eigenvalue corresponding to a well-proportioned face is P81.
  • the eighth negative eigenvalue corresponding to a poorly proportioned face is P82.
  • the eigenvalue of the face proportion is T08 = (D08-P82) / (P81-P82) .
  • the system may calculate the eighth positive eigenvalue (corresponding to a well-proportioned face) P81 using the same method used to calculate the eighth eigenvalue D08.
  • the system may also calculate the eighth negative eigenvalue (corresponding to a poorly proportioned face) P82 using the same method used to calculate the eighth eigenvalue D08.
  • the facial element processing module 301 may determine the ninth eigenvalue.
  • 2 (b) shows the feature points on the edge of the human face.
  • the system may calculate the angle θ formed by point Nos. 68, 78, and 88 (point No. 78 being the vertex of the angle) .
  • the ninth eigenvalue is D09 = θ (see the sketch below) .
  • the ninth positive eigenvalue corresponding to a small face is P91.
  • the ninth negative eigenvalue corresponding to a large face is P92.
  • the eigenvalue of the face size is T09 = (D09-P92) / (P91-P92) .
  • the system may calculate the ninth positive eigenvalue (corresponding to a small face) P91 using the same method used to calculate the ninth eigenvalue D09.
  • the system may also calculate the ninth negative eigenvalue (corresponding to a large face) P92 using the same method used to calculate the ninth eigenvalue D09.
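A minimal sketch of the ninth-eigenvalue computation described above, assuming feature point positions are available as (x, y) pairs in a `pts` dictionary; the function name and the use of radians are illustrative choices, not specified by the disclosure.

```python
import math

def face_size_eigenvalue(pts, P91, P92):
    """Ninth-eigenvalue sketch: the angle at the chin tip (point No. 78)
    between the rays toward points No. 68 and No. 88."""
    ax, ay = pts[68][0] - pts[78][0], pts[68][1] - pts[78][1]
    bx, by = pts[88][0] - pts[78][0], pts[88][1] - pts[78][1]
    cos_theta = ((ax * bx + ay * by) /
                 (math.hypot(ax, ay) * math.hypot(bx, by)))
    D09 = math.acos(max(-1.0, min(1.0, cos_theta)))  # the angle, in radians
    return (D09 - P92) / (P91 - P92)                 # T09
```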
  • the system calculates the following eigenvalues of the facial element images: eye eigenvalue T01, eyebrow eigenvalue T02, nose eigenvalue T03, pupil eigenvalue T04, skin tone eigenvalue T05, skin texture eigenvalue T06, mouth eigenvalue T07, face proportion eigenvalue T08, and face size eigenvalue T09.
  • eigenvalues generally fall between 0 and 1. The closer an eigenvalue is to 0, the closer the image is to the negative sample target image; the closer an eigenvalue is to 1, the closer the image is to the positive sample target image. For example, if an eye eigenvalue is a negative number, then the eye in the facial image file is smaller than even the eye in the negative (small eye) sample image.
  • the system for processing facial images may apply weights to the eigenvalues.
  • a pre-determined weighting scheme selects, among the 9 eigenvalues (eye eigenvalue T01, eyebrow eigenvalue T02, nose eigenvalue T03, pupil eigenvalue T04, skin tone eigenvalue T05, skin texture eigenvalue T06, mouth eigenvalue T07, face proportion eigenvalue T08, and face size eigenvalue T09) , which eigenvalues receive weight factors. For example:
  • G00 = 40 + min (T01, T02, T04, T06, T08, T09) *30 + (sum (T01, T02, T04, T06, T08, T09) - min (T01, T02, T04, T06, T08, T09) ) *30.
  • G11 = 40 + min (T01, T03, T04, T05, T06, T07, T08, T09) *30 + (sum (T01, T03, T04, T05, T06, T07, T08, T09) - min (T01, T03, T04, T05, T06, T07, T08, T09) ) *30 (see the sketch below) .
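The G00 and G11 formulas above share one pattern: a base of 40 points, 30 times the weakest selected eigenvalue, and 30 times the sum of the remaining selected eigenvalues. A short Python sketch of that pattern follows; the eigenvalue numbers are made up for the example.

```python
def weighted_score(T, selected):
    """Scoring pattern shared by G00 and G11 above: a base of 40 points,
    30 times the weakest selected eigenvalue, and 30 times the sum of the
    remaining selected eigenvalues."""
    vals = [T[name] for name in selected]
    worst = min(vals)
    return 40 + worst * 30 + (sum(vals) - worst) * 30

# Hypothetical eigenvalues, purely for illustration.
T = {"T01": 0.7, "T02": 0.6, "T03": 0.5, "T04": 0.8,
     "T05": 0.6, "T06": 0.5, "T07": 0.7, "T08": 0.9, "T09": 0.4}
G00 = weighted_score(T, ["T01", "T02", "T04", "T06", "T08", "T09"])
G11 = weighted_score(T, ["T01", "T03", "T04", "T05", "T06", "T07", "T08", "T09"])
```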
  • Embodiments consistent with the present disclosure provide methods and systems for processing facial image data.
  • the system may determine the eigenvalues of pre-determined facial elements based on multiple pre-determined feature points of facial elements, each pre-determined facial element corresponding to multiple feature points. Instead of using the distances between different facial elements to calculate facial features, the system determines the facial features by using the eigenvalue of the facial elements. As a result, the system improves the accuracy of the facial recognition process.
  • the system may compute the positions of the pre-set feature points of a facial element to determine an eigenvalue of the facial element.
  • the system may further compute the standard deviation between the eigenvalue of the facial element and the positive/negative sample eigenvalues of the same facial element.
  • the system may further apply a weighting strategy to obtain the results of the facial image process.
  • the system may further display the results of the facial image data process on a monitor. Embodiments consistent with the present disclosure improve the accuracy of facial image processing and assess the degree of attractiveness with flexibility.
  • the facial element processing module 301 as shown in Figure 4 may implement the method shown in Figure 1.
  • the method described in relation to Figure 1 may be implemented by servers for processing facial images.
  • the components of the facial element processing module 301 can also be understood in relation to the method described in Figure 1.
  • embodiments of the present disclosure provide a user terminal, which may include the components described in Figures 3 and 4.
  • the functions of the user terminal may also be understood in relation to the embodiments described in Figures 1-4.
  • one or more non-transitory storage media storing a computer program are provided to implement the systems and methods for processing facial image data.
  • the one or more non-transitory storage media may be installed in a computer or provided separately from a computer.
  • a computer may read the computer program from the storage medium and execute the program to perform the methods consistent with embodiments of the present disclosure.
  • the storage medium may be a magnetic storage medium, such as hard disk, floppy disk, or other magnetic disks, a tape, or a cassette tape.
  • the storage medium may also be an optical storage medium, such as optical disk (for example, CD or DVD) .
  • the storage medium may further be a semiconductor storage medium, such as DRAM, SRAM, EPROM, EEPROM, flash memory, or a memory stick.
  • the system for processing facial images may apply different weight factors to the eigenvalues of different facial elements depending on the gender corresponding to the facial image.
  • the system for processing facial images may apply different weight factors depending on other characteristics of the facial image, or a combination of the characteristics of the facial image.
  • the system for processing facial images can assess facial images more accurately, taking into consideration various characteristics (e.g., gender, race) associated with the facial image.
  • the system for processing facial images may apply different weight factors to the eigenvalues of different facial elements depending on the age associated with the facial image.
  • the system may create an age determination template from the facial images in its training database, by sorting out the training facial images by age to learn the age specific facial characteristics.
  • the system may then apply the template to the received image to determine the age range of the facial image.
  • the system may further apply different weight factors to the eigenvalue of different facial elements depending on the age associated with the facial image.
  • the system for processing facial images may apply different weight factors to the eigenvalues of different facial elements depending on the race associated with the facial image.
  • the system may create a race determination template from the facial images in its training database. The system may then apply the template to the received image to determine the race associated with the facial image. The system may further apply different weight factors to the eigenvalue of different facial elements depending on the race associated with the facial image.
  • the system for processing facial images may apply different weight factors to the eigenvalues of different facial elements depending on a combination of the gender and race associated with the facial image (see the sketch below) .
  • the system may create a race determination template and a gender determination template from the facial images in its training database.
  • the system may then apply the race determination template to the received image to determine the race associated with the facial image and apply the gender determination template to determine the gender associated with the facial image.
  • the system may further apply different weight factors to the eigenvalue of different facial elements depending on the race and gender associated with the facial image.
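A minimal sketch of the attribute-dependent weighting described above. The weight values, attribute labels, and the `WEIGHT_SETS` table are illustrative assumptions; the disclosure states only that pre-determined weight factors are selected based on the attributes returned by the determination templates.

```python
# Illustrative weight table keyed by the attributes the templates return;
# the labels and numbers below are assumptions, not values from the patent.
WEIGHT_SETS = {
    ("female", "race_a"): {"T01": 1.2, "T05": 1.1, "T07": 1.0},
    ("male", "race_a"): {"T03": 1.2, "T06": 1.1, "T09": 1.0},
    # ... one entry per (gender, race) combination supported by the system
}

def apply_weights(eigenvalues, gender, race):
    """Select the weight set for the detected (gender, race) pair and scale
    each eigenvalue; unlisted eigenvalues keep a neutral weight of 1.0."""
    weights = WEIGHT_SETS[(gender, race)]
    return {name: value * weights.get(name, 1.0)
            for name, value in eigenvalues.items()}
```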


Abstract

A method and system for processing facial images are disclosed. The method includes obtaining pre-set feature points from an element of a target facial image; and determining a pre-set feature eigenvalue (E) based on the feature points associated with the facial element. The method further includes obtaining a positive eigenvalue (PE) corresponding to a positive sample facial element image; obtaining a negative eigenvalue (NE) corresponding to a negative sample facial element image; determining a standard deviation of the determined eigenvalue associated with the facial element and the positive and negative eigenvalues; and determining a target eigenvalue based on the standard deviation. The method also includes applying a weight factor to the target eigenvalue; determining a result from processing the target facial image based on the weighted eigenvalue; and presenting the result to a user.

Description

METHODS AND SYSTEMS FOR PROCESSING FACIAL IMAGES
CROSS-REFERENCES TO RELATED APPLICATIONS
Related Applications
This application is based upon and claims the benefit of priority from Chinese Patent Application No. 201310636576.2 filed on November 27, 2013, the entire content of which is incorporated herein by reference.
FIELD OF THE TECHNOLOGY
The present disclosure relates to image processing and, more particularly, to methods and systems for processing human facial images.
BACKGROUND
With the development of digital cameras, smart phones, and video cameras, people have moved beyond just taking photos using cameras. Camera users often edit the photos to achieve desired effects. For example, a user may lighten the skin tone or smooth out the skin texture of a human face in a photo to generate a more appealing image. Technologies related to facial recognitions provide users with various facial image models that can be used to edit facial images. In one example, a user may obtain a facial image model that reflects the facial characteristics of a celebrity and edit photos using the facial image model.
To assess the appeal of a facial image, an image processing system may apply various facial recognition methods. Such methods may determine the position of the center of the eyes, nose, lips, and then calculate the ratio of the distances between these center positions, such as the ratio of the distance between the center of eyes and the nose over the distance between the nose and the lips. Based on the determined ratios, the users may compare the calculated ratios to a facial image of the  conventionally recognized “ideal” facial ratios to assess the degree of attractiveness of a face. However, the current facial image processing systems often use a single position for each facial element (e.g., eyes, nose, etc. ) , thus are not accurate and cannot dynamically assess the degree of attractiveness of facial images.
The disclosed method and system are directed to solve one or more problems set forth above and other problems.
BRIEF SUMMARY OF THE DISCLOSURE
Embodiments consistent with the present disclosure provide a method, system, mobile device, or a server for processing facial images.
One aspect of the present disclosure provides a method for processing facial images. The method includes obtaining pre-selected feature points from an element of a target facial image; and determining a pre-selected feature eigenvalue (E) based on the feature points associated with the facial element. The method further includes obtaining a positive eigenvalue (PE) corresponding to a positive sample facial element image; obtaining a negative eigenvalue (NE) corresponding to a negative sample facial element image; determining a standard deviation of determined eigenvalue associated with the facial element and the positive and negative eigenvalues; and determining a target eigenvalue based on the standard deviation. The method also includes applying a weight factor to the target eigenvalue; determining a result from processing the target facial image based on the weighted eigenvalue; and presenting the result to a user.
Another aspect of the present disclosure provides a system for processing facial images. The system includes a facial element processing module configured to obtain pre-selected feature points from an element of a target facial image; and determine a pre-selected feature eigenvalue (E) based on the feature points associated with the facial element. The system also includes an eigenvalue processing module configured to obtain a positive eigenvalue (PE) corresponding to a positive sample facial element image; obtain a negative eigenvalue (NE)  corresponding to a negative sample facial element image; determining a standard deviation of determined eigenvalue associated with the facial element and the positive and negative eigenvalues; and determine a target eigenvalue based on the standard deviation. The system further includes a resulting image processing module configured to apply a weight factor to the target eigenvalue; determine a result from processing the target facial image based on the weighted eigenvalue; and present the result to a user.
Other aspects of the present disclosure can be understood by those skilled in the art in light of the description, the claims, and the drawings of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
To illustrate embodiments of the invention, the following are a few drawings illustrating embodiments consistent with the present disclosure.
Figure 1 is a flow chart of a method for processing facial images implemented by embodiments consistent with the present disclosure;
Figure 2 is a diagram showing the feature points of a facial image implemented by an embodiment consistent with the present disclosure;
Figure 3 is a block diagram illustrating a system for processing facial images consistent with the present disclosure; and
Figure 4 is another block diagram showing the modules of a system for processing facial images consistent with the present disclosure.
DETAILED DESCRIPTION
Reference will now be made in detail to exemplary embodiments of the invention, which are illustrated in the accompanying drawings. Hereinafter, embodiments consistent with the disclosure will be described with reference to drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. It is apparent that the  described embodiments are some but not all of the embodiments of the present invention. Based on the disclosed embodiment, persons of ordinary skill in the art may derive other embodiments consistent with the present disclosure, all of which are within the scope of the present invention.
In the present disclosure, devices used to process facial images include, but are not limited to, digital cameras, video cameras, smart phones, laptop computers, Personal Digital Assistants, and other terminals with cameras. The system for processing facial images includes user terminals that a user may use to process facial image data. A facial image may be any image that includes a human face. The facial image may be recorded by a camera or a user terminal with a camera. The facial image may also be a facial image that is extracted from other images, such as a photo of a street scene.
Specifically, the system for processing facial images may implement a method with the following steps. (1) The system may use an off-line training module, such as an off-line training module of a facial recognition system, to collect a large number (e.g., more than 10,000) of facial and non-facial images. The system may then extract the Haar-like features from the images. Further, the system may use an adaptive boosting classifier, which combines weak classifiers to increase classification accuracy, to select the optimal Haar-like features, the related threshold values, and weight factors. The system may then implement a cascade classifier. (2) A user may submit an image to the system. The system may decode the image data. The system may then send the decoded image data to a facial recognition system. (3) The facial recognition system then uses an online classifier to scan the decoded image using windows of various sizes at different positions of the image. The facial recognition system may extract the Haar-like features. The system may send the feature data in each search window to the cascade classifier to determine whether the window includes a facial image. The system may consolidate all determination results based on each position of the window. The system may then output the position and size of the human faces in the submitted image, and retrieve the facial images.
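A minimal sketch of step (3), the cascade-classifier scan, assuming OpenCV as the implementation library; the patent names no library, so the pre-trained cascade file, the input file name, and the parameter values are OpenCV conventions and assumptions, not part of the disclosure.

```python
import cv2

# OpenCV's pre-trained frontal-face Haar cascade stands in for the trained
# cascade classifier described above.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("submitted_photo.jpg")      # hypothetical submitted image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detectMultiScale slides windows of various sizes across the image and
# returns the position and size of each window judged to contain a face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
face_crops = [image[y:y + h, x:x + w] for (x, y, w, h) in faces]
```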
Embodiments consistent with the present disclosure provide methods and systems for processing facial image data. The system may compute the positions of the pre-set feature points of a facial element to determine an eigenvalue of the facial element. The system may further compute the deviation between the eigenvalue of the facial element (based on the pre-set points) and the positive/negative sample eigenvalues. The system may further apply a weighting strategy to obtain the result of the facial image data process. The system may further display the result of the facial image processing on a monitor. Embodiments consistent with the present disclosure can improve the accuracy of facial image processing and assess the degree of attractiveness with flexibility. Embodiments consistent with the present disclosure may be used in a system for evaluating facial images from photos.
Figure 1 and Figure 2 further describe the method for processing facial image data. Figure 1 shows a flow chart of a method for processing facial images consistent with the present disclosure. The method includes steps S101-S103.
In step S101, the system for processing facial images may obtain the pre-set feature points of human face elements. The system may compute an eigenvalue of the pre-set facial feature positions. The facial elements may include a left eye, a right eye, a left eyebrow, a right eyebrow, a nose, a mouth, and an edge of a face.
In one embodiment, the pre-set feature points of the elements of a human face may be determined by using a facial matching template to process the selected facial element. The facial matching template may be determined by an Active Shape Model (ASM) .
The ASM is based on the Point Distribution Model (PDM) . The ASMs are statistical models of the shape of objects, which deform to fit to an example of the object in a new image (e.g., target facial image) . The shapes are constrained by the PDM to vary only in ways seen in a training set of shape examples. The shape of an object is represented by a set of points (controlled by the shape model) . The ASM algorithm aims to match the shape model to a new image.
In the present disclosure, the target image is a target human facial image. To create ASM training sets, users or software developers need to collect a large number of facial images (e.g., over 10,000 images) , then manually annotate and record the positions of a set of feature points on the facial images in the training set. Further, to prepare the training set, the system needs to calculate the eigenvalue vector of the feature points based on the gray scale model of the feature points.
Using the ASMs, a facial recognition system may first place a shape model onto a target facial image and then fit the shape model to the target image by adjusting the positions of the feature points. The suggested positions of the feature points are determined based on the minimum value of local gray model Mahalanobis distance. After calculating all suggested positions for the feature points, the system may determine a suggested shape. The system may then fit the suggested shape to the target image. The system may repeat such iterations until convergence is achieved. In embodiments consistent with the present disclosure, the system may thus determine the shape of a target facial image based on the facial image templates (shapes) stored in the system.
In one embodiment, the system may pre-set the number of feature points for each facial element of a facial image, such as 88, 99, 155 points. The number of feature points is determined by the feature points in the training set. If the shape template uses a shape model from the training set with 88 feature points, then the target facial image would have 88 pre-set feature points. In general, more feature points indicate a more accurate image recognition or assessment process.
In one embodiment, as shown in Figure 2, the system for processing facial images provides a diagram to show the feature points on a facial image. 2 (a) shows all feature points (88) on an exemplary facial image. 2 (b) shows the feature points on the edge of the human face. There are 21 feature points, which are feature points Nos. 68 to 88. 2 (c) shows the feature points of the left eyebrow. There are 8 feature points, which are feature points Nos. 1-8. 2 (d) shows the feature points of the right eyebrow. There are 8 feature points, which are feature points Nos. 9-16. 2 (e) shows the feature points of the left eye. There are 8 feature points, which are feature points Nos. 17-24. 2 (f) shows the feature points of the right eye. There are 8 feature points, which are feature points Nos. 25-32. 2 (g) shows the feature points of the nose. There are 13 feature points, which are feature points Nos. 33-45. 2 (h) shows the feature points of the mouth. There are 22 feature points, which are feature points Nos. 46-67.
In one embodiment, in step S101, the system may calculate the eigenvalue of each facial element based on the feature points of the facial element. Specifically, the system may calculate the surface area, gray scale value, etc. based on the feature points. For example, in 2 (c) , the left eyebrow is defined by 8 feature points, which are feature point Nos. 1-8. Feature point No. 8 is the top of the eyebrow, which forms triangles with any two of the feature point Nos. 1-7. The system may calculate the area of each triangle and add the triangle areas together to determine the area of the left eyebrow. 2 (e) shows the 8 feature points for the left eye, feature point Nos. 17-24. The system may calculate the gray scale value of the straight line connecting feature point No. 17 to No. 21.
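A short Python sketch of the triangle-fan area computation described for the left eyebrow above, assuming feature point positions are available as (x, y) pairs; the coordinates below are made-up placeholders for what the fitted shape model would supply.

```python
def triangle_area(a, b, c):
    """Area of triangle a-b-c via the cross product of two edge vectors."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                     - (c[0] - a[0]) * (b[1] - a[1]))

def fan_area(apex, boundary):
    """Sum the areas of the triangles formed by the apex point and each
    consecutive pair of boundary points, as described for the eyebrow."""
    return sum(triangle_area(apex, boundary[i], boundary[i + 1])
               for i in range(len(boundary) - 1))

# Made-up (x, y) positions for left-eyebrow feature points Nos. 1-8; in
# practice these come from the fitted shape model.
pts = {1: (10, 40), 2: (16, 37), 3: (22, 35), 4: (28, 34),
       5: (34, 35), 6: (40, 37), 7: (46, 40), 8: (28, 30)}
left_eyebrow_area = fan_area(pts[8], [pts[n] for n in range(1, 8)])
```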
In step S102, the system may obtain the eigenvalues of the positive and negative facial element sample images. The system may further calculate the standard deviation between the eigenvalue determined in step S101 and the eigenvalues corresponding to the positive and negative facial element images.
In one embodiment, in step S102, the system may first obtain the eigenvalues of the positive and negative facial element sample images, the facial element samples corresponding to the selected facial elements. Specifically, the system may extract the positive/negative sample facial element images from a database of sample facial images. The system may further classify the facial elements to obtain the positive/negative sample facial element images.
Exemplary positive/negative facial element images may be a positive facial element (eye) image of a big eye, a negative facial element image of a small eye, a positive facial element image of a big nose, a negative facial element image of a small nose, etc. The positive/negative eigenvalues may be determined by applying a facial template to the positive/negative facial element images. For example, the positive/negative eigenvalues may be the eigenvalue for a positive facial element image of a big eye, the eigenvalue of a negative facial element image of a small eye, the  eigenvalue of a positive facial element image of a big nose, the eigenvalue of a negative facial element image of a small nose, etc.
In one embodiment, in step S102, the system may further calculate the standard deviation between the pre-determined facial feature’s eigenvalue and the eigenvalues corresponding to the positive/negative facial element images to determine the target eigenvalues. The target eigenvalues may include the eigenvalues for eyes, eigenvalues for pupils, eigenvalues for eyebrows, eigenvalues for a nose, eigenvalues for a mouth, eigenvalues for a light skin tone, or eigenvalues for a smooth skin texture. The target eigenvalue may be determined by: (pre-set facial element eigenvalue - eigenvalue of negative sample) / (eigenvalue of positive sample - eigenvalue of negative sample) .
In step S103, the system may apply pre-set weights to target eigenvalues to determine the result of the facial image processing. The system may further present the results on a display.
In one embodiment, in step S103, the system may apply pre-set weights to target eigenvalues to determine the result of the facial image processing. The system may further present the results on a display. Specifically, the pre-determined weights may be based on the gender corresponding to the facial image, or based on pre-determined values. The system may further present the results on a display. For example, the system may use a facial image display module to display the results. The results may be a facial image, an assessment score, assessment scores for facial elements, etc. For example, the display of results may be: “Your face has a beauty score of XX (out of 100) . You have big eyes and smooth skin. Your beauty ranking is at XX%, ” etc.
In embodiments consistent with the present disclosure, after step S101, the system may execute the following steps. The system may determine the gender of a facial image based on a gender determination template. For example, the system may establish a gender determination template by pre-processing training images (light compensation, rotation, etc. ) to extract Gabor features. The system may then convert the two-dimensional matrix of the training sample information to a one-dimensional vector. The system thus decreases the complexity of the process. The system may then feed the vectors to an SVM (Support Vector Machine) classifier to train the image recognition process and obtain the gender determination template. Embodiments consistent with the present disclosure input a facial image to the facial recognition process and determine the gender of the facial image using the gender determination template.
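A hedged sketch of the gender-template training pipeline above, using OpenCV's Gabor kernels and scikit-learn's SVM; the kernel parameters, the 8x8 downsampling, and the label encoding are illustrative assumptions rather than values from the disclosure.

```python
import numpy as np
import cv2
from sklearn.svm import SVC

def gabor_features(gray_face):
    """Extract Gabor responses at four orientations and flatten them into a
    one-dimensional vector, mirroring the 2-D-to-1-D conversion above."""
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):
        kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0)
        resp = cv2.filter2D(gray_face, cv2.CV_32F, kern)
        feats.append(cv2.resize(resp, (8, 8)).ravel())
    return np.concatenate(feats)

def train_gender_template(train_faces, train_labels):
    """`train_faces`: pre-processed grayscale face crops of equal size;
    `train_labels`: hypothetical encoding, e.g. 0 = female, 1 = male."""
    X = np.array([gabor_features(f) for f in train_faces])
    return SVC(kernel="linear").fit(X, train_labels)
```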
In step S103, the system may apply weights to target eigenvalues to determine the facial image processing results. Pre-determined weights may include the weights for eigenvalues for eyes, pupils, eyebrows, a nose, a mouth, a light skin tone, or a smooth skin texture, etc.
In some embodiments, before step S101, the system may execute the following steps. The system may extract features from the images of the sample facial image database, and classify the facial elements to obtain the positive/negative sample facial element image. The system may use the facial matching template to extract features from sample images. The positive/negative sample facial element images may be a positive facial element image of a big eye, a negative facial element image of a small eye, a positive facial element image of a big nose, a negative facial element image of a small nose, etc. Further, the system may update in real time the positive/negative sample images in the database. For example, if the system determines that a newly extracted big eye sample image 002 includes an eye bigger than that of a big eye sample image 001, the system may update the database and use big eye sample image 002 as the sample big eye image in subsequent processes.
Embodiments consistent with the present disclosure provide methods and systems for processing facial image data. The system may determine the eigenvalues of pre-determined facial elements based on multiple pre-determined feature points of facial elements, each pre-determined facial element corresponding to multiple feature points. Instead of using the distances between different facial elements to assess facial features, the system determines the facial features by using the eigenvalue of the facial elements. As a result, the system improves the accuracy of the facial assessment process.
The system may compute the positions of the pre-set feature points of a facial element to determine an eigenvalue of the facial element. The system may further compute the standard deviation between the eigenvalue of the facial element and the positive/negative sample eigenvalues of the same facial element. The system may further apply a weighting strategy to obtain the results of the facial image process. The system may further display the results of the facial image data process on a monitor. Embodiments consistent with the present disclosure improve the accuracy of facial image processing and assess the degree of attractiveness with flexibility.
Figure 2 shows an exemplary method for processing facial image data consistent with the present disclosure. In one embodiment, as shown in Figure 2, the system for processing facial images provides a diagram to show the feature points on a facial image. 2 (a) shows all feature points (88) on a facial image. 2 (b) shows the feature points on the edge of the human face. There are 21 feature points, which are feature points Nos. 68 to 88. 2 (c) shows the feature points of the left eyebrow. There are 8 feature points, which are feature points Nos. 1-8. 2 (d) shows the feature points of the right eyebrow. There are 8 feature points, which are feature points Nos. 9-16. 2 (e) shows the feature points of the left eye. There are 8 feature points, which are feature points Nos. 17-24. 2 (f) shows the feature points of the right eye. There are 8 feature points, which are feature points Nos. 25-32. 2 (g) shows the feature points of the nose. There are 13 feature points, which are feature points Nos. 33-45. 2 (h) shows the feature points of the mouth. There are 22 feature points, which are feature points Nos. 46-67.
In one embodiment, the method for determining the eigenvalues of a pre-determined facial element may include the following steps. First, the system may calculate the left eye’s feature surface area and the right eye’s feature surface area based on the multiple feature points of the two eyes. The system may compare the feature surface area of the two eyes, and identify the eye with the larger feature surface area as the target eye. The system may calculate the ratio of the feature surface area of the target eye over the feature surface area of the face edge and determine a first eigenvalue.
The pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues. For example, an eigenvalue corresponding to a big eye may be a first positive sample eigenvalue; and an eigenvalue corresponding to a small eye may be a  first negative sample eigenvalue. The eigenvalue of eyes may be determined by (first eigenvalue -first negative sample eigenvalue) / (first positive sample eigenvalue -first negative sample eigenvalue) .
In Figure 2, 2 (b) shows the feature points on the edge of the human face. There are 21 feature points, which are feature points Nos. 68 to 88. The system may calculate the surface area of the polygon formed by feature point Nos. 68-88 and record half of the surface area as S00. 2 (e) shows the feature points of the left eye. There are 8 feature points, which are feature points Nos. 17-24. The system may calculate the surface area of the polygon formed by feature point Nos. 17-24, and record the surface area as S11. 2 (f) shows the feature points of the right eye. There are 8 feature points, which are feature points Nos. 25-32. The system may calculate the surface area of the polygon formed by feature point Nos. 25-32, and record the surface area as S12.
Next, the system may determine the difference M01 of S11 and S12 (M01=S11-S12) . If M01 is greater than 0, then the left eye (corresponding to S11) is the target eye. If M01 is less than 0, then the right eye (corresponding to S12) is the target eye.
The first eigenvalue is D01=max (S11, S12) /S00. The first positive eigenvalue (corresponding to a big eye) is P10. The first negative eigenvalue (corresponding to a small eye) is P11. The eigenvalue of the eyes T01= (D01-P11) / (P10-P11) .
In one embodiment, the system may calculate the first positive eigenvalue (corresponding to a big eye) P10 using the same method used to calculate the first eigenvalue D01. The system may also calculate the first negative eigenvalue (corresponding to a small eye) P11 using the same method used to calculate the first eigenvalue D01.
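Putting the first-eigenvalue steps together, a minimal Python sketch follows. The shoelace formula is one standard way to compute the polygon surface areas; the patent does not prescribe a specific area formula, and the `pts` dictionary of (x, y) positions is an assumption.

```python
import numpy as np

def polygon_area(points):
    """Surface area of a polygon via the shoelace formula; `points` is an
    ordered list of (x, y) feature-point positions."""
    x = np.array([p[0] for p in points], dtype=float)
    y = np.array([p[1] for p in points], dtype=float)
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def eye_eigenvalue(pts, P10, P11):
    """First-eigenvalue computation as described above; `pts` maps feature
    point numbers to (x, y), and P10/P11 come from the sample images."""
    S00 = 0.5 * polygon_area([pts[n] for n in range(68, 89)])  # half the face-edge area
    S11 = polygon_area([pts[n] for n in range(17, 25)])        # left eye
    S12 = polygon_area([pts[n] for n in range(25, 33)])        # right eye
    D01 = max(S11, S12) / S00          # the larger eye is the target eye
    return (D01 - P11) / (P10 - P11)   # T01
```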
In addition, the method for calculating a pre-determined facial element eigenvalue may further determine the second eigenvalue. The system may calculate a left eyebrow feature surface area, a right eyebrow feature surface area, and a face edge feature surface area.
Next, the system may compare the surface areas of the two eyebrows. The system may identify the eyebrow with the larger feature surface area as the target eyebrow. The system may calculate the ratio of the feature surface area of the target eyebrow over the feature surface area of the face edge and determine a second eigenvalue.
The pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues. For example, an eigenvalue corresponding to a thick eyebrow may be a second positive sample eigenvalue; and an eigenvalue corresponding to a thin eyebrow may be a second negative sample eigenvalue. The eigenvalue of eyebrows may be determined by (second eigenvalue -second negative sample eigenvalue) / (second positive sample eigenvalue -second negative sample eigenvalue) .
In Figure 2, 2 (b) shows the feature points on the edge of the human face. There are 21 feature points, which are feature points Nos. 68 to 88. The system may calculate the surface area of the polygon formed by feature point Nos. 68-88 and record half of the surface area as S00. 2 (c) shows the feature points of the left eyebrow. There are 8 feature points, which are feature points Nos. 1-8. The system may calculate the surface area of the polygon formed by feature point Nos. 1-8, and record it as S21. 2 (d) shows the feature points of the right eyebrow. There are 8 feature points, which are feature points Nos. 9-16. The system may calculate the surface area of the polygon formed by feature point Nos. 9-16, and record it as S22.
Next, the system may determine the difference M02 of S21 and S22 (M02=S21-S22) . If M02 is greater than 0, then the left eyebrow (corresponding to S21) is the target eyebrow. If M02 is less than 0, then the right eyebrow (corresponding to S22) is the target eyebrow.
The second eigenvalue is D02=max (S21, S22) /S00. The second positive eigenvalue (corresponding to a thick eyebrow) is P20. The second negative eigenvalue (corresponding to a thin eyebrow) is P21. The eigenvalue of the eyebrows is T02= (D02-P21) / (P20-P21) .
In one embodiment, the system may calculate the second positive eigenvalue (corresponding to a thick eyebrow) P20 using the same method used to calculate the second  eigenvalue D02. The system may also calculate the second negative eigenvalue (corresponding to a thin eyebrow) P21 using the same method used to calculate the second eigenvalue D02.
In addition, the method for calculating a pre-determined facial element eigenvalue may further determine a third eigenvalue. The system may calculate a nose feature area and a facial edge surface area.
Next, the system may compare the surface area of the nose and the surface area of the face edge to determine the third eigenvalue. The system may calculate the ratio of the feature surface area of the nose over the feature surface area of the face edge and determine the third eigenvalue.
The pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues. For example, an eigenvalue corresponding to a big nose may be a third positive sample eigenvalue; and an eigenvalue corresponding to a small nose may be a third negative sample eigenvalue. The eigenvalue of nose may be determined by (third eigenvalue -third negative sample eigenvalue) / (third positive sample eigenvalue –third negative sample eigenvalue) .
In Figure 2, 2 (b) shows the feature points on the edge of the human face. There are 21 feature points, which are feature points Nos. 68 to 88. The system may calculate the surface area of the polygon formed by feature point Nos. 68-88 and record half of the surface area as S00. 2 (g) shows the feature points of the nose. There are 13 feature points, which are feature points Nos. 33-45. The system may calculate the surface area of the polygon formed by feature point Nos. 33-45, and record it as S31.
The third eigenvalue is D03=S31/S00. The third positive eigenvalue (corresponding to a big nose) is P30. The third negative eigenvalue (corresponding to a small nose) is P31. The eigenvalue of the nose is T03= (D03-P31) / (P30-P31) .
In one embodiment, the system may calculate the third positive eigenvalue (corresponding to a big nose) P30 using the same method used to calculate the third eigenvalue D03.  The system may also calculate the third negative eigenvalue (corresponding to a small nose) P31 using the same method used to calculate the third eigenvalue D03.
In one embodiment, the method for determining the eigenvalues of pre-determined facial elements may include the following steps. First, the system may calculate the left eye’s feature surface area and the right eye’s feature surface area based on the multiple feature points of the two eyes. The system may compare the feature surface area of the two eyes, and identify the eye with the larger feature surface area as the target eye.
The system may calculate the ratio of the gray scale of the target eye over the gray scale of the pupil of the target eye and determine a fourth eigenvalue.
The pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues. For example, an eigenvalue corresponding to a big pupil may be a fourth positive sample eigenvalue; and an eigenvalue corresponding to a small pupil may be a fourth negative sample eigenvalue. The eigenvalue of pupils may be determined by (fourth eigenvalue -fourth negative sample eigenvalue) / (fourth positive sample eigenvalue -fourth negative sample eigenvalue) .
In Figure 2, 2 (b) shows the feature points on the edge of the human face. There are 21 feature points, which are feature points Nos. 68 to 88. The system may calculate the surface area of the polygon formed by feature point Nos. 68-88 and record half of the surface area as S00. 2 (e) shows the feature points of the left eye. There are 8 feature points, which are feature points Nos. 17-24. The system may calculate the surface area of the polygon formed by feature point Nos. 17-24, and record the surface area as S11. 2 (f) shows the feature points of the right eye. There are 8 feature points, which are feature points Nos. 25-32. The system may calculate the surface area of the polygon formed by feature point Nos. 25-32, and record the surface area as S12.
Next, the system may determine the difference M01 of S11 and S12 (M01=S11-S12) . If M01 is greater than 0, then the left eye (corresponding to S11) is the target eye. If M01 is less than 0, then the right eye (corresponding to S12) is the target eye.
If the left eye is the target eye, the system may obtain the feature points of the left eye (point Nos. 17-21) . The system may draw a straight line between point Nos. 17 and 21 and obtain pixels along the straight line. The system may further convert the obtained pixels into a gray scale value of 0-255. If the right eye is the target eye, the system may determine the gray scale value in the same manner.
The smaller the gray scale value, the darker the corresponding image. The pupil of an eye usually has a smaller gray scale value. In one example, the system may determine that the area inside an eye with a gray scale value of less than 50 is the pupil area. The system may count the number of pixels obtained in the eye area as S41. The system may count the number of pixels obtained in the pupil area (gray scale value < 50) as S42.
The fourth eigenvalue is D04=S42/S41. The fourth positive eigenvalue (corresponding to a big pupil) is P41. The fourth negative eigenvalue (corresponding to a small pupil) is P42. The eigenvalue of the pupils T04= (D04-P42) / (P41-P42) .
In one embodiment, the system may calculate the fourth positive eigenvalue (corresponding to a big pupil) P41 using the same method used to calculate the fourth eigenvalue D04. The system may also calculate the fourth negative eigenvalue (corresponding to a small pupil) P42 using the same method used to calculate the fourth eigenvalue D04.
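A minimal sketch of the fourth-eigenvalue computation, assuming the left eye is the target eye and that `gray_image` is a NumPy array of gray scale values indexed as [row, column]; the sampling density along the line is an illustrative choice.

```python
import numpy as np

def pupil_eigenvalue(gray_image, pts, P41, P42):
    """Fourth-eigenvalue sketch for a left-eye target: sample the pixels on
    the line from point No. 17 to point No. 21 and take the fraction whose
    gray scale value falls below the pupil threshold of 50."""
    (x0, y0), (x1, y1) = pts[17], pts[21]
    n = int(max(abs(x1 - x0), abs(y1 - y0))) + 1     # roughly one sample per pixel
    xs = np.linspace(x0, x1, n).round().astype(int)
    ys = np.linspace(y0, y1, n).round().astype(int)
    samples = gray_image[ys, xs]                     # gray scale values, 0-255
    S41 = samples.size                               # pixels in the eye area
    S42 = int((samples < 50).sum())                  # pixels in the pupil area
    D04 = S42 / S41
    return (D04 - P42) / (P41 - P42)                 # T04
```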
In one embodiment, the method for determining the eigenvalues of pre-determined facial elements may include the following steps. First, the system may calculate the skin feature surface area. The system may then determine an average gray scale of the skin feature surface area to determine a fifth eigenvalue.
The pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues. For example, an eigenvalue corresponding to a light skin tone may be a fifth positive sample eigenvalue; and an eigenvalue corresponding to a darker skin tone may be a fifth negative sample eigenvalue. The eigenvalue of skin may be determined by (fifth eigenvalue - fifth negative sample eigenvalue) / (fifth positive sample eigenvalue - fifth negative sample eigenvalue).
In Figure 2, the system may determine a skin feature surface area based on feature point No. 19 in 2 (e) and feature point No. 46 in 2 (h). The system may draw a straight line between point No. 19 and No. 46, and obtain pixels along the straight line. The system may also calculate the skin surface area based on point No. 27 in 2 (f) and point No. 52 in 2 (h). The system may draw a straight line between point No. 27 and No. 52, and obtain pixels along the straight line. The system may further convert the obtained pixels into gray scale values of 0-255. The system may calculate the average gray scale value of the skin feature area to determine the fifth eigenvalue D05. The fifth positive eigenvalue (corresponding to a light skin tone) is P51. The fifth negative eigenvalue (corresponding to a dark skin tone) is P52. The eigenvalue of the skin tone T05= (D05-P52) / (P51-P52).
In one embodiment, the system may calculate the fifth positive eigenvalue (corresponding to a light skin tone) P51 using the same method used to calculate the fifth eigenvalue D05. The system may also calculate the fifth negative eigenvalue (corresponding to a dark skin tone) P52 using the same method used to calculate the fifth eigenvalue D05.
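The same line-sampling idea yields the skin tone eigenvalue. The sketch below is a minimal illustration, assuming NumPy and that each entry of lines holds the two (x, y) endpoints of one sampling line (e.g., point No. 19 to No. 46 and point No. 27 to No. 52); the names are assumptions.

```python
# Minimal sketch of the skin tone eigenvalue (D05) described above.
import numpy as np

def skin_tone_eigenvalue(gray, lines):
    samples = []
    for (x0, y0), (x1, y1) in lines:
        n = int(np.hypot(x1 - x0, y1 - y0)) + 1
        xs = np.linspace(x0, x1, n).round().astype(int)
        ys = np.linspace(y0, y1, n).round().astype(int)
        samples.append(gray[ys, xs])
    # Average gray scale value over all sampled skin pixels.
    return float(np.concatenate(samples).mean())

# T05 = (D05 - P52) / (P51 - P52), with P51/P52 taken from the light/dark
# skin tone sample images.
```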
In one embodiment, the method for determining the eigenvalues of pre-determined facial elements may include the following steps. First, the system may calculate the face edge feature area. The system may then determine an average gray scale of the face edge surface area to determine a sixth eigenvalue.
The pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues. For example, an eigenvalue corresponding to a smooth skin texture may be a sixth positive sample eigenvalue; and an eigenvalue corresponding to a rough skin texture may be a sixth negative sample eigenvalue. The eigenvalue of skin texture may be determined by (sixth eigenvalue - sixth negative sample eigenvalue) / (sixth positive sample eigenvalue - sixth negative sample eigenvalue).
The system may also use an edge recognition system to detect the edge(s) in the facial image. If the face has dark spots or rough spots, the edges of the spots can be detected. The eyes, nose, mouth, and eyebrows also have corresponding edges.
In Figure 2, 2 (b) shows the feature points along the edge of the face, which are feature point Nos. 68-88 (21 points). The system may use the edge recognition system to detect the edges between point Nos. 68-88. Then the system may take away the edges of the eyes, nose, mouth, and eyebrows. The system may determine an edge feature surface area. The system may convert the edge surface area into gray scale values of 0-255. The system may calculate the average gray scale value of the face edge feature area to determine the sixth eigenvalue D06. The sixth positive eigenvalue (corresponding to a smooth skin texture) is P61. The sixth negative eigenvalue (corresponding to a rough skin texture) is P62. The eigenvalue of the skin texture T06= (D06-P62) / (P61-P62).
In one embodiment, the system may calculate the sixth positive eigenvalue (corresponding to a smooth skin texture) P61 using the same method used to calculate the sixth eigenvalue D06. The system may also calculate the sixth negative eigenvalue (corresponding to a rough skin texture) P62 using the same method used to calculate the sixth eigenvalue D06.
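The disclosure does not name a particular edge recognition system; the sketch below uses OpenCV’s Canny detector as one possible choice. The polygon inputs, the Canny thresholds, and the function name are assumptions, and the result is an average edge intensity over the skin region rather than a definitive implementation of the described method.

```python
# Minimal sketch of the skin texture eigenvalue (D06) described above.
import cv2
import numpy as np

def skin_texture_eigenvalue(gray, face_poly, feature_polys):
    # gray: 8-bit gray scale image; face_poly: points Nos. 68-88;
    # feature_polys: eye/nose/mouth/eyebrow polygons to exclude.
    edges = cv2.Canny(gray, 50, 150)          # edge map, 0 or 255 per pixel
    mask = np.zeros_like(gray)
    cv2.fillPoly(mask, [np.int32(face_poly)], 255)               # keep face
    cv2.fillPoly(mask, [np.int32(p) for p in feature_polys], 0)  # drop features
    return float(edges[mask == 255].mean())   # average over the skin region
```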
In one embodiment, the method for determining the eigenvalues of pre-determined facial elements may include the following steps. The system may calculate the distance from the left eye to the center of the two eyes and the distance from the right eye to the center of the two eyes. The system may calculate the distance from the left corner of the mouth to the center of the mouth and the distance from the right corner of the mouth to the center of the mouth. The system may then calculate a ratio of the distance between the two corners of the mouth to the distance between the two eyes and determine the seventh eigenvalue.
The pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues. For example, an eigenvalue corresponding to a small mouth may be a seventh positive sample eigenvalue; and an eigenvalue corresponding to a big mouth may be a seventh negative sample eigenvalue. The eigenvalue of the mouth feature may be determined by (seventh eigenvalue - seventh negative sample eigenvalue) / (seventh positive sample eigenvalue - seventh negative sample eigenvalue).
In Figure 2, 2 (h) shows the feature points of the mouth. There are 22 feature points, which are feature points Nos. 46-67. The system may calculate the distance between the two corners of the mouth (between point No. 46 and point No. 52) to determine the mouth width and record it as L1. 2 (e) shows the feature points of the left eye. There are 8 feature points, which are feature points Nos. 17-24. 2 (f) shows the feature points of the right eye. There are 8 feature points, which are feature points Nos. 25-32. The system may calculate the center of the left eye O1 based on point Nos. 17 and 21. The system may calculate the center of the right eye O2 based on point Nos. 25 and 29. The system may calculate the distance between the two eyes (between O1 and O2) to determine the eye width and record it as L2. The seventh eigenvalue D07=L1/L2. The seventh positive eigenvalue corresponding to a small mouth is P71. The seventh negative eigenvalue corresponding to a big mouth is P72. The eigenvalue of the mouth T07= (D07-P72) / (P71-P72).
In one embodiment, the system may calculate the seventh positive eigenvalue (corresponding to a small mouth) P71 using the same method used to calculate the seventh eigenvalue D07. The system may also calculate the seventh negative eigenvalue (corresponding to a big mouth) P72 using the same method used to calculate the seventh eigenvalue D07.
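A minimal sketch of this ratio follows, assuming a mapping pts from feature-point numbers to (x, y) coordinates; taking the eye centers O1 and O2 as midpoints of the corner points Nos. 17/21 and 25/29 is one plausible reading of the text.

```python
# Minimal sketch of the mouth eigenvalue D07 = L1 / L2 described above.
import math

def mouth_eigenvalue(pts):
    l1 = math.dist(pts[46], pts[52])   # mouth width between the two corners
    o1 = ((pts[17][0] + pts[21][0]) / 2, (pts[17][1] + pts[21][1]) / 2)
    o2 = ((pts[25][0] + pts[29][0]) / 2, (pts[25][1] + pts[29][1]) / 2)
    l2 = math.dist(o1, o2)             # distance between the two eye centers
    return l1 / l2                     # seventh eigenvalue D07
```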
In one embodiment, the method for determining the eigenvalues of pre-determined facial elements may include the following steps. The system may calculate the distance from the center of the two eyes to the tip of the nose L3, from the tip of the nose to the center of the bottom lip L4, and from the center of the bottom lip to the bottom tip of the chin L5. The system may then determine the eighth eigenvalue.
In Figure 2, 2 (e) shows the feature points of the left eye. There are 8 feature points, which are feature points Nos. 17-24. 2 (f) shows the feature points of the right eye. There are 8 feature points, which are feature points Nos. 25-32. The system may calculate the center of the inner eye corners O3 based on point Nos. 21 and 29. 2 (g) shows the feature points of the nose. There are 13 feature points, which are feature points Nos. 33-45. The system may determine the distance L3 between the center of the inner eye corners O3 and the tip of the nose (point No. 33). 2 (h) shows the feature points of the mouth. There are 22 feature points, which are feature points Nos. 46-67. The system may determine the distance L4 between the tip of the nose (point No. 33) and the center of the bottom lip (point No. 60). Further, the system may determine the distance L5 between the center of the bottom lip (point No. 60) and the tip of the chin (point No. 78). The eighth eigenvalue D08 is then computed from L3, L4, and L5:
[Equation images PCTCN2014089885-appb-000001 to PCTCN2014089885-appb-000003: the formula defining the eighth eigenvalue D08 in terms of the distances L3, L4, and L5; the formula itself is not recoverable from the text.]
The eighth positive eigenvalue corresponding to a well-proportioned face is P81. The eighth negative eigenvalue corresponding to a poorly proportioned face is P82. The eigenvalue of the face proportion T08= (D08-P82) / (P81-P82) .
In one embodiment, the system may calculate the eighth positive eigenvalue (corresponding to a well-proportioned face) P81 using the same method used to calculate the eighth eigenvalue D08. The system may also calculate the eighth negative eigenvalue (corresponding to a poorly proportioned face) P82 using the same method used to calculate the eighth eigenvalue D08.
In one embodiment, the system may then determine the ninth eigenvalue. In Figure 2, 2 (b) shows the feature points on the edge of the human face. There are 21 feature points, which are feature points Nos. 68 to 88. The system may calculate the angle α between point No. 68, No. 88, and No. 78 (point No. 78 being the vertex of the angle). The ninth eigenvalue D09=α. The ninth positive eigenvalue corresponding to a small face is P91. The ninth negative eigenvalue corresponding to a large face is P92. The eigenvalue of the face size T09= (D09-P92) / (P91-P92).
In one embodiment, the system may calculate the ninth positive eigenvalue (corresponding to a small face) P91 using the same method used to calculate the ninth eigenvalue D09. The system may also calculate the ninth negative eigenvalue (corresponding to a large face) P92 using the same method used to calculate the ninth eigenvalue D09.
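A minimal sketch of the angle computation follows, using a dot product at the chin vertex (point No. 78); the pts mapping and the use of degrees are assumptions.

```python
# Minimal sketch of the ninth eigenvalue D09 (the chin angle) described above.
import numpy as np

def face_size_eigenvalue(pts):
    a = np.asarray(pts[68], float) - np.asarray(pts[78], float)
    b = np.asarray(pts[88], float) - np.asarray(pts[78], float)
    cos_alpha = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
    alpha = np.degrees(np.arccos(np.clip(cos_alpha, -1.0, 1.0)))
    return float(alpha)                # D09 = alpha
```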
In the above example, the system calculates the following target eigenvalues of the facial images: eye eigenvalue T01, eyebrow eigenvalue T02, nose eigenvalue T03, pupil eigenvalue T04, skin tone eigenvalue T05, skin texture eigenvalue T06, mouth eigenvalue T07, face proportion eigenvalue T08, and face size eigenvalue T09. The eigenvalues generally fall between 0 and 1. The closer an eigenvalue is to 0, the closer the image is to the negative sample target image. The closer an eigenvalue is to 1, the closer the image is to the positive sample target image. Values outside this range are possible: for example, if an eye eigenvalue is a negative number, the eye in the facial image file may be even smaller than the eye in the negative (small eye) sample image.
The system for processing facial images may apply weights to the eigenvalues. A pre-determined weight application selects, among the 9 eigenvalues (eye eigenvalue T01, eyebrow eigenvalue T02, nose eigenvalue T03, pupil eigenvalue T04, skin tone eigenvalue T05, skin texture eigenvalue T06, mouth eigenvalue T07, face proportion eigenvalue T08, and face size eigenvalue T09), which eigenvalues receive weight factors. An exemplary weight application is shown in the table below. In this table, y means a weight factor will be applied to that facial element eigenvalue; n means no weight factor will be applied to that facial element eigenvalue.
Eigenvalue                       Male   Female
Eye eigenvalue T01               y      y
Eyebrow eigenvalue T02           y      n
Nose eigenvalue T03              n      y
Pupil eigenvalue T04             y      y
Skin tone eigenvalue T05         n      y
Skin texture eigenvalue T06      y      y
Mouth eigenvalue T07             n      y
Face proportion eigenvalue T08   y      y
Face size eigenvalue T09         y      y
For example, if a recognized facial image is a man’s facial image, then the system applies the weight factors to eye eigenvalue T01, eyebrow eigenvalue T02, pupil eigenvalue T04, skin texture eigenvalue T06, face proportion eigenvalue T08, and face size eigenvalue T09. If a recognized facial image is a woman’s facial image, then the system applies the weight factors to eye eigenvalue T01, nose eigenvalue T03, pupil eigenvalue T04, skin tone eigenvalue T05, skin texture eigenvalue T06, mouth eigenvalue T07, face proportion eigenvalue T08, and face size eigenvalue T09. The weight factors may be pre-determined. The system may also use various factors and criteria to determine the weight factors.
For example, the system may calculate the weighted target eigenvalues by using G=40+min (T01, T02, T03, ..., T0n) *30+ (sum (T01, T02, T03, ..., T0n) -min (T01, T02, T03, ..., T0n)) *30, wherein min (T01, T02, T03, ..., T0n) is the minimum value of the selected eigenvalues and sum (T01, T02, T03, ..., T0n) is the sum of the selected eigenvalues.
If a recognized facial image is a man’s facial image, then the system applies the weight factors as follows: G00 = 40 + min (T01, T02, T04, T06, T08, T09) *30 + (sum (T01, T02, T04, T06, T08, T09) - min (T01, T02, T04, T06, T08, T09)) *30. If a recognized facial image is a woman’s facial image, then the system applies the weight factors as follows: G11 = 40 + min (T01, T03, T04, T05, T06, T07, T08, T09) *30 + (sum (T01, T03, T04, T05, T06, T07, T08, T09) - min (T01, T03, T04, T05, T06, T07, T08, T09)) *30.
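These gender-dependent scores can be written compactly. The sketch below is a minimal illustration assuming the eigenvalues are collected in a dictionary keyed by name; the index sets follow the table above.

```python
# Minimal sketch of the G00/G11 scoring formulas described above.
MALE_KEYS = ("T01", "T02", "T04", "T06", "T08", "T09")
FEMALE_KEYS = ("T01", "T03", "T04", "T05", "T06", "T07", "T08", "T09")

def beauty_score(t, gender):
    keys = MALE_KEYS if gender == "male" else FEMALE_KEYS
    vals = [t[k] for k in keys]
    # G = 40 + min * 30 + (sum - min) * 30
    return 40 + min(vals) * 30 + (sum(vals) - min(vals)) * 30

# e.g. beauty_score({"T01": 0.8, "T03": 0.5, ...}, "female") returns G11.
```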
Embodiments consistent with the present disclosure provide methods and systems for processing facial image data. The system may determine the eigenvalues of pre-determined facial elements based on multiple pre-determined feature points of facial elements, each pre-determined facial element corresponding to multiple feature points. Instead of using the distances between different facial elements to assess facial features, the system determines the facial features by using the eigenvalues of the facial elements. As a result, the system improves the accuracy of the facial assessment process.
The system may compute the positions of the pre-set feature points of a facial element to determine an eigenvalue of the facial element. The system may further compute the standard deviation between the eigenvalue of the facial element and the positive/negative sample eigenvalues of the same facial element. The system may further apply a weighting strategy to obtain the results of the facial image process. The system may further display the results of the facial image data process on a monitor. Embodiments consistent with the present disclosure improve the accuracy of facial image processing and assess the degree of attractiveness with flexibility.
Figure 3 shows an exemplary system for processing facial images. The embodiment shown in Figure 3 may be used to implement the method shown in Figure 1. For the convenience of description, only certain components of the system are discussed below. The various components of the system may also be understood in view of the descriptions related to Figure 1. The system in Figure 3 includes a facial element processing module 301, a feature processing module 302, a resulting image processing module 303, a sample image processing module 304, and a gender determination module 305.
The facial element processing module 301 may compute an eigenvalue of the pre-set facial feature points. The facial elements may include a left eye, a right eye, a left eyebrow, a right eyebrow, a nose, a mouth, and an edge of a face.
In one embodiment, the pre-set feature points of the elements of a human face may be determined by using a facial matching template to process the pre-determined facial elements. The facial matching template may be determined by the Active Shape Model (ASM) .
The ASM is based on the Point Distribution Model (PDM) . The ASMs are statistical models of the shape of objects, which iteratively deform to fit to an example of the object in a new image (e.g., target facial image) . The shapes are constrained by the PDM to vary only in ways seen  in a training set of shape examples. The shape of an object is represented by a set of points (controlled by the shape model) . The ASM algorithm aims to match the shape model to a new image.
In the present disclosure, the target image is a target human facial image. To create ASM training sets, users or software developers need to collect a large number of facial images, then manually annotate and record the positions of a set of feature points on the facial images in the training set. Further, to prepare the training set, the system needs to calculate the eigenvalue vector of the feature points based on the gray scale model of the feature points.
Using the ASMs, a facial recognition system may first place a shape model onto a target facial image and then fit the shape model to the target image by adjusting the positions of the feature points. The suggested positions of the feature points are determined based on the minimum value of the local gray model Mahalanobis distance. After calculating all suggested positions for the feature points, the system may determine a suggested shape. The system may then fit the suggested shape to the target image. The system may repeat iterations until convergence is achieved. In embodiments consistent with the present disclosure, the system may thus determine the shape of a target facial image based on the facial image templates (shapes) in the system.
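This fitting loop can be sketched as follows. It is a highly simplified illustration, not a full ASM implementation: the local gray-model (Mahalanobis) search is abstracted behind a caller-supplied suggest_points function, and the shape model is assumed to be a mean shape plus an orthonormal matrix of principal modes, as in a standard PDM.

```python
# Simplified sketch of the iterative ASM fitting described above.
import numpy as np

def fit_asm(mean_shape, P, suggest_points, n_iters=20, tol=1e-3):
    # mean_shape: (2n,) training-set mean; P: (2n, k) principal shape modes.
    shape = mean_shape.copy()
    for _ in range(n_iters):
        suggested = suggest_points(shape)        # best local match per point
        # Constrain the suggestion to the PDM subspace: x = mean + P b.
        b = P.T @ (suggested - mean_shape)
        new_shape = mean_shape + P @ b
        if np.linalg.norm(new_shape - shape) < tol:   # convergence reached
            return new_shape
        shape = new_shape
    return shape
```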
In one embodiment, the system may pre-set the number of feature points for each facial element of a facial image, such as 88, 99, or 155 points. The number of feature points is determined by the feature points in the training set. If the shape template uses a shape model from the training set with 88 feature points, then the target facial image would have 88 feature points. In general, more feature points indicate a higher resolution of the processed image.
In one embodiment, as shown in Figure 2, the system for processing facial images provides a diagram to show the feature points on a facial image. 2 (a) shows all feature points (88) on a facial image.
The facial element processing module 301 may calculate the eigenvalue of each facial element based on the feature points of the facial element. Specifically, the system may calculate the surface area, gray scale value, etc. based on the feature points. For example, in 2 (c), the left eyebrow is defined by 8 feature points, which are feature point Nos. 1-8. Feature point No. 8 is the top of the eyebrow, which forms triangles with pairs of adjacent points among feature point Nos. 1-7. The system may calculate the area of each triangle and add the areas together to determine the area of the left eyebrow. 2 (e) shows the 8 feature points for the left eye, feature point Nos. 17-24. The system may calculate the gray scale values along the straight line between feature point Nos. 17 and 21.
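The polygon surface areas used throughout can be computed with the shoelace formula, which is equivalent to summing the fan of triangles just described. The sketch below is a minimal illustration assuming the feature points are supplied in order around the polygon; the names are assumptions.

```python
# Minimal sketch of the polygon surface area computation described above.
def polygon_area(poly):
    pts = list(poly)                       # ordered (x, y) feature points
    area2 = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
        area2 += x0 * y1 - x1 * y0         # signed cross product per edge
    return abs(area2) / 2.0

# e.g., the left eyebrow area from points Nos. 1-8:
# s21 = polygon_area([pts_by_no[i] for i in range(1, 9)])
```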
The feature processing module 302 may further calculate the standard deviation between the pre-set face element eigenvalue and the eigenvalues corresponding to the positive/negative facial element images. The feature processing module 302 may extract the positive/negative sample facial element images from a database of sample facial images. The system may further classify the facial elements to obtain the positive/negative sample facial element images.
Exemplary positive/negative facial element images may be a positive facial element image of a big eye, a negative facial element image of a small eye, a positive facial element image of a big nose, a negative facial element image of a small nose, etc. The positive/negative eigenvalues may be determined by applying a facial template to the positive/negative facial element images. For example, the positive/negative eigenvalues may be the eigenvalue of a positive facial element image of a big eye, the eigenvalue of a negative facial element image of a small eye, the eigenvalue of a positive facial element image of a big nose, the eigenvalue of a negative facial element image of a small nose, etc.
In one embodiment, the feature processing module 302 may further calculate the standard deviation between the pre-determined eigenvalue and the eigenvalues corresponding to the positive/negative facial element images to determine the facial element eigenvalues. The eigenvalues may include the eigenvalues for eyes, eigenvalues for pupils, eigenvalues for eyebrows, eigenvalues for a nose, eigenvalues for a mouth, eigenvalues for light skin tone, or eigenvalues for smooth skin texture. The target eigenvalue may be determined by (pre-set facial element eigenvalue -eigenvalue of negative sample) / (eigenvalue of positive sample -eigenvalue of negative sample) .
The resulting image processing module 303 may apply pre-set weights to the facial element eigenvalues to determine the result of the facial image processing. The resulting image processing module 303 may further present the results on a display. Specifically, the pre-determined weights may be based on the gender corresponding to the facial image, or based on pre-determined values. The system may further present the results on a display. For example, the system may use a facial image display module to display the results. The results may be a facial image, an assessment score, assessment scores for facial elements, etc. For example, the display of results may be: “Your face has a beauty score of XX (over 100). You have big eyes and smooth skin. Your beauty ranking is at XX%,” etc.
The sample image processing module 304 may extract features from the images of the facial image database, and classify the facial elements to obtain the positive/negative sample facial element images. The sample image processing module 304 may use the facial matching template to extract features from sample images. The positive/negative sample facial element images may be a positive facial element image of a big eye, a negative facial element image of a small eye, a positive facial element image of a big nose, a negative facial element image of a small nose, etc. Further, the sample image processing module 304 may update in real time the positive/negative sample images in the database. For example, if the system determines that a newly extracted big eye sample image 002 includes an eye bigger than that of a big eye sample image 001, the system may update the database and use big eye sample image 002 as the sample big eye image in subsequent processes.
The gender determination module 305 may execute the following steps. The gender determination module 305 may determine the gender of a facial image based on a gender determination template. For example, the system may establish a gender determination template by pre-processing training images (filling in light, rotating images, etc.) to extract Gabor features. The system may then convert the two-dimensional matrix of the training sample information into a one-dimensional vector. The system thus decreases the complexity of the process. The system may then input the vectors into an SVM (Support Vector Machine) classifier to train the image recognition process and obtain the gender determination template. Embodiments consistent with the present disclosure input a facial image to the facial recognition process and determine the gender of the facial image using the gender determination template.
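A minimal sketch of this pipeline follows, using OpenCV’s Gabor kernel and scikit-learn’s SVM; the filter parameters, the linear kernel, and the function names are illustrative assumptions, and the pre-processing steps (lighting correction, rotation) are omitted.

```python
# Minimal sketch of the Gabor + SVM gender determination described above.
import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_features(img):
    # One Gabor filter response, flattened from a 2-D matrix to a 1-D vector.
    kernel = cv2.getGaborKernel((21, 21), 4.0, 0.0, 10.0, 0.5)
    return cv2.filter2D(img, cv2.CV_32F, kernel).ravel()

def train_gender_template(images, labels):
    # images: equally sized gray scale faces; labels: e.g. "male"/"female".
    X = np.stack([gabor_features(img) for img in images])
    return SVC(kernel="linear").fit(X, labels)   # the "determination template"

# Classify a new face: train_gender_template(...).predict([gabor_features(face)])
```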
The resulting image processing module 303 may apply pre-determined weights to the facial element eigenvalues to determine the facial image processing results. The pre-determined weights may include the weights for eigenvalues for eyes, eigenvalues for pupils, eigenvalues for eyebrows, eigenvalues for nose, eigenvalues for mouth, eigenvalues for light skin tone, or eigenvalues for smooth skin texture, etc.
Embodiments consistent with the present disclosure provide methods and systems for processing facial image data. The system may determine the eigenvalues of pre-determined facial elements based on multiple pre-determined feature points of facial elements, each pre-determined facial element corresponding to multiple feature points. Instead of using the distances between different facial elements to calculate facial features, the system determines the facial features by using the eigenvalues of the facial elements. As a result, the system improves the accuracy of the facial recognition process.
The system may compute the positions of the pre-set feature points of a facial element to determine an eigenvalue of the facial element. The system may further compute the standard deviation between the eigenvalue of the facial element and the positive/negative sample eigenvalues of the same facial element. The system may further apply a weighting strategy to obtain the results of the facial image process. The system may further display the results of the facial image data process on a monitor. Embodiments consistent with the present disclosure improve the accuracy of facial image processing and assess the degree of attractiveness with flexibility.
Figure 4 shows a detailed diagram of the exemplary facial element processing module 301. Figure 4 is discussed in relation to Figure 2 below to illustrate the facial element image processing consistent with the present disclosure.
In Figure 2, the system for processing facial images provides a diagram to show the feature points on a facial image. 2 (a) shows all feature points (88) on a facial image. 2 (b) shows the feature points on the edge of the human face. There are 21 feature points, which are feature points Nos. 68 to 88. 2 (c) shows the feature points of the left eyebrow. There are 8 feature points, which are feature points Nos. 1-8. 2 (d) shows the feature points of the right eyebrow. There are 8 feature points, which are feature points Nos. 9-16. 2 (e) shows the feature points of the left eye. There are 8 feature points, which are feature points Nos. 17-24. 2 (f) shows the feature points of the right eye. There are 8 feature points, which are feature points Nos. 25-32. 2 (g) shows the feature points of the nose. There are 13 feature points, which are feature points Nos. 33-45. 2 (h) shows the feature points of the mouth. There are 22 feature points, which are feature points Nos. 46-67.
As shown in Figure 4, the facial element processing module 301 includes a first surface determination unit 401, a target eye determination unit 402, and a first eigenvalue determination unit 403.
The first surface determination unit 401 may calculate the left eye’s feature surface area and the right eye’s feature surface area based on the multiple feature points of the two eyes. The target eye determination unit 402 may compare the feature surface areas of the two eyes, and identify the eye with the larger feature surface area as the target eye. The first eigenvalue determination unit 403 may calculate the ratio of the feature surface area of the target eye over the feature surface area of the face edge and determine a first eigenvalue.
The pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues. For example, an eigenvalue corresponding to a big eye may be a first positive sample eigenvalue; and an eigenvalue corresponding to a small eye may be a first negative sample eigenvalue. The eigenvalue of eyes may be determined by (first eigenvalue - first negative sample eigenvalue) / (first positive sample eigenvalue - first negative sample eigenvalue).
In Figure 2, 2 (b) shows the feature points on the edge of the human face. There are 21 feature points, which are feature points Nos. 68 to 88. The system may calculate the surface area of the polygon formed by feature point Nos. 68-88 and record half of the surface area as S00. 2 (e) shows the feature points of the left eye. There are 8 feature points, which are feature points Nos. 17-24. The system may calculate the surface area of the polygon formed by feature point Nos. 17-24, and record the surface area as S11. 2 (f) shows the feature points of the right eye. There are 8 feature points, which are feature points Nos. 25-32. The system may calculate the surface area of the polygon formed by feature point Nos. 25-32, and record the surface area as S12.
Next, the system may determine the difference M01 of S11 and S12 (M01=S11-S12) . If M01 is greater than 0, then the left eye (corresponding to S11) is the target eye. If M01 is less than 0, then the right eye (corresponding to S12) is the target eye.
The first eigenvalue is D01=max (S11, S12) /S00. The first positive eigenvalue (corresponding to a big eye) is P10. The first negative eigenvalue (corresponding to a small eye) is P11. The eigenvalue of the eyes T01= (D01-P11) / (P10-P11) .
In one embodiment, the system may calculate the first positive eigenvalue (corresponding to a big eye) P10 using the same method used to calculate the first eigenvalue D01. The system may also calculate the first negative eigenvalue (corresponding to a small eye) P11 using the same method used to calculate the first eigenvalue D01.
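Combining the pieces above, the eye eigenvalue T01 can be sketched as follows; this assumes a polygon_area helper such as the shoelace sketch shown earlier, a pts mapping from feature-point numbers to coordinates, and sample values P10/P11 computed from the big/small eye sample images.

```python
# Minimal sketch of the eye eigenvalue T01 described above.
def eye_eigenvalue(pts, polygon_area, p10, p11):
    s00 = polygon_area([pts[i] for i in range(68, 89)]) / 2  # half face area
    s11 = polygon_area([pts[i] for i in range(17, 25)])      # left eye
    s12 = polygon_area([pts[i] for i in range(25, 33)])      # right eye
    d01 = max(s11, s12) / s00            # target eye over half face area
    return (d01 - p11) / (p10 - p11)     # T01
```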
Further, as shown in Figure 4, the facial element processing module 301 may further include a second surface determination unit 404, a target eyebrow determination unit 405, and a second eigenvalue determination unit 406.
In addition, the method for calculating a pre-determined facial element eigenvalue may further determine the second eigenvalue. The second surface determination unit 404 may calculate a left eyebrow feature area and a right eyebrow feature area, and a facial edge surface area.
Next, the target eyebrow determination unit 405 may compare the surface areas of the two eyebrows. The target eyebrow determination unit 405 may identify the eyebrow with the larger feature surface area as the target eyebrow. The second eigenvalue determination unit 406 may calculate the ratio of the feature surface area of the target eyebrow over the feature surface area of the face edge and determine a second eigenvalue.
The pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues. For example, an eigenvalue corresponding to a thick eyebrow may be a second positive sample eigenvalue; and an eigenvalue corresponding to a thin eyebrow may be a second negative sample eigenvalue. The eigenvalue of eyebrows may be determined by (second eigenvalue - second negative sample eigenvalue) / (second positive sample eigenvalue - second negative sample eigenvalue).
In Figure 2, 2 (b) shows the feature points on the edge of the human face. There are 21 feature points, which are feature points Nos. 68 to 88. The system may calculate the surface area of the polygon formed by feature point Nos. 68-88 and record half of the surface area as S00. 2 (c) shows the feature points of the left eyebrow. There are 8 feature points, which are feature points Nos. 1-8. The system may calculate the surface area of the polygon formed by feature point Nos. 1-8, and record it as S21. 2 (d) shows the feature points of the right eyebrow. There are 8 feature points, which are feature points Nos. 9-16. The system may calculate the surface area of the polygon formed by feature point Nos. 9-16, and record it as S22.
Next, the system may determine the difference M02 of S21 and S22 (M02=S21-S22). If M02 is greater than 0, then the left eyebrow (corresponding to S21) is the target eyebrow. If M02 is less than 0, then the right eyebrow (corresponding to S22) is the target eyebrow.
The second eigenvalue is D02=max (S21, S22) /S00. The second positive eigenvalue (corresponding to a thick eyebrow) is P20. The second negative eigenvalue (corresponding to a thin eyebrow) is P21. The eigenvalue of the eyebrows T02= (D02-P21) / (P20-P21) .
In one embodiment, the system may calculate the second positive eigenvalue (corresponding to a thick eyebrow) P20 using the same method used to calculate the second eigenvalue D02. The system may also calculate the second negative eigenvalue (corresponding to a thin eyebrow) P21 using the same method used to calculate the second eigenvalue D02.
Furthermore, the facial element processing module 301 shown in Figure 4 may further include a third surface determination unit 407 and a third eigenvalue determination unit 408.
The third surface determination unit 407 may calculate a nose feature area and a facial edge surface area.
Next, the third eigenvalue determination unit 408 may compare the surface area of the nose with the feature surface area of the face edge: it may calculate the ratio of the feature surface area of the nose over the feature surface area of the face edge and determine a third eigenvalue.
The pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues. For example, an eigenvalue corresponding to a big nose may be a third positive sample eigenvalue; and an eigenvalue corresponding to a small nose may be a third negative sample eigenvalue. The eigenvalue of the nose may be determined by (third eigenvalue - third negative sample eigenvalue) / (third positive sample eigenvalue - third negative sample eigenvalue).
In Figure 2, 2 (b) shows the feature points on the edge of the human face. There are 21 feature points, which are feature points Nos. 68 to 88. The system may calculate the surface area of the polygon formed by feature point Nos. 68-88 and record half of the surface area as S00. 2 (g) shows the feature points of the nose. There are 13 feature points, which are feature points Nos. 33-45. The system may calculate the surface area of the polygon formed by feature point Nos. 33-45, and record it as S31.
The third eigenvalue is D03=S31/S00. The third positive eigenvalue (corresponding to a big nose) is P30. The third negative eigenvalue (corresponding to a small nose) is P31. The eigenvalue of the nose T03= (D03-P31) / (P30-P31) .
In one embodiment, the system may calculate the third positive eigenvalue (corresponding to a big nose) P30 using the same method used to calculate the third eigenvalue D03. The system may also calculate the third negative eigenvalue (corresponding to a small nose) P31 using the same method used to calculate the third eigenvalue D03.
Furthermore, the facial element processing module 301 shown in Figure 4 may further include a fourth eigenvalue determination unit 409. The first surface determination unit 401 may calculate the left eye’s feature surface area and the right eye’s feature surface area based on the multiple feature points of the two eyes. The target eye determination unit 402 may compare the feature surface areas of the two eyes, and identify the eye with the larger feature surface area as the target eye. The fourth eigenvalue determination unit 409 may determine, based on gray scale values, the ratio of the pupil area of the target eye to the area of the target eye and determine a fourth eigenvalue.
The pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues. For example, an eigenvalue corresponding to a big pupil may be a fourth positive sample eigenvalue; and an eigenvalue corresponding to a small pupil may be a fourth negative sample eigenvalue. The eigenvalue of pupils may be determined by (fourth eigenvalue - fourth negative sample eigenvalue) / (fourth positive sample eigenvalue - fourth negative sample eigenvalue).
In Figure 2, 2 (b) shows the feature points on the edge of the human face. There are 21 feature points, which are feature points Nos. 68 to 88. The system may calculate the surface area of the polygon formed by feature point Nos. 68-88 and record half of the surface area as S00. 2 (e) shows the feature points of the left eye. There are 8 feature points, which are feature points Nos. 17-24. The system may calculate the surface area of the polygon formed by feature point Nos. 17-24, and record the surface area as S11. 2 (f) shows the feature points of the right eye. There are 8 feature points, which are feature points Nos. 25-32. The system may calculate the surface area of the polygon formed by feature point Nos. 25-32, and record the surface area as S12.
Next, the system may determine the difference M01 of S11 and S12 (M01=S11-S12) . If M01 is greater than 0, then the left eye (corresponding to S11) is the target eye. If M01 is less than 0, then the right eye (corresponding to S12) is the target eye.
If the left eye is the target eye, the system may obtain the feature points of the left eye (point Nos. 17-21). The system may draw a straight line between point Nos. 17 and 21 and obtain pixels along the straight line. The system may further convert the obtained pixels into gray scale values of 0-255. If the right eye is the target eye, the system may determine the gray scale values in the same manner.
The smaller the gray scale value, the darker the corresponding image. The pupil of an eye usually has a smaller gray scale. In one example, the system may determine that the area inside an eye with a gray scale of less than 50 is the pupil area. The system may count the number of pixels obtained in the eye area as S41. The system may count the number of pixels obtained in the pupil area (gray scale value < 50) as S42.
The fourth eigenvalue is D04=S42/S41. The fourth positive eigenvalue (corresponding to a big pupil) is P41. The fourth negative eigenvalue (corresponding to a small pupil) is P42. The eigenvalue of the pupils T04= (D04-P42) / (P41-P42) .
In one embodiment, the system may calculate the fourth positive eigenvalue (corresponding to a big pupil) P41 using the same method used to calculate the fourth eigenvalue D04. The system may also calculate the fourth negative eigenvalue (corresponding to a small pupil) P42 using the same method used to calculate the fourth eigenvalue D04.
Furthermore, the facial element processing module 301 shown in Figure 4 may further include a first obtaining unit 410 and a fifth eigenvalue determination unit 411. The first obtaining unit 410 may calculate the skin feature surface area. The fifth eigenvalue determination unit 411 may then determine an average gray scale of the skin feature surface area to determine a fifth eigenvalue.
The pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues. For example, an eigenvalue corresponding to a light skin tone may be a fifth positive sample eigenvalue; and an eigenvalue corresponding to a darker skin tone may be a fifth negative sample eigenvalue. The eigenvalue of skin may be determined by (fifth eigenvalue - fifth negative sample eigenvalue) / (fifth positive sample eigenvalue - fifth negative sample eigenvalue).
In Figure 2, the system may determine a skin feature surface area based on feature point No. 19 in 2 (e) and feature point No. 46 in 2 (h). The system may draw a straight line between point No. 19 and No. 46, and obtain pixels along the straight line. The system may also calculate the skin surface area based on point No. 27 in 2 (f) and point No. 52 in 2 (h). The system may draw a straight line between point No. 27 and No. 52, and obtain pixels along the straight line. The system may further convert the obtained pixels into gray scale values of 0-255. The system may calculate the average gray scale value of the skin feature area to determine the fifth eigenvalue D05. The fifth positive eigenvalue (corresponding to a light skin tone) is P51. The fifth negative eigenvalue (corresponding to a dark skin tone) is P52. The eigenvalue of the skin tone T05= (D05-P52) / (P51-P52).
In one embodiment, the system may calculate the fifth positive eigenvalue (corresponding to a light skin tone) P51 using the same method used to calculate the fifth eigenvalue D05. The system may also calculate the fifth negative eigenvalue (corresponding to a dark skin tone) P52 using the same method used to calculate the fifth eigenvalue D05.
Furthermore, the facial element processing module 301 shown in Figure 4 may further include a second obtaining unit 412 and a sixth eigenvalue determination unit 413. The second obtaining unit 412 may calculate the face edge feature area. The sixth eigenvalue determination unit 413 may then determine an average gray scale of the face edge surface area to determine a sixth eigenvalue.
The pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues. For example, an eigenvalue corresponding to a smooth skin texture may be a sixth positive sample eigenvalue; and an eigenvalue corresponding to a rough skin texture may be a sixth negative sample eigenvalue. The eigenvalue of skin texture may be determined by (sixth eigenvalue - sixth negative sample eigenvalue) / (sixth positive sample eigenvalue - sixth negative sample eigenvalue).
The system may also use an edge recognition system to detect the edge(s) in the facial image. If the face has dark spots or rough spots, the edges of the spots can be detected. The eyes, nose, mouth, and eyebrows also have corresponding edges.
In Figure 2, 2 (b) shows the feature points along the edge of the face, which are feature point Nos. 68-88 (21 points). The system may use the edge recognition system to detect the edges between point Nos. 68-88. Then the system may take away the edges of the eyes, nose, mouth, eyebrows, etc. The system may determine an edge feature surface area. The system may convert the edge surface area into gray scale values of 0-255. The system may calculate the average gray scale value of the face edge feature area to determine the sixth eigenvalue D06. The sixth positive eigenvalue (corresponding to a smooth skin texture) is P61. The sixth negative eigenvalue (corresponding to a rough skin texture) is P62. The eigenvalue of the skin texture T06= (D06-P62) / (P61-P62).
In one embodiment, the system may calculate the sixth positive eigenvalue (corresponding to a smooth skin texture) P61 using the same method used to calculate the sixth eigenvalue D06. The system may also calculate the sixth negative eigenvalue (corresponding to a rough skin texture) P62 using the same method used to calculate the sixth eigenvalue D06.
Furthermore, the facial element processing module 301 shown in Figure 4 may further include an eye distance determination unit 414, a mouth width determination unit 415, and a seventh eigenvalue determination unit 416.
The eye distance determination unit 414 may calculate the distance from the left eye to the center of the two eyes and the distance from the right eye to the center of the two eyes. The mouth width determination unit 415 may calculate the distance from the left corner of the mouth to the center of the mouth and the distance from the right corner of the mouth to the center of the mouth. The seventh eigenvalue determination unit 416 may then calculate a ratio of the distance between the two corners of the mouth to the distance between the two eyes and determine the seventh eigenvalue.
The pre-determined positive/negative sample images of the facial element may correspond to positive/negative eigenvalues. For example, an eigenvalue corresponding to a small mouth may be a seventh positive sample eigenvalue; and an eigenvalue corresponding to a big mouth may be a seventh negative sample eigenvalue. The eigenvalue of the mouth feature may be determined by (seventh eigenvalue - seventh negative sample eigenvalue) / (seventh positive sample eigenvalue - seventh negative sample eigenvalue).
In Figure 2, 2 (h) shows the feature points of the mouth. There are 22 feature points, which are feature points Nos. 46-67. The system may calculate the distance between the two corners of the mouth (between point No. 46 and point No. 52) to determine the mouth width and record it as L1. 2 (e) shows the feature points of the left eye. There are 8 feature points, which are feature points Nos. 17-24. 2 (f) shows the feature points of the right eye. There are 8 feature points, which are feature points Nos. 25-32. The system may calculate the center of the left eye O1 based on point Nos. 17 and 21. The system may calculate the center of the right eye O2 based on point Nos. 25 and 29. The system may calculate the distance between the two eyes (between O1 and O2) to determine the eye width and record it as L2. The seventh eigenvalue D07=L1/L2. The seventh positive eigenvalue corresponding to a small mouth is P71. The seventh negative eigenvalue corresponding to a big mouth is P72. The eigenvalue of the mouth T07= (D07-P72) / (P71-P72).
In one embodiment, the system may calculate the seventh positive eigenvalue (corresponding to a small mouth) P71 using the same method used to calculate the seventh eigenvalue D07. The system may also calculate the seventh negative eigenvalue (corresponding to a big mouth) P72 using the same method used to calculate the seventh eigenvalue D07.
In addition, the facial element processing module 301 may calculate the distance from the center of the two eyes to the tip of the nose L3, from the tip of the nose to the center of the bottom lip L4, and from the center of the bottom lip to the bottom tip of the chin L5. The system may then determine the eighth eigenvalue.
In Figure 2, 2 (e) shows the feature points of the left eye. There are 8 feature points, which are feature points Nos. 17-24. 2 (f) shows the feature points of the right eye. There are 8 feature points, which are feature points Nos. 25-32. The system may calculate the center of the inner eye corners O3 based on point Nos. 21 and 29. 2 (g) shows the feature points of the nose. There are 13 feature points, which are feature points Nos. 33-45. The system may determine the distance L3 between the center of the inner eye corners O3 and the tip of the nose (point No. 33). 2 (h) shows the feature points of the mouth. There are 22 feature points, which are feature points Nos. 46-67. The system may determine the distance L4 between the tip of the nose (point No. 33) and the center of the bottom lip (point No. 60). Further, the system may determine the distance L5 between the center of the bottom lip (point No. 60) and the tip of the chin (point No. 78). The eighth eigenvalue D08 is then computed from L3, L4, and L5:
[Equation images PCTCN2014089885-appb-000004 to PCTCN2014089885-appb-000006: the formula defining the eighth eigenvalue D08 in terms of the distances L3, L4, and L5; the formula itself is not recoverable from the text.]
The eighth positive eigenvalue corresponding to a well-proportioned face is P81. The eighth negative eigenvalue corresponding to a poorly proportioned face is P82. The eigenvalue of the face proportion T08= (D08-P82) / (P81-P82).
In one embodiment, the system may calculate the eighth positive eigenvalue (corresponding to a well-proportioned face) P81 using the same method used to calculate the eighth eigenvalue D08. The system may also calculate the eighth negative eigenvalue (corresponding to a poorly proportioned face) P82 using the same method used to calculate the eighth eigenvalue D08.
Moreover, the facial element processing module 301 may determine the ninth eigenvalue. In Figure 2, 2 (b) shows the feature points on the edge of the human face. There are 21 feature points, which are feature points Nos. 68 to 88. The system may calculate the angle α between point No. 68, No. 88, and No. 78 (point No. 78 being the vertex of the angle). The ninth eigenvalue D09=α. The ninth positive eigenvalue corresponding to a small face is P91. The ninth negative eigenvalue corresponding to a large face is P92. The eigenvalue of the face size T09= (D09-P92) / (P91-P92).
In one embodiment, the system may calculate the ninth positive eigenvalue (corresponding to a small face) P91 using the same method used to calculate the ninth eigenvalue D09. The system may also calculate the ninth negative eigenvalue (corresponding to a large face) P92 using the same method used to calculate the ninth eigenvalue D09.
In the above examples, the system calculates the following eigenvalues of the facial element images: eye eigenvalue T01, eyebrow eigenvalue T02, nose eigenvalue T03, pupil eigenvalue T04, skin tone eigenvalue T05, skin texture eigenvalue T06, mouth eigenvalue T07, face proportion eigenvalue T08, and face size eigenvalue T09. The eigenvalues generally fall between 0 and 1. The closer an eigenvalue is to 0, the closer the image is to the negative sample target image. The closer an eigenvalue is to 1, the closer the image is to the positive sample target image. Values outside this range are possible: for example, if an eye eigenvalue is a negative number, the eye in the facial image file is even smaller than the eye in the negative (small eye) sample image.
The system for processing facial images may apply weights to the eigenvalues. A pre-determined weight application selects, among the 9 eigenvalues (eye eigenvalue T01, eyebrow eigenvalue T02, nose eigenvalue T03, pupil eigenvalue T04, skin tone eigenvalue T05, skin texture eigenvalue T06, mouth eigenvalue T07, face proportion eigenvalue T08, and face size eigenvalue T09), which eigenvalues receive weight factors.
For example, the system may calculate the weighted eigenvalues using G=40+min (T01, T02, T03, ..., T0n) *30+ (sum (T01, T02, T03, ..., T0n) -min (T01, T02, T03, ..., T0n)) *30, wherein min (T01, T02, T03, ..., T0n) is the minimum value of the selected eigenvalues and sum (T01, T02, T03, ..., T0n) is the sum of the selected eigenvalues.
If a recognized facial image is a man’s facial image, then the system applies the weight factors as follows: G00 = 40 + min (T01, T02, T04, T06, T08, T09) *30 + (sum (T01, T02, T04, T06, T08, T09) - min (T01, T02, T04, T06, T08, T09)) *30. If a recognized facial image is a woman’s facial image, then the system applies the weight factors as follows: G11 = 40 + min (T01, T03, T04, T05, T06, T07, T08, T09) *30 + (sum (T01, T03, T04, T05, T06, T07, T08, T09) - min (T01, T03, T04, T05, T06, T07, T08, T09)) *30.
Embodiments consistent with the present disclosure provide methods and systems for processing facial image data. The system may determine the eigenvalues of pre-determined facial elements based on multiple pre-determined feature points of facial elements, each pre-determined facial element corresponding to multiple feature points. Instead of using the distances between different facial elements to calculate facial features, the system determines the facial features by using the eigenvalues of the facial elements. As a result, the system improves the accuracy of the facial recognition process.
The system may compute the positions of the pre-set feature points of a facial element to determine an eigenvalue of the facial element. The system may further compute the standard deviation between the eigenvalue of the facial element and the positive/negative sample eigenvalues of the same facial element. The system may further apply a weighting strategy to obtain the results of the facial image process. The system may further display the results of the facial image data process on a monitor. Embodiments consistent with the present disclosure improve the accuracy of facial image processing and assess the degree of attractiveness with flexibility.
As explained above, the facial element processing module 301 as shown in Figure 4 may implement the method shown in Figure 1. The method described in relation to Figure 1 may be implemented by servers for processing facial images. The components of the facial element processing module 301 can also be understood in relation to the method described in Figure 1. Further, embodiments of the present disclosure provide a user terminal, which may include the components described in Figures 3 and 4. The functions of the user terminal may also be understood in relation to the embodiments described in Figures 1-4.
Consistent with embodiments of the present disclosure, one or more non-transitory storage media storing a computer program are provided to implement the system and method for processing facial image data. The one or more non-transitory storage media may be installed in a computer or provided separately from a computer. A computer may read the computer program from the storage medium and execute the program to perform the methods consistent with embodiments of the present disclosure. The storage medium may be a magnetic storage medium, such as a hard disk, a floppy disk, or another magnetic disk, a tape, or a cassette tape. The storage medium may also be an optical storage medium, such as an optical disk (for example, CD or DVD). The storage medium may further be a semiconductor storage medium, such as DRAM, SRAM, EPROM, EEPROM, flash memory, or a memory stick.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the claims.
INDUSTRIAL APPLICABILITY AND ADVANTAGEOUS EFFECTS
Without limiting the scope of any claim and/or the specification, examples of industrial applicability and certain advantageous effects of the disclosed embodiments are listed for illustrative purposes. Various alterations, modifications, or equivalents to the technical solutions of the disclosed embodiments can be obvious to those skilled in the art and can be included in this disclosure.
By using the disclosed methods and systems, various systems for facial image assessment and recognition may be implemented. In one embodiment, the system for processing facial images may apply different weight factors to the eigenvalues of different facial elements depending on the gender corresponding to the facial image. In other embodiments, the system for processing facial images may apply different weight factors depending on other characteristics of the facial image, or a combination of the characteristics of the facial image. As a result, the system for processing facial images can assess facial images more accurately, taking into consideration various characteristics (e.g., gender, race) associated with the facial image.
In one embodiment, the system for processing facial images may apply different weight factors to the eigenvalues of different facial elements depending on the age associated with the facial image. The system may create an age determination template from the facial images in its training database, by sorting the training facial images by age to learn the age-specific facial characteristics. The system may then apply the template to the received image to determine the age range of the facial image. The system may further apply different weight factors to the eigenvalues of different facial elements depending on the age associated with the facial image.
In another embodiment, the system for processing facial images may apply different weight factors to the eigenvalues of different facial elements depending on the race associated with the facial image. The system may create a race determination template from the facial images in its training database. The system may then apply the template to the received image to determine the race associated with the facial image. The system may further apply different weight factors to the eigenvalues of different facial elements depending on the race associated with the facial image.
In another embodiment, the system for processing facial images may apply different weight factors to the eigenvalues of different facial elements depending on a combination of the gender and race associated with the facial image. The system may create a race determination template and a gender determination template from the facial images in its training database. The system may then apply the race determination template to the received image to determine the race associated with the facial image and apply the gender determination template to determine the gender associated with the facial image. The system may further apply different weight factors to the eigenvalues of different facial elements depending on the race and gender associated with the facial image.
Embodiments consistent with the present disclosure provide methods and systems for processing facial image data. The system may determine the eigenvalues of pre-determined facial elements based on multiple pre-determined feature points of facial elements, each pre-determined facial element corresponding to multiple feature points. Instead of using the distances between different facial elements to calculate facial features, the system determines the facial features by using the eigenvalues of the facial elements. As a result, the system improves the accuracy of the facial recognition process.

Claims (26)

  1. A method for processing facial images, comprising:
    obtaining pre-selected feature points from an element of a target facial image;
    determining a pre-selected feature eigenvalue (E) based on the feature points associated with the facial element;
    obtaining a positive eigenvalue (PE) corresponding to a positive sample of the facial element image;
    obtaining a negative eigenvalue (NE) corresponding to a negative sample of the facial element image;
    determining a standard deviation of the determined eigenvalue associated with the facial element and the positive and negative eigenvalues of the samples of the facial element;
    determining a target eigenvalue based on the standard deviation;
    applying a weight factor to the target eigenvalue;
    determining a result from processing the target facial image based on the weighted eigenvalue; and
    presenting the result to a user.
  2. The method according to claim 1, wherein the target eigenvalue is calculated using a formula: (E - NE) / (PE - NE) .
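For instance, the normalization of claim 2 can be written as a one-line helper; the sample numbers in the usage check are invented for illustration:

```python
def target_eigenvalue(e: float, pe: float, ne: float) -> float:
    """Normalize a raw eigenvalue E against the positive (PE) and
    negative (NE) sample eigenvalues: (E - NE) / (PE - NE). The result
    is ~1.0 near the positive sample and ~0.0 near the negative one."""
    return (e - ne) / (pe - ne)

# e.g. an eye-to-face area ratio of 0.032, between a small-eye sample
# eigenvalue of 0.02 and a large-eye sample eigenvalue of 0.05:
assert abs(target_eigenvalue(0.032, pe=0.05, ne=0.02) - 0.4) < 1e-9
```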
  3. The method according to claim 2, wherein the facial element is a left eye, a right eye, a left eyebrow, a right eyebrow, a nose, a mouth, or an edge of a face.
  4. The method according to claim 3, wherein the target eigenvalue is the target eigenvalue associated with the eyes, the eyebrows, the nose, the mouth, a skin tone, or a skin texture.
  5. The method according to claim 4, further comprising:
    determining a left eye surface area based on feature points associated with the left eye;
    determining a right eye surface area based on feature points associated with the right eye;
    determining a face surface area based on feature points associated with the edge of the face;
    determining a target eye by comparing the left eye surface area with the right eye surface area;
    determining a first target eigenvalue (1st E) based on the ratio of the surface area of the target eye over the face surface area;
    determining a first positive eigenvalue (1st PE) corresponding to a large eye image;
    determining a first negative eigenvalue (1st NE) corresponding to a small eye image; and
    determining the target eigenvalue associated with the eyes using a formula: (1st E - 1st NE) / (1st PE - 1st NE) .
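If the feature points of an element are taken to outline a closed polygon, and the larger eye is selected as the target eye (two assumptions the claim does not mandate), the surface areas and the eye eigenvalue of claim 5 could be computed roughly as follows:

```python
def polygon_area(points) -> float:
    """Surface area of a polygon given as [(x, y), ...] feature points,
    computed with the shoelace formula."""
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1]
            - points[(i + 1) % n][0] * points[i][1] for i in range(n))
    return abs(s) / 2.0

def eye_target_eigenvalue(left_eye_pts, right_eye_pts, face_pts,
                          pe_1st: float, ne_1st: float) -> float:
    """Pick the larger eye as the target eye, take its area ratio over
    the face area as 1st E, then normalize against the large-eye (1st PE)
    and small-eye (1st NE) sample eigenvalues."""
    target_area = max(polygon_area(left_eye_pts), polygon_area(right_eye_pts))
    e_1st = target_area / polygon_area(face_pts)
    return (e_1st - ne_1st) / (pe_1st - ne_1st)
```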
  6. The method according to claim 4, further comprising:
    determining a left eyebrow surface area based on feature points associated with the left eyebrow;
    determining a right eyebrow surface area based on feature points associated with the right eyebrow;
    determining a face surface area based on feature points associated with the edge of the face;
    determining a target eyebrow by comparing the left eyebrow surface area with the right eyebrow surface area;
    determining a second target eigenvalue (2nd E) based on the ratio of a surface area of the target eyebrow over the face surface area;
    determining a second positive eigenvalue (2nd PE) corresponding to a thick eyebrow image;
    determining a second negative eigenvalue (2nd NE) corresponding to a thin eyebrow image; and
    determining the target eigenvalue associated with the eyebrows using a formula: (2nd E - 2nd NE) / (2nd PE - 2nd NE) .
  7. The method according to claim 4, further comprising:
    determining a nose surface area based on feature points associated with the nose;
    determining a face surface area based on feature points associated with the edge of the face;
    determining a third target eigenvalue (3rd E) based on the ratio of the nose surface area over the face surface area;
    determining a third positive eigenvalue (3rd PE) corresponding to a big nose image;
    determining a third negative eigenvalue (3rd NE) corresponding to a small nose image; and
    determining the target eigenvalue associated with the nose using a formula: (3rd E - 3rd NE) / (3rd PE - 3rd NE) .
  8. The method according to claim 4, further comprising:
    determining a left eye surface area based on feature points associated with the left eye;
    determining a right eye surface area based on feature points associated with the right eye;
    determining a target eye by comparing the left eye surface area with the right eye surface area;
    determining a target eye gray scale value based on feature points associated with the target eye;
    determining a target pupil gray scale value based on feature points associated with a pupil of the target eye;
    determining a fourth target eigenvalue (4th E) based on the ratio of the target eye gray scale value over the target pupil gray scale value;
    determining a fourth positive eigenvalue (4th PE) corresponding to a big pupil image;
    determining a fourth negative eigenvalue (4th NE) corresponding to a small pupil image; and
    determining the target eigenvalue associated with the pupil using a formula: (4th E - 4th NE) / (4th PE - 4th NE) .
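A sketch of the pupil eigenvalue of claim 8, under the assumption that the gray scale values are mean pixel intensities inside boolean masks derived from the feature points; the mask construction itself is not specified by the claim:

```python
import numpy as np

def mean_gray(gray_image: np.ndarray, mask: np.ndarray) -> float:
    """Mean gray-scale value over the pixels selected by a boolean mask."""
    return float(gray_image[mask].mean())

def pupil_target_eigenvalue(gray_image, eye_mask, pupil_mask,
                            pe_4th: float, ne_4th: float) -> float:
    """4th E is the eye-region mean gray value over the pupil-region
    mean gray value, normalized as in claim 2."""
    e_4th = mean_gray(gray_image, eye_mask) / mean_gray(gray_image, pupil_mask)
    return (e_4th - ne_4th) / (pe_4th - ne_4th)
```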
  9. The method according to claim 4, further comprising:
    determining a skin feature surface area based on feature points;
    determining a gray scale value of the skin feature surface area and, based on the gray scale value, the fifth target eigenvalue (5th E) ;
    determining a fifth positive eigenvalue (5th PE) corresponding to a light skin tone image;
    determining a fifth negative eigenvalue (5th NE) corresponding to a dark skin tone image; and
    determining the target eigenvalue associated with the skin tone using a formula: (5th E - 5th NE) / (5th PE - 5th NE) .
  10. The method according to claim 4, further comprising:
    determining a face surface area based on feature points associated with the edge of the face;
    determining a gray scale value of the face surface area and, based on the gray scale value, the sixth target eigenvalue (6th E) ;
    determining a sixth positive eigenvalue (6th PE) corresponding to a smooth skin texture image;
    determining a sixth negative eigenvalue (6th NE) corresponding to a rough skin texture image; and
    determining the target eigenvalue associated with the skin texture using a formula: (6th E - 6th NE) / (6th PE - 6th NE) .
  11. The method according to claim 4, further comprising:
    determining a distance between two eyes based on feature points associated with the two eyes;
    determining a distance between two corners of the mouth based on feature points associated with the two corners of the mouth;
    determining the ratio of the distance between two eyes over the distance between two corners of the mouth;
    determining a seventh target eigenvalue (7th E) based on the ratio;
    determining a seventh positive eigenvalue (7th PE) corresponding to a small mouth image;
    determining a seventh negative eigenvalue (7th NE) corresponding to a large mouth image; and
    determining the target eigenvalue associated with the mouth using a formula: (7th E - 7th NE) / (7th PE - 7th NE) .
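Claim 11's mouth eigenvalue reduces to two Euclidean distances and the normalization; in this sketch the eye and mouth-corner positions are assumed to be single (x, y) points derived from their feature points:

```python
import math

def mouth_target_eigenvalue(left_eye, right_eye, left_corner, right_corner,
                            pe_7th: float, ne_7th: float) -> float:
    """7th E is the inter-eye distance over the mouth-corner distance;
    a narrower mouth yields a larger ratio, moving the normalized value
    toward the small-mouth positive sample (7th PE)."""
    e_7th = math.dist(left_eye, right_eye) / math.dist(left_corner, right_corner)
    return (e_7th - ne_7th) / (pe_7th - ne_7th)
```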
  12. The method according to one of claims 5-11, further comprising:
    determining a gender of the facial image based on a gender determination template; and
    applying one or more weight factors, according to the determined gender, to one or more of the eigenvalues associated with eyes, eyebrows, the nose, the mouth, the skin tone, or the skin texture.
  13. The method according to claim 1, further comprising:
    extracting face features from a database of sample facial images; and
    classifying the extracted face features into positive and negative facial element images.
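How the positive/negative split of claim 13 is made is not specified; one simple assumption is a threshold on the extracted eigenvalue, sketched below with all names hypothetical:

```python
def split_training_samples(sample_images, extract_eigenvalue, threshold: float):
    """Partition sample facial element images into positive and negative
    sets by thresholding an extracted eigenvalue. The threshold is an
    assumed tuning parameter, not part of the claim."""
    positives, negatives = [], []
    for image in sample_images:
        value = extract_eigenvalue(image)
        (positives if value >= threshold else negatives).append(value)
    return positives, negatives
```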
  14. A system for processing facial images, comprising:
    a facial element processing module configured to obtain pre-selected feature points from an element of a target facial image; and determine a pre-selected feature eigenvalue (E) based on the feature points associated with the facial element;
    an eigenvalue processing module configured to obtain a positive eigenvalue (PE) corresponding to a positive sample of the facial element image; obtain a negative eigenvalue (NE) corresponding to a negative sample of the facial element image; determine a standard deviation of the determined eigenvalue associated with the facial element and the positive and negative eigenvalues; and determine a target eigenvalue based on the standard deviation; and
    a resulting image processing module configured to apply a weight factor to the target eigenvalue; determine a result from processing the target facial image based on the weighted eigenvalue; and present the result to a user.
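The module boundaries of claim 14 might map onto code as three cooperating classes; this skeleton only mirrors the division of responsibilities, with the unspecified method bodies left as placeholders:

```python
class FacialElementProcessingModule:
    """Obtains an element's feature points and its raw eigenvalue E."""
    def feature_points(self, image, element):
        ...  # placeholder: feature-point detection is not specified here
    def eigenvalue(self, points) -> float:
        ...  # placeholder: element-specific eigenvalue computation

class EigenvalueProcessingModule:
    """Holds PE/NE samples and normalizes E into the target eigenvalue."""
    def target_eigenvalue(self, e: float, pe: float, ne: float) -> float:
        return (e - ne) / (pe - ne)

class ResultingImageProcessingModule:
    """Weights the target eigenvalue and produces the presented result."""
    def result(self, target_eigenvalue: float, weight: float) -> float:
        return weight * target_eigenvalue
```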
  15. The system according to claim 14, wherein the target eigenvalue is calculated using a formula: (E - NE) / (PE - NE) .
  16. The system according to claim 15, wherein the facial element is a left eye, a right eye, a left eyebrow, a right eyebrow, a nose, a mouth, or an edge of a face.
  17. The system according to claim 16, wherein the target eigenvalue is the target eigenvalue associated with the eyes, the eyebrows, the nose, the mouth, a skin tone, or a skin texture.
  18. The system according to claim 17, wherein the facial element processing module is further configured to:
    determine a left eye surface area based on feature points associated with the left eye;
    determine a right eye surface area based on feature points associated with the right eye;
    determine a face surface area based on feature points associated with the edge of the face;
    determine a target eye by comparing the left eye surface area with the right eye surface area;
    determine a first target eigenvalue (1st E) based on the ratio of a surface area of the target eye over the face surface area;
    determine a first positive eigenvalue (1st PE) corresponding to a large eye image;
    determine a first negative eigenvalue (1st NE) corresponding to a small eye image; and
    determine the target eigenvalue associated with the eyes using a formula: (1st E - 1st NE) / (1st PE - 1st NE) .
  19. The system according to claim 17, wherein the facial element processing module is further configured to:
    determine a left eyebrow surface area based on feature points associated with the left eyebrow;
    determine a right eyebrow surface area based on feature points associated with the right eyebrow;
    determine a face surface area based on feature points associated with the edge of the face;
    determine a target eyebrow by comparing the left eyebrow surface area with the right eyebrow surface area;
    determine a second target eigenvalue (2nd E) based on the ratio of a surface area of the target eyebrow over the face surface area;
    determine a second positive eigenvalue (2nd PE) corresponding to a thick eyebrow image;
    determine a second negative eigenvalue (2nd NE) corresponding to a thin eyebrow image; and
    determine the target eigenvalue associated with the eyebrows using a formula: (2nd E - 2nd NE) / (2nd PE - 2nd NE) .
  20. The system according to claim 17, wherein the facial element processing module is further configured to:
    determine a nose surface area based on feature points associated with the nose;
    determine a face surface area based on feature points associated with the edge of the face;
    determine a third target eigenvalue (3rd E) based on the ratio of the nose surface area over the face surface area;
    determine a third positive eigenvalue (3rd PE) corresponding to a big nose image;
    determine a third negative eigenvalue (3rd NE) corresponding to a small nose image; and
    determine the target eigenvalue associated with the nose using a formula: (3rd E - 3rd NE) / (3rd PE - 3rd NE) .
  21. The system according to claim 17, wherein the facial element processing module is further configured to:
    determine a left eye surface area based on feature points associated with the left eye;
    determine a right eye surface area based on feature points associated with the right eye;
    determine a target eye by comparing the left eye surface area with the right eye surface area;
    determine a target eye gray scale value based on feature points associated with the target eye;
    determine a target pupil gray scale value based on feature points associated with a pupil of the target eye;
    determine a fourth target eigenvalue (4th E) based on the ratio of the target eye gray scale value over the target pupil gray scale value;
    determine a fourth positive eigenvalue (4th PE) corresponding to a big pupil image;
    determine a fourth negative eigenvalue (4th NE) corresponding to a small pupil image; and
    determine the target eigenvalue associated with the pupil using a formula: (4th E - 4th NE) / (4th PE - 4th NE) .
  22. The system according to claim 17, wherein the facial element processing module is further configured to:
    determine a skin feature surface area based on feature points;
    determine a gray scale value of the skin feature surface area and, based on the gray scale value, the fifth target eigenvalue (5th E) ;
    determine a fifth positive eigenvalue (5th PE) corresponding to a light skin tone image;
    determine a fifth negative eigenvalue (5th NE) corresponding to a dark skin tone image; and
    determine the target eigenvalue associated with the skin tone using a formula: (5th E - 5th NE) / (5th PE - 5th NE) .
  23. The system according to claim 17, wherein the facial element processing module is further configured to:
    determine a face surface area based on feature points associated with the edge of the face;
    determine a gray scale value of the face surface area and, based on the gray scale value, the sixth target eigenvalue (6th E) ;
    determine a sixth positive eigenvalue (6th PE) corresponding to a smooth skin texture image;
    determine a sixth negative eigenvalue (6th NE) corresponding to a rough skin texture image; and
    determine the target eigenvalue associated with the skin texture using a formula: (6th E - 6th NE) / (6th PE - 6th NE) .
  24. The system according to claim 17, wherein the facial element processing module is further configured to:
    determine a distance between two eyes based on feature points associated with the two eyes;
    determine a distance between two corners of the mouth based on feature points associated with the two corners of the mouth;
    determine the ratio of the distance between two eyes over the distance between two corners of the mouth;
    determine a seventh target eigenvalue (7th E) based on the ratio;
    determine a seventh positive eigenvalue (7th PE) corresponding to a small mouth image;
    determine a seventh negative eigenvalue (7th NE) corresponding to a large mouth image; and
    determine the target eigenvalue associated with the mouth using a formula: (7th E - 7th NE) / (7th PE - 7th NE) .
  25. The system according to one of claims 17-24, further comprising:
    a gender determination module configured to determine a gender of the facial image based on a gender determination template;
    wherein the resulting image processing module is further configured to apply one or more weight factors, according to the determined gender, to one or more of the eigenvalues associated with eyes, eyebrows, the nose, the mouth, the skin tone, or the skin texture.
  26. The system according to claim 14, wherein the facial element processing module is further configured to:
    extract face features from a database of sample facial images; and
    classify the extracted face features into positive and negative facial element images.
PCT/CN2014/089885 2013-11-27 2014-10-30 Methods and systems for processing facial images WO2015078261A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310636576.2 2013-11-27
CN201310636576.2A CN104680121B (en) 2013-11-27 2013-11-27 Method and device for processing face image

Publications (1)

Publication Number Publication Date
WO2015078261A1 true WO2015078261A1 (en) 2015-06-04

Family

ID=53198334

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/089885 WO2015078261A1 (en) 2013-11-27 2014-10-30 Methods and systems for processing facial images

Country Status (3)

Country Link
CN (1) CN104680121B (en)
HK (1) HK1206463A1 (en)
WO (1) WO2015078261A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107329402A (en) * 2017-07-03 2017-11-07 湖南工业大学 The control method that a kind of combined integral link is combined with PPI controller algorithm
WO2018076495A1 (en) * 2016-10-28 2018-05-03 广州炒米信息科技有限公司 Method and system for retrieving face image

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205490B (en) * 2015-09-23 2019-09-24 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN105450664B (en) * 2015-12-29 2019-04-12 腾讯科技(深圳)有限公司 A kind of information processing method and terminal
CN107122327B (en) 2016-02-25 2021-06-29 阿里巴巴集团控股有限公司 Method and training system for training model by using training data
CN108229279B (en) 2017-04-14 2020-06-02 深圳市商汤科技有限公司 Face image processing method and device and electronic equipment
CN108229278B (en) 2017-04-14 2020-11-17 深圳市商汤科技有限公司 Face image processing method and device and electronic equipment
CN110490177A (en) 2017-06-02 2019-11-22 腾讯科技(深圳)有限公司 A kind of human-face detector training method and device
CN109299632A (en) * 2017-07-25 2019-02-01 上海中科顶信医学影像科技有限公司 Skin detecting method, system, equipment and storage medium
CN108288023B (en) * 2017-12-20 2020-10-16 深圳和而泰数据资源与云技术有限公司 Face recognition method and device
CN108346130B (en) * 2018-03-20 2021-07-23 北京奇虎科技有限公司 Image processing method and device and electronic equipment
CN108629303A (en) * 2018-04-24 2018-10-09 杭州数为科技有限公司 A kind of shape of face defect identification method and system
CN109063597A (en) * 2018-07-13 2018-12-21 北京科莱普云技术有限公司 Method for detecting human face, device, computer equipment and storage medium
CN110929073A (en) * 2018-08-30 2020-03-27 上海掌门科技有限公司 Method and equipment for pushing information and collecting data
CN110968723B (en) * 2018-09-29 2023-05-12 深圳云天励飞技术有限公司 Image characteristic value searching method and device and electronic equipment
CN109978836B (en) * 2019-03-06 2021-01-19 华南理工大学 User personalized image aesthetic feeling evaluation method, system, medium and equipment based on meta learning
CN110717373B (en) * 2019-08-19 2023-01-03 咪咕文化科技有限公司 Image simulation method, electronic device, and computer-readable storage medium
CN111768336B (en) * 2020-07-09 2022-11-01 腾讯科技(深圳)有限公司 Face image processing method and device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080063263A1 (en) * 2006-09-08 2008-03-13 Li Zhang Method for outlining and aligning a face in face processing of an image
CN101833672A (en) * 2010-04-02 2010-09-15 清华大学 Sparse representation face identification method based on constrained sampling and shape feature
US7822696B2 (en) * 2007-07-13 2010-10-26 Microsoft Corporation Histogram-based classifiers having variable bin sizes
US20120328199A1 (en) * 2011-06-24 2012-12-27 Lg Innotek Co., Ltd. Method for detecting facial features

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1472694A (en) * 2002-10-25 2004-02-04 Global anti-terrorism face identifying codes and computer storing and searching method
CN101305913B (en) * 2008-07-11 2010-06-09 华南理工大学 Face beauty assessment method based on video
JP5651385B2 (en) * 2010-06-22 2015-01-14 花王株式会社 Face evaluation method
CN102496002A (en) * 2011-11-22 2012-06-13 上海大学 Facial beauty evaluation method based on images


Also Published As

Publication number Publication date
CN104680121A (en) 2015-06-03
HK1206463A1 (en) 2016-01-08
CN104680121B (en) 2022-06-03

Similar Documents

Publication Publication Date Title
WO2015078261A1 (en) Methods and systems for processing facial images
CN107145857B (en) Face attribute recognition method and device and model establishment method
Ghimire et al. Recognition of facial expressions based on salient geometric features and support vector machines
WO2018205801A1 (en) Facial animation implementation method, computer device, and storage medium
US11915514B2 (en) Method and apparatus for detecting facial key points, computer device, and storage medium
WO2019128646A1 (en) Face detection method, method and device for training parameters of convolutional neural network, and medium
US9317785B1 (en) Method and system for determining ethnicity category of facial images based on multi-level primary and auxiliary classifiers
Ng et al. A review of facial gender recognition
WO2017088432A1 (en) Image recognition method and device
Feng et al. Face detection, bounding box aggregation and pose estimation for robust facial landmark localisation in the wild
WO2019075666A1 (en) Image processing method and apparatus, terminal, and storage medium
WO2016192477A1 (en) Method and terminal for locating critical point of face
Tome et al. Identification using face regions: Application and assessment in forensic scenarios
KR20160101973A (en) System and method for identifying faces in unconstrained media
WO2019228040A1 (en) Facial image scoring method and camera
Singh et al. Comparison of face recognition algorithms on dummy faces
Li et al. Efficient 3D face recognition handling facial expression and hair occlusion
WO2020037963A1 (en) Facial image identifying method, device and storage medium
WO2020037962A1 (en) Facial image correction method and apparatus, and storage medium
WO2019075656A1 (en) Image processing method and device, terminal, and storage medium
Ban et al. Tiny and blurred face alignment for long distance face recognition
Galdámez et al. Ear recognition using a hybrid approach based on neural networks
Kroon et al. Eye localization in low and standard definition content with application to face matching
Chen et al. Robust facial expressions recognition using 3d average face and ameliorated adaboost
WO2016192213A1 (en) Image feature extraction method and device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14866550

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 12/10/2016)

122 Ep: pct application non-entry in european phase

Ref document number: 14866550

Country of ref document: EP

Kind code of ref document: A1