CN108629336B - Face characteristic point identification-based color value calculation method - Google Patents


Info

Publication number
CN108629336B
Authority
CN
China
Prior art keywords
face
user
face image
image
facial
Prior art date
Legal status
Active
Application number
CN201810569938.3A
Other languages
Chinese (zh)
Other versions
CN108629336A (en)
Inventor
裴鹏飞
李守斌
蒋保健
Current Assignee
Qiansou Inc
Original Assignee
Qiansou Inc
Priority date
Filing date
Publication date
Application filed by Qiansou Inc
Priority to CN201810569938.3A
Publication of CN108629336A
Application granted
Publication of CN108629336B
Legal status: Active


Classifications

    • G06V 40/171: Human faces; local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06F 18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06V 40/165: Face detection, localisation or normalisation using facial parts and geometric relationships
    • G06V 40/172: Human faces; classification, e.g. identification

Abstract

The invention discloses a color value calculation method based on face feature point recognition, which comprises the following steps: extracting and storing the feature values, feature points and gender information of a set of reference face images and of the user's face image; calculating, from the feature points of the user's face image, the similarity between the user's "three courts and five eyes" facial proportions and the standard proportions, and weighting the result to obtain a first score; calculating the facial-feature distances between the user's face image and the reference face images to obtain a second score; and calculating the differences between the feature points of the user's face image and those of the most similar reference face image to obtain a third score. The first, second and third scores are then combined by a weighted calculation to obtain the color value of the user's face image. Because the color value is computed from the feature points using only simple addition, subtraction, multiplication and division, the time complexity and space complexity of the algorithm are small constants. No additional deep-learning models are required, so the calculation is fast.

Description

Face characteristic point identification-based color value calculation method
Technical Field
The invention belongs to the technical field of image recognition, and in particular relates to a color value calculation method based on face feature point recognition.
Background
"Color value" is a popular term that has emerged in recent years and refers to how attractive a person's face is: a high color value means the person is very handsome or beautiful, while an ordinary color value means the person looks average. However, there is no uniform standard for evaluating color value, and such evaluation is highly subjective.
In recent years, with the rise of artificial intelligence, quantitative calculation of color value has become possible. Color value calculation is a method of quantifying how attractive a person's face is; it can play a useful role in improving quality of life and boosting personal confidence. Existing color value calculation methods label pictures with scores and train a classification model. This approach has the following disadvantages. First, a classification model cannot calculate separate scores for the individual facial features while calculating the overall color value; a separate model must be trained for each facial feature whose score is required. Second, treating color value as a classification problem gives low recognition accuracy when there are few samples (500-5000) and few classes (3-5), and the small number of classes prevents diverse, fine-grained scores. Third, training a new classification model is labour-intensive, the model is large, and the process is not easily interpretable. Although the classification could be made less coarse by adding more classes and more samples, much as a regular polygon approaches a circle as the number of sides grows, the cost of doing so and the resulting gain are unknown.
Disclosure of Invention
In order to solve the above problems in the prior art, the present invention provides a color value calculation method based on face feature point recognition.
The technical solution adopted by the invention is as follows:
The color value calculation method based on face feature point recognition comprises the following steps:
S101, inputting reference face images into a face feature point recognition model, and extracting and storing the feature values and feature points of the reference face images; and inputting the reference face images into a face gender classification model, and recognizing and storing the gender information of the reference face images.
S102, inputting the user's face image into the face feature point recognition model, and extracting and storing the feature values and feature points of the user's face image; and inputting the user's face image into the face gender classification model, and recognizing and storing the gender information of the user's face image.
S103, calculating the dimensions of the face and facial features of the user's face image from its feature points, calculating the user's "three courts and five eyes" proportions from these dimensions, comparing the user's proportions with the standard proportions, calculating the similarity between the user's proportions and the standard proportions with an approximation algorithm, and weighting the result to obtain a first score, wherein the first score is a positive number;
S104, extracting the common facial features of the user's face image and of a plurality of reference face images of the corresponding gender, calculating the Hausdorff distance between each common facial feature of the user's face image and the corresponding feature of each reference face image, and summing these Hausdorff distances to obtain the common-feature distance; the common facial features are the eyes, eyebrows and cheeks; the common-feature distance is taken as a negative value;
S105, according to the feature points of the user's face image extracted in step S102 and the feature points of the plurality of reference face images of the corresponding gender extracted in step S101, calculating the distances along the X axis and the Y axis between each feature point of the complex facial features of the user's face image and the corresponding feature point of each reference face image, and summing these distances to obtain the complex-feature distance; the complex facial features are the nose and mouth; the complex-feature distance is taken as a negative value;
adding the common-feature distance and the complex-feature distance to obtain a second score, wherein the second score is a negative number;
S106, comparing the feature value of the user's face image extracted in step S102 with the feature values of the plurality of reference face images of the corresponding gender extracted in step S101, and selecting the reference face image with the highest similarity to the user's face image; calculating the distances along the X axis and the Y axis between the feature points of the user's face image and the corresponding feature points of that reference face image to obtain a third score, wherein the third score is a negative number;
the feature value is a feature matrix extracted from a face image, and the feature points are coordinate points of the face contour and facial-feature contours on a two-dimensional plane;
and performing a weighted calculation on the first score, the second score and the third score to obtain the color value of the user's face image.
The invention has the beneficial effects that:
1. Because the invention calculates the color value of the face image from the feature points and performs only simple addition, subtraction, multiplication and division on them, the time complexity and space complexity of the algorithm are relatively small constants. No additional deep-learning models are needed, so the calculation is fast.
2. Based on existing face recognition technology, the recognition rate exceeds 99.6%, the models converge quickly, and no additional model training is required. Because the invention does not require labelled data, it avoids the drawbacks of having to label large amounts of data, and guarantee its quality, in order to train multiple classification models.
3. The scoring range is wide, which effectively separates scores and prevents them from clustering around the same value; the results are reasonable, and the score of the same person under the same conditions varies little.
4. Scores for the individual facial features and the cheeks can be calculated separately and used as the basis for other applications. For example, if the eye score is low, a beauty application can suggest eye makeup, and a product recommendation system can specifically recommend eye-related products.
Drawings
FIG. 1 is a schematic diagram of the feature point marking positions on a face image sample;
FIG. 2 is a schematic diagram illustrating the calculation of the dimensions of the face and facial features;
FIG. 3 is a schematic diagram of the feature points selected for the nose distance calculation;
FIG. 4 is a schematic diagram of the feature points selected for the mouth distance calculation.
Detailed Description
The invention is further explained below with reference to the drawings and the specific embodiments.
The color value calculation method based on face feature point recognition comprises the following steps:
s101, inputting the determined face image into a face characteristic point recognition model, and extracting and storing the feature value and the feature point of the determined face image; and inputting the determined face image into a face gender classification model, and identifying and storing the gender information of the determined face image.
A reference face image is an image that is subjectively considered to have a high color value; the reference face images are stored in a database and classified by gender.
Subjectively, a face with a high color value may be one whose facial features as a whole, some of whose facial features, or whose face shape are, from a medical point of view, well proportioned and free of particularly obvious defects. The reference images may show people that most people consider beautiful, selected by online voting; they may also be pictures of celebrities or images chosen manually.
The database contains more than 500 face images. A large data set (here, a collection of more than 500 face images) averages the facial features and face shapes towards a higher standard. Images added to the data set must be objective and accurate, show the face from the front, and have no facial features occluded. The face images added to the database are selected manually; to speed up the selection process, they may also be pre-selected by a search engine and then screened manually, so as to guarantee the quality of the images added to the database.
In one embodiment, before a face image is entered into the database, it is normalized in both size and space. Size normalization scales every face image to the same pixel size. Spatial normalization aligns the images so that the positions of corresponding points relative to the origin (0, 0) are comparable across images. Because face image processing is time-consuming, preprocessing the image library in this way speeds up the subsequent calculations.
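Purely as an illustration, the following sketch shows one way such size and spatial normalization might be implemented with OpenCV and NumPy. The target size, the use of a landmark bounding box for alignment and the function names are assumptions, not part of the patent.

```python
import cv2
import numpy as np

def normalize_face(image, landmarks, size=(256, 256)):
    """Scale a face image to a fixed pixel size and shift its feature points
    so that their bounding box starts at the origin (0, 0).

    `landmarks` is an (N, 2) array of feature-point coordinates.  The target
    `size` is an assumption; the patent only requires that all images share
    the same pixel size.
    """
    h, w = image.shape[:2]
    scale_x, scale_y = size[0] / w, size[1] / h

    # Size normalization: resize the image to the common pixel size.
    resized = cv2.resize(image, size)

    # Scale the feature points along with the image.
    pts = landmarks.astype(np.float64) * np.array([scale_x, scale_y])

    # Spatial normalization: translate so the face region starts at (0, 0).
    pts -= pts.min(axis=0)
    return resized, pts
```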
S102, inputting the user's face image into the face feature point recognition model, and extracting and storing the feature values and feature points of the user's face image; and inputting the user's face image into the face gender classification model, and recognizing and storing the gender information of the user's face image.
The user's face image may come from a photo or a video, or it may be captured in real time when a camera is pointed at the face.
S103, calculating the dimensions of the face and facial features of the user's face image from its feature points, calculating the user's "three courts and five eyes" proportions (the classical facial-proportion standard that divides the face vertically into three equal courts and horizontally into five eye widths) from these dimensions, comparing the user's proportions with the standard proportions, calculating the similarity between the user's proportions and the standard proportions with an approximation algorithm, and weighting the result to obtain a first score.
Referring to FIG. 2: the face and facial-feature dimensions are the widths and lengths of the face and the facial features, including but not limited to the face width, face length, middle court of the three courts, mouth-opening height, upper-lip thickness, mouth width, chin height (i.e. the height from the nose tip to the chin tip), eyebrow-to-mouth height, lower-lip-to-chin distance, upper-lip-to-chin distance, left-eye width, right-eye width, eye spacing, left-eye-to-face-edge distance, right-eye-to-face-edge distance, eyebrow height (the distance between the eyes and the eyebrows), nose width and nose height.
The face and facial-feature dimensions are calculated as shown in Table 1:
TABLE 1 (the table body is reproduced as an image in the original publication and is not rendered here)
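Since Table 1 is only published as an image, the sketch below illustrates how a few of the dimensions listed above could be computed from a 68-point landmark array. The dlib-style point numbering used in the comments is an assumption standing in for the patent's own point numbering in FIG. 1 and Table 1.

```python
import numpy as np

def euclid(p, q):
    """Euclidean distance between two 2-D feature points."""
    return float(np.hypot(p[0] - q[0], p[1] - q[1]))

def face_dimensions(pts):
    """Compute a few face/facial-feature dimensions from a (68, 2) landmark
    array; the indices below follow dlib's 68-point convention, which is
    assumed here for illustration only."""
    return {
        "face_width":      euclid(pts[0],  pts[16]),  # jaw corner to jaw corner
        "nose_height":     euclid(pts[27], pts[33]),  # nose bridge to nose tip
        "nose_width":      euclid(pts[31], pts[35]),  # nostril to nostril
        "mouth_width":     euclid(pts[48], pts[54]),  # mouth corner to corner
        "left_eye_width":  euclid(pts[36], pts[39]),
        "right_eye_width": euclid(pts[42], pts[45]),
        "chin_height":     euclid(pts[33], pts[8]),   # nose tip to chin tip
    }
```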
The user's three-courts-and-five-eyes proportions include, but are not limited to: the ratio of face length to face width; the ratio of the middle court of the three courts to the nose height; the ratio of face width to mouth width when the mouth is open; the ratio of face length (eyebrow to chin) to mouth width when the mouth is open; the ratio of face width to mouth width when the mouth is closed; the ratio of face length (eyebrow to chin) to mouth width when the mouth is closed; the ratio of left-eye width to nose width; the ratio of left-eye width to mouth width; the ratio of the middle court to eyebrow height H3; the ratio of the middle court to eyebrow height H4; the ratio of the middle court to eyebrow height H5; the ratio of the eyebrow-to-upper-lip height to the lower-lip-to-chin height; and the ratio between two of the three courts.
The first score is obtained by summing the weighted scores of the different three-courts-and-five-eyes proportions, and it is a positive number.
The standard values of the three-courts-and-five-eyes proportions are shown in Tables 2 and 3:
TABLE 2 (reproduced as an image in the original publication)
TABLE 3 (reproduced as an image in the original publication)
In this embodiment, an approximation algorithm is used to compare each of the user's three-courts-and-five-eyes proportions with the corresponding standard proportion; the algorithm is as follows:
① Similarity = SmallNum / (BigNum * part / parts);
② Similarity = Similarity > 1 ? 2 - Similarity : Similarity;
Here SmallNum is the smaller number in the user's proportion, BigNum is the larger number in the user's proportion, part is the smaller number in the standard proportion, and parts is the larger number in the standard proportion. For example, if the user's ratio of face length to face width is 150:166, then SmallNum is 150 and BigNum is 166; part and parts are taken from the standard proportion in the same way. Similarity is the similarity between the user's three-courts-and-five-eyes proportion and the standard proportion.
In formula ②, when the Similarity value is greater than 1, it is replaced by 2 minus the Similarity; otherwise it keeps its original value.
In the scoring step, Similarity is the value calculated above and score is the running total score; threshold is the similarity threshold, fixed here at 0.7, and base is the benchmark score. When the Similarity is lower than the threshold, only 1 point is added to the total score.
(The scoring pseudocode is reproduced as an image in the original publication.)
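Purely as an illustration, the sketch below combines formulas ① and ② with a scoring loop of the kind described above. Because the actual scoring pseudocode is published only as an image, the weighting applied to similarities above the threshold is an assumption.

```python
def proportion_similarity(user_ratio, standard_ratio):
    """Formulas ① and ②: similarity between a user proportion and a standard
    proportion, each given as a pair of numbers (numerator, denominator)."""
    small_num, big_num = sorted(user_ratio)   # SmallNum, BigNum
    part, parts = sorted(standard_ratio)      # part, parts
    similarity = small_num / (big_num * part / parts)
    return 2 - similarity if similarity > 1 else similarity

def first_score(user_ratios, standard_ratios, weights, threshold=0.7, base=100):
    """Weighted first score over all three-courts-and-five-eyes proportions.
    The patent only states that 1 point is added when the similarity is below
    the threshold; the weighting applied above the threshold is assumed."""
    score = 0.0
    for user_r, std_r, w in zip(user_ratios, standard_ratios, weights):
        sim = proportion_similarity(user_r, std_r)
        if sim < threshold:
            score += 1                       # below threshold: add a single point
        else:
            score += sim * w * base / 100    # assumed weighting above threshold
    return score

# Example: one proportion (face length : face width = 150:166) against a
# hypothetical standard of 1:1.1 with weight 4.
# print(first_score([(150, 166)], [(1, 1.1)], [4]))
```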
In one embodiment, in order to take cases of protruding cheekbones into account, the three-courts-and-five-eyes proportions also include face-shape proportions, which are calculated as follows:
For the left and right eyeball points, the distances along the X axis and the Y axis between each eyeball point and the corresponding feature points on the same side of the face are taken, and the ratio of the Y-axis distance to the X-axis distance is computed. The similarity between each of the user's face-shape proportions and the standard proportion is then calculated with the approximation algorithm described above, and the value with the greatest similarity is weighted to obtain the face-shape proportion score. The first score includes this face-shape proportion score.
Taking the left half of the face as an example, the distances from the left eyeball point to feature points 2, 4 and 6 along the X and Y axes are calculated and denoted lx1, lx2, lx3 and ly1, ly2, ly3 respectively. The face-shape proportions and their standard values are shown in Table 4:
TABLE 4
Feature point    Face-shape proportion    Standard value    Weight
2                ly1 : lx1                96 : 14           4
4                ly2 : lx2                57 : 37           4
6                ly3 : lx3                15 : 43           4
When calculating the eyeball point, the eyeball is assumed by default to lie at the centre of the eye, and its coordinates are computed from the eye feature points.
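As a minimal sketch of the face-shape proportion score described above (feature-point coordinates and the weighting of the most similar ratio are assumptions; the point numbering follows the patent's FIG. 1, which is not reproduced here):

```python
def ratio_similarity(user_ratio, standard_ratio):
    # Same as formulas ① and ② above.
    small, big = sorted(user_ratio)
    part, parts = sorted(standard_ratio)
    s = small / (big * part / parts)
    return 2 - s if s > 1 else s

def face_shape_score(eyeball, side_points, standards, weights):
    """Face-shape proportion score for one half of the face.

    `eyeball` is the eyeball point (assumed to be the centre of the eye
    feature points), `side_points` are the corresponding contour feature
    points on the same side (points 2, 4 and 6 in the patent's numbering),
    and `standards`/`weights` mirror Table 4.  Only the most similar ratio
    is weighted, as described above."""
    best_sim, best_weight = -1.0, 0.0
    for p, (std_y, std_x), w in zip(side_points, standards, weights):
        dx = abs(p[0] - eyeball[0])   # distance along the X axis
        dy = abs(p[1] - eyeball[1])   # distance along the Y axis
        sim = ratio_similarity((dy, dx), (std_y, std_x))
        if sim > best_sim:
            best_sim, best_weight = sim, w
    return best_sim * best_weight

# Example with hypothetical coordinates:
# face_shape_score((120, 150), [(60, 140), (70, 200), (90, 250)],
#                  [(96, 14), (57, 37), (15, 43)], [4, 4, 4])
```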
In one embodiment, the proportions of the lower part of the face are also calculated in order to evaluate the lower face contour. For the lower-face proportions, three corresponding feature points are selected on the lower part of each of the left and right sides of the face; the X-axis and Y-axis distances between each pair of adjacent feature points are calculated, the ratio of the X-axis distance to the Y-axis distance is computed, the similarity between each of the user's lower-face proportions and the standard proportion is calculated with the approximation algorithm described above, and the value with the greatest similarity is weighted to obtain the lower-face proportion score. The first score includes this lower-face proportion score.
In an exemplary embodiment, feature points 6, 7 and 8 are taken on the left side of the face and the corresponding feature points 13, 12 and 11 on the right side. The distances between feature points 6 and 7 along the X axis and the Y axis are calculated and denoted x1 and y1, the distances between feature points 7 and 8 are denoted x2 and y2, and the right-side feature points are handled in the same way. The lower-face proportions and standard values are shown in Table 5:
TABLE 5
Feature points    Face-shape proportion    Standard value    Weight
6 and 7           x1 : y1                  9 : 21            5
7 and 8           x2 : y2                  20 : 41           5
The standard values in Tables 2 to 5 are preset values.
S104, extracting the common facial features of the user's face image and of a plurality of reference face images of the corresponding gender, calculating the Hausdorff distance between each common facial feature of the user's face image and the corresponding feature of each reference face image, and summing these Hausdorff distances to obtain the common-feature distance. The common facial features are the eyes, eyebrows and cheeks. The common-feature distance is taken as a negative value.
First, according to the gender information of the user's face image extracted in S102, a plurality of reference face images of the corresponding gender are retrieved from the database. "A plurality" means more than two; in one embodiment, 100 reference face images of the corresponding gender are retrieved from the database.
Since there are large differences between male and female facial features, in one embodiment the reference face images retrieved from the database are male when the user's face image is male, and female when the user's face image is female.
Before computing the Hausdorff distances, the user's face image and the reference face images must also be normalized in space and size. Size normalization scales the user's face image and the reference face images to the same pixel size. Spatial normalization aligns the positions of their points relative to the origin (0, 0).
The common facial features of the user's face image and of the reference face images are extracted, and the Hausdorff distance between each common facial feature of the user's face image and the corresponding feature of the reference face images is calculated to obtain the common-feature distance. The common facial features are the eyes, eyebrows and cheeks, so the common-feature distance comprises an eye distance, an eyebrow distance and a cheek distance, and is obtained by adding the three.
In one embodiment, the eye contours of the user's face image and of a reference face image are extracted and aligned so that their origins coincide. For each point on the eye contour of the user's face image, the minimum distance to the points on the eye contour of the reference face image is calculated; the maximum of these minimum distances is the eye distance.
The eyebrow distance and the cheek distance are calculated in the same way as the eye distance and are not described again here.
In one embodiment, the distance between the user's face image and the common facial features of every reference face image is calculated. That is, 100 reference face images are retrieved, the Hausdorff distances between the common facial features of the user's face image and those of each of the 100 reference face images are calculated, and the 100 Hausdorff distances are added to obtain the common-feature distance.
The common-feature distance represents the similarity between the common facial features of the user's face image and those of the reference face images in the database: the larger the distance, the smaller the similarity (i.e. the more dissimilar they are). Therefore, when the common-feature distance is converted into a score, it must be taken as a negative number.
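As an illustrative sketch of the common-feature distance (the directed Hausdorff distance follows the eye-distance description above; the dictionary layout used to group contours is an assumption):

```python
import numpy as np

def directed_hausdorff(contour_a, contour_b):
    """Directed Hausdorff distance between two contours, each an (N, 2) array:
    the maximum over points of A of the minimum distance to any point of B."""
    dists = np.linalg.norm(contour_a[:, None, :] - contour_b[None, :, :], axis=2)
    return float(dists.min(axis=1).max())

def common_feature_distance(user_contours, reference_faces):
    """Sum of Hausdorff distances for the common facial features (eyes,
    eyebrows, cheeks) over all reference faces.  `user_contours` and each
    entry of `reference_faces` map a feature name to an aligned contour
    array; this layout is an assumption."""
    total = 0.0
    for ref in reference_faces:
        for name in ("eyes", "eyebrows", "cheeks"):
            total += directed_hausdorff(user_contours[name], ref[name])
    return -total   # the common-feature distance enters the score as a negative value
```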
S105, according to the feature points of the user's face image extracted in step S102 and the feature points of the plurality of reference face images of the corresponding gender extracted in step S101, calculating the distances along the X axis and the Y axis between each feature point of the complex facial features of the user's face image and the corresponding feature point of each reference face image, and summing these distances to obtain the complex-feature distance; the complex facial features are the nose and mouth; the complex-feature distance is taken as a negative value.
Compared with the common facial features, the complex facial features have a greater influence on a person's color value. The complex facial features are the nose and the mouth. The complex-feature points of the user's face image are extracted, along with the corresponding complex-feature points of a plurality of reference face images of the same gender as the user's face image; the distances along the X axis and the Y axis between each complex-feature point of the user's face image and the corresponding feature point of each reference face image are calculated, and all of these X-axis and Y-axis distances are added to obtain the complex-feature distance. "A plurality" means more than two; in one embodiment, 100 reference face images are used.
Referring to FIG. 3: in one embodiment, the nose is represented by five feature points: the two end points representing the length of the nose, and the two end points representing the width of the nose plus the point between them. These five nose feature points are extracted from the user's face image, together with the five corresponding nose feature points of each of the 100 reference face images; the distances along the X axis and the Y axis between the user's five points and the corresponding points of each reference face image are calculated, summed over the five points, and then summed over the 100 reference images to obtain the nose distance.
Referring to FIG. 4: in one embodiment, the mouth is represented by eight feature points: the two end points of the mouth width, the middle three points representing the curve of the upper lip, and the middle three points representing the curve of the lower lip. The mouth distance is calculated in the same way as the nose distance and is not described again.
The complex-feature distance represents the similarity between the complex facial features of the user's face image and those of the reference face images in the database: the greater the distance, the smaller the similarity (i.e. the more dissimilar they are). Therefore, when the complex-feature distance is converted into a score, it must be taken as a negative number.
The common-feature distance and the complex-feature distance are added to obtain the second score, which is a negative number.
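A minimal sketch of the complex-feature distance and the second score, assuming normalized feature-point arrays grouped by feature name (the dictionary layout is an assumption):

```python
import numpy as np

def complex_feature_distance(user_points, reference_faces):
    """Sum of per-point X-axis and Y-axis distances for the complex facial
    features (nose and mouth).  `user_points` and each entry of
    `reference_faces` map "nose" to a (5, 2) array and "mouth" to an (8, 2)
    array of normalized feature-point coordinates."""
    total = 0.0
    for ref in reference_faces:
        for name in ("nose", "mouth"):
            diff = np.abs(user_points[name] - ref[name])
            total += diff.sum()   # |dx| + |dy| summed over all feature points
    return -total                 # the complex-feature distance is a negative score

def second_score(common_dist, complex_dist):
    """Second score: sum of the (negative) common- and complex-feature distances."""
    return common_dist + complex_dist
```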
S106, comparing the feature value of the user's face image extracted in step S102 with the feature values of the plurality of reference face images of the corresponding gender extracted in step S101, and selecting the reference face image with the highest similarity to the user's face image; and calculating the distances along the X axis and the Y axis between the feature points of the user's face image and the corresponding feature points of that reference face image to obtain a third score, wherein the third score is a negative number.
The feature value is a feature matrix extracted from the face image and describes the face image; in this embodiment the extracted feature value is 1024 bytes. The feature points are coordinate points of the face contour and the facial-feature contours on a two-dimensional plane.
The extracted feature value of the user's face image is compared with the feature values of the reference face images of the corresponding gender in the database, and the single reference face image with the highest similarity to the user's face image is selected. The distances along the X axis and the Y axis between the feature points of the user's face image and the corresponding feature points of that reference face image are then calculated over the 68 extracted feature points, and all of these distances are added to obtain the third score.
The greater the distance along the X axis and the Y axis between the feature points of the user's face image and those of the reference face image, the less similar the two images are. Therefore, when this distance is converted into a score, it must be taken as a negative number; that is, the third score is a negative number.
The first, second and third scores are multiplied by their corresponding weights and added to obtain the color value of the user's face image. The weights are fixed values determined empirically; in one embodiment, the first, second and third scores have weights of 6, 5 and 4 respectively.
Of course, before calculating the distances along the X axis and the Y axis between the feature points of the user's face image and those of the reference face image, the two images must be normalized; the normalization method is the same as above and is not repeated here.
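The sketch below illustrates the third score and the final weighted combination. The use of Euclidean distance between feature values to pick the most similar reference image is an assumption, consistent with the Euclidean embedding space described for FaceNet below; the weights 6, 5 and 4 are the example values from the description.

```python
import numpy as np

def most_similar_reference(user_embedding, reference_embeddings):
    """Index of the reference face whose feature value (embedding) is closest
    to the user's; Euclidean distance is assumed as the similarity measure."""
    dists = [np.linalg.norm(user_embedding - ref) for ref in reference_embeddings]
    return int(np.argmin(dists))

def third_score(user_points, reference_points):
    """Third score: sum of X-axis and Y-axis distances over the 68 feature
    points of the user's face image and of the most similar reference face
    image (both assumed normalized), taken as a negative value."""
    return -float(np.abs(user_points - reference_points).sum())

def color_value(score1, score2, score3, weights=(6, 5, 4)):
    """Weighted combination of the three scores into the final color value."""
    w1, w2, w3 = weights
    return w1 * score1 + w2 * score2 + w3 * score3
```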
In one embodiment, the method further comprises, before step S101, training the face feature point recognition model. The specific method is as follows:
s201, inputting a human face image sample marked with feature points into a neural network training model for training to obtain a human face feature point recognition model, wherein the human face feature point recognition model is used for extracting feature values of a human face image and recognizing feature points of the human face image.
Before a face image sample is input into the neural network, it is normalized; the normalization method is the same as described above and is not repeated here.
Referring to FIG. 1: in the first round of training, the feature points of each face image sample are marked manually along the face contour and the contours of the facial features. The number of feature points can be set as required; in this embodiment, 68 feature points are marked along the face contour and facial-feature contours of each sample: 17 for the face contour, 6 for each of the left and right eyes, 5 for each of the left and right eyebrows, 9 for the nose and 20 for the mouth. Because hats and hair can occlude the forehead, no feature points are marked on the forehead.
If too few feature points are marked, the shape of the face and facial features cannot be reflected accurately. The invention uses 68 feature points, which reflect the face contour and the shapes of the facial features well while keeping the calculation fast.
The neural network training model is the FaceNet neural network model (a face recognition model proposed by a team at Google). The Caffe framework (Convolutional Architecture for Fast Feature Embedding, a convolutional neural network framework) is used to train on the face image samples; the face images are mapped directly into a Euclidean space in which the distance between points represents the similarity of the face images. With this training model, the feature value and the feature points of a face can be recognized: the feature value is a 1024-byte feature, and the feature points are the 68 marked feature points.
The method uses the FaceNet neural network training model to train on the face image feature points. FaceNet's recognition accuracy is very high, at 99.6-99.7% among the best results worldwide; its recognition accuracy reaches 99.63% on the Labeled Faces in the Wild (LFW) data set and 95.12% on the YouTube Faces DB data set. FaceNet converges quickly because it encodes a face feature value in only 128 bytes and uses a triplet loss function (a loss function measures the recognition error of the current model and is used during training to gradually correct the network parameters so that accuracy improves step by step). Encoding face features in only 128 bytes means the face comparison step takes less time, and the triplet loss is less complex than the traditional Softmax loss, so FaceNet trains and recognizes quickly. Both speed and accuracy are essential considerations in industrial use.
The FaceNet neural network training model developed by the Google team provides two convolutional models, NN1 and NN2. NN2 is a GoogLeNet-style Inception model, and NN2 is adopted as the neural network training model in the invention. The NN2 model has only about 4.3M parameters, roughly 20 times fewer than NN1, and each picture requires about 20 MFLOPS of CPU computation, about one fifth of NN1.
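The patent does not provide code for the feature point recognition model. Purely to illustrate the kind of pipeline described (68 feature points plus a compact face embedding as the feature value), the sketch below uses the dlib library as a stand-in for the FaceNet-based model; the pretrained model file names are dlib's published models, not part of the patent.

```python
import dlib
import numpy as np

# Pretrained dlib models used here as stand-ins for the patent's
# feature point recognition model and feature-value extractor.
detector = dlib.get_frontal_face_detector()
landmark_model = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
embedder = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

def extract_features(image):
    """Return the 68 feature points and a 128-dimensional face embedding
    (the 'feature value') for the first face found in an RGB image array."""
    faces = detector(image, 1)
    if not faces:
        return None, None
    shape = landmark_model(image, faces[0])
    points = np.array([[p.x, p.y] for p in shape.parts()])               # 68 x 2 feature points
    embedding = np.array(embedder.compute_face_descriptor(image, shape)) # feature value
    return points, embedding
```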
In one embodiment, before step S101, the following steps are further included:
s301, HOG characteristics of the face image sample with the gender determined are extracted through an OpenCV computer vision library.
S302, the HOG features of the face image samples extracted in step S301 are input into an SVM classifier for training to obtain a gender classification model, which is used to identify the gender of a face image.
Because there are large differences between male and female facial features, the method of the invention also includes training a gender classifier on the face image samples. Before the feature point training, the face image samples must be classified by gender; this classification can take place before or after the samples are manually labelled. In this embodiment, the face image samples are divided into male and female, the feature points are labelled manually, and the samples are normalized.
Because gender recognition is a binary classification problem, the invention combines Histogram of Oriented Gradients (HOG) features with a Support Vector Machine (SVM) classifier to train on the face image samples.
OpenCV is a cross-platform computer vision library released under the BSD (open-source) licence; it runs on Linux, Windows, Android and Mac OS, and is used here to extract the HOG features of the face image samples.
The HOG features of the face image samples extracted in S301 are input into the SVM classifier for training to obtain the gender classification model, which can identify the gender of a face image.
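A minimal sketch of such HOG + SVM gender training using OpenCV and scikit-learn; the crop size, HOG parameters and label encoding are assumptions, since the patent only specifies HOG features and an SVM classifier.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# HOG descriptor over a fixed 64x64 face crop; the window, block and cell
# sizes are assumptions.
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def hog_feature(face_bgr):
    """HOG feature vector for a single face image."""
    gray = cv2.cvtColor(cv2.resize(face_bgr, (64, 64)), cv2.COLOR_BGR2GRAY)
    return hog.compute(gray).ravel()

def train_gender_classifier(face_images, genders):
    """Train the SVM gender classifier on gender-labelled face images
    (genders: 0 = female, 1 = male; the label encoding is an assumption)."""
    features = np.stack([hog_feature(img) for img in face_images])
    clf = SVC(kernel="linear")
    clf.fit(features, genders)
    return clf
```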
Histogram of Oriented Gradients (HOG) features are feature descriptors used for object detection in computer vision and image processing. They are constructed by computing and accumulating histograms of gradient directions over local regions of an image. First, because HOG operates on local cells of the image, it remains fairly invariant to geometric and photometric deformations, which only appear over much larger spatial regions. Second, under coarse spatial sampling, fine orientation sampling and strong local photometric normalization, slight limb movements of a pedestrian who keeps a roughly upright posture can be ignored without affecting the detection result. HOG features are therefore particularly well suited to detecting people in images.
An SVM is a common discriminative method; in machine learning it is a supervised learning model typically used for pattern recognition, classification and regression analysis.
The present invention is not limited to the alternative embodiments described above, and anyone may derive products in other forms in light of the present invention. The above detailed description should not be taken as limiting the scope of the invention, which is defined by the claims; the description is to be interpreted accordingly.

Claims (9)

1. A color value calculation method based on face feature point recognition, characterized by comprising the following steps:
S101, inputting reference face images into a face feature point recognition model, and extracting and storing the feature values and feature points of the reference face images; inputting the reference face images into a face gender classification model, and recognizing and storing the gender information of the reference face images;
S102, inputting a user's face image into the face feature point recognition model, and extracting and storing the feature values and feature points of the user's face image; inputting the user's face image into the face gender classification model, and recognizing and storing the gender information of the user's face image;
S103, calculating the dimensions of the face and facial features of the user's face image from its feature points, calculating the user's three-courts-and-five-eyes proportions from these dimensions, comparing the user's proportions with the standard proportions, calculating the similarity between the user's proportions and the standard proportions with an approximation algorithm, and weighting the result to obtain a first score, wherein the first score is a positive number;
S104, extracting the common facial features of the user's face image and of a plurality of reference face images of the corresponding gender, calculating the Hausdorff distance between each common facial feature of the user's face image and the corresponding feature of each reference face image, and summing these Hausdorff distances to obtain the common-feature distance, wherein the common facial features are the eyes, eyebrows and cheeks, and the common-feature distance is taken as a negative value;
S105, according to the feature points of the user's face image extracted in step S102 and the feature points of the plurality of reference face images of the corresponding gender extracted in step S101, calculating the distances along the X axis and the Y axis between each feature point of the complex facial features of the user's face image and the corresponding feature point of each reference face image, and summing these distances to obtain the complex-feature distance, wherein the complex facial features are the nose and mouth, and the complex-feature distance is taken as a negative value;
adding the common-feature distance and the complex-feature distance to obtain a second score, wherein the second score is a negative number;
S106, comparing the feature value of the user's face image extracted in step S102 with the feature values of the plurality of reference face images of the corresponding gender extracted in step S101, and selecting the reference face image with the highest similarity to the user's face image; calculating the distances along the X axis and the Y axis between the feature points of the user's face image and the corresponding feature points of that reference face image to obtain a third score, wherein the third score is a negative number;
wherein the feature value is a feature matrix extracted from a face image, and the feature points are coordinate points of the face contour and facial-feature contours on a two-dimensional plane;
and performing a weighted calculation on the first score, the second score and the third score to obtain the color value of the user's face image.
2. The face feature point recognition-based color value calculation method according to claim 1, wherein: before step S101, the following steps are also included:
s201, inputting a human face image sample marked with feature points into a neural network training model for training to obtain a human face feature point recognition model, wherein the human face feature point recognition model is used for extracting feature values of a human face image and recognizing feature points of the human face image.
3. The face feature point recognition-based color value calculation method according to claim 1, wherein: before step S101, the following steps are also included:
S301, extracting the HOG features of the gender-labelled face image samples with the OpenCV computer vision library;
S302, inputting the HOG features of the face image samples extracted in step S301 into an SVM classifier for training to obtain a gender classification model, wherein the gender classification model is used to identify the gender of a face image.
4. The face feature point recognition-based color value calculation method according to claim 2, wherein: the number of the feature points is 68.
5. The face feature point recognition-based color value calculation method according to claim 2, wherein: the neural network training model is a FaceNet neural network model.
6. The face feature point recognition-based color value calculation method according to claim 1, wherein: the face and facial-feature dimensions are the widths and lengths of the face and facial features, including one or more of: face width, face length, the middle court of the three courts, mouth-opening height, upper-lip thickness, chin height, eyebrow-to-mouth height, lower-lip-to-chin distance, upper-lip-to-chin distance, mouth width, left-eye width, right-eye width, eye spacing, left-eye-to-face-edge distance, right-eye-to-face-edge distance, eyebrow height, nose width and nose height.
7. The face feature point recognition-based color value calculation method according to claim 6, wherein: the three-courts-and-five-eyes proportions comprise one or more of: the ratio of face length to face width; the ratio of the middle court of the three courts to the nose height; the ratio of face width to mouth width when the mouth is open; the ratio of face length to mouth width when the mouth is open; the ratio of face width to mouth width when the mouth is closed; the ratio of face length to mouth width when the mouth is closed; the ratio of left-eye width to nose width; the ratio of left-eye width to mouth width; the ratio of the middle court to eyebrow height H3; the ratio of the middle court to eyebrow height H4; the ratio of the middle court to eyebrow height H5; the ratio of the eyebrow-to-upper-lip height to the lower-lip-to-chin height; and the ratio between two of the three courts; wherein eyebrow height H3 is the absolute value of the Y-axis difference taken at the outer eyebrow corner, H4 is the absolute value of the Y-axis difference taken at the highest point of the eyebrow, and H5 is the absolute value of the Y-axis difference taken at the inner eyebrow corner.
8. The face feature point recognition-based color value calculation method according to claim 7, wherein: the three-courts-and-five-eyes proportions further comprise face-shape proportions.
9. The face feature point recognition-based color value calculation method according to claim 7, wherein: the three-courts-and-five-eyes proportions further comprise the face-shape proportions of the lower part of the face.
CN201810569938.3A 2018-06-05 2018-06-05 Face characteristic point identification-based color value calculation method Active CN108629336B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810569938.3A CN108629336B (en) 2018-06-05 2018-06-05 Face characteristic point identification-based color value calculation method


Publications (2)

Publication Number Publication Date
CN108629336A CN108629336A (en) 2018-10-09
CN108629336B (en) 2020-10-16

Family

ID=63691264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810569938.3A Active CN108629336B (en) 2018-06-05 2018-06-05 Face characteristic point identification-based color value calculation method

Country Status (1)

Country Link
CN (1) CN108629336B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214373B (en) * 2018-11-05 2020-11-13 绍兴文理学院 Face recognition system and method for attendance checking
CN109657539B (en) * 2018-11-05 2022-01-25 达闼机器人有限公司 Face value evaluation method and device, readable storage medium and electronic equipment
CN109376712A (en) * 2018-12-07 2019-02-22 广州纳丽生物科技有限公司 A kind of recognition methods of face forehead key point
CN109858343B (en) * 2018-12-24 2021-07-06 深圳云天励飞技术有限公司 Control method, device and storage medium based on face recognition
CN109871822A (en) * 2019-03-05 2019-06-11 百度在线网络技术(北京)有限公司 Method and apparatus for output information
CN110221699B (en) * 2019-06-13 2022-03-25 北京师范大学珠海分校 Eye movement behavior identification method of front-facing camera video source
CN110874567B (en) * 2019-09-23 2024-01-09 平安科技(深圳)有限公司 Color value judging method and device, electronic equipment and storage medium
CN111062260B (en) * 2019-11-25 2024-03-05 杭州绿度信息技术有限公司 Automatic generation method of face-beautifying recommendation scheme
CN110784600A (en) * 2019-11-28 2020-02-11 西南民族大学 Device and method for detecting distance between human eyes and mobile phone
CN113822171A (en) * 2021-08-31 2021-12-21 苏州中科先进技术研究院有限公司 Pet color value scoring method, device, storage medium and equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4775306B2 (en) * 2007-04-23 2011-09-21 ソニー株式会社 Image processing apparatus, imaging apparatus, image display control method, and computer program
CN101604377A (en) * 2009-07-10 2009-12-16 华南理工大学 A kind of facial beauty classification method that adopts computing machine to carry out woman image
CN105701468A (en) * 2016-01-12 2016-06-22 华南理工大学 Face attractiveness evaluation method based on deep learning
CN105718869B (en) * 2016-01-15 2019-07-02 网易(杭州)网络有限公司 The method and apparatus of face face value in a kind of assessment picture
CN106548156A (en) * 2016-10-27 2017-03-29 江西瓷肌电子商务有限公司 A kind of method for providing face-lifting suggestion according to facial image
CN106815557A (en) * 2016-12-20 2017-06-09 北京奇虎科技有限公司 A kind of evaluation method of face features, device and mobile terminal
CN107169408A (en) * 2017-03-31 2017-09-15 北京奇艺世纪科技有限公司 A kind of face value decision method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"东方女性人脸颜值量化算法研究";韩晓旭;《中国优秀硕士学位论文全文数据库 信息科技辑》;20180415;全文 *
"基于多特征融合的人脸颜值预测";蒋婷 等;《网络新媒体技术》;20170426;第7-13页 *

Also Published As

Publication number Publication date
CN108629336A (en) 2018-10-09

Similar Documents

Publication Publication Date Title
CN108629336B (en) Face characteristic point identification-based color value calculation method
US10049262B2 (en) Method and system for extracting characteristic of three-dimensional face image
JP7078803B2 (en) Risk recognition methods, equipment, computer equipment and storage media based on facial photographs
CN106815566B (en) Face retrieval method based on multitask convolutional neural network
CN109389074B (en) Facial feature point extraction-based expression recognition method
CN106960202B (en) Smiling face identification method based on visible light and infrared image fusion
JP6664163B2 (en) Image identification method, image identification device, and program
CN107169455B (en) Face attribute recognition method based on depth local features
WO2019232866A1 (en) Human eye model training method, human eye recognition method, apparatus, device and medium
US9158970B2 (en) Devices, systems, and methods for visual-attribute refinement
WO2017107957A9 (en) Human face image retrieval method and apparatus
Zhang et al. Adaptive facial point detection and emotion recognition for a humanoid robot
KR20220150868A (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
CN109271930B (en) Micro-expression recognition method, device and storage medium
CN111062328B (en) Image processing method and device and intelligent robot
CN111126240B (en) Three-channel feature fusion face recognition method
WO2021196721A1 (en) Cabin interior environment adjustment method and apparatus
CN110909618A (en) Pet identity recognition method and device
WO2021127916A1 (en) Facial emotion recognition method, smart device and computer-readabel storage medium
Kalansuriya et al. Neural network based age and gender classification for facial images
CN110826408A (en) Face recognition method by regional feature extraction
Upadhyay et al. A review on different facial feature extraction methods for face emotions recognition system
Chen et al. Robust gender recognition for uncontrolled environment of real-life images
CN114241542A (en) Face recognition method based on image stitching
CN113705466A (en) Human face facial feature occlusion detection method used for occlusion scene, especially under high-imitation occlusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant