CN111597972B - Makeup recommendation method based on ensemble learning - Google Patents

Makeup recommendation method based on ensemble learning

Info

Publication number
CN111597972B
Authority
CN
China
Prior art keywords: makeup, picture, learning, training, recommendation method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010407658.XA
Other languages
Chinese (zh)
Other versions
CN111597972A (en)
Inventor
张金
诸佳昕
陈颖
陈孚生
黄伦松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nankai University
Original Assignee
Nankai University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nankai University
Priority to CN202010407658.XA
Publication of CN111597972A
Application granted
Publication of CN111597972B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 - Querying
    • G06F16/535 - Filtering based on additional data, e.g. user or group profiles
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06T3/04
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Abstract

The invention relates to the technical field of intelligent terminals, and in particular to a makeup recommendation method based on ensemble learning. The method comprises the following steps: obtaining a facial image and performing facial feature recognition analysis on the person in the image; inputting the facial feature analysis result into a learning model and finding a recommended makeup picture from a makeup set; and performing makeup transfer according to the recommended makeup picture to obtain a transferred picture. A makeup recommendation rule trained by ensemble learning can account for the fact that different facial features influence the makeup differently, and is closer to the way humans recommend makeup. The results are superior to those obtained by traditional similarity analysis, the time for manual facial analysis and makeup recommendation is saved, and the problem that users do not know which makeup suits them is solved.

Description

Makeup recommendation method based on ensemble learning
Technical Field
The invention relates to the technical field of intelligent terminals, and in particular to a makeup recommendation method based on ensemble learning.
Background
Most existing makeup recommendation algorithms recommend makeup based on face-similarity comparison. For example, research on makeup recommendation for oriental female faces recommends makeup according to the similarity of VGGFace feature descriptors, and deep-learning-based automatic facial makeup with deep hashing selects the makeup whose features have the minimum Euclidean distance from the current face's features as the recommendation. In fact, such recommendations are not entirely reasonable: for people with different face shapes and different facial features, a similar makeup is not the same as a suitable makeup. If a person's eyebrow shape and face shape are not consistent with the reference face, the result recommended by the similarity principle will not suit them, so the practical reference value of such recommendations is small.
Disclosure of Invention
The facial features of a human face are too complex for a simple feature classification to match a person's face completely, and the predicted influence of different facial features is not absolute. The invention therefore provides a makeup recommendation method based on ensemble learning: ensemble learning is introduced to integrate many weak decisions into a decision of relatively high reliability and recommend the most suitable makeup. This solves the poor effectiveness and weak personalization of traditional makeup recommendation, as well as the facial-feature weight-assignment problem that arises because different facial features affect the overall makeup effect differently.
In order to achieve the purpose, the invention adopts the following technical scheme:
A makeup recommendation method based on ensemble learning, the method comprising the following steps:
acquiring a facial image, and performing facial feature recognition analysis on people in the image;
inputting the facial feature analysis result into a learning model, and finding a recommended makeup picture from a makeup set;
and performing makeup transfer according to the recommended makeup picture to obtain a transferred picture.
In a further refinement of the technical scheme, the makeup recommendation method comprises the following steps:
S201, acquiring a training set, and randomly extracting n samples from the training set;
S202, repeating step S201 k times to obtain k groups of training sets;
S203, training with one training set each time to obtain one learning model, k training sets giving k models in total;
S204, predicting with the k models obtained in the previous step to obtain k prediction results;
S205, counting the prediction results, wherein the most frequent prediction result determines the recommended makeup picture.
In a further refinement of the technical scheme, in step S205 a voting method counts the result with the highest occurrence frequency, which is then corrected according to the initial distribution of the attractiveness difference between the transferred picture and the bare-face picture; the makeup picture corresponding to the highest corrected value is the recommended one. The correction based on the initial distribution is as follows:
S2051, recording, by direct statistics, the probability of each initial attractiveness-difference level and denoting it ai;
S2052, counting, for the k prediction results, the votes of every category, and dividing each category's votes by k to obtain its probability, denoted bi;
S2053, the final prediction is the category maximizing (bi - ai) * qi / ai, where qi is a weight calculated from the initial distribution probability.
In a further refinement of the technical scheme, the training set of the learning model is acquired as follows:
comparing makeup picture B with bare-face picture A to obtain X, transferring the makeup of picture B onto picture A to obtain transferred picture C, and obtaining the value Y from picture C; comparing and transferring every bare-face picture against all makeup pictures in the makeup set yields the training set.
In a further refinement of the technical scheme, the learning model is a linear function:
Y = W^T X + β,
where Y is the attractiveness improvement level, W is the weight vector, X encodes the comparison between a bare-face picture and a makeup picture, and β is the weight bias.
In a further refinement of the technical scheme, the learning model calculates the weight W and weight bias β from the training-set data,
with the mean square error MSE used as the loss function Loss:
Loss = (1/m) * Σ_{i=1}^{m} (f(x_i) - Y_i)^2
where m is the number of training samples, i denotes the i-th training sample, f(x_i) is the predicted attractiveness improvement level, and Y_i is the actual one; the smaller the loss value, the smaller the gap between f(x_i) and Y_i, and a gradient descent algorithm calculates W and β iteratively.
In a further refinement of the technical scheme, the tuning formulas of the gradient descent algorithm are:
W_j = W'_j - learnrate * 2 * MSE_W * X_k
β_j = β'_j - learnrate * 2 * MSE_W
where j denotes the j-th iteration, learnrate is the parameter learning rate, W' and β' denote the W and β obtained in the previous iteration, X_k is the k-th group of data selected in the stochastic gradient descent process, MSE_W is the loss calculated from the previous iteration's W and β, and k denotes the randomly selected k-th group of data for stochastic gradient descent.
In a further refinement of the technical scheme, the facial features comprise the face shape, eye shape, nose shape, lip shape, eyebrow shape, the three courts (the vertical thirds of the face), and the five eyes (the horizontal eye-width proportions).
Different from the prior art, the technical scheme does not follow the traditional intelligent makeup recommendation principle of "similar means suitable"; instead, through ensemble learning it uses the machine comparison of faces before and after makeup as feedback on how suitable a makeup is, which better matches how people personalize makeup recommendations in reality. The ensemble learning uses a weighted combination strategy, i.e., follows the principle that the minority obeys the majority, so the resulting learner generalizes better and applies to a wide population. A makeup recommendation rule trained by ensemble learning can account for the fact that different facial features influence the makeup differently, and is closer to the way humans recommend makeup. The results are superior to those of traditional similarity analysis, the time for manual facial analysis and makeup recommendation is saved, and the problem that users do not know which makeup suits them is solved.
Drawings
FIG. 1 is a schematic view of makeup transfer;
FIG. 2 is a diagram of the training data set;
FIG. 3 is a graph of the convergence effect tested on a small portion of the data;
FIG. 4 is a schematic illustration of ensemble learning;
FIG. 5 is a schematic view of the n samples drawn each time;
FIG. 6 is a schematic view of the k models;
FIG. 7 is a diagram illustrating the optimal number of iterations;
FIG. 8 is a diagram illustrating the optimal learning rate;
FIG. 9 is a makeup transfer result.
Detailed Description
To explain in detail the technical contents, structural features, objects, and effects of the technical solutions, a detailed description is given below with reference to the accompanying drawings and the embodiments.
The invention provides a makeup recommendation method based on ensemble learning, which comprises the following steps.
S1: acquiring a facial image and performing facial feature recognition analysis on the person in the image.
Facial feature analysis acquires the facial features of the person in the image and performs an attractiveness-score analysis based on them. The facial features can be analyzed from a single frontal face picture and comprise the face shape, eye shape, nose shape, lip shape, eyebrow shape, the three courts, and the five eyes.
The specific attribute values for each category are as follows:
1) Face shape: melon-seed face/oval face/diamond face/round face/long face/square face/normal face
2) Eye shape: round_eyes/thin_eyes/big_eyes/small_eyes/normal_eyes
3) Nose shape: normal nose/thick nose/thin nose
4) Lip shape: thin lip/thick lip/smile lip/upper lip/normal lip
5) Eyebrow shape: bushy_eyebrows/eight-character eyebrows/rain_eyebrows/straight_eyebrows/arch_eyebrows/arm_eyebrows/thin_eyebrows
6) Three courts (the upper, middle, and lower thirds of the face; the same principle applies to each): face_normal/face_long/face_short
7) Five eyes (inner-canthus spacing): eye_normal (proper inter-canthus distance)/eye_short (narrow inter-canthus distance)/eye_long (wide inter-canthus distance)
The attractiveness-score analysis (the "face value" analysis) performs face detection and face analysis based on the facial features and outputs an attractiveness score, a floating-point number in the range 0 to 100. Every image undergoes this analysis, i.e., every image has an attractiveness score.
The makeup effect is judged as follows: from the attractiveness score scoreA of the bare-face picture A and the score scoreC of the transferred picture after makeup, the attractiveness difference after makeup is δ = scoreC - scoreA; the higher the difference, the better the makeup effect.
For makeup recommendation the goal is to raise the attractiveness score as much as possible, not to predict the difference to decimal-point precision, so the difference is quantized into grades, giving the concept of an "attractiveness improvement level".
The formula dividing the "attractiveness improvement level", set according to the distribution range of the attractiveness differences observed in experiments, is:
round(δ/4.0) = round[(scoreC - scoreA)/4.0]
The final "attractiveness improvement level" takes one of the 14 values [-7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6], and a larger level indicates a better makeup effect.
The denominator 4.0 in the formula was obtained in this application: round(δ/9.0) was set at first, which yields fewer output levels, but in practical use of the trained model such a coarse division causes many makeup pictures B to obtain the same highest-level result, making further differentiation difficult; the parameter was therefore modified to subdivide the "attractiveness improvement level" further.
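As a minimal illustration (an assumed helper, not code from the patent), the quantization can be written in a few lines of Python:

```python
# Sketch of the "attractiveness improvement level" quantization round(delta / 4.0);
# score_transferred and score_bare stand for the 0-100 attractiveness scores of
# transferred picture C and bare-face picture A (names are illustrative).

def improvement_level(score_transferred: float, score_bare: float) -> int:
    """Quantize delta = scoreC - scoreA into one of the 14 levels [-7, 6]."""
    delta = score_transferred - score_bare
    level = round(delta / 4.0)      # denominator 4.0 was chosen experimentally
    return max(-7, min(6, level))   # clamp to the observed level range

# Example: improvement_level(78.3, 62.1) == round(16.2 / 4.0) == 4
```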
S2: inputting the facial feature analysis result into the learning model and finding the recommended makeup picture from the makeup set.
First, the base learner for ensemble learning is constructed and trained: the input is a set of features X relating the bare-face picture A and the makeup picture B, and the output is the "attractiveness improvement level" Y of the transferred picture C obtained by applying the makeup of B to A. Referring to FIG. 1, a schematic of makeup transfer is shown. X and Y are described in detail as follows:
the input X is a 10-dimensional vector, one attribute for each dimension:
x 0: whether the face shapes of the makeup picture B and the plain picture A are the same, wherein the same is 1, and the different is 0;
x 1: whether the eye shapes of the makeup picture B and the plain picture A are the same or not is 1, and the eye shapes are 0;
x 2: whether the nose types of the makeup chart B and the plain chart A are the same, namely 1, and 0 is different;
x 3: whether the lips of the makeup picture B and the plain picture A are the same or not is 1, and the difference is 0;
x 4: whether the eyebrow shapes of the makeup picture B and the plain picture A are the same, wherein the same is 1, and the different is 0;
x 5: whether the upper court proportions of the makeup picture B and the plain picture A belong to the same type, the same is 1, and the different is 0;
x 6: whether the proportions of the atrium in the dressing chart B and the plain chart A belong to the same type, the same is 1, and the difference is 0;
x 7: whether the proportions of the makeup chart B and the plain chart A belong to the same type, the same is 1, and the difference is 0;
x 8: whether the interocular corner distance proportions in the makeup picture B and the natural face picture A belong to the same type, the same is 1, and the different is 0;
x 9: color score of cosmetic chart B.
The output Y is a 1-dimensional vector:
y: drawing makeup in makeup picture A and B to obtain new makeup picture C, Y represents "color value promotion grade" of C compared with A "
Linear function:
Y = w0x0 + w1x1 + w2x2 + … + w9x9 + β
written in vector form as:
Y = W^T X + β
where β is the weight bias and W^T contains the weights.
What machine learning training must obtain is the weight vector W, which represents the magnitude of each feature's influence in the above equation.
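To make the base learner's input and output concrete, here is a hedged Python sketch that builds the 10-dimensional X from the attribute comparison and evaluates Y = W^T X + β; the attribute names and example values are illustrative assumptions, not data from the patent:

```python
import numpy as np

# x0-x8 are same/different indicator bits comparing bare-face picture A with
# makeup picture B; x9 is the attractiveness score of B.
FEATURES = ["face", "eye", "nose", "lip", "eyebrow",
            "upper_court", "middle_court", "lower_court", "canthus"]

def build_x(attrs_a: dict, attrs_b: dict, score_b: float) -> np.ndarray:
    same = [1.0 if attrs_a[f] == attrs_b[f] else 0.0 for f in FEATURES]
    return np.array(same + [score_b])          # shape (10,)

def predict_level(x: np.ndarray, w: np.ndarray, beta: float) -> float:
    return float(w @ x + beta)                 # Y = W^T X + beta

# Made-up attribute dictionaries for one bare-face/makeup pair:
a = {"face": "oval", "eye": "big_eyes", "nose": "normal", "lip": "thin",
     "eyebrow": "arch", "upper_court": "normal", "middle_court": "normal",
     "lower_court": "long", "canthus": "eye_normal"}
b = dict(a, face="round", lip="thick")         # differs in face and lip shape
x = build_x(a, b, score_b=81.5)                # predict_level(x, w, beta) once trained
```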
S103, data construction: constructing the data set X, Y.
X is a (1×10) vector and Y is a (1×1) vector.
Every makeup picture B in the makeup set is paired with every bare-face picture A in the bare-face set into a group. For each group: the values x0 to x9 are determined from the facial-feature attributes of pictures A and B, giving the first 10 dimensions of the data. Then makeup transfer from picture B to picture A yields the transferred picture C, and the "attractiveness improvement level" Y is calculated from the attractiveness scores of A and C and recorded as the 11th dimension.
Assuming there are n1 bare-face pictures in the bare-face set and n2 makeup pictures in the makeup set, n1 × n2 groups of data are finally obtained as the training set. Each row of the training set is an 11-dimensional vector: the first 10 dimensions represent X and the last dimension represents Y; the format is shown in FIG. 2.
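A sketch of this n1 × n2 data construction follows, reusing the hypothetical build_x and improvement_level helpers from the earlier sketches; transfer_makeup and attractiveness_score are stand-ins for the transfer and scoring components described above:

```python
import numpy as np

def build_training_set(bare_faces, makeups, transfer_makeup, attractiveness_score):
    """Pair every bare-face picture A with every makeup picture B (n1 * n2 rows)."""
    rows = []
    for a in bare_faces:                # n1 bare-face pictures
        for b in makeups:               # n2 makeup pictures
            x = build_x(a["attrs"], b["attrs"], b["score"])   # first 10 dimensions
            c = transfer_makeup(a["image"], b["image"])       # transferred picture C
            y = improvement_level(attractiveness_score(c), a["score"])
            rows.append(np.append(x, y))                      # 11-dimensional row
    return np.vstack(rows)              # shape (n1 * n2, 11), as in FIG. 2
```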
With the training set, X and Y in the linear function Y = W^T X + β are known, and training can then be performed to find W and β. W is a (1×10) vector representing the weight of the influence of each of the ten attributes x0 to x9 on the final result, and β is a (1×1) vector representing a final globally adjusted weight bias.
So that the predicted "attractiveness improvement level" f(x_i) = W^T X_i + β is as close as possible to the actual "attractiveness improvement level" Y_i, the mean square error (MSE) is used as the loss function. Assuming there are m training data in total, with i denoting the i-th training datum:
Loss = (1/m) * Σ_{i=1}^{m} (f(x_i) - Y_i)^2
The smaller the loss value, the smaller the gap between the predicted level f(x_i) and the actual level Y_i. According to the principle of gradient descent, the derivative gives the rate of change of a function at a point, moving opposite to the derivative leads to a point with a smaller value, and a derivative of 0 indicates convergence to an extremum; therefore W and β can be calculated iteratively with a gradient descent algorithm to make the loss as small as possible.
Training is carried out with a stochastic gradient descent algorithm, starting from a randomly initialized W and β. Here j denotes the j-th iteration, learnrate is the parameter learning rate to be trained, W' and β' denote the W and β obtained in the previous iteration, X_k is the k-th group of data randomly selected during stochastic gradient descent, and MSE_W is the loss calculated from the W and β of the previous iteration. The tuning formulas of stochastic gradient descent are:
W_j = W'_j - learnrate * 2 * MSE_W * X_k
β_j = β'_j - learnrate * 2 * MSE_W
One training run over the training set thus yields one set of W and β. To predict the effect of applying makeup picture B to bare-face picture A, X is obtained from the facial features of A and B; then, according to the model
Y = W^T X + β
the final predicted "attractiveness improvement level" can be calculated.
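The update rule can be read as ordinary per-sample stochastic gradient descent on the squared error: d/dW (f(x) - y)^2 = 2 * (f(x) - y) * x, so the error term f(x_k) - Y_k plays the role of MSE_W. Below is a minimal sketch under that reading; the hyperparameter defaults follow the values chosen later in the text, and all names are illustrative:

```python
import numpy as np

def train_sgd(data, learnrate=0.03, iterations=50, rng=None):
    """One base learner: fit W (10,) and beta on an (n, 11) training array."""
    rng = rng or np.random.default_rng()
    x, y = data[:, :10], data[:, 10]
    w = rng.normal(scale=0.01, size=10)      # random initial W
    beta = rng.normal(scale=0.01)            # random initial beta
    for _ in range(iterations):
        k = rng.integers(len(y))             # randomly pick the k-th sample
        err = (w @ x[k] + beta) - y[k]       # f(x_k) - Y_k
        w = w - learnrate * 2 * err * x[k]   # W_j = W'_j - learnrate * 2 * err * X_k
        beta = beta - learnrate * 2 * err    # beta_j = beta'_j - learnrate * 2 * err
    return w, beta
```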
The linear-function model has two problems:
1. Poor generalization: it fits only the training data well, so prediction on new data is poor.
2. The loss decreases slowly and is difficult to converge within a limited number of iterations; FIG. 3 shows the convergence effect tested on a small portion of the data.
A model obtained from only a single set of W and β is subject to considerable chance. To improve the reliability and accuracy of the model, this embodiment introduces the Bagging algorithm of ensemble learning; see FIG. 4 for a schematic of ensemble learning.
S201, acquiring a training set, and randomly extracting n samples from the training set;
S202, repeating step S201 k times to obtain k groups of training sets;
S203, training with one training set each time to obtain one learning model, k training sets giving k models in total;
S204, predicting with the k models obtained in the previous step to obtain k prediction results;
S205, counting by voting the result with the highest occurrence frequency, then correcting it according to the initial distribution of the attractiveness difference between the transferred picture and the bare-face picture; the makeup picture corresponding to the highest corrected value is the recommendation (a code sketch of these steps follows this list).
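Here is a compact sketch of this Bagging loop, assuming the train_sgd helper sketched above; the defaults n = 2000 and k = 250 follow the parameter choices discussed later, and the plain majority vote of step S205 is shown before the distribution correction:

```python
import numpy as np

def bagging_train(data, n=2000, k=250, rng=None):
    rng = rng or np.random.default_rng()
    models = []
    for _ in range(k):                                # S202: repeat k times
        idx = rng.integers(len(data), size=n)         # S201: draw n random samples
        models.append(train_sgd(data[idx], rng=rng))  # S203: one model per set
    return models                                     # k models in total

def bagging_predict(models, x):
    # S204-S205: round each model's output to a level, then take the majority vote
    votes = [round(float(w @ x + beta)) for w, beta in models]
    return max(set(votes), key=votes.count)
```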
The correction according to the initial distribution of attractiveness improvement levels works as follows:
S2051, recording, by direct statistics, the probability of each initial attractiveness-improvement level and denoting it ai; that is, the frequency of the level in the training set divided by the total number of results gives the initial probability of that improvement level;
S2052, counting, for the k prediction results, the votes of every category (improvement level), i.e., how often each level occurs among the predictions, and dividing each category's votes by k to obtain its probability, denoted bi;
S2053, the final prediction is the category maximizing (bi - ai) * qi / ai, with ai and bi as above, where qi is a weight set according to the initial distribution probability: the weights for levels 0 to 4 are 0, 1, 2, 4, and 8, and since data with levels of 5 or more occur relatively rarely, their weight qi is set to 100.
The calculation process is illustrated as follows. Suppose the initial 3700 samples contain 1000, 2000, 500, and 200 data with "attractiveness improvement levels" of 1, 2, 3, and 4; the initial probabilities are a1 = 1000/3700, a2 = 2000/3700, a3 = 500/3700, and a4 = 200/3700. In the ensemble-learning prediction for a bare-face picture A and a makeup picture B, suppose that among the 250 results predicted by the 250 models there are 50, 100, 50, and 50 data with levels 1, 2, 3, and 4, giving b1 = 50/250, b2 = 100/250, b3 = 50/250, and b4 = 50/250. Applying the formula max((bi - ai) * qi / ai): (b1 - a1) * q1/a1 = -0.26, (b2 - a2) * q2/a2 = -0.52, (b3 - a3) * q3/a3 = 1.92, and (b4 - a4) * q4/a4 = 21.6. The largest of the four results belongs to the group with improvement level 4, so the final predicted "attractiveness improvement level" is 4.
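The correction can be reproduced in a few lines; the sketch below (helper names assumed, and weights for negative levels, which the text leaves unspecified, treated as 0) recomputes the worked example and returns level 4:

```python
from collections import Counter

def weight(level: int) -> float:
    """qi: weights 0, 1, 2, 4, 8 for levels 0..4; 100 for levels >= 5."""
    if level >= 5:
        return 100.0
    return float({0: 0, 1: 1, 2: 2, 3: 4, 4: 8}.get(level, 0))

def corrected_prediction(train_levels, predicted_levels):
    a = {lv: c / len(train_levels) for lv, c in Counter(train_levels).items()}
    b = {lv: c / len(predicted_levels) for lv, c in Counter(predicted_levels).items()}
    scores = {lv: (p - a.get(lv, 0.0)) * weight(lv) / a.get(lv, 1e-9)
              for lv, p in b.items()}                 # (bi - ai) * qi / ai
    return max(scores, key=scores.get)

train = [1] * 1000 + [2] * 2000 + [3] * 500 + [4] * 200   # initial 3700 samples
preds = [1] * 50 + [2] * 100 + [3] * 50 + [4] * 50        # votes of 250 models
print(corrected_prediction(train, preds))                 # -> 4
```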
S3: performing makeup transfer according to the recommended makeup picture to obtain the transferred picture.
The ensemble learning involves four parameters to be tuned during training, as follows.
1: Ensemble-learning parameter n (the number of samples drawn each time).
The total number of samples is 3700, so the selection range was set to [50, 3700] with a step size of 100. Although the output MSE differs little across values of n, the minimum is basically reached around 2000, so n = 2000 was chosen. Referring to FIG. 5, a schematic of the n samples drawn each time is shown.
2: Ensemble-learning parameter k (k training sets, k training runs, k models).
No obvious relationship between k and the MSE was found in experiments. Since the final output has 14 categories, setting k to the fairly central value of 250 avoids both unreasonable voting when k is too small and reduced training efficiency when k is too large. Referring to FIG. 6, a schematic of the k models is shown.
3: Number of iterations.
Referring to FIG. 7, a diagram of the optimal number of iterations: roughly 100 iterations converge when testing on the complete training set, and the number of iterations is selected to be 50.
4: Learning rate of the stochastic gradient descent, selected from the following candidates:
[0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3]
FIG. 8 is a schematic of the optimal learning rate. The results show that among these candidates the learning rates 0.03, 0.1, and 0.3 perform well at every point, and 0.03 was finally selected.
Using the trained model and the stored index of makeup-picture information, the attractiveness difference of transferring any given makeup picture onto the input bare-face picture can be predicted quickly; the makeup with the highest predicted attractiveness difference is selected, and the makeup transfer operation is then performed.
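Putting the pieces together, a hypothetical end-to-end recommendation sketch built from the earlier helpers (all names are assumptions rather than the patent's code):

```python
def recommend(bare_face, makeup_set, models, train_levels, transfer_makeup):
    """Score every makeup picture with the bagged models plus the correction,
    then transfer the winner onto the bare-face picture."""
    best_b, best_level = None, None
    for b in makeup_set:
        x = build_x(bare_face["attrs"], b["attrs"], b["score"])
        votes = [round(float(w @ x + beta)) for w, beta in models]
        level = corrected_prediction(train_levels, votes)
        if best_level is None or level > best_level:
            best_b, best_level = b, level
    return transfer_makeup(bare_face["image"], best_b["image"]), best_b
```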
In the random-baseline experiment, three makeup faces with the same face shape were directly and randomly selected for transfer, and the average attractiveness gain over all 31 tested bare-face pictures was 1.47. In the ensemble-learning experiment, the three makeup pictures with the highest calculated attractiveness difference (the highest improvement level) were selected for transfer; as shown in the makeup transfer results of FIG. 9, the attractiveness score of bare-face picture 1 increased by 5.95 points, that of picture 2 by 6.11 points, that of picture 3 by 2.34 points, and that of picture 4 by 4.76 points. The average attractiveness gain over all 31 tested bare-face pictures was 3.58.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "include", "including", or any other variation thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device including a series of elements includes not only those elements but also other elements not explicitly listed or inherent to it. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or terminal that comprises it. Further, herein, "greater than", "less than", "more than", and the like are understood to exclude the stated number, while "above", "below", "within", and the like are understood to include it.
Although the embodiments have been described, those skilled in the art can make other variations and modifications once the basic inventive concept is known. The above embodiments are therefore only examples and do not limit the scope of the invention; all equivalent structures or equivalent processes derived from the contents of this specification and drawings, applied directly or indirectly in other related technical fields, fall within the scope of the invention.

Claims (6)

1. A makeup recommendation method based on ensemble learning, characterized by comprising the following steps:
acquiring a facial image, and performing facial feature recognition analysis on people in the image;
inputting the facial feature analysis result into a learning model, and finding a recommended makeup picture from a makeup set;
performing makeup transfer according to the recommended makeup picture to obtain a transferred picture;
the makeup recommendation proceeds as follows:
S201, acquiring a training set, and randomly extracting n samples from the training set;
S202, repeating step S201 k times to obtain k groups of training sets;
S203, training with one training set each time to obtain one learning model, k training sets giving k models in total;
S204, predicting with the k models obtained in the previous step to obtain k prediction results;
S205, counting the prediction results, wherein the most frequent prediction result determines the recommended makeup picture;
in step S205, a voting method counts the result with the highest occurrence frequency, which is then corrected according to the initial distribution of the attractiveness difference between the transferred picture and the bare-face picture, and the makeup picture corresponding to the highest corrected value is the recommended makeup picture; the correction based on the initial distribution is as follows:
S2051, recording, by direct statistics, the probability of each initial attractiveness-difference level and denoting it ai;
S2052, counting, for the k prediction results, the votes of every category, and dividing each category's votes by k to obtain its probability, denoted bi;
S2053, the makeup picture corresponding to the final prediction max((bi - ai) * qi / ai) is the recommended makeup picture, where qi is a weight calculated from the initial distribution probability.
2. The ensemble-learning-based makeup recommendation method according to claim 1, wherein the training set of the learning model is acquired as follows:
comparing makeup picture B with bare-face picture A to obtain X, transferring the makeup of picture B onto picture A to obtain transferred picture C, and obtaining the value Y from picture C; comparing and transferring the bare-face picture against all makeup pictures in the makeup set yields the training set.
3. The ensemble-learning-based makeup recommendation method according to claim 1, wherein the learning model is a linear function:
Y = W^T X + β,
where Y is the attractiveness improvement level, W is the weight vector, X encodes the comparison between a bare-face picture and a makeup picture, and β is the weight bias.
4. The ensemble-learning-based makeup recommendation method according to claim 3, wherein the learning model calculates the weight W and weight bias β from the training-set data,
with the mean square error MSE used as the loss function Loss:
Loss = (1/m) * Σ_{i=1}^{m} (f(x_i) - Y_i)^2
where m is the number of training samples, i denotes the i-th training sample, f(x_i) is the predicted attractiveness improvement level, and Y_i is the actual one; the smaller the loss value, the smaller the gap between f(x_i) and Y_i, and a gradient descent algorithm calculates W and β iteratively.
5. The ensemble-learning-based makeup recommendation method according to claim 4, wherein the tuning formulas of the gradient descent algorithm are:
W_j = W'_j - learnrate * 2 * MSE_W * X_k
β_j = β'_j - learnrate * 2 * MSE_W
where j denotes the j-th iteration, learnrate is the parameter learning rate, W' and β' denote the W and β obtained in the previous iteration, X_k is the k-th group of data selected in the stochastic gradient descent process, MSE_W is the loss calculated from the previous iteration's W and β, and k denotes the randomly selected k-th group of data for stochastic gradient descent.
6. The ensemble-learning-based makeup recommendation method according to claim 1, wherein the facial features include a face shape, an eye shape, a nose shape, a lip shape, and an eyebrow shape.
CN202010407658.XA 2020-05-14 2020-05-14 Makeup recommendation method based on ensemble learning Active CN111597972B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010407658.XA CN111597972B (en) 2020-05-14 2020-05-14 Makeup recommendation method based on ensemble learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010407658.XA CN111597972B (en) 2020-05-14 2020-05-14 Makeup recommendation method based on ensemble learning

Publications (2)

Publication Number Publication Date
CN111597972A CN111597972A (en) 2020-08-28
CN111597972B true CN111597972B (en) 2022-08-12

Family

ID=72185550

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010407658.XA Active CN111597972B (en) 2020-05-14 2020-05-14 Makeup recommendation method based on ensemble learning

Country Status (1)

Country Link
CN (1) CN111597972B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113723173A (en) * 2021-06-29 2021-11-30 厦门大学 Automatic dressing recommendation method and system
CN114418837B (en) * 2022-04-02 2023-06-13 荣耀终端有限公司 Dressing migration method and electronic equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104484681A (en) * 2014-10-24 2015-04-01 西安电子科技大学 Hyperspectral remote sensing image classification method based on space information and ensemble learning
CN106021448A (en) * 2016-05-17 2016-10-12 南阳师范学院 Method for automatically judging that Taobao shop belongs to area
CN108229415A (en) * 2018-01-17 2018-06-29 广东欧珀移动通信有限公司 Information recommendation method, device, electronic equipment and computer readable storage medium
CN108763362A (en) * 2018-05-17 2018-11-06 浙江工业大学 Method is recommended to the partial model Weighted Fusion Top-N films of selection based on random anchor point
CN109146076A (en) * 2018-08-13 2019-01-04 东软集团股份有限公司 model generating method and device, data processing method and device
CN110110118A (en) * 2017-12-27 2019-08-09 广东欧珀移动通信有限公司 Dressing recommended method, device, storage medium and mobile terminal
CN110276382A (en) * 2019-05-30 2019-09-24 平安科技(深圳)有限公司 Listener clustering method, apparatus and medium based on spectral clustering
CN110458750A (en) * 2019-05-31 2019-11-15 北京理工大学 A kind of unsupervised image Style Transfer method based on paired-associate learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110660037B (en) * 2018-06-29 2023-02-10 京东方科技集团股份有限公司 Method, apparatus, system and computer program product for face exchange between images


Also Published As

Publication number Publication date
CN111597972A (en) 2020-08-28

Similar Documents

Publication Publication Date Title
CN107480261B (en) Fine-grained face image fast retrieval method based on deep learning
Shao et al. Feature learning for image classification via multiobjective genetic programming
CN104866810B (en) A kind of face identification method of depth convolutional neural networks
CN108647583B (en) Face recognition algorithm training method based on multi-target learning
CN109063911A (en) A kind of Load aggregation body regrouping prediction method based on gating cycle unit networks
CN114841257B (en) Small sample target detection method based on self-supervision comparison constraint
CN111597972B (en) Makeup recommendation method based on ensemble learning
CN110097060B (en) Open set identification method for trunk image
CN106919951A (en) A kind of Weakly supervised bilinearity deep learning method merged with vision based on click
CN113128369B (en) Lightweight network facial expression recognition method fusing balance loss
Witten et al. Supervised multidimensional scaling for visualization, classification, and bipartite ranking
CN112115967B (en) Image increment learning method based on data protection
CN111753874A (en) Image scene classification method and system combined with semi-supervised clustering
CN109376772A (en) A kind of Combination power load forecasting method based on neural network model
Karnowski et al. Deep spatiotemporal feature learning with application to image classification
CN109886281A (en) One kind is transfinited learning machine color image recognition method based on quaternary number
CN110298434A (en) A kind of integrated deepness belief network based on fuzzy division and FUZZY WEIGHTED
CN114692732A (en) Method, system, device and storage medium for updating online label
CN112633154A (en) Method and system for converting heterogeneous face feature vectors
CN116110089A (en) Facial expression recognition method based on depth self-adaptive metric learning
CN113779283B (en) Fine-grained cross-media retrieval method with deep supervision and feature fusion
CN108509840B (en) Hyperspectral remote sensing image waveband selection method based on quantum memory optimization mechanism
CN113032613B (en) Three-dimensional model retrieval method based on interactive attention convolution neural network
CN114463812A (en) Low-resolution face recognition method based on dual-channel multi-branch fusion feature distillation
CN113420173A (en) Minority dress image retrieval method based on quadruple deep learning

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant