CN107123027B - Deep learning-based cosmetic recommendation method and system - Google Patents


Info

Publication number
CN107123027B
Authority
CN
China
Prior art keywords
detected
information
image
features
user
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710294695.2A
Other languages
Chinese (zh)
Other versions
CN107123027A (en)
Inventor
范西岸
黄运保
张雁峰
Current Assignee
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN201710294695.2A
Publication of CN107123027A
Application granted
Publication of CN107123027B
Legal status: Expired - Fee Related

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0631 Item recommendations
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Accounting & Taxation (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Finance (AREA)
  • Image Processing (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)

Abstract

The invention discloses a deep learning-based cosmetic recommendation method and system. The method comprises the following steps: acquiring a face image of a user to be detected as an image to be detected; extracting image features of the image to be detected and taking them as input of a pre-trained first neural network to obtain a feature description corresponding to the image features; and taking information to be detected, comprising the feature description and pre-acquired cosmetic information, as input of a second neural network pre-trained by a deep learning method, and obtaining the matching cosmetic information as recommended cosmetic information. Because the recommended cosmetic information obtained in the disclosed technical scheme is derived from the feature description of the face image of the user to be detected, the cosmetic properties corresponding to the recommended cosmetic information accord with the facial features of that user; that is, the recommended cosmetics are suitable for the user to be detected, so the user can effectively find cosmetics that suit them.

Description

Deep learning-based cosmetic recommendation method and system
Technical Field
The invention relates to the technical field of pattern recognition, and in particular to a deep learning-based cosmetic recommendation method and system.
Background
Nowadays, cosmetics are increasingly diversified, and the market offers products of widely varying quality; this makes selecting a suitable cosmetic product a challenge.
Cosmetic software currently popular on the market generally selects products for a user as follows: evaluations from different users after using cosmetics are collected; a new user then finds other users whose conditions (skin, age, and so on) are similar to their own, reads those users' evaluations of specific products and selects cosmetics accordingly, or discovers potentially needed products from what those similar users have purchased. However, because individual circumstances inevitably differ from person to person, cosmetics found in this way are not necessarily suitable for the user.
In summary, how to enable a user to effectively find a suitable cosmetic is a problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a deep learning-based cosmetic recommendation method and system, so that a user can effectively find a suitable cosmetic.
In order to achieve the above purpose, the invention provides the following technical scheme:
a deep learning based cosmetic recommendation method comprising:
acquiring a face image of a user to be detected as an image to be detected;
extracting image features of the image to be detected, and taking the image features as input of a pre-trained first neural network to obtain feature description corresponding to the image features;
and taking information to be detected, comprising the feature description and pre-acquired cosmetic information, as input of a second neural network pre-trained by a deep learning method, and obtaining the matching cosmetic information as recommended cosmetic information.
Preferably, the extracting the image features of the image to be detected includes:
detecting the image to be detected based on an AAM to obtain face feature points, and extracting corresponding geometric features from the image to be detected using feature-triangle areas based on the face feature points;
extracting corresponding texture features from the image to be detected using a triangle-center-point sampling Gabor texture feature extraction method based on the geometric features;
and performing feature selection on the geometric features and the texture features using a Wrapper method to obtain corresponding optimal features as the image features.
Preferably, before the feature selection is performed on both the geometric feature and the texture feature by using the Wrapper method, the method further includes:
and performing dimensionality reduction on the extracted geometric features and textural features by using PCA.
Preferably, before extracting the image features of the image to be detected, the method further includes:
and carrying out preprocessing including face image enhancement operation and normalization operation on the image to be detected.
Preferably, the acquiring of the face image of the user to be detected as the image to be detected includes:
and shooting the face part of the user to be detected through different angles to obtain a corresponding image to be detected.
Preferably, before the information to be detected is used as the input of the second neural network, the method further includes:
and acquiring the environment information of the position of the user to be detected, and adding the environment information into the information to be detected.
Preferably, before the information to be detected is used as the input of the second neural network, the method further includes:
and acquiring the personal information of the user input by the user to be detected, and adding the personal information of the user into the information to be detected.
Preferably, before the information to be detected is used as the input of the second neural network, the method further includes:
acquiring skin care advice information, and adding the skin care advice information into the information to be detected;
correspondingly, after the information to be detected is used as the input of the second neural network, the method further comprises the following steps:
and obtaining skin care suggestion information corresponding to the feature description, the user personal information and the environment information as recommended skin care suggestions.
A deep learning based cosmetic recommendation system comprising:
an acquisition module to: acquiring a face image of a user to be detected as an image to be detected;
an extraction module to: extracting image features of the image to be detected, and taking the image features as input of a pre-trained first neural network to obtain feature description corresponding to the image features;
a recommendation module to: take information to be detected, comprising the feature description and pre-acquired cosmetic information, as input of a second neural network pre-trained by a deep learning method, and obtain the cosmetic information matching the feature description as recommended cosmetic information.
Preferably, the extraction module comprises:
an extraction unit to: detect the image to be detected based on an AAM to obtain face feature points, and extract corresponding geometric features from the image to be detected using feature-triangle areas based on the face feature points; extract corresponding texture features from the image to be detected using a triangle-center-point sampling Gabor texture feature extraction method based on the geometric features; and perform feature selection on the geometric features and the texture features using a Wrapper method to obtain corresponding optimal features as the image features.
The invention provides a deep learning-based cosmetic recommendation method and system, wherein the method comprises: acquiring a face image of a user to be detected as an image to be detected; extracting image features of the image to be detected and taking them as input of a pre-trained first neural network to obtain a feature description corresponding to the image features; and taking information to be detected, comprising the feature description and pre-acquired cosmetic information, as input of a second neural network pre-trained by a deep learning method, and obtaining the matching cosmetic information as recommended cosmetic information. In this technical scheme, the face image of the user for whom cosmetics are to be recommended is obtained as the image to be detected, the feature description of that image together with the cosmetic information is used as input of the second neural network, and the cosmetic information matching the feature description is obtained as the recommended cosmetic information. Because this recommendation is derived from the feature description of the user's own face image, the properties of the recommended cosmetics accord with the user's facial characteristics; that is, the recommended cosmetics are suitable for the user to be detected, so the user can effectively find cosmetics that suit them.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a deep learning-based cosmetic recommendation method according to an embodiment of the present invention;
fig. 2 is a general block diagram of an AAM system in a deep learning-based cosmetics recommendation method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a face feature point obtained by AAM detection in a deep learning-based cosmetic recommendation method according to an embodiment of the present invention;
fig. 4 is a schematic diagram of geometric features extracted by using feature triangle areas in the deep learning-based cosmetic recommendation method according to the embodiment of the present invention;
fig. 5 is a schematic diagram of texture features extracted by a Gabor texture feature extraction method using a triangular center point in the deep learning-based cosmetic recommendation method provided by the embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating the selection of the Wrapper feature in the deep learning-based cosmetic recommendation method according to the embodiment of the present invention;
fig. 7 is a schematic structural diagram of a deep learning-based cosmetic recommendation system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart of a deep learning-based cosmetic recommendation method according to an embodiment of the present invention is shown; the method may include:
s11: and acquiring a face image of a user to be detected as an image to be detected.
The user to be detected is the user for whom cosmetics are to be recommended. The cosmetics may include skin care products and color cosmetics, and the method may be implemented for either or both. Collecting the face image of the user to be detected means obtaining an image of that user's face, on the basis of which cosmetics applicable to the user can be recommended.
S12: and extracting the image characteristics of the image to be detected, and taking the image characteristics as the input of a pre-trained first neural network to obtain the characteristic description corresponding to the image characteristics.
The image features of the image to be detected are features that can characterize the image. The first neural network may be trained in advance; specifically, the training process may include: obtaining image features from a number of different users, labeling the different image features with corresponding feature descriptions, and performing supervised training of a feedforward neural network on the labeled image features, mainly using the BP (backpropagation) algorithm, to obtain the first neural network. It should be noted that a feature description is generally textual information, usually at phrase level. In addition, the function realized by the first neural network in this embodiment may also be realized by a Multimodal CNN (multimodal convolutional neural network) model, in which an image CNN describes the image information and a matching CNN both constructs the semantics of the words in the text information and, more importantly, learns the matching relationship between image and text. Specifically, the input of the Multimodal CNN model may be a face picture after simple image preprocessing (consistent in principle with the normalization and related operations referred to later in this application), and the corresponding output is the text information for that picture, i.e. its feature description. The training process of the Multimodal CNN model follows the same principle as that of the first neural network: in brief, supervised training on pictures labeled with text information, which is not repeated here.
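A minimal numerical sketch of the supervised feedforward training described above, assuming toy data, hypothetical layer sizes and a plain squared-error BP update (the patent does not specify these details):

```python
import numpy as np

# Illustrative sketch (not the patent's actual network): a one-hidden-layer
# feedforward network trained with backpropagation (the BP algorithm) to map
# image-feature vectors to one-hot feature-description labels. All sizes,
# data and hyperparameters here are hypothetical.
rng = np.random.default_rng(0)

def train_mlp(X, Y, hidden=16, lr=1.0, epochs=300):
    """X: (n, d) image features; Y: (n, k) one-hot description labels."""
    n, d = X.shape
    k = Y.shape[1]
    W1 = rng.normal(0.0, 0.5, (d, hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, k))
    for _ in range(epochs):
        H = np.tanh(X @ W1)                  # hidden-layer activations
        P = 1.0 / (1.0 + np.exp(-(H @ W2)))  # sigmoid output layer
        dP = (P - Y) * P * (1.0 - P)         # backpropagated error signal
        dH = dP @ W2.T * (1.0 - H**2)        # error at the hidden layer
        W2 -= lr * (H.T @ dP) / n
        W1 -= lr * (X.T @ dH) / n
    return W1, W2

def predict(X, W1, W2):
    return 1.0 / (1.0 + np.exp(-(np.tanh(X @ W1) @ W2)))

# Toy training set: two clusters of "image features", two description labels.
X = np.vstack([rng.normal(-1.0, 0.3, (20, 4)), rng.normal(1.0, 0.3, (20, 4))])
Y = np.vstack([np.tile([1.0, 0.0], (20, 1)), np.tile([0.0, 1.0], (20, 1))])
W1, W2 = train_mlp(X, Y)
accuracy = float((predict(X, W1, W2).argmax(1) == Y.argmax(1)).mean())
```

Replacing the tanh/sigmoid layers and squared-error loss with other choices would not change the structure of the BP update; only the per-layer derivative terms differ.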
The feature description can be set according to actual needs; that is, the output of the first neural network may be configured as a text description or as an image signal, both of which fall within the protection scope of the present invention.
S13: taking information to be detected, comprising the feature description and pre-acquired cosmetic information, as input of a second neural network pre-trained by a deep learning method, and obtaining the matching cosmetic information as recommended cosmetic information.
With the information to be detected as input of the second neural network, the cosmetic information matching the feature description can be output as the recommended cosmetic information; because this information matches the feature description, i.e. the face image of the user to be detected, the corresponding cosmetics can be suitable for that user. The second neural network may be trained in advance; specifically, the training process may include: obtaining feature descriptions corresponding to face images of a number of different users, determining the cosmetic information matching each feature description, and training the second neural network with a deep learning method on these feature descriptions, the cosmetic information and their matching relations. The second neural network can be realized with a parallel matching architecture, one of the DCNN-based semantic matching architectures. In this architecture, the two sentences are fed into two CNN sentence models respectively to obtain their semantic representations (real-valued vectors); these two representations are then fed into a multilayer neural network that judges the degree to which the two sentence meanings match, i.e. whether the given sentences form a matching pair (such as a question-answer pair). This is the basic idea of the DCNN-based parallel semantic matching model; given a large amount of paired data, the model can be trained. Applied to the present method, the recommendation result is obtained from the matching degree between the feature description and the cosmetic information, namely the cosmetic information with the highest matching degree to the input feature description.
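The matching-and-ranking step can be sketched as follows. This is an assumption-laden toy: the `desc` and `catalog` embeddings stand in for the outputs of the two CNN sentence models, and cosine similarity stands in for the learned matching degree; all names and vectors are hypothetical:

```python
import numpy as np

# Toy stand-in for the matching step: embed the feature description and each
# candidate cosmetic record as vectors, then recommend the cosmetic with the
# highest matching degree (here, cosine similarity).
def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(desc_vec, cosmetics):
    """cosmetics: dict name -> embedding; returns the best-matching name."""
    return max(cosmetics, key=lambda name: cosine(desc_vec, cosmetics[name]))

# Hypothetical example: a "dry skin" description should match the moisturizer.
desc = np.array([0.9, 0.1, 0.0])
catalog = {
    "moisturizer": np.array([1.0, 0.0, 0.1]),
    "oil-control gel": np.array([0.0, 1.0, 0.2]),
    "sunscreen": np.array([0.2, 0.3, 1.0]),
}
best = recommend(desc, catalog)
```

In the trained system the scoring function would be the multilayer matching network rather than raw cosine similarity, but the argmax-over-candidates structure is the same.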
The cosmetic information may include, for all types of cosmetics available online and offline that meet national quality standards, all the components, the content of each component, the corresponding efficacy, the usage method and so on. A database may be established from this information, and the cosmetics may then be labeled according to the usage requirements and efficacies for different face parts, as shown in tables 1 to 4.
[Tables 1 to 4: cosmetic component, content, efficacy and usage-label data; reproduced as images in the original publication and not recoverable as text]
In this technical scheme, the face image of the user for whom cosmetics are to be recommended is obtained as the image to be detected, the feature description of that image together with the cosmetic information is used as input of the second neural network, and the cosmetic information matching the feature description is obtained as the recommended cosmetic information. Because this recommendation is derived from the feature description of the user's own face image, the properties of the recommended cosmetics accord with the user's facial characteristics; that is, the recommended cosmetics are suitable for the user to be detected, so the user can effectively find cosmetics that suit them.
In addition, the technical scheme disclosed in this application is convenient to operate and highly targeted, and can provide appropriate cosmetics for each user.
In the deep learning-based cosmetic recommendation method provided by the embodiment of the invention, extracting the image features of the image to be detected may include:
s121: and detecting the image to be detected based on AAM to obtain the human face characteristic points, and extracting the corresponding geometric features of the image to be detected based on the human face characteristic points by adopting the area of the characteristic triangle.
In this application, an Active Appearance Model (AAM) is used to locate and detect the face feature points; the overall AAM block diagram is shown in fig. 2. Its basic idea is to combine the shape and texture information of a face into a dynamic appearance model, use the PCA method to describe the motion of the shape control points that represent the feature point positions, and define an energy function from the sum of squared differences between the AAM model instance and the input image, which serves as the evaluation function of the fitting degree. During face feature point localization, an efficient fitting algorithm changes the model parameters, thereby controlling the position changes of the shape control points, minimizing the energy function, and finally locating the face feature points of the current object. The feature points detected by the AAM method are shown in fig. 3.
When extracting geometric features of the face using feature-triangle areas, the 68 triangle areas shown in fig. 4 can be normalized by the total face area and used as feature components to form a 68-dimensional area vector; the 68 triangles correspond to face parts that affect facial appearance, such as eye size, nose size, chin size and mouth size. The vertices of the 68 triangles are derived from 58 feature points, and this geometric feature is called the feature point triangle area feature, or simply the triangle area feature.
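The triangle-area feature construction can be sketched as follows; the landmark coordinates, triangle index triples and face area below are illustrative, not the patent's actual 58-point/68-triangle configuration:

```python
import numpy as np

# Sketch of the feature-triangle-area idea: each triangle is defined by three
# detected landmark indices; its area (shoelace formula) is normalized by the
# total face area to form one component of the geometric feature vector.
def triangle_area(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

def triangle_area_features(landmarks, triangles, face_area):
    """landmarks: (n, 2) array; triangles: list of (i, j, k) index triples."""
    areas = [triangle_area(landmarks[i], landmarks[j], landmarks[k])
             for i, j, k in triangles]
    return np.array(areas) / face_area   # area-normalized feature vector

# Toy landmarks: a 4 x 3 rectangle split into two triangles of area 6 each.
pts = np.array([[0, 0], [4, 0], [0, 3], [4, 3]], dtype=float)
feats = triangle_area_features(pts, [(0, 1, 2), (1, 2, 3)], face_area=12.0)
```

In the full method this would produce a 68-dimensional vector from the AAM landmarks; the normalization by total face area makes the components comparable across face sizes.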
S122: and extracting the image to be detected based on the geometric features by adopting a triangular central point sampling Gabor texture feature extraction method to obtain corresponding texture features.
In the triangle-center-point sampling Gabor texture feature extraction method, as shown in fig. 5, the center points of the 68 triangles formed by different feature points are selected as sampling points, and the Gabor texture features of the face are extracted there. This feature extraction method is called the triangle-center-point sampling Gabor texture feature extraction method, and the resulting features are called triangle-center-point sampling Gabor features, abbreviated Triangle Center Gabor.
S123: and (4) performing feature selection on the geometric features and the textural features by adopting a Wrapper method to obtain corresponding optimal features as image features.
Since some features in a feature set are irrelevant and others redundant, and such information inevitably degrades the performance of a machine learning algorithm, these redundant and irrelevant features must be removed from the feature set; this is the feature subset selection problem. The invention adopts a supervised Wrapper method, in which feature subset selection treats the induction algorithm as a black box and focuses on the interaction between the training set and the induction algorithm, as shown in fig. 6. Because eliminating feature values with the Wrapper method optimizes the evaluation measure of the classification algorithm, the Wrapper method generally performs better than filter and other methods, and can select the optimal feature subset from the feature set while taking the learning (induction) algorithm into account. Specifically, the Wrapper method consults the learning algorithm while searching for the optimal feature subset, using it to evaluate whether the subset found so far is optimal. The Wrapper feature selection algorithm mainly comprises three aspects. (1) Feature selection search: performing the search requires a state space, an initial state, a termination condition and a search engine. The search space may be organized so that each state represents a feature subset: for n features, each state has n bits, each bit indicating whether the corresponding feature is retained (1) or discarded (0). Adding or deleting a feature from a state is an operator. For n features the size of the search space is O(2^n), so searching the entire space is impractical except when n is small.
Therefore, a best-first search mechanism may be selected to search forward starting from the empty feature set, with the termination condition being five backtrackings, depending on the search mechanism. (2) Feature evaluation: since the true recognition rate after training is unknown, an accuracy estimate is used as the heuristic function and evaluation function. The accuracy may be estimated by repeating cross-validation several times for each feature, with the number of repetitions determined by the standard deviation of the accuracy estimates. (3) Learning algorithm: the learning algorithm used in the feature selection environment of the invention is a regression Support Vector Machine (SVM). The data the learning algorithm operates on are typically divided into two parts, one for training and one for testing. During training, different features are discarded from the feature set in turn, the prediction accuracy of the regression SVM is computed each time, and the feature subset with the highest prediction accuracy is selected as the optimal subset; the selected optimal subset is then evaluated on separate test data. Correspondingly, in the present method, the obtained geometric and texture features undergo feature selection with the Wrapper method, finally yielding the optimal feature subset containing the optimal features, which serves as the image features and makes the recommendation result more accurate.
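A compact sketch of Wrapper-style forward selection following the search idea above. As an assumption, a leave-one-out nearest-centroid classifier replaces the regression SVM so the example stays self-contained, and plain greedy forward search stands in for best-first search with backtracking; the toy data has one informative feature and three noise features:

```python
import numpy as np

# Wrapper-style forward selection: starting from the empty feature set, each
# step adds the feature whose inclusion maximizes the evaluation accuracy of
# the induction algorithm (here a leave-one-out nearest-centroid classifier,
# standing in for the regression SVM of the patent).
def loo_accuracy(X, y):
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        cents = {c: X[mask][y[mask] == c].mean(axis=0) for c in set(y[mask])}
        pred = min(cents, key=lambda c: np.linalg.norm(X[i] - cents[c]))
        correct += pred == y[i]
    return correct / len(y)

def forward_select(X, y):
    chosen, best_acc = [], 0.0
    while True:
        candidates = [f for f in range(X.shape[1]) if f not in chosen]
        if not candidates:
            break
        acc, f = max((loo_accuracy(X[:, chosen + [f]], y), f)
                     for f in candidates)
        if acc <= best_acc:
            break                   # no candidate improves the evaluation
        chosen.append(f)
        best_acc = acc
    return chosen, best_acc

rng = np.random.default_rng(1)
y = np.array([0] * 15 + [1] * 15)
informative = y[:, None] * 2.0 + rng.normal(0, 0.3, (30, 1))  # feature 0
noise = rng.normal(0, 1.0, (30, 3))                           # features 1-3
X = np.hstack([informative, noise])
chosen, acc = forward_select(X, y)
```

The informative feature should be picked up while the noise features are left out, which is exactly the redundant/irrelevant-feature removal the Wrapper method targets.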
In the technical scheme disclosed in this application, the AAM is adopted to locate and detect the face feature points; by using the energy function optimization idea of the deformable model, not only the face shape is considered but the face appearance is also introduced to extend the representational capability of the model, enabling more accurate localization of the face feature points. Image features are obtained with the Wrapper method, which searches, with high classification accuracy as the index, for the feature quantities achieving the highest classification accuracy, in particular the optimal feature subset for a specific algorithm, thereby ensuring the accuracy of the recommendation.
It should be noted that the features used for characterizing the face texture in this application are Gabor features. Viewed in the spatial domain, a Gabor filter can be considered a sinusoidal plane wave modulated by a Gaussian function. The simplified two-dimensional Gabor filter is defined as:
ψ(x, y, ω, θ) = (1/(2πσ²)) exp(−(x′² + y′²)/(2σ²)) exp(jωx′)
x′ = x cos θ + y sin θ,  y′ = −x sin θ + y cos θ
where (x, y) are the spatial-domain pixel coordinates, ω (ω = 2πf) is the radial center frequency (scale), θ is the direction of the Gabor filter (i.e. the direction of the sinusoidal plane wave), and σ is the standard deviation of the Gaussian function along the x-axis and y-axis. The relation between σ and the frequency ω can be expressed as:
bandwidth = log₂((σω + √(2 ln 2)) / (σω − √(2 ln 2)))
σ = (√(2 ln 2) / ω) · (2^bandwidth + 1) / (2^bandwidth − 1)
the bandwidth is the bandwidth of the octave (the bandwidth in octaves), and is generally 1-1.5.
The Gabor feature of the image I(x, y) is the convolution of I(x, y) with the Gabor filter ψ(x, y, ω, θ):
O_{m,n}(x, y) = I(x, y) * ψ(x, y, ω, θ)
where * denotes the convolution operator. The convolution output of the image is complex-valued; the real part, the imaginary part or the modulus may be taken as the feature vector, and in this application the modulus is taken as the extracted Gabor texture feature.
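Gabor texture extraction at one sampling point can be sketched as below, using the simplified Gaussian-modulated complex kernel with assumed parameter values; taking the modulus of the complex response matches the feature choice stated above:

```python
import numpy as np

# Sketch: build a complex Gabor kernel, correlate it with an image patch
# centered at a sampling point, and keep the modulus as the texture feature.
# Parameter values (omega, theta, sigma, patch size) are illustrative.
def gabor_kernel(size, omega, theta, sigma):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return gauss * np.exp(1j * omega * xr)   # complex sinusoidal carrier

def gabor_feature(patch, omega, theta, sigma):
    k = gabor_kernel(patch.shape[0], omega, theta, sigma)
    return abs(np.sum(patch * np.conj(k)))   # modulus of the filter response

rng = np.random.default_rng(2)
patch = rng.random((15, 15))                 # toy gray-level patch
feat = gabor_feature(patch, omega=np.pi / 2, theta=0.0, sigma=2.0)
```

In the full method, a bank of such responses over several scales ω and orientations θ would be computed at each of the 68 triangle-center sampling points.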
The extracted features characterize the face: the skin condition, the three-dimensional structure of the face, whether the skin is smooth and evenly colored, and whether the face has wrinkles, scars, tumors, pigmented spots and the like are all important features of facial skin. A face does not exist as points, lines or planes but as a three-dimensional solid; information such as how deep-set the eyes are, the length, width and height of the nose, and the degree of cheek hollowing affects the three-dimensional appearance of the face. Features capable of expressing three-dimensional shape information and skin condition information are therefore required. The appearance features can express this information: they capture the three-dimensional information reflected by the skin condition, color depth changes and light-dark contrast of the face, so that the cosmetics selected are suitable for the user to be detected.
In addition, the collected image features can be used to build a face database belonging to the user, which is stored on the terminal and uploaded to the cloud so that it can be retrieved at any time.
The deep learning-based cosmetic recommendation method provided by the embodiment of the invention may further comprise, before feature selection is performed on the geometric features and texture features by the Wrapper method:

performing dimensionality reduction on the extracted geometric features and texture features by using PCA.
Since the obtained geometric and texture features are likely to be of high dimensionality, Principal Component Analysis (PCA) is used for dimensionality reduction, which makes the subsequent processing simpler and more efficient.
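The PCA step can be sketched with plain NumPy via the singular value decomposition of the centered feature matrix. This is an illustrative sketch; the number of retained components is an assumption, since the patent does not fix it.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of X (samples x features) onto the top principal components."""
    Xc = X - X.mean(axis=0)                     # center each feature dimension
    # SVD of the centered data: rows of Vt are the principal axes,
    # ordered by decreasing explained variance
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]              # top-k directions
    return Xc @ components.T                    # reduced-dimension features
```

Here the combined geometric-plus-texture vector of each face image would be a row of `X`, and the reduced matrix is what the Wrapper feature selection then operates on.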
The deep learning-based cosmetic recommendation method provided by the embodiment of the invention may further comprise, before extracting the image features of the image to be detected:

preprocessing the image to be detected, including a face image enhancement operation and a normalization operation.
After the image to be detected is acquired, noise is often present owing to differences in the acquisition environment, such as illumination brightness and device performance. To ensure the accuracy of face detection, preprocessing is needed to remove irrelevant information from the image, filter out interference and noise, recover useful information, enhance the detectability of the relevant information, and simplify the data as far as possible, thereby improving the reliability of the extracted features. The image preprocessing of the invention mainly comprises a face image enhancement operation (i.e., denoising) and normalization operations (such as illumination normalization, size and scale normalization, and rotation normalization). Image enhancement techniques fall into spatial-domain methods and frequency-domain methods: spatial-domain methods process the image pixels directly to remove or attenuate noise, with mean filtering and median filtering as representative algorithms; frequency-domain methods first transform the image into the frequency domain, operate on each spectral component, and obtain the result by the inverse transform. The purpose of normalization is to obtain standardized face images of consistent size and identical gray-value range, to facilitate subsequent processing.
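A minimal sketch of the spatial-domain enhancement (median filtering) and gray-level normalization described above, using NumPy; the window size and the min-max normalization range are illustrative choices, not prescribed by the patent.

```python
import numpy as np

def median_filter(img, k=3):
    """Spatial-domain denoising: replace each pixel by the median of its k x k neighborhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')      # replicate borders so output keeps the input size
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return np.median(windows, axis=(-2, -1))

def gray_normalize(img):
    """Gray-level normalization: rescale intensities to the [0, 1] range."""
    lo, hi = img.min(), img.max()
    if hi <= lo:                                # flat image: nothing to stretch
        return np.zeros_like(img, dtype=float)
    return (img - lo) / (hi - lo)
```

Size-scale and rotation normalization would additionally warp the detected face region to a canonical geometry before feature extraction.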
In the deep learning-based cosmetic recommendation method provided by the embodiment of the invention, acquiring the face image of the user to be detected as the image to be detected may comprise:

photographing the face of the user to be detected from different angles to obtain corresponding images to be detected.

The obtained images to be detected can include a front view, side views, a lowered-head view, and the like of the user to be detected. To ensure that the images are accurate and reliable, the forehead and facial features of the user should be exposed during capture, without any decorative objects, thereby ensuring the accuracy of the cosmetic recommendation.
Before the information to be detected is input to the second neural network, the method for recommending cosmetics based on deep learning provided by the embodiment of the invention may further include:
acquiring environment information of the location of the user to be detected, and adding the environment information to the information to be detected.
The external environment information corresponding to the location of the user to be detected, such as the season and the weather (including temperature, humidity, wind direction, wind force, solar irradiation intensity, ultraviolet intensity, and the like), can be acquired automatically from the cloud or from the current weather station, and a database can be built so that this information serves as part of the information to be detected. The output recommended cosmetic information then also corresponds to the environment information, further ensuring its applicability. Correspondingly, when the second neural network is trained, different environment information and its correspondence with cosmetic information need to be added to the training data.
Before the information to be detected is input to the second neural network, the method for recommending cosmetics based on deep learning provided by the embodiment of the invention may further include:
acquiring the user personal information input by the user to be detected, and adding the user personal information to the information to be detected.
The user personal information can include age, gender, personal preferences, allergy history (to avoid allergic reactions to certain cosmetic ingredients), self-assessed skin state (oily/dry), occasions to be attended, personal income, price requirements for skin care cosmetics, and the like, and a database can be built so that this information serves as part of the information to be detected. The output recommended cosmetic information then also corresponds to the user personal information, so the recommended cosmetics better meet the user's needs and the method is more personalized. Correspondingly, when the second neural network is trained, different user personal information and its correspondence with cosmetic information need to be added to the training data.
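One plausible way to fold environment and personal information into the "information to be detected" is to encode them as a numeric vector and concatenate it with the feature description. The sketch below is purely illustrative: the field names (`season`, `uv_index`, `skin_state`, `price_ceiling`, etc.) and the normalization constants are hypothetical, not taken from the patent.

```python
import numpy as np

SEASONS = ["spring", "summer", "autumn", "winter"]
SKIN_STATES = ["oily", "dry", "neutral"]

def encode_context(env, user):
    """Encode environment and user personal information as a fixed-length numeric
    vector, suitable for concatenation with the feature description before being
    fed to the second neural network. All field names are illustrative."""
    season_onehot = [1.0 if env["season"] == s else 0.0 for s in SEASONS]
    skin_onehot = [1.0 if user["skin_state"] == s else 0.0 for s in SKIN_STATES]
    scalars = [
        env["temperature"] / 40.0,      # rough rescaling toward [0, 1]
        env["humidity"] / 100.0,
        env["uv_index"] / 11.0,         # UV index scale tops out around 11+
        user["age"] / 100.0,
        user["price_ceiling"] / 1000.0,
    ]
    return np.array(season_onehot + skin_onehot + scalars, dtype=np.float32)
```

The same encoding would be applied when assembling the training data, so that the second network sees identically structured inputs at train and inference time.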
Before the information to be detected is input to the second neural network, the method for recommending cosmetics based on deep learning provided by the embodiment of the invention may further include:
acquiring skin care advice information, and adding the skin care advice information into the information to be detected;
correspondingly, after the information to be detected is used as the input of the second neural network, the method further comprises the following steps:
obtaining skin care advice information corresponding to the feature description, the user personal information, and the environment information as the recommended skin care advice.
The skin care advice information can include the skin care measures to be taken for different skin types in different periods and at different times of day, including sun protection and clothing measures, eating habits, massage or exercise behaviors, and the like, and a database can be built so that this information serves as part of the information to be detected. The second neural network then also outputs recommended skin care advice, which facilitates the user's daily skin care. Correspondingly, when the second neural network is trained, different skin care advice information and its correspondence with the feature description, the user personal information, and the environment information need to be added to the training data.
Therefore, the technical solution disclosed in the application can take over the duties of a beautician, so that users can obtain professional skin care advice without leaving home. Moreover, the cosmetics and skin care advice recommended by the application are tailored to each user, giving the method strong pertinence and high applicability. In addition, various cosmetic information, environment information, and skin care advice information can be loaded automatically from the cloud; the user's personal information is entered on first use, and on each use a real-time picture of the user is taken and the user is asked to enter the occasion to be attended and similar requirements. A deep learning method is then used to train on the large amount of collected data, providing the user with a comprehensive, objective, efficient, and scientific basis for decision-making.
Parts of the technical solutions provided by the embodiments of the present invention that are consistent with the principles of the corresponding technical solutions in the prior art are not described in detail, so as to avoid redundant description.
An embodiment of the present invention further provides a deep learning-based cosmetic recommendation system, as shown in fig. 7, which may include:
an acquisition module 11, configured to acquire a face image of a user to be detected as an image to be detected;

an extraction module 12, configured to extract image features of the image to be detected and use the image features as input of a pre-trained first neural network to obtain a feature description corresponding to the image features;

a recommendation module 13, configured to use information to be detected, comprising the feature description and pre-acquired cosmetic information, as input of a second neural network pre-trained by a deep learning method, and obtain cosmetic information matched with the feature description as recommended cosmetic information.
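The two-stage flow realized by these modules can be sketched as follows, with small fully connected networks standing in for the pre-trained first and second neural networks. The layer sizes, random weights, and the 12-dimensional auxiliary vector are placeholders; the sketch only illustrates how the feature description produced by the first network is concatenated with auxiliary information before entering the second network.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, weights):
    """Forward pass through a small fully connected network (ReLU hidden layers)."""
    for W, b in weights[:-1]:
        x = np.maximum(0.0, x @ W + b)          # hidden layers with ReLU
    W, b = weights[-1]
    return x @ W + b                            # linear output layer

def make_weights(sizes):
    """Random placeholder weights; a real system would load trained parameters."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n)) for m, n in zip(sizes, sizes[1:])]

# first network: image features -> feature description
first_net = make_weights([64, 32, 8])
# second network: feature description + auxiliary info -> scores over cosmetics
second_net = make_weights([8 + 12, 32, 5])

image_features = rng.random(64)                 # stand-in for extracted image features
description = mlp_forward(image_features, first_net)
aux = rng.random(12)                            # stand-in for environment + personal info
scores = mlp_forward(np.concatenate([description, aux]), second_net)
recommended = int(np.argmax(scores))            # index of the recommended cosmetic
```

The key design point is that the second network's input dimension is the feature-description width plus the auxiliary-information width, so any extra context (environment, personal data, skin care advice) widens only that input layer.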
In the deep learning-based cosmetic recommendation system provided by the embodiment of the invention, the extraction module may include:
an extraction unit, configured to: detect the image to be detected based on AAM to obtain face feature points, and extract the image to be detected based on the face feature points by using the area of a feature triangle to obtain corresponding geometric features; extract the image to be detected based on the geometric features by using the triangle-center-point-sampling Gabor texture feature extraction method to obtain corresponding texture features; and perform feature selection on the geometric features and texture features by the Wrapper method to obtain the corresponding optimal features as the image features.
The cosmetics recommendation system based on deep learning provided by the embodiment of the invention can further comprise:
a dimension reduction module, configured to: before feature selection is performed on the geometric features and texture features by the Wrapper method, perform dimensionality reduction on the extracted geometric features and texture features by using PCA.
The cosmetics recommendation system based on deep learning provided by the embodiment of the invention can further comprise:
a preprocessing module, configured to: before the image features of the image to be detected are extracted, preprocess the image to be detected, including a face image enhancement operation and a normalization operation.
In the deep learning-based cosmetic recommendation system provided by the embodiment of the invention, the acquisition module may include:
an acquisition unit, configured to: photograph the face of the user to be detected from different angles to obtain corresponding images to be detected.
The cosmetics recommendation system based on deep learning provided by the embodiment of the invention can further comprise:
a first obtaining module, configured to: before the information to be detected is used as input of the second neural network, acquire the environment information of the location of the user to be detected, and add the environment information to the information to be detected.
The cosmetics recommendation system based on deep learning provided by the embodiment of the invention can further comprise:
a second obtaining module, configured to: before the information to be detected is used as input of the second neural network, acquire the user personal information input by the user to be detected, and add the user personal information to the information to be detected.
The cosmetics recommendation system based on deep learning provided by the embodiment of the invention can further comprise:
a third obtaining module, configured to: before the information to be detected is used as input of the second neural network, obtain skin care advice information and add the skin care advice information to the information to be detected;

correspondingly, the system further comprises:

a suggestion module, configured to: after the information to be detected is used as input of the second neural network, obtain skin care advice information corresponding to the feature description, the user personal information, and the environment information as the recommended skin care advice.
For a description of a relevant part in the deep learning-based cosmetic recommendation system according to the embodiment of the present invention, reference is made to detailed descriptions of a corresponding part in the deep learning-based cosmetic recommendation method according to the embodiment of the present invention, and details are not repeated herein.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. A deep learning-based cosmetic recommendation method is characterized by comprising the following steps:
acquiring a face image of a user to be detected as an image to be detected;
extracting image features of the image to be detected, and taking the image features as input of a pre-trained first neural network to obtain feature description corresponding to the image features;
using information to be detected including the feature description and the pre-acquired cosmetic information as input of a second neural network pre-trained by using a deep learning method to obtain corresponding cosmetic information as recommended cosmetic information;
before the information to be detected is used as the input of the second neural network, the method further comprises the following steps:
acquiring environment information of the position of the user to be detected, and adding the environment information into the information to be detected;
the method for acquiring the face image of the user to be detected as the image to be detected comprises the following steps:
shooting the face part of the user to be detected through different angles to obtain a corresponding image to be detected;
before the information to be detected is used as the input of the second neural network, the method further comprises the following steps:
acquiring user personal information input by the user to be detected, and adding the user personal information into the information to be detected;
before the information to be detected is used as the input of the second neural network, the method further comprises the following steps:
acquiring skin care advice information, and adding the skin care advice information into the information to be detected;
correspondingly, after the information to be detected is used as the input of the second neural network, the method further comprises the following steps:
obtaining skin care advice information corresponding to the feature description, the user personal information and the environment information as the recommended skin care advice.
2. The method of claim 1, wherein extracting image features of the image to be detected comprises:
detecting the image to be detected based on AAM to obtain human face characteristic points, and extracting the image to be detected based on the human face characteristic points by adopting a characteristic triangle area to obtain corresponding geometric characteristics;
extracting the image to be detected by adopting a triangular central point sampling Gabor texture feature extraction method based on the geometric features to obtain corresponding texture features;
performing feature selection on both the geometric features and the texture features by adopting a Wrapper method to obtain corresponding optimal features as the image features.
3. The method of claim 2, wherein before performing feature selection on both the geometric feature and the texture feature by using a Wrapper method, the method further comprises:
performing dimensionality reduction on the extracted geometric features and texture features by using PCA.
4. The method according to claim 2, wherein before extracting the image features of the image to be detected, the method further comprises:
carrying out preprocessing, including a face image enhancement operation and a normalization operation, on the image to be detected.
5. A deep learning based cosmetic recommendation system, comprising:
an acquisition module to: acquiring a face image of a user to be detected as an image to be detected;
an extraction module to: extracting image features of the image to be detected, and taking the image features as input of a pre-trained first neural network to obtain feature description corresponding to the image features;
a recommendation module to: using information to be detected including the feature description and the pre-acquired cosmetic information as input of a second neural network pre-trained by using a deep learning method, and obtaining the cosmetic information matched with the feature description as recommended cosmetic information;
the system further comprises:
a first obtaining module, configured to: before the information to be detected is used as input of the second neural network, acquire the environment information of the location of the user to be detected, and add the environment information to the information to be detected;
the collection module includes:
a collection unit for: shooting the face part of the user to be detected through different angles to obtain a corresponding image to be detected;
the system further comprises:
a second obtaining module, configured to: before the information to be detected is used as input of the second neural network, acquire the user personal information input by the user to be detected, and add the user personal information to the information to be detected;

a third obtaining module, configured to: before the information to be detected is used as input of the second neural network, obtain skin care advice information and add the skin care advice information to the information to be detected;
a suggestion module, configured to: obtain skin care advice information corresponding to the feature description, the user personal information and the environment information as the recommended skin care advice.
6. The system of claim 5, wherein the extraction module comprises:
an extraction unit for: detecting the image to be detected based on AAM to obtain human face characteristic points, and extracting the image to be detected based on the human face characteristic points by adopting a characteristic triangle area to obtain corresponding geometric characteristics; extracting the image to be detected by adopting a triangular central point sampling Gabor texture feature extraction method based on the geometric features to obtain corresponding texture features; and performing feature selection on the geometric features and the textural features by adopting a Wrapper method to obtain corresponding optimal features as image features.
CN201710294695.2A 2017-04-28 2017-04-28 Deep learning-based cosmetic recommendation method and system Expired - Fee Related CN107123027B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710294695.2A CN107123027B (en) 2017-04-28 2017-04-28 Deep learning-based cosmetic recommendation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710294695.2A CN107123027B (en) 2017-04-28 2017-04-28 Deep learning-based cosmetic recommendation method and system

Publications (2)

Publication Number Publication Date
CN107123027A CN107123027A (en) 2017-09-01
CN107123027B true CN107123027B (en) 2021-06-01

Family

ID=59725947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710294695.2A Expired - Fee Related CN107123027B (en) 2017-04-28 2017-04-28 Deep learning-based cosmetic recommendation method and system

Country Status (1)

Country Link
CN (1) CN107123027B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107944093A (en) * 2017-11-02 2018-04-20 广东数相智能科技有限公司 A kind of lipstick color matching system of selection, electronic equipment and storage medium
US11157985B2 (en) * 2017-11-29 2021-10-26 Ditto Technologies, Inc. Recommendation system, method and computer program product based on a user's physical features
CN108399619B (en) * 2017-12-22 2021-12-24 联想(北京)有限公司 System and device for medical diagnosis
WO2019136354A1 (en) * 2018-01-05 2019-07-11 L'oreal Machine-implemented facial health and beauty assistant
CN108229415B (en) * 2018-01-17 2020-12-22 Oppo广东移动通信有限公司 Information recommendation method and device, electronic equipment and computer-readable storage medium
CN109410313B (en) * 2018-02-28 2023-03-24 南京恩瑞特实业有限公司 Meteorological three-dimensional information 3D simulation inversion method
US11010636B2 (en) * 2018-10-25 2021-05-18 L'oreal Systems and methods for providing personalized product recommendations using deep learning
CN109784281A (en) * 2019-01-18 2019-05-21 深圳壹账通智能科技有限公司 Products Show method, apparatus and computer equipment based on face characteristic
CN110033344A (en) * 2019-03-06 2019-07-19 百度在线网络技术(北京)有限公司 Skin care item recommended method, device and storage medium neural network based
CN110245590B (en) * 2019-05-29 2023-04-28 广东技术师范大学 Product recommendation method and system based on skin image detection
CN110399560A (en) * 2019-07-30 2019-11-01 厦门美图之家科技有限公司 Skin care information recommendation method, device, equipment and storage medium
CN112396573A (en) * 2019-07-30 2021-02-23 纵横在线(广州)网络科技有限公司 Facial skin analysis method and system based on image recognition
CN111064766A (en) * 2019-10-24 2020-04-24 青岛海尔科技有限公司 Information pushing method and device based on Internet of things operating system and storage medium
CN111414554B (en) * 2020-03-26 2023-08-22 透明生活(武汉)信息科技有限公司 Commodity recommendation method, commodity recommendation system, server and storage medium
CN113222712A (en) * 2021-05-31 2021-08-06 中国银行股份有限公司 Product recommendation method and device
CN113538114B (en) * 2021-09-13 2022-03-04 东莞市疾病预防控制中心 Mask recommendation platform and method based on small programs
CN117197541B (en) * 2023-08-17 2024-04-30 广州兴趣岛信息科技有限公司 User classification method and system based on convolutional neural network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105160312A (en) * 2015-08-27 2015-12-16 南京信息工程大学 Recommendation method for star face make up based on facial similarity match
CN105455522A (en) * 2015-11-30 2016-04-06 深圳市欧蒙设计有限公司 Intelligent cosmetic mirror
CN106446782A (en) * 2016-08-29 2017-02-22 北京小米移动软件有限公司 Image identification method and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120011040A1 (en) * 2010-07-06 2012-01-12 Beydler Michael L Pre-bankruptcy pattern and transaction detection and recovery apparatus and method
KR101795601B1 (en) * 2011-08-11 2017-11-08 삼성전자주식회사 Apparatus and method for processing image, and computer-readable storage medium
CN103544506B (en) * 2013-10-12 2017-08-08 Tcl集团股份有限公司 A kind of image classification method and device based on convolutional neural networks
CN104217225B (en) * 2014-09-02 2018-04-24 中国科学院自动化研究所 A kind of sensation target detection and mask method
CN105183841B (en) * 2015-09-06 2019-03-26 南京游族信息技术有限公司 The recommended method of frequent item set and deep learning is combined under big data environment
CN106529394B (en) * 2016-09-19 2019-07-19 广东工业大学 A kind of indoor scene object identifies simultaneously and modeling method
CN106568783B (en) * 2016-11-08 2019-12-03 广东工业大学 A kind of hardware defect detecting system and method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Feature Analysis and Machine Learning of Facial Beauty Attractiveness; Mao Huiyun; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2011-12-15; main text p. 38 para. 2, p. 49 paras. 1-2, p. 59 para. 1, p. 75 para. 2 to p. 76 paras. 1-2, p. 82 para. 1, p. 111 para. 2 *

Also Published As

Publication number Publication date
CN107123027A (en) 2017-09-01

Similar Documents

Publication Publication Date Title
CN107123027B (en) Deep learning-based cosmetic recommendation method and system
Liang et al. SCUT-FBP5500: A diverse benchmark dataset for multi-paradigm facial beauty prediction
Zhao et al. Towards age-invariant face recognition
Sharma et al. Local higher-order statistics (LHS) for texture categorization and facial analysis
Lu et al. Set-to-set distance-based spectral–spatial classification of hyperspectral images
Vu et al. Illumination-robust face recognition using retina modeling
González-Hernández et al. Recognition of learning-centered emotions using a convolutional neural network
Prasad et al. Medicinal plant leaf information extraction using deep features
CN109241890B (en) Face image correction method, apparatus and storage medium
Zhang et al. IL-GAN: Illumination-invariant representation learning for single sample face recognition
Hsieh et al. A novel anti-spoofing solution for iris recognition toward cosmetic contact lens attack using spectral ICA analysis
Jang et al. Analysis of deep features for image aesthetic assessment
CN110008912B (en) Social platform matching method and system based on plant identification
Suárez et al. Cross-spectral image patch similarity using convolutional neural network
Choudhary et al. Feature extraction and feature selection for emotion recognition using facial expression
Guo et al. Multifeature extracting CNN with concatenation for image denoising
Kumari et al. An optimal feature enriched region of interest (ROI) extraction for periocular biometric system
Chin et al. Facial skin image classification system using Convolutional Neural Networks deep learning algorithm
Ruiz-Garcia et al. Deep learning for illumination invariant facial expression recognition
Zhang et al. Better freehand sketch synthesis for sketch-based image retrieval: Beyond image edges
Maheshwari et al. Performance Analysis of Mango Leaf Disease using Machine Learning Technique
Travieso et al. Using a Discrete Hidden Markov Model Kernel for lip-based biometric identification
Hoang et al. Eyebrow deserves attention: Upper periocular biometrics
Boutin et al. Diffusion models as artists: are we closing the gap between humans and machines?
Dixit et al. Multi-feature based automatic facial expression recognition using deep convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210601