CN112084983A - ResNet-based hair style recommendation method and application thereof - Google Patents


Info

Publication number
CN112084983A
CN112084983A (application CN202010966106.2A)
Authority
CN
China
Prior art keywords
hair style
face
court
data
preparation operation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010966106.2A
Other languages
Chinese (zh)
Other versions
CN112084983B (en)
Inventor
唐杰
肖鸿昭
宋弘健
帖千枫
黄翊琳
薛又天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202010966106.2A priority Critical patent/CN112084983B/en
Publication of CN112084983A publication Critical patent/CN112084983A/en
Application granted granted Critical
Publication of CN112084983B publication Critical patent/CN112084983B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Abstract

The invention discloses a ResNet-based hair style recommendation method and application thereof, wherein the method comprises the following steps: preparing a face data set; extracting face key points from the face data set by using the Dlib library; dividing each face into upper court, middle court and lower court data sets according to the eyebrow peak and nose tip points; modifying image file names for label annotation; adjusting the picture size of the three data sets; establishing a mapping table of facial features and hair style attributes; training and saving a model for each of the three courts by adopting a ResNet neural network; performing facial feature analysis on a face and giving recommended hair style attributes according to the mapping table; establishing a hair style library, annotating each hair style with its attributes, and retrieving a recommended hair style number from the library; and replacing the user's actual hair style with the recommended one using augmented reality technology to preview the hair style. The invention provides users with an intelligent hair style recommendation service and realizes real-time effect preview using augmented reality technology.

Description

ResNet-based hair style recommendation method and application thereof
Technical Field
The invention relates to the technical field of artificial intelligence and hair style design, in particular to a ResNet-based hair style recommendation method and application thereof.
Background
In the traditional hairdressing industry, a user who wants a new hair style can often only roughly describe the desired style to a stylist with reference pictures, or rely on the stylist's suggestions; this is inefficient, and the final result does not always meet the user's expectations. On the other hand, an ordinary user usually lacks deep knowledge of hairdressing, cannot accurately describe the hair style he or she has in mind, and bears the corresponding risk. Previewing a personalized look in real time through application software and freely choosing a suitable hair style is convenient and fast, avoids the risk of irreversibly damaging the existing hair style, and offers a better experience to the large group of users who want personalized styling, so it is of great practical significance.
On the technical side, some existing mobile phone applications, such as beauty cameras, can display facial make-up effects in real time through Augmented Reality (AR) technology and are mature in effect preview and make-up selection, but the displayed images are often unrealistic and the angles inaccurate. For hair style replacement, applications on the market such as hair style cameras generally provide only a simple planar hair style swap based on a cartoonized picture of the user's face; they cannot show the overall effect of a hair style, are unfriendly to users who lack hair-styling knowledge, and offer no personalized hair style recommendation.
Disclosure of Invention
In order to overcome the defects and shortcomings of the prior art, the invention provides a ResNet-based hair style recommendation method, which offers users a ResNet-based hair style recommendation service and displays the hair-trial effect in real time in the form of augmented reality.
The second purpose of the invention is to provide a ResNet-based hair style recommendation device.
The third purpose of the invention is to provide a ResNet-based hair style recommendation system.
a fourth object of the present invention is to provide a storage medium.
It is a fifth object of the invention to provide a computing device.
In order to achieve the purpose, the invention adopts the following technical scheme:
a ResNet-based hair style recommendation method comprises the following steps:
constructing a human face data set for facial feature prediction, and sequentially performing different data preparation operations, wherein the data preparation operations comprise a first data preparation operation, a second data preparation operation, a third data preparation operation, a fourth data preparation operation, a fifth data preparation operation and a sixth data preparation operation;
the first data preparation operation performs face detection on a face picture and crops an image containing the forehead region; the second data preparation operation adjusts the images so that each picture contains only a single face; the third data preparation operation extracts face key points from the face data set by using the Dlib library; the fourth data preparation operation divides the face data set into three face data sets of upper court, middle court and lower court according to the extracted face key points; the fifth data preparation operation applies the corresponding label annotation to the divided data; and the sixth data preparation operation adjusts the pictures of the data sets to the same size;
preprocessing the three face data sets of upper court, middle court and lower court, wherein the preprocessing operation expands the data sets based on data augmentation;
collecting the correspondence between facial features and hair style attributes, establishing a mapping table of facial features and hair style attributes, and dividing the facial features into five indexes: upper court length, upper court width, middle court length, eye spacing and face shape;
training with a ResNet neural network on the three face data sets of upper court, middle court and lower court to respectively obtain a model for predicting the upper court length and upper court width, a model for predicting the middle court length and eye spacing, and a model for predicting the face shape;
classifying and predicting the facial features of the human face according to the model obtained by training, and giving a result of hair style recommendation attributes according to the facial features and a hair style attribute mapping table;
establishing a hair style library, marking hair style attributes for each hair style and numbering;
according to the result of model recommendation, searching in a hair style library to obtain a recommended hair style number;
and replacing the actual hair style of the user by using an augmented reality technology according to the recommended hair style to preview the hair style.
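The mapping-table lookup and hair style retrieval steps above can be sketched in Python. The attribute names, table entries and style numbers below are illustrative assumptions (the patent's actual mapping, Table 1, is published only as an image):

```python
# Facial features -> recommended hair style attributes (hypothetical entries,
# standing in for the patent's Table 1).
FEATURE_TO_STYLE_ATTRS = {
    # (upper length, upper width, middle length, eye spacing, face shape)
    ("long", "wide", "long", "narrow", "oval"): {"bangs": "none", "volume": "high"},
    ("short", "narrow", "short", "wide", "round"): {"bangs": "side", "volume": "low"},
}

# Hair style library: number -> attributes (the server-side part of the library).
STYLE_LIBRARY = {
    101: {"bangs": "none", "volume": "high"},
    102: {"bangs": "side", "volume": "low"},
}

def recommend_style_numbers(features):
    """Map predicted facial features to hair style numbers via the two tables."""
    wanted = FEATURE_TO_STYLE_ATTRS.get(tuple(features))
    if wanted is None:
        return []
    return sorted(n for n, attrs in STYLE_LIBRARY.items() if attrs == wanted)
```

The client then renders the 3D model whose number is returned; only the number travels over the network.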
As a preferred technical solution, the first data preparation operation performs face detection on a face picture and crops an image including the forehead region, with the specific steps of:
detecting and framing the face region of the face picture using OpenCV;
and cropping and saving the framed region as a new face picture.
As a preferred technical solution, the fourth data preparation operation divides the face data set into three face data sets of upper court, middle court and lower court according to the extracted face key points, with the specific steps of:
taking the face key points extracted by the third data preparation operation, namely 68 facial feature points, and selecting from them the eyebrow peak points and the nose tip point;
drawing two horizontal lines through the eyebrow peak points and the nose tip point;
dividing the face into upper, middle and lower parts according to the two horizontal lines;
and cropping the images, thereby dividing the original face data set into three face data sets: upper court, middle court and lower court.
As a preferred technical solution, the preprocessing operation expands the data sets based on data augmentation, the specific steps of which include: small-angle rotation, plane flipping, picture translation, brightness adjustment and contrast stretching.
Preferably, the facial features are divided into five indexes of upper court length, upper court width, middle court length, eye spacing and face shape, wherein the face shape comprises a square face, an oval face, a long face and a round face.
As a preferred technical solution, replacing the user's actual hair style with the recommended hair style using augmented reality technology to preview the hair style specifically includes the steps of:
building a 3D environment, and rendering a 3D hairstyle view;
realizing rotation and translation of the hair style based on the accelerometer of an Android smartphone;
realizing free switching of hair styles based on an icon control under the Android framework;
and combining the camera rendering view and the hair style rendering view under the Android framework in combination with face feature point detection.
In order to achieve the second object, the present invention adopts the following technical solutions:
a ResNet based hair style recommendation device comprising: the system comprises a face data set construction module, a data preprocessing module, a mapping table construction module, a prediction model construction module, a hair style recommendation module, a hair style library construction module, a retrieval module and a hair style preview module;
the face data set construction module constructs a face data set for facial feature prediction, and sequentially performs different data preparation operations, wherein the data preparation operations comprise a first data preparation operation, a second data preparation operation, a third data preparation operation, a fourth data preparation operation, a fifth data preparation operation and a sixth data preparation operation;
the first data preparation operation performs face detection on a face picture and crops an image containing the forehead region; the second data preparation operation adjusts the images so that each picture contains only a single face; the third data preparation operation extracts face key points from the face data set by using the Dlib library; the fourth data preparation operation divides the face data set into three face data sets of upper court, middle court and lower court according to the extracted face key points; the fifth data preparation operation applies the corresponding label annotation to the divided data; and the sixth data preparation operation adjusts the pictures of the data sets to the same size;
the data preprocessing module is used for preprocessing the three face data sets of upper court, middle court and lower court, the preprocessing operation expanding the data sets based on data augmentation;
the mapping table building module is used for collecting the correspondence between facial features and hair style attributes and building a mapping table of facial features and hair style attributes, the facial features being divided into five indexes: upper court length, upper court width, middle court length, eye spacing and face shape;
the prediction model construction module is used for training with a ResNet neural network on the three face data sets of upper court, middle court and lower court to respectively obtain a model for predicting the upper court length and upper court width, a model for predicting the middle court length and eye spacing, and a model for predicting the face shape;
the hair style recommending module is used for classifying and predicting the facial features of the human face according to the model obtained by training and outputting a result of hair style recommending attributes according to the facial features and a hair style attribute mapping table;
the hairstyle library building module is used for building a hairstyle library, and marking the attribute of the hairstyle for each hairstyle and numbering the hairstyle;
the retrieval module is used for retrieving in a hair style library to obtain a recommended hair style number according to the result of model recommendation;
and the hair style previewing module is used for replacing the actual hair style of the user by using an augmented reality technology according to the recommended hair style to preview the hair style.
In order to achieve the third object, the present invention adopts the following technical solutions:
a hair style recommendation system based on ResNet is provided with the hair style recommendation device based on ResNet, and further comprises: the system comprises a UI (user interface), an Http client communication module and an Http server communication module;
the UI user interface is used for calling a camera and recommending hair style selection;
the Http client communication module is used for sending the face picture of the user to the server, monitoring the response of the server and receiving the number of the recommended hairstyle;
and the Http server-side communication module is used for receiving the face picture of the user and transmitting the recommended hairstyle number back to the client.
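The client/server exchange described above can be sketched with Python's standard library: the client posts a face picture, and the server answers with a recommended hair style number. The JSON shape and the fixed reply are illustrative assumptions; the real server would first run the ResNet models and mapping-table lookup:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class RecommendHandler(BaseHTTPRequestHandler):
    """Server-side sketch: receive a face picture, return a style number."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        _face_picture = self.rfile.read(length)  # raw image bytes from the client
        # Placeholder result; a real server would run prediction + retrieval here.
        body = json.dumps({"style_number": 101}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve(port=0):
    """Create the server; port 0 lets the OS pick a free port."""
    return HTTPServer(("127.0.0.1", port), RecommendHandler)
```

Only the number crosses the network; the client holds the 3D models locally and renders the matching one.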
In order to achieve the fourth object, the present invention adopts the following technical solutions:
a storage medium stores a program that when executed by a processor implements the above-described ResNet-based hair style recommendation method.
In order to achieve the fifth object, the present invention adopts the following technical solutions:
a computing device comprises a processor and a memory for storing a processor executable program, and when the processor executes the program stored in the memory, the ResNet-based hair style recommendation method is realized.
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) the invention provides users with a ResNet-based hair style recommendation service and displays the hair-trial effect in real time in the form of augmented reality;
(2) the invention provides a series of data preparation operations, a data preprocessing method and a neural network training method, and achieves a good prediction effect based on the three divided data sets;
(3) the invention adopts augmented reality technology to realize real-time preview of the recommended hair style for the user, solves the technical problem of unrealistic model fitting, achieves a realistic and vivid effect, and has high practicability.
Drawings
Fig. 1 is a schematic flowchart of a hairstyle recommendation method based on ResNet in this embodiment 1;
fig. 2 is a block diagram of a structure of a ResNet-based hair style recommendation system according to this embodiment 3;
fig. 3 is a schematic workflow diagram of a hairstyle recommendation system based on ResNet in this embodiment 3.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
As shown in fig. 1, the present embodiment provides a ResNet-based hair style recommendation method, which includes the following steps:
s1: preparing a face data set for facial feature prediction, including selecting pictures from the CelebA face data set openly provided by The Chinese University of Hong Kong and downloading pictures from the web, to create an original face data set for the subsequent data preparation operations;
s2: sequentially performing different data preparation operations on an original data set, wherein the data preparation operations comprise a first data preparation operation, a second data preparation operation, a third data preparation operation, a fourth data preparation operation, a fifth data preparation operation and a sixth data preparation operation;
the first data preparation operation adopts OpenCV to detect and frame the face region of a face picture, and the framed region, which includes the forehead, is cropped and saved as a new face picture;
the second data preparation operation adjusts the images so that each picture contains only a single face;
the third data preparation operation uses the face key point detection model of the Dlib library to extract face key points from the face data set; the model extracts 68 key points, of which this embodiment uses only the eyebrow peak points numbered 18 and 25 and the nose tip point numbered 33;
the fourth data preparation operation draws a straight line a through the two extracted eyebrow peaks and a straight line b parallel to line a through the nose tip point; the two lines divide the face region into three parts, i.e. the face data set is divided into three face data sets of upper court, middle court and lower court;
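The two-line split can be sketched as a pure helper plus an optional Dlib demo path. The landmark indices in the demo (17, 24 and 32, i.e. points 18, 25 and 33 in the embodiment's 1-based numbering) and the shape-predictor file name are assumptions; the helper itself only does the band arithmetic:

```python
def court_bands(brow_left_y, brow_right_y, nose_tip_y, height):
    """Return (upper, middle, lower) row ranges for the three facial courts."""
    line_a = min(brow_left_y, brow_right_y)  # straight line a through the brow peaks
    line_b = nose_tip_y                      # straight line b through the nose tip
    assert 0 <= line_a < line_b < height
    return (0, line_a), (line_a, line_b), (line_b, height)

if __name__ == "__main__":
    # Hypothetical demo path; requires dlib, OpenCV and the 68-point model file.
    import dlib, cv2
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
    img = cv2.imread("face.jpg")
    face = detector(img)[0]
    pts = predictor(img, face)
    bands = court_bands(pts.part(17).y, pts.part(24).y, pts.part(32).y, img.shape[0])
    for i, (top, bottom) in enumerate(bands):
        cv2.imwrite("court_%d.png" % i, img[top:bottom])
```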
the fifth data preparation operation applies the corresponding label annotation to the divided data, with five indexes in total: upper court length, upper court width, middle court length, eye spacing and face shape. The upper court length is represented by a lower-case letter l or s, where l represents long and s represents short; the upper court width is represented by w or d, where w represents wide and d represents narrow; the middle court length uses the same labels as the upper court length; the eye spacing uses the same labels as the upper court width; and the face shape is divided into four types, namely square, oval, long and round, represented by s, o, l and r respectively. Annotation is performed by directly modifying the file name: for example, if the face data numbered n has the characteristics of a long upper court, a wide upper court, a long middle court, a narrow eye spacing and an oval face, the file name of its upper court data is n_l_w, the file name of its middle court data is n_l_d, and the file name of its lower court data is n_o. During training, the labels are obtained by reading the letters in the file names of the corresponding data set according to the model being trained;
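The file-name label encoding above can be decoded with a few lines of Python. The letter-to-word maps follow the scheme described in the embodiment (l/s for long/short, w/d for wide/narrow, s/o/l/r for the face shapes):

```python
LENGTH = {"l": "long", "s": "short"}
WIDTH = {"w": "wide", "d": "narrow"}  # also used for eye spacing
FACE = {"s": "square", "o": "oval", "l": "long", "r": "round"}

def parse_labels(filename, court):
    """Decode the underscore-separated label letters for one court's file.

    e.g. "n_l_w" (upper court), "n_l_d" (middle court), "n_o" (lower court).
    """
    parts = filename.split("_")[1:]  # drop the sample number n
    if court == "upper":             # upper court length + upper court width
        return {"length": LENGTH[parts[0]], "width": WIDTH[parts[1]]}
    if court == "middle":            # middle court length + eye spacing
        return {"length": LENGTH[parts[0]], "eye_spacing": WIDTH[parts[1]]}
    return {"face_shape": FACE[parts[0]]}  # lower court: face shape only
```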
the sixth data preparation operation resizes the data set pictures to the same size; the picture size used in this embodiment is 256 × 256: when a picture is smaller than 256 × 256 it is expanded to 256 × 256 by upsampling with bilinear interpolation, and when a picture is larger than 256 × 256 it is reduced by downsampling;
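A minimal sketch of the resize rule, with the decision logic separated from the image call; the `cv2` interpolation flags in the demo path are an assumption about the toolchain (the patent does not name a library for this step):

```python
TARGET = 256  # embodiment's picture size, 256 x 256

def resize_mode(width, height, target=TARGET):
    """Pick the resampling direction for a picture of the given size."""
    if width == target and height == target:
        return "none"
    return "bilinear-upsample" if width < target or height < target else "downsample"

if __name__ == "__main__":
    import cv2  # demo only
    img = cv2.imread("court_0.png")
    h, w = img.shape[:2]
    interp = cv2.INTER_LINEAR if resize_mode(w, h) == "bilinear-upsample" else cv2.INTER_AREA
    img = cv2.resize(img, (TARGET, TARGET), interpolation=interp)
```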
s3: respectively carrying out the same preprocessing operation on the internal pictures of the three data sets;
the preprocessing operation expands the data sets based on data augmentation, the specific steps of which are: small-angle rotation, plane flipping, picture translation, brightness adjustment and contrast stretching.
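Three of the listed transforms can be sketched dependency-free by modelling a picture as a nested list of grayscale values; real code would use the OpenCV or PIL equivalents (rotation and contrast stretching are omitted here for brevity):

```python
def flip_horizontal(img):
    """Plane flip: mirror each pixel row."""
    return [row[::-1] for row in img]

def adjust_brightness(img, delta):
    """Brightness adjustment: shift every pixel by delta, clamped to 0-255."""
    return [[min(255, max(0, p + delta)) for p in row] for row in img]

def translate_right(img, shift, fill=0):
    """Picture translation: move pixels right, padding the vacated columns."""
    return [[fill] * shift + row[:-shift] if shift else row[:] for row in img]

def augment(img):
    """Expand one picture into several variants, as in the preprocessing step."""
    return [img, flip_horizontal(img), adjust_brightness(img, 30),
            adjust_brightness(img, -30), translate_right(img, 1)]
```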
S4: collecting the correspondence between facial features and hair style attributes, establishing a mapping table of facial features and hair style attributes, and dividing the facial features into four binary-classification indexes and one multi-classification index, wherein the binary-classification indexes comprise the first, second, third and fourth indexes, and the multi-classification index is the fifth index;
the first index is used for distinguishing the length of a court, the second index is used for distinguishing the width of the court, the third index is used for distinguishing the length of the court, the fourth index is used for distinguishing the length of an eye space, and the fifth index is used for distinguishing a face type, wherein the face type comprises a square face, an oval face, a long face and a round face.
In the present embodiment, the mapping relationship between facial features and hairstyle is shown in table 1 below:
TABLE 1 mapping relationship Table of facial features and hairstyle
S5: training models for the three courts respectively by adopting a ResNet neural network on the three data sets;
training on the upper court data set with a ResNet neural network to obtain a model for predicting the upper court length and the upper court width;
training on the middle court data set with a ResNet neural network to obtain a model for predicting the middle court length and the eye spacing;
and training on the lower court data set with a ResNet neural network to obtain a model for predicting the face shape.
The training process of the three models adopts the following steps:
dividing the preprocessed three data sets into a training set, a verification set and a test set in the ratio 7:2:1, setting the model storage path, the data set path and the label reading mode, and selecting a suitable batch size and learning rate according to the GPU memory of the machine used, so that the GPU memory does not overflow;
when the label is read, reading letters in the file names of the corresponding data sets according to different training models.
training with the divided training set, dynamically saving the network weights, observing the training loss value in real time, and adjusting the network parameters according to the validation effect on the verification set;
after the network parameters have been adjusted several times with the verification set, training is stopped when the loss stays below 0.01 and the accuracy of the model on the verification set exceeds 0.85, giving the final models.
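The split ratio and stopping rule above can be sketched as follows. The `torchvision` ResNet in the demo path is an assumption: the patent names ResNet but specifies neither a depth nor a framework, so resnet18 is an illustrative choice:

```python
def split_counts(n):
    """Split n samples into train/val/test counts in the 7:2:1 ratio."""
    train = n * 7 // 10
    val = n * 2 // 10
    return train, val, n - train - val

def should_stop(loss, val_accuracy):
    """Stopping rule from the embodiment: loss below 0.01, accuracy above 0.85."""
    return loss < 0.01 and val_accuracy > 0.85

if __name__ == "__main__":
    # Hypothetical demo path; requires torch/torchvision.
    import torch, torchvision
    model = torchvision.models.resnet18(num_classes=4)  # e.g. lower-court face-shape model
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # ... per-epoch training/validation loop elided; stop the loop once
    # should_stop(epoch_loss, epoch_val_accuracy) is satisfied ...
```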
S6: classifying and predicting the facial features of the human face according to the model obtained by training, and giving a result of hair style recommendation attributes according to the facial features and a hair style attribute mapping table;
s7: establishing a hair style library, marking hair style attributes for each hair style and numbering;
the 3D hair style library is divided into two parts, the first part is a specific 3D model and the serial number thereof, and the part is deployed at a client, so that the transmission quantity of data can be reduced, and the effect preview efficiency is improved; the second part is the attribute and the number corresponding to the hair style, and the second part is deployed in a server to facilitate the retrieval of the subsequent steps.
S8, according to the result of model recommendation, searching in a hair style library to obtain a recommended hair style number;
s9, replacing the user's actual hair style with the recommended hair style using augmented reality technology to preview it, which specifically comprises: rendering the corresponding 3D hair style view according to the output hair style number, synthesizing the 3D hair style rendering view with the camera rendering view, and providing a real-time hair style effect preview.
In this embodiment, the OpenGL ES 2.0/3.0 engine Rajawali on the Android side is used to implement the augmented reality technology, specifically as follows:
building a 3D environment and rendering the 3D hair style view SurfaceView1 with Rajawali;
writing a script based on the accelerometer of the Android smartphone to realize rotation and translation of the hair style view SurfaceView1;
realizing free switching of hair styles based on an icon control under the Android framework;
detecting face feature points with OpenCV under the Android framework, and locating the temple points on the left and right sides of the face;
attaching the 3D hair style to the face with the onRender() method of the Rajawali framework according to the two detected temple points;
and superimposing the 3D hair style view SurfaceView1 onto the camera rendering view SurfaceView2 with Rajawali.
Example 2
The embodiment provides a hair style recommending device based on ResNet, which comprises: the system comprises a face data set construction module, a data preprocessing module, a mapping table construction module, a prediction model construction module, a hair style recommendation module, a hair style library construction module, a retrieval module and a hair style preview module;
in the embodiment, the face data set construction module constructs a face data set for facial feature prediction, and sequentially performs different data preparation operations including a first data preparation operation, a second data preparation operation, a third data preparation operation, a fourth data preparation operation, a fifth data preparation operation and a sixth data preparation operation;
in this embodiment, the first data preparation operation performs face detection on a face picture and crops an image containing the forehead region; the second data preparation operation adjusts the images so that each picture contains only a single face; the third data preparation operation extracts face key points from the face data set by using the Dlib library; the fourth data preparation operation divides the face data set into three face data sets of upper court, middle court and lower court according to the extracted face key points; the fifth data preparation operation applies the corresponding label annotation to the divided data; and the sixth data preparation operation adjusts the pictures of the data sets to the same size;
in this embodiment, the data preprocessing module is used to preprocess the three face data sets of upper court, middle court and lower court, the preprocessing operation expanding the data sets based on data augmentation;
in this embodiment, the mapping table building module is used to collect the correspondence between facial features and hair style attributes and build a mapping table of facial features and hair style attributes, the facial features being divided into five indexes: upper court length, upper court width, middle court length, eye spacing and face shape;
in this embodiment, the prediction model construction module is used to train with a ResNet neural network on the three face data sets of upper court, middle court and lower court, obtaining a model for predicting the upper court length and upper court width, a model for predicting the middle court length and eye spacing, and a model for predicting the face shape;
in this embodiment, the hair style recommendation module is configured to perform classification prediction on facial features of a human face according to a model obtained through training, and output a result of hair style recommendation attributes according to a facial feature and a hair style attribute mapping table;
in this embodiment, the hair style library construction module is configured to establish a hair style library, and label and number a hair style attribute for each hair style;
in this embodiment, the retrieval module is configured to retrieve, in the hair style library, a recommended hair style number according to a result of the model recommendation;
in this embodiment, the hair style previewing module is configured to preview the hair style by replacing the user's actual hair style using augmented reality technology according to the recommended hair style.
Example 3
As shown in fig. 2 and fig. 3, the present embodiment provides a ResNet-based hair style recommendation system, which is provided with the ResNet-based hair style recommendation device of embodiment 2, and further includes: a UI user interface, an Http client communication module and an Http server communication module,
in this embodiment, the UI user interface is used to invoke the camera and to select the recommended hair style;
in this embodiment, the Http client communication module is configured to send the user face image to the server, monitor a server response, and receive a recommended hairstyle number;
in this embodiment, the Http server-side communication module is configured to receive a picture of a face of a user, and transmit a recommended hair style number back to the client.
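The client/server exchange described by the two Http communication modules can be sketched end to end with Python's standard library: the client POSTs the face picture as raw bytes and the server replies with a recommended hair style number. The URL path and the number are illustrative assumptions, and the recommendation itself is stubbed out where the ResNet models would run.

```python
# Minimal sketch of the Http round trip: client sends a face picture,
# server replies with a recommended hair style number (stubbed here).
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.client import HTTPConnection

class RecommendHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        _face_picture = self.rfile.read(length)  # would feed the ResNet models
        body = b"42"                             # stubbed hair style number
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), RecommendHandler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: send the picture bytes, receive the recommended number.
conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("POST", "/recommend", body=b"fake image bytes")
number = conn.getresponse().read().decode()
print(number)  # 42
server.shutdown()
```

On device the client would read the bytes from the camera picture and use the returned number to look up the 3D hair style model stored client-side.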
Example 4
The present embodiment further provides a storage medium, which may be a ROM, a RAM, a magnetic disk, an optical disk or the like; the storage medium stores one or more programs, and when the programs are executed by a processor, the ResNet-based hair style recommendation method of embodiment 1 above is implemented.
The storage medium comprises the 3D hair style library in the ResNet-based hair style recommendation system, which is divided into two parts: one part stores the 3D hair style models and their numbers and is deployed at the client, the other part stores the corresponding hair style attributes and numbers and is deployed at the server, and the server-side scheduling module is allowed to read and dynamically update the contents of the 3D hair style library through the C-language API of the MySQL database.
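The server-side half of the library is a simple number-to-attributes table. The text specifies MySQL accessed through its C-language API; below, Python's stdlib sqlite3 stands in so the schema and queries can be shown self-contained — the table and column names are assumptions, not the patent's.

```python
# Sketch of the server-side 3D hair style library: number -> attributes.
# sqlite3 stands in for the MySQL database named in the text.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE hair_style (number INTEGER PRIMARY KEY, attributes TEXT)")
db.executemany(
    "INSERT INTO hair_style VALUES (?, ?)",
    [(1, "short,fringe"), (2, "long,curly"), (3, "short,side-part")],
)

# Retrieval: find the numbers of hair styles matching a recommended attribute.
rows = db.execute(
    "SELECT number FROM hair_style WHERE attributes LIKE ? ORDER BY number",
    ("%short%",),
).fetchall()
numbers = [n for (n,) in rows]
print(numbers)  # [1, 3]

# Dynamic update, as the scheduling module is allowed to perform:
db.execute("UPDATE hair_style SET attributes = ? WHERE number = ?",
           ("long,straight", 2))
```

The client would then resolve each returned number against its locally stored 3D hair style models.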
Example 5
The embodiment provides a computing device, which may be a desktop computer, a notebook computer, a smart phone, a PDA handheld terminal, a tablet computer, or other terminal devices with a display function, and the computing device includes a processor and a memory, where the memory stores one or more programs, and when the processor executes the programs stored in the memory, the ResNet-based hair style recommendation method of embodiment 1 is implemented.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (10)

1. A ResNet-based hair style recommendation method is characterized by comprising the following steps:
constructing a human face data set for facial feature prediction, and sequentially performing different data preparation operations, wherein the data preparation operations comprise a first data preparation operation, a second data preparation operation, a third data preparation operation, a fourth data preparation operation, a fifth data preparation operation and a sixth data preparation operation;
the first data preparation operation is used for carrying out face detection on a face picture and intercepting an image containing the forehead area, the second data preparation operation is used for adjusting the image format so that only a single face exists in a single picture, the third data preparation operation is used for extracting face key points from the face data set by using the Dlib library, the fourth data preparation operation is used for dividing the face data set into three face data sets of upper court, middle court and lower court according to the extracted face key points, the fifth data preparation operation is used for labeling the divided data with corresponding labels, and the sixth data preparation operation is used for adjusting the pictures of the data sets to the same size;
preprocessing the three face data sets of upper court, middle court and lower court, wherein the preprocessing expands the data sets through data augmentation;
collecting the corresponding features of facial features and hair style attributes, establishing a mapping table of the facial features and the hair style attributes, and dividing the facial features into five indexes of upper court length, upper court width, middle court length, inter-eye distance and face shape;
training by adopting a ResNet neural network based on the three face data sets of upper court, middle court and lower court to respectively obtain a model for predicting the upper court length and upper court width, a model for predicting the middle court length and inter-eye distance, and a model for predicting the face shape;
classifying and predicting the facial features of the human face according to the models obtained by training, and giving the hair style recommendation attributes according to the facial feature-hair style attribute mapping table;
establishing a hair style library, marking hair style attributes for each hair style and numbering;
according to the result of model recommendation, searching in a hair style library to obtain a recommended hair style number;
and replacing the actual hair style of the user by using an augmented reality technology according to the recommended hair style to preview the hair style.
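The mapping-and-retrieval steps of the claim above can be sketched as two lookups: predicted facial feature classes map to hair style attributes through the mapping table, and the attributes select numbered hair styles from the library. Every feature class, attribute and number below is an illustrative assumption, not data from the patent.

```python
# Toy version of the mapping table and the hair style library retrieval.

# Facial feature -> hair style attribute mapping table (one rule per index).
mapping_table = {
    ("upper_court", "long"):  "fringe",
    ("upper_court", "short"): "no_fringe",
    ("face_shape", "round"):  "volume_on_top",
    ("face_shape", "long"):   "side_volume",
}

# Hair style library: number -> set of hair style attributes.
hair_style_library = {
    1: {"fringe", "volume_on_top"},
    2: {"no_fringe", "side_volume"},
    3: {"fringe", "side_volume"},
}

def recommend(predicted_features):
    """Map predicted feature classes to attributes, then retrieve the
    numbers of all hair styles carrying every recommended attribute."""
    wanted = {mapping_table[f] for f in predicted_features if f in mapping_table}
    return sorted(n for n, attrs in hair_style_library.items() if wanted <= attrs)

# e.g. the models predict a long upper court and a round face shape:
print(recommend([("upper_court", "long"), ("face_shape", "round")]))  # [1]
```

The returned numbers are what the retrieval step hands to the hair style preview.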
2. The ResNet-based hair style recommendation method according to claim 1, wherein the first data preparation operation performs face detection on a face picture, and intercepts an image containing a forehead area, and the specific steps include:
detecting and framing a face area of the face picture by adopting OpenCV;
and intercepting the framed area and storing it as a new face picture.
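The crop step of claim 2 can be sketched as follows. The OpenCV detection call itself is abstracted away (e.g. a Haar-cascade detector returning an (x, y, w, h) box); what is shown is keeping the forehead by expanding the box upward before intercepting, on a plain row-major grayscale "image". The 25% upward margin is an assumption, since detectors typically frame the face from the eyebrows down.

```python
# Sketch of the frame-and-crop step; the detector call is assumed.

def crop_with_forehead(image, face_box, top_margin=0.25):
    """Expand the detected face box upward by `top_margin` of its height,
    clamp to the picture, and return the intercepted region."""
    x, y, w, h = face_box
    top = max(0, y - int(h * top_margin))  # include the forehead area
    return [row[x:x + w] for row in image[top:y + h]]

image = [[r * 10 + c for c in range(10)] for r in range(10)]  # 10x10 test picture
face = crop_with_forehead(image, (2, 4, 5, 4))  # detector "found" a 5x4 face at (2, 4)
print(len(face), len(face[0]))  # 5 5  -> 4 face rows + 1 forehead row, 5 columns wide
```

With a real picture the same slicing would be done on the OpenCV image array before saving the new face picture.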
3. The ResNet-based hair style recommendation method according to claim 1, wherein the fourth data preparation operation divides the face data set into three face data sets of upper court, middle court and lower court according to the extracted face key points, and the specific steps comprise:
the third data preparation operation extracts the face key points, obtaining 68 facial feature points;
extracting the eyebrow peak points and the nose tip point from the 68 facial feature points;
drawing two horizontal lines through the eyebrow peak point and the nose tip point;
dividing the face into an upper part, a middle part and a lower part according to two horizontal lines;
and cutting the image to divide the original face data set into three face data sets, namely upper court, middle court and lower court.
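The three-court split above can be sketched on the standard 68-point layout that Dlib's shape predictor emits (eyebrow points are commonly indices 17-26 and the nose tip index 30 in that layout; both treated here as assumptions). The image is split row-wise at the eyebrow-peak line and the nose-tip line.

```python
# Sketch of the upper/middle/lower court split from 68 face key points.

def split_three_courts(image, landmarks):
    """landmarks: list of 68 (x, y) points. Returns (upper, middle, lower)."""
    brow_line = min(y for (_, y) in landmarks[17:27])  # highest eyebrow point
    nose_line = landmarks[30][1]                       # nose tip
    upper = image[:brow_line]            # upper court: forehead
    middle = image[brow_line:nose_line]  # middle court: brows to nose tip
    lower = image[nose_line:]            # lower court: nose tip to chin
    return upper, middle, lower

# Synthetic check: 100-row image, eyebrow peak at y=30, nose tip at y=60.
image = [[0] * 80 for _ in range(100)]
landmarks = [(0, 50)] * 68
landmarks[20] = (40, 30)   # an eyebrow point, the highest one
landmarks[30] = (40, 60)   # the nose tip
upper, middle, lower = split_three_courts(image, landmarks)
print(len(upper), len(middle), len(lower))  # 30 30 40
```

Each of the three slices then becomes one of the three face data sets fed to its own ResNet model.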
4. The ResNet-based hair style recommendation method according to claim 1, wherein the preprocessing operation expands the data set through data augmentation, and the specific steps of the data augmentation comprise: small-angle rotation, horizontal flipping, picture translation, brightness adjustment and contrast stretching.
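Two of the augmentation operations listed in the claim — horizontal flipping and brightness adjustment — can be sketched directly on a row-major grayscale image; rotation, translation and contrast stretching follow the same per-pixel pattern, and in practice a library such as OpenCV or PIL would perform all five.

```python
# Minimal sketch of two augmentation operations on a grayscale image.

def flip_horizontal(image):
    """Mirror each row left-to-right ("plane turnover")."""
    return [row[::-1] for row in image]

def adjust_brightness(image, delta):
    """Shift every pixel by `delta`, clamped to the 0-255 range."""
    return [[min(255, max(0, p + delta)) for p in row] for row in image]

image = [[10, 20, 30],
         [40, 50, 60]]
print(flip_horizontal(image))         # [[30, 20, 10], [60, 50, 40]]
print(adjust_brightness(image, 230))  # [[240, 250, 255], [255, 255, 255]]
```

Applying several such operations to each training picture multiplies the size of the three face data sets without new collection.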
5. The ResNet-based hair style recommendation method according to claim 1, wherein the facial features are divided into five indexes of upper court length, upper court width, middle court length, inter-eye distance and face shape, and the face shape comprises a square face, an oval face, a long face and a round face.
6. The ResNet-based hair style recommendation method according to claim 1, wherein the step of previewing the hair style by replacing the actual hair style of the user with an augmented reality technology according to the recommended hair style comprises the following specific steps:
building a 3D environment, and rendering a 3D hairstyle view;
realizing rotation and translation of the hair style based on the accelerometer of an Android smartphone;
realizing free switching of hair styles based on the icon control under the Android framework;
and merging the camera rendering view and the hair style rendering view under the Android framework in combination with face feature point detection.
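The accelerometer-driven rotation step above reduces to reading the gravity vector and converting it to tilt angles that drive the 3D hair style view. The formulas below are the standard pitch/roll-from-gravity conversion, shown here language-neutrally in Python; the actual Android sensor reading and OpenGL rendering are omitted.

```python
# Standard tilt angles from an accelerometer gravity reading (m/s^2).
import math

def tilt_from_accelerometer(ax, ay, az):
    """Return (pitch, roll) in degrees from a gravity vector (ax, ay, az)."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

print(tilt_from_accelerometer(0, 0, 9.81))  # (0.0, 0.0) - phone lying flat
print(tilt_from_accelerometer(0, 9.81, 0))  # roll ~= 90 degrees - phone on its side
```

Feeding these angles into the hair style view's rotation matrix each sensor tick is what makes the rendered hair style track the phone's movement.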
7. A ResNet-based hair style recommendation device, comprising: the system comprises a face data set construction module, a data preprocessing module, a mapping table construction module, a prediction model construction module, a hair style recommendation module, a hair style library construction module, a retrieval module and a hair style preview module;
the face data set construction module constructs a face data set for facial feature prediction, and sequentially performs different data preparation operations, wherein the data preparation operations comprise a first data preparation operation, a second data preparation operation, a third data preparation operation, a fourth data preparation operation, a fifth data preparation operation and a sixth data preparation operation;
the first data preparation operation is used for carrying out face detection on a face picture and intercepting an image containing the forehead area, the second data preparation operation is used for adjusting the image format so that only a single face exists in a single picture, the third data preparation operation is used for extracting face key points from the face data set by using the Dlib library, the fourth data preparation operation is used for dividing the face data set into three face data sets of upper court, middle court and lower court according to the extracted face key points, the fifth data preparation operation is used for labeling the divided data with corresponding labels, and the sixth data preparation operation is used for adjusting the pictures of the data sets to the same size;
the data preprocessing module is used for preprocessing the three face data sets of upper court, middle court and lower court, wherein the preprocessing expands the data sets through data augmentation;
the mapping table building module is used for collecting the corresponding features of facial features and hair style attributes and building a mapping table of the facial features and the hair style attributes, the facial features being divided into five indexes of upper court length, upper court width, middle court length, inter-eye distance and face shape;
the prediction model construction module is used for training by adopting a ResNet neural network based on the three face data sets of upper court, middle court and lower court to respectively obtain a model for predicting the upper court length and upper court width, a model for predicting the middle court length and inter-eye distance, and a model for predicting the face shape;
the hair style recommending module is used for classifying and predicting the facial features of the human face according to the models obtained by training and outputting the hair style recommendation attributes according to the facial feature-hair style attribute mapping table;
the hair style library building module is used for building a hair style library, and labeling each hair style with hair style attributes and a number;
the retrieval module is used for retrieving in a hair style library to obtain a recommended hair style number according to the result of model recommendation;
and the hair style previewing module is used for previewing the hair style by replacing the user's actual hair style using augmented reality technology according to the recommended hair style.
8. A ResNet based hair style recommendation system, provided with a ResNet based hair style recommendation device according to claim 7, further comprising: the system comprises a UI (user interface), an Http client communication module and an Http server communication module;
the UI user interface is used for invoking the camera and selecting the recommended hair style;
the Http client communication module is used for sending the face picture of the user to the server, monitoring the response of the server and receiving the number of the recommended hairstyle;
and the Http server-side communication module is used for receiving the face picture of the user and transmitting the recommended hairstyle number back to the client.
9. A storage medium storing a program, wherein the program, when executed by a processor, implements a ResNet based hair style recommendation method according to any one of claims 1-6.
10. A computing device comprising a processor and a memory for storing processor-executable programs, wherein the processor, when executing a program stored in the memory, implements a ResNet-based hair style recommendation method as recited in any of claims 1-6.
CN202010966106.2A 2020-09-15 2020-09-15 Hair style recommendation method based on ResNet and application thereof Active CN112084983B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010966106.2A CN112084983B (en) 2020-09-15 2020-09-15 Hair style recommendation method based on ResNet and application thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010966106.2A CN112084983B (en) 2020-09-15 2020-09-15 Hair style recommendation method based on ResNet and application thereof

Publications (2)

Publication Number Publication Date
CN112084983A true CN112084983A (en) 2020-12-15
CN112084983B CN112084983B (en) 2022-07-26

Family

ID=73737870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010966106.2A Active CN112084983B (en) 2020-09-15 2020-09-15 Hair style recommendation method based on ResNet and application thereof

Country Status (1)

Country Link
CN (1) CN112084983B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744367A (en) * 2021-09-10 2021-12-03 电子科技大学 System and method for editing portrait hairstyle in two-dimensional image

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120046652A (en) * 2010-11-02 2012-05-10 에스케이플래닛 주식회사 System and method for recommending hair based on face recognition
CN103489219A (en) * 2013-09-18 2014-01-01 华南理工大学 3D hair style effect simulation system based on depth image analysis
CN108629303A (en) * 2018-04-24 2018-10-09 杭州数为科技有限公司 A kind of shape of face defect identification method and system
CN108664569A (en) * 2018-04-24 2018-10-16 杭州数为科技有限公司 A kind of hair style recommends method, system, terminal and medium
US20180349979A1 (en) * 2017-06-01 2018-12-06 The Gillette Company Llc Method for providing a customized product recommendation
CN110489634A (en) * 2018-05-10 2019-11-22 合刃科技(武汉)有限公司 A kind of build information recommended method, device, system and terminal device
CN110598097A (en) * 2019-08-30 2019-12-20 中国科学院自动化研究所南京人工智能芯片创新研究院 Hair style recommendation system, method, equipment and storage medium based on CNN

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHIXIN LIANG ET AL: "Research on the Personalized Recommendation Algorithm for Hairdressers", SCIRES *
ZHANG Shuai et al: "Hair Style Recommendation Algorithm Based on Collaborative Filtering", Modern Computer *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744367A (en) * 2021-09-10 2021-12-03 电子科技大学 System and method for editing portrait hairstyle in two-dimensional image
CN113744367B (en) * 2021-09-10 2023-08-08 电子科技大学 System and method for editing portrait hairstyle in two-dimensional image

Also Published As

Publication number Publication date
CN112084983B (en) 2022-07-26

Similar Documents

Publication Publication Date Title
US10853987B2 (en) Generating cartoon images from photos
US20220239988A1 (en) Display method and apparatus for item information, device, and computer-readable storage medium
US10489683B1 (en) Methods and systems for automatic generation of massive training data sets from 3D models for training deep learning networks
CN109310196B (en) Makeup assisting device and makeup assisting method
CN104637035B (en) Generate the method, apparatus and system of cartoon human face picture
US11657575B2 (en) Generating augmented reality content based on third-party content
CN106919738A (en) A kind of hair style matching process
JP6934632B2 (en) Make Trend Analyzer, Make Trend Analysis Method, and Make Trend Analysis Program
US20210406996A1 (en) Systems and methods for improved facial attribute classification and use thereof
CN107742273A (en) A kind of virtual try-in method of 2D hair styles and device
CN115668263A (en) Identification of physical products for augmented reality experience in messaging systems
WO2024046189A1 (en) Text generation method and apparatus
CN114913303A (en) Virtual image generation method and related device, electronic equipment and storage medium
CN114266621A (en) Image processing method, image processing system and electronic equipment
US11842457B2 (en) Method for processing slider for virtual character, electronic device, and storage medium
CN112084983B (en) Hair style recommendation method based on ResNet and application thereof
CN114904268A (en) Virtual image adjusting method and device, electronic equipment and storage medium
CN112015934B (en) Intelligent hair style recommendation method, device and system based on neural network and Unity
CN115546361A (en) Three-dimensional cartoon image processing method and device, computer equipment and storage medium
CN113361419A (en) Image processing method, device, equipment and medium
CN108875496A (en) The generation of pedestrian's portrait and the pedestrian based on portrait identify
KR101757184B1 (en) System for automatically generating and classifying emotionally expressed contents and the method thereof
CN110378979A (en) The method automatically generated based on the generation confrontation customized high-resolution human face picture of network implementations
CN114445528B (en) Virtual image generation method and device, electronic equipment and storage medium
CN112987932B (en) Human-computer interaction and control method and device based on virtual image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant