CN108920490A - Method, apparatus, electronic device and storage medium for implementing makeup assistance - Google Patents
- Publication number: CN108920490A
- Application number: CN201810456104.1A
- Authority: CN (China)
- Prior art keywords: makeup, user, face, image, target
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0251—Targeted advertisements
- G06Q30/0269—Targeted advertisements based on user profile or attribute
- G06Q30/0271—Personalized advertisement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0631—Item recommendations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
Abstract
The invention discloses a method, an apparatus, an electronic device and a storage medium for assisting makeup, including: extracting personalized features of a user; respectively calculating the matching degree between the personalized features of the user and each face feature in a database; and recommending to the user, as a recommended makeup, the makeup corresponding to the face feature with the highest matching degree in the database. By comparing the personalized features of the user with the face features in the database, the embodiments of the invention recommend a makeup suited to the user, preventing beginners from learning a makeup that does not suit them; suitable cosmetics can also be recommended to the user according to the makeup, avoiding the waste caused by beginners blindly purchasing cosmetics that do not suit them.
Description
Technical Field
The present invention relates to the field of human-computer interaction technologies, and in particular, to a method and an apparatus for implementing makeup assistance, an electronic device, and a storage medium.
Background
People increasingly pursue beauty, and many, especially girls, choose makeup to make their faces more attractive. However, makeup is a skill that usually takes long practice to master; because beginners lack understanding, makeup knowledge and technique, the makeup they apply is often unsuitable for them or inharmonious, and the intended beautifying effect is not achieved.
A makeup beginner generally either teaches herself, for example through videos, or turns to a training institution. Makeup skills acquired through self-learning channels such as books, the internet and word of mouth usually cannot take the user's facial features into account, so they cannot meet the user's personalized needs; self-learning also tends to be slow and full of detours, and the cosmetics chosen are often unsuitable, wasting both time and money. Training institutions, on the other hand, are expensive, and since makeup requires long-term practice, frequently travelling to an institution is troublesome and time-consuming.
Disclosure of Invention
In view of the above, the present invention provides a method, an apparatus, an electronic device and a storage medium for assisting makeup, so as to solve the problem that a user cannot efficiently grasp makeup skills.
According to a first aspect of the present invention, there is provided a method for implementing makeup assistance, comprising:
extracting personalized features of a user;
respectively calculating the matching degree of the personalized features of the user and each face feature in a database;
and selecting the makeup corresponding to the face features with the highest matching degree in the database as recommended makeup to be recommended to the user.
In some embodiments of the invention, extracting personalized features of the user comprises:
the method comprises the steps of collecting a face image of a user, and carrying out face detection, key point positioning and color feature analysis on the face image of the user, so as to extract a first personalized feature of the user.
In some embodiments of the present invention, extracting the personalized features of the user further comprises:
and extracting second personalized features of the user through a question-and-answer mode of human-computer interaction.
In some embodiments of the invention, the method further comprises:
and in response to an operation of the user selecting a target makeup from the recommended makeup, superimposing the target makeup on the face image of the user for display.
In some embodiments of the invention, the method further comprises:
displaying teaching contents corresponding to the current makeup step to a user;
and detecting whether hands exist in the acquired images in real time through a pre-established hand recognition model, and if the hands are not detected continuously within a preset time threshold, scoring the exercise result of the current makeup step based on the currently acquired face image of the user.
In some embodiments of the invention, the method further comprises:
judging whether the score for scoring the practice result of the current makeup step exceeds a score threshold value; if yes, continuing to display teaching contents corresponding to the next makeup step to the user; and otherwise, displaying the modification suggestion of the current makeup step to the user based on the comparative analysis result between the collected face image of the user and the reference makeup image corresponding to the current makeup step.
In some embodiments of the present invention, displaying the modification suggestions of the current makeup step to the user based on the comparative analysis result between the currently collected face image of the user and the reference makeup image corresponding to the current makeup step includes:
acquiring a face image of a user so as to acquire a face makeup image of the user;
carrying out face key point positioning on the face makeup image of the user, and intercepting a regional makeup image corresponding to the current makeup step from the face makeup image of the user;
intercepting an area makeup image corresponding to the current makeup step from the reference makeup image;
and performing comparative analysis on the area makeup image of the user and the area makeup image of the reference makeup by adopting a structural similarity algorithm to obtain a comparative analysis result, thereby displaying the modification suggestion of the current makeup step to the user.
In some embodiments of the invention, the method further comprises:
and in response to the operation of selecting the target makeup from the recommended makeup, recommending the target cosmetics to the user according to the makeup steps of the target makeup and the personalized features of the user.
In some embodiments of the present invention, recommending a target cosmetic to a user according to a makeup procedure of the target makeup and a personalized feature of the user, comprises:
respectively determining the cosmetics corresponding to each makeup step according to the makeup steps of the target makeup;
and aiming at each makeup step, respectively calculating the matching degree of the face features corresponding to each cosmetic and the personalized features of the user, and selecting the cosmetic corresponding to the face feature with the highest matching degree as a target cosmetic, so as to recommend the target cosmetic to the user.
According to a second aspect of the present invention, there is provided an apparatus for implementing makeup assistance, comprising:
the personalized feature extraction module is configured to extract personalized features of the user;
the face feature matching module is configured to respectively calculate the matching degree of the personalized features of the user and each face feature in the database;
and the makeup recommending module is configured to select the makeup corresponding to the face feature with the highest matching degree in the database to be recommended to the user as a recommended makeup.
In some embodiments of the invention, the personalized feature extraction module is configured to:
the method comprises the steps of collecting a face image of a user, and carrying out face detection, key point positioning and color feature analysis on the face image of the user, so as to extract a first personalized feature of the user.
In some embodiments of the invention, the personalized feature extraction module is further configured to:
and extracting second personalized features of the user through a question-and-answer mode of human-computer interaction.
In some embodiments of the invention, the makeup recommendation module is further configured to:
and in response to an operation of the user selecting a target makeup from the recommended makeup, superimposing the target makeup on the face image of the user for display.
In some embodiments of the invention, the apparatus further comprises:
the display module is configured to display teaching contents corresponding to the current makeup step to a user;
and the scoring module is configured to detect in real time, through a pre-established hand recognition model, whether a hand is present in the collected images, and if no hand is detected continuously within a preset time threshold, to score the exercise result of the current makeup step based on the currently collected face image of the user.
In some embodiments of the invention, the scoring module is further configured to:
judging whether the score for scoring the practice result of the current makeup step exceeds a score threshold value; if yes, continuing to display teaching contents corresponding to the next makeup step to the user; and otherwise, displaying the modification suggestion of the current makeup step to the user based on the comparative analysis result between the currently collected face image of the user and the reference makeup image corresponding to the current makeup step.
In some embodiments of the present invention, displaying the modification suggestions of the current makeup step to the user based on the comparative analysis result between the currently collected face image of the user and the reference makeup image corresponding to the current makeup step includes:
acquiring a face image of a user so as to acquire a face makeup image of the user;
carrying out face key point positioning on the face makeup image of the user, and intercepting a regional makeup image corresponding to the current makeup step from the face makeup image of the user;
intercepting an area makeup image corresponding to the current makeup step from the reference makeup image;
and performing comparative analysis on the area makeup image of the user and the area makeup image of the reference makeup by adopting a structural similarity algorithm to obtain a comparative analysis result, thereby displaying the modification suggestion of the current makeup step to the user.
In some embodiments of the invention, the makeup recommendation module is further configured to:
and in response to the operation of selecting the target makeup from the recommended makeup, recommending the target cosmetics to the user according to the makeup steps of the target makeup and the personalized features of the user.
In some embodiments of the present invention, recommending a target cosmetic to a user according to a makeup procedure of the target makeup and a personalized feature of the user, comprises:
respectively determining the cosmetics corresponding to each makeup step according to the makeup steps of the target makeup;
and aiming at each makeup step, respectively calculating the matching degree of the face features corresponding to each cosmetic and the personalized features of the user, and selecting the cosmetic corresponding to the face feature with the highest matching degree as a target cosmetic, so as to recommend the target cosmetic to the user.
According to a third aspect of the present invention, there is provided an electronic device comprising a processor and a memory, wherein the memory is used for storing computer instructions, and the computer instructions are executed by the processor to implement the method for implementing makeup assistance in any of the above embodiments.
According to a fourth aspect of the present invention, there is provided a storage medium storing computer instructions adapted to be executed by a processor, the computer instructions being executed by the processor to perform the method for implementing makeup assistance according to any one of the above embodiments.
According to the embodiment of the invention, the makeup suitable for the user is recommended by comparing the personalized features of the user with the face features in the database, so that the condition that a beginner learns the makeup unsuitable for the user is avoided, the suitable cosmetics can be recommended to the user according to the makeup, and the waste caused by the fact that the beginner blindly purchases the cosmetics unsuitable for the user is avoided. Compared with the traditional mode of finding the makeup teacher for tutoring, the embodiment of the invention can enable the user to systematically learn the makeup knowledge, and can simply and conveniently assist the user to carry out multiple exercises according to the makeup level of the user. Therefore, the embodiment of the invention can not only accelerate the progress of the user in learning makeup, but also recommend proper cosmetics to the user, thereby reducing waste.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of a method for implementing makeup assistance according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for implementing makeup assistance according to another embodiment of the present invention;
FIG. 3 is a flow chart of a method for implementing makeup assistance according to still another embodiment of the present invention;
FIG. 4 is a schematic diagram of the zones on a display screen according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an apparatus for implementing makeup assistance according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an apparatus for implementing makeup assistance according to another embodiment of the present invention;
FIG. 7 is a schematic diagram of the internal structure of an electronic device according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an electronic device for assisting makeup according to another embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In one embodiment, as shown in fig. 1, an embodiment of the present invention provides a method for implementing makeup assistance, including:
step 101, extracting personalized features of a user.
Optionally, the personalized features may include, but are not limited to, nine dimensions such as skin color, face shape, pigmentation, dark circles, hair color, age, gender, preference and skin type. As another embodiment of the invention, the personalized features of the user can be extracted by acquiring face images of the user and/or through a question-and-answer mode of human-computer interaction. Different users have different personalized features, and these affect the subsequent makeup and cosmetics recommendations.
The step of extracting the personalized features of the user may include: collecting a face image of the user, and performing face detection, key point positioning and color feature analysis on the face image so as to extract first personalized features of the user; and extracting second personalized features of the user through a question-and-answer mode of human-computer interaction. The first personalized features may include skin color, face shape, pigmentation, dark circles, hair color and the like; the second personalized features may include age, gender, preference, skin type and the like.
In this embodiment, face images of the user are collected from multiple angles by a camera and transmitted to the personalized feature extraction module, which extracts first personalized features such as skin color, face shape, pigmentation, dark circles and hair color. The face image may be acquired by a mobile terminal: the mobile terminal is provided with a camera, the user initiates a shooting instruction, and after detecting the instruction the terminal captures the image through the camera. It is to be understood that the face image may also be obtained in other ways, which are not limited here; for example, it may be downloaded from a web page or imported from an external storage device. The second personalized features such as age, gender, hobbies and skin type are obtained through human-computer question and answer, for example by displaying questions on a touch screen for the user to answer, or through voice question and answer.
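As an illustrative sketch only (not part of the original disclosure), the capture and face-detection stage above could look roughly like the following; the stock OpenCV Haar cascade is an assumption standing in for whichever detector the described module actually uses, with key-point positioning and color analysis following on the returned box.

```python
# Sketch of the image-capture and face-detection stage described above.
import cv2

def capture_face_image(camera_index: int = 0):
    """Grab one frame from the camera (multi-angle capture would loop)."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("camera capture failed")
    return frame

def detect_face(frame):
    """Return the (x, y, w, h) box of the largest detected face, or None."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])
    return (x, y, w, h)
```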
As another embodiment of the present invention, the step of extracting the first personalized features may specifically include:
1) acquiring a face image of a user, and carrying out face detection on the face image so as to extract a face contour;
2) performing key point positioning on the face image so as to locate the facial features such as the eyes, nose, mouth and chin, and then obtaining the face shape of the user from the face contour and the key point positioning result, where the face shape may be round, oval, inverted oval, square, rectangular, trapezoid, inverted trapezoid, rhombus, pentagon, and the like;
3) obtaining the skin color characteristics of the user, such as fair, yellowish or dark, by extracting color features from different regions of the face image, and further analyzing the uniformity of the user's skin tone from the color features of the different regions, thereby judging the degree of skin tone uniformity. Optionally, the following formula may be used (reconstructed from the variable definitions below; see also the sketch after step 6)):

Nor = (1/N) · Σᵢ (sᵢ − m)², i = 1, …, N

where Nor is the skin tone uniformity feature, N is the number of regions, sᵢ is the skin tone value of each region, and m is the mean skin tone over the N regions.
4) performing dark-spot analysis and statistics on different regions of the face image; if a preset quantity threshold is reached, the user is considered to have many pigmented spots, and a blemish-concealing step is added during subsequent makeup practice.
5) analyzing, according to the key point positioning result, the pixel values of the image around the user's eyes to judge the degree of dark circles; if the dark circles are pronounced, a local concealing step is added to subsequent makeup suggestions.
6) extracting the user's hair color, such as black, yellow, brown or reddish brown.
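A minimal sketch of steps 3) and 5), under assumptions the disclosure does not fix (the 4x4 region grid and the grayscale tone proxy):

```python
# Sketch of the skin-tone uniformity (step 3) and dark-circle (step 5) analyses.
import numpy as np

def skin_tone_uniformity(face_roi, grid=(4, 4)):
    """Nor = (1/N) * sum_i (s_i - m)^2 over an N-region grid, where s_i is
    the mean tone of region i and m is the mean of all s_i (formula as
    reconstructed in the text above). Lower Nor means more uniform tone."""
    gray = np.asarray(face_roi, dtype=np.float64).mean(axis=2)
    h, w = gray.shape
    rows, cols = grid
    s = np.array([gray[r * h // rows:(r + 1) * h // rows,
                       c * w // cols:(c + 1) * w // cols].mean()
                  for r in range(rows) for c in range(cols)])
    m = s.mean()
    return float(((s - m) ** 2).mean())

def dark_circle_degree(under_eye_region, cheek_region):
    """Darker under-eye pixels relative to the cheek suggest dark circles."""
    return float(np.mean(cheek_region) - np.mean(under_eye_region))
```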
And 102, respectively calculating the matching degree of the personalized features of the user and each face feature in the database.
A large number of different types of makeup, together with the face features suitable for them, are stored in a cloud database. After the personalized features of the user are obtained, the matching degree between the personalized features of the user and the face features corresponding to each type of makeup in the database is calculated respectively.
Optionally, with n-dimensional face features f = {f1, f2, …, fn} respectively representing n features of a human face (skin color, face shape, pigmentation, dark circles, hair color, age, gender, preference, skin type, etc.), the matching degree sim between the user's personalized features f and a face feature f′ in the database may be calculated as follows (a weighted-match form reconstructed from the surrounding description):

sim(f, f′) = Σᵢ wᵢ · δ(fᵢ, fᵢ′), i = 1, …, n

where wᵢ is the weight of the i-th feature, δ(fᵢ, fᵢ′) equals 1 when the two feature values match and 0 otherwise, and the weight values are fixed.
In general, makeup categories can be divided into: light makeup (everyday makeup, work makeup, travel makeup), heavy makeup (evening makeup, party makeup, stage makeup), personality makeup (cute makeup, smoky makeup, classical makeup, cool makeup, etc.), colorful makeup, and so on. Different types of makeup therefore correspond to different face features, as shown in Table 1:
TABLE 1 different types of makeup and corresponding face features
For example, with face features f = {f1, f2, …, f9} respectively representing nine features of a human face (skin color, face shape, pigmentation, dark circles, hair color, age, gender, preference, skin type), the matching degree sim between the user's personalized features f and a face feature f′ in the database is calculated in the same way:

sim(f, f′) = Σᵢ wᵢ · δ(fᵢ, fᵢ′), i = 1, …, 9

with fixed weights wᵢ as above.
Therefore, by executing step 102, the matching degree between the personalized features of the user and each face feature in the database can be calculated respectively.
And 103, selecting the makeup corresponding to the face features with the highest matching degree in the database as recommended makeup to be recommended to the user.
Based on the calculation result of step 102, the makeup corresponding to the face feature with the highest matching degree is selected and output as the recommended makeup. It should be noted that a makeup type may include different sub-makeups (looks), such as Look1, Look2 and Look3; when a certain type of makeup (e.g., light everyday makeup) is recommended to the user, the user can select a sub-makeup of that type (e.g., Look1) according to his or her needs.
In this way, the makeup with the highest matching degree is taken as the recommended makeup, and the most suitable makeup elements are recommended to the user, such as eyebrow shapes (e.g., natural brows, willow-leaf brows, flat brows, sword brows, straight brows) and eye shapes.
As still another embodiment of the present invention, after step 103, the method may further include: in response to an operation of the user selecting a target makeup from the recommended makeup, superimposing the target makeup on the face image of the user for display. When the user selects a sub-makeup of a certain makeup category as the target makeup, the corresponding makeup is generated according to the user's selection, superimposed on the user's face image, and shown on the display screen. The selection operation may be a touch operation (click, long press, slide, or multi-point touch), a press of a physical button, a voice control operation, or a shake of the electronic device. For example, the electronic device may provide a selection button, and when a click on the button is detected, the target makeup is superimposed on the user's face image for display. The electronic device may also preset activation voice information: the voice receiving device is invoked to receive voice information, and when analysis determines that the received voice matches the preset activation voice, the selection operation is triggered and the target makeup is superimposed on the face image for display.
As still another embodiment of the present invention, after step 103, the method may further include: in response to an operation of selecting a target makeup from the recommended makeup, recommending target cosmetics to the user according to the makeup steps of the target makeup and the personalized features of the user. When the user selects a sub-makeup of a certain category as the target makeup, the makeup steps of the target makeup are obtained from the database, suitable target cosmetics are determined according to those steps and the user's personalized features, and the target cosmetics are recommended to the user. In this way, the embodiment of the invention can recommend cosmetic brands and styles suitable for the user and prevent blind purchases. The selection operation may take the same forms as described above (touch, physical button, voice control, or shaking the device).
As another embodiment of the present invention, the step of recommending the target cosmetics to the user according to the makeup steps of the target makeup and the personalized features of the user may specifically include: respectively determining the cosmetics corresponding to each makeup step according to the makeup steps of the target makeup; and for each makeup step, respectively calculating the matching degree between the face features corresponding to each cosmetic and the personalized features of the user, and selecting the cosmetic corresponding to the face feature with the highest matching degree as the target cosmetic to recommend to the user.
In this embodiment, the brands and styles of the cosmetics required for the makeup can be recommended to the user according to the selected target makeup. Different makeups require different makeup steps, and in each step different cosmetics are recommended according to the user's face features. For example, a person with black hair may need a different eyebrow-pencil color number than a person with yellow hair, and a person with dry skin needs a different foundation than a person with oily skin. Based on the personalized features of the user acquired in step 101, similarity matching can be performed between the user's personalized features and the face features in the database, step by step along the makeup procedure. In each makeup step, the cosmetic corresponding to the face feature with the highest matching degree is selected as the target cosmetic and recommended to the user.
Take Look1 in light everyday makeup as an example, which sequentially comprises makeup steps such as primer (isolation cream), concealer, foundation and setting. Different makeup steps correspond to different face features and recommended cosmetics, as shown in Table 2:
table 2 list of face features and recommended cosmetics in a certain makeup stored in data center
Taking the primer step as an example, the matching degree sim between the user's personalized features f = {f1, f8, f9} and the face features corresponding to Ge1 is calculated; then the matching degrees between f = {f1, f8, f9} and the face features corresponding to Ge2, …, GeN are calculated in turn, and finally the cosmetic corresponding to the face feature with the highest matching degree is selected as the target cosmetic.
It should be noted that the face features corresponding to different makeup steps differ: some steps involve only three feature dimensions while others involve eight, so the relevant subset of the user's multi-dimensional personalized features must be selected, according to the face features corresponding to each makeup step, before calculating the matching degree; see the sketch below.
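An illustrative sketch of this per-step cosmetic matching with per-step feature subsets; the data layout (dicts keyed by "f1"…"f9") is an assumption, not the disclosure's schema:

```python
def recommend_cosmetics(makeup_steps, user_profile, weights):
    """For each makeup step, match only the feature dimensions that step
    cares about (e.g. the primer step above uses f1, f8, f9) and pick the
    cosmetic with the highest matching degree."""
    recommendations = {}
    for step in makeup_steps:
        best_score, best_name = -1.0, None
        for cosmetic in step["cosmetics"]:
            dims = cosmetic["feature_dims"]      # e.g. ["f1", "f8", "f9"]
            score = sum(weights[d] * (user_profile[d] == cosmetic["features"][d])
                        for d in dims)
            if score > best_score:
                best_score, best_name = score, cosmetic["name"]
        recommendations[step["name"]] = best_name
    return recommendations
```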
As still another embodiment of the present invention, the method may further include building a hand recognition model in advance, by a machine learning method or by training a deep learning model. After the user selects the target makeup, the method further includes: displaying teaching content corresponding to the current makeup step to the user; detecting in real time, through the pre-built hand recognition model, whether a hand is present in the captured images; and if no hand is detected continuously within a preset time threshold, scoring the practice result of the current makeup step based on the currently captured face image of the user.
A large number of positive and negative samples covering different hand postures are collected offline, and the hand recognition model is obtained using machine learning methods such as Adaboost (an iterative algorithm) or random forests, or by training a deep learning model. The model detects in real time whether a hand is present in the captured image; when no hand appears in the camera's field of view for the preset time threshold, the user is considered to have finished practicing the current makeup step, and the practice result is scored based on the currently captured face image (see the sketch below). Optionally, the method may further include: judging whether the score of the current makeup step exceeds a score threshold; if so, displaying the teaching content corresponding to the next makeup step; otherwise, displaying modification suggestions for the current makeup step based on a comparative analysis between the currently captured face image of the user and the reference makeup image corresponding to the current step.
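A sketch of the hand-detection gate described above; the threshold value and `hand_model.contains_hand` are assumptions standing in for the pre-trained Adaboost / random-forest / deep-learning detector:

```python
import time

NO_HAND_THRESHOLD_S = 5.0  # "preset time threshold"; the value is assumed

def wait_for_step_completion(camera, hand_model):
    """Poll frames until no hand has been seen for the threshold, then return
    the last frame so the current makeup step can be scored."""
    last_hand_seen = time.monotonic()
    while True:
        ok, frame = camera.read()
        if not ok:
            continue
        if hand_model.contains_hand(frame):
            last_hand_seen = time.monotonic()
        elif time.monotonic() - last_hand_seen >= NO_HAND_THRESHOLD_S:
            return frame
```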
As still another embodiment of the present invention, as shown in fig. 2, the method for implementing the supplementary makeup may include:
step 201, displaying teaching contents corresponding to the current makeup step, such as makeup techniques, cautions, video courses and the like of the current step, to a user;
step 202, detecting whether hands appear in the acquired image in real time through a hand recognition model;
step 203, judging whether a hand is not detected continuously within a preset time threshold, if so, executing step 204, otherwise, executing step 202;
step 204, finishing the current makeup step, and scoring the practice result of the current makeup step based on the collected current face image of the user;
step 205, determining whether the score exceeds a score threshold, if yes, executing step 206, and if not, executing step 207;
step 206, entering practice of the next makeup step, so as to display teaching contents corresponding to the next makeup step for the user;
step 207, displaying modification suggestions for the current makeup step to the user based on a comparative analysis result between the currently collected face image of the user and the reference makeup image corresponding to the current makeup step;
step 208, determining whether an instruction to skip this step rather than practice it again is received; if yes, executing step 209, and if not, executing step 201;
step 209, determining whether to finish the practice of all the makeup steps, if yes, executing step 210, and if not, executing step 206;
and step 210, evaluating the whole makeup based on a comparative analysis result between the currently collected face image of the user and the complete reference makeup image, displaying the evaluation, and recording any skipped makeup steps.
It should be noted that if the user skips some steps, the text and video guidance for those steps will be more detailed during the next exercise, so that the user can practice the skipped steps with emphasis. A rough control loop over these steps is sketched below.
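An illustrative control loop for steps 201-210 above; `ui`, `score_step` and the step objects are assumed stand-ins, not APIs from the disclosure, and `wait_for_step_completion` is the hand-gating sketch shown earlier:

```python
SCORE_THRESHOLD = 80  # "score threshold"; the value is assumed

def practice_flow(steps, camera, hand_model, ui):
    i = 0
    while i < len(steps):
        ui.show_teaching_content(steps[i])                    # step 201
        face = wait_for_step_completion(camera, hand_model)   # steps 202-204
        score = score_step(face, steps[i].reference_image)    # step 204
        if score > SCORE_THRESHOLD:                           # steps 205-206
            i += 1
            continue
        ui.show_modification_advice(face, steps[i])           # step 207
        if ui.user_wants_to_skip():                           # steps 208-209
            steps[i].skipped = True  # guide this step in more detail next time
            i += 1
        # otherwise loop back to step 201 and practice the same step again
    ui.show_overall_evaluation()                              # step 210
```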
After the user selects a target makeup, the user can practice it step by step under guidance: the technique and cautions of each step are displayed on the screen for reference, the target makeup and the user's current face image are compared and displayed in real time, and when a step is finished it is scored and modification suggestions are given immediately, thereby improving the user's learning efficiency.
As still another embodiment of the present invention, as shown in fig. 3, the step of displaying the modification suggestions of the current makeup step to the user based on the comparative analysis result between the currently collected face image of the user and the reference makeup image corresponding to the current makeup step may include:
step 301, collecting a face image of a user, thereby obtaining a face makeup image of the user;
step 302, carrying out face key point positioning on the face makeup image of the user, and intercepting a regional makeup image corresponding to the current makeup step from the face makeup image of the user;
step 303, intercepting a regional makeup image corresponding to the current makeup step from the reference makeup image;
and 304, comparing and analyzing the area makeup image of the user and the area makeup image of the reference makeup by adopting a structural similarity algorithm to obtain a comparison analysis result, thereby displaying the modification suggestion of the current makeup step to the user.
After a step is finished during practice, its local effect is displayed in contrast; for example, after eyebrow drawing is finished, the eyebrow region of the reference makeup can be displayed next to the eyebrow region completed by the user, so that the user can see the problems more clearly. Because some regions are small and hard to observe, the captured local region images can be uniformly enlarged to a fixed size, using bicubic interpolation or other machine-learning-based upscaling to reduce the distortion introduced by enlargement; see the sketch below.
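A sketch of steps 301-304: crop the region for the current step, enlarge both crops with bicubic interpolation, and compare them with SSIM. The fixed size is an assumption, and `channel_axis` requires scikit-image >= 0.19:

```python
import cv2
from skimage.metrics import structural_similarity

REGION_SIZE = (256, 256)  # fixed enlargement size; the value is assumed

def compare_region(user_face, reference_face, region_box):
    x, y, w, h = region_box  # e.g. an eyebrow box from key-point positioning
    user_crop = cv2.resize(user_face[y:y + h, x:x + w], REGION_SIZE,
                           interpolation=cv2.INTER_CUBIC)
    ref_crop = cv2.resize(reference_face[y:y + h, x:x + w], REGION_SIZE,
                          interpolation=cv2.INTER_CUBIC)
    score = structural_similarity(user_crop, ref_crop,
                                  channel_axis=2, data_range=255)
    return score, user_crop, ref_crop
```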
It should be noted that whether a partial makeup or the overall makeup is being evaluated and scored, the Structural Similarity (SSIM) between the user's face makeup image X and the reference makeup image Y may be used, measuring similarity in terms of luminance, contrast and structure.
SSIM(X,Y)=l(X,Y)·c(X,Y)·s(X,Y)
where μX, μY denote the means of X and Y, σX, σY denote the standard deviations of X and Y, σXY is the covariance of X and Y, and C1, C2, C3 are constants. Namely:

l(X,Y) = (2·μX·μY + C1) / (μX² + μY² + C1)
c(X,Y) = (2·σX·σY + C2) / (σX² + σY² + C2)
s(X,Y) = (σXY + C3) / (σX·σY + C3)
if I (X, Y) is low, the brightness of the makeup representing the user differs from the brightness of the reference makeup. If c (X, Y) is low, the contrast representing the makeup of the user is not good, such as a highlight and shadow step is problematic. If s (X, Y) is low, taking the eyebrow shape as an example, the shape of the eyebrow drawn by the user is different from the shape of the eyebrow of the reference makeup.
During practice, the whole reference makeup image, the partial reference makeup image, the technique and cautions of the current step, and the related video tutorial are shown on the display screen; the screen zones are shown in FIG. 4. Zone 1 is a mirror zone, serving as a mirror for the user; zone 2 displays the partial reference makeup image for the current makeup step (for example, if the current step is eyebrow drawing, zone 2 shows the reference eyebrow shape); zone 3 displays the user's partial makeup image after the current step is completed; zone 4 displays the user's face makeup image after the whole makeup is completed; zone 5 displays the reference makeup image; zone 6 displays the technique and cautions corresponding to the current makeup step; zone 7 displays the video tutorial corresponding to the current step; and zone 8 displays the score and modification suggestions.
As still another embodiment of the present invention, the method for implementing makeup assistance further includes: periodically reminding the user to practice certain makeup steps according to the user's historical practice time and practice results (such as scores and modification opinions, etc.), thereby consolidating the user's makeup level.
As still another embodiment of the present invention, the method for implementing makeup assistance further includes: according to the personalized features, the exercise process and the exercise results of the user, the user portrait is depicted for the user, and other similar user behaviors are recommended to the user according to big data analysis. For example, if the eyebrow of the user is not good enough, short videos of other similar users about eyebrow drawing steps are recommended to the user, and more targeted guidance is achieved.
Therefore, the embodiment of the invention can guide the user during practice, score the makeup after completion, give suggestions and keep records, and then provide targeted, focused guidance in the next exercise according to the previous result (more practice time on weak areas and more detailed guided practice).
In one embodiment, as shown in fig. 5, an implementation apparatus 50 for assisting makeup is provided, where the implementation apparatus 50 for assisting makeup includes a personalized feature extraction module 51, a face feature matching module 52 and a makeup recommendation module 53, the personalized feature extraction module 51 is configured to extract personalized features of a user, the face feature matching module 52 is configured to calculate matching degrees of the personalized features of the user and each face feature in a database respectively, and the makeup recommendation module 53 is configured to select a makeup corresponding to a face feature with the highest matching degree in the database as a recommended makeup to be recommended to the user.
In some embodiments of the invention, the personalized feature extraction module 51 is configured to: the method comprises the steps of collecting a face image of a user, and carrying out face detection, key point positioning and color feature analysis on the face image of the user, so as to extract a first personalized feature of the user.
In some embodiments of the invention, the personalized feature extraction module 51 is further configured to: extract second personalized features of the user through a question-and-answer mode of human-computer interaction.
In some embodiments of the present invention, the makeup recommendation module 53 is further configured to: in response to an operation of the user selecting a target makeup from the recommended makeup, superimpose the target makeup on the face image of the user for display.
In some embodiments of the present invention, the apparatus further includes a display module and a scoring module, wherein the display module is configured to display teaching content corresponding to the current makeup step to the user, and the scoring module is configured to detect whether a hand is present in the collected images in real time through a pre-established hand recognition model, and if no hand is continuously detected within a preset time threshold, score the exercise result of the current makeup step based on the currently collected face image of the user.
In some embodiments of the invention, the scoring module is further configured to: judging whether the score for scoring the practice result of the current makeup step exceeds a score threshold value; if yes, continuing to display teaching contents corresponding to the next makeup step to the user; and otherwise, displaying the modification suggestion of the current makeup step to the user based on the comparative analysis result between the currently collected face image of the user and the reference makeup image corresponding to the current makeup step.
In some embodiments of the present invention, displaying the modification suggestions of the current makeup step to the user based on the comparative analysis result between the currently collected face image of the user and the reference makeup image corresponding to the current makeup step includes: acquiring a face image of the user so as to acquire a face makeup image of the user; carrying out face key point positioning on the face makeup image of the user, and intercepting the regional makeup image corresponding to the current makeup step from the face makeup image of the user; intercepting the regional makeup image corresponding to the current makeup step from the reference makeup image; and performing comparative analysis on the user's regional makeup image and the regional makeup image of the reference makeup by adopting a structural similarity algorithm to obtain a comparative analysis result, thereby displaying the modification suggestions of the current makeup step to the user.
In some embodiments of the present invention, the makeup recommendation module 53 is further configured to: and in response to the operation of selecting the target makeup from the recommended makeup, recommending the target cosmetics to the user according to the makeup steps of the target makeup and the personalized features of the user.
In some embodiments of the present invention, recommending a target cosmetic to a user according to a makeup procedure of the target makeup and a personalized feature of the user, comprises: respectively determining each cosmetic corresponding to each cosmetic step according to the cosmetic steps of the target makeup; and aiming at each makeup step, respectively calculating the matching degree of the face features corresponding to each cosmetic and the personalized features of the user, and selecting the cosmetic corresponding to the face feature with the highest matching degree as a target cosmetic, so as to recommend the target cosmetic to the user.
Therefore, the makeup suitable for the user is recommended by comparing the personalized features of the user with the face features in the database, so that the condition that a beginner learns the makeup unsuitable for the user is avoided, the suitable cosmetics can be recommended to the user according to the makeup, and waste caused by the fact that the beginner purchases the cosmetics unsuitable for the user blindly is avoided. Compared with the traditional mode of finding the makeup teacher for tutoring, the embodiment of the invention can enable the user to systematically learn the makeup knowledge, and can simply and conveniently assist the user to carry out multiple exercises according to the makeup level of the user. Therefore, the embodiment of the invention can not only accelerate the progress of the user in learning makeup, but also recommend proper cosmetics to the user, thereby reducing waste.
It will be understood by those skilled in the art that the division of the modules and units in the implementation device of the auxiliary makeup is only for illustration, and in other embodiments, the implementation device of the auxiliary makeup may be divided into different modules and units as needed to complete all or part of the functions of the implementation device of the auxiliary makeup.
Referring to fig. 6, which is a schematic structural diagram of an apparatus for implementing makeup assistance according to another embodiment of the present invention. In this embodiment, the apparatus comprises a personalized feature extraction module, a makeup recommendation module, a storage module, a cosmetics recommendation module, a makeup guidance module, a display module, a camera and a wireless communication unit, where the personalized feature extraction module comprises a face feature detection unit, a key point positioning unit, a skin color analysis unit and a human-computer interaction unit. The camera collects face images of the user from multiple angles and transmits them to the personalized feature extraction module; the face feature detection unit, key point positioning unit and skin color analysis unit extract first personalized features such as skin color, face shape, pigmentation, dark circles and hair color, and the human-computer interaction unit interacts with the user to obtain second personalized features such as age, gender, preference and skin type.
The different types of makeup and their suitable face features stored in the data center, the makeup steps corresponding to different makeups, the cosmetics corresponding to each makeup step, the face features corresponding to each cosmetic, the makeup images corresponding to each makeup step, and the like, are obtained through the wireless communication unit. The storage module can store the personalized features of the user, as well as the data downloaded from the data center: the different types of makeup and suitable face features, the makeup steps corresponding to different makeups, the cosmetics corresponding to each makeup step, the face features corresponding to each cosmetic, the makeup images corresponding to each makeup step, and the like. Within the data center, the makeup recommendation center can store the different types of makeup, the suitable face features, the makeup steps corresponding to different makeups, and the like; the cosmetics recommendation center can store the cosmetics corresponding to each makeup step, the face features corresponding to each cosmetic, and the like; and the makeup guidance center can store the makeup images corresponding to each makeup step, and the like.
Further, the makeup recommendation module calculates the matching degree between the personalized features of the user and each face feature in the database, selects the makeup corresponding to the face feature with the highest matching degree in the database as the recommended makeup, and displays it through the display module. The makeup guidance module displays the teaching content corresponding to the current makeup step to the user, detects in real time, through the pre-established hand recognition model, whether a hand is present in the collected images, and scores the practice result of the current makeup step based on the currently collected face image of the user if no hand is detected continuously within the preset time threshold. The makeup guidance module also judges whether the score of the current makeup step exceeds the score threshold; if so, it continues to display the teaching content corresponding to the next makeup step; otherwise, it displays modification suggestions for the current makeup step based on the comparative analysis result between the currently collected face image of the user and the reference makeup image corresponding to the current makeup step.
And the cosmetic recommending module is used for responding to the operation of selecting the target makeup from the recommended makeup by the user, and recommending the target cosmetics to the user according to the makeup steps of the target makeup and the personalized characteristics of the user. Specifically, the cosmetic recommending module respectively determines each cosmetic corresponding to each makeup step according to the makeup steps of the target makeup, respectively calculates the matching degree of the face feature corresponding to each cosmetic and the personalized feature of the user for each makeup step, and selects the cosmetic corresponding to the face feature with the highest matching degree as the target cosmetic, so as to recommend the target cosmetic to the user.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the steps of the method for implementing auxiliary makeup provided by the above embodiments.
An embodiment of the present invention also provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of the method for implementing auxiliary makeup provided by the above embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, although the former is the better implementation in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server or a network device) to execute the method of the embodiments of the present invention.
Fig. 7 is a schematic diagram of the internal structure of an electronic device according to an embodiment of the present invention. The method for implementing auxiliary makeup provided by the embodiments of the present invention can be applied to the electronic device shown in fig. 7. As shown in fig. 7, the electronic device comprises a processor, a memory, a display screen and a camera connected through a system bus. The processor provides the calculation and control capability that supports the operation of the whole electronic device. The memory stores data, programs and the like; it stores at least one computer program which can be executed by the processor to implement the method for implementing auxiliary makeup provided by the embodiments of the present invention. The memory may include a non-volatile storage medium, such as a magnetic disk, an optical disk or a Read-Only Memory (ROM), or a Random Access Memory (RAM). For example, in one embodiment, the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a database and a computer program. The database stores data related to the method, for example face images and personalized features of the user. The computer program can be executed by the processor to implement the method for implementing auxiliary makeup. The internal memory provides a cached execution environment for the operating system and the computer programs in the non-volatile storage medium. The camera may include a first camera module and a second camera module, both of which can be used to shoot and generate images. The display screen may be a touch screen, such as a capacitive screen or an electronic screen, used to display visual information such as images, and it may also detect touch operations applied to it and generate corresponding instructions.
Those skilled in the art will appreciate that the configuration shown in fig. 7 is a block diagram of only part of the configuration associated with the inventive arrangements and does not limit the electronic devices to which the inventive arrangements may be applied; a particular electronic device may include more or fewer components than those shown, combine certain components, or arrange the components differently. For example, the electronic device may further include a network interface connected via the system bus, through which it communicates with other devices, for example to acquire data such as face images and/or personalized features from those devices.
According to an embodiment of the present invention, there is also provided an electronic device, as shown in fig. 8, comprising a processor 81 and a memory 82, the memory 82 being configured to store computer program instructions adapted to be loaded by the processor 81 so as to perform the following method: extracting personalized features of a user; calculating, respectively, the matching degree between the personalized features of the user and each face feature in a database; and selecting the makeup corresponding to the face feature with the highest matching degree in the database as a recommended makeup to recommend to the user.
The processor 81 may be any suitable processor, for example a central processing unit, a microprocessor or an embedded processor, and may employ an architecture such as X86 or ARM; the memory 82 may be any suitable memory device, including but not limited to magnetic, semiconductor and optical memory devices, and is not limited by the embodiments of the present invention.
Any reference to memory, storage, a database or another medium used by the invention may include non-volatile and/or volatile memory. Suitable non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM) or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM) and Rambus Dynamic RAM (RDRAM).
Further, according to an embodiment of the present invention, the processor may further load and execute: collecting a face image of the user, and performing face detection, key point positioning and color feature analysis on the face image so as to extract a first personalized feature of the user. Further, according to an embodiment of the present invention, the processor may further load and execute: extracting a second personalized feature of the user through a question-and-answer mode of human-computer interaction.
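The patent does not commit to a specific detector or landmark model. The following is a minimal sketch of the face-detection and skin-color part of the first personalized feature, using OpenCV's stock Haar cascade; key point positioning and pigment or dark-circle analysis would need additional models and are only indicated by comments:

```python
import cv2

def extract_first_features(image_path: str):
    """Detect the face and take the mean color of the face box as a coarse
    skin-color estimate; key-point positioning would plug in here."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    b, g, r = cv2.mean(img[y:y + h, x:x + w])[:3]   # average color of face box
    # Key-point positioning, pigment and dark-circle analysis omitted.
    return {"face_box": (int(x), int(y), int(w), int(h)),
            "skin_color_bgr": (b, g, r)}
```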
Further, according to an embodiment of the present invention, the processor may further load and execute: in response to the user's operation of selecting a target makeup from the recommended makeup, displaying the target makeup superimposed on the face image of the user.
Further, according to an embodiment of the present invention, the processor may further load and execute: displaying the teaching content corresponding to the current makeup step to the user; and detecting in real time, through a pre-established hand recognition model, whether hands are present in the acquired images, and, if no hand is detected continuously within a preset time threshold, scoring the practice result of the current makeup step based on the currently acquired face image of the user.
Further, according to an embodiment of the present invention, the processor may further load and execute: judging whether the score of the practice result of the current makeup step exceeds a score threshold; if so, continuing to display the teaching content corresponding to the next makeup step to the user; otherwise, displaying modification suggestions for the current makeup step to the user based on a comparative analysis between the currently collected face image of the user and the reference makeup image corresponding to the current makeup step.
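The control flow of these two steps (wait until the hands leave the frame, score the result, then branch on the score) could be sketched as follows. The detector, camera, scorer and display are passed in as callables because the patent does not specify their implementations, and both threshold values are illustrative:

```python
import time

HAND_ABSENCE_SECONDS = 3.0   # illustrative preset time threshold
SCORE_THRESHOLD = 60         # illustrative pass mark

def guide_step(step, detect_hand, capture_image, score_makeup, show):
    """Show the teaching content, wait until no hand has been detected for
    HAND_ABSENCE_SECONDS, score the practice result, then branch."""
    show(step.teaching_content)
    last_hand_seen = time.monotonic()
    while time.monotonic() - last_hand_seen < HAND_ABSENCE_SECONDS:
        if detect_hand(capture_image()):        # hand still in frame: reset
            last_hand_seen = time.monotonic()
    score = score_makeup(capture_image(), step.reference_image_path)
    if score > SCORE_THRESHOLD:
        return "next_step"
    show(f"modification suggestions for the current step (score {score})")
    return "retry"
```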
Further, according to an embodiment of the present invention, the processor may further load and execute: collecting a face image of the user so as to obtain a face makeup image of the user; locating face key points in the face makeup image of the user, and cropping the regional makeup image corresponding to the current makeup step from the face makeup image of the user; cropping the regional makeup image corresponding to the current makeup step from the reference makeup image; and comparing the regional makeup image of the user with the regional makeup image of the reference makeup using a structural similarity algorithm to obtain a comparative analysis result, so as to display modification suggestions for the current makeup step to the user.
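The structural similarity algorithm is available off the shelf. A minimal sketch of the regional comparison, assuming both regions are BGR arrays already cropped as described and using scikit-image's SSIM implementation (the 0.8 threshold is illustrative):

```python
import cv2
from skimage.metrics import structural_similarity

def compare_regions(user_region, reference_region, ssim_threshold=0.8):
    """SSIM between the user's regional makeup image and the reference one;
    returns the score and, if it is low, a generic modification suggestion."""
    ref = cv2.resize(reference_region,
                     (user_region.shape[1], user_region.shape[0]))
    score = structural_similarity(
        cv2.cvtColor(user_region, cv2.COLOR_BGR2GRAY),
        cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY))
    if score >= ssim_threshold:
        return score, None
    return score, "re-work this region to move closer to the reference makeup"
```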
Further, according to an embodiment of the present invention, the processor may further load and execute: in response to the user's operation of selecting a target makeup from the recommended makeup, recommending target cosmetics to the user according to the makeup steps of the target makeup and the personalized features of the user.
Further, according to an embodiment of the present invention, the processor may further load and execute: determining, according to the makeup steps of the target makeup, the cosmetics corresponding to each makeup step; and, for each makeup step, calculating the matching degree between the face feature corresponding to each cosmetic and the personalized features of the user, and selecting the cosmetic corresponding to the face feature with the highest matching degree as the target cosmetic to recommend to the user.
In this way, a makeup suitable for the user is recommended by comparing the personalized features of the user with the face features in the database, which prevents a beginner from learning a makeup that does not suit him or her; suitable cosmetics can then be recommended according to that makeup, avoiding the waste caused by a beginner blindly purchasing unsuitable cosmetics. Compared with the traditional approach of hiring a makeup teacher for tutoring, the embodiments of the present invention allow the user to learn makeup knowledge systematically and conveniently assist the user in practicing repeatedly according to his or her makeup level. The embodiments of the present invention therefore not only accelerate the user's progress in learning makeup but also recommend suitable cosmetics, thereby reducing waste.
It should be noted that, for the sake of simplicity, the above embodiments of the system, method and electronic device are each described as a series of actions or a combination of modules, but those skilled in the art should understand that the present invention is not limited by the described order of actions or connection of modules, since some steps may be performed in other orders or simultaneously and some modules may be connected in other ways.
Those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily essential to the invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present invention, it should be understood that the disclosed technical contents can be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes volatile or non-volatile media capable of storing program code, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk or an optical disk.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and such modifications and refinements shall also fall within the protection scope of the present invention.
Claims (20)
1. A method for implementing makeup assistance, characterized by comprising the following steps:
extracting personalized features of a user;
respectively calculating the matching degree of the personalized features of the user and each face feature in a database;
and selecting the makeup corresponding to the face feature with the highest matching degree in the database as a recommended makeup to recommend to the user.
2. The method for implementing makeup assistance according to claim 1, wherein extracting personalized features of a user comprises:
the method comprises the steps of collecting a face image of a user, and carrying out face detection, key point positioning and color feature analysis on the face image of the user, so as to extract a first personalized feature of the user.
3. The method for implementing makeup assistance according to claim 2, wherein extracting personalized features of a user further comprises:
and extracting a second personalized feature of the user through a question-and-answer mode of human-computer interaction.
4. The method for implementing makeup assistance according to claim 2 or 3, further comprising:
and in response to the user's operation of selecting a target makeup from the recommended makeup, displaying the target makeup superimposed on the face image of the user.
5. The method for implementing makeup assistance according to claim 3, further comprising:
displaying teaching contents corresponding to the current makeup step to a user;
and detecting in real time, through a pre-established hand recognition model, whether hands are present in the acquired images, and, if no hand is detected continuously within a preset time threshold, scoring the practice result of the current makeup step based on the currently acquired face image of the user.
6. The method for implementing makeup assistance according to claim 5, further comprising:
judging whether the score of the practice result of the current makeup step exceeds a score threshold; if so, continuing to display the teaching content corresponding to the next makeup step to the user; otherwise, displaying modification suggestions for the current makeup step to the user based on the comparative analysis result between the currently collected face image of the user and the reference makeup image corresponding to the current makeup step.
7. The method for implementing makeup assistance according to claim 6, wherein displaying modification suggestions for the current makeup step to the user based on the comparative analysis result between the currently collected face image of the user and the reference makeup image corresponding to the current makeup step comprises:
collecting a face image of the user so as to obtain a face makeup image of the user;
locating face key points in the face makeup image of the user, and cropping the regional makeup image corresponding to the current makeup step from the face makeup image of the user;
cropping the regional makeup image corresponding to the current makeup step from the reference makeup image;
and comparing the regional makeup image of the user with the regional makeup image of the reference makeup using a structural similarity algorithm to obtain a comparative analysis result, thereby displaying modification suggestions for the current makeup step to the user.
8. The method for implementing makeup assistance according to claim 2 or 3, further comprising:
and in response to the user's operation of selecting a target makeup from the recommended makeup, recommending target cosmetics to the user according to the makeup steps of the target makeup and the personalized features of the user.
9. The method for implementing makeup assistance according to claim 8, wherein recommending target cosmetics to the user according to the makeup steps of the target makeup and the personalized features of the user comprises:
determining, according to the makeup steps of the target makeup, the cosmetics corresponding to each makeup step;
and, for each makeup step, calculating the matching degree between the face feature corresponding to each cosmetic and the personalized features of the user, and selecting the cosmetic corresponding to the face feature with the highest matching degree as the target cosmetic to recommend to the user.
10. A device for implementing makeup assistance, comprising:
the personalized feature extraction module is configured to extract personalized features of the user;
the face feature matching module is configured to respectively calculate the matching degree of the personalized features of the user and each face feature in the database;
and the makeup recommendation module is configured to select the makeup corresponding to the face feature with the highest matching degree in the database as a recommended makeup to recommend to the user.
11. The device for implementing makeup assistance according to claim 10, wherein said personalized feature extraction module is configured to:
the method comprises the steps of collecting a face image of a user, and carrying out face detection, key point positioning and color feature analysis on the face image of the user, so as to extract a first personalized feature of the user.
12. The device for implementing makeup assistance according to claim 11, wherein said personalized feature extraction module is further configured to:
and extracting a second personalized feature of the user through a question-and-answer mode of human-computer interaction.
13. The device for implementing makeup assistance according to claim 11 or 12, wherein the makeup recommendation module is further configured to:
and in response to the user's operation of selecting a target makeup from the recommended makeup, display the target makeup superimposed on the face image of the user.
14. The device for implementing makeup assistance according to claim 13, further comprising:
the display module is configured to display teaching contents corresponding to the current makeup step to a user;
and the scoring module is configured to detect in real time, through a pre-established hand recognition model, whether hands are present in the acquired images, and, if no hand is detected continuously within a preset time threshold, to score the practice result of the current makeup step based on the currently acquired face image of the user.
15. The device for implementing makeup assistance according to claim 14, wherein the scoring module is further configured to:
judge whether the score of the practice result of the current makeup step exceeds a score threshold; if so, continue to display the teaching content corresponding to the next makeup step to the user; otherwise, display modification suggestions for the current makeup step to the user based on the comparative analysis result between the currently collected face image of the user and the reference makeup image corresponding to the current makeup step.
16. The device for implementing makeup assistance according to claim 15, wherein displaying modification suggestions for the current makeup step to the user based on the comparative analysis result between the currently collected face image of the user and the reference makeup image corresponding to the current makeup step comprises:
collecting a face image of the user so as to obtain a face makeup image of the user;
locating face key points in the face makeup image of the user, and cropping the regional makeup image corresponding to the current makeup step from the face makeup image of the user;
cropping the regional makeup image corresponding to the current makeup step from the reference makeup image;
and comparing the regional makeup image of the user with the regional makeup image of the reference makeup using a structural similarity algorithm to obtain a comparative analysis result, thereby displaying modification suggestions for the current makeup step to the user.
17. The device for implementing makeup assistance according to claim 11 or 12, wherein the makeup recommendation module is further configured to:
and in response to the user's operation of selecting a target makeup from the recommended makeup, recommend target cosmetics to the user according to the makeup steps of the target makeup and the personalized features of the user.
18. The device for implementing makeup assistance according to claim 17, wherein recommending target cosmetics to the user according to the makeup steps of the target makeup and the personalized features of the user comprises:
determining, according to the makeup steps of the target makeup, the cosmetics corresponding to each makeup step;
and, for each makeup step, calculating the matching degree between the face feature corresponding to each cosmetic and the personalized features of the user, and selecting the cosmetic corresponding to the face feature with the highest matching degree as the target cosmetic to recommend to the user.
19. An electronic device comprising a processor and a memory for storing computer instructions, wherein the computer instructions, when executed by the processor, perform the method for implementing makeup assistance according to any one of claims 1 to 9.
20. A storage medium storing computer instructions adapted to be executed by a processor, wherein the computer instructions, when executed by the processor, perform the method for implementing makeup assistance according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810456104.1A CN108920490A (en) | 2018-05-14 | 2018-05-14 | Assist implementation method, device, electronic equipment and the storage medium of makeup |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810456104.1A CN108920490A (en) | 2018-05-14 | 2018-05-14 | Assist implementation method, device, electronic equipment and the storage medium of makeup |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108920490A true CN108920490A (en) | 2018-11-30 |
Family
ID=64403403
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810456104.1A Pending CN108920490A (en) | 2018-05-14 | 2018-05-14 | Assist implementation method, device, electronic equipment and the storage medium of makeup |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108920490A (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110069716A (en) * | 2019-04-29 | 2019-07-30 | 清华大学深圳研究生院 | A kind of makeups recommended method, system and computer readable storage medium |
CN110866139A (en) * | 2019-08-22 | 2020-03-06 | 北京新氧科技有限公司 | Cosmetic treatment method, device and equipment |
CN110929146A (en) * | 2019-10-23 | 2020-03-27 | 阿里巴巴集团控股有限公司 | Data processing method, device, equipment and storage medium |
CN111291642A (en) * | 2020-01-20 | 2020-06-16 | 深圳市商汤科技有限公司 | Dressing method, dressing device, electronic equipment and storage medium |
CN111339804A (en) * | 2018-12-19 | 2020-06-26 | 北京奇虎科技有限公司 | Automatic makeup method, device and system |
CN111797306A (en) * | 2020-05-23 | 2020-10-20 | 同济大学 | Intelligent makeup recommendation system based on machine vision and machine learning |
CN112528057A (en) * | 2020-12-11 | 2021-03-19 | 广东科学中心 | Dressing recommendation method, recommendation device, storage medium and terminal |
CN113208373A (en) * | 2021-05-20 | 2021-08-06 | 厦门希烨科技有限公司 | Control method of intelligent cosmetic mirror and intelligent cosmetic mirror |
CN113468932A (en) * | 2020-04-28 | 2021-10-01 | 海信集团有限公司 | Intelligent mirror and makeup teaching method |
CN113455807A (en) * | 2020-06-02 | 2021-10-01 | 海信集团有限公司 | Intelligent device |
CN113592591A (en) * | 2021-07-28 | 2021-11-02 | 张士娟 | Make-up recommendation system based on facial recognition |
CN113723173A (en) * | 2021-06-29 | 2021-11-30 | 厦门大学 | Automatic dressing recommendation method and system |
CN113837016A (en) * | 2021-08-31 | 2021-12-24 | 北京新氧科技有限公司 | Cosmetic progress detection method, device, equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102708575A (en) * | 2012-05-17 | 2012-10-03 | 彭强 | Daily makeup design method and system based on face feature region recognition |
CN104331564A (en) * | 2014-11-10 | 2015-02-04 | 深圳市中兴移动通信有限公司 | Dressing instruction method based on terminal equipment and terminal equipment |
US20160128450A1 (en) * | 2011-03-01 | 2016-05-12 | Sony Corporation | Information processing apparatus, information processing method, and computer-readable storage medium |
CN106204691A (en) * | 2016-07-19 | 2016-12-07 | 马志凌 | Virtual make up system |
CN106294820A (en) * | 2016-08-16 | 2017-01-04 | 深圳市金立通信设备有限公司 | A kind of method instructing cosmetic and terminal |
- 2018: 2018-05-14 CN CN201810456104.1A patent/CN108920490A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160128450A1 (en) * | 2011-03-01 | 2016-05-12 | Sony Corporation | Information processing apparatus, information processing method, and computer-readable storage medium |
CN102708575A (en) * | 2012-05-17 | 2012-10-03 | 彭强 | Daily makeup design method and system based on face feature region recognition |
CN104331564A (en) * | 2014-11-10 | 2015-02-04 | 深圳市中兴移动通信有限公司 | Dressing instruction method based on terminal equipment and terminal equipment |
CN106204691A (en) * | 2016-07-19 | 2016-12-07 | 马志凌 | Virtual make up system |
CN106294820A (en) * | 2016-08-16 | 2017-01-04 | 深圳市金立通信设备有限公司 | A kind of method instructing cosmetic and terminal |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111339804A (en) * | 2018-12-19 | 2020-06-26 | 北京奇虎科技有限公司 | Automatic makeup method, device and system |
CN110069716A (en) * | 2019-04-29 | 2019-07-30 | 清华大学深圳研究生院 | A kind of makeups recommended method, system and computer readable storage medium |
CN110069716B (en) * | 2019-04-29 | 2022-03-18 | 清华大学深圳研究生院 | Beautiful makeup recommendation method and system and computer-readable storage medium |
CN110866139A (en) * | 2019-08-22 | 2020-03-06 | 北京新氧科技有限公司 | Cosmetic treatment method, device and equipment |
CN110929146A (en) * | 2019-10-23 | 2020-03-27 | 阿里巴巴集团控股有限公司 | Data processing method, device, equipment and storage medium |
CN110929146B (en) * | 2019-10-23 | 2024-04-02 | 阿里巴巴集团控股有限公司 | Data processing method, device, equipment and storage medium |
CN111291642A (en) * | 2020-01-20 | 2020-06-16 | 深圳市商汤科技有限公司 | Dressing method, dressing device, electronic equipment and storage medium |
CN111291642B (en) * | 2020-01-20 | 2023-11-28 | 深圳市商汤科技有限公司 | Dressing processing method and device, electronic equipment and storage medium |
CN113468932A (en) * | 2020-04-28 | 2021-10-01 | 海信集团有限公司 | Intelligent mirror and makeup teaching method |
CN111797306A (en) * | 2020-05-23 | 2020-10-20 | 同济大学 | Intelligent makeup recommendation system based on machine vision and machine learning |
CN113455807A (en) * | 2020-06-02 | 2021-10-01 | 海信集团有限公司 | Intelligent device |
CN112528057A (en) * | 2020-12-11 | 2021-03-19 | 广东科学中心 | Dressing recommendation method, recommendation device, storage medium and terminal |
CN113208373A (en) * | 2021-05-20 | 2021-08-06 | 厦门希烨科技有限公司 | Control method of intelligent cosmetic mirror and intelligent cosmetic mirror |
CN113723173A (en) * | 2021-06-29 | 2021-11-30 | 厦门大学 | Automatic dressing recommendation method and system |
CN113592591A (en) * | 2021-07-28 | 2021-11-02 | 张士娟 | Make-up recommendation system based on facial recognition |
CN113592591B (en) * | 2021-07-28 | 2024-02-02 | 张士娟 | Face recognition-based dressing recommendation system |
CN113837016A (en) * | 2021-08-31 | 2021-12-24 | 北京新氧科技有限公司 | Cosmetic progress detection method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108920490A (en) | Assist implementation method, device, electronic equipment and the storage medium of makeup | |
US10799010B2 (en) | Makeup application assist device and makeup application assist method | |
US9760935B2 (en) | Method, system and computer program product for generating recommendations for products and treatments | |
CN105813548B (en) | Method for evaluating at least one facial clinical sign | |
Zhang et al. | Computer models for facial beauty analysis | |
US20190026013A1 (en) | Method and system for interactive cosmetic enhancements interface | |
WO2021147920A1 (en) | Makeup processing method and apparatus, electronic device, and storage medium | |
US10559102B2 (en) | Makeup simulation assistance apparatus, makeup simulation assistance method, and non-transitory computer-readable recording medium storing makeup simulation assistance program | |
US10453097B2 (en) | Sentiments based transaction systems and methods | |
WO2015122195A1 (en) | Impression analysis device, game device, health management device, advertising support device, impression analysis system, impression analysis method, program, and program recording medium | |
WO2023029500A1 (en) | Health scheme recommendation method and apparatus based on deep learning, and device and medium | |
KR20180130778A (en) | Cosmetic recommendation method, and recording medium storing program for executing the same, and recording medium storing program for executing the same, and cosmetic recommendation system | |
US9104905B2 (en) | Automatic analysis of individual preferences for attractiveness | |
CN108932654A (en) | A kind of virtually examination adornment guidance method and device | |
CN106942878A (en) | Partial enlargement make up system, apparatus and method | |
Park et al. | An automatic virtual makeup scheme based on personal color analysis | |
KR20210065418A (en) | Mild cognitive impairment improvement system | |
CN116830073A (en) | Digital color palette | |
CN112116589A (en) | Method, device and equipment for evaluating virtual image and computer readable storage medium | |
US11227424B2 (en) | Method and system to provide a computer-modified visualization of the desired face of a person | |
CN113781271B (en) | Cosmetic teaching method and device, electronic equipment and storage medium | |
CN112364713A (en) | Intelligent makeup suggestion method and system | |
CN110443122A (en) | Information processing method and Related product | |
WO2020261531A1 (en) | Information processing device, method for generating learned model of make-up simulation, method for realizing make-up simulation, and program | |
KR20200085006A (en) | Beauty technology smart learning system and method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||