CN111968248B - Intelligent cosmetic method and device based on virtual image, electronic equipment and storage medium - Google Patents

Intelligent cosmetic method and device based on virtual image, electronic equipment and storage medium

Info

Publication number
CN111968248B
CN111968248B
Authority
CN
China
Prior art keywords
makeup
user
dressing
video
strategy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010801807.0A
Other languages
Chinese (zh)
Other versions
CN111968248A (en)
Inventor
常向月
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhuiyi Technology Co Ltd
Original Assignee
Shenzhen Zhuiyi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zhuiyi Technology Co Ltd filed Critical Shenzhen Zhuiyi Technology Co Ltd
Priority to CN202010801807.0A priority Critical patent/CN111968248B/en
Publication of CN111968248A publication Critical patent/CN111968248A/en
Application granted granted Critical
Publication of CN111968248B publication Critical patent/CN111968248B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an avatar-based intelligent makeup method, comprising: displaying a recommendation list of makeup looks; acquiring a target makeup look selected by the user from the recommendation list; acquiring a makeup policy corresponding to the target look; and generating and displaying a video in which an avatar guides the makeup according to the makeup policy, the avatar being generated in advance from a face image of the user. According to the embodiments of the application, the user is guided through the makeup look he or she selected by an avatar generated from the user's own face image, thereby providing personalized makeup guidance and improving its effectiveness.

Description

Intelligent cosmetic method and device based on virtual image, electronic equipment and storage medium
Technical Field
The present application relates to the field of human-computer interaction, and more particularly to an avatar-based intelligent makeup method and apparatus, an electronic device, and a storage medium.
Background
With the continuous improvement of living standards, people increasingly pursue beauty, and many improve their image and temperament through makeup in daily life or at work. However, different users have different face shapes and facial feature distributions, as well as different levels of makeup skill, so it is difficult for a user to obtain makeup guidance personalized to his or her own level, and the user's makeup needs are therefore hard to satisfy.
Disclosure of Invention
In view of the above problems, the present application provides an avatar-based intelligent makeup method and apparatus, an electronic device, and a storage medium, which can obtain a recommended makeup look and provide makeup guidance based on an avatar.
In a first aspect, an embodiment of the present application provides an avatar-based intelligent makeup method, the method including: displaying a recommendation list of makeup looks; acquiring a target makeup look selected by the user from the recommendation list; acquiring a makeup policy corresponding to the target look; and generating and displaying a video in which an avatar guides the makeup according to the makeup policy, the avatar being generated in advance from a face image of the user.
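The four steps of the first aspect can be read as a simple pipeline. The following Python sketch only illustrates that pipeline; every function and type name here is a hypothetical stand-in for the components described later in the specification, not an API defined by the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MakeupLook:
    name: str

@dataclass
class MakeupPolicy:
    look: MakeupLook
    steps: List[str] = field(default_factory=list)

def get_recommended_looks(face_image) -> List[MakeupLook]:
    # stand-in for step S110's recommendation logic
    return [MakeupLook("natural"), MakeupLook("smoky eye")]

def get_makeup_policy(look: MakeupLook) -> MakeupPolicy:
    # stand-in for step S130's policy lookup
    return MakeupPolicy(look, steps=["base", "eyebrows", "eyes", "lips"])

def render_guidance_video(policy: MakeupPolicy) -> List[str]:
    # stand-in for step S140: one "frame" label per makeup step
    return [f"{policy.look.name}: {step}" for step in policy.steps]

def guide_makeup(face_image) -> None:
    looks = get_recommended_looks(face_image)      # S110: display recommendation list
    target = looks[0]                              # S120: the user's selection
    policy = get_makeup_policy(target)             # S130: acquire the makeup policy
    for frame in render_guidance_video(policy):    # S140: generate and display the video
        print(frame)

guide_makeup(face_image=None)
```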
Optionally, displaying the recommendation list of makeup looks includes: acquiring a face image of the user and generating facial features of the user from the face image; acquiring at least one makeup look according to the facial features; and displaying the recommendation list, the recommendation list including the at least one makeup look.
Optionally, acquiring at least one makeup look according to the facial features includes: acquiring cosmetic product data and makeup style data input by the user; and acquiring makeup looks according to the facial features, the cosmetic product data and the makeup style data, wherein at least one of the makeup looks can be achieved using the user's cosmetic products.
Optionally, acquiring the target makeup look selected by the user from the recommendation list includes: acquiring a base makeup look selected by the user from the recommendation list; judging whether an adjustment instruction input by the user is acquired; if an adjustment instruction input by the user is acquired, adjusting the base look based on the adjustment instruction and taking the adjusted base look as the target look; and if no adjustment instruction is acquired, taking the base look as the target look.
Optionally, acquiring the makeup policy corresponding to the target makeup look includes: acquiring makeup steps of the target look, the makeup steps representing the makeup actions required to complete the target look, performed in chronological order; and acquiring a makeup policy corresponding to the makeup steps, the makeup policy including cosmetic products, a guidance technique and a makeup effect, where the cosmetic products include product names and amounts, the guidance technique includes makeup actions and the facial regions where each product is applied, and the makeup effect includes the color of the product combined with the facial region after application and the contour lines of the facial features.
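The makeup policy described above is essentially structured data attached to each time-ordered makeup step. A minimal sketch of one way to hold that structure is shown below; the class and field names are illustrative assumptions, not terms defined by the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CosmeticProduct:
    name: str            # product name
    amount: str          # dosage, e.g. "small amount"

@dataclass
class GuidanceTechnique:
    action: str          # makeup action, e.g. "fill inside the drawn outline"
    face_region: str     # facial region where the product is applied

@dataclass
class MakeupEffect:
    region_color: Tuple[int, int, int]    # color of product combined with the region (RGB)
    contour: List[Tuple[float, float]]    # contour line of the facial feature

@dataclass
class MakeupStepPolicy:
    step_name: str
    products: List[CosmeticProduct]
    technique: GuidanceTechnique
    effect: MakeupEffect

# policy for a target look = ordered list of per-step policies
MakeupPolicy = List[MakeupStepPolicy]

brow_step = MakeupStepPolicy(
    step_name="outline and fill eyebrows",
    products=[CosmeticProduct("eyebrow pencil", "light strokes"),
              CosmeticProduct("eyebrow powder", "small amount")],
    technique=GuidanceTechnique(action="fill inside the drawn outline",
                                face_region="eyebrows"),
    effect=MakeupEffect(region_color=(92, 64, 51), contour=[(0.30, 0.35), (0.45, 0.33)]),
)
policy: MakeupPolicy = [brow_step]
```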
Optionally, after acquiring the makeup policy corresponding to the target makeup look, the avatar-based intelligent makeup method further includes: acquiring a makeup video of the user; and comparing the user's makeup video with the makeup policy and displaying the result of the comparison.
Optionally, comparing the user's makeup video with the makeup policy and displaying the result includes: comparing the user's makeup video with the makeup policy to obtain a completion degree of the user's makeup, the completion degree representing the difference between the user's makeup video and the makeup policy; and if the completion degree is smaller than a preset value, changing the makeup policy to an alternative policy that better matches the user's makeup video.
Optionally, comparing the user's makeup video with the makeup policy to obtain the completion degree of the user's makeup includes: analyzing the makeup video to obtain the makeup steps and the user's makeup effect in the video; analyzing, per makeup step, the difference between the user's makeup effect and the makeup effect in the makeup policy to obtain a completion degree of the user's makeup effect; and if the completion degree of the user's makeup effect is smaller than a first specified threshold, generating and displaying first prompt information, the first prompt information including at least one of the completion degree of the makeup effect, the makeup effect in the makeup policy, and a correction suggestion for the makeup effect.
Optionally, comparing the user's makeup video with the makeup policy to obtain the completion degree of the user's makeup includes: analyzing the makeup video to obtain the makeup steps and the user's makeup technique in the video; analyzing, per makeup step, the difference between the user's makeup technique and the guidance technique in the makeup policy to obtain a completion degree of the user's makeup technique; and if the completion degree of the makeup technique is smaller than a second specified threshold, generating and displaying second prompt information, the second prompt information including at least one of the completion degree of the makeup technique, the guidance technique, and a correction suggestion for the makeup technique.
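Both of the branches above follow the same pattern: derive a completion degree from the user's video, compare it against a threshold, and emit a prompt when it falls short. Below is a hedged sketch under the assumption that the per-step analysis yields a similarity score in [0, 1]; the threshold values are illustrative, since the patent leaves the "specified thresholds" open.

```python
from typing import Optional

FIRST_THRESHOLD = 0.8    # illustrative value for the makeup-effect threshold
SECOND_THRESHOLD = 0.7   # illustrative value for the makeup-technique threshold

def check_effect(effect_similarity: float, expected_effect: str) -> Optional[str]:
    """First prompt: the makeup effect differs too much from the policy."""
    if effect_similarity < FIRST_THRESHOLD:
        return (f"Effect completion {effect_similarity:.0%}. "
                f"Expected effect: {expected_effect}. Suggestion: re-blend this area.")
    return None

def check_technique(technique_similarity: float, guided_technique: str) -> Optional[str]:
    """Second prompt: the makeup technique differs too much from the guidance technique."""
    if technique_similarity < SECOND_THRESHOLD:
        return (f"Technique completion {technique_similarity:.0%}. "
                f"Guided technique: {guided_technique}.")
    return None

print(check_effect(0.6, "soft gradient eyeshadow"))
print(check_technique(0.9, "short, light eyebrow strokes"))
```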
Optionally, generating and displaying the video in which the avatar guides the makeup according to the makeup policy includes: generating, according to the makeup policy, expression driving parameters and action driving parameters corresponding to the makeup actions the avatar uses to guide the user; driving the expressions and actions of the avatar based on the expression driving parameters and the action driving parameters to generate the guidance video, the video being composed of multiple frames generated by driving the avatar; and displaying the video.
Optionally, generating the expression driving parameters and action driving parameters corresponding to the avatar's guidance of the user includes: inputting the makeup policy and the user's face image into a parameter generation model to obtain the expression driving parameters and action driving parameters corresponding to the makeup policy, where the parameter generation model is trained on real-person makeup videos and outputs, for an input makeup policy, the corresponding expression driving parameters and action driving parameters.
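The parameter generation model maps a makeup policy (plus the user's face image) to per-frame expression and action driving parameters, which in turn drive the avatar. The sketch below only illustrates that interface; the model itself is stubbed out, since the patent specifies only that it is trained on real-person makeup videos, and the parameter dimensions and names are assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DrivingParams:
    expression: List[float]   # e.g. blendshape-style expression coefficients
    action: List[float]       # e.g. head and hand pose parameters

class Avatar:
    def render(self, params: DrivingParams) -> bytes:
        return b"frame"       # placeholder for an actual rendered image

def parameter_generation_model(policy_step: str, face_image) -> List[DrivingParams]:
    """Stub for the trained model: returns one parameter set per video frame."""
    return [DrivingParams(expression=[0.0] * 52, action=[0.0] * 12) for _ in range(30)]

def generate_guidance_video(policy_steps: List[str], face_image, avatar: Avatar) -> List[bytes]:
    frames: List[bytes] = []
    for step in policy_steps:
        for params in parameter_generation_model(step, face_image):
            frames.append(avatar.render(params))   # drive expression and action per frame
    return frames

video = generate_guidance_video(["eyebrows", "eyeshadow"], face_image=None, avatar=Avatar())
print(len(video))   # 60 placeholder frames
```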
Optionally, the avatar-based intelligent makeup method further includes: acquiring a face image of the user; acquiring skin data of the user from the face image; analyzing the skin data to generate skin care advice; and displaying the skin care advice.
In a second aspect, an embodiment of the present application provides an avatar-based intelligent makeup apparatus, the apparatus including: a list display module for displaying a recommendation list of makeup looks; a target look acquisition module for acquiring a target makeup look selected by the user from the recommendation list; a makeup policy acquisition module for acquiring a makeup policy corresponding to the target look; and a video processing module for generating and displaying a video in which an avatar guides the makeup according to the makeup policy, the avatar being generated in advance from a face image of the user.
Optionally, the list display module includes a feature generation sub-module, a look acquisition sub-module and a look display sub-module, wherein: the feature generation sub-module is used to acquire a face image of the user and generate the user's facial features from the face image; the look acquisition sub-module is used to acquire at least one makeup look according to the facial features; and the look display sub-module is used to display the recommendation list, the recommendation list including the at least one makeup look.

Optionally, the look acquisition sub-module includes a data acquisition unit and a product look acquisition unit, wherein: the data acquisition unit is used to acquire cosmetic product data and makeup style data input by the user; and the product look acquisition unit is used to acquire makeup looks according to the facial features, the cosmetic product data and the makeup style data, wherein at least one of the makeup looks can be achieved using the user's cosmetic products.

Optionally, the target look acquisition module includes a look selection sub-module, an instruction judgment sub-module, a first look acquisition sub-module and a second look acquisition sub-module, wherein: the look selection sub-module is used to acquire a base makeup look selected by the user from the recommendation list; the instruction judgment sub-module is used to judge whether an adjustment instruction input by the user is acquired; the first look acquisition sub-module is used to adjust the base look based on the adjustment instruction if an adjustment instruction input by the user is acquired, and take the adjusted base look as the target look; and the second look acquisition sub-module is used to take the base look as the target look if no adjustment instruction is acquired.

Optionally, the makeup policy acquisition module includes a makeup step acquisition sub-module and a step policy acquisition sub-module, wherein: the makeup step acquisition sub-module is used to acquire the makeup steps of the target look, the makeup steps representing the makeup actions required to complete the target look, performed in chronological order; and the step policy acquisition sub-module is used to acquire a makeup policy corresponding to the makeup steps, the makeup policy including cosmetic products, a guidance technique and a makeup effect, where the cosmetic products include product names and amounts, the guidance technique includes makeup actions and the facial regions where each product is applied, and the makeup effect includes the color of the product combined with the facial region after application and the contour lines of the facial features.
Optionally, after acquiring the makeup policy corresponding to the target makeup look, the avatar-based intelligent makeup apparatus further includes a video acquisition module and a comparison analysis module, wherein: the video acquisition module is used to acquire a makeup video of the user; and the comparison analysis module is used to compare the user's makeup video with the makeup policy and display the result of the comparison.

Optionally, the comparison analysis module includes a completion degree acquisition sub-module and a policy adjustment sub-module, wherein: the completion degree acquisition sub-module is used to compare the user's makeup video with the makeup policy to obtain a completion degree of the user's makeup, the completion degree representing the difference between the user's makeup video and the makeup policy; and the policy adjustment sub-module is used to change the makeup policy to an alternative policy that better matches the user's makeup video if the completion degree is smaller than a preset value.

Optionally, the completion degree acquisition sub-module includes a first video analysis unit, a first completion degree acquisition unit and a first information generation unit, wherein: the first video analysis unit is used to analyze the makeup video and obtain the makeup steps and the user's makeup effect in the video; the first completion degree acquisition unit is used to analyze, per makeup step, the difference between the user's makeup effect and the makeup effect in the makeup policy to obtain a completion degree of the user's makeup effect; and the first information generation unit is used to generate and display first prompt information if the completion degree of the user's makeup effect is smaller than a first specified threshold, the first prompt information including at least one of the completion degree of the makeup effect, the makeup effect in the makeup policy, and a correction suggestion for the makeup effect.

Optionally, the completion degree acquisition sub-module includes a second video analysis unit, a second completion degree acquisition unit and a second information generation unit, wherein: the second video analysis unit is used to analyze the makeup video and obtain the makeup steps and the user's makeup technique in the video; the second completion degree acquisition unit is used to analyze, per makeup step, the difference between the user's makeup technique and the guidance technique in the makeup policy to obtain a completion degree of the user's makeup technique; and the second information generation unit is used to generate and display second prompt information if the completion degree of the makeup technique is smaller than a second specified threshold, the second prompt information including at least one of the completion degree of the makeup technique, the guidance technique, and a correction suggestion for the makeup technique.
Optionally, the video processing module includes a parameter generation sub-module, an avatar driving sub-module and a video display sub-module, wherein: the parameter generation sub-module is used to generate, according to the makeup policy, expression driving parameters and action driving parameters corresponding to the makeup actions the avatar uses to guide the user; the avatar driving sub-module is used to drive the expressions and actions of the avatar based on the expression driving parameters and the action driving parameters to generate the guidance video, the video being composed of multiple frames generated by driving the avatar; and the video display sub-module is used to display the video.

Optionally, the parameter generation sub-module includes a model processing unit, wherein the model processing unit is used to input the makeup policy and the user's face image into a parameter generation model to obtain the expression driving parameters and action driving parameters corresponding to the makeup policy, where the parameter generation model is trained on real-person makeup videos and outputs, for an input makeup policy, the corresponding expression driving parameters and action driving parameters.

Optionally, the avatar-based intelligent makeup apparatus further includes an image acquisition module, a skin data acquisition module and a suggestion generation module, wherein: the image acquisition module is used to acquire a face image of the user; the skin data acquisition module is used to acquire skin data of the user from the face image; and the suggestion generation module is used to analyze the skin data and generate skin care advice.
In a third aspect, an embodiment of the present application provides an electronic device, which may include: a memory; one or more processors coupled to the memory; and one or more application programs stored in the memory and configured to be executed by the one or more processors, the one or more application programs being configured to perform the method of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having program code stored therein, the program code being callable by a processor to perform a method as described above in the first aspect.
The embodiments of the present application provide an avatar-based intelligent makeup method and apparatus, an electronic device and a storage medium. A recommendation list of makeup looks is displayed, a target makeup look selected by the user from the recommendation list is acquired, a makeup policy corresponding to the target look is acquired, and finally a video in which an avatar guides the makeup is generated according to the makeup policy and displayed, the avatar being generated in advance from a face image of the user. The user can thus be guided through the makeup look he or she selected by an avatar generated from his or her own face image, providing personalized makeup guidance and improving its effectiveness.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic diagram of an application environment suitable for embodiments of the present application;
Fig. 2 is a flow chart of an avatar-based intelligent makeup method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of an interactive interface provided by an exemplary embodiment of the present application;
Fig. 4 is a flow chart of an avatar-based intelligent makeup method according to another embodiment of the present application;
Fig. 5 is a flow chart of step S220 in Fig. 4 according to an exemplary embodiment of the present application;
Fig. 6 is a flow chart of an avatar-based intelligent makeup method according to still another embodiment of the present application;
Fig. 7 is a schematic diagram of yet another interactive interface provided by an exemplary embodiment of the present application;
Fig. 8 is a schematic diagram of another interactive interface provided by an exemplary embodiment of the present application;
Fig. 9 is a flow chart of an avatar-based intelligent makeup method according to still another embodiment of the present application;
Fig. 10 is a flow chart of an avatar-based intelligent makeup method according to still another embodiment of the present application;
Fig. 11 is a flow chart of step S560 in Fig. 10 according to an exemplary embodiment of the present application;
Fig. 12 is a flow chart of step S561 in Fig. 11 according to an exemplary embodiment of the present application;
Fig. 13 is a flow chart of step S561 in Fig. 11 according to another exemplary embodiment of the present application;
Fig. 14 is a flow chart of an avatar-based intelligent makeup method according to still another embodiment of the present application;
Fig. 15 is a flow chart of an avatar-based intelligent makeup method according to still another embodiment of the present application;
Fig. 16 is a block diagram of an avatar-based intelligent makeup apparatus according to an embodiment of the present application;
Fig. 17 is a block diagram of an electronic device for performing an avatar-based intelligent makeup method according to an embodiment of the present application;
Fig. 18 is a schematic diagram of a storage unit for storing or carrying program code implementing an avatar-based intelligent makeup method according to an embodiment of the present application.
Detailed Description
To enable those skilled in the art to better understand the present application, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are for illustration only and are not intended to limit the scope of the application.
With the progress of society, people increasingly pursue quality of life, and many improve their image and temperament through makeup in daily life or at work. Although a large number of makeup tutorial videos are available online, different users have different face shapes and facial feature distributions, which makes it difficult for a user to find a suitable makeup look by studying such videos. In addition, because users differ in their command of makeup, a user cannot obtain guidance matched to his or her own skill level. Existing makeup guidance methods therefore fail to meet users' actual makeup needs.
To address these problems, the inventor studied the difficulties of current makeup practice and users' makeup needs in real applications, and proposes the avatar-based intelligent makeup method and apparatus, electronic device and storage medium of the embodiments of the present application: according to the makeup look selected by the user, the user is guided through the makeup by an avatar generated from the user's own face image, thereby providing personalized makeup guidance and improving its effectiveness.
To better understand the avatar-based intelligent makeup method and apparatus, electronic device and storage medium provided by the embodiments of the present application, an application environment suitable for these embodiments is described below.
Referring to Fig. 1, Fig. 1 shows a schematic diagram of an application environment suitable for embodiments of the present application. The avatar-based intelligent makeup method of the embodiments may be applied to the polymorphic interaction system 10 shown in Fig. 1. The polymorphic interaction system 10 includes a terminal device 100 and a server 200, the server 200 being communicatively connected to the terminal device 100. The server 200 may be a conventional server, a cloud server, a server cluster composed of multiple servers, or a server center composed of multiple servers. The server 200 may provide background services for the user, which may include, but are not limited to, acquiring a makeup policy corresponding to a target makeup look and generating a video in which an avatar guides the makeup according to the makeup policy; no limitation is imposed here.
The terminal device 100 may be any electronic device having a display screen and supporting data input, including but not limited to a smart makeup mirror, a robot, a smartphone, a tablet computer, a laptop computer, a desktop computer, a wearable electronic device, or any other electronic device in which the avatar-based intelligent makeup apparatus is deployed; no limitation is imposed here. Specifically, data input may be based on a voice module, a text input module, an image input module, a video input module or the like provided on the terminal device 100, or on a gesture recognition module provided on the terminal device 100 so that the user can interact through gestures and similar input modes.
A client application program may be installed on the terminal device 100, and the user may communicate with the server 200 through the client application program (such as an APP or a WeChat applet). Specifically, a corresponding server application program is installed on the server 200; the user may register a user account with the server 200 through the client application program and communicate with the server 200 based on that account. For example, the user logs into the account in the client application program and, based on that account, inputs text, voice, image or video information through the client application program. After receiving the information input by the user, the client application program sends it to the server 200, so that the server 200 can receive, process and store the information, and the server 200 may also return corresponding output information to the terminal device 100.
In some embodiments, the means for processing the information input by the user may also be provided on the terminal device 100, so that the terminal device 100 can interact with the user without relying on communication with the server 200; in this case, the polymorphic interaction system 10 may include only the terminal device 100.
The above application environments are merely examples for facilitating understanding, and it is to be understood that embodiments of the present application are not limited to the above application environments.
The avatar-based intelligent makeup method and apparatus, electronic device and medium provided by the embodiments of the present application are described in detail below through specific embodiments.
Referring to Fig. 2, Fig. 2 is a flow chart of an avatar-based intelligent makeup method according to an embodiment of the present application, which may be applied to an electronic device. The electronic device may be the terminal device described above having a display screen or another image output device, or may be the server described above. In a specific embodiment, the avatar-based intelligent makeup method may be applied to the avatar-based intelligent makeup apparatus 1600 shown in Fig. 16 and the electronic device 1700 shown in Fig. 17. The flow shown in Fig. 2 is described in detail below; the avatar-based intelligent makeup method may specifically include the following steps:
step S110: and displaying a recommendation list of the makeup.
The recommendation list may include at least one makeup look, and a makeup look may be a whole-face look or a partial look. Specifically, a whole-face look may combine an eyebrow look, a lip look, an eye look, a blush look and so on into a complete look for the whole face, while a partial look may be one or more of an eyebrow look, a lip look, an eye look, a blush look and so on; no limitation is imposed here. As one approach, the terminal device may acquire a makeup demand instruction from the user and display the recommendation list according to that instruction. Specifically, if the user's makeup demand instruction asks for a whole-face look, a recommendation list of whole-face looks is displayed; if it asks for a partial look, a recommendation list of the corresponding partial looks is displayed. For example, when the terminal device receives a makeup demand instruction from the user asking for an eye look, it displays a recommendation list including at least one eye look.
In some embodiments, the makeup looks in the recommendation list may be pre-stored in a database local to the terminal device, generated in real time by the terminal device or the server, or entered into the terminal device by the user; no limitation is imposed here. Obtaining the looks in the recommendation list in multiple ways increases their diversity and widens the range of looks the user can choose from.
As one approach, the makeup looks in the recommendation list may be pre-stored in a database local to the terminal device. Specifically, the database stores data for different looks such as eyebrow looks, lip looks, eye looks and blush looks, and when the terminal device receives a request to display the recommendation list, the look data can be read from the database, reducing the time that would be needed to generate looks in real time.
As another approach, the makeup looks in the recommendation list may be generated in real time by the terminal device or the server. Specifically, when a request to display the recommendation list is received, the terminal device or the server generates the look data in real time, the data is sent to the terminal device, and the recommendation list is displayed on the screen of the terminal device. For example, the terminal device may acquire a face image of the user, the server may obtain makeup looks matching that face image, and the look data may then be sent to the terminal device and displayed.
As still another approach, the makeup looks in the recommendation list may be looks entered into the terminal device by the user. A look entered by the user may be a face image showing the look, and that image may show a whole-face look or a partial look. For example, the user may input an image of a favorite celebrity's makeup and use either the whole look in the image or just the eye look in the image as a look in the recommendation list. Alternatively, a look entered by the user may be composed of several partial looks selected by the user; for example, the user may select an eyebrow look, a lip look and an eye look, and the look composed of these selected partial looks is displayed in the recommendation list.
In some embodiments, the recommendation list may be displayed after an instruction from the user requesting makeup guidance is detected. Specifically, the instruction requesting makeup guidance may be preset by the terminal device and may be a screen touch instruction, a voice instruction, an action instruction, a gesture instruction, or the like. For example, when the terminal device detects a complete facial image of the user, it may ask by voice whether makeup guidance is needed, and if a voice instruction confirming that the user wants guidance is acquired, the recommendation list is displayed.
In some embodiments, after acquiring the look data for the recommendation list, the terminal device may display to the user a makeup effect image corresponding to each look, so that the user can select the target look from the effect images. A makeup effect image may show a real person wearing the look, or may be an effect image of the look rendered on the avatar. Optionally, the avatar may be generated from a face image of the user and resemble the user, or it may be a preset avatar that does not resemble the user.
For example, Fig. 3 shows a schematic diagram of an interactive interface provided by an exemplary embodiment of the present application. The interface includes an avatar generated from the user's face image and a recommendation list containing several makeup looks; when the user selects a look from the list on this interface, an effect image of the avatar wearing that look is displayed. Through the avatar's effect image the user can see intuitively how the finished look would appear on his or her own face, making it easier to choose a suitable or preferred look.
In some embodiments, an operation instruction from the user may be acquired on the interface displaying the recommendation list, and the effect image shown on the interface may be enlarged or reduced accordingly, so that the user can examine the details of a look and choose one that suits him or her. For example, the terminal device may predefine double-tapping an area of the screen as a zoom gesture; when an instruction double-tapping the eye makeup area on the interface is received, that area of the effect image is enlarged.
It can be understood that step S110 may be performed locally by the terminal device or shared between the terminal device and the server, with the work divided as required by the actual application scenario; no limitation is imposed here.
Step S120: and acquiring a target dressing selected by the user in the recommendation list.
After displaying the recommendation list, the terminal device may acquire the target makeup look selected by the user from the list, the target look being the look for which the user wants makeup guidance. The user's selection operation may be, without limitation, a screen touch instruction, a voice instruction, an action instruction, a gesture instruction, or the like. For example, the target look may be obtained from the user's tap on a look in the recommendation list, or a voice instruction such as "select look 1" may indicate that look 1 in the list is the target look.
In some embodiments, a facial image of the user may also be acquired, facial features may be extracted from it, the facial features may be matched against the look data in the recommendation list, and a matching probability between the facial features and each look may be computed; the higher the probability, the better the look suits the user, and the look with the highest probability is taken as the target look. As one approach, the matching probability may be computed by a pre-trained machine learning model. Specifically, the model may be trained in advance on face data associated with makeup effects as samples, and after the user's facial features and the look data in the recommendation list are input into the model, it outputs the matching probability between them.
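The matching described here can be framed as a scoring function over (facial features, makeup look) pairs, with the highest-scoring look taken as the target. A minimal sketch follows; the scoring function is a trivial stand-in for the trained model mentioned above, and the feature dictionaries are assumed representations, not the patent's actual data format.

```python
from typing import Dict, List

def match_score(facial_features: Dict[str, float], look_features: Dict[str, float]) -> float:
    """Stand-in for the trained matching model: higher means a better fit."""
    shared = set(facial_features) & set(look_features)
    if not shared:
        return 0.0
    return sum(1.0 - abs(facial_features[k] - look_features[k]) for k in shared) / len(shared)

def pick_target_look(facial_features: Dict[str, float],
                     looks: List[Dict[str, float]]) -> int:
    """Return the index of the look with the highest matching score."""
    scores = [match_score(facial_features, lf) for lf in looks]
    return max(range(len(looks)), key=scores.__getitem__)

user = {"face_length": 0.6, "eye_size": 0.7}
looks = [{"face_length": 0.5, "eye_size": 0.9}, {"face_length": 0.62, "eye_size": 0.68}]
print(pick_target_look(user, looks))   # -> 1, the closer match
```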
Because the looks in the recommendation list may not fully meet the user's actual needs, in some embodiments a base look selected by the user from the list and an adjustment instruction for that base look may be acquired, and the adjusted base look is taken as the target look selected by the user. See the following embodiments for details.
It is understood that step S120 in this embodiment may be performed locally by the terminal device.
Step S130: and acquiring a dressing policy corresponding to the target dressing.
The makeup policy is the operation information corresponding to the target look; based on it, the user can be given makeup techniques, operation steps and other information that guide him or her to produce a look close to the target. The makeup policy may include cosmetic products, a guidance technique, a makeup effect and so on, where the cosmetic products include product names and amounts, the guidance technique includes makeup actions and the facial regions where each product is applied, and the makeup effect includes the color of the product combined with the facial region after application and the contour lines of the facial features. For example, when the target look selected by the user is an eyebrow look, the makeup policy may include: the brand and color of the eyebrow products to use, drawing the eyebrow outline corresponding to the target look with an eyebrow pencil, filling the inside of the outline with eyebrow powder, blending the brows naturally with the eyebrow pencil, and darkening the eyebrow peak.
In some embodiments, one target look may correspond to several different makeup policies. Specifically, the target look may correspond to policies of different difficulty: the more difficult the policy, the more makeup detail it contains, the harder it is to complete, and the closer its result is to the target look. For example, in a lower-difficulty policy for the eye makeup of the target look, only a single eyeshadow shade and one brush are needed, and the resulting look is about 70% similar to the target; a higher-difficulty policy for the same eye makeup layers several shades and uses several brushes for blending, and its result is 100% similar to the target.
As one approach, the policy may be chosen according to the user's selection. For example, the difficulty of makeup may be divided into simple, moderate and difficult modes according to the difficulty of the policies, and if the user selects the simple mode, the simple policy corresponding to the target look is acquired.
As another approach, a makeup video of the user may be acquired, the user's makeup skill level may be assessed by analyzing the video, and a policy matching that level may be selected. For example, if analysis of the user's makeup video shows a high skill level, a more difficult policy may be selected.
In some embodiments, the makeup policy corresponding to the target look may also include a clothing matching policy, an accessory matching policy, a hairstyle policy and the like that suit the look. For example, if the target look is a Western-style look, Western-style clothing and bolder jewelry may be recommended. In this way the user obtains a complete styling strategy suited to the look, improving the overall effect.
It can be understood that step S130 may be performed locally by the terminal device, on the server, or shared between the two, with the work divided as required by the actual application scenario; no limitation is imposed here.
Step S140: and generating a video for guiding makeup by the avatar according to the makeup strategy and displaying the video.
After the makeup policy corresponding to the target look is obtained, a video in which the avatar guides the makeup may be generated and displayed, the avatar having been generated in advance from the user's face image. The guidance video may show the makeup operations needed to complete the target look being performed on the avatar's face, so that life-like actions are presented and the changes of color, contour line and so on produced by each makeup action are shown on the avatar's face.
It can be understood that the face image may be captured by the terminal device through an image capture device such as a camera, or input into the terminal device by the user. From the user's face image, an avatar model corresponding to the user can be obtained, where the similarity between each part of the avatar model's head and the corresponding part of the user's head satisfies a preset similarity condition. The avatar may be two-dimensional or three-dimensional, and the parts of the avatar's head include at least one of the following: eyes, eyebrows, mouth, nose, face shape, head shape and skin color. Optionally, facial bone detection and body posture recognition may also be performed on the user, so that the avatar resembles not only the user's head but also the user's body posture.
Specifically, the avatar model corresponding to the user may be obtained through a pre-trained neural network model. First, a feature extraction layer of the network extracts features from the user's face image to obtain feature vectors for the different attributes of each part, where the attributes describe the externally visible characteristics of the part; for example, the attributes of the eyes may include eye shape, eyelid type, pupil color and eyelash length. Next, a feature concatenation layer concatenates the feature vectors of a part's attributes into the feature vector of that part. Then, a classification layer predicts the category to which each part's feature vector belongs. Finally, the assets corresponding to the parts are combined, based on their categories, into the avatar model corresponding to the user, which is then rendered by a graphics processor or the like to obtain and display the avatar corresponding to the user's face image. Optionally, building the avatar model from the user's face image may be performed by the terminal device or by the server.
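The description above outlines a four-stage pipeline: per-part attribute feature extraction, feature concatenation, per-part classification, and assembly of matching assets into an avatar model. The sketch below mirrors those stages with stubbed components; none of the names come from the patent, and a real system would use trained networks instead of these placeholders.

```python
from typing import Dict, List

PARTS = ["eyes", "eyebrows", "mouth", "nose", "face_shape", "head_shape", "skin_color"]

def extract_attribute_features(face_image, part: str) -> List[float]:
    """Stage 1 stub: attribute feature vector for one part (e.g. eye shape, eyelid type)."""
    return [0.0, 0.0, 0.0]

def classify_part(feature_vector: List[float]) -> str:
    """Stage 3 stub: predict the category (asset id) the part belongs to."""
    return "category_0"

def build_avatar_model(face_image) -> Dict[str, str]:
    avatar = {}
    for part in PARTS:
        features = extract_attribute_features(face_image, part)  # stage 1
        # stage 2 (feature concatenation) is trivial here, with one attribute vector per part
        avatar[part] = classify_part(features)                   # stage 3
    return avatar                                                 # stage 4: one asset per part

print(build_avatar_model(face_image=None))
```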
In some embodiments, before the video is displayed, the makeup operation information and the cosmetic products used in the makeup policy may be shown in a prompt box, so that the user can judge from his or her own skill level and available products whether the policy is suitable; if the user is not satisfied with the current policy, it can be adjusted or a different target look can be chosen. This saves the user time spent watching the video and avoids the situation where, halfway through the makeup, the user discovers that a required product is missing.
In some embodiments, playback of the avatar's guidance video may be controlled by interactive instructions input by the user. The interactive instructions may be preset playback-control instructions, such as playing at a different speed, replaying, pausing and resuming, and the user may input them by tapping the screen, by voice, by action, by gesture and so on. For example, when the terminal device receives the voice instruction "slow down", it may reduce the playback speed to 0.8 times the current speed, so that the video better matches the pace of the user's own makeup operation and is easier to follow. Acquiring such polymorphic interaction instructions enriches the human-computer interaction functions of the terminal device, so that the makeup guidance better meets the user's actual needs.
In some embodiments, the terminal device may capture a real-time makeup video of the user through an image capture device such as a camera and determine the user's current makeup progress by analyzing the video. The terminal device may then display the segment of the avatar's guidance video corresponding to the user's current progress so that the user can follow along, and optionally display the segment corresponding to the next operation once the current operation is finished. For example, when the terminal device detects that the user is currently applying eyeshadow, it may display the video of the avatar applying eyeshadow, or display the video of brushing the eyelashes, which follows the eyeshadow step.
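Synchronising the guidance video with the user's real-time progress amounts to mapping the detected current step onto the corresponding, or next, segment of the video. A hedged sketch, assuming a step detector already exists and using an illustrative step order not specified by the patent:

```python
from typing import List, Optional

STEP_ORDER: List[str] = ["base", "eyebrows", "eyeshadow", "eyelashes", "lips"]

def next_segment(detected_step: Optional[str], finished: bool) -> str:
    """Return which guidance segment to show for the detected progress."""
    if detected_step is None:
        return STEP_ORDER[0]                 # nothing detected yet: start from the top
    idx = STEP_ORDER.index(detected_step)
    if finished and idx + 1 < len(STEP_ORDER):
        return STEP_ORDER[idx + 1]           # current operation done: show the next step
    return detected_step                     # otherwise keep showing the current step

print(next_segment("eyeshadow", finished=True))   # -> "eyelashes"
```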
In some embodiments, audio corresponding to the makeup policy may also be obtained and played through the audio playback device of the terminal device. Optionally, text corresponding to the makeup policy or images of the avatar's makeup effect may be shown in a prompt box. For example, the cosmetic products needed during the makeup process for the target look may be displayed on the interface, where the products may include cosmetics, makeup tools and so on; when the target look is an eyebrow look, products such as eyebrow pencil and eyebrow powder and tools such as an eyebrow brush may be listed. The user can choose one or more of video, audio, text and images for guidance according to his or her own makeup habits, improving the makeup experience. As one approach, while the avatar's guidance video is displayed, links to the cosmetic products or tools used in the video may be shown in a prompt box on the same interface, and the user can purchase them by clicking the links. In this way the user can conveniently obtain the products needed, and merchants can achieve a higher purchase conversion rate.
It can be understood that step S140 may be performed locally by the terminal device, on the server, or shared between the two, with the work divided as required by the actual application scenario; no limitation is imposed here.
With the avatar-based intelligent makeup method provided by this embodiment, a recommendation list of makeup looks is displayed, the target look selected by the user from the list is acquired, the makeup policy corresponding to the target look is acquired, and a video in which the avatar guides the makeup is generated according to the policy and displayed. Makeup guidance can thus be given for the look the user selected, achieving personalized guidance, and because the guiding avatar is generated from the user's own face image, the guidance is easier to follow and therefore more effective.
Referring to Fig. 4, Fig. 4 shows an avatar-based intelligent makeup method according to another embodiment of the present application, which may be applied to the terminal device described above. The method may include:
step S210: and acquiring a face image of the user, and generating facial features of the user according to the face image.
The face image of the user may be an image acquired by the terminal device through an image acquisition device such as a camera, or may be uploaded to the terminal device by the user.
Facial features characterize the externally visible characteristics of the user's face, including but not limited to the contours and colors of its various parts.
Specifically, the user's facial features can be extracted from the acquired face image by image recognition. First, face detection is performed on the image to locate the face; then key points of the face are detected, the face is divided into several regions according to the key points, features are extracted for each region, and the features of the different regions are combined into a feature vector used as the user's facial features. Key points may include, but are not limited to, key parts such as the eyes, eyebrows, mouth, nose and face contour; it is understood that the number and locations of the key points can be determined according to the actual situation.
As one approach, the user's face image may be input into a facial feature extraction model to obtain the facial features corresponding to the image, where the facial feature extraction model is a machine learning model trained on a sample set of data annotated with facial key points and outputs, for an input face image, the corresponding facial features. As another approach, a separate machine learning model may be trained for each facial attribute; for example, the user's face image may be input into a pre-trained face-shape classification model to obtain the face shape corresponding to the image, and the nose, eyes, eyebrows, lips and so on corresponding to the image can be determined similarly.
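The feature extraction in step S210 is a standard landmark pipeline: detect the face, locate key points, split the face into regions, and extract a feature vector per region. A sketch with stubbed detectors follows; in practice a face-landmarking library would supply the detection steps, but the patent does not name one, so everything here is an illustrative assumption.

```python
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def detect_face(image) -> Tuple[int, int, int, int]:
    """Stub: bounding box (x, y, w, h) of the detected face."""
    return (0, 0, 256, 256)

def detect_landmarks(image, box) -> Dict[str, List[Point]]:
    """Stub: key points grouped by part (eyes, eyebrows, mouth, nose, face contour)."""
    return {"eyes": [(0.3, 0.4), (0.7, 0.4)], "mouth": [(0.5, 0.8)]}

def region_features(image, points: List[Point]) -> List[float]:
    """Stub: per-region descriptor (contour, color, and so on)."""
    return [p[0] for p in points] + [p[1] for p in points]

def facial_features(image) -> List[float]:
    box = detect_face(image)
    landmarks = detect_landmarks(image, box)
    vector: List[float] = []
    for part, points in landmarks.items():
        vector.extend(region_features(image, points))   # concatenate per-region features
    return vector

print(facial_features(image=None))
```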
In some embodiments, a three-dimensional face model of the user may be generated after the face image is acquired, and the user's facial features may be extracted from the three-dimensional model. Specifically, two-dimensional face images of the user from several angles may be captured, a three-dimensional face model of the user is built from them, and the facial features are generated by analyzing that model. Compared with a two-dimensional face image, the three-dimensional model is closer to the user's actual appearance and allows three-dimensional features such as nose bridge height and cheekbone height to be obtained.
In some embodiments, if the obtained face image of the user is an image taken after the user has applied makeup, then, as one way, the user may be prompted by text or voice that the obtained image already carries makeup and asked to provide a bare-face image so that the facial features can be obtained more accurately; as another way, a corresponding bare-face image may be recovered from the obtained image through a machine learning model or the like.
It can be understood that step S210 may be performed locally by the terminal device, or may be performed in the server, or may be performed separately by the terminal device and the server, or may be performed according to the actual application scenario, and the task allocation may be performed according to the requirement, which is not limited herein.
Step S220: at least one makeup look is obtained based on the facial features.
The makeup obtained according to the facial features may be a whole-face makeup or a partial makeup. Specifically, a whole-face makeup may combine eyebrow makeup, lip makeup, eye makeup, blush and the like into a complete look for the face, and a partial makeup may be one or more of eyebrow makeup, lip makeup, eye makeup, blush and the like, which is not limited here. As one mode, one makeup may be obtained based on the facial features, or a plurality of makeups may be obtained.
In some embodiments, the terminal device or the server has a database storing various real-person makeup effect graphs, and the database may be preset or obtained by collecting the makeup effect graphs on the network in real time. After facial features of a user are generated according to the facial images, the dressing effect diagram in the database can be matched with the facial features, and if the matching value is larger than a preset matching value, the dressing in the dressing effect diagram is used as a recommended dressing, wherein the recommended dressing is a dressing in a recommended list.
As one way, different weight values can be set for the regions of the user's face, the similarity between each region of the user's face and the corresponding region of a makeup effect diagram in the database is calculated, and the weighted average of the per-region similarities is used as the matching value of that effect diagram; if the matching value is greater than the preset matching value, the makeup in the effect diagram is taken as a recommended makeup. The weight values of the facial regions may be set by the user or set automatically by the terminal device when the makeup guidance function is started. For example, if the user sets the weight of the eyes to 1 and the weight of the nose to 0.5, and the similarity between the eyes in the effect diagram and the user's eyes is 80% while the similarity of the nose is 60%, the matching value of the effect diagram is about 73%, which exceeds the preset matching value of 70%, so the makeup in that effect diagram is taken as a recommended makeup.
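The weighted matching value can be reproduced with a few lines of code; the region names, weights and similarities below simply restate the worked example above and are not fixed by the method.

```python
# Sketch of the weighted matching value described above.
def matching_value(similarities, weights):
    """Weighted average of per-region similarities (values in 0..1)."""
    total_weight = sum(weights[r] for r in similarities)
    return sum(similarities[r] * weights[r] for r in similarities) / total_weight

weights = {"eyes": 1.0, "nose": 0.5}
similarities = {"eyes": 0.80, "nose": 0.60}          # example from the text
value = matching_value(similarities, weights)         # (0.8*1 + 0.6*0.5) / 1.5 ≈ 0.73
print(f"matching value: {value:.0%}, recommended: {value > 0.70}")
```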
As another way, the facial features of the user and the makeup effect map of the database can be input into a pre-trained makeup matching model, the makeup matching model is a machine learning model trained by a non-makeup data sample of the face and a makeup data sample corresponding to the face, the matching probability of the facial features and different makeup effect maps can be output, and the makeup in the makeup effect map corresponding to the matching probability larger than a preset value is used as a recommended makeup. The model can be constructed, trained and optimized in a server, configured in the server, or transplanted to a mobile terminal by the server and configured. Alternatively, the model building, training and optimization processes may be performed in the mobile terminal, if the processing capabilities of the mobile terminal allow.
Optionally, the makeup in the database can be adjusted according to the facial features of the user, so that the makeup is more suitable for the user.
In some embodiments, the makeup may be generated according to the acquired facial features of the user, and specifically, the terminal device may input the facial features of the user into a pre-trained makeup generation model to output a makeup corresponding to the facial features, where the makeup generation model is a machine learning model trained from a data sample of no makeup of a face and a makeup data sample corresponding to a face, and the model may generate a makeup corresponding to the facial features according to the facial features of the user.
It can be understood that step S220 may be performed locally by the terminal device, or may be performed in the server, or may be performed separately by the terminal device and the server, or may be performed according to the actual application scenario, and the task allocation may be performed according to the requirement, which is not limited herein.
Although a makeup obtained from the user's facial features may meet general aesthetic expectations, in practical applications it may not satisfy the user's more personalized requirements in terms of scene or makeup style. Therefore, in some embodiments, step S220 may include steps S221 to S222. Referring to fig. 5, fig. 5 shows a schematic flow chart of step S220 in fig. 4 provided by an exemplary embodiment of the present application, and step S220 may include:
step S221: and acquiring the cosmetic product data and the dressing style data input by the user.
The cosmetic product data may include the types of makeup products, such as foundation, loose powder, eyebrow products, blush, highlighter, contouring products, eye shadow and lipstick, as well as the brand and model corresponding to each type. The cosmetic product data input by the user may be data of cosmetics the user has already purchased, and obtaining the makeup according to this data helps guide the user to make better use of the purchased products; the cosmetic product data may also be data of cosmetics the user is interested in, and obtaining the makeup according to this data helps the user judge whether those products are suitable.
The makeup style data may be classified according to actual demands; for example, by degree of makeup (such as light makeup and heavy makeup), by era (such as ancient-style, Republic-of-China-era and modern makeup), by geographical region (such as European-American, Japanese-Korean and Chinese makeup), or by scene (such as stage makeup, banquet makeup and daily makeup), which is not limited here.
In some embodiments, the terminal device may provide a makeup product interaction interface and a makeup style interaction interface, and detect a user operation based on the interfaces, so as to obtain makeup product data and makeup style data input by the user, where the user operation may be a selection operation based on options provided by the interfaces or an input operation, and is not limited herein.
In some embodiments, the terminal device may record user data, such as a user selected target makeup, or a browsing history of the makeup in the recommendation list, each time the user uses the makeup function, and then analyze the user-preferred makeup product data and the makeup style data according to the user data, in such a way that the user does not need to additionally input the data, and thus the user experience may be improved.
In some embodiments, the terminal device may also acquire only the makeup product data input by the user, or acquire only the makeup style data input by the user, without acquiring both the makeup product data and the makeup style data.
Step S221 may be performed locally by the terminal device.
Step S222: the makeup look is obtained based on the facial features, the cosmetic product data, and the makeup style data.
When the cosmetic product data and the makeup style data input by the user are acquired, the makeup may be obtained based on the facial features, the cosmetic product data and the makeup style data, where at least one of the obtained makeups can be achieved using the cosmetic products in that data.
As one mode, after a plurality of makeup looks are acquired according to facial features, the acquired makeup looks may be screened using the makeup product data, the makeup style data as screening conditions, and the makeup looks satisfying the conditions may be taken as the acquired makeup looks. Specifically, the screening condition corresponding to the cosmetic product data is that the makeup can be obtained by using the cosmetic product, and the screening condition corresponding to the makeup style data is that the makeup accords with the makeup style input by a user. As another way, the makeup product data and the makeup style data may be used as screening conditions, the makeup meeting the conditions may be screened from a database containing a plurality of makeup, then the makeup meeting the conditions may be matched with the facial features, and if the matching value is greater than a preset matching value, the makeup in the makeup may be used as the acquired makeup.
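A minimal sketch of the screening condition described above is shown below, assuming each candidate makeup records the style it belongs to and the products it requires; these field names are illustrative, not taken from the method itself.

```python
# Sketch of screening candidate makeups with product data and style data.
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Makeup:
    name: str
    style: str                                  # e.g. "daily", "banquet", "stage"
    required_products: Set[str] = field(default_factory=set)

def screen(candidates: List[Makeup], owned_products: Set[str], style: str) -> List[Makeup]:
    return [m for m in candidates
            if m.style == style                          # matches the style data
            and m.required_products <= owned_products]   # achievable with the products

candidates = [
    Makeup("soft daily look", "daily", {"foundation", "eyebrow pencil", "lipstick"}),
    Makeup("stage look", "stage", {"foundation", "eye shadow", "highlighter"}),
]
print(screen(candidates, {"foundation", "eyebrow pencil", "lipstick"}, "daily"))
```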
In some embodiments, the manner in which makeup is obtained may be varied according to the needs of the user. As one way, the makeup may be obtained from facial features and cosmetic product data; as yet another way, the makeup may be obtained from facial features and makeup style data; as another way, the makeup may be obtained from facial features, cosmetic product data, and makeup style data; as yet another way, the makeup may be obtained based solely on the makeup product data or the makeup style data, so that the user may try some makeup that does not necessarily match the facial features of the user, but matches the makeup product data or the makeup style data of the user.
It can be understood that step S222 may be performed locally by the terminal device, or may be performed in the server, or may be performed separately by the terminal device and the server, or may be performed according to the actual application scenario, and the task allocation may be performed according to the requirement, which is not limited herein.
Step S230: and displaying a recommendation list, wherein the recommendation list comprises at least one dressing.
After the makeup is acquired according to the facial features, a recommendation list containing the acquired makeup is displayed. Specifically, please refer to step S110.
Step S240: and acquiring a target dressing selected by the user in the recommendation list.
Step S250: and acquiring a dressing policy corresponding to the target dressing.
Step S260: and generating a video for guiding makeup by the avatar according to the makeup strategy and displaying the video.
It should be noted that, in this embodiment, the portions not described in detail may refer to the foregoing embodiments, and are not described herein again.
According to the avatar-based intelligent makeup method provided by this embodiment, the face image of the user is acquired, the facial features of the user are generated from the face image, at least one makeup is obtained according to the facial features, a recommendation list containing the at least one makeup is displayed, the target makeup selected by the user in the recommendation list is obtained, the makeup strategy corresponding to the target makeup is obtained, and a video in which the avatar guides the makeup is generated and displayed according to the makeup strategy. In this way, makeups are generated according to the user's facial features and the makeup guidance is given through the avatar, so the makeups in the recommendation list are better suited to the user and the guidance is more personalized.
Referring to fig. 6, fig. 6 illustrates an avatar-based intelligent cosmetic method according to still another embodiment of the present application, which may be applied to the above-described terminal device, and the method may include:
Step S310: and displaying a recommendation list of the makeup.
Step S320: and acquiring the basic makeup selected by the user in the recommendation list.
Because the makeups in the recommendation list are looks that suit general aesthetic preferences, they do not necessarily meet the user's actual needs. Therefore, after the basic makeup selected by the user in the recommendation list is obtained, it can be adjusted according to an adjustment instruction from the user so that the makeup better matches the user's expectations. The user can choose among the makeups in the recommendation list through a screen touch instruction, a voice instruction, an action instruction, a gesture instruction or the like, and the selected makeup is used as the basic makeup.
Step S320 may be performed locally by the terminal device.
Step S330: and judging whether an adjustment instruction input by a user is acquired.
The adjustment instruction is used to represent an adjustment operation on the basic makeup. Specifically, the adjustment instruction may be an operation instruction for adjusting attributes such as the contour, color and position of a partial makeup of the basic makeup, such as the eyebrow makeup, lip makeup, eye makeup, contouring or blush. For example, the user can change the eye shadow color in the basic makeup to an earth tone through an adjustment instruction.
Step S330 may be performed locally by the terminal device.
In this embodiment, after determining whether to acquire the adjustment instruction input by the user, the method may further include:
if the adjustment instruction input by the user is obtained, step S340 may be executed;
If the adjustment command input by the user is not obtained, step S350 may be executed.
Step S340: and adjusting the basic makeup based on the adjustment instruction, and taking the adjusted basic makeup as a target makeup.
If the adjustment instruction input by the user is acquired, the basic makeup is adjusted based on the adjustment instruction, and the adjusted basic makeup is taken as the target makeup. Specifically, the makeup parameters corresponding to the basic makeup selected by the user in the recommendation list may be acquired, the basic makeup is rendered on the face of the avatar according to these parameters, and the effect of the avatar wearing the basic makeup is displayed on the screen of the terminal device. If an adjustment instruction input by the user is obtained, the makeup parameter corresponding to the adjustment instruction is modified, and the effect of the avatar rendered with the modified makeup parameters is displayed on the screen. For example, when the adjustment instruction is to change the eye shadow color in the basic makeup to an earth tone, the terminal device changes the color parameter of the eye shadow part of the basic makeup to the parameter corresponding to the earth tone and displays the resulting effect on the avatar. When the user finishes the adjustment operations, the adjusted basic makeup is taken as the target makeup. In this way, the user can make personalized adjustments to the makeup and intuitively see the adjusted result.
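A minimal sketch of how an adjustment instruction might update the makeup parameters before the avatar is re-rendered is given below; the parameter names and instruction format are assumptions chosen for illustration.

```python
# Sketch of applying an adjustment instruction to the base makeup's parameters.
base_makeup = {
    "eye_shadow": {"color": "#C46A8A", "contour": "contour_1"},
    "lip":        {"color": "#B03A48", "contour": "contour_3"},
}

def apply_adjustment(makeup_params, instruction):
    """instruction: (partial makeup, attribute, new value), e.g. from the UI."""
    part, attribute, value = instruction
    adjusted = {k: dict(v) for k, v in makeup_params.items()}  # keep the original
    adjusted[part][attribute] = value
    return adjusted

# e.g. the user changes the eye shadow colour to an earth tone
target_makeup = apply_adjustment(base_makeup, ("eye_shadow", "color", "#8B6A4F"))
print(target_makeup)   # re-render the avatar with these parameters
```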
It can be understood that step S340 may be performed locally by the terminal device, or may be performed in the server, or may be performed separately by the terminal device and the server, or may be performed according to the actual application scenario, and the task allocation may be performed according to the requirement, which is not limited herein.
FIG. 7 is a schematic diagram of another interactive interface provided by an exemplary embodiment of the present application. When it is determined that the user needs to adjust the basic makeup, the terminal device may display the interface shown in FIG. 7, which may include a makeup effect diagram of the avatar wearing the basic makeup and a list of partial makeups below the avatar. The partial makeup to be adjusted, as selected by the user, can be obtained based on this interface. As one way, a click operation by the user on the face of the avatar may be acquired, and the partial makeup corresponding to the clicked area is taken as the partial makeup to be adjusted; for example, a user who is dissatisfied with the eyebrow makeup can click the eyebrow area to adjust it. As another way, a selection operation by the user in the partial makeup list may be acquired, and the selected partial makeup is taken as the partial makeup to be adjusted; for example, the user may select the lip makeup below the avatar through a voice instruction and adjust it.
Fig. 8 is a schematic diagram of another interactive interface provided in an exemplary embodiment of the present application, where after obtaining a partial makeup to be adjusted selected by a user, the terminal device may display the interactive interface shown in fig. 8, where the interface may include a makeup effect diagram of an avatar corresponding to a basic makeup, a name of the partial makeup to be adjusted currently, and an attribute option that the partial makeup may be adjusted, such as a color, a contour, and so on. Based on the interactive interface shown in fig. 8, the makeup displayed on the avatar can be adjusted accordingly according to the acquired attribute options selected by the user. For example, when the user selects color 1 and contour 2 on the current interface, the lip makeup color of the avatar displayed on the interface is changed to color 1 and the lip makeup contour is changed to contour 2.
Step S350: and taking the basic makeup as a target makeup.
And if the adjustment instruction input by the user is not acquired, taking the basic makeup as the target makeup.
It is to be understood that step S350 may be performed locally by the terminal device, may be performed in the server, may be performed separately by the terminal device and the server, and may be performed according to the requirements according to different actual application scenarios, which is not limited herein.
Step S360: and acquiring a dressing policy corresponding to the target dressing.
Step S370: and generating a video for guiding makeup by the avatar according to the makeup strategy and displaying the video.
It should be noted that, in this embodiment, the portions not described in detail may refer to the foregoing embodiments, and are not described herein again.
According to the avatar-based intelligent makeup method provided by this embodiment, a recommendation list of makeups is displayed, the basic makeup selected by the user in the recommendation list is obtained, and it is then determined whether an adjustment instruction input by the user is acquired: if so, the basic makeup is adjusted based on the adjustment instruction and the adjusted basic makeup is taken as the target makeup; if not, the basic makeup itself is taken as the target makeup. The makeup strategy corresponding to the target makeup is then obtained, and a video in which the avatar guides the makeup is generated and displayed according to the strategy. Since the basic makeup selected in the recommendation list can be adjusted according to the user's adjustment instruction to obtain the target makeup, and the makeup guidance is given by the avatar, the user's personalized makeup needs can be met, more diverse target makeups become possible, and the user's makeup experience is improved.
Referring to fig. 9, fig. 9 illustrates an avatar-based intelligent cosmetic method according to still another embodiment of the present application, which may be applied to the above-described terminal device, and the method may include:
step S410: and displaying a recommendation list of the makeup.
Step S420: and acquiring a target dressing selected by the user in the recommendation list.
Step S430: and (3) obtaining the makeup of the target dressing.
After the target makeup selected by the user in the recommendation list is obtained, the makeup steps of the target makeup may be obtained. The makeup steps represent the makeup actions that must be performed in chronological order to complete the target makeup; for example, they may include base makeup, blush, eyebrow makeup, eye makeup, contouring and lip makeup performed in sequence, where the base makeup step may be further subdivided into sub-steps such as applying liquid foundation, concealer and loose powder. By dividing the makeup process of the target makeup into a number of makeup steps, a complex process is simplified into several simpler ones, which makes it easier for beginners to learn.
Since the same makeup can be completed with different makeup steps, the corresponding makeup strategies may also differ. In some embodiments, the makeup steps of the target makeup may be adjusted, and the makeup strategy corresponding to the adjusted steps obtained. As one way, the makeup steps selected by the user may be acquired, so that only those steps of the target makeup are obtained; for example, a user who usually only applies base makeup and draws eyebrows may select base makeup and eyebrow makeup as the makeup steps, and after the target makeup is obtained, only the base makeup and eyebrow makeup steps corresponding to it are acquired. As another way, the order of the makeup steps selected by the user may be acquired, and the corresponding steps of the target makeup arranged in that order; for example, if the user prefers to apply eye shadow first and then blush after finishing the base makeup, the makeup steps corresponding to the target makeup may be base makeup, eye shadow and blush performed in sequence.
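The tailoring of the step list to the user's selection and preferred order can be sketched as follows; the default step names are illustrative only.

```python
# Sketch of selecting and reordering makeup steps according to user preference.
DEFAULT_STEPS = ["base makeup", "blush", "eyebrow makeup",
                 "eye makeup", "contouring", "lip makeup"]

def tailor_steps(selected=None, preferred_order=None):
    steps = [s for s in DEFAULT_STEPS if selected is None or s in selected]
    if preferred_order:
        # keep the user-ordered steps first, in the user's order
        ordered = [s for s in preferred_order if s in steps]
        ordered += [s for s in steps if s not in ordered]
        steps = ordered
    return steps

print(tailor_steps(selected={"base makeup", "eyebrow makeup"}))
print(tailor_steps(preferred_order=["base makeup", "eye makeup", "blush"]))
```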
It can be understood that step S430 may be performed locally by the terminal device, or may be performed in a server, or may be performed separately by the terminal device and the server, or may be performed according to the actual application scenario, and the task allocation may be performed according to the requirement, which is not limited herein.
Step S440: and obtaining a dressing strategy corresponding to the dressing step.
After the makeup steps of the target makeup are obtained, the makeup strategy corresponding to each step may be obtained. The makeup strategy includes the makeup products (the names and amounts of the products), the makeup guidance method (the makeup actions and the areas of the face where the products are used) and the makeup effect (the color of a product once combined with the facial area, and the contour lines of the facial features). For example, the strategy corresponding to the lip makeup step may include: the lipstick product corresponding to the lip makeup of the target makeup, the lip brush needed to apply it, the application area of the lipstick, the action of applying the lipstick from the left end of the upper lip to the right end within that area, the color of the lipstick once combined with the lips during application, the contour line of the lips, and the like.
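One way to picture a per-step strategy is as a small data structure holding products, a guidance method and an expected effect. The sketch below is only an illustration of that structure; the field names and example values are assumptions mirroring the lip makeup example above, not the strategy format actually used.

```python
# Minimal data-structure sketch of a per-step makeup strategy.
from dataclasses import dataclass
from typing import List

@dataclass
class Product:
    name: str
    amount: str

@dataclass
class GuidanceMethod:
    action: str          # e.g. "apply from left end of upper lip to right end"
    face_area: str       # use area of the product on the face

@dataclass
class MakeupEffect:
    color: str           # colour after combining with the face area
    contour: str         # contour line of the relevant feature

@dataclass
class StepStrategy:
    step: str
    products: List[Product]
    method: GuidanceMethod
    effect: MakeupEffect

lip_step = StepStrategy(
    step="lip makeup",
    products=[Product("lipstick matching the target makeup", "thin layer"),
              Product("lip brush", "1")],
    method=GuidanceMethod("apply from the left end of the upper lip to the right end",
                          "upper and lower lips"),
    effect=MakeupEffect("warm red after blending with the lips", "natural lip line"),
)
print(lip_step.step, "->", [p.name for p in lip_step.products])
```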
It can be understood that step S440 may be performed locally by the terminal device, or may be performed in the server, or may be performed separately by the terminal device and the server, or may be performed according to the actual application scenario, and the task allocation may be performed according to the requirement, which is not limited herein.
Step S450: and generating a video for guiding makeup by the avatar according to the makeup strategy and displaying the video.
It should be noted that, in this embodiment, the portions not described in detail may refer to the foregoing embodiments, and are not described herein again.
According to the intelligent makeup method based on the virtual image, through displaying the recommended list of the makeup, the target makeup selected by the user in the recommended list is obtained, then the makeup step corresponding to the target makeup is obtained, the makeup strategy corresponding to the makeup step is obtained, and then the video for guiding the makeup of the virtual image is generated and displayed according to the makeup strategy. Therefore, the corresponding makeup steps and the corresponding makeup strategies can be generated according to the target makeup selected by the user, the makeup is guided by the avatar, the user can learn the makeup by referring to the specific makeup steps, and the effectiveness of the makeup guidance is improved.
Referring to fig. 10, fig. 10 illustrates an avatar-based intelligent cosmetic method according to still another embodiment of the present application, which may be applied to the above-described terminal device, and the method may include:
Step S510: and displaying a recommendation list of the makeup.
Step S520: and acquiring a target dressing selected by the user in the recommendation list.
Step S530: and (3) obtaining the makeup of the target dressing.
Step S540: and obtaining a dressing strategy corresponding to the dressing step.
Step S550: and acquiring the makeup video of the user.
The makeup video of the user may be a video acquired by the terminal device through an image acquisition device such as a camera, or may be a video recorded by the user using other devices and uploaded to the terminal device to which the intelligent makeup method based on the avatar is applied.
In some embodiments, after the makeup video of the user is obtained, the user can share it through the terminal device and can also view makeup videos shared by other users, which makes the makeup process more interactive and enjoyable.
It will be appreciated that step S550 may be performed locally by the terminal device, and that the order of execution of step S550 is not limited to the currently enumerated order, and in some embodiments, this step and step S560 may also be performed after step S570 is performed.
Step S560: and (3) comparing the makeup video of the user with the makeup strategy, and displaying the result of the comparison analysis.
After the makeup video of the user is obtained, the makeup video of the user and the makeup policy can be subjected to comparative analysis, and the result of the comparative analysis is displayed, wherein the content of the result of the comparative analysis can be the same point and different points between the two, and the form can be text, voice, image and the like, and the form is not limited.
As a mode, the data such as the makeup technique and the makeup effect of the user can be obtained by analyzing the makeup video of the user, the obtained data is compared with the makeup instruction technique and the makeup effect in the makeup strategy, the difference between the makeup video of the user and the corresponding makeup strategy is obtained, and the prompt information of the difference is displayed. The user can adjust the self-dressing or the cosmetic technique according to the prompt information, so that the self-dressing is closer to the target dressing.
As yet another way, by analyzing the user's makeup video against the makeup strategy, the difference between the makeup steps in the user's video and the makeup steps in the strategy can be obtained, and prompt information about the corresponding steps in the strategy is displayed. From this prompt information the user can learn how the recommended steps differ from the steps actually performed, and thus achieve a more complete makeup. For example, the comparative analysis may find that the user skipped the step of setting the makeup with loose powder; this missing step can then be shown in a dialog box, prompting the user that setting with loose powder makes the makeup last longer.
It can be understood that step S560 may be performed locally by the terminal device, or may be performed in the server, or may be performed separately by the terminal device and the server, or may be performed according to the actual application scenario, and the task allocation may be performed according to the requirement, which is not limited herein.
In some embodiments, the step 560 may include steps S561 to S562, referring to fig. 11, fig. 11 shows a schematic flow chart of the step S560 in fig. 10 provided by an exemplary embodiment of the present application, and the step S560 may include:
Step S561: and comparing and analyzing the makeup video of the user with the makeup strategy to obtain the completion degree of the user's makeup.
The completion degree represents the gap between the user's makeup video and the makeup strategy: the smaller the difference between the user's makeup technique in the video and the guidance technique in the strategy, and the smaller the difference between the user's makeup effect and the effect in the strategy, the higher the completion degree of the user's makeup. As one way, the guidance technique and the makeup effect in the strategy can be given different weights when calculating the completion degree, according to actual demands. For example, if the user only wants the actual makeup effect to resemble the target makeup and does not care whether the technique is standard, a larger weight can be given to the makeup effect, so the completion degree is obtained mainly from the difference between the effects.
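The weighted combination just described can be sketched in a few lines; the weights and scores are illustrative values, not ones defined by the method.

```python
# Sketch of combining technique completion and effect completion with
# demand-dependent weights.
def overall_completion(manipulation_completion, effect_completion,
                       w_manipulation=0.3, w_effect=0.7):
    total = w_manipulation + w_effect
    return (manipulation_completion * w_manipulation
            + effect_completion * w_effect) / total

# the user cares mostly about the final effect, not whether the technique is standard
print(f"{overall_completion(0.5, 0.9, w_manipulation=0.2, w_effect=0.8):.0%}")
```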
It can be understood that step S561 may be performed locally by the terminal device, may be performed in the server, may be performed separately by the terminal device and the server, and may be performed according to the requirements according to different actual application scenarios, which is not limited herein.
In some embodiments, step 561 may include steps S5611 to S5612, referring to fig. 12, fig. 12 shows a schematic flow chart of step S561 in fig. 11 provided by an exemplary embodiment of the present application, and step S561 may include:
Step S5611: and analyzing the makeup video, and acquiring the makeup steps and the makeup effects of the user in the makeup video of the user.
By analyzing the makeup video of the user, the makeup steps contained in the video and the makeup effects of the user corresponding to each makeup step can be obtained. Specifically, the makeup step included in the makeup video of the user may be obtained by analyzing the makeup video, and then the makeup video of the user may be divided into a plurality of video frames according to the makeup step, and for each makeup step, an image frame including the makeup effect of the user may be extracted from the video frame corresponding to the step as the makeup effect of the user corresponding to the step.
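A minimal sketch of this segmentation is shown below: frames are grouped by makeup step and one representative frame per step is kept as that step's user makeup effect. The per-frame step classifier is a placeholder; in practice a trained model would label the frames.

```python
# Sketch of splitting the user's makeup video into per-step segments and
# extracting one effect frame per step.
def classify_step(frame):
    """Placeholder: in practice a model labels each frame with a makeup step."""
    return frame["step"]

def split_by_step(frames):
    segments = {}
    for frame in frames:
        segments.setdefault(classify_step(frame), []).append(frame)
    return segments

def effect_frames(frames):
    segments = split_by_step(frames)
    # e.g. take the last frame of each step as the finished effect of that step
    return {step: seg[-1] for step, seg in segments.items()}

video = [{"step": "base makeup", "t": 0}, {"step": "base makeup", "t": 1},
         {"step": "eye makeup", "t": 2}, {"step": "eye makeup", "t": 3}]
print(effect_frames(video))
```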
It is to be understood that step S5611 may be performed locally by the terminal device, may be performed in the server, may be performed separately by the terminal device and the server, and may be performed according to the requirements according to different actual application scenarios, which is not limited herein.
Step S5612: according to the makeup steps, the difference between the makeup effect of the user and the makeup effect in the makeup strategy is analyzed, and the completion degree of the makeup effect of the user is obtained.
The difference between the user's make-up effect and the make-up effect in the make-up policy may include, but is not limited to, a difference between a color of a make-up product after making up combined with a facial area, a contour line of a five sense organs, and the like.
In some embodiments, for each makeup step contained in the user's makeup video, the difference between the user's makeup effect for that step and the effect for the same step in the makeup strategy may be analyzed, and the differences across all steps combined to give the overall completion degree of the user's makeup effect. As one way, the face in the user's makeup effect can be aligned with the face in the strategy's makeup effect, and the pixel differences of each aligned region calculated to obtain the completion degree, where a larger pixel difference means a lower completion degree. As another way, the user's makeup effect and the effect in the strategy can be input into a pre-trained makeup effect analysis model to obtain a deviation value between them, where a larger deviation value means a lower completion degree; the makeup effect analysis model may be a machine learning model trained with makeup effect images as sample data.
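The pixel-difference variant can be sketched as follows, assuming the per-region effect images are already aligned (the alignment itself is out of scope here); the region names and sample images are purely illustrative.

```python
# Sketch of a pixel-difference completion degree over aligned facial regions.
import numpy as np

def region_completion(user_region: np.ndarray, target_region: np.ndarray) -> float:
    """Completion of one aligned region: 1.0 means identical pixels."""
    diff = np.abs(user_region.astype(float) - target_region.astype(float))
    return 1.0 - float(diff.mean()) / 255.0     # larger difference -> lower completion

def effect_completion(user_regions, target_regions) -> float:
    scores = [region_completion(user_regions[r], target_regions[r]) for r in target_regions]
    return sum(scores) / len(scores)

rng = np.random.default_rng(0)
target = {"eyes": rng.integers(0, 256, (32, 32, 3)), "lips": rng.integers(0, 256, (32, 32, 3))}
user = {r: np.clip(img + 10, 0, 255) for r, img in target.items()}   # slightly off
print(f"completion: {effect_completion(user, target):.2f}")
```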
In some embodiments, according to the makeup step specified by the user, the difference between the user makeup effect and the makeup effect in the makeup policy may be analyzed, so as to obtain the completion degree of the user makeup effect corresponding to the specified step. For example, the user uploads the makeup video of the user on the terminal device, wants to know the difference between the eye makeup drawn by the user and the eye makeup in the target makeup, can select and analyze the completion degree of the makeup effect of the eye makeup, obtains the user makeup effect corresponding to the step of eye makeup by analyzing the makeup video of the user, and obtains the completion degree of the makeup effect corresponding to the eye makeup of the user according to the difference between the user makeup effect corresponding to the eye makeup and the makeup effect in the makeup strategy.
It is to be understood that step S5612 may be performed locally by the terminal device, may be performed in the server, may be performed separately by the terminal device and the server, and may be performed according to the requirements according to different actual application scenarios, which is not limited herein.
Step S5613: and if the finishing degree of the makeup effect of the user is smaller than the first specified threshold value, generating and displaying first prompt information.
The first specified threshold may be a value preset by the terminal device or a value set by the user, if the completion degree of the makeup effect of the user is smaller than the first specified threshold, first prompt information is generated and displayed, and the prompt information includes at least one of the completion degree of the makeup effect, the makeup effect in the makeup strategy and the makeup effect correction suggestion, so that the user can adjust the current makeup of the user according to the prompt information, and the makeup effect closer to the target makeup is obtained. For example, if the first specified threshold is 70% and the completion degree of the user's make-up effect is 50%, an image of the make-up effect in the make-up policy may be displayed, a portion having a large difference from the user's make-up effect may be marked in the image, and the make-up effect correction suggestion may be made in the form of text or voice.
In some embodiments, if the completion degree of the user's makeup effect is greater than or equal to the first specified threshold, no processing may be performed, or a prompt message may be displayed to interact with the user. For example, if the first specified threshold is 80% and the completion degree is 90%, a voice prompt such as "Your makeup skills are great!" may be played to encourage the user and make the makeup process more enjoyable.
It can be understood that step S5613 may be performed locally by the terminal device, or may be performed separately by the terminal device and the server, and may be performed according to the requirements according to different actual application scenarios, which is not limited herein.
In some embodiments, step 561 may include steps S5614 to S5616, referring to fig. 13, fig. 13 shows a flowchart of step S561 in fig. 11 according to another exemplary embodiment of the present application, and step S561 may include:
step S5614: and analyzing the makeup video, and acquiring the makeup steps and the user makeup methods in the makeup video of the user.
By analyzing the makeup video of the user, the makeup steps contained in the video and the makeup technique of the user corresponding to each makeup step can be obtained. Specifically, the makeup steps included in the makeup video of the user may be acquired through analysis of the makeup video, and then the makeup video of the user may be divided into a plurality of video frames according to the makeup steps, and for each makeup step, the makeup technique of the user corresponding to the makeup step may be acquired by identifying the action of the user in the video frame corresponding to the step.
It is to be understood that step S5614 may be performed locally by the terminal device, may be performed in the server, may be performed separately by the terminal device and the server, and may be performed according to the requirements according to different actual application scenarios, which is not limited herein.
Step S5615: according to the makeup steps, the difference between the user makeup manipulation and the makeup instruction manipulation in the makeup strategy is analyzed, and the completion degree of the user makeup manipulation is obtained.
The completion degree of the user's makeup manipulation represents the gap between the user's manipulation and the makeup instruction manipulation in the makeup strategy: the larger the difference, the lower the completion degree. Specifically, the difference may cover how the user handles the makeup tools, the motion trajectory of the user during makeup, the area of the face where the makeup product is used, and the like, which is not limited here. As one way, the difference may be obtained through a makeup manipulation analysis model, which is a machine learning model trained with makeup manipulation videos as sample data.
As one way, the difference between the user's makeup manipulation and the makeup instruction manipulation in the makeup policy may be analyzed according to the user-selected makeup step, so as to obtain the degree of completion of the user's makeup manipulation in the user-selected makeup step.
It is to be understood that step S5615 may be performed locally by the terminal device, may be performed in the server, may be performed separately by the terminal device and the server, and may be performed according to the requirements according to different actual application scenarios, which is not limited herein.
Step S5616: and if the completion degree of the cosmetic manipulation is smaller than the second specified threshold, generating and displaying second prompt information.
The second specified threshold may be a value preset by the terminal device or a value set by the user, and if the completion of the makeup technique is smaller than the second specified threshold, second prompt information is generated and displayed, wherein the second prompt information includes at least one of the completion of the makeup technique, the makeup instruction technique and the correction suggestion of the makeup technique, so that the user can adjust the makeup technique according to the prompt information, and the makeup technique of the user is more close to the makeup instruction technique in the makeup strategy. In some embodiments, if the completion of the user cosmetic manipulation is greater than or equal to the second specified threshold, no processing may be performed, or a prompt message may be displayed to interact with the user.
It can be appreciated that step S5616 may be performed locally by the terminal device, or may be performed separately by the terminal device and the server, and may be performed according to the requirements according to different actual application scenarios, which is not limited herein.
Step S562: and if the completion degree is smaller than the preset numerical value, adjusting the dressing strategy into an alternative strategy.
The preset value may be a value set by default by the terminal device, or may be data preset by a user.
The alternative strategy is a strategy with higher matching degree with the makeup video of the user, the matching degree can be used for representing the matching degree of the makeup level of the user and the difficulty of the strategy, and the matching degree can also be used for representing the matching degree of the makeup in the makeup video of the user and the makeup corresponding to the strategy.
As one way, when the matching degree characterizes how well the user's makeup level matches the difficulty of the strategy, the alternative strategy may be a strategy less difficult than the current makeup strategy, i.e. one that is easier for the user to complete. After the makeup strategy is adjusted to the alternative strategy, a video in which the avatar guides the makeup may be generated and displayed according to the alternative strategy. Adjusting the strategy based on the completion degree analyzed from the user's makeup video allows the user to obtain a more suitable makeup and makes the guidance more flexible.
As another way, when the matching degree characterizes how well the makeup in the user's video matches the makeup corresponding to a strategy, the alternative strategy may be a makeup strategy that better matches the user's current makeup effect. Specifically, when what the user has completed during the makeup process deviates considerably from the target makeup, a makeup better suited to the user's current effect can be obtained in real time and its strategy used as the alternative strategy. For example, when the eye shadow color the user actually applies differs greatly from the one in the selected target makeup, the strategy corresponding to a makeup built around the user's current eye shadow color can be used as the alternative strategy, and the current makeup strategy adjusted to it.
In some embodiments, when the finish degree of the user's makeup is smaller than a preset value, a prompt for adjusting the makeup policy may be displayed, if an operation of adjusting the makeup policy according to the finish degree of the user's makeup is received, the makeup policy is adjusted to be an alternative policy, and if an operation of adjusting the makeup policy according to the finish degree of the user's makeup is not received, the makeup policy remains unchanged.
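The switch logic described in the last few paragraphs can be sketched as below: when the completion degree falls below the preset value, the user is prompted, and the strategy is replaced only if the adjustment is confirmed. The confirmation callback and the strategy records are assumptions added for illustration.

```python
# Sketch of switching the makeup strategy to an alternative after confirmation.
def maybe_switch_strategy(completion, preset_value, current, alternative, confirm):
    if completion >= preset_value:
        return current
    if confirm(f"Completion is {completion:.0%}. Switch to an easier strategy?"):
        return alternative
    return current

current = {"name": "full target look", "difficulty": "hard"}
alternative = {"name": "simplified look", "difficulty": "easy"}
chosen = maybe_switch_strategy(0.45, 0.70, current, alternative, confirm=lambda msg: True)
print(chosen["name"])
```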
It can be understood that step S562 may be performed locally by the terminal device, or may be performed in the server, or may be performed separately by the terminal device and the server, or may be performed according to the actual application scenario, and the task allocation may be performed according to the requirement, which is not limited herein.
Step S570: and generating a video for guiding makeup by the avatar according to the makeup strategy and displaying the video.
It should be noted that, in this embodiment, the portions not described in detail may refer to the foregoing embodiments, and are not described herein again.
According to the intelligent makeup method based on the virtual image, through displaying the recommended list of the makeup, the target makeup selected by the user in the recommended list is obtained, then the makeup step corresponding to the target makeup is obtained, the makeup strategy corresponding to the makeup step is obtained, the makeup video of the user and the makeup strategy are subjected to comparative analysis, the result of the comparative analysis is displayed, and then the video of virtual image guiding makeup is generated and displayed according to the makeup strategy. By analyzing the makeup video of the user, the user can know the difference between the actual makeup process and the makeup strategy corresponding to the target makeup, and more targeted makeup guidance is realized, so that the user can be helped to draw the makeup more similar to the target makeup.
Referring to fig. 14, fig. 14 illustrates an avatar-based intelligent cosmetic method according to still another embodiment of the present application, which may be applied to the above-described terminal device, and the method may include:
step S610: and displaying a recommendation list of the makeup.
Step S620: and acquiring a target dressing selected by the user in the recommendation list.
Step S630: and acquiring a dressing policy of the target dressing.
Step S640: and generating expression driving parameters and action driving parameters corresponding to the virtual image guiding user's make-up behaviors according to the make-up strategy.
Visual model parameters for the avatar to guide the user's makeup behavior are generated according to the makeup strategy. The visual model parameters may include expression driving parameters and action driving parameters. The expression driving parameters are a series of parameters that adjust the face model of the avatar; specifically, they can adjust the makeup effect on the avatar's face, the facial region on which a makeup operation is being performed, mouth movements, other facial movements, and the like. The action driving parameters are a series of limb parameters that adjust the body model of the avatar; specifically, they can adjust the gestures, the range of motion and so on when the avatar performs a makeup operation.
It can be understood that in the embodiment of the present application, the obtained expression driving parameters and motion driving parameters are multiple groups of parameters corresponding to time variation, for example, the number of frame images of the avatar in one second is 10, and the expression driving parameters and motion driving parameters of the avatar corresponding to the requirement in one second are 10 groups. In addition, if the avatar is a two-dimensional avatar, the expression driving parameter and the motion driving parameter of the avatar are driving parameters corresponding to the two-dimensional avatar, and if the avatar is a three-dimensional avatar, the expression driving parameter and the motion driving parameter are driving parameters corresponding to the three-dimensional avatar, which is not limited.
In some embodiments, the visual model parameters of the avatar may be pre-generated according to the makeup policy and stored in a database of the terminal or the server, and the terminal device may directly acquire the visual model parameters corresponding to the makeup policy in the server without considering the limitation of computing resources. As a way, the visual model parameters of the avatar may also be generated in real time through the machine learning model according to the makeup policy, and in this way, the method may not be limited to the makeup policy in the database, and the visual model parameters of the avatar may be generated in real time according to the target makeup selected by the user and the corresponding makeup policy, thereby increasing flexibility of the cosmetic method.
In some embodiments, generating the expression driving parameters and the action driving parameters corresponding to the avatar guiding the user's cosmetic behavior according to the makeup policy may be implemented using a machine learning model. Specifically, the makeup strategy can be input into a parameter generation model, expression driving parameters and action driving parameters corresponding to the makeup strategy are obtained, and the parameter generation model is a machine learning model obtained by training real makeup videos and is used for outputting the expression driving parameters and the action driving parameters corresponding to the makeup strategy according to the input makeup strategy.
Specifically, after the avatar corresponding to the user is obtained, the machine learning model can extract features from the frame images of a real-person makeup video, obtaining the expression features and action features present during real makeup. The expression features of the avatar corresponding to the makeup strategy are obtained from the extracted real-person expression features, the action features of the avatar are obtained from the extracted real-person action features, and the expression driving parameters and action driving parameters are then derived from the avatar's expression features and action features respectively. The machine learning model used is not limited here; for example, a recurrent neural network (Recurrent Neural Network, RNN) model, a convolutional neural network (Convolutional Neural Networks, CNN) model, a generative adversarial network (Generative Adversarial Networks, GAN) or the like may be employed, as well as variants and combinations of these models.
For example, a large number of videos of real people for drawing eye shadows are used as a sample data training model, expression characteristics such as colors, contour lines and the like after eye shadow products are combined with eye areas in the makeup process, action characteristics such as areas using the eye shadows, motion tracks of smearing the eye shadows, gestures of a user using a makeup brush and the like can be extracted, and then expression driving parameters and action driving parameters of an avatar corresponding to the step of drawing the eye shadows in the makeup strategy can be obtained according to the expression characteristics and the action characteristics of the real people for drawing the eye shadows.
It will be understood that step S640 may be performed locally by the terminal device, may be performed in the server, may be performed separately by the terminal device and the server, and may be performed according to the requirements according to different actual application scenarios, and is not limited herein.
Step S650: and driving the expression and the action of the avatar based on the expression driving parameters and the action driving parameters, and generating a video of the avatar guiding makeup.
After the expression driving parameters and the action driving parameters corresponding to the makeup behavior of the user are generated according to the makeup strategy, the expression of the avatar can be driven based on the expression driving parameters, the action of the avatar is driven based on the action driving parameters of the avatar, and a video of the avatar guiding the makeup is generated, wherein the video is formed by multi-frame images generated by driving the avatar. Specifically, the expression driving parameters and the action driving parameters can be aligned, so that the video durations corresponding to the expression driving parameters and the action driving parameters are consistent, then multiple continuous virtual images are generated according to the expression driving parameters and the action driving parameters which are in one-to-one correspondence, and the images are synthesized into the video.
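A minimal sketch of the alignment and frame-by-frame driving described above is given below, using the 10-groups-per-second example from earlier in this embodiment; the renderer is a placeholder standing in for the actual avatar driver.

```python
# Sketch of aligning expression/action parameter sequences and driving the
# avatar frame by frame to build the guidance video.
FPS = 10

def align(expr_params, action_params):
    """Pad the shorter sequence so both cover the same duration."""
    n = max(len(expr_params), len(action_params))
    pad = lambda seq: seq + [seq[-1]] * (n - len(seq)) if seq else [{}] * n
    return pad(list(expr_params)), pad(list(action_params))

def render_frame(expression, action):
    """Placeholder renderer: returns one image of the driven avatar."""
    return {"expression": expression, "action": action}

def generate_guidance_video(expr_params, action_params):
    expr_params, action_params = align(expr_params, action_params)
    frames = [render_frame(e, a) for e, a in zip(expr_params, action_params)]
    duration = len(frames) / FPS
    return frames, duration

frames, seconds = generate_guidance_video(
    expr_params=[{"mouth": "open"}] * 20,
    action_params=[{"hand": "lift brush"}] * 18,
)
print(len(frames), "frames,", seconds, "seconds")
```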
It can be understood that step S650 may be performed locally by the terminal device, or may be performed in the server, or may be performed separately by the terminal device and the server, or may be performed according to the actual application scenario, and the task allocation may be performed according to the requirement, which is not limited herein.
Step S660: and displaying the video.
The generated video in which the avatar guides the makeup is displayed on the display screen or other image output means of the terminal device. Optionally, the audio corresponding to the makeup strategy can be obtained and played in step with the displayed video, so that the user is presented with an avatar whose picture, voice and behavior are consistent and lifelike, further improving the user experience.
It will be appreciated that step S660 may be performed locally by the terminal device.
It should be noted that, in this embodiment, the portions not described in detail may refer to the foregoing embodiments, and are not described herein again.
According to the intelligent makeup method based on the virtual image, through displaying the recommended list of the makeup, the target makeup selected by the user in the recommended list is obtained, then the makeup strategy of the target makeup is obtained, according to the makeup strategy, the expression driving parameters and the action driving parameters corresponding to the makeup behavior of the virtual image guiding user are generated, then the expression and the action of the virtual image are driven based on the expression driving parameters and the action driving parameters, the video of the virtual image guiding makeup is generated, and the video is displayed. Therefore, the video guiding the makeup can be more in line with the actions and expressions of the people in the actual makeup scene, and the user can obtain more effective makeup guidance through the video.
Referring to fig. 15, fig. 15 illustrates an avatar-based intelligent cosmetic method according to still another embodiment of the present application, which may be applied to the above-described terminal device, and the method may include:
step S710: and acquiring a face image of the user.
The face image of the user may be an image acquired by the terminal device through an image acquisition device such as a camera, or may be uploaded to the terminal device by the user. Alternatively, the face image of the user obtained each time in the preset time period may be stored in a database of the terminal device or the server, so as to analyze the change condition of the skin data of the user in the preset time period.
It will be appreciated that step S710 may be performed locally by the terminal device.
Step S720: skin data of a user is obtained from the face image.
Wherein the skin data of the user is used for characterizing the skin condition of the user, may comprise: skin color, freckle, wrinkle, glossiness, etc., are not limited herein. Specifically, after the face image of the user is acquired, the image can be subjected to portrait identification to acquire portrait areas, the portrait areas are divided into a plurality of areas to be processed according to a preset feature database, and skin features of the areas to be processed are identified, wherein the preset feature database contains various skin feature information of the areas, and skin data of the user can be acquired by comparing the skin features of the areas to be processed of the user with features in a corresponding feature database.
For example, the forehead area in the feature database includes forehead wrinkles, whose skin features may include quantifiable data such as the extent and number of the wrinkles. After the face image of the user is obtained, if the skin features detected in the user's forehead area match the skin features corresponding to forehead wrinkles, it can be determined that forehead wrinkles are present in the user's forehead area.
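The matching against a preset feature database can be sketched as follows; the feature names, measurements and thresholds are illustrative assumptions, not values taken from the method.

```python
# Sketch of matching a region's measured skin features against a feature database.
FEATURE_DATABASE = {
    "forehead": {"forehead wrinkles": {"min_wrinkle_count": 2, "min_length_px": 40}},
}

def detect_skin_features(region_name, measurements):
    """measurements: quantified data for one region, e.g. wrinkle count / length."""
    detected = []
    for feature, rule in FEATURE_DATABASE.get(region_name, {}).items():
        if (measurements.get("wrinkle_count", 0) >= rule["min_wrinkle_count"]
                and measurements.get("max_wrinkle_length", 0) >= rule["min_length_px"]):
            detected.append(feature)
    return detected

print(detect_skin_features("forehead", {"wrinkle_count": 3, "max_wrinkle_length": 55}))
```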
It can be understood that step S720 may be performed locally by the terminal device, or in the server, or jointly by the terminal device and the server; the task allocation between them may be determined according to the actual application scenario and requirements, which is not limited herein.
Step S730: skin data is analyzed to generate skin care recommendations.
By analyzing the skin data, an analysis result of the user's skin can be obtained, and corresponding skin care advice is generated according to the analysis result. Specifically, if an abnormality is found in the user's skin, the cause of the abnormality can be analyzed to generate skin care improvement advice; if no abnormality is found, advice to maintain the current skin care method may be generated. In this embodiment of the application, the content of the skin care advice may include: recommending that the user use a corresponding skin care product, recommending a method of applying the skin care product, or recommending corresponding sun protection measures, and the like. The skin care advice may take the form of audio, text, images, video, etc., which is not limited herein. For example, when the analysis of the skin data finds acne on the user's skin, advice recommending an anti-acne product may be generated.
In some embodiments, if face images and corresponding skin data of the user at multiple times within a preset time period are obtained, the change in the user's skin during that period can be analyzed, and skin care advice can be generated according to that change. For example, if the analysis shows that acne on the user's skin has decreased, a voice prompt such as "Good news, your skin condition is improving!" may be played.
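The rule-based nature of step S730 can be sketched as follows; the rules, condition names and prompt wording are illustrative assumptions only, not the rules used by the embodiment.

```python
def generate_skin_care_advice(current, previous=None):
    """current / previous: skin data dicts such as {"forehead": ["forehead_lines"]}."""
    conditions = {c for items in current.values() for c in items}
    advice = []
    if "acne" in conditions:
        advice.append("Recommend an anti-acne product and a gentler cleansing routine.")
    if "forehead_lines" in conditions:
        advice.append("Recommend a moisturising product for the forehead area.")
    if not advice:
        advice.append("No abnormality detected; keep the current skin care method.")
    # Trend analysis over the preset time period, if earlier skin data is available.
    if previous is not None:
        prev_conditions = {c for items in previous.values() for c in items}
        if "acne" in prev_conditions and "acne" not in conditions:
            advice.append("Voice prompt: your skin condition is improving!")
    return advice
```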
In some embodiments, cosmetic products suitable for the user's skin may also be determined by analyzing the skin data. When the user uses the makeup guidance function and the makeup strategy is obtained according to the target makeup selected by the user, the cosmetic products corresponding to the makeup strategy can be screened for those suitable for the user's skin, so that the makeup strategy better matches the user's skin condition.
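A minimal sketch of this product screening, assuming (purely for illustration) that each cosmetic product record carries a hypothetical `unsuitable_for` field listing the skin conditions it does not suit:

```python
def filter_products_for_skin(strategy_products, skin_data):
    """Keep only the cosmetic products in the strategy that suit the user's skin."""
    conditions = {c for items in skin_data.values() for c in items}
    return [product for product in strategy_products
            if not conditions & set(product.get("unsuitable_for", []))]
```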
It is to be understood that step S730 may be performed locally by the terminal device, or in the server, or jointly by the terminal device and the server, with the task allocation determined according to the actual application scenario and requirements, which is not limited herein.
Step S740: displaying skin care advice.
In this embodiment, the skin care advice may be output on a corresponding output device of the terminal device according to the form of the advice. For example, skin care advice in text form can be displayed in a text box on the screen of the terminal device.
It is understood that step S740 may be performed locally by the terminal device.
It should be noted that steps S710 to S740 may be performed at any point before steps S110 to S140 are performed.
According to the avatar-based intelligent makeup method provided in this embodiment, the face image of the user is acquired, skin data of the user is obtained from the face image, and the skin data is then analyzed to generate and display skin care advice. In this way, skin care advice can be given to the user according to the user's skin data, helping users with skin care needs achieve effective skin care and improving the user experience.
Referring to fig. 16, fig. 16 is a structural block diagram of an avatar-based intelligent cosmetic apparatus according to an embodiment of the present application. As shown in the structural block diagram of fig. 16, the avatar-based intelligent cosmetic apparatus 1600 includes: a list display module 1610, a target makeup acquisition module 1620, a makeup strategy acquisition module 1630, and a video processing module 1640, wherein:
The list display module 1610 is configured to display a recommendation list of makeup looks.
Further, the list display module 1610 further includes: a feature generation sub-module, a list makeup acquisition sub-module and a makeup display sub-module, wherein:
The feature generation sub-module is used for acquiring a face image of the user and generating facial features of the user according to the face image.
The list makeup acquisition sub-module is used for acquiring at least one makeup look according to the facial features; the makeup display sub-module is used for displaying the recommendation list, and the recommendation list includes the at least one makeup look.
Further, the list makeup acquisition sub-module includes: a data acquisition unit and a product makeup acquisition unit, wherein:
The data acquisition unit is used for acquiring cosmetic product data and makeup style data input by the user.
The product makeup acquisition unit is used for acquiring makeup looks according to the facial features, the cosmetic product data and the makeup style data, wherein at least one of the makeup looks is obtained by using the cosmetic product.
The target makeup acquisition module 1620 is configured to acquire a target makeup selected by a user in the recommendation list.
Further, the target makeup acquisition module 1620 further includes: a makeup selection sub-module, an instruction judgment sub-module, a first makeup acquisition sub-module and a second makeup acquisition sub-module, wherein:
The makeup selection sub-module is used for acquiring the basic makeup selected by the user in the recommendation list.
The instruction judgment sub-module is used for judging whether an adjustment instruction input by the user is acquired.
The first makeup acquisition sub-module is used for, if an adjustment instruction input by the user is acquired, adjusting the basic makeup based on the adjustment instruction and taking the adjusted basic makeup as the target makeup.
The second makeup acquisition sub-module is used for taking the basic makeup as the target makeup if no adjustment instruction is acquired.
The makeup strategy acquisition module 1630 is used for acquiring the makeup strategy corresponding to the target makeup.
Further, the makeup strategy acquisition module includes: a makeup step acquisition sub-module and a step strategy acquisition sub-module, wherein:
The makeup step acquisition sub-module is used for acquiring the makeup steps of the target makeup, the makeup steps representing the makeup behaviors that are performed in time order and are required to complete the target makeup.
The step strategy acquisition sub-module is used for acquiring the makeup strategy corresponding to each makeup step, the makeup strategy including a cosmetic product, a makeup instruction method and a makeup effect, wherein the cosmetic product includes the name and dosage of the product, the makeup instruction method includes the makeup action and the area of the face where the product is used, and the makeup effect includes the color of the product combined with the face area after makeup and the outline of the facial features.
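One possible way to lay out such a makeup strategy as a data structure is sketched below; all class and field names are illustrative assumptions rather than the structures actually used by the apparatus.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CosmeticProduct:
    name: str
    dosage: str            # e.g. "pea-sized amount"

@dataclass
class InstructionMethod:
    action: str            # the makeup action, e.g. "pat evenly"
    face_area: str         # the area of the face where the product is used

@dataclass
class MakeupEffect:
    area_color: str        # color of the product combined with the face area
    facial_contour: str    # outline of the facial features after this step

@dataclass
class MakeupStep:
    order: int             # steps are carried out in time order
    product: CosmeticProduct
    method: InstructionMethod
    effect: MakeupEffect

@dataclass
class MakeupStrategy:
    target_makeup: str
    steps: List[MakeupStep]
```

Grouping product, method and effect per step mirrors the description above, in which each makeup step carries its own strategy content.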
Further, after the makeup strategy corresponding to the target makeup is acquired, the avatar-based intelligent cosmetic apparatus 1600 further includes: a video acquisition module and a comparison analysis module, wherein:
The video acquisition module is used for acquiring the makeup video of the user.
The comparison analysis module is used for comparing the user's makeup video with the makeup strategy and displaying the result of the comparison analysis.
Further, the comparison analysis module includes: a completion degree acquisition sub-module and a strategy adjustment sub-module, wherein:
The completion degree acquisition sub-module is used for comparing the user's makeup video with the makeup strategy to obtain the completion degree of the user's makeup, the completion degree representing the difference between the user's makeup video and the makeup strategy.
Further, the completion degree acquisition sub-module includes: a first video analysis unit, a first completion degree acquisition unit and a first information generation unit, wherein:
The first video analysis unit is used for analyzing the makeup video to acquire the makeup steps and the user's makeup effect in the user's makeup video.
The first completion degree acquisition unit is used for analyzing, according to the makeup steps, the difference between the user's makeup effect and the makeup effect in the makeup strategy to obtain the completion degree of the user's makeup effect.
The first information generation unit is used for generating and displaying first prompt information if the completion degree of the user's makeup effect is less than a first specified threshold, the first prompt information including at least one of the completion degree of the makeup effect, the makeup effect in the makeup strategy, and a correction suggestion for the makeup effect.
Further, the completion degree acquisition sub-module includes: a second video analysis unit, a second completion degree acquisition unit and a second information generation unit, wherein:
The second video analysis unit is used for analyzing the makeup video to acquire the makeup steps and the user's makeup technique in the user's makeup video.
The second completion degree acquisition unit is used for analyzing, according to the makeup steps, the difference between the user's makeup technique and the guiding technique in the makeup strategy to obtain the completion degree of the user's makeup technique.
The second information generation unit is used for generating and displaying second prompt information if the completion degree of the makeup technique is less than a second specified threshold, the second prompt information including at least one of the completion degree of the makeup technique, the guiding technique, and a correction suggestion for the makeup technique.
The strategy adjustment sub-module is used for adjusting the makeup strategy to an alternative strategy if the completion degree is less than a preset value, the alternative strategy being a strategy with a higher degree of matching to the user's makeup video.
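The flow handled by the comparison analysis module, from per-step completion degrees through the prompt information and the switch to an alternative strategy, might be sketched as follows; the thresholds and the `compare_effect` scoring helper are hypothetical assumptions rather than values or functions defined by the apparatus.

```python
def compare_effect(user_effect, strategy_effect) -> float:
    """Return a completion degree in [0, 1]; the scoring itself is implementation-specific."""
    raise NotImplementedError

def analyse_makeup_video(step_results, *, effect_threshold=0.7, overall_threshold=0.5):
    """step_results: {step_name: (user_effect, strategy_effect)} extracted per makeup step."""
    prompts, scores = [], []
    for step, (user_effect, strategy_effect) in step_results.items():
        score = compare_effect(user_effect, strategy_effect)
        scores.append(score)
        if score < effect_threshold:
            # Prompt information: completion degree, target effect, correction suggestion.
            prompts.append({
                "step": step,
                "completion": score,
                "target_effect": strategy_effect,
                "suggestion": f"correction suggestion for step '{step}'",
            })
    overall = sum(scores) / len(scores) if scores else 1.0
    switch_to_alternative = overall < overall_threshold  # use a better-matching strategy
    return prompts, overall, switch_to_alternative
```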
The video processing module 1640 is configured to generate, according to the makeup strategy, a video in which the avatar guides makeup, and to display the video, wherein the avatar is generated in advance according to the face image of the user.
Further, the video processing module includes: a parameter generation sub-module, an avatar driving sub-module, and a video display sub-module, wherein:
The parameter generation sub-module is used for generating, according to the makeup strategy, the expression driving parameters and action driving parameters corresponding to the avatar guiding the user's makeup behavior.
Further, the parameter generation submodule includes a model processing unit, wherein:
The model processing unit is used for inputting the makeup strategy and the face image of the user into a parameter generation model to obtain the expression driving parameters and action driving parameters corresponding to the makeup strategy, wherein the parameter generation model is obtained by training on real makeup videos and is used for outputting, according to an input makeup strategy, the expression driving parameters and action driving parameters corresponding to that strategy.
The avatar driving sub-module is used for driving the expression and action of the avatar based on the expression driving parameters and the action driving parameters to generate the video in which the avatar guides makeup, the video being composed of multiple frames of images generated by driving the avatar; the video display sub-module is used for displaying the video.
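A rough sketch of this video processing flow follows: driving parameters are produced per frame from the makeup strategy, each frame is rendered by driving the avatar, and the frames together form the guidance video. The `param_model.predict` and `avatar.render` interfaces are assumptions for illustration, not an actual API.

```python
def generate_guidance_video(strategy, face_image, param_model, avatar):
    """Produce the frames of the avatar-guided makeup video from the makeup strategy."""
    # Assumed interface: the model returns one (expression, action) parameter pair per frame.
    driving_params = param_model.predict(strategy, face_image)
    frames = []
    for expression_params, action_params in driving_params:
        # Driving the avatar with both parameter sets yields one image frame.
        frames.append(avatar.render(expression=expression_params, action=action_params))
    return frames  # the frames are then encoded into the video shown on the display device
```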
Further, the avatar-based intelligent cosmetic apparatus 1600 further includes: an image acquisition module, a skin data acquisition module, and a suggestion generation module, wherein:
The image acquisition module is used for acquiring the face image of the user.
The skin data acquisition module is used for acquiring skin data of the user from the face image.
The suggestion generation module is used for analyzing the skin data and generating skin care advice.
It can be clearly understood by those skilled in the art that the avatar-based intelligent cosmetic apparatus provided in the embodiment of the present application can implement each process in the foregoing method embodiments. For convenience and brevity of description, the specific working processes of the apparatus and modules described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the embodiments provided herein, the coupling between the modules shown or discussed may be direct coupling, indirect coupling or a communication connection through some interfaces, apparatuses or modules, and may be electrical, mechanical or in other forms.
In addition, each functional module in the embodiment of the present application may be integrated in one processing module, or each module may exist alone physically, or two or more modules may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules.
Referring to fig. 17, fig. 17 is a structural block diagram of an electronic device for performing the avatar-based intelligent cosmetic method according to an embodiment of the present application. The electronic device 1700 may be an electronic device capable of running application programs, such as a smart cosmetic mirror, a smartphone, a tablet computer, an e-book reader, or the like. The electronic device 1700 in the present application may include one or more of the following components: a processor 1710, a memory 1720, and one or more application programs, wherein the one or more application programs may be stored in the memory 1720 and configured to be executed by the one or more processors 1710, the one or more application programs being configured to perform the methods described in the foregoing method embodiments.
Processor 1710 may include one or more processing cores. The processor 1710 uses various interfaces and lines to connect various parts of the electronic device 1700, and performs the various functions of the electronic device 1700 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1720 and by invoking data stored in the memory 1720. Optionally, the processor 1710 may be implemented in hardware in at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 1710 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; the modem is used to handle wireless communication. It will be appreciated that the modem may also not be integrated into the processor 1710 and may instead be implemented by a separate communication chip.
Memory 1720 may include Random Access Memory (RAM) or Read-Only Memory (ROM). Memory 1720 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1720 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (e.g., a touch function, a sound playing function, an image playing function), instructions for implementing the foregoing method embodiments, and the like. The data storage area may store data created by the electronic device 1700 in use (e.g., phone book, audio and video data, chat log data), and the like.
Referring to fig. 18, fig. 18 illustrates a storage unit for storing or carrying program codes for implementing an avatar-based intelligent cosmetic method according to an embodiment of the present application. The computer readable storage medium 1800 has stored therein program code that can be invoked by a processor to perform the methods described in the method embodiments described above.
The computer readable storage medium 1800 may be an electronic memory such as a flash memory, an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a hard disk, or a ROM. Optionally, the computer readable storage medium 1800 includes a non-transitory computer-readable storage medium. The computer readable storage medium 1800 has storage space for program code 1810 that performs any of the method steps described above. The program code can be read from or written to one or more computer program products. The program code 1810 may be compressed, for example, in a suitable form.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. An avatar-based intelligent cosmetic method, comprising:
acquiring a makeup demand instruction of a user, and displaying a recommendation list of makeup looks according to the makeup demand instruction;
acquiring a target makeup selected by the user in the recommendation list, wherein the target makeup corresponds to makeup strategies of different difficulties, and the higher the difficulty, the more makeup details the makeup strategy contains and the higher its similarity to the target makeup;
acquiring the makeup strategy corresponding to the target makeup; and generating, according to the makeup strategy, a video in which an avatar guides the makeup and displaying the video, wherein the avatar is generated in advance according to a face image of the user;
wherein the displaying of the recommendation list of makeup looks comprises:
acquiring a face image of the user, and generating facial features of the user according to the face image;
acquiring at least one makeup look according to the facial features;
displaying the recommendation list, wherein the recommendation list comprises the at least one makeup look;
wherein the acquiring of at least one makeup look according to the facial features comprises:
acquiring cosmetic product data and makeup style data input by the user;
acquiring makeup looks according to the facial features, the cosmetic product data and the makeup style data, wherein at least one of the makeup looks is obtained by using the cosmetic product;
wherein the acquiring of makeup looks according to the facial features, the cosmetic product data and the makeup style data comprises:
acquiring a plurality of makeup looks according to the facial features; screening the plurality of makeup looks using the cosmetic product data and the makeup style data as screening conditions, and taking the makeup looks that meet the screening conditions as the acquired makeup looks;
wherein the acquiring of the makeup strategy corresponding to the target makeup comprises:
acquiring a makeup video of the user, obtaining a makeup level of the user by analyzing the makeup video of the user, and acquiring the corresponding makeup strategy according to the makeup level of the user.
2. The method according to claim 1, wherein the obtaining the target makeup selected by the user in the recommendation list comprises:
acquiring a basic makeup selected by a user in the recommendation list;
Judging whether an adjustment instruction input by a user is acquired or not;
If an adjustment instruction input by a user is acquired, adjusting the basic makeup based on the adjustment instruction, and taking the adjusted basic makeup as the target makeup;
And if the adjustment instruction is not acquired, taking the basic makeup as the target makeup.
3. The method according to claim 1, wherein the acquiring of the makeup strategy corresponding to the target makeup comprises:
acquiring the makeup steps of the target makeup, wherein the makeup steps are used for representing the makeup behaviors which are performed in time order and are required to complete the target makeup;
and acquiring a makeup strategy corresponding to the makeup steps, wherein the makeup strategy comprises a cosmetic product, a makeup instruction method and a makeup effect, the cosmetic product comprises the name and dosage of the cosmetic product, the makeup instruction method comprises a makeup action and the area of the face where the cosmetic product is used, and the makeup effect comprises the color of the cosmetic product combined with the face area after makeup and the outline of the facial features.
4. The method according to claim 3, further comprising, after the acquiring of the makeup strategy corresponding to the target makeup:
Acquiring a makeup video of a user;
And comparing and analyzing the makeup video of the user with the makeup strategy, and displaying the result of the comparison and analysis.
5. The method of claim 4, wherein the comparing of the user's makeup video with the makeup strategy and the displaying of the result of the comparison analysis comprise:
comparing the user's makeup video with the makeup strategy to obtain the completion degree of the user's makeup, wherein the completion degree is used for representing the difference between the user's makeup video and the makeup strategy;
and if the completion degree is less than a preset value, adjusting the makeup strategy to an alternative strategy, wherein the alternative strategy is a strategy with a higher degree of matching to the user's makeup video.
6. The method of claim 5, wherein the comparing of the user's makeup video with the makeup strategy to obtain the completion degree of the user's makeup comprises:
analyzing the makeup video to obtain the makeup steps and the user's makeup effect in the user's makeup video;
according to the makeup steps, analyzing the difference between the user's makeup effect and the makeup effect in the makeup strategy, and obtaining the completion degree of the user's makeup effect;
and if the completion degree of the user's makeup effect is less than a first specified threshold, generating and displaying first prompt information, wherein the first prompt information comprises at least one of the completion degree of the makeup effect, the makeup effect in the makeup strategy, and a correction suggestion for the makeup effect.
7. The method of claim 5, wherein the comparing of the user's makeup video with the makeup strategy to obtain the completion degree of the user's makeup comprises:
analyzing the makeup video, and acquiring the makeup steps and the user's makeup technique in the user's makeup video;
according to the makeup steps, analyzing the difference between the user's makeup technique and the guiding technique in the makeup strategy, and obtaining the completion degree of the user's makeup technique;
and if the completion degree of the makeup technique is less than a second specified threshold, generating and displaying second prompt information, wherein the second prompt information comprises at least one of the completion degree of the makeup technique, the guiding technique, and a correction suggestion for the makeup technique.
8. The method according to any one of claims 1 to 7, wherein the generating, according to the makeup strategy, of a video in which the avatar guides makeup and the displaying of the video comprise:
generating, according to the makeup strategy, expression driving parameters and action driving parameters corresponding to the avatar guiding the user's makeup behavior;
Driving the expression and the action of the avatar based on the expression driving parameters and the action driving parameters, and generating a video of the avatar guiding makeup, wherein the video is composed of multi-frame images generated by driving the avatar;
and displaying the video.
9. The method of claim 8, wherein the generating, according to the makeup strategy, of expression driving parameters and action driving parameters corresponding to the avatar guiding the user's makeup behavior comprises:
inputting the makeup strategy into a parameter generation model, and obtaining the expression driving parameters and the action driving parameters corresponding to the makeup strategy, wherein the parameter generation model is obtained by training a real makeup video and is used for outputting the expression driving parameters and the action driving parameters corresponding to the makeup strategy according to the input makeup strategy.
10. The method as recited in claim 1, further comprising:
acquiring a face image of a user;
Acquiring skin data of a user from the face image;
Analyzing the skin data to generate skin care advice;
displaying the skin care advice.
11. An avatar-based intelligent cosmetic apparatus, the apparatus comprising:
The list display module is used for acquiring a makeup demand instruction of a user and displaying a recommendation list of makeup looks according to the makeup demand instruction;
The target makeup acquisition module is used for acquiring a target makeup selected by the user in the recommendation list, wherein the target makeup corresponds to makeup strategies of different difficulties, and the higher the difficulty, the more makeup details the makeup strategy contains and the higher its similarity to the target makeup;
The makeup strategy acquisition module is used for acquiring the makeup strategy corresponding to the target makeup;
The video processing module is used for generating, according to the makeup strategy, a video in which the avatar guides makeup and displaying the video, wherein the avatar is generated in advance according to a face image of the user;
The list display module further includes: a feature generation sub-module, a list makeup acquisition sub-module and a makeup display sub-module, wherein:
The feature generation sub-module is used for acquiring a face image of a user and generating facial features of the user according to the face image;
The list makeup acquisition sub-module is used for acquiring at least one makeup look according to the facial features;
The makeup display sub-module is used for displaying the recommendation list, and the recommendation list comprises the at least one makeup look;
The list makeup acquisition sub-module comprises: a data acquisition unit and a product makeup acquisition unit, wherein:
The data acquisition unit is used for acquiring cosmetic product data and makeup style data input by the user;
The product makeup acquisition unit is used for acquiring makeup looks according to the facial features, the cosmetic product data and the makeup style data, wherein at least one of the makeup looks is obtained by using the cosmetic product;
The product makeup acquisition unit is further used for acquiring a plurality of makeup looks according to the facial features, screening the plurality of makeup looks using the cosmetic product data and the makeup style data as screening conditions, and taking the makeup looks that meet the screening conditions as the acquired makeup looks;
The makeup strategy acquisition module is specifically used for acquiring a makeup video of the user, obtaining a makeup level of the user by analyzing the makeup video of the user, and acquiring the corresponding makeup strategy according to the makeup level of the user.
12. An electronic device, comprising:
one or more processors;
A memory;
One or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the method of any of claims 1-10.
13. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a program code, which is callable by a processor for executing the method of any one of the claims 1-10.
CN202010801807.0A 2020-08-11 2020-08-11 Intelligent cosmetic method and device based on virtual image, electronic equipment and storage medium Active CN111968248B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010801807.0A CN111968248B (en) 2020-08-11 2020-08-11 Intelligent cosmetic method and device based on virtual image, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010801807.0A CN111968248B (en) 2020-08-11 2020-08-11 Intelligent cosmetic method and device based on virtual image, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111968248A CN111968248A (en) 2020-11-20
CN111968248B true CN111968248B (en) 2024-06-21

Family

ID=73365110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010801807.0A Active CN111968248B (en) 2020-08-11 2020-08-11 Intelligent cosmetic method and device based on virtual image, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111968248B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114625955A (en) * 2020-12-14 2022-06-14 宝能汽车集团有限公司 Image replacing method and device for vehicle-mounted robot and server
CN112749634A (en) * 2020-12-28 2021-05-04 广州星际悦动股份有限公司 Control method and device based on beauty equipment and electronic equipment
CN112819718A (en) * 2021-02-01 2021-05-18 深圳市商汤科技有限公司 Image processing method and device, electronic device and storage medium
CN115120077A (en) * 2021-03-20 2022-09-30 海信集团控股股份有限公司 Cosmetic mirror and method for assisting make-up
CN113672752A (en) * 2021-07-28 2021-11-19 杭州知衣科技有限公司 Garment multi-mode fusion search system and method based on deep learning
CN116797864B (en) * 2023-04-14 2024-03-19 东莞莱姆森科技建材有限公司 Auxiliary cosmetic method, device, equipment and storage medium based on intelligent mirror

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229415A (en) * 2018-01-17 2018-06-29 广东欧珀移动通信有限公司 Information recommendation method, device, electronic equipment and computer readable storage medium
CN108256432A (en) * 2017-12-20 2018-07-06 歌尔股份有限公司 A kind of method and device for instructing makeup
CN109446365A (en) * 2018-08-30 2019-03-08 新我科技(广州)有限公司 A kind of intelligent cosmetic exchange method and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108937407A (en) * 2018-05-25 2018-12-07 深圳市赛亿科技开发有限公司 A kind of Intelligent mirror making-up guidance method and system
CN109784281A (en) * 2019-01-18 2019-05-21 深圳壹账通智能科技有限公司 Products Show method, apparatus and computer equipment based on face characteristic

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256432A (en) * 2017-12-20 2018-07-06 歌尔股份有限公司 A kind of method and device for instructing makeup
CN108229415A (en) * 2018-01-17 2018-06-29 广东欧珀移动通信有限公司 Information recommendation method, device, electronic equipment and computer readable storage medium
CN109446365A (en) * 2018-08-30 2019-03-08 新我科技(广州)有限公司 A kind of intelligent cosmetic exchange method and storage medium

Also Published As

Publication number Publication date
CN111968248A (en) 2020-11-20

Similar Documents

Publication Publication Date Title
CN111968248B (en) Intelligent cosmetic method and device based on virtual image, electronic equipment and storage medium
KR102241153B1 (en) Method, apparatus, and system generating 3d avartar from 2d image
US10799010B2 (en) Makeup application assist device and makeup application assist method
WO2021147920A1 (en) Makeup processing method and apparatus, electronic device, and storage medium
CN111432267B (en) Video adjusting method and device, electronic equipment and storage medium
US20190130652A1 (en) Control method, controller, smart mirror, and computer readable storage medium
WO2017110041A1 (en) Makeup part creation device, makeup part usage device, makeup part creation method, makeup part usage method, makeup part creation program, and makeup part usage program
US11776187B2 (en) Digital makeup artist
US11961169B2 (en) Digital makeup artist
JP7278307B2 (en) Computer program, server device, terminal device and display method
CN112819718A (en) Image processing method and device, electronic device and storage medium
CN108932654A (en) A kind of virtually examination adornment guidance method and device
CN111523981A (en) Virtual trial method and device, electronic equipment and storage medium
CN116830073A (en) Digital color palette
CN113661520A (en) Modifying the appearance of hair
WO2022257766A1 (en) Image processing method and apparatus, device, and medium
CN112632349A (en) Exhibition area indicating method and device, electronic equipment and storage medium
CN112190921A (en) Game interaction method and device
CN114021022A (en) Dressing information acquisition method and device, vehicle and storage medium
KR20230118191A (en) digital makeup artist
US11321882B1 (en) Digital makeup palette
JP2024506454A (en) digital makeup palette
CN112633129A (en) Video analysis method and device, electronic equipment and storage medium
JP2023177260A (en) Program, information processing device and method
CN108090882A (en) Cosmetic method on a kind of real-time digital image of interactive mode

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant