CN111968248A - Intelligent makeup method and device based on virtual image, electronic equipment and storage medium


Info

Publication number
CN111968248A
Authority
CN
China
Prior art keywords
makeup
user
strategy
video
target
Legal status
Pending
Application number
CN202010801807.0A
Other languages
Chinese (zh)
Inventor
常向月
Current Assignee
Shenzhen Zhuiyi Technology Co Ltd
Original Assignee
Shenzhen Zhuiyi Technology Co Ltd
Application filed by Shenzhen Zhuiyi Technology Co Ltd
Priority to CN202010801807.0A
Publication of CN111968248A

Classifications

    • G06T 19/006: Mixed reality (manipulating 3D models or images for computer graphics)
    • G06N 20/00: Machine learning
    • G06V 40/161: Human faces; detection, localisation, normalisation
    • G06V 40/171: Human faces; local features and components, facial parts, occluding parts (e.g. glasses), geometrical relationships
    • G06V 40/172: Human faces; classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an avatar-based intelligent makeup method, which comprises the following steps: displaying a recommendation list of makeup looks; acquiring the target makeup look selected by the user from the recommendation list; acquiring a makeup strategy corresponding to the target makeup look; and generating and displaying, according to the makeup strategy, a video in which an avatar guides the makeup process, the avatar being generated in advance from the user's face image. According to the embodiments of the application, the user is guided through the makeup look they selected by an avatar generated from their own face image, realizing personalized makeup guidance and improving the effectiveness of that guidance.

Description

Intelligent makeup method and device based on virtual image, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of human-computer interaction technologies, and more particularly, to an intelligent makeup method and apparatus based on an avatar, an electronic device, and a storage medium.
Background
As living standards rise, so does the pursuit of beauty, and many people use makeup to improve their appearance in daily life or at work. However, users differ in face shape and facial features, and in how well they have mastered makeup, so it is difficult for a user to obtain personalized makeup guidance suited to their skill level; as a result, users' makeup needs are hard to satisfy.
Disclosure of Invention
In view of the above problems, the present application provides an intelligent makeup method, apparatus, electronic device and storage medium based on an avatar, which can obtain a recommended makeup look and guide makeup based on the avatar.
In a first aspect, an embodiment of the present application provides an intelligent makeup method based on an avatar, the method including: displaying a recommended list of makeup; acquiring a target makeup selected by a user in the recommendation list; obtaining a makeup strategy corresponding to the target makeup; and generating a video for guiding makeup by an avatar according to the makeup strategy and displaying the video, wherein the avatar is generated in advance according to the face image of the user.
Optionally, the displaying a recommended list of makeup includes: acquiring a face image of a user, and generating facial features of the user according to the face image; obtaining at least one makeup look according to the facial features; displaying the recommendation list, the recommendation list including at least one of the makeup looks.
Optionally, the obtaining at least one makeup look according to the facial features comprises: acquiring makeup product data and makeup style data input by the user; and obtaining at least one makeup look based on the facial features, the makeup product data, and the makeup style data, the makeup look being one that can be achieved using the specified makeup products.
Optionally, the obtaining of the target makeup appearance selected by the user in the recommendation list includes: obtaining the basic makeup selected by the user in the recommendation list; judging whether an adjusting instruction input by a user is acquired; if an adjusting instruction input by a user is acquired, adjusting the basic makeup based on the adjusting instruction, and taking the adjusted basic makeup as the target makeup; and if the adjustment instruction is not acquired, taking the base makeup as the target makeup.
Optionally, the obtaining of the makeup strategy corresponding to the target makeup includes: obtaining the makeup steps of the target makeup, the makeup steps representing the chronologically ordered makeup actions required to complete the target makeup; and obtaining a makeup strategy corresponding to the makeup steps, wherein the makeup strategy comprises a makeup product, a makeup guidance technique, and a makeup effect, the makeup product comprises the name and amount of the product, the makeup guidance technique comprises the makeup action and the area of the face where the product is applied, and the makeup effect comprises the color of the product after blending with the facial area and the contour lines of the facial features.
Optionally, after obtaining the makeup strategy corresponding to the target makeup, the intelligent makeup method based on an avatar further includes: obtaining a makeup video of a user; and comparing and analyzing the makeup video of the user and the makeup strategy, and displaying the result of the comparison and analysis.
Optionally, the comparing the makeup video of the user with the makeup strategy and displaying the result of the comparing analysis includes: comparing and analyzing the makeup video of the user and the makeup strategy to obtain the completion degree of the makeup of the user, wherein the completion degree is used for representing the difference between the makeup video of the user and the makeup strategy; and if the completion degree is smaller than a preset numerical value, adjusting the makeup strategy into an alternative strategy, wherein the alternative strategy is a strategy with higher matching degree with the makeup video of the user.
Optionally, the comparing and analyzing the makeup video of the user and the makeup strategy to obtain the completion degree of the makeup of the user includes: analyzing the makeup video to obtain the makeup steps and the user's makeup effect in the video; analyzing, step by step, the difference between the user's makeup effect and the makeup effect in the makeup strategy to obtain the completion degree of the user's makeup effect; and if the completion degree of the makeup effect is smaller than a first specified threshold, generating and displaying first prompt information, the first prompt information including at least one of the completion degree of the makeup effect, the makeup effect in the makeup strategy, and a makeup effect correction suggestion.
Optionally, the comparing and analyzing the makeup video of the user and the makeup strategy to obtain the completion degree of the makeup of the user includes: analyzing the makeup video to obtain the makeup steps and the user's makeup technique in the video; analyzing, step by step, the difference between the user's makeup technique and the makeup guidance technique in the makeup strategy to obtain the completion degree of the user's makeup technique; and if the completion degree of the makeup technique is smaller than a second specified threshold, generating and displaying second prompt information, the second prompt information including at least one of the completion degree of the makeup technique, the makeup guidance technique, and a makeup technique correction suggestion.
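The application does not specify how the completion degrees are computed; the sketch below is only one minimal way to express the two-threshold check described above, with the scoring helper and the threshold values as illustrative assumptions.

    # Minimal sketch of the completion-degree check; the scoring function and
    # the threshold values are assumptions, not part of the disclosure.
    FIRST_THRESHOLD = 0.7   # threshold for the makeup effect
    SECOND_THRESHOLD = 0.6  # threshold for the makeup technique

    def completion_degree(user_value: float, strategy_value: float) -> float:
        """Return a score in [0, 1]; 1.0 means no difference from the strategy."""
        return max(0.0, 1.0 - abs(user_value - strategy_value))

    def check_step(user_effect, strategy_effect, user_technique, strategy_technique):
        prompts = []
        effect_score = completion_degree(user_effect, strategy_effect)
        if effect_score < FIRST_THRESHOLD:
            prompts.append(("first prompt", effect_score, "makeup effect correction suggestion"))
        technique_score = completion_degree(user_technique, strategy_technique)
        if technique_score < SECOND_THRESHOLD:
            prompts.append(("second prompt", technique_score, "makeup technique correction suggestion"))
        return prompts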
Optionally, the generating and displaying a video of an avatar guiding makeup according to the makeup strategy includes: generating expression driving parameters and action driving parameters corresponding to the virtual image to guide the user to make up according to the makeup strategy; driving the expression and the action of the virtual image based on the expression driving parameter and the action driving parameter to generate a video for guiding makeup by the virtual image, wherein the video is formed by a plurality of frames of images generated by driving the virtual image; and displaying the video.
Optionally, the generating of the expression driving parameter and the action driving parameter corresponding to the avatar to guide the user to make up according to the makeup strategy includes: inputting the makeup strategy and the face image of the user into a parameter generation model, and acquiring the expression driving parameters and the action driving parameters corresponding to the makeup strategy, wherein the parameter generation model is obtained by real person makeup video training and is used for outputting the expression driving parameters and the action driving parameters corresponding to the makeup strategy according to the input makeup strategy.
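The parameter generation model is only described at the interface level; the following sketch shows one plausible way such a model could be called step by step, with its input and output format assumed rather than specified by the application.

    # Sketch only: the per-step calling convention and the shape of the returned
    # driving parameters are assumptions about the parameter generation model.
    from typing import List, Tuple

    def generate_driving_parameters(parameter_model,
                                    makeup_strategy: dict,
                                    face_image) -> Tuple[List, List]:
        """parameter_model is assumed to map one makeup step plus the user's face
        image to per-frame expression and action driving parameters."""
        expression_params, action_params = [], []
        for step in makeup_strategy["steps"]:
            expr, act = parameter_model(step, face_image)  # one call per makeup step
            expression_params.append(expr)
            action_params.append(act)
        return expression_params, action_params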
Optionally, the avatar-based intelligent makeup method further includes: acquiring a face image of a user; acquiring skin data of a user from the face image; analyzing the skin data to generate skin care advice; displaying the skin care advice.
In a second aspect, the present application provides an intelligent virtual image-based makeup apparatus, including: the list display module is used for displaying a recommendation list of makeup; the target makeup obtaining module is used for obtaining the target makeup selected by the user in the recommendation list; the makeup strategy obtaining module is used for obtaining a makeup strategy corresponding to the target makeup; and the video processing module is used for generating a video for guiding makeup by an avatar according to the makeup strategy and displaying the video, wherein the avatar is generated in advance according to the face image of the user.
Optionally, the list display module includes: the makeup appearance system comprises a feature generation submodule, a list makeup appearance acquisition submodule and a makeup appearance display submodule, wherein: the feature generation submodule is used for acquiring a face image of a user and generating facial features of the user according to the face image; a list makeup obtaining sub-module for obtaining at least one makeup based on the facial features; a makeup display sub-module for displaying the recommendation list, the recommendation list including at least one of the makeup.
Optionally, the makeup obtaining sub-module includes a data acquisition unit and a makeup acquisition unit, wherein: the data acquisition unit is used for acquiring makeup product data and makeup style data input by the user; and the makeup acquisition unit is used for obtaining at least one makeup look based on the facial features, the makeup product data, and the makeup style data, the makeup look being one that can be achieved using the specified makeup products.
Optionally, the target makeup obtaining module includes: the makeup selection submodule, the instruction judgment submodule, the first makeup acquisition submodule and the second makeup acquisition submodule, wherein: the makeup selection submodule is used for acquiring the basic makeup selected by the user in the recommendation list; the instruction judgment submodule is used for judging whether an adjustment instruction input by a user is acquired; a first makeup obtaining sub-module, configured to, if an adjustment instruction input by a user is obtained, adjust the basic makeup based on the adjustment instruction, and take the adjusted basic makeup as the target makeup; and the second makeup obtaining sub-module is used for taking the basic makeup as the target makeup if the adjusting instruction is not obtained.
Optionally, the makeup strategy acquiring module includes a makeup step acquisition submodule and a step-by-step strategy acquisition submodule, wherein: the makeup step acquisition submodule is used for acquiring the makeup steps of the target makeup, the makeup steps representing the chronologically ordered makeup actions required to complete the target makeup; and the step-by-step strategy acquisition submodule is used for acquiring a makeup strategy corresponding to the makeup steps, wherein the makeup strategy comprises a makeup product, a makeup guidance technique, and a makeup effect, the makeup product comprises the name and amount of the product, the makeup guidance technique comprises the makeup action and the area of the face where the product is applied, and the makeup effect comprises the color of the product after blending with the facial area and the contour lines of the facial features.
Optionally, after obtaining the makeup strategy corresponding to the target makeup, the avatar-based intelligent makeup apparatus further includes: video acquisition module and contrastive analysis module, wherein: the video acquisition module is used for acquiring a makeup video of a user; and the comparison analysis module is used for comparing and analyzing the makeup video of the user and the makeup strategy and displaying the result of the comparison analysis.
Optionally, the comparative analysis module comprises: a completion acquisition submodule and a strategy adjustment submodule, wherein: the completion degree obtaining submodule is used for comparing and analyzing the makeup video of the user and the makeup strategy to obtain the completion degree of the makeup of the user, and the completion degree is used for representing the difference between the makeup video of the user and the makeup strategy; and the strategy adjusting submodule is used for adjusting the makeup strategy into an alternative strategy if the completion degree is smaller than a preset numerical value, wherein the alternative strategy is a strategy with higher matching degree with the makeup video of the user.
Optionally, the completion obtaining sub-module includes a first video analysis unit, a first completion acquisition unit and a first information generation unit, wherein: the first video analysis unit is used for analyzing the makeup video to obtain the makeup steps and the user's makeup effect in the video; the first completion acquisition unit is used for analyzing, step by step, the difference between the user's makeup effect and the makeup effect in the makeup strategy to obtain the completion degree of the user's makeup effect; and the first information generation unit is used for generating and displaying first prompt information if the completion degree of the makeup effect is smaller than a first specified threshold, the first prompt information including at least one of the completion degree of the makeup effect, the makeup effect in the makeup strategy, and a makeup effect correction suggestion.
Optionally, the completion obtaining sub-module includes a second video analysis unit, a second completion acquisition unit, and a second information generation unit, wherein: the second video analysis unit is used for analyzing the makeup video to obtain the makeup steps and the user's makeup technique in the video; the second completion acquisition unit is used for analyzing, step by step, the difference between the user's makeup technique and the makeup guidance technique in the makeup strategy to obtain the completion degree of the user's makeup technique; and the second information generation unit is used for generating and displaying second prompt information if the completion degree of the makeup technique is smaller than a second specified threshold, the second prompt information including at least one of the completion degree of the makeup technique, the makeup guidance technique, and a makeup technique correction suggestion.
Optionally, the video processing module includes: parameter generation submodule, avatar drive submodule and video display submodule, wherein: the parameter generation submodule is used for generating expression driving parameters and action driving parameters corresponding to the virtual image to guide the user to make up according to the makeup strategy; the virtual image driving submodule is used for driving the expression and the action of the virtual image based on the expression driving parameter and the action driving parameter to generate a video for guiding makeup by the virtual image, and the video is formed by a plurality of frames of images generated by driving the virtual image; and the video display sub-module is used for displaying the video.
Optionally, the parameter generation submodule includes a model processing unit, wherein: and the model processing unit is used for inputting the makeup strategy and the facial image of the user into a parameter generation model to obtain the expression driving parameters and the action driving parameters corresponding to the makeup strategy, and the parameter generation model is obtained by real person makeup video training and is used for outputting the expression driving parameters and the action driving parameters corresponding to the makeup strategy according to the input makeup strategy.
Optionally, the avatar-based intelligent cosmetic device further comprises: an image acquisition module, a skin data acquisition module, and a suggestion generation module, wherein: the image acquisition module is used for acquiring a face image of a user; the skin data acquisition module is used for acquiring skin data of a user from the face image; and the suggestion generation module is used for analyzing the skin data to generate skin care suggestions.
In a third aspect, an embodiment of the present application provides an electronic device, which may include: a memory; one or more processors coupled with the memory; one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs configured to perform the method of the first aspect as described above.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having program code stored therein, where the program code is called by a processor to execute the method according to the first aspect.
The embodiments of the application provide an avatar-based intelligent makeup method and apparatus, an electronic device, and a storage medium. A recommendation list of makeup looks is displayed; the target makeup look selected by the user from the recommendation list is acquired; the makeup strategy corresponding to the target makeup look is acquired; and finally a video in which an avatar guides the makeup process is generated and displayed according to the makeup strategy, the avatar being generated in advance from the user's face image. The user can therefore be guided through the makeup look they selected by an avatar generated from their own face image, realizing personalized makeup guidance and improving the effectiveness of that guidance.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application environment suitable for use in embodiments of the present application;
FIG. 2 is a flowchart illustrating an avatar-based intelligent makeup method according to an embodiment of the present application;
FIG. 3 illustrates an interactive interface diagram provided by an exemplary embodiment of the present application;
FIG. 4 is a flowchart illustrating an avatar-based intelligent makeup method according to another embodiment of the present application;
FIG. 5 is a flowchart illustrating step S220 in FIG. 4 according to an exemplary embodiment of the present application;
FIG. 6 is a flowchart illustrating an avatar-based intelligent makeup method according to still another embodiment of the present application;
FIG. 7 illustrates yet another interactive interface schematic provided by an exemplary embodiment of the present application;
FIG. 8 illustrates another interactive interface schematic provided by an exemplary embodiment of the present application;
FIG. 9 is a flowchart illustrating an avatar-based intelligent makeup method according to still another embodiment of the present application;
FIG. 10 is a flowchart illustrating an avatar-based intelligent makeup method according to still another embodiment of the present application;
FIG. 11 is a flowchart illustrating step S560 of FIG. 10 according to an exemplary embodiment of the present application;
FIG. 12 is a flowchart illustrating step S561 in FIG. 11 according to an exemplary embodiment of the present application;
FIG. 13 is a flowchart illustrating step S561 of FIG. 11, according to another exemplary embodiment of the present application;
FIG. 14 is a flowchart illustrating an avatar-based intelligent makeup method according to yet another embodiment of the present application;
FIG. 15 is a flowchart illustrating an avatar-based intelligent makeup method according to still another embodiment of the present application;
FIG. 16 is a block diagram illustrating a structure of an avatar-based intelligent make-up apparatus according to an embodiment of the present application;
FIG. 17 is a block diagram illustrating a structure of an electronic device for performing an avatar-based intelligent makeup method according to an embodiment of the present application;
FIG. 18 illustrates a storage unit for storing or carrying program codes implementing the avatar-based intelligent makeup method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
As society progresses, people increasingly pursue quality of life, and many improve their appearance through makeup in daily life or at work. Although there are many makeup tutorial videos online to refer to, users differ in face shape and in the distribution of their facial features, so it is difficult for a user to find a suitable makeup look merely by studying tutorial videos. In addition, because users have mastered makeup to different degrees, they also cannot obtain effective guidance matched to their skill level. Existing makeup guidance methods therefore cannot satisfy users' actual makeup needs.
To address these problems, the inventor studied the difficulties of makeup in practice and users' real makeup needs, and proposes an avatar-based intelligent makeup method and apparatus, an electronic device, and a storage medium, which guide the user through the makeup look they selected by means of an avatar generated from the user's face image, thereby realizing personalized makeup guidance and improving the effectiveness of that guidance.
In order to better understand the intelligent makeup method, device, electronic device and storage medium based on the avatar provided in the embodiments of the present application, an application environment suitable for the embodiments of the present application will be described first.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an application environment suitable for the embodiments of the present application. The avatar-based intelligent makeup method provided by the embodiments of the application can be applied to the polymorphic interaction system 10 shown in fig. 1. The polymorphic interaction system 10 includes a terminal device 100 and a server 200, the server 200 being communicatively coupled to the terminal device 100. The server 200 may be a conventional server, a cloud server, a server cluster composed of a plurality of servers, or a server center composed of a plurality of servers. The server 200 may be configured to provide background services for the user; the background services may include, but are not limited to, obtaining the makeup strategy corresponding to a target makeup look, generating a video in which an avatar guides the makeup process according to the makeup strategy, and the like, which is not limited herein.
The terminal device 100 may be any electronic device that has a display screen and supports data input, including but not limited to a smart cosmetic mirror, a robot, a smart phone, a tablet computer, a laptop computer, a desktop computer, a wearable electronic device, or any other electronic device in which an avatar-based intelligent makeup apparatus is deployed. Specifically, data input may be based on a voice module, a character input module, an image input module, or a video input module provided on the terminal device 100, or on a gesture recognition module provided on the terminal device 100, so that the user can interact through voice, text, images, video, gestures, and other modalities.
A client application program may be installed on the terminal device 100, and the user may communicate with the server 200 through the client application program (for example an APP or a WeChat applet). Specifically, a corresponding server-side application program is installed on the server 200; the user may register a user account with the server 200 through the client application program and communicate with the server 200 based on that account. For example, the user logs into the account in the client application and, based on that account, inputs text information, voice information, image information, video information, and the like through the client application. After receiving the information input by the user, the client application program may send it to the server 200, so that the server 200 can receive, process, and store it; the server 200 may also return corresponding output information to the terminal device 100 according to the received information.
In some embodiments, the means for processing the information input by the user may also be disposed on the terminal device 100, so that the terminal device 100 can interact with the user without relying on establishing communication with the server 200, and in this case, the polymorphic interaction system 10 may only include the terminal device 100.
The above application environments are only examples for facilitating understanding, and it is to be understood that the embodiments of the present application are not limited to the above application environments.
The following describes in detail an avatar-based intelligent makeup method, apparatus, electronic device and medium according to embodiments of the present application.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating an avatar-based intelligent makeup method according to an embodiment of the present application, where the avatar-based intelligent makeup method according to the present embodiment may be applied to an electronic device. The electronic device may be the terminal device having the display screen or other image output device, or the server. In a specific embodiment, the avatar-based intelligent makeup method may be applied to an avatar-based intelligent makeup apparatus 1600 as shown in fig. 16 and an electronic device 1700 as shown in fig. 17. As will be explained in detail below with respect to the flow shown in fig. 2, the illustrated avatar-based intelligent makeup method may specifically include the following steps:
step S110: a recommended list of makeup is displayed.
The recommendation list can include at least one makeup look, and a makeup look can be either a whole-face look or a partial look. Specifically, a whole-face look may consist of eyebrow makeup, lip makeup, eye makeup, contouring, blush, and the like, which together make up the full face, while a partial look may be one or more of eyebrow makeup, lip makeup, eye makeup, contouring, and blush, which is not limited herein. As one mode, the terminal device may acquire the user's makeup requirement instruction and display a recommendation list of makeup looks according to that instruction: if the instruction asks for a whole-face look, a recommendation list of whole-face looks is displayed, and if it asks for a partial look, a recommendation list of the corresponding partial looks is displayed. For example, when the terminal device receives a makeup requirement instruction asking for eye makeup, a recommendation list including at least one eye makeup look is displayed.
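As a minimal illustration of how the requirement instruction could select the list to display, the following sketch uses an in-memory catalogue; the category names and entries are assumptions made for the example, not data from the application.

    # Sketch only: category names and catalogue contents are illustrative.
    MAKEUP_CATALOGUE = {
        "whole": ["natural whole-face look", "evening whole-face look"],
        "eye": ["daily eye makeup", "smoky eye makeup"],
        "eyebrow": ["straight eyebrow makeup"],
        "lip": ["matte lip makeup"],
    }

    def build_recommendation_list(requirement: str) -> list:
        """Return the makeup looks matching the user's makeup requirement
        instruction, e.g. 'whole' or a partial category such as 'eye'."""
        return MAKEUP_CATALOGUE.get(requirement, MAKEUP_CATALOGUE["whole"])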
In some embodiments, the makeup looks in the recommendation list may be pre-stored in a database local to the terminal device, may be generated in real time by the terminal device or the server, or may be input into the terminal device by the user, which is not limited herein. The makeup in the recommendation list is obtained in various ways, so that the diversity of the makeup in the recommendation list can be improved, and the makeup range selectable by a user is increased.
As one way, the makeup in the recommendation list may be stored in advance in a database local to the terminal device, and specifically, the database stores therein makeup data such as different eyebrow makeup, lip makeup, eye makeup, blush makeup, and the like, and when the terminal device receives a request to display the recommendation list of makeup, the makeup data of the recommendation list may be acquired from the database, thereby reducing the time required to generate makeup in real time.
As another way, the makeup looks in the recommendation list may be generated in real time by the terminal device or the server. Specifically, when a request to display the recommendation list is received, the terminal device generates the makeup data in real time, or the server generates it and transmits it to the terminal device, and the recommendation list is then displayed on the terminal device's screen. For example, the terminal device may capture the user's face image, the server may find makeup looks that match that face image, and the makeup data may then be transmitted to the terminal device and displayed.
As another way, the makeup looks in the recommendation list may be looks input into the terminal device by the user. The input may be a face image corresponding to a look, which can show either a whole-face look or a partial look. For example, the user may input an image of a favorite celebrity's makeup and use either the whole look in the image, or just the eye makeup in it, as a look in the recommendation list. Alternatively, the input may be a look composed of several partial looks selected by the user; for example, the user can pick an eyebrow makeup, a lip makeup, and an eye makeup from the partial looks, and the look composed of these selections is displayed in the recommendation list.
In some embodiments, the recommendation list of makeup looks may be displayed upon detecting an instruction from the user requesting makeup guidance. The instruction for requesting makeup guidance may be preset on the terminal device and may be a screen touch instruction, a voice instruction, an action instruction, a gesture instruction, or the like. For example, when the terminal device detects the user's complete facial image, it can ask by voice whether makeup guidance is needed, and if a voice instruction confirming that the user wants guidance is received, the recommendation list of makeup looks is displayed.
In some embodiments, after obtaining the makeup data of the recommendation list, the terminal device may display to the user a makeup effect picture corresponding to each look, so that the user can select the target look from the effect pictures. A makeup effect picture can show the look applied to a real person, or show the look applied to an avatar; optionally, the avatar can be one generated from the user's face image that looks the same as the user, or a preset avatar that looks different from the user.
For example, fig. 3 shows a schematic view of an interactive interface provided by an exemplary embodiment of the present application. The interface includes an avatar generated from the user's face image and, below the avatar, a recommendation list containing several makeup looks; if the user selects a look in the recommendation list on this interface, a makeup effect picture of the avatar wearing that look is displayed on the interactive interface. Through the avatar's effect picture, the user can intuitively see how the finished makeup would look on their own face, making it easier to choose a look that suits them or that they like.
In some embodiments, the user's operation instructions can be acquired on the interactive interface displaying the recommendation list, and the makeup effect picture shown on the interface can be enlarged or reduced according to those instructions, so that the user can see the details of the look and choose one that suits them. For example, the terminal device may be preset so that double-clicking an area of the screen enlarges it; when the user double-clicks a makeup position on the interactive interface, the corresponding area in the effect picture is enlarged.
It is understood that step S110 may be performed locally by the terminal device, or may be performed by the terminal device and the server separately, and according to different actual application scenarios, the task may be allocated according to requirements, which is not limited herein.
Step S120: and acquiring the target makeup selected by the user in the recommendation list.
After displaying the recommendation list of the makeup, the terminal device may obtain a target makeup selected by the user in the recommendation list, where the target makeup is a makeup that the user wants to obtain a makeup guide, and the selection operation of the user may be a screen touch instruction, a voice instruction, an action instruction, a gesture instruction, and the like, which is not limited herein. For example, the target makeup appearance selected by the user may be obtained by obtaining a click instruction of the user on the makeup appearance in the recommendation list on the screen, or the target makeup appearance selected by the user may be obtained as makeup appearance 1 in the recommendation list by obtaining a voice instruction "select makeup appearance 1" of the user.
In some embodiments, the face image of the user may be acquired, facial features may be extracted according to the face image of the user, the facial features of the user may be matched with the makeup data in the recommendation list, the matching probability between the facial features and the makeup data may be calculated, the higher the matching probability is, the more suitable the makeup is for the user, and the makeup corresponding to the highest matching probability may be used as the target makeup. As one way, the matching probability of the facial features and the makeup data may be calculated by a machine learning model trained in advance, specifically, the model may be trained in advance based on face data associated with the makeup effect as a sample, and after inputting the facial features of the user and the makeup data in the recommendation list into the machine learning model, the matching probability of the facial features and the makeup data may be output.
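Assuming a pre-trained matching model of the kind described above is available as a callable, one way the highest-probability look could be selected is sketched below; the model interface is an assumption, not something specified by the application.

    # Sketch only: matching_model is assumed to return a probability in [0, 1].
    def pick_target_makeup(matching_model, facial_features, recommendation_list):
        """Score every look in the recommendation list against the user's facial
        features and return the look with the highest matching probability."""
        scored = [(matching_model(facial_features, makeup), makeup)
                  for makeup in recommendation_list]
        best_probability, target_makeup = max(scored, key=lambda item: item[0])
        return target_makeup, best_probability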
Since the makeup in the recommendation list may not necessarily satisfy the actual needs of the user, in some embodiments, the base makeup selected in the recommendation list by the user and the adjustment instruction for the base makeup may be obtained, and the adjusted base makeup may be used as the target makeup selected by the user. Specifically, please refer to the following embodiments.
It is understood that step S120 in this embodiment may be performed locally by the terminal device.
Step S130: and obtaining a makeup strategy corresponding to the target makeup.
The makeup strategy is the operation information corresponding to the target makeup look; based on it, makeup techniques, operation steps, and other information can be provided to guide the user toward makeup close to the target look. The makeup strategy may include makeup products, a makeup guidance technique, a makeup effect, and the like: the makeup products include the name and amount of each product, the guidance technique includes the makeup action and the area of the face where the product is applied, and the makeup effect includes the color of the product after blending with the facial area and the contour lines of the facial features. For example, when the target look selected by the user is eyebrow makeup, the makeup strategy may include: selecting the brand and color of the eyebrow products, outlining the eyebrow shape of the target look with an eyebrow pencil, filling the inside of the outline with eyebrow powder, shading the eyebrow naturally with the eyebrow pencil, and then emphasizing the eyebrow with heavier strokes.
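To make the fields listed above concrete, the sketch below represents a makeup strategy as a small data structure; the field names, the difficulty label, and the sample eyebrow step are illustrative assumptions, since the application does not prescribe a schema.

    # Illustrative schema; field names and sample values are assumptions.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class MakeupStep:
        product_name: str       # e.g. "eyebrow pencil"
        product_amount: str     # dosage of the product
        action: str             # makeup action, e.g. "outline the eyebrow shape"
        face_region: str        # area of the face where the product is applied
        expected_color: str     # color after blending with the facial area
        expected_contour: str   # contour line of the facial feature after the step

    @dataclass
    class MakeupStrategy:
        target_makeup: str
        difficulty: str         # e.g. "simple", "moderate", "difficult"
        steps: List[MakeupStep] = field(default_factory=list)

    eyebrow_strategy = MakeupStrategy(
        target_makeup="eyebrow makeup",
        difficulty="simple",
        steps=[MakeupStep("eyebrow pencil", "a small amount",
                          "outline the eyebrow shape", "brow area",
                          "soft brown", "natural arched contour")],
    )

A more difficult strategy for the same target look would simply carry more, and more detailed, steps.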
In some embodiments, one target makeup look may correspond to several different makeup strategies. Specifically, the target look can correspond to strategies of different difficulty: the more difficult the strategy, the more detailed the makeup, the harder it is to complete, and the closer the result is to the target look. For example, a simpler eye-shadow strategy for the target look can be completed with a single-color eye shadow palette and one brush, and the similarity between its effect and the target look is 70%; a more difficult strategy for the same eye shadow layers several colors and uses several brushes for blending, and the similarity between its effect and the target look is 100%.
As one way, the corresponding makeup strategy can be obtained according to the selection of the user, for example, the makeup difficulty can be divided into three modes of simple, moderate and difficult according to the difficulty difference of the makeup strategy, and if the user selects the simple mode, the simple makeup strategy corresponding to the target makeup is obtained.
As still another way, a makeup video of the user may be acquired, the makeup level of the user may be obtained by analyzing the makeup video of the user, and the corresponding makeup strategy may be selected according to the makeup level of the user. For example, it can be known that the makeup level of the user is high by analyzing the makeup video of the user, so that a makeup strategy with high difficulty can be selected.
In some embodiments, the makeup strategy corresponding to the target makeup may further include a clothing matching strategy, an accessory matching strategy, a hairstyle strategy, etc. that matches the makeup. For example, if the target makeup is an European and American style, European and American style clothing, and more exaggerated jewelry, etc. may be recommended. By the method, the user can acquire an all-around collocation strategy suitable for makeup, and the whole dressing effect is improved.
It is understood that step S130 may be performed locally by the terminal device, may be performed in the server, and may also be performed by the terminal device and the server separately, and according to different practical application scenarios, the task may be allocated according to requirements, which is not limited herein.
Step S140: and generating a video for guiding makeup by the virtual image according to the makeup strategy and displaying the video.
After the makeup strategy corresponding to the target makeup look is obtained, a video in which the avatar guides the makeup process can be generated and displayed according to the strategy, where the avatar can be generated in advance from the user's face image. The video can show the makeup operations of the target look being performed on the avatar's face, reproducing the actions a real person makes while applying makeup, and displaying on the avatar's face the changes in color, contour lines, and the like that correspond to each makeup action.
It can be understood that the face image may be an image captured by the terminal device through an image capturing device such as a camera, or an image input by the user into the terminal device. Based on the face image of the user, an avatar model corresponding to the user can be obtained, and the similarity of each part of the head of the avatar model and the corresponding part of the head of the user meets a preset similarity condition. The virtual image can be a two-dimensional image or a three-dimensional image, and each part of the head of the virtual image comprises at least one of the following parts: eyes, eyebrows, mouth, nose, face, head, skin tone. Alternatively, facial skeleton detection and body posture recognition may also be performed on the user, thereby obtaining an avatar that is not only similar to the user's head, but also similar to the user's body posture.
Specifically, the avatar model corresponding to the user may be obtained through a pre-trained neural network model. First, a feature extraction layer of the neural network extracts, from the user's face image, feature vectors for the different attributes of each part, where the attributes describe the externally visible characteristics of the part; for example, the attributes of the eyes may include eye shape, eyelid type, pupil color, eyelash length, and so on. Next, a feature splicing layer of the network concatenates the attribute feature vectors of each part to obtain the feature vector corresponding to that part. Then, a classification layer predicts the category of each part from its feature vector. Finally, the materials corresponding to the predicted categories of the parts are combined to generate the avatar model corresponding to the user, and the model is rendered by a graphics processor or the like to obtain and display the avatar corresponding to the user's face image. Optionally, building the avatar model from the user's face image may be performed by the terminal device or by the server.
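The pipeline above can be summarized in code; in the sketch below every component (the per-attribute extractors, the per-part classifiers, and the material library) is an injected assumption standing in for models the application does not detail.

    # Sketch of the avatar-generation pipeline; all injected components are assumptions.
    def build_avatar_model(face_image, attribute_extractors, part_classifiers, material_library):
        """attribute_extractors: {part: [extractor, ...]}, one feature vector per attribute;
        part_classifiers: {part: classifier}; material_library: {(part, category): asset}."""
        avatar_parts = {}
        for part, extractors in attribute_extractors.items():
            # feature extraction layer: one vector per attribute (shape, color, ...)
            attribute_vectors = [extractor(face_image) for extractor in extractors]
            # feature splicing layer: concatenate the attribute vectors of this part
            part_vector = [value for vector in attribute_vectors for value in vector]
            # classification layer: predict the category of this part
            category = part_classifiers[part](part_vector)
            # combine the material corresponding to the predicted category
            avatar_parts[part] = material_library[(part, category)]
        return avatar_parts  # later rendered into the displayed avatar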
In some embodiments, before the video is displayed, the makeup operations and the makeup products used in the strategy may be shown in a prompt box, so that the user can judge, from their own skill level and the products they own, whether the strategy suits them; if the user is not satisfied with the current strategy, they can adjust it or reselect the target look. In this way the user saves time watching the video and avoids discovering, halfway through making up, that a needed product is missing.
In some embodiments, the video playback of the avatar to guide makeup may be controlled according to the interactive instructions input by the user. The interactive instruction may be a preset control instruction for video display, for example, a double-speed playing video, a repeat playing video, a pause video, a playing video, and the like, and the user may input the interactive instruction by clicking a screen, voice, an action, a gesture, and the like. For example, when the terminal device receives a voice command "slow down" input by the user, the speed of the currently played video can be adjusted to 0.8 times speed for playing, so that the speed of the video playing is more consistent with the speed of the current makeup operation of the user, and the user can learn with reference to the content of the video more easily. By acquiring the multi-state interaction instruction of the user, the man-machine interaction function of the terminal equipment can be enriched, so that the makeup guidance is more in line with the actual requirements of the user.
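A minimal sketch of this kind of playback control is shown below; the player interface and the command phrases other than the "slow down" example from the text are assumptions.

    # Sketch only: the player interface and most command phrases are assumptions.
    PLAYBACK_COMMANDS = {
        "slow down": ("set_rate", 0.8),   # the 0.8x example from the text
        "speed up": ("set_rate", 1.5),
        "pause": ("pause", None),
        "play": ("play", None),
        "repeat": ("seek_to_start", None),
    }

    def handle_interaction(player, command: str):
        """Apply a recognized voice/gesture/touch command to the guidance video player."""
        action, value = PLAYBACK_COMMANDS.get(command, ("noop", None))
        if action == "set_rate":
            player.set_rate(value)
        elif action != "noop":
            getattr(player, action)()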
In some embodiments, the terminal device may obtain a real-time makeup video of the user through an image capturing device such as a camera, obtain a current makeup progress of the user by analyzing the makeup video of the user, and display a video of guiding makeup with an avatar corresponding to the current makeup progress of the user, so that the user can learn makeup by referring to the video, and optionally, display a video corresponding to a next operation after the current makeup operation is finished. For example, when detecting that the current makeup operation of the user is to draw an eye shadow, the terminal device may display a video in which the avatar draws the eye shadow, or may display a video corresponding to brushing eyelashes after the avatar draws the eye shadow.
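One way to keep the displayed guidance in step with the user, under the assumption that the guidance video is segmented into one clip per makeup step and that a step detector is available, is sketched below.

    # Sketch only: detect_progress and the per-step clip segmentation are assumptions.
    def select_guidance_clip(live_frames, strategy_steps, detect_progress, clips):
        """detect_progress is assumed to return (current_step, finished_flag) from
        the user's live makeup video; return the clip the user should see now."""
        current_step, finished = detect_progress(live_frames, strategy_steps)
        index = strategy_steps.index(current_step)
        if finished and index + 1 < len(clips):
            index += 1          # move on to the clip for the next operation
        return clips[index]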
In some embodiments, audio corresponding to the makeup strategy may also be obtained and played through an audio playback device of the terminal device. Optionally, text corresponding to the makeup strategy, or images of the avatar's makeup effect, can be shown in a prompt box. For example, the makeup supplies required during the makeup process of the target look may be displayed on the interface, where the supplies may include cosmetic products, makeup tools, and the like; if the target look is eyebrow makeup, the interface may show the cosmetic products needed to complete it, such as an eyebrow pencil or eyebrow powder, and the tools needed, such as an eyebrow brush. The user can choose one or more of video, audio, text, and images to guide the makeup according to their own habits, which improves the makeup experience. As one approach, when the video in which the avatar guides the makeup is displayed, links to the makeup products or tools used in the video may be shown in a prompt box on the same interface, and the user can purchase them by clicking a link. In this way the user can conveniently obtain the products they need, and merchants can achieve a higher purchase conversion rate.
It is understood that step S140 may be performed locally by the terminal device, may be performed in the server, and may also be performed by the terminal device and the server separately, and according to different practical application scenarios, the task may be allocated according to requirements, which is not limited herein.
With the avatar-based intelligent makeup method provided by this embodiment, a recommendation list of makeup looks is displayed, the target look selected by the user from the list is acquired, the makeup strategy corresponding to the target look is obtained, and a video in which the avatar guides the makeup process is generated and displayed according to that strategy. Makeup guidance can therefore be given for the look the user actually selected, realizing personalized guidance; and because the guidance is delivered by an avatar generated from the user's own face image, it is easier for the user to follow, which improves the effectiveness of the makeup guidance.
Referring to fig. 4, fig. 4 illustrates an avatar-based intelligent makeup method according to another embodiment of the present application, which may be applied to the terminal device, and the method may include:
step S210: and acquiring a face image of the user, and generating facial features of the user according to the face image.
The face image of the user can be an image acquired by the terminal device through an image acquisition device such as a camera, and can also be uploaded to the terminal device by the user.
The facial features are used for representing feature information embodied outside the face of the user, including but not limited to the outline, color and the like of each part.
Specifically, the user's facial features can be extracted from the captured face image through image recognition. First, face detection is performed on the captured image to determine the position of the face; then facial keypoints are detected, the face is divided into several regions according to the keypoints, features are extracted from each region, and the features of the different regions are combined into a feature vector that serves as the user's facial features. The keypoints may include, but are not limited to, key parts such as the eyes, eyebrows, mouth, nose, and face contour; it is understood that the number and locations of the keypoints can be determined according to the actual situation.
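As a rough illustration of this pipeline, the sketch below detects the face with OpenCV's stock Haar cascade and then relies on an injected landmark predictor and region encoder; the predictor, the encoder, and the 68-point index ranges used to group keypoints into regions are assumptions rather than details from the application.

    # Face detection uses OpenCV's bundled Haar cascade; the landmark predictor,
    # region encoder, and the 68-point index ranges are illustrative assumptions.
    import cv2

    def extract_facial_features(image_bgr, landmark_predictor, region_encoder):
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None                                        # no face found
        x, y, w, h = faces[0]                                  # position of the face
        keypoints = landmark_predictor(gray[y:y + h, x:x + w])  # assumed 68-point output
        regions = {
            "eyebrows": keypoints[17:27],
            "nose": keypoints[27:36],
            "eyes": keypoints[36:48],
            "mouth": keypoints[48:68],
        }
        # concatenate the per-region features into a single facial feature vector
        return [value for name, points in regions.items()
                for value in region_encoder(name, points)]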
As one mode, a face image of a user may be input into a facial feature extraction model to obtain facial features corresponding to an image of the face of the user, where the facial feature extraction model is a machine learning model trained from a data sample set labeled with a plurality of key points of the face, and is used to output the facial features corresponding to the face image according to the input face image. As an embodiment, a corresponding machine learning model may be trained for each attribute of the face, for example, a face image of the user is input to a face classification model obtained by pre-training, so as to obtain a face shape corresponding to the face image, and similarly, a nose shape, an eye shape, an eyebrow shape, a lip shape, and the like corresponding to the face image may be determined.
In some embodiments, a three-dimensional face model of the user may be generated after the face image of the user is acquired, and facial features of the user may be extracted according to the three-dimensional face model. Specifically, two-dimensional face images of a user at multiple angles can be acquired, so that a three-dimensional face model of the user is constructed according to the multiple two-dimensional face images, and the facial features of the user are generated by analyzing the three-dimensional face model. Compared with a two-dimensional face image, the three-dimensional face model is closer to the image of the user, and three-dimensional face features such as the height of a nose bridge and the height of cheekbones can be acquired.
In some embodiments, if the acquired face image shows the user already wearing makeup, one option is to prompt the user, by text or voice, that the acquired image is a made-up image and that a bare-faced image is needed to capture the facial features more accurately; another option is to derive a corresponding bare-faced image from the made-up image using a machine learning model or the like.
It is understood that step S210 may be performed locally by the terminal device, may be performed in the server, and may also be performed by the terminal device and the server separately, and according to different practical application scenarios, the task may be allocated according to requirements, which is not limited herein.
Step S220: at least one makeup is obtained based on the facial features.
The makeup obtained according to the facial features can be an overall makeup or a partial makeup. Specifically, the overall makeup may include eyebrow makeup, lip makeup, eye makeup, face makeup, blush makeup and the like, which together constitute a complete face makeup, and the partial makeup may be one or more of eyebrow makeup, lip makeup, eye makeup, face makeup and blush makeup, which is not limited herein. As one way, one makeup or a plurality of makeups may be obtained according to the facial features.
In some embodiments, the terminal device or the server has a database storing makeup effect maps of various real people, and the database may be preset or may be obtained by collecting the makeup effect maps on the network in real time. After the facial features of the user are generated according to the face image, the makeup effect chart in the database can be matched with the facial features, and if the matching value is larger than the preset matching value, the makeup in the makeup effect chart is used as a recommended makeup, wherein the recommended makeup is the makeup in the recommended list.
As one way, different weight values may be set for the areas of the user's face, the similarity between each area of the user's face and the corresponding area of a makeup effect chart in the database may be calculated, and the weighted average of the area similarities may be used as the matching value of that makeup effect chart; if the matching value is greater than the preset matching value, the makeup in the makeup effect chart may be used as a recommended makeup. The weight values of the areas of the user's face may be set by the user, or may be set automatically by the terminal device when the makeup guidance function is activated. For example, if the user sets the weight of the eyes to 1 and the weight of the nose to 0.5, and the eye similarity of a makeup effect chart reaches 80% while the nose similarity reaches 60%, the matching value of the makeup effect chart is about 73%, which exceeds the preset matching value of 70%, so the makeup of the makeup effect chart is taken as a recommended makeup.
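A minimal sketch of this weighted matching calculation, assuming the per-region similarities and weights are already available as fractions (the region names and values simply reproduce the example above and are otherwise illustrative):

```python
def matching_value(similarities: dict, weights: dict) -> float:
    """Weighted average of per-region similarities between the user's face
    and one makeup effect chart; regions without a weight default to 0."""
    total_weight = sum(weights.get(region, 0.0) for region in similarities)
    weighted_sum = sum(sim * weights.get(region, 0.0)
                       for region, sim in similarities.items())
    return weighted_sum / total_weight if total_weight else 0.0

# The example from the text: eye weight 1, nose weight 0.5,
# eye similarity 80%, nose similarity 60% -> matching value about 73%.
weights = {"eyes": 1.0, "nose": 0.5}
similarities = {"eyes": 0.80, "nose": 0.60}
value = matching_value(similarities, weights)
print(f"{value:.0%}")            # 73%
if value > 0.70:                 # preset matching value of 70%
    print("use this makeup as a recommended makeup")
```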
As another mode, the facial features of the user and the makeup effect maps of the database may be input into a pre-trained makeup matching model, which is a machine learning model trained from makeup-free data samples of the human face and makeup data samples corresponding to the human face, matching probabilities of the facial features and different makeup effect maps may be output, and the makeup in the makeup effect maps corresponding to the matching probabilities greater than a preset value may be taken as a recommended makeup. The model can be constructed, trained and optimized in the server, configured in the server, or transplanted to the mobile terminal by the server and configured. Optionally, the model building, training and optimization process may also be performed in the mobile terminal, if the processing capabilities of the mobile terminal allow it.
Optionally, the makeup in the database may also be adjusted according to the facial features of the user, making the makeup more suitable for the user.
In some embodiments, the terminal device may input the facial features of the user into a pre-trained makeup generation model to output makeup corresponding to the facial features, the makeup generation model being a machine learning model trained from data samples of human faces and data samples of makeup corresponding to those faces; the model may generate a makeup corresponding to the facial features of the user.
It is understood that step S220 may be performed locally by the terminal device, may be performed in the server, and may also be performed by the terminal device and the server separately, and according to different practical application scenarios, the task may be allocated according to requirements, which is not limited herein.
Although the makeup obtained according to the facial features of the user can conform to general aesthetic preferences, in practical applications it may not satisfy the user's more personalized requirements on scene or makeup style. Therefore, in some embodiments, step S220 may include steps S221 to S222. Please refer to fig. 5, which shows a schematic flow chart of step S220 in fig. 4 provided by an exemplary embodiment of the present application. Step S220 may include:
step S221: cosmetic product data and makeup style data input by a user are acquired.
The makeup product data can comprise the types of different makeup products, such as foundation, loose powder, eyebrow pencil, blush, highlighter, contouring products, eye shadow and lipstick, as well as data such as the brands and models corresponding to these types. The makeup product data input by the user can be data of cosmetics the user has purchased, so that obtaining makeup according to the makeup product data can guide the user to make better use of the purchased products; the makeup product data can also be data of cosmetics the user is interested in, so that the user can be helped to judge whether those products are suitable.
The makeup style data may be divided according to actual needs; for example, it may be divided into light makeup, heavy makeup and the like according to the degree of makeup, into ancient makeup, national makeup, modern makeup and the like according to the era, into European makeup, Japanese makeup, Chinese makeup and the like according to geographical area, and into stage makeup, party makeup, daily makeup and the like according to the scene, which is not limited herein.
In some embodiments, the terminal device may provide a makeup product interactive interface and a makeup style interactive interface, and detect a user operation based on the interfaces, so as to obtain makeup product data and makeup style data input by the user, where the user operation may be a selection operation based on an option provided by the interfaces or an input operation, which is not limited herein.
In some embodiments, the terminal device may record user data, such as a target makeup selected by the user or a browsing history of makeup in the recommendation list, each time the user uses the makeup function, and then analyze user preferred makeup product data and makeup style data according to the user data, in such a way that the user does not need to additionally input the data, and the user experience may be improved.
In some embodiments, the terminal device may also acquire only the makeup product data input by the user, or only the makeup style data input by the user, without acquiring both the makeup product data and the makeup style data.
Step S221 may be performed locally by the terminal device.
Step S222: make-up is obtained based on the facial features, the makeup product data, and the make-up style data.
After the makeup product data and the makeup style data input by the user are acquired, makeup may be obtained based on the facial features, the makeup product data and the makeup style data, wherein at least one of the obtained makeups can be completed using the makeup products input by the user.
As one mode, after a plurality of makeups are acquired according to the facial features, the acquired makeups may be screened using the makeup product data and the makeup style data as screening conditions, and the makeups satisfying the conditions may be used as the obtained makeups. Specifically, the screening condition corresponding to the makeup product data is that the makeup can be completed using the makeup products, and the screening condition corresponding to the makeup style data is that the makeup conforms to the makeup style input by the user. Alternatively, the makeup product data and makeup style data may first be used as selection conditions to select makeups that meet the conditions from a database containing a plurality of makeups, the selected makeups may then be matched with the facial features, and if the matching value is greater than a preset matching value, the matched makeup may be used as an obtained makeup.
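A possible sketch of the screening step, assuming each candidate makeup records its style and the products it requires (the data structure, field names and example values are assumptions, not defined by this application):

```python
from dataclasses import dataclass, field

@dataclass
class Makeup:
    name: str
    style: str                                   # e.g. "daily", "party", "stage"
    required_products: set = field(default_factory=set)

def screen_makeups(candidates, owned_products: set, preferred_style: str):
    """Keep only makeups that can be completed with the user's products
    and that conform to the makeup style the user entered."""
    return [m for m in candidates
            if m.required_products <= owned_products and m.style == preferred_style]

candidates = [
    Makeup("look_a", "daily", {"foundation", "lipstick"}),
    Makeup("look_b", "party", {"foundation", "eye_shadow", "highlighter"}),
]
print(screen_makeups(candidates, {"foundation", "lipstick", "blush"}, "daily"))
# only look_a survives: its products are owned and its style matches
```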
In some embodiments, the manner in which makeup is obtained may be varied according to the needs of the user. As one approach, makeup may be obtained based on facial features and cosmetic product data; as yet another approach, the makeup may be obtained based on facial features and makeup style data; as another way, makeup may be obtained from facial features, makeup product data, and makeup style data; as still another way, it is also possible to acquire makeup from only makeup product data or makeup style data, so that the user can try some makeup that does not necessarily match the facial features of the user, but matches the makeup product data or makeup style data of the user.
It is understood that step S222 may be performed locally by the terminal device, may be performed in the server, and may also be performed by the terminal device and the server separately, and according to different practical application scenarios, the task may be allocated according to requirements, which is not limited herein.
Step S230: displaying a recommendation list, the recommendation list including at least one makeup.
After the makeup is acquired according to the facial features, a recommendation list containing the acquired makeup is displayed. Specifically, please refer to step S110.
Step S240: and acquiring the target makeup selected by the user in the recommendation list.
Step S250: and obtaining a makeup strategy corresponding to the target makeup.
Step S260: and generating a video for guiding makeup by the virtual image according to the makeup strategy and displaying the video.
It should be noted that, for parts not described in detail in this embodiment, reference may be made to the foregoing embodiments, and details are not described herein again.
According to the intelligent makeup method based on the virtual image, the face image of the user is obtained, the facial features of the user are generated according to the face image, then at least one makeup is obtained according to the facial features, the recommendation list is displayed, the recommendation list comprises at least one makeup, the target makeup selected by the user in the recommendation list is obtained, the makeup strategy corresponding to the target makeup is obtained, and the video for guiding the makeup through the virtual image is generated and displayed according to the makeup strategy. Therefore, makeup is generated according to the facial features of the user, makeup guidance is conducted on the basis of the virtual image, the makeup in the recommendation list can be more suitable for the user, and the makeup guidance is more personalized.
Referring to fig. 6, fig. 6 illustrates an avatar-based intelligent makeup method according to another embodiment of the present application, which may be applied to the terminal device, and the method may include:
step S310: a recommended list of makeup is displayed.
Step S320: and acquiring the basic makeup selected by the user in the recommendation list.
Because the makeup in the recommendation list conforms to general aesthetic preferences and may not meet the user's actual requirements on the makeup, after the base makeup selected by the user in the recommendation list is obtained, the makeup can be adjusted according to an adjustment instruction input by the user, so that the makeup better conforms to the user's expectation. The user can select from the makeup in the recommendation list through a screen touch instruction, a voice instruction, an action instruction, a gesture instruction and the like, and the selected makeup is used as the base makeup.
Step S320 may be performed locally by the terminal device.
Step S330: and judging whether an adjusting instruction input by a user is acquired or not.
The adjustment instruction is used for representing an adjustment operation on the base makeup. Specifically, the adjustment instruction can be an operation instruction for adjusting attributes such as the contour, color and position of a partial makeup of the base makeup, for example eyebrow makeup, lip makeup, eye makeup, face makeup or blush makeup. For example, the user can change the eye shadow color in the base makeup to an earth tone through the adjustment instruction.
Step S330 may be performed locally by the terminal device.
In this embodiment, after determining whether the adjustment instruction input by the user is acquired, the method may further include:
if the adjustment instruction input by the user is obtained, step S340 may be executed;
if the adjustment instruction input by the user is not obtained, step S350 may be executed.
Step S340: the base makeup is adjusted based on the adjustment instruction, and the adjusted base makeup is set as the target makeup.
If an adjustment instruction input by the user is acquired, the base makeup is adjusted based on the adjustment instruction, and the adjusted base makeup is taken as the target makeup. Specifically, makeup parameters corresponding to the base makeup selected by the user in the recommendation list may be acquired, the base makeup may be rendered on the face of the avatar according to the makeup parameters, and the makeup effect of the avatar wearing the base makeup may be displayed on the screen of the terminal device. If an adjustment instruction input by the user is acquired, the makeup parameters corresponding to the adjustment instruction are modified, and the makeup effect of the avatar obtained with the modified makeup parameters is displayed on the screen of the terminal device. For example, if the adjustment instruction input by the user is to change the eye shadow color in the base makeup to an earth tone, the terminal device changes the color parameter of the eye shadow part in the base makeup to a parameter corresponding to the earth tone, and displays the makeup effect of the avatar obtained with the modified makeup parameters. When the user finishes the adjustment operation, the adjusted base makeup is taken as the target makeup. In this way, the user can perform personalized adjustment operations on the makeup and visually see the adjusted makeup.
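A minimal sketch of how an adjustment instruction might modify the makeup parameters before the avatar is re-rendered (the parameter layout, attribute names and colour values are illustrative assumptions):

```python
# Makeup parameters of the base makeup, keyed by partial makeup and attribute.
base_makeup = {
    "eye_shadow": {"color": "#8a5a44", "contour": "contour_1"},
    "lip":        {"color": "#c0392b", "contour": "contour_2"},
}

def apply_adjustment(makeup: dict, part: str, attribute: str, value) -> dict:
    """Return a copy of the makeup parameters with one attribute modified;
    the avatar would then be re-rendered from the returned parameters."""
    adjusted = {p: dict(attrs) for p, attrs in makeup.items()}
    adjusted[part][attribute] = value
    return adjusted

# e.g. the user asks to change the eye shadow colour to an earth tone
target_makeup = apply_adjustment(base_makeup, "eye_shadow", "color", "#a9845c")
print(target_makeup["eye_shadow"])   # {'color': '#a9845c', 'contour': 'contour_1'}
```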
It can be understood that step S340 may be performed locally by the terminal device, may also be performed in the server, and may also be performed by the terminal device and the server separately, and according to different actual application scenarios, the task may be allocated according to requirements, which is not limited herein.
Fig. 7 is a schematic view of another interactive interface provided by an exemplary embodiment of the present application. When it is determined that the user needs to adjust the base makeup, the terminal device may display the interactive interface shown in fig. 7. The interface may include a makeup effect diagram of the avatar corresponding to the base makeup and a local makeup list below the avatar, and the local makeup that the user selects for adjustment may be obtained based on the interface. As one way, a click operation of the user on the face of the avatar on the interface may be obtained, and the makeup at the position corresponding to the click operation may be taken as the local makeup to be adjusted; for example, a user who is not satisfied with the eyebrows may adjust them by clicking the eyebrow area. As still another way, a selection operation of the user in the local makeup list may be acquired, and the local makeup selected by the user may be used as the local makeup to be adjusted; for example, the user may select the lip makeup below the avatar through a voice instruction to adjust the lip makeup.
Fig. 8 shows another schematic view of an interactive interface provided by an exemplary embodiment of the present application, and after obtaining the partial makeup that needs to be adjusted and selected by the user, the terminal device may display the interactive interface shown in fig. 8, where the interface may include a makeup effect diagram of an avatar corresponding to the basic makeup, a name of the partial makeup that is currently adjusted, and attribute options that may be adjusted by the partial makeup, such as color, outline, and the like. Based on the interactive interface shown in fig. 8, the makeup displayed on the avatar may be adjusted accordingly according to the obtained attribute options selected by the user. For example, if the user selects color 1 and outline 2 on the current interface, the lip makeup color of the avatar displayed on the interface changes to color 1 and the lip makeup outline changes to outline 2.
Step S350: the base makeup is taken as the target makeup.
And if the adjustment instruction input by the user is not acquired, taking the base makeup as the target makeup.
It is understood that step S350 may be performed locally by the terminal device, may be performed in the server, and may also be performed by the terminal device and the server separately, and according to different practical application scenarios, the task may be allocated according to requirements, which is not limited herein.
Step S360: and obtaining a makeup strategy corresponding to the target makeup.
Step S370: and generating a video for guiding makeup by the virtual image according to the makeup strategy and displaying the video.
It should be noted that, for parts not described in detail in this embodiment, reference may be made to the foregoing embodiments, and details are not described herein again.
According to the intelligent makeup method based on the virtual image, the recommended list of the makeup is displayed, the basic makeup selected by the user in the recommended list is obtained, whether an adjustment instruction input by the user is obtained or not is judged, if the adjustment instruction input by the user is obtained, the basic makeup is adjusted based on the adjustment instruction, the adjusted basic makeup is used as the target makeup, if the adjustment instruction is not obtained, the basic makeup is used as the target makeup, then a makeup strategy corresponding to the target makeup is obtained, and a video for guiding the makeup by the virtual image is generated and displayed according to the makeup strategy. After the target makeup selected by the user in the recommendation list is obtained, the basic makeup is adjusted according to the adjustment instruction of the user to obtain the target makeup, and makeup guidance is performed based on the virtual image, so that the personalized makeup requirements of the user can be met, more diversified target makeup is realized, and the makeup experience of the user is improved.
Referring to fig. 9, fig. 9 illustrates an avatar-based intelligent makeup method according to still another embodiment of the present application, which may be applied to the terminal device, and the method may include:
step S410: a recommended list of makeup is displayed.
Step S420: and acquiring the target makeup selected by the user in the recommendation list.
Step S430: and obtaining a makeup step of the target makeup.
After the target makeup selected by the user in the recommendation list is obtained, a makeup step of the target makeup may be obtained, where the makeup step is used to represent the chronological makeup actions required to complete the target makeup. For example, the makeup steps may include, in order, base makeup, blush, eyebrow makeup, eye makeup, contouring, lip makeup and the like, and the base makeup step may be further subdivided into sub-steps such as applying liquid foundation, concealing and applying loose powder. By splitting the makeup process of the target makeup into a plurality of makeup steps, a complicated makeup process can be simplified into several relatively simple procedures, which makes it easier for beginners to learn.
Since adopting different makeup steps for the same makeup may lead to different corresponding makeup strategies, in some embodiments the makeup steps of the target makeup may be adjusted, and a makeup strategy corresponding to the adjusted steps may then be obtained. As one way, the makeup steps selected by the user may be acquired, so that only the selected makeup steps corresponding to the target makeup are obtained. For example, a user who usually only draws base makeup and eyebrow makeup may select base makeup and eyebrow makeup as the makeup steps, and after the target makeup selected by the user in the recommendation list is acquired, only the base makeup and eyebrow makeup steps corresponding to the target makeup may be acquired. As another embodiment, the order of the makeup steps selected by the user may be obtained, and the steps corresponding to the target makeup may be arranged according to the order selected by the user. For example, if the user prefers to apply eye shadow first and then blush after finishing the base makeup, the makeup steps corresponding to the target makeup may be base makeup, eye shadow and blush performed in sequence.
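A small sketch of this step selection and re-ordering, assuming the makeup steps are represented as simple string labels (the step names, the default order and the assumption that the preferred order covers all kept steps are illustrative):

```python
from typing import List, Optional

FULL_STEPS = ["base", "blush", "eyebrow", "eye_shadow", "contour", "lip"]

def personalised_steps(selected: List[str],
                       preferred_order: Optional[List[str]] = None) -> List[str]:
    """Keep only the steps the user selected; if the user also gave a preferred
    order (assumed to contain every kept step), arrange the kept steps in that
    order, otherwise keep the default order."""
    kept = [s for s in FULL_STEPS if s in selected]
    if preferred_order:
        kept.sort(key=lambda s: preferred_order.index(s))
    return kept

# A user who only draws base makeup and eyebrow makeup:
print(personalised_steps(["base", "eyebrow"]))              # ['base', 'eyebrow']
# A user who prefers eye shadow before blush after the base makeup:
print(personalised_steps(["base", "blush", "eye_shadow"],
                         ["base", "eye_shadow", "blush"]))  # ['base', 'eye_shadow', 'blush']
```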
It is understood that step S430 may be performed locally by the terminal device, may be performed in the server, and may also be performed by the terminal device and the server separately, and according to different practical application scenarios, the task may be allocated according to requirements, which is not limited herein.
Step S440: and obtaining a makeup strategy corresponding to the makeup step.
After the makeup step of the target makeup is obtained, a makeup strategy corresponding to the makeup step may be obtained, where the makeup strategy includes a makeup product, including the name and dosage of the makeup product; a makeup instruction approach, including the makeup action and the use area of the makeup product on the face; and a makeup effect, including the color after the makeup product is combined with the face area and the contour lines of the facial features. For example, the makeup strategy corresponding to the lip makeup step may include: a lipstick product corresponding to the lip makeup of the target makeup, the makeup brush to be used for applying the lipstick, the application area of the lipstick, the action of applying lipstick from the left end of the upper lip to the right end of the upper lip within the application area, the color of the lips after the lipstick is applied, the contour line of the lips, and the like.
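One possible way to organize such a per-step makeup strategy as a data structure is sketched below; the class and field names and the example values are assumptions made only for illustration and are not defined by this application:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MakeupProduct:
    name: str           # e.g. a lipstick product matching the target lip makeup
    dosage: str         # amount of product to use

@dataclass
class InstructionApproach:
    action: str         # makeup action to demonstrate
    use_area: str       # area of the face where the product is used

@dataclass
class MakeupEffect:
    color: str          # colour after the product is combined with the face area
    contour_line: str   # contour line of the corresponding facial part

@dataclass
class StepStrategy:
    step: str
    products: List[MakeupProduct]
    approach: InstructionApproach
    effect: MakeupEffect

lip_strategy = StepStrategy(
    step="lip",
    products=[MakeupProduct("lipstick_x", "thin layer"), MakeupProduct("lip_brush", "-")],
    approach=InstructionApproach(
        "apply from the left end of the upper lip to the right end", "upper and lower lip"),
    effect=MakeupEffect("#c0392b", "lip_contour_2"),
)
print(lip_strategy.approach.action)
```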
It is understood that step S440 may be performed locally by the terminal device, may be performed in the server, and may also be performed by the terminal device and the server separately, and according to different practical application scenarios, the task may be allocated according to requirements, which is not limited herein.
Step S450: and generating a video for guiding makeup by the virtual image according to the makeup strategy and displaying the video.
It should be noted that, for parts not described in detail in this embodiment, reference may be made to the foregoing embodiments, and details are not described herein again.
According to the intelligent makeup method based on the virtual image, the recommendation list of the makeup is displayed, the target makeup selected by the user in the recommendation list is obtained, then the makeup step corresponding to the target makeup is obtained, the makeup strategy corresponding to the makeup step is obtained, and then the video for guiding the makeup by the virtual image is generated and displayed according to the makeup strategy. Therefore, the corresponding makeup steps and the makeup strategy can be generated according to the target makeup selected by the user, the makeup is guided through the virtual image, the user can learn the makeup according to the specific makeup steps, and the effectiveness of the makeup guidance is improved.
Referring to fig. 10, fig. 10 illustrates an avatar-based intelligent makeup method according to another embodiment of the present application, which may be applied to the terminal device, and the method may include:
step S510: a recommended list of makeup is displayed.
Step S520: and acquiring the target makeup selected by the user in the recommendation list.
Step S530: and a makeup step of obtaining a target makeup.
Step S540: and obtaining a makeup strategy corresponding to the makeup step.
Step S550: and acquiring a makeup video of the user.
The makeup video of the user can be a video acquired by the terminal device through an image acquisition device such as a camera, or a video recorded by the user with other equipment and then uploaded to the terminal device running the application of the avatar-based intelligent makeup method.
In some embodiments, after the makeup video of the user is obtained, the user can share the makeup video of the user through the terminal device, and the makeup video shared by other users can also be viewed, so that the interest of interaction with other users in the makeup process of the user is increased.
It is understood that step S550 may be performed locally by the terminal device, and the execution sequence of step S550 is not limited to the currently listed sequence, and in some embodiments, the step S570 may be performed before step S560 is performed.
Step S560: and comparing and analyzing the makeup video of the user with the makeup strategy, and displaying the result of the comparison and analysis.
After the makeup video of the user is obtained, the makeup video may be compared with the makeup strategy and the result of the comparison analysis may be displayed. The content of the result may be the similarities or differences between the two, and the result may be presented in the form of text, voice, images and the like, which is not limited herein.
As one mode, data such as a makeup technique and a makeup effect of the user may be acquired by analyzing a makeup video of the user, the acquired data may be compared with a makeup instruction technique and a makeup effect in a makeup strategy to obtain a difference between the makeup video of the user and the corresponding makeup strategy, and a prompt message of the difference may be displayed. The user can adjust the makeup or makeup technique according to the prompt information to make the makeup closer to the target makeup.
As another mode, by analyzing the makeup video of the user and the makeup strategy, the difference between the makeup steps in the user's makeup video and the makeup steps in the makeup strategy can be obtained, and prompt information about the corresponding makeup steps in the makeup strategy can be displayed. The user can learn the difference between the recommended makeup steps and the actual makeup steps from the prompt information, so as to obtain a more complete makeup. For example, by comparing the user's makeup video with the makeup strategy, it may be found that, compared with the makeup strategy, the user's process lacks the step of setting the makeup with loose powder; the missing step can be displayed in the form of a dialog box, and the user can be prompted to use loose powder so that the makeup lasts longer.
It is understood that step S560 may be performed locally by the terminal device, may be performed in the server, and may also be performed by the terminal device and the server separately, and according to different practical application scenarios, the task may be allocated according to requirements, which is not limited herein.
In some embodiments, step 560 may include steps S561 through S562, please refer to fig. 11, fig. 11 shows a flowchart of step S560 in fig. 10 according to an exemplary embodiment of the present application, and step S560 may include:
step S561: and comparing and analyzing the makeup video of the user with the makeup strategy to obtain the completion degree of the makeup of the user.
The completion degree is used for representing the difference between the user's makeup video and the makeup strategy. Specifically, the smaller the difference between the user's makeup technique in the makeup video and the makeup instruction technique in the makeup strategy, and the smaller the difference between the user's makeup effect and the makeup effect in the makeup strategy, the higher the completion degree of the user's makeup. In one embodiment, different weights may be given to the makeup instruction technique and the makeup effect in the makeup strategy according to actual needs. For example, if the user wants the actual makeup effect to be similar to the target makeup but does not care whether the makeup technique is standard, a higher weight can be given to the makeup effect, and the completion degree of the user's makeup is obtained mainly according to the difference in makeup effect.
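A minimal sketch of such a weighted combination of the two completion degrees (the weights and values are illustrative assumptions):

```python
def overall_completion(effect_completion: float, technique_completion: float,
                       effect_weight: float = 0.5) -> float:
    """Weighted combination of effect and technique completion; a user who only
    cares about the final look can raise effect_weight towards 1."""
    technique_weight = 1.0 - effect_weight
    return effect_weight * effect_completion + technique_weight * technique_completion

# User who mainly cares about the makeup effect:
print(overall_completion(effect_completion=0.9, technique_completion=0.5,
                         effect_weight=0.8))   # 0.82
```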
It can be understood that step S561 may be performed locally by the terminal device, may also be performed in the server, and may also be performed by the terminal device and the server separately, and according to different practical application scenarios, the tasks may be allocated according to requirements, which is not limited herein.
In some embodiments, step 561 may include step S5611 to step S5612, please refer to fig. 12, where fig. 12 shows a flowchart of step S561 in fig. 11 according to an exemplary embodiment of the present application, and step S561 may include:
step S5611: and analyzing the makeup video, and acquiring the makeup steps and the makeup effects of the user in the makeup video of the user.
By analyzing the makeup video of the user, the makeup steps contained in the video and the user's makeup effect corresponding to each makeup step can be obtained. Specifically, the makeup steps included in the user's makeup video may be obtained by analyzing the video, the makeup video may then be divided into a plurality of video segments according to the makeup steps, and for each makeup step, an image frame containing the user's makeup effect may be extracted from the video segment corresponding to that step as the user's makeup effect for that step.
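A simplified sketch of this segmentation, assuming each frame has already been classified into a makeup step by some per-frame recognizer (that classifier, and the choice of the last frame of a segment as the effect frame, are assumptions for illustration):

```python
import numpy as np

def split_by_step(frames: list, step_labels: list) -> dict:
    """Group video frames by the makeup step recognised for each frame;
    step_labels[i] is the step assigned to frames[i]."""
    segments = {}
    for frame, step in zip(frames, step_labels):
        segments.setdefault(step, []).append(frame)
    return segments

def effect_frame(segment: list) -> np.ndarray:
    """Take the last frame of a step's segment as the user's makeup effect
    for that step (the effect is most complete at the end of the step)."""
    return segment[-1]

frames = [np.zeros((4, 4, 3)) for _ in range(6)]
labels = ["base", "base", "eyebrow", "eyebrow", "lip", "lip"]
effects = {step: effect_frame(seg) for step, seg in split_by_step(frames, labels).items()}
print(list(effects))   # ['base', 'eyebrow', 'lip']
```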
It can be understood that step S5611 may be performed locally by the terminal device, may also be performed in the server, and may also be performed by the terminal device and the server separately, and according to different practical application scenarios, the task may be allocated according to requirements, which is not limited herein.
Step S5612: and analyzing the difference between the makeup effect of the user and the makeup effect in the makeup strategy according to the makeup steps to obtain the completion degree of the makeup effect of the user.
The finish degree of the user's makeup effect is used for representing the difference between the user's makeup effect and the makeup effect in the makeup strategy; the larger the difference, the lower the finish degree. Specifically, the difference between the user's makeup effect and the makeup effect in the makeup strategy may include differences in the color after the makeup product is combined with the face area, the contour lines of the facial features, and the like, which is not limited herein.
In some embodiments, for each makeup step included in the user makeup video, the difference between the makeup effect of the user corresponding to the step and the makeup effect corresponding to the step in the makeup strategy may be analyzed, and the overall completion degree of the makeup effect of the user is obtained by integrating the differences in the makeup effects of all the steps. As a way, the face in the makeup effect of the user may be aligned with the face in the makeup effect in the makeup strategy, and the completion degree of the makeup effect of the user may be obtained by calculating the difference between the pixels in each region on the aligned face, where the larger the difference between the pixels is, the lower the completion degree of the makeup effect of the user is. As another mode, the makeup effect of the user and the makeup effect in the makeup strategy may be input into a makeup effect analysis model obtained by pre-training, a deviation value between the output makeup effect of the user and the makeup effect in the makeup strategy may be obtained, and a degree of completion of the makeup effect of the user is obtained according to the deviation value, where the larger the deviation value is, the lower the degree of completion of the makeup effect is, where the makeup effect analysis model may be a machine learning model obtained by training a makeup effect image as sample data.
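The pixel-difference variant described above could be sketched as follows, assuming the two effect images are already aligned to the same size and that per-region masks and weights are given (the mask layout, weights and values are illustrative assumptions):

```python
import numpy as np

def effect_completion(user_effect: np.ndarray, target_effect: np.ndarray,
                      region_masks: dict, region_weights: dict) -> float:
    """Completion degree from pixel differences on already-aligned faces;
    larger per-region differences give a lower completion degree."""
    score, total_w = 0.0, 0.0
    for region, mask in region_masks.items():
        w = region_weights.get(region, 1.0)
        diff = np.abs(user_effect[mask] - target_effect[mask]).mean() / 255.0
        score += w * (1.0 - diff)
        total_w += w
    return score / total_w if total_w else 0.0

# Dummy aligned 8x8 RGB "faces" and a single mask covering the lip area.
user = np.full((8, 8, 3), 120.0)
target = np.full((8, 8, 3), 150.0)
lip_mask = np.zeros((8, 8), dtype=bool)
lip_mask[5:7, 2:6] = True
print(effect_completion(user, target, {"lip": lip_mask}, {"lip": 1.0}))  # about 0.88
```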
In some embodiments, the difference between the user makeup effect and the makeup effect in the makeup strategy may also be analyzed according to the makeup step specified by the user, so as to obtain the completion degree of the user makeup effect corresponding to the specified step. For example, a user uploads a makeup video of the user on a terminal device, wants to know the difference between eye makeup drawn by the user and eye makeup in a target makeup look, can select to analyze the makeup effect completeness of the eye makeup, obtains a user makeup effect corresponding to the step of eye makeup by analyzing the makeup video of the user, and obtains the makeup effect completeness corresponding to the eye makeup of the user according to the difference between the user makeup effect corresponding to the eye makeup and the makeup effect in the makeup look strategy.
It is understood that step S5612 may be performed locally by the terminal device, may also be performed in the server, and may also be performed by the terminal device and the server separately, and according to different practical application scenarios, the task may be allocated according to requirements, which is not limited herein.
Step S5613: and if the finish degree of the makeup effect of the user is smaller than a first specified threshold value, generating and displaying first prompt information.
If the finish degree of the makeup effect of the user is smaller than the first specified threshold value, generating and displaying first prompt information, wherein the prompt information comprises at least one of the finish degree of the makeup effect, the makeup effect in the makeup strategy and a makeup effect correction suggestion, so that the user can adjust the current makeup effect according to the prompt information to obtain the makeup effect closer to the target makeup effect. For example, if the first specified threshold is 70%, and the completion degree of the makeup effect of the user is 50%, the image of the makeup effect in the makeup strategy may be displayed, a part greatly different from the makeup effect of the user may be marked in the image, and a suggestion for correcting the makeup effect may be made in the form of text or voice.
In some embodiments, if the finish degree of the makeup effect of the user is greater than or equal to the first specified threshold, no correction processing may be performed, and a prompt message may be displayed to interact with the user. For example, if the first specified threshold is 80% and the user's finish degree of the makeup effect is 90%, a voice prompt such as "Your makeup skills are great!" may be played to encourage the user, thereby increasing the user's enjoyment of making up.
It can be understood that step S5613 may be performed locally by the terminal device, or may be performed by the terminal device and the server separately, and according to different actual application scenarios, the task may be allocated according to a requirement, which is not limited herein.
In some embodiments, step 561 may include step S5614 to step S5616, please refer to fig. 13, fig. 13 shows a flowchart of step S561 in fig. 11 according to another exemplary embodiment of the present application, and step S561 may include:
step S5614: and analyzing the makeup video, and acquiring the makeup steps and the makeup methods of the user in the makeup video.
By analyzing the makeup video of the user, the makeup steps contained in the video and the user's makeup technique corresponding to each makeup step can be obtained. Specifically, the makeup steps included in the user's makeup video may be obtained by analyzing the video, the makeup video may then be divided into a plurality of video segments according to the makeup steps, and for each makeup step, the user's makeup technique corresponding to that step may be acquired by recognizing the user's motion in the video segment corresponding to the step.
It is understood that step S5614 may be performed locally by the terminal device, may also be performed in the server, and may also be performed by the terminal device and the server separately, and according to a difference of an actual application scenario, the task may be allocated according to a requirement, which is not limited herein.
Step S5615: according to the makeup procedure, the difference between the user's makeup technique and the makeup guidance technique in the makeup strategy is analyzed to obtain the degree of completion of the user's makeup technique.
The completion degree of the user's makeup technique is used to represent the difference between the user's makeup technique and the makeup instruction technique in the makeup strategy; the larger the difference, the lower the completion degree. Specifically, the difference between the user's makeup technique and the makeup instruction technique in the makeup strategy may include the user's technique in using a makeup tool, the movement trajectory of the user when making up, the use area of the cosmetic product on the face, and the like. As one way, the difference may be obtained with a makeup technique analysis model, which is a machine learning model obtained by training with makeup technique videos as sample data.
In one embodiment, the completion degree of the user's makeup technique in the makeup procedure selected by the user may be obtained by analyzing the difference between the user's makeup technique and the makeup teaching technique in the makeup strategy according to the makeup procedure selected by the user.
It can be understood that step S5615 may be performed locally by the terminal device, may also be performed in the server, and may also be performed by the terminal device and the server separately, and according to different practical application scenarios, the task may be allocated according to requirements, which is not limited herein.
Step S5616: and if the completeness of the makeup manipulation is smaller than a second specified threshold, generating and displaying second prompt information.
The second specified threshold may be a value preset by the terminal device or a value set by the user, and if the degree of completion of the makeup technique is smaller than the second specified threshold, second prompt information is generated and displayed, the second prompt information includes at least one of the degree of completion of the makeup technique, a makeup instruction technique, and a makeup technique correction suggestion, so that the user can adjust the makeup technique of the user according to the prompt information, and the makeup technique of the user is closer to the makeup instruction technique in the makeup strategy. In some embodiments, if the user's makeup technique is completed by a degree greater than or equal to the second specified threshold, no processing may be performed, and a prompt may be displayed to interact with the user.
It is understood that step S5616 may be performed locally by the terminal device, or may be performed by the terminal device and the server separately, and according to different actual application scenarios, the task may be allocated according to a requirement, which is not limited herein.
Step S562: and if the completion degree is smaller than a preset value, adjusting the makeup strategy to an alternative strategy.
The preset value may be a value set by default by the terminal device, or may be data set in advance by the user.
The alternative strategy is a strategy with a higher matching degree with the user's makeup video. The matching degree can be used to represent how well the user's makeup level matches the difficulty of the strategy, or how well the makeup in the user's makeup video matches the makeup corresponding to the strategy.
As one way, when the degree of matching is a degree of matching for characterizing the difficulty of the user's makeup level with the strategy, the alternative strategy may be a makeup strategy with a lower difficulty than the current makeup strategy, i.e., a strategy that is easier for the user to complete. After the makeup strategy is adjusted to the alternative strategy, a video for guiding makeup by the virtual image can be generated and displayed according to the alternative strategy. The makeup strategy is adjusted by analyzing the completion degree of the makeup video of the user, so that the user can obtain more suitable makeup, and a more flexible makeup guidance mode can be realized.
As another way, when the matching degree is used to represent how well the makeup in the user's makeup video matches the makeup corresponding to the strategy, the alternative strategy may be a makeup strategy that better matches the user's current makeup effect. Specifically, when the effect the user has completed during the makeup process differs greatly from the target makeup, a makeup that better fits the user's current makeup effect may be obtained in real time, and the corresponding makeup strategy may be used as the alternative strategy. For example, if the eye shadow color used by the user in actual operation deviates greatly from that of the target makeup selected by the user, the makeup strategy corresponding to a makeup that fits the user's current eye shadow color may be used as the alternative strategy, and the current makeup strategy may be adjusted to the alternative strategy.
In some embodiments, when the completion of the user's makeup is less than a preset value, a prompt for adjusting the makeup strategy may be displayed, if an operation is received in which the user determines to adjust the makeup strategy according to the completion of the user's makeup, the makeup strategy is adjusted to an alternative strategy, and if an operation is not received in which the user determines to adjust the makeup strategy according to the completion of the user's makeup, the makeup strategy remains unchanged.
It can be understood that step S562 may be performed locally by the terminal device, may be performed in the server, and may also be performed by the terminal device and the server separately, and according to different practical application scenarios, the task may be allocated according to requirements, which is not limited herein.
Step S570: and generating a video for guiding makeup by the virtual image according to the makeup strategy and displaying the video.
It should be noted that, for parts not described in detail in this embodiment, reference may be made to the foregoing embodiments, and details are not described herein again.
According to the intelligent makeup method based on the virtual image, the recommended list of the makeup is displayed, the target makeup selected by the user in the recommended list is obtained, then the makeup step corresponding to the target makeup is obtained, the makeup strategy corresponding to the makeup step is obtained, the makeup video of the user and the makeup strategy are compared and analyzed, the result of the comparison and analysis is displayed, and then the video for guiding the makeup through the virtual image is generated and displayed according to the makeup strategy. By analyzing the makeup video of the user, the user can know the difference between the actual makeup process and the makeup strategy corresponding to the target makeup, and more targeted makeup guidance is realized, so that the user can be helped to draw a makeup look more similar to the target makeup look.
Referring to fig. 14, fig. 14 illustrates an avatar-based intelligent makeup method according to yet another embodiment of the present application, which may be applied to the terminal device, and the method may include:
step S610: a recommended list of makeup is displayed.
Step S620: and acquiring the target makeup selected by the user in the recommendation list.
Step S630: and obtaining the makeup strategy of the target makeup.
Step S640: and generating expression driving parameters and action driving parameters corresponding to virtual images to guide the user to make up according to the makeup strategy.
Visual model parameters of the avatar for guiding the user to make up are generated according to the makeup strategy. The visual model parameters can comprise expression driving parameters and action driving parameters. The expression driving parameters can be a series of expression parameters for adjusting the face model of the avatar; specifically, they can adjust the makeup effect on the avatar's face, the mouth-shape actions and other facial actions made while the makeup operations on the face are demonstrated, and the like. The action driving parameters can be a series of limb parameters for adjusting the body model of the avatar; specifically, they can adjust the gestures, the amplitude of the actions and the like of the avatar performing the makeup operations.
It is understood that, in the embodiment of the present application, the expression driving parameters and the motion driving parameters that are acquired are sets of parameters corresponding to changes in time, for example, if the number of frame images of an avatar per second is 10, then the video per second corresponds to 10 sets of expression driving parameters and motion driving parameters of a desired avatar. In addition, if the avatar is a two-dimensional image, the expression driving parameters and the action driving parameters of the avatar are driving parameters corresponding to the two-dimensional image, and if the avatar is a three-dimensional image, the expression driving parameters and the action driving parameters are driving parameters corresponding to the three-dimensional image, which is not limited.
In some embodiments, the visual model parameters of the avatar may be generated in advance according to the makeup strategy and stored in a database of the terminal or the server, and the terminal device may directly obtain the visual model parameters corresponding to the makeup strategy in the server without considering the limitation of the calculation resources. As a way, the visual model parameters of the avatar may also be generated by a machine learning model in real time according to a makeup strategy, and in this way, the visual model parameters of the avatar may be generated in real time according to a target makeup strategy selected by a user and a corresponding makeup strategy without being limited to the makeup strategy in a database, thereby increasing the flexibility of the makeup method.
In some embodiments, generating an avatar to guide a user to make up corresponding expression driving parameters and action driving parameters may be implemented using a machine learning model according to a makeup strategy. Specifically, the makeup strategy can be input into a parameter generation model to obtain expression driving parameters and action driving parameters corresponding to the makeup strategy, and the parameter generation model is a machine learning model obtained by real person makeup video training and used for outputting the expression driving parameters and the action driving parameters corresponding to the makeup strategy according to the input makeup strategy.
Specifically, after the avatar corresponding to the user is obtained, a machine learning model may be used to perform feature extraction on the frame images in a real-person makeup video, extracting expression features and action features of the real person during makeup. Expression features of the avatar corresponding to the makeup strategy may be obtained based on the extracted expression features of the real person, and action features of the avatar corresponding to the makeup strategy may be obtained based on the extracted action features of the real person; the expression driving parameters may then be obtained based on the expression features of the avatar, and the action driving parameters based on the action features of the avatar. The machine learning model used is not limited herein; for example, a Recurrent Neural Network (RNN) model, a Convolutional Neural Network (CNN) model, a Generative Adversarial Network (GAN) or the like may be used, as well as variations or combinations of the above machine learning models.
For example, a large number of videos of real people applying eye shadow may be used as sample data to train the model. During makeup, expression features such as the color and contour lines after the eye shadow product is combined with the eye region, and action features such as the region where the eye shadow is used, the motion track of applying the eye shadow, and the gesture of the user holding the makeup brush, can be extracted; the expression driving parameters and action driving parameters of the avatar corresponding to the eye shadow step in the makeup strategy can then be obtained according to these expression and action features of the real person applying the eye shadow.
It is understood that step S640 may be performed locally by the terminal device, may be performed in the server, and may also be performed by the terminal device and the server separately, and according to different practical application scenarios, the task may be allocated according to requirements, which is not limited herein.
Step S650: and driving the expression and the action of the virtual image based on the expression driving parameter and the action driving parameter to generate a video for guiding makeup by the virtual image.
After the expression driving parameters and the action driving parameters corresponding to the virtual image to guide the user to make up are generated according to the makeup strategy, the expression of the virtual image can be driven based on the expression driving parameters, the action of the virtual image is driven based on the action driving parameters of the virtual image, and a video for guiding the makeup by the virtual image is generated, wherein the video is formed by a plurality of frames of images generated by driving the virtual image. Specifically, the expression driving parameters and the action driving parameters may be aligned so that the durations of the videos corresponding to the expression driving parameters and the action driving parameters are consistent, then multiple frames of continuous images of the virtual image are generated according to the expression driving parameters and the action driving parameters corresponding to one another, and the images are synthesized into the video.
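A schematic sketch of the alignment and synthesis step, with a stand-in renderer in place of the actual avatar driving (the padding rule, the renderer, the parameter layout and the frame rate are assumptions for illustration):

```python
from typing import List

def align_parameters(expr_params: List[dict], action_params: List[dict]):
    """Pad the shorter parameter sequence by repeating its last frame so both
    streams cover the same number of video frames."""
    n = max(len(expr_params), len(action_params))
    pad = lambda seq: seq + [seq[-1]] * (n - len(seq))
    return pad(expr_params), pad(action_params)

def render_frame(expr: dict, action: dict) -> dict:
    """Stand-in for the avatar renderer: one image frame per parameter pair."""
    return {"expression": expr, "action": action}

def synthesize_video(expr_params, action_params, fps: int = 10):
    expr_params, action_params = align_parameters(expr_params, action_params)
    frames = [render_frame(e, a) for e, a in zip(expr_params, action_params)]
    return {"fps": fps, "frames": frames}   # e.g. 10 parameter sets per second of video

video = synthesize_video([{"mouth": 0.2}] * 10, [{"arm": 0.7}] * 8)
print(len(video["frames"]))                 # 10
```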
It is understood that step S650 may be performed locally by the terminal device, may also be performed in the server, and may also be performed by the terminal device and the server separately, and according to different practical application scenarios, the task may be allocated according to requirements, which is not limited herein.
Step S660: and displaying the video.
And displaying the generated video for guiding the makeup by the virtual image on a display screen or other image output devices of the terminal equipment. Optionally, the audio corresponding to the makeup strategy can be acquired, and the audio corresponding to the video content is played while the generated video for guiding makeup by the avatar is displayed, so that the avatar with simulated image, sound and behavior similar to that of a real person is presented to the user, and the user experience is further improved.
It is to be understood that step S660 may be performed locally by the terminal device.
It should be noted that, for parts not described in detail in this embodiment, reference may be made to the foregoing embodiments, and details are not described herein again.
According to the intelligent makeup method based on the virtual image, the recommendation list of the makeup is displayed, the target makeup selected by the user in the recommendation list is obtained, then the makeup strategy of the target makeup is obtained, the expression driving parameters and the action driving parameters corresponding to the virtual image to guide the user to makeup are generated according to the makeup strategy, then the expression and the action of the virtual image are driven based on the expression driving parameters and the action driving parameters, a video of the virtual image to guide the makeup is generated, and the video is displayed. Therefore, the video for guiding make-up can better accord with the action and expression of people in the actual make-up scene, and a user can obtain more effective make-up guidance through the video.
Referring to fig. 15, fig. 15 illustrates an avatar-based intelligent makeup method according to still another embodiment of the present application, which may be applied to the terminal device, and the method may include:
step S710: and acquiring a face image of the user.
The face image of the user can be an image acquired by the terminal device through an image acquisition device such as a camera, and can also be uploaded to the terminal device by the user. Optionally, the face image of the user acquired each time within the preset time period may be stored in a database of the terminal device or the server, and used for analyzing a change condition of the skin data of the user within the preset time period.
It is understood that step S710 may be performed locally by the terminal device.
Step S720: and acquiring skin data of the user from the face image.
The skin data of the user is used for characterizing the skin condition of the user and may include skin color, freckles, wrinkles, glossiness and the like, but is not limited thereto. Specifically, after the face image of the user is acquired, portrait recognition may be performed on the image to acquire the portrait area, the portrait area may be divided into a plurality of areas to be processed according to a preset feature database, and the skin features of each area to be processed may be recognized. The preset feature database contains skin feature information of the plurality of areas, and the skin data of the user may be acquired by comparing the skin features of each area to be processed with the features in the corresponding part of the feature database.
For example, the feature database may include, for the forehead area, the feature of forehead lines (horizontal wrinkles), whose skin features may include quantifiable data such as the extent and number of wrinkles. After the face image of the user is acquired, if the skin features detected in the user's forehead area match the skin features corresponding to forehead lines, it may be determined that forehead lines exist in the user's forehead area.
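A toy sketch of this comparison against a preset feature database, assuming the skin features are quantified as numbers and matched with a simple relative-tolerance rule (the rule, the tolerance and the feature names are assumptions, not part of this application):

```python
def match_skin_features(measured: dict, feature_db: dict, tolerance: float = 0.2) -> dict:
    """For each face region, report which database skin features the measured,
    quantified skin data falls within, using a relative-tolerance comparison."""
    findings = {}
    for region, features in feature_db.items():
        for name, reference in features.items():
            value = measured.get(region, {}).get(name)
            if value is not None and abs(value - reference) <= tolerance * reference:
                findings.setdefault(region, []).append(name)
    return findings

feature_db = {"forehead": {"wrinkle_count": 3, "wrinkle_length_mm": 25}}
measured   = {"forehead": {"wrinkle_count": 3, "wrinkle_length_mm": 27}}
print(match_skin_features(measured, feature_db))
# {'forehead': ['wrinkle_count', 'wrinkle_length_mm']} -> forehead lines detected
```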
It is to be understood that step S720 may be performed locally by the terminal device, may be performed in the server, and may also be performed by the terminal device and the server separately, and according to different practical application scenarios, the task may be allocated according to requirements, which is not limited herein.
Step S730: analyzing the skin data to generate skin care advice.
By analyzing the skin data, the analysis result of the skin of the user can be obtained, and the corresponding skin care suggestion is generated according to the analysis result. Specifically, if the skin of the user is abnormal, the abnormal reason can be analyzed, and a skin care improvement suggestion is generated; if the user's skin is not abnormal, a recommendation can be generated to maintain the current skin care method. In embodiments of the present application, the content of skin care advice may include: recommending the user to use the corresponding skin care product, recommending the user to use the smearing method of the skin care product, or recommending the user to adopt the corresponding sun-screening measure, etc., wherein the skin care suggestion can be in the form of audio, characters, images, videos, etc., and is not limited herein. For example, when analyzing skin data for acne on the skin of a user, a recommended suggestion for an acne-removing product may be generated.
In some embodiments, when multiple face images of the user and the corresponding skin data are acquired within a preset time period, the change in the user's skin over that period can be analyzed and a skin care suggestion generated according to that change. For example, if the analysis shows that acne on the user's skin has decreased, a voice prompt such as "Congratulations, your skin condition is improving!" may be generated.
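As a minimal sketch of step S730 under assumed rule-based logic, the following Python function maps skin data (and, when available, skin data from an earlier point in the preset period) to skin care advice; the abnormality names, thresholds, and wording are illustrative assumptions only.

```python
# Hypothetical sketch of step S730: generating skin care advice from skin data
# and, optionally, from how the data changed over a preset time period.
from typing import Dict, List, Optional


def generate_skin_care_advice(current: Dict[str, float],
                              previous: Optional[Dict[str, float]] = None) -> List[str]:
    """`current`/`previous` map an abnormality name (e.g. "acne") to a severity score."""
    advice: List[str] = []
    if current.get("acne", 0.0) > 0.2:
        advice.append("Acne detected: an acne-removing product is recommended.")
    if current.get("dryness", 0.0) > 0.3:
        advice.append("Dry skin detected: a moisturizing product and sun protection are recommended.")
    if not advice:
        advice.append("No abnormality detected: keep your current skin care method.")
    # When several face images exist in the preset period, also report the trend.
    if previous is not None and current.get("acne", 0.0) < previous.get("acne", 0.0):
        advice.append("Congratulations, your skin condition is improving!")
    return advice


if __name__ == "__main__":
    print(generate_skin_care_advice({"acne": 0.1}, previous={"acne": 0.4}))
```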
In some embodiments, makeup products suitable for the user's skin may also be identified by analyzing the skin data. When the user uses the makeup guidance function, the makeup strategy can be obtained according to the target makeup selected by the user, and the makeup products of that strategy can be screened so that only products suitable for the user's skin are kept, making the makeup strategy better match the user's skin condition; a sketch of such screening follows.
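The following short Python sketch illustrates one possible form of that screening; the product fields and the notion of a declared suitable skin type are hypothetical assumptions.

```python
# Hypothetical sketch: keeping only the makeup products of a strategy that are
# suitable for the user's skin, based on a declared suitable skin type.
from typing import Dict, List


def filter_products_for_skin(strategy_products: List[Dict[str, str]],
                             skin_data: Dict[str, str]) -> List[Dict[str, str]]:
    skin_type = skin_data.get("skin_type", "normal")
    return [p for p in strategy_products
            if p.get("suitable_skin_type", "any") in ("any", skin_type)]


if __name__ == "__main__":
    products = [
        {"name": "foundation A", "suitable_skin_type": "oily"},
        {"name": "foundation B", "suitable_skin_type": "any"},
    ]
    print(filter_products_for_skin(products, {"skin_type": "dry"}))  # keeps foundation B
```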
It can be understood that step S730 may be performed locally by the terminal device, in the server, or partly by each; the tasks may be allocated according to the requirements of the actual application scenario, which is not limited herein.
Step S740: skin care advice is displayed.
In this embodiment, the skin care advice can be output on the corresponding output device of the terminal device according to its form. For example, skin care advice in text form may be displayed in a text box on the screen of the terminal device.
It is understood that step S740 may be performed locally by the terminal device.
It should be noted that steps S710 to S740 may be performed at any point before steps S110 to S140 are performed.
According to the avatar-based intelligent makeup method described above, the face image of the user is obtained, the skin data of the user is extracted from the face image, the skin data is analyzed, and a skin care suggestion is generated and displayed. Targeted skin care advice can thus be given according to the user's skin data, effectively caring for the skin of users with skin care needs and improving the user experience.
Referring to fig. 16, fig. 16 is a block diagram illustrating the structure of an avatar-based intelligent makeup apparatus according to an embodiment of the present application. As explained below with reference to the block diagram of fig. 16, the avatar-based intelligent makeup apparatus 1600 includes a list display module 1610, a target makeup acquisition module 1620, a makeup strategy acquisition module 1630, and a video processing module 1640, wherein:
a list display module 1610, configured to display a recommendation list of makeup.
Further, the list display module 1610 includes a feature generation submodule, a list makeup obtaining submodule, and a makeup display submodule, wherein:
the feature generation submodule is used for acquiring a face image of the user and generating facial features of the user according to the face image;
the list makeup obtaining submodule is used for obtaining at least one makeup based on the facial features; and the makeup display submodule is used for displaying the recommendation list, the recommendation list comprising at least one of the makeup looks.
Further, the list makeup obtaining submodule includes a data acquisition unit and a makeup acquisition unit, wherein:
the data acquisition unit is used for acquiring the makeup product data and the makeup style data input by the user;
and the makeup acquisition unit is used for obtaining makeup based on the facial features, the makeup product data, and the makeup style data, at least one of the makeup looks being achievable using the makeup products.
And a target makeup obtaining module 1620 configured to obtain the target makeup selected by the user in the recommendation list.
Further, the target makeup obtaining module 1620 includes a makeup selection submodule, an instruction judgment submodule, a first makeup obtaining submodule, and a second makeup obtaining submodule, wherein:
and the makeup selection submodule is used for acquiring the basic makeup selected by the user in the recommendation list.
And the instruction judgment sub-module is used for judging whether an adjustment instruction input by a user is acquired.
And the first makeup obtaining sub-module is used for adjusting the basic makeup based on the adjustment instruction if the adjustment instruction input by the user is obtained, and taking the adjusted basic makeup as the target makeup.
And the second makeup obtaining sub-module is used for taking the basic makeup as the target makeup if the adjusting instruction is not obtained.
A makeup strategy obtaining module 1630 for obtaining a makeup strategy corresponding to the target makeup.
Further, the makeup strategy acquisition module 1630 includes a makeup step acquisition submodule and a step-by-step strategy acquisition submodule, wherein:
the makeup step acquisition submodule is used for acquiring the makeup steps of the target makeup, the makeup steps representing the makeup behaviors that are required for completing the target makeup and are performed sequentially in chronological order;
and the step-by-step strategy acquisition submodule is used for acquiring the makeup strategy corresponding to each makeup step. The makeup strategy comprises a makeup product, a makeup guiding method, and a makeup effect, wherein the makeup product comprises the name and dosage of the product, the makeup guiding method comprises the makeup action and the area of the face where the product is used, and the makeup effect comprises the color of the makeup product after it is combined with the face area and the contour lines of the facial features.
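To make the content of a makeup strategy concrete, a possible data layout is sketched below in Python; the class and field names are hypothetical assumptions, not a data model prescribed by this embodiment.

```python
# Hypothetical sketch of the data carried by a makeup strategy: one entry per
# makeup step, each with a product, a guiding method and an expected effect.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class MakeupProduct:
    name: str
    dosage: str                  # e.g. "pea-sized amount"


@dataclass
class GuidingMethod:
    action: str                  # e.g. "pat outward with a sponge"
    face_region: str             # where on the face the product is used


@dataclass
class MakeupEffect:
    color: Tuple[int, int, int]  # color after the product blends with the face region
    contour_notes: str           # expected contour lines of the facial features


@dataclass
class StrategyStep:
    step_name: str               # e.g. "base makeup", "eyebrows"
    product: MakeupProduct
    method: GuidingMethod
    effect: MakeupEffect


MakeupStrategy = List[StrategyStep]  # steps in chronological order
```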
Further, after the makeup strategy corresponding to the target makeup is obtained, the avatar-based intelligent makeup apparatus 1600 further includes a video acquisition module and a comparison analysis module, wherein:
and the video acquisition module is used for acquiring the makeup video of the user.
And the comparison analysis module is used for comparing and analyzing the makeup video of the user and the makeup strategy and displaying the result of the comparison analysis.
Further, the comparison analysis module includes a completion degree acquisition submodule and a strategy adjustment submodule, wherein:
the completion degree acquisition submodule is used for comparing the makeup video of the user with the makeup strategy to acquire the completion degree of the user's makeup, the completion degree representing the difference between the user's makeup video and the makeup strategy.
Further, the completion degree acquisition submodule includes a first video analysis unit, a first completion degree acquisition unit, and a first information generation unit, wherein:
the first video analysis unit is used for analyzing the makeup video and acquiring the makeup steps and the makeup effect of the user in the makeup video;
the first completion degree acquisition unit is used for analyzing, according to the makeup steps, the difference between the user's makeup effect and the makeup effect in the makeup strategy, and acquiring the completion degree of the user's makeup effect;
and the first information generation unit is used for generating and displaying first prompt information if the completion degree of the user's makeup effect is smaller than a first specified threshold, wherein the first prompt information comprises at least one of the completion degree of the makeup effect, the makeup effect in the makeup strategy, and a makeup effect correction suggestion.
Further, the completion degree acquisition submodule includes a second video analysis unit, a second completion degree acquisition unit, and a second information generation unit, wherein:
the second video analysis unit is used for analyzing the makeup video and acquiring the makeup steps and the makeup technique of the user in the makeup video;
the second completion degree acquisition unit is used for analyzing, according to the makeup steps, the difference between the user's makeup technique and the makeup guidance technique in the makeup strategy, and acquiring the completion degree of the user's makeup technique;
and the second information generation unit is used for generating and displaying second prompt information if the completion degree of the makeup technique is smaller than a second specified threshold, wherein the second prompt information comprises at least one of the completion degree of the makeup technique, the makeup guidance technique, and a makeup technique correction suggestion.
And the strategy adjustment submodule is used for adjusting the makeup strategy to an alternative strategy if the completion degree is smaller than a preset value, the alternative strategy being a strategy that better matches the user's makeup video.
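As one hedged illustration of how such a comparison could be scored, the Python sketch below computes a completion degree from per-metric agreement and checks it against thresholds to decide whether to show prompt information or switch to an alternative strategy; the scoring rule, metric names, and threshold values are assumptions for illustration only.

```python
# Hypothetical sketch: a completion degree in [0, 1] measuring how closely the
# user's makeup video matches the makeup strategy, with threshold-based prompts
# and an optional switch to a better-matching alternative strategy.
from typing import Dict, List, Tuple


def completion_degree(user_metrics: Dict[str, float],
                      target_metrics: Dict[str, float]) -> float:
    """Average per-metric agreement between the user's result and the strategy."""
    if not target_metrics:
        return 1.0
    scores = []
    for key, target in target_metrics.items():
        user = user_metrics.get(key, 0.0)
        scores.append(max(0.0, 1.0 - abs(user - target) / max(abs(target), 1e-6)))
    return sum(scores) / len(scores)


def analyze_step(user_metrics: Dict[str, float],
                 target_metrics: Dict[str, float],
                 prompt_threshold: float = 0.8,
                 switch_threshold: float = 0.5) -> Tuple[float, List[str], bool]:
    """Return (completion degree, prompt messages, whether to switch strategy)."""
    degree = completion_degree(user_metrics, target_metrics)
    prompts: List[str] = []
    if degree < prompt_threshold:
        prompts.append(f"Makeup completion is {degree:.0%}; a correction suggestion is shown.")
    switch_to_alternative = degree < switch_threshold
    return degree, prompts, switch_to_alternative


if __name__ == "__main__":
    # e.g. comparing eyebrow color intensity and coverage for one makeup step.
    print(analyze_step({"color_intensity": 0.5, "coverage": 0.6},
                       {"color_intensity": 0.8, "coverage": 0.9}))
```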
The video processing module 1640 is used for generating a video for guiding makeup by an avatar according to the makeup strategy and displaying the video, wherein the avatar is generated in advance according to the face image of the user.
Further, the video processing module 1640 includes a parameter generation submodule, an avatar driving submodule, and a video display submodule, wherein:
the parameter generation submodule is used for generating, according to the makeup strategy, expression driving parameters and action driving parameters corresponding to the avatar guiding the user's makeup behavior.
Further, the parameter generation submodule includes a model processing unit, wherein:
and the model processing unit is used for inputting the makeup strategy and the face image of the user into a parameter generation model and acquiring expression driving parameters and action driving parameters corresponding to the makeup strategy, and the parameter generation model is obtained by real person makeup video training and is used for outputting the expression driving parameters and the action driving parameters corresponding to the makeup strategy according to the input makeup strategy.
The avatar driving submodule is used for driving the expression and action of the avatar based on the expression driving parameters and the action driving parameters to generate a video in which the avatar guides the makeup, the video being formed by the plurality of image frames generated by driving the avatar; and the video display submodule is used for displaying the video.
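A minimal Python sketch of this path is given below, assuming a parameter generation model trained on real-person makeup videos that emits per-frame expression and action driving parameters, and an external routine that renders one avatar frame per parameter set; all interfaces and the placeholder output are illustrative assumptions.

```python
# Hypothetical sketch of the video processing path: makeup strategy + face
# image -> per-frame driving parameters -> avatar frames -> guidance video.
from typing import Callable, Dict, List, Sequence


class ParameterGenerationModel:
    """Stand-in for a model trained on real-person makeup videos."""

    def generate(self, strategy_step: Dict, face_image) -> List[Dict]:
        # Placeholder output; a trained model would predict these per frame.
        return [{"expression": {"smile": 0.2},
                 "action": {"hand": "apply", "region": strategy_step.get("region", "face")}}
                for _ in range(3)]


def render_guidance_video(strategy: Sequence[Dict], face_image,
                          model: ParameterGenerationModel,
                          drive_avatar: Callable[[Dict, Dict], object]) -> List[object]:
    """Drive the avatar with the generated parameters; the frames form the video."""
    frames: List[object] = []
    for step in strategy:
        for params in model.generate(step, face_image):
            frames.append(drive_avatar(params["expression"], params["action"]))
    return frames


if __name__ == "__main__":
    video = render_guidance_video(
        strategy=[{"step": "base makeup", "region": "cheek"}],
        face_image=None,
        model=ParameterGenerationModel(),
        drive_avatar=lambda expr, act: (expr, act),  # stub renderer
    )
    print(len(video), "frames")
```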
Further, the avatar-based intelligent makeup apparatus 1600 further includes an image acquisition module, a skin data acquisition module, and a suggestion generation module, wherein:
and the image acquisition module is used for acquiring a face image of the user.
And the skin data acquisition module is used for acquiring the skin data of the user from the face image.
And the suggestion generation module is used for analyzing the skin data and generating skin care suggestions.
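For orientation only, the following Python sketch shows one way the four top-level modules of the apparatus could be wired together; the interfaces are assumptions, and, as noted below, each module may equally be realized in hardware or as a software functional module.

```python
# Hypothetical sketch of the top-level composition of the apparatus:
# list display -> target makeup acquisition -> strategy acquisition -> video.
class AvatarMakeupApparatus:
    def __init__(self, list_display, target_makeup, strategy, video):
        self.list_display = list_display    # displays the recommendation list
        self.target_makeup = target_makeup  # obtains the makeup chosen by the user
        self.strategy = strategy            # obtains the corresponding makeup strategy
        self.video = video                  # generates and displays the guidance video

    def run(self, face_image):
        looks = self.list_display.show(face_image)
        chosen = self.target_makeup.select(looks)
        plan = self.strategy.acquire(chosen)
        return self.video.render_and_show(plan, face_image)
```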
It can be clearly understood by those skilled in the art that the avatar-based intelligent makeup apparatus provided in the embodiments of the present application can implement each process of the foregoing method embodiments. For convenience and brevity of description, the specific working processes of the apparatus and modules described above may refer to the corresponding processes in the foregoing method embodiments and are not described herein again.
In the embodiments provided in the present application, the coupling, direct coupling, or communication connection between the modules shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or modules may be electrical, mechanical, or in another form.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module.
Referring to fig. 17, fig. 17 is a block diagram illustrating an electronic device for performing the avatar-based intelligent makeup method according to an embodiment of the present application. The electronic device 1700 may be any electronic device capable of running an application, such as a smart vanity mirror, a smartphone, a tablet computer, or an e-book reader. The electronic device 1700 in the present application may include one or more of the following components: a processor 1710, a memory 1720, and one or more applications, wherein the one or more applications may be stored in the memory 1720 and configured to be executed by the one or more processors 1710, the one or more applications being configured to perform the methods described in the foregoing method embodiments.
The processor 1710 may include one or more processing cores. The processor 1710 connects various parts of the electronic device 1700 using various interfaces and lines, and performs the functions of the electronic device 1700 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1720 and by invoking data stored in the memory 1720. Alternatively, the processor 1710 may be implemented in hardware using at least one of Digital Signal Processing (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 1710 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like, where the CPU mainly handles the operating system, the user interface, application programs, and so on; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is to be appreciated that the modem may also be implemented by a separate communication chip rather than being integrated into the processor 1710.
The memory 1720 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 1720 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1720 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the foregoing method embodiments, and the like. The data storage area may store data created by the electronic device 1700 during use (such as a phone book, audio and video data, and chat log data), and so forth.
Referring to fig. 18, fig. 18 illustrates a storage unit for storing or carrying program codes for implementing an avatar-based intelligent makeup method according to an embodiment of the present application. The computer-readable storage medium 1800 has stored therein program code that can be invoked by a processor to perform the methods described in the above-described method embodiments.
The computer-readable storage medium 1800 may be an electronic memory such as a flash memory, an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a hard disk, or a ROM. Optionally, the computer-readable storage medium 1800 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 1800 has storage space for program code 1810 for performing any of the method steps described above. The program code can be read from or written to one or more computer program products. The program code 1810 may, for example, be compressed in a suitable form.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features thereof may be equivalently replaced, and such modifications or replacements do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (15)

1. An intelligent makeup method based on an avatar, comprising:
displaying a recommendation list of makeup;
acquiring a target makeup selected by a user in the recommendation list;
obtaining a makeup strategy corresponding to the target makeup;
and generating a video for guiding makeup by an avatar according to the makeup strategy and displaying the video, wherein the avatar is generated in advance according to the face image of the user.
2. The method of claim 1, wherein the displaying the recommendation list of makeup comprises:
acquiring a face image of a user, and generating facial features of the user according to the face image;
obtaining at least one makeup look according to the facial features;
displaying the recommendation list, the recommendation list including at least one of the makeup looks.
3. The method of claim 2, wherein said obtaining at least one makeup look based on said facial features comprises:
acquiring makeup product data and makeup style data input by a user;
obtaining makeup based on the facial features, the makeup product data, and the makeup style data, at least one of the makeup looks being achievable using the makeup products.
4. The method of claim 1, wherein the obtaining of the target makeup selected by the user in the recommendation list comprises:
obtaining the basic makeup selected by the user in the recommendation list;
judging whether an adjusting instruction input by a user is acquired;
if an adjusting instruction input by a user is acquired, adjusting the basic makeup based on the adjusting instruction, and taking the adjusted basic makeup as the target makeup;
and if the adjustment instruction is not acquired, taking the base makeup as the target makeup.
5. The method of claim 1, wherein obtaining the makeup strategy corresponding to the target makeup comprises:
obtaining a makeup step of the target makeup, the makeup step representing the makeup behaviors that are required for completing the target makeup and are performed sequentially in chronological order;
obtaining a makeup strategy corresponding to the makeup step, wherein the makeup strategy comprises a makeup product, a makeup guiding method and a makeup effect, the makeup product comprises the name and the dosage of the makeup product, the makeup guiding method comprises a makeup action and the use area of the makeup product on the face, and the makeup effect comprises the color of the makeup product after it is combined with the face area and the contour lines of the facial features.
6. The method as claimed in claim 5, wherein after obtaining the makeup strategy corresponding to the target makeup, further comprising:
obtaining a makeup video of a user;
and comparing and analyzing the makeup video of the user and the makeup strategy, and displaying the result of the comparison and analysis.
7. The method of claim 6, wherein the comparing the makeup video of the user with the makeup strategy and displaying the results of the comparing comprises:
comparing and analyzing the makeup video of the user and the makeup strategy to obtain the completion degree of the makeup of the user, wherein the completion degree is used for representing the difference between the makeup video of the user and the makeup strategy;
and if the completion degree is smaller than a preset value, adjusting the makeup strategy to an alternative strategy, wherein the alternative strategy is a strategy that better matches the makeup video of the user.
8. The method according to claim 7, wherein the comparing the makeup video of the user with the makeup strategy to obtain the completion of the makeup of the user comprises:
analyzing the makeup video to obtain a makeup step and a makeup effect of the user in the makeup video of the user;
analyzing the difference between the makeup effect of the user and the makeup effect in the makeup strategy according to the makeup step to obtain the completion degree of the makeup effect of the user;
and if the completion degree of the makeup effect of the user is smaller than a first specified threshold, generating and displaying first prompt information, wherein the first prompt information comprises at least one of the completion degree of the makeup effect, the makeup effect in the makeup strategy and a makeup effect correction suggestion.
9. The method according to claim 7, wherein the comparing the makeup video of the user with the makeup strategy to obtain the completion of the makeup of the user comprises:
analyzing the makeup video to obtain a makeup step and a makeup technique of the user in the makeup video of the user;
analyzing the difference between the user's makeup technique and the makeup guidance technique in the makeup strategy according to the makeup step to obtain the completion degree of the user's makeup technique;
and if the completion degree of the makeup technique is smaller than a second specified threshold, generating and displaying second prompt information, wherein the second prompt information comprises at least one of the completion degree of the makeup technique, the makeup guidance technique and a makeup technique correction suggestion.
10. The method according to any one of claims 1-9, wherein said generating and displaying a video of an avatar directing makeup according to said makeup strategy comprises:
generating, according to the makeup strategy, expression driving parameters and action driving parameters corresponding to the avatar guiding the user's makeup behavior;
driving the expression and the action of the avatar based on the expression driving parameters and the action driving parameters to generate a video in which the avatar guides the makeup, wherein the video is formed by a plurality of frames of images generated by driving the avatar;
and displaying the video.
11. The method according to claim 10, wherein the generating, according to the makeup strategy, expression driving parameters and action driving parameters corresponding to the avatar guiding the user's makeup behavior comprises:
inputting the makeup strategy into a parameter generation model, and acquiring the expression driving parameters and the action driving parameters corresponding to the makeup strategy, wherein the parameter generation model is obtained by training on real-person makeup videos and is used for outputting, according to the input makeup strategy, the expression driving parameters and the action driving parameters corresponding to the makeup strategy.
12. The method of claim 1, further comprising:
acquiring a face image of a user;
acquiring skin data of a user from the face image;
analyzing the skin data to generate skin care advice;
displaying the skin care advice.
13. An intelligent make-up device based on an avatar, the device comprising:
the list display module is used for displaying a recommendation list of makeup;
the target makeup obtaining module is used for obtaining the target makeup selected by the user in the recommendation list;
the makeup strategy obtaining module is used for obtaining a makeup strategy corresponding to the target makeup;
and the video processing module is used for generating a video for guiding makeup by an avatar according to the makeup strategy and displaying the video, wherein the avatar is generated in advance according to the face image of the user.
14. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the method of any one of claims 1-12.
15. A computer-readable storage medium having program code stored therein, the program code being invoked by a processor to perform the method of any of claims 1-12.
CN202010801807.0A 2020-08-11 2020-08-11 Intelligent makeup method and device based on virtual image, electronic equipment and storage medium Pending CN111968248A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010801807.0A CN111968248A (en) 2020-08-11 2020-08-11 Intelligent makeup method and device based on virtual image, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010801807.0A CN111968248A (en) 2020-08-11 2020-08-11 Intelligent makeup method and device based on virtual image, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111968248A true CN111968248A (en) 2020-11-20

Family

ID=73365110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010801807.0A Pending CN111968248A (en) 2020-08-11 2020-08-11 Intelligent makeup method and device based on virtual image, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111968248A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256432A (en) * 2017-12-20 2018-07-06 歌尔股份有限公司 A kind of method and device for instructing makeup
CN108229415A (en) * 2018-01-17 2018-06-29 广东欧珀移动通信有限公司 Information recommendation method, device, electronic equipment and computer readable storage medium
CN109446365A (en) * 2018-08-30 2019-03-08 新我科技(广州)有限公司 A kind of intelligent cosmetic exchange method and storage medium
CN109784281A (en) * 2019-01-18 2019-05-21 深圳壹账通智能科技有限公司 Products Show method, apparatus and computer equipment based on face characteristic

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112749634A (en) * 2020-12-28 2021-05-04 广州星际悦动股份有限公司 Control method and device based on beauty equipment and electronic equipment
CN112819718A (en) * 2021-02-01 2021-05-18 深圳市商汤科技有限公司 Image processing method and device, electronic device and storage medium
CN115120077A (en) * 2021-03-20 2022-09-30 海信集团控股股份有限公司 Cosmetic mirror and method for assisting make-up
CN113672752A (en) * 2021-07-28 2021-11-19 杭州知衣科技有限公司 Garment multi-mode fusion search system and method based on deep learning
CN116797864A (en) * 2023-04-14 2023-09-22 东莞莱姆森科技建材有限公司 Auxiliary cosmetic method, device, equipment and storage medium based on intelligent mirror
CN116797864B (en) * 2023-04-14 2024-03-19 东莞莱姆森科技建材有限公司 Auxiliary cosmetic method, device, equipment and storage medium based on intelligent mirror

Similar Documents

Publication Publication Date Title
CN111968248A (en) Intelligent makeup method and device based on virtual image, electronic equipment and storage medium
KR102241153B1 (en) Method, apparatus, and system generating 3d avartar from 2d image
WO2021147920A1 (en) Makeup processing method and apparatus, electronic device, and storage medium
Le et al. Live speech driven head-and-eye motion generators
US10799010B2 (en) Makeup application assist device and makeup application assist method
CN101055647B (en) Method and device for processing image
CN111432267B (en) Video adjusting method and device, electronic equipment and storage medium
US20190130652A1 (en) Control method, controller, smart mirror, and computer readable storage medium
KR101306221B1 (en) Method and apparatus for providing moving picture using 3d user avatar
US20160134840A1 (en) Avatar-Mediated Telepresence Systems with Enhanced Filtering
CN110874557A (en) Video generation method and device for voice-driven virtual human face
JP7448652B2 (en) Image-to-image conversion using unpaired data for supervised learning
EP3488371A1 (en) Technique for controlling virtual image generation system using emotional states of user
CN108920490A (en) Assist implementation method, device, electronic equipment and the storage medium of makeup
CN111045582A (en) Personalized virtual portrait activation interaction system and method
WO2023284435A1 (en) Method and apparatus for generating animation
US11776187B2 (en) Digital makeup artist
JP7278307B2 (en) Computer program, server device, terminal device and display method
CN108932654A (en) A kind of virtually examination adornment guidance method and device
US11961169B2 (en) Digital makeup artist
CN111523981A (en) Virtual trial method and device, electronic equipment and storage medium
CN116830073A (en) Digital color palette
CN112819718A (en) Image processing method and device, electronic device and storage medium
WO2022257766A1 (en) Image processing method and apparatus, device, and medium
JP7273752B2 (en) Expression control program, recording medium, expression control device, expression control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination