CN114119154A - Virtual makeup method and device - Google Patents

Virtual makeup method and device

Info

Publication number
CN114119154A
Authority
CN
China
Prior art keywords
image
makeup
product
cosmetic product
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111415899.XA
Other languages
Chinese (zh)
Inventor
贾辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111415899.XA priority Critical patent/CN114119154A/en
Publication of CN114119154A publication Critical patent/CN114119154A/en
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/06 Buying, selling or leasing transactions
    • G06Q 30/0601 Electronic shopping [e-shopping]
    • G06Q 30/0631 Item recommendations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/06 Buying, selling or leasing transactions
    • G06Q 30/0601 Electronic shopping [e-shopping]
    • G06Q 30/0641 Shopping interfaces
    • G06Q 30/0643 Graphical representation of items or shoppers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Abstract

The present disclosure provides a virtual makeup method and device, relating to the technical fields of augmented reality/virtual reality and computers, wherein the method comprises: acquiring an image of an object to be made up; identifying at least one makeup feature of the current makeup of the object based on the object image; screening at least one candidate cosmetic product from a plurality of cosmetic products according to the at least one makeup feature; generating a product list containing the at least one candidate cosmetic product for selection; determining a target cosmetic product from the at least one candidate cosmetic product in response to a selection input; and performing an image processing operation on the object image to obtain a makeup image. The method of the embodiments of the present disclosure allows cosmetic products to be recommended on a personalized basis, according to individual differences and preferences among users, improving the user experience.

Description

Virtual makeup method and device
Technical Field
The present disclosure relates to the field of computer technologies, in particular to the field of artificial intelligence technologies such as augmented/virtual reality and image processing, and more particularly to a method and an apparatus for virtual makeup, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Makeup is the application of cosmetics and tools, following prescribed steps and techniques, to render, draw, and arrange the face, facial features, and other parts of the human body, so as to enhance dimensionality, adjust shape and color, conceal flaws, and express vitality, thereby beautifying the visual impression.
With the development of modern e-commerce and online shopping platforms, users increasingly tend to purchase suitable cosmetic products on virtual online platforms. Some existing client-side shopping applications offer recommendation and virtual makeup try-on functions: these applications perform image processing on a face image input by the user and return a makeup image simulating the face after the recommended makeup product has been applied, for the user's reference. However, existing cosmetic recommendation lists are usually compiled from the recommendations of livestream hosts, e-commerce platforms, and the like, so the list may not meet the user's actual needs, and such products therefore cannot personalize the recommended makeup.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, the problems mentioned in this section should not be considered as having been acknowledged in any prior art.
Disclosure of Invention
The present disclosure provides a method and apparatus for virtual makeup, an electronic device, a computer-readable storage medium, and a computer program product.
According to an aspect of the present disclosure, there is provided a method of virtual makeup, including: acquiring an image of an object to be made up; identifying at least one makeup feature of a current makeup of the object based on the object image; screening at least one candidate cosmetic product from a plurality of cosmetic products according to the at least one makeup feature; generating a product list containing the at least one candidate cosmetic product for selection; determining a target cosmetic product from the at least one candidate cosmetic product in response to a selection input; and performing an image processing operation on the object image to obtain a makeup image, wherein the image processing operation is associated with the target makeup product.
According to another aspect of the present disclosure, there is provided an apparatus for virtual makeup, including: an image acquisition unit configured to acquire an image of an object to be made up; an identifying unit configured to identify at least one makeup feature of a current makeup of the object based on the object image; a screening unit configured to screen at least one candidate cosmetic product from a plurality of cosmetic products according to the at least one makeup feature; a list generating unit configured to generate a product list containing the at least one candidate cosmetic product for selection; a determination unit configured to determine a target cosmetic product from among the at least one candidate cosmetic product in response to a selection input; and an image processing unit configured to perform an image processing operation on the object image to obtain a makeup image, wherein the image processing operation is associated with the target makeup product.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
According to another aspect of the present disclosure, there is also provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the above-described method.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program realizes the above-mentioned method when executed by a processor.
According to one or more embodiments of the present disclosure, at least one makeup feature of a user's current makeup may be identified, and cosmetic products for virtual try-on may then be recommended according to those features, so that the recommended products match the user's current makeup. The method of this embodiment therefore allows cosmetic products to be recommended on a personalized basis, according to individual differences and preferences among users, improving the user experience.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain example implementations of those embodiments. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a method of virtual makeup in accordance with an embodiment of the present disclosure;
FIG. 3 illustrates a flow chart of a method of screening out, from a plurality of cosmetic products, at least one candidate cosmetic product having a color that matches a makeup color, according to an embodiment of the present disclosure;
FIG. 4 illustrates an arrangement of cosmetic products arranged by their colors;
FIG. 5 shows a flow diagram of a method of generating a product list in accordance with an embodiment of the present disclosure;
FIG. 6 shows a flow diagram of a method of performing image processing operations on facial images according to an embodiment of the present disclosure;
FIG. 7 illustrates a mouth image for identifying facial keypoints, according to an embodiment of the present disclosure;
fig. 8 shows a mask image obtained by using the mouth image shown in fig. 7 as an input;
fig. 9 illustrates a block diagram of a structure of an apparatus for virtual makeup according to an embodiment of the present disclosure;
fig. 10 is a block diagram illustrating a structure of an apparatus for virtual makeup according to another embodiment of the present disclosure; and
FIG. 11 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented in accordance with embodiments of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, server 120 may run one or more services or software applications that enable the method of virtual makeup to be performed.
In some embodiments, the server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In certain embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof, which may be executed by one or more processors. A user operating a client device 101, 102, 103, 104, 105, and/or 106 may, in turn, utilize one or more client applications to interact with the server 120 to take advantage of the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from system 100. Accordingly, fig. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The user may use the client device 101, 102, 103, 104, 105, and/or 106 to input and upload a facial image of the user, and may select a target cosmetic product from a list of products via the client device 101, 102, 103, 104, 105, and/or 106. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that any number of client devices may be supported by the present disclosure.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptops), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and so forth. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., GOOGLE Chrome OS); or include various Mobile operating systems such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, Android. Portable handheld devices may include cellular telephones, smart phones, tablets, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head-mounted displays (such as smart glasses) and other devices. The gaming system may include a variety of handheld gaming devices, internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a variety of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, WIFI), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, midrange servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture involving virtualization (e.g., one or more flexible pools of logical storage that may be virtualized to maintain virtual storage for the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above, as well as any commercially available server operating systems. The server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like.
In some implementations, the server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of the client devices 101, 102, 103, 104, 105, and/or 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and/or 106.
In some embodiments, the server 120 may be a server of a distributed system, or a server incorporating a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system that remedies the drawbacks of high management difficulty and weak service scalability found in traditional physical host and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The database 130 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The database 130 may be of different types. In certain embodiments, the database used by the server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data to and from the database in response to the command.
In some embodiments, one or more of the databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key-value stores, object stores, or regular stores supported by a file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
Fig. 2 illustrates a flow chart of a method 200 of virtual makeup according to an embodiment of the present disclosure, as illustrated in fig. 2, the method 200 including:
step 201, obtaining an object image to be made up;
step 202, identifying at least one makeup feature of the current makeup of the object based on the object image;
step 203, screening out at least one candidate cosmetic product from a plurality of cosmetic products according to the at least one makeup feature;
step 204, generating a product list containing at least one candidate cosmetic product for selection;
step 205, in response to a selection input, determining a target cosmetic product from at least one candidate cosmetic product;
step 206, performing an image processing operation on the object image to obtain a makeup image, wherein the image processing operation is associated with the target makeup product.
The method of the embodiment of the present disclosure first identifies at least one makeup feature of the subject's current makeup, and then recommends cosmetic products for virtual try-on according to those features, so that the recommended cosmetic products match the subject's current makeup. The method therefore allows cosmetic products to be recommended on a personalized basis, according to the subject's individual characteristics and preferences, improving the user experience.
The object may be a customer who is using the relevant makeup APP, and the object image may be an image of the user's face. In step 201, a facial image of a user may be obtained using a related application (e.g., shopping platform APP, etc.) in client devices 101, 102, 103, 104, 105, and/or 106. The facial image may comprise a still image or a video stream, which may be captured by a camera of the client device. Note that the two-dimensional face image in the present embodiment is from a public data set.
In step 202, the makeup features may include the makeup color, shade, texture, and other features that can be obtained by feature recognition on the face image. The client device may upload the facial image to the server 120, where the feature recognition process may be performed. A makeup feature may refer to the makeup of the user's entire face, or to the makeup of a part of the face, for example: eye makeup, blush, lipstick, or foundation.
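As a minimal sketch of how such a color feature might be extracted in step 202, the snippet below computes the mean hue of a facial region, assuming the region's landmark coordinates are already available from a face landmark detector; the function and variable names are illustrative, not part of the disclosure.

```python
import cv2
import numpy as np

def region_makeup_color(image_bgr: np.ndarray, region_xy: np.ndarray) -> float:
    """Return the mean hue (degrees, 0-360) inside the polygon region_xy."""
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [region_xy.astype(np.int32)], 255)  # rasterize the region
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mean_hue = cv2.mean(hsv, mask=mask)[0]  # OpenCV stores hue in [0, 180)
    # Note: a circular mean would handle the red wrap-around more correctly;
    # a plain mean is kept here for brevity.
    return mean_hue * 2.0                   # scale to the 360-degree color wheel
```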
In step 203, some or all of the makeup features of the subject's current makeup may be used to screen, from the plurality of cosmetic products provided by the server 120, at least one candidate cosmetic product that matches the user's current makeup. Taking makeup color as an example: if step 202 identifies the subject's current makeup as red, at least one candidate cosmetic product with a color close to red, for example pink or magenta products, may be screened from the plurality of cosmetic products. As described above, a makeup feature may refer to the makeup of a particular part of the face, so in step 203 the plurality of cosmetic products may be screened by the makeup features of that specific part. Likewise, cosmetic products may be products applied to different parts of the face, including but not limited to eye makeup, blush, lipstick, and foundation. For example, if step 202 recognizes that the user's current lipstick is red, lipsticks with colors close to red may be screened from a plurality of lipsticks, and blushes with colors close to red may be screened from a plurality of blushes to match the color of the user's current lipstick. The screening may be performed in the server 120, which stores in advance the detailed information of the cosmetic products of each merchant/platform and screens at least one candidate cosmetic product from them.
In step 204, the server 120 sends the screened at least one candidate cosmetic product and its detailed information to the client devices 101, 102, 103, 104, 105 and/or 106. A relevant application (e.g., a shopping platform APP) generates a product list based on the at least one candidate cosmetic product and displays it to the user via the client device, and the user can select from the at least one candidate cosmetic product through the client device's interactive interface (e.g., the touch screen of a mobile phone or tablet).
In step 206, an image processing operation is performed on the object image acquired in step 201 to obtain a makeup image. The image processing operation is associated with the target cosmetic product; for example, if the target cosmetic product is a red lipstick, the corresponding image processing operation renders the mouth in the subject image with the same red color as the target product. The specific image processing operations are described in detail below.
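Before turning to those details, the overall flow of steps 201-206 can be summarized in a minimal sketch. The injected callables (`identify`, `screen`, `select`, `render`) are illustrative stand-ins for the stages described above, not an API defined by this disclosure.

```python
from typing import Callable, Sequence

def virtual_makeup(subject_image,
                   catalog: Sequence,
                   identify: Callable,   # step 202: makeup-feature recognition
                   screen: Callable,     # step 203: candidate screening
                   select: Callable,     # steps 204-205: list display and user choice
                   render: Callable):    # step 206: product-specific image processing
    features = identify(subject_image)
    candidates = screen(catalog, features)
    target = select(candidates)          # the user's selection input picks the target
    return render(subject_image, target) # returns the makeup image
```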
It should be added that, although in this embodiment and some embodiments below the object image to be made up is a face image of a user, it is understood that in other embodiments of the present disclosure the object to be made up is not limited to a human being and may also be an animal such as a pet. Likewise, the object image is not limited to a face image and may be an image of other body parts such as hands or feet, in which case the corresponding cosmetic product may be, for example, nail polish or stickers.
In some embodiments, the makeup feature includes a makeup color, and at least one candidate cosmetic product having a color that matches the makeup color may be screened from the plurality of cosmetic products.
FIG. 3 illustrates a flow chart of a method 300 of screening out, from a plurality of cosmetic products, at least one candidate cosmetic product having a color that matches the makeup color, the method 300 comprising:
step 301, obtaining product color information of at least one cosmetic product of a plurality of cosmetic products;
step 302, determining a color difference between a product color of the at least one cosmetic product and the makeup color;
step 303, screening out at least one candidate cosmetic product whose color difference is smaller than a preset difference value from the plurality of cosmetic products.
In step 301, product information for each cosmetic product, including product color information, may be acquired from the server 120.
In step 302, the plurality of cosmetic products from step 301 may be arranged according to their colors. Fig. 4 illustrates an arrangement 400 in which a plurality of cosmetic products are arranged by color along a 360° circumference, with similarly colored products placed adjacently. The color difference between a product color and the makeup color can then be expressed as the angle between the position of the makeup color and the position of that product's color on the wheel: the larger the angle difference, the larger the difference between the product color and the makeup color, and vice versa.
In step 303, at least one candidate cosmetic product whose color difference is smaller than a preset difference value may be screened from the plurality of cosmetic products. For example, in the arrangement shown in fig. 4, the preset difference value may be set to a preset angle of, say, 15°; if the makeup color is at position A, all cosmetic products whose angle difference from position A is less than 15° may be screened out as candidate cosmetic products. Equivalently, candidate cosmetic products may be screened by calculating the cosine of the angle difference, as sketched below.
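A minimal sketch of this hue-angle screening follows, assuming each product record exposes its color as a hue angle on the 360° wheel of fig. 4 (for example, the value produced by the earlier color-extraction sketch); the `hue` attribute and function names are assumptions for illustration.

```python
import math

def hue_distance_deg(a: float, b: float) -> float:
    """Smallest angle between two positions on the 360-degree color wheel."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def screen_by_color(products, makeup_hue: float, max_diff: float = 15.0):
    # Since cosine decreases monotonically on [0, 180], the cosine-based test
    # mentioned above (keep p when cos(angle difference) > cos(max_diff)) is
    # equivalent to comparing the angles directly.
    threshold = math.cos(math.radians(max_diff))
    return [p for p in products
            if math.cos(math.radians(hue_distance_deg(p.hue, makeup_hue))) > threshold]
```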
In the method of the embodiment, the cosmetic products matched with the current makeup color of the user can be selected, so that the selected candidate cosmetic products can better meet the preference of the user, and the use experience of the user is improved.
Fig. 5 shows a flow diagram of a method 500 of generating a product list, the method 500 comprising:
step 501, obtaining commodity parameters of at least one candidate cosmetic product;
step 502, ranking the at least one candidate cosmetic product according to the commodity parameters of the at least one candidate cosmetic product.
At least one candidate cosmetic product has been obtained through the method 300. In step 501, commodity parameters of each of the at least one candidate cosmetic product are obtained, including manufacturer, distributor, time to market, promotional discounts, and the like.
In step 502, a display priority may be set for each candidate cosmetic product according to its commodity parameters. For example, in some embodiments, newly launched candidate cosmetic products may be given high priority and older products low priority; products with a promotional discount may be prioritized over those without; and, in some embodiments, the user's purchase preferences may be derived by analyzing the user's purchase history, so that cosmetic products from manufacturers or distributors the user prefers are given high priority. The at least one candidate cosmetic product is then ranked based on the determined priorities, and the client device displays higher-priority candidates first based on the product list, as sketched below.
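A minimal sketch of such priority ranking follows; the field names (`is_new`, `has_discount`, `brand`) and the additive scores are assumptions for illustration, and a real system might weight purchase-history preferences differently.

```python
def rank_candidates(candidates, preferred_brands=frozenset()):
    def priority(product) -> int:
        score = 0
        if getattr(product, "is_new", False):
            score += 2          # newly launched products ranked higher
        if getattr(product, "has_discount", False):
            score += 1          # discounted products ranked above full-price ones
        if getattr(product, "brand", None) in preferred_brands:
            score += 3          # brands the user historically prefers come first
        return score
    return sorted(candidates, key=priority, reverse=True)  # high scores shown first
```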
The method of the embodiment can preferentially display the user preference or candidate cosmetic products which are more attractive to the user, so that the use experience of the user is further improved.
In some embodiments, the subject image comprises a facial image of the subject, and fig. 6 illustrates a flow chart of a method 600 of performing image processing operations on the facial image to obtain a makeup image, the method 600 comprising:
step 601, identifying face key points in a face image;
step 602, constructing a face mesh model based on face key points;
step 603, acquiring a two-dimensional map corresponding to the target cosmetic product;
step 604, fusing the two-dimensional map to the facial image according to the facial mesh model to obtain a map image;
step 605, acquiring at least one mask image according to the facial image, wherein the mask image contains image information about the contours of the subject's facial features; and
step 606, fusing the at least one mask image and the map image to obtain a makeup image.
In step 601, the facial keypoints may be points that mark the outline of facial features, for example, the facial keypoints may be points on the lip contour line, and these points may be turning points on the contour line. The server 120 may recognize key points of the lip part through a face recognition algorithm. The above-mentioned face key points may also be points on the eye contour line, points on the cheek contour line, etc., which are not listed here.
In step 602, a mesh model is a piecewise-linear surface formed by triangles connected to one another through shared edges and vertices in three-dimensional space; by building such a mesh model, the two-dimensional face image can be converted into an image model in three-dimensional space. The vertices of the mesh model are the facial key points obtained in step 601.
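As a concrete illustration of step 602, the detected key points can be triangulated into such a mesh. Using Delaunay triangulation via scipy is an assumption here, since the disclosure does not name a specific triangulation method.

```python
import numpy as np
from scipy.spatial import Delaunay

def build_face_mesh(keypoints_xy: np.ndarray) -> np.ndarray:
    """keypoints_xy: (N, 2) facial key points; returns (M, 3) triangle indices,
    each row indexing the three key points of one mesh triangle."""
    return Delaunay(keypoints_xy).simplices
```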
In step 603, the server 120 stores information related to the target cosmetic product in advance, including a two-dimensional map corresponding to the target cosmetic product. The two-dimensional map is an image of a standard-shaped facial feature generated in advance according to the physical characteristics of the target cosmetic product, such as its color, brightness, texture, and intensity. For example, the two-dimensional map may be an image of a standard-shaped mouth after the target cosmetic product has been applied to it. The two-dimensional map carries anchor points corresponding to the facial key points, for the subsequent fusion of the two-dimensional map onto the facial image.
In step 604, the anchor points in the two-dimensional map are aligned with the corresponding facial key points via the mesh model obtained in step 602, thereby fusing the two-dimensional map onto the facial image. During fusion, pixels in the two-dimensional map may directly replace the corresponding pixels of the face image. In some other embodiments, to improve the realism of the map image, pixels in the two-dimensional map may instead be blended with the corresponding face pixels to produce new pixels that replace them; illustratively, the map pixels and face pixels may be mixed by weighting their pixel values.
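A minimal sketch of this weighted blending variant is shown below. The constant `alpha` weight is an illustrative assumption; per-pixel weights (e.g., derived from the map's opacity) would work the same way.

```python
import numpy as np

def blend_map(face: np.ndarray, warped_map: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """face, warped_map: aligned HxWx3 uint8 images (map already warped via the mesh)."""
    mixed = alpha * warped_map.astype(np.float32) + (1.0 - alpha) * face.astype(np.float32)
    return np.clip(mixed, 0, 255).astype(np.uint8)  # back to a displayable image
```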
The map image obtained in step 604 can exhibit the effect of the target cosmetic product applied to the subject's face. In some special cases, however, for example where parts of the face are occluded by an obstruction, portions of the obstruction in the map image may merge with the map, reducing the realism of the image. Fig. 7 illustrates a mouth image 700 used for identifying facial key points, the key points being as shown in the figure; in the mouth image 700, a finger covers the lips in a hushing ("shh") gesture. However, the facial key points near the mouth obtained by the face recognition algorithm in step 601 do not account for the finger, so in the subsequently generated map image the finger would also be rendered with the color of the target cosmetic product, distorting the map image.
To solve this problem, in steps 605 to 606, the map image is further corrected using the mask image to eliminate the influence of obstructions such as fingers.
In step 605, the facial image may be input into a mask acquisition model to obtain at least one mask image. The mask acquisition model may be obtained by machine training based on a convolutional neural network. The training process may include feeding a plurality of training samples into the model to be trained, each sample consisting of a face image as input and the corresponding mask image as output, and iteratively correcting the model parameters until a trained mask acquisition model is obtained. Fig. 8 shows a mask image 800 obtained with the mouth image of fig. 7 as input. As shown in fig. 8, the mask image 800 contains image information about the contour of the user's mouth: pixel values are greater than 0 inside the mouth region (the white portion in fig. 8) and equal to 0 elsewhere (the black portion in fig. 8). Because the mask acquisition model is obtained by machine training, when the face images used as training samples contain obstructions, the trained model can distinguish the user's face (e.g., mouth) from an obstruction (e.g., a finger).
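For illustration only, a deliberately small convolutional stand-in for the mask acquisition model is sketched below; the actual architecture is an assumption, as the disclosure states only that the model is machine-trained on pairs of face images and mask images.

```python
import torch
import torch.nn as nn

class MaskNet(nn.Module):
    """Toy mask acquisition model: face image in, single-channel mask out."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # per-pixel mask logit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 3, H, W) face image -> (B, 1, H, W) mask in [0, 1]; after
        # training, values near 1 mark the unoccluded lip region, so a finger
        # over the lips stays out of the mask.
        return torch.sigmoid(self.body(x))
```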
In step 606, the at least one mask image obtained in step 605 and the map image obtained in step 604 may be fused to obtain the makeup image. Specifically, the map image may be sampled: where the lip region in the mask image is hit (pixel value > 0), the color of the corresponding pixel in the map image is rendered; otherwise, the color of the corresponding pixel in the initial face image is rendered. This image processing eliminates the erroneous rendering of occluders such as fingers and improves the realism of the makeup image. In addition, in some embodiments, the makeup image may be given Gaussian blur processing so that it looks smoother, further improving its realism.
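A minimal sketch of this mask-guided sampling followed by a Gaussian blur is given below; the 3x3 kernel size is an illustrative choice, not one specified by the disclosure.

```python
import cv2
import numpy as np

def composite_makeup(face: np.ndarray, map_image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """mask: HxW array, > 0 inside the unoccluded lip region, 0 elsewhere."""
    hit = (mask > 0)[..., None]               # broadcast the hit test over channels
    makeup = np.where(hit, map_image, face)   # occluded pixels keep the original face
    return cv2.GaussianBlur(makeup, (3, 3), 0)  # smooth the final makeup image
```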
It should be added that, although the above embodiments describe the image processing operation using the user's mouth as an example, it is understood that the method 600 can be applied to other parts of the face, such as the eyes and cheeks, and the occluding object is not limited to a finger.
In the technical scheme of the disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other processing of the personal information of the related user are all in accordance with the regulations of related laws and regulations and do not violate the good customs of the public order.
According to the embodiment of the disclosure, a virtual makeup device is also provided. Fig. 9 is a block diagram illustrating a structure of an apparatus 900 for virtual makeup according to an embodiment of the present disclosure, and as shown in fig. 9, the apparatus 900 includes: the image acquisition unit 910 is configured to acquire an image of an object to be made up; the identifying unit 920 is configured to identify at least one makeup feature of the current makeup of the subject based on the subject image; the screening unit 930 is configured to screen at least one candidate cosmetic product from the plurality of cosmetic products according to the at least one cosmetic feature; the list generation unit 940 is configured to generate a product list containing at least one candidate cosmetic product for selection; the determination unit 950 is configured to determine a target cosmetic product from among the at least one candidate cosmetic product in response to a selection input; and the image processing unit 960 is configured to perform an image processing operation on the object image, resulting in a makeup image, wherein the image processing operation is associated with the target makeup product.
Fig. 10 is a block diagram illustrating a configuration of an apparatus 1000 for virtual makeup according to another embodiment of the present disclosure, and as shown in fig. 10, a screening unit 1030 includes: the color obtaining module 1031 is configured to obtain product color information of at least one cosmetic product of a plurality of cosmetic products; the determination module 1032 is configured to determine a color difference of a product color of the at least one cosmetic product and a makeup color; and a screening module 1033 configured to screen at least one candidate cosmetic product from the plurality of cosmetic products for a color difference less than a predetermined difference value.
The list generating unit 1040 includes: the parameter obtaining module 1041 is configured to obtain commodity parameters of at least one candidate cosmetic product, respectively; and the ranking module 1042 is configured to rank the at least one candidate cosmetic product according to the commodity parameter of the at least one candidate cosmetic product.
The subject image includes a face image of the subject, and the image processing unit 1060 includes: the recognition module 1061 is configured to recognize facial keypoints in a facial image; the construction module 1062 is configured to construct a face mesh model based on the facial keypoints; the map acquisition module 1063 is configured to acquire a two-dimensional map corresponding to the target product; and the first fusion module 1064 is configured to fuse the two-dimensional map onto the face image according to the face mesh model to obtain a map image.
The image processing unit further includes: the mask acquisition module 1065 is configured to acquire at least one mask image from the facial image, wherein the mask image contains image information about the contour of the facial features of the subject; and a second fusion module 1066 configured to fuse the at least one mask image with the map image to obtain a makeup image.
The image processing unit further includes: the blurring module 1067 is configured to blur the makeup image.
Here, the operations of the units 910-960 of the virtual makeup apparatus 900 are similar to the operations of the steps 201-206 of the method 200, and the operations of the modules of the virtual makeup apparatus 1000 are similar to the operations of the corresponding steps of the methods 300, 500, and 600, and are not repeated herein.
According to an embodiment of the present disclosure, there is also provided an electronic device, a readable storage medium, and a computer program product.
Referring to fig. 11, a block diagram of an electronic device 1100, which may be a server or a client of the present disclosure and which is an example of a hardware device applicable to aspects of the present disclosure, will now be described. The electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant as examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 11, the electronic device 1100 includes a computing unit 1101, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1102 or a computer program loaded from a storage unit 1108 into a Random Access Memory (RAM) 1103. The RAM 1103 may also store various programs and data necessary for the operation of the electronic device 1100. The computing unit 1101, the ROM 1102, and the RAM 1103 are connected to each other by a bus 1104. An input/output (I/O) interface 1105 is also connected to the bus 1104.
A number of components in electronic device 1100 connect to I/O interface 1105, including: an input unit 1106, an output unit 1107, a storage unit 1108, and a communication unit 1109. The input unit 1106 may be any type of device capable of inputting information to the electronic device 1100, and the input unit 1106 may receive input numeric or character information and generate key signal inputs related to user settings and/or function controls of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote control. Output unit 1107 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. Storage unit 1108 may include, but is not limited to, a magnetic or optical disk. The communication unit 1109 allows the electronic device 1100 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as bluetooth (TM) devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 1101 can be any of various general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1101 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 1101 performs the respective methods and processes described above, such as the method of virtual makeup. For example, in some embodiments, the method of virtual makeup may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1108. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 1100 via the ROM 1102 and/or the communication unit 1109. When the computer program is loaded into the RAM 1103 and executed by the computing unit 1101, one or more steps of the method of virtual makeup described above may be performed. Alternatively, in other embodiments, the computing unit 1101 may be configured to perform the method of virtual makeup by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements in the embodiments or examples may be omitted or replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Various elements in the embodiments or examples may also be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.

Claims (18)

1. A method of virtual makeup, comprising:
acquiring an image of an object to be made up;
identifying at least one makeup feature of a current makeup of the subject based on the subject image;
selecting at least one candidate cosmetic product from a plurality of cosmetic products based on the at least one cosmetic characteristic;
generating a product list containing the at least one candidate cosmetic product for selection;
determining a target cosmetic product from the at least one candidate cosmetic product in response to a selection input; and
carrying out an image processing operation on the object image to obtain a makeup image, wherein the image processing operation is associated with the target makeup product.
2. The method of claim 1, wherein the makeup feature comprises a makeup color, and wherein screening out at least one candidate cosmetic product from a plurality of cosmetic products based on the at least one makeup feature comprises:
selecting at least one candidate cosmetic product having a color that matches the makeup color from the plurality of cosmetic products.
3. The method of claim 2, wherein screening the plurality of cosmetic products for at least one candidate cosmetic product having a color that matches the makeup color comprises:
obtaining product color information of at least one cosmetic product of the plurality of cosmetic products;
determining a color difference between a product color of the at least one cosmetic product and the makeup color; and
screening out at least one candidate cosmetic product with the color difference smaller than a preset difference value from the plurality of cosmetic products.
4. The method of claim 1, wherein generating a product list containing the at least one candidate cosmetic product for selection comprises:
obtaining commodity parameters of the at least one candidate cosmetic product;
ranking the at least one candidate cosmetic product according to the commodity parameter of the at least one candidate cosmetic product.
5. The method of any one of claims 1 to 4, wherein the object image comprises a facial image of the object, and wherein performing an image processing operation on the object image to obtain a makeup image comprises:
identifying facial keypoints in the facial image;
constructing a face mesh model based on the face key points;
acquiring a two-dimensional map corresponding to the target cosmetic product; and
fusing the two-dimensional map onto the facial image according to the facial mesh model to obtain a map image.
6. The method of claim 5, wherein performing an image processing operation on the object image to obtain a makeup image further comprises:
acquiring at least one mask image according to the facial image, wherein the mask image contains image information about the outline of the five sense organs of the face of the subject; and
fusing the at least one mask image and the map image to obtain a makeup image.
7. The method of claim 6, wherein acquiring at least one mask image according to the facial image further comprises:
inputting the facial image into a mask acquisition model to obtain the at least one mask image, wherein the mask acquisition model is obtained by machine training.
8. The method of claim 6, wherein, after fusing the at least one mask image with the map image to obtain the makeup image, the method further comprises:
performing blurring processing on the makeup image.
9. A virtual makeup apparatus, comprising:
an image acquisition unit configured to acquire an image of an object to be made up;
an identifying unit configured to identify at least one makeup feature of a current makeup of a subject based on the subject image;
a screening unit configured to screen at least one candidate cosmetic product from a plurality of cosmetic products according to the at least one makeup feature;
a list generating unit configured to generate a product list containing the at least one candidate cosmetic product for selection;
a determination unit configured to determine a target cosmetic product from the at least one candidate cosmetic product in response to a selection input; and
an image processing unit configured to perform an image processing operation on the object image to obtain a makeup image, wherein the image processing operation is associated with the target cosmetic product.
10. The apparatus of claim 9, wherein the at least one makeup feature comprises a makeup color, and wherein the screening unit is further configured to:
select, from the plurality of cosmetic products, at least one candidate cosmetic product having a product color that matches the makeup color.
11. The apparatus of claim 10, wherein the screening unit comprises:
a color acquisition module configured to acquire product color information of at least one cosmetic product of the plurality of cosmetic products;
a determination module configured to determine a color difference between the product color of the at least one cosmetic product and the makeup color; and
a screening module configured to screen out, from the plurality of cosmetic products, at least one candidate cosmetic product whose color difference is smaller than a preset difference value.
12. The apparatus of claim 9, wherein the list generating unit comprises:
a parameter acquisition module configured to acquire commodity parameters of the at least one candidate cosmetic product; and
a ranking module configured to rank the at least one candidate cosmetic product according to the commodity parameters.
13. The apparatus according to any one of claims 9 to 12, wherein the object image comprises a facial image of the object, and wherein the image processing unit comprises:
an identification module configured to identify facial keypoints in the facial image;
a construction module configured to construct a facial mesh model based on the facial keypoints;
a map acquisition module configured to acquire a two-dimensional map corresponding to the target cosmetic product; and
a first fusion module configured to fuse the two-dimensional map onto the facial image according to the facial mesh model to obtain a map image.
14. The apparatus of claim 13, wherein the image processing unit further comprises:
a mask acquisition module configured to acquire at least one mask image according to the facial image, wherein the mask image contains image information about the contours of the facial features of the object; and
a second fusion module configured to fuse the at least one mask image with the map image to obtain the makeup image.
15. The apparatus of claim 14, wherein the image processing unit further comprises:
a blurring module configured to perform blurring processing on the makeup image.
16. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor,
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 8.
17. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1 to 8.
18. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 8.
CN202111415899.XA 2021-11-25 2021-11-25 Virtual makeup method and device Pending CN114119154A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111415899.XA CN114119154A (en) 2021-11-25 2021-11-25 Virtual makeup method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111415899.XA CN114119154A (en) 2021-11-25 2021-11-25 Virtual makeup method and device

Publications (1)

Publication Number Publication Date
CN114119154A (en)

Family

ID=80373200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111415899.XA Pending CN114119154A (en) 2021-11-25 2021-11-25 Virtual makeup method and device

Country Status (1)

Country Link
CN (1) CN114119154A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220180565A1 (en) * 2020-12-09 2022-06-09 Chanel Parfums Beaute Method for identifying a lip-makeup product appearing in an image

Similar Documents

Publication Publication Date Title
US20200285858A1 (en) Method for generating special effect program file package, method for generating special effect, electronic device, and storage medium
CN114972958B (en) Key point detection method, neural network training method, device and equipment
CN115409922B (en) Three-dimensional hairstyle generation method, device, electronic equipment and storage medium
CN111563855A (en) Image processing method and device
CN116051729B (en) Three-dimensional content generation method and device and electronic equipment
CN116228867B (en) Pose determination method, pose determination device, electronic equipment and medium
CN117274491A (en) Training method, device, equipment and medium for three-dimensional reconstruction model
CN116245998B (en) Rendering map generation method and device, and model training method and device
CN115661375B (en) Three-dimensional hair style generation method and device, electronic equipment and storage medium
CN114119154A (en) Virtual makeup method and device
CN114120448B (en) Image processing method and device
CN114119935B (en) Image processing method and device
CN116030185A (en) Three-dimensional hairline generating method and model training method
CN115393514A (en) Training method of three-dimensional reconstruction model, three-dimensional reconstruction method, device and equipment
CN115601555A (en) Image processing method and apparatus, device and medium
CN114049472A (en) Three-dimensional model adjustment method, device, electronic apparatus, and medium
CN113223128B (en) Method and apparatus for generating image
CN115423827B (en) Image processing method, image processing device, electronic equipment and storage medium
CN112528929A (en) Data labeling method and device, electronic equipment, medium and product
CN115937430B (en) Method, device, equipment and medium for displaying virtual object
CN114120412B (en) Image processing method and device
CN116030191B (en) Method, device, equipment and medium for displaying virtual object
CN116311519B (en) Action recognition method, model training method and device
CN116385641B (en) Image processing method and device, electronic equipment and storage medium
CN115761855B (en) Face key point information generation, neural network training and three-dimensional face reconstruction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination