CN117830081A - Makeup generation method, device and equipment for virtual object and readable storage medium

Info

Publication number: CN117830081A
Authority: CN (China)
Prior art keywords: makeup, target, image, element type, parameter
Legal status: Pending
Application number: CN202311862079.4A
Other languages: Chinese (zh)
Inventors: 陈佳丽, 谷长健, 吴昊潜, 毕梦霄, 李林橙, 吕唐杰, 范长杰, 胡志鹏
Current Assignee: Netease Hangzhou Network Co Ltd
Original Assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202311862079.4A
Publication of CN117830081A

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses a method, an apparatus, an electronic device and a computer-readable storage medium for generating a makeup for a virtual object. A makeup reference image is acquired, in which a reference object has a target reference makeup to be simulated. Based on the makeup reference image, a target element type parameter of the reference object is identified under each preset makeup constituent element, and a plurality of target makeup attribute parameters of the target reference makeup under the target element type parameter are determined, where the target element type parameter indicates the element type corresponding to the target reference makeup among the element types of the corresponding makeup constituent element. A target virtual object whose makeup is to be updated is acquired, and the target makeup corresponding to each makeup constituent element is customized for the target virtual object based on the target element type parameters and their corresponding target makeup attribute parameters. The method and device can improve the makeup generation efficiency of the virtual object.

Description

Makeup generation method, device and equipment for virtual object and readable storage medium
Technical Field
The present application relates to the field of game technologies, and in particular, to a method and apparatus for generating a makeup of a virtual object, an electronic device, and a computer readable storage medium.
Background
With the spread of the internet, the continuous development of hardware and software technology has driven the emergence of intelligent devices and software. A large number of games with different themes have been launched to meet user demand, and with the rapid development of technologies in the game industry, the requirements placed on virtual objects in games have also diversified.
At present, a makeup can generally be generated for different virtual objects in a game through adjustable controls, such as a control for selecting a lipstick color. However, a virtual object's makeup has many constituent elements, and adjusting each control individually takes considerable time, so makeup generation for virtual objects is inefficient.
Disclosure of Invention
The embodiment of the application provides a method, a device, electronic equipment and a computer readable storage medium for generating the makeup of a virtual object, which can improve the efficiency of generating the makeup of the virtual object.
In a first aspect, an embodiment of the present application provides a method for generating a makeup of a virtual object, where the method includes:
acquiring a makeup reference image, wherein a reference object in the makeup reference image has a target reference makeup to be simulated;
identifying a target element type parameter of the reference object under each preset makeup component element based on the makeup reference image, and determining a plurality of target makeup attribute parameters of the target reference makeup under the target element type parameter, wherein the target element type parameter indicates an element type corresponding to the target reference makeup under a plurality of element types of the corresponding makeup component element;
acquiring a target virtual object of the makeup to be updated;
and customizing the target makeup corresponding to each makeup constituent element for the target virtual object based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter.
In a second aspect, an embodiment of the present application further provides a device for generating a makeup of a virtual object, where the device includes:
the image acquisition module is used for acquiring a makeup reference image, wherein a reference object in the makeup reference image has a target reference makeup to be simulated;
A parameter determining module, configured to identify a target element type parameter of the reference object under each preset makeup component element based on the makeup reference image, and determine a plurality of target makeup attribute parameters of the target reference makeup under the target element type parameter, where the target element type parameter indicates an element type corresponding to the target reference makeup under a plurality of element types of the corresponding makeup component element;
the object acquisition module is used for acquiring a target virtual object of the makeup to be updated;
and the makeup customizing module is used for customizing the target makeup corresponding to each makeup constituent element for the target virtual object based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter.
In a third aspect, an embodiment of the present application further provides an electronic device, including a memory storing a computer program, where the computer program, when executed by a processor, causes the processor to execute any one of the methods for generating a makeup of a virtual object provided in the embodiments of the present application.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium storing a computer program that, when run on an electronic device, causes the electronic device to execute any one of the methods for generating a makeup of a virtual object provided in the embodiments of the present application.
In the embodiment of the application, a makeup reference image is acquired, in which a reference object has a target reference makeup to be simulated. Then, based on the makeup reference image, a target element type parameter of the reference object is identified under each preset makeup constituent element, and a plurality of target makeup attribute parameters of the target reference makeup under the target element type parameter are determined, where the target element type parameter indicates the element type corresponding to the target reference makeup among the element types of the corresponding makeup constituent element. Finally, a target virtual object whose makeup is to be updated is acquired, and the target makeup corresponding to each makeup constituent element is customized for the target virtual object based on the target element type parameters and their corresponding target makeup attribute parameters. Because the reference makeup to be simulated is given an explicit parameter representation and the makeup is customized from that representation, the time otherwise spent adjusting the makeup through controls is avoided, and the makeup generation efficiency of the virtual object is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic diagram of a makeup generating system for a virtual object according to an embodiment of the present application;
fig. 2 is a schematic flow chart of an embodiment of a method for generating a makeup of a virtual object according to an embodiment of the present application;
FIG. 3 is a schematic illustration of a process for generating a target cosmetic image provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of another process for generating a target cosmetic image provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of a face segmentation process provided in an embodiment of the present application;
FIG. 6 is a schematic illustration of the alignment between a make-up sub-image and a make-up reference sub-image provided in an embodiment of the present application;
FIG. 7 is a training schematic of a game simulator provided by an embodiment of the present application;
fig. 8 is a schematic structural diagram of a makeup generating device for a virtual object according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and fully below with reference to the accompanying drawings. The embodiments described are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the present application.
Before explaining the embodiments of the present application in detail, some terms related to the embodiments of the present application are explained.
Wherein in the description of embodiments of the present application, the terms "first," "second," and the like may be used herein to describe various concepts, but such concepts are not limited by these terms unless otherwise specified. These terms are only used to distinguish one concept from another. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the application provides a method and device for generating a makeup of a virtual object, an electronic device, and a computer-readable storage medium. Specifically, the method for generating a makeup of a virtual object in the embodiment of the application may be executed by an electronic device, where the electronic device may be a terminal or a server. The terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer (PC, Personal Computer) or a personal digital assistant (Personal Digital Assistant, PDA), and may further include a client, which may be a game application client, a browser client carrying a game program, an instant messaging client, or the like. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data, and artificial intelligence platforms.
For example, as shown in fig. 1, the electronic device is illustrated by taking a terminal 10 as an example, and the terminal may acquire a makeup reference image in which a reference object has a target reference makeup to be simulated; identifying a target element type parameter of the reference object under each preset makeup component element based on the makeup reference image, and determining a plurality of target makeup attribute parameters of the target reference makeup under the target element type parameter, wherein the target element type parameter indicates an element type corresponding to the target reference makeup under a plurality of element types of the corresponding makeup component element; acquiring a target virtual object of the makeup to be updated; and customizing the target makeup corresponding to each makeup constituent element for the target virtual object based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter.
Based on the above problems, embodiments of the present application provide a method, an apparatus, an electronic device, and a computer-readable storage medium for generating a makeup of a virtual object, which can improve the efficiency of generating a makeup of a virtual object.
The following detailed description is provided with reference to the accompanying drawings. The following description of the embodiments is not intended to limit the preferred embodiments. Although a logical order is depicted in the flowchart, in some cases the steps shown or described may be performed in an order different than depicted in the figures.
In this embodiment, a terminal is taken as an example for explanation, and this embodiment provides a method for generating a makeup of a virtual object, as shown in fig. 2, a specific flow of the method for generating a makeup of a virtual object may be as follows:
201. and acquiring a makeup reference image, wherein a reference object in the makeup reference image has a target reference makeup to be simulated.
The makeup reference image is the image to be referred to when the virtual object simulates a makeup. The reference object displayed in the image wears the makeup that the virtual object currently needs to simulate, namely the target reference makeup to be simulated.
In this embodiment, the terminal acquires the makeup reference image and customizes the makeup of the virtual object based on the makeup of the reference object in that image, so that the makeup of the virtual object is highly similar to the makeup of the reference object.
202. Identifying a target element type parameter of the reference object under each preset makeup component element based on the makeup reference image, and determining a plurality of target makeup attribute parameters of the target reference makeup under the target element type parameter, wherein the target element type parameter indicates an element type corresponding to the target reference makeup under a plurality of element types of the corresponding makeup component element.
It can be understood that the constituent elements of a makeup are discrete pieces of information, such as which kind of eye shadow, lip makeup, eyebrow shape or contouring is used, while the attributes under a specific element are continuous, such as the color, shade or length of a particular eye shadow. To express a makeup clearly and accurately, both continuous parameters and discrete parameters are therefore required, and how to design this parameter representation, so that a makeup can be customized for a virtual character from its parameters, is the key problem addressed in the present application.
In this embodiment, the terminal may identify, based on the makeup reference image, the target element type parameter of the reference object in the image under each preset makeup constituent element, that is, identify the element type of each makeup constituent element under the target reference makeup, thereby determining the current style of each makeup constituent element.
A makeup constituent element indicates a component required to form the makeup. The makeup constituent elements include, but are not limited to, base makeup, blush, eyebrow makeup, eye makeup, eyelashes, eye shadow, eye line, lip makeup and the like, and may be set according to requirements.
Accordingly, since each makeup constituent element corresponds to a plurality of element types, the element type that the target reference makeup of the reference object takes under each makeup constituent element must be determined, and the target element type parameter is used to indicate that element type. It can be understood that different element types correspond to different makeup templates; for example, if the element types of the eyebrow makeup include a line-eyebrow type, the line-eyebrow type corresponds to a line-eyebrow template.
For example, the base makeup may correspond to a foggy type, a creamy type, a shiny type and so on; the contouring may correspond to contouring types obtained by combining different contouring positions, such as combinations of at least two of cheekbone contouring, nose contouring and forehead contouring; the blush may correspond to an apple-muscle type, a cosmetic type, a large-area type, an under-eye type, a W type, a heart type and so on; the eyebrow makeup may correspond to a straight-eyebrow type, a willow-leaf-eyebrow type, a falling-tail-eyebrow type, a standard-eyebrow type, a star-eyebrow type, an arched-eyebrow type and so on; the eye makeup may correspond to a smoky type, a stage type, a bare-makeup type, a Korean-style type, a fresh type and so on; the eyelashes may correspond to a single type, a single-cluster type, a mesh-interweaving type, a transparent-stem type, an exaggerated-stage type and so on; the eye shadow may correspond to a monochrome tiled type, a two-color horizontal-gradient type, a three-color front type, a three-color back type, a small smoky type, a half-cut type and so on; the eye line may correspond to a parallel type, a drooping type, a lifted type, a Western type and so on; and the lip makeup may correspond to a matte type, a velvet type, a gloss type and so on.
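As an illustration only, the catalogue of element types enumerated above could be held in a plain configuration table such as the following sketch; the identifier names and the dictionary layout are assumptions for the example, not part of the claimed method.

```python
# Illustrative catalogue mapping makeup constituent elements to their
# element types. Names mirror the examples given above; the data
# structure itself is an assumed implementation detail.
MAKEUP_COMPONENTS = {
    "base_makeup": ["foggy", "creamy", "shiny"],
    "contouring": ["cheekbone_nose", "cheekbone_forehead", "nose_forehead"],
    "blush": ["apple_muscle", "cosmetic", "large_area", "under_eye", "w_shape", "heart"],
    "eyebrow": ["straight", "willow_leaf", "falling_tail", "standard", "star", "arch"],
    "eye_makeup": ["smoky", "stage", "bare", "korean", "fresh"],
    "eyelashes": ["single", "single_cluster", "mesh_interweaving", "transparent_stem", "exaggerated_stage"],
    "eye_shadow": ["monochrome_tiled", "two_color_gradient", "three_color_front", "three_color_back", "small_smoky", "half_cut"],
    "eye_line": ["parallel", "drooping", "lifted", "western"],
    "lip_makeup": ["matte", "velvet", "gloss"],
}
```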
In addition, for a makeup of the same element type, different attributes produce different display effects. For example, under the line-eyebrow type, a brown attribute yields a brown line eyebrow and a black attribute yields a black line eyebrow; likewise, a darker attribute yields a darker line eyebrow and a lighter attribute yields a lighter one.
In this embodiment, in order to express the makeup more accurately, the terminal also sets continuous parameters: for the target reference makeup, a plurality of target makeup attribute parameters are determined for each target element type parameter, so that the specific display effect of that element type is specified and the parameters can express the target reference makeup more precisely.
The makeup attribute parameter indicates an attribute of the element type parameter, and includes, but is not limited to, a color parameter, a concentration parameter, a size parameter, a rotation parameter and the like. Correspondingly, the target makeup attribute parameters represent the attributes associated with the target element type parameter.
Specifically, since the makeup attribute parameters are continuous, a different continuous representation and parameter range can be set for each makeup attribute parameter, so that the target makeup attribute parameters corresponding to the target reference makeup can be determined unambiguously.
The color parameters differ between makeup components. For example, the base makeup color can be represented by hue, saturation, brightness and gloss, while colors such as the eyebrow color can be represented by RGB color information. The concentration parameter can be given a corresponding value interval, for example for the intensity of the contouring, the shade of the eyebrows, the shade and gloss of the lips, the shade of the lip line and the shade of the eyelashes, so that a value can be taken for the target reference makeup. The size parameters can likewise be given value intervals, for example for the length of the eyelashes, the size of the eye beads in the eye makeup, the size of the iris, and the horizontal and vertical stretch lengths of the eyebrows. The rotation parameters can also be given value intervals, for example for the rotation angle of the eyebrows and the coverage angle of the eye shadow.
It can be understood that in some scenarios a makeup constituent element of the target reference makeup may be empty, or the target reference makeup may lack some target makeup attribute parameters under a target element type parameter. In this embodiment, the terminal may therefore set, for each makeup constituent element, an element type parameter whose element type is empty, indicating that the element is absent from the target reference makeup, and may set a null makeup attribute value for each type of makeup attribute parameter, indicating that the target reference makeup has no such attribute under the corresponding target element type parameter.
In some embodiments, since the element types of a makeup constituent element are discrete pieces of information, corresponding element type parameters may be configured for the element types of each makeup constituent element, and identifying the target element type parameters of the reference object under each preset makeup constituent element based on the makeup reference image may include: the terminal determines a partial image corresponding to the makeup constituent element in the makeup reference image, then determines, from the element types of the makeup constituent element, the target element type shown in the partial image, and determines the target element type parameter corresponding to that target element type.
Specifically, configuring corresponding element type parameters for the element types of a makeup constituent element may include: the terminal first determines the number of element types of the makeup constituent element, and then defines a multidimensional vector parameter, the element type parameter, whose number of dimensions matches the number of element types. For example, if the eyebrow makeup has 10 element types, the element type parameter is a 10-dimensional vector in which each dimension corresponds to one element type: if the first dimension represents the willow-leaf-eyebrow type, its discrete parameter representation is 1000000000, and if the second dimension represents the line-eyebrow type, its discrete parameter representation is 0100000000. The element types of a makeup constituent element can be encoded with the one-hot technique, yielding an element type parameter for each element type.
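A minimal sketch of this one-hot encoding follows. Representing the empty element type of an absent component as an all-zero vector is an assumed convention, not something the application specifies.

```python
import numpy as np

def element_type_parameter(component_types, target_type):
    """One-hot element type parameter for one makeup constituent element.

    component_types: ordered list of element types for the component.
    target_type: the element type identified from the makeup reference
                 image, or None when the component is absent (the
                 all-zero "empty" encoding is an assumed convention).
    """
    vec = np.zeros(len(component_types), dtype=np.float32)
    if target_type is not None:
        vec[component_types.index(target_type)] = 1.0
    return vec

# 10 eyebrow types -> a 10-dimensional one-hot parameter; the second
# dimension marks the line-eyebrow type, i.e. 0100000000.
eyebrow_types = ["willow_leaf", "line"] + [f"type_{i}" for i in range(8)]
print(element_type_parameter(eyebrow_types, "line"))
```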
In some embodiments, since the makeup attribute parameters are continuous, corresponding value intervals may be configured for the makeup attribute parameters of the target element type parameter, and determining the plurality of target makeup attribute parameters of the target reference makeup under the target element type parameter may include: the terminal determines, in the target reference makeup, the partial makeup corresponding to the target element type parameter, selects from the value interval of each makeup attribute parameter the makeup attribute value matching the partial makeup, and takes the selected value as the target makeup attribute parameter under the target element type parameter.
Specifically, the makeup attribute parameters may include a color parameter. Since a color parameter can be represented by RGB color information, the color channels can be split: the color parameter is configured with a value interval for each color channel, i.e. the R, G and B channels, each ranging from 0 to 255, so that the RGB color information is expressed as a multidimensional vector (up to three dimensions, R, G and B). Colors are written in hexadecimal, for example red is FF0000, and each color channel varies from 00 to FF, that is, from 0 to 255.
It can be understood that splitting the color channels in this way avoids the color-fitting anomalies that would result from directly normalizing the combined value of the three channels. On the one hand, the combined range of the three channels is far too large, spanning 0 to 16777215 (256×256×256 − 1), so after normalization a tiny change in the parameter causes a large change in the displayed color. On the other hand, the displayed color would change discontinuously: for example, 0000FF is blue, but adding 1 gives 000100, which displays as almost black.
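The following sketch illustrates the channel splitting and per-channel normalization described here, including the 0000FF example; the helper function is illustrative, not part of the claimed method.

```python
def split_and_normalize_rgb(hex_color: str):
    """Split a hex color into per-channel values normalized to [0, 1].

    Each channel spans 00-FF (0-255), so normalizing per channel keeps
    the parameter continuous, unlike normalizing the single combined
    value in [0, 16777215] (256*256*256 - 1).
    """
    value = int(hex_color.lstrip("#"), 16)
    r, g, b = (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF
    return [c / 255.0 for c in (r, g, b)]

# 0000FF is blue; as one combined number, adding 1 gives 000100, which
# displays as almost black -- per-channel splitting avoids this jump.
print(split_and_normalize_rgb("#0000FF"))  # [0.0, 0.0, 1.0]
print(split_and_normalize_rgb("#000100"))  # [0.0, 0.0039..., 0.0]
```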
For example, the color parameter of the base makeup skin color may be represented by hue with a value interval of -0.5 to 0.5, by saturation with a value interval of -1 to 1, by brightness with a value interval of -1 to 1, and by gloss with a value interval of 0 to 1, which may be set as required and is not limited here.
For example, the concentration parameter may be set to a corresponding value interval, such as 0 to 1, and may be specifically set according to the requirement, which is not limited herein.
For example, the size parameters may also be given corresponding value intervals: the eye size may range from 0.1 to 0.25, the iris size from 0.5 to 1, the eyelash length from 0 to 1, and the horizontal and vertical stretch lengths of the eyebrows each from 0.5 to 1.5, which may be set as required and is not limited here.
For example, the rotation parameters may also be given corresponding value intervals: the rotation angle of the eyebrows may range from -0.5 to 1, which may be set as required and is not limited here.
In some embodiments, before the makeup attribute value matching the partial makeup is selected, the value intervals need to be normalized, for example to the range 0 to 1, to facilitate subsequent processing. Specifically, the terminal may normalize the value intervals of the makeup attribute parameters of the target element type parameter to obtain normalized value intervals for those parameters.
It is understood that within the normalized value interval the color parameter is a multidimensional vector (up to three dimensions, R, G and B) varying from 0 to 1. Normalization may also be applied after a makeup attribute value is determined; for example, when the color parameter is represented by RGB information, the makeup attribute value is divided by 255.
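A minimal sketch of this interval normalization follows, using value intervals quoted above; the helper function is an illustrative assumption.

```python
def normalize(value: float, lo: float, hi: float) -> float:
    """Map a makeup attribute value from its interval [lo, hi] to [0, 1]."""
    return (value - lo) / (hi - lo)

# Example intervals quoted above: hue of the base makeup skin color,
# eyebrow rotation angle, and eyebrow horizontal stretch length.
print(normalize(0.2, -0.5, 0.5))   # hue in [-0.5, 0.5]       -> 0.7
print(normalize(0.25, -0.5, 1.0))  # rotation in [-0.5, 1]    -> 0.5
print(normalize(1.0, 0.5, 1.5))    # stretch in [0.5, 1.5]    -> 0.5
```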
203. And obtaining a target virtual object of the makeup to be updated.
In this embodiment, the target virtual object is the object for which the terminal currently needs to customize a makeup based on the target reference makeup. The target virtual object may or may not already wear a makeup; the terminal updates its makeup so that it matches the target reference makeup.
It can be understood that the target virtual object may be represented in two or three dimensions; that is, the makeup may be applied to a model of a three-dimensional target virtual object or to an image of a two-dimensional target virtual object, as designed for different application scenarios.
204. And customizing the target makeup corresponding to each makeup constituent element for the target virtual object based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter.
In this embodiment, the terminal determines the specific makeup of each makeup constituent element from the target element type parameter and its corresponding target makeup attribute parameters, and customizes the makeup for the target virtual object accordingly. Because the parameter representation of the target reference makeup to be simulated is made explicit and the makeup is customized from that representation, the time otherwise needed to adjust the makeup through controls is avoided and the makeup generation efficiency of the virtual object is improved.
Optionally, after the target element type parameter of each makeup constituent element is obtained, the target element type parameters of all the makeup constituent elements may be spliced to obtain the discrete information representation of the target reference makeup, and the target makeup may be customized based on the spliced parameters. The spliced target element type parameters may further be concatenated with the target makeup attribute parameters corresponding to them; if all values were normalized to the range 0 to 1 in advance, this yields a single group of parameters in the range 0 to 1 that represents the target reference makeup.
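As an illustration of this splicing, the sketch below concatenates the one-hot type parameters and the normalized attribute parameters into a single vector; the component set, dimensions and values are made up for the example.

```python
import numpy as np

# Assumed shapes for illustration: one-hot type vectors per component,
# followed by the normalized attribute values for each selected type.
type_params = {
    "eyebrow": np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 0], dtype=np.float32),
    "lip_makeup": np.array([1, 0, 0, 0], dtype=np.float32),
}
attr_params = {
    "eyebrow": np.array([0.43, 0.26, 0.18, 0.8], dtype=np.float32),     # e.g. RGB + concentration
    "lip_makeup": np.array([0.78, 0.12, 0.20, 0.6], dtype=np.float32),  # e.g. RGB + gloss
}

# Splice the discrete type parameters first, then the continuous
# attribute parameters, giving one vector in [0, 1] for the makeup.
makeup_vector = np.concatenate(
    [type_params[c] for c in type_params] + [attr_params[c] for c in attr_params]
)
print(makeup_vector.shape)  # (22,)
```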
In some embodiments, customizing the target makeup for the target virtual object according to the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter may include: the terminal can generate a target makeup image corresponding to the target virtual object through a preset neural network model based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter, wherein the target makeup image is used for indicating target makeup corresponding to each makeup constituent element customized for the target virtual object.
It can be appreciated that by feeding the makeup parameter representation of the target reference makeup into the neural network model, the model can simulate the target reference makeup for the target virtual object more accurately, since the parameters describe the target reference makeup more precisely.
As shown in fig. 3, an exemplary process for generating a target makeup image is illustrated; the game simulator in fig. 3 is the neural network model described above. In fig. 3, the makeup reference image is converted into its parameter representation, namely the target element type parameters of the target reference makeup and the corresponding target makeup attribute parameters, and these are input into the game simulator, which generates the target makeup image corresponding to the target virtual object. The two-dimensional or three-dimensional representation of the target virtual object may be input to the game simulator in advance or in real time.
Optionally, in some scenarios, to further assist the neural network model in accurately simulating the target makeup image of the target virtual object, facial-feature shape-pinching parameters of the reference object and of the target virtual object may also be input into the neural network model.
In some embodiments, because the facial features of the target virtual object differ from those of the reference object, directly copying the reference object's makeup onto the target virtual object gives a poor result. The parameters used to customize the target virtual object's makeup therefore need to be updated based on the difference between the makeup customized for the target virtual object and the target reference makeup of the reference object, so that the makeup generated from the updated parameters is closer to the target reference makeup.
Specifically, generating the target makeup image corresponding to the target virtual object through a preset neural network model based on the target element type parameters and the corresponding target makeup attribute parameters may include: the terminal generates a makeup image through the preset neural network model based on the target element type parameters and the corresponding target makeup attribute parameters; the terminal then determines the loss information between the makeup image and the makeup reference image; the terminal then updates the target element type parameters and/or the target makeup attribute parameters based on the loss information, so that the neural network model regenerates the makeup image from the updated parameters together with the unchanged ones, and returns to the step of determining the loss information between the makeup image and the makeup reference image, until a preset cut-off condition is satisfied. The finally generated makeup image is taken as the target makeup image corresponding to the target virtual object, so that the similarity between the target makeup in the target makeup image and the target reference makeup in the makeup reference image is high.
In addition, the latest target element type parameters and target makeup attribute parameters corresponding to the finally generated makeup image may be converted into the parameter formats used by a game engine; for example, the color parameters may be expressed in RGB hexadecimal notation.
As shown in fig. 4, another process for generating a target makeup image is illustrated; the game simulator in fig. 4 is the neural network model described above. In fig. 4, the parameter representation of the makeup reference image is updated based on the loss information between the makeup image and the makeup reference image, and the updated target element type parameters and/or target makeup attribute parameters are input into the game simulator, which regenerates the makeup image until the cut-off condition is satisfied and the target makeup image corresponding to the target virtual object is obtained. The graphic representation of the target virtual object shown in fig. 4 is a three-dimensional target virtual object to which the target makeup is applied.
In some embodiments, the cutoff condition includes that the number of times the neural network model generates the target makeup image satisfies a preset number of times, and/or that the loss information satisfies a preset loss condition.
The preset number of times may be set according to the requirement, and is not limited herein, for example, 10 times.
In some embodiments, the terminal may update the target element type parameter and/or the target makeup attribute parameter using a gradient descent method to reduce loss information between the makeup image and the makeup reference image.
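As a rough illustration of this optimization loop, the following sketch freezes the trained simulator and refines only the input parameter vector by gradient descent. The function names, the Adam optimizer variant, and the clamping to the normalized range are assumptions for illustration, not details specified by the application.

```python
import torch

def fit_makeup_parameters(simulator, reference_image, init_params,
                          region_loss, steps=10, lr=0.05):
    """Gradient-descent refinement of the makeup parameter vector.

    simulator: the pretrained game simulator (weights frozen); maps a
               parameter vector to a rendered makeup image.
    region_loss: callable computing the weighted per-region loss between
                 the generated image and the makeup reference image.
    """
    params = init_params.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([params], lr=lr)  # assumed optimizer
    for _ in range(steps):  # cut-off: a preset number of iterations
        optimizer.zero_grad()
        image = simulator(params)
        loss = region_loss(image, reference_image)
        loss.backward()                # gradients flow to the parameters only
        optimizer.step()
        params.data.clamp_(0.0, 1.0)   # keep values in the normalized range
    return params.detach()
```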
In some embodiments, the terminal may introduce a dedicated makeup loss to maximize the consistency of the makeup on each part of the faces of the reference object and the target virtual object, so that the target virtual object's makeup is more similar to the target reference makeup of the reference object, i.e. more accurate.
Specifically, the determining the loss information between the makeup image and the makeup reference image may include:
First, the terminal divides the makeup image based on a preset face division rule to obtain a makeup sub-image for each face region, and divides the makeup reference image based on the same rule to obtain a makeup reference sub-image for each face region.
In this embodiment, the makeup image and the makeup reference image are divided, according to the preset face division rule, into makeup sub-images and makeup reference sub-images corresponding to each region of the faces of the target virtual object and the reference object. For example, the facial-feature regions, such as the eyebrow region, eye region, lip region, nose region and face region, may be segmented. In addition, the terminal can obtain the pupil area within the eye region through morphological operations, and the eye-socket position can be constrained through the eye-frame region.
As shown in fig. 5, an exemplary face segmentation process is illustrated: the makeup image and the makeup reference image in fig. 5 are segmented into the makeup sub-images and makeup reference sub-images of the individual face regions, shown as areas of different colors, which are then cropped to exclude interference factors that need not be compared later, giving the images indicated by the arrow.
Secondly, the terminal determines the loss information between the makeup sub-image and the makeup reference sub-image of each face region separately, i.e. performs the loss calculation per face region to obtain a finer loss result, and assigns a corresponding weight to the loss information of each face region. Different weights allow fine adjustment of individual regions; for example, if a user wants higher accuracy for the eye makeup, a higher weight is given to the loss information of the face region corresponding to the eye makeup.
Before the terminal determines the loss information between the makeup sub-image and the makeup reference sub-image of each face region, it also needs to determine which region of the makeup sub-image is compared with which region of the makeup reference sub-image: the eye-socket region corresponds to the eye shadow, eye line and eyelashes, the eye-bead region to the pupil, the eyebrow region to the eyebrow makeup, the lip region to the lip makeup, and the face region to the base makeup, blush and so on.
Illustratively, as shown in fig. 6, a schematic diagram of the comparison between the makeup sub-images and the makeup reference sub-images for which loss information is determined is given: the makeup loss is calculated between the eye regions of the makeup sub-image and the makeup reference sub-image, and between their lip regions.
Specifically, the terminal matches the color-channel histogram of the makeup sub-image of each face region against the color-channel histograms of the makeup reference sub-images, obtains the makeup reference sub-image whose histogram matches that of a given makeup sub-image, and then performs the loss calculation on the matched pair to obtain the loss information of the corresponding face region, namely the makeup loss.
Finally, the terminal determines the loss information between the makeup image and the makeup reference image from the weighted loss information of the face regions.
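The per-region weighted loss described above might look like the following sketch, assuming normalized RGB images and boolean region masks. The 32-bin histogram comparison is an illustrative stand-in for the histogram matching step, and a differentiable variant would be needed if this loss drives the gradient-descent update described earlier.

```python
import numpy as np

def region_makeup_loss(makeup_img, reference_img, masks, weights):
    """Weighted per-region makeup loss (a rough, non-differentiable sketch).

    makeup_img, reference_img: (H, W, 3) arrays with values in [0, 1].
    masks: {region_name: (H, W) boolean mask} from the face division rule.
    weights: {region_name: float}, e.g. a larger weight for the eye
             region when eye makeup accuracy matters more.
    """
    total = 0.0
    for name, mask in masks.items():
        gen, ref = makeup_img[mask], reference_img[mask]  # (N, 3) pixels
        loss = 0.0
        for ch in range(3):  # compare per-channel color histograms
            h_gen, _ = np.histogram(gen[:, ch], bins=32, range=(0, 1), density=True)
            h_ref, _ = np.histogram(ref[:, ch], bins=32, range=(0, 1), density=True)
            loss += np.abs(h_gen - h_ref).mean()
        total += weights.get(name, 1.0) * loss
    return total
```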
In some embodiments, in addition to the makeup loss, loss information between the makeup image and the makeup reference image may be determined by calculating content loss and identity consistency loss.
In some embodiments, the training step of the neural network model may include:
firstly, a preset number of training samples are obtained, wherein the training samples comprise a history element type parameter of a history reference object with a history reference makeup, and a plurality of history makeup attribute parameters of the history reference makeup under the history element type parameter.
Secondly, a corresponding label is configured for each training sample, where the label is an image containing the history reference makeup of the history reference object.
And finally, training a preset neural network model through a preset number of training samples and labels corresponding to the training samples until the neural network model obtains a preset convergence condition, so as to obtain a trained neural network model.
In this embodiment, the terminal may use a deep generative network based on a convolutional neural network (Convolutional Neural Networks, CNN) and train the neural network model by constraining its loss to optimize the model parameters, so that the loss between the image generated by the neural network model and the label becomes smaller and smaller, i.e. the two-dimensional history makeup image of the history reference object generated by the model becomes increasingly similar to the label image containing the history reference makeup. The label image may be a photo or a screenshot of the history reference object wearing the history reference makeup.
The preset number may be set as required and is not limited here, for example 50,000. In this embodiment, virtual objects may be generated and sampled in the game engine to obtain the corresponding number of training samples.
It can be appreciated that training the neural network model on parameters that describe the reference makeup precisely overcomes the difficulty of training with discrete parameters, so that the output of the neural network model approximates the output of the game engine as closely as possible. Operations such as color-channel splitting and normalization also let the parameters express the target reference makeup over the largest possible space.
As shown in fig. 7, fig. 7 is a training schematic of the game simulator, which is the neural network model described above. In fig. 7, the training samples are input into the game simulator for training; for each training sample, the history makeup image output by the simulator is compared with the label image containing the corresponding history reference makeup to obtain the loss between them, and the parameters of the game simulator are updated based on that loss, making the generated history makeup images increasingly accurate, until the convergence condition is satisfied and the trained game simulator is obtained.
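A minimal supervised training loop consistent with this description is sketched below. The application only specifies constraining the loss between the generated image and the label; the L1 reconstruction loss, the Adam optimizer and the data layout are assumptions for illustration.

```python
import torch

def train_simulator(simulator, dataloader, epochs=10, lr=1e-4):
    """Supervised training of the CNN-based game simulator (a sketch).

    Each batch pairs a makeup parameter vector (history element type
    and attribute parameters) with its label image rendered by the
    game engine. Loss and optimizer choices are assumptions.
    """
    optimizer = torch.optim.Adam(simulator.parameters(), lr=lr)
    criterion = torch.nn.L1Loss()  # assumed reconstruction loss
    for _ in range(epochs):
        for params, label_image in dataloader:
            optimizer.zero_grad()
            generated = simulator(params)          # history makeup image
            loss = criterion(generated, label_image)
            loss.backward()
            optimizer.step()                       # update simulator weights
    return simulator
```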
In some embodiments, the makeup generation method for a virtual object can be applied to a role-playing game (RPG) to customize, accurately and quickly, a makeup that meets a player's requirements for the virtual object the player controls. When the player sees a makeup they like, the method of this embodiment can directly customize the corresponding look on the player's virtual object, so that the virtual object wears the makeup the player likes.
From the above, it can be seen that a makeup reference image is acquired, in which a reference object has a target reference makeup to be simulated. Then, based on the makeup reference image, a target element type parameter of the reference object is identified under each preset makeup constituent element, and a plurality of target makeup attribute parameters of the target reference makeup under the target element type parameter are determined, where the target element type parameter indicates the element type corresponding to the target reference makeup among the element types of the corresponding makeup constituent element. Finally, a target virtual object whose makeup is to be updated is acquired, and the target makeup corresponding to each makeup constituent element is customized for the target virtual object based on the target element type parameters and their corresponding target makeup attribute parameters. Because the reference makeup to be simulated is given an explicit parameter representation and the makeup is customized from it, the time otherwise spent adjusting the makeup through controls is avoided, and the makeup generation efficiency of the virtual object is improved.
In order to better implement the method, the embodiment of the application also provides a device for generating the makeup of the virtual object, and the device for generating the makeup of the virtual object can be integrated in electronic equipment, for example, computer equipment, and the computer equipment can be equipment such as a terminal and a server.
The terminal can be a mobile phone, a tablet personal computer, an intelligent Bluetooth device, a notebook computer, a personal computer and other devices; the server may be a single server or a server cluster composed of a plurality of servers.
For example, in this embodiment, taking a specific integration of a makeup generating device of a virtual object in a terminal as an example, a method in this embodiment of the present application is described in detail, and this embodiment provides a makeup generating device of a virtual object, as shown in fig. 8, the makeup generating device of a virtual object may include:
an image obtaining module 801, configured to obtain a makeup reference image, where a reference object in the makeup reference image has a target reference makeup to be simulated;
a parameter determining module 802, configured to identify a target element type parameter of the reference object under each preset makeup component element based on the makeup reference image, and determine a plurality of target makeup attribute parameters of the target reference makeup under the target element type parameter, where the target element type parameter indicates an element type corresponding to the target reference makeup under a plurality of element types of the corresponding makeup component element;
An object obtaining module 803, configured to obtain a target virtual object of a makeup to be updated;
and a makeup customizing module 804, configured to customize, for the target virtual object, a target makeup corresponding to each of the makeup constituent elements based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter.
In some embodiments, the plurality of element types of the makeup component are respectively configured with corresponding element type parameters, and the parameter determining module 802 is specifically configured to:
determining a partial image corresponding to the makeup component in the makeup reference image;
and determining a target element type corresponding to the makeup constituent element under the partial image from a plurality of element types of the makeup constituent element, and determining a target element type parameter corresponding to the target element type.
In some embodiments, the plurality of makeup attribute parameters of the target element type parameter are configured with corresponding value intervals, and the parameter determining module 802 is specifically configured to:
in the target reference makeup, determining a local makeup corresponding to the target element type parameter;
selecting a dressing attribute value matched with the local dressing from a value interval of the dressing attribute parameter;
And taking the selected dressing attribute value as a target dressing attribute parameter under the target element type parameter.
In some embodiments, the makeup generating device for a virtual object further includes a normalization processing module, where the normalization processing module is specifically configured to:
and normalizing the value intervals of the plurality of makeup attribute parameters of the target element type parameters to obtain normalized value intervals corresponding to the plurality of makeup attribute parameters.
In some embodiments, the makeup attribute parameter includes a color parameter, where the color parameter is configured with a value interval corresponding to at least one color channel.
In some embodiments, the makeup customizing module 804 is specifically configured to:
and generating a target makeup image corresponding to the target virtual object based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter through a preset neural network model, wherein the target makeup image is used for indicating target makeup corresponding to each makeup constituent element customized for the target virtual object.
In some embodiments, the makeup customizing module 804 is specifically configured to:
Generating a makeup image based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter through a preset neural network model;
determining loss information between the makeup image and the makeup reference image;
updating the target element type parameter and/or the target makeup attribute parameter based on the loss information, so that the neural network model regenerates the makeup image, and returning to the step of determining the loss information between the makeup image and the makeup reference image until the neural network model satisfies the preset cut-off condition, and taking the finally generated makeup image as the target makeup image corresponding to the target virtual object.
In some embodiments, the cutoff condition includes that the number of times the neural network model generates the target makeup image satisfies a preset number of times, and/or that the loss information satisfies a preset loss condition.
In some embodiments, the makeup customizing module 804 is specifically configured to:
dividing the makeup image based on a preset face division rule to obtain a makeup sub-image corresponding to each face area;
Dividing the makeup reference image based on the face division rule to obtain a makeup reference sub-image corresponding to each face area;
loss information between the makeup sub-image and the makeup reference sub-image corresponding to each face area is respectively determined, and corresponding weights are given to the loss information of each face area;
and determining loss information between the makeup image and the makeup reference image based on the loss information after the weight is given corresponding to each face area.
In some embodiments, the makeup generating device for a virtual object further includes a training module, where the training module is specifically configured to:
acquiring a preset number of training samples, wherein the training samples comprise history element type parameters of a history reference object with history reference makeup, and a plurality of history makeup attribute parameters of the history reference makeup under the history element type parameters;
configuring a corresponding label for each training sample, where the label is an image containing the history reference makeup of the history reference object;
training a preset neural network model through a preset number of training samples and labels corresponding to the training samples until the neural network model obtains a preset convergence condition, and obtaining a trained neural network model.
As can be seen from the above, the makeup generating device for a virtual object according to the present embodiment obtains a makeup reference image, where a reference object has a target reference makeup to be simulated. Then, based on the makeup reference image, it identifies a target element type parameter of the reference object under each preset makeup constituent element, and determines a plurality of target makeup attribute parameters of the target reference makeup under the target element type parameter, where the target element type parameter indicates an element type corresponding to the target reference makeup among a plurality of element types of the corresponding makeup constituent element. Finally, a target virtual object whose makeup is to be updated is obtained, and the target makeup corresponding to each makeup constituent element is customized for the target virtual object based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter. By explicitly determining the parameter representation corresponding to the reference makeup to be simulated and customizing the corresponding makeup for the target virtual object based on that representation, the time otherwise spent adjusting the makeup through controls is avoided, and the makeup generation efficiency of the virtual object is improved.
Correspondingly, the embodiment of the application also provides an electronic device, which may be a terminal, such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game machine, a personal computer (PC, Personal Computer), or a personal digital assistant (PDA, Personal Digital Assistant). As shown in fig. 9, which is a schematic structural diagram of an electronic device according to an embodiment of the present application, the electronic device 900 includes a processor 901 having one or more processing cores, a memory 902 having one or more computer-readable storage media, and a computer program stored on the memory 902 and executable on the processor. The processor 901 is electrically connected to the memory 902. It will be appreciated by those skilled in the art that the electronic device structure shown in the figure does not limit the electronic device, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
The processor 901 is the control center of the electronic device 900; it connects various parts of the entire electronic device 900 using various interfaces and lines, and performs the various functions of the electronic device 900 and processes data by running or loading software programs and/or modules stored in the memory 902 and invoking data stored in the memory 902, thereby monitoring the electronic device 900 as a whole.
In the embodiment of the present application, the processor 901 in the electronic device 900 loads the computer programs corresponding to the processes of one or more application programs into the memory 902, and the processor 901 runs the application programs stored in the memory 902, thereby implementing various functions as follows:
acquiring a makeup reference image, wherein a reference object in the makeup reference image has a target reference makeup to be simulated;
identifying a target element type parameter of the reference object under each preset makeup constituent element based on the makeup reference image, and determining a plurality of target makeup attribute parameters of the target reference makeup under the target element type parameter, wherein the target element type parameter indicates an element type corresponding to the target reference makeup under a plurality of element types of the corresponding makeup constituent element;
acquiring a target virtual object whose makeup is to be updated;
and customizing the target makeup corresponding to each makeup constituent element for the target virtual object based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter.
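For illustration only, the four steps above can be sketched in Python as follows. Every name in this sketch (identify_element_types, estimate_attribute_params, MakeupGenerator) is a hypothetical placeholder introduced here for readability, not part of the disclosed implementation:

```python
# Minimal sketch of the four-step flow, with hypothetical stand-in helpers.

def identify_element_types(reference_image):
    # Stand-in: map each makeup constituent element to a target element type
    # parameter recognized from the reference image (e.g. eyebrow style #2).
    return {"eyebrow": 2, "lip": 0, "eye_shadow": 1}

def estimate_attribute_params(reference_image, element_type):
    # Stand-in: target makeup attribute parameters under one element type.
    return {"color": (0.8, 0.2, 0.3), "intensity": 0.6}

class MakeupGenerator:
    # Stand-in for the preset neural network model of later embodiments.
    def customize(self, virtual_object, element_types, attribute_params):
        return {"object": virtual_object,
                "types": element_types,
                "attrs": attribute_params}

def generate_makeup(reference_image, virtual_object, model):
    element_types = identify_element_types(reference_image)            # step 2a
    attribute_params = {component: estimate_attribute_params(reference_image, t)
                        for component, t in element_types.items()}     # step 2b
    # Steps 3-4: customize the target makeup for the virtual object from the
    # explicit parameter representation, with no manual slider adjustment.
    return model.customize(virtual_object, element_types, attribute_params)

print(generate_makeup("reference.png", "virtual_object_01", MakeupGenerator()))
```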
In some embodiments, the plurality of element types of the makeup constituent element are respectively configured with corresponding element type parameters, and the identifying, based on the makeup reference image, the target element type parameters of the reference object under each preset makeup constituent element includes:
determining, in the makeup reference image, a partial image corresponding to the makeup constituent element;
and determining, from the plurality of element types of the makeup constituent element, a target element type to which the partial image corresponds, and determining the target element type parameter corresponding to the target element type.
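A hedged sketch of this sub-step: crop the partial image for one constituent element and score it against the element types configured for that element. The crop boxes and the toy classifier are assumptions made for illustration; the disclosure does not prescribe how the classification is performed:

```python
import numpy as np

# Hypothetical pixel crop boxes (top, bottom, left, right) per constituent element.
COMPONENT_BOXES = {"eyebrow": (5, 15, 10, 54), "lip": (44, 56, 22, 42)}

def identify_element_type(image, component, classifier):
    """Crop the partial image for one makeup constituent element, score every
    configured element type, and return the winning type's parameter (its index)."""
    top, bottom, left, right = COMPONENT_BOXES[component]
    patch = image[top:bottom, left:right]
    scores = classifier(patch)          # e.g. per-type confidence scores
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))                 # stand-in reference image
toy_classifier = lambda patch: rng.random(5)    # stand-in for a trained classifier
print(identify_element_type(image, "lip", toy_classifier))
```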
In some embodiments, the plurality of makeup attribute parameters of the target element type parameter are configured with corresponding value intervals, and the determining the plurality of target makeup attribute parameters of the target reference makeup under the target element type parameter includes:
determining, in the target reference makeup, a partial makeup corresponding to the target element type parameter;
selecting a makeup attribute value matching the partial makeup from the value interval of the makeup attribute parameter;
and taking the selected makeup attribute value as a target makeup attribute parameter under the target element type parameter.
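As a minimal sketch of how an attribute value might be matched against its configured value interval, assuming the simplest possible matcher (the observed mean intensity of the partial makeup, clipped into the interval); the actual matching criterion is not limited by this example:

```python
import numpy as np

def match_attribute_value(partial_makeup: np.ndarray,
                          interval: tuple[float, float]) -> float:
    """Select, from the configured value interval, the makeup attribute value
    that best matches the partial makeup (here: clipped mean intensity)."""
    lo, hi = interval
    observed = float(partial_makeup.mean())
    return float(np.clip(observed, lo, hi))

lip_patch = np.full((10, 20), 0.8)                   # hypothetical partial makeup crop
print(match_attribute_value(lip_patch, (0.0, 1.0)))  # -> 0.8
```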
In some embodiments, before selecting the makeup attribute value matching the partial makeup from the value interval of the makeup attribute parameter, the method further includes:
and normalizing the value intervals of the plurality of makeup attribute parameters of the target element type parameter to obtain normalized value intervals corresponding to the plurality of makeup attribute parameters.
In some embodiments, the makeup attribute parameter includes a color parameter, where the color parameter is configured with a value interval corresponding to at least one color channel.
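A small sketch of the normalization, assuming simple min-max scaling and a per-channel interval for the color parameter; both assumptions are illustrative rather than prescribed:

```python
from typing import Dict, Tuple

def normalize(value: float, interval: Tuple[float, float]) -> float:
    """Map a raw attribute value from its configured interval onto [0, 1]."""
    lo, hi = interval
    return (value - lo) / (hi - lo)

def denormalize(value: float, interval: Tuple[float, float]) -> float:
    """Map a normalized value in [0, 1] back onto the raw interval."""
    lo, hi = interval
    return lo + value * (hi - lo)

# Hypothetical intervals: one per color channel, plus a non-color attribute.
intervals: Dict[str, Tuple[float, float]] = {
    "lip_color_r": (0.0, 255.0),
    "lip_color_g": (0.0, 255.0),
    "lip_color_b": (0.0, 255.0),
    "lip_gloss":   (0.0, 10.0),
}
print(normalize(128.0, intervals["lip_color_r"]))   # ~0.502
print(denormalize(0.5, intervals["lip_gloss"]))     # 5.0
```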
In some embodiments, the customizing the target makeup for the target virtual object based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter includes:
and generating a target makeup image corresponding to the target virtual object based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter through a preset neural network model, wherein the target makeup image is used for indicating target makeup corresponding to each makeup constituent element customized for the target virtual object.
In some embodiments, the generating, by the preset neural network model, the target makeup image corresponding to the target virtual object based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter includes:
generating a makeup image based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter through a preset neural network model;
determining loss information between the makeup image and the makeup reference image;
updating the target element type parameter and/or the target makeup attribute parameter based on the loss information, so that the neural network model regenerates the makeup image, and returning to the step of determining loss information between the makeup image and the makeup reference image until the neural network model meets a preset cut-off condition, and taking the finally generated makeup image as the target makeup image corresponding to the target virtual object.
In some embodiments, the cut-off condition includes that the number of times the neural network model generates the target makeup image satisfies a preset number of times, and/or that the loss information satisfies a preset loss condition.
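A hedged PyTorch-style sketch of this loop and its cut-off conditions. The toy "generator" below is a fixed random projection standing in for the preset neural network model, and the L1 image loss is only one plausible choice; neither is prescribed by the disclosure:

```python
import torch

def optimize_parameters(model, reference, params, max_steps=200, tol=1e-3):
    """Regenerate the makeup image, measure its loss against the reference, and
    update the parameters, until a step budget and/or loss threshold is met."""
    params = params.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([params], lr=0.05)
    image = model(params)
    for _ in range(max_steps):                       # count-based cut-off
        loss = torch.nn.functional.l1_loss(image, reference)
        if loss.item() < tol:                        # loss-based cut-off
            break
        optimizer.zero_grad()
        loss.backward()                              # gradients w.r.t. the parameters
        optimizer.step()
        image = model(params)                        # regenerate the makeup image
    return image.detach(), params.detach()

torch.manual_seed(0)
proj = torch.randn(16, 3 * 8 * 8)
model = lambda p: torch.sigmoid(p @ proj).reshape(3, 8, 8)  # toy generator
reference = model(torch.randn(16))      # stand-in for the makeup reference image
final_image, fitted = optimize_parameters(model, reference, torch.randn(16))
```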
In some embodiments, the determining loss information between the makeup image and the makeup reference image includes:
dividing the makeup image based on a preset face division rule to obtain a makeup sub-image corresponding to each face area;
dividing the makeup reference image based on the face division rule to obtain a makeup reference sub-image corresponding to each face area;
respectively determining loss information between the makeup sub-image and the makeup reference sub-image corresponding to each face area, and assigning a corresponding weight to the loss information of each face area;
and determining loss information between the makeup image and the makeup reference image based on the weighted loss information corresponding to each face area.
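A minimal NumPy sketch of the weighted per-region loss. The face division rule is mocked with boolean pixel masks and the per-region weights are made-up values; the disclosure leaves both to the implementation:

```python
import numpy as np

def regional_loss(makeup_img, reference_img, region_masks, region_weights):
    """Weighted sum of per-face-area L1 losses between the generated makeup
    image and the makeup reference image."""
    total = 0.0
    for area, mask in region_masks.items():
        per_area = np.abs(makeup_img[mask] - reference_img[mask]).mean()
        total += region_weights.get(area, 1.0) * per_area
    return total

h, w = 64, 64
rng = np.random.default_rng(0)
generated = rng.random((h, w, 3))
reference = rng.random((h, w, 3))
masks = {"eyes": np.zeros((h, w), dtype=bool), "lips": np.zeros((h, w), dtype=bool)}
masks["eyes"][10:20, 15:50] = True
masks["lips"][45:55, 25:40] = True
weights = {"eyes": 2.0, "lips": 1.5}   # emphasize perceptually salient areas
print(regional_loss(generated, reference, masks, weights))
```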
In some embodiments, the training step of the neural network model includes:
acquiring a preset number of training samples, wherein each training sample includes historical element type parameters of a historical reference object having a historical reference makeup, and a plurality of historical makeup attribute parameters of the historical reference makeup under the historical element type parameters;
configuring a corresponding label for each training sample, wherein the label is an image containing the historical reference object with the historical reference makeup;
training a preset neural network model through the preset number of training samples and the labels corresponding to the training samples until the neural network model satisfies a preset convergence condition, to obtain a trained neural network model.
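Finally, a hedged PyTorch sketch of the training step. The network shape, sample dimensions, and fixed epoch count are placeholders; the disclosure only requires training on pairs of historical parameters and label images until a preset convergence condition is satisfied:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical samples: historical element type + makeup attribute parameters,
# paired with label images of the historical reference objects (flattened).
params = torch.randn(256, 16)
labels = torch.rand(256, 3 * 8 * 8)
loader = DataLoader(TensorDataset(params, labels), batch_size=32, shuffle=True)

net = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 3 * 8 * 8), torch.nn.Sigmoid(),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for epoch in range(10):            # stand-in for the preset convergence condition
    for x, y in loader:
        loss = torch.nn.functional.l1_loss(net(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
```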
Thus, the electronic device 900 provided in this embodiment can achieve the following technical effect: the makeup generation efficiency of the virtual object is improved.
For the specific implementation of each operation above, reference may be made to the previous embodiments; details are not described herein again.
Optionally, as shown in fig. 9, the electronic device 900 further includes: a touch display 903, a radio frequency circuit 904, an audio circuit 905, an input unit 906, and a power supply 907. The processor 901 is electrically connected to the touch display 903, the radio frequency circuit 904, the audio circuit 905, the input unit 906, and the power supply 907, respectively. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 9 is not limiting of the electronic device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The touch display 903 may be used to display a graphical user interface and receive operation instructions generated by a user acting on the graphical user interface. The touch display 903 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user, as well as various graphical user interfaces of the electronic device, which may be composed of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a liquid crystal display (LCD, Liquid Crystal Display), an organic light-emitting diode (OLED, Organic Light-Emitting Diode), or the like. The touch panel may be used to collect touch operations by the user on or near it (such as operations performed by the user on or near the touch panel using a finger, a stylus, or any other suitable object or accessory) and generate corresponding operation instructions, which execute the corresponding programs. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 901, and can also receive and execute commands sent from the processor 901. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it passes the operation to the processor 901 to determine the type of the touch event, and the processor 901 then provides a corresponding visual output on the display panel based on the type of the touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display 903 to implement the input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions; that is, the touch display 903 may also implement an input function as part of the input unit 906.
The radio frequency circuit 904 may be used to receive and transmit radio frequency signals, so as to establish wireless communication with a network device or another electronic device and exchange signals with it.
The audio circuit 905 may be used to provide an audio interface between the user and the electronic device through a speaker and a microphone. The audio circuit 905 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; conversely, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 905 and converted into audio data; the audio data is then processed by the processor 901 and sent, for example, to another electronic device via the radio frequency circuit 904, or output to the memory 902 for further processing. The audio circuit 905 may also include an earbud jack to provide communication between peripheral headphones and the electronic device.
The input unit 906 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 907 is used to power the various components of the electronic device 900. Alternatively, the power supply 907 may be logically connected to the processor 901 through a power management system, so as to implement functions such as charging management, discharging management, and power consumption management through the power management system. The power supply 907 may also include one or more of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown in fig. 9, the electronic device 900 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described herein.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of the various methods of the above embodiments may be completed by a computer program, or by the computer program controlling related hardware; the computer program may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, the embodiments of the present application provide a computer-readable storage medium in which a plurality of computer programs are stored, where the computer programs can be loaded by a processor to perform any of the makeup generation methods for a virtual object provided by the embodiments of the present application. For example, the computer program may perform the steps of:
acquiring a makeup reference image, wherein a reference object in the makeup reference image has a target reference makeup to be simulated;
identifying a target element type parameter of the reference object under each preset makeup constituent element based on the makeup reference image, and determining a plurality of target makeup attribute parameters of the target reference makeup under the target element type parameter, wherein the target element type parameter indicates an element type corresponding to the target reference makeup under a plurality of element types of the corresponding makeup constituent element;
acquiring a target virtual object whose makeup is to be updated;
and customizing the target makeup corresponding to each makeup constituent element for the target virtual object based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter.
In some embodiments, the plurality of element types of the makeup constituent element are respectively configured with corresponding element type parameters, and the identifying, based on the makeup reference image, the target element type parameters of the reference object under each preset makeup constituent element includes:
determining, in the makeup reference image, a partial image corresponding to the makeup constituent element;
and determining, from the plurality of element types of the makeup constituent element, a target element type to which the partial image corresponds, and determining the target element type parameter corresponding to the target element type.
In some embodiments, the plurality of makeup attribute parameters of the target element type parameter are configured with corresponding value intervals, and the determining the plurality of target makeup attribute parameters of the target reference makeup under the target element type parameter includes:
determining, in the target reference makeup, a partial makeup corresponding to the target element type parameter;
selecting a makeup attribute value matching the partial makeup from the value interval of the makeup attribute parameter;
and taking the selected makeup attribute value as a target makeup attribute parameter under the target element type parameter.
In some embodiments, before selecting the makeup attribute value matching the partial makeup from the value interval of the makeup attribute parameter, the method further includes:
and normalizing the value intervals of the plurality of makeup attribute parameters of the target element type parameter to obtain normalized value intervals corresponding to the plurality of makeup attribute parameters.
In some embodiments, the makeup attribute parameter includes a color parameter, where the color parameter is configured with a value interval corresponding to at least one color channel.
In some embodiments, the customizing the target makeup for the target virtual object based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter includes:
and generating a target makeup image corresponding to the target virtual object based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter through a preset neural network model, wherein the target makeup image is used for indicating target makeup corresponding to each makeup constituent element customized for the target virtual object.
In some embodiments, the generating, by the preset neural network model, the target makeup image corresponding to the target virtual object based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter includes:
generating a makeup image based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter through a preset neural network model;
determining loss information between the makeup image and the makeup reference image;
updating the target element type parameter and/or the target makeup attribute parameter based on the loss information, so that the neural network model regenerates the makeup image, and returning to the step of determining loss information between the makeup image and the makeup reference image until the neural network model meets a preset cut-off condition, and taking the finally generated makeup image as the target makeup image corresponding to the target virtual object.
In some embodiments, the cut-off condition includes that the number of times the neural network model generates the target makeup image satisfies a preset number of times, and/or that the loss information satisfies a preset loss condition.
In some embodiments, the determining loss information between the makeup image and the makeup reference image includes:
dividing the makeup image based on a preset face division rule to obtain a makeup sub-image corresponding to each face area;
dividing the makeup reference image based on the face division rule to obtain a makeup reference sub-image corresponding to each face area;
respectively determining loss information between the makeup sub-image and the makeup reference sub-image corresponding to each face area, and assigning a corresponding weight to the loss information of each face area;
and determining loss information between the makeup image and the makeup reference image based on the weighted loss information corresponding to each face area.
In some embodiments, the training step of the neural network model includes:
acquiring a preset number of training samples, wherein each training sample includes historical element type parameters of a historical reference object having a historical reference makeup, and a plurality of historical makeup attribute parameters of the historical reference makeup under the historical element type parameters;
configuring a corresponding label for each training sample, wherein the label is an image containing the historical reference object with the historical reference makeup;
training a preset neural network model through the preset number of training samples and the labels corresponding to the training samples until the neural network model satisfies a preset convergence condition, to obtain a trained neural network model.
It can be seen that the computer program can be loaded by the processor to execute any of the makeup generation methods for a virtual object provided in the embodiments of the present application, thereby achieving the following technical effect: the makeup generation efficiency of the virtual object is improved.
For the specific implementation of each operation above, reference may be made to the previous embodiments; details are not described herein again.
The computer-readable storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
Because the computer program stored in the computer-readable storage medium can execute any one of the makeup generation methods for a virtual object provided in the embodiments of the present application, it can achieve the beneficial effects of any of those methods, which are detailed in the previous embodiments and are not described herein again.
The foregoing describes in detail a makeup generation method, apparatus, electronic device, and computer-readable storage medium for a virtual object provided in the embodiments of the present application, and specific examples are used herein to illustrate the principles and implementations of the present application; the foregoing description of the embodiments is only intended to help understand the method and core ideas of the present application. Meanwhile, those skilled in the art may make changes to the specific embodiments and application scope in light of the ideas of the present application. In view of the foregoing, the contents of this description should not be construed as limiting the present application.

Claims (13)

1. A method for generating a makeup of a virtual object, the method comprising:
acquiring a makeup reference image, wherein a reference object in the makeup reference image has a target reference makeup to be simulated;
identifying a target element type parameter of the reference object under each preset makeup constituent element based on the makeup reference image, and determining a plurality of target makeup attribute parameters of the target reference makeup under the target element type parameter, wherein the target element type parameter indicates an element type corresponding to the target reference makeup under a plurality of element types of the corresponding makeup constituent element;
acquiring a target virtual object whose makeup is to be updated;
and customizing the target makeup corresponding to each makeup constituent element for the target virtual object based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter.
2. The method for generating a makeup of a virtual object according to claim 1, wherein a plurality of element types of the makeup constituent element are respectively configured with corresponding element type parameters, and the identifying, based on the makeup reference image, a target element type parameter of the reference object under each preset makeup constituent element includes:
determining, in the makeup reference image, a partial image corresponding to the makeup constituent element;
and determining, from the plurality of element types of the makeup constituent element, a target element type to which the partial image corresponds, and determining the target element type parameter corresponding to the target element type.
3. The method for generating a makeup of a virtual object according to claim 1, wherein a plurality of makeup attribute parameters of the target element type parameter are configured with corresponding value intervals, and the determining the plurality of target makeup attribute parameters of the target reference makeup under the target element type parameter includes:
determining, in the target reference makeup, a partial makeup corresponding to the target element type parameter;
selecting a makeup attribute value matching the partial makeup from the value interval of the makeup attribute parameter;
and taking the selected makeup attribute value as a target makeup attribute parameter under the target element type parameter.
4. The method for generating a makeup of a virtual object according to claim 3, wherein before selecting the makeup attribute value matching the partial makeup from the value interval of the makeup attribute parameter, the method further comprises:
normalizing the value intervals of the plurality of makeup attribute parameters of the target element type parameter to obtain normalized value intervals corresponding to the plurality of makeup attribute parameters.
5. The method for generating a makeup of a virtual object according to claim 3, wherein the makeup attribute parameter includes a color parameter, and the color parameter is configured with a value interval corresponding to at least one color channel.
6. The method for generating a makeup of a virtual object according to any one of claims 1 to 5, wherein the customizing, for the target virtual object, a target makeup corresponding to each makeup constituent element based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter comprises:
and generating a target makeup image corresponding to the target virtual object based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter through a preset neural network model, wherein the target makeup image is used for indicating target makeup corresponding to each makeup constituent element customized for the target virtual object.
7. The method for generating a makeup of a virtual object according to claim 6, wherein the generating, by a preset neural network model, a target makeup image corresponding to the target virtual object based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter includes:
generating a makeup image based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter through a preset neural network model;
determining loss information between the makeup image and the makeup reference image;
updating the target element type parameter and/or the target makeup attribute parameter based on the loss information, so that the neural network model regenerates the makeup image, and returning to the step of determining loss information between the makeup image and the makeup reference image until the neural network model meets a preset cut-off condition, and taking the finally generated makeup image as the target makeup image corresponding to the target virtual object.
8. The method for generating a makeup of a virtual object according to claim 7, wherein the cut-off condition includes that the number of times the neural network model generates the target makeup image satisfies a preset number of times, and/or that the loss information satisfies a preset loss condition.
9. The method for generating a makeup of a virtual object according to claim 7, wherein the determining loss information between the makeup image and the makeup reference image comprises:
dividing the makeup image based on a preset face division rule to obtain a makeup sub-image corresponding to each face area;
dividing the makeup reference image based on the face division rule to obtain a makeup reference sub-image corresponding to each face area;
respectively determining loss information between the makeup sub-image and the makeup reference sub-image corresponding to each face area, and assigning a corresponding weight to the loss information of each face area;
and determining loss information between the makeup image and the makeup reference image based on the weighted loss information corresponding to each face area.
10. The method for generating a makeup of a virtual object according to claim 7, wherein the training step of the neural network model comprises:
acquiring a preset number of training samples, wherein each training sample includes historical element type parameters of a historical reference object having a historical reference makeup, and a plurality of historical makeup attribute parameters of the historical reference makeup under the historical element type parameters;
configuring a corresponding label for each training sample, wherein the label is an image containing the historical reference object with the historical reference makeup;
training a preset neural network model through the preset number of training samples and the labels corresponding to the training samples until the neural network model satisfies a preset convergence condition, to obtain a trained neural network model.
11. A makeup generation apparatus for a virtual object, the apparatus comprising:
the image acquisition module is used for acquiring a makeup reference image, wherein a reference object in the makeup reference image has a target reference makeup to be simulated;
a parameter determining module, configured to identify a target element type parameter of the reference object under each preset makeup constituent element based on the makeup reference image, and determine a plurality of target makeup attribute parameters of the target reference makeup under the target element type parameter, wherein the target element type parameter indicates an element type corresponding to the target reference makeup under a plurality of element types of the corresponding makeup constituent element;
the object acquisition module is used for acquiring a target virtual object whose makeup is to be updated;
and the makeup customizing module is used for customizing the target makeup corresponding to each makeup constituent element for the target virtual object based on the target element type parameter and the target makeup attribute parameter corresponding to the target element type parameter.
12. An electronic device comprising a processor and a memory, wherein the memory stores a computer program which, when executed by the processor, causes the processor to perform the method for generating a makeup of a virtual object according to any one of claims 1 to 10.
13. A computer-readable storage medium, comprising a computer program which, when run on an electronic device, causes the electronic device to perform the method for generating a makeup of a virtual object according to any one of claims 1 to 10.
CN202311862079.4A 2023-12-29 2023-12-29 Dressing generation method, device and equipment for virtual object and readable storage medium Pending CN117830081A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311862079.4A 2023-12-29 2023-12-29 Dressing generation method, device and equipment for virtual object and readable storage medium

Publications (1)

Publication Number Publication Date
CN117830081A 2024-04-05

Family

ID=90518505

Country Status (1)

Country Link
CN (1) CN117830081A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination