CN117274460A - Dressing rendering method, device and equipment for virtual character and storage medium


Info

Publication number: CN117274460A
Application number: CN202311287378.XA
Authority: CN
Language: Chinese (zh)
Prior art keywords: makeup, dressing, style, positions, virtual character
Inventor: 冯喆
Applicant and current assignee: Tencent Technology Shanghai Co Ltd
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/04: Indexing scheme for image data processing or generation, in general, involving 3D image data

Abstract

The application discloses a makeup rendering method, apparatus, device and storage medium for a virtual character, belonging to the field of image processing. The method comprises the following steps: obtaining a makeup style sequence chart, wherein the makeup style sequence chart comprises n×n makeup combinations, each makeup combination comprises makeup styles of m makeup positions, the makeup styles of the same makeup position differ across makeup combinations, and m and n are positive integers greater than 1; sampling a first makeup combination from the makeup style sequence chart, wherein the makeup styles of the m makeup positions in the first makeup combination come from one of the n×n makeup combinations or from different makeup combinations; and rendering the makeup of the virtual character based on the makeup styles of the m makeup positions in the first makeup combination. By permuting and combining the makeup styles of different makeup positions in the makeup style sequence chart, the method offers a wide variety of makeup style selections.

Description

Dressing rendering method, device and equipment for virtual character and storage medium
Technical Field
The present application relates to the field of image processing, and in particular to a makeup rendering method, apparatus, device and storage medium for a virtual character.
Background
With the rapid development of games, players' expectations for the makeup effects of in-game virtual characters keep rising, so enriching the makeup effects of virtual characters is an important factor in improving user experience.
In the related art, different makeup effects of a virtual character are realized by combining multiple maps. However, this approach requires the system to load and sample a large number of maps simultaneously, which increases bandwidth occupation and resource consumption.
Therefore, how to enrich the makeup effects of a virtual character while reducing the number of maps used is a problem that needs to be solved.
Disclosure of Invention
The application provides a makeup rendering method, apparatus, device and storage medium for a virtual character. The technical scheme is as follows:
According to an aspect of the present application, there is provided a makeup rendering method for a virtual character, the method including:
obtaining a makeup style sequence chart, wherein the makeup style sequence chart comprises n×n makeup combinations, each makeup combination comprises makeup styles of m makeup positions, the makeup styles of the same makeup position differ across makeup combinations, and m and n are positive integers greater than 1;
sampling a first makeup combination from the makeup style sequence chart, wherein the makeup styles of the m makeup positions in the first makeup combination come from one of the n×n makeup combinations or from different makeup combinations; and
rendering the makeup of the virtual character based on the makeup styles of the m makeup positions in the first makeup combination.
According to another aspect of the present application, there is provided a makeup rendering apparatus for a virtual character, the apparatus including:
an acquisition module, configured to acquire a makeup style sequence chart, wherein the makeup style sequence chart comprises n×n makeup combinations, each makeup combination comprises makeup styles of m makeup positions, the makeup styles of the same makeup position differ across makeup combinations, and m and n are positive integers greater than 1;
a sampling module, configured to sample a first makeup combination from the makeup style sequence chart, wherein the makeup styles of the m makeup positions in the first makeup combination come from one of the n×n makeup combinations or from different makeup combinations; and
a rendering module, configured to render the makeup of the virtual character based on the makeup styles of the m makeup positions in the first makeup combination.
According to another aspect of the present application, there is provided a computer device including a processor and a memory, the memory storing at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by the processor to implement the makeup rendering method for a virtual character described in the above aspect.
According to another aspect of the present application, there is provided a computer-readable storage medium storing at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by a processor to implement the makeup rendering method for a virtual character described in the above aspect.
According to another aspect of the present application, there is provided a computer program product including computer instructions stored in a computer-readable storage medium; a processor reads the computer instructions from the storage medium and executes them to implement the makeup rendering method for a virtual character described in the above aspects.
The technical scheme provided by the application brings at least the following beneficial effects:
A makeup style sequence chart is obtained, a first makeup combination is sampled from it, and the makeup of the virtual character is rendered based on the makeup styles of the m makeup positions in the first makeup combination. Because the makeup style sequence chart provides n×n makeup combinations, each comprising makeup styles of m makeup positions, diversified makeup style selection is achieved, the virtual character can display a variety of makeup effects, and different scenes and requirements are satisfied. Compared with the related art, in which the makeup style of each makeup position is a separate map, placing the makeup styles of different makeup positions in the same map reduces the number of maps the system loads, and thus reduces bandwidth occupation and resource consumption.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 illustrates a schematic view of a makeup combination provided in an exemplary embodiment of the present application;
fig. 2 is a schematic diagram illustrating a method for makeup rendering of a virtual character according to an exemplary embodiment of the present application;
FIG. 3 illustrates a schematic architecture diagram of a computer system provided in an exemplary embodiment of the present application;
fig. 4 is a flowchart illustrating a method for makeup rendering of a virtual character according to an exemplary embodiment of the present application;
fig. 5 is a flowchart illustrating a method for makeup rendering of a virtual character according to an exemplary embodiment of the present application;
fig. 6 illustrates a flowchart of a method for makeup rendering of a virtual character according to an exemplary embodiment of the present application;
fig. 7 is a flowchart illustrating a method for makeup rendering of a virtual character according to an exemplary embodiment of the present application;
fig. 8 is a flowchart illustrating a method for makeup rendering of a virtual character according to an exemplary embodiment of the present application;
fig. 9 is a schematic diagram illustrating a makeup style of a virtual character according to an exemplary embodiment of the present application;
fig. 10 is a schematic diagram illustrating a method for makeup rendering of a virtual character according to an exemplary embodiment of the present application;
fig. 11 is a schematic diagram illustrating a method for makeup rendering of a virtual character according to an exemplary embodiment of the present application;
Fig. 12 is a schematic diagram illustrating a method for makeup rendering of a virtual character according to an exemplary embodiment of the present application;
fig. 13 is a block diagram showing a configuration of a makeup rendering device of a virtual character according to an exemplary embodiment of the present application;
fig. 14 is a schematic structural diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings. Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, a first parameter may also be referred to as a second parameter, and similarly, a second parameter may also be referred to as a first parameter, without departing from the scope of the present disclosure. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
First, the related terms used in the present application are described:
Virtual environment: a simulated three-dimensional scene created by computer technology. Exemplarily, a virtual environment may be a virtual reality world or a game scene; a virtual environment is generally composed of virtual characters and the lighting associated with them.
Virtual camera: a virtual object that simulates the behavior and functions of a real-world camera, through which the virtual environment can be observed from different angles and perspectives. The setting and adjustment of the virtual camera affect the visual effect of the rendering result.
In the related art, different makeup effects of a virtual character are realized by combining multiple maps. As shown in fig. 1, each makeup position requires a separate map, so a makeup combination covering three makeup positions requires three maps. When the makeup style of a makeup position changes, a new map is required. Consequently, to realize different makeup effects of a virtual character, the system has to load a large number of maps simultaneously, which increases bandwidth occupation and resource consumption.
The embodiment of the application provides a makeup rendering method for a virtual character, illustrated schematically in fig. 2. The method can be executed by a computer device, which may be a terminal device or a server.
The computer device acquires a makeup style sequence chart of the virtual character, wherein the makeup style sequence chart comprises n×n makeup combinations, each makeup combination comprises makeup styles of m makeup positions, the makeup styles of the same makeup position differ across makeup combinations, and m and n are positive integers greater than 1. The computer device samples a first makeup combination from the makeup style sequence chart, wherein the makeup styles of the m makeup positions in the first makeup combination come from one of the n×n makeup combinations or from different makeup combinations. The computer device renders the makeup of the virtual character based on the makeup styles of the m makeup positions in the first makeup combination.
Optionally, the makeup style sequence chart of the virtual character contains a plurality of makeup combinations, each of which includes makeup styles for a plurality of makeup positions. The following example uses a makeup style sequence chart containing 4 makeup combinations, each of which includes makeup styles for 3 makeup positions.
The computer device acquires a makeup style sequence chart of the virtual character, wherein the makeup style sequence chart comprises 4 makeup combinations and each makeup combination comprises makeup styles for 3 makeup positions, for example eye makeup for the eye area, blush for the cheek area and lip makeup for the lip area; the makeup styles of the same makeup position differ across makeup combinations. A makeup combination is sampled from (a) of fig. 2: eye makeup style 1-1 from (1) of fig. 2, blush style 2-2 from (2) of fig. 2 and lip makeup style 4-3 from (4) of fig. 2. The makeup styles sampled from the different makeup positions in (a) of fig. 2 are combined into a complete makeup combination (b). Optionally, the makeup styles of the 3 makeup positions in (b) of fig. 2 may come from one of the 4 makeup combinations in (a) of fig. 2 or from different makeup combinations.
Optionally, the makeup style of each of the 3 makeup positions in (b) of fig. 2 is stored in one color channel of the makeup style sequence chart; for example, eye makeup style 1-1 is stored in the red channel, blush style 2-2 in the green channel and lip makeup style 4-3 in the blue channel, so the 3 makeup positions in (b) of fig. 2 correspond to a three-channel mask map.
In some embodiments, the computer device renders the makeup combination of fig. 2 (b) to obtain the makeup of the virtual character. Optionally, the 3 makeup style colors in (b) of fig. 2 may be mixed with the base color of the virtual character to obtain the tinted result shown in (c) of fig. 2; the colors corresponding to the eye makeup, blush and lip makeup styles can be customized by the user, and mixing these colors with the base color of the virtual character yields the tinted makeup of the virtual character. Optionally, as shown in fig. 2 (d), a material map may be superimposed on the tinted result to integrate material effects, for example adding a highlight map to the lip makeup of the virtual character to enrich the makeup details. Optionally, an illumination effect can be added to the character model of the virtual character in fig. 2 (d) and light rendering performed; as shown in fig. 2 (e), the final makeup effect of the virtual character is obtained after rendering.
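Illustratively, the flow of fig. 2 can be summarized in the following HLSL-style pseudocode sketch; the function names (SampleMakeupStyles, TintWithBaseColor, ApplyMaterialMaps, ApplyLighting) are assumptions introduced here for illustration and do not come from the application.

// Illustrative HLSL-style sketch of the fig. 2 pipeline; all function names are assumptions.
float3 RenderMakeup(float2 uv0)
{
    float3 mask = SampleMakeupStyles(uv0);  // (a)-(b): sample the eye/blush/lip styles into the R/G/B channels
    float3 color = TintWithBaseColor(mask); // (c): mix the customizable makeup colors with the base color
    color = ApplyMaterialMaps(color, uv0);  // (d): superimpose material maps, e.g. a lip highlight map
    return ApplyLighting(color);            // (e): add the illumination effect and render the final makeup
}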
Fig. 3 shows a schematic diagram of an image processing system according to an exemplary embodiment of the present application. The image processing system may include: a terminal device 101 and a server 102, wherein the terminal device 101 is built with a client 103.
The terminal device 101 may be an electronic device such as a mobile phone, a tablet computer, a multimedia playing device or a personal computer (PC). In some embodiments, the terminal device 101 includes graphics processing hardware that handles the rendering of the virtual character. Optionally, the graphics processing hardware includes a central processing unit (CPU) and a graphics processing unit (GPU). The terminal device 101 computes the data required for displaying the virtual character through its graphics computing hardware and finishes loading, parsing and rendering the display data; the computed graphics data are then converted into a visible image by graphics output hardware, for example a two-dimensional image frame presented on the display screen of the mobile phone.
In some embodiments, the terminal device 101 may install the client 103 running the target application program, which may be an image processing application program or another application program provided with an image processing function, which is not limited in this application. Exemplary applications include, but are not limited to, applications (apps), applets, etc. installed in the terminal 101, but may also be in the form of web pages. The virtual character is displayed in the client 103, and the user performs an operation of changing the makeup effect of the virtual character through the client 103.
The server 102 is configured to provide a background service to the client 103 built in the terminal 101, and for example, the server 102 may be a background server of the client 103. The server 102 may be a server, a server cluster comprising a plurality of servers, or a cloud computing service center.
The terminal 101 and the server 102 can communicate with each other via a network. The network may be a wired network or a wireless network.
The application provides a technical scheme that fuses the makeup styles of m makeup positions into a single makeup style sequence chart, which effectively reduces the number of maps. The makeup style sequence chart is prepared as follows:
1) A file is created in drawing software, and the drawing tools are used to draw or import makeup combinations into the file, wherein each makeup combination comprises makeup styles for m makeup positions.
2) A plurality of makeup combinations are drawn or imported, each corresponding to one makeup effect; referring to (a) in fig. 2, each makeup combination occupies one of n×n equally sized sub-image areas, and the sub-image areas of the makeup combinations are arranged in order to determine their display order in the makeup style sequence chart.
3) The makeup style sequence chart is packaged in the installation package or an update package of the game, stored in the terminal device after the game is installed or updated, and read from storage while the game is running.
Next, a method for rendering a makeup of a virtual character provided in the embodiment of the present application will be described.
Fig. 4 shows a flowchart of a makeup rendering method for a virtual character according to an exemplary embodiment of the present application. The method is described with a terminal device as the executing body, which may be the terminal device 101 in fig. 3. The method comprises at least some of the following steps:
step 310: acquiring a makeup style sequence chart;
the makeup style sequence diagram comprises n times n makeup combinations, each makeup combination comprises m makeup styles of the makeup positions, the makeup styles of the same makeup position in different makeup combinations are different, and m and n are positive integers larger than 1. In some embodiments, the look style may be represented by a look outline.
The dressing style sequence chart refers to images in which different dressing styles are arranged in sequence, and shows the arrangement modes of different dressing combinations. The makeup style sequence diagram comprises n times n makeup combinations, and the size of each makeup style sequence diagram is determined by n. For example, when n is 2 and m is 3, 2 x 2 make-up combinations are in the make-up pattern sequence chart, at this time, one make-up pattern of two rows and two columns is in the make-up pattern sequence chart, each make-up pattern of 3 make-up positions is in the make-up pattern, the make-up patterns in each make-up map can be arranged and combined, and a total of 3^4 =81 make-up combinations of make-up patterns are available; when n is 3, and m is 3, 3*3 makeup combinations are arranged in the makeup style sequence chart, at this time, one of three rows and three columns is arranged in the makeup style sequence chart, each of the three makeup patterns is provided with 3 makeup positions, and each of the makeup patterns in the makeup map can be arranged and combined, so that a total of 3^9 =19683 makeup combinations of the makeup patterns are provided.
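Illustratively, the general count follows from each of the m makeup positions independently taking the style it has in any of the n×n makeup combinations, so the number of selectable makeup combinations is (n·n)^m = n^(2m); for n = 2 and m = 3 this gives 4^3 = 64, and for n = 3 and m = 3 it gives 9^3 = 729.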
A makeup position is an area on the face or body of the virtual character to which a makeup style can be applied. Optionally, the makeup positions include at least one of the eyes, cheeks and lips of the virtual character, but are not limited thereto; the embodiments of the present application do not specifically limit them.
A makeup combination combines the corresponding makeup elements at the makeup positions of the virtual character; different makeup combinations correspond to different makeup effects. Illustratively, the makeup elements include eye makeup, blush and lip makeup, and each makeup element includes at least one makeup style.
Each makeup combination comprises makeup styles for m makeup positions, and the makeup styles of the same makeup position differ across makeup combinations. Referring to fig. 9, the eye region in fig. 9 (a) has two different makeup styles, and the lip region in fig. 9 (b) has three different makeup styles.
Illustratively, a makeup style sequence chart includes 4 makeup combinations, for example:
makeup combination 1: A1, B1, C1; makeup combination 2: A2, B2, C2;
makeup combination 3: A3, B3, C3; makeup combination 4: A4, B4, C4;
In this makeup style sequence chart, each makeup combination has 3 makeup positions (m is 3), denoted A, B and C. The makeup styles of the same makeup position differ across makeup combinations. For example, comparing the makeup style of position A across combinations: in combination 1 it is A1; in combination 2 it is A2; in combination 3 it is A3; and in combination 4 it is A4. Optionally, the other makeup positions likewise have different makeup styles in different makeup combinations.
Step 320: sampling a first makeup combination from the makeup style sequence chart;
The makeup styles of the m makeup positions in the first makeup combination come from one of the n×n makeup combinations or from different makeup combinations.
Illustratively, the makeup style sequence chart includes the 4 makeup combinations listed above. Optionally, A1, B1 and C1 from combination 1 may be selected as the makeup styles of the 3 makeup positions of the first makeup combination; or A1 and B1 from combination 1 and C2 from combination 2; or A1 from combination 1, B2 from combination 2 and C3 from combination 3. The present application is not limited in this regard.
Step 330: and rendering to obtain the makeup of the virtual character based on the makeup styles of the m makeup positions in the first makeup combination.
Optionally, the makeup style of each makeup position is applied to the corresponding face area of the virtual character to exhibit the desired makeup effect. Illustratively, a first makeup combination includes makeup styles for 3 makeup positions, A1, B1 and C1. Assume makeup position A corresponds to the eye region, B to the cheek region and C to the lip region; applying style A1 to the eye area, B1 to the cheek area and C1 to the lip area yields a virtual character makeup with an A1-style eye makeup effect, a B1-style blush effect and a C1-style lip makeup effect.
In summary, in the method provided by this embodiment, a makeup style sequence chart is obtained, a first makeup combination is sampled from it, and the makeup of the virtual character is rendered based on the makeup styles of the m makeup positions in the first makeup combination. Because the n×n makeup combinations in the chart each comprise makeup styles for m makeup positions, diversified makeup style selection is achieved while the number of maps is reduced, so the virtual character can display a variety of makeup effects for different scenes and requirements. Rendering based on the makeup styles of the m makeup positions in the first makeup combination and applying those styles to the corresponding face areas allows the makeup of the virtual character to be customized according to design requirements.
In an alternative embodiment based on fig. 4, fig. 5 shows a flowchart of a method for makeup rendering of a virtual character according to an exemplary embodiment of the present application. In this embodiment, step 320 is alternatively implemented as step 322 and step 324:
step 322: sampling a dressing style of the ith dressing position in n x n dressing combinations from the ith color channel in the dressing style sequence chart;
Wherein, the dressing style of each dressing position in the m dressing positions is stored by adopting one color channel in the dressing style sequence chart;
color channels refer to channels used to represent different color information in an image during image processing. Typically, a color image is made up of three color channels, red (R), green (G), and blue (B), each storing intensity values for the respective colors.
Illustratively, with the RGB color model, which includes three color channels, the makeup style of each makeup position can be represented by one color channel; optionally, the red channel represents the eye makeup style, the green channel the cheek makeup style, and the blue channel the lip makeup style.
Optionally, from the ith color channel of the makeup style sequence chart, one of the makeup styles of the ith makeup position among the n×n makeup combinations can be sampled. Each color channel corresponds to a makeup position, and by sampling the values of the color channel, the makeup style of that makeup position can be determined.
Specifically, the ith color channel of the makeup style sequence chart is obtained, one of the n×n makeup combinations is selected, and the makeup style of the ith makeup position is obtained from the selected makeup combination; this can be achieved by reading the value at the corresponding position in the ith color channel of the makeup style sequence chart. Through this step, one makeup style of the ith makeup position among the n×n makeup combinations is sampled.
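Illustratively, reading one makeup style can amount to a single texture sample of one channel (an HLSL-style line for illustration; prioritytex and uv follow the variable list given with the logic code below):

float eyeStyle = tex2d(prioritytex, uv).r; // makeup style of the 1st makeup position, read from the red channel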
Illustratively, a corresponding channel mask map is prepared according to the number of the makeup positions of the virtual character, where the channel mask map corresponds to the makeup positions one by one, for example, a first channel mask map corresponds to a first makeup position, a second channel mask map corresponds to a second makeup position, and a third channel mask map corresponds to a third makeup position.
Step 324: and combining the sampled m makeup styles to derive a first makeup combination.
Combined sampling means sampling the makeup style of one makeup position from each color channel, so that the makeup styles of m makeup positions are sampled from m color channels.
A color channel contains multiple makeup styles. Illustratively, if the red channel represents the eye makeup style in a makeup style sequence chart that includes 4 makeup combinations, each including an eye makeup style, then the red channel contains 4 different eye makeup styles, and each sampling from the chart picks one eye makeup style from the red channel.
Optionally, these makeup styles may come from the same makeup combination or from different makeup combinations. The m sampled makeup styles are combined in the order of the makeup positions to obtain the first makeup combination.
In summary, in the method provided by this embodiment, one makeup style of the ith makeup position among the n×n makeup combinations is sampled from the ith color channel of the makeup style sequence chart, and the m sampled makeup styles are combined to derive the first makeup combination, where the makeup combinations sampled by different mask maps among the m mask maps are the same or different. Different makeup styles in the makeup style sequence chart are obtained by sampling different color channels, so a user can freely combine makeup styles according to preference and need to achieve a personalized, customized makeup effect.
Fig. 6 illustrates a flowchart of a method for makeup rendering of a virtual character according to an exemplary embodiment of the present application. The method comprises at least part of the following steps:
step 322-2: determining m channel mask patterns, wherein the m channel mask patterns are in one-to-one correspondence with m dressing positions;
the channel mask map refers to a mask map for each make-up location that specifies which color channel is visible in the make-up style sequence map for each make-up location. For example, there are m make-up positions, and there is one color channel in the make-up style sequence diagram corresponding to each make-up position, then there are m channel mask diagrams, each mask diagram having one channel for specifying which color channel in the make-up style sequence diagram each make-up position is visible.
Step 322-4: sampling a dressing pattern of the ith dressing position in n.n dressing combinations from the ith color channel in the dressing pattern sequence diagram by adopting the ith channel mask diagram in the m channel mask diagrams;
wherein, the makeup combinations sampled by different mask patterns in the m mask patterns are the same or different.
Illustratively, there are three cosmetic positions, an eye region, a cheek region and a lip region, each corresponding to a channel mask pattern. A makeup style sequence chart has three color channels of red, green and blue, and can sample a makeup style of cheek regions in a 2 x 2 makeup combination. Specifically: selecting a channel mask diagram corresponding to the cheek region from the channel mask diagrams corresponding to the three dressing positions, wherein the channel mask diagram corresponds to the cheek region; and selecting a green color channel from the makeup style sequence chart, and sampling one makeup style of the cheek region in the 2 x 2 makeup combination by using the channel mask chart corresponding to the selected cheek region and the green color channel.
The implementation mode is as follows:
variable interpretation:
uv: refers to texture coordinates for calculating the makeup effect of the virtual character.
priority: the position of the serialized pictures, namely the position of the makeup in the makeup style serialized pictures, is shown.
uvprio: refers to the number of lines used to determine the line distribution of the serialized pictures.
uv0: refers to the first set uv, initial texture coordinates.
prioritytex: refers to a makeup style serialization diagram.
mask: refers to a mask diagram corresponding to the dressing position of the virtual character.
mask. R: red channel, i.e. the mask map corresponding to the eye area; mask.g: a green channel, i.e., a mask map corresponding to the cheek region; mask. B: blue channel, i.e. the mask map corresponding to the lip area.
frac: and the method is used for acquiring the decimal part of the texture coordinates so as to realize the tiling, repeating or mixing effect of the texture.
tex2d: texture sampling function.
Logic code:
uv = frac((float2(floor(priority), floor(priority / uvprio.r)) + uv0) / uvprio); // offset uv0 into the selected sub-image; frac wraps the result into [0,1)
mask.r = tex2d(prioritytex, uv).r; // sample the eye-area style from the red channel
mask.g = tex2d(prioritytex, uv).g; // sample the cheek-area style from the green channel
mask.b = tex2d(prioritytex, uv).b; // sample the lip-area style from the blue channel
Illustratively, this code is part of an image rendering program. Given the parameters above, it samples the makeup style of the designated makeup position from the makeup style sequence chart and stores the sampled results in the r, g and b components of the mask variable, i.e., in the color channels corresponding to the channel mask map.
The uv coordinates represent the coordinate system that maps a texture onto the model surface: for each point on the model surface they define a location on the texture. Texture coordinates are typically two-dimensional and written (u, v), where u is the horizontal coordinate and v is the vertical coordinate. In the code above, uv is a two-dimensional vector giving the texture sampling location, and it determines where on the surface of the virtual character the makeup style texture is placed.
First, an offset texture coordinate is computed from the position of the target makeup combination in the makeup style sequence chart: the column and row offsets of that combination are obtained by rounding down the combination index and the index divided by the column count, respectively. The offset is added to the initial texture coordinates, the sum is divided by the column and row counts, and the frac function keeps the fractional part, yielding the texture coordinate uv inside the selected sub-image. Then the texture sampling function reads the color channel value at uv from the makeup style sequence chart and stores it in the color channel corresponding to the channel mask map. With this code, the makeup style of the designated makeup position is sampled from the makeup style sequence chart and the channel mask map according to the given parameters, and the result is stored in the corresponding color channel.
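Illustratively, the sampling step can be assembled into one routine as the following HLSL-style sketch. It follows the logic code above but allows a different combination index for each makeup position, as in the fig. 10 example below; the function name SampleMakeupMask and the per-position priority vector are assumptions introduced for illustration.

// HLSL-style sketch of the mask sampling step; names beyond the variable list
// above (uv0, uvprio, prioritytex) are illustrative assumptions.
float3 SampleMakeupMask(sampler2D prioritytex, // makeup style sequence chart
                        float2 uv0,            // initial texture coordinates
                        float3 priority,       // combination index per makeup position (eye, cheek, lip)
                        float2 uvprio)         // column and row counts of the chart grid
{
    float3 mask = 0;
    for (int i = 0; i < 3; i++)
    {
        float p = priority[i];
        // Offset uv0 into the chosen sub-image; frac wraps the coordinate back
        // into [0,1), so the column index need not be reduced modulo the column count.
        float2 cell = float2(floor(p), floor(p / uvprio.r));
        float2 uv = frac((cell + uv0) / uvprio);
        mask[i] = tex2d(prioritytex, uv)[i]; // r = eye, g = cheek, b = lip
    }
    return mask;
}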
For example, referring to fig. 10, a two-row, two-column makeup style sequence chart includes 4 makeup combinations, each containing three makeup positions with different styles. According to the texture coordinates uv of the different makeup positions, the eye area is sampled from the combination in the first row, first column; the cheek area from the first row, second column; and the lip area from the second row, second column. The sampled makeup areas are combined into a new makeup combination, as shown in (a) of fig. 10. The different makeup areas of the first makeup combination use different channel mask maps, one each for the eye, cheek and lip areas; fig. 10 (a) shows the effect of superimposing the three channel mask maps. When sampling a makeup area, the mask region of the channel mask map is multiplied by the color channel value of the corresponding makeup area in the makeup style sequence chart. For example, to sample the eye area from the combination in the first row, first column: the channel mask map is aligned to that combination with the initial texture coordinates as the reference point; the mask region of the map has the value 1 and the non-mask region the value 0; the eye area corresponds to the red channel; the texture sampling function reads the color channel value at the eye-area coordinates from that combination; and the sampling result is stored in the color channel corresponding to the channel mask map.
Step 322-6: and combining the sampled m makeup styles to derive a first makeup combination.
The first makeup combination is the makeup combination generated by combining the m sampled makeup styles.
Illustratively, three makeup styles corresponding to the eye, cheek and lip regions are sampled using the makeup style sequence chart and the channel mask maps, and the three sampled makeup styles are combined into one makeup combination.
In summary, in the method provided by this embodiment, m channel mask maps are determined in one-to-one correspondence with the m makeup positions, and the ith channel mask map is used to sample one makeup style of the ith makeup position among the n×n makeup combinations from the ith color channel of the makeup style sequence chart. By pairing each channel mask map with its makeup position and sampling the corresponding color channel of the makeup style sequence chart, one makeup style per makeup position is obtained for the makeup combination.
In an alternative embodiment based on fig. 4, fig. 7 shows a flowchart of a method for makeup rendering of a virtual character according to an exemplary embodiment of the present application. In the present embodiment, step 330 is alternatively implemented as step 332, step 334, and step 336:
Step 332: obtaining the base colors corresponding to the m makeup positions and the makeup colors corresponding to the m makeup positions;
The base color is the original color of the face or skin of the virtual character, i.e., the color of the virtual character without any makeup effect.
The makeup color is the color applied at a makeup position of the virtual character to achieve a makeup effect. Optionally, the makeup colors include the colors of the makeup elements eye makeup, blush and lip makeup, each makeup element corresponding to at least one makeup color.
In an alternative example, the default skin colors of the virtual character at the m makeup positions are obtained as the base colors corresponding to the m makeup positions.
Optionally, in the makeup design of the virtual character, the default skin color is used as the base color at the different makeup positions. Illustratively, the default skin color is a fixed color value or a range of color values, which the application does not limit.
Illustratively, if the RGB value of the default skin color of the cheek region is (250, 220, 180), then the RGB value of the base color of the cheek region is (250, 220, 180).
In an alternative example, in response to receiving a makeup customization operation, a custom color of at least one of the m makeup positions is obtained as the makeup color corresponding to that makeup position.
The customization operation means that the user can personalize or modify the makeup of the virtual character; the custom color is the color the user selects or defines for a given makeup position of the virtual character through this operation.
When the computer device receives a makeup customization operation, it obtains the custom color of at least one of the m makeup positions as the makeup color corresponding to that position, allowing the user to personalize the makeup colors of the virtual character.
For example, the custom color of the eye area of the virtual character is used as its eye makeup color, the custom color of the cheek area as its blush color, and the custom color of the lip area as its lip makeup color. Optionally, the makeup colors of different makeup positions may be the same or different, which the application does not limit.
Step 334: based on the makeup styles of the m makeup positions in the first makeup combination, tinting the character model of the virtual character with the base colors and the makeup colors corresponding to the m makeup positions;
In some embodiments, the makeup effect of the virtual character is obtained by blending the base color and the makeup colors of the different makeup positions of the virtual character.
The implementation mode is as follows:
variable interpretation:
basecolor: the virtual character defaults to the skin color, i.e., the base color.
eyecolor: the eye makeup color corresponding to the virtual character eye area.
facecolor: blush color corresponding to cheek area of the virtual character.
lipcolor: lip makeup color corresponding to lip areas of the virtual character.
lerp: linear interpolation function.
Logic code:
eyecolor = lerp(basecolor, maskeyecolor, mask.r);  // blend the custom eye makeup color over the skin, weighted by the eye mask
facecolor = lerp(eyecolor, maskfacecolor, mask.g); // blend the custom blush color over the previous result
lipcolor = lerp(facecolor, masklipcolor, mask.b);  // blend the custom lip makeup color last
Illustratively, this code is part of a color blending program that computes the makeup colors of the different makeup positions, i.e., the eye, cheek and lip areas, from the base color, the channel mask map and the custom makeup colors.
Illustratively, the base color is blended with the colors of the eye, cheek and lip areas by linear interpolation to obtain the makeup colors of the different makeup positions of the virtual character: the base color is blended with the custom eye makeup color, weighted by the eye-area mask, to obtain the eye-area result; that result is blended with the custom blush color, weighted by the cheek-area mask; and that result is blended with the custom lip makeup color, weighted by the lip-area mask, to obtain the final tinted color.
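Illustratively, the tinting step can be wrapped into one routine as the following HLSL-style sketch; the function name TintMakeup is an assumption, while the remaining names follow the variable list above.

// HLSL-style sketch of the tinting step; TintMakeup is an illustrative name.
float3 TintMakeup(float3 basecolor,     // default skin color of the virtual character
                  float3 mask,          // sampled mask: r = eye, g = cheek, b = lip
                  float3 maskeyecolor,  // custom eye makeup color
                  float3 maskfacecolor, // custom blush color
                  float3 masklipcolor)  // custom lip makeup color
{
    float3 color = lerp(basecolor, maskeyecolor, mask.r); // eye makeup over the skin
    color = lerp(color, maskfacecolor, mask.g);           // blush over the previous result
    color = lerp(color, masklipcolor, mask.b);            // lip makeup last
    return color;
}

Because each lerp takes the previous result as its base, overlapping masks blend in a fixed order (eye, then cheek, then lip), matching the order of the logic code above.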
Step 336: rendering the makeup of the virtual character based on the tinted character model of the virtual character.
In an alternative example, a lighting effect is added to the character model of the virtual character located in the virtual environment, and the character model with the added lighting effect is rendered using a virtual camera in the virtual environment to obtain the makeup of the virtual character.
Illustratively, adding a lighting effect to the character model can be accomplished by setting parameters such as the position, color and intensity of the light sources. For example, at least one of a point light source, a directional light source or an ambient light source may be used, and their properties adjusted to obtain suitable shading over the makeup areas of the character model surface. Illustratively, adding a point light source to the virtual environment to simulate sunlight produces shadows and shading changes that bring out the shape and details of the character model.
In some embodiments, after the lighting effect is added to the character model, the character model can be rendered using a virtual camera, as shown in fig. 11. For example, the virtual camera can simulate parameters of a real camera such as viewing angle, focal length and aperture, and the rendering engine renders the character model to present the makeup effect of the virtual character.
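Illustratively, since the application does not fix a particular lighting model, one possible realization applies a simple Lambert diffuse term from a single light source, as in the following HLSL-style sketch; the function name and parameters are assumptions.

// Minimal HLSL-style lighting sketch; the Lambert diffuse model and all names
// here are illustrative assumptions.
float3 LightMakeup(float3 tintedColor, // makeup-tinted color from the previous step
                   float3 normalWS,    // surface normal in world space
                   float3 lightDirWS,  // direction toward the light source
                   float3 lightColor)  // light source color and intensity
{
    float ndotl = saturate(dot(normalize(normalWS), normalize(lightDirWS)));
    return tintedColor * lightColor * ndotl; // diffuse shading of the made-up face
}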
In summary, in the method provided by this embodiment, the base colors and makeup colors corresponding to the m makeup positions are obtained; based on the makeup styles of the m makeup positions in the first makeup combination, the base colors and makeup colors are tinted onto the character model of the virtual character; and the makeup of the virtual character is rendered from the tinted character model. Obtaining the base color and the custom makeup color of each makeup position allows the makeup colors of the virtual character to be personalized.
In an alternative embodiment based on fig. 7, fig. 8 shows a flowchart of a method for rendering a makeup of a virtual character according to an exemplary embodiment of the present application, where in this embodiment, the method further includes the following steps:
step 335, adding a texture map for at least one of the m cosmetic positions, the texture map being used to increase the light interaction characteristics of the cosmetic position under illumination.
The material map simulates the surface appearance and light interaction characteristics of the character model of the virtual character; it changes the appearance of the makeup position under different illumination conditions. Optionally, the material map includes at least one of a highlight map, a normal map and a transparency map. The foregoing merely illustrates possible contents of the material map; the application does not limit its type.
For example, referring to fig. 12, when the material map is a highlight map, it simulates the specular reflection of the makeup position under illumination so that the position renders with a bright highlight; for instance, adding a highlight map to the lip region of the virtual character gives the lips a glossy sparkle under illumination. When the material map is a normal map, it simulates the bumps and fine detail of the makeup position so that the position shows realistic surface detail and texture under illumination. When the material map is a transparency map, it defines the transparency of the makeup position and simulates its translucency.
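Illustratively, the highlight-map case can be sketched as a Blinn-Phong specular term modulated by the sampled highlight map and restricted to the lip mask; the lighting model, the exponent and all names here are assumptions introduced for illustration.

// HLSL-style sketch of applying a lip highlight map; the Blinn-Phong term and
// all names are illustrative assumptions.
float3 AddLipHighlight(float3 litColor, float3 normalWS, float3 viewDirWS,
                       float3 lightDirWS, sampler2D highlightTex,
                       float2 uv, float lipMask)
{
    float3 h = normalize(normalize(viewDirWS) + normalize(lightDirWS)); // half vector
    float spec = pow(saturate(dot(normalize(normalWS), h)), 32.0);      // specular lobe
    float gloss = tex2d(highlightTex, uv).r;                            // highlight map intensity
    return litColor + spec * gloss * lipMask;                           // add gloss on the lips only
}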
In summary, in the method provided by this embodiment, a material map is added to at least one of the m makeup positions to enhance its light interaction characteristics under illumination. Adding material maps to the makeup positions improves the lighting effects and detailed texture of the virtual character's makeup, so the makeup positions present a more vivid and attractive appearance.
Referring to fig. 13, a block diagram of a makeup rendering device for a virtual character according to an embodiment of the present application is shown. The device has the function of realizing the makeup rendering method example of the virtual role, and the function can be realized by hardware or corresponding software executed by hardware. The device may be the server described above or may be provided in the server. As shown in fig. 13, the apparatus 900 may include: an acquisition module 910, a sampling module 920, and a rendering module 930;
an acquisition module 910, configured to acquire a makeup style sequence chart, wherein the makeup style sequence chart includes n×n makeup combinations, each makeup combination includes makeup styles of m makeup positions, the makeup styles of the same makeup position differ across makeup combinations, and m and n are positive integers greater than 1;
a sampling module 920, configured to sample a first makeup combination from the makeup style sequence chart, wherein the makeup styles of the m makeup positions in the first makeup combination come from one of the n×n makeup combinations or from different makeup combinations;
and a rendering module 930, configured to render the makeup of the virtual character based on the makeup styles of the m makeup positions in the first makeup combination.
In an alternative embodiment, the sampling module 920 is configured to sample, from the ith color channel in the makeup style sequence chart, one makeup style of the ith makeup position among the n×n makeup combinations; the sampling module 920 is further configured to combine the m sampled makeup styles to derive the first makeup combination.
In some alternative embodiments, apparatus 900 further comprises a determination module 940.
In an alternative embodiment, the determining module 940 is configured to determine m channel mask maps in one-to-one correspondence with the m makeup positions; the sampling module 920 is configured to sample, using the ith channel mask map among the m channel mask maps, one makeup style of the ith makeup position among the n×n makeup combinations from the ith color channel in the makeup style sequence chart, where the makeup combinations sampled by different mask maps among the m mask maps are the same or different.
In an alternative embodiment, the acquisition module 910 is configured to obtain the base colors and the makeup colors corresponding to the m makeup positions; the rendering module 930 is configured to tint, based on the makeup styles of the m makeup positions in the first makeup combination, the character model of the virtual character with the base colors and makeup colors corresponding to the m makeup positions, and to render the makeup of the virtual character based on the tinted character model.
In an optional embodiment, an obtaining module 910 is configured to obtain a default skin color of the virtual character at the m cosmetic positions, where the default skin color is used as a base color corresponding to each of the m cosmetic positions.
In an alternative embodiment, the obtaining module 910 is configured to obtain, in response to receiving the makeup custom operation, a custom color of at least one of the m makeup locations as a makeup color corresponding to the at least one makeup location.
In some alternative embodiments, apparatus 900 further comprises an add module 950.
In an alternative embodiment, an adding module 950 for adding lighting effects to a character model of the virtual character located in the virtual environment; and the rendering module 930 is used for rendering the character model with the added illumination effect by adopting a virtual camera positioned in the virtual environment, and rendering to obtain the makeup of the virtual character.
In an alternative embodiment, the adding module 950 is configured to add a texture map to at least one of the m cosmetic positions, where the texture map is configured to increase a light interaction characteristic of the cosmetic position under illumination.
Fig. 14 illustrates a block diagram of a computer device 1500 according to an exemplary embodiment of the present application. The computer device may be used to implement the makeup rendering method for a virtual character provided in the above embodiments. The computer device 1500 includes a central processing unit (CPU) 1501, a system memory 1504 including a random access memory (RAM) 1502 and a read-only memory (ROM) 1503, and a system bus 1505 connecting the system memory 1504 and the central processing unit 1501. The computer device 1500 also includes a basic input/output (I/O) system 1506 for transferring information between devices within the computer, and a mass storage device 1507 for storing an operating system 1513, application programs 1514 and other program modules 1515.
The basic input/output system 1506 includes a display 1508 for displaying information and an input device 1509, such as a mouse or keyboard, through which a user inputs information. The display 1508 and the input device 1509 are both connected to the central processing unit 1501 via an input/output controller 1510 connected to the system bus 1505. The basic input/output system 1506 may also include the input/output controller 1510 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input/output controller 1510 also provides output to a display screen, a printer, or another type of output device.
The mass storage device 1507 is connected to the central processing unit 1501 via a mass storage controller (not shown) connected to the system bus 1505. The mass storage device 1507 and its associated computer-readable storage media provide non-volatile storage for the computer device 1500. That is, the mass storage device 1507 may include a computer-readable storage medium (not shown) such as a hard disk or a compact disc read-only memory (CD-ROM) drive.
Without loss of generality, the computer-readable storage media may include computer storage media and communication media. Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include RAM, ROM, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other solid-state memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will recognize that computer storage media are not limited to the above. The system memory 1504 and the mass storage device 1507 described above may be collectively referred to as memory.
The memory stores one or more programs, which are configured to be executed by the one or more central processing units 1501 and contain instructions for implementing the above method embodiments; the central processing unit 1501 executes the one or more programs to implement the methods provided by the various method embodiments described above.
According to various embodiments of the present application, the computer device 1500 may also be connected, through a network such as the Internet, to remote computer devices on the network. That is, the computer device 1500 may be connected to the network 1512 via a network interface unit 1511 connected to the system bus 1505, or the network interface unit 1511 may be used to connect to other types of networks or remote computer device systems (not shown).
The memory further stores one or more programs, and the one or more programs include instructions for performing the steps that the computer device executes in the methods provided by the embodiments of the present application.
The embodiment of the application also provides a computer device, which comprises a processor and a memory, wherein at least one program is stored in the memory, and the at least one program is loaded and executed by the processor to implement the makeup rendering method for a virtual character provided by the above method embodiments.
The embodiment of the application also provides a computer-readable storage medium, in which at least one computer program is stored, and the at least one computer program is loaded and executed by a processor to implement the makeup rendering method for a virtual character provided by the above method embodiments.
Embodiments of the present application also provide a computer program product, which includes a computer program stored in a computer-readable storage medium; a processor of a computer device reads the computer program from the computer-readable storage medium and executes it, so that the computer device performs the makeup rendering method for a virtual character provided by the above method embodiments.
It will be appreciated that, in the specific embodiments of the present application, when the above embodiments are applied to specific products or technologies and involve data related to user identity or characteristics, such as historical data and user profiles, user permission or consent must be obtained, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
It is noted that all terms used in the claims are to be construed in accordance with their ordinary meaning in the technical field unless explicitly defined otherwise herein. All references to "an element, device, component, apparatus, step, etc." are to be interpreted openly as referring to at least one instance of the element, device, component, apparatus, step, etc., unless explicitly stated otherwise. The steps of any method disclosed herein need not be performed in the exact order disclosed, unless explicitly stated.
It should be understood that references herein to "a plurality" mean two or more. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.

Claims (12)

1. A makeup rendering method for a virtual character, the method comprising:
obtaining a makeup style sequence chart, wherein the makeup style sequence chart comprises n×n makeup combinations, each makeup combination comprises makeup styles of m makeup positions, the makeup styles of the same makeup position in different makeup combinations are different, and m and n are positive integers greater than 1;
sampling a first makeup combination from the makeup style sequence chart, wherein the makeup styles of the m makeup positions in the first makeup combination come from one makeup combination or from different makeup combinations among the n×n makeup combinations;
and rendering to obtain the makeup of the virtual character based on the makeup styles of the m makeup positions in the first makeup combination.
2. The method according to claim 1, wherein the makeup style of each of the m makeup positions is stored using one color channel in the makeup style sequence chart;
and the sampling a first makeup combination from the makeup style sequence chart comprises:
sampling a makeup style of the ith makeup position in the n×n makeup combinations from the ith color channel in the makeup style sequence chart;
and combining the sampled m makeup styles to derive the first makeup combination.
3. The method according to claim 2, wherein the sampling a makeup style of the ith makeup position in the n×n makeup combinations from the ith color channel in the makeup style sequence chart comprises:
determining m channel mask maps, wherein the m channel mask maps are in one-to-one correspondence with the m makeup positions;
sampling a makeup style of the ith makeup position in the n×n makeup combinations from the ith color channel in the makeup style sequence chart using the ith channel mask map in the m channel mask maps;
wherein the makeup combinations sampled through different channel mask maps in the m channel mask maps are the same or different.
4. The method according to any one of claims 1 to 3, wherein the rendering to obtain the makeup of the virtual character based on the makeup styles of the m makeup positions in the first makeup combination comprises:
obtaining base colors respectively corresponding to the m makeup positions, and obtaining makeup colors respectively corresponding to the m makeup positions;
dyeing, based on the makeup styles of the m makeup positions in the first makeup combination, the base colors respectively corresponding to the m makeup positions and the makeup colors respectively corresponding to the m makeup positions onto the character model of the virtual character;
and rendering to obtain the makeup of the virtual character based on the dyed character model of the virtual character.
5. The method according to claim 4, wherein the obtaining base colors respectively corresponding to the m makeup positions comprises:
obtaining default skin colors of the virtual character at the m makeup positions as the base colors respectively corresponding to the m makeup positions.
6. The method according to claim 4, wherein the obtaining makeup colors respectively corresponding to the m makeup positions comprises:
obtaining, in response to receiving a makeup customization operation, a custom color of at least one of the m makeup positions as the makeup color corresponding to the at least one makeup position.
7. The method according to claim 4, wherein the rendering to obtain the makeup of the virtual character based on the dyed character model of the virtual character comprises:
adding a lighting effect to a character model of the virtual character located in the virtual environment;
and rendering the character model with the added lighting effect using a virtual camera located in the virtual environment, thereby rendering to obtain the makeup of the virtual character.
8. The method according to claim 4, wherein the method further comprises:
adding a material map to at least one of the m makeup positions, wherein the material map is used for increasing the light interaction characteristics of the makeup position under illumination.
9. A makeup rendering device for a virtual character, the device comprising:
an obtaining module, configured to obtain a makeup style sequence chart, wherein the makeup style sequence chart comprises n×n makeup combinations, each makeup combination comprises makeup styles of m makeup positions, the makeup styles of the same makeup position in different makeup combinations are different, and m and n are positive integers greater than 1;
a sampling module, configured to sample a first makeup combination from the makeup style sequence chart, wherein the makeup styles of the m makeup positions in the first makeup combination come from one makeup combination or from different makeup combinations among the n×n makeup combinations;
and a rendering module, configured to render the makeup of the virtual character based on the makeup styles of the m makeup positions in the first makeup combination.
10. A computer device, comprising a processor and a memory, wherein at least one program is stored in the memory, and the processor is configured to execute the at least one program in the memory to implement the makeup rendering method for a virtual character according to any one of claims 1 to 8.
11. A computer-readable storage medium, having stored therein executable instructions that are loaded and executed by a processor to implement the makeup rendering method for a virtual character according to any one of claims 1 to 8.
12. A computer program product, comprising computer instructions stored in a computer-readable storage medium, wherein a processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them to implement the makeup rendering method for a virtual character according to any one of claims 1 to 8.

