CN115082608A - Virtual character clothing rendering method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN115082608A (application number CN202210589460.7A)
- Authority
- CN
- China
- Prior art keywords
- coloring
- point
- vector
- direction vector
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/55—Radiosity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/06—Ray-tracing
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Generation (AREA)
Abstract
The embodiment of the application discloses a virtual character clothing rendering method and device, an electronic device, and a storage medium. The method comprises the following steps: for each coloring point, acquiring a normal direction vector and a roughness of the coloring point; calculating a target highlight component corresponding to each coloring point according to the illumination direction vector, the normal direction vector of each coloring point, and the roughness of each coloring point; obtaining an identification map related to the clothes of the virtual character; determining a color operation result according to a plurality of different positions marked with various identifications in the identification map and the color parameter corresponding to each identification; acquiring a vector product parameter corresponding to each coloring point, and multiplying the vector product parameter by the color operation result to obtain a diffuse reflection component; and rendering the three-dimensional model of the clothes of the virtual character according to the target highlight component and the diffuse reflection component corresponding to each coloring point to obtain a rendering result. The application increases the player's sense of immersion as much as possible while lowering the game's barrier to entry.
Description
Technical Field
The application relates to the field of computers, and in particular to a virtual character garment rendering method and device, an electronic device, and a storage medium.
Background
In the prior art, a game character is usually rendered in one of the following two styles: 1) a rendering style based on Physically Based Rendering (PBR); 2) an anime-style (cel-shaded) rendering style.
However, the PBR-based rendering style generally requires a large amount of computation in pursuit of realism, placing high demands on the computing power of the terminal device, while the anime-style rendering has low realism and easily weakens the player's sense of immersion. The prior art therefore cannot increase the player's sense of immersion while lowering the game's barrier to entry.
Disclosure of Invention
The embodiments of the application provide a virtual character clothing rendering method and device, an electronic device, and a storage medium, which can solve the prior-art problem that the player's sense of immersion cannot be increased while the game's barrier to entry is lowered.
The embodiment of the application provides a virtual character clothing rendering method, which comprises the following steps:
for each coloring point in a plurality of coloring points, acquiring a normal direction vector and roughness of the coloring point, wherein the coloring points belong to the same target coloring surface, and the target coloring surface is any one of a plurality of planes included in a three-dimensional model of the clothes of the virtual character;
calculating a target highlight component corresponding to each coloring point according to the illumination direction vector, the normal direction vector of each coloring point and the roughness of each coloring point;
acquiring an identification map related to the clothes of the virtual character, wherein the identification map carries a plurality of identifications, the plurality of identifications are respectively marked at a plurality of different positions of the identification map, and each identification has a color parameter corresponding to the identification;
determining a color operation result according to a plurality of different positions marked with various identifications in the identification map and the color parameters corresponding to each identification;
acquiring a vector product parameter corresponding to each coloring point, and multiplying the vector product parameter by the color operation result to obtain a diffuse reflection component corresponding to each coloring point;
and rendering the three-dimensional model of the clothes of the virtual character according to the target highlight component corresponding to each coloring point and the diffuse reflection component corresponding to each coloring point to obtain a rendering result.
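The six method steps above can be sketched end-to-end for a single coloring point. The following Python sketch is illustrative only: the specular exponent mapping, the [-1, 1] to [0, 1] remapping, and all function names are assumptions, since the claims do not fix a concrete shading formula; `region_color` stands in for the color operation result already derived from the identification map.

```python
import math

def normalize(v):
    """Scale a 3D vector to modulo length 1."""
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def dot3(a, b):
    """Dot product of two 3D vectors."""
    return sum(x * y for x, y in zip(a, b))

def smoothstep(x):
    """Standard smooth step on [0, 1]."""
    t = min(max(x, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def shade_point(point, viewpoint, normal, light_dir, roughness, region_color, inv_illum):
    """One coloring point: target highlight component plus diffuse reflection
    component. The roughness-to-exponent mapping and the dot-product remap
    are illustrative assumptions, not mandated by the claims."""
    # Sight-line direction: difference of world-space coordinates, normalized.
    view = normalize(tuple(p - c for p, c in zip(point, viewpoint)))
    # Midline vector: sum of sight-line and illumination directions, normalized.
    half = normalize(tuple(v + l for v, l in zip(view, light_dir)))
    n_dot_h = max(dot3(normal, half), 0.0)
    shininess = 2.0 / max(roughness * roughness, 1e-4) - 2.0  # assumed mapping
    highlight = n_dot_h ** max(shininess, 0.0)
    # Vector product parameter: dot(N, L) remapped to [0, 1], then smooth-stepped.
    vp = smoothstep(dot3(normal, light_dir) * 0.5 + 0.5)
    # Color operation result stand-in: region color times inverse-illumination value.
    color_op = tuple(c * inv_illum for c in region_color)
    diffuse = tuple(vp * c for c in color_op)
    return tuple(h + d for h, d in zip((highlight,) * 3, diffuse))
```

For example, a point lit head-on with roughness 0.5 and a red region color yields a bright highlight plus a full red diffuse contribution.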
An embodiment of the present application further provides a virtual character clothing rendering device, the device includes:
a data obtaining unit, configured to obtain, for each of a plurality of coloring points, a normal direction vector and a roughness of the coloring point, where the plurality of coloring points belong to a same target coloring surface, and the target coloring surface is any one of a plurality of planes included in a three-dimensional model of a garment of the virtual character;
a highlight component calculation unit, configured to calculate a target highlight component corresponding to each of the coloring points according to the illumination direction vector, the normal direction vector of each of the coloring points, and the roughness of each of the coloring points;
an identification map obtaining unit, configured to obtain an identification map related to the clothing of the virtual character, where the identification map carries multiple types of identifiers, the multiple types of identifiers are respectively marked at multiple different positions of the identification map, and each type of identifier has a color parameter corresponding to the identifier;
the color operation result unit is used for determining a color operation result according to a plurality of different positions marked with various identifications in the identification map and color parameters corresponding to each identification;
the diffuse reflection component unit is used for acquiring a vector product parameter corresponding to each coloring point and multiplying the vector product parameter by the color operation result to obtain a diffuse reflection component corresponding to each coloring point;
and the rendering result acquisition unit is used for rendering the three-dimensional model of the clothes of the virtual character according to the target highlight component corresponding to each coloring point and the diffuse reflection component corresponding to each coloring point to obtain a rendering result.
In some embodiments, the highlight component calculation unit includes:
the sight-line direction subunit is used for determining a sight-line direction vector according to the world space coordinate value of the target coloring point and the world space coordinate value of the current viewpoint, where the target coloring point is any one of the plurality of coloring points;
the central line subunit is used for determining a central line vector according to the sight line direction vector and the illumination direction vector;
a first point multiplication result subunit, configured to perform point multiplication on the central line vector and a normal direction vector of the target coloring point to obtain a first point multiplication result;
and the target highlight subunit is used for calculating a target highlight component corresponding to the target coloring point according to the first point multiplication result and the roughness of the target coloring point.
In some embodiments, the sight-line direction subunit comprises:
a difference sub-unit, configured to calculate a difference between a world space coordinate value of the target coloring point and a world space coordinate value of the current viewpoint;
and the sight direction sub-unit is used for carrying out normalization processing on the difference value to obtain the sight direction vector.
In some embodiments, a midline subunit, comprises:
the addition subunit is used for acquiring the addition of the sight line direction vector and the illumination direction vector;
and the central line sub-unit is used for carrying out normalization processing on the summation to obtain the central line vector.
In some embodiments, the color operation result unit includes:
the to-be-operated numerical value subunit is used for acquiring the to-be-operated numerical value of the position corresponding to each identifier in the plurality of identifiers;
a product result subunit, configured to multiply the to-be-computed value corresponding to each identifier with the color parameter corresponding to the identifier to obtain a color parameter product result;
the addition result subunit is used for adding the color parameter product result corresponding to each identifier in the multiple identifiers to obtain a color parameter addition result;
and the inverse illumination subunit is used for multiplying the color parameter summation result and the inverse illumination mapping to obtain the color operation result.
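The four subunits above (value to be operated on, product, summation, inverse illumination) can be sketched per pixel. This Python sketch is an assumption-laden illustration: the 0/1 mask threshold, the dictionary representation of identifier colors, and the scalar inverse-illumination values are not specified by the patent.

```python
def color_operation_result(pixel_ids, id_colors, inverse_illumination, threshold=0.5):
    """For each pixel of the identification map: sum, over all identifiers,
    the 0/1 mask times that identifier's RGB color parameter, then multiply
    by the inverse-illumination value for that pixel (assumed scalar)."""
    results = []
    for px, inv in zip(pixel_ids, inverse_illumination):
        summed = (0.0, 0.0, 0.0)
        for id_value, color in id_colors.items():
            # Mask: 1 where this pixel carries this numeric identifier, else 0.
            mask = 1.0 if abs(px - id_value) < threshold else 0.0
            summed = tuple(s + mask * c for s, c in zip(summed, color))
        results.append(tuple(s * inv for s in summed))
    return results
```

Here each pixel ends up with exactly one region color (its identifier's), scaled by the inverse-illumination map.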
In some embodiments, the plurality of identifiers are numerical identifiers; the identification map comprises a plurality of pixel points, and the plurality of pixel points are in one-to-one correspondence with the plurality of coloring points;
the numerical subunit to be operated on includes:
the difference value calculation subunit is used for calculating, for each identifier in the plurality of identifiers, the difference value between the numerical value of the identifier and the numerical value stored at each pixel point in the plurality of pixel points;
the first comparison subunit is used for setting the to-be-calculated value of the pixel point with the difference value smaller than the preset difference value threshold as 1 when the difference value is smaller than the preset difference value threshold;
the second comparison sub-unit is used for setting the to-be-calculated value of the pixel point with the difference value not smaller than the preset difference value threshold as 0 when the difference value is not smaller than the preset difference value threshold;
and the product result subunit is specifically configured to multiply the to-be-operated numerical values of the plurality of pixel points by the color parameter corresponding to the current identifier to obtain the color parameter product result.
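The thresholding performed by the first and second comparison subunits can be sketched in a few lines. The threshold value 0.5 below is an assumption; the patent only requires some preset difference threshold.

```python
def to_be_operated_values(pixel_ids, identifier_value, threshold=0.5):
    """For one numeric identifier: pixels whose stored value differs from the
    identifier by less than the preset threshold get 1, all others get 0."""
    return [1 if abs(p - identifier_value) < threshold else 0 for p in pixel_ids]
```

With integer identifiers a threshold of 0.5 selects exactly the pixels labeled with that identifier.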
In some embodiments, the diffuse reflection component unit includes:
the primary selection vector subunit is used for acquiring a primary selection vector product parameter for each coloring point;
the secondary selection vector subunit is used for mapping the primary selection vector product parameter to a preset value range to obtain a secondary selection vector product parameter;
and the vector product subunit is used for processing the secondary selection vector product parameter by using a smooth step function to obtain the vector product parameter.
In some embodiments, the initially selected vector subunit is specifically configured to perform a dot product operation on the normal direction vector and the illumination direction vector to obtain the initially selected vector product parameter.
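The three-stage vector product parameter (dot product, remap to a preset range, smooth step) can be sketched as follows. Mapping from [-1, 1] into [0, 1] is an assumption; the claims say only "a preset value range".

```python
def dot3(a, b):
    """Dot product of two 3D vectors."""
    return sum(x * y for x, y in zip(a, b))

def smoothstep(edge0, edge1, x):
    """Standard smooth step: 0 below edge0, 1 above edge1, cubic in between."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def vector_product_parameter(normal, light_dir, edge0=0.0, edge1=1.0):
    """Initial selection: dot(N, L); secondary selection: remap from [-1, 1]
    into [0, 1] (assumed preset range); final: apply the smooth step function."""
    initial = dot3(normal, light_dir)
    secondary = initial * 0.5 + 0.5
    return smoothstep(edge0, edge1, secondary)
```

Surfaces facing the light yield 1, surfaces facing away yield 0, with a smooth toon-friendly falloff between.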
The embodiment of the present application further provides a computer-readable storage medium, where a plurality of instructions are stored, and the instructions are suitable for being loaded by a processor to perform any of the steps in the virtual character garment rendering method provided in the embodiment of the present application.
In the virtual character garment rendering method provided by the embodiment of the application, for each coloring point in a plurality of coloring points belonging to the same target coloring surface, a normal direction vector and roughness of each coloring point can be obtained, and then a target highlight component corresponding to each coloring point is calculated according to the illumination direction vector, the normal direction vector of each coloring point and the roughness of each coloring point. The embodiment of the application can also obtain the identification map, and determine the color operation result according to a plurality of different positions marked with various identifications in the identification map and the color parameters corresponding to each identification. The embodiment of the application can also obtain the vector product parameter corresponding to each coloring point, and multiply the vector product parameter with the color operation result to obtain the diffuse reflection component corresponding to each coloring point. And after the target highlight component and the diffuse reflection component corresponding to each coloring point are obtained, the three-dimensional model of the clothes of the virtual character can be rendered according to the target highlight component and the diffuse reflection component, and a rendering result is obtained.
In the present application, a target highlight component and a diffuse reflection component can be obtained for each of the plurality of coloring points. During rendering, the target highlight component adds realistic texture to leather and metal materials in the clothes, while the diffuse reflection component contributes to the anime-style stylization. The rendering result therefore gains realism on top of the anime style; since the anime style places low demands on the computing power of the terminal device, the player's sense of immersion is increased as much as possible while the game's barrier to entry is lowered.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1a is a scene schematic diagram of a virtual character garment rendering method provided in an embodiment of the present application;
fig. 1b is a schematic flowchart of a virtual character garment rendering method according to an embodiment of the present application;
FIG. 1c shows the identification map before the designer marks it with various identifiers;
fig. 2 is a schematic flowchart of a virtual character garment rendering method according to another embodiment of the present application;
fig. 3 is a schematic structural diagram of an apparatus for rendering a virtual character garment according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
The embodiments of the application provide a virtual character garment rendering method and device, an electronic device, and a storage medium.
The virtual character garment rendering method can be specifically integrated in electronic equipment, and the electronic equipment can be equipment such as a terminal and a server. The terminal can be a mobile phone, a tablet Computer, an intelligent bluetooth device, a notebook Computer, or a Personal Computer (PC), and the like; the server may be a single server or a server cluster composed of a plurality of servers.
In some embodiments, the virtual character garment rendering method may be further integrated in a plurality of electronic devices, for example, the virtual character garment rendering method may be integrated in a plurality of servers, and the virtual character garment rendering method of the present application is implemented by the plurality of servers.
In some embodiments, the server may also be implemented in the form of a terminal.
For example, referring to fig. 1a, in some embodiments, the electronic device may be a mobile terminal, and the embodiment may obtain, for each coloring point in a plurality of coloring points, a normal direction vector and a roughness of the coloring point, where the plurality of coloring points belong to a same target coloring surface, and the target coloring surface is any one of a plurality of planes included in a three-dimensional model of a garment of the virtual character; calculate a target highlight component corresponding to each coloring point according to the illumination direction vector, the normal direction vector of each coloring point, and the roughness of each coloring point; acquire an identification map related to the clothes of the virtual character, where the identification map carries a plurality of identifications, the plurality of identifications are respectively marked at a plurality of different positions of the identification map, and each identification has a corresponding color parameter; determine a color operation result according to the plurality of different positions marked with various identifications in the identification map and the color parameters corresponding to each identification; acquire a vector product parameter corresponding to each coloring point, and multiply the vector product parameter by the color operation result to obtain a diffuse reflection component corresponding to each coloring point; and render the three-dimensional model of the clothes of the virtual character according to the target highlight component and the diffuse reflection component corresponding to each coloring point to obtain a rendering result.
The virtual character clothing rendering method in one embodiment of the disclosure can be run on a terminal device or a server. The terminal device may be a local terminal device. When the virtual character garment rendering method is operated on a server, the method can be implemented and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and a client device.
In an optional embodiment, various cloud applications may run under the cloud interaction system, for example cloud games. A cloud game refers to a game mode based on cloud computing. In the running mode of a cloud game, the body that runs the game program is separated from the body that presents the game picture: the storage and execution of the virtual character garment rendering method are completed on a cloud game server, while the client device is used for receiving and sending data and presenting the game picture. For example, the client device may be a display device close to the user side with a data transmission function, such as a terminal, a television, a computer, or a palm computer; however, the device that performs the virtual character garment rendering is the cloud game server. When a game is played, the user operates the client device to send an operation instruction, such as a touch operation instruction, to the cloud game server; the cloud game server runs the game according to the operation instruction, encodes and compresses data such as the game picture, and returns the data to the client device through the network; finally, the client device decodes the data and outputs the game picture.
In an alternative embodiment, the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores a game program and is used for presenting a game screen. The local terminal device is used for interacting with a user through a graphical user interface, namely, a game program is downloaded and installed and operated through the electronic device conventionally. The manner in which the local terminal device provides the graphical user interface to the user may include a variety of ways, for example, it may be rendered for display on a display screen of the terminal or provided to the user by holographic projection. By way of example, the local terminal device may include a display screen for presenting a graphical user interface including game screens and a processor for running the game, generating the graphical user interface, and controlling display of the graphical user interface on the display screen.
A game scene (or referred to as a virtual scene) is a virtual scene that an application program displays (or provides) when running on a terminal or a server. Optionally, the virtual scene is a simulated environment of the real world, or a semi-simulated semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene is any one of a two-dimensional virtual scene and a three-dimensional virtual scene, and the virtual environment can be sky, land, sea and the like, wherein the land comprises environmental elements such as deserts, cities and the like. For example, in a sandbox type 3D shooting game, the virtual scene is a 3D game world for the user to control the virtual object to play against, and an exemplary virtual scene may include: at least one element selected from the group consisting of mountains, flat ground, rivers, lakes, oceans, deserts, sky, plants, buildings, and vehicles.
The game interface is an interface corresponding to the application program provided or displayed through the graphical user interface; it comprises the graphical user interface through which the user interacts and a game picture, the game picture being a picture of the game scene.
In alternative embodiments, game controls (e.g., skill controls, behavior controls, functionality controls, etc.), indicators (e.g., direction indicators, character indicators, etc.), information presentation areas (e.g., number of clicks, game play time, etc.), or game setting controls (e.g., system settings, stores, coins, etc.) may be included in the UI interface.
In an optional embodiment, the game screen is a display screen corresponding to a virtual scene displayed by the terminal device, and the game screen may include a game object performing game logic in the virtual scene, a Non-Player Character (NPC), an Artificial Intelligence (AI) Character, and other virtual objects.
For example, in some embodiments, the content displayed in the graphical user interface at least partially comprises a game scene, wherein the game scene comprises at least one game object.
In some embodiments, the game objects in the game scene comprise virtual objects, i.e., user objects, manipulated by the player user.
The game object refers to a virtual object in a virtual scene, including a game character, which is a dynamic object that can be controlled, i.e., a dynamic virtual object. Alternatively, the dynamic object may be a virtual character, a virtual animal, an animation character, or the like. The virtual object is a character controlled by a user through an input device, or an AI set in a virtual environment match-up through training, or an NPC set in a virtual scene match-up.
Optionally, the virtual object is a virtual character playing a game in a virtual scene. Optionally, the number of virtual objects in the virtual scene match is preset, or dynamically determined according to the number of clients participating in the match, which is not limited in the embodiment of the present application.
In one possible implementation, the user can control the virtual object to play the game behavior in the virtual scene, and the game behavior can include moving, releasing skills, using props, dialog, and the like, for example, controlling the virtual object to run, jump, crawl, and the like, and can also control the virtual object to fight with other virtual objects using the skills, virtual props, and the like provided by the application program.
A virtual camera is a necessary component for game scene pictures and is used to present them. One game scene corresponds to at least one virtual camera, and two or more virtual cameras may be used as game rendering windows according to actual needs; these windows capture and present the picture content of the game world to the user. By setting the parameters of the virtual camera, the user's viewing angle on the game world can be adjusted, such as a first-person viewing angle or a third-person viewing angle.
In an optional implementation manner, an embodiment of the present invention provides a method for rendering a virtual character garment, where a graphical user interface is provided by a terminal device, where the terminal device may be the aforementioned local terminal device, or the aforementioned client device in a cloud interaction system.
The following are detailed below. The numbers in the following examples are not intended to limit the order of preference of the examples.
In this embodiment, a method for rendering a virtual character garment is provided, as shown in fig. 1b, a specific flow of the method may include the following steps 110 to 160:
110. for each coloring point in a plurality of coloring points, acquiring a normal direction vector and roughness of the coloring point, wherein the plurality of coloring points belong to the same target coloring surface, and the target coloring surface is any one of a plurality of planes included in a three-dimensional model of the clothes of the virtual character.
The plurality of coloring points includes three vertex coloring points located at the vertex positions of the target coloring surface and a plurality of non-vertex coloring points located at non-vertex positions of the target coloring surface. The UV coordinate value of any one non-vertex coloring point can be obtained by interpolation of the UV coordinate values of the three vertex coloring points.
The normal direction vector is a vector whose direction coincides with the normal direction and whose modulo length is 1. The normal direction can be baked into the model's tangent-space normal map by an artist, who adjusts it manually according to aesthetic considerations. The roughness may be baked into a map by the designer.
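Reading the baked normal out of such a map typically requires decoding the texel. The n = 2t - 1 convention below is the common industry encoding, not something the patent specifies, so treat it as an assumption.

```python
import math

def decode_normal(texel_rgb):
    """Decode a tangent-space normal-map texel: channel values in [0, 1]
    are mapped to components in [-1, 1] via n = 2t - 1 (assumed convention),
    then renormalized to modulo length 1."""
    n = tuple(2.0 * t - 1.0 for t in texel_rgb)
    m = math.sqrt(sum(c * c for c in n))
    return tuple(c / m for c in n)
```

The neutral texel (0.5, 0.5, 1.0) decodes to the unperturbed surface normal (0, 0, 1).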
120. And calculating the target highlight component corresponding to each coloring point according to the illumination direction vector, the normal direction vector of each coloring point and the roughness of each coloring point.
The illumination direction vector is a vector whose direction coincides with the illumination direction in world space coordinates and whose modulo length is 1. The target highlight component is a component reflecting the highlight-related features of the virtual character's costume; a garment exhibits such features when its surface is made of reflective materials, such as metal or leather.
Optionally, in a specific embodiment, the step 120 may specifically include the following steps 121 to 124:
121. and determining a sight line direction vector according to the world space coordinate value of the target coloring point and the world space coordinate value of the current viewpoint, wherein the target coloring point is any one of a plurality of coloring points.
The current viewpoint is the position of the virtual camera. The sight-line direction vector is a vector whose direction coincides with the direction in which the player views the game world and whose modulo length is 1.
Optionally, in a specific embodiment, the step 121 may specifically include the following step a1 to step a 2:
and A1, calculating the difference value between the world space coordinate value of the target coloring point and the world space coordinate value of the current viewpoint.
And A2, carrying out normalization processing on the difference value to obtain the sight direction vector.
In the above-described embodiment, the direction of the sight-line direction vector can be obtained by calculating the difference between the world-space coordinate values of the target coloring point and the current viewpoint; this difference is a vector whose modulus is generally not 1 and whose direction is the direction in which the player views the game world. After the difference is obtained, it is normalized so that its modulus is 1, which yields the sight-line direction vector.
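As a minimal sketch of steps A1 and A2 (plain Python, treating world-space coordinates as 3-tuples; the function names are illustrative and not from the source):

```python
import math

def normalize(v):
    # Scale a 3D vector so its modulus (length) becomes 1.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def sight_line_direction(shading_point, viewpoint):
    # Step A1: difference of the world-space coordinate values.
    diff = tuple(p - v for p, v in zip(shading_point, viewpoint))
    # Step A2: normalization of the difference gives a unit vector.
    return normalize(diff)
```

For example, with the viewpoint at the origin and a shading point at (3, 0, 4), the sight-line direction vector is (0.6, 0.0, 0.8).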
122. And determining a midline vector according to the sight line direction vector and the illumination direction vector.
Optionally, in a specific embodiment, the step 122 may specifically include the following steps B1 to B2:
and B1, acquiring the sum of the sight line direction vector and the illumination direction vector.
And B2, carrying out normalization processing on the summation to obtain the midline vector.
In the above-described embodiment, the direction of the midline vector can be obtained by calculating the sum of the sight-line direction vector and the illumination direction vector. After the sum is obtained, it is normalized so that its modulus is 1, which yields the midline vector.
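Steps B1 and B2 can be sketched the same way (plain Python, 3-tuples; names are illustrative):

```python
import math

def normalize(v):
    # Scale a 3D vector so its modulus becomes 1.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def midline_vector(sight_dir, light_dir):
    # Step B1: sum of the sight-line and illumination direction vectors.
    s = tuple(a + b for a, b in zip(sight_dir, light_dir))
    # Step B2: normalization of the sum gives the midline (half) vector.
    return normalize(s)
```

For two perpendicular unit vectors (1, 0, 0) and (0, 1, 0), the midline vector is approximately (0.7071, 0.7071, 0.0), i.e. the bisector of the two directions.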
123. And performing point multiplication on the central line vector and the normal direction vector of the target coloring point to obtain a first point multiplication result.
124. And calculating a target highlight component corresponding to the target coloring point according to the first point multiplication result and the roughness of the target coloring point.
Specifically, denoting the first point multiplication result as NoH, an initial highlight intensity value I1 can be calculated from NoH and the roughness R. When calculating I1, the intermediate result is compared with 2048 and the minimum of the two is taken, which avoids extreme over-brightness.
After the initial highlight intensity value I1 is calculated, the following formula can be used to calculate the final highlight intensity value I2:
I2 = (0.25 × R + 0.25) × I1.
After the final highlight intensity value I2 is calculated, the following formula can be used to calculate the target highlight component HL:
HL = SpecularColor × NoL × I2 + Me + ReD,
where SpecularColor is an adjustable highlight color parameter that can be set by the artist; NoL is the dot product of the normal direction vector and the illumination direction vector; Me is the metalness, which can be baked by the artist; and ReD is the environment reflection result based on Image-Based Lighting (IBL), which can be obtained from the real-time rendering process.
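The exact formula for I1 is lost from this text, so the sketch below (plain Python, with colors simplified to scalars) substitutes a hypothetical Blinn-Phong lobe: the roughness-to-exponent mapping and the placement of the 2048 clamp are assumptions, not the patent's formulas. Only the I2 and HL lines follow formulas actually stated in the text.

```python
def target_highlight(NoH, NoL, R, specular_color, metalness, env_reflection):
    # I1: assumed Blinn-Phong lobe. The exponent mapping from roughness R and
    # the min(..., 2048) clamp position are assumptions for illustration only.
    power = min(2.0 ** (10.0 * (1.0 - R) + 1.0), 2048.0)
    i1 = max(NoH, 0.0) ** power
    # I2 = (0.25 * R + 0.25) * I1, as stated in the text.
    i2 = (0.25 * R + 0.25) * i1
    # HL = SpecularColor * NoL * I2 + Me + ReD, as stated in the text.
    return specular_color * NoL * i2 + metalness + env_reflection
```

For a point lit head-on (NoH = NoL = 1) with roughness 0.5, unit specular color, and no metalness or environment term, the component reduces to I2 = 0.375.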
In the above embodiment, the sight-line direction vector may be determined first, and then the midline vector may be determined from the sight-line direction vector and the illumination direction vector. After the midline vector is obtained, it is dot-multiplied with the normal direction vector of the target coloring point to obtain the first point multiplication result. The initial highlight intensity value I1, the final highlight intensity value I2, and the target highlight component HL are then calculated in turn from the first point multiplication result, the roughness, the normal direction vector, and the illumination direction vector. Because the calculation of the target highlight component refers to unit vectors in multiple directions, the rendered highlights on the virtual character's clothing appear more true and natural.
130. And acquiring an identification map related to the clothes of the virtual character, wherein the identification map carries a plurality of identifications, the plurality of identifications are respectively marked at a plurality of different positions of the identification map, and each identification has a color parameter corresponding to the identification.
The identification map is a map on which the artist marks a plurality of identifiers associated with the clothing of the virtual character; for details, refer to fig. 1c, which shows the identification map before the artist has marked it with the identifiers. The identifiers can be marked at a plurality of different positions of the garment shown in the map; specifically, each identifier may occur multiple times, with each occurrence marked at a different position. Among the identifiers, positions marked with the same identifier receive the same processing operation; that is, the positions corresponding to the same identifier can be processed with the same color parameters.
140. And determining a color operation result according to a plurality of different positions marked with various identifications in the identification map and the color parameters corresponding to each identification.
The color operation result is an intermediate result in the process of calculating the diffuse reflection component; it reflects the different color processing operations applied to different positions of the virtual character's clothing, and these operations are related to the type of identifier.
Optionally, in a specific embodiment, the step 140 may specifically include the following steps 141 to 144:
141. and acquiring the numerical value to be operated of the position corresponding to each of the multiple types of identifiers.
142. And multiplying the to-be-calculated numerical value corresponding to each identifier with the color parameter corresponding to the identifier to obtain a color parameter product result.
In the above embodiment, each identifier has a corresponding position, and the position has a value to be calculated; each identification also has a respective color parameter. Therefore, the type of the identifier can be used as an index, the association between the value to be operated at the position corresponding to the same identifier and the corresponding color parameter is established, and then the two are multiplied, so that the color parameter product result corresponding to each identifier can be obtained.
In another embodiment, the plurality of identifiers are numeric identifiers; the identification map includes a plurality of pixel points, and the pixel points correspond one-to-one with the coloring points.
Suppose the numeric identifiers are, respectively, 0, 10, 20, 30, …, 250 (26 numeric identifiers in total). For the map of the virtual character's clothing, the artist can mark different positions of the garment with the corresponding numeric identifiers according to the different color processing requirements. Different positions of the garment include the collar, zipper, cuffs, sleeves, buttons, and so on. For example, the artist may mark the collar position with the numeric identifier 0 and the cuff position with the numeric identifier 10, and may mark N buttons with N different numeric identifiers, or with the same numeric identifier, and so on.
Correspondingly, step 141 may specifically include the following steps:
for each identifier in the multiple identifiers, calculating the difference value between the numerical value of the identifier and the numerical identifier corresponding to each pixel point in the multiple pixel points;
if the difference is smaller than the preset difference threshold, setting the to-be-calculated value of the pixel point with the difference smaller than the preset difference threshold as 1;
and if the difference is not less than the preset difference threshold, setting the to-be-calculated value of the pixel point with the difference not less than the preset difference threshold as 0.
In the above embodiment, if the difference is smaller than the preset difference threshold, the pixel point is strongly correlated with the identifier currently being processed; if the difference is not smaller than the preset difference threshold, the pixel point is weakly correlated with it. By calculating the difference between the numeric value of the identifier and the numeric identifier corresponding to each pixel point included in the identification map, and judging whether that difference is smaller than the preset difference threshold, the pixel points strongly correlated with each identifier can be obtained from the identification map, and their values to be operated on are set to 1. This embodiment can accurately obtain the pixel points corresponding to each type of identifier.
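Under the assumption that the identification map is flattened to a list of per-pixel numeric identifiers and that the preset difference threshold is, say, 5 (both assumptions for illustration), the 0/1 assignment of steps 207 to 209 can be sketched as:

```python
def values_to_operate(pixel_ids, identifier, threshold=5):
    # A pixel whose numeric identifier differs from the identifier currently
    # being processed by less than `threshold` is strongly correlated with it
    # and gets the value 1; every other pixel gets the value 0.
    return [1 if abs(pid - identifier) < threshold else 0 for pid in pixel_ids]
```

With identifiers spaced 10 apart (0, 10, …, 250) and a threshold of 5, processing identifier 10 against pixels [0, 10, 10, 250] yields the mask [0, 1, 1, 0].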
Step 142 may specifically include the following step: multiplying the values to be operated on of the plurality of pixel points by the color parameter corresponding to the identifier currently being processed to obtain a color parameter product result.
By multiplying the value to be operated on by the color parameter corresponding to the current identifier, different color processing results can be realized for different identifier types.
143. And adding the color parameter product results corresponding to each of the multiple identifications to obtain a color parameter addition result.
144. And multiplying the color parameter addition result by the albedo map to obtain the color operation result.
The color parameter addition result is the sum of the color parameter product results corresponding to each of the plurality of identifiers; this result can be used to perform refined color processing on different positions of the clothing of the same virtual character, improving the realism of the virtual character's clothing.
The albedo map is a map obtained by baking the surface color of the model; multiplying the color parameter addition result by the albedo map makes the overall style of the virtual character's clothing more unified.
In the above embodiment, the relationship between the position of the identifier in the identifier map and the color parameter corresponding to the identifier may be established according to the type of the identifier, and then the color parameter multiplication result corresponding to each of the different positions and the color parameter summation result obtained by adding the plurality of color parameter multiplication results are calculated. And multiplying the color parameter addition result by the albedo map to obtain a color operation result. The color operation result can reflect the color parameters respectively corresponding to different positions of the clothes of the virtual character, thereby realizing the fine processing process of the clothes of the virtual character.
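A sketch of steps 142 to 144, with the per-identifier color parameters and the albedo map simplified to one scalar per pixel (an assumption for brevity; in practice these would be RGB values):

```python
def color_operation_result(masks, color_params, albedo):
    # masks: one 0/1 list per identifier (the values to be operated on);
    # color_params: one color parameter per identifier;
    # albedo: per-pixel values from the albedo map.
    n_pixels = len(albedo)
    summed = [0.0] * n_pixels
    for mask, color in zip(masks, color_params):
        for i, m in enumerate(mask):
            summed[i] += m * color          # step 142 (product) and step 143 (sum)
    return [s * a for s, a in zip(summed, albedo)]  # step 144: multiply by albedo
```

For two identifiers with masks [1, 0] and [0, 1], color parameters 0.5 and 2.0, and albedo [1.0, 0.5], the color operation result is [0.5, 1.0].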
150. And acquiring a vector product parameter corresponding to each coloring point, and multiplying the vector product parameter by the color operation result to obtain a diffuse reflection component corresponding to each coloring point.
The vector product parameter is obtained by processing a plurality of unit vectors in different directions, and each coloring point has a corresponding vector product parameter. And for each coloring point, multiplying the vector product parameter by the color operation result to obtain the diffuse reflection component corresponding to each coloring point.
The diffuse reflection component contributes to the cartoon-style stylization of the virtual character's clothing.
Optionally, in a specific embodiment, the step "obtaining a vector product parameter corresponding to each coloring point" may specifically include the following steps 151 to 153:
151. for each of the coloring points, an initial selection vector product parameter is obtained.
Optionally, step 151 may specifically include: and performing dot product operation on the normal direction vector and the illumination direction vector to obtain the primary selection vector product parameter.
152. And mapping the primary selection vector product parameter to a preset value range to obtain a secondary selection vector product parameter.
The preset value range is a predetermined interval, and its specific values should not be understood as limiting the application. For example, let the initially selected vector product parameter be NoL and the preset value range be [Imin, Imax]. Mapping NoL to the preset value range [Imin, Imax] specifically includes the following:
if NoL is less than Imin, NoL takes the value Imin; if NoL is greater than Imax, NoL takes the value Imax; if NoL lies within [Imin, Imax], no transformation is applied to NoL.
After NoL is mapped to the preset value range [Imin, Imax], the secondarily selected vector product parameter NoL' is obtained.
153. And processing the secondary selection vector product parameter by using a smooth step function to obtain the vector product parameter.
Alternatively, the following formula may be used to compute the vector product parameter NoL_Cel: NoL_Cel = smoothstep(m - s, m + s, NoL') / q, where m and s (and the divisor q) are preset parameters.
In the above embodiment, by limiting the value range of the vector product parameter and applying the smooth step function, the vector product parameter changes more smoothly within the value range, so that the diffuse reflection component that uses it as an intermediate parameter is smoother and more natural.
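Steps 151 to 153 can be sketched as follows, using a GLSL-style smoothstep; the parameter names i_min, i_max, m, s, and q mirror the text, and the values in the example are illustrative only:

```python
def smoothstep(edge0, edge1, x):
    # Standard smoothstep: clamp to [0, 1], then cubic Hermite interpolation.
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def vector_product_parameter(NoL, i_min, i_max, m, s, q):
    # Step 152: map the initially selected parameter NoL into [i_min, i_max].
    NoL2 = min(max(NoL, i_min), i_max)
    # Step 153: smooth-step around the band centre m with half-width s,
    # then divide by q, giving the cel-shaded vector product parameter.
    return smoothstep(m - s, m + s, NoL2) / q
```

For example, with i_min = 0, i_max = 1, m = 0.5, s = 0.4, and q = 1, an input NoL of 0.95 lies above the upper band edge m + s = 0.9, so the parameter saturates at 1.0; values inside the band transition smoothly rather than with the hard cutoff of plain clamping.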
160. And rendering the three-dimensional model of the clothes of the virtual character according to the target highlight component corresponding to each coloring point and the diffuse reflection component corresponding to each coloring point to obtain a rendering result.
For each coloring point in the plurality of coloring points belonging to the same target coloring surface, the operation processes of the steps 110 to 160 may be performed, so that a plurality of target highlight components and a plurality of diffuse reflection components corresponding to the target coloring surface may be obtained, thereby implementing the rendering process of the target coloring surface. The target coloring surface is any one of a plurality of planes included in the three-dimensional model of the garment of the virtual character, so that the operation process is repeatedly executed on each plane, and the rendering results of all planes included in the three-dimensional model of the garment of the virtual character can be obtained.
In the virtual character garment rendering method provided by the embodiment of the application, for each coloring point in a plurality of coloring points belonging to the same target coloring surface, a normal direction vector and roughness of each coloring point can be obtained, and then a target highlight component corresponding to each coloring point is calculated according to the illumination direction vector, the normal direction vector of each coloring point and the roughness of each coloring point. The embodiment of the application can also obtain the identification map, and determine the color operation result according to a plurality of different positions marked with various identifications in the identification map and the color parameters corresponding to each identification. The embodiment of the application can also obtain the vector product parameter corresponding to each coloring point, and multiply the vector product parameter with the color operation result to obtain the diffuse reflection component corresponding to each coloring point. And after the target highlight component and the diffuse reflection component corresponding to each coloring point are obtained, the three-dimensional model of the clothes of the virtual character can be rendered according to the target highlight component and the diffuse reflection component, and a rendering result is obtained. 
In the present application, a target highlight component and a diffuse reflection component can be obtained for each of the plurality of coloring points. During rendering, the target highlight component increases the realistic texture of leather and metal materials in the clothing, while the diffuse reflection component contributes to the cartoon-style stylization, so that the rendering result gains realism on top of the cartoon stylization; moreover, the cartoon style places low demands on the computing capability of the terminal device.
In this application, the player's sense of immersion can be increased as much as possible while the barrier to entry of the game is reduced.
The method described in the above embodiments is further described in detail below.
In this embodiment, the method of the embodiment of the present application will be described in detail by taking the case where the identifier is a digital identifier as an example.
As shown in fig. 2, a virtual character clothing rendering method specifically includes the following steps:
201. for each coloring point in a plurality of coloring points, acquiring a normal direction vector and roughness of the coloring point, wherein the plurality of coloring points belong to the same target coloring surface, and the target coloring surface is any one of a plurality of planes included in a three-dimensional model of the clothes of the virtual character.
202. And calculating a difference value between the world space coordinate value of the target coloring point and the world space coordinate value of the current viewpoint, and performing normalization processing on the difference value to obtain the sight line direction vector.
203. And acquiring the sum of the sight direction vector and the illumination direction vector, and carrying out normalization processing on the sum to obtain the midline vector.
204. And performing point multiplication on the central line vector and the normal direction vector of the target coloring point to obtain a first point multiplication result.
205. And calculating a target highlight component corresponding to the target coloring point according to the first point multiplication result and the roughness of the target coloring point.
206. And acquiring an identification map related to the clothes of the virtual character, wherein the identification map carries a plurality of identifications, the plurality of identifications are respectively marked at a plurality of different positions of the identification map, and each identification has a color parameter corresponding to the identification.
207. And for each identifier in the multiple identifiers, calculating the difference value between the numerical value of the identifier and the numerical identifier corresponding to each pixel point in the multiple pixel points.
208. If the difference is smaller than the preset difference threshold, setting the to-be-calculated value of the pixel point with the difference smaller than the preset difference threshold as 1; and if the difference is not less than the preset difference threshold, setting the to-be-calculated value of the pixel point with the difference not less than the preset difference threshold as 0.
209. And multiplying the values to be operated on of the plurality of pixel points by the color parameter corresponding to the identifier currently being processed to obtain a color parameter product result.
210. And for each coloring point, performing point multiplication operation on the normal direction vector and the illumination direction vector to obtain the primary selection vector product parameter.
211. And mapping the primary selection vector product parameter to a preset value range to obtain a secondary selection vector product parameter.
212. And processing the secondary selection vector product parameter by using a smooth step function to obtain the vector product parameter.
213. And multiplying the vector product parameter by the color operation result to obtain the diffuse reflection component corresponding to each coloring point.
214. And rendering the three-dimensional model of the clothes of the virtual character according to the target highlight component corresponding to each coloring point and the diffuse reflection component corresponding to each coloring point to obtain a rendering result.
As can be seen from the above, for each of the plurality of coloring points belonging to the same target coloring surface, the normal direction vector and the roughness of each coloring point can be obtained, and the target highlight component corresponding to each coloring point can then be calculated from the illumination direction vector, the normal direction vector of each coloring point, and the roughness of each coloring point. The embodiment of the application can also obtain the identification map and determine the color operation result from the plurality of different positions marked with the various identifiers in the identification map and the color parameter corresponding to each identifier. The embodiment of the application can also obtain the vector product parameter corresponding to each coloring point and multiply it with the color operation result to obtain the diffuse reflection component corresponding to each coloring point. After the target highlight component and the diffuse reflection component corresponding to each coloring point are obtained, the three-dimensional model of the virtual character's clothing can be rendered according to them to obtain a rendering result. In the present application, a target highlight component and a diffuse reflection component can be obtained for each of the plurality of coloring points; during rendering, the target highlight component increases the realistic texture of leather and metal materials in the clothing, while the diffuse reflection component contributes to the cartoon-style stylization, so that the rendering result gains realism on top of the cartoon stylization; moreover, the cartoon style places low demands on the computing capability of the terminal device.
In this application, the player's sense of immersion can be increased as much as possible while the barrier to entry of the game is reduced.
In order to better implement the method, an embodiment of the present application further provides a virtual character garment rendering apparatus, where the virtual character garment rendering apparatus may be specifically integrated in an electronic device, and the electronic device may be a terminal. The terminal can be a mobile phone, a tablet computer, an intelligent Bluetooth device, a notebook computer, a personal computer and other devices.
For example, in the present embodiment, the device of the present embodiment will be described in detail by taking an example in which the virtual character garment rendering device is specifically integrated in a terminal.
For example, as shown in fig. 3, the virtual character garment rendering apparatus may include:
a data obtaining unit 301, configured to obtain, for each of a plurality of coloring points, a normal direction vector and a roughness of the coloring point, where the plurality of coloring points belong to a same target coloring surface, and the target coloring surface is any one of a plurality of planes included in a three-dimensional model of a garment of the virtual character;
a highlight component calculation unit 302, configured to calculate a target highlight component corresponding to each of the coloring points according to the illumination direction vector, the normal direction vector of each of the coloring points, and the roughness of each of the coloring points;
an identifier map obtaining unit 303, configured to obtain an identifier map related to the garment of the virtual character, where the identifier map carries multiple identifiers, the multiple identifiers are respectively marked at multiple different positions of the identifier map, and each identifier has a color parameter corresponding to the identifier;
a color operation result unit 304, configured to determine a color operation result according to a plurality of different positions marked with multiple types of identifiers in the identifier map and a color parameter corresponding to each type of identifier;
a diffuse reflection component unit 305, configured to obtain a vector product parameter corresponding to each coloring point, and multiply the vector product parameter with the color operation result to obtain a diffuse reflection component corresponding to each coloring point;
a rendering result obtaining unit 306, configured to render the three-dimensional model of the garment of the virtual character according to the target highlight component corresponding to each coloring point and the diffuse reflection component corresponding to each coloring point, so as to obtain a rendering result.
In some embodiments, the highlight component calculation unit 302 includes:
the visual line direction subunit is used for determining a visual line direction vector according to the world space coordinate value of the target colored point and the world space coordinate value of the current viewpoint, wherein the target colored point is any one of a plurality of colored points;
the central line subunit is used for determining a central line vector according to the sight line direction vector and the illumination direction vector;
a first point multiplication result subunit, configured to perform point multiplication on the central line vector and a normal direction vector of the target coloring point to obtain a first point multiplication result;
and the target highlight subunit is used for calculating a target highlight component corresponding to the target coloring point according to the first point multiplication result and the roughness of the target coloring point.
In some embodiments, the gaze direction subunit comprises:
a difference sub-unit, configured to calculate a difference between a world space coordinate value of the target coloring point and a world space coordinate value of the current viewpoint;
and the sight direction sub-unit is used for carrying out normalization processing on the difference value to obtain the sight direction vector.
In some embodiments, a midline subunit, comprises:
the addition subunit is used for acquiring the addition of the sight line direction vector and the illumination direction vector;
and the central line sub-unit is used for carrying out normalization processing on the summation to obtain the central line vector.
In some embodiments, the color operation result unit 304 includes:
the to-be-operated numerical value subunit is used for acquiring the to-be-operated numerical value of the position corresponding to each identifier in the plurality of identifiers;
a product result subunit, configured to multiply the to-be-computed value corresponding to each identifier with the color parameter corresponding to the identifier to obtain a color parameter product result;
the addition result subunit is used for adding the color parameter product result corresponding to each identifier in the multiple identifiers to obtain a color parameter addition result;
and the albedo subunit is used for multiplying the color parameter addition result by the albedo map to obtain the color operation result.
In some embodiments, the plurality of identifiers are numeric identifiers; the identification map includes a plurality of pixel points, and the pixel points correspond one-to-one with the coloring points;
the numerical subunit to be operated on includes:
the difference value calculation subunit is used for calculating the difference value between the digital numerical value of the identifier and the digital identifier corresponding to each pixel point in the plurality of pixel points for each identifier in the plurality of identifiers;
the first comparison subunit is used for setting the to-be-calculated value of the pixel point with the difference value smaller than the preset difference value threshold as 1 when the difference value is smaller than the preset difference value threshold;
the second comparison sub-unit is used for setting the to-be-calculated value of the pixel point with the difference value not smaller than the preset difference value threshold as 0 when the difference value is not smaller than the preset difference value threshold;
and the product result subunit is specifically configured to multiply the values to be operated on of the plurality of pixel points by the color parameter corresponding to the identifier currently being processed to obtain a color parameter product result.
In some embodiments, the diffuse reflection component unit 305 includes:
the primary selection vector subunit is used for acquiring a primary selection vector product parameter for each coloring point;
the secondary selection vector subunit is used for mapping the primary selection vector product parameter to a preset value range to obtain a secondary selection vector product parameter;
and the vector product subunit is used for processing the secondary selection vector product parameter by using a smooth step function to obtain the vector product parameter.
In some embodiments, the initially selected vector subunit is specifically configured to perform a dot product operation on the normal direction vector and the illumination direction vector to obtain the initially selected vector product parameter.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
As can be seen from the above, in the virtual character garment rendering method provided in the embodiment of the present application, for each of a plurality of coloring points belonging to the same target coloring surface, a normal direction vector and a roughness of each coloring point may be obtained, and then a target highlight component corresponding to each coloring point is calculated according to the illumination direction vector, the normal direction vector of each coloring point, and the roughness of each coloring point. The embodiment of the application can also obtain the identification map, and determine the color operation result according to a plurality of different positions marked with various identifications in the identification map and the color parameters corresponding to each identification. The embodiment of the application can also obtain the vector product parameter corresponding to each coloring point, and multiply the vector product parameter with the color operation result to obtain the diffuse reflection component corresponding to each coloring point. And after the target highlight component and the diffuse reflection component corresponding to each coloring point are obtained, the three-dimensional model of the clothes of the virtual character can be rendered according to the target highlight component and the diffuse reflection component, and a rendering result is obtained. 
In the present application, a target highlight component and a diffuse reflection component can be obtained for each of a plurality of coloring points; during rendering, the target highlight component can enhance the realistic texture of leather and metal materials in the garment, while the diffuse reflection component contributes to the cartoon-style stylization, so that the rendering result gains realism on top of the cartoon stylization; moreover, the cartoon style places low demands on the computing capability of the terminal device.
In the present application, the player's sense of immersion can be increased as much as possible while the barrier to entry of the game is lowered.
The embodiment of the application also provides the electronic equipment which can be equipment such as a terminal and a server. The terminal can be a mobile phone, a tablet computer, an intelligent Bluetooth device, a notebook computer, a personal computer and the like; the server may be a single server, a server cluster composed of a plurality of servers, or the like.
In some embodiments, the virtual character garment rendering apparatus may be further integrated into a plurality of electronic devices, for example, the virtual character garment rendering apparatus may be integrated into a plurality of servers, and the virtual character garment rendering method of the present application is implemented by the plurality of servers.
In this embodiment, a detailed description is given by taking the electronic device as an example. For example, fig. 4 shows a schematic structural diagram of the electronic device according to the embodiment of the present application. Specifically:
the electronic device may include components such as a processor 401 of one or more processing cores, memory 402 of one or more computer-readable storage media, a power supply 403, an input module 404, and a communication module 405. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 4 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 401 is a control center of the electronic device, connects various parts of the whole electronic device by various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device. In some embodiments, processor 401 may include one or more processing cores; in some embodiments, processor 401 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the electronic device, and the like. Further, the memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
The electronic device also includes a power supply 403 for supplying power to the various components, and in some embodiments, the power supply 403 may be logically coupled to the processor 401 via a power management system, such that the power management system may manage charging, discharging, and power consumption. The power supply 403 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The electronic device may also include an input module 404, the input module 404 operable to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
The electronic device may also include a communication module 405, and in some embodiments the communication module 405 may include a wireless module, through which the electronic device may wirelessly transmit over short distances, thereby providing wireless broadband internet access to the user. For example, the communication module 405 may be used to assist a user in sending and receiving e-mails, browsing web pages, accessing streaming media, and the like.
Although not shown, the electronic device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 401 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application program stored in the memory 402, thereby implementing various functions as follows:
for each coloring point in a plurality of coloring points, acquiring a normal direction vector and roughness of the coloring point, wherein the coloring points belong to the same target coloring surface, and the target coloring surface is any one of a plurality of planes included in a three-dimensional model of the clothes of the virtual character; calculating a target highlight component corresponding to each coloring point according to the illumination direction vector, the normal direction vector of each coloring point and the roughness of each coloring point; acquiring an identification map related to the clothes of the virtual character, wherein the identification map carries a plurality of identifications, the plurality of identifications are respectively marked at a plurality of different positions of the identification map, and each identification has a color parameter corresponding to the identification; determining a color operation result according to a plurality of different positions marked with various identifications in the identification map and color parameters corresponding to each identification; acquiring a vector product parameter corresponding to each coloring point, and multiplying the vector product parameter by the color operation result to obtain a diffuse reflection component corresponding to each coloring point; and rendering the three-dimensional model of the clothes of the virtual character according to the target highlight component corresponding to each coloring point and the diffuse reflection component corresponding to each coloring point to obtain a rendering result.
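The per-coloring-point flow described above can be sketched as follows. The text does not specify how the target highlight component and the diffuse reflection component are combined into the final color, so simple addition is assumed here, and the function name is illustrative.

```python
def shade_point(target_highlight_value, vector_product_parameter, color_operation_rgb):
    # Diffuse reflection component: the vector product parameter multiplied
    # by the color operation result (an RGB triple).
    diffuse = tuple(vector_product_parameter * c for c in color_operation_rgb)
    # Final rendered color of the coloring point. Additive combination of the
    # highlight and diffuse components is an assumption, not stated in the text.
    return tuple(target_highlight_value + d for d in diffuse)
```

For example, a coloring point with a highlight value of 0.5, a vector product parameter of 0.5, and a red color operation result yields a brightened, desaturated red.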
Optionally, the calculating the target highlight component corresponding to each coloring point according to the illumination direction vector, the normal direction vector of each coloring point, and the roughness of each coloring point includes: determining a sight line direction vector according to a world space coordinate value of a target coloring point and a world space coordinate value of a current viewpoint, wherein the target coloring point is any one of a plurality of coloring points; determining a central line vector according to the sight line direction vector and the illumination direction vector; performing point multiplication on the central line vector and a normal direction vector of the target coloring point to obtain a first point multiplication result; and calculating a target highlight component corresponding to the target coloring point according to the first point multiplication result and the roughness of the target coloring point.
In the above embodiment, the calculation of the target highlight component makes use of unit vectors in multiple directions, so that the rendered highlights of the virtual character's costume appear more realistic and natural.
Optionally, the determining, according to the world space coordinate value of the target coloring point and the world space coordinate value of the current viewpoint, a sight line direction vector includes: calculating a difference value between the world space coordinate value of the target coloring point and the world space coordinate value of the current viewpoint; and carrying out normalization processing on the difference value to obtain the sight direction vector.
In the above-described embodiment, the direction of the sight-line direction vector is obtained by calculating the difference between the world space coordinate values of the target coloring point and the current viewpoint; this difference is a vector whose modular length is generally not 1 and whose direction is the direction in which the player views the game world. After the difference is obtained, it is normalized so that its modular length becomes 1, which yields the sight-line direction vector.
Optionally, the determining a centerline vector according to the gaze direction vector and the illumination direction vector includes: acquiring the sum of the sight line direction vector and the illumination direction vector; and normalizing the sum to obtain the centerline vector.
In the above-described embodiment, the direction of the centerline vector is obtained by calculating the sum of the sight-line direction vector and the illumination direction vector. After the sum is obtained, it is normalized so that its modular length becomes 1, which yields the centerline vector.
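The sight-line vector, centerline vector, and highlight steps above can be sketched in Python. The sign convention for the sight-line vector follows the text (coloring point minus viewpoint), and the mapping from roughness to a specular exponent is an assumption, since the text only says the highlight is calculated from the first point-multiplication result and the roughness.

```python
import math

def normalize(v):
    # Scale a 3-vector to unit modular length (length 1).
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    # Point multiplication (dot product) of two 3-vectors.
    return sum(x * y for x, y in zip(a, b))

def target_highlight(shade_pos, view_pos, light_dir, normal, roughness):
    # Sight-line direction vector: difference of the world space coordinate
    # values of the target coloring point and the current viewpoint, normalized.
    view_dir = normalize(tuple(p - v for p, v in zip(shade_pos, view_pos)))
    # Centerline vector: normalized sum of the sight-line and illumination vectors.
    half_vec = normalize(tuple(a + b for a, b in zip(view_dir, light_dir)))
    # First point-multiplication result with the normal direction vector.
    n_dot_h = max(dot(half_vec, normal), 0.0)
    # Assumed roughness-to-exponent mapping (not specified in the text):
    # rougher surfaces yield a broader, softer highlight.
    shininess = 2.0 / max(roughness * roughness, 1e-4)
    return n_dot_h ** shininess
```

With the viewpoint directly facing the surface and the light aligned with the sight line, the highlight reaches its maximum of 1 regardless of roughness.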
Optionally, the determining a color operation result according to a plurality of different positions marked with a plurality of types of identifiers in the identifier map and a color parameter corresponding to each type of identifier includes: acquiring a value to be calculated at the position corresponding to each of the multiple types of identifiers; multiplying the value to be calculated corresponding to each identifier by the color parameter corresponding to the identifier to obtain a color parameter product result; adding the color parameter product results corresponding to each of the multiple identifiers to obtain a color parameter addition result; and multiplying the color parameter addition result by the albedo map to obtain the color operation result.
In the above embodiment, the color operation result may reflect color parameters corresponding to different positions of the garment of the virtual character, so as to implement a refinement process of the garment of the virtual character.
Optionally, the plurality of identifiers are numeric identifiers; the identification map comprises a plurality of pixel points, and the pixel points are in one-to-one correspondence with the coloring points; the obtaining the value to be calculated at the position corresponding to each of the plurality of types of identifiers includes: for each identifier in the multiple identifiers, calculating the difference between the value of the identifier and the numeric identifier corresponding to each pixel point in the multiple pixel points; if the difference is smaller than a preset difference threshold, setting the value to be calculated of that pixel point to 1; and if the difference is not smaller than the preset difference threshold, setting the value to be calculated of that pixel point to 0;
the multiplying the value to be calculated corresponding to each identifier by the color parameter corresponding to the identifier to obtain a color parameter product result includes: multiplying the values to be calculated of the plurality of pixel points by the color parameter corresponding to the current identifier to obtain the color parameter product result.
In the above embodiment, by calculating the difference between the value of each identifier and the numeric identifier corresponding to each of the plurality of pixel points included in the identifier map, and determining whether that difference is smaller than the preset difference threshold, the pixel points strongly associated with each identifier can be obtained from the identifier map and their values to be calculated set to 1. In this way, the pixel points corresponding to each type of identifier can be obtained accurately.
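A minimal sketch of the identifier-matching and color-operation steps described above, assuming a numeric identifier per pixel and an assumed difference threshold of 0.5 (the text does not give a concrete threshold value; all names are illustrative):

```python
def color_operation_result(id_map, id_colors, albedo, threshold=0.5):
    """id_map    : per-pixel numeric identifiers (list of rows)
    id_colors : dict mapping each identifier value to an RGB color parameter
    albedo    : per-pixel RGB albedo values, same layout as id_map
    threshold : preset difference threshold (assumed; not given in the text)
    """
    result = []
    for row_ids, row_albedo in zip(id_map, albedo):
        row = []
        for ident_value, albedo_rgb in zip(row_ids, row_albedo):
            # Color parameter addition result, accumulated over all identifier types.
            summed = [0.0, 0.0, 0.0]
            for ident, color in id_colors.items():
                # Difference between the identifier's value and this pixel's identifier.
                diff = abs(ident_value - ident)
                # Value to be calculated: 1 if the difference is below the threshold, else 0.
                mask = 1.0 if diff < threshold else 0.0
                for c in range(3):
                    summed[c] += mask * color[c]
            # Multiply the addition result by the albedo map.
            row.append(tuple(summed[c] * albedo_rgb[c] for c in range(3)))
        result.append(row)
    return result
```

Each pixel thus picks up exactly the color parameter of the identifier it matches, scaled by the albedo at that pixel.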
Optionally, the obtaining a vector product parameter corresponding to each coloring point includes: for each coloring point, acquiring a primary selection vector product parameter; mapping the primary selection vector product parameter to a preset value range to obtain a secondary selection vector product parameter; and processing the secondary selection vector product parameter by using a smooth step function to obtain the vector product parameter.
In the above embodiment, by limiting the value range of the vector product parameter and processing it with the smooth step function, the vector product parameter changes more smoothly within that range, so that the diffuse reflection component, which uses the vector product parameter as an intermediate parameter, is smoother and more natural.
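The vector-product-parameter steps can be sketched as follows. The preset value range of [0, 1] and the smooth step edges are assumptions, since the text does not give concrete values; the initially selected parameter is the dot product of the normal and illumination direction vectors, as stated in the embodiments above.

```python
def smoothstep(x, edge0=0.0, edge1=1.0):
    # Standard smooth step function: cubic Hermite interpolation between the edges.
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def vector_product_parameter(normal, light_dir):
    # Initially selected vector product parameter: dot product of the
    # normal direction vector and the illumination direction vector.
    initial = sum(n * l for n, l in zip(normal, light_dir))
    # Secondary selection: map to the preset value range (assumed to be [0, 1]).
    secondary = min(max(initial, 0.0), 1.0)
    # Process with the smooth step function to obtain the final parameter.
    return smoothstep(secondary)
```

The smooth step removes the hard terminator that a raw clamped dot product would produce, which matches the stated goal of a smoother, more natural diffuse component.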
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present application provides a computer-readable storage medium, in which a plurality of instructions are stored, where the instructions can be loaded by a processor to execute the steps in any one of the virtual character garment rendering methods provided in the present application. For example, the instructions may perform the steps of:
for each coloring point in a plurality of coloring points, acquiring a normal direction vector and roughness of the coloring point, wherein the coloring points belong to the same target coloring surface, and the target coloring surface is any one of a plurality of planes included in a three-dimensional model of the clothes of the virtual character; calculating a target highlight component corresponding to each coloring point according to the illumination direction vector, the normal direction vector of each coloring point and the roughness of each coloring point; acquiring an identification map related to the clothes of the virtual character, wherein the identification map carries a plurality of identifications, the plurality of identifications are respectively marked at a plurality of different positions of the identification map, and each identification has a color parameter corresponding to the identification; determining a color operation result according to a plurality of different positions marked with various identifications in the identification map and color parameters corresponding to each identification; acquiring a vector product parameter corresponding to each coloring point, and multiplying the vector product parameter by the color operation result to obtain a diffuse reflection component corresponding to each coloring point; and rendering the three-dimensional model of the clothes of the virtual character according to the target highlight component corresponding to each coloring point and the diffuse reflection component corresponding to each coloring point to obtain a rendering result.
Optionally, the calculating the target highlight component corresponding to each coloring point according to the illumination direction vector, the normal direction vector of each coloring point, and the roughness of each coloring point includes: determining a sight line direction vector according to a world space coordinate value of a target coloring point and a world space coordinate value of a current viewpoint, wherein the target coloring point is any one of a plurality of coloring points; determining a central line vector according to the sight line direction vector and the illumination direction vector; performing point multiplication on the central line vector and a normal direction vector of the target coloring point to obtain a first point multiplication result; and calculating a target highlight component corresponding to the target coloring point according to the first point multiplication result and the roughness of the target coloring point.
In the above embodiment, the calculation of the target highlight component makes use of unit vectors in multiple directions, so that the rendered highlights of the virtual character's costume appear more realistic and natural.
Optionally, the determining, according to the world space coordinate value of the target coloring point and the world space coordinate value of the current viewpoint, a sight line direction vector includes: calculating a difference value between the world space coordinate value of the target coloring point and the world space coordinate value of the current viewpoint; and carrying out normalization processing on the difference value to obtain the sight direction vector.
In the above-described embodiment, the direction of the sight-line direction vector is obtained by calculating the difference between the world space coordinate values of the target coloring point and the current viewpoint; this difference is a vector whose modular length is generally not 1 and whose direction is the direction in which the player views the game world. After the difference is obtained, it is normalized so that its modular length becomes 1, which yields the sight-line direction vector.
Optionally, the determining a centerline vector according to the gaze direction vector and the illumination direction vector includes: acquiring the sum of the sight line direction vector and the illumination direction vector; and normalizing the sum to obtain the centerline vector.
In the above-described embodiment, the direction of the centerline vector is obtained by calculating the sum of the sight-line direction vector and the illumination direction vector. After the sum is obtained, it is normalized so that its modular length becomes 1, which yields the centerline vector.
Optionally, the determining a color operation result according to a plurality of different positions marked with a plurality of types of identifiers in the identifier map and a color parameter corresponding to each type of identifier includes: acquiring a value to be calculated at the position corresponding to each of the multiple types of identifiers; multiplying the value to be calculated corresponding to each identifier by the color parameter corresponding to the identifier to obtain a color parameter product result; adding the color parameter product results corresponding to each of the multiple identifiers to obtain a color parameter addition result; and multiplying the color parameter addition result by the albedo map to obtain the color operation result.
In the above embodiment, the color operation result may reflect color parameters corresponding to different positions of the garment of the virtual character, so as to implement a refinement process of the garment of the virtual character.
Optionally, the plurality of identifiers are numeric identifiers; the identification map comprises a plurality of pixel points, and the pixel points are in one-to-one correspondence with the coloring points; the obtaining the value to be calculated at the position corresponding to each of the plurality of types of identifiers includes: for each identifier in the multiple identifiers, calculating the difference between the value of the identifier and the numeric identifier corresponding to each pixel point in the multiple pixel points; if the difference is smaller than a preset difference threshold, setting the value to be calculated of that pixel point to 1; and if the difference is not smaller than the preset difference threshold, setting the value to be calculated of that pixel point to 0;
the multiplying the value to be calculated corresponding to each identifier by the color parameter corresponding to the identifier to obtain a color parameter product result includes: multiplying the values to be calculated of the plurality of pixel points by the color parameter corresponding to the current identifier to obtain the color parameter product result.
In the above embodiment, by calculating the difference between the value of each identifier and the numeric identifier corresponding to each of the plurality of pixel points included in the identifier map, and determining whether that difference is smaller than the preset difference threshold, the pixel points strongly associated with each identifier can be obtained from the identifier map and their values to be calculated set to 1. In this way, the pixel points corresponding to each type of identifier can be obtained accurately.
Optionally, the obtaining a vector product parameter corresponding to each coloring point includes: for each coloring point, acquiring a primary selection vector product parameter; mapping the primary selection vector product parameter to a preset value range to obtain a secondary selection vector product parameter; and processing the secondary selection vector product parameter by using a smooth step function to obtain the vector product parameter.
In the above embodiment, by limiting the value range of the vector product parameter and processing it with the smooth step function, the vector product parameter changes more smoothly within that range, so that the diffuse reflection component, which uses the vector product parameter as an intermediate parameter, is smoother and more natural.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the method provided in the various alternative implementations provided in the above embodiments.
Since the instructions stored in the storage medium can execute the steps in any virtual character garment rendering method provided in the embodiments of the present application, the beneficial effects that can be achieved by any virtual character garment rendering method provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The virtual character garment rendering method, apparatus, electronic device, and computer-readable storage medium provided by the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, for those skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.
Claims (11)
1. A method for rendering a virtual character garment, the method comprising:
for each coloring point in a plurality of coloring points, acquiring a normal direction vector and roughness of the coloring point, wherein the coloring points belong to the same target coloring surface, and the target coloring surface is any one of a plurality of planes included in a three-dimensional model of the clothes of the virtual character;
calculating a target highlight component corresponding to each coloring point according to the illumination direction vector, the normal direction vector of each coloring point and the roughness of each coloring point;
acquiring an identification map related to the clothes of the virtual character, wherein the identification map carries a plurality of identifications, the plurality of identifications are respectively marked at a plurality of different positions of the identification map, and each identification has a color parameter corresponding to the identification;
determining a color operation result according to a plurality of different positions marked with various identifications in the identification map and color parameters corresponding to each identification;
acquiring a vector product parameter corresponding to each coloring point, and multiplying the vector product parameter by the color operation result to obtain a diffuse reflection component corresponding to each coloring point;
and rendering the three-dimensional model of the clothes of the virtual character according to the target highlight component corresponding to each coloring point and the diffuse reflection component corresponding to each coloring point to obtain a rendering result.
2. The method of claim 1, wherein said calculating the target highlight component corresponding to each said coloring point according to the illumination direction vector, the normal direction vector of each said coloring point, and the roughness of each said coloring point comprises:
determining a sight line direction vector according to a world space coordinate value of a target coloring point and a world space coordinate value of a current viewpoint, wherein the target coloring point is any one of a plurality of coloring points;
determining a central line vector according to the sight line direction vector and the illumination direction vector;
performing point multiplication on the central line vector and a normal direction vector of the target coloring point to obtain a first point multiplication result;
and calculating a target highlight component corresponding to the target coloring point according to the first point multiplication result and the roughness of the target coloring point.
3. The method of claim 2, wherein determining the gaze direction vector based on the world space coordinate value of the target colored point and the world space coordinate value of the current viewpoint comprises:
calculating a difference value between the world space coordinate value of the target coloring point and the world space coordinate value of the current viewpoint;
and carrying out normalization processing on the difference value to obtain the sight direction vector.
4. The method of claim 2, wherein determining a centerline vector from the gaze direction vector and the illumination direction vector comprises:
acquiring the sum of the sight line direction vector and the illumination direction vector;
and normalizing the sum to obtain the centerline vector.
5. The method of claim 1, wherein determining a color operation result according to a plurality of different locations marked with a plurality of types of identifiers in the identifier map and a color parameter corresponding to each type of identifier comprises:
acquiring a value to be calculated of a position corresponding to each of the multiple kinds of identifiers;
multiplying the value to be calculated corresponding to each identifier with the color parameter corresponding to the identifier to obtain a color parameter product result;
adding the color parameter multiplication results corresponding to each of the multiple identifications to obtain a color parameter addition result;
and multiplying the color parameter addition result by the albedo map to obtain the color operation result.
6. The method of claim 5, wherein the plurality of identifiers are numeric identifiers; the identification map comprises a plurality of pixel points, and the pixel points are in one-to-one correspondence with the coloring points;
the obtaining the value to be calculated of the position corresponding to each of the plurality of kinds of identifiers includes:
for each identifier in the multiple identifiers, calculating the difference value between the numerical value of the identifier and the numerical identifier corresponding to each pixel point in the multiple pixel points;
if the difference is smaller than the preset difference threshold, setting the to-be-calculated value of the pixel point with the difference smaller than the preset difference threshold as 1;
if the difference value is not less than the preset difference value threshold, setting the value to be calculated of the pixel point of which the difference value is not less than the preset difference value threshold as 0;
the multiplying the to-be-calculated value corresponding to each identifier with the color parameter corresponding to the identifier to obtain a color parameter product result includes:
and multiplying the values to be calculated of the plurality of pixel points by the color parameter corresponding to the current identifier to obtain a color parameter product result.
7. The method of claim 1, wherein said obtaining vector product parameters corresponding to each of said shading points comprises:
for each coloring point, acquiring a primary selection vector product parameter;
mapping the primary selection vector product parameter to a preset value range to obtain a secondary selection vector product parameter;
and processing the secondary selection vector product parameter by using a smooth step function to obtain the vector product parameter.
8. The method of claim 7, wherein said obtaining initially selected vector product parameters comprises:
and performing dot product operation on the normal direction vector and the illumination direction vector to obtain the primary selection vector product parameter.
9. An apparatus for rendering a virtual character garment, the apparatus comprising:
a data obtaining unit, configured to obtain, for each of a plurality of coloring points, a normal direction vector and a roughness of the coloring point, where the plurality of coloring points belong to a same target coloring surface, and the target coloring surface is any one of a plurality of planes included in a three-dimensional model of a garment of the virtual character;
a highlight component calculation unit, configured to calculate a target highlight component corresponding to each of the coloring points according to the illumination direction vector, the normal direction vector of each of the coloring points, and the roughness of each of the coloring points;
an identification map obtaining unit, configured to obtain an identification map related to the clothing of the virtual character, where the identification map carries multiple types of identifiers, the multiple types of identifiers are respectively marked at multiple different positions of the identification map, and each type of identifier has a color parameter corresponding to the identifier;
a color operation result unit, configured to determine a color operation result according to the plurality of different positions at which the multiple types of identifiers are marked in the identification map and the color parameter corresponding to each identifier;
a diffuse reflection component unit, configured to acquire a vector product parameter corresponding to each coloring point and multiply the vector product parameter by the color operation result to obtain a diffuse reflection component corresponding to each coloring point;
and a rendering result acquisition unit, configured to render the three-dimensional model of the clothing of the virtual character according to the target highlight component and the diffuse reflection component corresponding to each coloring point, to obtain a rendering result.
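The per-point combination performed by the diffuse reflection component unit and the rendering result unit of claim 9 can be sketched as below. The additive blend of highlight and diffuse is an assumption (a common convention); the function name `shade_point` and its parameters are illustrative, not from the patent.

```python
# Hedged sketch of claim 9's final combination for one coloring point:
# diffuse = vector product parameter * color operation result,
# final color = target highlight component + diffuse (additive blend assumed).

def shade_point(highlight, vector_param, color_result):
    """highlight, color_result: (r, g, b) tuples; vector_param: scalar from claims 7-8."""
    diffuse = tuple(vector_param * c for c in color_result)
    return tuple(h + d for h, d in zip(highlight, diffuse))

# Example: a dim white highlight plus a half-lit red diffuse term.
color = shade_point((0.1, 0.1, 0.1), 0.5, (0.8, 0.0, 0.0))
```

Running this over every coloring point of the target coloring surface yields the rendered garment model.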
10. An electronic device, comprising a processor and a memory, the memory storing a plurality of instructions; the processor loads the instructions from the memory to perform the steps of the virtual character clothing rendering method of any one of claims 1 to 8.
11. A computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the virtual character clothing rendering method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210589460.7A CN115082608B (en) | 2022-05-26 | 2022-05-26 | Virtual character clothing rendering method, device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115082608A true CN115082608A (en) | 2022-09-20 |
CN115082608B CN115082608B (en) | 2024-08-30 |
Family
ID=83249026
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210589460.7A Active CN115082608B (en) | 2022-05-26 | 2022-05-26 | Virtual character clothing rendering method, device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115082608B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106056661A (en) * | 2016-05-31 | 2016-10-26 | 钱进 | Direct3D 11-based 3D graphics rendering engine |
CN111009026A (en) * | 2019-12-24 | 2020-04-14 | 腾讯科技(深圳)有限公司 | Object rendering method and device, storage medium and electronic device |
CN111583128A (en) * | 2020-04-09 | 2020-08-25 | 清华大学 | Face picture highlight removal method based on deep learning and realistic rendering |
CN112053423A (en) * | 2020-09-18 | 2020-12-08 | 网易(杭州)网络有限公司 | Model rendering method and device, storage medium and computer equipment |
CN112819941A (en) * | 2021-03-05 | 2021-05-18 | 网易(杭州)网络有限公司 | Method, device, equipment and computer-readable storage medium for rendering water surface |
US20210192838A1 (en) * | 2019-12-24 | 2021-06-24 | Tencent Technology (Shenzhen) Company Limited | Object rendering method and apparatus, storage medium, and electronic device |
CN114022607A (en) * | 2021-11-19 | 2022-02-08 | 腾讯科技(深圳)有限公司 | Data processing method and device and readable storage medium |
Non-Patent Citations (2)
Title |
---|
GUO Jie et al.: "Virtual Area Light for Dynamic and Spatially-Varying Area Lighting", Chinese Journal of Electronics, vol. 24, no. 2, 30 April 2015 (2015-04-30), pages 306-311, XP006071960, DOI: 10.1049/cje.2015.04.013 *
WU Dedao et al.: "Research and Implementation of Highly Realistic Real-time Liver Rendering Based on Subsurface Scattering", Journal of Nanchang University (Natural Science), vol. 44, no. 5, 31 October 2020 (2020-10-31), pages 482-491 *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116091684A (en) * | 2023-04-06 | 2023-05-09 | 杭州片段网络科技有限公司 | WebGL-based image rendering method, device, equipment and storage medium |
CN116401724A (en) * | 2023-05-30 | 2023-07-07 | 北京盈锋科技有限公司 | Digital human hair and clothing adaptation method, device and system |
CN116421970A (en) * | 2023-06-12 | 2023-07-14 | 腾讯科技(深圳)有限公司 | Method, device, computer equipment and storage medium for externally-installed rendering of virtual object |
CN116421970B (en) * | 2023-06-12 | 2023-12-05 | 腾讯科技(深圳)有限公司 | Method, device, computer equipment and storage medium for externally-installed rendering of virtual object |
CN117808956A (en) * | 2023-12-14 | 2024-04-02 | 完美世界互娱(北京)科技有限公司 | Game highlight manufacturing method and device |
CN117808956B (en) * | 2023-12-14 | 2024-09-17 | 完美世界互娱(北京)科技有限公司 | Game highlight manufacturing method and device |
Also Published As
Publication number | Publication date |
---|---|
CN115082608B (en) | 2024-08-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115082608B (en) | Virtual character clothing rendering method, device, electronic equipment and storage medium | |
JP7387758B2 (en) | Interface display method, device, terminal, storage medium and computer program | |
CN113559504A (en) | Information processing method, information processing apparatus, storage medium, and electronic device | |
US11238667B2 (en) | Modification of animated characters | |
CN112843704B (en) | Animation model processing method, device, equipment and storage medium | |
CN115082607B (en) | Virtual character hair rendering method, device, electronic equipment and storage medium | |
US12064689B2 (en) | Method for selecting virtual objects, apparatus, terminal and storage medium | |
CN116402931A (en) | Volume rendering method, apparatus, computer device, and computer-readable storage medium | |
CN114359458A (en) | Image rendering method, device, equipment, storage medium and program product | |
CN112206519B (en) | Method, device, storage medium and computer equipment for realizing game scene environment change | |
CN114949863A (en) | Virtual character eye rendering method and device, electronic equipment and storage medium | |
CN113313796B (en) | Scene generation method, device, computer equipment and storage medium | |
CN115501590A (en) | Display method, display device, electronic equipment and storage medium | |
CN116982088A (en) | Layered garment for conforming to underlying body and/or garment layers | |
CN113350785A (en) | Virtual character rendering method and device and electronic equipment | |
US20240075388A1 (en) | Method, apparatus, electronic device and storage media for skill control of virtual object | |
US20230124297A1 (en) | Hidden surface removal for layered clothing for an avatar body | |
CN115040868A (en) | Prompt information generation method, area adjustment method and device | |
CN118119979A (en) | Hidden surface removal for layered apparel of avatar body | |
CN115861519A (en) | Rendering method and device of hair model, computer equipment and storage medium | |
CN116310058A (en) | Luminous rendering method, luminous rendering device, electronic equipment and computer readable storage medium | |
CN115089968A (en) | Operation guiding method and device in game, electronic equipment and storage medium | |
CN114904271A (en) | Color gradient map generation method and device, electronic equipment and storage medium | |
CN114332316A (en) | Virtual character processing method and device, electronic equipment and storage medium | |
CN117689780A (en) | Animation generation method and device of virtual model, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||