CN115082607A - Virtual character hair rendering method and device, electronic equipment and storage medium - Google Patents

Info

Publication number
CN115082607A
CN115082607A
Authority
CN
China
Prior art keywords
point
hair
shadow
virtual character
coloring
Prior art date
Legal status
Pending
Application number
CN202210589067.8A
Other languages
Chinese (zh)
Inventor
刘怡安
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202210589067.8A
Publication of CN115082607A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G06T 15/06 Ray-tracing
    • G06T 15/50 Lighting effects
    • G06T 15/60 Shadow generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The application discloses a virtual character hair rendering method and device, electronic equipment and a storage medium. The method comprises the following steps: acquiring pixel points meeting preset requirements from a shadow map; determining an illumination influence parameter, and calculating an occlusion shadow coefficient according to the illumination influence parameter and a shadow sampling value; constructing a circumscribed sphere, and acquiring the normal at the position where the surface of the sphere corresponds to each vertex of the virtual character's hair as the normal of that vertex; calculating the illumination coefficient of each coloring point according to the occlusion shadow coefficient, the illumination direction vector and the normals corresponding to the plurality of vertexes; acquiring a first operation result for each coloring point from its color sampling value, highlight sampling value and illumination coefficient; acquiring stroke description information and obtaining a second operation result from it; and acquiring the sum of the first operation result and the second operation result for each coloring point, and rendering according to the sum corresponding to each coloring point to obtain a rendering result. The method and device can increase the realism of the hair of the virtual character.

Description

Virtual character hair rendering method and device, electronic equipment and storage medium
Technical Field
The application relates to the field of computers, in particular to a method and a device for rendering virtual character hair, electronic equipment and a storage medium.
Background
In the prior art, the hair of a virtual character is generally rendered by painting the shadow of the hair onto the texture map as a fixed feature.
However, with a shadow fixed on the map, the light and shadow on the hair of the virtual character do not change as the virtual character moves, so the hair does not look natural. That is, the prior art suffers from the problem that the hair of the virtual character appears unnatural.
Disclosure of Invention
The embodiments of the application provide a virtual character hair rendering method and device, an electronic device and a storage medium, which can solve the problem in the prior art that the hair of the virtual character does not look natural.
The embodiment of the application provides a method for rendering the hair of a virtual character, which comprises the following steps:
for each coloring point in the plurality of coloring points, obtaining pixel points meeting preset requirements from a shadow map, wherein the coloring points are coloring points corresponding to a plurality of planes included in a three-dimensional model of the hair of the virtual character, and the pixel points meeting the preset requirements are marked as shadow sampling values;
determining an illumination influence parameter, and calculating a shielding shadow coefficient of each coloring point according to the illumination influence parameter and the shadow sampling value of each coloring point;
constructing an external sphere for wrapping the hair of the virtual character, acquiring the normal at the position on the surface of the external sphere corresponding to each vertex of the hair of the virtual character, and taking that normal as the normal of the corresponding adjacent vertex;
calculating the illumination coefficient corresponding to each coloring point according to the shading shadow coefficient of each coloring point, the illumination direction vector, and the normals corresponding to the multiple vertexes of the hair of the virtual character;
acquiring a color sampling value and a highlight sampling value of each coloring point, and acquiring a first operation result corresponding to each coloring point according to the color sampling value and the highlight sampling value of each coloring point and the illumination coefficient corresponding to each coloring point;
acquiring the stroke description information, and acquiring a second operation result based on the stroke description information;
and for the first operation result corresponding to each coloring point, acquiring the sum result of the first operation result and the second operation result, and rendering the three-dimensional model of the hair of the virtual character according to the sum result corresponding to each coloring point to obtain a rendering result.
An embodiment of the present application further provides a virtual character hair rendering apparatus, the apparatus includes:
the pixel point obtaining unit is used for obtaining pixel points meeting preset requirements from the shadow map for each coloring point in the plurality of coloring points, wherein the plurality of coloring points are coloring points corresponding to a plurality of planes included in the three-dimensional model of the hair of the virtual character, and the pixel points meeting the preset requirements are marked as shadow sampling values;
the illumination parameter determining unit is used for determining an illumination influence parameter and calculating a shading shadow coefficient of each shading point according to the illumination influence parameter and the shadow sampling value of each shading point;
the normal line acquisition unit is used for constructing an external sphere for wrapping the hair of the virtual character, acquiring the normal at the position on the surface of the external sphere corresponding to each vertex of the hair of the virtual character, and taking that normal as the normal of the corresponding adjacent vertex;
the illumination coefficient calculation unit is used for calculating the illumination coefficient corresponding to each coloring point according to the shielding shadow coefficient and the illumination direction vector of each coloring point and the normals corresponding to the multiple vertexes of the hair of the virtual character;
the first result acquisition unit is used for acquiring a color sampling value and a highlight sampling value of each coloring point and acquiring a first operation result corresponding to each coloring point according to the color sampling value and the highlight sampling value of each coloring point and the illumination coefficient corresponding to each coloring point;
a second result acquisition unit configured to acquire the stroke description information and acquire a second operation result based on the stroke description information;
and the addition result acquisition unit is used for acquiring the addition result of the first operation result and the second operation result for the first operation result corresponding to each coloring point, and rendering the three-dimensional model of the hair of the virtual character according to the addition result corresponding to each coloring point to obtain a rendering result.
In some embodiments, the pixel point obtaining unit is specifically configured to: and for each coloring point, acquiring a pixel point corresponding to the UV coordinate value of the coloring point in the shadow map, wherein the pixel point meets the preset requirement.
In some embodiments, the illumination parameter determination unit comprises:
the illumination vector subunit is used for acquiring an illumination direction vector;
the projection vector subunit is configured to project the illumination direction vector on a target projection surface to obtain an illumination projection vector, where the target projection surface is a projection surface formed by a surface facing direction of the virtual character and a vertical upward direction vector;
and the influence parameter calculating subunit is used for calculating the illumination influence parameters according to the illumination projection vectors and the vertical upward direction vectors.
In some embodiments, the normal acquisition unit includes:
the external sphere subunit is used for generating an external sphere corresponding to the hair of the virtual character, wherein the center of the external sphere coincides with the center of the skull of the virtual character;
the intersection point acquisition subunit is used for making a ray from the center of the external sphere to each vertex of the multiple vertexes of the hair of the virtual character, and acquiring the intersection point of each ray with the surface of the external sphere;
and a normal calculation subunit, configured to calculate, for each of the plurality of intersection points, a normal of each intersection point position, and take the normal of each intersection point position as a normal of each vertex adjacent to each intersection point, respectively.
In some embodiments, the illumination coefficient calculation unit includes:
the normal conversion subunit is used for converting the normals respectively corresponding to the multiple vertexes of the hair of the virtual character into the world space coordinate system and performing normalization processing to obtain the normal vector corresponding to each coloring point;
a dot product result subunit, configured to calculate, for a normal vector corresponding to each coloring point, a dot product of the normal vector and the illumination direction vector to obtain a dot product result;
the primary selection parameter subunit is used for carrying out smooth step function processing on the point multiplication result corresponding to each coloring point to obtain a primary selection illumination coefficient;
and the illumination coefficient determining subunit is used for acquiring a smaller value of the initial selection illumination coefficient and the shielding shadow coefficient for the initial selection illumination coefficient corresponding to each coloring point, wherein the smaller value is the illumination coefficient.
In some embodiments, the first result obtaining unit includes:
the color sampling subunit is used for acquiring, for each coloring point, the pixel point corresponding to the UV coordinate value of the coloring point in the color map, wherein the pixel point is the color sampling value;
and the highlight sampling subunit is used for acquiring a pixel point corresponding to the UV coordinate value of the coloring point in the highlight map for each coloring point, wherein the pixel point is the highlight sampling value.
In some embodiments, the first result obtaining unit includes:
for each coloring point:
the first product subunit is used for calculating a continuous multiplication result of the color parameter value, the color intensity value and the color sampling value to obtain a first product;
the second product subunit is used for calculating a continuous multiplication result of the highlight color value, the highlight intensity value and the highlight sampling value to obtain a second product;
a sum result subunit, configured to calculate a sum of the first product and the second product, and obtain a product sum result;
and the first result subunit is used for multiplying the product addition result and the corresponding illumination coefficient to obtain a corresponding first operation result.
In some embodiments, the second result obtaining unit includes:
the three-dimensional model subunit is used for acquiring the three-dimensional model of the hair of the virtual character after the preset processing, wherein in this model the edge area of each strand of hair is a first color and the central area of each strand of hair is a second color;
and the stroke area subunit is used for performing, for each pixel point of the area to which the first color of each strand of hair belongs, an interpolation operation according to the distance between the pixel point and the central area of the strand of hair to which it belongs and the distance between the pixel point and the side line of the strand of hair to which it belongs, so as to obtain the stroke description information.
In some embodiments, the second result obtaining unit comprises:
the stroke operation subunit is used for performing step function operation on the stroke description information and the stroke thickness parameter to obtain a stroke operation result;
and the continuous multiplication subunit is used for calculating a continuous multiplication result of the stroking color value, the stroking intensity value and the stroking operation result, wherein the continuous multiplication result is the second operation result.
In some embodiments, the apparatus further comprises:
a size shadow obtaining unit, configured to obtain a maximum shadow map and a minimum shadow map corresponding to the hair of the virtual character, where the maximum shadow map is a map corresponding to a maximum shadow formed by the head ornament of the virtual character on the hair of the virtual character under the influence of virtual lighting; the minimum shadow map is a map corresponding to the minimum shadow formed by the head ornament of the virtual character on the hair of the virtual character under the influence of virtual illumination;
the middle shadow calculation unit is used for calculating a middle shadow area according to the maximum shadow map and the minimum shadow map;
the interpolation operation shadow unit is used for carrying out interpolation operation on the middle shadow area according to the distance between a pixel point and a first boundary of the middle shadow area and the distance between the pixel point and a second boundary of the middle shadow area for each pixel point of the middle shadow area to obtain an interpolation operation shadow area; wherein the first boundary is a boundary adjacent to the minimum shadow and the second boundary is a boundary away from the minimum shadow;
and the shadow combination unit is used for combining the interpolation operation shadow area with the minimum shadow map to obtain the shadow map.
The embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to perform the steps in any of the virtual character hair rendering methods provided in the embodiment of the present application.
In the virtual character hair rendering method provided by the embodiments of the application, a shadow sampling value meeting the preset requirement can be obtained from the shadow map; then, an illumination influence parameter is determined, and an occlusion shadow coefficient is calculated based on the illumination influence parameter and the shadow sampling value. The embodiments of the application can further calculate the normals corresponding to the multiple vertexes of the hair of the virtual character, calculate the illumination coefficient corresponding to each coloring point according to those normals, the illumination direction vector and the occlusion shadow coefficient, and then obtain the first operation result corresponding to each coloring point by combining the color sampling value and the highlight sampling value. The embodiments of the application can also obtain a second operation result based on the stroke description information, add the second operation result to the first operation result corresponding to each coloring point to obtain a sum for each coloring point, and render the three-dimensional model of the hair of the virtual character according to those sums.
In the application, the illumination coefficient of each coloring point is calculated from the occlusion shadow coefficient, the illumination direction vector and the normals corresponding to the multiple vertexes of the hair of the virtual character, and is then combined with the color sampling value, the highlight sampling value and the stroke description information. With these factors introduced, the rendered hair of the virtual character shifts naturally as the virtual character moves, which increases the realism of the hair of the virtual character.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required in the description of the embodiments are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1a is a scene schematic diagram of a virtual character hair rendering method provided in an embodiment of the present application;
FIG. 1b is a schematic flowchart illustrating a method for rendering hair of a virtual character according to an embodiment of the present application;
FIG. 1c shows a schematic diagram of a shadow map of a number of clusters of hair of a virtual character;
FIG. 1d shows a three-dimensional model of the hair of a virtual character without the preset processing;
FIG. 1e shows a three-dimensional model of the hair of a virtual character after the preset processing;
FIG. 2 is a schematic flow chart illustrating a method for rendering hair of a virtual character according to another embodiment of the present application;
FIG. 3 is a schematic structural diagram of an apparatus for rendering hair of a virtual character according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiments of the application provide a virtual character hair rendering method and device, an electronic device and a storage medium.
The method for rendering the hair of the virtual character can be specifically integrated in an electronic device, and the electronic device can be a terminal, a server and other devices. The terminal can be a mobile phone, a tablet Computer, an intelligent bluetooth device, a notebook Computer, or a Personal Computer (PC), and the like; the server may be a single server or a server cluster composed of a plurality of servers.
In some embodiments, the avatar hair rendering method may be further integrated in a plurality of electronic devices, for example, the avatar hair rendering method may be integrated in a plurality of servers, and the avatar hair rendering method of the present application is implemented by the plurality of servers.
In some embodiments, the server may also be implemented in the form of a terminal.
For example, referring to fig. 1a, in some embodiments, the electronic device may be a mobile terminal. In these embodiments, for each coloring point in a plurality of coloring points, a pixel point meeting a preset requirement may be acquired from a shadow map, where the coloring points are coloring points corresponding to a plurality of planes included in a three-dimensional model of the hair of a virtual character, and the pixel point meeting the preset requirement is recorded as a shadow sampling value; an illumination influence parameter is determined, and an occlusion shadow coefficient of each coloring point is calculated according to the illumination influence parameter and the shadow sampling value of each coloring point; an external sphere for wrapping the hair of the virtual character is constructed, the normal at the position on the surface of the external sphere corresponding to each vertex of the hair of the virtual character is acquired, and that normal is taken as the normal of the corresponding adjacent vertex; the illumination coefficient corresponding to each coloring point is calculated according to the occlusion shadow coefficient of each coloring point, the illumination direction vector and the normals corresponding to the multiple vertexes of the hair of the virtual character; a color sampling value and a highlight sampling value of each coloring point are acquired, and a first operation result corresponding to each coloring point is obtained according to the color sampling value and the highlight sampling value of each coloring point and the illumination coefficient corresponding to each coloring point; stroke description information is acquired, and a second operation result is obtained based on the stroke description information; and for the first operation result corresponding to each coloring point, the sum of the first operation result and the second operation result is acquired, and the three-dimensional model of the hair of the virtual character is rendered according to the sum corresponding to each coloring point to obtain a rendering result.
The method for rendering the hair of the virtual character in one embodiment of the disclosure can run on a terminal device or a server. The terminal device may be a local terminal device. When the virtual character hair rendering method runs on the server, the method can be implemented and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device.
In an optional embodiment, various cloud applications may be run under the cloud interaction system, for example: and (5) cloud games. Taking a cloud game as an example, a cloud game refers to a game mode based on cloud computing. In the running mode of the cloud game, a running main body of a game program and a game picture presenting main body are separated, the storage and the running of the virtual character hair rendering method are completed on a cloud game server, and the client equipment is used for receiving and sending data and presenting a game picture, for example, the client equipment can be display equipment with a data transmission function close to a user side, such as a terminal, a television, a computer, a palm computer and the like; however, the terminal device for rendering the hair of the virtual character is a cloud game server at the cloud end. When a game is played, a user operates the client device to send an operation instruction, such as an operation instruction of touch operation, to the cloud game server, the cloud game server runs the game according to the operation instruction, data such as a game picture and the like are encoded and compressed and returned to the client device through a network, and finally, the client device decodes the data and outputs the game picture.
In an alternative embodiment, the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores a game program and is used for presenting a game screen. The local terminal device is used for interacting with a user through a graphical user interface, namely, a game program is downloaded and installed and operated through the electronic device conventionally. The manner in which the local terminal device provides the graphical user interface to the user may include a variety of ways, for example, it may be rendered for display on a display screen of the terminal or provided to the user by holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including a game screen and a processor for running the game, generating the graphical user interface, and controlling display of the graphical user interface on the display screen.
A game scene (or referred to as a virtual scene) is a virtual scene that an application program displays (or provides) when running on a terminal or a server. Optionally, the virtual scene is a simulated environment of the real world, or a semi-simulated semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene is any one of a two-dimensional virtual scene and a three-dimensional virtual scene, and the virtual environment can be sky, land, sea and the like, wherein the land comprises environmental elements such as deserts, cities and the like. For example, in a sandbox type 3D shooting game, the virtual scene is a 3D game world for the user to control the virtual object to play against, and an exemplary virtual scene may include: at least one element selected from the group consisting of mountains, flat ground, rivers, lakes, oceans, deserts, sky, plants, buildings, and vehicles.
The game interface is an interface corresponding to an application program provided or displayed through a graphical user interface, the interface comprises a graphical user interface and a game picture for interaction of a user, and the game picture is a picture of a game scene.
In alternative embodiments, game controls (e.g., skill controls, behavior controls, functionality controls, etc.), indicators (e.g., direction indicators, character indicators, etc.), information presentation areas (e.g., number of clicks, game play time, etc.), or game setting controls (e.g., system settings, stores, coins, etc.) may be included in the UI interface.
In an optional embodiment, the game screen is a display screen corresponding to a virtual scene displayed by the terminal device, and the game screen may include a game object performing game logic in the virtual scene, a Non-Player Character (NPC), an Artificial Intelligence (AI) Character, and other virtual objects.
For example, in some embodiments, the content displayed in the graphical user interface at least partially comprises a game scene, wherein the game scene comprises at least one game object.
In some embodiments, the game objects in the game scene comprise virtual objects, i.e., user objects, manipulated by the player user.
The game object refers to a virtual object in a virtual scene, including a game character, which is a dynamic object that can be controlled, i.e., a dynamic virtual object. Alternatively, the dynamic object may be a virtual character, a virtual animal, an animation character, or the like. The virtual object is a character controlled by a user through an input device, or an AI set in a virtual environment battle through training, or an NPC set in a virtual scene battle.
Optionally, the virtual object is a virtual character playing a game in a virtual scene. Optionally, the number of virtual objects in the virtual scene match is preset, or dynamically determined according to the number of clients participating in the match, which is not limited in the embodiment of the present application.
In one possible implementation, the user can control the virtual object to play the game behavior in the virtual scene, and the game behavior can include moving, releasing skills, using props, dialog, and the like, for example, controlling the virtual object to run, jump, crawl, and the like, and can also control the virtual object to fight with other virtual objects using the skills, virtual props, and the like provided by the application program.
The virtual camera is a necessary component of the game scene picture and is used for presenting the game scene picture. One game scene corresponds to at least one virtual camera, and two or more virtual cameras can be used as game rendering windows according to actual needs, capturing and presenting the picture content of the game world for the user. By setting the parameters of the virtual camera, the angle from which the user views the game world, such as a first-person perspective or a third-person perspective, can be adjusted.
In an optional implementation manner, an embodiment of the present invention provides a method for rendering a hair of a virtual character, where a graphical user interface is provided by a terminal device, where the terminal device may be the aforementioned local terminal device, or the aforementioned client device in a cloud interaction system.
The following are detailed below. The numbers in the following examples are not intended to limit the order of preference of the examples.
In this embodiment, a method for rendering virtual character hair is provided, as shown in fig. 1b, a specific flow of the method may include the following steps 110 to 170:
110. and for each coloring point in the plurality of coloring points, obtaining pixel points meeting preset requirements from the shadow map, wherein the coloring points are coloring points corresponding to a plurality of planes included in the three-dimensional model of the hair of the virtual character, and the pixel points meeting the preset requirements are marked as shadow sampling values.
The shadow map is a map reflecting a shadow formed by the head ornament of the virtual character blocking the hair of the virtual character. The head ornament is an object having a decorative effect, such as a hat, a hair clip, etc., on the head of the virtual character. The method of obtaining the shadow map will be described in detail below.
The preset requirement is a requirement set in advance. Optionally, in a specific embodiment, the step 110 may specifically include: for each coloring point, acquiring the pixel point corresponding to the UV coordinate value of that coloring point in the shadow map, this being the pixel point meeting the preset requirement. For convenience of description, let an arbitrary coloring point be denoted the target coloring point; the target coloring point has a corresponding target UV coordinate value. The target coloring point belongs to a target coloring surface, which is any one of the plurality of planes included in the three-dimensional model of the hair of the virtual character.
The target coloring point is any coloring point located on the target coloring surface, and the target coloring surface comprises three vertex coloring points located at the vertex positions of the target coloring surface and a plurality of non-vertex coloring points located at non-vertex positions of the target coloring surface. The UV coordinate value of any non-vertex coloring point can be obtained by interpolation of the UV coordinate values of the three vertex coloring points. The target coloring point may be a vertex coloring point or a non-vertex coloring point.
In the above embodiment, the shadow sampling value may be obtained by acquiring the target UV coordinate value and determining the pixel point corresponding to the target UV coordinate value in the shadow map; denote this shadow sampling value as S0.
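Purely as an illustration of the sampling just described, a minimal Python sketch follows; the array layout, nearest-neighbour filtering and all names are assumptions of the sketch, since in practice the engine's texture sampler would be used.

```python
import numpy as np

def sample_map(texture: np.ndarray, uv: tuple[float, float]) -> float:
    """Sample a single-channel (H, W) map at a UV coordinate in [0, 1].

    Nearest-neighbour lookup; a real engine would use its GPU sampler
    with proper filtering instead.
    """
    h, w = texture.shape
    u, v = uv
    x = min(int(u * (w - 1)), w - 1)
    y = min(int(v * (h - 1)), h - 1)
    return float(texture[y, x])

# Shadow sampling value S0 for the target coloring point:
# s0 = sample_map(shadow_map, target_uv)
```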
Optionally, in a specific embodiment, the method for obtaining the shadow map may specifically include the following steps a1 to a 4:
a1, acquiring a maximum shadow map and a minimum shadow map corresponding to the hair of the virtual character, wherein the maximum shadow map is a map corresponding to the maximum shadow formed by the head ornament of the virtual character on the hair of the virtual character under the influence of virtual illumination; the minimum shadow map is a map corresponding to a minimum shadow formed by the head ornament of the virtual character on the hair of the virtual character under the influence of virtual illumination.
The size of the shadow formed by the head ornament of the virtual character on the hair of the virtual character varies with the angle of the virtual light. In general, the smaller the included angle between the virtual illumination and the ground plane of the virtual scene is, the larger the shadow formed by the head ornament on the hair is; the larger the angle between the virtual illumination and the ground plane of the virtual scene, the smaller the shadow formed by the head ornament on the hair.
The above steps can obtain the map corresponding to the maximum shadow formed by the head ornament of the virtual character on the hair of the virtual character and the map corresponding to the minimum shadow formed by the head ornament of the virtual character on the hair of the virtual character.
And A2, calculating a middle shadow area according to the maximum shadow map and the minimum shadow map.
Alternatively, the minimum shadow located in the minimum shadow map may be subtracted from the maximum shadow located in the maximum shadow map, resulting in the middle shadow area. As the angle of the virtual illumination changes, the middle shadow area changes between not being presented at all and being fully presented.
A3, for each pixel point of the middle shadow area, performing interpolation operation on the middle shadow area according to the distance between the pixel point and the first boundary of the middle shadow area and the distance between the pixel point and the second boundary of the middle shadow area to obtain an interpolation operation shadow area; wherein the first boundary is a boundary adjacent to the minimum shadow and the second boundary is a boundary away from the minimum shadow.
Referring to fig. 1c, which shows a schematic diagram of a shadow map of a plurality of strands of the virtual character's hair: region B in fig. 1c is the minimum shadow area, region A is the middle shadow area, and the combination of region A and region B is the maximum shadow area. The middle area includes a first boundary adjacent to the minimum shadow (i.e., region B) and a second boundary away from the minimum shadow (i.e., away from region B).
Optionally, the interpolation operation is performed on the intermediate shadow region according to the distance between each pixel point of the intermediate shadow region and the first boundary and the distance between each pixel point of the intermediate shadow region and the second boundary, and an operation process of obtaining the interpolation operation shadow region may specifically be as follows:
for each pixel point of the middle shadow area, the distance R1 between the pixel point and the first boundary and the distance R2 between the pixel point and the second boundary are acquired, and the gray value I is then calculated, for example as

I = R1 / (R1 + R2)

The larger the value of I, the lighter the color; the smaller the value of I, the darker the color. For each pixel point of the middle shadow area, the corresponding pixel point is filled with the obtained gray value, so as to obtain the interpolation operation shadow area.
It should be understood that the gray value I can be calculated by the above formula, or by other operation formulas with the same monotonic behavior. The specific manner of calculating the gray value should not be construed as limiting the application.
And A4, combining the interpolation operation shadow area with the minimum shadow map to obtain the shadow map.
After the interpolation operation shadow area is obtained through calculation in the step a3, the interpolation operation shadow area is combined with the minimum shadow map, so that the shadow map can be obtained.
In the above embodiment, the minimum shadow area may be subtracted from the maximum shadow area to obtain the middle shadow area; the middle shadow area is then turned into the interpolation operation shadow area by interpolation; and the interpolation operation shadow area is combined with the minimum shadow map to obtain the shadow map. The shadow map obtained in this way requires little computation yet achieves a good display effect. Therefore, performing the subsequent hair rendering with this shadow map greatly reduces the consumption of computing power, which lowers the computing capability required of the terminal device running the game.
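For illustration, steps A1 to A4 can be sketched as follows in Python; the boolean mask inputs, the use of scipy distance transforms for the boundary distances, the 0 = dark / 1 = lit convention and the linear formula are assumptions of this sketch rather than details fixed by the patent.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def build_shadow_map(max_shadow: np.ndarray, min_shadow: np.ndarray) -> np.ndarray:
    """Combine the maximum and minimum shadow masks into one shadow map.

    Both inputs are boolean (H, W) masks; the result holds gray values
    in [0, 1], smaller meaning darker.
    """
    # A2: middle shadow area = maximum shadow minus minimum shadow.
    middle = max_shadow & ~min_shadow

    # A3: R1 = distance to the first boundary (adjacent to the minimum
    # shadow); R2 = distance to the second boundary (the outer edge of
    # the maximum shadow).
    r1 = distance_transform_edt(~min_shadow)
    r2 = distance_transform_edt(max_shadow)
    gray = r1 / np.maximum(r1 + r2, 1e-6)      # e.g. I = R1 / (R1 + R2)

    # A4: combine the interpolated middle area with the minimum shadow map.
    shadow_map = np.ones(max_shadow.shape)     # fully lit by default
    shadow_map[middle] = gray[middle]
    shadow_map[min_shadow] = 0.0               # minimum shadow stays darkest
    return shadow_map
```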
120. And determining an illumination influence parameter, and calculating an occlusion shadow coefficient of each coloring point according to the illumination influence parameter and the shadow sampling value of each coloring point.
The illumination effect parameter is a parameter reflecting an effect of virtual illumination on the hair of the virtual character. Optionally, in a specific embodiment, the step "determining the illumination influencing parameter" includes the following steps 121 to 123:
121. and acquiring an illumination direction vector.
The illumination direction vector is a vector whose direction is consistent with the illumination direction in world space coordinates and whose modulus is 1.
122. And projecting the illumination direction vector on a target projection surface to obtain an illumination projection vector, wherein the target projection surface is a projection surface formed by the vector of the face direction and the vertical upward direction of the virtual character.
The vertically upward direction vector is a vector whose direction is the vertically upward direction under world space coordinates.
The target projection surface formed by the vectors of the facing direction and the vertical upward direction of the virtual character can be obtained first, and then the illumination direction vector is projected on the target projection surface to obtain the illumination projection vector.
123. And calculating the illumination influence parameters according to the illumination projection vectors and the vertical upward direction vectors.
The illumination projection vector and the vertically upward direction vector can be normalized: the direction of the illumination projection vector is kept unchanged and its modulus is scaled to 1, giving the normalized illumination projection vector; likewise, the direction of the vertically upward direction vector is kept unchanged and its modulus is scaled to 1, giving the normalized vertically upward vector.
Then, the dot product of the normalized illumination projection vector and the normalized vertically upward vector is computed; the dot product result is the illumination influence parameter, which is denoted as FdotL.
In the foregoing embodiment, in the process of calculating the illumination influence parameter, the illumination influence parameter may be influenced by the illumination direction and the facing direction of the virtual character, so that the influence of the illumination influence parameter on the hair of the virtual character may be more natural in the rendering process.
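A minimal Python sketch of steps 121 to 123, together with the occlusion shadow coefficient S = step(FdotL, S0) described in the next step; a y-up world space and the HLSL-style two-argument step() (1 when the second argument is at least the first) are assumptions of the sketch, and all names are illustrative.

```python
import numpy as np

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

def illumination_influence(light_dir: np.ndarray, facing_dir: np.ndarray) -> float:
    """FdotL: project the illumination direction onto the target projection
    surface spanned by the facing direction and the vertical up vector,
    normalize, and dot with the normalized up vector.

    Assumes facing_dir is not parallel to the up vector.
    """
    up = np.array([0.0, 1.0, 0.0])                     # vertical upward direction
    plane_normal = normalize(np.cross(facing_dir, up))
    # Remove the component along the plane normal to project onto the plane.
    proj = light_dir - np.dot(light_dir, plane_normal) * plane_normal
    return float(np.dot(normalize(proj), up))

def step(edge: float, x: float) -> float:
    """step() in the HLSL/GLSL sense: 1.0 when x >= edge, else 0.0."""
    return 1.0 if x >= edge else 0.0

# Occlusion shadow coefficient for one coloring point:
# s = step(fdotl, s0)   # fdotl = FdotL above, s0 = shadow sampling value
```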
Optionally, in a specific embodiment, the step of "determining an illumination influence parameter, and calculating an occlusion shadow coefficient of each coloring point according to the illumination influence parameter and the shadow sampling value of each coloring point" includes the following:
The occlusion shadow coefficient S can be calculated from the formula S = step(FdotL, S0), where FdotL is the illumination influence parameter and S0 is the shadow sampling value.
130. Constructing an external sphere for wrapping the hair of the virtual character, acquiring the normal at the position on the surface of the external sphere corresponding to each vertex of the hair of the virtual character, and taking that normal as the normal of the corresponding adjacent vertex.
Optionally, in a specific embodiment, step 130 includes the following steps 131 to 133:
131. generating an external sphere corresponding to the hair of the virtual character, wherein the center of the external sphere coincides with the center of the skull of the virtual character.
The sphere center of the generated circumscribed sphere may coincide with the center of the skull of the virtual character.
It should be understood that, since the strands of the virtual character's hair may have different lengths while the circumscribed sphere has a single radius, it may be that some of the hair vertexes contact the surface of the circumscribed sphere, that none of them do, or that all of them do. Whether the vertexes of the virtual character's hair contact the surface of the circumscribed sphere should not be construed as limiting the application.
132. And taking rays from the center of the circumscribed sphere to each vertex of the plurality of vertices of the hair of the virtual character to obtain an intersection point of each ray and the surface of the circumscribed sphere.
133. For each of the plurality of intersection points, a normal to each intersection point position is calculated, and the normal to each intersection point position is taken as a normal to each vertex adjacent to each intersection point, respectively.
In the above-described embodiment, the intersection of each ray with the surface of the circumscribed sphere may be obtained by taking a ray from the center of the circumscribed sphere to each of the plurality of vertices of the hair of the virtual character. A one-to-one correspondence relationship between a plurality of vertexes and a plurality of intersections of the surface of the circumscribed sphere is established by means of ray making. Then, for each of the plurality of intersection points, a normal to each of the intersection points may be calculated, respectively, and the calculated normal to the intersection point may be taken as a normal to a vertex to which the intersection point corresponds.
In the above embodiment, the circumscribed sphere concentric with the skull of the virtual character may be set first, and a plurality of vertices of the hair of the virtual character are obtained, and then the normal line of each vertex position is obtained by making a ray, and the normal line is taken as the normal line of the corresponding vertex. The normal of each vertex in the multiple vertexes of the hair of the virtual character is re-determined by setting the circumscribed sphere, so that the normal can be calculated more accurately.
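Since the normal of a sphere at a surface point is the normalized direction from the center to that point, the intersection construction of steps 131 to 133 reduces to normalizing the center-to-vertex directions; a sketch under that observation (array shapes and names are assumptions):

```python
import numpy as np

def hair_vertex_normals(vertices: np.ndarray, skull_center: np.ndarray) -> np.ndarray:
    """Normals for hair vertices from a circumscribed sphere around the skull.

    The ray from the sphere center through a vertex meets the sphere at
    center + radius * d, with d the normalized center-to-vertex direction,
    and the sphere normal there is d itself, so the radius cancels out.
    `vertices` is an (N, 3) array; the result is (N, 3) unit normals.
    """
    d = vertices - skull_center
    return d / np.linalg.norm(d, axis=1, keepdims=True)
```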
140. And calculating the illumination coefficient corresponding to each coloring point according to the shading shadow coefficient, the illumination direction vector and the normals corresponding to the plurality of vertexes of the hair of the virtual character.
Each colored point is each of a plurality of colored points corresponding to a plurality of planes included in the three-dimensional model of the hair of the virtual character. The illumination coefficient is a coefficient reflecting that the corresponding colored point is affected by the virtual illumination. Optionally, in a specific embodiment, the step 140 may specifically include the following steps 141 to 144:
141. And converting the normals respectively corresponding to the multiple vertexes of the hair of the virtual character into the world space coordinate system, and performing normalization processing to obtain the normal vector corresponding to each coloring point.
142. And calculating the point multiplication of the normal vector and the illumination direction vector to obtain a point multiplication result for the normal vector corresponding to each coloring point.
The normal vector corresponding to each coloring point is a vector whose direction is the normal direction at that coloring point and whose modulus is 1. For each coloring point, the dot product of the normal vector and the illumination direction vector is calculated to obtain the dot product result corresponding to that coloring point; this dot product result can be denoted NdotL.
143. And for the point multiplication result corresponding to each coloring point, performing smooth step function processing on the point multiplication result to obtain an initial selection illumination coefficient.
Alternatively, the initial illumination coefficient LI may be calculated according to the formula LI = smoothstep(m - s, m + s, NdotL), where m and s are both preset coefficients.
144. And for the primary selection illumination coefficient corresponding to each coloring point, obtaining the smaller value of the primary selection illumination coefficient and the shielding shadow coefficient, wherein the smaller value is the illumination coefficient.
For each coloring point, the initial illumination coefficient LI of the coloring point may be compared with the value of the shading coefficient S, and the smaller of the two values is used as the illumination coefficient I of the coloring point; for example, the illumination coefficient I may be calculated according to the formula I = min(LI, S).
The steps are carried out for each coloring point, so that the illumination coefficient corresponding to each coloring point can be obtained.
In the above embodiment, when the illumination coefficient corresponding to each rendering point is calculated, the influence of the illumination direction, the shading coefficient, and the normal direction of the rendering point may be introduced, so that the influence of the illumination coefficient corresponding to each rendering point on the hair of the virtual character may be more natural in the rendering process.
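Steps 141 to 144 in sketch form; the preset coefficients m and s are given placeholder defaults here (s renamed `sw` to avoid clashing with the occlusion coefficient S), which are assumptions of the sketch.

```python
import numpy as np

def smoothstep(e0: float, e1: float, x: np.ndarray) -> np.ndarray:
    t = np.clip((x - e0) / (e1 - e0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def illumination_coefficient(normals_ws: np.ndarray, light_dir: np.ndarray,
                             s: np.ndarray, m: float = 0.0, sw: float = 0.1):
    """Per-coloring-point illumination coefficient I.

    `normals_ws`: (N, 3) world-space unit normals; `s`: per-point
    occlusion shadow coefficients.
    """
    ndotl = normals_ws @ light_dir            # step 142: dot product NdotL
    li = smoothstep(m - sw, m + sw, ndotl)    # step 143: LI = smoothstep(m-s, m+s, NdotL)
    return np.minimum(li, s)                  # step 144: I = min(LI, S)
```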
150. And acquiring a color sampling value and a highlight sampling value of each coloring point, and acquiring a first operation result corresponding to each coloring point according to the color sampling value and the highlight sampling value of each coloring point and the illumination coefficient corresponding to each coloring point.
And the color sampling value is a pixel point corresponding to the target UV coordinate value in the color map, and the highlight sampling value is a pixel point corresponding to the target UV coordinate value in the highlight map. By acquiring the corresponding pixel points of the target UV coordinate values in the maps with different properties, the sampling values with the corresponding properties can be acquired. Optionally, in a specific embodiment, the step "obtaining the color sample value and the highlight sample value of each color point" may specifically include the following steps 151 to 152:
151. and for each coloring point, acquiring a pixel point corresponding to the UV coordinate value of the coloring point in the color map, wherein the pixel point is the color sampling value.
152. And for each coloring point, acquiring a pixel point corresponding to the UV coordinate value of the coloring point in the highlight map, wherein the pixel point is the highlight sampling value.
Optionally, in a specific embodiment, the step "obtaining the first operation result corresponding to each coloring point according to the color sampling value, the highlight sampling value, and the illumination coefficient corresponding to each coloring point" may specifically include the following steps 153 to 156:
153. and calculating the multiplication result of the color parameter value, the color intensity value and the color sampling value to obtain a first product.
The color parameter values and the color intensity values may be set manually by the designer. The first product can be obtained by calculating the result of the multiplication of the color parameter value, the color intensity value and the color sampling value.
154. And calculating a continuous multiplication result of the highlight color value, the highlight intensity value and the highlight sampling value to obtain a second product.
Highlight color values and highlight intensity values can be set manually by the designer. And the second product can be obtained by calculating the continuous multiplication result of the highlight color value, the highlight intensity value and the highlight sampling value.
155. And calculating the sum of the first product and the second product to obtain a product sum result.
156. And multiplying the product addition result and the corresponding illumination coefficient to obtain a corresponding first operation result.
The steps 153 to 156 may be performed for each coloring point. After the first product and the second product are obtained respectively, the sum of the two products can be obtained to obtain the product sum result. For each coloring point, the illumination coefficient corresponding to the coloring point may be multiplied by the sum of the products, so as to obtain a first operation result corresponding to the coloring point, and further obtain a first operation result corresponding to each coloring point.
In the foregoing embodiment, for each coloring point, when the first operation result is calculated, the influence of the color parameter value, the color intensity value, the color sampling value, the highlight color value, the highlight intensity value, the highlight sampling value, and the illumination coefficient corresponding to each coloring point is introduced, so that the influence of the first operation result corresponding to each coloring point on the hair of the virtual character is more natural in the rendering process.
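Steps 153 to 156 amount to one expression per coloring point; a sketch, with all parameter names illustrative (color values may be scalars or RGB arrays):

```python
def first_operation_result(color_param, color_intensity, color_sample,
                           highlight_color, highlight_intensity, highlight_sample,
                           illumination_coeff):
    """(color product + highlight product) scaled by the illumination coefficient."""
    first_product = color_param * color_intensity * color_sample                # step 153
    second_product = highlight_color * highlight_intensity * highlight_sample  # step 154
    return (first_product + second_product) * illumination_coeff               # steps 155-156
```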
160. And acquiring the stroke description information, and acquiring a second operation result based on the stroke description information.
The stroke description information reflects the influence of the outline drawing applied by the art designer to the hair of the virtual character. Optionally, in a specific embodiment, the step "acquiring the stroke description information" may specifically include the following steps 161 to 162:
161. the method comprises the steps of obtaining a three-dimensional model of the hair of the virtual character which is subjected to preset processing, wherein the edge area of each hair in the three-dimensional model of the hair of the virtual character which is subjected to preset processing is a first color, and the central area of each hair is a second color.
The preset processing is a processing process of coloring each hair in the three-dimensional model of the hair of the virtual character in different areas. Specifically, a central region of each strand of hair in the three-dimensional model of the hair of the virtual character may be colored in the second color, and an edge region of each strand of hair may be colored in the first color. The coloring process can be manually completed by an art designer or can be executed by a preset coloring program.
For example, the second color may be black and the first color white. Referring to fig. 1d and fig. 1e: fig. 1d shows a three-dimensional model of the hair of a virtual character without the preset processing, and fig. 1e shows a three-dimensional model of the hair of a virtual character after the preset processing.
162. And for each pixel point of the area to which the first color of each strand of hair belongs, carrying out interpolation operation on the area to which the first color of the strand of hair belongs according to the distance between the pixel point and the central area of the strand of hair to which the pixel point belongs and the distance between the pixel point and the side line of the strand of hair to which the pixel point belongs, and obtaining the description information of the stroking line.
For each pixel point of the area to which the first color of each strand of hair belongs, interpolation operation can be carried out according to the distance between the pixel point and the center area of the strand of hair to which the pixel point belongs and the distance between the pixel point and the side line of the strand of hair to which the pixel point belongs. Alternatively, the interpolation procedure may be as follows:
according to the formula
Figure BDA0003664373830000171
The gray value J is calculated. The larger the J value is, the lighter the color is; the smaller the J value, the darker the color. R3 is the distance between the pixel point and the central area of the strand hair, and R4 is the distance between the pixel point and the side line of the strand hair. And for each pixel point of the region to which the first color belongs, gray value filling is carried out on the corresponding pixel point by using the obtained gray value, so that the delineation description information MdotL can be obtained.
It should be understood that the gray value J can be calculated by the above formula, or by other operation formulas with the same monotonic behavior. The specific manner of calculating the gray value should not be construed as limiting the application.
Optionally, in a specific embodiment, the step "obtaining the second operation result based on the stroke description information" may specifically include the following steps 163 to 164:
163. and performing step function operation on the stroke description information and the stroke thickness parameter to obtain a stroke operation result.
The stroke thickness parameter is a parameter manually set by the art designer; denote it as Ol. The stroke operation result Os is calculated according to the formula Os = step(MdotL, Ol), where MdotL is the stroke description information and Ol is the stroke thickness parameter.
164. And calculating a continuous multiplication result of the stroking color value, the stroking intensity value and the stroking operation result, wherein the continuous multiplication result is the second operation result.
The stroking color value and the stroking intensity value can be set manually by an engineer. And calculating the continuous multiplication result of the stroking color value, the stroking intensity value and the stroking operation result to obtain a second operation result.
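Under the same assumptions, steps 163 and 164 amount to a step-function comparison followed by a continuous multiplication, as in the following sketch (array shapes and parameter names are illustrative):

```python
import numpy as np

def step(edge, x):
    """Shader-style step(): 1.0 where x >= edge, else 0.0."""
    return (np.asarray(x) >= edge).astype(np.float32)

def second_operation_result(mdotl, ol, stroke_color, stroke_intensity):
    # Step 163: Os = step(MdotL, Ol); a pixel belongs to the stroke when
    # its gray value MdotL does not exceed the thickness parameter Ol.
    os_result = step(mdotl, ol)                       # shape (H, W)
    # Step 164: continuous multiplication of the stroke color value, the
    # stroke intensity value, and the stroke operation result.
    return stroke_color * stroke_intensity * os_result[..., None]  # (H, W, 3)
```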
In the above embodiment, the influences of the stroke description information, the stroke thickness parameter, the stroke color value, and the stroke intensity value are introduced when the second operation result is calculated. Because the calculation of the stroke description information places only a light computational load on the terminal device, the calculation of the second operation result consumes few computing resources, which reduces the computing-capability requirement on the terminal device running the game.
170. And for the first operation result corresponding to each coloring point, acquiring the sum result of the first operation result and the second operation result, and rendering the three-dimensional model of the hair of the virtual character according to the sum result corresponding to each coloring point to obtain a rendering result.
For each of the plurality of coloring points corresponding to the plurality of planes included in the three-dimensional model of the hair of the virtual character, the operations of steps 110 to 170 above may be performed. In this way, a plurality of first operation results corresponding to those planes are obtained; adding each of these first operation results to the second operation result yields the plurality of addition results by which the three-dimensional model of the hair of the virtual character is rendered.
In the method for rendering the hair of the virtual character provided by the embodiment of the application, for each coloring point of the plurality of coloring points, a shadow sampling value meeting the preset requirement can be obtained from the shadow map; an illumination influence parameter is then determined, and the occlusion shadow coefficient of each coloring point is calculated based on the illumination influence parameter and the shadow sampling value of that coloring point. The embodiment of the application can further calculate the normals respectively corresponding to the multiple vertexes of the hair of the virtual character, calculate the illumination coefficient corresponding to each coloring point according to these normals, the illumination direction vector, and the occlusion shadow coefficient of each coloring point, and then obtain the first operation result corresponding to each coloring point by combining the color sampling value and the highlight sampling value of each coloring point. The embodiment of the application can also obtain a second operation result based on the delineation description information, add the second operation result to the first operation result corresponding to each coloring point to obtain an addition result corresponding to each coloring point, and render the three-dimensional model of the hair of the virtual character according to the addition results. In the present application, the illumination coefficient of each coloring point is calculated from the occlusion shadow coefficient of that coloring point, the illumination direction vector, and the normals corresponding to the multiple vertexes of the hair of the virtual character; the calculated illumination coefficient is then combined with the color sampling value, the highlight sampling value, and the delineation description information of each coloring point to finally obtain the addition result corresponding to each coloring point. With these factors introduced, the rendered hair of the virtual character changes naturally as the virtual character moves.
In the present application, the realism of the hair of the virtual character can be increased.
The method described in the above embodiments is further described in detail below.
In this embodiment, the method of the embodiment of the present application is described in detail by taking as an example that the shadow sampling value is S0, the illumination projection vector is L', the vertical upward direction vector is F, the illumination influence parameter is FdotL, the occlusion shadow coefficient is S, the dot product result is NdotL, the initial selection illumination coefficient is LI, and the illumination coefficient is I.
As shown in fig. 2, a specific flow of a method for rendering virtual character hair is as follows:
201. and acquiring a maximum shadow map and a minimum shadow map corresponding to the hair of the virtual character.
Wherein the maximum shadow map is a map corresponding to a maximum shadow formed by the head ornament of the virtual character on the hair of the virtual character under the influence of virtual lighting; the minimum shadow map is a map corresponding to a minimum shadow formed by the head ornament of the virtual character on the hair of the virtual character under the influence of virtual illumination.
202. And calculating a middle shadow area according to the maximum shadow map and the minimum shadow map.
203. And for each pixel point of the middle shadow area, carrying out interpolation operation on the middle shadow area according to the distance between the pixel point and the first boundary of the middle shadow area and the distance between the pixel point and the second boundary of the middle shadow area to obtain an interpolation operation shadow area.
Wherein the first boundary is a boundary adjacent to the minimum shadow and the second boundary is a boundary away from the minimum shadow.
204. And combining the interpolation operation shadow area with the minimum shadow map to obtain the shadow map.
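Steps 201 to 204 may be sketched as image-space array operations as follows; the particular interpolation weight (fading the shadow from the first boundary toward the second boundary) is an assumption, since the embodiment fixes only the two boundary distances as inputs.

```python
import numpy as np

def build_shadow_map(max_shadow, min_shadow, d_first, d_second):
    """Combine the maximum and minimum shadow maps into one shadow map.

    max_shadow, min_shadow: masks (1 = in shadow), shape (H, W)
    d_first:  per-pixel distance to the first boundary (adjacent to the
              minimum shadow)
    d_second: per-pixel distance to the second boundary (away from the
              minimum shadow)
    """
    # Step 202: the middle shadow area is where the maximum shadow covers
    # the pixel but the minimum shadow does not.
    middle = np.clip(max_shadow - min_shadow, 0.0, 1.0)
    # Step 203: interpolate inside the middle area so the shadow fades
    # from the first boundary toward the second boundary.
    weight = d_second / np.maximum(d_first + d_second, 1e-6)
    interpolated = middle * weight
    # Step 204: combine the interpolation operation shadow area with the
    # minimum shadow map.
    return np.clip(min_shadow + interpolated, 0.0, 1.0)
```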
205. A pixel point corresponding to the target UV coordinate value is acquired from the shadow map; this pixel point is the shadow sampling value S0.
For convenience of description, let any one coloring point be the target coloring point; the target coloring point has its own corresponding target UV coordinate value. The target UV coordinate value is the coordinate value corresponding to the target coloring point, the target coloring point belongs to a target coloring surface, and the target coloring surface is any one of the plurality of planes included in the three-dimensional model of the hair of the virtual character.
206. And acquiring an illumination direction vector, and projecting the illumination direction vector on a target projection surface to obtain an illumination projection vector L'.
The target projection surface is a projection surface formed by the surface orientation direction of the virtual character and a vertical upward direction vector F.
207. And calculating the illumination influence parameter FdotL according to the illumination projection vector L' and the vertical upward direction vector F.
208. The occlusion shadow coefficient S is calculated according to the formula S = step(FdotL, S0), where FdotL is the illumination influence parameter and S0 is the shadow sampling value.
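A sketch of steps 206 to 208 for a single coloring point follows; the world up vector and the plane-projection construction are assumptions consistent with the text.

```python
import numpy as np

def occlusion_shadow_coefficient(light_dir, face_dir, s0):
    """Steps 206-208 for one coloring point.

    light_dir: illumination direction vector, shape (3,)
    face_dir:  facing direction of the virtual character, shape (3,)
    s0:        shadow sampling value taken from the shadow map
    """
    f = np.array([0.0, 1.0, 0.0])                  # vertical upward vector F
    # Step 206: project the light direction onto the plane spanned by the
    # facing direction and F, i.e. remove the component along the plane
    # normal, to obtain the illumination projection vector L'.
    n = np.cross(face_dir, f)
    n = n / np.linalg.norm(n)
    l_proj = light_dir - np.dot(light_dir, n) * n
    l_proj = l_proj / np.linalg.norm(l_proj)
    # Step 207: illumination influence parameter FdotL.
    fdotl = float(np.dot(f, l_proj))
    # Step 208: S = step(FdotL, S0), i.e. 1.0 where S0 >= FdotL.
    return 1.0 if s0 >= fdotl else 0.0
```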
209. And generating a circumscribed sphere corresponding to the hair of the virtual character.
Wherein the center of the circumscribed sphere coincides with the center of the skull of the virtual character.
210. And taking rays from the center of the circumscribed sphere to each vertex of the plurality of vertices of the hair of the virtual character to obtain an intersection point of each ray and the surface of the circumscribed sphere.
211. For each of the plurality of intersection points, a normal to each intersection point position is calculated, and the normal to each intersection point position is taken as a normal to each vertex adjacent to each intersection point, respectively.
212. And converting the normals respectively corresponding to the multiple vertexes of the head of the virtual character to a world space coordinate system, and performing normalization processing to obtain the normal vector respectively corresponding to each coloring point.
213. And calculating the dot product of the normal vector and the illumination direction vector for the normal vector corresponding to each coloring point to obtain a dot product result NdotL.
214. The initial selection illumination coefficient LI is calculated according to the formula LI = smoothstep(m - s, m + s, NdotL), where m and s are preset coefficients.
215. The illumination coefficient I of each coloring point is calculated according to the formula I = min(LI, S), where S is the occlusion shadow coefficient and LI is the initial selection illumination coefficient.
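Steps 209 to 215 may be condensed as follows: for a sphere, the normal at the point where a ray from the center through a vertex meets the surface is simply the normalized center-to-vertex direction, so the ray cast of steps 210 and 211 simplifies accordingly. In the sketch, m and sw stand for the preset coefficients m and s (sw is renamed to avoid clashing with the occlusion shadow coefficient S).

```python
import numpy as np

def smoothstep(a, b, x):
    t = np.clip((x - a) / (b - a), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def illumination_coefficient(vertex, sphere_center, light_dir, s_shadow,
                             m=0.0, sw=0.1):
    # Steps 209-211: the normal at the intersection of the ray
    # (center -> vertex) with the circumscribed sphere is the normalized
    # center-to-vertex direction; it is assigned to the nearby hair vertex.
    normal = vertex - sphere_center
    # Step 212: normalization (the world-space conversion is omitted here).
    normal = normal / np.linalg.norm(normal)
    # Step 213: dot product result NdotL.
    ndotl = float(np.dot(normal, light_dir))
    # Step 214: initial selection illumination coefficient LI.
    li = smoothstep(m - sw, m + sw, ndotl)
    # Step 215: I = min(LI, S) with the occlusion shadow coefficient S.
    return min(li, s_shadow)
```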
216. Acquiring a pixel point corresponding to the target UV coordinate value in the color map, wherein the pixel point is the color sampling value; and acquiring a pixel point corresponding to the target UV coordinate value in the highlight map, wherein the pixel point is the highlight sampling value.
217. For each coloring point, calculating a multiplication result of the color parameter value, the color intensity value and the color sampling value to obtain a first product; calculating a multiplication result of the highlight color value, the highlight intensity value and the highlight sampling value to obtain a second product; calculating the sum of the first product and the second product to obtain a product sum result; and multiplying the product addition result and the corresponding illumination coefficient to obtain a corresponding first operation result.
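Step 217 is two continuous multiplications followed by a sum and a scaling; a minimal sketch with illustrative parameter names:

```python
import numpy as np

def first_operation_result(color_sample, highlight_sample, illum_coeff,
                           color_param, color_intensity,
                           highlight_color, highlight_intensity):
    """Color arguments are RGB arrays of shape (3,); intensities are scalars."""
    # Continuous multiplication of color parameter, intensity, and sample.
    first_product = color_param * color_intensity * color_sample
    # Continuous multiplication of highlight color, intensity, and sample.
    second_product = highlight_color * highlight_intensity * highlight_sample
    product_sum = first_product + second_product
    # Multiply the product sum by the illumination coefficient of the point.
    return product_sum * illum_coeff
```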
218. And acquiring the stroke description information, and acquiring a second operation result based on the stroke description information.
In one embodiment, step 218 may specifically include the following steps:
acquiring a three-dimensional model of the hair of the virtual character which is subjected to the preset treatment, wherein the edge area of each hair in the three-dimensional model of the hair of the virtual character which is subjected to the preset treatment is a first color, and the central area of each hair is a second color;
for each pixel point of the area to which the first color of each strand of hair belongs, performing interpolation operation on the area to which the first color of the strand of hair belongs according to the distance between the pixel point and the central area of the strand of hair to which the pixel point belongs and the distance between the pixel point and the side line of the strand of hair to which the pixel point belongs to obtain the description information of the stroking;
performing step function operation on the stroke description information and the stroke thickness parameter to obtain a stroke operation result;
and calculating a continuous multiplication result of the stroking color value, the stroking intensity value and the stroking operation result, wherein the continuous multiplication result is the second operation result.
219. And for the first operation result corresponding to each coloring point, acquiring the sum result of the first operation result and the second operation result, and rendering the three-dimensional model of the hair of the virtual character according to the sum result corresponding to each coloring point to obtain a rendering result.
As can be seen from the above, in the embodiment of the present application, a shadow sampling value meeting the preset requirement can be obtained from the shadow map; an illumination influence parameter is then determined, and the occlusion shadow coefficient is calculated based on the illumination influence parameter and the shadow sampling value. The embodiment of the application can further calculate the normals respectively corresponding to the multiple vertexes of the hair of the virtual character, calculate the illumination coefficient corresponding to each coloring point according to these normals, the illumination direction vector, and the occlusion shadow coefficient, and then obtain the first operation result corresponding to each coloring point by combining the color sampling value and the highlight sampling value. The embodiment of the application can also obtain a second operation result based on the delineation description information, add the second operation result to the first operation result corresponding to each coloring point to obtain an addition result corresponding to each coloring point, and render the three-dimensional model of the hair of the virtual character according to the addition results. In the present application, the illumination coefficient of each coloring point is calculated from the occlusion shadow coefficient, the illumination direction vector, and the normals corresponding to the multiple vertexes of the hair of the virtual character; the calculated illumination coefficient is then combined with the color sampling value, the highlight sampling value, and the delineation description information to finally obtain the addition result corresponding to each coloring point. With these factors introduced, the rendered hair of the virtual character changes naturally as the virtual character moves.
In the present application, the realism of the hair of the virtual character can be increased.
In order to better implement the method, an embodiment of the present application further provides a virtual character hair rendering apparatus, where the virtual character hair rendering apparatus may be specifically integrated in an electronic device, and the electronic device may be a terminal. The terminal can be a mobile phone, a tablet computer, an intelligent Bluetooth device, a notebook computer, a personal computer and other devices.
For example, in this embodiment, the apparatus of the embodiment of the present application will be described in detail by taking an example in which the virtual character hair rendering apparatus is specifically integrated in a terminal.
For example, as shown in fig. 3, the virtual character hair rendering apparatus may include:
a pixel point obtaining unit 301, configured to obtain, for each coloring point in a plurality of coloring points, a pixel point meeting a preset requirement from a shadow map, where the plurality of coloring points are coloring points corresponding to a plurality of planes included in a three-dimensional model of hair of a virtual character, and the pixel point meeting the preset requirement is marked as a shadow sampling value;
an illumination parameter determining unit 302, configured to determine an illumination influence parameter, and calculate a shading shadow coefficient of each shading point according to the illumination influence parameter and a shadow sampling value of each shading point;
a normal obtaining unit 303, configured to construct an external sphere for wrapping hair of a virtual character, obtain a normal of a position where a surface of the external sphere corresponds to a vertex of the hair of the virtual character, and use the normal as a corresponding normal close to the vertex;
an illumination coefficient calculation unit 304, configured to calculate an illumination coefficient corresponding to each coloring point according to the occlusion shadow coefficient of each coloring point, the illumination direction vector, and a normal line corresponding to each of the multiple vertexes of the head of the virtual character;
a first result obtaining unit 305, configured to obtain a color sample value and a highlight sample value of each color point, and obtain a first operation result corresponding to each color point according to the color sample value, the highlight sample value, and the illumination coefficient corresponding to each color point;
a second result acquisition unit 306 configured to acquire the stroke description information and acquire a second operation result based on the stroke description information;
the summation result obtaining unit 307 is configured to obtain a summation result of the first operation result and the second operation result for each coloring point, and render the three-dimensional model of the hair of the virtual character according to the summation result corresponding to each coloring point, so as to obtain a rendering result.
In some embodiments, the pixel point obtaining unit 301 is specifically configured to: and for each coloring point, acquiring a pixel point corresponding to the UV coordinate value of the coloring point in the shadow map, wherein the pixel point is a pixel point meeting the preset requirement.
In some embodiments, the illumination parameter determination unit 302 comprises:
the illumination vector subunit is used for acquiring an illumination direction vector;
the projection vector subunit is configured to project the illumination direction vector on a target projection surface to obtain an illumination projection vector, where the target projection surface is a projection surface formed by a surface facing direction of the virtual character and a vertical upward direction vector;
and the influence parameter calculating subunit is used for calculating the illumination influence parameters according to the illumination projection vectors and the vertical upward direction vectors.
In some embodiments, the normal line obtaining unit 303 includes:
the external sphere sub-unit is used for generating an external sphere corresponding to the hair of the virtual character, wherein the center of the external sphere is superposed with the center of the skull of the virtual character;
the corner point acquisition subunit is used for making rays from the center of the external sphere to each vertex of the multiple vertices of the hair of the virtual character and acquiring an intersection point of each ray and the surface of the external sphere;
and a normal calculation subunit, configured to calculate, for each of the plurality of intersection points, a normal of each intersection point position, and take the normal of each intersection point position as a normal of each vertex adjacent to each intersection point, respectively.
In some embodiments, the illumination coefficient calculation unit 304 includes:
the normal conversion subunit is used for converting the normals corresponding to the multiple vertexes of the head of the virtual character to a world space coordinate system respectively, and performing normalization processing to obtain a normal vector corresponding to each coloring point;
a dot product result subunit, configured to calculate, for a normal vector corresponding to each coloring point, a dot product of the normal vector and the illumination direction vector to obtain a dot product result;
the primary selection parameter subunit is used for carrying out smooth step function processing on the point multiplication result corresponding to each coloring point to obtain a primary selection illumination coefficient;
and the illumination coefficient determining subunit is used for acquiring a smaller value of the initial selection illumination coefficient and the shading coefficient for the initial selection illumination coefficient corresponding to each coloring point, wherein the smaller value is the illumination coefficient.
In some embodiments, the first result obtaining unit 305 includes:
the color sampling subunit is used for acquiring a pixel point corresponding to the UV coordinate value of each coloring point in a color map, wherein the pixel point is the color sampling value;
and the highlight sampling subunit is used for acquiring a pixel point corresponding to the UV coordinate value of the coloring point in the highlight map for each coloring point, wherein the pixel point is the highlight sampling value.
In some embodiments, the first result obtaining unit 305 further includes:
the first product subunit is used for calculating a continuous multiplication result of the color parameter value, the color intensity value and the color sampling value to obtain a first product;
the second product subunit is used for calculating a continuous multiplication result of the highlight color value, the highlight intensity value and the highlight sampling value to obtain a second product;
a sum result subunit, configured to calculate a sum of the first product and the second product, and obtain a product sum result;
and the first result subunit is used for multiplying the product addition result and the corresponding illumination coefficient to obtain a corresponding first operation result.
In some embodiments, the second result obtaining unit 306 includes:
the three-dimensional model subunit is used for acquiring a three-dimensional model of the hair of the virtual character which is subjected to the preset processing, wherein the edge area of each hair in the three-dimensional model of the hair of the virtual character which is subjected to the preset processing is a first color, and the central area of each hair is a second color;
and the delineator area subunit is used for carrying out interpolation operation on the area to which the first color of each strand of hair belongs according to the distance between the pixel point and the central area of the strand of hair to which the pixel point belongs and the distance between the pixel point and the side line of the strand of hair to which the pixel point belongs so as to obtain the delineator description information.
In some embodiments, the second result obtaining unit 306 further includes:
the stroke operation subunit is used for performing step function operation on the stroke description information and the stroke thickness parameter to obtain a stroke operation result;
and the continuous multiplication subunit is used for calculating a continuous multiplication result of the stroking color value, the stroking intensity value and the stroking operation result, wherein the continuous multiplication result is the second operation result.
In some embodiments, the apparatus further comprises:
a size shadow obtaining unit, configured to obtain a maximum shadow map and a minimum shadow map corresponding to the hair of the virtual character, where the maximum shadow map is a map corresponding to a maximum shadow formed by the head ornament of the virtual character on the hair of the virtual character under the influence of virtual illumination; the minimum shadow map is a map corresponding to a minimum shadow formed by the head ornament of the virtual character on the hair of the virtual character under the influence of virtual illumination;
the middle shadow calculating unit is used for calculating a middle shadow area according to the maximum shadow map and the minimum shadow map;
the interpolation operation shadow unit is used for carrying out interpolation operation on the middle shadow area according to the distance between a pixel point and a first boundary of the middle shadow area and the distance between the pixel point and a second boundary of the middle shadow area for each pixel point of the middle shadow area to obtain an interpolation operation shadow area; wherein the first boundary is a boundary adjacent to the minimum shadow and the second boundary is a boundary away from the minimum shadow;
and the shadow combination unit is used for combining the interpolation operation shadow area with the minimum shadow map to obtain the shadow map.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
As can be seen from the above, in the present application, the illumination coefficient of each coloring point is calculated from the occlusion shadow coefficient, the illumination direction vector, and the normals corresponding to the multiple vertexes of the hair of the virtual character; the calculated illumination coefficient is then combined with the color sampling value, the highlight sampling value, and the delineation description information to finally obtain the addition result corresponding to each coloring point. With these factors introduced, the rendered hair of the virtual character changes naturally as the virtual character moves.
The embodiment of the application can increase the sense of reality of the hair of the virtual character.
The embodiment of the application further provides the electronic equipment which can be a terminal, a server and the like. The terminal can be a mobile phone, a tablet computer, an intelligent Bluetooth device, a notebook computer, a personal computer and the like; the server may be a single server, a server cluster composed of a plurality of servers, or the like.
In some embodiments, the avatar hair rendering apparatus may be further integrated into a plurality of electronic devices, for example, the avatar hair rendering apparatus may be integrated into a plurality of servers, and the plurality of servers implement the avatar hair rendering method of the present application.
In this embodiment, the electronic device of this embodiment is described in detail as an example, for example, as shown in fig. 4, it shows a schematic structural diagram of the electronic device according to the embodiment of the present application, specifically:
the electronic device may include components such as a processor 401 of one or more processing cores, memory 402 of one or more computer-readable storage media, a power supply 403, an input module 404, and a communication module 405. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 4 does not constitute a limitation of the electronic device and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 401 is a control center of the electronic device, connects various parts of the whole electronic device by various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device. In some embodiments, processor 401 may include one or more processing cores; in some embodiments, processor 401 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by operating the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 access to the memory 402.
The electronic device also includes a power supply 403 for supplying power to the various components, and in some embodiments, the power supply 403 may be logically coupled to the processor 401 via a power management system, such that the power management system may manage charging, discharging, and power consumption. The power supply 403 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The electronic device may also include an input module 404, the input module 404 operable to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
The electronic device may also include a communication module 405, and in some embodiments the communication module 405 may include a wireless module, through which the electronic device may wirelessly transmit over short distances, thereby providing wireless broadband internet access to the user. For example, the communication module 405 may be used to assist a user in sending and receiving e-mails, browsing web pages, accessing streaming media, and the like.
Although not shown, the electronic device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 401 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application program stored in the memory 402, thereby implementing various functions as follows:
for each coloring point in the plurality of coloring points, obtaining pixel points meeting preset requirements from a shadow map, wherein the coloring points are coloring points corresponding to a plurality of planes included in a three-dimensional model of the hair of the virtual character, and the pixel points meeting the preset requirements are marked as shadow sampling values; determining an illumination influence parameter, and calculating a shielding shadow coefficient of each coloring point according to the illumination influence parameter and the shadow sampling value of each coloring point; constructing an external sphere for wrapping the hair of the virtual character, acquiring a normal of the surface of the external sphere and the position corresponding to the vertex of the hair of the virtual character, and taking the normal as a corresponding normal close to the vertex; calculating the illumination coefficient corresponding to each coloring point according to the shading shadow coefficient, the illumination direction vector and the normal corresponding to the multiple vertexes of the head of the virtual character; acquiring a color sampling value and a highlight sampling value of each coloring point, and acquiring a first operation result corresponding to each coloring point according to the color sampling value and the highlight sampling value of each coloring point and the illumination coefficient corresponding to each coloring point; acquiring the stroke description information, and acquiring a second operation result based on the stroke description information; and for the first operation result corresponding to each coloring point, acquiring the sum result of the first operation result and the second operation result, and rendering the three-dimensional model of the hair of the virtual character according to the sum result corresponding to each coloring point to obtain a rendering result.
Optionally, the obtaining, for each of the plurality of colored points, a pixel point meeting a preset requirement from the shadow map includes: and for each coloring point, acquiring a pixel point corresponding to the UV coordinate value of the coloring point in the shadow map, wherein the pixel point meets the preset requirement.
In the above embodiment, the shadow sampling value may be obtained by acquiring the target UV coordinate value and determining the pixel point corresponding to the target UV coordinate value in the shadow map; this shadow sampling value is denoted S0.
Optionally, the determining the illumination impact parameter comprises: acquiring an illumination direction vector; projecting the illumination direction vector on a target projection surface to obtain an illumination projection vector, wherein the target projection surface is a projection surface formed by the vector in the facing direction of the virtual character and the vector in the vertical upward direction; and calculating the illumination influence parameters according to the illumination projection vectors and the vertical upward direction vectors.
In the foregoing embodiment, in the process of calculating the illumination influence parameter, the illumination influence parameter may be influenced by the illumination direction and the facing direction of the virtual character, so that the influence of the illumination influence parameter on the hair of the virtual character may be more natural in the rendering process.
Optionally, the constructing an external sphere for wrapping hair of a virtual character, acquiring a normal of a position of the surface of the external sphere corresponding to a vertex of the hair of the virtual character, and taking the normal as a normal of a corresponding adjacent vertex, includes: generating an external sphere corresponding to the hair of the virtual character, wherein the center of the external sphere is coincident with the center of the skull of the virtual character; making rays from the center of the circumscribed sphere to each of multiple vertexes of the hair of the virtual character, and obtaining an intersection point of each ray and the surface of the circumscribed sphere; for each of the plurality of intersection points, a normal to each intersection point position is calculated, and the normal to each intersection point position is taken as a normal to each vertex adjacent to each intersection point, respectively.
In the above embodiment, an external sphere concentric with the skull of the virtual character may be set, and positions where the surface of the external sphere contacts with the multiple vertices of the hair of the virtual character may be obtained, and then a normal line of each of the contact positions may be obtained by making a ray, and the normal line may be used as a normal line of the corresponding vertex. The normal of each vertex in the multiple vertexes of the hair of the virtual character is re-determined by setting the circumscribed sphere, so that the normal can be calculated more accurately.
Optionally, the calculating, according to the shading shadow coefficient, the illumination direction vector of each shading point, and the normals corresponding to the multiple vertexes of the head of the virtual character, the illumination coefficient corresponding to each shading point includes: converting the normals corresponding to the multiple vertexes of the head of the virtual character to a world space coordinate system, and performing normalization processing to obtain normal vectors corresponding to each coloring point; calculating the point multiplication of the normal vector and the illumination direction vector to obtain a point multiplication result for the normal vector corresponding to each coloring point; for the point multiplication result corresponding to each coloring point, performing smooth step function processing on the point multiplication result to obtain an initial selection illumination coefficient; and for the primary selection illumination coefficient corresponding to each coloring point, obtaining the smaller value of the primary selection illumination coefficient and the shielding shadow coefficient, wherein the smaller value is the illumination coefficient.
In the above embodiment, when the illumination coefficient corresponding to each rendering point is calculated, the influence of the illumination direction, the shading coefficient, and the normal direction of the rendering point may be introduced, so that the influence of the illumination coefficient corresponding to each rendering point on the hair of the virtual character may be more natural in the rendering process.
Optionally, the obtaining, according to the color sampling value and the highlight sampling value of each coloring point and the illumination coefficient corresponding to each coloring point, a first operation result corresponding to each coloring point includes: for each coloring point: calculating a continuous multiplication result of the color parameter value, the color intensity value and the color sampling value to obtain a first product; calculating a multiplication result of the highlight color value, the highlight intensity value and the highlight sampling value to obtain a second product; calculating the sum of the first product and the second product to obtain a product sum result; and multiplying the product addition result and the corresponding illumination coefficient to obtain a corresponding first operation result.
In the foregoing embodiment, for each coloring point, when the first operation result is calculated, the influence of the color parameter value, the color intensity value, the color sampling value, the highlight color value, the highlight intensity value, the highlight sampling value, and the illumination coefficient corresponding to each coloring point is introduced, so that the influence of the first operation result corresponding to each coloring point on the hair of the virtual character is more natural in the rendering process.
Optionally, the obtaining a second operation result based on the stroke description information includes: performing step function operation on the stroke description information and the stroke thickness parameter to obtain a stroke operation result; and calculating a continuous multiplication result of the stroking color value, the stroking intensity value and the stroking operation result, wherein the continuous multiplication result is the second operation result.
In the above embodiment, the influences of the stroke description information, the stroke thickness parameter, the stroke color value, and the stroke intensity value are introduced when the second operation result is calculated. Because the calculation of the stroke description information places only a light computational load on the terminal device, the calculation of the second operation result consumes few computing resources, which reduces the computing-capability requirement on the terminal device running the game.
Optionally, before the obtaining, for each coloring point in the plurality of coloring points, a pixel point meeting a preset requirement from the shadow map, the method further includes: acquiring a maximum shadow map and a minimum shadow map corresponding to the hair of the virtual character, wherein the maximum shadow map is a map corresponding to the maximum shadow formed by the head ornament of the virtual character on the hair of the virtual character under the influence of virtual illumination; the minimum shadow map is a map corresponding to the minimum shadow formed by the head ornament of the virtual character on the hair of the virtual character under the influence of virtual illumination; calculating a middle shadow area according to the maximum shadow map and the minimum shadow map; for each pixel point of the middle shadow area, performing interpolation operation on the middle shadow area according to the distance between a pixel point and a first boundary of the middle shadow area and the distance between the pixel point and a second boundary of the middle shadow area to obtain an interpolation operation shadow area; wherein the first boundary is a boundary adjacent to the minimum shadow and the second boundary is a boundary away from the minimum shadow; and combining the interpolation operation shadow area with the minimum shadow map to obtain the shadow map.
In the above embodiment, the maximum shadow area and the minimum shadow area may be subtracted to obtain a middle shadow area; an interpolation operation is then performed on the middle shadow area, turning it into an interpolation operation shadow area; the interpolation operation shadow area is then combined with the minimum shadow area to obtain the shadow map. The shadow map obtained in this way requires only a small amount of calculation yet achieves a good display effect. Therefore, performing the subsequent hair rendering processing with this shadow map greatly reduces the consumption of computing power and lowers the computing-capability requirement on the terminal device running the game.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present application provides a computer-readable storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps in any of the virtual character hair rendering methods provided by the present application. For example, the instructions may perform the steps of:
for each coloring point in the plurality of coloring points, obtaining pixel points meeting preset requirements from a shadow map, wherein the coloring points are coloring points corresponding to a plurality of planes included in a three-dimensional model of the hair of the virtual character, and the pixel points meeting the preset requirements are marked as shadow sampling values; determining an illumination influence parameter, and calculating a shielding shadow coefficient of each coloring point according to the illumination influence parameter and the shadow sampling value of each coloring point; constructing an external sphere for wrapping the hair of the virtual character, acquiring a normal of the surface of the external sphere and the position corresponding to the vertex of the hair of the virtual character, and taking the normal as a corresponding normal close to the vertex; calculating the illumination coefficient corresponding to each coloring point according to the shading shadow coefficient, the illumination direction vector and the normal corresponding to the multiple vertexes of the head of the virtual character; acquiring a color sampling value and a highlight sampling value of each coloring point, and acquiring a first operation result corresponding to each coloring point according to the color sampling value and the highlight sampling value of each coloring point and the illumination coefficient corresponding to each coloring point; acquiring the stroke description information, and acquiring a second operation result based on the stroke description information; and for the first operation result corresponding to each coloring point, acquiring the sum result of the first operation result and the second operation result, and rendering the three-dimensional model of the hair of the virtual character according to the sum result corresponding to each coloring point to obtain a rendering result.
Optionally, the obtaining, for each of the plurality of colored points, a pixel point meeting a preset requirement from the shadow map includes: and for each coloring point, acquiring a pixel point corresponding to the UV coordinate value of the coloring point in the shadow map, wherein the pixel point meets the preset requirement.
In the above embodiment, the shadow sampling value may be obtained by acquiring the target UV coordinate value and determining the pixel point corresponding to the target UV coordinate value in the shadow map; this shadow sampling value is denoted S0.
Optionally, the determining the illumination impact parameter comprises: acquiring an illumination direction vector; projecting the illumination direction vector on a target projection surface to obtain an illumination projection vector, wherein the target projection surface is a projection surface formed by the vector in the facing direction of the virtual character and the vector in the vertical upward direction; and calculating the illumination influence parameters according to the illumination projection vectors and the vertical upward direction vectors.
In the foregoing embodiment, in the process of calculating the illumination influence parameter, the illumination influence parameter may be influenced by the illumination direction and the facing direction of the virtual character, so that the influence of the illumination influence parameter on the hair of the virtual character may be more natural in the rendering process.
Optionally, the constructing an external sphere for wrapping hair of a virtual character, acquiring a normal of a position of the surface of the external sphere corresponding to a vertex of the hair of the virtual character, and taking the normal as a normal of a corresponding adjacent vertex, includes: generating an external sphere corresponding to the hair of the virtual character, wherein the center of the external sphere is coincident with the center of the skull of the virtual character; making rays from the center of the circumscribed sphere to each of multiple vertexes of the hair of the virtual character, and obtaining an intersection point of each ray and the surface of the circumscribed sphere; for each of the plurality of intersection points, a normal to each intersection point position is calculated, and the normal to each intersection point position is taken as a normal to each vertex adjacent to each intersection point, respectively.
In the above embodiment, an external sphere concentric with the skull of the virtual character may be set, and positions where the surface of the external sphere contacts with the multiple vertices of the hair of the virtual character may be obtained, and then a normal line of each of the contact positions may be obtained by making a ray, and the normal line may be used as a normal line of the corresponding vertex. The normal of each vertex in the multiple vertexes of the hair of the virtual character is re-determined by setting the circumscribed sphere, so that the normal can be calculated more accurately.
Optionally, the calculating, according to the occlusion shadow coefficient, the illumination direction vector of each coloring point, and the normals respectively corresponding to the multiple vertexes of the head of the virtual character, the illumination coefficient respectively corresponding to each coloring point includes: converting the normals corresponding to the multiple vertexes of the head of the virtual character to a world space coordinate system, and performing normalization processing to obtain normal vectors corresponding to each coloring point; calculating the point multiplication of the normal vector and the illumination direction vector to obtain a point multiplication result for the normal vector corresponding to each coloring point; for the point multiplication result corresponding to each coloring point, performing smooth step function processing on the point multiplication result to obtain an initial selection illumination coefficient; and for the initial selection illumination coefficient corresponding to each coloring point, acquiring the smaller value of the initial selection illumination coefficient and the shading coefficient, wherein the smaller value is the illumination coefficient.
In the above embodiment, when the illumination coefficient corresponding to each rendering point is calculated, the influence of the illumination direction, the shading coefficient, and the normal direction of the rendering point may be introduced, so that the influence of the illumination coefficient corresponding to each rendering point on the hair of the virtual character may be more natural in the rendering process.
Optionally, the obtaining, according to the color sampling value and the highlight sampling value of each coloring point and the illumination coefficient corresponding to each coloring point, a first operation result corresponding to each coloring point includes: for each coloring point: calculating a continuous multiplication result of the color parameter value, the color intensity value and the color sampling value to obtain a first product; calculating a multiplication result of the highlight color value, the highlight intensity value and the highlight sampling value to obtain a second product; calculating the sum of the first product and the second product to obtain a product sum result; and multiplying the product addition result and the corresponding illumination coefficient to obtain a corresponding first operation result.
In the foregoing embodiment, for each coloring point, when the first operation result is calculated, the influence of the color parameter value, the color intensity value, the color sampling value, the highlight color value, the highlight intensity value, the highlight sampling value, and the illumination coefficient corresponding to each coloring point is introduced, so that the influence of the first operation result corresponding to each coloring point on the hair of the virtual character is more natural in the rendering process.
Optionally, the obtaining a second operation result based on the stroke description information includes: performing step function operation on the stroke description information and the stroke thickness parameter to obtain a stroke operation result; and calculating a continuous multiplication result of the stroking color value, the stroking intensity value and the stroking operation result, wherein the continuous multiplication result is the second operation result.
In the above embodiment, the influences of the stroke description information, the stroke thickness parameter, the stroke color value, and the stroke intensity value are introduced when the second operation result is calculated. Because the calculation of the stroke description information places only a light computational load on the terminal device, the calculation of the second operation result consumes few computing resources, which reduces the computing-capability requirement on the terminal device running the game.
Optionally, before the obtaining, for each of the plurality of colored points, a pixel point meeting a preset requirement from the shadow map, the method further includes: acquiring a maximum shadow map and a minimum shadow map corresponding to the hair of the virtual character, wherein the maximum shadow map is a map corresponding to the maximum shadow formed by the head ornament of the virtual character on the hair of the virtual character under the influence of virtual illumination; the minimum shadow map is a map corresponding to the minimum shadow formed by the head ornament of the virtual character on the hair of the virtual character under the influence of virtual illumination; calculating a middle shadow area according to the maximum shadow map and the minimum shadow map; for each pixel point of the middle shadow area, performing interpolation operation on the middle shadow area according to the distance between a pixel point and a first boundary of the middle shadow area and the distance between the pixel point and a second boundary of the middle shadow area to obtain an interpolation operation shadow area; wherein the first boundary is a boundary adjacent to the minimum shadow and the second boundary is a boundary away from the minimum shadow; and combining the interpolation operation shadow area with the minimum shadow map to obtain the shadow map.
In the above embodiment, the maximum shadow area and the minimum shadow area may be subtracted to obtain a middle shadow area; an interpolation operation is then performed on the middle shadow area, turning it into an interpolation operation shadow area; the interpolation operation shadow area is then combined with the minimum shadow area to obtain the shadow map. The shadow map obtained in this way requires only a small amount of calculation yet achieves a good display effect. Therefore, performing the subsequent hair rendering processing with this shadow map greatly reduces the consumption of computing power and lowers the computing-capability requirement on the terminal device running the game.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations provided in the embodiments described above.
Since the instructions stored in the storage medium can execute the steps in any of the virtual character hair rendering methods provided in the embodiments of the present application, beneficial effects that can be achieved by any of the virtual character hair rendering methods provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The method, the apparatus, the electronic device, and the computer-readable storage medium for rendering virtual character hair provided in the embodiments of the present application are described in detail above, and specific examples are applied in the description to explain the principles and embodiments of the present application, and the description of the embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (13)

1. A virtual character hair rendering method, the method comprising:
for each coloring point of a plurality of coloring points, acquiring a pixel point meeting a preset requirement from a shadow map, wherein the plurality of coloring points are coloring points corresponding to a plurality of faces comprised in a three-dimensional model of the hair of the virtual character, and the pixel point meeting the preset requirement is recorded as a shadow sampling value;
determining an illumination influence parameter, and calculating an occlusion shadow coefficient of each coloring point according to the illumination influence parameter and the shadow sampling value of each coloring point;
constructing a circumscribed sphere wrapping the hair of the virtual character, acquiring the normal at the position on the surface of the circumscribed sphere corresponding to each vertex of the hair of the virtual character, and taking that normal as the normal of the nearby vertex;
calculating the illumination coefficient corresponding to each coloring point according to the occlusion shadow coefficient of each coloring point, the illumination direction vector, and the normals corresponding to the plurality of vertexes of the hair of the virtual character;
acquiring a color sampling value and a highlight sampling value of each coloring point, and acquiring a first operation result corresponding to each coloring point according to the color sampling value, the highlight sampling value, and the illumination coefficient of each coloring point;
acquiring stroke description information, and acquiring a second operation result based on the stroke description information; and
for the first operation result corresponding to each coloring point, acquiring the sum of the first operation result and the second operation result, and rendering the three-dimensional model of the hair of the virtual character according to the sum corresponding to each coloring point to obtain a rendering result.
2. The method of claim 1, wherein, for each coloring point of the plurality of coloring points, acquiring a pixel point meeting a preset requirement from the shadow map comprises:
for each coloring point, acquiring, from the shadow map, the pixel point corresponding to the UV coordinate values of the coloring point, this pixel point being the pixel point meeting the preset requirement.
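As a minimal sketch of this lookup (assuming nearest-neighbour filtering and a [0, 1] UV range; a real engine would typically let the GPU texture sampler do this, possibly with bilinear filtering):

import numpy as np

def sample_map(texture, u, v):
    # Nearest-neighbour fetch of the pixel at the coloring point's UV coordinates.
    h, w = texture.shape[:2]
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y, x]

shadow_map_texture = np.random.rand(256, 256, 1)   # placeholder shadow map
shadow_sampling_value = sample_map(shadow_map_texture, 0.3, 0.7)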
3. The method of claim 1, wherein determining the illumination influence parameter comprises:
acquiring an illumination direction vector;
projecting the illumination direction vector onto a target projection plane to obtain an illumination projection vector, wherein the target projection plane is the plane spanned by the face-direction vector of the virtual character and the vertically upward direction vector; and
calculating the illumination influence parameter according to the illumination projection vector and the vertically upward direction vector.
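One plausible reading of this computation, sketched below: remove from the light direction its component along the plane normal, then compare the projected direction with the up vector. Taking the final parameter as the dot product of the normalised projection with the up vector is an assumption; the claim only states that the parameter is calculated from those two vectors.

import numpy as np

def illumination_influence(light_dir, face_dir, up=np.array([0.0, 0.0, 1.0])):
    # Normal of the plane spanned by the face direction and the up direction.
    plane_normal = np.cross(face_dir, up)
    plane_normal = plane_normal / np.linalg.norm(plane_normal)
    # Project the light direction onto that plane.
    proj = light_dir - np.dot(light_dir, plane_normal) * plane_normal
    proj = proj / max(np.linalg.norm(proj), 1e-6)
    # Assumed final step: how much the projected light points "upward".
    return float(np.dot(proj, up))

param = illumination_influence(np.array([0.0, -0.5, -0.8]), np.array([0.0, 1.0, 0.0]))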
4. The method of claim 1, wherein constructing a circumscribed sphere wrapping the hair of the virtual character, acquiring the normal at the position on the surface of the circumscribed sphere corresponding to each vertex of the hair of the virtual character, and taking that normal as the normal of the nearby vertex comprises:
generating a circumscribed sphere corresponding to the hair of the virtual character, wherein the center of the circumscribed sphere coincides with the center of the skull of the virtual character;
casting a ray from the center of the circumscribed sphere to each of the plurality of vertexes of the hair of the virtual character, and acquiring the intersection point of each ray with the surface of the circumscribed sphere; and
for each of the plurality of intersection points, calculating the normal at the intersection point and taking it as the normal of the vertex nearest that intersection point.
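Because the surface normal of a sphere at any point lies along the line from the centre to that point, the ray-intersection step collapses to normalising the centre-to-vertex directions. A sketch under that observation (names are illustrative):

import numpy as np

def sphere_proxy_normals(hair_vertices, sphere_center):
    # Ray from the sphere centre through each hair vertex; the sphere's surface
    # normal at the exit point is just the normalised ray direction.
    dirs = hair_vertices - sphere_center
    lengths = np.linalg.norm(dirs, axis=1, keepdims=True)
    return dirs / np.maximum(lengths, 1e-8)

vertices = np.random.rand(100, 3)                  # placeholder hair vertices
normals = sphere_proxy_normals(vertices, np.array([0.5, 0.5, 0.5]))

The apparent purpose of the circumscribed sphere is to replace the noisy per-face normals of the hair mesh with a smoothly varying normal field, which is what gives the lighting a continuous response across the strands.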
5. The method of claim 1, wherein calculating the illumination coefficient corresponding to each coloring point according to the occlusion shadow coefficient of each coloring point, the illumination direction vector, and the normals corresponding to the plurality of vertexes of the hair of the virtual character comprises:
transforming the normals corresponding to the plurality of vertexes into the world-space coordinate system and normalizing them to obtain the normal vector corresponding to each coloring point;
for the normal vector corresponding to each coloring point, calculating the dot product of the normal vector and the illumination direction vector to obtain a dot product result;
for the dot product result corresponding to each coloring point, applying a smoothstep function to the dot product result to obtain a preliminary illumination coefficient; and
for the preliminary illumination coefficient corresponding to each coloring point, taking the smaller of the preliminary illumination coefficient and the occlusion shadow coefficient as the illumination coefficient.
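A compact sketch of the per-point computation; the smoothstep edge values below are assumptions, since the claim does not fix them:

import numpy as np

def smoothstep(edge0, edge1, x):
    # Standard Hermite smoothstep, as in shading languages.
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def illumination_coefficient(world_normal, light_dir, occlusion_shadow,
                             edge0=-0.2, edge1=0.6):
    n = world_normal / np.linalg.norm(world_normal)   # normalise in world space
    ndotl = float(np.dot(n, light_dir))               # dot with light direction
    preliminary = smoothstep(edge0, edge1, ndotl)     # preliminary coefficient
    return min(preliminary, occlusion_shadow)         # keep the darker factor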
6. The method of claim 1, wherein acquiring the color sampling value and the highlight sampling value of each coloring point comprises:
for each coloring point, acquiring, from a color map, the pixel point corresponding to the UV coordinate values of the coloring point, this pixel point being the color sampling value; and
for each coloring point, acquiring, from a highlight map, the pixel point corresponding to the UV coordinate values of the coloring point, this pixel point being the highlight sampling value.
7. The method of claim 1, wherein acquiring the first operation result corresponding to each coloring point according to the color sampling value, the highlight sampling value, and the illumination coefficient of each coloring point comprises, for each coloring point:
calculating the product of a color parameter value, a color intensity value, and the color sampling value to obtain a first product;
calculating the product of a highlight color value, a highlight intensity value, and the highlight sampling value to obtain a second product;
calculating the sum of the first product and the second product; and
multiplying that sum by the illumination coefficient of the coloring point to obtain the corresponding first operation result.
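In shader terms this is a two-term multiply-add scaled by the light. The Python sketch below mirrors the claim; the default colour and intensity values are placeholders, not values from the patent:

import numpy as np

def first_operation_result(color_sample, highlight_sample, illumination,
                           color_value=np.array([1.0, 0.9, 0.8]), color_intensity=1.0,
                           highlight_color=np.array([1.0, 1.0, 1.0]), highlight_intensity=0.5):
    first_product = color_value * color_intensity * color_sample                 # tinted base
    second_product = highlight_color * highlight_intensity * highlight_sample   # tinted highlight
    return (first_product + second_product) * illumination                      # scale by light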
8. The method of claim 1, wherein acquiring the stroke description information comprises:
acquiring a preprocessed three-dimensional model of the hair of the virtual character, wherein in the preprocessed model the edge region of each strand of hair is a first color and the central region of each strand is a second color; and
for each pixel point in the first-color region of each strand of hair, performing an interpolation operation over that region according to the distance from the pixel point to the central region of the strand it belongs to and the distance from the pixel point to the edge line of that strand, to obtain the stroke description information.
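One plausible form of this interpolation, sketched for a single pixel. Interpreting the result as a normalised position between the strand's edge line and its centre is an assumption; the claim names only the two distances as inputs:

def stroke_description_value(dist_to_center, dist_to_edge_line):
    # 0 at the strand's edge line, approaching 1 toward the central region.
    total = dist_to_center + dist_to_edge_line
    return dist_to_edge_line / total if total > 0.0 else 0.0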
9. The method of claim 1, wherein acquiring the second operation result based on the stroke description information comprises:
applying a step-function operation to the stroke description information and a stroke thickness parameter to obtain a stroke operation result; and
calculating the product of a stroke color value, a stroke intensity value, and the stroke operation result, this product being the second operation result.
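A sketch of this thresholding; the comparison direction and the colour/intensity defaults are assumptions, since the claim does not say on which side of the thickness threshold the stroke is drawn:

import numpy as np

def second_operation_result(stroke_info, stroke_thickness,
                            stroke_color=np.array([0.10, 0.05, 0.05]),
                            stroke_intensity=1.0):
    stroke_mask = 1.0 if stroke_info < stroke_thickness else 0.0   # step function
    return stroke_color * stroke_intensity * stroke_mask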
10. The method of claim 1, wherein, before acquiring, for each coloring point of the plurality of coloring points, a pixel point meeting a preset requirement from the shadow map, the method further comprises:
acquiring a maximum shadow map and a minimum shadow map corresponding to the hair of the virtual character, wherein the maximum shadow map corresponds to the largest shadow cast on the hair of the virtual character by the head ornament of the virtual character under virtual illumination, and the minimum shadow map corresponds to the smallest such shadow;
calculating a middle shadow area from the maximum shadow map and the minimum shadow map;
for each pixel point of the middle shadow area, performing an interpolation operation over the middle shadow area according to the distance from the pixel point to a first boundary of the middle shadow area and the distance from the pixel point to a second boundary of the middle shadow area, to obtain an interpolated shadow area, wherein the first boundary is the boundary adjacent to the minimum shadow and the second boundary is the boundary away from the minimum shadow; and
combining the interpolated shadow area with the minimum shadow map to obtain the shadow map.
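A fuller sketch of the boundary-distance interpolation, using SciPy's Euclidean distance transform. The masks are assumed binary and to contain both a minimum-shadow region and unshadowed pixels, and the linear falloff is an assumption:

import numpy as np
from scipy.ndimage import distance_transform_edt

def build_shadow_map_interpolated(max_shadow, min_shadow):
    # Middle shadow area: inside the maximum shadow but outside the minimum shadow.
    middle = (max_shadow > 0) & (min_shadow == 0)
    # Distance to the first boundary (adjacent to the minimum shadow) and to the
    # second boundary (adjacent to the unshadowed exterior).
    d_first = distance_transform_edt(min_shadow == 0)
    d_second = distance_transform_edt(max_shadow > 0)
    # Weight falls linearly from 1 beside the minimum shadow to 0 at the outer edge.
    weight = np.where(middle, d_second / np.maximum(d_first + d_second, 1e-6), 0.0)
    return np.clip(min_shadow + weight, 0.0, 1.0)

max_mask = np.zeros((8, 8)); max_mask[2:6, 2:6] = 1.0
min_mask = np.zeros((8, 8)); min_mask[3:5, 3:5] = 1.0
shadow_map = build_shadow_map_interpolated(max_mask, min_mask)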
11. A virtual character hair rendering apparatus, the apparatus comprising:
a pixel point acquisition unit, configured to acquire, for each coloring point of a plurality of coloring points, a pixel point meeting a preset requirement from a shadow map, wherein the plurality of coloring points are coloring points corresponding to a plurality of faces comprised in a three-dimensional model of the hair of the virtual character, and the pixel point meeting the preset requirement is recorded as a shadow sampling value;
an illumination parameter determination unit, configured to determine an illumination influence parameter and calculate an occlusion shadow coefficient of each coloring point according to the illumination influence parameter and the shadow sampling value of each coloring point;
a normal acquisition unit, configured to construct a circumscribed sphere wrapping the hair of the virtual character, acquire the normal at the position on the surface of the circumscribed sphere corresponding to each vertex of the hair of the virtual character, and take that normal as the normal of the nearby vertex;
an illumination coefficient calculation unit, configured to calculate the illumination coefficient corresponding to each coloring point according to the occlusion shadow coefficient of each coloring point, the illumination direction vector, and the normals corresponding to the plurality of vertexes of the hair of the virtual character;
a first result acquisition unit, configured to acquire a color sampling value and a highlight sampling value of each coloring point, and to acquire a first operation result corresponding to each coloring point according to the color sampling value, the highlight sampling value, and the illumination coefficient of each coloring point;
a second result acquisition unit, configured to acquire stroke description information and to acquire a second operation result based on the stroke description information; and
a sum result acquisition unit, configured to acquire, for the first operation result corresponding to each coloring point, the sum of the first operation result and the second operation result, and to render the three-dimensional model of the hair of the virtual character according to the sum corresponding to each coloring point to obtain a rendering result.
12. An electronic device, comprising a processor and a memory, the memory storing a plurality of instructions; wherein the processor loads the instructions from the memory to perform the steps of the virtual character hair rendering method according to any one of claims 1 to 10.
13. A computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the virtual character hair rendering method according to any one of claims 1 to 10.
CN202210589067.8A 2022-05-26 2022-05-26 Virtual character hair rendering method and device, electronic equipment and storage medium Pending CN115082607A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210589067.8A CN115082607A (en) 2022-05-26 2022-05-26 Virtual character hair rendering method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115082607A true CN115082607A (en) 2022-09-20

Family

ID=83249054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210589067.8A Pending CN115082607A (en) 2022-05-26 2022-05-26 Virtual character hair rendering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115082607A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190066391A1 (en) * 2017-08-30 2019-02-28 Go Ghost, LLC Method of modifying ray tracing samples after rendering and before rasterizing
CN112755535A (en) * 2021-02-05 2021-05-07 腾讯科技(深圳)有限公司 Illumination rendering method and device, storage medium and computer equipment
CN113223131A (en) * 2021-04-16 2021-08-06 完美世界(北京)软件科技发展有限公司 Model rendering method and device, storage medium and computing equipment
CN113936080A (en) * 2021-09-24 2022-01-14 网易(杭州)网络有限公司 Rendering method and device of virtual model, storage medium and electronic equipment
CN114494570A (en) * 2021-10-18 2022-05-13 北京市商汤科技开发有限公司 Rendering method and device of three-dimensional model, storage medium and computer equipment
CN114022607A (en) * 2021-11-19 2022-02-08 腾讯科技(深圳)有限公司 Data processing method and device and readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wu Jiajia et al., "Uncertainty model of virtual character behavior in serious games for assisted social training", Journal of Image and Graphics, vol. 24, no. 8, 30 September 2019 (2019-09-30), pages 1558-1568 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024082927A1 (en) * 2022-10-18 2024-04-25 腾讯科技(深圳)有限公司 Hair rendering method and apparatus, device, storage medium and computer program product
CN116091684A (en) * 2023-04-06 2023-05-09 杭州片段网络科技有限公司 WebGL-based image rendering method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
US9342918B2 (en) System and method for using indirect texturing to efficiently simulate and image surface coatings and other effects
US8411092B2 (en) 2D imposters for simplifying processing of plural animation objects in computer graphics generation
US7905779B2 (en) Video game including effects for providing different first person experiences of the same video game world and a storage medium storing software for the video game
US6580430B1 (en) Method and apparatus for providing improved fog effects in a graphics system
US6700586B1 (en) Low cost graphics with stitching processing hardware support for skeletal animation
CN109771951A (en) Method, apparatus, storage medium and the electronic equipment that map generates
CN115082607A (en) Virtual character hair rendering method and device, electronic equipment and storage medium
CN115082608A (en) Virtual character clothing rendering method and device, electronic equipment and storage medium
CN113826147A (en) Improvements in animated characters
CN116228943B (en) Virtual object face reconstruction method, face reconstruction network training method and device
CN116704103A (en) Image rendering method, device, equipment, storage medium and program product
KR101146660B1 (en) Image processing device, image processing method, and information recording medium
CN112206519B (en) Method, device, storage medium and computer equipment for realizing game scene environment change
CN114359458A (en) Image rendering method, device, equipment, storage medium and program product
CN116958344A (en) Animation generation method and device for virtual image, computer equipment and storage medium
US20050001835A1 (en) Image generation system, program, and information storage medium
CN112843704B (en) Animation model processing method, device, equipment and storage medium
CN115501590A (en) Display method, display device, electronic equipment and storage medium
CN116958390A (en) Image rendering method, device, equipment, storage medium and program product
Tschirschwitz et al. Interactive 3D visualisation of architectural models and point clouds using low-cost-systems
US7710419B2 (en) Program, information storage medium, and image generation system
CN115035231A (en) Shadow baking method, shadow baking device, electronic apparatus, and storage medium
US7724255B2 (en) Program, information storage medium, and image generation system
Garcia et al. Modifying a game interface to take advantage of advanced I/O devices
Zhu et al. Integrated Co-Designing Using Building Information Modeling and Mixed Reality with Erased Backgrounds for Stock Renovation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination