CN109045691B - Method and device for realizing special effect of special effect object - Google Patents

Method and device for realizing special effect of special effect object

Info

Publication number
CN109045691B
Authority
CN
China
Prior art keywords
special effect
vertex
effect object
data
vertex data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810753307.7A
Other languages
Chinese (zh)
Other versions
CN109045691A (en)
Inventor
曹伟刚 (Cao Weigang)
全俊 (Quan Jun)
张凯 (Zhang Kai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN201810753307.7A
Publication of CN109045691A
Application granted
Publication of CN109045691B
Legal status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/50 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F 2300/53 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing
    • A63F 2300/538 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing for performing operations on behalf of the game client, e.g. rendering
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Generation (AREA)

Abstract

An embodiment of the invention provides a method and a device for realizing the special effect of a special effect object, including the following steps: acquiring special effect data of at least one special effect object that is attached to an attached object in a virtual scene and shares the same material; generating vertex data for a patch of each special effect object according to the position parameters and effect parameters in its special effect data; and sending the vertex data of the patch of each special effect object to an image processor (i.e., the GPU), where the vertex data is processed in the vertex shader stage and the processed vertex data is shaded in the fragment shader stage to realize the special effect. Because every special effect object exists only as the vertex data of a patch, with the position and effect parameters carried inside that vertex data, the method avoids both the file bloat of configuring a separate model file and effect file for each special effect object and the high draw-call count of submitting each object as a model: the vertex data of all same-material objects can be submitted in a single batch, reducing the number of draw calls for the special effect objects.

Description

Method and device for realizing special effect of special effect object
Technical Field
The invention relates to the field of computer technology, and in particular to a method and a device for realizing the special effect of a special effect object.
Background
As games are produced with ever finer detail, more and more special effects are attached to the models of a game's virtual scene; for example, games contain large-scale effects such as meteorites and aviation lights.
In computer graphics rendering, models are drawn using a combination of static batching, dynamic batching, and hardware batching strategies. For example, a model's surface is built from triangular patches, and one call to the graphics drawing interface draws all of a model's patches in a single batch. When models differ, however, their patch data is separate and their vertex data cannot be merged and shared, so the patches of different models must be drawn in separate batches; and even when the vertex data of different models could be merged and shared, they still cannot be drawn in the same batch if their materials differ, their texture maps are inconsistent, or their parameters are inconsistent.
In the prior art, attaching a special effect requires placing a large number of attachment nodes on the attached object, with each effect existing in the form of a model; in addition, an effect file must be configured for the attributes of each effect, such as its color and shape. This produces a large number of effect files and model files, and since each model is drawn in its own batch, attaching many effects to an object drives up the number of draw calls needed to render it.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a method and a device for realizing the special effect of a special effect object that overcome, or at least partially solve, the above problems.
To solve the above problem, an embodiment of the present invention discloses a method for realizing the special effect of a special effect object, applied to a virtual scene including at least one attached object, the method including:
obtaining special effect data of at least one special effect object that is attached to the attached object in the virtual scene and shares the same material, wherein the special effect data includes position parameters and effect parameters of the special effect object;
generating vertex data of a patch of each special effect object according to the position parameter and the effect parameter of each special effect object;
and sending the vertex data of each special effect object to an image processor, processing the vertex data of each special effect object in a vertex shader stage of the image processor, and shading the processed vertex data in a fragment shader stage to realize the special effect of the special effect object.
Optionally, the position parameter includes a local coordinate value of the center point of the special effect object relative to the attached object, the effect parameters include the size, color parameter, and control parameter of the special effect object, and the patch is a rectangular patch; the step of generating vertex data of the patch of each special effect object according to the position parameter and the effect parameter of each special effect object includes:
determining a rectangular patch according to the center point, wherein the size of the rectangular patch is equal to the size of the special effect object;
determining the offsets of the four vertices of the rectangular patch relative to the local coordinate value of the center point according to the size of the special effect object;
acquiring texture coordinates of the four vertices;
and determining the local coordinate value of the center point, the texture coordinates of the four vertices, the color parameter, the control parameter, and the offsets as the vertex data of the patch of the special effect object.
Optionally, the step of determining the local coordinate value of the center point, the texture coordinates of the four vertices, the color parameter, the control parameter, and the offsets as the vertex data of the patch of the special effect object includes:
storing the x offset and y offset of each vertex and the control parameter in the nx, ny, and nz components, respectively, of the normal vector of each vertex.
Optionally, the step of sending the vertex data of the patch of each special effect object to an image processor includes:
when a special effect request instruction for a special effect object on an attached object is detected, retrieving the vertex data of the special effect object in the virtual scene;
and sending the vertex data of the special effect object to the image processor.
Optionally, the step of processing the vertex data of each special effect object in the vertex shader stage of the image processor includes:
reading the vertex data of each special effect object;
determining the position of each vertex in camera space according to the local coordinate value of the center point in the vertex data and the normal vector of the vertex;
and determining the position of the vertex on the display screen according to the position of the vertex in camera space.
Optionally, the step in which the fragment shader stage shades the processed vertex data includes:
acquiring a texture map;
and drawing the patch of the special effect object using the texture map, the texture coordinates, the color parameter, the control parameter, and the positions of the vertices on the display screen, to obtain the shading information of the patch of the special effect object.
Optionally, the step of determining the position of a vertex in camera space according to the local coordinate value of the center point in the vertex data and the normal vector of the vertex includes:
obtaining the local coordinate value of the center point from the vertex data;
converting the local coordinate value of the center point according to the world-to-camera transformation matrix to obtain the camera-space coordinate value of the center point;
and calculating the camera-space coordinate value of the vertex according to the camera-space coordinate value of the center point and the offset represented by the nx and ny components of the normal vector of the vertex.
Optionally, the step of drawing the patch of the special effect object using the texture map, the texture coordinates, the color parameter, the control parameter, and the positions of the vertices on the display screen, to obtain the shading information of the patch of the special effect object, includes:
sampling the texture map according to the texture coordinates in the vertex data to obtain sample data;
calculating the final alpha channel value of the vertex according to the alpha channel value of the sample data and the normal vector component nz in the vertex data;
and drawing the patch of the special effect object according to the final alpha channel value, the color parameter, and the control parameter to obtain the shading information of the patch.
To solve the above problem, an embodiment of the present invention discloses a device for realizing the special effect of a special effect object, applied to a virtual scene including at least one attached object, the device including:
a special effect data acquisition module, configured to acquire special effect data of at least one special effect object that is attached to the attached object in the virtual scene and shares the same material, the special effect data including position parameters and effect parameters of the special effect object;
a vertex data generation module, configured to generate vertex data of a patch of each special effect object according to the position parameter and the effect parameter of each special effect object;
and a vertex data sending module, configured to send the vertex data of the patch of each special effect object to the image processor, where the vertex data of each special effect object is processed in the vertex shader stage of the image processor and the processed vertex data is shaded in the fragment shader stage to realize the special effect of the special effect object.
Optionally, the position parameter includes a local coordinate value of the center point of the special effect object relative to the attached object, the effect parameters include the size, color parameter, and control parameter of the special effect object, and the vertex data generation module includes:
a rectangular patch determination submodule, configured to determine a rectangular patch according to the center point, the size of the rectangular patch being equal to the size of the special effect object;
a vertex offset determination submodule, configured to determine the offsets of the four vertices of the rectangular patch relative to the local coordinate value of the center point according to the size of the special effect object;
a texture coordinate acquisition submodule, configured to acquire texture coordinates of the four vertices;
and a vertex data determination submodule, configured to determine the local coordinate value of the center point, the texture coordinates of the four vertices, the color parameter, the control parameter, and the offsets as the vertex data of the special effect object.
Optionally, the vertex data determination submodule includes:
an offset and control parameter storage unit, configured to store the x offset and y offset of each vertex and the control parameter in the nx, ny, and nz components, respectively, of the normal vector of each vertex.
Optionally, the vertex data sending module includes:
a vertex data retrieval submodule, configured to retrieve the vertex data of a special effect object in the virtual scene when a special effect request instruction for the special effect object is detected;
and a vertex data sending submodule, configured to send the vertex data of the special effect object to the image processor.
Optionally, the vertex shader includes the following modules:
a vertex data reading module, configured to read the vertex data of each special effect object;
a vertex position determination module, configured to determine the position of each vertex in camera space according to the local coordinate value of the center point in the vertex data and the normal vector of the vertex;
and a display screen position determination module, configured to determine the position of the vertex on the display screen according to its position in camera space.
Optionally, the fragment shader includes the following modules:
a texture map acquisition module, configured to acquire a texture map;
and a drawing module, configured to draw the patch of the special effect object using the texture map, the texture coordinates, the color parameter, the control parameter, and the positions of the vertices on the display screen, to obtain the shading information of the patch of the special effect object.
Optionally, the vertex position determination module includes:
a local coordinate value acquisition submodule, configured to obtain the local coordinate value of the center point from the vertex data;
a coordinate transformation submodule, configured to transform the local coordinate value of the center point according to the world-to-camera transformation matrix to obtain the camera-space coordinate value of the center point;
and a vertex position determination submodule, configured to calculate the camera-space coordinate value of the vertex from the camera-space coordinate value of the center point and the offset represented by the nx and ny components of the vertex's normal vector.
Optionally, the drawing module includes:
a texture sampling submodule, configured to sample the texture map according to the texture coordinates in the vertex data to obtain sample data;
a channel value calculation submodule, configured to calculate the final alpha channel value of the vertex from the alpha channel value of the sample data and the normal vector component nz in the vertex data;
and a drawing submodule, configured to draw the patch of the special effect object according to the final alpha channel value, the color parameter, and the control parameter to obtain the shading information of the patch.
To solve the above problem, an embodiment of the present invention discloses a computer-readable medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method for realizing the special effect of a special effect object according to any one of the embodiments of the present invention.
In order to solve the above problem, an embodiment of the present invention discloses an electronic device, including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method for realizing the special effect of a special effect object according to any one of the embodiments of the present invention by executing the executable instructions.
The embodiment of the invention has the following advantages:
In the embodiments of the invention, special effect data, including position parameters and effect parameters, is first obtained for the special effect objects that share the same material; vertex data for the patch of each special effect object is then generated from those position and effect parameters. When the special effect objects are loaded onto an attached object, the vertex data of all same-material special effect objects is sent to the image processor for drawing, so that their special effects are displayed on the attached object. Because every special effect object exists in the form of the vertex data of a patch, with its position and effect parameters carried inside that vertex data, the method avoids both the file bloat of configuring a model file and an effect file for each special effect object and the high draw-call count of submitting each object as a model. Only the position and effect parameters of the same-material special effect objects need to be configured, their vertex data can be submitted to the image processor in a single batch, and the number of draw calls for the special effect objects on the attached object is reduced.
Drawings
FIG. 1 is a flowchart illustrating steps of an embodiment 1 of a method for implementing a special effect of a special effect object according to the present invention;
FIG. 2 is a flowchart illustrating steps of embodiment 2 of a method for implementing a special effect of a special effect object according to the present invention;
FIG. 3 is a schematic diagram of a rectangular patch of an embodiment of the invention;
FIG. 4 is a schematic diagram of an example of aviation light attribute editing according to an embodiment of the present invention;
FIG. 5 is a block diagram of an embodiment of a device for realizing the special effect of a special effect object according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to fig. 1, a flowchart of steps of an embodiment 1 of a method for implementing a special effect of a special effect object of the present invention is shown, which may specifically include the following steps:
step 101, obtaining special effect data of at least one special effect object which is connected to the connected object in the virtual scene and has the same material, wherein the special effect data comprises position parameters and effect parameters of the special effect object.
In the embodiment of the present invention, the virtual scene may be a scene displayed in an electronic game, the virtual scene may be a three-dimensional scene, the special effect object may be an object generating a special effect, for example, an aviation lamp, a meteorite, and other objects having a special effect in the game scene, and the hitched object may be a carrier of the special effect object, for example, a model such as a ship in the virtual scene.
In practical application, special effect objects of different materials can be hung on the hanging objects, or special effect objects of the same material are hung on the different hanging objects, and special effect data of the special effect objects of the same material hung on all the hanging objects in the virtual scene can be acquired, wherein the special effect data comprises position parameters and effect parameters of the special effect objects, for example, the special effect objects are taken as aviation lamps, and parameters such as positions, colors, flashing periods, flashing starting time and sizes of all the aviation lamps in the virtual scene can be acquired.
The position parameter and the effect parameter of the special effect object may be generated off-line, for example, when performing art editing on the hanging object, the position of the special effect object is selected on the hanging object and the effect parameter of the special effect object is set.
Step 102, generating vertex data of a patch of each special effect object according to the position parameter and the effect parameter of each special effect object.
In the embodiment of the present invention, a rectangular patch may be used to form a special effect object, and the position parameter and the effect parameter are carried in the attributes of the patch's four vertices, so the attribute data of the four vertices constitutes the vertex data of the patch of the special effect object. For example, the vertex data may include the local coordinate value of the center point of the rectangular patch, a color parameter, texture coordinates, and a vertex normal vector, where the normal vector stores the offset of the vertex relative to the center point in camera space together with a control parameter. Specifically, the normal vector has nx, ny, and nz components: nx and ny represent the coordinate offset of the vertex relative to the center point in camera space, and nz carries the control parameter; for an aviation light effect, for example, nz represents the brightness of the light.
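To make the layout concrete, here is a minimal Python sketch of how the four vertex records of one patch might be assembled on the CPU side; the EffectVertex class, the helper name, and the corner texture-coordinate assignment are illustrative assumptions, not taken from the patent:

from dataclasses import dataclass

@dataclass
class EffectVertex:
    position: tuple  # (x, y, z): local coordinates of the patch center, shared by all four vertices
    color: tuple     # color parameter of the effect
    uv: tuple        # (u, v): texture coordinates of this corner
    normal: tuple    # (nx, ny, nz): camera-space x/y offset from the center; nz carries the control parameter

def make_patch_vertices(center, size, color, control):
    # Build the four vertex records of one rectangular effect patch.
    hw, hh = size[0] / 2.0, size[1] / 2.0
    corners = [(-hw,  hh, (0.0, 0.0)),  # A: top-left
               ( hw,  hh, (1.0, 0.0)),  # B: top-right
               (-hw, -hh, (0.0, 1.0)),  # C: bottom-left
               ( hw, -hh, (1.0, 1.0))]  # D: bottom-right
    return [EffectVertex(center, color, uv, (dx, dy, control))
            for dx, dy, uv in corners]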
Step 103, sending the vertex data of each special effect object to an image processor, processing the vertex data of each special effect object in the vertex shader stage of the image processor, and shading the processed vertex data in the fragment shader stage to realize the special effect of the special effect object.
In the embodiment of the invention, after the vertex data of the patches of all same-material special effect objects has been generated, all of the vertex data can be sent to the image processor. When the vertex shader stage of the image processor receives the vertex data, it computes the screen position of each vertex from the vertex data and then passes the results on to the fragment shader stage, which draws the special effect of the special effect object. For example, the vertex shader stage applies the world-to-camera coordinate transformation to the local coordinate value of the center point and the offsets in the vertex data, converts the result into display screen coordinates, and submits them to the fragment shader; the fragment shader stage then samples and renders the patch according to the vertex color parameter, the texture coordinates, the control parameter in the normal vector, and so on, to display the special effect of the special effect object.
In this implementation, because the special effect objects share the same material and exist purely as patch vertex data, only the position parameters and effect parameters of the same-material special effect objects need to be configured. This avoids the file bloat of configuring a model file and an effect file for each special effect object and the high draw-call count of submitting each object as a model: the vertex data of all same-material special effect objects on an attached object can be submitted to the image processor in a single batch, reducing the number of draw calls for the special effect objects on the attached object.
Referring to fig. 2, a flowchart of steps of embodiment 2 of a method for implementing a special effect of a special effect object of the present invention is shown, which may specifically include the following steps:
step 201, obtaining special effect data of at least one special effect object which is connected to the connected object in the virtual scene and has the same material, wherein the special effect data comprises position parameters and effect parameters of the special effect object, the position parameters comprise local coordinate values of a central point of the special effect object, and the effect parameters comprise size, color parameters and control parameters of the special effect object.
In the embodiment of the present invention, the position parameter may be a local coordinate of the special effect object on the hanging object, and the effect parameter may be a parameter for realizing a special effect of the special effect object, for example, the special effect object is taken as an aviation lamp, and the effect parameter may include parameters such as a color, a flashing period, flashing start time, and a size of the aviation lamp. The position parameter and the effect parameter may be generated by offline editing, and when the game operation requires loading of the special effect object, the position parameter and the effect parameter of the corresponding special effect object are obtained, for example, taking the aviation lamp as an example, and when the aviation lamp needs to be loaded on a model such as a ship, the local coordinate value, the size, the color parameter, the brightness parameter, and the like of the center point of the aviation lamp are obtained.
Step 202, determining a rectangular patch according to the central point, wherein the size of the rectangular patch is equal to the size of the special effect object.
Fig. 3 is a schematic diagram of a patch according to an embodiment of the present invention. In fig. 3, two triangular patches are combined into one rectangular patch, and a special effect object may be formed from such a rectangular patch. Specifically, in a preferred embodiment of the present invention, the local coordinates of the center point of the special effect object may be taken as the center of the rectangular patch, and the length and width of the rectangular patch are then determined from the size of the special effect object, so that they equal the length and width of the effect.
The length and width of the rectangular patch can of course also be set by a preset ratio; for example, if the size of the ship is 100 × 50 × 20, the size of an aviation light may be 0.5 × 0.5, i.e., the length and width of its rectangular patch are both 0.5.
Step 203, determining the offsets of the four vertices of the rectangular patch relative to the local coordinate value of the center point according to the size of the special effect object.
In practical applications, the positions of the four vertices of the rectangular patch can be expressed as offsets of each vertex relative to the local coordinate value of the center point, and these offsets can be calculated from the size of the effect. As shown in fig. 3, point P is the center point of the rectangular patch, the coordinates of the center point are (10, 15), and the size of the effect (the size of the rectangular patch) is 2 × 1; the offsets of A, B, C, and D relative to point P are then:
Vertex    Offset in the X direction    Offset in the Y direction
A         -1                           0.5
B         1                            0.5
C         -1                           -0.5
D         1                            -0.5
For each special effect object, once the local coordinate value of its center point and its size are known, the specific positions of the four vertices of its rectangular patch can be obtained. In the vertex shader stage, the local coordinate value of the center point is transformed into camera-space coordinates, and the camera-space coordinates of each vertex are then obtained by adding the vertex's offset to the camera-space coordinates of the center point.
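Continuing the worked example of fig. 3, a short Python sketch that reproduces the offsets in the table above and the vertex positions they imply (local 2D coordinates only; the z component is omitted for brevity):

center = (10.0, 15.0)       # point P in fig. 3
width, height = 2.0, 1.0    # size of the effect, i.e. of the rectangular patch
hw, hh = width / 2.0, height / 2.0
offsets = {"A": (-hw, hh), "B": (hw, hh), "C": (-hw, -hh), "D": (hw, -hh)}
for name, (dx, dy) in offsets.items():
    print(name, (center[0] + dx, center[1] + dy))
# prints A (9.0, 15.5), B (11.0, 15.5), C (9.0, 14.5), D (11.0, 14.5)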
Step 204, acquiring texture coordinates of the four vertices.
Texture coordinates, short for texture map coordinates, define the position of each vertex within the texture image and thereby determine where the surface texture map is placed. Texture coordinates put each point of the texture image into exact correspondence with the surface of the model object, and the positions in the gaps between points are filled in by smooth image interpolation performed in software.
Step 205, determining the local coordinate value of the center point, the texture coordinates of the four vertices, the color parameter, the control parameter and the offset as the vertex data of the patch of the special effect object.
In the embodiment of the present invention, the vertex data may include the local coordinate value of the center point, the texture coordinates of the vertex, the color parameter, the control parameter, and the offset of the vertex from the center point. Specifically, the offset and the control parameter of each vertex may be recorded in the vertex's normal vector. For example, the vertex data may be as follows:
the vertex data of the four vertices a, B, C, D of the rectangular patch contains the following information:
[Vertex attribute listing not reproduced in the source; as described below, each vertex carries Position (x, y, z), Color, Uv (ux, uy, uz), and Normal (nx, ny, nz).]
The four vertices of the rectangular patch of every special effect object carry this data. As long as the effect size of the rectangular patch is known, the offsets of its four vertices can be calculated, and in the vertex shader stage the camera-space position of each vertex can be determined from its offset and the computed camera-space coordinates of the center point.
In the embodiment of the invention, the effect can be controlled through the Position (x, y, z), Color, Uv (ux, uy, uz), and Normal (nx, ny, nz) attributes in the vertex data; for example, a special effect object can be controlled in a game script by:
for vert in range(0, 4):  # update all four vertices of the patch
    primitive.set_vert(vert, x, y, z, color, u, v,
                       nx * _bright_scale, ny * _bright_scale, nz * _bright_scale)
Here vert is the vertex index; x, y, and z give the position; color is the color; u and v are the sampling texture coordinates; nx and ny are the offsets of the vertex in the x and y directions relative to the center point in camera space; and nz is the control parameter. A special effect object can be controlled by changing nz; for an aviation light, for example, the value of nz can be driven from the game script to turn the light on or off.
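As an example of such control, a minimal sketch of driving nz from the flashing period and flashing start time mentioned earlier; the half-period on/off rule and the blink_brightness helper are illustrative assumptions rather than the patent's own logic:

def blink_brightness(now, flash_start, flash_period):
    # On for the first half of each period, off for the second half.
    phase = ((now - flash_start) % flash_period) / flash_period
    return 1.0 if phase < 0.5 else 0.0

nz = blink_brightness(now=12.3, flash_start=0.0, flash_period=2.0)
# Pass nz into primitive.set_vert(...) as above to switch the aviation light on or off.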
As shown in fig. 4, for aviation lights fixed along two evenly divided line segments (L1, L2), the artist only needs to edit the positions (x, y, z) of the six endpoints (A, B, C, D, E, F) of the two segments during offline editing to change the center positions of the lights along them, as sketched below. This avoids the prior-art practice of manually anchoring each aviation light, and reduces the amount of data to be edited.
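A sketch of that idea, assuming the lights are spaced evenly along each segment by linear interpolation between its endpoints (the helper and the even-spacing rule are assumptions for illustration):

def light_centers(p0, p1, count):
    # Evenly distribute `count` light center points along the segment p0 -> p1.
    return [tuple(a + (b - a) * i / (count - 1) for a, b in zip(p0, p1))
            for i in range(count)]

centers = light_centers((0.0, 5.0, 2.0), (10.0, 5.0, 2.0), count=6)
# Each center becomes the Position attribute of one aviation light's rectangular patch.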
Step 206, sending the vertex data of the patch of each special effect object to the image processor.
In the embodiment of the invention, when a special effect request instruction for a special effect object is detected while the game program is running, the vertex data of the special effect object in the virtual scene is retrieved and sent to the image processor. The special effect request instruction may be, for example, a refresh request for the virtual scene or a load request for an attached object: when the virtual scene needs to be refreshed or an attached object needs to be loaded, the vertex data of the patches of the special effect objects on the attached object can be acquired and sent to the image processor.
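On the script side, the single-batch submission the patent describes could look roughly like the following; the submit_batch call and the effect fields are hypothetical placeholders for whatever drawing interface the engine actually exposes (make_patch_vertices is the helper sketched earlier):

def submit_same_material_effects(effects, gpu):
    # Gather the patch vertices of every same-material effect and submit them as one batch.
    vertices = []
    for effect in effects:
        vertices.extend(make_patch_vertices(effect.center, effect.size,
                                            effect.color, effect.control))
    gpu.submit_batch(vertices)  # hypothetical API: one draw call covers all the effects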
Step 207, the vertex data of each special effect object is read.
After the image processor receives the submitted vertex data, the vertex shader stage may read the vertex data.
Step 208, determining the position of each vertex in camera space according to the local coordinate value of the center point in the vertex data and the normal vector of the vertex, and calculating the position of the vertex on the display screen.
In the vertex shader stage, the position of the vertex in camera space is determined first, and its position on the display screen is then determined from it; specifically, this may include the following sub-steps:
Sub-step 208-1, obtaining the local coordinate value of the center point from the vertex data;
Sub-step 208-2, converting the local coordinate value of the center point according to the world-to-camera transformation matrix to obtain the camera-space coordinate value of the center point;
Sub-step 208-3, calculating the camera-space coordinate value of the vertex from the camera-space coordinate value of the center point and the offset represented by the nx and ny components of the vertex normal vector;
Sub-step 208-4, determining the position of the vertex on the display screen according to the position of the vertex in camera space.
In practical applications, sub-steps 208-1 through 208-3 may be implemented as follows:
[Vertex shader listing not reproduced in the source; the variables it uses are explained below.]
Here pos_local is the local coordinate value of the center point of the rectangular patch, wold_view (i.e., world_view) is the world-to-camera transformation matrix, pos_view is the camera-space coordinate value of the center point after conversion, real_view is the resulting camera-space coordinate value of the vertex, and a_nor is the vertex normal vector, whose a_nor.x and a_nor.y components hold the offset of the vertex from the center point in camera space. In camera space, therefore, the vertex's x coordinate is pos_view.x + a_nor.x, its y coordinate is pos_view.y + a_nor.y, and its z coordinate is pos_view.z; that is, the four vertices of a rectangular patch lie in the same z plane in camera space, and the patch turns as the camera turns, so each rectangular patch behaves as an independent billboard.
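Since the original listing is not reproduced above, here is a small numpy sketch of the same computation under the conventions just described (a 4 × 4 world-to-camera matrix applied to a column vector, with offsets in a_nor.x and a_nor.y); the function name and matrix convention are assumptions:

import numpy as np

def vertex_camera_pos(pos_local, world_view, a_nor):
    # Transform the patch center into camera space, then add the vertex's offset.
    center = np.array([pos_local[0], pos_local[1], pos_local[2], 1.0])
    pos_view = world_view @ center   # camera-space position of the center point
    real_view = pos_view.copy()
    real_view[0] += a_nor[0]         # nx: x offset from the center in camera space
    real_view[1] += a_nor[1]         # ny: y offset from the center in camera space
    return real_view                 # z unchanged: all four vertices share one z plane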
Through sub-steps 208-1 through 208-4 above, the positions of the center point and the four vertices of the rectangular patch in camera space, and the positions of the vertices on the display screen, can be determined.
Step 209, acquiring the texture map.
In the embodiment of the invention, the same texture map can be used for all special effect objects of the same material; for example, one texture map, defined in advance, can be shared by all the aviation lights.
Step 210, drawing the patch of the special effect object using the texture map, the texture coordinates, the color parameter, the control parameter, and the positions of the vertices on the display screen, to obtain the shading information of the patch of the special effect object.
Specifically, step 210 may include the following sub-steps:
Sub-step 210-1, sampling the texture map according to the texture coordinates in the vertex data to obtain sample data;
Sub-step 210-2, calculating the final alpha channel value of the vertex from the alpha channel value of the sample data and the normal vector component nz in the vertex data;
Sub-step 210-3, drawing the patch of the special effect object according to the final alpha channel value, the color parameter, and the control parameter to obtain the shading information of the patch.
Specifically, sub-steps 210-1 through 210-3 may be implemented as follows:
{
    lowp float4 result = sample(tex0, uv.xy);          // sample the texture map at the vertex's uv
    lowp float3 local_color = float3(v_diffuse.xyz);   // vertex color
    float alpha = result.a * a_nor.z;                  // map alpha scaled by the nz control parameter
    float3 final_color = local_color * alpha;
    pixel(float4(final_color.xyz, alpha));             // emit the shaded fragment
}
Here tex0 is the texture map and uv holds the texture coordinates used to sample it; the sample is stored in result, and v_diffuse is the vertex color. The alpha channel of the map, i.e., its transparency variation, can represent the shape of the special effect object. In the fragment shader stage, the alpha value is obtained as the product of the sample's alpha channel component result.a and the normal vector component a_nor.z from the vertex data, and represents brightness. After the patch has been drawn and its shading information obtained, the fragment shader stage renders the patch, displaying the special effect of the special effect object on the screen.
In this implementation, the local coordinate value of the center point is recorded in the position attribute of the vertex data, the vertex's camera-space offset from the center point and the control parameter are recorded in the normal vector, and the vertex shader stage converts the center point's coordinates into camera space before determining each vertex's position there. As a result, the vertex data of all same-material special effect objects can be submitted to the image processor in one batch, and the patches of the special effect objects can be drawn in a single pass from the vertex data. This avoids the file bloat of configuring a model file and an effect file for each special effect object and the high draw-call count of submitting each object as a model, reducing the number of draw calls for the special effect objects.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
Referring to fig. 5, a block diagram of an embodiment of a device for realizing the special effect of a special effect object according to the present invention is shown; it may specifically include the following modules:
a special effect data acquisition module 301, configured to acquire special effect data of at least one special effect object that is attached to the attached object in the virtual scene and shares the same material, the special effect data including position parameters and effect parameters of the special effect object;
a vertex data generation module 302, configured to generate vertex data of a patch of each special effect object according to the position parameter and the effect parameter of each special effect object;
and a vertex data sending module 303, configured to send the vertex data of the patch of each special effect object to the image processor, where the vertex data of each special effect object is processed in the vertex shader stage of the image processor and the processed vertex data is drawn in the fragment shader stage to realize the special effect of the special effect object.
Optionally, the position parameter includes a local coordinate value of the center point of the special effect object relative to the attached object, the effect parameters include the size, color parameter, and control parameter of the special effect object, and the vertex data generation module 302 includes:
a rectangular patch determination submodule, configured to determine a rectangular patch according to the center point, the size of the rectangular patch being equal to the size of the special effect object;
a vertex offset determination submodule, configured to determine the offsets of the four vertices of the rectangular patch relative to the local coordinate value of the center point according to the size of the special effect object;
a texture coordinate acquisition submodule, configured to acquire texture coordinates of the four vertices;
and a vertex data determination submodule, configured to determine the local coordinate value of the center point, the texture coordinates of the four vertices, the color parameter, the control parameter, and the offsets as the vertex data of the special effect object.
Optionally, the vertex data determination submodule includes:
an offset and control parameter storage unit, configured to store the x offset and y offset of each vertex and the control parameter in the nx, ny, and nz components, respectively, of the normal vector of each vertex.
Optionally, the vertex data sending module 303 includes:
a vertex data retrieval submodule, configured to retrieve the vertex data of a special effect object in the virtual scene when a special effect request instruction for the special effect object is detected;
and a vertex data sending submodule, configured to send the vertex data of the special effect object to the image processor.
Optionally, the vertex shader includes the following modules:
a vertex data reading module, configured to read the vertex data of each special effect object;
a vertex position determination module, configured to determine the position of each vertex in camera space according to the local coordinate value of the center point in the vertex data and the normal vector of the vertex;
and a display screen position determination module, configured to determine the position of the vertex on the display screen according to its position in camera space.
Optionally, the fragment shader includes the following modules:
a texture map acquisition module, configured to acquire a texture map;
and a drawing module, configured to draw the patch of the special effect object using the texture map, the texture coordinates, the color parameter, the control parameter, and the positions of the vertices on the display screen, to obtain the shading information of the patch of the special effect object.
Optionally, the vertex position determination module includes:
a local coordinate value acquisition submodule, configured to obtain the local coordinate value of the center point from the vertex data;
a coordinate transformation submodule, configured to transform the local coordinate value of the center point according to the world-to-camera transformation matrix to obtain the camera-space coordinate value of the center point;
and a vertex position determination submodule, configured to calculate the camera-space coordinate value of the vertex from the camera-space coordinate value of the center point and the offset represented by the nx and ny components of the vertex's normal vector.
Optionally, the drawing module includes:
a texture sampling submodule, configured to sample the texture map according to the texture coordinates in the vertex data to obtain sample data;
a channel value calculation submodule, configured to calculate the final alpha channel value of the vertex from the alpha channel value of the sample data and the normal vector component nz in the vertex data;
and a drawing submodule, configured to draw the patch of the special effect object according to the final alpha channel value, the color parameter, and the control parameter to obtain the shading information of the patch.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method and the device for realizing the special effect of a special effect object provided by the present invention have been introduced in detail above. Specific examples are used herein to explain the principle and implementation of the invention, and the description of the embodiments is only intended to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, in light of the idea of the present invention, make changes to the specific embodiments and the scope of application; in summary, the contents of this specification should not be construed as limiting the invention.

Claims (11)

1. A method for realizing the special effect of a special effect object, applied to a virtual scene comprising at least one attached object, the method comprising:
obtaining special effect data of at least one special effect object that is attached to the attached object in the virtual scene and shares the same material, wherein the special effect data comprises position parameters and effect parameters of the special effect object;
generating vertex data of a patch of each special effect object according to the position parameter and the effect parameter of each special effect object;
sending the vertex data of the patch of each special effect object to an image processor, processing the vertex data of each special effect object in a vertex shader stage of the image processor, and shading the processed vertex data in a fragment shader stage to realize the special effect of the special effect object;
wherein the position parameter comprises a local coordinate value of the center point of the special effect object relative to the attached object, and the effect parameters comprise the size, color parameter, and control parameter of the special effect object.
2. The method of claim 1, wherein the patch is a rectangular patch, and the step of generating vertex data of the patch of each special effect object according to the position parameter and the effect parameter of each special effect object comprises:
determining a rectangular patch according to the center point, wherein the size of the rectangular patch is equal to the size of the special effect object;
determining the offsets of the four vertices of the rectangular patch relative to the local coordinate value of the center point according to the size of the special effect object;
acquiring texture coordinates of the four vertices;
and determining the local coordinate value of the center point, the texture coordinates of the four vertices, the color parameter, the control parameter, and the offsets as the vertex data of the patch of the special effect object.
3. The method of claim 2, wherein the step of determining the local coordinate value of the center point, the texture coordinates of the four vertices, the color parameter, the control parameter, and the offsets as the vertex data of the patch of the special effect object comprises:
storing the x offset and y offset of each vertex and the control parameter in the nx, ny, and nz components, respectively, of the normal vector of each vertex.
4. The method of claim 3, wherein the step of sending the vertex data of the patch of each special effect object to an image processor comprises:
when a special effect request instruction for a special effect object on an attached object is detected, retrieving the vertex data of the special effect object in the virtual scene;
and sending the vertex data of the special effect object to the image processor.
5. The method of claim 3, wherein the step of processing the vertex data of each special effect object in the vertex shader stage of the image processor comprises:
reading the vertex data of each special effect object;
determining the position of each vertex in camera space according to the local coordinate value of the center point in the vertex data and the normal vector of the vertex;
and determining the position of the vertex on the display screen according to the position of the vertex in camera space.
6. The method of claim 5, wherein the step in which the fragment shader stage shades the processed vertex data comprises:
acquiring a texture map;
and drawing the patch of the special effect object using the texture map, the texture coordinates, the color parameter, the control parameter, and the positions of the vertices on the display screen to obtain the shading information of the patch of the special effect object.
7. The method of claim 5, wherein the step of determining the position of a vertex in camera space based on the local coordinate value of the center point in the vertex data and the normal vector of the vertex comprises:
obtaining the local coordinate value of the center point from the vertex data;
converting the local coordinate value of the center point according to the world-to-camera transformation matrix to obtain the camera-space coordinate value of the center point;
and calculating the camera-space coordinate value of the vertex according to the camera-space coordinate value of the center point and the offset represented by the nx and ny components of the normal vector of the vertex.
8. The method of claim 6, wherein the step of drawing the patch of the special effect object using the texture map, the texture coordinates, the color parameter, the control parameter, and the positions of the vertices on the display screen to obtain the shading information of the patch of the special effect object comprises:
sampling the texture map according to the texture coordinates in the vertex data to obtain sample data;
calculating the final alpha channel value of the vertex according to the alpha channel value of the sample data and the normal vector component nz in the vertex data;
and drawing the patch of the special effect object according to the final alpha channel value, the color parameter, and the control parameter to obtain the shading information of the patch.
9. A device for realizing the special effect of a special effect object, applied to a virtual scene comprising at least one attached object, the device comprising:
a special effect data acquisition module, configured to acquire special effect data of at least one special effect object that is attached to the attached object in the virtual scene and shares the same material, the special effect data comprising position parameters and effect parameters of the special effect object;
a vertex data generation module, configured to generate vertex data of a patch of each special effect object according to the position parameter and the effect parameter of each special effect object;
a vertex data sending module, configured to send the vertex data of the patch of each special effect object to an image processor, where the vertex data of each special effect object is processed in a vertex shader stage of the image processor and the processed vertex data is shaded in a fragment shader stage to realize the special effect of the special effect object;
wherein the position parameter comprises a local coordinate value of the center point of the special effect object relative to the attached object, and the effect parameters comprise the size, color parameter, and control parameter of the special effect object.
10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method for realizing a special effect of a special effect object according to any one of claims 1 to 8.
11. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform, via execution of the executable instructions, the method for realizing a special effect of a special effect object according to any one of claims 1 to 8.
CN201810753307.7A 2018-07-10 2018-07-10 Method and device for realizing special effect of special effect object Active CN109045691B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810753307.7A CN109045691B (en) 2018-07-10 2018-07-10 Method and device for realizing special effect of special effect object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810753307.7A CN109045691B (en) 2018-07-10 2018-07-10 Method and device for realizing special effect of special effect object

Publications (2)

Publication Number Publication Date
CN109045691A CN109045691A (en) 2018-12-21
CN109045691B (en) 2022-02-08

Family

ID=64819526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810753307.7A Active CN109045691B (en) 2018-07-10 2018-07-10 Method and device for realizing special effect of special effect object

Country Status (1)

Country Link
CN (1) CN109045691B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070496B (en) * 2019-02-28 2020-07-31 北京字节跳动网络技术有限公司 Method and device for generating image special effect and hardware device
CN109903366B (en) * 2019-03-13 2023-07-14 网易(杭州)网络有限公司 Virtual model rendering method and device, storage medium and electronic equipment
CN110288670B (en) * 2019-06-19 2023-06-23 杭州绝地科技股份有限公司 High-performance rendering method for UI (user interface) tracing special effect
CN110287431B (en) * 2019-06-27 2021-08-24 北京金山安全软件有限公司 Image file loading method and device, electronic equipment and storage medium
CN110354500A (en) * 2019-07-15 2019-10-22 网易(杭州)网络有限公司 Effect processing method, device, equipment and storage medium
CN111111177A (en) * 2019-12-23 2020-05-08 北京像素软件科技股份有限公司 Method and device for disturbing background by special effect of game and electronic equipment
CN111508047B (en) * 2020-04-21 2023-08-22 网易(杭州)网络有限公司 Animation data processing method and device
CN111729304B (en) * 2020-05-26 2024-04-05 广州尊游软件科技有限公司 Method for displaying mass objects
CN111882637B (en) * 2020-07-24 2023-03-31 上海米哈游天命科技有限公司 Picture rendering method, device, equipment and medium
CN112435323B (en) * 2020-11-26 2023-08-22 网易(杭州)网络有限公司 Light effect processing method, device, terminal and medium in virtual model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106504185A (en) * 2016-10-26 2017-03-15 腾讯科技(深圳)有限公司 One kind renders optimization method and device
CN106780686A (en) * 2015-11-20 2017-05-31 网易(杭州)网络有限公司 The merging rendering system and method, terminal of a kind of 3D models

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3363137B2 (en) * 2000-11-29 2003-01-08 コナミ株式会社 Hit determination method, apparatus thereof, computer-readable recording medium having a hit determination processing program recorded therein, and video game apparatus

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780686A (en) * 2015-11-20 2017-05-31 网易(杭州)网络有限公司 The merging rendering system and method, terminal of a kind of 3D models
CN106504185A (en) * 2016-10-26 2017-03-15 腾讯科技(深圳)有限公司 One kind renders optimization method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Graphics Rendering and Optimization: Unity Batching in Practice; Internet; https://gameinstitute.qq.com/community/detail/114323; 2017-06-14; web page body, first paragraph to last paragraph *
Data Processing Flow of Vertex Shaders and Pixel Shaders; Internet; https://www.cnblogs.com/kevin-heyongyuan/articles/9111908.html; 2018-05-30; web page body, first paragraph to last paragraph *

Also Published As

Publication number Publication date
CN109045691A (en) 2018-12-21

Similar Documents

Publication Publication Date Title
CN109045691B (en) Method and device for realizing special effect of special effect object
Catmull Computer display of curved surfaces
JP2018129051A (en) Adjustment of inclination of texture mapping of plurality of rendering of target whose resolution varies according to location of screen
US9275493B2 (en) Rendering vector maps in a geographic information system
US9589386B2 (en) System and method for display of a repeating texture stored in a texture atlas
US20090046098A1 (en) Primitive binning method for tile-based rendering
US10089782B2 (en) Generating polygon vertices using surface relief information
US6184893B1 (en) Method and system for filtering texture map data for improved image quality in a graphics computer system
US20170124748A1 (en) Method of and apparatus for graphics processing
CN111915712B (en) Illumination rendering method and device, computer readable medium and electronic equipment
JP2005100176A (en) Image processor and its method
US20150015574A1 (en) System, method, and computer program product for optimizing a three-dimensional texture workflow
CN111311720B (en) Texture image processing method and device
CN111932448B (en) Data processing method, device, storage medium and equipment
CN112132941B (en) Text rendering method, device, equipment and storage medium
CN111402385B (en) Model processing method and device, electronic equipment and storage medium
EP0789893A1 (en) Methods and apparatus for rapidly rendering photo-realistic surfaces on 3-dimensional wire frames automatically
US7109999B1 (en) Method and system for implementing programmable texture lookups from texture coordinate sets
CN108171784B (en) Rendering method and terminal
US20230252715A1 (en) Image processing method, apparatus and device and storage medium
KR101227155B1 (en) Graphic image processing apparatus and method for realtime transforming low resolution image into high resolution image
JP3352458B2 (en) Graphic Coloring Method for Graphic Display System
KR100848687B1 (en) 3-dimension graphic processing apparatus and operating method thereof
CN111739074A (en) Scene multipoint light source rendering method and device
JPH09114994A (en) Device and method for forming image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant