CN113694510A - Game character rendering method, apparatus and device - Google Patents

Game character rendering method, apparatus and device

Info

Publication number
CN113694510A
CN113694510A CN202110929093.6A CN202110929093A
Authority
CN
China
Prior art keywords
rendering
hair
semitransparent
materials
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110929093.6A
Other languages
Chinese (zh)
Other versions
CN113694510B (en)
Inventor
华树程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Perfect World Beijing Software Technology Development Co Ltd
Original Assignee
Perfect World Beijing Software Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Perfect World Beijing Software Technology Development Co Ltd filed Critical Perfect World Beijing Software Technology Development Co Ltd
Priority to CN202110929093.6A priority Critical patent/CN113694510B/en
Priority to PCT/CN2021/132561 priority patent/WO2023015770A1/en
Publication of CN113694510A publication Critical patent/CN113694510A/en
Application granted granted Critical
Publication of CN113694510B publication Critical patent/CN113694510B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/35 Details of game servers
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/53 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing
    • A63F2300/538 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing for performing operations on behalf of the game client, e.g. rendering
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Generation (AREA)

Abstract

The application discloses a game character rendering method, apparatus and device, relating to the technical field of 3D rendering. The whole rendering process splits a single, unified material into two rendering commands at a preset threshold, which avoids the rendering discontinuity that otherwise appears between the opaque and semi-transparent rendering queues during hair rendering and makes art production and parameter tuning easier. The method includes: obtaining a semi-transparent material of a game character, where the semi-transparent material belongs to one of the different parts of the game character; if the semi-transparent material is the hair material of the hair part of the game character, clipping the hair material with a preset threshold; writing depth for the portion of the hair material whose opacity is greater than the preset threshold, and combining it with the remaining portion of the hair material to form a rendering queue for the semi-transparent material; and, using the preset threshold as the dividing point, rendering the hair material in the rendering queue for the semi-transparent material in separate passes.

Description

Game character rendering method, apparatus and device
Technical Field
The present application relates to the field of 3D rendering technologies, and in particular, to a method, an apparatus, and a device for rendering a game character.
Background
With the rise of the game industry and the growing expectations of players, game content has become steadily richer, game genres more varied, plots more complex, and visuals more lifelike. A game consists of game scenes and game characters, and both are visualized through the rendering functions of computer software.
In the related art, rendering a game character involves rendering its different parts, including cloth rendering, hair rendering, and skin rendering. Real-world hair is mainly composed of fibers and has a multi-layer structure: the medulla at the center, the inner cortex, and the cuticle of the epidermis on the outside. Under magnification the cuticle shows a pitted micro-surface, which is the medium that produces highlights and reflections; in addition, light striking the hair surface is transmitted and reflected again, and the pits of the hair micro-surface have a uniform directionality, pointing from the root toward the tip. Taking the composition and characteristics of real-world hair into account, Unreal Engine renders hair by default with a physically based hair shading model, a model built on measured hair emission properties that can account for the influence of hair growth direction on highlights. In general, the hair shading model produces two sub-meshes during hair rendering: one is rendered as a mask and the other as semi-transparent. However, the opaque and semi-transparent rendering queues differ slightly in rendering effect, so results such as real shadow lighting and image-based lighting come out slightly different under the two queues' calculation modes. This easily produces a visible rendering discontinuity between the opaque and semi-transparent queues during hair rendering, and splitting the hair across two material renderings is also inconvenient for art production and parameter tuning, which degrades the rendering quality of the game character.
Disclosure of Invention
In view of this, the present application provides a game character rendering method, apparatus, and device, with the main aim of solving the problem in the prior art that splitting hair rendering across two materials is inconvenient for art production and parameter tuning and degrades the rendering quality of the game character.
According to a first aspect of the present application, there is provided a game character rendering method, including:
obtaining a semi-transparent material of a game character, where the semi-transparent material belongs to one of the different parts of the game character;
if the semi-transparent material is the hair material of the hair part of the game character, clipping the hair material with a preset threshold;
writing depth for the portion of the hair material whose opacity is greater than the preset threshold, and combining it with the remaining portion of the hair material to form a rendering queue for the semi-transparent material;
and, using the preset threshold as the dividing point, rendering the hair material in the rendering queue for the semi-transparent material in separate passes.
Further, before the obtaining of the semi-transparent material of the game character, the method further includes:
in response to a trigger instruction for creating a hair material asset, enabling depth writing for the semi-transparent material;
for the computer (PC) side, while adding a rendering queue for the semi-transparent material, changing the order between the semi-transparent material pass and an external custom pass;
for the mobile side, while adding a rendering queue for the semi-transparent material, generating a preset render pass when depth is written for part of the semi-transparent material, and storing the depth texture information in on-chip memory using a depth-preservation instruction, where the preset render pass lies between the begin-render pass and the end-render pass.
Further, after, on the mobile side, generating the preset render pass when depth is written for the semi-transparent material part and storing the depth texture information in video memory using the depth-preservation instruction, the method further includes:
in response to an instruction to switch to the preset render pass, moving the depth texture information from video memory to off-chip memory, and performing the read-back of the depth texture information from the off-chip memory.
Further, the rendering, using the preset threshold as the dividing point, of the hair material in the rendering queue for the semi-transparent material in separate passes specifically includes:
dividing the hair material in the rendering queue for the semi-transparent material, at the preset threshold, into the portion whose opacity is greater than the preset threshold and the remaining portion;
rendering the portion whose opacity is greater than the preset threshold first, with standard rendering, and then rendering the remaining portion in the after-depth-of-field (after-DOF) stage.
Further, the method further includes:
obtaining the semi-transparent material section of the hair material that is involved in shadow casting, and, when an instruction for semi-transparent shadow casting is received, casting the shadow by treating that semi-transparent section as a solid.
Further, after the obtaining of the semi-transparent material of the game character, the method further includes:
if the semi-transparent material is the cloth material of the cloth part of the game character, adding a shading model that controls the rendering order;
using the shading model to change the rendering order of the sub-meshes in the mesh data formed for the cloth material, and rendering the cloth material in the changed rendering order, where the mesh data of each vertex section of the cloth material forms one sub-mesh.
Further, the changing, using the shading model, of the rendering order of the sub-meshes in the mesh data formed for the cloth material and the rendering of the cloth material in the changed rendering order specifically include:
determining, using the shading model, the material IDs formed by the creation order of the cloth material;
sorting the material IDs to change the rendering order of the sub-meshes in the mesh data formed for the cloth material, and rendering the cloth material in the changed rendering order.
Further, after the obtaining of the semi-transparent material of the game character, the method further includes:
if the semi-transparent material is a sub-surface scattering material of the skin part of the game character, weakening the intensity map of the sub-surface scattering material so as to reduce the corresponding sub-surface scattering effect;
rendering the sub-surface scattering material with the weakened map.
According to a second aspect of the present application, there is provided a game character rendering apparatus, including:
an obtaining unit, configured to obtain a semi-transparent material of a game character, where the semi-transparent material belongs to one of the different parts of the game character;
a clipping unit, configured to clip the hair material with a preset threshold if the semi-transparent material is the hair material of the hair part of the game character;
a writing unit, configured to write depth for the portion of the hair material whose opacity is greater than the preset threshold and combine it with the remaining portion of the hair material to form a rendering queue for the semi-transparent material;
a first rendering unit, configured to render, using the preset threshold as the dividing point, the hair material in the rendering queue for the semi-transparent material in separate passes.
Further, the apparatus further includes:
an enabling unit, configured to enable, before the semi-transparent material of the game character is obtained, depth writing for the semi-transparent material in response to a trigger instruction for creating a hair material asset;
a changing unit, configured to change, for the computer side, the order between the semi-transparent material pass and an external custom pass while adding the rendering queue for the semi-transparent material;
a generating unit, configured to generate, for the mobile side, a preset render pass when depth is written for the semi-transparent material part while adding the rendering queue for the semi-transparent material, and to store the depth texture information in on-chip video memory using the depth-preservation instruction, where the preset render pass lies between the begin-render pass and the end-render pass.
Further, the apparatus further includes:
an execution unit, configured to, after the preset render pass is generated on the mobile side when depth is written for the semi-transparent material part and the depth texture information is stored in video memory using the depth-preservation instruction, change the depth texture information from video memory to off-chip memory in response to an instruction to switch to the preset render pass, and perform the read-back of the depth texture information from the off-chip memory.
Further, the first rendering unit includes:
a dividing module, configured to divide the hair material in the rendering queue for the semi-transparent material, at the preset threshold, into the portion whose opacity is greater than the preset threshold and the remaining portion;
a rendering module, configured to render the portion whose opacity is greater than the preset threshold first, with standard rendering, and then render the remaining portion in the after-depth-of-field stage.
Further, the apparatus further includes:
a projection unit, configured to obtain the semi-transparent material section of the hair material that is involved in shadow casting and, when an instruction for semi-transparent shadow casting is received, cast the shadow by treating that semi-transparent section as a solid.
Further, the apparatus further includes:
an adding unit, configured to add, after the semi-transparent material of the game character is obtained, a shading model that controls the rendering order if the semi-transparent material is the cloth material of the cloth part of the game character;
a second rendering unit, configured to change, using the shading model, the rendering order of the sub-meshes in the mesh data formed for the cloth material and render the cloth material in the changed rendering order, where the mesh data of each vertex section of the cloth material forms one sub-mesh.
Further, the second rendering unit includes:
a determining module, configured to determine, using the shading model, the material IDs formed by the creation order of the cloth material;
a sorting module, configured to change the rendering order of the sub-meshes in the mesh data formed for the cloth material by sorting the material IDs, and to render the cloth material in the changed rendering order.
Further, the apparatus further includes:
a processing unit, configured to weaken, after the semi-transparent material of the game character is obtained, the intensity map of the sub-surface scattering material if the semi-transparent material is a sub-surface scattering material of the skin part of the game character, so as to reduce the corresponding sub-surface scattering effect;
a third rendering unit, configured to render the sub-surface scattering material with the weakened map.
According to a third aspect of the present application, there is provided a computer device comprising a memory storing a computer program and a processor implementing the steps of the method of the first aspect when the computer program is executed.
According to a fourth aspect of the present application, there is provided a readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method of the first aspect described above.
With the above technical solution, compared with the existing way of rendering hair with a hair shading model on semi-transparent materials on the mobile side, the game character rendering method, apparatus, and device provided by the present application obtain a semi-transparent material of a game character; if the semi-transparent material is the hair material of the hair part of the game character, the hair material is clipped with a preset threshold; the portion whose opacity is greater than the preset threshold is then written to depth and combined with the remaining portion to form a rendering queue for the semi-transparent material; and, with the preset threshold as the dividing point, the hair material in that queue is rendered in separate passes. The depth-written hair material therefore sits at the very front of the rendering queue in the first pass, and the hair that needs a semi-transparent effect, such as the hair tips, is rendered mainly in the second pass. The whole rendering process thus splits a single, unified material into two rendering commands at the preset threshold. It does not use the two sub-meshes produced by the hair shading model to carry out the rendering; instead, the differently transparent parts of the semi-transparent material are merged and then rendered in separate passes, which removes the masked-rendering step and removes the need to unify parameters across two semi-transparent materials. The semi-transparent material can therefore use the hair shading model for hair rendering on the mobile side without the rendering-discontinuity problem between the opaque and semi-transparent queues and without extending Unreal Engine; art production and parameter tuning become easier, the material stays unified throughout hair rendering, everything is processed in the semi-transparent rendering queue, and the rendering quality of the game character is improved.
The foregoing description is only an overview of the technical solutions of the present application. To make the technical means of the present application clearer, so that they can be implemented according to the content of the specification, and to make the above and other objects, features, and advantages of the present application more readily understandable, detailed embodiments of the present application are set forth below.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow chart of a method for rendering a game character according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of another method for rendering a game character according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a game character rendering apparatus according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of another game character rendering apparatus provided in an embodiment of the present application;
fig. 5 is a schematic device structure diagram of a computer apparatus according to an embodiment of the present invention.
Detailed Description
The content of the invention will now be discussed with reference to a number of exemplary embodiments. It is to be understood that these examples are discussed only to enable those of ordinary skill in the art to better understand and thus implement the teachings of the present invention, and are not meant to imply any limitations on the scope of the invention.
As used herein, the term "include" and its variants are to be read as open-ended terms meaning "including, but not limited to". The term "based on" is to be read as "based, at least in part, on". The terms "one embodiment" and "an embodiment" are to be read as "at least one embodiment". The term "another embodiment" is to be read as "at least one other embodiment".
A game character rendering scenario involves rendering the different parts of the game character, such as the hair part, the cloth part, and the skin part. For the hair part, Unreal Engine renders with the hair shading model by default; compared with empirical models it gives a better rendering result, but the engine only implements it for opaque materials on the computer side. Using the hair shading model with a semi-transparent material requires extending the engine, and using it with a semi-transparent material on the mobile side requires simplifying the shading algorithm, so that the diffuse reflection used for indirect light only computes the scattering term and the hair shading model used for direct light has no projection term. In general, the hair shading model produces two sub-meshes during hair rendering: one is rendered as a mask and the other as semi-transparent. However, the opaque and semi-transparent rendering queues differ slightly in rendering effect, so results such as real shadow lighting and image-based lighting come out slightly different under the two queues' calculation modes. This easily produces a visible rendering discontinuity between the opaque and semi-transparent queues during hair rendering, and splitting the hair across two material renderings is also inconvenient for art production and parameter tuning, which degrades the rendering quality of the game character.
To solve this problem, this embodiment provides a game character rendering method which, as shown in FIG. 1, is applied to a client of a game character rendering tool and includes the following steps:
101. Obtain a semi-transparent material of the game character.
Rendering the game character is an important step in game production and in particular involves rendering its different parts, such as the hair part, the cloth part, the skin part, and the nail part; the semi-transparent material may belong to any of these different parts of the game character.
102. If the semi-transparent material is the hair material of the hair part of the game character, clip the hair material with a preset threshold.
Because the hair would otherwise be split into two different materials for semi-transparent rendering, with one material rendered in the opaque queue and the other in the semi-transparent queue, and because the biggest problem in the semi-transparent queue is the lack of self-occlusion, which easily leads to out-of-order rendering, the hair material needs to be clipped before rendering so as to optimize the hair rendering process.
The preset threshold is a value set when the hair material asset is authored. Its purpose is to divide the hair material into two parts based on transparency: the part with a weaker semi-transparent effect has its depth written so that it stays at the front of the rendering, while the part with a stronger semi-transparent effect is rendered normally. Both renderings are handled in the semi-transparent rendering queue, so the rendering result stays consistent.
The execution body of this embodiment may be a game character rendering apparatus or device configured at the client of a game character rendering tool. After the game character model is built, its different parts need to be rendered. Because the hair material of the hair part would otherwise involve two materials, a single semi-transparent material is used instead of two sub-meshes for the semi-transparent rendering of the hair, and the engine is extended to split the hair material into two renderings at the preset threshold, both of which are performed in the semi-transparent rendering queue, keeping the rendering result fully consistent.
103. Write depth for the portion of the hair material whose opacity is greater than the preset threshold, and combine it with the remaining portion to form a rendering queue for the semi-transparent material.
It should be understood that opacity expresses how transparent the hair material is, and the hair's transparency can be changed by adjusting the opacity value; lowering the opacity makes the hair more transparent and can sometimes make it look softer, but it easily increases rendering time. Here, the portion whose opacity is greater than the preset threshold corresponds to hair with lower transparency, and the remaining portion corresponds to hair with higher transparency.
Specifically, during hair rendering, when the hair material has both an opaque-like part and a semi-transparent part, the opaque-like part is rendered first and the semi-transparent part afterwards. When rendering the opaque-like part, a depth buffer must be opened and depth writes performed so that the opaque-like parts of the hair keep a correct depth relationship with one another. In other words, depth must be written for the lower-transparency portion of the hair material to guarantee that it is rendered first.
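A minimal C++ sketch of this split follows; it is not Unreal Engine code, and the RenderState and SelectHairRenderState names and the 0.5 threshold are assumptions made only for illustration.

```cpp
// Illustrative sketch only (not Unreal Engine API): models how a single hair
// material can be split at a preset opacity threshold into a depth-writing
// part and a non-depth-writing part, both of which stay in the translucent queue.
#include <cstdio>

struct RenderState {
    bool depthTest;    // compare against the depth buffer
    bool depthWrite;   // write depth values so later fragments are occluded
    bool alphaBlend;   // blend with the frame buffer
};

// The portion whose opacity exceeds the preset threshold behaves like opaque
// geometry: it writes depth so translucent fragments behind it are occluded.
RenderState SelectHairRenderState(float opacity, float presetThreshold) {
    if (opacity > presetThreshold) {
        return RenderState{/*depthTest=*/true, /*depthWrite=*/true, /*alphaBlend=*/true};
    }
    // The softer remainder (hair tips, wisps) keeps the usual translucent setup:
    // depth test on, depth write off, rendered after the depth-written part.
    return RenderState{/*depthTest=*/true, /*depthWrite=*/false, /*alphaBlend=*/true};
}

int main() {
    const float threshold = 0.5f;          // assumed ClipMaskValue-style threshold
    const float samples[] = {0.9f, 0.3f};  // one dense strand, one wispy tip
    for (float opacity : samples) {
        RenderState s = SelectHairRenderState(opacity, threshold);
        std::printf("opacity %.1f -> depthWrite=%d\n", opacity, static_cast<int>(s.depthWrite));
    }
    return 0;
}
```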
104. Using the preset threshold as the dividing point, render the hair material in the rendering queue for the semi-transparent material in separate passes.
For the portion of the hair material whose depth information has been stored, which behaves like the opaque part of a rendering queue, every pixel holds a depth value, so the pixels displayed on screen can be depth-sorted and rendered according to that depth information, keeping the occlusion relationships correct. For the portion without newly stored depth, which behaves like the transparent part of the queue, depth information cannot be written, and it is usually rendered back to front to keep the semi-transparent result of hair shading correct.
Specifically, in an application scenario where the hair material in the rendering queue for the semi-transparent material is rendered in separate passes, the two passes may consist of standard rendering and after-depth-of-field rendering: the first pass renders in the standard TranslucencyPass, keeping the material portion with stored depth information at the very front of the rendering order, and the second pass renders in TranslucencyAfterDOF, keeping the normal rendering order.
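The two-stage submission can be pictured with the sketch below, where the depth-written portion is recorded into the standard translucency stage and the remainder into an after-DOF stage. The DrawCommand type and the mesh/pass strings are assumptions for illustration, not the engine's actual draw-list API.

```cpp
// Illustrative sketch only: the depth-written hair portion is drawn first in
// the standard translucency pass, and the remaining portion is drawn in an
// after-depth-of-field translucency pass, matching the two-pass order above.
#include <cstdio>
#include <string>
#include <vector>

struct DrawCommand {
    std::string mesh;   // which hair portion to draw
    std::string pass;   // which translucency stage it is recorded into
};

std::vector<DrawCommand> BuildHairDrawList() {
    std::vector<DrawCommand> cmds;
    // First rendering: keep the depth-written portion at the front of the
    // translucent queue so its occlusion is resolved before anything else.
    cmds.push_back({"hair_opacity_above_threshold", "StandardTranslucency"});
    // Second rendering: the softer remainder (hair tips etc.) is drawn in the
    // after-DOF translucency stage in normal back-to-front order.
    cmds.push_back({"hair_opacity_below_threshold", "TranslucencyAfterDOF"});
    return cmds;
}

int main() {
    for (const DrawCommand& cmd : BuildHairDrawList()) {
        std::printf("%s -> %s\n", cmd.mesh.c_str(), cmd.pass.c_str());
    }
    return 0;
}
```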
Compared with the existing way of rendering hair with a hair shading model on semi-transparent materials on the mobile side, the game character rendering method provided by this embodiment of the present application obtains a semi-transparent material of a game character; if the semi-transparent material is the hair material of the hair part of the game character, the hair material is clipped with a preset threshold; the portion whose opacity is greater than the preset threshold is then written to depth and combined with the remaining portion to form a rendering queue for the semi-transparent material; and, with the preset threshold as the dividing point, the hair material in that queue is rendered in separate passes. The depth-written hair material therefore sits at the very front of the rendering queue in the first pass, and the hair that needs a semi-transparent effect, such as the hair tips, is rendered mainly in the second pass. The whole rendering process splits a single, unified material into two rendering commands at the preset threshold; it does not use the two sub-meshes produced by the hair shading model to carry out the rendering, but merges the differently transparent parts of the semi-transparent material and renders them in separate passes. This removes the masked-rendering step and the need to unify parameters across two semi-transparent materials, so the semi-transparent material can use the hair shading model for hair rendering on the mobile side without a rendering discontinuity between the opaque and semi-transparent queues and without extending Unreal Engine. Art production and parameter tuning become easier, the material stays unified throughout hair rendering, everything is processed in the semi-transparent rendering queue, and the rendering quality of the game character is improved.
Further, as a refinement and extension of the specific implementation of the foregoing embodiment, and to describe its implementation process in full, this embodiment provides another game character rendering method which, as shown in FIG. 2, includes:
201. In response to a trigger instruction for creating the hair material asset, enable depth writing for the semi-transparent material.
Normally the render state dictates that all opaque objects are rendered before semi-transparent ones and their depth is stored; in this example, however, the first pass enables depth writing and writes depth values. The reason semi-transparent objects usually disable depth writes is that writing the depth of a semi-transparent fragment into the buffer would make fragments at the same screen coordinate but with larger depth values fail the depth test and not be rendered, even though they still need to be rendered behind the semi-transparent object. Enabling depth writing for the semi-transparent material before its rendering process therefore optimizes the rendering of the semi-transparent material.
202. For the computer side, change the order between the semi-transparent material pass and an external custom pass while adding a rendering queue for the semi-transparent material.
Unreal Engine can perform hair rendering on opaque materials on the computer, but implementing hair rendering on semi-transparent materials requires extending the engine. To realize the rendering process for the semi-transparent material on the computer side, a rendering queue for the semi-transparent material can be added, with a customizable rendering order; to keep the rendering result consistent, the order between the semi-transparent material pass and the external custom pass needs to be changed.
Specifically, a PreTranslucentDepth MeshPass can be added on the PC side, and the order of the official translucency pass and OutVelocityPass can be changed.
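The reordering can be sketched as inserting a new pass name ahead of the translucency pass in a frame's pass list. The list below is a stand-in for illustration and assumes nothing about Unreal Engine's real mesh-pass registration.

```cpp
// Illustrative sketch only: insert a custom "PreTranslucentDepth" pass ahead of
// the translucency pass so the depth it writes can be tested there; the order
// relative to the velocity pass may also need adjusting in a real engine.
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

std::vector<std::string> ReorderPasses(std::vector<std::string> passes) {
    auto it = std::find(passes.begin(), passes.end(), "Translucency");
    passes.insert(it, "PreTranslucentDepth");  // runs before translucency
    return passes;
}

int main() {
    std::vector<std::string> frame = {"BasePass", "Velocity", "Translucency", "PostProcess"};
    for (const std::string& p : ReorderPasses(frame)) {
        std::printf("%s\n", p.c_str());
    }
    return 0;
}
```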
203. For the mobile side, while adding a rendering queue for the semi-transparent material, generate a preset render pass when depth is written for part of the semi-transparent material, and store the depth texture information in on-chip video memory using the depth-preservation instruction.
If Unreal Engine is to render hair on semi-transparent materials on the mobile side, the engine must be extended. To realize the mobile-side rendering of the semi-transparent material, a rendering queue for the semi-transparent material can be added; this queue can customize the rendering order and hold the models, materials, and other data needed for each frame, so that each frame of data can be rendered. However, because of the sub-meshes of the hair shading model, the default depth texture information is not read back to video memory but kept in on-chip video memory, so the semi-transparent rendering queue has no way to perform the depth test. Therefore, when depth is written for the semi-transparent material, a preset render pass is generated, lying between the begin-render pass and the end-render pass, and the depth texture information is stored in on-chip video memory using the depth-preservation instruction.
204. In response to an instruction to switch to the preset render pass, move the depth texture information from video memory to off-chip memory, and perform the read-back of the depth texture information from the off-chip memory.
On the mobile side, switching to the preset render pass means the depth cache can no longer use on-chip memory, so it must be moved to off-chip memory before the read-back of the depth texture information can be performed.
It should be understood that the preset render pass is relatively independent and does not depend on other rendering components; passes are classified on the basis of render layers. One render layer can only separate a single attribute of an object, whereas a render pass can separate multiple attributes of an object within one render layer. In other words, an arbitrary number of render passes can be created in one render layer, and the rendering of a large number of layers can be completed with simple render settings.
Specifically, a MobileTranslucentDepthMeshPass can be added on the mobile side to generate another RenderPass, which needs to lie between a BeginRenderPass and an EndRenderPass, and KeepDepthContent is used to keep the depth texture in video memory.
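The effect of keeping the depth content inside such a render pass can be sketched with a mobile-style attachment description whose store action preserves depth. The types below are assumptions modelled loosely on tile-based mobile graphics APIs, not Unreal Engine or any specific API.

```cpp
// Illustrative sketch only: the depth attachment's store action is set to
// "Store" (analogous to keeping the depth content) so the translucent hair
// pass can later depth-test against it instead of losing it with the tile.
#include <cstdio>

enum class LoadAction  { Clear, Load };
enum class StoreAction { DontCare, Store };  // DontCare ~ discard tile memory

struct DepthAttachmentDesc {
    LoadAction  load;
    StoreAction store;  // Store keeps the depth instead of dropping it on-tile
};

DepthAttachmentDesc MakeTranslucentHairDepthPass() {
    // Inside the begin/end render-pass pair, keep the depth written for the
    // high-opacity hair portion rather than the default of discarding it.
    return DepthAttachmentDesc{LoadAction::Load, StoreAction::Store};
}

int main() {
    DepthAttachmentDesc d = MakeTranslucentHairDepthPass();
    std::printf("depth kept: %d\n", static_cast<int>(d.store == StoreAction::Store));
    return 0;
}
```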
205. Obtain the semi-transparent material of the game character.
206. If the semi-transparent material is the hair material of the hair part of the game character, clip the hair material with the preset threshold.
Specifically, the hair material can be clipped at a preset threshold ClipMaskValue, dividing it into the portion whose opacity is greater than the preset threshold and the remaining portion whose opacity is not greater than the preset threshold. The two portions involve hair of different degrees of transparency and need different rendering treatments during semi-transparent material rendering.
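The clip itself can be sketched as a per-fragment decision: in the depth pre-pass, fragments at or below ClipMaskValue are discarded, while the translucent pass shades them later. The sketch below is CPU-side C++ standing in for what a pixel shader would do, under that assumption; the HairFragment type is invented for illustration.

```cpp
// Illustrative sketch only: the opacity clip performed when the hair material
// is cut at the preset threshold (ClipMaskValue in the text).
#include <cstdio>
#include <optional>

struct HairFragment {
    float opacity;  // sampled from the hair opacity texture
    float depth;    // view-space depth of the fragment
};

// Returns the depth to write in the pre-pass, or nothing if the fragment is
// clipped (i.e. it will only be shaded later in the after-DOF translucent pass).
std::optional<float> DepthPrePassClip(const HairFragment& f, float clipMaskValue) {
    if (f.opacity > clipMaskValue) {
        return f.depth;   // dense enough to occlude: write its depth
    }
    return std::nullopt;  // clip here; leave it for the translucent pass
}

int main() {
    const float clipMaskValue = 0.5f;
    HairFragment tip{0.2f, 4.1f}, strand{0.8f, 3.7f};
    std::printf("tip clipped: %d, strand writes depth: %d\n",
                static_cast<int>(!DepthPrePassClip(tip, clipMaskValue).has_value()),
                static_cast<int>(DepthPrePassClip(strand, clipMaskValue).has_value()));
    return 0;
}
```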
207. Write depth for the portion of the hair material whose opacity is greater than the preset threshold, and combine it with the remaining portion to form a rendering queue for the semi-transparent material.
For the portion whose opacity is greater than the preset threshold, the hair is more occluding, so depth must be written to guarantee that it is rendered first and stays at the front of the rendering queue. The remaining portion occludes less and consists mainly of the hair, such as the tips, that needs a stronger semi-transparent effect; it only needs to be sorted and rendered normally. The material remains unified and everything is processed in the semi-transparent render pass, which improves the rendering result.
208. Using the preset threshold as the dividing point, divide the hair material in the rendering queue for the semi-transparent material into the portion whose opacity is greater than the preset threshold and the remaining portion.
209. Render the portion whose opacity is greater than the preset threshold first, with standard rendering, and then render the remaining portion in the after-depth-of-field stage.
In a practical application scenario, the portion of the hair material whose opacity is greater than the preset threshold is first written to depth in PreTranslucencyDepthPass and then drawn in the standard TranslucencyPass, which keeps it at the very front of the whole rendering queue during the first pass; the second pass then renders in TranslucencyAfterDOF and draws the portion whose opacity is smaller than the preset threshold, ensuring the accuracy of the semi-transparent material rendering process, and of hair rendering in particular.
Because depth is written during the rendering of the semi-transparent material, Unreal Engine can also provide an additional capability: the rendering of TranslucencyAfterDOF can be moved to before the DOF stage rather than using an independent render target, so that the depth-written semi-transparent material is correctly affected by DOF, TAA, MotionBlur, and similar effects.
210. Obtain the semi-transparent material section of the hair material that is involved in shadow casting, and, when an instruction for semi-transparent shadow casting is received, cast the shadow by treating that semi-transparent section as a solid.
It should be understood that rendering the hair material involves semi-transparent shadow casting, but Unreal Engine has a defect here: the parent material can cast a semi-transparent shadow correctly, whereas a material instance created from it cannot render the shadow correctly.
The semi-transparent shadow-casting problem can be fixed simply by overriding GetCastDynamicShadowAsMasked to handle the case where no material is present, and a TranslucentShadowOpacityMaskScale value can also be added to Unreal Engine so that the opacity mask used during shadow casting can be controlled independently. Specifically, when casting a semi-transparent shadow, the semi-transparent section can be used as a solid for shadow casting, which is no different from casting the shadow with an opaque mask.
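A hedged sketch of the shadow-cast decision follows: when semi-transparent shadow casting is requested, the translucent hair section is submitted to the shadow pass as an opaque-mask caster. The ShadowCaster type is an assumption for illustration; the actual engine change mentioned above (overriding GetCastDynamicShadowAsMasked, adding TranslucentShadowOpacityMaskScale) is not reproduced here.

```cpp
// Illustrative sketch only: treat the translucent hair section as a solid
// occluder (an opaque-mask caster) when a semi-transparent shadow is requested.
#include <cstdio>
#include <string>
#include <vector>

struct ShadowCaster {
    std::string mesh;
    bool castAsMasked;  // true: depth-only caster using an opacity mask
};

std::vector<ShadowCaster> BuildShadowCasters(bool translucentShadowRequested) {
    std::vector<ShadowCaster> casters;
    if (translucentShadowRequested) {
        // The visual result matches casting the shadow with an opaque mask.
        casters.push_back({"hair_translucent_section", /*castAsMasked=*/true});
    }
    return casters;
}

int main() {
    for (const ShadowCaster& c : BuildShadowCasters(true)) {
        std::printf("%s castAsMasked=%d\n", c.mesh.c_str(), static_cast<int>(c.castAsMasked));
    }
    return 0;
}
```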
Furthermore, the cloth part of the game character uses a self-extended anisotropic shading model. Unreal Engine has also added an anisotropic shading model, but because it only provides highlights along the u and v directions, the anisotropy is complex and hard to control, so it is difficult to render a cloth effect that meets the game's requirements. In general, the rendering order of the different sub-meshes under the same shading model depends on the order in which they were created and is poorly controllable, whereas semi-transparent draw calls must be strictly ordered.
Specifically, during rendering with the cloth material, a shading model that controls the rendering order is added for the cloth material of the cloth part of the game character; this shading model is then used to change the rendering order of the sub-meshes in the mesh data formed for the cloth material, the cloth material is rendered in the changed order, and the mesh data of each vertex section of the cloth material forms one sub-mesh. When the rendering order of the sub-meshes is changed, the shading model can be used to determine the material IDs formed by the creation order of the cloth material, those material IDs can be sorted to change the rendering order of the sub-meshes in the mesh data formed for the cloth material, and the cloth material can then be rendered in the changed order. For example, in one application scenario, obtaining the correct sub-model rendering order requires changing, in software such as 3dsMax, the material ID of a semi-transparent skirt so that it comes before the semi-transparent flower, which amounts to a simple render ordering of the different sub-meshes of the same model.
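The sub-mesh ordering step can be sketched as a stable sort by material ID, assuming each sub-mesh carries an ID assigned from the desired render order (for example, set in 3dsMax). The types below are illustrative, not engine data structures.

```cpp
// Illustrative sketch only: sort one cloth mesh's sub-meshes by a material ID
// derived from the material creation order, so translucent sub-meshes (e.g. a
// skirt before a flower attached to it) are drawn in a controlled order.
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

struct SubMesh {
    std::string name;
    int materialId;  // assigned from the desired render order
};

void SortClothSubMeshes(std::vector<SubMesh>& subMeshes) {
    // Lower material IDs render first; stable_sort keeps ties in author order.
    std::stable_sort(subMeshes.begin(), subMeshes.end(),
                     [](const SubMesh& a, const SubMesh& b) {
                         return a.materialId < b.materialId;
                     });
}

int main() {
    std::vector<SubMesh> cloth = {{"translucent_flower", 2}, {"translucent_skirt", 1}};
    SortClothSubMeshes(cloth);
    for (const SubMesh& m : cloth) {
        std::printf("%s (id %d)\n", m.name.c_str(), m.materialId);
    }
    return 0;
}
```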
Furthermore, the skin part of the game character uses Unreal Engine's SSS (sub-surface scattering) material. The SSS material is full precision by default; the full-precision result is finer, but insufficient sample counts produce noise, and raising the sample count causes performance problems. Half-resolution (HalfRes) SSS does not have the noise problem, but when the SSS material interacts with other materials the edges become inaccurate because of the reduced resolution, introducing errors into the calculation. A step that weakens the skin material's intensity map can therefore be added to Unreal Engine so that the sub-surface scattering effect is not produced.
Specifically, during rendering with the skin material, the intensity map of the sub-surface scattering material of the skin part of the game character is first weakened so as to reduce the corresponding sub-surface scattering effect, and the sub-surface scattering material is then rendered with the weakened map, which secures the skin rendering result. For example, the SSSIntensity map is processed and its intensity is set to 0, so the sub-surface scattering effect is not used.
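The weakening step can be sketched as scaling the intensity map's texels to zero; the flat float array below is an assumption standing in for the SSSIntensity map.

```cpp
// Illustrative sketch only: weaken the sub-surface scattering intensity map for
// the skin material so the SSS effect is effectively disabled (intensity 0).
#include <cstdio>
#include <vector>

// Scale every texel of the intensity map; a scale of 0 removes the SSS effect.
void WeakenSssIntensityMap(std::vector<float>& intensityMap, float scale = 0.0f) {
    for (float& texel : intensityMap) {
        texel *= scale;
    }
}

int main() {
    std::vector<float> sssIntensity = {0.8f, 0.6f, 0.9f};  // stand-in texels
    WeakenSssIntensityMap(sssIntensity);                    // scale of 0 disables SSS
    std::printf("first texel after weakening: %.1f\n", sssIntensity[0]);
    return 0;
}
```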
Further, as a specific implementation of the methods of FIG. 1 and FIG. 2, an embodiment of the present application provides a game character rendering apparatus which, as shown in FIG. 3, includes an obtaining unit 31, a clipping unit 32, a writing unit 33, and a first rendering unit 34.
The obtaining unit 31 is configured to obtain a semi-transparent material of a game character, where the semi-transparent material belongs to one of the different parts of the game character.
The clipping unit 32 may be configured to clip the hair material with a preset threshold if the semi-transparent material is the hair material of the hair part of the game character.
The writing unit 33 is configured to write depth for the portion of the hair material whose opacity is greater than the preset threshold and combine it with the remaining portion to form a rendering queue for the semi-transparent material.
The first rendering unit 34 may be configured to render, using the preset threshold as the dividing point, the hair material in the rendering queue for the semi-transparent material in separate passes.
Compared with the existing way of rendering hair with a hair shading model on semi-transparent materials on the mobile side, the apparatus provided by this embodiment of the present application obtains a semi-transparent material of a game character; if the semi-transparent material is the hair material of the hair part of the game character, the hair material is clipped with a preset threshold; the portion whose opacity is greater than the preset threshold is then written to depth and combined with the remaining portion to form a rendering queue for the semi-transparent material; and, with the preset threshold as the dividing point, the hair material in that queue is rendered in separate passes. The depth-written hair material therefore sits at the very front of the rendering queue in the first pass, and the hair that needs a semi-transparent effect, such as the hair tips, is rendered mainly in the second pass. The whole rendering process splits a single, unified material into two rendering commands at the preset threshold; it does not use the two sub-meshes produced by the hair shading model, but merges the differently transparent parts of the semi-transparent material and renders them in separate passes, which removes the masked-rendering step and the need to unify parameters across two semi-transparent materials. The semi-transparent material can therefore use the hair shading model for hair rendering on the mobile side without a rendering discontinuity between the opaque and semi-transparent queues and without extending Unreal Engine; art production and parameter tuning become easier, the material stays unified throughout hair rendering, everything is processed in the semi-transparent rendering queue, and the rendering quality of the game character is improved.
In a specific application scenario, as shown in fig. 4, the apparatus further includes:
an enabling unit 35, which may be configured to enable, before the semi-transparent material of the game character is obtained, depth writing for the semi-transparent material in response to a trigger instruction for creating the hair material asset;
a changing unit 36, which may be configured to change, for the computer side, the order between the semi-transparent material pass and the external custom pass while adding the rendering queue for the semi-transparent material;
a generating unit 37, which may be configured to generate, for the mobile side, a preset render pass when depth is written for part of the semi-transparent material while adding the rendering queue for the semi-transparent material, and to store the depth texture information in on-chip memory using the depth-preservation instruction, where the preset render pass lies between the begin-render pass and the end-render pass.
In a specific application scenario, as shown in fig. 4, the apparatus further includes:
an execution unit 38, configured to, after the preset render pass is generated on the mobile side when depth is written for the semi-transparent material part and the depth texture information is stored in video memory using the depth-preservation instruction, change the depth texture information from video memory to off-chip memory in response to an instruction to switch to the preset render pass, and perform the read-back of the depth texture information from the off-chip memory.
In a specific application scenario, as shown in fig. 4, the first rendering unit 34 includes:
a dividing module 341, configured to divide the hair material in the rendering queue for the semi-transparent material, at the preset threshold, into the portion whose opacity is greater than the preset threshold and the remaining portion;
a rendering module 342, which may be configured to render the portion whose opacity is greater than the preset threshold first, with standard rendering, and then render the remaining portion in the after-depth-of-field stage.
In a specific application scenario, as shown in fig. 4, the apparatus further includes:
a projection unit 39, which may be configured to obtain the semi-transparent material section of the hair material that is involved in shadow casting and, when an instruction for semi-transparent shadow casting is received, cast the shadow by treating that semi-transparent section as a solid.
In a specific application scenario, the apparatus further includes:
an adding unit, which may be configured to add, after the semi-transparent material of the game character is obtained, a shading model that controls the rendering order if the semi-transparent material is the cloth material of the cloth part of the game character;
a second rendering unit, which may be configured to change, using the shading model, the rendering order of the sub-meshes in the mesh data formed for the cloth material and render the cloth material in the changed rendering order, where the mesh data of each vertex section of the cloth material forms one sub-mesh.
In a specific application scenario, as shown in fig. 5, the second rendering unit includes:
a determining module, which may be configured to determine, using the shading model, the material IDs formed by the creation order of the cloth material;
a sorting module, which may be configured to change the rendering order of the sub-meshes in the mesh data formed for the cloth material by sorting the material IDs and render the cloth material in the changed rendering order.
In a specific application scenario, the apparatus further includes:
a processing unit, which may be configured to weaken, after the semi-transparent material of the game character is obtained, the intensity map of the sub-surface scattering material if the semi-transparent material is a sub-surface scattering material of the skin part of the game character, so as to reduce the corresponding sub-surface scattering effect;
a third rendering unit, which may be configured to render the sub-surface scattering material with the weakened map.
It should be noted that, for other descriptions of the functional units of the game character rendering apparatus provided in this embodiment, reference may be made to the corresponding descriptions of FIG. 1 to FIG. 2, which are not repeated here.
Based on the methods shown in FIG. 1 to FIG. 2, an embodiment of the present application correspondingly further provides a storage medium on which a computer program is stored; when executed by a processor, the computer program implements the game character rendering method shown in FIG. 1 to FIG. 2.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods of the implementation scenarios of the present application.
Based on the methods shown in FIG. 1 to FIG. 2 and the virtual apparatus embodiments shown in FIG. 3 to FIG. 4, and to achieve the above objects, an embodiment of the present application further provides a game character rendering entity device, which may specifically be a computer, a smartphone, a tablet computer, a smartwatch, a server, or a network device. The entity device includes a storage medium and a processor; the storage medium is configured to store a computer program, and the processor is configured to execute the computer program to implement the game character rendering method shown in FIG. 1 to FIG. 2.
Optionally, the entity device may further include a user interface, a network interface, a camera, a radio frequency (RF) circuit, a sensor, an audio circuit, a Wi-Fi module, and the like. The user interface may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include a USB interface, a card reader interface, and the like. The network interface may optionally include a standard wired interface, a wireless interface (such as a Wi-Fi interface), and the like.
In an exemplary embodiment, referring to FIG. 5, the entity device includes a communication bus, a processor, a memory, and a communication interface, and may further include an input/output interface and a display device, where these functional units may communicate with one another over the bus. The memory stores a computer program, and the processor is configured to execute the program stored in the memory to perform the game character rendering method of the foregoing embodiments.
Those skilled in the art will appreciate that the structure of the game character rendering entity device provided in this embodiment does not constitute a limitation on the entity device, which may include more or fewer components, combine certain components, or use a different arrangement of components.
The storage medium may further include an operating system and a network communication module. The operating system is a program that manages the hardware and software resources of the above entity device and supports the operation of the information processing program and other software and/or programs. The network communication module is used to implement communication among the components inside the storage medium, as well as communication with other hardware and software in the information processing entity device.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus a necessary general-purpose hardware platform, or by hardware. With the technical solution of the present application, compared with the existing approach, the whole rendering process splits a single, unified material into two rendering commands at the preset threshold. The process does not use the two sub-meshes produced by the hair shading model to carry out the rendering, but merges the differently transparent parts of the semi-transparent material and renders them in separate passes, which removes the masked-rendering step and the need to unify parameters across semi-transparent materials. The semi-transparent material can therefore use the hair shading model for hair rendering on the mobile side without extending Unreal Engine, the material stays unified throughout hair rendering, everything is processed in the semi-transparent rendering queue, and the rendering quality of the game character is improved.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario, and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application. Those skilled in the art will also appreciate that the modules in the devices of the implementation scenario may be distributed in the devices of the implementation scenario as described, or may be located, with corresponding changes, in one or more devices other than those of the present implementation scenario. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The serial numbers of the above embodiments of the present application are for description only and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure covers only a few specific implementation scenarios of the present application; however, the present application is not limited thereto, and any variation conceivable to those skilled in the art shall fall within the protection scope of the present application.

Claims (11)

1. A method for rendering a game character, comprising:
obtaining semitransparent materials in a game character, wherein the semitransparent materials belong to materials of different parts of the game character;
if a semitransparent material belongs to the hair material of a hair part of the game character, clipping the hair material by using a preset threshold;
writing depth for the part of the hair material whose opacity is greater than the preset threshold, and combining the remaining parts of the hair material to form a rendering queue for the semitransparent materials;
and rendering the hair material in the rendering queue for the semitransparent materials in stages, with the preset threshold as the partition.
2. The method of claim 1, wherein, before the obtaining semitransparent materials in the game character, the method further comprises:
in response to a trigger instruction for creating a hair material ball, starting depth writing for the semitransparent material;
for a computer end, changing the order between a semitransparent material pass and an external custom pass while adding the rendering queue for the semitransparent materials;
and for a mobile end, while adding the rendering queue for the semitransparent materials, generating a preset render pass when depth is written for the part of the semitransparent material, and storing the depth texture information in on-chip memory by using a depth-save directory, wherein the preset render pass lies between a render-start pass and a render-end pass.
3. The method according to claim 2, wherein, for the mobile end, after the generating a preset render pass when depth is written for the part of the semitransparent material and storing the depth texture information by using the depth-save directory, the method further comprises:
and in response to an instruction for switching to the preset render pass, moving the depth texture information from the video memory to an off-chip memory, and performing a read-back operation on the depth texture information by using the off-chip memory.
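As a purely illustrative aside, the following C++ sketch models the storage decision described in claims 2 and 3; DepthStorage, PresetDepthPass, and HandleSwitchInstruction are hypothetical names introduced here under the assumption that the video memory of claim 3 refers to the on-chip storage of claim 2, and the sketch is not an implementation of any particular graphics API.

```cpp
#include <iostream>
#include <string>

// Hypothetical description of where the depth texture of the preset pass lives.
enum class DepthStorage { OnChip, OffChip };

struct PresetDepthPass {
    std::string  depthSaveDirectory;              // the depth-save directory named in claim 2
    DepthStorage storage = DepthStorage::OnChip;  // default: tile/on-chip memory on mobile
};

// Sketch: when the switch-to-preset-pass instruction also requires read-back,
// the depth texture is first moved out of on-chip memory so it can be read.
void HandleSwitchInstruction(PresetDepthPass& pass, bool readBackRequested) {
    if (readBackRequested && pass.storage == DepthStorage::OnChip) {
        pass.storage = DepthStorage::OffChip;  // resolve to off-chip memory
        std::cout << "depth texture resolved to off-chip memory for read-back\n";
    }
}
```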
4. The method according to claim 1, wherein the rendering the hair material in the rendering queue for the semitransparent materials in stages, with the preset threshold as the partition, comprises:
dividing the hair material in the rendering queue for the semitransparent materials, with the preset threshold as the partition, into a hair material part whose opacity is greater than the preset threshold and a remaining hair material part;
first performing standard rendering on the hair material part whose opacity is greater than the preset threshold, and then performing deep-scene rendering on the remaining hair material part.
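For illustration only, a minimal C++ sketch of the per-fragment behaviour that such a two-stage split implies is given below; Fragment, PassState, and the two helper functions are hypothetical names, and the clip comparison is an assumption about how the standard-rendering stage discards low-opacity fragments rather than a quotation of the patent.

```cpp
// Illustrative fragment data and pass state; none of these names come from the patent.
struct Fragment {
    float opacity;  // opacity of the hair fragment
    float depth;    // its depth value
};

struct PassState {
    bool depthWrite;  // whether the pass writes depth
    bool depthTest;   // whether the pass tests against existing depth
};

// Stage 1 ("standard rendering"): fragments at or below the threshold are discarded,
// and the survivors behave like opaque geometry, writing depth.
bool StandardPassKeepsFragment(const Fragment& fragment, float presetThreshold) {
    return fragment.opacity > presetThreshold;  // analogue of clip(opacity - threshold)
}

// Stage 2: the remaining hair is depth-tested against the depth written in stage 1,
// but does not write depth itself, so overlapping strands still blend.
PassState RemainingPassState() {
    return PassState{/*depthWrite=*/false, /*depthTest=*/true};
}
```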
5. The method according to any one of claims 1-4, further comprising:
acquiring a semitransparent material part related to a drop shadow in the hair material, and, upon receiving a semitransparent drop-shadow instruction, performing shadow casting by taking the semitransparent material part as an entity.
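Again purely as an illustration, the sketch below marks such a hair part so that a shadow pass can treat it as solid geometry; HairShadowPart and its fields are invented names, and the console output merely stands in for whatever the shadow pass would actually do.

```cpp
#include <iostream>

// Invented structure for the hair part involved in the drop shadow.
struct HairShadowPart {
    bool involvedInDropShadow = true;  // the part acquired from the hair material
    bool castAsSolidEntity    = false; // how the shadow pass will treat it
};

// Sketch: on receiving the semitransparent drop-shadow instruction, the part is
// flagged so that the shadow pass treats it as a solid entity.
void OnSemitransparentDropShadowInstruction(HairShadowPart& part) {
    if (part.involvedInDropShadow) {
        part.castAsSolidEntity = true;  // the shadow map sees it as opaque geometry
        std::cout << "hair part registered as a solid shadow caster\n";
    }
}
```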
6. The method of any one of claims 1-4, wherein, after the obtaining semitransparent materials in the game character, the method further comprises:
if a semitransparent material belongs to the cloth material of a cloth part of the game character, adding a shading model for controlling the rendering order;
and changing, by using the shading model, the rendering order of sub-meshes in the mesh data formed for the cloth material, and rendering the cloth material according to the changed rendering order, wherein the mesh data of the vertex part corresponding to the cloth material forms a sub-mesh.
7. The method according to claim 6, wherein the changing, by using the shading model, the rendering order of sub-meshes in the mesh data formed for the cloth material and rendering the cloth material according to the changed rendering order comprises:
determining, by using the shading model, a material identification code formed according to the creation order of the cloth material;
and sorting the material identification codes to change the rendering order of the sub-meshes in the mesh data formed for the cloth material, and rendering the cloth material according to the changed rendering order.
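As a non-authoritative sketch of what sorting by such an identification code could look like in C++, the snippet below reorders sub-meshes by a materialId field; SubMesh, materialId, vertexOffset, and the ascending sort order are assumptions introduced for illustration.

```cpp
#include <algorithm>
#include <vector>

// Each cloth sub-mesh carries an identification code derived from the order in
// which its material was created; both field names are invented for this sketch.
struct SubMesh {
    int materialId;   // identification code formed by the material creation order
    int vertexOffset; // start of this sub-mesh's vertices in the cloth mesh data
};

// Sorting by the identification code changes the rendering order of the sub-meshes;
// the cloth is then rendered sub-mesh by sub-mesh in this new order.
void ReorderClothSubMeshes(std::vector<SubMesh>& subMeshes) {
    std::sort(subMeshes.begin(), subMeshes.end(),
              [](const SubMesh& a, const SubMesh& b) {
                  return a.materialId < b.materialId;  // earlier-created materials first
              });
}
```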
8. The method of any one of claims 1-4, wherein, after the obtaining semitransparent materials in the game character, the method further comprises:
if a semitransparent material belongs to a subsurface scattering material of a skin part of the game character, weakening the intensity map of the subsurface scattering material so as to reduce the subsurface scattering effect corresponding to the subsurface scattering material;
and rendering the subsurface scattering material by using the weakened map.
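For illustration only, the following C++ sketch treats the intensity map as a flat array of values and scales it down by a fixed factor; both the array representation and the 0.5 factor are assumptions made here, since the patent does not specify how the weakening is performed.

```cpp
#include <algorithm>
#include <vector>

// Sketch: the intensity map of the subsurface scattering material is represented
// here as a flat array of intensity values and scaled down by a fixed factor.
// The 0.5f default is an assumption, not a value taken from the patent.
std::vector<float> WeakenSssIntensityMap(const std::vector<float>& intensityMap,
                                         float weakenFactor = 0.5f) {
    std::vector<float> weakened(intensityMap.size());
    std::transform(intensityMap.begin(), intensityMap.end(), weakened.begin(),
                   [weakenFactor](float value) { return value * weakenFactor; });
    return weakened;  // the skin material is then rendered with this weakened map
}
```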
9. A game character rendering apparatus, comprising:
an acquisition unit, configured to acquire semitransparent materials in a game character, wherein the semitransparent materials belong to materials of different parts of the game character;
a clipping unit, configured to clip the hair material by using a preset threshold if a semitransparent material belongs to the hair material of a hair part of the game character;
a writing unit, configured to write depth for the part of the hair material whose opacity is greater than the preset threshold, and to combine the remaining parts of the hair material to form a rendering queue for the semitransparent materials;
and a first rendering unit, configured to render the hair material in the rendering queue for the semitransparent materials in stages, with the preset threshold as the partition.
10. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the game character rendering method of any one of claims 1 to 8 when executing the computer program.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the game character rendering method according to any one of claims 1 to 8.
CN202110929093.6A 2021-08-13 2021-08-13 Game role rendering method, device and equipment Active CN113694510B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110929093.6A CN113694510B (en) 2021-08-13 2021-08-13 Game role rendering method, device and equipment
PCT/CN2021/132561 WO2023015770A1 (en) 2021-08-13 2021-11-23 Game character rendering method and apparatus, computer device, and readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110929093.6A CN113694510B (en) 2021-08-13 2021-08-13 Game role rendering method, device and equipment

Publications (2)

Publication Number Publication Date
CN113694510A true CN113694510A (en) 2021-11-26
CN113694510B CN113694510B (en) 2024-01-09

Family

ID=78652574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110929093.6A Active CN113694510B (en) 2021-08-13 2021-08-13 Game role rendering method, device and equipment

Country Status (2)

Country Link
CN (1) CN113694510B (en)
WO (1) WO2023015770A1 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110279447A1 (en) * 2010-05-16 2011-11-17 Zebra Imaging, Inc. Rendering Transparent Geometry
CN108876931B (en) * 2017-05-12 2021-04-16 腾讯科技(深圳)有限公司 Three-dimensional object color adjustment method and device, computer equipment and computer readable storage medium
CN111508053B (en) * 2020-04-26 2023-11-28 网易(杭州)网络有限公司 Rendering method and device of model, electronic equipment and computer readable medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5923333A (en) * 1997-01-06 1999-07-13 Hewlett Packard Company Fast alpha transparency rendering method
CN101055645A (en) * 2007-05-09 2007-10-17 北京金山软件有限公司 A shade implementation method and device
KR20150042094A (en) * 2013-10-10 2015-04-20 삼성전자주식회사 Method and apparatus for rendering object and recording medium thereof
CN106815883A (en) * 2016-12-07 2017-06-09 珠海金山网络游戏科技有限公司 The hair treating method and system of a kind of game role
CN109389664A (en) * 2017-08-04 2019-02-26 腾讯科技(深圳)有限公司 Model pinup picture rendering method, device and terminal
CN110827381A (en) * 2019-11-04 2020-02-21 广联达科技股份有限公司 Graphic layered display method and device based on ThreeJS and storage medium
CN113052951A (en) * 2021-06-01 2021-06-29 腾讯科技(深圳)有限公司 Object rendering method and device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
恶毒的狗, "修正半透明头发的渲染异常" (Fixing the abnormal rendering of semitransparent hair), https://blog.csdn.net/weixin_45979158/article/details/103689714 *

Also Published As

Publication number Publication date
CN113694510B (en) 2024-01-09
WO2023015770A1 (en) 2023-02-16

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant