WO2023151211A1 - Model anti-penetration method and apparatus, electronic device, and storage medium - Google Patents

Model anti-penetration method and apparatus, electronic device, and storage medium

Info

Publication number
WO2023151211A1
WO2023151211A1 PCT/CN2022/099807 CN2022099807W WO2023151211A1 WO 2023151211 A1 WO2023151211 A1 WO 2023151211A1 CN 2022099807 W CN2022099807 W CN 2022099807W WO 2023151211 A1 WO2023151211 A1 WO 2023151211A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
sub
rendering
models
display position
Prior art date
Application number
PCT/CN2022/099807
Other languages
English (en)
French (fr)
Inventor
赵琦
Original Assignee
网易(杭州)网络有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 网易(杭州)网络有限公司 filed Critical 网易(杭州)网络有限公司
Publication of WO2023151211A1

Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 — Controlling the output signals based on the game progress
    • A63F13/52 — Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 — 3D [Three Dimensional] image rendering
    • G06T15/10 — Geometric effects
    • G06T15/40 — Hidden part removal
    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 — Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 — Methods for processing data by generating or executing the game program
    • A63F2300/66 — Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6653 — Methods for processing data by generating or executing the game program for rendering three dimensional images for altering the visibility of an object, e.g. preventing the occlusion of an object, partially hiding an object

Definitions

  • the present disclosure relates to the field of computer technology, in particular to a model anti-penetration method and device, electronic equipment, and a storage medium.
  • the first method is to adjust the physical parameters of the clothes and the character's body, and of the outer and inner layers of clothes, so that a reasonable distance is kept between the clothes and the character's body and between the outer and inner layers of clothes.
  • however, adjusting the physical parameters in this way takes a lot of time, the efficiency is low, and it makes the clothes look stretched, which affects the appearance.
  • the second way is to mark the inner clothing meshes and character body meshes that are occluded by the outer layer of clothing during the model making stage, and then cull the marked inner clothing meshes and character body meshes during rendering.
  • when a model processed in this way is in motion, obvious glitches are prone to appear because the inner meshes occluded by the outer layer of clothing have been culled; for example, when a character wearing a long skirt moves, part of the legs hidden inside the skirt should be exposed, but because those legs have been culled from rendering, the leg appears broken, producing an obvious visual glitch.
  • the present disclosure is proposed to provide a model anti-penetration method and device, electronic equipment, and storage medium that overcome the above problems or at least partially solve the above problems, including:
  • a model anti-penetration method, comprising:
  • determining a target sub-model of a model to be rendered; the target sub-model is, among the multiple sub-models that make up the model to be rendered, the sub-model whose level number is a target level number;
  • rendering the target sub-model, and storing the display positions of the front-face pixels corresponding to the target sub-model into a buffer;
  • when rendering the other sub-models of the model to be rendered except the target sub-model, culling, among the pixels corresponding to the other sub-models, the pixels whose display positions are the same as the display positions stored in the buffer.
  • optionally, rendering the target sub-model and storing the display positions of the front-face pixels corresponding to the target sub-model into the buffer includes: performing back-face culling rendering on the target sub-model, and storing the display positions of the front-face pixels corresponding to the target sub-model into the buffer.
  • optionally, rendering the target sub-model and storing the display positions of the front-face pixels corresponding to the target sub-model into the buffer includes:
  • when the target sub-model is composed of an outer mesh and an inner mesh, performing back-face culling rendering on the outer mesh, and storing the display positions of the front-face pixels corresponding to the outer mesh into the buffer; and rendering the inner mesh.
  • optionally, culling the pixels whose display positions are the same as the display positions stored in the buffer further includes:
  • when the other sub-models are composed of two or more sub-models, rendering the two or more sub-models sequentially in descending order of their levels.
  • optionally, rendering the two or more sub-models sequentially in descending order of their levels further includes:
  • when the rendering of the sub-model at the current level is completed, storing the display positions of the non-culled pixels corresponding to the sub-model at the current level into the buffer.
  • optionally, the other sub-models are composed of a first model part and a second model part, wherein the display positions of the pixels corresponding to the first model part are always different from the display positions stored in the buffer;
  • in that case, culling the pixels whose display positions are the same as the display positions stored in the buffer includes: when rendering the second model part, culling, among the pixels corresponding to the second model part, the pixels whose display positions are the same as the display positions stored in the buffer; and rendering the first model part after the rendering of the second model part is completed.
  • optionally, culling the pixels whose display positions are the same as the display positions stored in the buffer further includes performing back-face culling rendering on the other sub-models.
  • optionally, the target sub-model is, among the sub-models corresponding to its coverage area, the sub-model whose level number is the maximum level number.
  • a model anti-penetration apparatus, comprising:
  • the first determining module is configured to determine a target sub-model of the model to be rendered;
  • the target sub-model is a sub-model whose number of levels is the number of target levels among the multiple sub-models forming the model to be rendered;
  • a first rendering module, configured to render the target sub-model and store the display positions of the front-face pixels corresponding to the target sub-model into a buffer;
  • a second rendering module, configured to, when rendering the other sub-models of the model to be rendered except the target sub-model, cull, among the pixels corresponding to the other sub-models, the pixels whose display positions are the same as the display positions stored in the buffer.
  • the first rendering module includes:
  • the first back surface culling rendering module is configured to perform back culling rendering on the target sub-model, and store the display position of the front pixel corresponding to the target sub-model in a buffer.
  • the first rendering module includes:
  • the outer-layer rendering module is configured to, when the target sub-model is composed of an outer mesh and an inner mesh, perform back-face culling rendering on the outer mesh and store the display positions of the front-face pixels corresponding to the outer mesh into the buffer;
  • the inner-layer rendering module is configured to render the inner mesh.
  • the second rendering module includes:
  • a sequential rendering module configured to, when the other sub-models are composed of two or more sub-models, sequentially render the levels of the two or more sub-models in descending order.
  • the sequential rendering module is specifically configured to store, in the buffer, the display positions of the pixels corresponding to the current-level sub-model that have not been culled, when the rendering of the current-level sub-model is completed.
  • the other sub-models are composed of a first model part and a second model part, wherein the display position of the pixel corresponding to the first model part is always different from the display position stored in the buffer;
  • the second rendering module includes:
  • the second-model-part rendering module is configured to, when rendering the second model part, cull, among the pixels corresponding to the second model part, the pixels whose display positions are the same as the display positions stored in the buffer;
  • the first model part rendering module is configured to render the first model part after the rendering of the second model part is completed.
  • the second rendering module further includes:
  • the second back surface culling rendering module is configured to perform back culling rendering on the other sub-models.
  • the target sub-model is a sub-model whose number of levels is the maximum number of levels among the sub-models corresponding to its coverage area.
  • an electronic device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the model anti-penetration method described above.
  • a computer-readable storage medium where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the above-mentioned model anti-penetration method are implemented.
  • by determining the target sub-model of the model to be rendered, the display positions of the front-face pixels corresponding to the target sub-model are stored into the buffer when the target sub-model is rendered.
  • when the other sub-models of the model to be rendered except the target sub-model are rendered, the pixels whose display positions are the same as the display positions stored in the buffer are culled; this ensures that the parts of the other sub-models occluded by the target sub-model are not rendered and displayed, which solves the model clipping problem simply and efficiently.
  • FIG. 1 is a flow chart of the steps of a model anti-penetration method according to an embodiment of the present disclosure
  • FIG. 2 is a flow chart of the steps of a first processing stage in a model anti-penetration method according to an example of the present disclosure;
  • FIG. 3 is a flow chart of the steps of a second processing stage in a model anti-penetration method according to an example of the present disclosure;
  • FIG. 4 is a structural block diagram of a model anti-penetration apparatus according to an embodiment of the present disclosure;
  • FIG. 5 is a block diagram of an electronic device according to an embodiment of the present disclosure.
  • the first way is to adjust the physical parameters of the clothes and the character's body, and of the outer and inner layers of clothes, so that a reasonable distance is kept between the clothes and the character's body and between the outer and inner layers of clothes.
  • however, adjusting the physical parameters in this way takes art producers a lot of time, is inefficient, and makes the clothes look stretched, so the overall appearance of the model becomes unrealistic and the physical effect is severely degraded.
  • the second way is to mark the inner clothing meshes and character body meshes that are occluded by the outer layer of clothing during the model making stage, and then cull the marked inner clothing meshes and character body meshes during rendering.
  • this processing method places high demands on selecting the marked meshes: if the marked range is too small, that is, some meshes that should be marked are not marked, clipping will still occur; if the marked range is too large, that is, some meshes that are not occluded by the outer clothing are marked, the rendered model will appear broken.
  • in view of this, an embodiment of the present disclosure provides a model anti-penetration method to overcome the defects of the related art.
  • the model anti-penetration method in one of the embodiments of the present disclosure can run on a local terminal device or on a server.
  • when the model anti-penetration method runs on the server, it can be implemented and executed based on a cloud interaction system, wherein the cloud interaction system includes a server and a client device.
  • in an optional embodiment, various cloud applications, such as cloud games, can run under the cloud interaction system; a cloud game is a game mode based on cloud computing.
  • in the running mode of a cloud game, the running body of the game program and the presenting body of the game picture are separated: the storage and running of the model anti-penetration method are completed on the cloud game server, and the client device is used for receiving and sending data and for presenting the game picture; for example, the client device can be a display device with a data transmission function close to the user side, such as a first terminal device, a television, a computer, or a palmtop computer, but the device performing the model anti-penetration method is the cloud game server in the cloud.
  • when playing a game, the player operates the client device to send operation instructions to the cloud game server; the cloud game server runs the game according to the operation instructions, encodes and compresses the game picture and other data, and returns them to the client device through the network, and finally the client device decodes and outputs the game picture.
  • the local terminal device stores a game program and is used to present a game screen.
  • the local terminal device is used to interact with the player through the graphical user interface, that is, the conventional electronic device downloads and installs the game program and runs it.
  • the local terminal device may provide the graphical user interface to the player in various manners, for example, rendering and displaying it on the display screen of the terminal, or providing it to the player through holographic projection.
  • the local terminal device may include a display screen and a processor, the display screen is used to present a graphical user interface, the graphical user interface includes a game screen, and the processor is used to run the game, generate a graphical user interface, and control the graphical user interface displayed on the display.
  • FIG. 1 shows a flow chart of the steps of a model anti-penetration method provided by an embodiment of the present disclosure, which may specifically include the following steps:
  • Step 101: determine a target sub-model of the model to be rendered;
  • Step 102: render the target sub-model, and store the display positions of the front-face pixels corresponding to the target sub-model into a buffer;
  • Step 103: when rendering the other sub-models of the model to be rendered except the target sub-model, cull, among the pixels corresponding to the other sub-models, the pixels whose display positions are the same as the display positions stored in the buffer.
  • in the embodiments of the present disclosure, the display positions of the front-face pixels corresponding to the target sub-model are stored into the buffer when the target sub-model is rendered.
  • when the other sub-models are rendered, the pixels whose display positions are the same as the display positions stored in the buffer are culled; this ensures that the parts of the other sub-models occluded by the target sub-model are not rendered and displayed, which solves the model clipping problem simply and efficiently.
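  • as an illustration only, the overall flow of steps 101-103 can be sketched as follows; the `RenderSubModel` record and the two placeholder helpers are hypothetical names introduced here, and a concrete engine would typically realize the buffer with the stencil buffer, as in the worked example later in this document.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-sub-model record; in the disclosure the levels are assigned by
// art producers during the model making stage.
struct RenderSubModel {
    int  level;     // level number (higher = further out)
    bool isTarget;  // true if this sub-model's level is the target level number
    int  meshId;    // handle for whatever draw call the engine uses
};

// Placeholder hooks: a real engine would implement both with the stencil-buffer
// technique shown in the worked example later in this document.
void renderAndMarkFrontPixels(const RenderSubModel& /*m*/) { /* step 102 */ }
void renderWithMarkedPixelsCulled(const RenderSubModel& /*m*/) { /* step 103 */ }

// Sketch of the overall flow of steps 101-103.
void renderModelWithoutClipping(std::vector<RenderSubModel> subModels) {
    // Steps 101-102: render the target sub-model(s) first and record the display
    // positions of their front-face pixels in the buffer.
    for (const RenderSubModel& s : subModels)
        if (s.isTarget) renderAndMarkFrontPixels(s);

    // Step 103: render the remaining sub-models in descending level order, culling
    // any pixel whose display position was already recorded in the buffer.
    std::sort(subModels.begin(), subModels.end(),
              [](const RenderSubModel& a, const RenderSubModel& b) { return a.level > b.level; });
    for (const RenderSubModel& s : subModels)
        if (!s.isTarget) renderWithMarkedPixelsCulled(s);
}
```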
  • in step 101, the target sub-model of the model to be rendered is determined.
  • the model to be rendered may be a character model that needs to be displayed in real time during the running of the game application, for example, a character model in a game.
  • the model to be rendered is generally composed of multiple sub-models. Taking the character model as an example, the character model is generally composed of the character's body model and clothing model.
  • Each sub-model has a corresponding level. According to the level corresponding to each sub-model, the target sub-model corresponding to the target level number can be determined.
  • the target level number is generally the maximum level number, that is, the target sub-model is the outermost sub-model of the model to be rendered.
  • the target number of levels may be an absolute maximum number of levels, or a relative maximum number of levels, and the two cases will be exemplarily explained later.
  • art producers can specify the respective levels of the multiple sub-models that make up the model to be rendered.
  • the art producer may use the center line of the model to be rendered as a reference, and determine the level of each sub-model according to the maximum vertical distance between each sub-model and the center line.
  • for example, when the model to be rendered is a character model, its center line is the straight line on which the midline of the character model lies.
  • if the character model is composed of one body model and one clothing model, the perpendicular distance between the body model and the center line is smaller than the perpendicular distance between the clothing model and the center line; therefore, the body model is at the innermost level and the clothing model is at the outermost level, that is, the body model is the first level and the clothing model is the second level.
  • it can be understood that the target sub-model of the model to be rendered is, among the multiple sub-models constituting the model to be rendered, the sub-model with the largest perpendicular distance from the center line of the model to be rendered, that is, the sub-model with the largest level number among the sub-models that make up the model to be rendered.
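  • in the disclosure the levels are specified by art producers; purely as an illustration of the distance criterion they would apply, the following sketch ranks sub-models by their maximum perpendicular distance from the model's center line, assuming (as a simplification introduced here) that the center line is the vertical axis through the model origin.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct AuthoredSubModel {
    std::vector<Vec3> vertices;  // mesh vertices in model space
    int level = 0;               // level number to be assigned (1 = innermost)
};

// Maximum perpendicular distance from the assumed center line (the vertical axis
// through the model origin) to any vertex of the sub-model.
float maxDistanceFromCenterLine(const AuthoredSubModel& m) {
    float maxDist = 0.0f;
    for (const Vec3& v : m.vertices)
        maxDist = std::max(maxDist, std::sqrt(v.x * v.x + v.z * v.z));
    return maxDist;
}

// Assign levels in increasing order of that distance: the body model (closest to
// the center line) becomes level 1, the outermost clothing the highest level.
void assignLevels(std::vector<AuthoredSubModel>& subModels) {
    std::vector<AuthoredSubModel*> order;
    for (AuthoredSubModel& m : subModels) order.push_back(&m);
    std::sort(order.begin(), order.end(),
              [](AuthoredSubModel* a, AuthoredSubModel* b) {
                  return maxDistanceFromCenterLine(*a) < maxDistanceFromCenterLine(*b);
              });
    for (int i = 0; i < static_cast<int>(order.size()); ++i)
        order[i]->level = i + 1;
}
```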
  • for example, when the model to be rendered is a character model wearing a vest jacket, long-sleeved underwear, and a skirt, where the vest jacket does not overlap the skirt and the long-sleeved underwear does not overlap the skirt, the character's body model in the model to be rendered is the first level, the long-sleeved underwear model is the second level, the vest jacket model is the third level, and the skirt model is the second level.
  • it can be seen that, among the multiple sub-models of the model to be rendered, the sub-model corresponding to the absolute maximum level number is the vest jacket model; therefore, in this example, the vest jacket model is the target sub-model, and the long-sleeved underwear model, the skirt model, and the body model are the other sub-models.
  • when there are multiple sub-models with the largest level number, the sub-models corresponding to the maximum level number are all target sub-models.
  • for example, when the model to be rendered is a character model wearing a top and a skirt, where the top and the skirt do not overlap, the character's body model in the model to be rendered is the first level, and the top model and the skirt model are both the second level and both at the maximum level; then the target sub-models are the top model and the skirt model.
  • in an optional embodiment, the target sub-model is, among the sub-models corresponding to its coverage area, the sub-model whose level number is the maximum level number; that is, the target level number corresponding to the target sub-model is a relative maximum level number. It can be understood that, in this embodiment, it can be determined, according to the area corresponding to each sub-model (that is, its coverage area), whether the sub-model is at the largest level within its coverage area, and if so, the sub-model is determined as a target sub-model.
  • for example, when the model to be rendered is a character model wearing a vest jacket, long-sleeved underwear, and a skirt, where the vest jacket does not overlap the skirt and the long-sleeved underwear does not overlap the skirt, the character's body model in the model to be rendered is the first level, the long-sleeved underwear model is the second level, the vest jacket model is the third level, and the skirt model is the second level.
  • for the skirt model, only the body model is involved in its coverage area, and the body model is the first level, which is smaller than the level number of the skirt model; that is, the skirt model is the sub-model with the largest level number within its coverage area, so the skirt model can be determined as a target sub-model.
  • similarly, for the vest jacket, its coverage area involves the body model and the long-sleeved underwear model, where the body model is the first level and the long-sleeved underwear model is the second level, both smaller than the level number of the vest jacket model; that is, the vest jacket model is the sub-model with the largest level number within its coverage area, so the vest jacket model can also be determined as a target sub-model.
  • for the long-sleeved underwear model, its coverage area involves the body model and the vest jacket model; since the level number of the long-sleeved underwear model is smaller than that of the vest jacket model, the long-sleeved underwear model is not the sub-model with the largest level number within its coverage area, and therefore it is not a target sub-model.
  • that is, in this example, the target sub-models include the skirt model and the vest jacket model, and the long-sleeved underwear model and the body model belong to the other sub-models.
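  • the coverage-area test can be sketched as follows; the `SubModelInfo` descriptor and the idea of listing overlapping sub-model ids are assumptions introduced for illustration, since the disclosure does not specify how coverage areas are recorded (for example, by artist annotation).

```cpp
#include <vector>

// Hypothetical descriptor used only for this illustration.
struct SubModelInfo {
    int id;                          // identifier of this sub-model
    int level;                       // level number assigned by the artist
    std::vector<int> overlappingIds; // ids of sub-models sharing part of its coverage area
};

// A sub-model is a target sub-model if no sub-model within its coverage area has a
// higher level number, i.e. its level is the relative maximum for that area.
bool isTargetSubModel(const SubModelInfo& candidate,
                      const std::vector<SubModelInfo>& all) {
    for (int otherId : candidate.overlappingIds) {
        for (const SubModelInfo& other : all) {
            if (other.id == otherId && other.level > candidate.level)
                return false;  // e.g. the long-sleeved underwear under the vest jacket
        }
    }
    return true;  // e.g. the skirt model (only the body lies in its coverage area)
}
```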
  • in step 102, the target sub-model is rendered, and the display positions of the front-face pixels corresponding to the target sub-model are stored into a buffer.
  • in the embodiments of the present disclosure, when the model to be rendered is rendered, the target sub-model is rendered first, and when the target sub-model is rendered, the display positions of the front-face pixels corresponding to the target sub-model are stored into the buffer.
  • the front pixel refers to the pixel corresponding to the front of the target sub-model; and the front generally refers to the surface of the target sub-model that can be directly seen visually when the model to be rendered is normally displayed.
  • whether a face is a front face depends on the convention set when the artist makes the model; for example, the artist may specify that a polygon (that is, a face) whose points appear in clockwise order is a front face, so a face is a front face when its points appear in clockwise order.
  • the display positions of the front-face pixels corresponding to the target sub-model are stored into the buffer to serve as a reference when the other sub-models are rendered, preventing pixels corresponding to the other sub-models from being displayed at the display positions stored in the buffer, which would cause the target sub-model to be clipped through.
  • rendering the target sub-model and storing the display position of the front pixel corresponding to the target sub-model into the buffer includes:
  • backside culling rendering refers to excluding the backside of the target sub-model and rendering only the front side of the target sub-model during rendering, so as to reduce the number of rendering surfaces and improve rendering efficiency.
  • the back face generally refers to a face of the target sub-model that cannot be seen directly when the model to be rendered is displayed normally. Whether a face is a back face likewise depends on the convention set when the artist makes the model: for example, if a polygon (that is, a face) whose points appear in clockwise order is defined as a front face, then, correspondingly, a face whose points appear in counter-clockwise order is a back face.
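  • for reference, the winding-order convention and back-face culling described above map directly onto standard graphics-API state; the snippet below uses OpenGL purely as an illustrative example, as the disclosure does not name a specific API.

```cpp
#include <GL/gl.h>

// Illustrative OpenGL state for the back-face culling rendering described above:
// with the artist convention that clockwise winding marks a front face, back faces
// are culled so that only front faces are rasterized.
void setupBackFaceCullingClockwiseFront() {
    glFrontFace(GL_CW);      // clockwise winding order = front face
    glEnable(GL_CULL_FACE);  // enable face culling
    glCullFace(GL_BACK);     // cull back faces; only front faces are rendered
}
```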
  • the above-mentioned rendering of the target sub-model, and storing the display position of the front pixel corresponding to the target sub-model into the buffer include:
  • when the target sub-model is composed of an outer mesh and an inner mesh, performing back-face culling rendering on the outer mesh, and storing the display positions of the front-face pixels corresponding to the outer mesh into the buffer; and rendering the inner mesh.
  • when there is a need to see the inside of the clothing, the artist, when making the model representing the clothing, uses two layers of meshes with exactly the same shape, one inside and one outside; there is a certain thickness between the two layers of meshes, and they never overlap or interweave, so that the inside of the garment can be shown in scenes where it needs to be seen.
  • for a target sub-model composed of an outer mesh and an inner mesh, back-face culling rendering can first be performed on the outer mesh to reduce the number of rendered faces, while the inner mesh is rendered directly, to avoid the inner mesh not being rendered when it needs to be shown as a result of back-face culling rendering.
  • in step 103, when the other sub-models of the model to be rendered except the target sub-model are rendered, the pixels, among the pixels corresponding to the other sub-models, whose display positions are the same as the display positions stored in the buffer are culled.
  • in a specific implementation, the display position of each pixel corresponding to the other sub-models can be compared with the display positions in the buffer.
  • when the display position of a pixel corresponding to the other sub-models is different from every display position stored in the buffer, rendering is written; when it is the same as one of the display positions stored in the buffer, rendering is not written, that is, among the pixels corresponding to the other sub-models, the pixels whose display positions are the same as the display positions stored in the buffer are culled.
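  • conceptually, this comparison is a membership test against the set of display positions recorded for the target sub-model; the CPU-side sketch below is for illustration only, since in practice this per-pixel test is what the stencil buffer performs in hardware, as in the worked example later in this document.

```cpp
#include <cstdint>
#include <unordered_set>

// A screen-space display position packed into a single key (illustrative only).
inline std::uint64_t packPosition(std::uint32_t x, std::uint32_t y) {
    return (static_cast<std::uint64_t>(x) << 32) | y;
}

// Returns true if a pixel of another sub-model may be written, i.e. its display
// position was NOT recorded when the target sub-model was rendered.
bool shouldWritePixel(std::uint32_t x, std::uint32_t y,
                      const std::unordered_set<std::uint64_t>& targetPositions) {
    return targetPositions.count(packPosition(x, y)) == 0;  // same position -> cull
}
```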
  • other sub-models may be divided into a first model part and a second model part, wherein the display position of the pixel corresponding to the first model part is always different from the display position stored in the buffer ;
  • in that case, culling the pixels whose display positions are the same as the display positions stored in the buffer can include: when rendering the second model part, culling, among the pixels corresponding to the second model part, the pixels whose display positions are the same as the display positions stored in the buffer;
  • after the rendering of the second model part is completed, the first model part is rendered.
  • the other sub-models can be divided into the first model part and the second model part by the artist based on experience, so as to ensure that the display positions of the pixels corresponding to the first model part are always different from the display positions stored in the buffer.
  • for example, when the model to be rendered corresponds to a character model wearing a dress, the target sub-model of the model to be rendered is the dress model and the other sub-model is the character's body model.
  • considering that the character's head is never occluded by the dress, the body model can be divided into the head and the other parts, the head being the first model part and the other parts being the second model part.
  • alternatively, since the character's palms and soles are also never occluded by the dress, the body model can be divided into the head, palms, soles, and the remaining parts, where the head, palms, and soles form the first model part and the remaining parts form the second model part.
  • the first model part can be directly rendered to reduce the number of pixels to be compared and improve rendering efficiency.
  • because, when rendering the second model part, this embodiment compares whether the display position of each pixel corresponding to the second model part is the same as a display position stored in the buffer and culls it when they are the same, the division between the first model part and the second model part does not need to be very precise, which reduces the difficulty of the division for art producers.
  • as in the above example of the character model wearing a dress, the first model part can be only the head of the character's body model, or it can be composed of the head, palms, and soles of the character's body model; it can be seen that the division does not need to be very precise.
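  • the division can be reflected in the draw order: the second model part is drawn with the position test enabled and the first model part is drawn afterwards with the test disabled; the OpenGL calls below are one possible mapping of that idea (not taken from the disclosure), and `drawMesh` stands in for whatever draw call the engine uses.

```cpp
#include <GL/gl.h>

// Stub standing in for the engine's draw call; a real engine would issue the
// actual draw commands for the given mesh here.
inline void drawMesh(int /*meshId*/) {}

// The dress (target sub-model) has already been rendered with the stencil test
// writing `markerRef` at every front-face pixel it covers.
void renderBodyModel(int secondPartMeshId, int firstPartMeshId, GLint markerRef) {
    // Second model part (torso, legs, ...): cull pixels already marked by the dress.
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_NOTEQUAL, markerRef, 0xFF);  // pass only where the marker is absent
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawMesh(secondPartMeshId);

    // First model part (head, palms, soles): never occluded, rendered directly.
    glDisable(GL_STENCIL_TEST);
    drawMesh(firstPartMeshId);
}
```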
  • when the model to be rendered is composed of three or more sub-models, that is, when the other sub-models include two or more sub-models, culling, when rendering the other sub-models of the model to be rendered except the target sub-model, the pixels whose display positions are the same as the display positions stored in the buffer may further include:
  • rendering the two or more sub-models sequentially in descending order of their levels.
  • the sub-models may be rendered sequentially in descending order of the levels of the sub-models.
  • for example, when the model to be rendered corresponds to a character model wearing a dress with a vest over it, the first level is the character's body model, the second level is the dress model, and the third level is the vest model; therefore, the target sub-model of the model to be rendered is the vest model, the other sub-models are the dress model and the character's body model, and the level of the dress model is greater than the level of the body model.
  • in this case, after the rendering of the vest model is completed, the dress model is rendered, and pixels of the dress model whose display positions are the same as the display positions stored in the buffer are culled; after the rendering of the dress model is completed, the body model is rendered, and pixels of the body model whose display positions are the same as the display positions stored in the buffer are culled, which ensures that the outermost vest model is not clipped through by the inner models.
  • further, rendering the two or more sub-models sequentially in descending order of their levels may further include: when the rendering of the sub-model at the current level is completed, storing the display positions of the non-culled pixels corresponding to the sub-model at the current level into the buffer.
  • the sub-model at the current level can be the sub-model at any level; during its rendering, the display positions of the pixels corresponding to the sub-model at the current level are compared with the display positions stored in the buffer, pixels with the same display positions are culled, and display positions that differ are added to the buffer. In this way, when the sub-model at the next level (that is, the sub-model whose level number is smaller than that of the sub-model at the current level) is rendered, it can be determined whether the display positions of its pixels coincide with the display positions of the pixels of the previously rendered sub-models, and the coinciding pixels are culled, preventing the next-level sub-model from clipping through the previously rendered sub-models.
  • in the above example of the character model wearing a dress and a vest, the rendering process of the vest model and the dress model is consistent with the preceding description and is not repeated here.
  • after the dress model is rendered, the display positions of the pixels corresponding to the dress model are added to the buffer, so that the display area of the vest and the display area where the dress does not overlap the vest are both stored in the buffer.
  • then, when the body model is rendered, the pixels of the body model whose display positions fall within the vest display area or within the display area where the dress does not overlap the vest are culled, which prevents the body model from clipping through the dress model or the vest model.
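  • one way to realize this accumulation is to keep the stencil test and stencil writes enabled across the whole descending-level loop, so that every layer both respects and extends the set of occupied positions; the OpenGL sketch below is an illustration of that idea with a hypothetical `drawMesh` call, and the specific stencil values are arbitrary.

```cpp
#include <GL/gl.h>
#include <algorithm>
#include <vector>

struct Layer {
    int level;   // level number (higher = further out)
    int meshId;  // handle understood by the stub drawMesh below
};

// Stub standing in for the engine's draw call.
inline void drawMesh(int /*meshId*/) {}

// Render all layers from the outermost to the innermost: a pixel is written only
// where the stencil value is still 0 (no outer layer drew there), and every pixel
// that is written is then marked by incrementing its stencil value.
void renderLayersOutsideIn(std::vector<Layer> layers) {
    std::sort(layers.begin(), layers.end(),
              [](const Layer& a, const Layer& b) { return a.level > b.level; });

    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_EQUAL, 0, 0xFF);        // pass only where nothing was drawn yet
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);  // mark the positions this layer occupies

    for (const Layer& layer : layers)
        drawMesh(layer.meshId);

    glDisable(GL_STENCIL_TEST);
}
```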
  • similarly, the sub-model at the current level can also be divided into an outside-the-model part and an inside-the-model part, where outside and inside are relative to the already-rendered sub-models; that is, the sub-model at the current level can be divided into an outside-the-model part that will never be occluded by the already-rendered sub-models and the remaining inside-the-model part.
  • this division can be made and marked by art producers based on experience.
  • optionally, culling the pixels whose display positions are the same as the display positions stored in the buffer may further include: performing back-face culling rendering on the other sub-models.
  • that is, when rendering the other sub-models, back-face culling rendering can be selected to reduce the number of rendered faces and improve rendering efficiency.
  • the model anti-penetration method provided by the embodiments of the present disclosure corrects clipping at the rendering level and solves the model clipping caused by inaccurate physical simulation.
  • the embodiments of the present disclosure do not need to change physical parameters in order to eliminate clipping, can retain better cloth dynamics, and can save art producers a large amount of time.
  • the embodiments of the present disclosure eliminate clipping pixel by pixel, which is more accurate and does not produce the glitches that occur when a culled part needs to be displayed.
  • the embodiments of the present disclosure can further improve model rendering efficiency through operations that do not require precise marking of the culled parts, and such operations do not bring a large workload to art producers.
  • the embodiments of the present disclosure can correctly handle interleaving problems under the premise of ensuring the physical effect of clothes, reduce the workload of art, and improve the efficiency of model rendering.
  • in the following example, the model to be rendered corresponds to a character model wearing a dress.
  • the implementation of the model anti-penetration method in this example may include two processing stages: the first processing stage is an offline processing stage, and the second processing stage is a runtime stage.
  • the first processing stage may include the following steps:
  • Step 201: determine the outer mesh of the target sub-model.
  • the target sub-model in this example is the dress model.
  • when the dress model is composed of an inner mesh and an outer mesh, the outer mesh of the dress model needs to be determined and saved as a separate material group; when the dress model consists of only one layer of mesh, that mesh is the outer mesh and is saved as a separate material group.
  • Step 202: divide the other sub-models of the model to be rendered except the target sub-model into a first model part and a second model part, wherein the display positions of the pixels corresponding to the first model part are always different from the display positions stored in the buffer.
  • the other sub-models are the body model.
  • artists can, based on experience, divide the body model into a first model part consisting of the head, palms, and soles and a remaining second model part, which improves efficiency in the subsequent runtime stage.
  • the second processing stage of this example may include the following steps:
  • Step 301: perform back-face culling rendering on the outer mesh of the target sub-model, and store the display positions corresponding to the pixels of the rendered outer mesh in the stencil buffer.
  • the stencil test can be turned on before the rendering starts.
  • the stencil test is a sample-by-sample operation provided by the 3D graphics pipeline after the Fragment Shader (fragment shading).
  • set the stencil function of the stencil test to Always (always pass), set the stencil Ref (reference value) to 0x48 (0x48 here is just an example; it can be any value in 0x01-0xff), and set the stencil operation to Replace; after the stencil test is configured, the back faces of the outer mesh are culled and the outer mesh is rendered, and when the rendering of the outer mesh is completed, the stencil buffer holds 0x48 at the display positions of the pixels corresponding to the outer mesh.
  • Step 302: render the inner mesh of the target sub-model.
  • when the target sub-model contains an inner mesh, the stencil Ref is modified to 0x16 (0x16 here is just an example; it can be any value in 0x01-0xff other than 0x48) and the inner mesh is then rendered; after the rendering of the inner mesh is completed, the stencil buffer holds 0x16 at the display positions of the pixels corresponding to the inner mesh.
  • Step 303: render the second model part of the other sub-models.
  • when rendering the second model part, pixels whose display positions already hold a value stored in the stencil buffer are culled; if the value at the display position of a pixel is not equal to a value stored in the stencil buffer, the pixel is rendered, and the stencil value at that display position is updated so that the position is also marked in the stencil buffer.
  • Step 304: render the first model part of the other sub-models.
  • the stencil test can be turned off before rendering the first model part, that is, the first model part is rendered directly without special processing.
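  • the four runtime steps above map naturally onto standard stencil-buffer state; the following OpenGL sketch is one possible reading of steps 301-304 (the disclosure does not name a specific graphics API), with `drawMesh` standing in for the engine's draw call, the example values 0x48 and 0x16 taken from the steps above, and the "not equal to a stored value" test of step 303 expressed as "the stencil still equals the cleared value 0".

```cpp
#include <GL/gl.h>

// Stub standing in for the engine's draw call.
inline void drawMesh(int /*meshId*/) {}

// Steps 301-304 for the dress example: the outer and inner dress meshes mark the
// stencil buffer, the body's second model part is culled wherever those marks
// exist, and the always-visible first model part is drawn unconditionally.
void renderDressedCharacter(int dressOuterMesh, int dressInnerMesh,
                            int bodySecondPart, int bodyFirstPart) {
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);

    // Step 301: outer dress mesh, back-face culled, marks its pixels with 0x48.
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
    glStencilFunc(GL_ALWAYS, 0x48, 0xFF);       // stencil function "Always", Ref = 0x48
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);  // stencil operation "Replace"
    drawMesh(dressOuterMesh);

    // Step 302: inner dress mesh, rendered without back-face culling, marked 0x16.
    glDisable(GL_CULL_FACE);
    glStencilFunc(GL_ALWAYS, 0x16, 0xFF);
    drawMesh(dressInnerMesh);

    // Step 303: body's second model part; draw only where the stencil still holds
    // the cleared value 0 (i.e. not covered by the dress) and mark written pixels.
    glStencilFunc(GL_EQUAL, 0x00, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
    drawMesh(bodySecondPart);

    // Step 304: body's first model part (head, palms, soles); stencil test off.
    glDisable(GL_STENCIL_TEST);
    drawMesh(bodyFirstPart);
}
```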
  • in this example, the outer mesh of the target sub-model of the model to be rendered is isolated in the offline stage, and the other sub-models of the model to be rendered except the target sub-model are divided into a first model part that is never occluded by the target sub-model and a remaining second model part.
  • in the runtime stage, the outer mesh of the target sub-model is rendered first and the values at the display positions of its pixels are stored in the specified stencil buffer, and then the second model part of the other sub-models is rendered.
  • when the second model part is rendered, the pixels whose display-position values equal the values stored in the stencil buffer are culled; this effectively prevents model clipping and, compared with the processing methods of the related art, is more efficient and produces a better result.
  • referring to FIG. 4, a structural block diagram of an embodiment of a model anti-penetration apparatus of the present disclosure is shown.
  • the model anti-penetration apparatus may specifically include the following modules:
  • the first determining module 401 is configured to determine the target sub-model of the model to be rendered; the target sub-model is a sub-model whose number of levels is the number of target levels among the multiple sub-models forming the model to be rendered;
  • the first rendering module 402 is configured to render the target sub-model, and store the display position of the front pixel corresponding to the target sub-model in a buffer;
  • the second rendering module 403 is configured to, when rendering other sub-models in the model to be rendered except the target sub-model, compare the corresponding display positions of the pixels corresponding to the other sub-models with the buffer Pixels with the same stored display position are eliminated.
  • the target sub-model is a sub-model with a maximum number of levels among sub-models corresponding to its coverage area.
  • the first rendering module 402 includes:
  • the first back surface culling rendering module is configured to perform back culling rendering on the target sub-model, and store the display position of the front pixel corresponding to the target sub-model in a buffer.
  • the first rendering module 402 includes:
  • the outer-layer rendering module is configured to, when the target sub-model is composed of an outer mesh and an inner mesh, perform back-face culling rendering on the outer mesh and store the display positions of the front-face pixels corresponding to the outer mesh into the buffer;
  • the inner-layer rendering module is configured to render the inner mesh.
  • the second rendering module 403 includes:
  • a sequential rendering module configured to, when the other sub-models are composed of two or more sub-models, sequentially render the levels of the two or more sub-models in descending order.
  • the sequential rendering module is specifically configured to, when the rendering of the sub-model at the current level is completed, store the display positions of the non-culled pixels corresponding to the sub-model at the current level into the buffer.
  • the other sub-models are composed of a first model part and a second model part, wherein the display positions of the pixels corresponding to the first model part are always different from the display positions stored in the buffer;
  • the second rendering module 403 includes:
  • the second-model-part rendering module is configured to, when rendering the second model part, cull, among the pixels corresponding to the second model part, the pixels whose display positions are the same as the display positions stored in the buffer;
  • the first model part rendering module is configured to render the first model part after the rendering of the second model part is completed.
  • the second rendering module 403 further includes:
  • the second back surface culling rendering module is configured to perform back culling rendering on the other sub-models.
  • in the apparatus, the first determining module 401 determines the target sub-model of the model to be rendered; when the first rendering module 402 renders the target sub-model, the display positions of the front-face pixels corresponding to the target sub-model are stored into the buffer; and when the second rendering module 403 renders the other sub-models of the model to be rendered except the target sub-model, the pixels, among the pixels corresponding to the other sub-models, whose display positions are the same as the display positions stored in the buffer are culled. This ensures that the parts of the other sub-models occluded by the target sub-model are not rendered and displayed, and solves the model clipping problem simply and efficiently.
  • since the apparatus embodiment is substantially similar to the method embodiment, its description is relatively simple; for related parts, reference can be made to the description of the method embodiment.
  • the embodiment of the present disclosure also discloses an electronic device.
  • the electronic device 500 includes a processor 510, a memory 520, and a computer program stored in the memory 520 and capable of running on the processor 510, When the computer program is executed by the processor 510, the steps of the above-mentioned model anti-penetration method are implemented.
  • when the computer program is executed by the processor 510, the following steps can be implemented: determining a target sub-model of a model to be rendered, the target sub-model being, among the multiple sub-models that make up the model to be rendered, the sub-model whose level number is a target level number; rendering the target sub-model, and storing the display positions of the front-face pixels corresponding to the target sub-model into a buffer; and, when rendering the other sub-models of the model to be rendered except the target sub-model, culling, among the pixels corresponding to the other sub-models, the pixels whose display positions are the same as the display positions stored in the buffer.
  • optionally, rendering the target sub-model and storing the display positions of the front-face pixels corresponding to the target sub-model into the buffer includes: performing back-face culling rendering on the target sub-model, and storing the display positions of the front-face pixels corresponding to the target sub-model into the buffer.
  • optionally, rendering the target sub-model and storing the display positions of the front-face pixels corresponding to the target sub-model into the buffer further includes: when the target sub-model is composed of an outer mesh and an inner mesh, performing back-face culling rendering on the outer mesh, and storing the display positions of the front-face pixels corresponding to the outer mesh into the buffer; and rendering the inner mesh.
  • optionally, culling, among the pixels corresponding to the other sub-models, the pixels whose display positions are the same as the display positions stored in the buffer further includes: when the other sub-models are composed of two or more sub-models, rendering the two or more sub-models sequentially in descending order of their levels.
  • optionally, rendering the two or more sub-models sequentially in descending order of their levels further includes: when the rendering of the sub-model at the current level is completed, storing the display positions of the non-culled pixels corresponding to the sub-model at the current level into the buffer.
  • optionally, the other sub-models are composed of a first model part and a second model part, wherein the display positions of the pixels corresponding to the first model part are always different from the display positions stored in the buffer; and culling, when rendering the other sub-models of the model to be rendered except the target sub-model, the pixels whose display positions are the same as the display positions stored in the buffer includes: when rendering the second model part, culling, among the pixels corresponding to the second model part, the pixels whose display positions are the same as the display positions stored in the buffer; and rendering the first model part after the rendering of the second model part is completed.
  • optionally, culling the pixels whose display positions are the same as the display positions stored in the buffer further includes: performing back-face culling rendering on the other sub-models.
  • optionally, the target sub-model is, among the sub-models corresponding to its coverage area, the sub-model whose level number is the maximum level number.
  • in the electronic device, by determining the target sub-model of the model to be rendered, the display positions of the front-face pixels corresponding to the target sub-model are stored into the buffer when the target sub-model is rendered.
  • when the other sub-models are rendered, the pixels whose display positions are the same as the display positions stored in the buffer are culled; this ensures that the parts of the other sub-models occluded by the target sub-model are not rendered and displayed, and solves the model clipping problem simply and efficiently.
  • the embodiment of the present disclosure also discloses a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the above model anti-penetration method are implemented.
  • when the computer program is executed by a processor, the following steps can be implemented: determining a target sub-model of a model to be rendered, the target sub-model being, among the multiple sub-models that make up the model to be rendered, the sub-model whose level number is a target level number; rendering the target sub-model, and storing the display positions of the front-face pixels corresponding to the target sub-model into a buffer; and, when rendering the other sub-models of the model to be rendered except the target sub-model, culling, among the pixels corresponding to the other sub-models, the pixels whose display positions are the same as the display positions stored in the buffer.
  • optionally, rendering the target sub-model and storing the display positions of the front-face pixels corresponding to the target sub-model into the buffer includes: performing back-face culling rendering on the target sub-model, and storing the display positions of the front-face pixels corresponding to the target sub-model into the buffer.
  • optionally, rendering the target sub-model and storing the display positions of the front-face pixels corresponding to the target sub-model into the buffer further includes: when the target sub-model is composed of an outer mesh and an inner mesh, performing back-face culling rendering on the outer mesh, and storing the display positions of the front-face pixels corresponding to the outer mesh into the buffer; and rendering the inner mesh.
  • optionally, culling, among the pixels corresponding to the other sub-models, the pixels whose display positions are the same as the display positions stored in the buffer further includes: when the other sub-models are composed of two or more sub-models, rendering the two or more sub-models sequentially in descending order of their levels.
  • optionally, rendering the two or more sub-models sequentially in descending order of their levels further includes: when the rendering of the sub-model at the current level is completed, storing the display positions of the non-culled pixels corresponding to the sub-model at the current level into the buffer.
  • optionally, the other sub-models are composed of a first model part and a second model part, wherein the display positions of the pixels corresponding to the first model part are always different from the display positions stored in the buffer; and culling, when rendering the other sub-models of the model to be rendered except the target sub-model, the pixels whose display positions are the same as the display positions stored in the buffer includes: when rendering the second model part, culling, among the pixels corresponding to the second model part, the pixels whose display positions are the same as the display positions stored in the buffer; and rendering the first model part after the rendering of the second model part is completed.
  • optionally, culling the pixels whose display positions are the same as the display positions stored in the buffer further includes: performing back-face culling rendering on the other sub-models.
  • optionally, the target sub-model is, among the sub-models corresponding to its coverage area, the sub-model whose level number is the maximum level number.
  • in the storage medium embodiment, by determining the target sub-model of the model to be rendered, the display positions of the front-face pixels corresponding to the target sub-model are stored into the buffer when the target sub-model is rendered.
  • when the other sub-models are rendered, the pixels whose display positions are the same as the display positions stored in the buffer are culled; this ensures that the parts of the other sub-models occluded by the target sub-model are not rendered and displayed, and solves the model clipping problem simply and efficiently.
  • embodiments of the present disclosure may be provided as methods, apparatuses, or computer program products. Accordingly, embodiments of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • embodiments of the present disclosure are described with reference to flowcharts and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the present disclosure. It should be understood that each procedure and/or block in the flowcharts and/or block diagrams, and combinations of procedures and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions may be provided to a general-purpose computer, a special-purpose computer, an embedded processor, or a processor of other programmable data processing terminal equipment to produce a machine, such that the instructions executed by the computer or the processor of the other programmable data processing terminal equipment produce means for realizing the functions specified in one or more procedures of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing terminal to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means, the The instruction means implements the functions specified in one or more procedures of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A model anti-penetration method and apparatus, an electronic device, and a storage medium. The model anti-penetration method comprises: determining a target sub-model of a model to be rendered (101); rendering the target sub-model, and storing the display positions of the front-face pixels corresponding to the target sub-model into a buffer (102); and, when rendering the other sub-models of the model to be rendered except the target sub-model, culling, among the pixels corresponding to the other sub-models, the pixels whose display positions are the same as the display positions stored in the buffer (103). This ensures that the parts of the other sub-models occluded by the target sub-model are not rendered and displayed, and solves the model clipping problem simply and efficiently.

Description

Model anti-penetration method and apparatus, electronic device, and storage medium
Cross-Reference to Related Applications
The present disclosure claims priority to the Chinese patent application No. 202210135013.4, filed on February 14, 2022 and entitled "Model anti-penetration method and apparatus, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a model anti-penetration method and apparatus, an electronic device, and a storage medium.
Background
In 3D games, the inner clothes of a character model often clip through the outer clothes, or the character's skin clips through the clothes, which degrades game quality and user experience.
To prevent the clothes of a character model from clipping, there are generally two approaches in the related art. The first approach is to adjust the physical parameters of the clothes and the character's body, and of the outer and inner layers of clothes, so that a reasonable distance is kept between the clothes and the character's body and between the outer and inner layers of clothes. However, adjusting the physical parameters in this way takes a lot of time and is inefficient, and it makes the clothes look stretched, which affects the appearance.
The second approach is to mark, during the model making stage, the inner clothing meshes and character body meshes that are occluded by the outer layer of clothes, and then cull the marked inner clothing meshes and character body meshes during rendering. However, when a model processed in this way moves, obvious glitches tend to appear because the inner meshes occluded by the outer clothes have been culled. For example, when a character wearing a long skirt moves, part of the legs originally hidden inside the skirt should be exposed; since the legs hidden inside the skirt were culled from rendering, the leg appears broken, producing an obvious visual glitch.
Summary
In view of the above problems, the present disclosure is proposed to provide a model anti-penetration method and apparatus, an electronic device, and a storage medium that overcome the above problems or at least partially solve the above problems, including:
A model anti-penetration method, the method comprising:
determining a target sub-model of a model to be rendered, the target sub-model being, among the multiple sub-models that make up the model to be rendered, the sub-model whose level number is a target level number;
rendering the target sub-model, and storing the display positions of the front-face pixels corresponding to the target sub-model into a buffer; and
when rendering the other sub-models of the model to be rendered except the target sub-model, culling, among the pixels corresponding to the other sub-models, the pixels whose display positions are the same as the display positions stored in the buffer.
Optionally, rendering the target sub-model and storing the display positions of the front-face pixels corresponding to the target sub-model into the buffer includes:
performing back-face culling rendering on the target sub-model, and storing the display positions of the front-face pixels corresponding to the target sub-model into the buffer.
Optionally, rendering the target sub-model and storing the display positions of the front-face pixels corresponding to the target sub-model into the buffer includes:
when the target sub-model is composed of an outer mesh and an inner mesh, performing back-face culling rendering on the outer mesh, and storing the display positions of the front-face pixels corresponding to the outer mesh into the buffer; and
rendering the inner mesh.
Optionally, when rendering the other sub-models of the model to be rendered except the target sub-model, culling, among the pixels corresponding to the other sub-models, the pixels whose display positions are the same as the display positions stored in the buffer further includes:
when the other sub-models are composed of two or more sub-models, rendering the two or more sub-models sequentially in descending order of their levels.
Optionally, rendering the two or more sub-models sequentially in descending order of their levels further includes:
when the rendering of the sub-model at the current level is completed, storing the display positions of the non-culled pixels corresponding to the sub-model at the current level into the buffer.
Optionally, the other sub-models are composed of a first model part and a second model part, wherein the display positions of the pixels corresponding to the first model part are always different from the display positions stored in the buffer;
when rendering the other sub-models of the model to be rendered except the target sub-model, culling, among the pixels corresponding to the other sub-models, the pixels whose display positions are the same as the display positions stored in the buffer includes:
when rendering the second model part, culling, among the pixels corresponding to the second model part, the pixels whose display positions are the same as the display positions stored in the buffer; and
rendering the first model part after the rendering of the second model part is completed.
Optionally, when rendering the other sub-models of the model to be rendered except the target sub-model, culling, among the pixels corresponding to the other sub-models, the pixels whose display positions are the same as the display positions stored in the buffer further includes:
performing back-face culling rendering on the other sub-models.
Optionally, the target sub-model is, among the sub-models corresponding to its coverage area, the sub-model whose level number is the maximum level number.
A model anti-penetration apparatus, the apparatus comprising:
a first determining module, configured to determine a target sub-model of a model to be rendered, the target sub-model being, among the multiple sub-models that make up the model to be rendered, the sub-model whose level number is a target level number;
a first rendering module, configured to render the target sub-model and store the display positions of the front-face pixels corresponding to the target sub-model into a buffer; and
a second rendering module, configured to, when rendering the other sub-models of the model to be rendered except the target sub-model, cull, among the pixels corresponding to the other sub-models, the pixels whose display positions are the same as the display positions stored in the buffer.
Optionally, the first rendering module includes:
a first back-face culling rendering module, configured to perform back-face culling rendering on the target sub-model and store the display positions of the front-face pixels corresponding to the target sub-model into the buffer.
Optionally, the first rendering module includes:
an outer-layer rendering module, configured to, when the target sub-model is composed of an outer mesh and an inner mesh, perform back-face culling rendering on the outer mesh and store the display positions of the front-face pixels corresponding to the outer mesh into the buffer; and
an inner-layer rendering module, configured to render the inner mesh.
Optionally, the second rendering module includes:
a sequential rendering module, configured to, when the other sub-models are composed of two or more sub-models, render the two or more sub-models sequentially in descending order of their levels.
Optionally, the sequential rendering module is specifically configured to, when the rendering of the sub-model at the current level is completed, store the display positions of the non-culled pixels corresponding to the sub-model at the current level into the buffer.
Optionally, the other sub-models are composed of a first model part and a second model part, wherein the display positions of the pixels corresponding to the first model part are always different from the display positions stored in the buffer; and the second rendering module includes:
a second-model-part rendering module, configured to, when rendering the second model part, cull, among the pixels corresponding to the second model part, the pixels whose display positions are the same as the display positions stored in the buffer; and
a first-model-part rendering module, configured to render the first model part after the rendering of the second model part is completed.
Optionally, the second rendering module further includes:
a second back-face culling rendering module, configured to perform back-face culling rendering on the other sub-models.
Optionally, the target sub-model is, among the sub-models corresponding to its coverage area, the sub-model whose level number is the maximum level number.
An electronic device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the model anti-penetration method described above.
A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the model anti-penetration method described above.
The present disclosure has the following advantages:
In the embodiments of the present disclosure, a target sub-model of a model to be rendered is determined; when the target sub-model is rendered, the display positions of the front-face pixels corresponding to the target sub-model are stored into a buffer; and when the other sub-models of the model to be rendered except the target sub-model are rendered, the pixels, among the pixels corresponding to the other sub-models, whose display positions are the same as the display positions stored in the buffer are culled. This ensures that the parts of the other sub-models occluded by the target sub-model are not rendered and displayed, and solves the model clipping problem simply and efficiently.
Brief Description of the Drawings
To explain the technical solutions of the present disclosure more clearly, the drawings needed in the description of the present disclosure are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present disclosure, and a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flow chart of the steps of a model anti-penetration method according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of the steps of a first processing stage in a model anti-penetration method according to an example of the present disclosure;
FIG. 3 is a flow chart of the steps of a second processing stage in a model anti-penetration method according to an example of the present disclosure;
FIG. 4 is a structural block diagram of a model anti-penetration apparatus according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To make the above objects, features, and advantages of the present disclosure more comprehensible, the present disclosure is described in further detail below with reference to the drawings and specific embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
In 3D games, to simulate the dynamic effect of real clothes, physical methods such as a cloth system or rigid bodies driving bones are usually used to drive the motion of clothing meshes. Games have strict requirements on per-frame running time: for a game running at 60 frames per second, each frame has 16.6 ms, and even less time is allocated to physical simulation. In such a short time it is difficult to perform accurate physical simulation, so the inner clothes often clip through the outer clothes and the character's skin clips through the clothes.
There are generally two ways in the related art to prevent model clipping. The first is to adjust the physical parameters of the clothes and the character's body, and of the outer and inner layers of clothes, so that a reasonable distance is kept between the clothes and the character's body and between the outer and inner layers of clothes. However, adjusting the physical parameters in this way takes art producers a lot of time, is inefficient, and makes the clothes look stretched, so the overall appearance of the model becomes unrealistic and the physical effect is severely degraded.
The second way is to mark, during the model making stage, the inner clothing meshes and character body meshes that are occluded by the outer layer of clothes, and then cull the marked inner clothing meshes and character body meshes during rendering. This approach places high demands on selecting the marked meshes: if the marked range is too small, that is, some meshes that should be marked are not marked, clipping will still occur; if the marked range is too large, that is, some meshes that are not occluded by the outer clothes are marked, the rendered model will appear broken.
In view of this, an embodiment of the present disclosure provides a model anti-penetration method to overcome the defects of the related art.
The model anti-penetration method in one of the embodiments of the present disclosure can run on a local terminal device or on a server. When the model anti-penetration method runs on a server, it can be implemented and executed based on a cloud interaction system, where the cloud interaction system includes a server and a client device.
In an optional embodiment, various cloud applications, such as cloud games, can run under the cloud interaction system. Taking cloud games as an example, a cloud game is a game mode based on cloud computing. In the running mode of a cloud game, the running body of the game program and the presenting body of the game picture are separated: the storage and running of the model anti-penetration method are completed on the cloud game server, and the client device is used for receiving and sending data and presenting the game picture. For example, the client device can be a display device with a data transmission function close to the user side, such as a first terminal device, a television, a computer, or a palmtop computer, but the device performing the model anti-penetration method is the cloud game server in the cloud. When playing a game, the player operates the client device to send operation instructions to the cloud game server; the cloud game server runs the game according to the operation instructions, encodes and compresses the game picture and other data, and returns them to the client device through the network; finally, the client device decodes and outputs the game picture.
In an optional embodiment, taking a game as an example, the local terminal device stores the game program and is used to present the game picture. The local terminal device is used to interact with the player through a graphical user interface, that is, the game program is conventionally downloaded, installed, and run on an electronic device. The local terminal device may provide the graphical user interface to the player in various ways, for example, by rendering and displaying it on the display screen of the terminal, or by providing it to the player through holographic projection. For example, the local terminal device may include a display screen and a processor, the display screen is used to present the graphical user interface, the graphical user interface includes the game picture, and the processor is used to run the game, generate the graphical user interface, and control the display of the graphical user interface on the display screen.
Referring to FIG. 1, a flow chart of the steps of a model anti-penetration method provided by an embodiment of the present disclosure is shown, which may specifically include the following steps:
Step 101: determine a target sub-model of a model to be rendered;
Step 102: render the target sub-model, and store the display positions of the front-face pixels corresponding to the target sub-model into a buffer;
Step 103: when rendering the other sub-models of the model to be rendered except the target sub-model, cull, among the pixels corresponding to the other sub-models, the pixels whose display positions are the same as the display positions stored in the buffer.
In the embodiments of the present disclosure, the target sub-model of the model to be rendered is determined; when the target sub-model is rendered, the display positions of the front-face pixels corresponding to the target sub-model are stored into the buffer; and when the other sub-models of the model to be rendered except the target sub-model are rendered, the pixels, among the pixels corresponding to the other sub-models, whose display positions are the same as the display positions stored in the buffer are culled. This ensures that the parts of the other sub-models occluded by the target sub-model are not rendered and displayed, and solves the model clipping problem simply and efficiently.
The model anti-penetration method in this exemplary embodiment is further described below.
In step 101, a target sub-model of the model to be rendered is determined.
In the embodiments of the present disclosure, the model to be rendered may be a character model that needs to be displayed in real time while a game application is running, for example, a character model in a game.
A model to be rendered is generally composed of multiple sub-models. Taking a character model as an example, the character model is generally composed of the character's body model and clothing models.
Each sub-model has a corresponding level. According to the level corresponding to each sub-model, the target sub-model corresponding to the target level number can be determined. The target level number is generally the maximum level number, that is, the target sub-model is the outermost sub-model of the model to be rendered. The target level number may be an absolute maximum level number or a relative maximum level number; the two cases are explained by examples below. Generally, art producers can specify, during the model making stage, the respective levels of the multiple sub-models that make up the model to be rendered.
In an example, when determining the level of each sub-model, the art producer may take the center line of the model to be rendered as a reference and determine the level of each sub-model according to the maximum perpendicular distance between each sub-model and the center line.
For example, when the model to be rendered is a character model, its center line is the straight line on which the midline of the character model lies. If the character model is composed of one body model and one clothing model, the perpendicular distance between the body model and the center line is smaller than the perpendicular distance between the clothing model and the center line; therefore, the body model is at the innermost level and the clothing model is at the outermost level, that is, the body model is the first level and the clothing model is the second level. It can be understood that the target sub-model of the model to be rendered is, among the multiple sub-models constituting the model to be rendered, the sub-model with the largest perpendicular distance from the center line of the model to be rendered, that is, the sub-model with the largest level number among the sub-models that make up the model to be rendered.
For example, when the model to be rendered is a character model wearing a vest jacket, long-sleeved underwear, and a skirt, where the vest jacket does not overlap the skirt and the long-sleeved underwear does not overlap the skirt, the character's body model in the model to be rendered is the first level, the long-sleeved underwear model is the second level, the vest jacket model is the third level, and the skirt model is the second level. It can be seen that, among the multiple sub-models of the model to be rendered, the sub-model corresponding to the absolute maximum level number is the vest jacket model; therefore, in this example, the vest jacket model is the target sub-model, and the long-sleeved underwear model, the skirt model, and the body model are the other sub-models.
When there are multiple sub-models with the largest level number, the sub-models corresponding to the maximum level number are all target sub-models. For example, when the model to be rendered is a character model wearing a top and a skirt, where the top and the skirt do not overlap, the character's body model in the model to be rendered is the first level, and the top model and the skirt model are both the second level and both at the maximum level; then the target sub-models are the top model and the skirt model.
In an optional embodiment, the target sub-model is, among the sub-models corresponding to its coverage area, the sub-model whose level number is the maximum level number; that is, the target level number corresponding to the target sub-model is a relative maximum level number. It can be understood that, in this embodiment, according to the area corresponding to each sub-model (that is, its coverage area), it can be determined whether the sub-model is at the largest level within its coverage area, and if so, the sub-model is determined as a target sub-model.
For example, when the model to be rendered is a character model wearing a vest jacket, long-sleeved underwear, and a skirt, where the vest jacket does not overlap the skirt and the long-sleeved underwear does not overlap the skirt, the character's body model in the model to be rendered is the first level, the long-sleeved underwear model is the second level, the vest jacket model is the third level, and the skirt model is the second level. For the skirt model, only the body model is involved in its coverage area, and the body model is the first level, which is smaller than the level number of the skirt model; that is, the skirt model is the sub-model with the largest level number within its coverage area, so the skirt model can be determined as a target sub-model. Similarly, for the vest jacket, its coverage area involves the body model and the long-sleeved underwear model, where the body model is the first level and the long-sleeved underwear model is the second level, both smaller than the level number of the vest jacket model; that is, the vest jacket model is the sub-model with the largest level number within its coverage area, so the vest jacket model can also be determined as a target sub-model. For the long-sleeved underwear model, its coverage area involves the body model and the vest jacket model; since the level number of the long-sleeved underwear model is smaller than that of the vest jacket model, the long-sleeved underwear model is not the sub-model with the largest level number within its coverage area, and therefore it is not a target sub-model. That is, in this example, the target sub-models include the skirt model and the vest jacket model, and the long-sleeved underwear model and the body model belong to the other sub-models.
In step 102, the target sub-model is rendered, and the display positions of the front-facing pixels corresponding to the target sub-model are stored in a buffer.
In the embodiments of the present disclosure, when the model to be rendered is rendered, the target sub-model is rendered first, and while it is rendered, the display positions of its front-facing pixels are stored in a buffer. The front-facing pixels are the pixels corresponding to the front faces of the target sub-model; a front face is generally a face of the target sub-model that is directly visible when the model to be rendered is displayed normally. How a face is determined to be a front face depends on the convention set by the art team when the model was made. For example, the convention may be that a polygon (i.e. a face) whose vertices appear in clockwise order is a front face; in that case, whenever the vertices of a face appear in clockwise order, the face is a front face.
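As an illustration of the winding-order convention mentioned above (and only of that convention; the sign to treat as "front" is configurable in practice), the facing of a triangle can be checked from the signed area of its screen-space projection:

```cpp
struct Vec2 { float x; float y; };

// Returns true if the screen-space triangle (a, b, c) is wound clockwise,
// which under the convention described above marks a front face.
bool isFrontFacing(const Vec2& a, const Vec2& b, const Vec2& c) {
    // Twice the signed area; the sign encodes the winding direction.
    float signedArea = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
    return signedArea > 0.0f;  // clockwise when the y axis points downward; flip the comparison for y-up conventions
}
```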
Storing the display positions of the front-facing pixels of the target sub-model in the buffer provides a reference for the subsequent rendering of the other sub-models; it prevents pixels of the other sub-models from being displayed at the display positions stored in the buffer and thus from penetrating the target sub-model.
In an optional embodiment of the present disclosure, the above rendering of the target sub-model and storing of the display positions of its front-facing pixels in a buffer includes:
performing back-face culling rendering on the target sub-model, and storing the display positions of the front-facing pixels corresponding to the target sub-model in the buffer.
In this embodiment, back-face culling rendering means that, at rendering time, the back faces of the target sub-model are culled and not rendered, and only the front faces of the target sub-model are rendered, which reduces the number of rendered faces and thereby improves rendering efficiency. A back face is generally a face of the target sub-model that cannot be directly seen when the model to be rendered is displayed normally. How a face is determined to be a back face likewise depends on the convention set by the art team when the model was made. For example, if the convention is that a polygon (i.e. a face) whose vertices appear in clockwise order is a front face, then a face whose vertices appear in clockwise order is a front face and, correspondingly, a face whose vertices appear in counterclockwise order is a back face.
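In a concrete renderer, back-face culling is normally a fixed-function switch rather than something implemented by hand. As an illustrative sketch only, and assuming an OpenGL pipeline (the disclosure does not name a specific graphics API), it could be configured as follows:

```cpp
#include <GL/gl.h>

// Minimal sketch: enable back-face culling before drawing the target
// sub-model, assuming the meshes follow the winding convention above.
void setupBackFaceCulling() {
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);   // back faces are discarded; only front faces are rasterized
    glFrontFace(GL_CW);    // treat clockwise-wound faces as front faces, matching the convention above
}
```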
Further, in an optional embodiment of the present disclosure, when the target sub-model represents clothing, the above rendering of the target sub-model and storing of the display positions of its front-facing pixels in a buffer includes:
when the target sub-model is composed of an outer mesh and an inner mesh, performing back-face culling rendering on the outer mesh, and storing the display positions of the front-facing pixels corresponding to the outer mesh in the buffer;
rendering the inner mesh.
When the inside of the clothing needs to be visible, the art production staff, when making a model that represents clothing, use two layers of meshes of exactly the same shape, an inner layer and an outer layer, with a certain thickness between them and without overlap or interweaving, so that the inside of the clothing can be shown in scenes where it needs to be seen.
In the embodiments of the present disclosure, for a target sub-model composed of an outer mesh and an inner mesh, back-face culling rendering may first be performed on the outer mesh to reduce the number of rendered faces, while the inner mesh is rendered directly, so that back-face culling does not cause the inner mesh to be missing when it needs to be shown.
In step 103, when the other sub-models of the model to be rendered, i.e. the sub-models other than the target sub-model, are rendered, those pixels of the other sub-models whose corresponding display positions are the same as the display positions stored in the buffer are culled.
After rendering of the target sub-model is finished, the other sub-models of the model to be rendered are rendered. When the other sub-models are rendered, those of their pixels whose display positions are the same as the display positions stored in the buffer are culled, so that only the target sub-model is displayed in the display area of the target sub-model, which prevents the other sub-models from penetrating the target sub-model.
In a specific implementation, when the other sub-models are rendered, the display position of each pixel of the other sub-models may be compared with the display positions in the buffer. When the display position of a pixel of the other sub-models differs from every display position stored in the buffer, the pixel is rendered and written; when the display position of a pixel of the other sub-models is the same as one of the display positions stored in the buffer, the pixel is not rendered or written, i.e. the pixels of the other sub-models whose display positions coincide with the display positions stored in the buffer are culled.
In an optional embodiment of the present disclosure, the other sub-models may be divided into a first model part and a second model part, wherein the display positions of the pixels of the first model part are always different from the display positions stored in the buffer. In this case, culling, when the other sub-models of the model to be rendered are rendered, those of their pixels whose corresponding display positions are the same as the display positions stored in the buffer may include:
when the second model part is rendered, culling those pixels of the second model part whose corresponding display positions are the same as the display positions stored in the buffer;
after rendering of the second model part is completed, rendering the first model part.
In this embodiment, the art production staff may, based on experience, divide the other sub-models into the first model part and the second model part, ensuring that the display positions of the pixels of the first model part are always different from the display positions stored in the buffer. For example, when the model to be rendered corresponds to a person model wearing a dress, the target sub-model is the dress model and the other sub-model is the character's body model. Since the character's head is never occluded by the dress, the body model may be divided into the head and the remaining parts, where the head is the first model part and the remaining parts are the second model part. Of course, since the character's palms and soles are also never occluded by the dress, the body model may alternatively be divided into the head, palms, soles, and the remaining parts, where the head, palms, and soles form the first model part and the remaining parts form the second model part.
Since the display positions of the pixels of the first model part are always different from the display positions stored in the buffer, the first model part can be rendered directly, which reduces the number of pixels that need to be compared and improves rendering efficiency.
Because, in this embodiment, the display positions of the pixels of the second model part are compared against the display positions stored in the buffer during rendering, and the coinciding pixels are culled, the division into the first and second model parts does not need to be very precise, which lowers the difficulty of the division for the art production staff. As in the above example of the person model wearing a dress, the first model part may contain only the head of the body model, or it may consist of the head, palms, and soles; clearly, the division does not need to be very precise.
In an optional embodiment of the present disclosure, when the model to be rendered is composed of three or more sub-models, i.e. the other sub-models include two or more sub-models, culling, when the other sub-models of the model to be rendered are rendered, those of their pixels whose corresponding display positions are the same as the display positions stored in the buffer may further include:
rendering the two or more sub-models in order of their levels from largest to smallest.
In this embodiment, for a model to be rendered composed of a plurality of sub-models, the sub-models may be rendered in descending order of level.
For example, when the model to be rendered corresponds to a person model wearing a dress with a vest over it, the first level is the character's body model, the second level is the dress model, and the third level is the vest model; the target sub-model of the model to be rendered is therefore the vest model, and the other sub-models are the dress model and the body model, with the dress model's level higher than the body model's. In this case, after the vest model has been rendered, the dress model is rendered; during this rendering, the display positions of the dress model's pixels are compared with the display positions stored in the buffer, and the coinciding pixels of the dress model are culled. After the dress model has been rendered, the body model is rendered; during this rendering, the display positions of the body model's pixels are compared with the display positions stored in the buffer, and the coinciding pixels of the body model are culled. This ensures that the outermost vest model is never penetrated by the inner models.
Further, the above rendering of the two or more sub-models in order of their levels from largest to smallest may further include:
when rendering of the sub-model at the current level is completed, storing in the buffer the display positions of the un-culled pixels corresponding to the sub-model at the current level.
The sub-model at the current level may be the sub-model at any level. While the sub-model at the current level is rendered, the display positions of its pixels are compared with the display positions stored in the buffer; the coinciding pixels are culled, and the display positions that differ are added to the buffer. In this way, when the sub-model at the next level (i.e. a sub-model whose level number is smaller than that of the current level) is rendered, it can be determined whether the display positions of its pixels coincide with the display positions of the pixels of the sub-models that have already been rendered, and the coinciding pixels are culled, preventing the next-level sub-model from penetrating the sub-models already rendered.
Continuing with the person model wearing a dress and a vest, the rendering of the vest model and the dress model is the same as described above and is not repeated here. When rendering of the dress model is completed, the display positions of the dress model's pixels are added to the buffer; the buffer then holds the display area of the vest and the display area where the dress does not overlap the vest. The body model is then rendered, and those of its pixels whose display positions fall within the vest's display area or the dress's non-overlapping display area are culled, preventing the body model from penetrating outside the dress model or the vest model.
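Continuing the earlier illustrative sketch (SubModel, Pixel, key() and writePixel() are the hypothetical definitions introduced after the description of steps 101 to 103, not part of the disclosure), rendering two or more non-target sub-models in descending level order with buffer accumulation could look like this:

```cpp
#include <algorithm>
#include <cstdint>
#include <unordered_set>
#include <vector>

// Render the non-target sub-models from the highest level to the lowest,
// culling each against the buffer and then adding its surviving positions.
void renderOtherSubModels(std::vector<SubModel>& others,
                          std::unordered_set<std::uint64_t>& buffer) {
    std::sort(others.begin(), others.end(),
              [](const SubModel& a, const SubModel& b) { return a.level > b.level; });

    for (SubModel& s : others) {
        std::vector<std::uint64_t> survivors;
        for (const Pixel& p : s.pixels) {
            if (buffer.count(key(p)) != 0) continue;  // occluded by an outer layer: cull
            writePixel(p);
            survivors.push_back(key(p));
        }
        // After the current level finishes, store its un-culled display
        // positions so the next (inner) level is tested against them too.
        buffer.insert(survivors.begin(), survivors.end());
    }
}
```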
Optionally, when the two or more sub-models are rendered in descending order of level, the sub-model at the current level may additionally be divided into an outside-model part and an inside-model part, where "outside" and "inside" are relative to the sub-models that have already been rendered; that is, the sub-model at the current level may be divided into an outside-model part that is never occluded by the already-rendered sub-models, and an inside-model part. In a specific implementation, the division and marking may be done by the art production staff based on experience. The outside-model part of the current-level sub-model can be rendered directly, and when its rendering is completed, the display positions of its pixels are stored in the buffer; the inside-model part of the current-level sub-model must go through the comparison against the display positions in the buffer, and when its rendering is completed, the display positions of its un-culled pixels are stored in the buffer.
Further, in an optional embodiment of the present disclosure, culling, when the other sub-models of the model to be rendered are rendered, those of their pixels whose display positions are the same as the display positions stored in the buffer may further include:
performing back-face culling rendering on the other sub-models.
In this embodiment, back-face culling rendering may be chosen when the other sub-models are rendered, to reduce the number of rendered faces and improve rendering efficiency.
The model anti-penetration method provided by the embodiments of the present disclosure corrects the interpenetrating parts at the rendering level, solving the model interpenetration problem caused by imprecise physics simulation. Compared with the first related-art approach, in which the art production staff adjust physical parameters, the embodiments of the present disclosure do not need to change physical parameters in order to eliminate interpenetration, so better cloth dynamics can be retained and the art production staff are spared a great deal of parameter-tuning time. In addition, compared with the second related-art approach of directly cutting away the model parts that may cause interpenetration, the embodiments of the present disclosure cull interpenetration per pixel at rendering time, which is more precise and does not produce the visual glitches that appear when the cut-away parts need to be displayed. Further, since the culled parts do not need to be marked precisely, the embodiments of the present disclosure can further improve model rendering efficiency, and this operation of not having to mark the culled parts precisely does not impose a heavy workload on the art production staff. The embodiments of the present disclosure can thus correctly handle the interpenetration problem while preserving the physical behavior of the clothing, reduce the art workload, and improve model rendering efficiency.
To help a person skilled in the art understand the solution, the model anti-penetration method provided by the embodiments of the present disclosure is described below with reference to a specific example.
In this example, the model to be rendered corresponds to a person model wearing a dress. The implementation of the model anti-penetration method may include two processing stages: the first processing stage is an offline processing stage, and the second processing stage is a runtime stage. As shown in FIG. 2, the first processing stage may include the following steps:
Step 201: determining the outer mesh of the target sub-model.
The target sub-model in this example is the dress model. When the dress model is composed of an inner mesh and an outer mesh, the outer mesh of the dress model needs to be determined and saved as a separate material group; when the dress model consists of only one layer of mesh, that mesh is the outer mesh and is saved as a separate material group.
Step 202: dividing the other sub-models of the model to be rendered, i.e. the sub-models other than the target sub-model, into a first model part and a second model part, wherein the display positions of the pixels corresponding to the first model part are always different from the display positions stored in the buffer.
In this example, the other sub-model is the body model. The art production staff may, based on experience, divide the body model into a first model part containing the head, palms, and soles, and a remaining second model part, so that running efficiency can be improved in the subsequent runtime stage.
As shown in FIG. 3, the second processing stage of this example may include the following steps:
Step 301: performing back-face culling rendering on the outer mesh of the target sub-model, and storing the display positions of the rendered pixels of the outer mesh in a first stencil buffer.
In this example, the stencil test may be enabled before rendering starts. The stencil test is a per-sample operation provided by the 3D graphics pipeline and executed after the fragment shader. The stencil function is set to Always passes, the stencil reference value is set to 0x48 (0x48 here is only an example and may be any value in the range 0x01-0xff), and the stencil operation is set to Replace. After the stencil test has been configured, the outer mesh is rendered with back-face culling; after rendering of the outer mesh is completed, the display positions of the pixels of the outer mesh are stored in the stencil buffer as the value 0x48.
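As an illustrative sketch of step 301 only, and assuming an OpenGL pipeline (the disclosure describes the stencil test abstractly and does not name a specific API), the configuration above could be written as follows; drawOuterMesh() is a hypothetical draw call for the outer-mesh material group.

```cpp
#include <GL/gl.h>

void drawOuterMesh();   // hypothetical: issues the draw call for the outer-mesh material group

// Sketch of step 301: wherever a fragment of the outer mesh passes, the
// reference value 0x48 is written into the stencil buffer.
void renderOuterMesh() {
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0x48, 0xFF);       // stencil function: Always passes, reference value 0x48
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);  // stencil operation: Replace the stored value when a fragment passes

    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);                        // back-face culling for the outer mesh

    drawOuterMesh();
}
```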
Step 302: rendering the inner mesh of the target sub-model.
When the target sub-model contains an inner mesh, after the outer mesh of the target sub-model has been rendered, the stencil reference value is changed to 0x16 (0x16 here is only an example and may be any value in the range 0x01-0xff other than 0x48), and the inner mesh is then rendered; that is, after rendering of the inner mesh is completed, the display positions of the pixels of the inner mesh are stored in the stencil buffer as the value 0x16.
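Under the same OpenGL assumption as the sketch for step 301, step 302 only changes the stencil reference value before the inner mesh is drawn; drawInnerMesh() is again a hypothetical draw call.

```cpp
void drawInnerMesh();   // hypothetical: issues the draw call for the inner mesh

// Sketch of step 302: the inner mesh records its positions as 0x16 instead
// of 0x48, and is rendered directly, without back-face culling.
void renderInnerMesh() {
    glStencilFunc(GL_ALWAYS, 0x16, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    glDisable(GL_CULL_FACE);
    drawInnerMesh();
}
```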
Step 303: rendering the second model part of the other sub-models.
In this example, after the target sub-model has been rendered, the stencil function needs to be set to Not Equal and the stencil reference value to 0x48 (this 0x48 must match the stencil reference value set in step 301), and the second model part of the other sub-models is then rendered. While the second model part is rendered, it is compared against the display positions in the stencil buffer. Specifically, the stencil buffer stores values corresponding to display positions; during the comparison, if the value at the display position of a pixel equals the value stored in the stencil buffer, the pixel is culled; if the value at the display position of a pixel does not equal the value stored in the stencil buffer, the pixel can be rendered, and the value for that pixel's display position is added to the stencil buffer.
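Still under the same OpenGL assumption, step 303 could be sketched as follows; drawSecondModelPart() is a hypothetical draw call for the second model part.

```cpp
void drawSecondModelPart();   // hypothetical: issues the draw call for the second model part

// Sketch of step 303: fragments of the second model part are discarded
// wherever the stencil buffer already holds 0x48, i.e. wherever the outer
// mesh of the target sub-model is displayed; surviving fragments write
// their own positions into the stencil buffer.
void renderSecondModelPart() {
    glStencilFunc(GL_NOTEQUAL, 0x48, 0xFF);     // Not Equal against the reference value set in step 301
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    drawSecondModelPart();
}
```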
Step 304: rendering the first model part of the other sub-models.
Since the first model part is the part that is never occluded by the outer sub-model, the stencil test may be disabled before the first model part is rendered; that is, the first model part is rendered directly, without special handling.
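Correspondingly, a sketch of step 304 under the same OpenGL assumption, with drawFirstModelPart() as a hypothetical draw call:

```cpp
void drawFirstModelPart();   // hypothetical: issues the draw call for the first model part

// Sketch of step 304: the first model part is never occluded by the target
// sub-model, so the stencil test is simply switched off before drawing it.
void renderFirstModelPart() {
    glDisable(GL_STENCIL_TEST);
    drawFirstModelPart();
}
```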
In this example, the outer mesh of the target sub-model of the model to be rendered can be separated out in the offline stage, and the other sub-models of the model to be rendered can be divided into a first model part that is never occluded by the target sub-model and a remaining second model part. In the runtime stage, the outer mesh of the target sub-model is rendered first, and the values of the display positions of its pixels are stored in the designated stencil buffer; the second model part of the other sub-models is then rendered, and during this rendering the pixels of the second model part whose display-position values equal the values stored in the stencil buffer are culled; finally, the first model part of the other sub-models is rendered. Alternatively, in the runtime stage, the first model part of the other sub-models may be rendered first; the stencil test is then enabled and the outer mesh of the target sub-model is rendered, with the values of the display positions of its pixels stored in the designated stencil buffer; finally, the second model part of the other sub-models is rendered, with the pixels of the second model part whose display-position values equal the values stored in the stencil buffer culled. This effectively prevents model interpenetration and, compared with the related-art approaches, is more efficient and gives better results.
It should be noted that, for simplicity of description, the method embodiments are all expressed as a series of combined actions; however, a person skilled in the art should understand that the embodiments of the present disclosure are not limited by the described order of actions, because according to the embodiments of the present disclosure some steps may be performed in other orders or simultaneously. In addition, a person skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present disclosure.
Referring to FIG. 4, a structural block diagram of an embodiment of a model anti-penetration device of the present disclosure is shown. In this embodiment of the present disclosure, the model anti-penetration device may specifically include the following modules:
a first determining module 401, configured to determine a target sub-model of a model to be rendered, the target sub-model being the sub-model, among the plurality of sub-models that make up the model to be rendered, whose level number is the target level number;
a first rendering module 402, configured to render the target sub-model and store the display positions of the front-facing pixels corresponding to the target sub-model in a buffer;
a second rendering module 403, configured to, when the other sub-models of the model to be rendered, i.e. the sub-models other than the target sub-model, are rendered, cull those pixels of the other sub-models whose corresponding display positions are the same as the display positions stored in the buffer.
In an optional embodiment of the present disclosure, the target sub-model is the sub-model whose level number is the largest among the sub-models corresponding to its coverage area.
In an optional embodiment of the present disclosure, the first rendering module 402 includes:
a first back-face-culling rendering module, configured to perform back-face culling rendering on the target sub-model and store the display positions of the front-facing pixels corresponding to the target sub-model in the buffer.
In an optional embodiment of the present disclosure, the first rendering module 402 includes:
an outer-layer rendering module, configured to, when the target sub-model is composed of an outer mesh and an inner mesh, perform back-face culling rendering on the outer mesh and store the display positions of the front-facing pixels corresponding to the outer mesh in the buffer;
an inner-layer rendering module, configured to render the inner mesh.
In an optional embodiment of the present disclosure, the second rendering module 403 includes:
a sequential rendering module, configured to, when the other sub-models are composed of two or more sub-models, render the two or more sub-models in order of their levels from largest to smallest.
In an optional embodiment of the present disclosure, the sequential rendering module is specifically configured to, when rendering of the sub-model at the current level is completed, store in the buffer the display positions of the un-culled pixels corresponding to the sub-model at the current level.
In an optional embodiment of the present disclosure, the other sub-models are composed of a first model part and a second model part, wherein the display positions of the pixels corresponding to the first model part are always different from the display positions stored in the buffer; the second rendering module 403 includes:
a second-model-part rendering module, configured to, when the second model part is rendered, cull those pixels of the second model part whose corresponding display positions are the same as the display positions stored in the buffer;
a first-model-part rendering module, configured to render the first model part after rendering of the second model part is completed.
In an optional embodiment of the present disclosure, the second rendering module 403 further includes:
a second back-face-culling rendering module, configured to perform back-face culling rendering on the other sub-models.
In the embodiments of the present disclosure, the first determining module 401 determines the target sub-model of the model to be rendered; when the first rendering module 402 renders the target sub-model, the display positions of the front-facing pixels of the target sub-model are stored in a buffer; and when the second rendering module 403 renders the other sub-models of the model to be rendered, those pixels of the other sub-models whose corresponding display positions are the same as the display positions stored in the buffer are culled. This ensures that the parts of the other sub-models occluded by the target sub-model are never rendered and displayed, solving the model interpenetration problem simply and efficiently.
Since the device embodiment is basically similar to the method embodiment, its description is relatively brief; for relevant details, refer to the description of the method embodiment.
An embodiment of the present disclosure further discloses an electronic device. As shown in FIG. 5, the electronic device 500 includes a processor 510, a memory 520, and a computer program stored on the memory 520 and executable on the processor 510; when the computer program is executed by the processor 510, the steps of the model anti-penetration method described above are implemented.
Specifically, when the computer program is executed by the processor, the following steps may be implemented: determining a target sub-model of a model to be rendered, the target sub-model being the sub-model, among the plurality of sub-models that make up the model to be rendered, whose level number is the target level number; rendering the target sub-model, and storing the display positions of the front-facing pixels corresponding to the target sub-model in a buffer; when rendering the other sub-models of the model to be rendered, i.e. the sub-models other than the target sub-model, culling those pixels of the other sub-models whose corresponding display positions are the same as the display positions stored in the buffer.
In an optional embodiment of the present disclosure, the step of rendering the target sub-model and storing the display positions of its front-facing pixels in a buffer includes: performing back-face culling rendering on the target sub-model, and storing the display positions of the front-facing pixels corresponding to the target sub-model in the buffer.
In an optional embodiment of the present disclosure, the step of rendering the target sub-model and storing the display positions of its front-facing pixels in a buffer further includes: when the target sub-model is composed of an outer mesh and an inner mesh, performing back-face culling rendering on the outer mesh, and storing the display positions of the front-facing pixels corresponding to the outer mesh in the buffer; rendering the inner mesh.
In an optional embodiment of the present disclosure, the step of culling, when the other sub-models of the model to be rendered are rendered, those of their pixels whose display positions are the same as the display positions stored in the buffer further includes: when the other sub-models are composed of two or more sub-models, rendering the two or more sub-models in order of their levels from largest to smallest.
In an optional embodiment of the present disclosure, the step of rendering the two or more sub-models in order of their levels from largest to smallest further includes: when rendering of the sub-model at the current level is completed, storing in the buffer the display positions of the un-culled pixels corresponding to the sub-model at the current level.
In an optional embodiment of the present disclosure, the other sub-models are composed of a first model part and a second model part, wherein the display positions of the pixels corresponding to the first model part are always different from the display positions stored in the buffer; the step of culling, when the other sub-models of the model to be rendered are rendered, those of their pixels whose corresponding display positions are the same as the display positions stored in the buffer includes: when the second model part is rendered, culling those pixels of the second model part whose corresponding display positions are the same as the display positions stored in the buffer; after rendering of the second model part is completed, rendering the first model part.
In an optional embodiment of the present disclosure, the step of culling, when the other sub-models of the model to be rendered are rendered, those of their pixels whose corresponding display positions are the same as the display positions stored in the buffer further includes: performing back-face culling rendering on the other sub-models.
In an optional embodiment of the present disclosure, the target sub-model is the sub-model whose level number is the largest among the sub-models corresponding to its coverage area.
In the above embodiment, a target sub-model of the model to be rendered is determined; when the target sub-model is rendered, the display positions of its front-facing pixels are stored in a buffer; and when the other sub-models of the model to be rendered are rendered, those of their pixels whose corresponding display positions are the same as the display positions stored in the buffer are culled. This ensures that the parts of the other sub-models occluded by the target sub-model are never rendered and displayed, solving the model interpenetration problem simply and efficiently.
An embodiment of the present disclosure further discloses a computer-readable storage medium, on which a computer program is stored; when the computer program is executed by a processor, the steps of the model anti-penetration method described above are implemented.
Specifically, when the computer program stored on the computer-readable storage medium is executed by a processor, the following steps may be implemented: determining a target sub-model of a model to be rendered, the target sub-model being the sub-model, among the plurality of sub-models that make up the model to be rendered, whose level number is the target level number; rendering the target sub-model, and storing the display positions of the front-facing pixels corresponding to the target sub-model in a buffer; when rendering the other sub-models of the model to be rendered, i.e. the sub-models other than the target sub-model, culling those pixels of the other sub-models whose corresponding display positions are the same as the display positions stored in the buffer.
In an optional embodiment of the present disclosure, the step of rendering the target sub-model and storing the display positions of its front-facing pixels in a buffer includes: performing back-face culling rendering on the target sub-model, and storing the display positions of the front-facing pixels corresponding to the target sub-model in the buffer.
In an optional embodiment of the present disclosure, the step of rendering the target sub-model and storing the display positions of its front-facing pixels in a buffer further includes: when the target sub-model is composed of an outer mesh and an inner mesh, performing back-face culling rendering on the outer mesh, and storing the display positions of the front-facing pixels corresponding to the outer mesh in the buffer; rendering the inner mesh.
In an optional embodiment of the present disclosure, the step of culling, when the other sub-models of the model to be rendered are rendered, those of their pixels whose display positions are the same as the display positions stored in the buffer further includes: when the other sub-models are composed of two or more sub-models, rendering the two or more sub-models in order of their levels from largest to smallest.
In an optional embodiment of the present disclosure, the step of rendering the two or more sub-models in order of their levels from largest to smallest further includes: when rendering of the sub-model at the current level is completed, storing in the buffer the display positions of the un-culled pixels corresponding to the sub-model at the current level.
In an optional embodiment of the present disclosure, the other sub-models are composed of a first model part and a second model part, wherein the display positions of the pixels corresponding to the first model part are always different from the display positions stored in the buffer; the step of culling, when the other sub-models of the model to be rendered are rendered, those of their pixels whose corresponding display positions are the same as the display positions stored in the buffer includes: when the second model part is rendered, culling those pixels of the second model part whose corresponding display positions are the same as the display positions stored in the buffer; after rendering of the second model part is completed, rendering the first model part.
In an optional embodiment of the present disclosure, the step of culling, when the other sub-models of the model to be rendered are rendered, those of their pixels whose corresponding display positions are the same as the display positions stored in the buffer further includes: performing back-face culling rendering on the other sub-models.
In an optional embodiment of the present disclosure, the target sub-model is the sub-model whose level number is the largest among the sub-models corresponding to its coverage area.
In the above embodiment, a target sub-model of the model to be rendered is determined; when the target sub-model is rendered, the display positions of its front-facing pixels are stored in a buffer; and when the other sub-models of the model to be rendered are rendered, those of their pixels whose corresponding display positions are the same as the display positions stored in the buffer are culled. This ensures that the parts of the other sub-models occluded by the target sub-model are never rendered and displayed, solving the model interpenetration problem simply and efficiently.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the identical or similar parts between the embodiments, reference may be made to one another.
Those skilled in the art should understand that the embodiments of the present disclosure may be provided as a method, a device, or a computer program product. Therefore, the embodiments of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The embodiments of the present disclosure are described with reference to flowcharts and/or block diagrams of the method, terminal device (system), and computer program product according to the embodiments of the present disclosure. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data-processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data-processing terminal device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data-processing terminal device to work in a particular way, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data-processing terminal device, so that a series of operation steps are performed on the computer or other programmable terminal device to produce computer-implemented processing, and the instructions executed on the computer or other programmable terminal device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the embodiments of the present disclosure have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the embodiments of the present disclosure.
Finally, it should also be noted that in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or terminal device that includes that element.
The model anti-penetration method and device, electronic device, and storage medium provided by the present disclosure have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present disclosure, and the descriptions of the above embodiments are only intended to help understand the method of the present disclosure and its core ideas. Meanwhile, for a person of ordinary skill in the art, changes can be made to the specific implementations and the application scope according to the ideas of the present disclosure. In summary, the content of this specification should not be construed as limiting the present disclosure.

Claims (11)

  1. A model anti-penetration method, wherein the method comprises:
    determining a target sub-model of a model to be rendered, the target sub-model being the sub-model, among a plurality of sub-models that make up the model to be rendered, whose level number is a target level number;
    rendering the target sub-model, and storing display positions of front-facing pixels corresponding to the target sub-model in a buffer;
    when rendering other sub-models of the model to be rendered other than the target sub-model, culling those pixels corresponding to the other sub-models whose corresponding display positions are the same as the display positions stored in the buffer.
  2. The method according to claim 1, wherein the rendering the target sub-model and storing display positions of front-facing pixels corresponding to the target sub-model in a buffer comprises:
    performing back-face culling rendering on the target sub-model, and storing the display positions of the front-facing pixels corresponding to the target sub-model in the buffer.
  3. The method according to claim 1, wherein the rendering the target sub-model and storing display positions of front-facing pixels corresponding to the target sub-model in a buffer comprises:
    when the target sub-model is composed of an outer mesh and an inner mesh, performing back-face culling rendering on the outer mesh, and storing display positions of front-facing pixels corresponding to the outer mesh in the buffer;
    rendering the inner mesh.
  4. The method according to claim 1, wherein the culling, when rendering other sub-models of the model to be rendered other than the target sub-model, those pixels corresponding to the other sub-models whose display positions are the same as the display positions stored in the buffer further comprises:
    when the other sub-models are composed of two or more sub-models, rendering the two or more sub-models in order of their levels from largest to smallest.
  5. The method according to claim 4, wherein the rendering the two or more sub-models in order of their levels from largest to smallest further comprises:
    when rendering of the sub-model at a current level is completed, storing in the buffer the display positions of the un-culled pixels corresponding to the sub-model at the current level.
  6. The method according to claim 1, wherein the other sub-models are composed of a first model part and a second model part, wherein the display positions of the pixels corresponding to the first model part are always different from the display positions stored in the buffer; the culling, when rendering other sub-models of the model to be rendered other than the target sub-model, those pixels corresponding to the other sub-models whose corresponding display positions are the same as the display positions stored in the buffer comprises: when rendering the second model part, culling those pixels corresponding to the second model part whose corresponding display positions are the same as the display positions stored in the buffer; after rendering of the second model part is completed, rendering the first model part.
  7. The method according to any one of claims 1 to 6, wherein the culling, when rendering other sub-models of the model to be rendered other than the target sub-model, those pixels corresponding to the other sub-models whose corresponding display positions are the same as the display positions stored in the buffer further comprises:
    performing back-face culling rendering on the other sub-models.
  8. The method according to claim 1, wherein the target sub-model is the sub-model whose level number is the largest among the sub-models corresponding to its coverage area.
  9. A model anti-penetration rendering device, wherein the device comprises: a first determining module, configured to determine a target sub-model of a model to be rendered, the target sub-model being the sub-model, among a plurality of sub-models that make up the model to be rendered, whose level number is a target level number; a first rendering module, configured to render the target sub-model and store display positions of front-facing pixels corresponding to the target sub-model in a buffer; and a second rendering module, configured to, when rendering other sub-models of the model to be rendered other than the target sub-model, cull those pixels corresponding to the other sub-models whose corresponding display positions are the same as the display positions stored in the buffer.
  10. An electronic device, wherein the electronic device comprises a processor, a memory, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the model anti-penetration method according to any one of claims 1 to 8.
  11. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, the computer program, when executed by a processor, implementing the steps of the model anti-penetration method according to any one of claims 1 to 8.
PCT/CN2022/099807 2022-02-14 2022-06-20 模型防穿插方法及装置、电子设备、存储介质 WO2023151211A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210135013.4A CN114470766A (zh) 2022-02-14 2022-02-14 模型防穿插方法及装置、电子设备、存储介质
CN202210135013.4 2022-02-14

Publications (1)

Publication Number Publication Date
WO2023151211A1 true WO2023151211A1 (zh) 2023-08-17

Family

ID=81480646

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/099807 WO2023151211A1 (zh) 2022-02-14 2022-06-20 模型防穿插方法及装置、电子设备、存储介质

Country Status (2)

Country Link
CN (1) CN114470766A (zh)
WO (1) WO2023151211A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114470766A (zh) * 2022-02-14 2022-05-13 网易(杭州)网络有限公司 模型防穿插方法及装置、电子设备、存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070873A (zh) * 2020-08-26 2020-12-11 完美世界(北京)软件科技发展有限公司 一种模型的渲染方法和装置
US20210118225A1 (en) * 2018-04-12 2021-04-22 Netease (Hangzhou) Network Co.,Ltd. Method and Apparatus for Rendering Game Image
CN112933599A (zh) * 2021-04-08 2021-06-11 腾讯科技(深圳)有限公司 三维模型渲染方法、装置、设备及存储介质
CN113498532A (zh) * 2020-01-21 2021-10-12 京东方科技集团股份有限公司 显示处理方法、显示处理装置、电子设备及存储介质
CN114470766A (zh) * 2022-02-14 2022-05-13 网易(杭州)网络有限公司 模型防穿插方法及装置、电子设备、存储介质


Also Published As

Publication number Publication date
CN114470766A (zh) 2022-05-13

Similar Documents

Publication Publication Date Title
CN106575445B (zh) 毛皮虚拟化身动画
CN108479067A (zh) 游戏画面的渲染方法和装置
CN111429557A (zh) 一种毛发生成方法、毛发生成装置及可读存储介质
CN110766776A (zh) 生成表情动画的方法及装置
JP7561770B2 (ja) アニメキャラクタの変更
CN106548392B (zh) 一种基于webGL技术的虚拟试衣实现方法
CN111324334B (zh) 一种基于叙事油画作品开发虚拟现实体验系统的设计方法
CN112967367B (zh) 水波特效生成方法及装置、存储介质、计算机设备
WO2023151211A1 (zh) 模型防穿插方法及装置、电子设备、存储介质
US20170124753A1 (en) Producing cut-out meshes for generating texture maps for three-dimensional surfaces
US11645805B2 (en) Animated faces using texture manipulation
JP2019527899A (ja) 仮想深度を用いて3d相互環境を生成するためのシステム及び方法
CN111803942A (zh) 一种软阴影生成方法、装置、电子设备和存储介质
Su The application of 3D technology in video games
CN114299200A (zh) 布料动画处理方法及装置、电子设备、存储介质
Queiroz et al. A framework for generic facial expression transfer
CN106803278B (zh) 一种虚拟人物半透明分层排序方法及系统
CN110782529B (zh) 一种基于三维人脸实现眼球转动效果的方法和设备
Tian et al. Research on Visual Design of Computer 3D Simulation Special Effects Technology in the Shaping of Sci-Fi Animation Characters
CN113509731B (zh) 流体模型处理方法及装置、电子设备、存储介质
KR20050080334A (ko) 멀티 텍스쳐링 방법 및 이를 기록한 기록매체
Luque The cel shading technique
Kolivand et al. Simulated real-time soft shadow in mixed reality using fuzzy logic
Harvey et al. Transcending dimensions: 3D to 2D and 2D to 3D
Szabó et al. Depth maps and other techniques of stereoscopy in 3D scene visualization

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22925570

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE