CN114155334A - Scene rendering method and device, computer equipment and storage medium - Google Patents

Scene rendering method and device, computer equipment and storage medium

Info

Publication number
CN114155334A
CN114155334A (application number CN202111444522.7A)
Authority
CN
China
Prior art keywords
rendered
rendering
attribute information
sub
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111444522.7A
Other languages
Chinese (zh)
Inventor
Chen Xiaowei (陈晓威)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202111444522.7A priority Critical patent/CN114155334A/en
Publication of CN114155334A publication Critical patent/CN114155334A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Image Generation (AREA)

Abstract

The present disclosure provides a scene rendering method, apparatus, computer device, and storage medium, wherein the method comprises: responding to a rendering request under a delayed rendering scene, and determining a region to be rendered; dividing the region to be rendered into a plurality of subregions to be rendered, respectively sampling the divided subregions to be rendered, and determining a target pixel point in each subregion to be rendered; determining rendering attribute information of target pixel points in the multiple sub-areas to be rendered, and determining rendering attribute information of other pixel points except the target pixel points in each sub-area to be rendered based on the rendering attribute information of the target pixel points in each sub-area to be rendered; and for any sub-area to be rendered, performing delayed rendering on the area to be rendered based on rendering attribute information of each pixel point in the sub-area to be rendered.

Description

Scene rendering method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a scene rendering method and apparatus, a computer device, and a storage medium.
Background
In scenarios such as mobile games and the metaverse, a terminal device can respond to rendering requests in different rendering scenes and perform scene rendering according to a preset rendering pipeline, thereby providing the user with an immersive experience.
In the related art, as scene elements are produced with ever finer detail, scene rendering consumes more and more computing resources. This slows scene loading on terminal devices with weaker performance and, in turn, degrades the user's experience.
Disclosure of Invention
The embodiment of the disclosure at least provides a scene rendering method, a scene rendering device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a scene rendering method, applied to a delayed rendering scene under a target operating system platform, including:
responding to a rendering request under a delayed rendering scene, and determining a region to be rendered;
dividing the region to be rendered into a plurality of subregions to be rendered, respectively sampling the divided subregions to be rendered, and determining a target pixel point in each subregion to be rendered;
determining rendering attribute information of target pixel points in the multiple sub-areas to be rendered, and determining rendering attribute information of other pixel points except the target pixel points in each sub-area to be rendered based on the rendering attribute information of the target pixel points in each sub-area to be rendered;
and for any sub-area to be rendered, performing delayed rendering on the area to be rendered based on rendering attribute information of each pixel point in the sub-area to be rendered.
In a possible embodiment, the dividing the region to be rendered into a plurality of sub-regions to be rendered includes:
dividing the region to be rendered according to the region size of a preset sub-region to be rendered to obtain a plurality of sub-regions to be rendered; or
and dividing the region to be rendered according to the number of the preset regions of the sub-regions to be rendered to obtain a plurality of sub-regions to be rendered.
In a possible implementation manner, the determining rendering attribute information of target pixel points in the multiple sub areas to be rendered includes:
aiming at any target pixel point, based on a preset mapping relation between the type of the rendering request and the rendering attribute information, obtaining the rendering attribute information corresponding to the rendering request from a geometric buffer zone corresponding to the zone to be rendered, and taking the obtained rendering attribute information as the rendering attribute information of the target pixel point; or
and sampling the attribute information mapping corresponding to the rendering attribute information aiming at any target pixel point, and taking the rendering attribute information obtained by sampling as the rendering attribute information corresponding to the target pixel point.
In a possible implementation manner, the determining, based on the rendering attribute information of the target pixel point in each to-be-rendered sub-region, the rendering attribute information of other pixel points in each to-be-rendered sub-region except for the target pixel point includes:
aiming at any one target pixel point, determining a similar pixel point of which the position relation with the target pixel point meets a preset condition;
and taking the rendering attribute information of each target pixel point as the rendering attribute information of the similar pixel point corresponding to each target pixel point.
In a possible embodiment, the preset condition includes that the similar pixel point is adjacent to the target pixel point and/or located at a target orientation relative to the target pixel point.
In a possible implementation manner, the determining, based on the rendering attribute information of the target pixel point in each to-be-rendered sub-region, the rendering attribute information of other pixel points in each to-be-rendered sub-region except for the target pixel point includes:
and aiming at any other pixel point except the target pixel point, taking the rendering attribute information of the target pixel point closest to the pixel point as the rendering attribute information of the pixel point.
In a possible implementation manner, the performing delayed rendering on the region to be rendered based on the rendering attribute information of each pixel point in the sub-region to be rendered includes:
determining target scene rendering data corresponding to the sub-region to be rendered based on rendering attribute information of each pixel point in the sub-region to be rendered;
and performing delayed rendering on the region to be rendered based on the target scene rendering data.
In a second aspect, an embodiment of the present disclosure further provides a scene rendering apparatus, including:
the first determining module is used for responding to a rendering request under a delayed rendering scene and determining a region to be rendered;
the second determining module is used for dividing the region to be rendered into a plurality of sub-regions to be rendered, respectively sampling the divided sub-regions to be rendered, and determining a target pixel point in each sub-region to be rendered;
a third determining module, configured to determine rendering attribute information of target pixel points in the multiple sub-areas to be rendered, and determine, based on the rendering attribute information of the target pixel points in each sub-area to be rendered, rendering attribute information of other pixel points in each sub-area to be rendered except for the target pixel points;
and the delayed rendering module is used for delaying rendering of the to-be-rendered area based on the rendering attribute information of each pixel point in the to-be-rendered subarea aiming at any one of the to-be-rendered subareas.
In a possible implementation manner, the second determining module, when dividing the region to be rendered into a plurality of sub-regions to be rendered, is configured to:
dividing the region to be rendered according to the region size of a preset sub-region to be rendered to obtain a plurality of sub-regions to be rendered; or
and dividing the region to be rendered according to the number of the preset regions of the sub-regions to be rendered to obtain a plurality of sub-regions to be rendered.
In a possible implementation manner, when determining rendering attribute information of target pixel points in the multiple sub areas to be rendered, the third determining module is configured to:
aiming at any target pixel point, based on a preset mapping relation between the type of the rendering request and the rendering attribute information, obtaining the rendering attribute information corresponding to the rendering request from a geometric buffer zone corresponding to the zone to be rendered, and taking the obtained rendering attribute information as the rendering attribute information of the target pixel point; or
and sampling the attribute information mapping corresponding to the rendering attribute information aiming at any target pixel point, and taking the rendering attribute information obtained by sampling as the rendering attribute information corresponding to the target pixel point.
In a possible implementation manner, when determining rendering attribute information of other pixel points except for the target pixel point in each to-be-rendered sub-region based on rendering attribute information of the target pixel point in each to-be-rendered sub-region, the third determining module is configured to:
aiming at any one target pixel point, determining a similar pixel point of which the position relation with the target pixel point meets a preset condition;
and taking the rendering attribute information of each target pixel point as the rendering attribute information of the similar pixel point corresponding to each target pixel point.
In a possible embodiment, the preset condition includes that the similar pixel point is adjacent to the target pixel point and/or located at a target orientation relative to the target pixel point.
In a possible implementation manner, when determining rendering attribute information of other pixel points except for the target pixel point in each to-be-rendered sub-region based on rendering attribute information of the target pixel point in each to-be-rendered sub-region, the third determining module is configured to:
and aiming at any other pixel point except the target pixel point, taking the rendering attribute information of the target pixel point closest to the pixel point as the rendering attribute information of the pixel point.
In a possible implementation manner, the delayed rendering module, when performing delayed rendering on the region to be rendered based on the rendering attribute information of each pixel point in the sub-region to be rendered, is configured to:
determining target scene rendering data corresponding to the sub-region to be rendered based on rendering attribute information of each pixel point in the sub-region to be rendered;
and performing delayed rendering on the region to be rendered based on the target scene rendering data.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the first aspect or any possible implementation of the first aspect.
According to the scene rendering method and apparatus, the computer device, and the storage medium provided above, a target pixel point in each sub-region to be rendered is determined by separately sampling the sub-regions obtained by dividing the region to be rendered. Rendering attribute information of the target pixel points in the multiple sub-regions is determined, and the rendering attribute information of the other pixel points in each sub-region is derived from that of its target pixel points. In this way, the rendering attribute information of every pixel point can be obtained from the attribute information of the target pixel points alone, so it does not need to be calculated pixel by pixel. While the rendering effect is preserved, this saves the computing resources and time cost of determining rendering attribute information for each pixel point, increases the scene rendering speed, and improves the user's experience.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. The following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 illustrates a flowchart of a scene rendering method provided by an embodiment of the present disclosure;
fig. 2a is a schematic diagram illustrating interval sampling in a scene rendering method provided by an embodiment of the present disclosure;
fig. 2b illustrates a schematic diagram of determining similar pixel points in the scene rendering method provided by the embodiment of the present disclosure;
fig. 2c is a schematic diagram illustrating that rendering attribute information of a pixel point is determined in the scene rendering method provided by the embodiment of the disclosure;
fig. 3 is a flowchart illustrating a specific method for determining rendering attribute information of other pixel points except for a target pixel point in each sub-region to be rendered in the scene rendering method provided in the embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating an architecture of a scene rendering apparatus provided in an embodiment of the present disclosure;
fig. 5 shows a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C.
Research has shown that as scene elements are produced with ever finer detail, scene rendering consumes more and more computing resources, which slows scene loading on terminal devices with weaker performance and, in turn, degrades the user's experience.
Based on this research, the present disclosure provides a scene rendering method, apparatus, computer device, and storage medium. The sub-regions to be rendered obtained by dividing the region to be rendered are separately sampled, and a target pixel point in each sub-region is determined. Rendering attribute information of the target pixel points in the multiple sub-regions is determined, and the rendering attribute information of the other pixel points in each sub-region is derived from that of its target pixel points. In this way, the rendering attribute information of every pixel point can be obtained from the attribute information of the target pixel points alone, so it does not need to be calculated pixel by pixel. While the rendering effect is preserved, this saves the computing resources and time cost of determining rendering attribute information for each pixel point, increases the scene rendering speed, and improves the user's experience.
To facilitate understanding of the present embodiment, the scene rendering method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the scene rendering method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, for example a terminal device with a display function, such as a smartphone, a tablet computer, or a smart wearable device. In some possible implementations, the scene rendering method may be implemented by a processor calling computer-readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a scene rendering method provided in the embodiment of the present disclosure is shown, where the method includes S101 to S104, where:
s101: and responding to a rendering request under the delayed rendering scene, and determining a region to be rendered.
S102: and dividing the region to be rendered into a plurality of subregions to be rendered, respectively sampling the plurality of subregions to be rendered after division, and determining target pixel points in each subregion to be rendered.
S103: and determining rendering attribute information of target pixel points in the multiple sub-areas to be rendered, and determining rendering attribute information of other pixel points except the target pixel points in each sub-area to be rendered based on the rendering attribute information of the target pixel points in each sub-area to be rendered.
S104: and for any sub-area to be rendered, performing delayed rendering on the area to be rendered based on rendering attribute information of each pixel point in the sub-area to be rendered.
The following is a detailed description of the above steps.
Regarding S101, deferred rendering is a rendering technique that postpones calculations such as lighting to a second step of the rendering process; the target operating system platform may be, for example, the iOS platform.
Specifically, the delayed rendering may include the following two steps:
First, the scene of the region to be rendered is rendered, and the rendering attribute information (such as normal vectors and reflection coefficients) is stored in a geometry buffer, achieving a preliminary rendering of the region to be rendered. Second, the stored rendering attribute information is read back from the geometry buffer, and rendering content such as pixel colors is calculated from it, so that more refined scene rendering can be performed based on the calculation result and a more realistic rendering effect achieved.
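The two steps above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the per-pixel attribute names, the diffuse-only lighting model, and the dictionary-based "buffers" are all assumptions made for clarity.

```python
def geometry_pass(pixels):
    """Step 1: store rendering attributes (normal vector, reflection
    coefficient) per pixel into a geometry buffer instead of shading
    immediately."""
    gbuffer = {}
    for pos, attrs in pixels.items():
        gbuffer[pos] = {"normal": attrs["normal"],
                        "reflectance": attrs["reflectance"]}
    return gbuffer

def lighting_pass(gbuffer, light_intensity):
    """Step 2: read the stored attributes back from the geometry buffer
    and compute the final pixel color (here: simple diffuse shading
    with a light shining straight down the z axis)."""
    frame = {}
    for pos, attrs in gbuffer.items():
        nx, ny, nz = attrs["normal"]
        # Diffuse term: surfaces facing the light (+z) are lit, others dark.
        frame[pos] = max(0.0, nz) * attrs["reflectance"] * light_intensity
    return frame

pixels = {(0, 0): {"normal": (0.0, 0.0, 1.0), "reflectance": 0.8},
          (1, 0): {"normal": (0.0, 0.0, -1.0), "reflectance": 0.8}}
frame = lighting_pass(geometry_pass(pixels), light_intensity=1.0)
```

The point of the split is that expensive shading runs only once per visible pixel, using attributes already resolved in the first pass.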
In practical applications, the rendering request may be generated when a virtual character controlled by the user enters a preset target area; the region to be rendered within the target area may then be determined based on the user's position information in that area, and scene rendering is performed on the determined region.
In addition, the rendering request may also be initiated by the user side based on a target rendering function; for example, the user side may initiate the rendering request based on a Screen-Space Ambient Occlusion (SSAO) rendering function to show the SSAO rendering map corresponding to the region to be rendered.
S102: and dividing the region to be rendered into a plurality of subregions to be rendered, respectively sampling the plurality of subregions to be rendered after division, and determining target pixel points in each subregion to be rendered.
Here, when the region to be rendered is divided, the following may be performed:
in the method 1, the area to be rendered is divided according to the area size of the preset subarea to be rendered, so that a plurality of subareas to be rendered are obtained.
Here, the preset region size may be 16 pixels × 16 pixels, and the region to be rendered having a region size of 256 pixels × 256 pixels may be divided into 256 sub-regions to be rendered having a region size of 16 pixels × 16 pixels according to the region size of the preset sub-region to be rendered.
And in the mode 2, the region to be rendered is divided according to the preset number of the regions of the sub-regions to be rendered to obtain a plurality of sub-regions to be rendered.
Here, the preset number of regions may be 100, and the region to be rendered having a region size of 1920 pixels × 1080 pixels may be divided into 100 sub-regions to be rendered having a region size of 192 pixels × 108 pixels according to the preset number of sub-regions to be rendered.
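The two division modes above can be sketched as follows; this is an illustrative sketch only, with tiles represented as `(x, y, width, height)` tuples (a representation assumed here, not taken from the disclosure).

```python
def divide_by_size(width, height, tile_w, tile_h):
    """Mode 1: split a region into sub-regions of a preset size."""
    return [(x, y, tile_w, tile_h)
            for y in range(0, height, tile_h)
            for x in range(0, width, tile_w)]

def divide_by_count(width, height, cols, rows):
    """Mode 2: split a region into a preset number of sub-regions."""
    tile_w, tile_h = width // cols, height // rows
    return divide_by_size(width, height, tile_w, tile_h)

# Mode 1 example above: a 256x256 region with 16x16 tiles -> 256 sub-regions.
tiles = divide_by_size(256, 256, 16, 16)
# Mode 2 example above: 1920x1080 split 10x10 -> 100 sub-regions of 192x108.
tiles2 = divide_by_count(1920, 1080, 10, 10)
```

(The sketch assumes the region size is an exact multiple of the tile size or tile count, as in both examples above; a real implementation would also handle remainder tiles at the edges.)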
Therefore, the multiple sub-areas to be rendered are obtained by dividing the area to be rendered, and a corresponding rendering strategy is conveniently set for each sub-area to be rendered, so that scene rendering is carried out under the condition that the whole rendering pipeline of the area to be rendered is not interrupted.
In practical applications, on the iOS target operating system platform, the region to be rendered may be divided based on the tile function of the iOS platform, and each sub-region to be rendered obtained after the division is sampled separately. The tile function can call tile storage space in the graphics processor to perform data processing; this tile storage space is high-speed storage inside the GPU, which achieves the effect of not interrupting the overall rendering pipeline of the region to be rendered.
In a possible implementation manner, when a target pixel point in each to-be-rendered subregion is determined, the partitioned subregions to be rendered may be sampled at intervals, so as to obtain the target pixel point.
For example, the schematic diagram of the interval sampling may be as shown in fig. 2a, each square in fig. 2a represents 1 pixel, one to-be-rendered sub-region includes 16 pixels, a square corresponding to a shaded portion represents a target pixel, other squares represent other pixels except for the target pixel, and 4 target pixels may be sampled from the 16 pixels by performing interval sampling on the to-be-rendered sub-region.
It should be noted that the example of the interval sampling manner provided in the embodiment of the present disclosure is only one possible implementation manner of implementing interval sampling, and other interval sampling manners or other sampling manners may also be adopted in practical applications, which is not limited in the embodiment of the present disclosure.
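One possible interval-sampling scheme consistent with fig. 2a can be sketched as follows. The fixed step of 2 in each direction is an assumption for illustration; as noted above, other sampling manners are equally possible.

```python
def interval_sample(tile_w, tile_h, step=2):
    """Sample a sub-region at a fixed interval, returning the coordinates
    of the target pixel points; the remaining pixels are the 'other'
    pixel points whose attributes will be derived later."""
    return [(x, y)
            for y in range(0, tile_h, step)
            for x in range(0, tile_w, step)]

# A 4x4 sub-region (16 pixels) yields 4 target pixel points, as in fig. 2a.
targets = interval_sample(4, 4)
```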
S103: and determining rendering attribute information of target pixel points in the multiple sub-areas to be rendered, and determining rendering attribute information of other pixel points except the target pixel points in each sub-area to be rendered based on the rendering attribute information of the target pixel points in each sub-area to be rendered.
Here, the rendering attribute information indicates attribute information required to be used when performing delayed rendering, such as position coordinates of a pixel point, a normal vector, texture coordinates, a reflection coefficient (e.g., a diffuse reflection factor), and the like.
Specifically, for any one of the target pixel points, the rendering attribute information of the target pixel point can be determined in the following manner:
mode 1, based on preset mapping relation between rendering request type and rendering attribute information
Here, based on a preset mapping relationship between a rendering request type and rendering attribute information, the rendering attribute information corresponding to the type of the rendering request may be obtained from the geometric buffer corresponding to the region to be rendered, and the obtained rendering attribute information may be used as the rendering attribute information of the target pixel point.
Specifically, a plurality of types of rendering attribute information may be stored in the geometric buffer, and after the rendering request is obtained, the rendering attribute information corresponding to the rendering request may be obtained from the geometric buffer corresponding to the region to be rendered according to the type of the obtained rendering request.
Mode 2, attribute information mapping based on rendering attribute information correspondence
Here, the attribute information map corresponding to the rendering attribute information may be sampled, and the rendering attribute information obtained by sampling may be used as the rendering attribute information corresponding to the target pixel point.
Specifically, an attribute information map is a picture that stores the rendering attribute information; attribute information maps correspond one-to-one with the region to be rendered, and each pixel point in the region to be rendered has corresponding attribute information in the map. For example, depth information may be stored in a depth information map, and the stored depth information can be obtained by sampling that map.
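The two lookup modes above can be sketched as follows. The request-type-to-attribute mapping and the buffer contents are illustrative assumptions, not the mapping defined by the disclosure.

```python
# Assumed mapping from rendering-request type to the attributes it needs.
REQUEST_TO_ATTRS = {"ssao": ["position", "normal", "depth"]}

def attrs_from_gbuffer(gbuffer, request_type, pixel):
    """Mode 1: fetch from the geometry buffer only the attributes that
    the preset mapping associates with this request type."""
    needed = REQUEST_TO_ATTRS[request_type]
    return {k: gbuffer[pixel][k] for k in needed}

def attr_from_map(attr_map, pixel):
    """Mode 2: sample a per-attribute map (e.g. a depth information map)."""
    return attr_map[pixel]

gbuffer = {(0, 0): {"position": (0, 0, 1), "normal": (0, 0, 1),
                    "depth": 0.5, "albedo": (1, 1, 1)}}
depth_map = {(0, 0): 0.5}
ssao_attrs = attrs_from_gbuffer(gbuffer, "ssao", (0, 0))
```

Note how mode 1 leaves attributes the request does not need (here `albedo`) untouched in the buffer.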
In a possible implementation manner, as shown in fig. 3, the rendering attribute information of other pixel points except for the target pixel point in each to-be-rendered sub-region may be determined through the following steps:
s301: and aiming at any one target pixel point, determining similar pixel points of which the position relation with the target pixel point meets a preset condition.
Here, the preset condition includes that the preset condition is located adjacent to and/or on the target position of the target pixel point.
Here, the target orientation may be a right direction, a lower right direction, or the like, which may be set according to a sampling policy at the time of sampling.
Exemplarily, a schematic diagram for determining similar pixel points may be as shown in fig. 2b. Pixel points 1 to 4 are marked 1 to 4 in order, and pixel point 1 is the target pixel point. Its similar pixel points may be pixel point 2, adjacent to and to the right of pixel point 1; pixel point 3, adjacent to and below pixel point 1; and pixel point 4, adjacent to and at the lower right of pixel point 1.
S302: and taking the rendering attribute information of each target pixel point as the rendering attribute information of the similar pixel point corresponding to each target pixel point.
In this way, on the one hand, directly using the rendering attribute information of the target pixel point as that of its similar pixel points saves the computing resources and time cost of determining rendering attribute information for each similar pixel point, increasing the scene rendering speed; on the other hand, because the similar pixel points are located around the target pixel point, using the target pixel point's rendering attribute information for them has little influence on the rendering effect.
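The propagation of S301-S302 can be sketched as follows, assuming (as in fig. 2b) that the similar pixel points are the neighbors to the right, below, and at the lower right of each target pixel point.

```python
def propagate_to_similar(target_attrs, tile_w, tile_h):
    """Copy each target pixel point's rendering attribute information to
    its similar pixel points (right, below, lower-right neighbors)."""
    attrs = dict(target_attrs)
    for (x, y), info in target_attrs.items():
        for dx, dy in ((1, 0), (0, 1), (1, 1)):  # right, below, lower right
            nx, ny = x + dx, y + dy
            if nx < tile_w and ny < tile_h:  # stay inside the sub-region
                attrs[(nx, ny)] = info
    return attrs

# One 2x2 block: the target at (0, 0) fills the other three pixels,
# matching the fig. 2b layout.
filled = propagate_to_similar({(0, 0): {"normal": (0, 0, 1)}}, 2, 2)
```

Combined with interval sampling at step 2, this covers every pixel of the sub-region from one quarter of the attribute lookups.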
In another possible implementation manner, when determining the rendering attribute information of pixel points other than the target pixel point in each sub-region to be rendered, for any such pixel point, the rendering attribute information of the target pixel point closest to that pixel point may also be used as its rendering attribute information.
Specifically, if there are a plurality of target pixel points closest to the pixel point, one target pixel point may be randomly selected from the plurality of target pixel points, and the rendering attribute information of the randomly selected target pixel point is used as the rendering attribute information of the pixel point.
For example, a schematic diagram for determining rendering attribute information may be as shown in fig. 2c, where pixel points 1 to 8 are marked 1 to 8 in order and pixel points 1 and 4 are the target pixel points. The rendering attribute information of pixel point 1, which is the closest target pixel point to pixel points 2, 5, and 6, may be used as the rendering attribute information of each of those pixel points; likewise, the rendering attribute information of pixel point 4, which is the closest target pixel point to pixel points 3, 7, and 8, may be used as the rendering attribute information of each of those pixel points.
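The nearest-target rule with a random tie-break can be sketched as follows. This is an illustrative assumption rather than the patent's actual implementation; squared Euclidean distance and the function name are assumed:

```python
import random

def nearest_target_attrs(pixel, targets, attrs):
    """Use the rendering attributes of the target pixel nearest to `pixel`;
    if several targets are equally near, pick one of them at random."""
    def d2(a, b):  # squared Euclidean distance, sufficient for comparison
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    best = min(d2(pixel, t) for t in targets)
    ties = [t for t in targets if d2(pixel, t) == best]
    return attrs[random.choice(ties)]

# Two target pixels with known attributes; pixel (1, 0) is nearest (0, 0):
attrs = {(0, 0): "attrs of target 1", (3, 0): "attrs of target 4"}
print(nearest_target_attrs((1, 0), list(attrs), attrs))  # attrs of target 1
```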
S104: and for any sub-area to be rendered, performing delayed rendering on the area to be rendered based on rendering attribute information of each pixel point in the sub-area to be rendered.
Here, when delayed rendering is performed, target scene rendering data corresponding to the sub-region to be rendered may be determined based on rendering attribute information of each pixel point in the sub-region to be rendered; and performing delayed rendering on the region to be rendered based on the target scene rendering data.
For example, diffuse reflection illumination calculation may be performed based on the position coordinates, normal vector, and diffuse reflection factor of each pixel point to determine the target scene rendering data corresponding to the illumination, and delayed rendering may then be performed based on the target scene rendering data.
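As a rough sketch of such a diffuse (Lambertian) lighting pass over the G-buffer attributes named above (the point light, its color, and the helper functions are illustrative assumptions, not details from the patent):

```python
def normalize(v):
    # scale v to unit length (returned unchanged if zero-length)
    n = sum(c * c for c in v) ** 0.5
    return tuple(c / n for c in v) if n else v

def diffuse_shade(position, normal, diffuse, light_pos, light_color):
    """Per-pixel N.L diffuse lighting from the attributes listed above:
    position coordinates, normal vector, and diffuse reflection factor."""
    L = normalize(tuple(lp - p for lp, p in zip(light_pos, position)))
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normalize(normal), L)))
    return tuple(d * c * n_dot_l for d, c in zip(diffuse, light_color))

# A surface facing the light head-on receives the full diffuse term:
print(diffuse_shade((0, 0, 0), (0, 0, 1), (1.0, 1.0, 1.0),
                    (0, 0, 5), (1.0, 1.0, 1.0)))  # (1.0, 1.0, 1.0)
```

In a real deferred pipeline this computation would run in a fragment shader over the G-buffer, once per screen pixel, rather than in Python.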
According to the scene rendering method provided by the embodiments of the present disclosure, the plurality of sub-regions to be rendered obtained by dividing the region to be rendered are sampled separately, and a target pixel point in each sub-region to be rendered is determined. The rendering attribute information of the target pixel points in the plurality of sub-regions to be rendered is then determined, and the rendering attribute information of the other pixel points in each sub-region to be rendered is derived from the rendering attribute information of that sub-region's target pixel point. In this way, the rendering attribute information of every pixel point in each sub-region to be rendered can be obtained from the attribute information of the target pixel points alone, so the rendering attribute information does not need to be calculated for each pixel point individually. While the rendering effect is preserved, the computing resources and time cost of determining rendering attribute information for every pixel point are saved, the scene rendering speed is improved, and the user's playing experience is improved.
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, a scene rendering device corresponding to the scene rendering method is also provided in the embodiments of the present disclosure, and as the principle of solving the problem of the device in the embodiments of the present disclosure is similar to the scene rendering method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 4, which is a schematic diagram of the architecture of a scene rendering apparatus provided in an embodiment of the present disclosure, the apparatus includes: a first determining module 401, a second determining module 402, a third determining module 403, and a delayed rendering module 404; wherein:
a first determining module 401, configured to determine, in response to a rendering request in a delayed rendering scene, a region to be rendered;
a second determining module 402, configured to divide the region to be rendered into a plurality of sub regions to be rendered, respectively sample the plurality of divided sub regions to be rendered, and determine a target pixel point in each of the sub regions to be rendered;
a third determining module 403, configured to determine rendering attribute information of target pixel points in the multiple sub-areas to be rendered, and determine, based on the rendering attribute information of the target pixel points in each sub-area to be rendered, rendering attribute information of other pixel points in each sub-area to be rendered except for the target pixel point;
and a delayed rendering module 404, configured to perform delayed rendering on the region to be rendered based on the rendering attribute information of each pixel point in the sub region to be rendered, for any one of the sub regions to be rendered.
In a possible implementation manner, the second determining module 402, when dividing the region to be rendered into a plurality of sub-regions to be rendered, is configured to:
dividing the region to be rendered according to the region size of a preset sub region to be rendered to obtain a plurality of sub regions to be rendered; or,
and dividing the region to be rendered according to the number of the preset regions of the sub-regions to be rendered to obtain a plurality of sub-regions to be rendered.
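Either division rule can be sketched in a few lines. This is a hedged illustration; the row-major tiling order, the clipping of edge tiles, and the function names are assumptions:

```python
import math

def split_by_tile_size(width, height, tile_w, tile_h):
    """Divide a width x height render area into sub-regions of a preset
    size; edge tiles are clipped to the area. Returns (x, y, w, h) tuples."""
    return [(x, y, min(tile_w, width - x), min(tile_h, height - y))
            for y in range(0, height, tile_h)
            for x in range(0, width, tile_w)]

def split_by_tile_count(width, height, cols, rows):
    """Divide the area into a preset number of sub-regions (cols x rows) by
    deriving a tile size and reusing the size-based split; the resulting
    count is approximate when the area does not divide evenly."""
    return split_by_tile_size(width, height,
                              math.ceil(width / cols), math.ceil(height / rows))

print(len(split_by_tile_size(1920, 1080, 480, 270)))  # 16 sub-regions
```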
In a possible implementation manner, the third determining module 403, when determining rendering attribute information of target pixel points in the multiple sub areas to be rendered, is configured to:
aiming at any target pixel point, based on a preset mapping relation between the type of the rendering request and the rendering attribute information, obtaining the rendering attribute information corresponding to the rendering request from a geometric buffer corresponding to the region to be rendered, and taking the obtained rendering attribute information as the rendering attribute information of the target pixel point; or,
and sampling the attribute information mapping corresponding to the rendering attribute information aiming at any target pixel point, and taking the rendering attribute information obtained by sampling as the rendering attribute information corresponding to the target pixel point.
In a possible implementation manner, when determining rendering attribute information of other pixel points except for the target pixel point in each to-be-rendered sub-region based on the rendering attribute information of the target pixel point in each to-be-rendered sub-region, the third determining module 403 is configured to:
aiming at any one target pixel point, determining a similar pixel point of which the position relation with the target pixel point meets a preset condition;
and taking the rendering attribute information of each target pixel point as the rendering attribute information of the similar pixel point corresponding to each target pixel point.
In a possible embodiment, the preset condition includes being located adjacent to the target pixel point and/or at a target orientation of the target pixel point.
In a possible implementation manner, when determining rendering attribute information of other pixel points except for the target pixel point in each to-be-rendered sub-region based on the rendering attribute information of the target pixel point in each to-be-rendered sub-region, the third determining module 403 is configured to:
and aiming at any other pixel point except the target pixel point, taking the rendering attribute information of the target pixel point closest to the pixel point as the rendering attribute information of the pixel point.
In a possible implementation manner, the delayed rendering module 404, when performing delayed rendering on the region to be rendered based on the rendering attribute information of each pixel point in the sub-region to be rendered, is configured to:
determining target scene rendering data corresponding to the sub-region to be rendered based on rendering attribute information of each pixel point in the sub-region to be rendered;
and performing delayed rendering on the region to be rendered based on the target scene rendering data.
The scene rendering device provided by the embodiments of the present disclosure samples, separately, the plurality of sub-regions to be rendered obtained by dividing the region to be rendered, and determines a target pixel point in each sub-region to be rendered. The rendering attribute information of the target pixel points in the plurality of sub-regions to be rendered is then determined, and the rendering attribute information of the other pixel points in each sub-region to be rendered is derived from the rendering attribute information of that sub-region's target pixel point. In this way, the rendering attribute information of every pixel point in each sub-region to be rendered can be obtained from the attribute information of the target pixel points alone, so the rendering attribute information does not need to be calculated for each pixel point individually. While the rendering effect is preserved, the computing resources and time cost of determining rendering attribute information for every pixel point are saved, the scene rendering speed is improved, and the user's playing experience is improved.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, an embodiment of the present disclosure further provides a computer device. Referring to fig. 5, a schematic structural diagram of a computer device 500 provided in an embodiment of the present disclosure includes a processor 501, a memory 502, and a bus 503. The memory 502 is used to store execution instructions and includes an internal memory 5021 and an external memory 5022. The internal memory 5021 temporarily stores operation data in the processor 501 and data exchanged with the external memory 5022, such as a hard disk; the processor 501 exchanges data with the external memory 5022 through the internal memory 5021. When the computer device 500 runs, the processor 501 communicates with the memory 502 through the bus 503, so that the processor 501 executes the following instructions:
responding to a rendering request under a delayed rendering scene, and determining a region to be rendered;
dividing the region to be rendered into a plurality of subregions to be rendered, respectively sampling the divided subregions to be rendered, and determining a target pixel point in each subregion to be rendered;
determining rendering attribute information of target pixel points in the multiple sub-areas to be rendered, and determining rendering attribute information of other pixel points except the target pixel points in each sub-area to be rendered based on the rendering attribute information of the target pixel points in each sub-area to be rendered;
and for any sub-area to be rendered, performing delayed rendering on the area to be rendered based on rendering attribute information of each pixel point in the sub-area to be rendered.
In a possible implementation manner, in the instructions of the processor 501, the dividing the region to be rendered into a plurality of sub-regions to be rendered includes:
dividing the region to be rendered according to the region size of a preset sub region to be rendered to obtain a plurality of sub regions to be rendered; or,
and dividing the region to be rendered according to the number of the preset regions of the sub-regions to be rendered to obtain a plurality of sub-regions to be rendered.
In a possible implementation manner, the determining rendering attribute information of target pixel points in the multiple sub areas to be rendered in the instructions of the processor 501 includes:
aiming at any target pixel point, based on a preset mapping relation between the type of the rendering request and the rendering attribute information, obtaining the rendering attribute information corresponding to the rendering request from a geometric buffer corresponding to the region to be rendered, and taking the obtained rendering attribute information as the rendering attribute information of the target pixel point; or,
and sampling the attribute information mapping corresponding to the rendering attribute information aiming at any target pixel point, and taking the rendering attribute information obtained by sampling as the rendering attribute information corresponding to the target pixel point.
In a possible implementation manner, in the instructions of the processor 501, the determining rendering attribute information of other pixels in each to-be-rendered sub-area except for the target pixel based on the rendering attribute information of the target pixel in each to-be-rendered sub-area includes:
aiming at any one target pixel point, determining a similar pixel point of which the position relation with the target pixel point meets a preset condition;
and taking the rendering attribute information of each target pixel point as the rendering attribute information of the similar pixel point corresponding to each target pixel point.
In a possible implementation manner, in the instructions of the processor 501, the preset condition includes being located adjacent to the target pixel point and/or at a target orientation of the target pixel point.
In a possible implementation manner, in the instructions of the processor 501, the determining rendering attribute information of other pixels in each to-be-rendered sub-area except for the target pixel based on the rendering attribute information of the target pixel in each to-be-rendered sub-area includes:
and aiming at any other pixel point except the target pixel point, taking the rendering attribute information of the target pixel point closest to the pixel point as the rendering attribute information of the pixel point.
In a possible implementation manner, in the instructions of the processor 501, the performing delayed rendering on the region to be rendered based on the rendering attribute information of each pixel point in the sub-region to be rendered includes:
determining target scene rendering data corresponding to the sub-region to be rendered based on rendering attribute information of each pixel point in the sub-region to be rendered;
and performing delayed rendering on the region to be rendered based on the target scene rendering data.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the scene rendering method in the foregoing method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the scene rendering method in the foregoing method embodiments, which may be referred to specifically in the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope of the present disclosure, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of their technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered by its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A scene rendering method is applied to delayed rendering scenes under a target operating system platform and comprises the following steps:
responding to a rendering request under a delayed rendering scene, and determining a region to be rendered;
dividing the region to be rendered into a plurality of subregions to be rendered, respectively sampling the divided subregions to be rendered, and determining a target pixel point in each subregion to be rendered;
determining rendering attribute information of target pixel points in the multiple sub-areas to be rendered, and determining rendering attribute information of other pixel points except the target pixel points in each sub-area to be rendered based on the rendering attribute information of the target pixel points in each sub-area to be rendered;
and for any sub-area to be rendered, performing delayed rendering on the area to be rendered based on rendering attribute information of each pixel point in the sub-area to be rendered.
2. The method of claim 1, wherein the dividing the region to be rendered into a plurality of sub-regions to be rendered comprises:
dividing the region to be rendered according to the region size of a preset sub region to be rendered to obtain a plurality of sub regions to be rendered; or,
and dividing the region to be rendered according to the number of the preset regions of the sub-regions to be rendered to obtain a plurality of sub-regions to be rendered.
3. The method according to claim 1 or 2, wherein the determining rendering attribute information of target pixel points in the plurality of sub regions to be rendered comprises:
aiming at any target pixel point, based on a preset mapping relation between the type of the rendering request and the rendering attribute information, obtaining the rendering attribute information corresponding to the rendering request from a geometric buffer corresponding to the region to be rendered, and taking the obtained rendering attribute information as the rendering attribute information of the target pixel point; or,
and sampling the attribute information mapping corresponding to the rendering attribute information aiming at any target pixel point, and taking the rendering attribute information obtained by sampling as the rendering attribute information corresponding to the target pixel point.
4. The method according to claim 1, wherein the determining rendering attribute information of other pixel points in each sub-region to be rendered except for the target pixel point based on the rendering attribute information of the target pixel point in each sub-region to be rendered comprises:
aiming at any one target pixel point, determining a similar pixel point of which the position relation with the target pixel point meets a preset condition;
and taking the rendering attribute information of each target pixel point as the rendering attribute information of the similar pixel point corresponding to each target pixel point.
5. The method of claim 4, wherein the preset condition comprises being located adjacent to and/or at a target orientation of the target pixel point.
6. The method according to claim 1, wherein the determining rendering attribute information of other pixel points in each sub-region to be rendered except for the target pixel point based on the rendering attribute information of the target pixel point in each sub-region to be rendered comprises:
and aiming at any other pixel point except the target pixel point, taking the rendering attribute information of the target pixel point closest to the pixel point as the rendering attribute information of the pixel point.
7. The method according to claim 1, wherein the performing delayed rendering on the to-be-rendered area based on the rendering attribute information of each pixel point in the to-be-rendered sub-area comprises:
determining target scene rendering data corresponding to the sub-region to be rendered based on rendering attribute information of each pixel point in the sub-region to be rendered;
and performing delayed rendering on the region to be rendered based on the target scene rendering data.
8. A scene rendering device is applied to delayed rendering of a scene under a target operating system platform, and comprises:
the first determining module is used for responding to a rendering request under a delayed rendering scene and determining a region to be rendered;
the second determining module is used for dividing the region to be rendered into a plurality of sub-regions to be rendered, respectively sampling the divided sub-regions to be rendered, and determining a target pixel point in each sub-region to be rendered;
a third determining module, configured to determine rendering attribute information of target pixel points in the multiple sub-areas to be rendered, and determine, based on the rendering attribute information of the target pixel points in each sub-area to be rendered, rendering attribute information of other pixel points in each sub-area to be rendered except for the target pixel points;
and the delayed rendering module is used for delaying rendering of the to-be-rendered area based on the rendering attribute information of each pixel point in the to-be-rendered subarea aiming at any one of the to-be-rendered subareas.
9. A computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when a computer device is run, the machine-readable instructions when executed by the processor performing the steps of the scene rendering method according to any one of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the scene rendering method according to any one of claims 1 to 7.
CN202111444522.7A 2021-11-30 2021-11-30 Scene rendering method and device, computer equipment and storage medium Pending CN114155334A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111444522.7A CN114155334A (en) 2021-11-30 2021-11-30 Scene rendering method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111444522.7A CN114155334A (en) 2021-11-30 2021-11-30 Scene rendering method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114155334A true CN114155334A (en) 2022-03-08

Family

ID=80454899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111444522.7A Pending CN114155334A (en) 2021-11-30 2021-11-30 Scene rendering method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114155334A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115049774A (en) * 2022-08-12 2022-09-13 深流微智能科技(深圳)有限公司 Graphic processing method, processor, storage medium and terminal device
CN115049774B (en) * 2022-08-12 2022-11-01 深流微智能科技(深圳)有限公司 Graphic processing method, processor, storage medium and terminal device

Similar Documents

Publication Publication Date Title
KR102625233B1 (en) Method for controlling virtual objects, and related devices
CN107958480B (en) Image rendering method and device and storage medium
CN110354489B (en) Virtual object control method, device, terminal and storage medium
CN107562316B (en) Method for showing interface, device and terminal
WO2012159392A1 (en) Interaction method for dynamic wallpaper and desktop component
CN110448904B (en) Game view angle control method and device, storage medium and electronic device
CN112748843B (en) Page switching method and device, computer equipment and storage medium
CN111803932A (en) Skill release method for virtual character in game, terminal and storage medium
CN111773704B (en) Game data processing method and device, storage medium, processor and electronic device
CN111589114B (en) Virtual object selection method, device, terminal and storage medium
CN110665225A (en) Control method and device in game
KR20230085187A (en) Chessboard picture display method and apparatus, device, storage medium, and program product
CN113318428A (en) Game display control method, non-volatile storage medium, and electronic device
CN114155334A (en) Scene rendering method and device, computer equipment and storage medium
CN115965737A (en) Image rendering method and device, terminal equipment and storage medium
CN113031846B (en) Method and device for displaying description information of task and electronic equipment
CN113941152A (en) Virtual object control method and device, electronic equipment and storage medium
CN111589111B (en) Image processing method, device, equipment and storage medium
CN114627225A (en) Method and device for rendering graphics and storage medium
CN113633974A (en) Method, device, terminal and storage medium for displaying real-time game-checking information of user
CN110215702B (en) Method and device for controlling grouping in game
CN114020396A (en) Display method of application program and data generation method of application program
CN111617474B (en) Information processing method and device
CN109814703B (en) Display method, device, equipment and medium
CN114053704B (en) Information display method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination