CN117456079A - Scene rendering method, device, equipment, storage medium and program product


Info

Publication number
CN117456079A
Authority
CN
China
Prior art keywords
data
normal
color
decoding
bit
Prior art date
Legal status
Pending
Application number
CN202311468789.9A
Other languages
Chinese (zh)
Inventor
王钦佳
卓西宁
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/506 Illumination models
    • G06T 15/005 General purpose rendering architectures
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data

Abstract

The application discloses a scene rendering method, a scene rendering device, scene rendering equipment, a storage medium and a program product, and relates to the technical field of computer graphics. The method comprises the following steps: acquiring color data and normal data of pixels to be rendered in a scene to be rendered, wherein the color data is used for coloring the pixels to be rendered, and the normal data is used for indicating the orientation of the pixels to be rendered in the scene to be rendered; coding and integrating the color data and the normal data to obtain first coded data, wherein the first coded data comprises the color coded data and the normal coded data; decoding the first encoded data to obtain rendering data, wherein the rendering data comprises color decoding data and normal decoding data, the color decoding data is used for restoring the color data, and the normal decoding data is used for restoring the normal data; rendering the pixel to be rendered based on the color decoding data and the normal decoding data in the rendering data to obtain a target pixel for display, so that the data storage space is saved and the rendering efficiency is improved.

Description

Scene rendering method, device, equipment, storage medium and program product
Technical Field
The present disclosure relates to the field of computer graphics, and in particular, to a method, an apparatus, a device, a storage medium, and a program product for rendering a scene.
Background
Ambient occlusion (Ambient Occlusion) is a technique for estimating the extent to which each point in a scene is affected by indirect illumination. It can produce a more realistic and natural global lighting effect, enhancing the visual experience of Three-Dimensional (3D) rendering. Ambient occlusion calculation also generally requires the normal information of each pixel, which helps to judge the relative positional relationship among pixels in the scene more accurately, thereby improving the quality of the ambient occlusion effect.
In the related art, an additional rendering pass is used to render the scene normals into a separate frame buffer before the scene color is rendered, and the ambient occlusion calculation then reads this normal map to achieve the ambient occlusion effect.
However, this approach requires additional storage space to store the normal information separately and increases the amount of calculation in the rendering process; its consumption of storage and computing resources is high, its rendering efficiency is low, and its performance is poor.
Disclosure of Invention
The embodiment of the application provides a scene rendering method, device, equipment, storage medium and program product, which can save data storage resources and improve rendering efficiency. The technical scheme is as follows.
In one aspect, a scene rendering method is provided, the method including:
acquiring color data and normal data of pixels to be rendered in a scene to be rendered, wherein the color data is used for coloring the pixels to be rendered, and the normal data is used for indicating the orientation of the pixels to be rendered in the scene to be rendered;
coding and integrating the color data and the normal data to obtain first coded data, wherein the first coded data comprises color coded data and normal coded data;
decoding the first encoded data to obtain rendering data, wherein the rendering data comprises color decoding data and normal decoding data, the color decoding data is used for restoring the color data, and the normal decoding data is used for restoring the normal data;
and rendering the pixel to be rendered based on the color decoding data and the normal decoding data in the rendering data to obtain a target pixel for display.
In another aspect, there is provided a scene rendering apparatus, the apparatus comprising:
the apparatus comprises an acquisition module, wherein the acquisition module is used for acquiring color data and normal data of pixels to be rendered in a scene to be rendered, the color data is used for coloring the pixels to be rendered, and the normal data is used for indicating the orientation of the pixels to be rendered in the scene to be rendered;
The processing module is used for encoding and integrating the color data and the normal data to obtain first encoded data, wherein the first encoded data comprises color encoded data and normal encoded data;
the processing module is further configured to decode the first encoded data to obtain rendering data, where the rendering data includes color decoding data and normal decoding data, the color decoding data is used to restore the color data, and the normal decoding data is used to restore the normal data;
the processing module is further configured to render the pixel to be rendered based on the color decoding data and the normal decoding data in the rendering data, so as to obtain a target pixel for display.
In another aspect, a computer device is provided, where the computer device includes a processor and a memory, where the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, where the at least one instruction, the at least one program, the set of codes, or the set of instructions are loaded and executed by the processor to implement a scene rendering method as in any one of the embodiments of the application.
In another aspect, a computer readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by a processor to implement a scene rendering method as described in any of the embodiments of the application.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the scene rendering method according to any one of the above embodiments.
The technical solutions provided by the embodiments of the present application have at least the following beneficial effects:
The first encoded data is obtained by encoding and integrating the color data and the normal data, which removes the redundant data space of the two rendering buffers that the color data and the normal data would otherwise require. Through encoding and integration, the first encoded data indicates both the color data and the normal data at the same time, so the first encoded data can be stored in a single preset rendering buffer. In scene rendering involving ambient occlusion, the ambient occlusion rendering effect can thus be guaranteed based on the normal information while the normal information is prevented from occupying an additional rendering buffer, which saves storage space. Meanwhile, ambient occlusion is realized in a single rendering calculation based on the color information and the normal information contained in the first encoded data, which reduces the amount of calculation and resource calls in the rendering process and improves rendering efficiency and computing performance.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 2 is a flow chart of a scene rendering method provided by an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a first encoded data structure provided in an exemplary embodiment of the present application;
FIG. 4 is a schematic representation of rendering contrast effects provided by an exemplary embodiment of the present application;
FIG. 5 is a flow chart of a data encoding method provided by an exemplary embodiment of the present application;
FIG. 6 is a flow chart of a data decoding method provided by an exemplary embodiment of the present application;
FIG. 7 is a flowchart of a first encoded data storage method provided by an exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of data storage provided in an exemplary embodiment of the present application;
FIG. 9 is a block diagram of a scene rendering device provided in an exemplary embodiment of the present application;
FIG. 10 is a block diagram of a scene rendering device module provided in one exemplary embodiment of the present application;
fig. 11 is a block diagram of a terminal according to an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that, although the terms first, second, etc. may be used in this disclosure to describe various information, these information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first parameter may also be referred to as a second parameter, and similarly, a second parameter may also be referred to as a first parameter, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
Ambient occlusion is a technique for estimating the degree to which each point in a scene is affected by indirect illumination. It can produce a more realistic and natural global lighting effect, thereby enhancing the visual experience of 3D rendering. Ambient occlusion calculation also generally requires the normal information of each pixel, which helps to judge the relative positional relationship among pixels in the scene more accurately, thereby improving the quality of the ambient occlusion effect. In the related art, an additional rendering pass is used to render the scene normals into a separate frame buffer before the scene color is rendered, and the ambient occlusion calculation then reads this normal map to achieve the ambient occlusion effect. However, this approach requires additional storage space to store the normal information separately and increases the amount of calculation in the rendering process; its consumption of storage and computing resources is high, its rendering efficiency is low, and its performance is poor.
According to the scene rendering method provided by the embodiments of the application, the first encoded data is obtained by encoding and integrating the color data and the normal data, which removes the redundant data space of the two rendering buffers that the color data and the normal data would otherwise require. Through encoding and integration, the first encoded data indicates both the color data and the normal data at the same time, so the first encoded data can be stored in a single preset rendering buffer. In scene rendering involving ambient occlusion, the ambient occlusion rendering effect can thus be guaranteed based on the normal information while the normal information is prevented from occupying an additional rendering buffer, which saves storage space. Meanwhile, rendering can be performed based on the color information and the normal information contained in the first encoded data at the same time, so that ambient occlusion is realized in one rendering calculation, which reduces the amount of calculation and resource calls in the rendering process and improves rendering efficiency and computing performance.
First, an implementation environment of the present application will be described. Referring to fig. 1, a schematic diagram of an implementation environment provided in an exemplary embodiment of the present application is shown, where the implementation environment includes: a terminal 110.
The terminal 110 has installed therein a target application program for providing a scene rendering function. Optionally, the target application may be any application capable of providing a scene rendering function, such as a virtual reality application, a three-dimensional map application, a strategy game, a third-person shooting game (Third-Person Shooting Game, TPS), a first-person shooting game (First-Person Shooting Game, FPS), a multiplayer online battle arena game (Multiplayer Online Battle Arena Games, MOBA), a massively multiplayer online role-playing game (Massive Multiplayer Online Role-Playing Game, MMORPG), a multiplayer warfare survival game, a building modeling program, and the like.
In the scene rendering process, the terminal 110 acquires color data and normal data of pixels to be rendered in a scene to be rendered, wherein the color data is used for coloring the pixels to be rendered, and the normal data is used for indicating the orientation of the pixels to be rendered in the scene to be rendered; coding and integrating the color data and the normal data to obtain first coded data, wherein the first coded data comprises the color coded data and the normal coded data; decoding the first encoded data to obtain rendering data, wherein the rendering data comprises color decoding data and normal decoding data, the color decoding data is used for restoring the color data, and the normal decoding data is used for restoring the normal data; rendering the pixel to be rendered based on the color decoding data and the normal decoding data in the rendering data to obtain a target pixel for display. In some embodiments, the first encoded data is stored in a predetermined rendering buffer, the rendering buffer being configured to store data conforming to a predetermined data format; when the first encoded data needs to be decoded, the first encoded data is read from the rendering buffer.
In some embodiments, the implementation environment further includes a server 120 and a communication network 130, where the terminal 110 and the server 120 perform data transmission through the communication network 130.
The server 120 is configured to provide a background service for scene rendering for the terminal 110. The terminal 110 obtains color data and normal data of pixels to be rendered in a scene to be rendered and sends the color data and the normal data to the server 120, where the color data is used for coloring the pixels to be rendered, and the normal data is used for indicating the orientation of the pixels to be rendered in the scene to be rendered; the server 120 encodes and integrates the color data and the normal data to obtain first encoded data, wherein the first encoded data comprises the color encoded data and the normal encoded data; the server 120 decodes the first encoded data to obtain rendering data and transmits the rendering data to the terminal 110, wherein the rendering data comprises color decoding data and normal decoding data, the color decoding data is used for restoring the color data, and the normal decoding data is used for restoring the normal data; the terminal 110 renders the pixel to be rendered based on the color decoding data and the normal decoding data in the rendering data to obtain the target pixel for display.
The above terminal is optional; the terminal may be a desktop computer, a laptop portable computer, a mobile phone, a tablet computer, an electronic book reader, a Moving Picture Experts Group Audio Layer III (MP3) player, a Moving Picture Experts Group Audio Layer IV (MP4) player, a smart television, a smart in-vehicle terminal device, or various other types of terminal devices, which are not limited in the embodiments of the present application.
It should be noted that the server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server that provides cloud services, cloud security, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content distribution network (Content Delivery Network, CDN), and basic cloud computing services such as big data and an artificial intelligence platform.
Cloud Technology refers to a hosting technology that unifies a series of resources such as hardware, software, and networks in a wide area network or a local area network to realize the calculation, storage, processing and sharing of data.
In some embodiments, the servers described above may also be implemented as nodes in a blockchain system.
It should be noted that, information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals referred to in this application are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of relevant data is required to comply with relevant laws and regulations and standards of the relevant region. For example, reference in the present application to color information, normal information, and the like are all acquired with sufficient authorization.
Further, before and during collection of the relevant user data (for example, the color information and normal information involved in the present application), the present application may display a prompt interface or popup window, or output voice prompt information, to inform the user that the relevant data is currently being collected. The relevant step of obtaining the user data is executed only after the user's confirmation of the prompt interface or popup window is obtained; otherwise (that is, when no confirmation of the prompt interface or popup window is obtained), the relevant step of obtaining the user data ends, i.e., the user data is not obtained. In other words, all user data collected in the present application is collected with the consent and authorization of the user, and the collection, use and processing of relevant user data complies with the relevant laws, regulations and standards of the relevant region.
Referring to fig. 2, a flowchart of a scene rendering method according to an exemplary embodiment of the present application is shown, where the method may be performed by a terminal, or may be performed by a server, or may be performed simultaneously by the terminal and the server, and the embodiment of the present application is described by taking the method performed by the terminal as an example, as shown in fig. 2, and the method includes the following steps:
Step 210, obtaining color data and normal data of pixels to be rendered in the scene to be rendered.
The color data is used for coloring the pixel to be rendered, and the normal data is used for indicating the orientation of the pixel to be rendered in the scene to be rendered.
In some embodiments, the scene to be rendered is a three-dimensional virtual scene, and the scene to be rendered includes a plurality of pixels to be rendered, where the pixels to be rendered correspond to color data and normal data.
The color data is the color value of a pixel to be rendered and generally consists of three components, a first color, a second color and a third color, each with a value between 0 and 255 representing the intensity of that component; for example, color data of (255, 0, 0) can represent that the pixel is purely the first color. The first color, the second color and the third color correspond to the three primary colors of the optical field, that is, red, green and blue (RGB).
The normal data is the normal vector of each pixel's surface, describing the orientation of the pixel to be rendered in the scene to be rendered, and typically consists of three direction components. Normal data is typically used for illumination and shadow calculations, which can make rendering more realistic. For example, a normal vector of (0, 0, 1) for a pixel may indicate that the pixel faces the screen and the lighting effect is strong.
Optionally, the color data and the normal data are obtained in advance by the terminal based on a three-dimensional model corresponding to the scene to be rendered, or are preset by the user.
In some embodiments, the terminal obtains color data and normal data through a shader. A shader is a computer program for handling color and lighting effects in a rendering process.
The scene rendering process comprises a basic shading stage and an ambient occlusion rendering stage. The basic shading stage performs data processing on the pixels to be rendered in the scene to be rendered to obtain the color data and the normal data; the ambient occlusion rendering stage processes the illumination effect of the scene to be rendered based on the normal data, that is, renders the scene to be rendered based on the color data and the normal data.
The shaders include a vertex shader (Vertex Shader) and a fragment shader (Fragment Shader), where the vertex shader is used to perform the basic shading stage and the fragment shader is used to perform the ambient occlusion rendering stage. That is, the terminal may acquire the color data and the normal data through the vertex shader.
And 220, coding and integrating the color data and the normal data to obtain first coded data.
Wherein the first encoded data includes color encoded data and normal encoded data.
In some embodiments, the color data and the normal data are encoded and integrated according to a data format requirement to obtain first encoded data conforming to a preset data format, where the data format requirement is used to indicate the preset data format. The preset data format is a four-dimensional Unsigned Integer (UInt), and the embodiment of the present application is described by taking the preset data format as a four-dimensional 16-bit Unsigned Integer, that is, a 64-bit Unsigned Integer as an example.
Illustratively, the preset data format is used for indicating that the first encoded data includes a first data bit, a second data bit, a third data bit and a fourth data bit, each corresponding to a 16-bit unsigned integer: the first data bit is used for storing bits 1 to 16 of the integer code in the first encoded data, the second data bit bits 17 to 32, the third data bit bits 33 to 48, and the fourth data bit bits 49 to 64.
In some embodiments, the color data is a three-dimensional unsigned floating point number, requiring storage in the form of a three-dimensional 8-bit unsigned integer, i.e., a 24-bit unsigned integer.
Optionally, the color data is obtained by encoding three-dimensional unsigned floating point numbers; that is, three decimal values ranging from 0.0 to 1.0 are stored in three 8-bit unsigned integers. The encoding and decoding between the unsigned floating point numbers and the unsigned integer color data are realized by the following formulas:

UNormEncode(b, x) = round(clip(x) × (2^b − 1)),
UNormDecode(b, x) = x ÷ (2^b − 1),

where x is the data to be processed, b indicates the bit number of the data x, UNormEncode is the encoding function for encoding a decimal between 0 and 1 into a b-bit integer, clip is used for clamping the data to be processed to a value between 0 and 1, and UNormDecode is the decoding function for decoding a b-bit integer into a decimal between 0 and 1.
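For illustration, a minimal C++ sketch of this codec follows; the names UNormEncode and UNormDecode mirror the formulas above, while round-to-nearest encoding is an assumption.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Minimal sketch of the b-bit unsigned-normalized codec described above,
// assuming round-to-nearest; the exact rounding behavior is an assumption.
uint32_t UNormEncode(int b, float x) {
    float clipped = std::clamp(x, 0.0f, 1.0f);         // the clip step
    float scale = static_cast<float>((1u << b) - 1u);  // 2^b - 1 quantization levels
    return static_cast<uint32_t>(std::round(clipped * scale));
}

float UNormDecode(int b, uint32_t x) {
    float scale = static_cast<float>((1u << b) - 1u);
    return static_cast<float>(x) / scale;              // back to [0.0, 1.0]
}
```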
In some embodiments, the color data includes first, second, and third color data for indicating a proportional relationship of the first, second, and third color channels when the pixel to be rendered is rendered.
And respectively encoding the first color data, the second color data and the third color data in the color data to obtain color coded data, wherein the color coded data comprises the first color coded data, the second color coded data and the third color coded data which are all 8-bit unsigned integers.
In some embodiments, the normal data is a three-dimensional 16-bit signed floating point number, i.e., a 64-bit signed floating point number.
In the case where the preset data format is a 64-bit unsigned integer, the color data is encoded as a 24-bit unsigned integer, and the normal data is encoded as a 40-bit unsigned integer.
Since the normal data is three-dimensional, in order to keep the accuracy of its three components consistent without generating redundant data bits, the 64-bit normal data is compressed into 40-bit normal compressed data. The normal compressed data is two-dimensional and comprises 20-bit first compressed data and 20-bit second compressed data, which are obtained by compressing the normal data of the pixel to be rendered in different directions.
On the basis of compressing to obtain 40-bit normal compressed data, respectively encoding first compressed data and second compressed data in the normal compressed data to obtain first normal encoded data and second normal encoded data, wherein the first normal encoded data and the second normal encoded data are 20-bit unsigned integers.
In some embodiments, the first encoded data of the 64-bit unsigned integer is obtained by integrating the color encoded data of the 24-bit unsigned integer and the normal encoded data of the 40-bit unsigned integer.
Alternatively, the color-coded data and the normal-coded data may be connected in a preset order to obtain the first encoded data, or the color-coded data and the normal-coded data may be divided and integrated to obtain the first encoded data.
Taking connection in a preset order as an example, the 24-bit color encoded data is taken as bits 1 to 24 of the first encoded data and the 40-bit normal encoded data as bits 25 to 64; alternatively, the 40-bit normal encoded data is taken as bits 1 to 40 of the first encoded data and the 24-bit color encoded data as bits 41 to 64.
Taking the division and integration of the color-coded data and the normal-coded data as an example, any one data bit is determined from four data bits in the first coded data to store any two of the first color-coded data, the second color-coded data and the third color-coded data, and the remaining three data bits are used to store the remaining color-coded data and normal-coded data.
The number of data bits of the first normal encoded data and the second normal encoded data is larger than the number of bits of a single data bit in the first encoded data, so the first normal encoded data and the second normal encoded data are each divided according to a preset number of bits, and the data of the first preset bit segment in the first normal encoded data and the data of the first preset bit segment in the second normal encoded data are determined as auxiliary precision data. One of the three remaining data bits is then determined to store the remaining color encoded data together with the auxiliary precision data, and the last two data bits store, respectively, the data of the first normal encoded data other than the first preset bit segment and the data of the second normal encoded data other than the first preset bit segment.
That is, the first color encoded data and the second color encoded data are taken as the first data bit in the first encoded data; the third color encoded data and the auxiliary precision data are taken as the second data bit, where the auxiliary precision data comprise the data of the first preset bit segment in the first normal encoded data and the data of the first preset bit segment in the second normal encoded data; the data of the second preset bit segment in the first normal encoded data is taken as the third data bit; and the data of the second preset bit segment in the second normal encoded data is taken as the fourth data bit.
When the preset data format is a 64-bit unsigned integer and the color encoded data is a three-dimensional 8-bit unsigned integer, the first preset bit segment is 4 bits, which yields 8 bits of auxiliary precision data, so that the auxiliary precision data and one 8-bit color encoded component together occupy one 16-bit data bit of the first encoded data.
Alternatively, the first preset bit segment may be used to indicate any 4-bit data in the 20-bit normal encoded data.
In some embodiments, to reduce device computation, a first preset bit segment is used to indicate the upper 4 bits of data in the 20 bits of encoded data and a second preset bit segment is used to indicate the lower 16 bits of data in the 20 bits of encoded data. Wherein the data bits from high to low are determined in the order of data from left to right.
Referring to fig. 3, fig. 3 is a schematic diagram of a first encoded data structure provided in an exemplary embodiment of the present application. As shown in fig. 3, the first color encoded data is taken as the upper 8 bits of the first data bit 310 and the second color encoded data as its lower 8 bits; the third color encoded data is taken as the upper 8 bits of the second data bit 320 and the auxiliary precision data as its lower 8 bits, where the auxiliary precision data comprise the upper 4 bits of the first normal encoded data and the upper 4 bits of the second normal encoded data; the lower 16 bits of the first normal encoded data are taken as the third data bit 330, and the lower 16 bits of the second normal encoded data as the fourth data bit 340.
If the color encoded data and the normal encoded data were instead sequentially connected in a preset order to obtain the first encoded data (for example, bits 1 to 24 of the first encoded data being the color encoded data, bits 25 to 44 the first normal encoded data, and bits 45 to 64 the second normal encoded data), then the upper 8 bits of the first normal encoded data would have to serve as the lower 8 bits of the second data bit, the lower 12 bits of the first normal encoded data as the upper 12 bits of the third data bit, the upper 4 bits of the second normal encoded data as the lower 4 bits of the third data bit, and the lower 16 bits of the second normal encoded data as the fourth data bit.
Compared with sequentially connecting the color encoded data and the normal encoded data in a preset order, storing them in the first encoded data structure shown in fig. 3 guarantees that the first normal encoded data and the second normal encoded data have identical storage structures. The first, second, third and fourth data bits of the first encoded data can therefore be read and decoded in parallel, and the encoding and decoding logic for the first normal encoded data and the second normal encoded data is identical, so the parallelism of data encoding and data decoding is high, the amount of calculation is small, and the data processing efficiency is improved.
When determining the data of the first preset bit segment in the first normal encoded data and the second normal encoded data, one shift operation is already required to place the first precision data and the second precision data as the upper 4 bits and lower 4 bits of the auxiliary precision data in the lower 8 bits of the second data bit. By choosing the upper 4 bits as the data corresponding to the first preset bit segment, shift operations for processing the third data bit and the fourth data bit are avoided, which reduces the amount of calculation.
If another bit segment were chosen as the first preset bit segment, the number of shift operations would increase, raising the amount of calculation and lowering the data processing efficiency. Taking the lower 4 bits as the first preset bit segment as an example: the first normal encoded data and the second normal encoded data would each need a shift operation to obtain their upper 16 bits as the third data bit and the fourth data bit, and when the two lower-4-bit segments are used as the upper 4 bits and lower 4 bits of the auxiliary precision data, one of them still needs a shift operation; the number of shift operations and the amount of calculation therefore increase and the data processing efficiency is low.
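As a compact summary of the chosen layout, the following hypothetical C++ struct sketches the four 16-bit data bits of fig. 3 as the channels of a single 64-bit texel; the struct and field names are assumptions, not taken from the patent.

```cpp
#include <cstdint>

// Hypothetical sketch of the fig. 3 layout: one 64-bit value split into four
// 16-bit channels (the four "data bits" of the first encoded data).
struct PackedPixel {
    uint16_t r; // [c.r' : high 8][c.b' : low 8]               - first data bit 310
    uint16_t g; // [c.g' : high 8][n'.x hi 4 | n'.y hi 4 : 8]  - second data bit 320
    uint16_t b; // [n'.x low 16]                               - third data bit 330
    uint16_t a; // [n'.y low 16]                               - fourth data bit 340
};
```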
And step 230, decoding the first encoded data to obtain the rendering data.
The rendering data comprises color decoding data and normal decoding data, the color decoding data is used for restoring the color data, and the normal decoding data is used for restoring the normal data.
Taking the first encoded data structure shown in fig. 3 as an example: the first data bit of the first encoded data is decoded to obtain the first color decoded data and the second color decoded data; the second data bit is decoded to obtain the third color decoded data and the auxiliary precision decoded data; the third data bit is decoded to obtain the first component data, that is, the decoded data corresponding to the second preset bit segment of the first normal encoded data; and the fourth data bit is decoded to obtain the second component data, that is, the decoded data corresponding to the second preset bit segment of the second normal encoded data. The color decoded data is then obtained based on the first color decoded data, the second color decoded data and the third color decoded data, and the normal decoded data is obtained based on the auxiliary precision decoded data, the first component data and the second component data.
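For illustration, a hypothetical C++ sketch of this decoding step follows, reusing the UNormDecode sketch above; the function and parameter names are assumptions.

```cpp
#include <cstdint>

// Hypothetical decode of the fig. 3 layout: recovers the three 8-bit color
// components and the two 20-bit compressed normal components from the four
// 16-bit data bits t.r, t.g, t.b, t.a.
void DecodePacked(uint16_t tr, uint16_t tg, uint16_t tb, uint16_t ta,
                  float& cr, float& cb, float& cg, float& ox, float& oy) {
    cr = UNormDecode(8, tr >> 8);           // first color decoded data
    cb = UNormDecode(8, tr & 0xFFu);        // second color decoded data
    cg = UNormDecode(8, tg >> 8);           // third color decoded data
    uint32_t k = tg & 0xFFu;                // auxiliary precision decoded data
    uint32_t nx = ((k >> 4) << 16) | tb;    // rebuild the 20-bit first normal code
    uint32_t ny = ((k & 0xFu) << 16) | ta;  // rebuild the 20-bit second normal code
    ox = UNormDecode(20, nx);               // first compressed component o.x
    oy = UNormDecode(20, ny);               // second compressed component o.y
}
```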
In some embodiments, the terminal needs to read the first encoded data before decoding, so step 230 may be implemented as reading the first encoded data and then decoding it.
Optionally, when the first encoded data is read, the terminal reads the first encoded data corresponding to the plurality of pixels to be rendered in parallel according to the number of pixels to be rendered in the scene to be rendered.
In some embodiments, the first encoded data is stored in a preset rendering buffer, and the terminal reads the first encoded data from the rendering buffer.
Optionally, the first data bit, the second data bit, the third data bit and the fourth data bit may be sequentially read and decoded according to a preset sequence, or the first data bit, the second data bit, the third data bit and the fourth data bit may be read and decoded in parallel, so as to improve decoding parallelism, thereby improving data decoding efficiency.
And step 240, rendering the pixel to be rendered based on the color decoding data and the normal decoding data in the rendering data, and displaying the obtained target pixel.
In some embodiments, the rendering is performed with a terminal deployed shader, which is a preset computer program for processing colors and lighting effects in graphics rendering.
Illustratively, rendering is performed in a rendering code segment corresponding to an OpenGL Shading Language (Open Graphics Library Shading Language, GLSL) shader.
The normal decoding data is used for restoring the normal data, and the normal data indicates the orientation of the pixel to be rendered in the scene to be rendered and can therefore indicate its illumination. By rendering the pixel to be rendered based on both the color decoding data and the normal decoding data, the illumination effect can be restored from the normal decoding data during rendering, and ambient occlusion rendering can be realized, that is, the ambient occlusion rendering function is enabled.
In some embodiments, when the ambient occlusion rendering function is not enabled, the pixel to be rendered is rendered based only on the color data to obtain the target pixel for display. With this rendering method, the rendering effect of the scene to be rendered is poorer, and the illumination of the scene to be rendered cannot be reflected.
Referring to fig. 4, fig. 4 is a schematic diagram of a rendering contrast effect provided by an exemplary embodiment of the present application. As shown in fig. 4, without ambient occlusion rendering, scene rendering based on the color decoding data alone yields the rendering effect 410; with ambient occlusion rendering enabled, scene rendering based on both the color decoding data and the normal decoding data yields the rendering effect 420. By contrast, the geometric edge details in the rendering effect 420 are more distinct than in the rendering effect 410 and the illumination is more accurate, so the rendering effect 420 is better than the rendering effect 410; that is, the rendering effect with ambient occlusion rendering enabled is superior to the rendering effect without it.
In summary, according to the method provided by the embodiments of the application, the first encoded data is obtained by encoding and integrating the color data and the normal data, which removes the redundant data space of the two rendering buffers that the color data and the normal data would otherwise require. Through encoding and integration, the first encoded data indicates both the color data and the normal data at the same time, so the first encoded data can be stored in a single preset rendering buffer. In scene rendering involving ambient occlusion, the ambient occlusion rendering effect can thus be guaranteed based on the normal information while the normal information is prevented from occupying an additional rendering buffer, which saves storage space. Meanwhile, rendering can be performed based on the color information and the normal information contained in the first encoded data, so that ambient occlusion is realized in one rendering calculation, which reduces the amount of calculation and resource calls in the rendering process and improves rendering efficiency and computing performance.
Referring to fig. 5, a flowchart of a data encoding method provided in an exemplary embodiment of the present application is shown, where the method may be performed by a terminal, or may be performed by a server, or may be performed simultaneously by the terminal and the server, and the embodiment of the present application is described by taking the method performed by the terminal as an example, as shown in fig. 5, and in some embodiments, the first encoded data conforms to a preset data format, where step 220 includes the following steps:
step 221, the color data is encoded, resulting in color encoded data.
Wherein the color coded data occupies a first number of data bits in the first coded data.
In some embodiments, the color data is a three-dimensional unsigned floating point number, requiring storage in the form of a three-dimensional 8-bit unsigned integer, i.e., a 24-bit unsigned integer. The preset data format is used to indicate a 64-bit unsigned integer in which color coded data is stored in a 24-bit unsigned integer.
Illustratively, color data of a 24-bit floating point number is obtained, and the color data is encoded to obtain color encoded data of a 24-bit integer, i.e., the first bit number is 24 bits.
In some embodiments, the color data includes first, second, and third color data, where the first, second, and third color data are used to indicate a proportional relationship of the first, second, and third color channels when the pixel to be rendered is rendered. Step 221 is implemented by encoding the first color data, the second color data, and the third color data in the color data, respectively, to obtain first color encoded data, second color encoded data, and third color encoded data.
Illustratively, the first color data is denoted as c.r, the second color data as c.b, and the third color data as c.g. The encoding of the first color data, the second color data and the third color data refers to the following formulas:

c.r′ = UNormEncode(8, c.r),
c.b′ = UNormEncode(8, c.b),
c.g′ = UNormEncode(8, c.g),

wherein UNormEncode is the encoding function for encoding color data into 8-bit integers, c.r′ is the first color encoded data, c.b′ is the second color encoded data, and c.g′ is the third color encoded data.
And step 222, encoding the normal line data to obtain normal line encoded data.
Wherein the normal encoded data occupies the second number of data bits in the first encoded data.
In some embodiments, the normal data is a three-dimensional 16-bit signed floating point number, i.e., a 64-bit signed floating point number.
In the case where the preset data format is a 64-bit unsigned integer, the color data is encoded as a 24-bit unsigned integer, the normal data is encoded as a 40-bit unsigned integer, i.e., the second number of bits is 40 bits.
In some embodiments, step 222 comprises the following two steps:
and in the first step, data compression is carried out on normal data to obtain normal compression data, wherein the normal data is three-dimensional data, and the normal compression data is two-dimensional data.
Since the normal data is three-dimensional, in order to keep the accuracy of its three components consistent without generating redundant data bits, the 64-bit normal data is compressed into 40-bit normal compressed data, which is two-dimensional and comprises 20-bit first compressed data and 20-bit second compressed data. The first compressed data and the second compressed data are obtained by compressing the normal data of the pixel to be rendered in different directions.
Optionally, the normal data may be data compressed by a preset algorithm, where the preset algorithm includes any one data compression algorithm such as a spherical coordinate expansion algorithm, an octahedral expansion algorithm, and the like.
Illustratively, taking an octahedral expansion algorithm as an example, the following formula is referred to for the data compression method of normal data:
o.xy=OctaEncode(n.xyz),
where n.xyz is the normal data, o.xy is the normal compressed data, and OctaEncode is the octahedral unfolding function for converting three-dimensional coordinate data into two-dimensional data in the octahedral unfolded coordinate system.
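The patent does not spell out the octahedral functions themselves, so the following C++ sketch shows one standard formulation of octahedral normal mapping, remapped to the [0, 1] range so that the result can feed the UNormEncode step below; all names and the exact formulation are assumptions.

```cpp
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

static float SignNonZero(float v) { return v >= 0.0f ? 1.0f : -1.0f; }

// One standard octahedral mapping: project a unit normal onto the octahedron,
// flatten it to a square, then remap from [-1, 1] to [0, 1].
Vec2 OctaEncode(Vec3 n) {
    float inv = 1.0f / (std::fabs(n.x) + std::fabs(n.y) + std::fabs(n.z));
    float px = n.x * inv;
    float py = n.y * inv;
    if (n.z < 0.0f) {  // fold the lower hemisphere over
        float fx = (1.0f - std::fabs(py)) * SignNonZero(px);
        float fy = (1.0f - std::fabs(px)) * SignNonZero(py);
        px = fx;
        py = fy;
    }
    return { px * 0.5f + 0.5f, py * 0.5f + 0.5f };
}

Vec3 OctaDecode(Vec2 o) {
    float px = o.x * 2.0f - 1.0f;
    float py = o.y * 2.0f - 1.0f;
    Vec3 n { px, py, 1.0f - std::fabs(px) - std::fabs(py) };
    if (n.z < 0.0f) {  // unfold the lower hemisphere
        float fx = (1.0f - std::fabs(n.y)) * SignNonZero(n.x);
        float fy = (1.0f - std::fabs(n.x)) * SignNonZero(n.y);
        n.x = fx;
        n.y = fy;
    }
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };  // renormalize to a unit vector
}
```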
And secondly, respectively encoding the first compressed data and the second compressed data in the normal compressed data to obtain first normal encoded data and second normal encoded data.
On the basis of the 40-bit normal compressed data, the first compressed data and the second compressed data are encoded respectively to obtain the first normal encoded data and the second normal encoded data, both of which are 20-bit unsigned integers, as in the following formulas:

n′.x = UNormEncode(20, o.x),
n′.y = UNormEncode(20, o.y),

where n′.x is the first normal encoded data, o.x is the first compressed data, n′.y is the second normal encoded data, o.y is the second compressed data, and UNormEncode is the encoding function for encoding the compressed data into 20-bit integer data.
Step 223, integrating the color-coded data and the normal-coded data according to a preset data format to obtain first coded data.
Wherein the first encoded data occupies a third number of bits of data, the third number of bits being a sum of the first number of bits and the second number of bits.
In some embodiments, the first encoded data of the 64-bit unsigned integer, i.e., the third number of bits is 64 bits, is obtained by integrating the color encoded data of the 24-bit unsigned integer and the normal encoded data of the 40-bit unsigned integer.
Alternatively, the color-coded data and the normal-coded data may be connected in a preset order to obtain the first encoded data, or the color-coded data and the normal-coded data may be divided and integrated to obtain the first encoded data.
Taking connection in a preset order as an example, the 24-bit color encoded data is taken as bits 1 to 24 of the first encoded data and the 40-bit normal encoded data as bits 25 to 64; alternatively, the 40-bit normal encoded data is taken as bits 1 to 40 of the first encoded data and the 24-bit color encoded data as bits 41 to 64.
Taking the division and integration of the color-coded data and the normal-coded data as an example, any one data bit is determined from four data bits in the first coded data to store any two of the first color-coded data, the second color-coded data and the third color-coded data, and the remaining three data bits are used to store the remaining color-coded data and normal-coded data.
The number of data bits of the first normal encoded data and the second normal encoded data is larger than the number of bits of a single data bit in the first encoded data, so the first normal encoded data and the second normal encoded data are each divided according to a preset number of bits, and the data of the first preset bit segment in the first normal encoded data and the data of the first preset bit segment in the second normal encoded data are determined as auxiliary precision data. One of the three remaining data bits is then determined to store the remaining color encoded data together with the auxiliary precision data, and the last two data bits store, respectively, the data of the first normal encoded data other than the first preset bit segment and the data of the second normal encoded data other than the first preset bit segment.
In some embodiments, the normal encoded data includes first normal encoded data and second normal encoded data, and the color encoded data includes first color encoded data, second color encoded data, and third color encoded data; the first encoded data includes a first data bit, a second data bit, a third data bit, and a fourth data bit having the same number of bits. Step 223 includes the steps of:
in the first step, the first color-coded data and the second color-coded data are used as first data bits in the first coded data.
Illustratively, taking the first encoded data structure as shown in fig. 3 as an example, the first color encoded data is used as the upper 8 bits of the first data bits, and the second color encoded data is used as the lower 8 bits of the first data bits, please refer to the following formula:
t.r = UNormEncode(8, c.r) × 2^8 + UNormEncode(8, c.b),
wherein t.r is the first data bit in the first encoded data, c.r is the first color data, and c.b is the second color data; by shifting the first color encoded data left by 8 bits and adding the second color encoded data, the first color encoded data becomes the upper 8 bits of the first data bit and the second color encoded data the lower 8 bits.
And a second step of taking the third color coding data and the auxiliary precision data as second data bits in the first coding data.
The auxiliary precision data comprise data of a first preset bit section in the first normal line coding data and data of the first preset bit section in the second normal line coding data.
When the preset data format is a 64-bit unsigned integer and the color encoded data is a three-dimensional 8-bit unsigned integer, the first preset bit segment is 4 bits, which yields 8 bits of auxiliary precision data, so that the auxiliary precision data and one 8-bit color encoded component together occupy one 16-bit data bit of the first encoded data.
Alternatively, the first preset bit segment may be used to indicate any 4-bit data in the 20-bit normal encoded data.
In some embodiments, to reduce device computation, a first preset bit segment is used to indicate the upper 4 bits of data in the 20 bits of encoded data and a second preset bit segment is used to indicate the lower 16 bits of data in the 20 bits of encoded data. Wherein the data bits from high to low are determined in the order of data from left to right.
In some embodiments, the auxiliary precision data needs to be determined before the third color encoded data and the auxiliary precision data are taken as the second data bit in the first encoded data. Optionally, the first normal encoded data and the second normal encoded data are divided according to the preset bit segment to obtain first precision data and second precision data, where the first precision data is the data of the first preset bit segment in the first normal encoded data and the second precision data is the data of the first preset bit segment in the second normal encoded data; the first precision data is shifted left by the number of bits of the first preset bit segment to obtain third precision data, and the sum of the third precision data and the second precision data is taken as the auxiliary precision data.
Illustratively, taking the first encoded data structure shown in fig. 3 as an example, the upper 4 bits of the first normal encoded data and the upper 4 bits of the second normal encoded data are taken as the auxiliary precision data, as in the following formula:

n′.k = floor(UNormEncode(20, o.x) ÷ 2^16) × 2^4 + floor(UNormEncode(20, o.y) ÷ 2^16),

where n′.k is the auxiliary precision data, o.x is the first compressed data, o.y is the second compressed data, UNormEncode(20, o.x) is the first normal encoded data, and UNormEncode(20, o.y) is the second normal encoded data. Shifting the first normal encoded data right by 16 bits and rounding down yields its upper 4 bits as the first precision data, and shifting the second normal encoded data right by 16 bits and rounding down yields its upper 4 bits as the second precision data; the first precision data is shifted left by 4 bits to obtain the third precision data, and the third precision data is added to the second precision data to obtain the auxiliary precision data.
Illustratively, taking the first encoded data structure as shown in fig. 3 as an example, the third color encoded data is used as the upper 8-bit data of the second data bit in the first encoded data, and the auxiliary precision data is used as the lower 8-bit data of the second data bit in the first encoded data, please refer to the following formula:
t.g = UNormEncode(8, c.g) × 2^8 + n′.k,

wherein t.g is the second data bit in the first encoded data, UNormEncode(8, c.g) is the third color encoded data, and n′.k is the auxiliary precision data; by shifting the third color encoded data left by 8 bits and adding the auxiliary precision data, the third color encoded data becomes the upper 8 bits of the second data bit in the first encoded data and the auxiliary precision data the lower 8 bits.
And thirdly, taking the data of the second preset bit section in the first normal line coded data as the third data bit in the first coded data.
The data of the second preset bit segment refers to data except the data of the first preset bit segment in the first normal line coding data.
In some embodiments, to reduce device computation, a first preset bit segment is used to indicate the upper 4 bits of data in the 20 bits of encoded data and a second preset bit segment is used to indicate the lower 16 bits of data in the 20 bits of encoded data. Wherein the data bits from high to low are determined in the order of data from left to right.
For illustration, please refer to the following formula:
t.b = n′.x mod 2^16 = UNormEncode(20, o.x) mod 2^16,
wherein t.b is the third data bit in the first encoded data and n′.x is the first normal encoded data; taking the first normal encoded data modulo 2^16 retains its lower 16 bits as the third data bit.
And a fourth step of taking the data of the second preset bit segment in the second normal encoded data as the fourth data bit in the first encoded data.
The data of the second preset bit segment refers to the data in the second normal encoded data other than the data of the first preset bit segment.
In some embodiments, to reduce device computation, a first preset bit segment is used to indicate the upper 4 bits of data in the 20 bits of encoded data and a second preset bit segment is used to indicate the lower 16 bits of data in the 20 bits of encoded data. Wherein the data bits from high to low are determined in the order of data from left to right.
For illustration, please refer to the following formula:
t.a = n′.y mod 2^16 = UNormEncode(20, o.y) mod 2^16,
wherein t.a is the fourth data bit in the first encoded data and n′.y is the second normal encoded data; by taking the second normal encoded data modulo 2^16, the lower 16 bits of the second normal encoded data are reserved as the fourth data bit.
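Illustratively, putting the four data bits together, a minimal C++ packing sketch follows. The layout of the first data bit (first color encoded data in the upper 8 bits, second color encoded data in the lower 8 bits) follows the decoding description in step 231 below; UNormEncode is the assumed helper sketched above, and the struct and function names are introduced only for illustration:

```cpp
#include <cstdint>

uint32_t UNormEncode(int bits, float v);  // assumed helper, as sketched above

// One 64-bit first-encoded-data entry: four 16-bit unsigned channels
// corresponding to the first, second, third and fourth data bits.
struct FirstEncodedData {
    uint16_t r, g, b, a;  // t.r, t.g, t.b, t.a
};

// cr, cb, cg: first, second and third color data in [0,1];
// ox, oy: first and second compressed data of the normal in [0,1].
FirstEncodedData PackFirstEncodedData(float cr, float cb, float cg,
                                      float ox, float oy) {
    uint32_t nx = UNormEncode(20, ox);             // first normal encoded data
    uint32_t ny = UNormEncode(20, oy);             // second normal encoded data
    uint32_t nk = ((nx >> 16) << 4) | (ny >> 16);  // auxiliary precision data n'.k
    FirstEncodedData t;
    t.r = static_cast<uint16_t>((UNormEncode(8, cr) << 8) | UNormEncode(8, cb));
    t.g = static_cast<uint16_t>((UNormEncode(8, cg) << 8) | nk);
    t.b = static_cast<uint16_t>(nx & 0xFFFFu);     // lower 16 bits of nx
    t.a = static_cast<uint16_t>(ny & 0xFFFFu);     // lower 16 bits of ny
    return t;
}
```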
In summary, according to the method provided by the embodiments of the present application, the color data and the normal data are encoded respectively to obtain color encoded data with a first bit number and normal encoded data with a second bit number, so that both meet the data bit number requirement of the preset data format; the color encoded data and the normal encoded data are then integrated according to the preset data format to obtain first encoded data with a third bit number, which conforms to the preset data format and thus provides an implementation basis for storing the complete first encoded data in a single rendering buffer.
According to the method, the three-dimensional normal data is compressed into two-dimensional normal compressed data, and the normal encoded data is obtained by encoding the normal compressed data. This reduces the data bit number of the normal encoded data so that it meets the data bit number requirement of the preset data format, allows the first encoded data conforming to the preset data format to contain both the color encoded data and the normal encoded data, and provides an implementation basis for storing the color encoded data and the normal encoded data in a single rendering buffer to save storage space.
According to the method provided by the embodiments of the present application, the first color data, the second color data and the third color data in the color data are encoded respectively to obtain the first color encoded data, the second color encoded data and the third color encoded data. This provides a specific encoding mode for the three-dimensional color data: the color information is retained in the form of three-dimensional color encoded data, avoiding data loss and guaranteeing the precision of the color data.
According to the method provided by the embodiments of the present application, the first, second, third and fourth data bits in the first encoded data are determined, according to the preset data format, based on the first normal encoded data, the second normal encoded data and the first, second and third color encoded data, providing a scheme for integrating the color encoded data and the normal encoded data into first encoded data conforming to the preset data format. The third color encoded data and the auxiliary precision data are taken as the second data bit in the first encoded data, where the auxiliary precision data comprises the data of the first preset bit segment in the first normal encoded data and the data of the first preset bit segment in the second normal encoded data, so that the storage structures and encoding logic of the first normal encoded data and the second normal encoded data remain relatively consistent, which improves the parallelism of data encoding, reduces the data processing computation, and improves data processing efficiency.
According to the method provided by the embodiments of the present application, the first normal encoded data and the second normal encoded data are subjected to data division to obtain the first precision data and the second precision data, the first precision data is shifted leftwards to obtain the third precision data, and the sum of the third precision data and the second precision data is taken as the auxiliary precision data. This clarifies a scheme for determining the auxiliary precision data based on data division and ensures that the division modes of the first normal encoded data and the second normal encoded data are consistent, thereby providing a basis for keeping their storage structures and encoding logic relatively consistent.
Referring to fig. 6, a flowchart of a data decoding method according to an exemplary embodiment of the present application is shown, where the method may be performed by a terminal, or may be performed by a server, or may be performed simultaneously by the terminal and the server, and the embodiment of the present application is described by taking the method performed by the terminal as an example, as shown in fig. 6, where step 230 includes the following steps:
in step 231, the first data bit in the first encoded data is decoded to obtain first color decoded data and second color decoded data.
In some embodiments, the first data bits are used to store first color coded data and second color coded data.
Optionally, the high 8-bit data of the first data bit is first color coded data, the low 8-bit data is second color coded data, the high 8-bit data of the first data bit is decoded into first color decoded data, and the low 8-bit data is decoded into second color decoded data, wherein the first color decoded data is used for restoring the first color data, and the second color decoded data is used for restoring the second color data.
For illustration, please refer to the following formula:
c.r = UNormDecode(8, (t.r - t.r mod 2^8) / 2^8), c.b = UNormDecode(8, t.r mod 2^8),
wherein c.r is the first color decoded data, c.b is the second color decoded data, t.r is the first data bit, and UNormDecode is a decoding function for decoding to obtain 8-bit decoded data. By taking the first data bit modulo 2^8, the lower 8 bits of the first data bit are reserved as the second color decoded data; the upper 8 bits, which give the first color decoded data, are obtained by subtracting the lower 8 bits from the first data bit.
And step 232, decoding the second data bit in the first encoded data to obtain third color decoded data and auxiliary precision decoded data.
In some embodiments, the second data bits are used to store third color coded data and auxiliary accuracy data.
Optionally, the high 8-bit data of the second data bit is third color encoded data, the low 8-bit data is auxiliary precision data, the high 8-bit data of the second data bit is decoded into third color decoded data, and the low 8-bit data is decoded into auxiliary precision decoded data, wherein the third color decoded data is used for restoring the third color data.
For illustration, please refer to the following formula:
c.g = UNormDecode(8, (t.g - t.g mod 2^8) / 2^8), n′.k = t.g mod 2^8,
wherein c.g is the third color decoded data, n′.k is the auxiliary precision decoded data, t.g is the second data bit, and UNormDecode is a decoding function for decoding to obtain 8-bit decoded data. By taking the second data bit modulo 2^8, the lower 8 bits of the second data bit are reserved as the auxiliary precision decoded data; the upper 8 bits, which give the third color decoded data, are obtained by subtracting the lower 8 bits from the second data bit.
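Illustratively, steps 231 and 232 may be sketched in C++ as follows; the body of UNormDecode is an assumption (taken as the inverse of the assumed UNormEncode above), and the function name DecodeColorBits is introduced only for illustration:

```cpp
#include <cstdint>

// Assumed inverse of UNormEncode: map an n-bit unsigned integer
// back to a float in [0,1].
float UNormDecode(int bits, uint32_t v) {
    return static_cast<float>(v) / static_cast<float>((1u << bits) - 1u);
}

// Steps 231-232: recover the three color components and the auxiliary
// precision data from the first and second data bits (t.r and t.g).
void DecodeColorBits(uint16_t tr, uint16_t tg,
                     float& cr, float& cb, float& cg, uint32_t& nk) {
    cr = UNormDecode(8, tr >> 8);    // upper 8 bits: first color decoded data
    cb = UNormDecode(8, tr & 0xFFu); // lower 8 bits: second color decoded data
    cg = UNormDecode(8, tg >> 8);    // upper 8 bits: third color decoded data
    nk = tg & 0xFFu;                 // lower 8 bits: auxiliary precision decoded data
}
```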
In step 233, the third data bit in the first encoded data is decoded to obtain the first component data.
The first component data is the decoded data corresponding to the data of the second preset bit segment in the first normal encoded data.
In some embodiments, the third data bit is used to store data of a second predetermined bit segment in the first normal encoded data.
Optionally, the third data bit is used for storing the low 16 bits of the first normal encoded data, the third data bit is decoded by a decoding function, and the decoded 16 bits of data are used as the first component data.
In step 234, the fourth data bit in the first encoded data is decoded to obtain the second component data.
The second component data is the decoded data corresponding to the data of the second preset bit segment in the second normal encoded data.
In some embodiments, the fourth data bit is used to store data of a second predetermined bit segment in the second normal encoded data.
Optionally, the fourth data bit is used for storing the low 16 bits of the second normal encoded data, the fourth data bit is decoded by a decoding function, and the decoded 16 bits of data are used as the second component data.
Alternatively, the above steps 231 to 234 may be sequentially performed in a preset order, or the above steps 231 to 234 may be performed in parallel, thereby improving the parallelism of the data decoding process.
In step 235, the color decoded data is obtained based on the first color decoded data, the second color decoded data and the third color decoded data, and the normal decoded data is obtained based on the auxiliary precision decoded data, the first component data and the second component data.
In some embodiments, the color data is jointly restored based on the first color decoded data, the second color decoded data, and the third color decoded data.
In some embodiments, the normal decoded data is obtained based on the auxiliary precision decoded data, the first component data and the second component data, comprising the following three steps:
First, first normal decoded data is obtained based on the third precision data in the auxiliary precision data and the first component data.
The first normal decoded data is decoded data corresponding to the first normal encoded data.
In some embodiments, the third precision data corresponds to the upper 4 bits of the auxiliary precision data; the 20-bit first normal encoding is rebuilt by taking these 4 bits as its upper 4 bits and the first component data as its lower 16 bits, and decoding this value yields the first normal decoded data.
For illustration, please refer to the following formula:
o.x = UNormDecode(20, t.b + (n′.k - n′.k mod 2^4) × 2^12),
wherein o.x is the first normal decoded data, t.b is the third data bit, n′.k is the auxiliary precision decoded data, and UNormDecode is a decoding function for decoding to obtain 20-bit decoded data. By taking the auxiliary precision data modulo 2^4, the lower 4 bits are reserved as the second precision data; the third precision data, i.e. the upper 4 bits, is obtained by subtracting the second precision data from the auxiliary precision data; shifting it into the upper 4 bits of the 20-bit value (multiplying by 2^12, since the third precision data is already left-shifted by 4 bits) and adding the third data bit rebuilds the 20-bit first normal encoding, which is decoded into the first normal decoded data.
And a second step of obtaining second normal decoded data based on second precision data and second component data in the auxiliary precision data.
Wherein the second normal decoded data is decoded data corresponding to the second normal encoded data.
In some embodiments, the second precision data corresponds to the lower 4 bits of the auxiliary precision data; the 20-bit second normal encoding is rebuilt by taking the second precision data as its upper 4 bits and the second component data as its lower 16 bits, and decoding this value yields the second normal decoded data.
For illustration, please refer to the following formula:
o.y = UNormDecode(20, t.a + (n′.k mod 2^4) × 2^16),
wherein o.y is the second normal decoded data, t.a is the fourth data bit, n′.k is the auxiliary precision decoded data, and UNormDecode is a decoding function for decoding to obtain 20-bit decoded data. By taking the auxiliary precision data modulo 2^4, the lower 4 bits are reserved as the second precision data; shifting the second precision data into the upper 4 bits of the 20-bit value (multiplying by 2^16) and adding the fourth data bit rebuilds the 20-bit second normal encoding, which is decoded into the second normal decoded data.
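Illustratively, the reconstruction in the first and second steps may be sketched in C++ with plain shifts, which are equivalent to the multiplications by 2^12 and 2^16 in the formulas above; UNormDecode is the assumed helper sketched earlier, and the function name is introduced only for illustration:

```cpp
#include <cstdint>

float UNormDecode(int bits, uint32_t v);  // assumed helper, as sketched above

// Rebuild the two 20-bit normal encodings from the third/fourth data bits
// plus the auxiliary precision data, then decode them back to [0,1].
void DecodeNormalBits(uint16_t tb, uint16_t ta, uint32_t nk,
                      float& ox, float& oy) {
    uint32_t firstPrec  = nk >> 4;          // upper 4 bits of n'.k
    uint32_t secondPrec = nk & 0xFu;        // lower 4 bits of n'.k
    uint32_t nx = (firstPrec << 16) | tb;   // 20-bit first normal encoding
    uint32_t ny = (secondPrec << 16) | ta;  // 20-bit second normal encoding
    ox = UNormDecode(20, nx);               // first normal decoded data
    oy = UNormDecode(20, ny);               // second normal decoded data
}
```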
And thirdly, carrying out three-dimensional data reduction based on the first normal decoding data and the second normal decoding data to obtain normal decoding data.
Optionally, the normal data may be restored by a decoding function corresponding to a preset algorithm, where the preset algorithm includes any one of data compression algorithms such as a spherical coordinate expansion algorithm and an octahedral expansion algorithm.
Illustratively, taking an octahedral expansion algorithm as an example, the data recovery method for normal decoded data refers to the following formula:
n.xyz = OctaDecode(o.xy),
where n.xyz is the normal decoded data, o.xy is the restored two-dimensional normal compressed data, and OctaDecode is a decoding function corresponding to the octahedral expansion algorithm, used to restore two-dimensional data in the octahedral expansion coordinate system to three-dimensional coordinate data.
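The embodiments name the octahedral decoding function without giving its body; the following C++ sketch uses a common octahedral unpacking from the graphics literature as one plausible realization, assuming o.xy is stored in [0,1] and remapped to [-1,1]:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Common octahedral decode (an assumption; the embodiments do not define
// the function body). Input ox, oy in [0,1]; output is a unit normal.
Vec3 OctaDecode(float ox, float oy) {
    float fx = ox * 2.0f - 1.0f;  // remap to [-1, 1]
    float fy = oy * 2.0f - 1.0f;
    Vec3 n{fx, fy, 1.0f - std::fabs(fx) - std::fabs(fy)};
    float t = std::fmax(-n.z, 0.0f);  // fold back the lower hemisphere
    n.x += (n.x >= 0.0f) ? -t : t;
    n.y += (n.y >= 0.0f) ? -t : t;
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return Vec3{n.x / len, n.y / len, n.z / len};
}
```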
In summary, according to the method provided by the embodiments of the present application, the first data bit in the first encoded data is decoded to obtain the first color decoded data and the second color decoded data, the second data bit is decoded to obtain the third color decoded data and the auxiliary precision decoded data, the third data bit is decoded to obtain the first component data, and the fourth data bit is decoded to obtain the second component data; the color decoded data is then obtained based on the first, second and third color decoded data, and the normal decoded data is obtained based on the auxiliary precision decoded data, the first component data and the second component data.
According to the method provided by the embodiments of the present application, a scheme for obtaining the normal decoded data based on the auxiliary precision decoded data, the first component data and the second component data is clarified: the first normal decoded data is obtained based on the third precision data and the first component data, and the second normal decoded data is obtained based on the second precision data and the second component data, ensuring that the decoding logic of the first normal decoded data and that of the second normal decoded data are relatively consistent and providing an implementation basis for improving data decoding parallelism. Meanwhile, three-dimensional data reduction is performed based on the first normal decoded data and the second normal decoded data to obtain the normal decoded data, so that the three-dimensional normal data can be restored in the rendering process, improving the accuracy of the normal decoded data, representing the illumination effect in the scene to be rendered more accurately, and improving the ambient occlusion rendering effect.
Referring to fig. 7, fig. 7 is a flowchart of a first encoded data storage method provided in an exemplary embodiment of the present application, where the method may be performed by a terminal, or may be performed by a server, or may be performed simultaneously by the terminal and the server, and the embodiment of the present application is described by taking the method performed by the terminal as an example, as shown in fig. 7, and further includes a storage process of the first encoded data after the step 220 and a reading process of the first encoded data before the step 230, and the scene rendering method provided in the embodiment of the present application includes the following steps:
Step 710, obtaining color data and normal data of pixels to be rendered in the scene to be rendered.
The color data is used for coloring the pixel to be rendered, and the normal data is used for indicating the orientation of the pixel to be rendered in the scene to be rendered.
In some embodiments, step 710 is implemented as step 210 described above, and specifically, please refer to the above embodiment, which is not described herein.
In step 720, the color data and the normal data are encoded and integrated to obtain first encoded data.
Wherein the first encoded data includes color encoded data and normal encoded data.
In some embodiments, step 720 is implemented as step 220 described above, and specifically, please refer to the above embodiment, which is not described herein.
In step 730, the first encoded data is stored in a predetermined rendering buffer.
The rendering buffer is used for storing data conforming to a preset data format.
Under the condition that the first encoded data conforms to the preset data format, the rendering buffer is adopted to store the first encoded data.
Optionally, the rendering buffer is implemented as a frame buffer with four color channels, each channel storing 16 bits of data, for example as an unsigned integer; that is, the rendering buffer stores four-dimensional 16-bit unsigned integer data.
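As one possible realization (an assumption — the embodiments do not name a graphics API), such a four-channel 16-bit unsigned-integer render target could be allocated in OpenGL as follows:

```cpp
#include <GL/glew.h>  // or any other OpenGL function loader

// Allocate a width x height render target with four 16-bit unsigned
// integer channels (64 bits per pixel to be rendered).
GLuint CreateRenderBuffer(int width, int height) {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16UI, width, height, 0,
                 GL_RGBA_INTEGER, GL_UNSIGNED_SHORT, nullptr);
    // Integer textures must be sampled without filtering.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    return tex;
}
```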
In some embodiments, each pixel to be rendered corresponds to one first encoded data.
Alternatively, the storage capacity of the rendering buffer may be preset by the terminal or may be set by the user.
The storage capacity is used to indicate a threshold amount of data that the rendering buffer may store.
In some embodiments, where the first encoded data needs to be stored based on the storage capacity of the rendering buffer, the storage capacity of the rendering buffer is determined prior to step 730, and step 730 is implemented as storing the first encoded data into the rendering buffer in response to the storage capacity meeting the storage requirement.
Optionally, the storage requirement includes the storage capacity being greater than or equal to the data amount of the first encoded data.
In some embodiments, the storage capacity of the rendering buffer is determined based on the number of pixels to be rendered, optionally the capacity of the rendering buffer is dynamically adjusted or statically fixed.
Taking the example of dynamically adjusting the storage capacity of the rendering buffer, before storing the first encoded data in the preset rendering buffer, the storage capacity of the rendering buffer needs to be determined, which specifically includes the following two steps:
The first step, determining a first number of pixels to be rendered in the scene to be rendered;
the second step, adjusting the storage capacity of the rendering buffer based on the first number.
Wherein the first number has a positive correlation with the storage capacity.
Illustratively, when the scene to be rendered includes N pixels to be rendered, where N is a positive integer, the rendering buffer capacity is determined to be N × 64 bits.
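As an illustrative figure (not taken from the embodiments), for a 1920 × 1080 scene to be rendered, N = 2,073,600, so the rendering buffer capacity is 2,073,600 × 64 bits = 16,588,800 bytes, i.e. roughly 16.6 MB, compared with about 24.9 MB when a 32-bit color buffer and a 64-bit normal buffer are kept separately.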
By adjusting the storage capacity of the rendering buffer based on the first number of pixels to be rendered, the problem that fixed-capacity data storage generates redundant storage space during the rendering of different scenes to be rendered is avoided, thereby avoiding waste of data storage resources.
In some embodiments, where the first encoded data is stored in the rendering buffer, the first encoded data is read from the rendering buffer before decoding the first encoded data.
In some embodiments, a first buffer is used to store the color data and a second buffer is used to store the normal data, where the first buffer is a 32-bit color buffer and the second buffer is a 64-bit normal buffer; to save storage space, the embodiments of the present application use a single 64-bit frame buffer to store the first encoded data, i.e., a single rendering buffer stores the color data and the normal data simultaneously.
Referring to fig. 8, fig. 8 is a schematic diagram of data storage provided in an exemplary embodiment of the present application. In sub-graph a, the color data is stored in a 32-bit first buffer 810 and the normal data is stored in a 64-bit second buffer 820; in the rendering stage, the color data and the normal data are read from the first buffer 810 and the second buffer 820, respectively, to render the scene to be rendered. In sub-graph b, the first encoded data is stored in the 64-bit rendering buffer 830; in the rendering stage, only the first encoded data is read from the rendering buffer 830 and decoded to obtain the color data and the normal data for rendering, thereby saving storage space, reducing the number of process calls when reading data, and reducing the rendering computation.
Step 740, the first encoded data is read from the rendering buffer.
In some embodiments, the terminal reads first encoded data corresponding to the plurality of pixels to be rendered in parallel according to the number of pixels to be rendered in the scene to be rendered.
The first encoded data structure is shown in fig. 3 and includes a first data bit, a second data bit, a third data bit, and a fourth data bit having the same number of bits.
In order to improve the processing efficiency of the terminal, when the first encoded data is read from the rendering buffer, the first data bit, the second data bit, the third data bit and the fourth data bit may be read in parallel.
In some embodiments, the first encoded data is read from the rendering buffer in response to the storage of the first encoded data being complete, i.e., when the amount of first encoded data stored in the rendering buffer reaches the number of pixels to be rendered.
Step 750, decoding the first encoded data to obtain the rendered data.
The rendering data comprises color decoding data and normal decoding data, the color decoding data is used for restoring the color data, and the normal decoding data is used for restoring the normal data.
In some embodiments, step 750 is implemented as step 230 described above, and specifically, please refer to the above embodiment, which is not described herein.
Step 760, rendering the pixel to be rendered based on the color decoding data and the normal decoding data in the rendering data, and displaying the target pixel.
In some embodiments, step 760 is implemented as step 240, and, in particular, please refer to the above embodiment, which is not described herein.
In summary, according to the method provided by the embodiments of the present application, the first encoded data is stored in a preset rendering buffer: when the first encoded data conforms to the preset data format, it is stored in a rendering buffer dedicated to data of that format. This realizes the scheme of storing the first encoded data in a single rendering buffer, avoiding the redundant data storage space, and the resulting waste of data storage resources, caused by storing the color encoded data and the normal encoded data in two separate rendering buffers, thereby saving data storage space.
According to the method, the storage capacity of the rendering buffer is adjusted based on the first number of pixels to be rendered, which avoids the problem that a rendering buffer with fixed storage capacity may generate redundant data storage space during the rendering of different scenes to be rendered, wasting data storage resources, and thus further saves data storage space.
Fig. 9 is a block diagram of a scene rendering device according to an exemplary embodiment of the present application, and as shown in fig. 9, the device includes the following parts:
An obtaining module 910, configured to obtain color data and normal data of a pixel to be rendered in a scene to be rendered, where the color data is used for coloring the pixel to be rendered, and the normal data is used for indicating an orientation of the pixel to be rendered in the scene to be rendered;
the processing module 920 is configured to encode and integrate the color data and the normal data to obtain first encoded data, where the first encoded data includes color encoded data and normal encoded data;
the processing module 920 is further configured to decode the first encoded data to obtain rendering data, where the rendering data includes color decoding data and normal decoding data, the color decoding data is used to restore the color data, and the normal decoding data is used to restore the normal data;
the processing module 920 is further configured to render the pixel to be rendered based on the color decoding data and the normal decoding data in the rendering data, so as to obtain a target pixel for display.
Referring to fig. 10, fig. 10 is a block diagram illustrating a structure of a scene rendering device module according to an exemplary embodiment of the present application, as shown in fig. 10, in some embodiments, the first encoded data conforms to a preset data format; the processing module 920 includes:
A first encoding unit 921, configured to encode the color data to obtain color encoded data, where the color encoded data occupies a first number of bits of the first encoded data;
a second encoding unit 922, configured to encode the normal data to obtain normal encoded data, where the normal encoded data occupies the data bits of the second number of bits in the first encoded data;
and a third coding unit 923, configured to integrate the color coded data and the normal coded data according to the preset data format, so as to obtain the first coded data, where the first coded data occupies a third number of data bits, and the third number of data bits is a sum of the first number of data bits and the second number of data bits.
In some embodiments, the first encoding unit 921 is configured to:
and respectively encoding the first color data, the second color data and the third color data in the color data to obtain first color encoded data, second color encoded data and third color encoded data, wherein the first color data, the second color data and the third color data are used for indicating the proportional relation of the first color channel, the second color channel and the third color channel when the pixel to be rendered is colored.
In some embodiments, the second encoding unit 922 is configured to:
carrying out data compression on the normal data to obtain normal compression data, wherein the normal data is three-dimensional data, and the normal compression data is two-dimensional data;
and respectively encoding the first compressed data and the second compressed data in the normal compressed data to obtain first normal encoded data and second normal encoded data, wherein the first compressed data and the second compressed data are obtained by respectively compressing corresponding normal data in different directions based on the pixel to be rendered.
In some embodiments, the normal encoded data includes first normal encoded data and second normal encoded data, and the color encoded data includes first color encoded data, second color encoded data, and third color encoded data; the first coded data comprises a first data bit, a second data bit, a third data bit and a fourth data bit which have the same bit number;
the third encoding unit 923 is configured to:
taking the first color-coded data and the second color-coded data as the first data bits in the first coded data;
taking the third color coding data and auxiliary precision data as the second data bit in the first coding data, wherein the auxiliary precision data comprises data of a first preset bit segment in the first normal coding data and data of a first preset bit segment in the second normal coding data;
Taking the data of a second preset bit segment in the first normal coded data as the third data bit in the first coded data;
and taking the data of a second preset bit segment in the second normal coded data as the fourth data bit in the first coded data.
In some embodiments, the third encoding unit 923 is further configured to:
dividing the first normal line coding data and the second normal line coding data according to a preset bit section to obtain first precision data and second precision data, wherein the first precision data is the data of the first preset bit section in the first normal line coding data, and the second precision data is the data of the first preset bit section in the second normal line coding data;
shifting the first precision data leftwards by the bit number corresponding to the first preset bit segment to obtain third precision data;
and taking the sum of the third precision data and the second precision data as the auxiliary precision data.
In some embodiments, the processing module 920 includes a decoding unit 924, the decoding unit 924 configured to:
decoding a first data bit in the first coded data to obtain first color decoding data and second color decoding data;
Decoding the second data bit in the first coded data to obtain third color decoding data and auxiliary precision decoding data;
decoding a third data bit in the first encoded data to obtain first component data, wherein the first component data is decoded data corresponding to data of a second preset bit segment in the first normal encoded data;
decoding a fourth data bit in the first encoded data to obtain second component data, wherein the second component data is the decoded data corresponding to the data of a second preset bit segment in the second normal encoded data;
the color decoding data is obtained based on the first color decoding data, the second color decoding data and the third color decoding data, and the normal decoding data is obtained based on the auxiliary precision decoding data, the first component data and the second component data.
In some embodiments, the decoding unit 924 is further configured to:
obtaining first normal decoding data based on third precision data and the first component data in the auxiliary precision data, wherein the first normal decoding data is decoding data corresponding to the first normal encoding data;
Obtaining second normal decoding data based on second precision data and the second component data in the auxiliary precision data, wherein the second normal decoding data is decoding data corresponding to the second normal encoding data;
and carrying out three-dimensional data reduction based on the first normal decoding data and the second normal decoding data to obtain the normal decoding data.
In some embodiments, the processing module 920 includes a data storage unit 925 and a data reading unit 926;
the data storage unit 925 is configured to store the first encoded data into a preset rendering buffer, where the rendering buffer is configured to store data that conforms to a preset data format;
the data reading unit 926 is configured to read the first encoded data from the rendering buffer.
In some embodiments, the processing module 920 further includes a size determination unit 927, configured to determine a storage capacity of the rendering buffer, the storage capacity being used to indicate a data amount threshold that the rendering buffer can store;
the data storage unit 925 is configured to store the first encoded data into the rendering buffer in response to the storage capacity meeting a storage requirement.
In some embodiments, the size determination unit 927 is configured to:
determining a first number of the pixels to be rendered in the scene to be rendered;
and adjusting the storage capacity of the rendering buffer based on the first quantity, wherein the first quantity and the storage capacity have positive correlation.
In summary, the device provided by the embodiments of the present application encodes and integrates the color data and the normal data to obtain the first encoded data, eliminating the redundant data space of the two rendering buffers otherwise required for the color data and the normal data. Through encoding and integration, the first encoded data indicates the color data and the normal data simultaneously, so that only a single preset rendering buffer is needed to store it. In scene rendering involving ambient occlusion, the ambient occlusion rendering effect can be guaranteed based on the normal information without the normal information occupying an additional rendering buffer, saving storage space; at the same time, rendering can be performed based on the color information and the normal information contained in the first encoded data, realizing ambient occlusion in one rendering computation, reducing resource calls in the rendering process, and improving rendering efficiency and computing performance.
It should be noted that: the scene rendering device provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above.
Fig. 11 shows a block diagram of a terminal 1100 according to an exemplary embodiment of the present application. The terminal 1100 may be a smart phone, a tablet computer, an MP3 player, an MP4 player, a notebook computer or a desktop computer. The terminal 1100 may also be referred to by other names such as user device, portable terminal, laptop terminal, desktop terminal, and the like.
Generally, the terminal 1100 includes: a processor 1101 and a memory 1102.
The processor 1101 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1101 may be implemented in at least one hardware form of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA) and programmable logic array (Programmable Logic Array, PLA). The processor 1101 may also include a main processor and a coprocessor: the main processor is a processor for processing data in an awake state, also referred to as a central processing unit (Central Processing Unit, CPU); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1101 may be integrated with a graphics processor (Graphics Processing Unit, GPU) responsible for rendering and drawing the content to be displayed by the display screen. In some embodiments, the processor 1101 may also include an artificial intelligence (Artificial Intelligence, AI) processor for processing computing operations related to machine learning.
Memory 1102 may include one or more computer-readable storage media, which may be non-transitory. Memory 1102 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1102 is used to store at least one instruction for execution by processor 1101 to implement the scene rendering method provided by the method embodiments herein.
In some embodiments, terminal 1100 also includes other components, and those skilled in the art will appreciate that the structure shown in fig. 11 is not limiting of terminal 1100, and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
Embodiments of the present application also provide a computer device that may be implemented as a terminal or server as shown in fig. 1. The computer device includes a processor and a memory, where at least one instruction, at least one program, a code set, or an instruction set is stored, where at least one instruction, at least one program, a code set, or an instruction set is loaded and executed by the processor to implement the scene rendering method provided by the above method embodiments.
Embodiments of the present application also provide a computer readable storage medium having at least one instruction, at least one program, a code set, or an instruction set stored thereon, where the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the scene rendering method provided by the above method embodiments.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the scene rendering method provided by the above-mentioned method embodiments.
Alternatively, the computer-readable storage medium may include: read Only Memory (ROM), random access Memory (Random Access Memory, RAM), solid state drive (Solid State Drive, SSD), optical disk, or the like. The random access memory may include resistive random access memory (Resistance Random Access Memory, ReRAM) and dynamic random access memory (Dynamic Random Access Memory, DRAM), among others. The foregoing embodiment numbers of the present application are merely for description and do not represent the advantages or disadvantages of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description is merely of preferred embodiments of the present application and is not intended to limit the present application; any modifications, equivalent replacements, improvements, etc. made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (15)

1. A method of scene rendering, the method comprising:
acquiring color data and normal data of pixels to be rendered in a scene to be rendered, wherein the color data is used for coloring the pixels to be rendered, and the normal data is used for indicating the orientation of the pixels to be rendered in the scene to be rendered;
coding and integrating the color data and the normal data to obtain first coded data, wherein the first coded data comprises color coded data and normal coded data;
decoding the first encoded data to obtain rendering data, wherein the rendering data comprises color decoding data and normal decoding data, the color decoding data is used for restoring the color data, and the normal decoding data is used for restoring the normal data;
And rendering the pixel to be rendered based on the color decoding data and the normal decoding data in the rendering data to obtain a target pixel for display.
2. The method of claim 1, wherein the first encoded data conforms to a predetermined data format;
the step of encoding and integrating the color data and the normal data to obtain first encoded data conforming to the preset data format includes:
coding the color data to obtain color coding data, wherein the color coding data occupy data bits of a first bit number in the first coding data;
coding the normal data to obtain normal coding data, wherein the normal coding data occupies data bits of a second bit number in the first coding data;
and integrating the color coded data and the normal coded data according to the preset data format to obtain first coded data, wherein the first coded data occupies data bits of a third bit number, and the third bit number is the sum of the first bit number and the second bit number.
3. The method of claim 2, wherein the encoding the normal data to obtain normal encoded data comprises:
Carrying out data compression on the normal data to obtain normal compression data, wherein the normal data is three-dimensional data, and the normal compression data is two-dimensional data;
and respectively encoding the first compressed data and the second compressed data in the normal compressed data to obtain first normal encoded data and second normal encoded data, wherein the first compressed data and the second compressed data are obtained by respectively compressing corresponding normal data in different directions based on the pixel to be rendered.
4. The method of claim 2, wherein the encoding the color data to obtain color-coded data comprises:
and respectively encoding the first color data, the second color data and the third color data in the color data to obtain first color encoded data, second color encoded data and third color encoded data, wherein the first color data, the second color data and the third color data are used for indicating the proportional relation of the first color channel, the second color channel and the third color channel when the pixel to be rendered is colored.
5. The method of claim 2, wherein the normal encoded data comprises first normal encoded data and second normal encoded data, and wherein the color encoded data comprises first color encoded data, second color encoded data, and third color encoded data; the first coded data comprises a first data bit, a second data bit, a third data bit and a fourth data bit which have the same bit number;
The integrating the color coded data and the normal coded data according to the preset data format to obtain the first coded data comprises the following steps:
taking the first color-coded data and the second color-coded data as the first data bits in the first coded data;
taking the third color coding data and auxiliary precision data as the second data bit in the first coding data, wherein the auxiliary precision data comprises data of a first preset bit segment in the first normal coding data and data of a first preset bit segment in the second normal coding data;
taking the data of a second preset bit segment in the first normal coded data as the third data bit in the first coded data;
and taking the data of a second preset bit segment in the second normal coded data as the fourth data bit in the first coded data.
6. The method of claim 5, wherein before taking the third color encoded data and the auxiliary precision data as the second data bit in the first encoded data, the method further comprises:
dividing the first normal line coding data and the second normal line coding data according to a preset bit section to obtain first precision data and second precision data, wherein the first precision data is the data of the first preset bit section in the first normal line coding data, and the second precision data is the data of the first preset bit section in the second normal line coding data;
Shifting the first precision data leftwards by the bit number corresponding to the first preset bit segment to obtain third precision data;
and taking the sum of the third precision data and the second precision data as the auxiliary precision data.
7. The method according to any one of claims 1 to 6, wherein decoding the first encoded data to obtain rendered data comprises:
decoding a first data bit in the first coded data to obtain first color decoding data and second color decoding data;
decoding the second data bit in the first coded data to obtain third color decoding data and auxiliary precision decoding data;
decoding a third data bit in the first encoded data to obtain first component data, wherein the first component data is decoded data corresponding to data of a second preset bit segment in the first normal encoded data;
decoding a fourth data bit in the first encoded data to obtain second component data, wherein the second component data is the decoded data corresponding to the data of a second preset bit segment in second normal encoded data;
the color decoding data is obtained based on the first color decoding data, the second color decoding data and the third color decoding data, and the normal decoding data is obtained based on the auxiliary precision decoding data, the first component data and the second component data.
8. The method of claim 7, wherein the deriving the normal decoded data based on the auxiliary precision decoded data, the first component data, and the second component data comprises:
obtaining first normal decoding data based on third precision data and the first component data in the auxiliary precision data, wherein the first normal decoding data is decoding data corresponding to the first normal encoding data;
obtaining second normal decoding data based on second precision data and the second component data in the auxiliary precision data, wherein the second normal decoding data is decoding data corresponding to the second normal encoding data;
and carrying out three-dimensional data reduction based on the first normal decoding data and the second normal decoding data to obtain the normal decoding data.
9. The method according to any one of claims 1 to 6, wherein after encoding and integrating the color data and the normal data to obtain first encoded data, further comprising:
storing the first coded data into a preset rendering buffer zone, wherein the rendering buffer zone is used for storing data conforming to a preset data format;
Before the first encoded data is decoded to obtain the rendering data, the method further comprises:
the first encoded data is read from the rendering buffer.
10. The method of claim 9, wherein prior to storing the first encoded data in a predetermined rendering buffer, further comprising:
determining a storage capacity of the rendering buffer, the storage capacity being used to indicate a data amount threshold that the rendering buffer can store;
the storing the first encoded data in a preset rendering buffer zone includes:
and storing the first coded data into the rendering buffer in response to the storage capacity meeting a storage requirement.
11. The method of claim 10, wherein the determining the storage capacity of the rendering buffer comprises:
determining a first number of the pixels to be rendered in the scene to be rendered;
and adjusting the storage capacity of the rendering buffer based on the first quantity, wherein the first quantity and the storage capacity have positive correlation.
12. A scene rendering device, the device comprising:
the device comprises an acquisition module, a rendering module and a rendering module, wherein the acquisition module is used for acquiring color data and normal data of pixels to be rendered in a scene to be rendered, the color data is used for coloring the pixels to be rendered, and the normal data is used for indicating the orientation of the pixels to be rendered in the scene to be rendered;
The processing module is used for encoding and integrating the color data and the normal data to obtain first encoded data, wherein the first encoded data comprises color encoded data and normal encoded data;
the processing module is further configured to decode the first encoded data to obtain rendering data, where the rendering data includes color decoding data and normal decoding data, the color decoding data is used to restore the color data, and the normal decoding data is used to restore the normal data;
the processing module is further configured to render the pixel to be rendered based on the color decoding data and the normal decoding data in the rendering data, so as to obtain a target pixel for display.
13. A computer device, characterized in that it comprises a processor and a memory, in which at least one section of a computer program is stored, which is loaded and executed by the processor to implement the scene rendering method according to any of claims 1 to 11.
14. A computer readable storage medium, characterized in that at least one section of a computer program is stored in the storage medium, which is loaded and executed by a processor to implement the scene rendering method according to any of claims 1 to 11.
15. A computer program product comprising a computer program which, when executed by a processor, implements a scene rendering method as claimed in any one of claims 1 to 11.
CN202311468789.9A 2023-11-07 2023-11-07 Scene rendering method, device, equipment, storage medium and program product Pending CN117456079A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311468789.9A CN117456079A (en) 2023-11-07 2023-11-07 Scene rendering method, device, equipment, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311468789.9A CN117456079A (en) 2023-11-07 2023-11-07 Scene rendering method, device, equipment, storage medium and program product

Publications (1)

Publication Number Publication Date
CN117456079A true CN117456079A (en) 2024-01-26

Family

ID=89583308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311468789.9A Pending CN117456079A (en) 2023-11-07 2023-11-07 Scene rendering method, device, equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN117456079A (en)

Similar Documents

Publication Publication Date Title
CN113457160B (en) Data processing method, device, electronic equipment and computer readable storage medium
US11164342B2 (en) Machine learning applied to textures compression or upscaling
US20160005213A1 (en) Method and device for enriching the content of a depth map
CN105556574A (en) Rendering apparatus, rendering method thereof, program and recording medium
US11501467B2 (en) Streaming a light field compressed utilizing lossless or lossy compression
US20150371433A1 (en) Method and device for establishing the frontier between objects of a scene in a depth map
WO2008123823A1 (en) Vector-based image processing
CN114419234A (en) Three-dimensional scene rendering method and device, electronic equipment and storage medium
CN114299220A (en) Data generation method, device, equipment, medium and program product of illumination map
US9336561B2 (en) Color buffer caching
CN112843700B (en) Terrain image generation method and device, computer equipment and storage medium
CN112231020B (en) Model switching method and device, electronic equipment and storage medium
US11263786B2 (en) Decoding data arrays
CN116385622B (en) Cloud image processing method, cloud image processing device, computer and readable storage medium
KR20170005035A (en) Depth offset compression
CN117456079A (en) Scene rendering method, device, equipment, storage medium and program product
KR102531605B1 (en) Hybrid block based compression
US7961195B1 (en) Two component texture map compression
CN112915540B (en) Data processing method, device and equipment for virtual scene and storage medium
CN114882149A (en) Animation rendering method and device, electronic equipment and storage medium
US8918440B2 (en) Data decompression with extra precision
US11948338B1 (en) 3D volumetric content encoding using 2D videos and simplified 3D meshes
US20230306683A1 (en) Mesh patch sub-division
CN115712580B (en) Memory address allocation method, memory address allocation device, computer equipment and storage medium
CN116993889A (en) Texture rendering method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication