CN117496023A - Gaze point rendering method, device, medium, and program - Google Patents

Gaze point rendering method, device, medium, and program

Info

Publication number
CN117496023A
CN117496023A
Authority
CN
China
Prior art keywords
layer
rendering
memory
pixel
gpu
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311295469.8A
Other languages
Chinese (zh)
Inventor
徐坤鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202311295469.8A
Publication of CN117496023A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures


Abstract

According to the method, a plurality of layers are generated for the image to be rendered according to the gaze point rendering level, a memory space is allocated for each layer, and the pixels corresponding to each pixel density form one layer. The GPU renders each layer independently and writes the rendering results into the video memory, then writes the rendering result of each layer from the video memory into the corresponding memory space according to the pixel density of the layer. The rendering results of the layers are read from the memory and synthesized to obtain the rendering result of the image to be rendered. Because the pixel density of each layer is known and uniform within the layer, there is no need to judge the pixel density of each pixel or to determine its write address in the memory while writing pixels from the video memory into the memory, so the cost of judging pixel density and the cost of calculating write addresses during the write process are reduced, and the load of the GPU is reduced.

Description

Gaze point rendering method, device, medium, and program
Technical Field
Embodiments of the present application relate to the field of artificial intelligence, and in particular to a gaze point rendering method, device, medium, and program.
Background
Gaze point rendering (Foveated Rendering, FR) reduces computational complexity by lowering the resolution of the image away from the gaze point. Its main idea is to imitate the visual process of the human eye, which does not see the whole field of view clearly: the point of gaze is sharp, and the image becomes more blurred toward the periphery. Based on this, FR renders the gaze point region at high resolution and the region outside the gaze point region at low resolution.
FR is gradually being applied to Extended Reality (XR) devices. It can provide lossless resolution for the center of the field of view (i.e., the gaze point region) while reducing the resolution of the peripheral field of view (i.e., the region outside the gaze point region), greatly reducing the computational complexity of the XR device and optimizing the rendering effect of the XR scene. Fixed gaze point rendering (Fixed Foveated Rendering, FFR) fixes the focus of the field of view at the center of the viewport, with sharpness gradually decreasing from the center to the periphery. In the existing gaze point rendering technology, the (Texture) canvas with FFR enabled is first labeled in blocks; a graphics processing unit (Graphics Processing Unit, GPU) then renders according to the pixel density (density), and after each pixel is rendered, the GPU writes the rendering result of the pixel from the graphics card memory (GMEM) into the system memory (Memory) of the device.
In the process of writing the rendering result of a pixel from the video card memory into the system memory, it is necessary to judge the density block in which each pixel is located, reuse the same RGB values as the adjacent pixels where required, and calculate the position at which the pixel is written into the memory. This write process consumes considerable resources, so the resource consumption of gaze point rendering is excessive.
Disclosure of Invention
Embodiments of the present application provide a gaze point rendering method, device, medium, and program, which can reduce the load of the GPU during rendering.
In a first aspect, an embodiment of the present application provides a gaze point rendering method applied to a rendering device, where the rendering device includes a central processing unit (CPU), a graphics processing unit (GPU), a video memory, and a memory, and the method includes:
the CPU generates a plurality of layers for an image to be rendered according to the gaze point rendering level, and allocates a memory space in the memory for each layer, wherein the gaze point rendering level corresponds to a plurality of pixel densities, and the pixels corresponding to each pixel density form one layer;
the CPU sends rendering data of each layer to the GPU, wherein the rendering data of each layer includes the primitive information, pixel density, and memory space information of the layer;
the GPU renders each layer according to the rendering data of each layer, and stores the rendering results in the video memory;
the GPU writes the rendering result of each layer from the video memory into the corresponding memory space according to the pixel density of each layer;
and the CPU reads the rendering results of the layers from the memory, and synthesizes the rendering results of the layers to obtain the rendering result of the image to be rendered.
In some embodiments, the GPU renders each layer according to the rendering data of each layer, and stores the rendering result in the video memory, including:
the GPU renders the pixels of each layer according to the pixel density and the primitive information of each layer, and the rendering result is stored in the video memory;
the GPU writes the rendering result of each layer from the video memory into the corresponding memory space according to the pixel density of each layer, and the method comprises the following steps:
the GPU determines the address mapping relation of each layer according to the pixel density of each layer, wherein the address mapping relation is the mapping relation between the address in the video memory and the address in the memory;
and the GPU writes the rendering result of each layer into the corresponding memory space from the video memory according to the address mapping relation and the memory space information of each layer.
In some embodiments, if the current layer is a layer corresponding to a full pixel density, the GPU renders pixels of each layer according to the pixel density and the primitive information of each layer, and stores a rendering result in the video memory, including:
and the GPU sequentially renders each pixel in the current layer according to the full pixel density and the primitive information of the current layer, and stores the rendering result of each pixel in the video memory.
In some embodiments, the address mapping relationship of the layer corresponding to the full pixel density is that the addresses in the video memory and the addresses in the memory are in one-to-one correspondence;
the GPU writes the rendering result of each layer into the corresponding memory space from the video memory according to the address mapping relation and the memory space information of each layer, and the method comprises the following steps:
and the GPU reads the rendering result of the pixel of the current layer from the video memory, and writes the read rendering result of the pixel into one address of a memory space corresponding to the memory space information according to the address mapping relation and the memory space information of the current layer.
In some embodiments, if the current layer is a layer corresponding to a non-full pixel density, the GPU renders pixels of each layer according to the pixel density and the primitive information of each layer, and stores a rendering result in the video memory, including:
the GPU renders one target pixel in each rendering pixel group in the current layer according to the non-full pixel density and the primitive information of the current layer, and stores the rendering result of the target pixel in the video memory, wherein the current layer includes a plurality of rendering pixel groups, one rendering pixel group includes a plurality of adjacent pixels, the number of pixels in a rendering pixel group is related to the non-full pixel density, and the rendering results of the pixels in a rendering pixel group are the same.
In some embodiments, the address mapping relationship of the layer corresponding to the non-full pixel density is that one address in the video memory corresponds to N addresses in the memory, N is an integer greater than or equal to 2, and the value of N is equal to the number of pixels in the rendered pixel group;
the GPU writes the rendering result of each layer into the corresponding memory space from the video memory according to the address mapping relation and the memory space information of each layer, and the method comprises the following steps:
the GPU reads the rendering result of the target pixel of the current layer from the video memory;
and the GPU repeatedly writes the read rendering result of the target pixel into N continuous addresses of a memory space corresponding to the memory space information according to the address mapping relation and the memory space information of the current layer.
In some embodiments, the GPU renders and writes based on tiles.
In some embodiments, after the CPU generates a plurality of layers for the image to be rendered according to the gaze point rendering level, the method further includes:
the CPU divides each image layer into a plurality of image blocks according to the size of the memory space of the video memory, and each image block comprises a plurality of continuous pixels;
the rendering data also includes information of tiles of the layer.
In some embodiments, the GPU renders each layer according to the rendering data of each layer, and stores the rendering result in the video memory, including:
the GPU sequentially renders each image block in each image layer according to the rendering data of each image layer, and stores rendering results of each image block in the video memory, wherein the pixel density of each image block in each image layer is equal to the pixel density of the image layer to which each image block belongs;
the GPU writes the rendering result of each layer from the video memory into the corresponding memory space according to the pixel density of each layer, and the method comprises the following steps:
the GPU reads the rendering result of one image block from the video memory each time;
and the GPU writes the rendering result of the read image block into the memory space of the layer to which the image block belongs through a single write request, according to the pixel density of the read image block.
In some embodiments, the tiles of the layer corresponding to full pixel density are smaller in size than the tiles of the layer corresponding to non-full pixel density.
In some embodiments, the tile of the layer corresponding to the non-full pixel density is N times the size of the tile of the layer corresponding to the full pixel density, N being an integer greater than 1, N being equal to the ratio of the full pixel density to the non-full pixel density.
In a second aspect, an embodiment of the present application provides a rendering device, where the rendering device includes a central processing unit (CPU), a graphics processing unit (GPU), a video memory, and a memory;
the CPU is used for generating a plurality of layers for the image to be rendered according to the gaze point rendering level, and allocating a memory space in the memory for each layer, wherein the gaze point rendering level corresponds to a plurality of pixel densities, and the pixels corresponding to each pixel density form one layer;
the CPU is further configured to send rendering data of each layer to the GPU, where the rendering data of each layer includes primitive information, pixel density and memory space information of the layer;
the GPU is used for rendering each layer according to the rendering data of each layer, and the rendering result is stored in the video memory;
The GPU is further used for writing rendering results of each layer from the video memory into corresponding memory spaces according to pixel density of each layer;
the CPU is further configured to read rendering results of the multiple layers from the memory, and synthesize the rendering results of the multiple layers to obtain a rendering result of the image to be rendered.
In a third aspect, embodiments of the present application provide a computer-readable storage medium for storing a computer program that causes a computer to perform a method as set forth in any one of the preceding claims.
In a fourth aspect, embodiments of the present application provide a computer program product comprising a computer program which, when executed by a processor, implements a method as claimed in any one of the preceding claims.
According to the method, a plurality of layers are generated for the image to be rendered according to the gaze point rendering level, a memory space is allocated for each layer, and the pixels corresponding to each pixel density form one layer. The GPU renders each layer independently and writes the rendering results into the video memory, then writes the rendering result of each layer from the video memory into the corresponding memory space according to the pixel density of the layer. The rendering results of the layers are read from the memory and synthesized to obtain the rendering result of the image to be rendered. Because the pixel density of each layer is known and uniform within the layer, there is no need to judge the pixel density of each pixel or to determine its write address in the memory while writing pixels from the video memory into the memory, so the cost of judging pixel density and the cost of calculating write addresses during the write process are reduced, and the load of the GPU is reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present invention, and that a person skilled in the art may obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic structural diagram of a rendering device applicable to an embodiment of the present application;
fig. 2 is a flowchart of a gaze point rendering method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of layers of an image to be rendered;
fig. 4 is a flowchart of a gaze point rendering method provided in the second embodiment of the present application;
FIG. 5 is a block diagram of a layer corresponding to a full pixel density and a 1/4 pixel density;
fig. 6 is a schematic structural diagram of a rendering device according to a third embodiment of the present application.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the invention described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In order to facilitate understanding of embodiments of the present application, before describing various embodiments of the present application, some concepts related to all embodiments of the present application are first appropriately explained, specifically as follows:
the gaze point rendering method provided by the embodiment of the application can be applied to XR equipment and other terminal equipment, such as mobile phones, computers, tablet computers and the like, and the embodiment of the application is not limited to the method.
XR refers to combining the real and the virtual through a computer to create a virtual environment capable of human-machine interaction. XR is also an umbrella term for technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). By integrating the visual interaction technologies of the three, it brings the experiencer a sense of immersion with seamless transition between the virtual world and the real world.
VR: the technology of creating and experiencing a virtual world. It computes and generates a virtual environment, which is a multi-source, fused, interactive three-dimensional dynamic view with simulation of entity behaviors (the virtual reality mentioned herein at least comprises visual perception, and may also comprise auditory perception, tactile perception, motion perception, gustatory perception, olfactory perception, and the like). It immerses the user in the simulated virtual reality environment and is applied in various virtual environments such as maps, games, videos, education, medical treatment, simulation, collaborative training, sales, assistance in manufacturing, and maintenance and repair.
VR devices are terminals for realizing virtual reality effects, and can generally be provided in the form of glasses, a head-mounted display (Head Mount Display, HMD), or contact lenses for realizing visual perception and other forms of perception; the forms in which virtual reality devices are realized are not limited to these, and the devices can be further miniaturized or enlarged as needed.
AR: an AR set refers to a simulated set in which at least one virtual object is superimposed over a physical set or a representation thereof. For example, an electronic system may have an opaque display and at least one imaging sensor for capturing images or videos of the physical set, which are representations of the physical set. The system combines the image or video with the virtual object and displays the combination on the opaque display. An individual uses the system to view the physical set indirectly via its image or video and to observe the virtual object superimposed over the physical set. When the system captures images of a physical set using one or more image sensors and presents the AR set on an opaque display using those images, the displayed images are referred to as video pass-through. Alternatively, the electronic system for displaying the AR set may have a transparent or translucent display through which the individual can directly view the physical set; the system may display the virtual object on the transparent or translucent display so that the individual sees the virtual object superimposed over the physical set. As another example, the system may include a projection system that projects the virtual object into the physical set, for example onto a physical surface or as a hologram, so that the individual sees the virtual object superimposed over the physical set. In particular, AR is a technique for calculating the camera attitude parameters of a camera in the real world (or three-dimensional world, the actual world) in real time while the camera acquires images, and adding virtual elements to the images acquired by the camera according to those camera attitude parameters. Virtual elements include, but are not limited to: images, videos, and three-dimensional models. The goal of AR technology is to overlay the virtual world on the real world on the screen for interaction.
MR: by presenting virtual scene information in a real scene, an interactive feedback information loop is built among the real world, the virtual world, and the user, to enhance the realism of the user experience. For example, computer-created sensory input (e.g., virtual objects) is integrated with sensory input from a physical set or a representation thereof in a simulated set; in some MR sets, the computer-created sensory input may adapt to changes in the sensory input from the physical set. In addition, some electronic systems for presenting MR sets may monitor orientation and/or position relative to the physical set to enable virtual objects to interact with real objects (i.e., physical elements from the physical set or representations thereof). For example, the system may monitor movement so that a virtual plant appears stationary relative to a physical building.
Optionally, the virtual reality devices (i.e., XR devices) described in embodiments of the present application may include, but are not limited to, the following types:
1) A mobile virtual reality device, which supports mounting a mobile terminal (such as a smartphone) in various manners (such as a head-mounted display provided with a special card slot). Connected to the mobile terminal in a wired or wireless manner, the mobile terminal performs the calculations related to the virtual reality function and outputs data to the mobile virtual reality device, for example for watching a virtual reality video through an APP of the mobile terminal.
2) An all-in-one virtual reality device, which has a processor for performing the calculations related to the virtual reality function, and therefore has independent virtual reality input and output functions; it does not need to be connected to a PC or a mobile terminal and offers a high degree of freedom in use.
3) A computer-side virtual reality (PCVR) device, which uses the PC side to perform the calculations related to the virtual reality function and the data output; the externally connected PCVR device uses the data output by the PC side to realize the virtual reality effect.
Fig. 1 is a schematic structural diagram of a rendering device applicable to embodiments of the present application. The rendering device is capable of rendering 2D or 3D images. As shown in fig. 1, the rendering device includes: a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a video memory, also referred to as GPU memory (GMEM), and a memory, which refers to the system memory of the device. The CPU, the GPU, the video memory, and the memory may be connected to and communicate with one another via a bus.
In one possible configuration, the GPU and the video memory may be integrated into a graphics card (also known as a display card), an important component of a computer that performs the task of outputting display graphics. The GPU is the main processing unit of the graphics card, and the video memory is used for storing the data to be processed by the GPU and the processed data. The graphics card may be an integrated graphics card, a discrete graphics card, or any other existing graphics card, which is not limited in the embodiments of the present application.
The CPU is the main processing and control unit of the rendering device, and the memory is the memory space that the CPU can directly address. It will be appreciated that the architecture of the rendering device shown in fig. 1 is only schematic: both the CPU and the GPU may include one or more processors, and the rendering device may also include other components.
In the image rendering process, the CPU and the GPU cooperate to complete image rendering. 3D image rendering generally includes four stages: an application processing (application) stage, a geometry processing (geometry processing) stage, a rasterization (rasterisation) stage, and a pixel processing (pixel processing) stage. The application processing stage is completed by the CPU; the geometry processing, rasterization, and pixel processing stages are completed by the GPU.
The main tasks of the application processing stage are preparing scene data, setting the rendering state, and so on. Through the application processing stage, the CPU forms primitive information and the rendering state and sends them to the GPU. A primitive refers to a basic rendered shape, and may be a vertex, a line segment, a plane (e.g., a triangle or rectangle), and the like.
Taking vertices as an example, attribute information of each vertex of the shape is generated while preparing the scene data. The attribute information of a vertex, also called vertex data, comprises position information, namely the three-dimensional coordinates (x, y, z) of the vertex, and application-customized attribute information, which includes, but is not limited to, color, texture, normal vector, direction, and the like.
Setting the rendering state configures the rendering parameters of the object meshes in the scene; the rendering parameters include the vertex shaders and fragment shaders used for rendering, light sources, materials, and the like, and may also include the rendering order and the like.
In the geometry processing stage, the GPU generates geometry from the primitive information provided by the CPU, including, but not limited to, the following operations: vertex coordinate transformation, vertex shading, clipping, projection, screen mapping, and the like.
The main task of the GPU in the rasterization stage is to determine which pixels of each primitive should be drawn on the screen, i.e., to divide pixels into those belonging to the geometry and those not belonging to it according to the shapes produced by the geometry stage. The pixels belonging to the geometry are the ones eventually displayed on the screen, also called source pixels. The data of each source pixel obtained after rasterization includes screen coordinates, vertex information, texture information, etc., and this information is used to calculate the final color of each pixel.
The main task of the GPU in the pixel processing stage is to calculate the rendering result of each pixel from the source pixel data generated in the rasterization stage, where the rendering result of a pixel includes the RGB values of the pixel and, optionally, a depth value and an alpha value.
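Purely for illustration, the following Python sketch (not part of the patent; all names are hypothetical) models the per-pixel rendering result just described as a small data structure:

```python
# Illustrative data structure (not defined by the patent) for a
# per-pixel rendering result: RGB values plus optional depth and alpha.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PixelResult:
    r: int
    g: int
    b: int
    depth: Optional[float] = None   # optional depth value
    alpha: Optional[float] = None   # optional alpha value

print(PixelResult(128, 64, 255, depth=0.5, alpha=1.0))
```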
In the prior art, after the GPU renders the rendering result of the pixel, the rendering result of the pixel is stored in a video memory, then the rendering result of the pixel is written into a memory, and finally the rendering result of the pixel is read from the memory during on-screen display.
In the prior art, when the GPU writes the rendering result of a pixel from the video memory into the memory, it must determine the pixel density (density) to which each pixel belongs and calculate the write address of the pixel in the memory. The time and resources the GPU spends determining each pixel's density and calculating each pixel's write address in the memory make the cost of enabling FFR on the rendering device too large.
For a rendering device, the most fundamental purpose of enabling FFR is to reduce GPU load: FFR saves GPU resources mainly by reducing the rendering load. If the cost of enabling FFR is too large, enabling it cannot reduce the GPU load, or the reduction achieved falls short of the expected effect and cannot meet application requirements.
For example, for two identical textures (canvases), the texture with FFR enabled can place more than 30% higher load on the GPU than the texture without FFR.
Because the FFR cost is large, in some scenarios an application must enable a high FFR level (Level) to cover the FFR cost and obtain a GPU benefit; at a low FFR level, no GPU benefit can be obtained, which limits the application of FFR technology.
The different FFR levels have different effects on the resolution of the image, and illustratively, the higher the FFR level, the smaller the area of the full resolution area in the image, and the lower the FFR level, the larger the area of the full resolution area in the image.
In order to solve the problems in the prior art, embodiments of the present application provide a gaze point rendering method capable of reducing FFR cost, and hereinafter, embodiments and application scenarios thereof are described in detail with reference to the accompanying drawings. The following embodiments may be combined with each other, and the same or similar concepts and processes may not be described in detail in some embodiments.
The gaze point rendering method provided in the embodiments of the present application is described below with reference to the schematic structural diagram of the rendering device shown in fig. 1. Fig. 2 is a flowchart of the gaze point rendering method provided in the first embodiment of the present application. The method is applied to a rendering device, which may be an electronic device such as an XR device, a mobile phone, a computer, or a server. As shown in fig. 2, the method provided in this embodiment includes the following steps.
S101, the CPU generates a plurality of layers for an image to be rendered according to a gaze point rendering level, and allocates a memory space in the memory for each layer, wherein the gaze point rendering level corresponds to a plurality of pixel densities, and the pixels corresponding to each pixel density form one layer.
The gaze point rendering level is also called the FFR level. A plurality of FFR levels may be defined for the FFR function; each FFR level corresponds to a plurality of pixel densities, and different FFR levels correspond to different pixel densities. The CPU generates a plurality of layers for the image to be rendered according to the plurality of pixel densities corresponding to the FFR level, generating one layer for each pixel density.
The pixel densities corresponding to different FFR levels may be represented by a gaze point density map (Foveated density map, FDM); images corresponding to different FDMs have different resolutions. A gaze point density map may be understood as an image in which the pixel densities (densities) of different rendering areas are recorded; the rendering areas of the image include a gaze point area and a non-gaze-point area, where the non-gaze-point area may be divided into a plurality of areas according to pixel density.
Pixel density, also referred to as sampling density or pixel ratio, describes the density or number of pixels rendered within a rendering area: the higher the pixel density, the higher the resolution of the image and the sharper the image; conversely, the lower the pixel density, the lower the resolution of the image and the more blurred the image.
In one exemplary manner, the pixel density can be expressed as a value of 1, 1/2, 1/4, 1/8, 1/16, 1/32, etc., i.e., in 1/2^n format, in which 1 denotes the highest pixel density in the gaze point density map and is generally regarded as the full density or full pixel density. The central region of the gaze point density map is the center of the field of view, also referred to as the gaze point region; the full pixel density generally corresponds to the lossless resolution or full resolution of the image, so that the sharpness of the center of the field of view can be ensured. The pixel density of the non-gaze-point area outside the central area of the gaze point density map is smaller than the full pixel density; correspondingly, the image resolution of the non-gaze-point area is smaller than the resolution of the gaze point area, and the pixel density of the non-gaze-point area is also called the non-full pixel density.
When the value 1 of the pixel density corresponds to the full resolution, the resolution corresponding to the pixel density 1/2 is 1/2 full resolution, the resolution corresponding to the pixel density 1/4 is 1/4 full resolution, the resolution corresponding to the pixel density 1/8 is 1/8 full resolution, and so on.
In another exemplary manner, the pixel density value may also be expressed as 255, 127, 63, 31, 15, 7, 3, 1, etc., i.e., in 2^n - 1 format, wherein 255 is the full pixel density of the gaze point density map and the remaining values are non-full pixel densities. Correspondingly, when the pixel density value 255 corresponds to the full resolution, the resolution corresponding to pixel density 127 is 1/2 full resolution, the resolution corresponding to pixel density 63 is 1/4 full resolution, and so on.
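For illustration only, the short Python sketch below checks that the two encodings above line up; the relation (v + 1) / 256 is an inference from the listed values, not a formula stated in the patent:

```python
# Illustrative check that the two density encodings describe the same
# resolutions: integer value v in 2^n - 1 format corresponds to the
# fraction (v + 1) / 256 of full resolution (inferred, not from the patent).
fractional = [1, 1/2, 1/4, 1/8, 1/16, 1/32]   # 1/2^n format
integer = [255, 127, 63, 31, 15, 7]           # 2^n - 1 format

for f, v in zip(fractional, integer):
    assert f == (v + 1) / 256
    print(f"density value {v:3d} -> {f} of full resolution")
```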
The gaze point density maps corresponding to different FFR levels are different. Illustratively, the FFR levels are classified into high, medium, and low; as the FFR level goes from high to low, the area occupied by the full pixel density in the corresponding gaze point density map gradually increases.
Illustratively, if the gaze point density map corresponding to the FFR level includes the two pixel densities 1 and 1/4, the CPU generates two layers for the image to be rendered: a layer corresponding to the full pixel density and a layer corresponding to the 1/4 pixel density. If the gaze point density map corresponding to the FFR level includes 4 pixel densities, 1, 1/2, 1/4, and 1/8, the CPU generates four layers for the image to be rendered: the layer corresponding to the full pixel density, the layer corresponding to the 1/2 pixel density, the layer corresponding to the 1/4 pixel density, and the layer corresponding to the 1/8 pixel density.
Fig. 3 is a schematic view of the layers of an image to be rendered. As shown in fig. 3, the gaze point rendering level includes the two pixel densities 1 and 1/4, and the CPU generates the two layers shown in the figure for the image to be rendered according to the gaze point rendering level. For the layer corresponding to the full pixel density, the peripheral area of the full-pixel-density area (i.e., the 1/4 pixel density area) may be considered to have no pixels; accordingly, the peripheral area is not rendered during rendering. Likewise, for the layer corresponding to the 1/4 pixel density, the inner region of the 1/4-pixel-density region (i.e., the full pixel density region) may be considered hollow, i.e., pixel-free; accordingly, it is not rendered during rendering.
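The layer-generation step can be sketched as follows; this is a minimal illustration, assuming hypothetical FFR levels with the density lists from the examples above, and none of the names come from the patent:

```python
# Minimal sketch (all names hypothetical) of the CPU-side step that
# generates one layer per pixel density of an FFR level.
from dataclasses import dataclass

@dataclass
class Layer:
    density: float   # uniform pixel density of every pixel in the layer

# Assumed density lists for two illustrative FFR levels.
FFR_LEVEL_DENSITIES = {
    "two_density_level": [1, 1/4],
    "four_density_level": [1, 1/2, 1/4, 1/8],
}

def make_layers(ffr_level: str) -> list[Layer]:
    """Generate one layer for each pixel density of the FFR level."""
    return [Layer(density=d) for d in FFR_LEVEL_DENSITIES[ffr_level]]

print(make_layers("two_density_level"))   # layers with density 1 and 1/4
print(make_layers("four_density_level"))  # layers with density 1, 1/2, 1/4, 1/8
```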
In the prior art, an image to be rendered is regarded as a layer, and the layer includes areas with different pixel densities. Accordingly, in the prior art, the CPU only needs to allocate a memory space for an image to be rendered, so as to store a rendering result of the image to be rendered. In this embodiment, a memory space needs to be allocated for each layer of the image to be rendered, where the memory space of each layer is used to store the rendering result of the layer, and the rendering result of the layer refers to the rendering result of the pixels in the layer.
The CPU may allocate a memory space for a layer according to the number of pixels included in the layer, where the size of each layer's memory space corresponds to the number of pixels in the layer. Assuming that 1 byte of memory space is required to store the rendering result of 1 pixel, then for an image to be rendered of size 1280×720, which includes 921600 pixels in total, 921600 bytes (i.e., 900 KB) of memory space are required to store the rendering result of the image to be rendered. Assuming that the layer corresponding to the full pixel density includes 2/3 of the pixels of the image to be rendered and the layer corresponding to the 1/4 pixel density includes 1/3 of the pixels, 600 KB of memory space should be allocated for the layer corresponding to the full pixel density and 300 KB for the layer corresponding to the 1/4 pixel density.
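The allocation arithmetic above can be checked with a minimal sketch, assuming 1 byte per pixel and the hypothetical 2/3 : 1/3 pixel split between the two layers:

```python
# Sketch of the per-layer allocation arithmetic, under the stated
# assumptions: 1 byte per pixel, a 1280x720 image, and a hypothetical
# 2/3 : 1/3 pixel split between the two layers.
WIDTH, HEIGHT = 1280, 720
BYTES_PER_PIXEL = 1

total_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL   # 921600 B = 900 KB

full_layer_bytes = total_bytes * 2 // 3          # 614400 B = 600 KB
quarter_layer_bytes = total_bytes * 1 // 3       # 307200 B = 300 KB

print(total_bytes // 1024, "KB total")
print(full_layer_bytes // 1024, "KB for the full-density layer")
print(quarter_layer_bytes // 1024, "KB for the 1/4-density layer")
```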
In this embodiment, the memory space allocated by the CPU for each layer of the image to be rendered is independent. Alternatively, the memory space allocated by the CPU for multiple layers of the image to be rendered may be continuous or discontinuous. I.e. the memory space of any two layers may be continuous or discontinuous.
Besides layering the image to be rendered and allocating memory space, the CPU also performs the application processing stage, generating the primitive information and rendering state information of each layer; the specific processing flow of the application processing stage is prior art and is not described in detail in this embodiment.
It is emphasized that, in actual use, after an application turns on the FFR function, the images within the application may all use the same FFR level, i.e., the FFR level is unchanged while the application runs. Alternatively, the FFR level changes dynamically while the application runs; for example, the FFR level may change as the user's gaze point area changes, or as the CPU usage or memory usage of the device changes.
S102, the CPU sends the rendering data of each layer to the GPU, where the rendering data of each layer includes the primitive information, pixel density, and memory space information of the layer.
Rendering data for a layer includes all data required for rendering the layer including, but not limited to, primitive information, pixel density, and memory space information for the layer.
The memory space information of the layer includes a starting address of a memory space allocated for the layer and a size of the memory space, and a block of memory space can be uniquely determined according to the starting address of the memory space and the size of the memory space, and optionally, the memory space information of the layer may further include end address information of the memory space.
Alternatively, the memory space information of the layer includes a start address and an end address of the memory space allocated for the layer, and a space formed by determining all consecutive addresses between the start address and the end address of the memory space of the layer may be used as the memory space of the layer.
And S103, the GPU renders each layer according to the rendering data of each layer, and the rendering result is stored in a video memory.
The GPU stores the rendering data sent by the CPU in the video memory, renders each layer according to the rendering data in the video memory, specifically, the GPU can render the pixels of each layer according to the pixel density and the primitive information of each layer, and the rendering result is stored in the video memory.
In the prior art, when an image is rendered, the GPU needs to determine the pixel density of each pixel: if the pixel density of the pixel is the full pixel density, the pixel is rendered; if it is a non-full pixel density, whether the pixel is rendered is determined according to the non-full pixel density of the pixel and the position of the pixel. Each pixel within the full pixel density region (i.e., the full-resolution region) is rendered once, i.e., the GPU needs to calculate the rendering result of every pixel. For the pixels in a non-full pixel density area, not every pixel needs to be rendered; instead, the group of pixels corresponding to the non-full pixel density is rendered once, i.e., the GPU only needs to calculate the rendering result of one pixel in the group, and the rendering results of the other pixels are the same as that of this pixel, thereby reducing the rendering load of the GPU.
Illustratively, within a 1/2 pixel density region (i.e., a 1/2 full-resolution region) every 2 pixels are rendered once, i.e., the GPU calculates the rendering result of one pixel out of every two pixels; within a 1/4 pixel density region (i.e., a 1/4 full-resolution region) every 4 pixels are rendered once, i.e., the GPU calculates the rendering result of one pixel out of every 4 pixels; within a 1/8 pixel density region (i.e., a 1/8 full-resolution region) every 8 pixels are rendered once, i.e., the GPU calculates the rendering result of one pixel out of every 8 pixels, and so on, so that the rendered image becomes gradually more blurred as the pixel density or resolution decreases.
In this embodiment, the GPU independently renders each layer, and the pixel density of each pixel in each layer is the same and known, so that the GPU does not need to determine the pixel density of each pixel during rendering.
When the GPU renders each layer, if the current layer is a layer corresponding to the full pixel density, the GPU sequentially renders each pixel in the current layer according to the full pixel density and the primitive information of the current layer, and the rendering result of each pixel is stored in a video memory.
If the current layer is a layer corresponding to a non-full pixel density, the GPU renders one target pixel in each rendering pixel group in the current layer according to the non-full pixel density and the primitive information of the current layer, and stores the rendering result of the target pixel in the video memory, where the current layer comprises a plurality of rendering pixel groups, one rendering pixel group comprises a plurality of adjacent pixels, the number of pixels in a rendering pixel group is related to the non-full pixel density, and the rendering results of the pixels in a rendering pixel group are the same.
For a layer corresponding to full pixel density, each pixel in the layer needs to be rendered. For a layer corresponding to the non-full pixel density, partial pixels in the layer do not need to be rendered, the GPU determines the number of pixels in a rendering pixel group in the layer according to the value of the non-full pixel density, divides the rendering pixel group according to the number of pixels in the rendering pixel group, only renders one target pixel in the rendering pixel group, other pixels in the rendering pixel group do not need to be rendered, and the rendering results of the other pixels multiplex the rendering results of the target pixel.
Taking the 1/4 pixel density as an example, the number of pixels included in a rendering pixel group is 4, that is, every 4 pixels are rendered once, and the 4 pixels in a rendering pixel group are adjacent. The GPU sequentially determines 2×2 pixel blocks from the layer corresponding to the 1/4 pixel density, according to the starting point and the traversal direction, as rendering pixel groups, and renders the designated target pixel in each rendering pixel group. For example, the pixel in the upper-left corner of the 2×2 pixel group may be designated as the target pixel, or the pixel in the lower-right corner; this embodiment does not limit the choice, and any pixel in the 2×2 pixel group may be designated as the target pixel.
Taking the 1/8 pixel density as an example, the number of pixels included in a rendering pixel group is 8. The GPU may sequentially determine pixel blocks of size 4×2 or 2×4 from the layer corresponding to the 1/8 pixel density, according to the starting point and the traversal direction, as rendering pixel groups, and render one target pixel in each rendering pixel group, where the target pixel may be any pixel in the group.
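As a rough illustration of the traversal just described (all names are hypothetical, with shade() standing in for the GPU's per-pixel shading):

```python
# Illustrative sketch of rendering a non-full-density layer by
# rendering pixel groups: one target pixel is shaded per group, and
# the top-left choice of target pixel is arbitrary, as noted above.
def render_grouped(width, height, group_w, group_h, shade):
    """Shade one target pixel per group_w x group_h rendering pixel group."""
    results = {}
    for y in range(0, height, group_h):
        for x in range(0, width, group_w):
            results[(x, y)] = shade(x, y)   # one shading per group
    return results

# 1/4 density: 2x2 groups; 1/8 density: 4x2 (or 2x4) groups.
quarter = render_grouped(8, 4, 2, 2, shade=lambda x, y: (x + y) % 255)
eighth = render_grouped(8, 4, 4, 2, shade=lambda x, y: (x + y) % 255)
print(len(quarter), "shadings instead of 32")  # 8
print(len(eighth), "shadings instead of 32")   # 4
```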
Optionally, the GPU renders and writes based on tiles (tile) when rendering each layer and writing it into the memory. Correspondingly, after generating a plurality of layers for the image to be rendered according to the gaze point rendering level, the CPU divides each layer into a plurality of tiles according to the size of the storage space of the video memory, each tile comprising a plurality of continuous pixels; the information of the tiles of each layer is carried in the rendering data sent to the GPU, and the GPU renders and writes with tile granularity when rendering a layer and writing it into the memory.
Tile-based rendering, also called block rendering or chunked rendering, divides a large layer into a plurality of tiles, each of which is independent. Rendering with tile granularity reduces the size of the data stored and transferred in the middle of rendering compared with rendering the whole layer, saving memory and bandwidth during rendering, and the tiles are easy to process in parallel.
The GPU sequentially renders each tile in the current layer according to the rendering data of the current layer, and stores the rendering result of each tile in the video memory.
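A minimal sketch of the tile division follows, assuming an illustrative 256×256 tile size that the patent does not specify:

```python
# Sketch (tile size assumed, not specified by the patent) of splitting
# a layer into tiles small enough to fit the GMEM.
def split_into_tiles(width, height, tile_w, tile_h):
    """Yield (x, y, w, h) for each tile covering the layer."""
    for y in range(0, height, tile_h):
        for x in range(0, width, tile_w):
            yield x, y, min(tile_w, width - x), min(tile_h, height - y)

tiles = list(split_into_tiles(1280, 720, 256, 256))
print(len(tiles), "tiles to render and write back one by one")  # 15
```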
Optionally, the GPU may render multiple layers in parallel or in series. For example, when there are 4 layers, the GPU may render the 4 layers one after another in sequence, render all 4 in parallel, or render two at a time and render the remaining two after the first two are completed. Rendering multiple layers in parallel can improve the rendering speed.
And S104, the GPU writes the rendering result of each layer into the corresponding memory space from the video memory according to the pixel density of each layer.
Illustratively, the GPU determines an address mapping relationship of each layer according to the pixel density of each layer, where the address mapping relationship is a mapping relationship between an address in the video memory and an address in the memory. And the GPU writes the rendering result of each layer into the corresponding memory space from the video memory according to the address mapping relation and the memory space information of each layer.
The address mapping relationship of the layer corresponding to the full pixel density is that the addresses in the video memory are in one-to-one correspondence with the addresses in the memory, and it can be understood that the rendering result of each pixel in the video memory is correspondingly written into one address in the memory, that is, the copy relationship of the rendering result of the pixel is in one-to-one correspondence, so the address mapping relationship can be called as a pixel mapping relationship.
The address mapping relationship of a layer corresponding to a non-full pixel density is that one address in the video memory corresponds to N addresses in the memory, where N is an integer greater than or equal to 2 and the value of N equals the number of pixels in a rendering pixel group, or corresponds to the non-full pixel density. It can also be understood that the rendering result of each pixel in the video memory is written into N addresses in the memory, that is, the copy relationship of the pixel's rendering result is a one-to-N correspondence, so the address mapping relationship may be referred to as a pixel mapping relationship.
Because the pixel density of each layer is known and the pixel density of all pixels of the layer is the same, the GPU does not need to judge the pixel density of the pixels in the layer in the process of writing the rendering result of each layer into the memory from the video memory, thereby reducing the cost of judging the pixel density in the writing process. And the address mapping relation of each layer is obtained according to the pixel density, and the read rendering result of the pixels is directly written into the continuous addresses in the corresponding memory space according to the address mapping relation, so that the writing address of each pixel in the memory is not required to be calculated, and the cost of calculating the writing address in the memory in the writing process is reduced.
If the current layer of the GPU is a layer corresponding to the full pixel density, the address mapping relation of the layer corresponding to the full pixel density is that addresses in the video memory are in one-to-one correspondence with addresses in the memory, and correspondingly, the GPU reads the rendering result of the pixels of the current layer from the video memory, and writes the rendering result of the read pixels into one address of the memory space corresponding to the memory space information according to the address mapping relation of the current layer and the memory space information.
Because the pixel density of the layer corresponding to the full pixel density is known and the pixel density of all pixels of the layer is the same, the GPU does not need to judge the pixel density of each pixel in the process of writing the rendering result of the layer corresponding to the full pixel density into the memory from the video memory, thereby reducing the cost of judging the pixel density in the writing process.
In addition, each pixel in the layer corresponding to the full pixel density is rendered, and in the process of writing the rendering result of the layer into the memory from the video memory, the addresses in the video memory and the addresses in the memory can be considered to be in one-to-one correspondence, the GPU does not need to calculate the writing position of the pixel, directly reads the rendering result of the pixel from the video memory, and continuously writes the rendering result into the memory space of the layer corresponding to the full pixel density in the memory. Addresses in the memory space of the layer corresponding to the full pixel density are continuous, and when the GPU writes the rendering result of the pixels of the layer corresponding to the full pixel density into the memory space in the memory, the addresses are also continuously written, and addressing is not needed, so that the cost of calculating the writing addresses in the memory in the writing process is reduced.
If the current layer is a layer corresponding to the non-full pixel density, the address mapping relation of the layer corresponding to the non-full pixel density is that one address in the video memory corresponds to N addresses in the memory, N is an integer greater than or equal to 2, and the value of N is equal to the number of pixels in the rendering pixel group. Correspondingly, the GPU reads the rendering result of the target pixel of the current layer from the video memory, and repeatedly writes the read rendering result of the target pixel into N continuous addresses of the memory space corresponding to the memory space information according to the address mapping relation and the memory space information of the current layer.
The pixel density of the layer corresponding to the non-full pixel density is known, and the pixel density of all pixels of the layer is the same, so that the GPU does not need to judge the pixel density of each pixel in the process of writing the rendering result of the layer corresponding to the non-full pixel density into the memory from the video memory, thereby reducing the cost of judging the pixel density in the writing process.
Because the layer corresponding to the non-full pixel density only renders one target pixel in the rendering pixel group, the rendering results of other pixels in the rendering pixel group multiplex the rendering results of the target pixel, so that only the rendering results of the target pixel are stored in the display memory. However, in the writing process, the rendering result of each pixel in the layer corresponding to the non-full pixel density still needs to be written into the memory, so that the GPU needs to repeatedly write the rendering result of the target pixel multiple times when writing, so that the rendering results of all pixels in the rendering pixel group to which the target pixel belongs are written into the memory.
For a layer corresponding to a non-full pixel density, in the process of writing the layer's rendering result from the video memory into the memory, the addresses in the video memory and the addresses in the memory are not in one-to-one correspondence, but the mapping relationship between them is fixed: each video memory address corresponds to N memory addresses. After reading the rendering result of a target pixel, the GPU repeatedly writes the rendering result read from one video memory address into N continuous addresses in the memory according to the address mapping relationship. This avoids determining the write position in the memory pixel by pixel; the GPU writes the pixels' rendering results continuously without addressing, thereby reducing the cost of calculating write addresses in the memory during the write process.
For example, when the non-full pixel density is 1/4, the GPU repeatedly writes the rendering result of a pixel 4 times when reading the rendering result of the pixel from the video memory, and writes the rendering result of the pixel into 4 consecutive addresses in the memory space with the 1/4 pixel density, respectively, where each address is used to store the rendering result of a pixel.
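The write-back for both mapping cases can be sketched as follows, modeling the video memory and the memory as flat arrays; this is an illustration of the address mapping described above, not the patent's implementation:

```python
# Sketch of the GMEM -> memory write-back under the address mapping
# above. For a full-density layer n == 1 (one-to-one mapping); for a
# 1/4-density layer n == 4, so each target pixel's result is repeated
# into 4 consecutive memory addresses.
def write_back(gmem, memory, mem_base, n):
    """Copy each GMEM entry into n consecutive memory addresses."""
    addr = mem_base
    for value in gmem:            # sequential: no per-pixel addressing
        for _ in range(n):
            memory[addr] = value
            addr += 1

memory = [None] * 12
write_back([10, 11, 12, 13], memory, mem_base=0, n=1)  # full density
write_back([20, 21], memory, mem_base=4, n=4)          # 1/4 density
print(memory)  # [10, 11, 12, 13, 20, 20, 20, 20, 21, 21, 21, 21]
```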
S105, the CPU reads the rendering results of the layers from the memory, synthesizes the rendering results of the layers, and obtains the rendering result of the image to be rendered.
The memory space of each layer is independent, and the rendering results stored in each memory space cannot be used directly for on-screen display; the CPU needs to synthesize the rendering results of the layers into one image for display.
After the GPU writes the rendering result of a rendered pixel into the memory, the CPU reads it from the memory and keeps synthesizing the rendering results of the pixels read from the layers, until the rendering results of all layers have been combined into one image.
Optionally, the CPU performs the synthesis as follows: it reads the rendering result of a rendered pixel from the memory and stores it into a frame buffer according to the position information of that pixel. The rendering results stored in the frame buffer are data that can be used directly for on-screen display; the display screen reads the rendering results from the frame buffer and displays them.
Optionally, the frame buffer uses multi-level buffering to avoid picture tearing caused by a mismatch between the frame rate of the rendering results and the refresh rate of the display screen. For example, double buffering may be used: rendering results are stored in two buffers, one used for rendering and one for display, and data is continually exchanged between the two.
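By way of illustration and not limitation, the following sketch shows one possible synthesis path into a double-buffered frame buffer. The Layer structure, its placement fields, and the swap-based presentation are assumptions of the example; it also assumes each layer's memory space already holds full-resolution results for its region, as produced by the replicated writes described earlier.

```cpp
#include <cstdint>
#include <vector>

using Pixel = std::uint32_t;

struct Layer {
    int x, y, w, h;             // placement of the layer in the final image
    std::vector<Pixel> pixels;  // rendering results read from memory, row-major
};

// Composite one layer into the back buffer at its screen position.
// Assumes the layer rectangle lies inside the screen.
void composite(std::vector<Pixel>& back, int screen_w, const Layer& layer) {
    for (int row = 0; row < layer.h; ++row)
        for (int col = 0; col < layer.w; ++col)
            back[(layer.y + row) * screen_w + (layer.x + col)] =
                layer.pixels[row * layer.w + col];
}

// Double buffering: compose into `back` while `front` is displayed,
// then swap so the freshly composited frame becomes the displayed one.
void present(std::vector<Pixel>& front, std::vector<Pixel>& back) {
    front.swap(back);
}
```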
In this embodiment, the CPU generates a plurality of layers for an image to be rendered according to the gaze point rendering level and allocates a memory space in the memory for each layer, where the gaze point rendering level corresponds to a plurality of pixel densities and the pixels corresponding to each pixel density correspond to one layer. The CPU sends the rendering data of each layer, including its pixel density and memory space information, to the GPU. The GPU renders each layer according to its rendering data, stores the rendering results in the video memory, and writes the rendering results of each layer from the video memory into the corresponding memory space according to the layer's pixel density. The CPU then reads the rendering results of the layers from the memory and synthesizes them into the rendering result of the image to be rendered. Because the pixel density within each layer is uniform and the GPU renders the layers independently, neither a per-pixel density check nor a per-pixel write-address computation is needed when writing each layer's rendering results from the video memory into the memory, which reduces both costs during the writing process and lowers the load on the GPU.
On the basis of the first embodiment, the second embodiment of the present application provides a gaze point rendering method in which the GPU renders and writes on a per-tile basis. Fig. 4 is a flowchart of the gaze point rendering method provided in the second embodiment of the present application; as shown in fig. 4, the method includes the following steps.
S201, the CPU generates a plurality of layers for the image to be rendered according to the gaze point rendering level and allocates a memory space for each layer in the memory, where the gaze point rendering level corresponds to a plurality of pixel densities and the pixels corresponding to each pixel density correspond to one layer.
S202, the CPU divides each layer into a plurality of tiles according to the size of the storage space of the video memory, each tile comprising a plurality of contiguous pixels.
In this embodiment, each layer is divided into multiple tiles, and each tile is independent. When the GPU renders a layer, it renders at tile granularity; compared with rendering the whole layer at once, this reduces the amount of data stored and transmitted during rendering, saving memory and bandwidth in the rendering process, and the tiles are easy to process in parallel.
In a rendering device, the video memory is usually much smaller than the memory; for example, the video memory of a rendering device may be only 1-3 MB. The video memory also has to hold other rendering data besides the rendering results, so the CPU needs to determine, from the size of the video memory, how much space in it can be used to store rendering results, so that the rendering result of one tile fits in the video memory.
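By way of illustration and not limitation, the following sketch derives a square tile size from the portion of video memory reserved for rendering results. The power-of-two search, the byte budget, and the 4-byte pixel size are assumptions of the example.

```cpp
#include <cstddef>

// Pick the largest power-of-two square tile whose rendering results fit
// in the given budget. For example, a 1 KiB result budget with 4-byte
// pixels gives pick_tile_side(1024, 4) == 16, i.e. 16x16 tiles.
int pick_tile_side(std::size_t budget_bytes, std::size_t bytes_per_pixel) {
    int side = 1;
    while (static_cast<std::size_t>(2 * side) * (2 * side) * bytes_per_pixel
               <= budget_bytes) {
        side *= 2;  // the next size up still fits, so double the side
    }
    return side;
}
```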
Optionally, the tiles of the layer corresponding to the full pixel density are smaller than the tiles of the layers corresponding to non-full pixel densities. This is because every pixel in a tile of the full-density layer is rendered and the rendering result of every pixel must be stored, while in a layer corresponding to a non-full pixel density some pixels do not need to be rendered and only the rendering results of the rendered pixels must be stored; therefore, for the same number of stored rendering results, a tile of a non-full-density layer can contain more pixels.
It is understood that the tile of the layer corresponding to the full pixel density may alternatively be greater than or equal to the tile of a layer corresponding to a non-full pixel density; the embodiments of the present application do not limit this.
For example, if the tile size of the layer corresponding to the full pixel density is 16×16, the tile size of the layer corresponding to the non-full pixel density is greater than 16×16, and may be 20×20, 32×32, 32×64, 64×64, or other sizes.
Optionally, the size of the tile of the layer corresponding to the non-full pixel density is N times the size of the tile of the layer corresponding to the full pixel density, N is an integer greater than 1, and N is equal to the ratio of the full pixel density to the non-full pixel density.
For example, if the size of a tile of the full-density layer is 16×16, then the size of a tile of the 1/2-density layer is 16×32, the size of a tile of the 1/4-density layer is 32×32, and the size of a tile of the 1/8-density layer is 32×64 (eight times as many pixels as a 16×16 tile).
It should be noted that, when a layer is partitioned, its size may not be an integer multiple of the tile size, in which case the tiles in the edge region are smaller than the configured tile size. For example, when the full-pixel-density area is 800×600 and the tile size is 16×16, 800 divided by 16 equals 50 while 600 divided by 16 equals 37.5; that is, the area divides evenly in the lateral direction but not in the longitudinal direction. If the tiles are laid out from the first pixel in the upper left corner of the area, left to right and top to bottom, the result is 50×37 tiles of 16×16 plus 50 tiles of 16×8, i.e., the last row of tiles is 16×8 and smaller than the tiles at other positions.
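By way of illustration and not limitation, the following sketch partitions a layer in the left-to-right, top-to-bottom order described above, shrinking the edge tiles when the layer size is not an integer multiple of the tile size; the Tile structure is an assumption of the example.

```cpp
#include <vector>

struct Tile {
    int x, y;  // start position: the first pixel in the tile's upper left corner
    int w, h;  // actual size; edge tiles may be smaller than the configured size
};

// Partition a layer_w x layer_h layer into tiles of at most tile_w x tile_h.
// With an 800x600 layer and 16x16 tiles this yields 50x37 full tiles plus a
// last row of 50 tiles of size 16x8, matching the example above.
std::vector<Tile> partition(int layer_w, int layer_h, int tile_w, int tile_h) {
    std::vector<Tile> tiles;
    for (int y = 0; y < layer_h; y += tile_h) {
        for (int x = 0; x < layer_w; x += tile_w) {
            Tile t;
            t.x = x;
            t.y = y;
            t.w = (x + tile_w <= layer_w) ? tile_w : layer_w - x;
            t.h = (y + tile_h <= layer_h) ? tile_h : layer_h - y;
            tiles.push_back(t);
        }
    }
    return tiles;
}
```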
Fig. 5 is a schematic diagram of tiles of the layers corresponding to the full pixel density and the 1/4 pixel density; as shown in fig. 5, the tile of the full-density layer is 16×16 and the tile of the 1/4-density layer is 32×32.
After partitioning each layer, the CPU records the information of each tile, which includes the tile's start position, i.e., the position of the tile's starting pixel. If the CPU partitions in order from left to right and from top to bottom, the start position of a tile may be the position of the first pixel in its upper left corner.
Optionally, the tile information may also include the tile size and/or the partition direction. The partition direction is the direction in which the CPU traverses the pixels of the image when partitioning; it may be left to right and top to bottom, or right to left and top to bottom, among others.
S203, the CPU sends the rendering data of each layer to the GPU, where the rendering data of each layer includes the primitive information, pixel density, memory space information, and tile information of the layer.
The tile information includes the start position of each tile. The tile size may be agreed upon or configured in advance between the GPU and the CPU, or the GPU may derive it from the start positions of two adjacent tiles, so the tile information sent by the CPU to the GPU does not have to include the tile size.
Similarly, the partition direction may be agreed upon or configured in advance between the GPU and the CPU. With the start position of a tile, the tile size, and the partition direction, the GPU can uniquely determine the position of the tile and then render it.
S204, the GPU renders the tiles of each layer according to the rendering data of each layer, and stores the rendering results of the tiles in the video memory.
The GPU renders each layer independently. Within a layer, the tiles may be rendered in parallel, or serially in their order within the layer. When the GPU renders a tile, it writes the rendering result of each rendered pixel of the tile into the video memory; after all pixels of the tile have been rendered, the tile's rendering result is written from the video memory into the memory.
When the pixel density of the current layer is the full pixel density, the GPU renders every pixel in the tile and writes the rendering results into the video memory. When the pixel density of the current layer is a non-full pixel density, the GPU renders only the target pixel of each rendering pixel group in the tile and writes the rendering results of the target pixels into the video memory; one tile contains one or more rendering pixel groups.
For example, assuming the tile size of the 1/4-density layer is 32×32, each tile of that layer contains 32×32/4 = 256 rendering pixel groups, and the GPU renders one target pixel per rendering pixel group.
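By way of illustration and not limitation, the following sketch renders one tile of a non-full-density layer, shading only the target pixel of each rendering pixel group and storing only those results in video memory. The linear group layout and the shade stub are assumptions of the example, standing in for the GPU's actual pixel pipeline.

```cpp
#include <cstddef>
#include <cstdint>

using Pixel = std::uint32_t;

// Stub standing in for the GPU's shading of a group's target pixel.
static Pixel shade(std::size_t group_index) {
    return static_cast<Pixel>(group_index);
}

// Render one tile of a non-full-density layer. For a 32x32 tile at 1/4
// density, tile_pixels is 1024 and group_size is 4, so only 256 target
// pixel results are shaded and written to video memory instead of 1024.
void render_tile(Pixel* vram, std::size_t tile_pixels, std::size_t group_size) {
    const std::size_t groups = tile_pixels / group_size;
    for (std::size_t g = 0; g < groups; ++g) {
        vram[g] = shade(g);  // one rendered target pixel per rendering pixel group
    }
}
```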
S205, the GPU writes the rendering results of the tiles of each layer from the video memory into the corresponding memory space according to the pixel density of each layer.
Each layer has only one pixel density, and the pixel density of every tile in a layer equals that of the layer. The GPU renders in tile order and likewise writes back in tile order.
When the GPU writes the rendering result of a tile from the video memory into the memory, it determines the mapping between video memory addresses and memory addresses according to the tile's pixel density: when the tile's pixel density is the full pixel density, the addresses correspond one to one; when it is a non-full pixel density, one video memory address corresponds to N memory addresses, where the value of N is related to the non-full pixel density. The GPU then writes the rendering result of each pixel in the tile into the memory according to this address mapping.
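By way of illustration and not limitation, the following sketch writes one tile's results back as a single pass. It assumes the tile's destination region in its layer's memory space is contiguous and that the replication factor N is handed in directly; for a full-density tile (N = 1) the write degenerates to one contiguous copy.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

using Pixel = std::uint32_t;

// Write one tile's rendering results into the memory space of its layer.
// n is fixed for the whole tile: 1 for a tile of the full-density layer,
// otherwise the ratio of the full pixel density to the tile's density,
// so no per-pixel decision is taken inside the loops.
void write_tile_once(const Pixel* tile_vram, std::size_t rendered_pixels,
                     Pixel* tile_ram, std::size_t n) {
    if (n == 1) {
        // Full density: addresses correspond one to one, so the whole
        // tile goes out as a single contiguous copy.
        std::memcpy(tile_ram, tile_vram, rendered_pixels * sizeof(Pixel));
        return;
    }
    for (std::size_t i = 0; i < rendered_pixels; ++i)  // non-full density:
        for (std::size_t k = 0; k < n; ++k)            // replicate each result
            *tile_ram++ = tile_vram[i];                // into n addresses
}
```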
S206, the CPU reads the rendering results of the layers from the memory, synthesizes the rendering results of the layers, and obtains the rendering result of the image to be rendered.
When the GPU renders and writes on a per-tile basis, the CPU also synthesizes on a per-tile basis: after the GPU finishes rendering a tile and writes its rendering result into the memory, the CPU reads the tile's rendering result from the memory and, according to the tile's position, merges it into the rendering result of the image to be rendered.
It should be noted that the CPU may synthesize after all layers have been rendered, or it may synthesize the rendering results of finished tiles while rendering is still in progress; the latter approach processes rendering results promptly and lets the synthesis complete earlier.
S207, the display screen displays a picture using the rendering result of the image to be rendered.
If the rendering device is a display device, the rendering result of the image to be rendered is displayed on its display screen. If the rendering device is not a display device, for example when rendering is performed on a cloud server, the server obtains the rendering result of the image to be rendered and sends it to a terminal for display.
In this embodiment, the CPU generates a plurality of layers for an image to be rendered according to the gaze point rendering level and allocates a memory space for each layer in the memory, with one layer generated for the pixels corresponding to each pixel density. The CPU divides each layer into a plurality of tiles according to the size of the storage space of the video memory, each tile comprising a plurality of contiguous pixels, and sends the primitive information, pixel density, memory space information, and tile information of the layers to the GPU, so that the GPU can render and write the layers on a per-tile basis. Dividing each layer into tiles and rendering and writing at tile granularity reduces the amount of data stored and transmitted during rendering, saving memory and bandwidth in the rendering process, and the tiles are easy to process in parallel.
To facilitate better implementation of the gaze point rendering method of the embodiments of the present application, an embodiment of the present application further provides a rendering device. Fig. 6 is a schematic structural diagram of a rendering device according to a third embodiment of the present application; as shown in fig. 6, the rendering device 100 may include a CPU 11, a GPU 12, a video memory 13, and a memory 14. The memory 14 stores a computer program and transfers the program code to the CPU 11; the CPU 11 reads instructions of the computer program from the memory 14 and performs the steps executed by the CPU in the method embodiments above. The video memory 13 also stores a computer program, and the GPU 12 reads that computer program and the data provided by the CPU 11 to perform image processing. Specifically, the CPU 11 and the GPU 12 function as follows.
The CPU11 is configured to generate a plurality of layers for an image to be rendered in the memory 14 according to a gaze point rendering level, and allocate a block of memory space for each layer, where the gaze point rendering level corresponds to a plurality of pixel densities, and the pixels corresponding to each pixel density correspond to one layer;
the CPU11 is further configured to send rendering data of each layer to the GPU12, where the rendering data of each layer includes primitive information, pixel density, and memory space information of the layer;
the GPU12 is configured to render each layer according to the rendering data of each layer, and store the rendering result in the video memory 13;
the GPU12 is further configured to write, according to the pixel density of each layer, the rendering result of each layer from the video memory into a corresponding memory space;
the CPU11 is further configured to read rendering results of the multiple layers from the memory, and synthesize the rendering results of the multiple layers to obtain a rendering result of the image to be rendered.
In some alternative embodiments, GPU12 is specifically configured to:
rendering the pixels of each layer according to the pixel density and the primitive information of each layer, and storing rendering results in the video memory;
Determining an address mapping relation of each layer according to the pixel density of each layer, wherein the address mapping relation is the mapping relation between an address in a video memory and an address in a memory;
and writing the rendering result of each layer into the corresponding memory space from the video memory according to the address mapping relation and the memory space information of each layer.
In some optional embodiments, the current layer is a layer corresponding to a full pixel density, and the GPU12 is specifically configured to: and rendering each pixel in the current layer in turn according to the full pixel density and the primitive information of the current layer, and storing the rendering result of each pixel in the video memory.
In some optional embodiments, the address mapping relationship of the layer corresponding to the full pixel density is that addresses in the video memory and addresses in the memory are in one-to-one correspondence;
the GPU12 is specifically configured to: and reading the rendering result of the pixel of the current layer from the video memory, and writing the read rendering result of the pixel into one address of a memory space corresponding to the memory space information according to the address mapping relation and the memory space information of the current layer.
In some optional embodiments, the current layer is a layer corresponding to a non-full pixel density, and the GPU12 is specifically configured to:
And rendering a target pixel in a rendering pixel group in the current layer according to the non-full pixel density and the primitive information of the current layer, and storing the rendering result of the target pixel in the video memory, wherein the current layer comprises a plurality of rendering pixel groups, one rendering pixel group comprises a plurality of adjacent pixels, the number of pixels in the rendering pixel group is related to the non-full pixel density, and the rendering result of the pixels in the rendering pixel group is the same.
In some optional embodiments, the address mapping relationship of the layer corresponding to the non-full pixel density is that one address in the video memory corresponds to N addresses in the memory, N is an integer greater than or equal to 2, and the value of N is equal to the number of pixels in the rendered pixel group;
the GPU12 is specifically configured to:
reading the rendering result of the target pixel of the current layer from the video memory;
and repeatedly writing the read rendering result of the target pixel into N continuous addresses of a memory space corresponding to the memory space information according to the address mapping relation of the current layer and the memory space information.
In some alternative implementations, the GPU renders and writes based on tiles.
In some alternative embodiments, the CPU11 is further configured to: divide each layer into a plurality of tiles according to the size of the storage space of the video memory, each tile comprising a plurality of contiguous pixels; the rendering data further includes the tile information of the layer.
In some alternative embodiments, GPU12 is specifically configured to:
according to the rendering data of each layer, rendering each tile in each layer in turn, and storing the rendering result of each tile in the video memory, wherein the pixel density of each tile in a layer is equal to the pixel density of the layer to which the tile belongs;
writing the rendering result of each layer from the video memory into the corresponding memory space according to the pixel density of each layer, including:
reading the rendering result of one tile from the video memory each time;
according to the pixel density of the read tile, writing the rendering result of the read tile into the memory space of the layer to which the tile belongs through a single write request.
In some alternative embodiments, the tiles of the layer corresponding to the full pixel density are smaller in size than the tiles of the layer corresponding to the non-full pixel density.
In some alternative embodiments, the tile of the layer corresponding to the non-full pixel density is N times the size of the tile of the layer corresponding to the full pixel density, N being an integer greater than 1, N being equal to the ratio of the full pixel density to the non-full pixel density.
In some alternative embodiments, the CPU11 or GPU12 may include, but is not limited to: a general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like.
In some alternative embodiments, the memory 14 or the video memory 13 may be volatile memory and/or nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
It will be appreciated that, although not shown in fig. 6, in some alternative embodiments the rendering device 100 may further include a camera module, a Wi-Fi module, a positioning module, a Bluetooth module, a display, a controller, and other memories (memories other than the memory and the video memory), which are not described here.
It will be appreciated that the various components in the rendering device 100 are connected by a bus system comprising, in addition to a data bus, a power bus, a control bus and a status signal bus.
It should be understood that apparatus embodiments and method embodiments may correspond with each other and that similar descriptions may refer to the method embodiments. To avoid repetition, no further description is provided here.
The present application also provides a computer storage medium having stored thereon a computer program which, when executed by a computer, enables the computer to perform the method of the above-described method embodiments. Alternatively, embodiments of the present application also provide a computer program product comprising instructions which, when executed by a computer, cause the computer to perform the method of the method embodiments described above.
The present application also provides a computer program product comprising a computer program stored in a computer readable storage medium. The processor of the rendering device reads the computer program from the computer readable storage medium, and the processor executes the computer program, so that the rendering device executes the method of the above method embodiment, which is not described herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules illustrated as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. For example, functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes or substitutions are covered in the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. A gaze point rendering method, characterized by being applied to a rendering device comprising a central processing unit CPU, a graphics processing unit GPU, a video memory and a memory, the method comprising:
the CPU generates a plurality of layers for an image to be rendered according to the gaze point rendering level, and allocates a memory space for each layer in the memory, wherein the gaze point rendering level corresponds to a plurality of pixel densities, and the pixels corresponding to each pixel density correspond to one layer;
the CPU sends rendering data of each layer to the GPU, wherein the rendering data of each layer comprises primitive information, pixel density and memory space information of the layer;
the GPU renders each layer according to the rendering data of each layer, and the rendering result is stored in the video memory;
The GPU writes the rendering result of each layer into the corresponding memory space from the video memory according to the pixel density of each layer;
and the CPU reads the rendering results of the layers from the memory, synthesizes the rendering results of the layers, and obtains the rendering result of the image to be rendered.
2. The method according to claim 1, wherein the GPU renders each layer according to the rendering data of each layer, and stores the rendering result in the video memory, comprising:
the GPU renders the pixels of each layer according to the pixel density and the primitive information of each layer, and the rendering result is stored in the video memory;
the GPU writes the rendering result of each layer from the video memory into the corresponding memory space according to the pixel density of each layer, and the method comprises the following steps:
the GPU determines the address mapping relation of each layer according to the pixel density of each layer, wherein the address mapping relation is the mapping relation between the address in the video memory and the address in the memory;
and the GPU writes the rendering result of each layer into the corresponding memory space from the video memory according to the address mapping relation and the memory space information of each layer.
3. The method according to claim 2, wherein if the current layer is a layer corresponding to a full pixel density, the GPU renders pixels of each layer according to the pixel density and the primitive information of each layer, and stores the rendering result in the video memory, including:
and the GPU sequentially renders each pixel in the current layer according to the full pixel density and the primitive information of the current layer, and stores the rendering result of each pixel in the video memory.
4. The method of claim 3, wherein the address mapping relationship of the layer corresponding to the full pixel density is one-to-one correspondence between addresses in the video memory and addresses in the memory;
the GPU writes the rendering result of each layer into the corresponding memory space from the video memory according to the address mapping relation and the memory space information of each layer, and the method comprises the following steps:
and the GPU reads the rendering result of the pixel of the current layer from the video memory, and writes the read rendering result of the pixel into one address of a memory space corresponding to the memory space information according to the address mapping relation and the memory space information of the current layer.
5. The method according to claim 2, wherein if the current layer is a layer corresponding to a non-full pixel density, the GPU renders pixels of each layer according to the pixel density and the primitive information of each layer, and stores the rendering result in the video memory, including:
the GPU performs rendering on a target pixel in a rendering pixel group in the current layer according to the non-full pixel density and the primitive information of the current layer, and stores a rendering result of the target pixel in the video memory, wherein the current layer comprises a plurality of rendering pixel groups, one rendering pixel group comprises a plurality of adjacent pixels, the number of pixels in the rendering pixel group is related to the non-full pixel density, and the rendering result of the pixels in the rendering pixel group is the same.
6. The method of claim 5, wherein the address mapping relationship of the layer corresponding to the non-full pixel density is that one address in the video memory corresponds to N addresses in the memory, N is an integer greater than or equal to 2, and the value of N is equal to the number of pixels in the rendered pixel group;
the GPU writes the rendering result of each layer into the corresponding memory space from the video memory according to the address mapping relation and the memory space information of each layer, and the method comprises the following steps:
The GPU reads the rendering result of the target pixel of the current layer from the video memory;
and the GPU repeatedly writes the read rendering result of the target pixel into N continuous addresses of a memory space corresponding to the memory space information according to the address mapping relation and the memory space information of the current layer.
7. The method of any of claims 1-6, wherein the GPU renders and writes based on tiles.
8. The method of claim 7, wherein, after the CPU generates a plurality of layers for the image to be rendered according to the gaze point rendering level, the method further comprises:
the CPU divides each layer into a plurality of tiles according to the size of the storage space of the video memory, each tile comprising a plurality of contiguous pixels;
the rendering data also includes information of tiles of the layer.
9. The method of claim 8, wherein the GPU renders each layer according to the rendering data of each layer, and stores the rendering results in the video memory, comprising:
the GPU sequentially renders each tile in each layer according to the rendering data of each layer, and stores the rendering result of each tile in the video memory, wherein the pixel density of each tile in a layer is equal to the pixel density of the layer to which the tile belongs;
The GPU writes the rendering result of each layer from the video memory into the corresponding memory space according to the pixel density of each layer, and the method comprises the following steps:
the GPU reads the rendering result of one tile from the video memory each time;
and the GPU writes the rendering result of the read tile into the memory space of the layer to which the tile belongs through a single write request, according to the pixel density of the read tile.
10. The method of claim 8, wherein the tiles of the layer corresponding to full pixel density are smaller in size than the tiles of the layer corresponding to non-full pixel density.
11. The method of claim 10, wherein the tile of the layer corresponding to the non-full pixel density is N times the size of the tile of the layer corresponding to the full pixel density, N being an integer greater than 1, N being equal to the ratio of the full pixel density to the non-full pixel density.
12. A rendering device, characterized by comprising a central processing unit CPU, a graphics processing unit GPU, a video memory, and a memory;
the CPU is used for generating a plurality of layers for the image to be rendered according to the gaze point rendering level, and allocating a memory space for each layer in the memory, wherein the gaze point rendering level corresponds to a plurality of pixel densities, and the pixels corresponding to each pixel density correspond to one layer;
The CPU is further configured to send rendering data of each layer to the GPU, where the rendering data of each layer includes primitive information, pixel density and memory space information of the layer;
the GPU is used for rendering each layer according to the rendering data of each layer, and the rendering result is stored in the video memory;
the GPU is further used for writing rendering results of each layer from the video memory into corresponding memory spaces according to pixel density of each layer;
the CPU is further configured to read rendering results of the multiple layers from the memory, and synthesize the rendering results of the multiple layers to obtain a rendering result of the image to be rendered.
13. A computer readable storage medium storing a computer program for causing a computer to perform the method of any one of claims 1 to 11.
14. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1 to 11.
CN202311295469.8A 2023-10-08 2023-10-08 Gaze point rendering method, device, medium, and program Pending CN117496023A (en)

Priority Applications (1)

Application Number: CN202311295469.8A
Priority Date: 2023-10-08
Filing Date: 2023-10-08
Title: Gaze point rendering method, device, medium, and program


Publications (1)

Publication Number: CN117496023A
Publication Date: 2024-02-02

Family ID: 89677235



Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination