CN116954780A - Display screen rendering method, device, equipment, storage medium and program product


Info

Publication number
CN116954780A
Authority
CN
China
Prior art keywords
map
rendering
opaque
resolution
buffer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310092681.8A
Other languages
Chinese (zh)
Inventor
张鹤
刘海龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tencent Network Information Technology Co Ltd
Original Assignee
Shenzhen Tencent Network Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tencent Network Information Technology Co Ltd
Priority to CN202310092681.8A
Publication of CN116954780A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/20 Drawing from basic elements, e.g. lines or circles
    • G06T 11/206 Drawing of charts or graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The application discloses a method, apparatus, device, storage medium and program product for rendering a display picture, relating to the field of computer technology. The method comprises the following steps: rendering a virtual scene in a first buffer area to obtain a scene map, wherein the resolution adopted by the first buffer area is a first resolution; rendering the UI component in a second buffer area to obtain a UI map, wherein the resolution adopted by the second buffer area is a second resolution, and the second resolution is higher than the first resolution; and merging the scene map in the first buffer area with the UI map in the second buffer area to obtain a display picture. With the method provided by the embodiment of the application, the virtual scene can be downsampled independently while the UI components remain at high resolution, avoiding blurring of the UI components. And because the rendering of the virtual scene is downsampled, the number of rendered pixels can be reduced, lowering the rendering pressure and improving the running performance of the device.

Description

Display screen rendering method, device, equipment, storage medium and program product
Technical Field
The embodiments of the application relate to the field of computer technology, and in particular to a method, apparatus, device, storage medium and program product for rendering a display picture.
Background
The increase in the screen resolution of terminal devices improves the image quality of the display picture, but it also puts pressure on the operation of the device, causing heating, stuttering and similar phenomena.
In the related art, the resolution may be reduced by downsampling to cut the amount of rendering and thereby relieve the device's rendering pressure. However, the display picture becomes blurred after downsampling; for example, the UI components in the picture become blurred, which has a large impact on the visual effect.
Disclosure of Invention
The embodiments of the application provide a method, apparatus, device, storage medium and program product for rendering a display picture, which can reduce rendering pressure and help improve running performance. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a method for rendering a display screen, where the method includes:
rendering a virtual scene in a first buffer area to obtain a scene map, wherein the resolution adopted by the first buffer area is a first resolution;
rendering the UI component in a second buffer area to obtain a UI map, wherein the resolution adopted by the second buffer area is a second resolution, and the second resolution is higher than the first resolution;
and merging the scene map in the first buffer area with the UI map in the second buffer area to obtain a display picture.
In another aspect, an embodiment of the present application provides a rendering apparatus for a display screen, including:
the map rendering module is used for rendering the virtual scene in the first buffer area to obtain a scene map, and the resolution adopted by the first buffer area is a first resolution;
the map rendering module is further configured to render the UI component in a second buffer area, to obtain a UI map, where a resolution adopted by the second buffer area is a second resolution, and the second resolution is higher than the first resolution;
and the mapping merging module is used for merging the scene mapping in the first buffer area with the UI mapping in the second buffer area to obtain a display picture.
In another aspect, an embodiment of the present application provides a computer device, the computer device including a processor and a memory, wherein at least one instruction, at least one program, a code set, or an instruction set is stored in the memory, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for rendering a display picture according to the above aspect.
In another aspect, there is provided a computer-readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method for rendering a display picture described in the above aspects.
In another aspect, embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the method for rendering a display picture provided in the above aspect.
The technical scheme provided by the embodiments of the application brings at least the following beneficial effects:
In the embodiment of the application, the virtual scene is rendered in the low-resolution first buffer area and the UI component in the high-resolution second buffer area, and the two are then merged to obtain the display picture. In this way, the virtual scene can be downsampled independently while the UI components remain at high resolution, avoiding blurring of the UI components. And because the rendering of the virtual scene is downsampled, the number of rendered pixels can be reduced and the rendering pressure lowered, avoiding the frame drops caused by stuttering under excessive rendering pressure and improving device running performance.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for rendering a display picture provided by an exemplary embodiment of the present application;
Fig. 2 is a flowchart of a method for rendering a display picture provided by another exemplary embodiment of the present application;
Fig. 3 is a schematic diagram of an opaque depth map provided by an exemplary embodiment of the present application;
Fig. 4 is a schematic diagram of pictures in the rendering process of a display picture provided by an exemplary embodiment of the present application;
Fig. 5 is a flowchart of a method for rendering a display picture provided by another exemplary embodiment of the present application;
Fig. 6 is a flowchart of the rendering process of a display picture provided by an exemplary embodiment of the present application;
Fig. 7 is a schematic diagram of the effect of special-effect downsampling provided by an exemplary embodiment of the present application;
Fig. 8 is a flowchart of a method for rendering a display picture provided by another exemplary embodiment of the present application;
Fig. 9 is a block diagram of the structure of an apparatus for rendering a display picture provided by an exemplary embodiment of the present application;
Fig. 10 is a block diagram of the structure of a computer device provided by an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
References herein to "a plurality" mean two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates an "or" relationship between the objects before and after it.
In the related art, the process of rendering a display picture includes a virtual-scene rendering process and a user interface (UI) rendering process, and the results of the two are merged into a frame buffer after rendering is completed. During rendering, the two share the same frame buffer. Because the frame buffer is one and the same, the rendered virtual scene and UI have the same resolution: if downsampling is needed, the virtual scene and the UI must be downsampled together, which blurs the UI; if no downsampling is performed, the amount of rendering data is large, and stuttering and frame loss may occur.
Therefore, to solve the above problems, an embodiment of the present application provides a method for rendering a display picture: different buffers are created so that the virtual scene and the UI are rendered in separate buffers, which enables a separate downsampling process for the virtual scene and reduces the amount of rendering data. This saves resources, for example reducing the occupancy of the central processing unit (Central Processing Unit, CPU) and the graphics processing unit (Graphics Processing Unit, GPU) during rendering, and also helps save power, reduce heating, and improve running performance.
The method provided by the embodiments of the application can be applied to the rendering of game pictures, and can also be applied to the rendering of other pictures that include a scene and UI components, such as virtual social scene pictures; this embodiment does not limit the application scenario.
The method provided by the embodiment of the application can be applied to computer equipment. The computer device is a device with an image rendering function, and the device may be a terminal such as a smart phone, a tablet computer or a personal computer, or may be a server, which is not limited in this embodiment. The following will describe exemplary embodiments.
Referring to fig. 1, a flowchart of a method for rendering a display picture provided by an exemplary embodiment of the present application is shown. This embodiment is described by taking execution of the method by a computer device as an example; the method includes the following steps:
Step 101: rendering a virtual scene in a first buffer area to obtain a scene map, wherein the resolution adopted by the first buffer area is a first resolution.
A frame buffer (Frame Buffer) is a memory buffer for storing rendering data. The picture shown on a display screen is composed of pixels, the brightness and color of each pixel being controlled by 4 to 64 bits of data, and a number of pixels together form one frame of the display picture. To keep the display picture smooth, the pixel data of several frames are stored in a frame buffer, achieving a buffering effect. The frame buffer is a direct image of the picture shown on the display screen and is also called a bit map (Bit Map) or raster. Each storage unit in the frame buffer corresponds to one pixel on the screen, and the whole frame buffer corresponds to one frame of image; finally, the pixel data of one frame of image is read from the frame buffer and converted into the graphics shown on the display screen.
The resolution of a frame buffer is preset and cannot be changed after the frame buffer is created; when pixels are rendered into the frame buffer, they are rendered at the resolution corresponding to that frame buffer. In the related art, one frame of picture is rendered in a single frame buffer, so the whole frame uses the same resolution, and downsampling the virtual scene would require downsampling the UI components at the same time. To downsample only the virtual scene, the embodiment of the application creates a first buffer area and a second buffer area, realizing independent rendering of the virtual scene and the UI component.
In one possible implementation, the computer device may create a first buffer area with the first resolution for rendering the virtual scene. Optionally, the virtual scene is the visible range corresponding to the main camera and includes all display elements other than UI component elements, for example objects such as the virtual background, virtual characters and special effects. Rendering the virtual scene in the low-resolution first buffer area realizes downsampling of the virtual scene and reduces the number of rendered pixels.
Step 102: rendering the UI component in a second buffer area to obtain a UI map, wherein the resolution adopted by the second buffer area is a second resolution, and the second resolution is higher than the first resolution.
Because the UI component may display prompt information, interaction information and the like, downsampling it could blur that information. Therefore, in one possible implementation, the computer device creates a second buffer area with a second resolution, higher than the first resolution, for rendering the UI component; rendering the UI component at high resolution ensures the clarity of the UI.
Optionally, the second resolution may be the original resolution, i.e., the screen resolution; other resolutions may also be set as required. Illustratively, the second resolution may be 1920×1080 and the first resolution 1280×720: the virtual scene is rendered at the first resolution 1280×720, and the UI components at the second resolution 1920×1080.
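As an illustration of how such fixed-resolution buffers might be created, the following is a minimal OpenGL sketch. This is an assumption for explanatory purposes only; the embodiment does not prescribe any particular graphics API, and the helper names makeTexture and makeFramebuffer are invented here. The depth texture attached to the first buffer anticipates the opaque depth map discussed later.

```cpp
#include <GL/glew.h>  // assumes a current OpenGL context has been created

GLuint makeTexture(int w, int h, GLenum internalFmt, GLenum fmt, GLenum type) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, internalFmt, w, h, 0, fmt, type, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}

GLuint makeFramebuffer(GLuint colorTex, GLuint depthTex) {
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    if (depthTex != 0)
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, depthTex, 0);
    return fbo;
}

GLuint sceneColor, sceneDepth, sceneFbo;  // first buffer (first resolution)
GLuint uiColor, uiFbo;                    // second buffer (second resolution)

void createRenderTargets() {
    // First buffer: the virtual scene at the lower first resolution.
    // Its depth attachment later serves as the opaque depth map.
    sceneColor = makeTexture(1280, 720, GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE);
    sceneDepth = makeTexture(1280, 720, GL_DEPTH_COMPONENT24,
                             GL_DEPTH_COMPONENT, GL_FLOAT);
    sceneFbo   = makeFramebuffer(sceneColor, sceneDepth);

    // Second buffer: the UI components at the higher second resolution.
    uiColor = makeTexture(1920, 1080, GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE);
    uiFbo   = makeFramebuffer(uiColor, 0);
}
```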
Step 103: merging the scene map in the first buffer area with the UI map in the second buffer area to obtain a display picture.
The virtual scene is rendered in the first buffer area to obtain the scene map, and the UI component is rendered in the second buffer area to obtain the UI map; finally, the computer device merges the scene map and the UI map to obtain the display picture shown on the display screen. Compared with the related-art approach of rendering the whole display picture in a single buffer area, the scheme provided by the embodiment of the application creates different buffer areas to render the virtual scene and the UI component independently, so the virtual scene can be downsampled while the UI component stays sharp and its information is unaffected.
In one possible implementation, when the scene map is merged with the UI map, the scene map may be pasted into the second buffer area to complete the merge of the scene map and the UI map and obtain the display picture.
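Under the same illustrative OpenGL assumptions as above, the merge of step 103 could be sketched as an upscaling blit of the low-resolution scene map into the second buffer before the UI components are drawn on top of it; glBlitFramebuffer with linear filtering performs the enlargement.

```cpp
// Paste the scene map into the second buffer, scaling it up from the
// first resolution to the second resolution (illustrative sketch).
glBindFramebuffer(GL_READ_FRAMEBUFFER, sceneFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, uiFbo);
glBlitFramebuffer(0, 0, 1280, 720,     // source rectangle: scene map
                  0, 0, 1920, 1080,    // destination rectangle: UI buffer
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);
// The UI components are then rendered into uiFbo at full resolution,
// producing the merged display picture.
```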
In summary, in the embodiment of the application, the virtual scene is rendered in the low-resolution first buffer area and the UI component in the high-resolution second buffer area, and the two are then merged to obtain the display picture. In this way, the virtual scene can be downsampled independently while the UI components remain at high resolution, avoiding blurring of the UI components. And because the rendering of the virtual scene is downsampled, the number of rendered pixels is reduced and the rendering pressure lowered, avoiding the frame drops caused by stuttering under excessive rendering pressure and improving device running performance.
The virtual scene contains opaque objects, such as virtual background, virtual characters, etc., and may also contain semitransparent objects, such as special effects, etc. In one possible implementation, the computer device may further downsample translucent objects such as special effects to further reduce the rendering effort. The following will describe exemplary embodiments.
Referring to fig. 2, a flowchart of a method for rendering a display picture provided by another exemplary embodiment of the present application is shown. This embodiment is described by taking execution of the method by a computer device as an example; the method includes the following steps:
Step 201: rendering opaque objects in the virtual scene in the first buffer area to obtain an opaque object map.
When the virtual scene is rendered, the rendering object comprises an opaque object and a semitransparent object in the virtual scene. Illustratively, the opaque object may be a virtual character, the translucent object may be a character effect, or the like.
In one possible implementation, the computer device may render all objects contained in the virtual scene within the first buffer, resulting in a scene map.
In another possible embodiment, since the sharpness of a semitransparent object has less influence on the visual effect, the semitransparent object may be downsampled further to reduce the number of pixels rendered for it, thereby further reducing the rendering workload. The opaque objects in the virtual scene are rendered in the first buffer area to obtain the opaque object map; the semitransparent objects are rendered in a lower-resolution buffer area, achieving further downsampling of the semitransparent objects.
Step 202: rendering semitransparent objects in the virtual scene in a third buffer area to obtain a semitransparent object map, wherein the resolution adopted by the third buffer area is a third resolution, and the third resolution is lower than the first resolution.
In one possible implementation, the computer device may create a third buffer area for rendering the semitransparent object, the third buffer area adopting a third resolution lower than the first resolution. Optionally, the third resolution may be set arbitrarily as required, or determined from the first resolution; for example, it may be 1/2 or 1/4 of the first resolution, which this embodiment does not limit. Illustratively, the third resolution may be 480×270.
After creating the third buffer area, the computer device may render the semitransparent object alone in it to obtain the semitransparent object map. Illustratively, the virtual background and virtual characters can be rendered at the first resolution 1280×720 to obtain the opaque object map corresponding to the virtual background and virtual characters, and the special effect at the third resolution 480×270 to obtain the semitransparent object map corresponding to the special effect.
When the computer device renders the semitransparent object in the third buffer area, some pixels of the semitransparent object need to be culled during rendering to avoid occluding the opaque object.
In one possible implementation, the semitransparent object is rendered in the third buffer area based on the occlusion relation between the semitransparent object and the opaque object, yielding a semitransparent object map from which the pixels of the semitransparent object that would occlude the opaque object have been culled.
The occlusion relation between the semitransparent object and the opaque object can be determined from the depth information of pixels during rendering. The computer device may determine from the depth information of pixels where the semitransparent object occludes the opaque object, and render the semitransparent object map in the third buffer area accordingly. The process may include the following steps:
Step one: rendering the pixels corresponding to the semitransparent object in the third buffer area.
In one possible implementation, the pixels corresponding to the semitransparent object are first rendered in the third buffer area. After each pixel is rendered, whether it must be culled is judged according to the occlusion relation: if so, the pixel is not written into the third buffer area; otherwise, it is written into the third buffer area.
Step two: determining, based on the depth information of the semitransparent object and the depth information of the opaque object at the target position indicated by the opaque depth map, the pixels of the semitransparent object for which occlusion exists, wherein the target position is the position at which pixels of the semitransparent object and of the opaque object overlap.
During the rendering of each pixel, the rendering data includes the pixel's depth information; for the opaque object, an opaque depth map containing the depth information of each of its pixels is obtained after rendering. After the computer device finishes rendering a pixel belonging to the semitransparent object, it determines whether that pixel occludes the opaque object according to the pixel's depth information and the depth information of the opaque pixel at the corresponding target position in the opaque depth map. The process may include the following steps:
Step 1: downsampling the opaque depth map based on the third resolution to obtain an updated opaque depth map.
Because the opaque object is rendered at the first resolution, the opaque depth map obtained from that rendering is also at the first resolution, while the depth information of the semitransparent object is at the third resolution. The opaque depth map therefore needs to be downsampled to the third resolution, so that whether a pixel must be culled can be judged from the depth information at the corresponding position.
Illustratively, as shown in fig. 3, the semitransparent object map 301 is rendered in the third buffer area at the third resolution, and the opaque depth map needs to be downsampled according to the third resolution, yielding the updated opaque depth map 302.
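For illustration, the downsampling of the opaque depth map could look like the following CPU-side sketch. This is an assumption made for clarity: in practice the step would typically run on the GPU, and the choice of nearest-neighbour point sampling is likewise an assumption rather than something the description prescribes.

```cpp
#include <vector>

// Reduce a first-resolution depth map (e.g. 1280x720) to the third
// resolution (e.g. 480x270) by point-sampling the nearest source texel.
std::vector<float> downsampleDepth(const std::vector<float>& src,
                                   int srcW, int srcH, int dstW, int dstH) {
    std::vector<float> dst(static_cast<size_t>(dstW) * dstH);
    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            int sx = x * srcW / dstW;  // corresponding source column
            int sy = y * srcH / dstH;  // corresponding source row
            dst[static_cast<size_t>(y) * dstW + x] =
                src[static_cast<size_t>(sy) * srcW + sx];
        }
    }
    return dst;  // the updated opaque depth map
}
```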
Step 2: when the depth information of a first pixel is smaller than the depth information of a second pixel, determining that the first pixel is a pixel for which occlusion exists, wherein the first pixel is a pixel of the semitransparent object and the second pixel is the pixel at the position corresponding to the first pixel in the updated opaque depth map.
With the updated opaque depth map obtained, each pixel of the semitransparent object can be judged against the depth information of the pixel at the same position in the updated opaque depth map. When the depth information of a first pixel belonging to the semitransparent object is smaller than the depth information of the corresponding second pixel in the map, the first pixel would occlude the second pixel; that is, the first pixel is a pixel for which occlusion exists and needs to be culled. Illustratively, when a first pixel a of the semitransparent object is at a target position A, the pixel at position A of the updated opaque depth map is a second pixel B; the depth value of the first pixel a is compared with that of the second pixel B, and when the depth value of the first pixel a is smaller than that of the second pixel B, the first pixel a is culled.
In one possible implementation, the computer device may render all pixels of the semitransparent object first, then make the judgment for each pixel and cull those for which occlusion exists; in another possible implementation, the judgment is made from the depth information each time a pixel is rendered, and a pixel that must be culled is simply not written into the third buffer area.
Step three: culling the pixels for which occlusion exists to obtain the semitransparent object map.
After the pixels for which occlusion exists are culled, the final semitransparent object map is obtained and stored in the third buffer area.
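A CPU-side sketch of this culling rule is given below. It is illustrative only: the RGBA structure and buffer layout are invented, and the comparison direction follows the convention stated above, under which a semitransparent pixel whose depth value is smaller than the opaque depth at the same position is culled.

```cpp
#include <vector>

struct RGBA { unsigned char r, g, b, a; };

// Conditionally write one semitransparent fragment into the third buffer,
// using the updated opaque depth map produced by the sketch above.
void writeTranslucentPixel(std::vector<RGBA>& thirdBuffer,
                           const std::vector<float>& updatedOpaqueDepth,
                           int width, int x, int y,
                           RGBA color, float fragmentDepth) {
    size_t idx = static_cast<size_t>(y) * width + x;
    // First pixel: the semitransparent fragment; second pixel: the depth
    // value at the same position in the updated opaque depth map.
    if (fragmentDepth < updatedOpaqueDepth[idx]) {
        return;  // occlusion exists: cull the pixel, write nothing
    }
    thirdBuffer[idx] = color;  // no occlusion: keep the pixel
}
```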
Step 203: merging the opaque object map and the semitransparent object map to obtain the scene map.
After the translucent object map is obtained, the opaque object map and the translucent object map are combined to obtain the scene map.
In one possible implementation, the semitransparent object map may be merged into the first buffer area. During merging, the semitransparent object map is enlarged so that its scale matches that of the opaque object map, after which the two are merged to obtain the scene map.
Illustratively, as shown in fig. 4, an opaque object map 402 at the first resolution is rendered in the first buffer area and a special-effect map (semitransparent object map) 401 at the third resolution in the third buffer area; the special-effect map 401 at the third resolution and the opaque object map 402 at the first resolution can then be merged to obtain the scene map 403.
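Continuing the illustrative OpenGL sketch, the fusion might look as follows. drawFullscreenQuad() and translucentColorTex are hypothetical names, and the blend function is a common default rather than anything prescribed by this description.

```cpp
extern GLuint translucentColorTex;        // color map from the third buffer
void drawFullscreenQuad(GLuint texture);  // hypothetical textured-quad helper

void fuseTranslucentIntoScene() {
    // Merge the semitransparent object map into the first buffer. Linear
    // filtering on the texture performs the enlargement from the third
    // resolution to the first resolution during this draw.
    glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);
    glViewport(0, 0, 1280, 720);                        // first resolution
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // standard alpha blend
    drawFullscreenQuad(translucentColorTex);
    glDisable(GL_BLEND);
    // sceneFbo now holds the scene map (opaque + semitransparent objects).
}
```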
Step 204: rendering the UI component in the second buffer area to obtain a UI map.
In one possible implementation, the second resolution may be the original resolution, i.e., the screen resolution of the display screen. In this case, the second buffer may be a default buffer, i.e., the UI component is rendered within the default buffer, resulting in a UI map.
Step 205: moving the scene map onto the UI image component, wherein the UI image component is used to display maps.
After the scene map and the UI map are obtained, the two are merged. In one possible implementation, the scene map may be moved onto the UI image component. The UI image component is a UI RawImage, a map-display component that can show a camera-rendered map on the UI; once the scene map is moved onto the UI RawImage component, the scene map can be displayed on the UI.
In one possible implementation, after the semitransparent object map and the opaque object map are merged to obtain the scene map, the scene map also needs to be post-processed to adjust the picture, and the post-processed scene map is then moved onto the UI RawImage component.
Step 206: merging the UI image component containing the scene map with the UI map to obtain a display picture.
After the scene map is moved onto the UI image component, the UI image component can be merged with the UI map to obtain the display picture that includes the UI components. It should be noted that the merged display picture is stored in the second buffer area; when it subsequently needs to be shown on the display screen, it can be read from the second buffer area.
Illustratively, as shown in fig. 4, a UI map 404 at the second resolution is rendered in the second buffer area, and the scene map 403 and the UI map 404 can then be merged to obtain the display picture 405.
In this embodiment, opaque objects and semitransparent objects may be downsampled to different extents during the rendering of the virtual scene, i.e., a double-downsampling scheme is adopted in which semitransparent objects are downsampled further than opaque objects. This further reduces the rendering workload and helps improve running performance, for example reducing CPU and GPU usage and the heating caused by picture rendering.
Moreover, during the rendering of the semitransparent object, pixels of the semitransparent object are culled according to the depth information of the semitransparent object and of the opaque object, preventing the semitransparent object from occluding the opaque object after rendering and affecting the visual effect.
In the rendering process, the depth information and color information of an object are generally rendered to obtain a depth map and a color map; the depth map assists the rendering of pixels, and the color maps are finally merged to obtain the display picture. The following describes exemplary embodiments.
Referring to fig. 5, a flowchart of a method for rendering a display picture provided by another exemplary embodiment of the present application is shown. This embodiment is described by taking execution of the method by a computer device as an example; the method includes the following steps:
Step 501: rendering an opaque object in the virtual scene in the first buffer area to obtain an opaque color map and an opaque depth map, wherein the opaque color map is used to indicate color information of the opaque object.
During the rendering of the opaque object, its depth and color are rendered simultaneously, and an opaque color map and an opaque depth map can be obtained. In one possible implementation, the first buffer area includes a first color buffer area and a first depth buffer area; the rendered opaque color map may be stored in the first color buffer area and the opaque depth map in the first depth buffer area. The opaque color map in the first color buffer area records the color data of each pixel corresponding to the opaque object; the opaque depth map in the first depth buffer area records the depth value of each pixel, which is used to determine the occlusion relations between pixels and ensure correct rendering.
Step 502: rendering a semitransparent object in the virtual scene in the third buffer area to obtain a semitransparent color map, wherein the semitransparent color map is used to indicate color information of the semitransparent object.
In one possible implementation, a semitransparent color map may be rendered for the semitransparent object and stored in the third buffer area. The depth information of the semitransparent object only needs to be obtained during rendering and compared with the depth indicated by the opaque depth map in the first depth buffer area to determine whether pixels must be culled; it does not need to be stored. That is, for a semitransparent object, only its semitransparent color map needs to be stored in the third buffer area, and no depth map of the semitransparent object needs to be stored.
Step 503: merging the opaque color map and the semitransparent color map to obtain the scene map.
When the maps are merged, it is actually the color maps that are merged; the depth maps only assist in determining whether pixels must be culled. The opaque color map indicates the color information of the opaque object, the semitransparent color map indicates that of the semitransparent object, and merging the two yields the scene map.
The merged scene map is stored in a first color buffer and then merged with the UI map in a second buffer.
Step 504: deleting the opaque depth map in the first buffer area.
In the embodiment of the application, the opaque depth map is used to determine the pixels to be culled during the rendering of the semitransparent object; once the semitransparent object has been rendered, the opaque depth map is no longer needed, so the opaque depth map in the first buffer area can be deleted to avoid continued occupation of storage space. In one possible implementation, the opaque depth map in the first depth buffer area may be deleted.
Step 505: rendering the UI component in the second buffer area to obtain a UI map.
Reference may be made to the above embodiments for implementation of this step, which are not repeated here.
Step 506: merging the scene map in the first buffer area with the UI map in the second buffer area to obtain the display picture.
In one possible implementation, the scene map in the first color buffer area is merged into the second buffer area to complete the merge with the UI map and obtain the display picture.
In this embodiment, a semitransparent color map of the semitransparent object and an opaque color map of the opaque object are rendered; during rendering, the occlusion relation is determined from the depth information indicated by the opaque depth map so that unnecessary pixels can be culled. The semitransparent color map and the opaque color map are finally merged into the scene map, which is then merged with the UI map to obtain the display picture, realizing downsampling of the virtual scene.
Moreover, because some pixels of the semitransparent object are culled according to the opaque depth map, the opaque depth map can be deleted once the scene map has been rendered, which helps save storage space.
Referring to fig. 6, the rendering process of the display picture provided by the embodiment of the application mainly includes two stages: redirecting the main camera to a low resolution 601 and drawing onto the UI 602. After drawing onto the UI, the display picture is written into the final frame buffer to await display.
Redirecting the main camera to a low resolution 601 is the process of downsampling and rendering the virtual scene. Because a typical rendering process creates only the default buffer area for rendering the display picture, the embodiment of the application must create a low-resolution first buffer area to achieve separate downsampling of the virtual scene, i.e., to redirect the main camera to the low resolution. This stage includes the virtual-scene rendering process 603, which may cover the rendering of opaque objects (background, characters, etc.) 604 and semitransparent objects (special effects) 605. The semitransparent objects 605 are rendered into the third buffer area to achieve further downsampling; during their rendering, pixels may be culled according to the opaque depth map in the depth buffer area to avoid occluding the opaque objects 604. Rendering fusion 606 can then be performed, i.e., the semitransparent color map corresponding to the semitransparent objects 605 is merged with the opaque color map corresponding to the opaque objects 604 into the first color buffer area.
The merged scene map is then drawn onto the UI, i.e., merged with the UI map; before merging, the scene map can be post-processed to enhance the picture effect. In the drawing-onto-UI stage 602, the post-processed scene map is moved onto the UI RawImage component and then merged with the UI maps of the other UI components to obtain the display picture, which is written into the frame buffer area, completing the rendering of the display picture.
Combining the above example, with the first resolution 1280×720, the second resolution 1920×1080 and the third resolution 480×270, a 1280×720 first buffer area, a 1920×1080 second buffer area and a 480×270 third buffer area are created during rendering. Opaque objects are first rendered in the first buffer area and semitransparent objects in the third buffer area; the 480×270 semitransparent object map is then merged onto the 1280×720 opaque object map to obtain the scene map corresponding to the virtual scene, and the scene map is finally drawn onto the 1920×1080 UI to complete rendering.
When the semitransparent object is rendered, rendering is performed at the third resolution. As shown in fig. 7, display picture 701 is obtained when the special effect is rendered without downsampling, and display picture 702 when the special effect is downsampled further than the background and characters (the special-effect resolution being 1/4 of the background/character resolution). The degree of blurring differs at different resolutions, and different downsampling coefficients, i.e., resolutions, can be set as required.
In one possible implementation, the third resolution may be determined according to the user's image-quality requirement, with which it is positively correlated. The image-quality requirement can be determined from the picture-sharpness setting: the correspondence between image-quality levels and the third resolution can be stored in the computer device in advance, and the third resolution is determined during rendering from the picture sharpness set by the user. Illustratively, the third resolution for low image quality is 1/4 of the first resolution and for medium image quality 1/2 of the first resolution, while at high image quality the third resolution may equal the first resolution, i.e., the semitransparent object is not further downsampled. When the picture sharpness set by the user is determined to be low, the third resolution for rendering the semitransparent object is determined to be 1/4 of the first resolution.
In another possible implementation, the first resolution used for rendering opaque objects may also be determined from the picture sharpness set by the user. Illustratively, the relation between image-quality levels and the first resolution may be: the first resolution for low image quality is 1/2 of the second resolution (the original resolution), for medium image quality 2/3 of the second resolution, and for high image quality 3/4 of the second resolution, while at very high image quality the first resolution equals the second resolution. When the picture sharpness set by the user is low, the first resolution for rendering opaque objects is determined to be 1/2 of the second resolution.
In the rendering process, the first and third resolutions can thus be determined from the picture sharpness set by the user; the first and third buffer areas are then created based on the determined resolutions and the rendering of the virtual scene completed in them, so the downsampling rate can be adjusted dynamically according to user needs.
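As a sketch of this dynamic adjustment, the ratios given in the two examples above could be encoded as follows; the enum and function names are illustrative assumptions.

```cpp
enum class ImageQuality { Low, Medium, High, VeryHigh };

// Scale of the first resolution relative to the second (original) resolution.
float firstResolutionScale(ImageQuality q) {
    switch (q) {
        case ImageQuality::Low:      return 1.0f / 2.0f;
        case ImageQuality::Medium:   return 2.0f / 3.0f;
        case ImageQuality::High:     return 3.0f / 4.0f;
        case ImageQuality::VeryHigh: return 1.0f;  // no downsampling
    }
    return 1.0f;
}

// Scale of the third resolution relative to the first resolution.
float thirdResolutionScale(ImageQuality q) {
    switch (q) {
        case ImageQuality::Low:    return 1.0f / 4.0f;
        case ImageQuality::Medium: return 1.0f / 2.0f;
        default:                   return 1.0f;  // high quality: no further downsampling
    }
}
```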
The rendering mode adopted in the embodiment of the application is off-screen rendering: rendering data is stored in off-screen buffer areas, and the rendering data of multiple layers is placed into the frame buffer area after composition. The first buffer area, second buffer area and third buffer area created in the embodiment of the application may all be off-screen buffer areas; that is, the opaque objects, semitransparent objects and UI components are first rendered off screen, after which the maps are merged and written into the frame buffer area. Because of the off-screen rendering mode, the caches can be reused. In one possible implementation, since the sharpness of the semitransparent object has little influence on the visual effect, the buffer of the semitransparent object can be reused, lowering the rendering frame rate of the semitransparent object and realizing frame-dropping rendering.
Referring to fig. 8, a flowchart of a method for rendering a display picture provided by another exemplary embodiment of the present application is shown. This embodiment is described by taking execution of the method by a computer device as an example; the method includes the following steps:
Step 801: rendering opaque objects in the virtual scene in the first buffer area based on a first frame rate to obtain an opaque object map.
In one possible implementation, the same frame rate may be used for rendering both opaque objects and semitransparent objects.
In another possible implementation, different frame rates may be used for rendering opaque objects and semitransparent objects. The computer device may render the opaque objects at the first frame rate to obtain the opaque object map. The first frame rate may be the frame rate used for ordinary rendering; that is, opaque objects such as the virtual background and virtual characters in the virtual scene are rendered at the normal frame rate.
Illustratively, the first frame rate may correspond to 60 frames per second (Frames Per Second, FPS), i.e., the opaque objects are rendered at 60 FPS.
Step 802: rendering the semitransparent objects in the virtual scene in the third buffer area based on a second frame rate to obtain a semitransparent object map, wherein the second frame rate is lower than the first frame rate.
For the semitransparent objects in the virtual scene, the semitransparent map already rendered in the cache can be reused; that is, the semitransparent objects can be rendered at a second frame rate lower than the first frame rate, realizing frame-dropping rendering of the semitransparent objects.
Illustratively, the second frame rate may be 30 FPS; with a first frame rate of 60 FPS, one frame buffer can be multiplexed while rendering the semitransparent objects at 30 FPS, so that for 60 frames of opaque object maps rendered per second only 30 frames of semitransparent object maps need to be rendered, reducing the rendering frame rate of the semitransparent objects. For another example, when the second frame rate is 15 FPS, two frame buffers can be multiplexed during the rendering of the semitransparent objects, and only 15 frames of semitransparent object maps need to be rendered per 60 frames of opaque object maps, further reducing the rendering frame rate of the semitransparent objects.
In one possible implementation, the second frame rate may be a preset frame rate, with rendering performed directly at it. In another possible implementation, the second frame rate may be adjusted dynamically according to the user's image-quality requirement, with which it is positively correlated. Illustratively, the correspondence between the second frame rate and image quality may be: at low image quality the second frame rate is 1/4 of the first frame rate, at medium image quality 1/2 of the first frame rate, and at high image quality the same as the first frame rate. When the picture sharpness set by the user corresponds to medium image quality, the second frame rate is determined to be 1/2 of the first frame rate, and the opaque objects and the semitransparent objects are rendered at their different frame rates to obtain the virtual scene.
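A per-frame sketch of the frame-dropping scheme follows; every function called here is a hypothetical stand-in for the corresponding stage described above, and dropFactor would be 2 for a 30 FPS second frame rate or 4 for 15 FPS against a 60 FPS first frame rate.

```cpp
// Hypothetical stage functions standing in for the steps of this method.
void renderOpaqueObjects();        // step 801: first buffer, first frame rate
void renderTranslucentObjects();   // step 802: third buffer, second frame rate
void mergeTranslucentIntoScene();  // step 803: fuse into the scene map
void renderUiComponents();         // step 804: second buffer, UI map
void mergeSceneIntoUi();           // step 805: scene map + UI map

void renderFrame(long frameIndex, int dropFactor) {
    renderOpaqueObjects();                // every frame
    if (frameIndex % dropFactor == 0) {
        renderTranslucentObjects();       // only every dropFactor-th frame
    }
    // On skipped frames the semitransparent object map already cached in
    // the third buffer is reused as-is (the cache is multiplexed).
    mergeTranslucentIntoScene();
    renderUiComponents();
    mergeSceneIntoUi();
}
```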
Step 803: merging the opaque object map with the semitransparent object map to obtain the scene map.
The implementation of step 803 may refer to step 203 in the above embodiment, and this embodiment is not repeated.
Step 804: rendering the UI component in the second buffer area to obtain a UI map.
In one possible implementation, the UI component may also be rendered at the first frame rate, resulting in a UI map.
Step 805: merging the scene map in the first buffer area with the UI map in the second buffer area to obtain a display picture.
The implementation of step 805 may refer to steps 205 to 206 in the above embodiments, which are not described in detail in this embodiment.
In this embodiment, different frame rates are used for rendering opaque objects and semitransparent objects; the rendering of the semitransparent objects can reuse the cache so as to render at a lower frame rate, reducing the rendering workload through frame-dropping rendering and further saving the resources required for rendering.
Moreover, during frame-dropping rendering of the semitransparent objects, the second frame rate can be adjusted dynamically according to the picture sharpness set by the user, so that the picture effect meets the user's requirements.
Performance tests were performed on the rendering schemes provided in the related art and the rendering scheme provided in the embodiment of the application, with the following results:
Different types of devices were used in testing. The original resolution of device A is 1334×750; with a downsampling factor of 0.6667, the resolution of the downsampled virtual scene is 890×500 and the UI resolution is 1334×750. The test results for device A are shown in Table 1:
TABLE 1
                RT      CMD     DR1     DR2     RI      RAW
FPS             57.77   57.72   58.01   58.24   58.19   58.20
CPU utilization 30.88%  31.12%  32.13%  30.22%  30.59%  30.50%
GPU utilization 54.64%  52.61%  53.61%  57.76%  48.97%  61.09%
Temperature     41.62   40.98   42.09   41.69   39.86   41.62
The original resolution of device B is 2436×1125; with a downsampling factor of 0.6667, the resolution of the downsampled virtual scene is 1624×750 and the UI resolution is 2436×1125. The test results for device B are shown in Table 2:
TABLE 2
                  RT      CMD     DR1     DR2     RI      RAW
FPS               59.34   59.12   59.79   59.85   59.77   59.70
CPU utilization   15.86%  15.80%  14.28%  14.93%  14.15%  14.44%
GPU utilization   57.25%  57.34%  42.90%  56.61%  44.04%  46.83%
Temperature       38.59   37.84   36.67   37.41   36.23   36.35
Power consumption 467.79  465.35  421.23  440.56  416.65  426.91
The original resolution of device C is 1136×640; with a downsampling factor of 0.75, the resolution of the downsampled virtual scene is 854×480 and the UI resolution is 1136×640. The test results for device C are shown in Table 3:
TABLE 3
                RT      CMD     RI      RAW
FPS             20.17   20.23   20.41   18.96
CPU utilization 57.52%  60.07%  54.83%  58.87%
GPU utilization 22.80%  22.92%  16.33%  13.71%
Temperature     67.54   67.57   66.84   67.52
Here RI denotes the method provided by the embodiment of the present application, while RT, CMD, DR1, DR2 and RAW are related-art schemes. The performance test results show that the method provided by the embodiment of the application improves performance in different respects: for device A, the RI scheme increases the frame rate (FPS) and reduces GPU utilization and heat generation; for device B, it reduces CPU utilization, power consumption and heat generation; for device C, it increases the frame rate (FPS) and reduces CPU utilization, GPU utilization and heat generation. In terms of CPU utilization, GPU utilization, FPS, temperature and power consumption, the scheme provided by the embodiment of the application can therefore save rendering resources and improve device running performance compared with the related-art schemes.
Fig. 9 is a block diagram of a display screen rendering apparatus according to an exemplary embodiment of the present application, and as shown in fig. 9, the apparatus includes:
the map rendering module 901 is configured to render a virtual scene in a first buffer area, so as to obtain a scene map, where a resolution adopted by the first buffer area is a first resolution;
the map rendering module 901 is further configured to render a UI component in a second buffer area, to obtain a UI map, where a resolution adopted by the second buffer area is a second resolution, and the second resolution is higher than the first resolution;
and a map merging module 902, configured to merge the scene map in the first buffer area with the UI map in the second buffer area to obtain a display picture.
Optionally, the map rendering module 901 is further configured to:
rendering opaque objects in the virtual scene in the first buffer zone to obtain an opaque object map;
rendering semitransparent objects in the virtual scene in a third buffer zone to obtain a semitransparent object map, wherein the resolution adopted by the third buffer zone is a third resolution which is lower than the first resolution;
and combining the opaque object map and the semitransparent object map to obtain the scene map.
Optionally, the map rendering module 901 is further configured to:
and rendering the semitransparent object in the third buffer zone based on the shielding relation between the semitransparent object and the opaque object to obtain the semitransparent object map, wherein pixels which are used for shielding the opaque object in the semitransparent object are removed.
Optionally, the opaque object map includes an opaque depth map, the opaque depth map being used to indicate depth information of the opaque object;
the map rendering module 901 is further configured to:
rendering the pixel points corresponding to the semitransparent object in the third buffer area;
determining, based on the depth information of the semitransparent object and the depth information of the opaque object at the target position indicated by the opaque depth map, the pixel points of the semitransparent object for which occlusion exists, wherein the target position is the position at which pixel points of the semitransparent object and of the opaque object overlap;
and culling the pixel points for which occlusion exists to obtain the semitransparent object map.
Optionally, the map rendering module 901 is further configured to:
downsampling the opaque depth map based on the third resolution to obtain an updated opaque depth map;
and determining, when the depth information of the first pixel point is smaller than the depth information of the second pixel point, that the first pixel point is a pixel point for which occlusion exists, wherein the first pixel point is a pixel point of the semitransparent object, and the second pixel point is the pixel point at the position corresponding to the first pixel point in the updated opaque depth map.
Optionally, the map rendering module 901 is further configured to:
rendering an opaque object in the virtual scene in the first buffer zone to obtain an opaque color map and an opaque depth map, wherein the opaque color map is used for indicating color information of the opaque object;
rendering a semitransparent object in the virtual scene in a third buffer zone to obtain a semitransparent color map, wherein the semitransparent color map is used for indicating color information of the semitransparent object;
and merging the opaque color map and the semitransparent color map to obtain the scene map.
Optionally, the apparatus further includes:
and the map deleting module is used for deleting the opaque depth map in the first buffer area.
Optionally, the map rendering module 901 is further configured to:
rendering the opaque objects in the virtual scene in the first buffer area based on a first frame rate to obtain the opaque object map;
and rendering the semitransparent objects in the virtual scene in the third buffer zone based on a second frame rate to obtain the semitransparent object map, wherein the second frame rate is smaller than the first frame rate.
Optionally, the map merging module 902 is further configured to:
moving the scene map onto a UI image component, the UI image component being used to display maps;
and merging the UI image component containing the scene map with the UI map to obtain the display picture.
Optionally, the map rendering module 901 is further configured to:
and rendering the UI component in a default buffer zone to obtain the UI map, wherein the resolution adopted by the default buffer zone is the original resolution, and the original resolution is the screen resolution of the display screen.
In the embodiment of the application, the virtual scene is rendered in the low-resolution first buffer area and the UI component in the high-resolution second buffer area, and the two are then merged to obtain the display picture. In this way, the virtual scene can be downsampled while the UI components remain at high resolution, avoiding blurring of the UI components. And because the rendering of the virtual scene is downsampled, the number of rendered pixels is reduced and the rendering pressure lowered, avoiding the frame drops caused by stuttering under excessive rendering pressure and improving device running performance.
It should be noted that the apparatus provided in the above embodiment is described using the division into the above functional modules merely as an example; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above. In addition, the apparatus embodiment and the method embodiments above belong to the same concept; for the detailed implementation process, refer to the method embodiments, which is not repeated here.
Referring to fig. 10, a schematic structural diagram of a computer device according to an exemplary embodiment of the present application is shown. The computer device 1000 includes a central processing unit (Central Processing Unit, CPU) 1001, a system memory 1004 including a random access memory 1002 and a read-only memory 1003, and a system bus 1005 connecting the system memory 1004 and the central processing unit 1001. The computer device 1000 also includes a basic input/output (I/O) system 1006, which helps to transfer information between the various components within the computer, and a mass storage device 1007 for storing an operating system 1013, application programs 1014, and other program modules 1015.
In some embodiments, the basic input/output system 1006 includes a display 1008 for displaying information and an input device 1009, such as a mouse or a keyboard, through which a user inputs information. The display 1008 and the input device 1009 are both connected to the central processing unit 1001 via an input/output controller 1010 connected to the system bus 1005. The basic input/output system 1006 may also include the input/output controller 1010 for receiving and processing input from a number of other devices, such as a keyboard, a mouse, or an electronic stylus. Similarly, the input/output controller 1010 also provides output to a display screen, a printer, or another type of output device.
The mass storage device 1007 is connected to the central processing unit 1001 through a mass storage controller (not shown) connected to the system bus 1005. The mass storage device 1007 and its associated computer-readable media provide non-volatile storage for the computer device 1000. That is, the mass storage device 1007 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM drive.
Without loss of generality, the computer-readable media may include computer storage media and communication media. Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include random access memory (Random Access Memory, RAM), read-only memory (Read-Only Memory, ROM), flash memory or other solid-state memory technology, compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), digital versatile discs (Digital Versatile Disc, DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will recognize that computer storage media are not limited to the above. The system memory 1004 and the mass storage device 1007 described above may be collectively referred to as memory.
The memory stores one or more programs configured to be executed by the one or more central processing units 1001. The one or more programs contain instructions for implementing the methods described above, and the central processing unit 1001 executes the one or more programs to implement the methods provided by the respective method embodiments above.
According to various embodiments of the application, the computer device 1000 may also run by being connected to a remote computer over a network, such as the Internet. That is, the computer device 1000 may be connected to the network 1012 through a network interface unit 1011 connected to the system bus 1005, or the network interface unit 1011 may be used to connect to other types of networks or remote computer systems (not shown).
The memory further includes one or more programs stored therein, and the one or more programs include instructions for performing the steps, executed by the computer device, of the methods provided by the embodiments of the present application.
The embodiment of the application also provides a computer-readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method for rendering a display picture according to any one of the above embodiments.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the method for rendering a display picture provided by the above embodiments.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods in the above embodiments may be completed by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, which may be the computer-readable storage medium included in the memory of the above embodiments, or a standalone computer-readable storage medium that is not assembled into the terminal. The computer-readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method for rendering a display picture according to any one of the above method embodiments.
Optionally, the computer-readable storage medium may include a ROM, a RAM, a solid state drive (Solid State Drive, SSD), an optical disc, or the like. The RAM may include a resistive random access memory (Resistance Random Access Memory, ReRAM) and a dynamic random access memory (Dynamic Random Access Memory, DRAM). The serial numbers of the above embodiments of the present application are merely for description and do not indicate the relative merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be completed by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disc.
The foregoing is merely a description of preferred embodiments of the present application and is not intended to limit the application; any modification, equivalent replacement, or improvement made within the spirit and principles of the application shall fall within the scope of protection of the application.

Claims (14)

1. A method for rendering a display picture, the method comprising:
rendering a virtual scene in a first buffer to obtain a scene map, wherein a resolution adopted by the first buffer is a first resolution;
rendering a UI component in a second buffer to obtain a UI map, wherein a resolution adopted by the second buffer is a second resolution, and the second resolution is higher than the first resolution;
and merging the scene map in the first buffer with the UI map in the second buffer to obtain a display picture.
2. The method of claim 1, wherein rendering the virtual scene in the first buffer to obtain the scene map comprises:
rendering opaque objects in the virtual scene in the first buffer to obtain an opaque object map;
rendering semitransparent objects in the virtual scene in a third buffer to obtain a semitransparent object map, wherein a resolution adopted by the third buffer is a third resolution, and the third resolution is lower than the first resolution;
and merging the opaque object map and the semitransparent object map to obtain the scene map.
3. The method of claim 2, wherein rendering the semitransparent objects in the virtual scene in the third buffer to obtain the semitransparent object map comprises:
rendering the semitransparent object in the third buffer based on an occlusion relationship between the semitransparent object and the opaque object to obtain the semitransparent object map, wherein pixel points of the semitransparent object that are occluded by the opaque object are eliminated.
4. The method according to claim 3, wherein the opaque object map comprises an opaque depth map, the opaque depth map being used for indicating depth information of the opaque object;
rendering the semitransparent object in the third buffer based on the occlusion relationship between the semitransparent object and the opaque object to obtain the semitransparent object map comprises:
rendering pixel points corresponding to the semitransparent object in the third buffer;
determining, based on the depth information of the semitransparent object and the depth information of the opaque object at a target position indicated by the opaque depth map, that occluded pixel points exist in the semitransparent object, wherein the target position is a position at which pixel points of the semitransparent object overlap pixel points of the opaque object;
and eliminating the occluded pixel points to obtain the semitransparent object map.
5. The method of claim 4, wherein determining that occluded pixel points exist in the semitransparent object based on the depth information of the semitransparent object and the depth information of the opaque object at the target position indicated by the opaque depth map comprises:
downsampling the opaque depth map based on the third resolution to obtain an updated opaque depth map;
and, in a case where the depth information of a first pixel point is smaller than that of a second pixel point, determining that the first pixel point is an occluded pixel point, wherein the first pixel point is a pixel point of the semitransparent object, and the second pixel point is the pixel point in the updated opaque depth map at the position corresponding to the first pixel point.
6. The method according to any one of claims 2 to 5, wherein rendering the opaque objects in the virtual scene in the first buffer to obtain the opaque object map comprises:
rendering an opaque object in the virtual scene in the first buffer to obtain an opaque color map and an opaque depth map, wherein the opaque color map is used for indicating color information of the opaque object;
rendering the semitransparent objects in the virtual scene in the third buffer to obtain the semitransparent object map comprises:
rendering a semitransparent object in the virtual scene in the third buffer to obtain a semitransparent color map, wherein the semitransparent color map is used for indicating color information of the semitransparent object;
and merging the opaque object map and the semitransparent object map to obtain the scene map comprises:
merging the opaque color map and the semitransparent color map to obtain the scene map.
7. The method of claim 6, wherein, after the semitransparent color map is obtained by rendering the semitransparent object in the virtual scene in the third buffer, the method further comprises:
deleting the opaque depth map in the first buffer.
8. The method according to any one of claims 2 to 5, wherein rendering the opaque objects in the virtual scene in the first buffer to obtain the opaque object map comprises:
rendering the opaque objects in the virtual scene in the first buffer based on a first frame rate to obtain the opaque object map;
and rendering the semitransparent objects in the virtual scene in the third buffer to obtain the semitransparent object map comprises:
rendering the semitransparent objects in the virtual scene in the third buffer based on a second frame rate to obtain the semitransparent object map, wherein the second frame rate is lower than the first frame rate.
9. The method according to any one of claims 1 to 5, wherein merging the scene map in the first buffer with the UI map in the second buffer to obtain the display picture comprises:
moving the scene map to a UI image component, the UI image component being used for displaying the map;
and merging the UI image component containing the scene map with the UI map to obtain the display picture.
10. The method according to any one of claims 1 to 5, wherein rendering the UI component in the second buffer to obtain the UI map comprises:
rendering the UI component in a default buffer to obtain the UI map, wherein the resolution adopted by the default buffer is the original resolution, namely the screen resolution of the display screen.
11. An apparatus for rendering a display picture, the apparatus comprising:
a map rendering module, configured to render a virtual scene in a first buffer to obtain a scene map, wherein a resolution adopted by the first buffer is a first resolution;
the map rendering module being further configured to render a UI component in a second buffer to obtain a UI map, wherein a resolution adopted by the second buffer is a second resolution, and the second resolution is higher than the first resolution;
and a map merging module, configured to merge the scene map in the first buffer with the UI map in the second buffer to obtain a display picture.
12. A computer device comprising a processor and a memory, wherein the memory stores at least one program, and the at least one program is loaded and executed by the processor to implement the method for rendering a display picture according to any one of claims 1 to 10.
13. A computer-readable storage medium, wherein at least one program is stored in the readable storage medium, and the at least one program is loaded and executed by a processor to implement the method for rendering a display picture according to any one of claims 1 to 10.
14. A computer program product comprising computer instructions stored in a computer-readable storage medium, wherein a processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them to implement the method for rendering a display picture according to any one of claims 1 to 10.
CN202310092681.8A 2023-01-16 2023-01-16 Display screen rendering method, device, equipment, storage medium and program product Pending CN116954780A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310092681.8A CN116954780A (en) 2023-01-16 2023-01-16 Display screen rendering method, device, equipment, storage medium and program product


Publications (1)

Publication Number Publication Date
CN116954780A (en) 2023-10-27

Family

ID=88459153

Legal Events

Date Code Title Description
PB01 Publication