CN113837920B - Image rendering method and electronic equipment

Image rendering method and electronic equipment

Info

Publication number: CN113837920B
Application number: CN202110951444.3A
Authority: CN (China)
Prior art keywords: model, function, rendering, module, instruction
Legal status: Active (the status listed is an assumption and is not a legal conclusion)
Other versions: CN113837920A
Original language: Chinese (zh)
Inventors: 陈聪儿, 刘金晓
Current assignee: Honor Device Co Ltd
Original assignee: Honor Device Co Ltd

Application filed by Honor Device Co Ltd
Priority to CN202110951444.3A
Priority to CN202310227425.5A (published as CN116703693A)
Publication of CN113837920A
Application granted
Publication of CN113837920B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/20: Processor architectures; Processor configuration, e.g. pipelining
    • G06T1/60: Memory management
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/10: Geometric effects
    • G06T15/20: Perspective computation
    • G06T15/205: Image-based rendering
    • G06T15/50: Lighting effects
    • G06T15/80: Shading

Abstract

The embodiment of the application discloses an image rendering method and an electronic device, relating to the field of image processing, which can reduce rendering overhead through a variable-rate shading mechanism without degrading user experience as a result of the reduced shading rate. The specific scheme is as follows: a first rendering command issued by an application program is acquired, where the first rendering command is used for drawing a first model. A first shading rate for the first model is determined. The first model is drawn according to the first rendering command and the first shading rate. A second rendering command issued by the application program is acquired, where the second rendering command is used for drawing a second model. A second shading rate for the second model is determined. The second model is drawn according to the second rendering command and the second shading rate. The first model and the second model are included in the same frame image, and the first shading rate and the second shading rate are different.

Description

Image rendering method and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image rendering method and an electronic device.
Background
When rendering an image, the electronic device performs shading processing on the image. For example, a Graphics Processing Unit (GPU) of the electronic device may perform shading on each pixel of the image, thereby completing the shading of the entire image.
As the number of pixels of an image increases, the shading process of the image may impose a heavier rendering load on the electronic device, for example increased computational and power consumption overhead during the rendering process.
Disclosure of Invention
The embodiment of the application provides an image rendering method and an electronic device, which can reduce rendering overhead through a variable-rate shading mechanism without degrading user experience as a result of the reduced shading rate.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
In a first aspect, an image rendering method is provided, which is applied to an electronic device in which an application program is installed, and the method includes: acquiring a first rendering command issued by the application program, where the first rendering command is used for drawing a first model; determining a first shading rate for the first model; drawing the first model according to the first rendering command and the first shading rate; acquiring a second rendering command issued by the application program, where the second rendering command is used for drawing a second model; determining a second shading rate for the second model; and drawing the second model according to the second rendering command and the second shading rate. The first model and the second model are included in the same frame image, and the first shading rate and the second shading rate are different.
Based on this scheme, an example of shading different models at different rates is provided. In this example, the electronic device may determine the shading rate of each model from the rendering command for that model. Then, with a rendering command (i.e., a Drawcall) as the granularity, rendering operations such as drawing of the model are shaded at different rates, so that some of the models are shaded at a lower rate, saving rendering overhead such as computational overhead and heat generation during the shading process.
In one possible design, the determining a first shading rate for the first model includes: determining, according to the first rendering command, a depth of the first model, where the depth of the first model is used for identifying a distance between the first model and an observer in a viewing space or a clipping space, and the greater the depth of the first model, the greater the distance between the first model and the observer; and determining the first shading rate according to the depth of the first model. Based on this scheme, a mechanism for determining the shading rates of different models is provided. Illustratively, the corresponding shading rates may be determined according to the depths of the different models. It will be appreciated that the greater the depth, the further the model is from the user in the image, so lower-rate shading can be employed for it, saving rendering overhead.
In one possible design, the determining a depth of the first model from the first rendering command includes: determining, according to the first rendering command, depths of n vertices of the first model; and determining the depth of the first model according to the depths of the n vertices, where the n vertices are included in the vertices of the first model. Based on this design, an example of a solution for determining the model depth is provided. Illustratively, the depth of the model may be determined from the depths of its vertices. The depth of a vertex may be a view-space depth, a clip-space depth, or another depth that can be used to identify the distance from the vertex to the user. In this example, n may be equal to the total number of vertices of the model, so that the depth of the model is determined according to the depths of all of its vertices. In other embodiments, n may be less than the total number of vertices; for example, the n vertices may be randomly selected from all vertices of the model. This can save computation when calculating the model depth.
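As an illustration of this design (not part of the patent text), the following C++ sketch computes a model depth as the mean clip-space depth of n sampled vertices. It assumes a column-major MVP matrix in the usual OpenGL convention and uses the clip-space w component as the depth measure; an actual implementation might use a different depth definition.

    #include <algorithm>
    #include <array>
    #include <cstddef>
    #include <vector>

    // Column-major 4x4 matrix, matching the usual OpenGL convention (assumption).
    using Mat4 = std::array<float, 16>;
    struct Vec4 { float x, y, z, w; };

    // Multiply a column-major 4x4 matrix by a column vector.
    static Vec4 Transform(const Mat4& m, const Vec4& v) {
        return { m[0] * v.x + m[4] * v.y + m[8]  * v.z + m[12] * v.w,
                 m[1] * v.x + m[5] * v.y + m[9]  * v.z + m[13] * v.w,
                 m[2] * v.x + m[6] * v.y + m[10] * v.z + m[14] * v.w,
                 m[3] * v.x + m[7] * v.y + m[11] * v.z + m[15] * v.w };
    }

    // Depth of one model: mean clip-space depth of the first n vertices
    // (n may equal the total vertex count, or be a randomly chosen subset size).
    float ModelDepth(const std::vector<Vec4>& localPositions, const Mat4& mvp, std::size_t n) {
        if (localPositions.empty() || n == 0) return 0.0f;
        n = std::min(n, localPositions.size());
        float sum = 0.0f;
        for (std::size_t i = 0; i < n; ++i) {
            Vec4 clip = Transform(mvp, localPositions[i]);
            sum += clip.w;   // for a perspective projection, clip.w grows with distance from the viewer
        }
        return sum / static_cast<float>(n);
    }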
In a possible design, the electronic device is configured with an interception module and a data processing module, and the obtaining of the first rendering command issued by the application program includes: the interception module intercepts the first rendering command; and the interception module transmits the intercepted first function and second function to the data processing module. The first function is a function carrying a first parameter, the second function is a function carrying a second parameter, the first parameter is a parameter carried when the application program transfers vertex data, and the second parameter is a parameter carried when the application program transfers the MVP matrix. Based on this scheme, a specific example of intercepting the command is provided. In this example, the interception module may intercept the entire first rendering command, and transmit the instructions (or functions) carrying the first parameter and the second parameter in the first rendering command to the data processing module, so as to intercept the useful data. The useful data may be the vertex-related data and the MVP matrix.
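The patent does not name the specific graphics-library functions that carry the first and second parameters. As a hedged illustration only, the following C++ sketch assumes an OpenGL ES graphics library, treats glVertexAttribPointer as a function carrying vertex-data parameters and glUniformMatrix4fv as a function carrying the MVP matrix, and shows how an interception layer might record their payloads for a data processing module before calling them back to the graphics library.

    #include <GLES3/gl3.h>

    // Illustrative hand-off state for the data processing module (names are not from the patent).
    static const void*    g_lastVertexPointer = nullptr;  // first parameter: vertex data location
    static const GLfloat* g_lastMvpMatrix     = nullptr;  // second parameter: MVP matrix payload

    // Wrappers installed in place of the real graphics-library entry points.
    // Calls carrying the first/second parameters are handed to the data processing
    // module and then called back to the graphics library; all other intercepted
    // calls (not shown) would simply be called back unchanged.
    void Hook_glVertexAttribPointer(GLuint index, GLint size, GLenum type,
                                    GLboolean normalized, GLsizei stride, const void* pointer) {
        g_lastVertexPointer = pointer;                                          // hand to data processing module
        glVertexAttribPointer(index, size, type, normalized, stride, pointer);  // call back
    }

    void Hook_glUniformMatrix4fv(GLint location, GLsizei count,
                                 GLboolean transpose, const GLfloat* value) {
        g_lastMvpMatrix = value;                                 // hand to data processing module
        glUniformMatrix4fv(location, count, transpose, value);   // call back
    }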
In one possible design, the method further includes: the interception module recalls functions other than the first function and the second function in the first rendering command to a graphics library of the electronic device. Based on the scheme, a callback mechanism is provided. It will be appreciated that in native logic, the first rendering command may be issued to the graphics library for processing. In this example, the interception module may transmit other partial commands to the graphics library after intercepting the first function and the second function to ensure normal rendering logic execution.
In one possible design, after the interception module transmits the intercepted first function and second function to the data processing module, the method further includes: the data processing module determines, according to the first function and the second function, a first storage location of the vertex coordinates corresponding to the first model and a second storage location of the first MVP matrix, where the first storage location and the second storage location are included in a first storage area, and the first storage area is a storage area that can be called by a processor of the electronic device. Based on this scheme, an example of a solution for determining the vertex coordinates and the MVP matrix is provided. For example, the first function carrying the first parameter may be used to indicate the cache ID of the vertex coordinates of the model corresponding to the current Drawcall. The data processing module may resolve the cache ID of the vertex coordinates of the first model from the first function, and further determine the specific storage location of the vertex coordinates according to parameters such as the attribute and offset that are preset or indicated by the first function. The data processing module may also obtain the MVP matrix based on similar logic. The storage location determined by the data processing module may be a location where the data is stored as a backup, so that normal calling by other modules can be ensured.
In one possible design, the method further includes: the data processing module recalls the first function and the second function to a graphic library of the electronic equipment. Based on the scheme, another callback mechanism is provided. Illustratively, the interception module has called back the functions of the first command except the first function and the second function, and then the data processing module may call back the first function and the second function to implement a full call back of the first rendering command, thereby ensuring accurate execution of the rendering logic.
In one possible design, before obtaining the first rendering command issued by the application program, the method further includes: the interception module intercepts a third rendering command issued by the application program, where the third rendering command is used for storing, in a second storage area, first data used while the application program is running, the second storage area is a storage area used by a GPU of the electronic device, and the first data includes the vertex coordinates of the first model and the first MVP matrix; the interception module transmits a third function and a fourth function to the data processing module, where the third function carries the first parameter and the fourth function carries the second parameter; and the data processing module stores the data corresponding to the third function and the fourth function in the first storage area. Based on this scheme, a mechanism for backing up stored data is provided. It can be understood that the first rendering command (or the second rendering command) may implement the corresponding rendering indication by calling data that the third rendering command has already loaded to the GPU. The data loaded to the GPU is invisible to other modules; therefore, in this example, the loaded data is backed up, through the backup storage mechanism, into storage space schedulable by the CPU, thereby ensuring that each module can successfully call the loaded data when the model depth needs to be determined while the game is running.
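As a hedged sketch of the backup-storage idea (the patent does not name the loading function), the following assumes the third rendering command uploads data with OpenGL ES glBufferData, and copies the uploaded data into a CPU-side map keyed by the bound buffer object before passing the call on to the graphics library; all names other than the GL calls are illustrative.

    #include <GLES3/gl3.h>
    #include <cstddef>
    #include <cstring>
    #include <unordered_map>
    #include <vector>

    // CPU-side backup (first storage area) of data the application loads into GPU
    // memory (second storage area), keyed by the bound buffer object ("cache ID").
    static std::unordered_map<GLuint, std::vector<unsigned char>> g_backupStore;
    static GLuint g_boundArrayBuffer = 0;

    void Hook_glBindBuffer(GLenum target, GLuint buffer) {
        if (target == GL_ARRAY_BUFFER) g_boundArrayBuffer = buffer;  // remember the cache ID
        glBindBuffer(target, buffer);                                // call back
    }

    void Hook_glBufferData(GLenum target, GLsizeiptr size, const void* data, GLenum usage) {
        if (target == GL_ARRAY_BUFFER && data != nullptr && size > 0) {
            auto& copy = g_backupStore[g_boundArrayBuffer];
            copy.resize(static_cast<std::size_t>(size));
            std::memcpy(copy.data(), data, copy.size());             // back up into CPU-visible memory
        }
        glBufferData(target, size, data, usage);                     // call back so the GPU copy is still made
    }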
In one possible design, the method further includes: the interception module recalls functions other than the third function and the fourth function in the third rendering command to a graphics library of the electronic device. Based on the scheme, another callback mechanism is provided. Therefore, during the loading process, the electronic device can correctly load the instructions except the third function and the fourth function.
In one possible design, the method further includes: the data processing module recalls the third function and the fourth function to a graphic library of the electronic equipment. Based on the scheme, another callback mechanism is provided. Therefore, the electronic equipment can correctly load the third function and the fourth function in the loading process. Illustratively, the callback mechanism may be executed after the data processing module completes the backup storage of the third function and the fourth function.
In one possible design, the method further includes: the data processing module stores a first corresponding relation, and the first corresponding relation is used for indicating the corresponding relation between the storage address of the same data in the first storage area and the storage address of the same data in the second storage area. Based on the scheme, a mechanism for correctly calling data is provided. It is understood that during the backup storage process, data cannot be stored in the area configured for the GPU in the memory. In this example, the corresponding data stored in the backup can be accurately found subsequently according to the native command by storing the corresponding relationship between the address stored in the backup and the address indicated in the native command.
In a possible design, the data processing module determines, according to the first function and the second function, the first storage location of the vertex coordinates corresponding to the first model and the second storage location of the first MVP matrix, including: the data processing module determines the first storage location of the vertex coordinates corresponding to the first model according to the cache location indicated by the first function and the first correspondence; and the data processing module determines the second storage location of the first MVP matrix according to the cache location indicated by the second function and the first correspondence. Based on this scheme, an example of a solution for determining the vertex coordinates and the MVP matrix corresponding to the current Drawcall is provided. For example, by combining the cache location indicated by the current command with the correspondence, the corresponding data can be found in the backup storage. The data may be the vertex coordinates and the MVP matrix of the first model indicated by the current Drawcall.
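A minimal sketch of such a lookup follows, assuming the first correspondence is kept as a map from the cache ID carried by the intercepted function to the address of the backup copy; the structure and names are illustrative, not from the patent.

    #include <cstddef>
    #include <cstdint>
    #include <unordered_map>

    // First correspondence: cache ID used in the native command (second storage
    // area) -> location of the backup copy in CPU memory (first storage area).
    struct BackupLocation {
        const unsigned char* data = nullptr;  // backup address in the first storage area
        std::size_t size = 0;
    };
    static std::unordered_map<std::uint32_t, BackupLocation> g_firstCorrespondence;

    // Resolve the storage location of the vertex coordinates (or of the MVP matrix)
    // from the cache position carried by the intercepted first (or second) function.
    const BackupLocation* ResolveBackupLocation(std::uint32_t cacheId) {
        auto it = g_firstCorrespondence.find(cacheId);
        return it == g_firstCorrespondence.end() ? nullptr : &it->second;
    }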
In one possible design, a computing module is further configured in the electronic device, and the determining depths of n vertices of the first model according to the first rendering command includes: when the interception module intercepts the draw-element (Drawelement) instruction in the first rendering command, the computing module calculates the depths of the n vertices according to the vertex coordinates corresponding to the first model and the first MVP matrix, where the vertex coordinates corresponding to the first model are obtained by the computing module from the first storage location, and the first MVP matrix is obtained by the computing module from the second storage location. Based on this scheme, a trigger mechanism for calculating the model depth is provided. For example, when it is intercepted that the application issues a Drawelement instruction, it can be determined that the instructions carrying data such as the vertex coordinates and the MVP matrix have already been issued, so the corresponding data indicated by the current Drawcall can be called and calculated according to the storage locations of the vertex coordinates and the MVP matrix determined in the above scheme.
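A hedged sketch of this trigger is shown below, assuming the draw-element instruction is OpenGL ES glDrawElements and reusing the hypothetical helpers from the earlier sketches (ModelDepth, the backup-store readers, and the hand-off to the decision module); none of these helper names come from the patent, and their bodies are omitted here.

    #include <GLES3/gl3.h>
    #include <array>
    #include <cstddef>
    #include <vector>

    // Hypothetical helpers from the sketches above (declarations only).
    struct Vec4 { float x, y, z, w; };
    using Mat4 = std::array<float, 16>;
    float ModelDepth(const std::vector<Vec4>& localPositions, const Mat4& mvp, std::size_t n);
    std::vector<Vec4> ReadVertexCoordsFromBackup();  // reads from the first storage location
    Mat4 ReadMvpFromBackup();                        // reads from the second storage location
    void Decision_OnModelDepth(float depth);         // hands the depth to the decision module

    // When the draw-element call of the current Drawcall is intercepted, the vertex
    // coordinates and MVP matrix of this model have already been identified, so the
    // computing module can evaluate the model depth before the draw is called back.
    void Hook_glDrawElements(GLenum mode, GLsizei count, GLenum type, const void* indices) {
        std::vector<Vec4> positions = ReadVertexCoordsFromBackup();
        Mat4 mvp = ReadMvpFromBackup();
        Decision_OnModelDepth(ModelDepth(positions, mvp, positions.size()));
        glDrawElements(mode, count, type, indices);  // call back the draw to the graphics library
    }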
In one possible design, the determining the depth of the first model from the depths of the n vertices includes: the computing module determines the depth of the first model as the mean of the depths of the n vertices. Based on this scheme, a specific example of determining the model depth according to the vertex depths is provided. For example, the model depth may be the mean of the selected n vertex depths.
In one possible design, a decision module is further configured in the electronic device, and the method further includes: the computing module transmits the depth of the first model to the decision module; the decision module searches a preset second correspondence for an entry matching the depth of the first model; and the decision module determines the shading rate indicated by the matching entry as the first shading rate. Based on this scheme, an example of a solution for determining the corresponding shading rate according to the model depth is provided. For example, according to the depth range within which the model depth falls, a matching entry may be found in the second correspondence, and that entry may indicate a preset shading rate. The preset shading rate may be the first shading rate corresponding to the depth of the current model.
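An illustrative form of the second correspondence and its lookup is sketched below; the depth thresholds and the set of shading rates are made-up values for illustration only, not values taken from the patent.

    #include <cstdint>

    // Hypothetical second correspondence: depth ranges mapped to shading rates.
    enum class ShadingRate : std::uint8_t { Rate1x1, Rate1x2, Rate2x2, Rate4x4 };

    struct DepthRateEntry {
        float minDepth;     // inclusive
        float maxDepth;     // exclusive
        ShadingRate rate;
    };

    static const DepthRateEntry kDepthRateTable[] = {
        {   0.0f,  10.0f, ShadingRate::Rate1x1 },  // near models: full-rate shading
        {  10.0f,  50.0f, ShadingRate::Rate1x2 },
        {  50.0f, 200.0f, ShadingRate::Rate2x2 },
        { 200.0f, 1e30f,  ShadingRate::Rate4x4 },  // far models: coarsest shading
    };

    ShadingRate LookupShadingRate(float modelDepth) {
        for (const auto& e : kDepthRateTable) {
            if (modelDepth >= e.minDepth && modelDepth < e.maxDepth) return e.rate;
        }
        return ShadingRate::Rate1x1;   // default to full rate if no entry matches
    }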
In one possible design, the method further includes: the decision module transmits shading instructions to the graphics library indicating the first shading rate. Based on the scheme, a scheme mechanism is provided that triggers coloring according to a first coloring rate. For example, after determining that the first model uses the first shading rate, the decision module may call an API in the graphics library that corresponds to the first shading rate. In some implementations, the invocation of the API may be implemented by sending a shading instruction corresponding to the first shading rate.
In one possible design, the drawing of the first model according to the first rendering command and the first shading rate includes: the graphics library calls a first application programming interface (API) corresponding to the first rendering command according to the functions called back by the interception module and the data processing module; the graphics library calls a first variable-rate shading API corresponding to the first shading rate according to the shading instruction; and the graphics library issues rendering instructions to the GPU of the electronic device according to the first API and the first variable-rate shading API, so that the GPU can perform the rendering operation on the first model according to the first shading rate. Based on this scheme, a mechanism for the GPU to perform variable-rate shading on the first model is provided. Illustratively, based on the aforementioned callback mechanisms, the GPU may receive all indications of the first rendering command (or the second rendering command). In addition, the GPU may also receive the instructions indicated by the corresponding API called according to the first shading rate. Thus, the GPU may perform the rendering operation on the first model, drawn as indicated by the first rendering command, according to the first shading rate, thereby achieving flexible adjustment of the shading rate of the drawn model.
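The patent does not identify the variable-rate shading API. As one possible concrete form, the following C++ sketch assumes an OpenGL ES driver exposing the GL_EXT_fragment_shading_rate extension (glShadingRateEXT), loads the entry point through eglGetProcAddress, sets the decided rate, and then issues the draw; on other drivers a different, for example vendor-specific, extension would be used instead.

    #include <EGL/egl.h>
    #include <GLES3/gl3.h>

    // glShadingRateEXT is provided by the GL_EXT_fragment_shading_rate extension
    // (assumption: the driver exposes it; the patent does not name this API).
    typedef void (GL_APIENTRY* ShadingRateFn)(GLenum rate);

    // Per-Drawcall sequence: select the decided shading rate, then issue the draw,
    // so the GPU shades this model at that rate.
    void DrawModelAtRate(GLenum shadingRateToken, GLenum mode, GLsizei count,
                         GLenum type, const void* indices) {
        static ShadingRateFn pglShadingRateEXT =
            reinterpret_cast<ShadingRateFn>(eglGetProcAddress("glShadingRateEXT"));
        if (pglShadingRateEXT != nullptr) {
            pglShadingRateEXT(shadingRateToken);  // e.g. the extension's 2x2-pixels rate token
        }
        glDrawElements(mode, count, type, indices);
    }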
In a second aspect, an electronic device is provided that includes one or more processors and one or more memories; one or more memories coupled with the one or more processors, the one or more memories storing computer instructions; the computer instructions, when executed by the one or more processors, cause the electronic device to perform an image rendering method as described above in the first aspect and any of various possible designs.
In a third aspect, an embodiment of the present application provides an image rendering apparatus, which may include: a processor and a memory. The memory is used to store computer-executable program code, which includes instructions. The instructions, when executed by the processor, cause the electronic device to perform the method of image rendering according to the first aspect and any of a variety of possible designs.
In a fourth aspect, there is provided a computer readable storage medium comprising computer instructions which, when executed, perform the image rendering method of the first aspect and any one of the various possible designs as described above.
It should be understood that, technical features of the technical solutions provided by the second aspect, the third aspect, and the fourth aspect may all correspond to the image rendering method provided by the first aspect and possible designs thereof, and therefore, similar beneficial effects can be achieved, and details are not repeated herein.
Drawings
FIG. 1 is a schematic diagram of a coordinate space;
FIG. 2 is a schematic diagram of variable-rate shading;
FIG. 3 is a schematic composition diagram of an electronic device according to an embodiment of the present application;
FIG. 4 is a schematic software composition diagram of an electronic device according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a rendering process according to an embodiment of the present application;
FIG. 6 is a schematic flowchart of an image rendering method according to an embodiment of the present application;
FIG. 7 is a schematic flowchart of an image rendering method according to an embodiment of the present application;
FIG. 8 is a schematic flowchart of an image rendering method according to an embodiment of the present application;
FIG. 9 is a schematic flowchart of an image rendering method according to an embodiment of the present application;
FIG. 10 is a schematic flowchart of an image rendering method according to an embodiment of the present application;
FIG. 11 is a schematic flowchart of an image rendering method according to an embodiment of the present application;
FIG. 12 is a schematic flowchart of an image rendering method according to an embodiment of the present application;
FIG. 13 is a schematic comparison diagram of image rendering results according to an embodiment of the present application;
FIG. 14 is a schematic composition diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The electronic equipment can perform rendering of different frame images according to rendering commands issued by the application programs installed in the electronic equipment, so that display data corresponding to each frame image are obtained, and the display is controlled to display each frame image according to the display data.
In the process of image rendering, the electronic device needs to determine vertex positions of one or more objects included in the current frame image.
For example, the rendering commands issued by the application program may include the vertex coordinates of an object. In some implementations, the vertex coordinates included in the rendering commands may be coordinates based on the local coordinate system of the object itself. In the present application, the space in which an object is described based on its local coordinate system may be referred to as the local space (Local Space). In order for the electronic device to be able to determine the coordinates of the various vertices of the object on the display screen, a matrix transformation may be performed on the coordinates of the object in the local space, thereby obtaining the coordinates of the object in a coordinate system based on the display screen (for example, referred to as the screen space, Screen Space).
As an example, the electronic device may convert the local coordinates of each vertex of the object in the local space into coordinates in the screen space through a matrix transformation process from the local space to the world space (World Space), to the viewing space (View Space), to the clipping space (Clip Space), and finally to the screen space (Screen Space).
Illustratively, referring to FIG. 1, a schematic diagram of the logical process of the matrix transformation of coordinates from the local space to the world space, to the viewing space, and to the clipping space is shown. In this example, the rendering command issued by the application may include the drawing of an object 1. As shown in FIG. 1, in the local space, the coordinate system may be based on the object 1. For example, the origin of the coordinate system of the local space may be set at the center of the object 1, at the position of one of its vertices, or the like. When issuing the rendering command for the object 1, the application program may carry the coordinates of each vertex of the object 1 in the coordinate system of the local space, that is, the local coordinates. The electronic device can convert the coordinates in the local space into coordinates in the world space through the M matrix issued by the application program. The world space may be a larger region relative to the local space. For example, suppose a rendering command issued by an application program is used to render a game image. The local space may correspond to a smaller area that covers a certain object, such as the object 1, while the world space may correspond to a map of an area in the game that includes the object 1 as well as other objects, such as an object 2. The electronic device may perform the M-matrix transformation on the local coordinates in the local space in combination with the M matrix, thereby obtaining the coordinates of the object 1 in the world space. Similarly, in the case where the application issues a rendering command for the object 2 in the frame image, the electronic device may also acquire the coordinates of the object 2 in the world space through the M-matrix transformation.
After obtaining the coordinates, in the world space, of the vertices of the objects in the current frame image, the electronic device may convert the coordinates in the world space into coordinates in the viewing space according to the V matrix issued by the application program. It will be appreciated that the coordinates in the world space may be coordinates in three-dimensional space, whereas when the electronic device presents the frame image to the user, each object (e.g., the object 1, the object 2, etc.) is displayed on the two-dimensional display screen. When the objects in the world space are viewed from different viewing angles, different two-dimensional pictures are seen. The viewing angle may be related to the position of a camera (or observer) disposed in the world space. In this example, the coordinate space corresponding to the camera position may be referred to as the viewing space. Illustratively, take as an example the camera being located in the positive y-axis direction of the world space. Based on the transformation of the V matrix, the coordinates of the respective vertices of the object 1 and the object 2 in the viewing space corresponding to this camera position can then be obtained. As shown in FIG. 1, since the camera is positioned in the positive y-axis direction and shoots downward, the object 1 and the object 2 in the viewing space are presented as a top view.
After the electronic device acquires the coordinates of each object in the viewing space, it may project them to clipping coordinates. The coordinate space to which the clipping coordinates correspond may be referred to as the clipping space. It will be appreciated that the V-matrix transformation may transform a relatively large region of the world space, so the range of the resulting image may be relatively large. Because the size of the display screen of the electronic device is limited, not all objects in the viewing space can be displayed at the same time. In this example, the electronic device may project the coordinates of the various objects in the viewing space into the clipping space. After projection into the clipping space, the coordinates of the objects that can be displayed on the display screen fall in the range of -1.0 to 1.0, while the coordinates of the parts of objects that cannot be displayed on the display screen fall outside the range of -1.0 to 1.0. Thus, the electronic device can correspondingly display the vertices whose coordinates are in the range of -1.0 to 1.0. For example, the electronic device may perform the P-matrix transformation on each coordinate in the viewing space according to a P matrix issued by the application program, so as to obtain the clipping coordinates, in the clipping space, corresponding to each coordinate.
It can be understood that, through the above transformations of the MVP matrix (i.e., the M-matrix transformation, the V-matrix transformation, and the P-matrix transformation), the electronic device can acquire the coordinates (i.e., the clipping coordinates) of the vertices of the respective objects displayed on the display screen. The electronic device may then also transform the clipping coordinates to screen coordinates, for example using a viewport transform (Viewport Transform) to transform coordinates in the range of -1.0 to 1.0 to the coordinate range defined by the glViewport function. Finally, the transformed coordinates are sent to the rasterizer and converted into fragments, from which the display data corresponding to each pixel are obtained. Based on the display data, the electronic device can control the display screen to perform the corresponding display.
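For clarity, the chain of transforms described above can be written compactly; these are standard graphics-pipeline relations stated here for reference, not formulas quoted from the patent. For a vertex with local coordinates $v_{local}$,

\[
v_{clip} = P \, V \, M \, v_{local}, \qquad
(x_{ndc},\, y_{ndc},\, z_{ndc}) = \left( \frac{x_{clip}}{w_{clip}},\, \frac{y_{clip}}{w_{clip}},\, \frac{z_{clip}}{w_{clip}} \right),
\]

where the division by $w_{clip}$ yields the coordinates in the -1.0 to 1.0 range mentioned above, and, with a viewport defined by glViewport(x0, y0, w, h), the viewport transform gives

\[
x_{screen} = x_0 + \frac{x_{ndc} + 1}{2}\, w, \qquad
y_{screen} = y_0 + \frac{y_{ndc} + 1}{2}\, h .
\]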
In the process of rendering the image, the electronic device needs to determine the vertex coordinates of each object according to the above scheme, and also needs to color each pixel in the current frame image, that is, determine the color data of each pixel. Therefore, the display is controlled to display corresponding colors at corresponding pixel positions according to the color data of each pixel.
In some implementations, the electronic device can shade each pixel in units of one pixel during the rendering process, thereby shading the entire frame image. With the improvement of the resolution and refresh rate of the display screen of the electronic device and the increasing complexity of the scenes of the frame images to be rendered, shading one pixel at a time can subject the rendering process of the electronic device to high memory and power consumption overhead, which in turn causes heating or frame dropping and affects the user experience. It should be noted that, in some implementations of the present application, the rendering process and the shading process may be implemented by a component having a graphics processing function, such as a GPU provided in the electronic device.
To address the above issues, some electronic devices may reduce the memory and power consumption overhead of the rendering process by providing a variable-rate shading function.
For example, under a general shading mechanism, the electronic device may use a shader to shade one pixel. After the shading operation for that pixel is completed, the electronic device may use the shader to shade another pixel. For example, in conjunction with (a) of FIG. 2, the electronic device may use a shader to shade the pixel located in the first row and the first column. After completing the shading of the pixel in the first row and the first column, the electronic device may use the shader to shade other pixels, such as the pixel in the first row and the second column. Thus, to complete the shading of the 5 × 5 pixels shown in (a) of FIG. 2, the electronic device needs to perform at least 25 shading operations using the shader. It should be noted that, in conjunction with the foregoing description, the process of shading by a shader may be performed by the GPU in the electronic device. When the GPU has a strong parallel processing capability, for example when the GPU can simultaneously shade 3 pixels through its shaders, the electronic device may perform the shading operations on a plurality of pixels (for example, 3 pixels) in parallel. However, while such parallel processing can save processing time, it does not reduce the load of the electronic device in performing the shading operations. For convenience of description, the following description takes as an example the GPU in the electronic device using a shader to shade 1 pixel at a time.
In contrast to performing the shading operation in units of a single pixel, when the electronic device uses a variable-rate shading function, the electronic device may complete the shading of a plurality of pixels through one shading operation using a shader. For example, the shading of 4 pixels can be completed by one shading operation. In conjunction with (b) of FIG. 2, the electronic device may complete, through one shading operation, the shading of the pixels from the first row and first column to the second row and second column. Thus, with variable-rate shading, the electronic device may perform the shading during the rendering of an image with fewer shading operations.
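As an illustrative count based on FIG. 2: for the 5 × 5 pixel region of (a) in FIG. 2, per-pixel shading requires 25 shading operations, whereas shading 2 × 2 blocks of pixels as in (b) of FIG. 2 requires only ⌈5/2⌉ × ⌈5/2⌉ = 9 shading operations, each of which covers up to 4 pixels.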
It will be appreciated that the fineness of the colors of an image shaded at a variable rate may be lower than that obtained by the typical shading mechanism (i.e., shading operations at the granularity of one pixel). How to use variable-rate shading reasonably, without significantly affecting the look and feel of the image for the user, therefore becomes key to using the variable-rate shading function.
In order to solve the above problem, the rendering scheme provided in the embodiment of the present application can reasonably select the areas of a frame image for which the variable-rate shading function should be used, so that the electronic device can reduce the power consumption and heat generation of the rendering process through the variable-rate shading function while the rendered image does not significantly affect the user's look and feel. In this way, the power consumption and heat generation of the electronic device can be reduced, and the user experience can be improved.
The scheme provided by the embodiment of the application is described in detail below with reference to the accompanying drawings.
It should be noted that the rendering method provided in the embodiment of the present application may be applied to an electronic device of a user. For example, the electronic device may be a portable mobile device such as a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), an Augmented Reality (AR)/Virtual Reality (VR) device, or a media player, and may also be a wearable electronic device such as a smart watch that can provide shooting capability. The embodiment of the present application does not specifically limit the specific form of the apparatus.
Please refer to fig. 3, which is a schematic structural diagram of an electronic device 300 according to an embodiment of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processor 310, an external memory interface 320, an internal memory 321, a Universal Serial Bus (USB) interface 330, a charging management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a mobile communication module 350, a wireless communication module 360, an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, a headset interface 370D, a sensor module 380, keys 390, a motor 391, an indicator 392, a camera 393, a display 394, and a Subscriber Identification Module (SIM) card interface 395, and the like. The sensor module 380 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
It is to be understood that the illustrated structure of the present embodiment does not constitute a specific limitation to the electronic device 300. In other embodiments, electronic device 300 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 310 may include one or more processing units, such as: the processor 310 may include a Central Processing Unit (CPU), an Application Processor (AP), a modem processor, a GPU, an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more of the processors 310. As an example, in the present application, the ISP may process the image, such as the processing may include Automatic Exposure (Automatic Exposure), automatic focusing (Automatic Focus), automatic White Balance (Automatic White Balance), denoising, backlight compensation, color enhancement, and the like. Among them, the process of auto exposure, auto focus, and auto white balance may also be referred to as 3A process. After processing, the ISP can obtain the corresponding photo. This process may also be referred to as the sheeting operation of the ISP.
In some embodiments, processor 310 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose-input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The electronic device 300 may implement a shooting function through the ISP, the camera 393, the video codec, the GPU, the display 394, the application processor, and the like.
The ISP is used to process the data fed back by the camera 393. For example, when a photo is taken, the shutter is opened, light is transmitted to the photosensitive element of the camera 393 through the lens, an optical signal is converted into an electric signal, and the photosensitive element of the camera 393 transmits the electric signal to the ISP for processing and converting the electric signal into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be located in camera 393.
Camera 393 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to be converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV and other formats. In some embodiments, electronic device 300 may include 1 or N cameras 393, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 300 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 300 may support one or more video codecs. In this way, the electronic device 300 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU can realize applications such as intelligent recognition of the electronic device 300, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The charging management module 340 is used to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 340 may receive charging input from a wired charger via the USB interface 330. In some wireless charging embodiments, the charging management module 340 may receive a wireless charging input through a wireless charging coil of the electronic device 300. The charging management module 340 may also supply power to the electronic device 300 through the power management module 341 while charging the battery 342.
The power management module 341 is configured to connect the battery 342, the charging management module 340 and the processor 310. The power management module 341 receives input from the battery 342 and/or the charge management module 340 and provides power to the processor 310, the internal memory 321, the external memory, the display 394, the camera 393, and the wireless communication module 360. The power management module 341 may also be configured to monitor parameters such as the capacity of the battery 342, the number of cycles of the battery 342, and the state of health (leakage, impedance) of the battery 342. In other embodiments, the power management module 341 may also be disposed in the processor 310. In other embodiments, the power management module 341 and the charging management module 340 may be disposed in the same device.
The wireless communication function of the electronic device 300 may be implemented by the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, the modem processor 310, the baseband processor 310, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in electronic device 300 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 350 may provide a solution including 2G/3G/4G/5G wireless communication applied on the electronic device 300. The mobile communication module 350 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 350 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the filtered electromagnetic wave to the modem processor for demodulation. The mobile communication module 350 can also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave to radiate the electromagnetic wave through the antenna 1. In some embodiments, at least some of the functional modules of the mobile communication module 350 may be disposed in the processor 310. In some embodiments, at least some of the functional modules of the mobile communication module 350 may be disposed in the same device as at least some of the modules of the processor 310.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then passed to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 370A, the receiver 370B, etc.) or displays images or video through the display 394. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be separate from the processor 310, and may be disposed in the same device as the mobile communication module 350 or other functional modules.
The wireless communication module 360 may provide solutions for wireless communication applied to the electronic device 300, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), global Navigation Satellite System (GNSS), frequency Modulation (FM), near Field Communication (NFC), infrared (IR), and the like. The wireless communication module 360 may be one or more devices integrating at least one communication processing module. The wireless communication module 360 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 310. The wireless communication module 360 may also receive a signal to be transmitted from the processor 310, frequency-modulate and amplify the signal, and convert the signal into electromagnetic waves via the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 300 is coupled to mobile communication module 350 and antenna 2 is coupled to wireless communication module 360 such that electronic device 300 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), general Packet Radio Service (GPRS), code division multiple access (code division multiple access, CDMA), wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou satellite navigation system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The electronic device 300 implements display functions via the GPU, the display 394, and the application processor 310, among other things. The GPU is a microprocessor for image processing, coupled to a display screen 394 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 310 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 394 is used to display images, video, and the like. The display screen 394 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 300 may include 1 or N display screens 394, N being a positive integer greater than 1.
The external memory interface 320 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the electronic device 300. The external memory card communicates with the processor 310 through the external memory interface 320 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 321 may be used to store computer-executable program code, which includes instructions. The processor 310 executes various functional applications of the electronic device 300 and data processing by executing instructions stored in the internal memory 321. The internal memory 321 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The data storage area may store data (e.g., audio data, phone book, etc.) created during use of the electronic device 300, and the like. In addition, the internal memory 321 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The electronic device 300 may implement audio functions through the audio module 370, the speaker 370A, the receiver 370B, the microphone 370C, the headphone interface 370D, the application processor 310, and the like. Such as music playing, recording, etc.
The audio module 370 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 370 may also be used to encode and decode audio signals. In some embodiments, the audio module 370 may be disposed in the processor 310, or some functional modules of the audio module 370 may be disposed in the processor 310. The speaker 370A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic device 300 may listen to music or to a hands-free conversation through the speaker 370A. The receiver 370B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic device 300 receives a call or voice information, it is possible to receive voice by placing the receiver 370B close to the human ear. The microphone 370C, also known as a "mike", is used to convert the sound signal into an electrical signal. When a call is placed, a voice message is sent, or a voice assistant is required to trigger the electronic device 300 to perform some function, a user may speak with his or her mouth near the microphone 370C and input a voice signal into the microphone 370C. The electronic device 300 may be provided with at least one microphone 370C. In other embodiments, the electronic device 300 may be provided with two microphones 370C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 300 may further include three, four or more microphones 370C to collect sound signals, reduce noise, identify sound sources, and perform directional recording. The earphone interface 370D is used to connect a wired earphone. The earphone interface 370D may be the USB interface 330, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
Touch sensors, also known as "touch panels". The touch sensor may be disposed on the display screen 394, and the touch sensor and the display screen 394 form a touch screen, which is also called a "touch screen". The touch sensor is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. In some embodiments, visual output related to touch operations may be provided through the display screen 394. In other embodiments, the touch sensor may be disposed on a surface of the electronic device 300 at a different location than the display screen 394.
The pressure sensor is used for sensing a pressure signal and converting the pressure signal into an electric signal. In some embodiments, the pressure sensor may be disposed on the display screen 394. There are many types of pressure sensors, such as resistive pressure sensors, inductive pressure sensors, capacitive pressure sensors, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor, the capacitance between the electrodes changes. The electronic device 300 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 394, the electronic apparatus 300 detects the intensity of the touch operation according to the pressure sensor. The electronic apparatus 300 may also calculate the position of the touch from the detection signal of the pressure sensor. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message. The gyro sensor may be used to determine the motion pose of the electronic device 300. The acceleration sensor may detect the magnitude of acceleration of the electronic device 300 in various directions (typically three axes). A distance sensor for measuring a distance. The electronic device 300 may measure the distance by infrared or laser. The electronic device 300 can utilize the proximity light sensor to detect that the user holds the electronic device 300 close to the ear for talking, so as to automatically turn off the screen to achieve the purpose of saving power. The ambient light sensor is used for sensing the ambient light brightness. The fingerprint sensor is used for collecting fingerprints. The temperature sensor is used for detecting temperature. In some embodiments, the electronic device 300 implements a temperature processing strategy using the temperature detected by the temperature sensor. The audio module 370 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part obtained by the bone conduction sensor, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signals acquired by the bone conduction sensor, and a heart rate detection function is realized.
Keys 390 include a power-on key, a volume key, etc. The motor 391 may generate a vibration cue. Indicator 392 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc. The SIM card interface 395 is used for connecting a SIM card. The electronic device 300 may support 1 or N SIM card interfaces 395, N being a positive integer greater than 1. The SIM card interface 395 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. Multiple cards can be inserted into the same SIM card interface 395 at the same time. The SIM card interface 395 may also be compatible with different types of SIM cards. The SIM card interface 395 may also be compatible with an external memory card. The electronic device 300 interacts with the network through the SIM card to implement functions such as communication and data communication. In some embodiments, the electronic device 300 employs esims, namely: an embedded SIM card. The eSIM card can be embedded in the electronic device 300 and cannot be separated from the electronic device 300.
The rendering method provided by the embodiment of the application can be applied to the electronic equipment with the composition shown in fig. 3.
It should be noted that fig. 3 and the description thereof are only an example of an application carrier of the solution provided by the embodiments of the present application. The composition of fig. 3 does not constitute a limitation on the solution described in the examples of the present application. In other embodiments, the electronic device may have more or fewer components than shown in FIG. 3.
In the example shown in fig. 3, a hardware component of an electronic device is provided. In some embodiments, the electronic device may also run an operating system through its various hardware components (e.g., hardware components as shown in fig. 3). In the operating system, different software hierarchies can be set, so that different programs can run.
Exemplarily, fig. 4 is a schematic diagram of a software component of an electronic device according to an embodiment of the present application. As shown in fig. 4, the electronic device may include an application layer 401, a framework layer 402, a system library 403, a hardware layer 404, and the like.
The application layer 401 may also be referred to as an application program layer or an application (APP) layer. In some implementations, the application layer may include a series of application packages. The application packages may include applications such as camera, gallery, calendar, phone, map, navigation, WLAN, Bluetooth, music, video, and short message. The application packages may also include applications that need to present a picture or video to a user by rendering images, for example, game applications and the like.
The framework layer 402 may also be referred to as an application framework layer. The framework layer 402 may provide an Application Programming Interface (API) and a programming framework for the application programs of the application layer 401. The framework layer 402 includes some predefined functions.
Illustratively, the framework layer 402 may include a window manager, a content provider, a view system, a resource manager, a notification manager, an activity manager, an input manager, and the like. The window manager provides a Window Manager Service (WMS), which may be used for window management, window animation management, and surface management, and serves as a relay station for the input system. The content provider is used to store and retrieve data and make the data accessible to applications. The data may include video, images, audio, calls made and answered, browsing history and bookmarks, phone books, and the like. The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system may be used to build applications. A display interface may be composed of one or more views. For example, a display interface including a short message notification icon may include a view for displaying text and a view for displaying pictures. The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files. The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify of download completion, message alerts, and the like. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as a notification of an application running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is sounded, the electronic device vibrates, or an indicator light flashes. The activity manager may provide an Activity Manager Service (AMS), which may be used for the start-up, switching, and scheduling of system components (e.g., activities, services, content providers, broadcast receivers), and for the management and scheduling of application processes. The input manager may provide an Input Manager Service (IMS), which may be used to manage inputs to the system, such as touch screen inputs, key inputs, and sensor inputs. The IMS takes events from the input device nodes and assigns them to the appropriate windows through interaction with the WMS.
In this embodiment of the present application, one or more functional modules may be disposed in the framework layer 402, so as to implement the rendering scheme provided in this embodiment of the present application. As an example, the framework layer 402 may be provided with an interception module, a data processing module, a calculation module, a decision module, and the like. In the following examples, the functions of the above-described respective modules will be described in detail.
The system library 403 may include a plurality of functional modules, for example: a surface manager, a media framework (Media Framework), a standard C library (libc), the Open Graphics Library for Embedded Systems (OpenGL for Embedded Systems, OpenGL ES), Vulkan, SQLite, WebKit, and the like.
The surface manager is used for managing the display subsystem and provides fusion of 2D and 3D layers for a plurality of applications. The media framework supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media framework may support a variety of audio and video encoding formats, such as Moving Picture Experts Group 4 (MPEG4), H.264, Moving Picture Experts Group Audio Layer 3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR), Joint Photographic Experts Group (JPEG or JPG), Portable Network Graphics (PNG), and the like. OpenGL ES and/or Vulkan provide drawing and manipulation of 2D and 3D graphics in applications. SQLite provides a lightweight relational database for applications of the electronic device. In some implementations, OpenGL ES in the system library 403 can provide a variable rate shading function. When variable rate shading needs to be performed for the current draw command (draw call), the electronic device may call a variable rate shading API in OpenGL ES to implement, together with other instructions, variable rate shading of that draw command. For example, the electronic device may shade the current Drawcall at a lower rate (e.g., 2×1, 2×2, 4×4, etc.), thereby reducing the overhead associated with shading the current Drawcall.
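As an illustrative, non-limiting sketch, on a device whose OpenGL ES driver exposes a variable rate shading extension such as QCOM_shading_rate (the extension name and tokens here are assumptions made for illustration, not a requirement of this embodiment), a lower shading rate could be applied to one draw call roughly as follows:
glShadingRateQCOM(GL_SHADING_RATE_2X2_PIXELS_QCOM); // assumed extension call: shade once per 2×2 pixel block for subsequent draws
glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, 0); // the current Drawcall executes at the reduced rate; 'count' is its index count
glShadingRateQCOM(GL_SHADING_RATE_1X1_PIXELS_QCOM); // restore full-rate (1×1) shading for later draw calls
Other drivers may expose variable rate shading through different interfaces; what matters for the scheme described here is that a shading rate can be selected per draw call.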
In the example of fig. 4, a hardware layer 404 may also be included in the electronic device. The hardware layer 404 may include a processor (e.g., a CPU, a GPU, etc.), and a component with a memory function (e.g., the internal memory 321 shown in fig. 3, etc.). In some implementations, the CPU may be configured to control each module in the framework layer 402 to implement its respective function, and the GPU may be configured to execute a corresponding rendering process according to an API in a graphics library (e.g., openGL ES) called by an instruction processed by each module in the framework layer 402.
In order to more clearly describe the functions of each layer in the software architecture provided in the embodiment of the present application, the following takes image rendering as an example to illustrate the functional implementation of each component having the software composition shown in fig. 4.
For example, please refer to fig. 5. When an application in the application layer needs to perform image rendering, it can issue a rendering command. In the following description, a rendering command issued by an application may also be referred to as a Drawcall. In different examples, the rendering command may include different content. For example, in some embodiments, the application needs to render graphics in a frame image. Vertex data of the graphics to be rendered may be included in the issued rendering command. In some implementations, the vertex data may be used to indicate the coordinates of the vertices of the graphics to be rendered. The coordinates may be local-space based coordinates. The rendering command may also include an MVP matrix, as in the illustration shown in fig. 1, and one or more drawing elements (Drawelement). After receiving the rendering command, the framework layer 402 may convert the rendering command into rendering instructions, and the rendering instructions may carry the vertex data, the MVP matrix, one or more Drawelements, and the like. In some implementations, the framework layer 402 can also obtain the API required by the current Drawcall from a graphics library in the system library 403 according to the instruction of the application, so as to instruct other modules (e.g., the GPU) to perform rendering operations using the function corresponding to the API. For example, the electronic device may determine the parameters to be used in the variable rate shading process before a Drawelement. The electronic device may also send a variable rate shading instruction by calling a variable rate shading API in conjunction with the aforementioned parameters, thereby achieving variable rate shading of the subsequent Drawelement. Take a GPU in the hardware layer 404 performing the rendering as an example. The GPU may fetch the variable rate shading instruction and, in response, execute the Drawelement using the shading rate indicated by the corresponding parameter.
The rendering method provided by the embodiment of the application can also be applied to electronic equipment consisting of software shown in fig. 4. The following describes the scheme provided in the embodiments of the present application with reference to the software components shown in fig. 4.
In the following examples, in order to more clearly explain the scheme provided by the present application, the electronic device is divided into modules according to different functions, and this module division may be understood as another division form of the electronic device having the composition shown in fig. 3 or fig. 4. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into functional modules according to the scheme provided in the embodiment of the present application. For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module can be implemented in the form of hardware, or in the form of a software functional module. It should be noted that, in the embodiment of the present application, the division of modules is schematic and is only one logical function division; there may be other division manners in actual implementation.
For example, an interception module, a data processing module, a calculation module, and a decision module may be disposed in the framework layer 402 of the electronic device.
According to the rendering method provided by the embodiment of the application, the electronic device can intercept rendering commands from an application (such as a game application) through the modules arranged in the electronic device. The electronic device may also obtain, from the rendering command, the vertex data of each vertex involved in the current rendering command. The vertex data may include vertex coordinates. The electronic device may further obtain the MVP matrix corresponding to the current rendering command. The electronic device may then determine the depth of each vertex according to the vertex coordinates and the MVP matrix, and determine the depth of the model to be drawn by the current Drawcall according to the depths of the vertices. The greater the depth of the model, the farther the model is from the observer in the current frame image; correspondingly, the smaller the depth of the model, the closer the model is to the observer. It will be appreciated that models farther from the observer (i.e., the user) are generally not of interest to the user, while models closer to the user may be of interest. Therefore, in the scheme provided by the embodiment of the application, a model with a larger depth can be shaded at a lower rate, so that the rendering overhead of shading is reduced without affecting the user experience. Conversely, for a model with a small depth, the electronic device can shade at a high rate, so that the model is accurately shaded, the display quality of the model in the current frame image is improved, and the user experience is improved.
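The decision itself can be very simple. The following is a minimal sketch in C, assuming an average-depth heuristic; the threshold values are placeholders used only for illustration and are not values mandated by this embodiment:
typedef enum { RATE_1X1, RATE_2X2, RATE_4X4 } ShadingRateLevel; /* coarser level = lower shading rate */
ShadingRateLevel choose_rate(const float *vertex_depths, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        sum += vertex_depths[i];              /* per-vertex depths obtained from the MVP transformation */
    float model_depth = sum / (float)n;       /* depth of the model to be drawn by the current Drawcall */
    if (model_depth > 50.0f) return RATE_4X4; /* far from the observer: coarse shading */
    if (model_depth > 20.0f) return RATE_2X2; /* middle distance: moderately reduced shading */
    return RATE_1X1;                          /* close to the observer: full-rate shading */
}
Other aggregation strategies (for example, taking the minimum depth, or weighting the vertices) are equally possible; the choice is an implementation detail.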
The following describes in detail a specific implementation of the rendering method provided in the embodiment of the present application, with reference to the above module division. For convenience of explanation, in the following example, an application program that issues rendering commands is described as a game application using an OpenGL graphics library as an example. It should be understood that in other different rendering engines/rendering environments, the implementation mechanism is similar, and only the corresponding functions may differ.
At the beginning of game run (or game load), the game application may load data that may be used during rendering of subsequent frame images. In some implementations, the data loaded at game load may include vertex data for all models that may be used in subsequent rendering processes and an MVP matrix for one or more frame images. In other implementations, only a portion of the model's vertex data may be loaded at a time of game loading. Thus, when a new model needs to be used, the electronic device may perform the loading process again to load the vertex data of the new model into the GPU. Or, the electronic device may load the vertex data of the new model by carrying the vertex data in the issued rendering instruction. In other implementations, the game application may only transmit vertex data during the game loading process, and the MVP matrix may be different for each frame of image, and the game application may transmit the MVP matrix during the game running process. In embodiments of the present application, the vertex data may include vertex coordinates, which may be local space-based coordinates.
For convenience of illustration, the following example assumes that the vertex coordinates of all models and the MVP matrix are loaded in a single game loading pass.
In this example, as the game loads, the game application may transmit the vertex coordinates of all models that may be used, together with one or more MVP matrices, via a command that includes a plurality of instructions. Through these instructions, the vertex coordinates of the models and the MVP matrices may be stored in a memory space that the GPU can call.
Illustratively, at game start-up, the game application may issue command 1 for implementing the loading process described above. In some embodiments, command 1 may include, for example, one or more of a glGenBuffers function, a glBindBuffer function, a glBufferData function, and a glBufferSubData function.
The glGenBuffers function may be used to create caches. That is, one or more storage spaces are allocated in the memory of the electronic device, and each storage space may have an identification (ID). The allocated storage spaces can be used for storing various items of data in the rendering process. For example, some caches may be used to store the vertex coordinates of a model, some caches may be used to store MVP matrices, and so on.
The glBindBuffer function may be used to bind a cache. Through the binding performed by this function, subsequent operations can be bound to the corresponding cache. For example, suppose the created caches include cache 1, cache 2, and cache 3. Through glBindBuffer(1), subsequent operations can be bound to cache 1. For example, if a subsequent operation includes an operation to write data (e.g., vertex coordinates), the electronic device may write the vertex coordinates to cache 1 for storage.
The glBufferData function may be used to transfer data. For example, when the data carried by the glBufferData function is not null (NULL), the electronic device can store the data (or a pointer to the data) carried by the glBufferData function in the bound cache. For example, when the glBufferData function carries vertex coordinates, the electronic device may store the vertex coordinates in the bound cache. For another example, when the glBufferData function carries vertex coordinate indices, the electronic device may store the vertex coordinate indices in the bound cache.
The glBufferSubData function may be used to update data. For example, the game application may update some or all of the vertex coordinates through the glBufferSubData function, thereby instructing the electronic device (such as the GPU) to perform drawing and rendering according to the new vertex coordinates.
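For reference, a typical loading sequence written with the standard OpenGL ES signatures (which differ slightly from the simplified call forms used in the description above) might look as follows; the vertex array, its contents, and the buffer usage flag are illustrative assumptions:
#include <GLES3/gl3.h>
static const float vertices[] = { 0.0f, 0.5f, 0.0f, -0.5f, -0.5f, 0.0f, 0.5f, -0.5f, 0.0f }; /* local-space coordinates of three vertices (assumed data) */
void load_model_vertices(void) {
    GLuint buf;
    glGenBuffers(1, &buf);                                                     // create one buffer object; buf receives its ID
    glBindBuffer(GL_ARRAY_BUFFER, buf);                                        // bind it so that later calls operate on this buffer
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW); // upload the vertex data (corresponding to data1)
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices);           // later update of (part of) the data (corresponding to data2)
}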
In this embodiment, the electronic device may intercept an instruction stream of command 1, thereby obtaining an instruction for transmitting vertex data and an MVP matrix. The electronic device may also perform backup storage of the retrieved instructions. For example, the electronic device may store the data in a memory in an area that the CPU can call. Therefore, before the game is run, vertex data (such as vertex coordinates) and an MVP matrix which are possibly used in a subsequent rendering process can be stored in the memory of the electronic device. It will be appreciated that the native command (e.g., the instruction stream in command 1) is used to transfer data to the memory region that the GPU can call, and thus, with the backup storage in this example, the CPU may also have the ability to call the vertex data and MVP matrix, thereby ensuring the implementation of subsequent schemes.
For example, as shown in fig. 6, the interception module in the electronic device may intercept the glGenBuffers function, the glBindBuffer function, the glBufferData function, and the glBufferSubData function included in command 1. The interception module can also transmit these functions to the data processing module for analysis. For example, the data processing module may filter, from the functions received from the interception module, the functions carrying parameter 1. Parameter 1 may be a parameter indicating vertex-related data transmission. In this way, the data processing module can obtain the instructions related to transmitting vertex data. Based on the functions obtained by this filtering, the data processing module may perform backup storage of the vertex data.
Wherein the parameter 1 may be obtained by offline analysis. In some embodiments, the parameter 1 may be pre-saved in the electronic device (e.g., in the data processing module) so that the data processing module may perform the filtering of vertex data related instructions based on the parameter 1. As a possible implementation, the parameter 1 may include GL_ELEMENT_ARRAY_BUFFER and/or GL_ARRAY_BUFFER.
Similar to the interception and backup storage of the vertex data instructions, the glGenBuffers function, the glBindBuffer function, the glBufferData function, and the glBufferSubData function included in command 1 intercepted by the interception module may also be used to transmit the MVP matrix.
Then, the data processing module may also filter, from the functions received from the interception module, the functions carrying parameter 2, so as to obtain the functions used for transmitting the MVP matrix. Parameter 2 may be a parameter indicating MVP matrix transmission. Thus, the data processing module can obtain the instructions related to transmitting the MVP matrix. Based on these filtered functions, the data processing module may then perform backup storage of the MVP matrix.
Wherein the parameter 2 may be obtained by offline analysis. In some embodiments, the parameter 2 may be pre-stored in the electronic device (e.g., in the data processing module), so that the data processing module may perform the filtering of MVP matrix-related instructions based on the parameter 2. As a possible implementation, the parameter 2 may comprise GL_UNIFORM_BUFFER.
The above example is described with the interception module directly transmitting all intercepted instructions, without processing, to the data processing module for analysis. In other embodiments of the present application, the interception module may also have basic analysis capabilities. For example, the interception module may intercept only the glGenBuffers function, the glBindBuffer function, the glBufferData function, and the glBufferSubData function carrying parameter 1 or parameter 2. Parameter 1 may be a parameter indicating vertex-related data transmission, and parameter 2 may be a parameter indicating MVP matrix transmission.
Therefore, the data processing module can directly back up and store the instruction from the interception module. Thereby, the data processing pressure of the data processing module can be relieved.
In the following example, as shown in fig. 6, an example is described in which an interception module intercepts vertex-related instructions (such as an instruction carrying parameter 1) and MVP-related instructions (such as an instruction carrying parameter 2) and transmits the vertex-related instructions and MVP-related instructions to a data processing module for backup storage.
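A much-simplified sketch of how such an interception layer might wrap a buffer-upload function is given below; the hook mechanism, the backup_store helper, and the real_glBufferData pointer are hypothetical names used only for illustration and are not functions defined by OpenGL ES or mandated by this embodiment:
#include <GLES3/gl3.h>
extern void backup_store(GLenum target, const void *data, GLsizeiptr size); /* hypothetical helper: copy the data into the CPU-side backup cache and update the jump table */
static void (*real_glBufferData)(GLenum, GLsizeiptr, const void *, GLenum); /* pointer to the original graphics library entry point */
void hooked_glBufferData(GLenum target, GLsizeiptr size, const void *data, GLenum usage) {
    if (target == GL_ARRAY_BUFFER || target == GL_ELEMENT_ARRAY_BUFFER)
        backup_store(target, data, size);          /* vertex-related data (the parameter 1 case) */
    else if (target == GL_UNIFORM_BUFFER)
        backup_store(target, data, size);          /* uniform data such as the MVP matrix (the parameter 2 case) */
    real_glBufferData(target, size, data, usage);  /* call back into the graphics library so command 1 still executes normally */
}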
The backup storage involved in the embodiment of the present application may be implemented in the form of a jump table. The jump table may be used to indicate the correspondence between a native ID and a backup ID. The native ID may be the cache ID on which the operation indicated by a function carried in command 1 is to be performed. The backup ID may be the ID of a cache that is configured in the memory, can be called by the CPU, and is used for backup storage of data.
Illustratively, the functions of the vertex data related instructions intercepted by the interception module include the following functions:
glGenBuffers(GL_ARRAY_BUFFER, 1) // create a cache for vertex data, with a cache ID of 1;
glBindBuffer(GL_ARRAY_BUFFER, 1) // bind the cache with ID 1 for the vertex data;
glBufferData(GL_ARRAY_BUFFER, data1) // write data1 to the bound cache;
glBufferSubData(GL_ARRAY_BUFFER, data2) // update the data in the bound cache to data2.
Then the native ID in this example may be 1. Take the corresponding backup ID as 11 as an example.
According to the intercepted glGenBuffers(GL_ARRAY_BUFFER, 1), the data processing module can create, in the backup cache, a buffer with ID 11 corresponding to 1.
According to the intercepted glBindBuffer(GL_ARRAY_BUFFER, 1), the data processing module can control subsequent operations to be performed on the cache with ID 11 corresponding to 1.
According to the intercepted glBufferData(GL_ARRAY_BUFFER, data1), the data processing module can write data1 into the storage space with ID 11 in the backup cache.
According to the intercepted glBufferSubData(GL_ARRAY_BUFFER, data2), the data processing module can update the data in the storage space with ID 11 in the backup cache to data2.
Data1 and data2 may include vertex data, such as vertex coordinates, normal vectors of vertices, and the like.
Therefore, backup storage of the vertex data related instruction carried in the command 1 can be realized.
Similar to the backup storage of vertex data, the data processing module may also perform backup storage on the MVP matrix.
Illustratively, the functions of the MVP matrix-related instructions intercepted by the interception module include the following functions:
glGenBuffers(GL_UNIFORM_BUFFER, 2) // create a cache for uniform variables (such as the MVP matrix), with a cache ID of 2;
glBindBuffer(GL_UNIFORM_BUFFER, 2) // bind the cache with ID 2 for the uniform variable (e.g., the MVP matrix);
glBufferData(GL_UNIFORM_BUFFER, data3) // write data3 to the bound cache;
glBufferSubData(GL_UNIFORM_BUFFER, data4) // update the data in the bound cache to data4.
Among them, data3 and data4 may include MVP matrices.
Then the native ID in this example may be 2. Take the corresponding backup ID as 22 as an example.
According to the intercepted glGenBuffers(GL_UNIFORM_BUFFER, 2), the data processing module can create, in the backup cache, a buffer with ID 22 corresponding to 2. The backup cache with ID 22 may be used to store data corresponding to uniform variables; for example, the uniform variables may include the MVP matrix.
According to the intercepted glBindBuffer(GL_UNIFORM_BUFFER, 2), the data processing module can control subsequent operations to be performed on the cache with ID 22 corresponding to 2.
According to the intercepted glBufferData(GL_UNIFORM_BUFFER, data3), the data processing module can write data3 into the storage space with ID 22 in the backup cache.
According to the intercepted glBufferSubData(GL_UNIFORM_BUFFER, data4), the data processing module can update the data in the storage space with ID 22 in the backup cache to data4.
The data3 and data4 may include MVP matrices, such as M matrix, VP matrix, etc.
Therefore, backup storage of the MVP matrix related instruction carried in the command 1 can be realized.
Besides performing backup storage on the instructions and the related data, the data processing module can also store a jump table comprising the corresponding relation between the original ID and the backup ID, so that the ID of the corresponding data in the backup storage can be accurately found according to the ID in the command issued by the subsequent application.
For example, table 1 below shows an example of a jump table.
TABLE 1
Native ID Backup ID
1 11
2 22
…… ……
Based on table 1, when the game application issues an instruction to perform an operation on the cache with ID 1, the electronic device may determine that corresponding data may be stored in the storage space with backup ID 11. Similarly, when the game application issues an instruction to perform an operation on the cache with ID 2, the electronic device may determine that corresponding data may be stored in the storage space with the backup ID 22.
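The jump table itself can be as simple as an array of ID pairs. The following is a minimal sketch; the structure, the capacity, and the lookup function are illustrative names only and are not part of the claimed scheme:
typedef struct { unsigned int native_id; unsigned int backup_id; } IdMapping;
static IdMapping jump_table[64];   /* illustrative fixed capacity */
static int jump_table_len = 0;
/* Returns the backup ID corresponding to a native ID, or 0 if no backup exists. */
unsigned int lookup_backup_id(unsigned int native_id) {
    for (int i = 0; i < jump_table_len; ++i)
        if (jump_table[i].native_id == native_id)
            return jump_table[i].backup_id;
    return 0;
}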
It should be noted that, in order to ensure the smooth execution of the command 1, in this example, the interception module may also call back an instruction (such as a callback instruction a) that does not carry the parameter 1 or the parameter 2 to the graphics library, so as to control a component (such as a GPU) in the hardware layer to execute a corresponding function by calling a corresponding interface in the graphics library. In other implementations, the data processing module may also call back an instruction (such as a call-back instruction b) from the interception module to the graphics library after completing the backup storage of the vertex data and the MVP matrix, so as to control a component (such as a GPU) in the hardware layer to execute a corresponding function by calling a corresponding interface in the graphics library.
Therefore, while the backup storage of the vertex data and the MVP matrix is realized, the complete execution of the command 1 can be realized, so that the execution of the command issued by the subsequent game application is not influenced.
In this embodiment of the application, according to the data stored in the backup during the loading process, the electronic device may implement relevant processing on the command during the game running process, so as to determine the vertex coordinates of the model to be drawn by the current command (i.e., the current Drawcall) and the MVP matrix corresponding to the current Drawcall.
Exemplarily, refer to fig. 7. During game running, the game application may issue command 2. Command 2 may be used to implement rendering that includes drawing the model.
It will be appreciated that in conjunction with the foregoing description, the vertex data of the model to be drawn and the MVP matrix may have been loaded by command 1. That is, the data may have been stored in a memory space that the GPU may call. Then, in command 2, the vertex data and the relevant parameters of the MVP matrix that need to be used may be indicated, so that the GPU may obtain the corresponding vertex data and MVP matrix from the already loaded data, thereby drawing the corresponding model.
In this example, the command 2 issued in the game may include an instruction stream composed of a plurality of instructions (i.e., functions). In order to implement the above functions, a function of binding cache, a function indicating a vertex data parsing manner, and a function indicating relevant parameters of the MVP matrix may be included in the command 2.
As a possible implementation, at least the following instructions may be included in command 2:
a function for binding a cache, such as the glBindBuffer function;
a glVertexAttribPointer function for indicating the vertex data parsing manner;
a glBindBufferRange function for indicating the MVP matrix-related parameters.
Then, the interception module may be configured to intercept these instructions during game running. In some embodiments, the interception module may intercept vertex-related instructions including the glBindBuffer function and the glVertexAttribPointer function. The interception module may also intercept an MVP-related instruction including the glBindBufferRange function. The interception module can also transmit the vertex-related instructions and the MVP-related instruction to the data processing module for analysis.
Similar to the above description of data interception in the loading process, in this example the interception module may also have a certain data parsing capability. The interception module can then intercept the glBindBuffer function carrying parameter 1 (e.g., GL_ELEMENT_ARRAY_BUFFER and/or GL_ARRAY_BUFFER) and the glVertexAttribPointer function. The interception module may also intercept a glBindBufferRange function carrying parameter 2 (e.g., GL_UNIFORM_BUFFER).
Then, after receiving the glBindBuffer function carrying parameter 1 and the glVertexAttribPointer function, the data processing module may determine the vertex coordinates to be used by the current Drawcall accordingly.
In some embodiments of the present application, the data processing module may determine the vertex coordinates to be used by the current Drawcall in conjunction with locally stored parameter 3. The parameter 3 may be determined by static analysis of the current game.
It will be appreciated that the vertex data may include multiple items of vertex-related data. Different data may be stored in different attributes. For example, attribute 0 may be used to store vertex coordinates, attribute 1 may be used to store vertex normal vectors, and so on. For a game application, the ID of the attribute (e.g., 0) used to store the vertex coordinates during running is typically unchanged. Thus, in this example, the ID of the attribute (e.g., 0) used by the current game to store the vertex coordinates may be included in parameter 3.
In this way, after receiving the instructions from the interception module, the data processing module may determine whether the glVertexAttribPointer function is used for transmitting vertex data according to whether the attribute ID indicated by the glVertexAttribPointer function matches parameter 3. In the case where the attribute ID indicated by the glVertexAttribPointer function matches parameter 3, the data processing module may determine that the glVertexAttribPointer function is a function related to vertex coordinates.
Then, the data processing module may determine the storage location of the vertex data indicated by the current Drawcall according to the cache ID bound by the glBindBuffer function preceding the glVertexAttribPointer function. From this, the data processing module may determine the storage location, in the backup-storage cache, of the vertex coordinates indicated by the current Drawcall.
For example, the instructions that the data processing module receives from the interception module include:
glBindBuffer(GL_ARRAY_BUFFER, 1) // bind the cache with ID 1;
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0) // the parameters indicate, in order: the attribute ID is 0, each set of data includes 3 values (e.g., XYZ), the type is floating point, normalization is not required, the stride is 3 × 4 bytes, and the start offset is 0.
Take the attribute ID indicated by the parameter 3 as 0 as an example.
The data processing module may determine that the attribute ID (e.g., 0) indicated by the glVertexAttribPointer function matches parameter 3. In that case, the cache bound by the glBindBuffer(GL_ARRAY_BUFFER, 1) preceding the glVertexAttribPointer function, i.e., the cache with ID 1, is the cache used to transfer the vertex data of the model to be drawn by the current Drawcall.
Next, the data processing module may determine, according to the correspondence between the native ID and the backup ID (as in Table 1), that the cache storing the vertex coordinates of the model to be drawn by the current Drawcall has ID 11 in the backup-stored data. The vertex coordinates of each vertex of the model are then determined according to the attribute ID (i.e., the ID indicated by parameter 3) indicated by the glVertexAttribPointer function and the offset.
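As a rough illustration of this matching step (reusing the lookup_backup_id helper sketched above; the constant, the callback name, and the recorded fields are hypothetical names introduced only for illustration), the data processing module's handling of an intercepted glVertexAttribPointer could look like this:
#define PARAM3_POSITION_ATTRIB 0u           /* parameter 3: attribute ID the game uses for vertex coordinates (assumed) */
static unsigned int bound_array_buffer;     /* native ID most recently bound through glBindBuffer */
/* Returns the backup cache ID holding the vertex coordinates of the current Drawcall, or 0 if the attribute is unrelated. */
unsigned int on_vertex_attrib_pointer(unsigned int attrib, long offset) {
    if (attrib != PARAM3_POSITION_ATTRIB)
        return 0;                                    /* not the vertex-coordinate attribute */
    /* the vertex coordinates of the current Drawcall start at 'offset' inside this backup cache */
    return lookup_backup_id(bound_array_buffer);     /* e.g. native ID 1 -> backup ID 11 */
}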
The above description details the manner in which the vertex coordinates of the current Drawcall model (referred to simply as the current model) are obtained. The following describes the acquisition of the MVP matrix corresponding to the current model.
During game running, the interception module can also intercept the glBindBufferRange function carrying parameter 2 (such as GL_UNIFORM_BUFFER).
The glBindBufferRange function carrying parameter 2 can be transmitted to a data processing module for processing.
For example, in some embodiments of the present application, the data processing module may determine the MVP matrix to be used by the current Drawcall in conjunction with the locally stored parameters 4. The parameter 4 may be determined by static analysis of the current game.
In some embodiments, parameter 4 may include the storage offset of the M matrix and/or the VP matrix. It will be appreciated that, for a given game application, the offset at which the M matrix is stored in the corresponding buffer and the offset of the VP matrix are generally unchanged. Therefore, in this example, the data processing module may determine, according to parameter 4 in combination with the parameters carried by the intercepted glBindBufferRange function, whether that function is used for transmitting the MVP matrix corresponding to the current Drawcall.
In a case where the data processing module determines that the offset carried by the glBindBufferRange function matches with the parameter 4, the data processing module may determine that the glBindBufferRange function is used for transmitting the MVP matrix corresponding to the current Drawcall.
The data processing module may determine, according to the cache ID indicated by the glBindBufferRange function and the jump table (e.g., Table 1), the cache ID, in the backup storage, of the MVP matrix corresponding to the current Drawcall. In addition, the specific storage positions of the M matrix and the VP matrix in that cache can be determined according to the offset indicated by the glBindBufferRange function (or the offset indicated by parameter 4).
For example, the instructions that the data processing module receives from the interception module include:
glBindBufferRange(GL_UNIFORM_BUFFER, 2, 0, 152) // the parameters indicate, in order: the target is GL_UNIFORM_BUFFER, the buffer ID is 2, the offset start address is 0, and the data size is 152.
Take the offset first address 0 indicated by parameter 4 and the data size 152 as an example.
The data processing module may determine that the offset indicated by the currently intercepted glBindBufferRange function matches parameter 4, and therefore that this glBindBufferRange function is used to transfer the MVP matrix corresponding to the current Drawcall. Then, according to the ID (e.g., 2) indicated by the glBindBufferRange function, the data processing module may determine that the MVP matrix corresponding to the current Drawcall corresponds to the native ID 2.
Next, the data processing module may determine, according to the correspondence between the native ID and the backup ID (as in Table 1), that the cache storing the MVP matrix of the model to be drawn by the current Drawcall has ID 22 in the backup-stored data. Based on the offset indicated by the glBindBufferRange function (i.e., the offset indicated by parameter 4), the data processing module can determine the MVP matrix of the model.
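An equally simplified sketch of the corresponding check for the MVP matrix is given below; the constants reuse the illustrative offset 0 and size 152 from the example above, and all names are hypothetical:
#define PARAM4_OFFSET 0L    /* parameter 4: offset at which the M/VP matrices are stored (assumed) */
#define PARAM4_SIZE   152L
/* Returns the backup cache ID holding the MVP matrix for the current Drawcall, or 0 if this range is unrelated. */
unsigned int on_bind_buffer_range(unsigned int buffer_id, long offset, long size) {
    if (offset != PARAM4_OFFSET || size != PARAM4_SIZE)
        return 0;                        /* not the uniform range carrying the MVP matrix */
    return lookup_backup_id(buffer_id);  /* e.g. native ID 2 -> backup ID 22 */
}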
It should be noted that, in this example, similar to the foregoing backup storage process, the interception module may also implement a callback to an instruction in command 2 that is not related to the vertex data and the MVP matrix through the callback instruction c. The data processing module can realize the callback of the instruction related to the vertex data and the MVP matrix in the command 2 through the callback instruction d.
To make the solutions provided by the embodiments of the present application clearer to those skilled in the art, the following exemplarily describes, from the perspective of the instruction stream, the functions of the modules in the process of backing up data during game loading and in the process of determining the vertex data and the MVP matrix during game running.
For example, please refer to fig. 8, which is a schematic flowchart of a process of backing up data when a game is loaded (or started), according to an embodiment of the present application. As shown in fig. 8, the process may include:
S801, upon receiving an instruction P, the interception module determines whether the instruction P is a vertex-related instruction.
In this example, at game load time, the game application may issue an instruction P, which may be used to perform the loading of the vertex data.
If the instruction P is a vertex-related instruction, the following S802 is executed. In the case where the instruction P is not a vertex-related instruction, the following S811 is performed.
In connection with the foregoing example, the vertex-related instructions may include functions carrying parameter 1, such as the glGenBuffers function, the glBindBuffer function, the glBufferData function, and the glBufferSubData function carrying parameter 1.
S802, the interception module sends a vertex correlation instruction to the data processing module. Wherein the vertex related instruction may comprise instruction P.
S803, the data processing module controls the memory to back up and store the vertex-related data.
The memory in this example may correspond to the cache in the above example. The memory (or cache) may be a part of the storage space in the internal storage medium of the electronic device, which may be called by the CPU.
In the above S801 to S803, the processes of intercepting, analyzing, storing and the like of vertex data are similar to the specific implementation in the foregoing description, and are not described again here. Therefore, the backup storage of the vertex data can be realized.
In some embodiments of the present application, in the process of executing S803, the data processing module may further store a corresponding relationship between the native ID and the backup ID for calling of subsequent data.
In the process of executing S801-S803, the electronic device may also implement normal operation of a command issued by the game application, such as normal loading of data, by instruction callback. Illustratively, the process may include:
S811, the interception module calls back an instruction 1-1 to the graphics library.
For example, in the case where the instruction P is not a vertex-related instruction, the intercept module may callback the instruction to the graphics library. For example, the instruction 1-1 may include instruction P.
S812, the graphic library calls the relevant API 1-1. The relevant API 1-1 may be an API called to implement the function of the callback instruction 1-1.
S813, the graphic library sends an instruction 1-1 to the GPU. The instruction 1-1 may carry a code corresponding to the API 1-1.
S814, the GPU executes the operation corresponding to the instruction 1-1.
Similar to the interception module, the data processing module may also call back vertex related instructions. Illustratively, the process may include:
S815, the data processing module calls back an instruction 1-2 to the graphics library. The instruction 1-2 may include instructions that are intercepted by the interception module and transmitted to the data processing module. For example, the instruction may include the vertex-related instructions in instruction P.
S816, the graphic library calls the relevant API 1-2. The API 1-2 may be an API called to implement the function of the callback instruction 1-2.
S817, the graphic library sends the instruction 1-2 to the GPU. The instruction 1-2 may carry a code corresponding to the API 1-2.
S818, the GPU executes the operation corresponding to the instruction 1-2.
Therefore, through S811-S818, call-back of all data in the instruction P is realized, and loading of data in the instruction P is smoothly realized.
In this example, the electronic device may further implement backup storage of the MVP matrix through the following process. Illustratively, the process may include:
S804, upon receiving an instruction Q, the interception module determines whether the instruction Q is an MVP-related instruction.
In this example, at game load time, the game application may issue an instruction Q, which may be used to perform the loading of MVP data. The interception module may intercept a function carrying parameter 2 included in the instruction Q.
For example, the glGenBuffers function, the glBindBuffer function, the glBufferData function, and the glBufferSubData function carrying parameter 2.
In the case where the instruction Q is an MVP-related instruction, the following S805 is executed. In the case where the instruction Q is not an MVP-related instruction, the following S821 is executed.
S805, the interception module sends an MVP related instruction to the data processing module. The MVP-related instruction may include an instruction Q.
S806, the data processing module controls the memory to backup and store the MVP related data.
In the foregoing S804-S806, processes of intercepting, analyzing, and storing the MVP matrix are similar to the specific implementation in the foregoing description, and are not described herein again. Therefore, backup storage of the MVP matrix can be realized.
Similar to the foregoing callback process regarding vertex data, in this example, the electronic device may also perform callback on the MVP instruction, so as to implement normal loading of the MVP matrix. Illustratively, the process may include:
S821, the interception module calls back an instruction 2-1 to the graphics library.
For example, in the case where the instruction Q is not an MVP-related instruction, the interception module may call the instruction back to the graphics library. For example, the instruction 2-1 may include instruction Q.
S822, calling the relevant API 2-1 by the graphic library. The relevant API 2-1 may be an API called to implement the function of the callback instruction 2-1.
S823, the graphics library sends an instruction 2-1 to the GPU. The instruction 2-1 may carry a code corresponding to the API 2-1.
S824, the GPU executes the operation corresponding to the instruction 2-1.
Similar to the interception module, the data processing module may also call back the MVP-related instruction. Illustratively, the process may include:
S825, the data processing module calls back an instruction 2-2 to the graphics library. The instruction 2-2 may include instructions that are intercepted by the interception module and transmitted to the data processing module. For example, the instruction may include the MVP-related instructions in instruction Q.
S826, the graphics library calls the relevant API 2-2. The API 2-2 may be an API called to implement the function of the callback instruction 2-2.
S827, the graphics library sends the instruction 2-2 to the GPU. The instruction 2-2 may carry code corresponding to the API 2-2.
S828, the GPU executes the operation corresponding to the instruction 2-2.
Therefore, through S821-S828, call-backs to all instructions in the instruction Q are realized, and loading of data in the instruction Q is smoothly realized.
In the following, the determination process of the vertex coordinates and the MVP matrix corresponding to the model to be drawn by the current Drawcall during the game running process is exemplified in combination with the view of the instruction stream.
For example, in conjunction with FIG. 9, the game application issues an instruction N to indicate the vertex data of the current model during operation. The process may include:
S901, after receiving the instruction N, the interception module determines whether the instruction N is a vertex-related instruction.
In this example, the vertex-related instructions may be instructions carried in instruction N for indicating the vertex data corresponding to the model to be drawn by the current Drawcall. In some embodiments, these instructions may carry parameter 1 associated with the vertex data. In conjunction with the foregoing description, the vertex-related instructions may include the glVertexAttribPointer function, the corresponding glBindBuffer function, and so on.
In the case where the instruction N is a vertex-related instruction, the following S902 may be performed. In the case that instruction N is not a vertex-related instruction, a callback to instruction N may be performed, such as performing S911.
S902, the interception module sends the vertex correlation instruction to the data processing module. Wherein the vertex related instruction may comprise instruction N.
S903, the data processing module determines the buffer ID used to transmit the vertex data.
For example, when the attribute ID indicated by the glVertexAttribPointer function intercepted by the interception module matches the preset parameter 3, the data processing module may determine that the currently intercepted function is used for indicating the vertex data corresponding to the current Drawcall model.
The data processing module may determine the buffer ID used to transmit the vertex data according to the glBindBuffer function preceding the glVertexAttribPointer function. This cache ID may be a native ID.
S904, the data processing module determines the storage position in the backup storage of the vertex data.
For example, the data processing module may determine the backup ID corresponding to the current native ID from a jump table including a correspondence of the native ID to the backup ID. Therefore, the cache ID of the vertex data corresponding to the model to be drawn by the current Drawcall in the backup storage can be determined. In addition, the storage position of each vertex coordinate in the backup storage can be accurately acquired according to the attribute ID and the offset of the vertex coordinate.
In some embodiments of the present application, after determining the storage location in the backup storage of the vertex data, the data processing module may dump the vertex coordinates to preset location 1 for subsequent recall. In other embodiments, after determining the storage location in the backup storage for the vertex data, the data processing module may mark the location in the backup storage where the current Drawcall corresponding vertex coordinates are stored for subsequent recall.
It should be noted that, in order to ensure normal operation of the instruction N, in this embodiment of the application, the interception module and the data processing module may further perform instruction callback. Illustratively, the process may include:
S911, the interception module calls back an instruction 3-1 to the graphics library.
Illustratively, in the case where instruction N is not a vertex-related instruction, this step may be performed to implement a callback to instruction N. In some embodiments, the instruction 3-1 may include instruction N.
S912, calling the relevant API 3-1 by the graphic library. The API 3-1 may be an API in the graphics library for implementing the function corresponding to the instruction 3-1.
S913, the graphics library sends an instruction 3-1 to the GPU. The instruction 3-1 may include code corresponding to the API 3-1.
S914, the GPU executes the operation corresponding to the instruction 3-1.
It should be noted that, in some embodiments, the execution of S911-S914 may be performed after S902.
Similar to the interception module, the data processing module may also perform instruction callbacks. Illustratively, the process may include:
S915, the data processing module calls back an instruction 3-2 to the graphics library.
Illustratively, the instruction 3-2 may include a vertex related instruction intercepted by the interception module in the instruction N.
S916, the graphic library calls the relevant API 3-2. The API 3-2 may be an API in the graphics library for implementing the function corresponding to the instruction 3-2.
S917, the graphics library sends an instruction 3-2 to the GPU. The instruction 3-2 may include code corresponding to the API 3-2.
S918, the GPU executes the operation corresponding to the instruction 3-2.
In some embodiments, the execution of S915-S918 described above may be performed after S904.
Therefore, the full callback of the instruction N is realized, and the normal execution of the instruction N is ensured.
In this example, the electronic device may further determine a storage location of the MVP matrix corresponding to the current Drawcall in the backup storage through the following procedure. Illustratively, in conjunction with fig. 9, the game application issues an instruction M during running to indicate the MVP matrix of the current model. The process may include:
S905, after receiving the instruction M, the interception module determines whether the instruction M is an MVP-related instruction.
In this example, the MVP related instruction may be an instruction carried in the instruction M to indicate that the current Drawcall is to draw the MVP matrix corresponding to the model. In some embodiments, these instructions may carry a parameter 2 associated with the MVP matrix. In conjunction with the foregoing description, the MVP-related instruction may include a glBindBufferRange function, etc.
In the case where the instruction M is an MVP-related instruction, the following S906 may be performed. In the case that the instruction M is not an MVP-related instruction, a callback to the instruction M may be performed, such as performing S921.
S906, the interception module sends an MVP related instruction to the data processing module.
S907, the data processing module determines the buffer ID of the MVP matrix.
For example, the data processing module may determine that the currently intercepted function is used for indicating the MVP matrix corresponding to the current Drawcall model when the offset indicated by the glBindBufferRange function intercepted by the interception module matches with the preset parameter 4.
The data processing module may determine the buffer ID used to transmit the MVP matrix according to the buffer ID indicated by the glBindBufferRange function. This cache ID may be a native ID.
S908, the data processing module determines the storage position in the backup storage of the MVP matrix.
For example, the data processing module may determine the backup ID corresponding to the current native ID from the jump table including the correspondence between the native ID and the backup ID. Thereby, the cache ID, in the backup storage, of the MVP matrix corresponding to the model to be drawn by the current Drawcall can be determined. In addition, the storage positions of the M matrix and/or the VP matrix in the backup storage can be accurately obtained according to the offset of the MVP matrix.
In some embodiments of the present application, similar to the processing of the vertex data described above, after determining the storage location in the backup storage of the MVP matrix, the data processing module may dump the MVP matrix to preset location 2 for subsequent invocation. In other embodiments, after determining the storage location in the backup storage of the MVP matrix, the data processing module may mark the location in the backup storage where the MVP matrix corresponding to the current Drawcall is stored for subsequent invocation.
It should be noted that, in order to ensure normal operation of the instruction M, in this embodiment of the application, the interception module and the data processing module may also perform instruction callback. Illustratively, the process may include:
S921, the interception module calls back an instruction 4-1 to the graphics library.
Illustratively, in the case where the instruction M is not an MVP-related instruction, this step may be performed to implement a callback to the instruction M. In some embodiments, this instruction 4-1 may comprise instruction M.
S922, the graphic library calls a relevant API 4-1. The API 4-1 may be an API in a graphics library for implementing a function corresponding to the instruction 4-1.
S923, the graphic library sends an instruction 4-1 to the GPU. The instruction 4-1 may include code corresponding to the API 4-1.
S924, the GPU executes the operation corresponding to the instruction 4-1.
It should be noted that, in some embodiments, the execution of S921-S924 may be performed after S906.
Similar to the interception module, the data processing module may also perform instruction callbacks. Illustratively, the process may include:
S925, the data processing module calls back an instruction 4-2 to the graphics library.
Illustratively, the instruction 4-2 may include an MVP related instruction intercepted by the interception module in the instruction M.
S926, the graphics library calls the relevant API 4-2. The API 4-2 may be an API in the graphics library for implementing the corresponding function of the instruction 4-2.
S927, the graphic library sends an instruction 4-2 to the GPU. The instruction 4-2 may include code corresponding to the API 4-2.
And S928, executing the operation corresponding to the instruction 4-2 by the GPU.
In some embodiments, the execution of S925-S928 above may be performed after S908.
Therefore, the full callback of the instruction M is realized, and the normal execution of the instruction M is ensured.
By the above example, the electronic device may obtain the vertex coordinates and MVP matrix of the model corresponding to the current Drawcall.
In the embodiment of the application, the electronic device can also calculate the depth of the current model from the obtained vertex coordinates and MVP matrix, and further render the model with a reasonable shading rate.
For example, in conjunction with the computing module of fig. 10, the computing module may be configured to determine the depth of the graphic (or model) to be drawn by the current drawcall according to the vertex coordinates and the MVP matrix stored in the memory.
Wherein the depth of the model may be determined by the depth of some or all of the vertices on the model.
First, a method of determining the depth of the vertex will be described below.
For example, in some embodiments, in the case that a Drawelement is intercepted by the interception module, the computation of the depth of each vertex by the computation module may be triggered.
It is understood that, in the embodiment of the present application, the calculation module may cooperate with the decision module to determine whether to perform low-rate shading on a part of the vertices by calculating the depths of the respective vertices. Thus, in different implementations of the present application, the trigger mechanism by which the calculation module calculates the depth of each vertex may be different. It is sufficient that, before the corresponding Drawelement is issued to the GPU, the calculation module and/or the decision module already know the vertex depths corresponding to that Drawelement. For example, in this example, the calculation of the depth of each vertex by the calculation module may be triggered by the interception of the Drawelement by the interception module. In other implementations, the calculation module may trigger the calculation of the depth of each vertex after caching the vertex coordinates and the MVP matrix parameters in the memory, so that the calculation module may store the computed depth information in the memory for subsequent use.
In some embodiments of the present application, the calculation module may retrieve the corresponding data according to the storage addresses of the vertex coordinates and the MVP matrix corresponding to the current Drawcall, which are determined by the data processing module, and thereby calculate the depth information of each vertex.
Illustratively, the local coordinates of vertex 1 indicated by the vertex coordinates are (x1, y1, z1). The computation module may perform MVP transformation on the local coordinates of vertex 1 according to formula (1) below, and thereby determine the depth of vertex 1 in the clipping space (or screen space).
clipping coordinate = P·V·M·(x1, y1, z1) ...... formula (1)
Take the clipping coordinates of vertex 1 obtained by calculation as (x2, y2, z2) as an example. The calculation module may determine that the depth of vertex 1 is z2.
Similar to vertex 1, the calculation module may perform depth calculation on the other vertices of the model corresponding to the current drawcall to obtain the depth of each vertex. It is understood that this depth may be the depth in clipping space. The greater the depth, the farther the vertex (or model) is from the user's viewing position when the current frame image is displayed; therefore, shading vertices (or models) with greater depth at a lower rate is less noticeable to the user.
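As an illustration of formula (1), the sketch below transforms a local vertex coordinate by the M, V and P matrices and returns the clip-space z as the vertex depth. It assumes row-major 4×4 matrices stored in plain arrays; the helper names Mul and VertexDepth are illustrative, not part of the embodiments.

```cpp
#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>;  // row-major 4x4 matrix

// Multiply a row-major 4x4 matrix by a column vector.
static Vec4 Mul(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

// Formula (1): clip = P * V * M * (x, y, z, 1). The clip-space z component is
// used as the depth of the vertex.
float VertexDepth(const Mat4& P, const Mat4& V, const Mat4& M,
                  float x, float y, float z) {
    Vec4 local{x, y, z, 1.0f};
    Vec4 clip = Mul(P, Mul(V, Mul(M, local)));
    return clip[2];
}
```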
In some embodiments of the present application, the electronic device may determine the depth of the model corresponding to the current drawcall by building on the vertex depth calculation described above.
For example, as one possible implementation, the electronic device may calculate the depths of all vertices of the current model and take the mean of these depths as the depth of the model. For example, when depth calculation is triggered, the calculation module may retrieve all currently stored vertex coordinates from the memory and transform them with the MVP matrix according to formula (1) above, thereby obtaining the clipping coordinates, and hence the depth, of each vertex. The electronic device may take the mean of these vertex depths as the depth of the model.
In general, the number of vertices of the model corresponding to one drawcall is very large. Therefore, in some implementations of the present application, the electronic device may select only a portion of the stored vertex coordinates for depth calculation, and determine the depth of the current model based on the depths computed for that portion. This reduces the calculation overhead.
For example, the electronic device may randomly select n vertices from the vertices of the current drawcall corresponding model according to a preset number of vertices (e.g., n vertices). The depth of the current model is obtained by determining the depths of the n vertices and taking the average value.
As a possible implementation, when depth calculation is triggered, the calculation module may retrieve n randomly chosen vertex coordinates from all the vertex coordinates currently stored in the memory. The calculation module may transform these n vertex coordinates with the MVP matrix according to formula (1) above, thereby obtaining the clipping coordinates of the n vertices. The calculation module may obtain the depth of each of the n vertices from its clipping coordinates, and may then take a weighted average of these depths to obtain the depth of the model. The weights involved in the weighted average may be preset or flexibly adjusted.
In the above example, the calculation module implements the random selection of n vertices when retrieving vertex coordinates from the memory. In other implementations, the calculation module may instead retrieve all stored vertex coordinates from the memory and, during the depth calculation, select n of them for depth calculation while leaving the remaining vertex coordinates unprocessed or discarding them. Random selection of n vertex coordinates can also be achieved in this way.
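A minimal sketch of this sampling strategy, under the assumption that a plain (unweighted) mean is acceptable, is shown below. It reuses the Mat4 type and the VertexDepth helper from the previous sketch; the name SampledModelDepth is illustrative.

```cpp
#include <algorithm>
#include <array>
#include <numeric>
#include <random>
#include <vector>

// Estimate the model depth from a random subset of n vertices.
// 'vertices' holds local coordinates as (x, y, z) triples.
float SampledModelDepth(const std::vector<std::array<float, 3>>& vertices,
                        const Mat4& P, const Mat4& V, const Mat4& M,
                        size_t n, std::mt19937& rng) {
    if (vertices.empty()) return 0.0f;
    n = std::min(n, vertices.size());

    // Pick n distinct vertex indices at random.
    std::vector<size_t> idx(vertices.size());
    std::iota(idx.begin(), idx.end(), 0);
    std::shuffle(idx.begin(), idx.end(), rng);

    float sum = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        const auto& v = vertices[idx[i]];
        sum += VertexDepth(P, V, M, v[0], v[1], v[2]);
    }
    return sum / static_cast<float>(n);  // plain mean; weights could differ
}
```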
In other embodiments of the present application, the electronic device may adopt different strategies to determine the depth of the model according to the size of the current drawcall corresponding model. Wherein the size of the model can be identified by the number of vertices.
Take as an example the case where interception of the Drawelement by the interception module triggers the depth calculation of the calculation module.
The electronic device may determine the number of vertices of the current model according to the GLsizei count value indicated in the intercepted Drawelement.
The electronic device may determine whether the current model is a large model or a small model according to the relationship between the determined vertex count and a preset vertex count threshold. For example, the vertex count threshold may be 5000. Then, when the count value is greater than 5000, the current model is considered a large model. Correspondingly, when the count value is less than 5000, the current model is considered a small model.
According to the embodiment of the application, different depth calculation strategies are provided for the large model and the small model respectively, so that the depth of the model can be calculated more reasonably.
For example, for a large model, the electronic device may select 6 vertices with the largest or smallest x, y, and z components among all vertices of the current model. The depths of the 6 vertices are calculated and averaged to determine the depth of the current model.
For example, suppose the vertex P1 with the largest x component among the vertices of the current model has coordinates (x_max, y1, z1), the vertex P2 with the smallest x component has coordinates (x_min, y2, z2), the vertex P3 with the largest y component has coordinates (x3, y_max, z3), the vertex P4 with the smallest y component has coordinates (x4, y_min, z4), the vertex P5 with the largest z component has coordinates (x5, y5, z_max), and the vertex P6 with the smallest z component has coordinates (x6, y6, z_min).
After MVP matrix transformation, the coordinates of P1-P6 in clipping coordinates are respectively: P1(x'_max, y'1, z'1), P2(x'_min, y'2, z'2), P3(x'3, y'_max, z'3), P4(x'4, y'_min, z'4), P5(x'5, y'5, z'_max), P6(x'6, y'6, z'_min).
Then, the electronic device may determine the depth of the current model as (z'1 + z'2 + z'3 + z'4 + z'_max + z'_min)/6.
Correspondingly, for small models, the electronic device may randomly choose n vertices among all vertices of the current model, calculate the depths of these n vertices, and take their average as the depth of the current model. For example, n may be preconfigured, e.g., n may take any integer between 20 and 100.
It is understood that, in the above example, for a large model the electronic device selects the most peripheral vertices for calculating the depth of the model, so that even when the depths of different vertices of the model differ greatly, the depth of the large model can still be identified truly and accurately by this calculation method. For a small model, the limited size of the model means that the depths of its different vertices do not differ greatly; therefore, in this example, a preset number (or a random number) of vertices may be selected from the model, and their depths calculated and averaged to determine the depth of the model.
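As an illustration of the large-model strategy, the sketch below selects the six extreme vertices (largest and smallest x, y and z components), transforms them, and averages their clip-space z. It reuses VertexDepth from the earlier sketch; LargeModelDepth is an illustrative name, and the caller is assumed to have already classified the model as large via the vertex-count threshold.

```cpp
#include <algorithm>
#include <array>
#include <vector>

// Depth estimate for a "large" model from its six extreme vertices.
float LargeModelDepth(const std::vector<std::array<float, 3>>& vertices,
                      const Mat4& P, const Mat4& V, const Mat4& M) {
    if (vertices.empty()) return 0.0f;
    float sum = 0.0f;
    for (int axis = 0; axis < 3; ++axis) {
        auto lessOnAxis = [axis](const std::array<float, 3>& a,
                                 const std::array<float, 3>& b) {
            return a[axis] < b[axis];
        };
        // Vertices with the smallest and largest component on this axis.
        auto [minIt, maxIt] =
            std::minmax_element(vertices.begin(), vertices.end(), lessOnAxis);
        sum += VertexDepth(P, V, M, (*minIt)[0], (*minIt)[1], (*minIt)[2]);
        sum += VertexDepth(P, V, M, (*maxIt)[0], (*maxIt)[1], (*maxIt)[2]);
    }
    return sum / 6.0f;  // mean depth of the six extreme vertices
}
```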
In the above example, the z coordinate value in clipping coordinates is taken as the depth of the corresponding vertex. In other embodiments of the present application, the depth of a vertex may also be determined in other ways than the z coordinate value in clipping coordinates. For example, the vertex depth may be determined from the z coordinate of the vertex in Normalized Device Coordinates (NDC). For example, after the coordinates of a vertex in clipping space are obtained, the z coordinate in NDC space may be obtained by perspective division, and this z coordinate can be used to identify the depth of the vertex. Furthermore, in the above example, determining the model depth from the clipping-space depth z, which identifies how near or far the model appears to the user in the image, is only one possible implementation. In other examples of the present application, the nearness of the model (i.e., the depth of the model) may also be identified by the distance of the model in the viewing space based on the viewing coordinates. That is, the nearness of the model may be determined based on the distance from the model to the camera in the observation space.
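The NDC-based variant mentioned above differs only by the perspective division. The short sketch below, again reusing Mul, Vec4 and Mat4 from the earlier sketch, is an illustration; VertexDepthNdc is a hypothetical name.

```cpp
// NDC depth: divide the clip-space z by w (perspective division).
float VertexDepthNdc(const Mat4& P, const Mat4& V, const Mat4& M,
                     float x, float y, float z) {
    Vec4 clip = Mul(P, Mul(V, Mul(M, Vec4{x, y, z, 1.0f})));
    return clip[3] != 0.0f ? clip[2] / clip[3] : clip[2];  // z_ndc = z_clip / w_clip
}
```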
It can be seen that, in the foregoing description of the present application, each module in the electronic device may obtain the depth of the model by obtaining the vertex coordinates of the model and processing them. In other implementations of the present application, each module in the electronic device may also determine the depth of the model by obtaining other information about the model, for example, by obtaining bounding box information of the model. The interception module may be configured to intercept instructions related to bounding box information in the rendering command and transmit the information to the data processing module. The data processing module and/or the calculation module may be configured to determine the depth of the current model based on the intercepted bounding box information.
Therefore, the electronic equipment can obtain the depth of the current drawcall corresponding model through the calculation of the calculation module.
In this embodiment, the calculation module may cooperate with the decision module to determine whether to perform variable rate coloring on the current drawcall corresponding model, such as performing lower rate coloring on the current drawcall corresponding model.
For example, the decision module may determine the shading rate to be applied to the model according to the depth of the model obtained by the calculation module. In this embodiment, the decision module may be configured with (or may retrieve from a memory) the shading rates corresponding to different model depths. If the shading rate corresponding to the current model depth is a lower shading rate (i.e., coarser than shading in units of 1 pixel), the decision module may call the corresponding variable rate shading API from the graphics library of the system library when passing the draw instruction corresponding to the drawcall to the hardware layer.
As an example, table 2 below shows a model depth versus shading rate correspondence.
TABLE 2
Model depth    Shading rate
[1,10) 1×1
(10,50] 2×1
(50,100] 2×2
(100,200] 4×2
Greater than 200 4×4
As shown in table 2, in case the model depth is less than 10, then the decision module may determine that the current model is colored using a coloring rate of 1 × 1. I.e., the depth of the current model is small, and there is no need to use a variable rate coloring mechanism. Then, the decision module may directly issue the rendering instruction corresponding to the rendering command issued by the game application to the hardware layer (e.g., a GPU issued to the hardware layer) without calling the variable rate shading API, so that the GPU may execute the rendering operation of the current rendering command in units of 1 pixel.
In the case where the model depth is greater than 10 and less than or equal to 50, then the decision module may determine that the current model is to be colored using a 2 × 1 coloring rate. The decision module may call the variable rate shading API when issuing rendering instructions corresponding to the rendering commands to a hardware layer (e.g., a GPU of the hardware layer), and set the shading rate to 2 × 1, so that the GPU performs the rendering operation of the current rendering commands at the shading rate of 2 × 1.
In the case where the model depth is greater than 50 and less than or equal to 100, then the decision module may determine that the current model is to be colored using a 2 × 2 coloring rate. The decision module may call the variable rate shading API when issuing rendering instructions corresponding to the rendering commands to a hardware layer (e.g., a GPU of the hardware layer), and set the shading rate to 2 × 2, so that the GPU performs the rendering operation of the current rendering commands at the shading rate of 2 × 2.
In the case where the model depth is greater than 100 and less than or equal to 200, then the decision module may determine that the current model is to be colored using a 4 × 2 coloring rate. The decision module may call the variable rate shading API when issuing rendering instructions corresponding to the rendering commands to a hardware layer (e.g., a GPU of the hardware layer), and set the shading rate to 4 × 2, so that the GPU performs the rendering operation of the current rendering commands at the shading rate of 4 × 2.
In the case where the model depth is greater than 200, then the decision module may determine that the current model is to be colored using a 4 × 4 coloring rate. The decision module may call the variable rate shading API when issuing rendering instructions corresponding to the rendering commands to a hardware layer (e.g., a GPU of the hardware layer), and set the shading rate to 4 × 4, so that the GPU performs the rendering operation of the current rendering commands at the shading rate of 4 × 4.
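As an illustration of this decision step, the sketch below maps a model depth to a shading rate according to Table 2 and invokes a variable rate shading entry point only when the chosen rate is coarser than 1×1. SetShadingRate is a placeholder for whatever variable rate shading API the graphics library actually exposes; it is not a real function name, and the thresholds simply restate Table 2.

```cpp
// Shading rates from Table 2, expressed as pixel-block sizes.
struct ShadingRate { int w; int h; };

// Map a model depth to a shading rate according to Table 2.
ShadingRate RateForDepth(float depth) {
    if (depth > 200.0f) return {4, 4};
    if (depth > 100.0f) return {4, 2};
    if (depth > 50.0f)  return {2, 2};
    if (depth > 10.0f)  return {2, 1};
    return {1, 1};  // small depth: no variable rate shading needed
}

// Only a rate coarser than 1x1 requires calling the variable rate shading API
// before issuing the draw call; SetShadingRate stands in for that API.
void ApplyRate(const ShadingRate& r, void (*SetShadingRate)(int, int)) {
    if (r.w > 1 || r.h > 1) SetShadingRate(r.w, r.h);
}
```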
In addition, the interception module may call back other instructions of command 2 to the graphics library, so that by calling a corresponding API in the graphics library, the corresponding rendering operation may be performed at the coloring rate determined in the above scheme.
In order to make the solution provided by the embodiment of the present application clearer for those skilled in the art, the following description is continued from the viewpoint of instruction flow. Illustratively, referring to fig. 11, the process may include:
S1101, the interception module and the data processing module determine the vertex coordinates and the MVP matrix of the current model in the backup storage.
For example, the interception module may determine a storage location of the vertex coordinates of the current model in the backup storage according to an instruction N issued by the game application. The data processing module can determine the storage position of the MVP matrix of the current model in the backup storage according to an instruction M issued by game application.
For a specific implementation process of S1101, reference may be made to the relevant description of fig. 9, and details are not repeated here.
S1102, the calculation module acquires the vertex coordinates and the MVP matrix from the memory.
For example, in the case where the data processing module stores the vertex coordinates and the MVP matrix in a specific location, the calculation module may acquire the vertex coordinates and the MVP matrix from the specific location. In other embodiments, when the data processing module identifies the vertex coordinates and the storage location of the MVP matrix, the calculation module may obtain the vertex coordinates and the MVP matrix from the memory according to the identification.
In some embodiments of the present application, the calculation module may obtain the vertex coordinates and the MVP matrix after the game application issues the instruction R. One or more Drawelements can be included in the instruction R.
For example, the interception module may instruct the calculation module to execute S1102 upon receiving the instruction R.
S1103, the calculation module calculates the model depth.
S1104, the calculation module sends the model depth to the decision module.
S1105, the decision module determines the coloring rate 1 of the current model.
For the process of determining the model depth and determining the corresponding coloring rate (e.g., coloring rate 1) according to the model depth, reference may be made to the above description, and details are not repeated here.
S1106, the decision module calls the corresponding API of the graphics library. Illustratively, the corresponding API may be the variable rate shading API 1 corresponding to shading rate 1.
S1107, the graphics library correspondingly calls variable rate shading API 1.
In this way, the electronic device may call a corresponding API in the graphics library according to the depth of the model, so as to control a component (e.g., GPU) in the hardware layer to perform corresponding variable rate shading.
In conjunction with the foregoing description, in this example, the electronic device may call back the received command of the game application, so that the command is executed correctly, in addition to executing the above S1101-S1107.
Illustratively, the process may include:
S1108, the interception module and the data processing module call back the related instructions.
The related instructions may include a callback instruction 3-1 and a callback instruction 3-2 corresponding to the instruction N, and a callback instruction 4-1 and a callback instruction 4-2 corresponding to the instruction M. For the callback process and the content, please refer to the description of fig. 9, which is not repeated herein.
In this example, the correlation instruction may also include an instruction of instruction R.
S1109, calling the corresponding API by the graphic library. The corresponding APIs may include APIs for implementing the functions indicated by instructions N, M, and R.
S1110, the graphic library sends an instruction 2 to the GPU. The instruction 2 may include the corresponding code obtained by calling the API described above. Illustratively, the instruction 2 may include corresponding code obtained by calling the variable rate coloring API 1, based on which variable rate coloring of the coloring rate 1 may be implemented.
S1111, the GPU executes rendering operation corresponding to the rendering rate 1.
Therefore, the electronic equipment can flexibly adjust the use mechanism of variable-rate coloring according to the depth of the model in the current rendering command by taking drawcall as granularity, so that the effect of reducing the rendering overhead in the coloring process is achieved while the user experience is not influenced.
It should be understood that, in the above example, the game application issues a command to perform a model drawing as an example, and the specific implementation details thereof are explained. In the actual implementation process, the game application can issue a plurality of commands to draw a plurality of models. In the process of drawing each model, the above scheme can also be adopted to realize the variable rate coloring which is self-adaptive according to the depth of each model.
Illustratively, command 1 is used for data loading, command 2 is used for drawing model 1, and command 3 is used for drawing model 2.
Referring to FIG. 12, the gaming application may issue command 1, command 2, and command 3, respectively.
For command 1, the modules configured in the framework layer may perform backup storage of the vertex data and/or the MVP matrix according to the scheme shown in fig. 6 or fig. 8. Meanwhile, each module in the framework layer may also use the callback mechanism shown in fig. 8 to implement the loading of the data corresponding to command 1. For example, the framework layer may call the corresponding API in the graphics library according to command 1, so as to issue instruction 1 corresponding to command 1 to the GPU, thereby completing the loading of the data.
For command 2, each module of the framework layer may determine the depth of the model 1 to be drawn by the command 2 according to the schemes shown in fig. 7, fig. 9, fig. 10, and fig. 11, and further determine the rendering rate (e.g., rendering rate 1) corresponding to the model 1. In conjunction with the callback mechanism, the framework layer may call a corresponding API in the graphics library based on command 2, and shading rate 1, to issue instruction 2 to the GPU. The GPU may render model 1 according to the rendering parameters of rendering rate 1, based on instruction 2. This makes it possible to obtain the model 1 shown in fig. 12. As an example, model 1 may be stored in a pre-allocated frame buffer corresponding to command 2 (e.g., frame buffer 1).
Similar to command 2, for command 3, each module of the framework layer may determine, according to the depth of model 2 to be drawn by command 3, the rendering rate (e.g., rendering rate 2) corresponding to model 2 according to the schemes shown in fig. 7, fig. 9, fig. 10, and fig. 11. In conjunction with the callback mechanism, the framework layer may call a corresponding API in the graphics library based on command 3, and shading rate 2, to issue instruction 3 to the GPU. The GPU may render model 2 according to the rendering parameters of rendering rate 2, based on instruction 3. Thereby, the model 2 as shown in fig. 12 can be obtained. As an example, model 2 may be stored in a pre-allocated frame buffer corresponding to command 3 (e.g., frame buffer 2).
Take the example that model 1 and model 2 belong to the same frame image. Before presenting the frame of image, the GPU may also render the images on frame buffer 1 and frame buffer 2 onto a base frame buffer (e.g., frame buffer 0), thereby obtaining the full content of the frame of image. Illustratively, referring to FIG. 13, the GPU may render model 1 and model 2 onto the same canvas.
It should be appreciated that, with the variable rate coloring mechanism provided by embodiments of the present application, models with different depths may be colored at different rates. Take the example that the depth of model 1 is less than the depth of model 2, i.e., model 1 is closer to the user while model 2 is further away from the user. In conjunction with the foregoing, model 1, with the smaller depth, can be colored with a higher coloring rate, for example a coloring rate of 1 × 1. In the process of rendering model 1, each pixel included in model 1 may be rendered in units of one pixel. Correspondingly, model 2, with the greater depth, can be colored with a lower coloring rate, for example a coloring rate of 2 × 2. Then the pixels included in model 2 may be rendered in units of 2 × 2 pixels during the rendering of model 2.
Thus, after the rendering operations are completed (e.g., after all rendering operations are completed and the two models are rendered on a canvas), it can be seen that the color of each pixel can be different among the pixels included in model 1. However, in the pixels included in model 2, there are a large number of adjacent 2 × 2 pixels having the same or similar colors. Therefore, colors of the model which is closer to the user can be rendered and displayed more accurately. Correspondingly, the color granularity of models farther away from the user may be larger, thereby saving corresponding rendering overhead.
The foregoing mainly introduces aspects of the embodiments of the present application from the perspective of electronic devices. In order to implement the above functions, it includes a hardware structure and/or a software module for performing each function. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the functional modules of the devices involved in the method may be divided according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, in the embodiment of the present application, the division of the module is schematic, and is only one logic function division, and there may be another division manner in actual implementation.
Fig. 14 shows a schematic block diagram of an electronic device 1400. As shown in fig. 14, the electronic device 1400 may include: a processor 1401, and a memory 1402. The memory 1402 is used to store computer-executable instructions. For example, in some embodiments, the processor 1401, when executing the instructions stored by the memory 1402, can cause the electronic device 1400 to perform any of the image rendering methods illustrated in the embodiments described above.
It should be noted that all relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
The functions or actions or operations or steps, etc., in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented using a software program, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions described in accordance with the embodiments of the application are all or partially generated when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wire (e.g., coaxial cable, fiber optic, digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or can comprise one or more data storage devices, such as a server, a data center, etc., that can be integrated with the medium. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid State Disk (SSD)), among others.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to include such modifications and variations.

Claims (17)

1. An image rendering method is applied to electronic equipment, an application program is installed in the electronic equipment, an interception module and a data processing module are configured in the electronic equipment, and the method comprises the following steps:
acquiring a first rendering command issued by the application program, wherein the first rendering command is used for drawing a first model;
the obtaining of the first rendering command issued by the application program includes:
the interception module intercepts the first rendering command;
the interception module transmits the intercepted first function and second function to the data processing module;
the first function is a function carrying a first parameter, the second function is a function carrying a second parameter, the first parameter is a parameter carried in the process of transferring the vertex data by the application program, and the second parameter is a parameter carried in the process of transferring the MVP matrix by the application program;
determining a first shading rate for the first model;
rendering the first model according to the first rendering commands and the first shading rate;
acquiring a second rendering command issued by the application program, wherein the second rendering command is used for drawing a second model;
determining a second shading rate for the second model;
rendering the second model according to the second rendering commands and the second shading rate;
the first model and the second model are included in the same frame image, and the first shading rate and the second shading rate are different.
2. The method of claim 1, wherein determining the first coloring rate for the first model comprises:
determining a depth of the first model according to the first rendering command, wherein the depth of the first model is used for identifying the distance between the first model and an observer in a viewing space or a clipping space, and the larger the depth of the first model is, the farther the distance between the first model and the observer is;
determining the first shading rate according to the depth of the first model.
3. The method of claim 2, wherein determining the depth of the first model from the first rendering commands comprises:
determining depths of n vertices of the first model according to the first rendering command;
determining the depth of the first model according to the depths of the n vertexes;
wherein n vertices are included in the vertices of the first model.
4. The method according to any one of claims 1-3, further comprising:
and the interception module calls back functions except the first function and the second function in the first rendering command to a graphic library of the electronic equipment.
5. The method of claim 4, wherein after the intercepting module transmits the intercepted first and second functions to the data processing module, the method further comprises:
the data processing module determines a first storage location of vertex coordinates corresponding to the first model and a second storage location of a first MVP matrix according to the first function and the second function, where the first storage location and the second storage location are included in a first storage area, and the first storage area is a storage area that can be called by a processor of the electronic device.
6. The method of claim 5, further comprising:
and the data processing module recalls the first function and the second function to a graphic library of the electronic equipment.
7. The method of claim 5, wherein prior to obtaining the first rendering command issued by the application, the method further comprises:
the intercepting module intercepts a third rendering command issued by the application program, the third rendering command is used for storing first data used in the running process of the application program in a second storage area, the second storage area is a storage area used by a GPU of the electronic equipment, and the first data comprises vertex coordinates of the first model and the first MVP matrix;
the interception module transmits a third function and a fourth function to a data processing module, wherein the third function carries the first parameter, and the fourth function carries the second parameter;
and the data processing module stores the data corresponding to the third function and the fourth function in the first storage area.
8. The method of claim 7, further comprising:
and the interception module calls back functions except the third function and the fourth function in the third rendering command to a graphic library of the electronic equipment.
9. The method of claim 7 or 8, further comprising:
and the data processing module calls the third function and the fourth function back to a graphic library of the electronic equipment.
10. The method according to claim 7 or 8, characterized in that the method further comprises:
the data processing module stores a first corresponding relation, and the first corresponding relation is used for indicating the corresponding relation between the storage address of the same data in the first storage area and the storage address of the same data in the second storage area.
11. The method of claim 10, wherein the determining, by the data processing module, a first storage location of vertex coordinates corresponding to the first model and a second storage location of the first MVP matrix according to the first function and the second function comprises:
the data processing module determines a first storage position of the vertex coordinate corresponding to the first model according to the cache position indicated by the first function and the first corresponding relation;
and the data processing module determines a second storage position of the first MVP matrix according to the cache position indicated by the second function and the first corresponding relation.
12. The method of claim 11, wherein a computing module is further configured in the electronic device, and wherein determining the depths of the n vertices of the first model according to the first rendering command comprises:
the interception module, in the event of intercepting a drawing element Drawelement in the first rendering command,
the calculation module calculates the depths of the n vertexes according to the vertex coordinates corresponding to the first model and the first MVP matrix; the vertex data corresponding to the first model is obtained by the calculation module from the first storage position, and the first MVP matrix is obtained by the calculation module from the second storage position.
13. The method of claim 12, wherein determining the depth of the first model from the depths of the n vertices comprises:
the computation module determines a depth of the first model as a mean of depths of the n vertices.
14. The method of claim 12 or 13, wherein a decision module is further configured in the electronic device, and wherein the method further comprises:
the calculation module transmits the depth of the first model to the decision module;
the decision module searches a table item matched with the depth of the first model from a preset second corresponding relation according to the depth of the first model;
the decision module determines the shading rate indicated by the matching entry as the first shading rate.
15. The method of claim 14, further comprising:
the decision module transmits shading instructions to the graphics library indicating the first shading rate.
16. The method of claim 15, wherein said rendering the first model according to the first rendering commands and the first shading rate comprises:
the graphics library calls a first Application Programming Interface (API) corresponding to the first rendering command according to the functions called back by the interception module and the data processing module;
the graphics library calls a first variable rate shading API corresponding to the first shading rate according to the shading instruction;
and the graphic library issues a rendering instruction to a GPU of the electronic equipment according to the first application programming interface API and the first variable rate coloring API, so that the GPU can perform rendering operation on the first model according to the first coloring rate.
17. An electronic device, comprising one or more processors and one or more memories; the one or more memories coupled with the one or more processors, the one or more memories storing computer instructions;
the computer instructions, when executed by the one or more processors, cause the electronic device to perform the image rendering method of any of claims 1-16.
CN202110951444.3A 2021-08-18 2021-08-18 Image rendering method and electronic equipment Active CN113837920B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110951444.3A CN113837920B (en) 2021-08-18 2021-08-18 Image rendering method and electronic equipment
CN202310227425.5A CN116703693A (en) 2021-08-18 2021-08-18 Image rendering method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110951444.3A CN113837920B (en) 2021-08-18 2021-08-18 Image rendering method and electronic equipment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310227425.5A Division CN116703693A (en) 2021-08-18 2021-08-18 Image rendering method and electronic equipment

Publications (2)

Publication Number Publication Date
CN113837920A (en) 2021-12-24
CN113837920B (en) 2023-03-10

Family

ID=78960776

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110951444.3A Active CN113837920B (en) 2021-08-18 2021-08-18 Image rendering method and electronic equipment
CN202310227425.5A Pending CN116703693A (en) 2021-08-18 2021-08-18 Image rendering method and electronic equipment

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202310227425.5A Pending CN116703693A (en) 2021-08-18 2021-08-18 Image rendering method and electronic equipment

Country Status (1)

Country Link
CN (2) CN113837920B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117635789A (en) * 2022-08-18 2024-03-01 华为技术有限公司 Coloring method, coloring device and electronic equipment
CN117710404A (en) * 2022-09-07 2024-03-15 荣耀终端有限公司 Image processing method and electronic equipment
CN116401062B (en) * 2023-04-13 2023-09-12 北京大学 Method and device for processing server non-perception resources and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10275547B2 (en) * 2015-06-03 2019-04-30 The Mathworks, Inc. Method and system for assessing performance of arbitrarily large arrays
US9916682B2 (en) * 2015-10-28 2018-03-13 Intel Corporation Variable precision shading
CN106504185B (en) * 2016-10-26 2020-04-07 腾讯科技(深圳)有限公司 Rendering optimization method and device
CN111062858B (en) * 2019-12-27 2023-09-15 西安芯瞳半导体技术有限公司 Efficient rendering-ahead method, device and computer storage medium

Also Published As

Publication number Publication date
CN116703693A (en) 2023-09-05
CN113837920A (en) 2021-12-24

Similar Documents

Publication Publication Date Title
CN113726950B (en) Image processing method and electronic equipment
CN115866121B (en) Application interface interaction method, electronic device and computer readable storage medium
CN113837920B (en) Image rendering method and electronic equipment
CN109559270B (en) Image processing method and electronic equipment
US11762529B2 (en) Method for displaying application icon and electronic device
CN113254120B (en) Data processing method and related device
CN112383664B (en) Device control method, first terminal device, second terminal device and computer readable storage medium
CN113409427A (en) Animation playing method and device, electronic equipment and computer readable storage medium
CN114222187B (en) Video editing method and electronic equipment
CN112835501A (en) Display method and electronic equipment
CN116208704A (en) Sound processing method and device
CN117234398A (en) Screen brightness adjusting method and electronic equipment
CN114708289A (en) Image frame prediction method and electronic equipment
CN113448658A (en) Screen capture processing method, graphical user interface and terminal
WO2022078116A1 (en) Brush effect picture generation method, image editing method and device, and storage medium
CN117009005A (en) Display method, automobile and electronic equipment
CN114283195A (en) Method for generating dynamic image, electronic device and readable storage medium
CN115291779A (en) Window control method and device
CN113495733A (en) Theme pack installation method and device, electronic equipment and computer readable storage medium
CN117764853B (en) Face image enhancement method and electronic equipment
CN112783993B (en) Content synchronization method for multiple authorized spaces based on digital map
WO2022206600A1 (en) Screen projection method and system, and related apparatus
CN116266159B (en) Page fault exception handling method and electronic equipment
CN114356186A (en) Method for realizing dragging shadow animation effect and related equipment
CN117692714A (en) Video display method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant