CN117710180A - Image rendering method and related equipment - Google Patents

Image rendering method and related equipment

Info

Publication number
CN117710180A
Authority
CN
China
Prior art keywords
model
image
rendering
rate
coloring
Prior art date
Legal status
Pending
Application number
CN202311005225.1A
Other languages
Chinese (zh)
Inventor
龙云
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202311005225.1A
Publication of CN117710180A
Legal status: Pending

Landscapes

  • Image Generation (AREA)

Abstract

The present application relates to the technical field of image processing and provides an image rendering method and related device, aiming to reduce the power consumption overhead of the rendering process. The image rendering method is applied to an electronic device in which an application program is installed, the application program issuing a rendering instruction stream to render a first model in a first image. The image rendering method includes: intercepting a specific instruction in the rendering instruction stream; obtaining a center position of the first model according to the specific instruction, the center position being the position of the center point of the first model in the first image; determining a shading rate of the first model according to the center position of the first model and a target area of the first image, the shading rate of the first model being lower than or equal to the shading rate of the target area; and rendering the first model according to the shading rate of the first model. By setting a suitable shading rate for each model, the shading rate of parts of the image is reduced, which in turn reduces power consumption.

Description

Image rendering method and related equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image rendering method and related devices.
Background
When an electronic device renders an image, the rendering includes shading the image. Typically, a developer sets a shading rate through software algorithms and image post-processing techniques, and that shading rate is applied to the entire image. The graphics processor (graphics processing unit, GPU) of the electronic device then shades each pixel of the image individually to complete rendering of the whole image. However, as image resolution increases and rendered scenes become more complex, rendering imposes a higher load on the electronic device, for example increasing the compute and power consumption overhead of the rendering process.
Disclosure of Invention
The present application provides an image rendering method and related device to address the increased compute and power consumption overhead of the rendering process.
In a first aspect, an embodiment of the present application provides an image rendering method applied to an electronic device in which an application program is installed, the application program issuing a rendering instruction stream to render a first model in a first image. The method includes: intercepting a specific instruction in the rendering instruction stream; obtaining a center position of the first model according to the specific instruction, the center position being the position of the center point of the first model in the first image; determining a shading rate of the first model according to the center position of the first model and a target area of the first image, the shading rate of the first model being lower than or equal to the shading rate of the target area; and rendering the first model according to the shading rate of the first model.
With this image rendering method, different shading rates can be set for different areas of the image to be displayed, so that the electronic device can reduce its rendering workload, memory usage, and bandwidth for some areas by lowering the shading rate of those areas. By setting the shading rates of the different areas appropriately, power consumption and heat generation during rendering are reduced without a noticeable effect on how the rendered image looks to the user; the power consumption and heating of the electronic device are therefore reduced and the user experience is improved. Specifically, in the target area of the image that the user pays attention to, changes in the shading precision of a model are easily perceived, so a higher shading rate can be used for the target area to obtain a high-precision shading effect. In areas the user perceives poorly (such as areas outside the target area), changes in shading precision are not easily noticed, so those areas can be shaded quickly at a lower shading rate to reduce power consumption. Furthermore, by extending the software framework of the electronic device, the electronic device obtains specific instructions from the rendering instruction stream, from which the center position of a model can be determined quickly and accurately; the relationship between the center position of the model and the target area of the image can then be determined quickly and accurately, and a suitable shading rate can be set for the model based on that relationship, which lays the groundwork for reducing the compute and power consumption overhead of the subsequent rendering.
In one possible implementation, the specific instruction includes a first specific instruction, and obtaining the center position of the first model according to the specific instruction includes: obtaining vertex data of each vertex of the first model and the MVP matrix corresponding to the first model according to the first specific instruction; and determining the position of the center point of the first model in the first image according to the vertex data and the MVP matrix.
The vertex data and MVP matrix of the first model are obtained by intercepting the specific instruction, and the position of the center point of the first model in the first image is then determined from the vertex data and the MVP matrix.
In one possible implementation, determining the shading rate of the first model according to the center position of the first model and the target area of the first image includes: obtaining a first distance according to the center position of the first model and the target area of the first image, the first distance being the distance between the center position of the first model and the target area in view space or clip space; and determining the shading rate corresponding to the first model according to the first distance, where the smaller the first distance, the higher the shading rate of the first model.
The relationship between the first model and the target area is characterized by the first distance between the center position of the first model and the target area. When the first distance is small, that is, the first model is close to the target area, the first model is more likely to draw the user's attention, so a higher shading rate can be set for it to obtain a high-precision shading effect. When the first distance is large, that is, the first model is far from the target area, the first model is less likely to draw the user's attention, so a lower shading rate can be set for it to reduce power consumption.
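By way of illustration only (this code is not part of the claimed method), the following C++ sketch maps the first distance to a shading rate; the distance thresholds and the rate enumeration are hypothetical example values.

enum class ShadingRate { Rate1x1, Rate2x2, Rate4x4 };  // 1x1 = one color computation per pixel

// The smaller the first distance (model center to target area, in view or clip
// space), the higher (finer) the shading rate. Thresholds are assumed examples.
ShadingRate ShadingRateFromDistance(float firstDistance) {
    if (firstDistance < 0.2f) return ShadingRate::Rate1x1;  // close to the target area
    if (firstDistance < 0.6f) return ShadingRate::Rate2x2;  // mid-range
    return ShadingRate::Rate4x4;                            // far from the target area
}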
In one possible implementation, determining the shading rate of the first model according to the center position of the first model and the target area of the first image includes: obtaining the category of the first model; obtaining a first distance, the first distance being the distance between the center position of the first model and the target area in view space or clip space; and determining the shading rate of the first model according to the category of the first model and the first distance.
By taking both the category of the first model and the first distance into account, the shading rate of the first model can be set more reasonably, improving the user experience.
In one possible implementation, determining the shading rate corresponding to the first model according to the category of the first model and the first distance includes: obtaining the specific object category corresponding to the application program; when the category of the first model is the specific object category, using the shading rate corresponding to the specific object category as the shading rate of the first model; and when the category of the first model is not the specific object category, determining the shading rate of the first model according to the first distance.
The shading rate corresponding to a model category can thus be set per application scenario or per game application, improving the user experience.
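Continuing the hypothetical sketch above, the category-aware selection could look as follows; the category names and the per-application table are assumptions made for illustration, and ShadingRateFromDistance is the distance mapping sketched earlier.

#include <string>
#include <unordered_map>

// Hypothetical per-application table: categories that always receive a fixed rate,
// for example the player character of a game.
static const std::unordered_map<std::string, ShadingRate> kSpecificCategoryRates = {
    {"player_character", ShadingRate::Rate1x1},
    {"ui_overlay",       ShadingRate::Rate1x1},
};

ShadingRate ShadingRateForModel(const std::string& category, float firstDistance) {
    auto it = kSpecificCategoryRates.find(category);
    if (it != kSpecificCategoryRates.end()) {
        return it->second;                          // specific object category: use its fixed rate
    }
    return ShadingRateFromDistance(firstDistance);  // otherwise decide by distance
}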
In one possible implementation, the specific instruction includes a second specific instruction, and obtaining the category of the first model includes: obtaining model information of the first model according to the second specific instruction; and determining the category of the first model from the model information of the first model.
The second specific instruction is intercepted, model information of the first model is obtained from it, and the category of the first model is then determined.
In one possible implementation, the method further includes: determining the extent of the target area according to the type of the application program; and determining the target area in the first image according to that extent and the position of a target object in the first image, the target object being related to the user's point of attention.
By setting the target area according to the user's point of attention, the shading rate of each model can be determined based on the target area, so that power consumption and heat generation during rendering are reduced while the rendered image has no obvious impact on how it looks to the user, improving the user experience.
In one possible implementation, when the shading rate of the first model is lower than the shading rate of the target area, rendering the first model according to the shading rate of the first model includes: rendering the target area at the highest shading rate together with an image quality enhancement algorithm, and rendering the first model according to the shading rate of the first model.
Rendering the target area at the highest shading rate combined with an image quality enhancement algorithm provides the user with high-precision shading and image quality, while rendering the first model at a lower shading rate reduces power consumption.
In one possible implementation, the electronic device includes a central processing unit and a graphics processor, and rendering the first model according to the shading rate of the first model includes: the central processing unit calls a first variable rate shading API corresponding to the shading rate of the first model and issues a first model rendering instruction to the graphics processor according to the first variable rate shading API; and the graphics processor, in response to the first model rendering instruction, renders the first model according to the shading rate of the first model.
Calling the first variable rate shading API corresponding to the shading rate of the first model ensures that the first model is rendered at that shading rate.
In one possible implementation, the first image further includes a second model, and the method further includes: obtaining a second distance according to the center position of the second model and the target area of the first image, the second distance being the distance between the center position of the second model and the target area in view space or clip space; determining the shading rate corresponding to the second model according to the second distance, where the second distance is larger than the first distance and the shading rate of the second model is lower than that of the first model; and rendering the second model according to the shading rate of the second model.
Because the second distance is larger than the first distance, the shading rate of the second model is set lower than that of the first model. That is, a model closer to the target area has a higher shading rate, and a model farther from the target area has a lower shading rate.
In one possible implementation, rendering the second model according to the shading rate of the second model includes: the central processing unit calls a second variable rate shading API corresponding to the shading rate of the second model and issues a second model rendering instruction to the graphics processor according to the second variable rate shading API; and the graphics processor, in response to the second model rendering instruction, renders the second model according to the shading rate of the second model.
Different shading rates correspond to different variable rate shading APIs. For example, when the first distance and the second distance fall into different distance ranges, the corresponding shading rates differ, and different variable rate shading APIs are therefore called.
In a second aspect, embodiments of the present application provide an electronic device comprising one or more processors and one or more memories; one or more memories coupled to the one or more processors, the one or more memories storing computer instructions; the computer instructions, when executed by one or more processors, cause an electronic device to perform the image rendering method of any of the above.
In a third aspect, embodiments of the present application provide a computer-readable storage medium comprising computer instructions that, when executed, perform the image rendering method of any one of the above.
In a fourth aspect, embodiments of the present application provide a chip system, the chip system including a processor and a communication interface; the processor is configured to call and run a computer program stored in a storage medium from the storage medium, to perform the image rendering method as any one of the above.
In addition, for the technical effects of the second to fourth aspects, reference may be made to the description of the method in the first aspect; details are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be regarded as limiting the scope; a person skilled in the art may obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic diagram of a process of image rendering by an electronic device according to a rendering instruction stream of an application program according to an embodiment of the present application;
FIG. 2A is a schematic diagram of a game master scenario provided in an embodiment of the present application;
FIG. 2B is a schematic illustration of the game character of the game master scene of FIG. 2A after movement;
fig. 3 is a schematic software structure of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic flowchart of an image rendering method according to an embodiment of the present application;
fig. 5 is a schematic diagram of intercepting a rendering instruction flow provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of calculating a center position of a first model according to an embodiment of the present application;
fig. 7 is a flowchart of another image rendering method according to an embodiment of the present application;
fig. 8 is a flowchart of another image rendering method according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
In this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the present application, "and/or" describes an association relationship of an association object, three relationships may exist in the representation, for example, a and/or B may represent: a alone, a and B together, and B alone, wherein a, B may be singular or plural. The terms "first," "second," "third," "fourth" and the like in the description and in the claims and drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The image rendering method provided by the embodiments of the present application can be executed by an electronic device. The electronic device may be a terminal device, which may also be referred to as a terminal, user equipment (UE), mobile station (MS), mobile terminal (MT), or the like. The terminal device may be a smart phone, a computer, a smart television, a personal digital assistant (PDA), a wearable device, an augmented reality (AR)/virtual reality (VR) device, a media player, or a portable mobile device. The electronic device may also be a vehicle-mounted device, an internet of things device, or another device capable of performing image rendering processing. The electronic device may run an Android system, an IOS system, a Windows system, or another operating system. The type of the electronic device and the operating system it runs are not specifically limited in the embodiments of the present application.
Referring to fig. 1, a process of image rendering by an electronic device according to a rendering instruction stream of an application program is exemplarily described. The electronic device 100 includes a central processing unit (Central Processing Unit, CPU) 101, a graphics processor 102, a memory 103, and a display 104. The central processor 101 is used to run application programs 111 and an operating system 112. The application 111 may be a game application, a video application (video player), or the like. The operating system 112 provides a graphics library and image composition module. The image synthesis module is used for synthesizing two-dimensional or three-dimensional images. The rendering pipeline (Rendering pipeline) in the graphics processor 102 is a series of operations that the graphics processor 102 sequentially performs in rendering graphics or images, including, but not limited to, the following operations: vertex processing (vertex processing), primitive processing (primitive processing), rasterization (rasterisation) and fragment processing (fragment processing). The Memory 103 includes one or more of internal Memory (i.e., memory), cache (cache), video Memory (Video Memory), and external Memory.
When the application 111 needs to render an image, a stream of rendering instructions is issued. The central processor 101 invokes an application program interface (application programming interface, API) in the graphics library according to the rendering instruction stream issued by the application program 111 in order to instruct the graphics processor 102 to perform the corresponding rendering operation. The graphics library generates a stream of instructions that the rendering pipeline can recognize from the invoked application program interface. Graphics processor 102 receives the instruction stream sent by the graphics library and performs rendering through the rendering pipeline. The rendering results after the graphics processor 102 performs rendering may be stored in a memory 103 (e.g., a video memory) in the electronic device 100. The image composition module of the operating system 112 obtains the rendering result from the memory 103, and synthesizes the rendering result and displays it on the display 104.
More and more applications, such as game applications and video applications, need to display scene-rich images on electronic devices. These images are typically rendered by the electronic device from models. Taking a game application as an example, the main scene displayed by the game application is shown in FIG. 2A and contains various models, such as a character model, a stone model, and a grass model. To provide finely detailed image quality, an image is usually rendered at a higher shading rate. At a higher shading rate, fewer pixels share a single shading computation than at a lower shading rate. For example, comparing a 1×1 shading rate with a 2×2 shading rate, 2×2 is the lower rate and 1×1 the higher rate: a 1×1 rate means the pixel color is computed once per pixel, while a 2×2 rate means one color computation is shared by 4 pixels. In other words, when an image is rendered at a higher shading rate, the pixel color is computed more times and more shading work is performed, which increases the compute and memory overhead of the rendering process, raises the power consumption of the electronic device, and can cause the device to heat up or drop frames, degrading the user experience.
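As a rough, hypothetical illustration of the overhead involved, consider a 2400 × 1080 frame: at a 1×1 shading rate, 2400 × 1080 = 2,592,000 pixel color computations are needed per frame; at a 2×2 shading rate, roughly 2,592,000 / 4 = 648,000 computations suffice, since four pixels share one computation.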
The inventors have found that in some scenarios, for example when a user plays a game application or watches a video, the area the user pays attention to is relatively concentrated. The character model shown in FIG. 2A is a game character controlled by the player, and the player usually focuses on the area where the game character is located. As shown in FIG. 2B, the image content changes as the game character moves, but the player's focus remains on the game character, that is, on the central area of the image; conversely, the player is much less sensitive to the edge areas of the image, such as the stone model and the grass model.
In view of this, embodiments of the present application provide an image rendering method and related device that can set different shading rates for different areas of the image to be displayed, so that the electronic device can reduce its rendering workload, memory usage, and bandwidth for some areas by lowering the shading rate of those areas. By setting the shading rates of the different areas appropriately, power consumption and heat generation during rendering are reduced without a noticeable effect on how the rendered image looks to the user; the power consumption and heating of the electronic device are therefore reduced and the user experience is improved. Specifically, in the target area of the image that the user pays attention to, changes in the shading precision of a model are easily perceived, so a higher shading rate can be used for the target area to obtain a high-precision shading effect. In areas the user perceives poorly (such as areas outside the target area), changes in shading precision are not easily noticed, so those areas can be shaded quickly at a lower shading rate to reduce power consumption. Furthermore, by extending the software framework of the electronic device, the electronic device obtains specific instructions from the rendering instruction stream, from which the center position of a model can be determined quickly and accurately; the relationship between the center position of the model and the target area of the image can then be determined quickly and accurately, and a suitable shading rate can be set for the model based on that relationship, which lays the groundwork for reducing the compute and power consumption overhead of the subsequent rendering.
The following describes the schemes provided in the embodiments of the present application in detail with reference to the accompanying drawings.
The software system of the electronic device may adopt a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture or a cloud architecture. In the following, the embodiment of the application takes an Android system with a layered architecture as an example, and illustrates a software structure of an electronic device. Of course, in other operating systems, the embodiments of the present application may also be implemented as long as the functions implemented by the respective functional modules are similar to those of the embodiments of the present application.
Referring to fig. 3, a software structure of an electronic device provided in an embodiment of the present application is exemplarily described.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided, from top to bottom, into an application layer, an application framework layer, a system library, and a hardware layer.
The application layer may include a series of application packages. The application packages may include applications such as gaming applications, video applications, and the like that require pictures or video to be presented to the user by rendering images.
The application framework layer may provide an application programming interface and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
By way of example, the application framework layer may include a window manager, a content provider, a view system, a resource manager, an activity manager, an input manager, and the like. The window manager provides window management services (Window Manager Service, WMS) that may be used for window management, window animation management, surface management, and as a relay station to the input system. The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, etc. The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture. The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like. The activity manager may provide activity management services (Activity Manager Service, AMS) for managing lifecycle of individual applications and navigation rollback functionality. AMS may be used for startup, handoff, scheduling of system components (e.g., activities, services, content providers, broadcast receivers), and management and scheduling of application processes. The input manager may provide input management services (Input Manager Service, IMS), which may be used to manage inputs to the system, such as touch screen inputs, key inputs, sensor inputs, and the like. The IMS retrieves events from the input device node and distributes the events to the appropriate windows through interactions with the WMS.
In the embodiment of the present application, one or more functional modules may be provided in the application framework layer to implement the image rendering method provided by the embodiments of the present application. For example, an interception module and a determination module may be provided in the application framework layer. The interception module is used to intercept specific instructions in the rendering instruction stream issued by the application program, the rendering instruction stream being used to instruct rendering of models in an image. The determination module is used to determine the shading rate of a model. In other embodiments, the application framework layer may also include a calculation module, which is used to calculate the center position of a model. In other embodiments, the application framework layer may also include an identification module, which is used to identify the category of a model. The determination module may also be configured to determine the shading rate of a model based on one or both of the center position of the model and the category of the model. The functions of these modules are described in detail in the examples below.
The interception module, calculation module, determination module, and identification module may each be a piece of program code running on the electronic device that implements the corresponding function. In other embodiments, the identification module may identify the category of the model currently being rendered. For example, the model categories of a game are analyzed offline in advance and their characteristic features are recorded. The identification module then performs feature analysis on the data intercepted while the game runs and matches it against the features obtained offline to determine the category of the model.
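A minimal sketch of such offline-feature matching, under the assumption that a model can be fingerprinted by simple draw-call characteristics (for example the index count passed to glDrawElements and the size of its vertex buffer), is given below; the fingerprint choice and the table contents are illustrative assumptions, not part of the patent text.

#include <cstdint>
#include <map>
#include <string>
#include <tuple>

// Hypothetical fingerprint of a draw call, recorded during offline analysis.
struct ModelFeature {
    uint32_t indexCount;        // count argument of glDrawElements
    uint32_t vertexBufferSize;  // size of the bound vertex buffer, in bytes
    bool operator<(const ModelFeature& o) const {
        return std::tie(indexCount, vertexBufferSize) <
               std::tie(o.indexCount, o.vertexBufferSize);
    }
};

// Table produced by offline analysis: fingerprint -> model category (assumed entries).
static const std::map<ModelFeature, std::string> kOfflineFeatures = {
    {{1560, 74880}, "duck"},
    {{4329, 207792}, "character"},
};

// At run time, match the features of the intercepted draw call against the table.
std::string IdentifyCategory(const ModelFeature& f) {
    auto it = kOfflineFeatures.find(f);
    return it != kOfflineFeatures.end() ? it->second : "unknown";
}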
The system library may include a plurality of functional modules. Such as a graphics library, surface manager (surface manager), etc.
A graphics library, also called a drawing library, defines a cross-language, cross-platform application programming interface that contains many functions for processing graphics (images). Taking the Open Graphics Library (OpenGL) as an example, the APIs defined by OpenGL include interfaces for corresponding functions, for example an interface for drawing a two-dimensional or three-dimensional image, which includes drawing functions such as glDrawElements(). Another example is an interface for presenting an image drawn by the drawing functions on a display, which includes presentation functions such as eglSwapBuffers(). The embodiments of the application are not enumerated one by one here.
The functions in OpenGL may be called by instructions, for example, rendering instructions in an instruction stream may call a draw function to draw a two-dimensional image or a three-dimensional image. The instructions in the rendering instruction stream are instructions written by a developer according to functions in the graphics library during game application development, and are used for calling interfaces of the graphics library corresponding to the instructions.
The graphics library includes, but is not limited to: the embedded open graphics library (OpenGL for Embedded Systems, OpenGL ES), the Khronos platform graphics interface, Vulkan (a cross-platform drawing application program interface), and other graphics libraries.
The surface manager is used to manage the display subsystem and provides a fusion of the two-dimensional and three-dimensional layers for the plurality of applications.
In the embodiment of the present application, a graphics library in the system library (such as OpenGL ES) may provide a variable rate shading extension. When variable rate shading needs to be applied to a particular draw call, the electronic device may call the variable rate shading API in OpenGL ES to apply variable rate shading to the current draw call. For example, the electronic device may shade the current draw call at a lower rate (such as 2×1, 2×2, or 4×4), thereby reducing the shading overhead of that draw call.
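The following C++ sketch shows what shading a single draw call at a reduced rate could look like through such an extension; it assumes a VRS extension in the style of GL_QCOM_shading_rate or GL_EXT_fragment_shading_rate, and the entry-point name and rate enums must be taken from the extension actually reported by the driver.

#include <EGL/egl.h>
#include <GLES3/gl3.h>

// Assumed signature of the extension's rate-setting call.
typedef void (*PFNGLSHADINGRATE)(GLenum rate);

void DrawModelAtReducedRate(GLenum lowRateEnum, GLenum fullRateEnum,
                            GLsizei indexCount, const void* indices) {
    // Assumed entry-point name; query whatever name the driver's VRS extension exposes.
    static PFNGLSHADINGRATE glShadingRateFn =
        (PFNGLSHADINGRATE)eglGetProcAddress("glShadingRateQCOM");
    if (glShadingRateFn) {
        glShadingRateFn(lowRateEnum);        // e.g. a 2x2 rate for this draw call only
    }
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);
    if (glShadingRateFn) {
        glShadingRateFn(fullRateEnum);       // restore the full 1x1 rate afterwards
    }
}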
The hardware layer may include a processor (e.g., a central processing unit, a graphics processor, etc.), a component having a storage function (e.g., a memory), and a component having a display function (e.g., a display). The relevant contents of the central processing unit, the graphics processor and the memory can be referred to in fig. 1 above. The central processor and the graphics processor in the electronic device may be located on the same chip, or may be separate chips.
In some embodiments, the central processor may be configured to control each module in the application framework layer to implement its respective function, and the graphics processor may be configured to perform a corresponding rendering process according to an API in a graphics library (e.g., openGL ES) called by an instruction processed by each module in the application framework layer.
In order to more clearly describe the functions of each layer in the software architecture provided by the embodiments of the present application, the following uses image rendering performed by a game application as an example to illustrate how the components of the software structure shown in FIG. 3 work together.
Referring to fig. 4, an exemplary image rendering method provided in an embodiment of the present application is described.
In step S40, the game application issues a rendering instruction stream.
In the embodiment of the application, an application program installed in the electronic device can respond to the operation of a user to issue a rendering instruction stream. For example, the gaming application may initiate a game, load a game, or run a game in response to a user's operation, upon which the gaming application issues a stream of rendering instructions to a central processor of the electronic device to instruct the electronic device to render an image.
One rendering instruction stream corresponds to one frame of image and may instruct rendering of one or more models in that frame. Rendering of one model in the image may be issued by one Drawcall, so the rendering instruction stream may include one or more Drawcalls. For example, when the game application needs to render the main scene image, it issues to the central processor a rendering instruction stream for rendering the main scene image, which may include rendering of one or more of a character model, a house model, and a tree model; that is, the rendering instruction stream includes one or more of a Drawcall instructing rendering of the character model, a Drawcall instructing rendering of the house model, and a Drawcall instructing rendering of the tree model.
In this embodiment, the rendering instruction stream issued by the application program is taken as instructing rendering of a first model in a first image; that is, the rendering instruction stream includes a Drawcall that instructs rendering of the first model in the first image. In other embodiments, the rendering instruction stream may instruct rendering of two or more models in the first image (such as a first model and a second model); the rendering process for models other than the first model may refer to that of the first model and is not repeated here.
In an embodiment of the present application, the rendering instruction stream may include one or more instructions (that is, functions), and the Drawcall that instructs rendering of the first model may include one or more instructions. Taking OpenGL as an example of the graphics library in the electronic device, the rendering instruction stream may include one or more of the following instructions (functions): the glGenBuffers instruction, the glBindBuffer instruction, the glBufferData instruction, the glBufferSubData instruction, the glMapBuffer instruction, the glMapBufferRange instruction, the glUseProgram instruction, and the glDrawElements instruction.
The glGenBuffers instruction may be used to create buffers, that is, to allocate one or more storage spaces in a memory (for example, the internal memory) of the electronic device, each storage space having an identification (ID). The allocated storage spaces may be used to store various data needed during rendering. For example, some buffers may store the vertex coordinates of a model to be rendered (such as the first model), and some buffers may store the Model-View-Projection (MVP) matrix corresponding to the model to be rendered.
The MVP matrix consists of three matrices: the Model matrix (M matrix), the View matrix (V matrix), and the Projection matrix (P matrix). The M matrix converts local coordinates in local space into world coordinates in world space. The V matrix converts world coordinates in world space into view coordinates in view space. The P matrix converts view coordinates in view space into clip coordinates in clip space.
The glBindBuffer instruction may be used to bind a buffer. After a buffer is bound with this instruction, subsequent rendering operations use the corresponding buffer. For example, suppose buffers A and B have been created. After glBindBuffer(A), when the electronic device performs a subsequent rendering operation such as writing data (for example, vertex coordinates), it writes the vertex coordinates into buffer A for storage.
The glBufferData instruction can be used to pass data. For example, when the data carried by the glBufferData instruction is not NULL, the electronic device may store the data (or a pointer to the data) carried by the instruction into the buffer that has been bound. For example, when the glBufferData instruction carries vertex coordinates, the electronic device may store the vertex coordinates in the bound buffer; when it carries an index of vertex coordinates, the electronic device may store the vertex coordinate index in the bound buffer. Similar to glBufferData, the glMapBuffer and glMapBufferRange instructions can also be used to transfer data. For example, glMapBuffer may map the data of a buffer object into an address space of the electronic device, so that when the GPU needs the data it can read it directly from that address space. Unlike glMapBuffer, glMapBufferRange maps only a specified portion of the data into the address space of the electronic device for subsequent GPU use.
The glBufferSubData instruction may be used to update data. For example, the game application may update some or all of the vertex coordinates through the glBufferSubData instruction, thereby instructing the electronic device (such as its GPU) to render according to the new vertex coordinates.
The glUseProgram instruction installs the specified program object as part of the current rendering state.
The glDrawElements instruction may be used to draw a three-dimensional image, for example by drawing triangles from indexed vertices. The instruction takes four parameters: mode (specifies the primitive type to draw), count (specifies the number of vertex indices to draw), type (indicates the data type of the vertex indices), and indices (a pointer to the index array); that is, it defines a sequence of geometry using count elements whose index values are stored in the indices array.
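As a simple illustration (not taken from the patent text), the following C++ fragment draws a quad made of four vertices as two indexed triangles with glDrawElements; the vertex buffer is assumed to have been filled and bound already.

#include <GLES3/gl3.h>

// Index data: two triangles (0,1,2) and (2,3,0) sharing four vertices.
static const GLushort kQuadIndices[] = { 0, 1, 2, 2, 3, 0 };

void DrawQuad() {
    // mode = GL_TRIANGLES, count = 6 indices, type = GL_UNSIGNED_SHORT,
    // indices = pointer to the index array (a client-side array here).
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, kQuadIndices);
}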
In some embodiments, the rendering instruction stream may also include shader-related instructions, such as glLinkProgram (link program object), glShaderSource (attach shader source), glCompileShader (compile shader), and glUseProgram (use program). The embodiment of the present application does not specifically limit which instructions the rendering instruction stream includes.
In the embodiment of the present application, the rendering instruction stream may include different contents, which are not specifically limited. For example, when a video application in the application layer needs to render a certain image (for example the Nth frame image, that is, the first image), it may issue a rendering instruction stream as follows. The rendering instruction stream may carry the various data needed during rendering, such as the vertex data of the model to be rendered in the first image, the MVP matrix (that is, the M matrix, V matrix, and P matrix), and one or more drawing instructions (such as glDrawElements), where the vertex data indicates the vertex coordinates of the vertices of the model to be rendered, and the vertex coordinates may be coordinates in local space. In some implementations, the rendering instruction stream may also indicate the frame buffers that need to be used to render the Nth frame image, as well as the rendering operations to be performed in each frame buffer. For example, suppose the Nth frame image uses frame buffers FB0 and FB1. The rendering instruction stream may include a glBindFramebuffer(1) instruction to bind the current rendering operation to FB1; after FB1 is bound, rendering operations on FB1 may be performed through glDrawElements instructions.
In step S41, the interception module intercepts a first specific instruction in the rendering instruction stream to obtain vertex data of each vertex of the first model and an MVP matrix corresponding to the first model.
In an embodiment of the present application, the interception module may intercept specific instructions in the rendering instruction stream based on a hook function, so as to obtain the center position of the first model in the first image from those instructions. A hook function refers to any of various techniques for modifying or extending the behavior of an operating system, application, or other software component by intercepting function calls, messages, or events passed between software modules; in other words, a hook function is the code that handles the intercepted function calls, events, or messages.
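A minimal sketch of hooking one graphics-library call is given below. It assumes an interposition-style hook (for example via dlsym(RTLD_NEXT, ...)), which is only one of several possible hooking techniques, and the backup store is a placeholder.

#include <GLES3/gl3.h>
#include <dlfcn.h>
#include <vector>

// Placeholder backup store: last intercepted blob for vertex data and for uniform data.
static std::vector<unsigned char> g_lastVertexBlob, g_lastUniformBlob;

static void RecordInterceptedBuffer(GLenum target, GLsizeiptr size, const void* data) {
    auto& dst = (target == GL_UNIFORM_BUFFER) ? g_lastUniformBlob : g_lastVertexBlob;
    dst.assign((const unsigned char*)data, (const unsigned char*)data + size);
}

// Interposed glBufferSubData: back up the data, then forward to the real function.
extern "C" void glBufferSubData(GLenum target, GLintptr offset,
                                GLsizeiptr size, const void* data) {
    using Fn = void (*)(GLenum, GLintptr, GLsizeiptr, const void*);
    static Fn real = (Fn)dlsym(RTLD_NEXT, "glBufferSubData");
    if (data && (target == GL_ARRAY_BUFFER || target == GL_ELEMENT_ARRAY_BUFFER ||
                 target == GL_UNIFORM_BUFFER)) {
        RecordInterceptedBuffer(target, size, data);   // vertex data or MVP matrices
    }
    if (real) real(target, offset, size, data);        // let the original call proceed
}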
In an embodiment of the present application, the specific instruction includes a first specific instruction. The first specific instruction is used to obtain related data that can be used to calculate a central position of the first model in the first image, where the related data may include, for example, vertex data of each vertex of the first model and an MVP matrix corresponding to the first model.
In the embodiment of the present application, the vertex data includes at least vertex coordinates of each vertex and index data of the vertex coordinates, where the vertex coordinates may be local coordinates based on a local space.
In an embodiment of the present application, the MVP matrix corresponding to the first model includes the three matrices — the M matrix, the V matrix, and the P matrix — required to convert the local coordinates of the first model into clip coordinates (or screen coordinates).
Specifically, the first specific instruction includes a first instruction carrying a first parameter and/or a second instruction carrying a second parameter.
The first parameter may be a parameter used to transfer vertex-related data. The first parameter may include a target such as GL_ELEMENT_ARRAY_BUFFER or GL_ARRAY_BUFFER and the corresponding buffer object.
It can be understood that a buffer in OpenGL is a block of memory; binding the buffer to a specific target gives the buffer a specific meaning. For example, when a buffer object is bound to GL_ARRAY_BUFFER, the buffer object is used to store vertex data such as vertex coordinates, normals, colors, and texture coordinates. When a buffer object is bound to GL_ELEMENT_ARRAY_BUFFER, the corresponding buffer is used to store index data for the vertex data.
Taking the first instruction as an example, the first instruction includes, but is not limited to, the following instructions: the glGenBuffers instruction, which generates one or more buffer objects and returns their identifiers; the glBindBuffer instruction, which binds a buffer object to a specified target; the glBufferData instruction, which allocates data to the buffer bound to a specified target and sets its usage; the glBufferSubData instruction, which updates part of the data in the buffer bound to a specified target; the glVertexAttribPointer instruction, which defines vertex attributes such as vertex coordinates, normals, and colors; and the glEnableVertexAttribArray instruction, which enables a specified vertex attribute.
The second parameter may be a parameter used for transferring the MVP matrix. The second parameter may include a target such as GL_UNIFORM_BUFFER and the corresponding buffer object. The second instruction includes, but is not limited to, the following instructions: the glBufferSubData instruction, the glGenBuffers instruction, the glBindBuffer instruction, the glBufferData instruction, and the like.
In some embodiments, the first instruction and the second instruction may be the same instruction, i.e., the first specific instruction may be an instruction carrying both the first parameter and the second parameter, such as a glBufferSubData instruction.
In the embodiment of the present application, the interception module intercepts the target and the buffer object in the first specific instruction in order to intercept the vertex data and the MVP matrix. Specifically, when intercepting vertex data, the interception module hooks the first specific instructions whose target is GL_ELEMENT_ARRAY_BUFFER or GL_ARRAY_BUFFER together with the buffer object, then intercepts the corresponding data based on the target and the buffer object and caches it. When intercepting the MVP matrix, the interception module first analyzes the vertex shader (VS shader) corresponding to the first model to determine the uniform variable names of the MVP matrix corresponding to the first model, for example an M matrix named Primitive_LocalToWorldTranslated and a VP matrix named View_TranslatedWorldToClip; it then determines how the values of the MVP matrix are passed, and finally obtains the values of the corresponding MVP matrix.
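A sketch of locating the uniform buffer that holds such a matrix is shown below, assuming the uniform block name is already known from offline analysis of the vertex shader; the block name passed in is illustrative only.

#include <GLES3/gl3.h>

// Return the GL buffer object currently bound to the named uniform block of the
// given program, so its contents (e.g. the MVP matrices) can be read back later.
GLint FindUniformBlockBuffer(GLuint program, const char* blockName) {
    GLuint blockIndex = glGetUniformBlockIndex(program, blockName);
    if (blockIndex == GL_INVALID_INDEX) return 0;

    GLint binding = 0;   // binding point assigned to this uniform block
    glGetActiveUniformBlockiv(program, blockIndex, GL_UNIFORM_BLOCK_BINDING, &binding);

    GLint bufferId = 0;  // buffer object currently bound at that binding point
    glGetIntegeri_v(GL_UNIFORM_BUFFER_BINDING, (GLuint)binding, &bufferId);
    return bufferId;
}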
Referring to FIG. 5, the rendering instruction stream issued by the application program includes glGenBuffers, glBindBuffer, glBufferData, glBufferSubData, glUseProgram, and glDrawElements instructions. The first specific instruction includes a first instruction and a second instruction.
Taking the following first instruction as an example, the interception module intercepts:
the glGenbuffers (1, buffer 111) are intercepted, representing the creation of 1 buffer object buffer111 for vertex data, the buffer111 pointing to a block of buffer.
The interception of the glBindbuffer (gl_array_buffer, BUFFER 111) represents binding the BUFFER object BUFFER111 with the target gl_array_buffer.
Intercepting glBufferdata (GL_ARRAY_BUFFER, data 1) represents writing data1 into a bound BUFFER, namely after a BUFFER object BUFFER111 and a target GL_ARRAY_BUFFER are bound, accessing the BUFFER corresponding to the BUFFER111 through the target GL_ARRAY_BUFFER, and writing data1 into the BUFFER corresponding to the BUFFER 111;
the glBufferSubData (gl_array_buffer, data 2) is intercepted, representing updating the data in the already bound BUFFER to data2.
Wherein, data1 and data2 may include vertex data such as vertex coordinates, normal vectors of vertices, and the like.
Taking the following second instruction as an example, the interception module intercepts:
glGenBuffers(1, buffer111) is intercepted, representing the creation of one buffer object, buffer111, for uniform variables (such as the MVP matrix).
glBindBuffer(GL_UNIFORM_BUFFER, buffer111) is intercepted, representing binding of the buffer object buffer111 to the target GL_UNIFORM_BUFFER.
glBufferData(GL_UNIFORM_BUFFER, data3) is intercepted, representing writing data3 into the bound buffer; that is, after the buffer object buffer111 is bound to the target GL_UNIFORM_BUFFER, the buffer corresponding to buffer111 can be accessed through GL_UNIFORM_BUFFER, and data3 is written into that buffer.
glBufferSubData(GL_UNIFORM_BUFFER, data4) is intercepted, representing updating the data in the bound buffer to data4. Here, data3 and data4 may include the MVP matrix.
Illustratively, a frame of a game is captured using a GPU debugging tool (such as RenderDoc). Taking the little yellow duck model (that is, the first model) in that frame (that is, the first image) as an example, the intercepted vertex data and MVP matrix of the little yellow duck model are described in detail below.
Intercepting vertex data:
the drawing model corresponding to Draw #56 in table 1 is a little yellow duck model.
TABLE 1
Instruction ID (EID)   Draw#   Name
978                    55      glDrawElements(4329)
990                    56      glDrawElements(1560)
1002                   57      glDrawElements(1563)
Illustratively, the instruction sequences corresponding to the little yellow duck model are as shown in Table 2 with ID 979 to ID 990.
TABLE 2
The interception module obtains the buffer ID corresponding to the little yellow duck model by intercepting the glBindBuffer instruction, and intercepts the glBufferData or glBufferSubData instruction based on that buffer ID, thereby obtaining the vertex data of the little yellow duck model. The interception module also intercepts the glVertexAttribPointer instruction to obtain its parameters and parses the vertex data in the buffer based on those parameters. The parameters of the glVertexAttribPointer instruction describe how the vertex data is organized in the buffer, and the interception module parses the vertex data using the intercepted parameters such as the attribute index, stride, and data offset.
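A simplified sketch of that parsing step follows, assuming the position attribute consists of three floats and that the stride and offset come from an intercepted glVertexAttribPointer call; real buffers may interleave other attributes or use other component types.

#include <cstddef>
#include <cstring>
#include <vector>

struct Vec3 { float x, y, z; };

// Extract per-vertex positions from an intercepted raw vertex buffer. stride and
// offset correspond to the glVertexAttribPointer parameters of the position
// attribute; a stride of 0 means the positions are tightly packed.
std::vector<Vec3> ExtractPositions(const unsigned char* buffer, size_t bufferSize,
                                   size_t stride, size_t offset, size_t vertexCount) {
    if (stride == 0) stride = 3 * sizeof(float);
    std::vector<Vec3> positions;
    positions.reserve(vertexCount);
    for (size_t i = 0; i < vertexCount; ++i) {
        size_t at = offset + i * stride;
        if (at + sizeof(Vec3) > bufferSize) break;   // stay inside the intercepted blob
        Vec3 p;
        std::memcpy(&p, buffer + at, sizeof(Vec3));
        positions.push_back(p);
    }
    return positions;
}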
Intercepting an MVP matrix:
Illustratively, analysis shows that the vertex shader corresponding to the little yellow duck model is shader 2039. Analyzing shader 2039, it can be determined that the variable name of the M matrix is Primitive_LocalToWorldTranslated (vb1), with the corresponding variable stored in Buffer 10626, and that the variable name of the VP matrix is View_TranslatedWorldToClip (vb0), with the corresponding variable stored in Buffer 2832. The interception module intercepts Buffer 10626 and Buffer 2832 and parses the data to obtain the MVP data.
In step S42, the interception module stores the obtained vertex data and MVP matrix in the memory.
In this embodiment of the present application, the interception module is further configured to back up the intercepted data. The interception module stores the obtained vertex coordinates and MVP matrix in a memory, for example the internal memory. For example, the electronic device may store these data in an area of memory that the CPU can access, so that the memory holds the vertex data (such as vertex coordinates) and the MVP matrix that may be used later in rendering. It will be appreciated that the rendering instruction stream issued by the original application transfers data to a memory area that the GPU can access; through the backup storage in this example, the CPU also gains access to the data intercepted by the interception module (such as the vertex data and MVP matrix), which ensures that subsequent steps, such as calculating the center position of the first model, can be carried out.
In step S43, the calculation module obtains vertex data and MVP matrix, and calculates the center position of the first model according to the vertex data and MVP matrix.
After the interception module performs backup storage on the intercepted data in step S42, the calculation module may obtain the data intercepted by the interception module, such as vertex data and MVP matrix, from a storage (such as a memory).
In the embodiment of the present application, the center position of the first model may be the location of the center point of the first model in the first image, expressed in view space or clip space, that is, as a view coordinate, a clip coordinate, or a screen coordinate.
In the embodiment of the present application, the calculation module obtains the vertex data and MVP matrix intercepted by the interception module and then calculates the center coordinate V1 of the first model in local space following the idea of an AABB (axis-aligned bounding box): V1 = {X, Y, Z}, where X = (x_min + x_max)/2, Y = (y_min + y_max)/2, Z = (z_min + z_max)/2, and x_min, x_max, y_min, y_max, z_min, z_max are taken from the local coordinates of the first model obtained from the vertex data. x_min and x_max are the minimum and maximum of the local coordinates of the first model in the x direction, y_min and y_max the minimum and maximum in the y direction, and z_min and z_max the minimum and maximum in the z direction. Finally, the calculation module transforms the center coordinate V1 of the first model in local space by the MVP matrix to obtain the position of the center point of the first model in the first image; the calculation is P × V × M × V1.
The following describes in detail the process of calculating the center position of the first model by the calculation module according to the vertex coordinates and the MVP matrix, taking the center position as the screen coordinates.
Referring to fig. 6, a first image 60 presented on a display of an electronic device includes an object 61, an object 62, an object 63, and an object 64, and the object 61, the object 62, the object 63, and the object 64 correspond to models indicated to be rendered in a rendering instruction stream issued by a game application, which are exemplified by a first model 611, a second model 621, a third model 631, and a fourth model 641, respectively. The following description is given by way of example to calculate the center position of the first model 611, and the center position calculation of other models may refer to the first model 611, which is not described herein.
The game application may carry, in the rendering instruction stream issued to the first model 611 of the first image 60, vertex coordinates of respective vertices of the first model 611 in the coordinate system of the local space, that is, local coordinates. As described above, the calculation module calculates the center coordinates V1 of the first model 611 based on the vertex coordinates of the respective vertices of the first model 611 and the concept of the bounding box in the coordinate system of the local space, and the center coordinates V1 are the local coordinates of the local space.
The calculation module may convert the central coordinate V1 of the first model 611 in the local space into the coordinate in the world space through the M matrix corresponding to the first model 611 issued by the game application, to obtain the central coordinate V2. The center coordinate V2 is the world coordinate of the center point of the first model in world space. As shown in fig. 6, after the first model 611 is converted to coordinates of world space, the positions of the second model 621, the third model 631, and the fourth model 641 in world space can be seen.
After the calculation module obtains the central coordinate V2 of the central point of the first model 611 in the world space, the calculation module may convert the central coordinate V2 of the central point of the first model 611 in the world space into a coordinate in the observation space according to the V matrix corresponding to the first model 611 issued by the game application, to obtain the central coordinate V3. The center coordinate V3 is an observation coordinate of the center point of the first model 611 in the observation space. As shown in fig. 6, after the first model 611 is converted to the coordinates of the viewing space, the positions of the second model 621, the third model 631, and the fourth model 641 in the viewing space can be seen.
In this example, the coordinate space corresponding to the camera position may be referred to as the viewing space. Illustratively, take the case where the camera is disposed in the positive y-axis direction of the world space as an example. As shown in fig. 6, since the camera is located in the positive y-axis direction and shoots downward, the first model 611 to the fourth model 641 appear in the viewing space as a top view.
After the calculation module obtains the center coordinate V3 of the center point of the first model 611 in the observation space, the calculation module may convert the center coordinate V3 of the center point of the first model 611 in the observation space into a coordinate in the clipping space according to the P matrix corresponding to the first model 611 issued by the game application, to obtain the center coordinate V4. The center coordinate V4 is a clipping coordinate of the center point of the first model in the clipping space. As shown in fig. 6, after the first model 611 is converted to coordinates of the clipping space, the positions of the second model 621, the third model 631, and the fourth model 641 in the clipping space can be seen.
Through the above-described transformation of the MVP matrix (i.e., the M-matrix transformation, the V-matrix transformation, and the P-matrix transformation), the electronic device can acquire the coordinates (i.e., the clipping coordinates) of the center point of each model displayed on the display. The electronic device may then transform the clipping coordinates into screen coordinates, for example, using a viewport transform (Viewport Transform) to map the center coordinate V4, whose normalized components lie in the range -1.0 to 1.0, to coordinates within the coordinate range defined by the glViewport instruction, thereby obtaining the center coordinate V5. The center coordinate V5 is the screen coordinate of the center point of the first model projected onto the screen of the electronic device, and the transformed center coordinate V5 is the center position of the first model. The center coordinate V5 is sent to the rasterizer, which in turn obtains the display data corresponding to each pixel; based on the display data, the electronic device can control the display to display accordingly. As shown in fig. 6, the first model 611 is displayed on the display as the object 61, and the center coordinate V5 of the first model 611 corresponds to the coordinate P1 of the center of the object 61.
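A sketch of this last step is given below (reusing Vec4 from the sketch above; the perspective divide before the viewport mapping is assumed, consistent with standard graphics pipelines, and the window-coordinate origin follows the glViewport convention):

    struct Vec2 { float x, y; };

    // Maps a clip-space center (V4) to window/screen coordinates (V5).
    // viewportX/Y/W/H are the values previously passed to glViewport.
    Vec2 clipToScreen(const Vec4& clip, float viewportX, float viewportY,
                      float viewportW, float viewportH) {
        // Perspective divide: clip space -> normalized device coordinates in [-1, 1].
        float ndcX = clip.x / clip.w;
        float ndcY = clip.y / clip.w;
        // Viewport transform: NDC -> pixel coordinates inside the glViewport rectangle.
        return { viewportX + (ndcX + 1.0f) * 0.5f * viewportW,
                 viewportY + (ndcY + 1.0f) * 0.5f * viewportH };
    }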
Similarly, in the case where the game application issues the rendering instruction stream for the second model 621 to the fourth model 641 in the first image 60, the electronic device may likewise acquire the screen coordinates of the center points of the second model to the fourth model through the above MVP transform and viewport transform. The second model 621 to the fourth model 641 are displayed as the objects 62 to 64 on the display, and the center positions of the second model to the fourth model correspond to the coordinate P2 of the center of the object 62, the coordinate P3 of the center of the object 63, and the coordinate P4 of the center of the object 64, respectively.
In some embodiments, the calculation module hooks the drawing-related graphics instructions, so that the center position of the model is calculated before these instructions invoke the GPU driver. As shown in fig. 5, the interception module also intercepts the DrawElements instruction, so that the calculation module calculates the center position of the first model before the DrawElements instruction invokes the GPU driver.
In other embodiments, the interception module intercepts the data associated with a specific instruction (e.g., the first specific instruction), stores the intercepted data in the memory, and also stores status data. The status data is used to indicate the status of the data intercepted by the interception module. Taking the first specific instruction as an example, when the status data is "ok", it indicates that the interception module has intercepted the related data in the first specific instruction, such as the vertex data and the MVP matrix, and that the intercepted data has been stored in the memory. When the status data read from the memory by the calculation module is "ok", the calculation module may continue to read, from the memory, the vertex data and the MVP matrix intercepted by the interception module. When the status data read from the memory by the calculation module is not "ok", the calculation module does not read the data intercepted by the interception module from the memory.
In other embodiments, the interception module may trigger the calculation module to read the status data from the memory before the DrawElements instruction invokes the GPU driver. When the status data is "ok", the calculation module acquires the data intercepted by the interception module from the memory.
It will be appreciated that the trigger mechanism by which the calculation module calculates the center position of each model may differ between implementations of the present application, as long as the determination module is able to know the center position corresponding to a draw call before the corresponding draw call is issued to the GPU. For example, in this example, the calculation of the center position of each model by the calculation module may be triggered by the interception module intercepting the draw call.
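One possible form of such a trigger is sketched below: a hooked glDrawElements entry point that consumes the "ok" state and performs the center-position computation before forwarding the draw to the real driver. The hook installation and the calculation-module entry point (computeCenterForCurrentDraw) are hypothetical:

    using PFN_glDrawElements = void (*)(GLenum, GLsizei, GLenum, const void*);
    static PFN_glDrawElements real_glDrawElements = nullptr;  // resolved when the hook is installed

    // Status flag written by the interception module once the vertex data and the MVP
    // matrix of the current draw have been copied to CPU-accessible memory (the "ok" state).
    static bool g_interceptedDataReady = false;

    // Hypothetical calculation-module entry point (e.g. built on clipCenter()/clipToScreen()).
    void computeCenterForCurrentDraw();

    void hooked_glDrawElements(GLenum mode, GLsizei count, GLenum type, const void* indices) {
        if (g_interceptedDataReady) {
            computeCenterForCurrentDraw();    // center position is known before the draw reaches the driver
            g_interceptedDataReady = false;   // consume the state for this draw call
        }
        real_glDrawElements(mode, count, type, indices);  // hand the draw to the GPU driver
    }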
In step S44, the calculation module sends the center position of the first model to the determination module.
In the example illustrated in fig. 6, where the electronic device uses the screen coordinates as the center position, the calculation module takes the calculated center coordinate V5 as the center position of the first model and transmits the center coordinate V5 to the determination module. It may be understood that, when the electronic device uses the clipping coordinates as the center position, the calculation module takes the calculated center coordinate V4 as the center position of the first model and transmits the center coordinate V4 to the determination module.
In the embodiment of the application, in the process of rendering an image, the electronic device needs to determine the vertex coordinates of each model and further needs to color each pixel in the current first image, that is, determine the color data of each pixel, and then control the display to display the corresponding color at the corresponding pixel position according to the color data of each pixel.
In the embodiment of the present application, when determining the color data of each pixel in the first image, the model in the first image is taken as a unit, and the coloring rate of the area where the model is located is determined. The calculating module sends the calculated center position of the first model to the determining module, so that the determining module decides the coloring rate of the first model according to the center position of the first model. Accordingly, the computing module may also send the center position of the second model in the first image to the determining module, such that the determining module decides the coloring rate of the second model based on the center position of the second model.
In step S45, the determining module determines a coloring rate of the first model according to the center position of the first model.
In the embodiment of the application, the determining module obtains a first distance according to the center position of the first model and the target area of the first image, and determines the coloring rate corresponding to the first model according to the first distance. Wherein the smaller the first distance, i.e. the closer the center position of the first model is to the target area, the higher the coloring rate of the first model. Accordingly, the greater the first distance, i.e., the farther the center position of the first model is from the target area, the lower the coloring rate of the first model. The coloring rate of the first model is lower than or equal to the coloring rate of the target area.
In this embodiment of the present application, the target area of the first image may be a central area of the first image, and may also be an area where the target model is located in the first image, for example, an area where a game character controlled by a player is located. In other embodiments, the focus of the user on the first image may be determined by mining a click, touch or hover selection behavior of the user, so that a region where the focus of the user on the first image is located is determined as a target region.
In this embodiment of the present application, the coordinates of the target area of the first image may likewise be expressed as viewing coordinates (in the viewing space), clipping coordinates (in the clipping space), or screen coordinates.
In this embodiment of the present application, the first distance is a distance between a center position of the first model and the target area in the observation space or the clipping space, and the distance between the center position of the first model and the target area may be a distance between a center point of the first model and a center point of the target area, or may be a distance between the center position of the first model and an edge of the target area, where the edge of the target area may be a closest edge to the first model. The definition of the distance between the center position of the first model and the target area may be set according to practical situations, which is not specifically limited in the embodiment of the present application.
When the first distance is acquired, the center position and the target area are compared in the same space, or using the same type of coordinates. Taking the first distance as the distance between the center point of the first model and the center point of the target area as an example, the clipping coordinate A of the center point of the target area in the clipping space and the clipping coordinate B of the center point of the first model in the clipping space are acquired, and the distance between the clipping coordinate A and the clipping coordinate B is calculated to obtain the first distance.
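As a brief sketch (reusing the Vec4 type from the earlier sketches; taking the 2D distance after the perspective divide is one reasonable reading of "the distance between clipping coordinate A and clipping coordinate B", not the only one):

    #include <cmath>

    // First distance: Euclidean distance between the model center (clipping coordinate B)
    // and the target-area center (clipping coordinate A), measured in the x-y plane after
    // the perspective divide.
    float firstDistance(const Vec4& modelCenterClip, const Vec4& targetCenterClip) {
        float mx = modelCenterClip.x / modelCenterClip.w;
        float my = modelCenterClip.y / modelCenterClip.w;
        float tx = targetCenterClip.x / targetCenterClip.w;
        float ty = targetCenterClip.y / targetCenterClip.w;
        return std::sqrt((mx - tx) * (mx - tx) + (my - ty) * (my - ty));
    }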
In this embodiment of the present application, the determining module may preset a correspondence between the distance and the coloring rate, for example, set different coloring rates according to different distance ranges. Illustratively, the correspondence is shown in table 3 below:
Table 3
Distance                Coloring rate
First distance range    1X1
Second distance range   1X2
Third distance range    2X2
Wherein when the first distance is within the first distance range, indicating that the center position of the first model is within the target area, the determination module may determine that the current first model uses a 1X1 shading rate according to table 3 without using the variable rate shading mechanism. That is, for a model with a center position at the target area, the corresponding coloring rate is 1X1. When the first distance is within the second distance range, indicating that the first model is not within the target region, and a variable rate coloring mechanism is needed, the determination module may determine that the current first model uses a 1X2 coloring rate according to table 3. When the first distance is within the third distance range, indicating that the first model is located in the edge region of the first image, a variable rate coloring mechanism is needed at this time, the determination module may determine that the current first model uses a 2X2 coloring rate according to table 3.
Illustratively, the first distance range may be greater than or equal to 1 cm and less than or equal to 3 cm, the second distance range may be greater than 3 cm and less than or equal to 5 cm, and the third distance range may be greater than 5 cm, extending to the edge of the first image. The first distance range, the second distance range, and the third distance range may be set according to the size of the first image and the related parameters of the display, which are not specifically limited herein.
It can be seen that the greater the first distance, the lower its corresponding shading rate, which correspondingly reduces the power consumption overhead in the shading process. Accordingly, the smaller the first distance, the higher the coloring rate and the clearer the corresponding coloring effect.
The correspondence between the distance and the coloring rate shown in table 3 is only an example. In other embodiments of the present application, the correspondence between the distance and the coloring rate may be different from the correspondence shown in table 3, and the setting of the correspondence may be flexibly adjusted according to a specific application program and a scene applied in the implementation process.
Taking table 3 as an example, as shown in fig. 6, the center area of the first image 60, i.e. the area enclosed by the frame 601, is taken as the target area, and the target area corresponds to the first distance range. That is, when the center position of a model in the image is within the area enclosed by the frame 601, the first distance between the center position of the model and the target area is within the first distance range.
In the region surrounded by the frame 602, the region excluding the target region corresponds to the second distance range. That is, when the center position of the model in the image is within the annular region surrounded by the border 601 and the border 602, the first distance of the center position of the model from the target region is within the second distance range.
The frame 603 is an edge of the first image 60, and an area outside the area surrounded by the frame 602 corresponds to a third distance range within the area surrounded by the frame 603. I.e. when the center position of the model in the image is within the annular area enclosed by the border 602 and the border 603, the first distance of the center position of the model from the target area is within the third distance range.
After the conversion to screen coordinates, the center position P1 of the object 61 and the center position P2 of the object 62 are both located within the target area, so the coloring rates of the object 61 and the object 62 are the coloring rate 1X1 corresponding to the first distance range. The center position P3 of the object 63 is located outside the target area and within the frame 602, so the coloring rate of the object 63 is the coloring rate 1X2 corresponding to the second distance range. The center position P4 of the object 64 is located outside the area enclosed by the frame 602 and within the frame 603, so the coloring rate of the object 64 is the coloring rate 2X2 corresponding to the third distance range.
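For illustration, the correspondence of table 3 can be sketched as follows; the enum names and the threshold parameters are assumptions (the "coloring rate" of the description corresponds to the ShadingRate values here), and the concrete thresholds would in practice be derived from the display size, as noted above:

    enum class ShadingRate { R1x1, R1x2, R2x2 };

    // Table-3-style mapping from the first distance to a per-model coloring rate.
    // nearThreshold/farThreshold are illustrative (e.g. ~3 cm / ~5 cm in the example above).
    ShadingRate rateForDistance(float firstDistance, float nearThreshold, float farThreshold) {
        if (firstDistance <= nearThreshold) return ShadingRate::R1x1;  // inside the target area
        if (firstDistance <= farThreshold)  return ShadingRate::R1x2;  // middle region
        return ShadingRate::R2x2;                                      // edge region of the image
    }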
In step S46, the determination module outputs a corresponding call instruction to the graphics library to call the variable rate shading API corresponding to the shading rate of the first model.
In an embodiment of the present application, after the determining module determines the coloring rate of the first model according to the above scheme, the variable rate coloring API corresponding to the coloring rate of the first model in the graphics library may be called. The determination module outputs a corresponding call instruction to the graphics library to call a variable rate shading API corresponding to the shading rate of the first model.
Illustratively, taking the coloring rate 1X1 corresponding to the first variable rate coloring API and the coloring rate 1X2 corresponding to the second variable rate coloring API as an example, the determining module obtains a first distance between the first model and the target area, the first distance belonging to the first distance range, and the determining module determines the coloring rate of the first model to be 1X1 according to the first distance. The determination module may determine that the shading rate when the draw call included in the first model is executed is 1X1, the determination module outputting a call instruction to the graphics library, the call instruction instructing to call the first variable rate shading API corresponding to the shading rate 1X1. The determining module obtains a second distance between the second model and the target area, the second distance belongs to a second distance range, and the determining module determines that the coloring rate of the second model is 1X2 according to the second distance. The determination module may determine that the shading rate when the draw call included in the second model is executed is 1X2, the determination module outputting a call instruction to the graphics library, the call instruction instructing to call a second variable rate shading API corresponding to the shading rate 1X2.
As one example, the determination module may issue the call instruction and the drawing instruction to the graphics library, so that the graphics library can send the corresponding model rendering instruction to the GPU. The drawing instruction may be, for example, a glDrawElements instruction in OpenGL or a DrawIndexedPrimitive instruction in DirectX. Referring to fig. 5, the determining module may issue, to the graphics library, the call instruction for calling the variable rate coloring API corresponding to the coloring rate of the first model, together with the DrawElements instruction of the current draw call.
In step S47, the graphics library calls the corresponding variable rate shading API in response to the call instruction and sends a first model rendering instruction to the graphics processor.
Taking the case where the determining module determines that the coloring rate corresponding to the first model is 2X2 as an example, after passing through the graphics library, the first model rendering instruction transmitted to the GPU may include: glVRS(2X2) and glDrawElements. glVRS(2X2) is used to instruct that the current model be colored using a coloring rate of 2X2. That is, the first model rendering instruction is used to instruct that glVRS(2X2) be enabled before glDrawElements is invoked.
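By way of a hedged illustration, the instruction pair above could be issued as follows; glVRS() is the schematic name used in this description, and the mapping to a concrete extension (here GL_QCOM_shading_rate and glShadingRateQCOM, guarded by a compile-time check) is an assumption of this sketch rather than something prescribed by this application:

    // Reuses ShadingRate from the earlier sketch. setShadingRate() stands in for the
    // schematic glVRS() call; the QCOM fragment-shading-rate extension is only one
    // possible backend.
    void setShadingRate(ShadingRate rate) {
    #ifdef GL_QCOM_shading_rate
        switch (rate) {
            case ShadingRate::R1x1: glShadingRateQCOM(GL_SHADING_RATE_1X1_PIXELS_QCOM); break;
            case ShadingRate::R1x2: glShadingRateQCOM(GL_SHADING_RATE_1X2_PIXELS_QCOM); break;
            case ShadingRate::R2x2: glShadingRateQCOM(GL_SHADING_RATE_2X2_PIXELS_QCOM); break;
        }
    #else
        (void)rate;  // extension tokens not available in the headers used for this sketch
    #endif
    }

    // Corresponds to "enable glVRS(2X2), then call glDrawElements" for one draw call.
    void drawModelWithRate(ShadingRate rate, GLenum mode, GLsizei count,
                           GLenum type, const void* indices) {
        setShadingRate(rate);
        glDrawElements(mode, count, type, indices);
    }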
Therefore, with the draw call as the unit, the electronic device can color the first model at the coloring rate corresponding to its center position, thereby distributing the coloring rate reasonably, reducing the power consumption overhead of the coloring operation, and improving the coloring effect.
In step S48, the graphics processor performs a shading operation on the first model according to the shading rate corresponding to the first model in response to the first model rendering instruction.
The first model rendering instruction may carry the code corresponding to the variable rate coloring API corresponding to the first model. In response to the first model rendering instruction, the GPU executes the draw command using the coloring rate indicated by the corresponding parameter. As in the example above, the code corresponding to the variable rate coloring API of the first model includes glVRS(2X2).
For details of the rendering operation by the graphics processor in response to the first model rendering instruction, reference may be made to the related art, and details thereof will not be described herein.
In the embodiment of the application, the user attention point is taken as a core, the target area is determined based on the user attention point, the load reduction operation (namely, the coloring rate is reduced) is performed on the picture content outside the user attention point (target area), and the coloring effect of the content of the target area can be ensured, so that the image quality experience is ensured, and the performance of the electronic equipment is improved. Through improvement on an operating system of the electronic equipment, a specific instruction in a rendering instruction stream issued by an application program is intercepted by an interception module, the center position of a first model in a first image can be rapidly and accurately acquired based on the specific instruction, and the relationship between the first model and a target area can be rapidly and accurately described based on the center position. And then reasonably setting the corresponding coloring rate for the first model based on the relation between the first model and the target area.
Referring to fig. 7, fig. 7 is a flowchart of another image rendering method according to an embodiment of the present application. Fig. 7 differs from fig. 4 in that: the specific instruction intercepted by the interception module is further used for acquiring the category of the first model, and specifically, the interception module is further used for intercepting the second specific instruction so as to acquire model information which can be used for identifying the category to which the model belongs based on the second specific instruction. The application framework layer further includes an identification module that determines a rate of coloring of the first model based on the calculated center position of the first model and the identified category of the first model.
In step S700, the game application issues a rendering instruction stream.
Step S700 may refer to the relevant content in fig. 4.
In step S701, the interception module intercepts a first specific instruction and a second specific instruction in the rendering instruction stream to obtain vertex data of each vertex of the first model, an MVP matrix corresponding to the first model, and model information of the first model.
The first specific instruction, the vertex data, and the MVP matrix may be referred to above, and will not be described herein.
In this embodiment of the present application, the specific instruction includes a second specific instruction, and the second specific instruction may be an instruction carrying a category parameter. The category parameter may be any parameter used for identifying the category to which the model belongs, for example, a parameter indicating a characteristic feature of a specific object. Taking specific objects such as smoke, gunfire, and spray as an example, the corresponding rendering instruction carries a special-effect rendering parameter for adding semitransparent particles. In other embodiments, the category parameter may also include the size, map information, coordinate variables in the shader, data buffer characteristics, and the like. The map information may include floor model maps, building model maps, and the like.
In the embodiment of the present application, the classes to which the model belongs may be divided, including but not limited to: translucent particles, character models, building models, etc.
Illustratively, the first model is smoke, which corresponds to an instruction segment A that starts with a glEnable() instruction and ends with a glDisable() instruction. The glEnable instruction enables a color mixing (blending) state, i.e. it instructs the electronic device to start the rendering of semitransparent particles. The glDisable instruction instructs the electronic device to completely shut down the current color mixing operation for the semitransparent particles. When the interception module intercepts the glEnable instruction and the glDisable instruction, the second specific instruction may be obtained, where the second specific instruction includes the glEnable() instruction and the glDisable() instruction. The model information obtained by the interception module based on the glEnable() instruction and the glDisable() instruction indicates that the instruction segment is used for rendering the special effect of semitransparent particles.
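As a minimal sketch of how such an instruction segment could be recognized at interception time (assuming, consistently with the description above, that the color mixing switch is GL_BLEND; the hook names are illustrative):

    using PFN_glEnableProc = void (*)(GLenum);
    static PFN_glEnableProc real_glEnable  = nullptr;   // resolved when the hooks are installed
    static PFN_glEnableProc real_glDisable = nullptr;

    // Classification hint maintained by the interception module: draws recorded between
    // glEnable(GL_BLEND) and glDisable(GL_BLEND) are treated as semitransparent-particle
    // rendering (smoke, gunfire, spray, ...).
    static bool g_inTranslucentSegment = false;

    void hooked_glEnable(GLenum cap) {
        if (cap == GL_BLEND) g_inTranslucentSegment = true;   // instruction segment A begins
        real_glEnable(cap);
    }

    void hooked_glDisable(GLenum cap) {
        if (cap == GL_BLEND) g_inTranslucentSegment = false;  // instruction segment A ends
        real_glDisable(cap);
    }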
In other embodiments, taking the first model being a character model as an example, the interception module intercepts the second specific instruction to obtain model information including, but not limited to: coordinate information of human skeleton nodes, face skin map information, hair style map information, and the like.
In some embodiments, the first specific instruction and the second specific instruction may be the same instruction. For example, vertex data may be obtained through the first specific instruction, and thus the number of vertices of the model may be obtained. Model information, such as the size of the model, can then be determined based on the number of vertices, and the category of the model can be determined based on the size of the model; for example, a building model is generally larger than a grass model.
In step S702, the interception module stores the obtained vertex data, MVP matrix and model information in the memory.
In step S703, the calculation module obtains vertex data and MVP matrix, and calculates the center position of the first model according to the vertex data and MVP matrix.
Steps S702 to S703 may refer to the relevant contents in fig. 4.
In step S704, the identification module acquires the model information, and identifies the category of the first model according to the model information.
In the embodiment of the application, the identification module acquires the model information of the first model intercepted by the interception module from the memory, and the identification module can analyze the model information of the first model offline so as to determine the category of the first model.
In some embodiments, the electronic device may pre-store model information corresponding to each specific object category, such as model information A corresponding to a character model and model information B corresponding to a transparent particle model. The identification module matches the obtained model information of the first model with the model information A and the model information B to determine whether the category of the first model belongs to a specific object category, and if so, whether the category of the first model is the character model or the transparent particle model.
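A sketch of this matching step is given below; the fields of the pre-stored signatures (vertex-count range, blending usage) and their threshold values are illustrative placeholders and are not the actual content of model information A or B:

    #include <cstddef>
    #include <map>

    enum class ModelClass { Unknown, Character, TranslucentParticle, Building };

    // Pre-stored signatures, e.g. "model information A" for characters and
    // "model information B" for translucent particles (contents are illustrative).
    struct ModelSignature { std::size_t minVertices; std::size_t maxVertices; bool usesBlend; };
    static const std::map<ModelClass, ModelSignature> g_signatures = {
        { ModelClass::Character,           { 2000, 50000, false } },
        { ModelClass::TranslucentParticle, { 4,    2000,  true  } },
    };

    // Matches the intercepted model information against the stored signatures.
    ModelClass classify(std::size_t vertexCount, bool drawnInsideBlendSegment) {
        for (const auto& [cls, sig] : g_signatures) {
            if (vertexCount >= sig.minVertices && vertexCount <= sig.maxVertices &&
                drawnInsideBlendSegment == sig.usesBlend)
                return cls;
        }
        return ModelClass::Unknown;
    }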
The trigger mechanism for the recognition module to acquire the model information from the memory may refer to the related content in fig. 4, for example, refer to the trigger mechanism for the calculation module in fig. 4 to calculate the center position of the first model, which is not described herein.
In step S705, the calculation module sends the center position of the first model to the determination module.
Step S705 may refer to the relevant content in fig. 4.
In step S706, the identification module sends the category of the first model to the determination module.
It is to be understood that the execution order of the above-described steps S703 to S706 is not particularly limited. For example, the computing module and the identifying module may simultaneously acquire corresponding data from the memory, or the computing module and the identifying module may simultaneously send the processed data to the determining module.
In step S707, the determining module determines a coloring rate of the first model according to the center position and the category of the first model.
In the embodiment of the present application, the determining module obtains the specific object category corresponding to the application program. When the category of the first model is the specific object category, the coloring rate corresponding to the specific object category is taken as the coloring rate of the first model; when the category of the first model is not the specific object category, the coloring rate of the first model is determined according to the first distance.
The memory of the electronic device may pre-store the specific object categories corresponding to the respective application programs, and the coloring rates corresponding to the respective specific object categories. The specific object category corresponding to each application program and the coloring rate corresponding to that category may be set according to the actual situation, which is not specifically limited in the embodiment of the present application.
Illustratively, the specific object class corresponding to the first game application includes a character model and a translucent particle model, the character model corresponding to a coloring rate of 1X2, and the translucent particle model corresponding to a coloring rate of 2X2. The first gaming application instructs rendering of the first model, and when the first model is a human model, determines that the coloring rate of the first model is 1X2. When the first model is a translucent particle model, the coloring rate of the first model is determined to be 2X2. When the first model is a grass model, i.e. the class of the first model does not belong to the specific object class corresponding to the first game application, the determining module determines the coloring rate of the first model based on the center position of the first model.
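Combining this with the distance-based decision, the determining module's logic can be sketched as follows (reusing ShadingRate, rateForDistance and ModelClass from the earlier sketches; the concrete category-to-rate pairs mirror the example above and are not mandated by this application):

    // Per-application overrides: specific object categories get a fixed coloring rate
    // regardless of position; anything else falls back to the distance-based decision.
    ShadingRate rateForModel(ModelClass cls, float firstDistance,
                             float nearThreshold, float farThreshold) {
        switch (cls) {
            case ModelClass::Character:           return ShadingRate::R1x2;  // e.g. keep characters sharp
            case ModelClass::TranslucentParticle: return ShadingRate::R2x2;  // smoke etc. tolerate coarse shading
            default:
                return rateForDistance(firstDistance, nearThreshold, farThreshold);
        }
    }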
In the embodiment of the application, the corresponding coloring rate can be determined for the model based on the actual use of each application program, so that a personalized coloring rate can be provided for the model during use. For example, when the first game application is a shooting game, the coloring rate of the character model may be increased to enhance the user experience, even if the character model is at the edge of the image (i.e., outside the target area). Conversely, since a semitransparent particle model appears semitransparent anyway, its coloring rate can be reduced even if it is within the target area.
In step S708, the determination module outputs a corresponding call instruction to the graphics library to call the variable rate shading API corresponding to the shading rate of the first model.
In step S709, the graphics library calls the corresponding variable rate shading API in response to the call instruction and sends a first model rendering instruction to the graphics processor.
In step S710, the graphics processor performs a shading operation on the first model according to a shading rate corresponding to the first model in response to the first model rendering instruction.
Steps S708 to S710 may refer to the relevant contents in fig. 4.
In the embodiment of the application, the determining module may further consider a category of the first model when determining the coloring rate of the first model. The coloring rate corresponding to the model class can be set according to different application scenes or different game applications, so that the use experience of a user is improved.
In other embodiments, in fig. 4 and 7, the electronic device determines the range of the target area according to the application type of the application program. For example, suppose the first game application is a shooting game, while the picture of the second game application is darker overall with a smaller bright area; based on the information presented by the game images of the two games and their game types, the target area range of the first game application may be set larger than the target area range of the second game application.
In other embodiments, in fig. 4 and 7, the electronic device may determine the range of the target area according to the type of application, and then determine the target area in the first image according to the range of the target area and the location of the target object in the first image, where the target object is related to the point of interest of the user.
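A sketch of such target-area selection is given below; the scale factors and the rectangle representation are illustrative assumptions (Vec2 is reused from the earlier sketch):

    struct Rect { float x, y, w, h; };

    // The target area is centered on the user's point of interest (e.g. the position of the
    // player-controlled character on screen) and its extent depends on the application type.
    Rect targetArea(Vec2 targetObjectScreenPos, float screenW, float screenH,
                    bool isShootingGame) {
        float scale = isShootingGame ? 0.5f : 0.35f;   // larger focus region for shooting games
        float w = screenW * scale, h = screenH * scale;
        return { targetObjectScreenPos.x - w / 2, targetObjectScreenPos.y - h / 2, w, h };
    }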
In other embodiments, in fig. 4 and 7, the interception module may intercept a third specific instruction, the third specific instruction carrying a center location related parameter. When the third specific instruction is included in the rendering instruction stream, the third specific instruction may be directly intercepted to obtain the center position of the first model.
In other embodiments, in fig. 4 and fig. 7, when the coloring rate of the first model is lower than the coloring rate of the target area, the graphics processor renders the first model according to the coloring rate of the first model and renders the target area according to the highest coloring rate and an image quality enhancement algorithm. When the coloring rate of the target area is already the highest, the target area is rendered at the highest coloring rate and is additionally rendered using the image quality enhancement algorithm; when the coloring rate of the target area is not the highest, the target area is likewise rendered at the highest coloring rate together with the image quality enhancement algorithm. In this way, the image quality of the target area can be improved under the same power consumption. The image quality enhancement algorithm may be implemented based on a shader language.
The image rendering method provided by the embodiment of the application is described below with an electronic device as an execution subject.
Referring to fig. 8, fig. 8 is a flowchart of another image rendering method according to an embodiment of the present application.
In step S81, the application installed by the electronic device issues a rendering instruction stream, where the rendering instruction stream is used to instruct rendering of the first model in the first image.
In step S82, the electronic device intercepts a specific instruction in the rendering instruction stream.
In step S83, the electronic device obtains a center position of the first model according to the specific instruction, where the center position is a position of a center point of the first model in the first image.
In step S84, the electronic device determines the coloring rate of the first model according to the center position of the first model and the target area of the first image.
In step S85, the electronic device renders the first model according to the coloring rate of the first model.
The relevant content in fig. 8 may refer to the relevant content in fig. 4 and 7, and will not be described herein.
The embodiment of the application also provides electronic equipment, which comprises the hardware and the software shown in fig. 1. In other embodiments, an electronic device includes at least: one or more processors and one or more memories. The processor includes a central processor and a graphics processor. One or more memories coupled to the one or more processors, the one or more memories storing computer instructions; the one or more processors, when executing the computer instructions, cause the electronic device to perform the image rendering methods described above and shown in fig. 4 and 7.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Fig. 9 shows a schematic diagram of the composition of a chip system 1600. The chip system 1600 may include: a processor 1601 and a communication interface 1602 for supporting the relevant devices to implement the functions referred to in the above embodiments. In one possible design, the chip system further includes a memory to hold the necessary program instructions and data for the electronic device. The chip system can be composed of chips, and can also comprise chips and other discrete devices. It should be noted that, in some implementations of the present application, the communication interface 1602 may also be referred to as an interface circuit.
It should be noted that all relevant contents of the steps related to the above method embodiments may refer to the functional descriptions of the corresponding functional modules, and are not described herein again.
The embodiments of the present application also provide a computer program product, which when run on a computer, causes the computer to perform the above-mentioned related steps to implement the image rendering method in the above-mentioned method embodiments.
The present application also provides a computer storage medium including computer instructions that, when executed on an electronic device, cause the electronic device to perform the image rendering method of the above embodiments.
The electronic device, the computer storage medium, the computer program product, or the chip system provided in the embodiments of the present application are configured to perform the corresponding methods provided above, so that the beneficial effects thereof can be referred to the beneficial effects of the corresponding methods provided above, and are not described herein.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated unit may be stored in a readable storage medium if implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the protection scope of the present application.

Claims (14)

1. An image rendering method, applied to an electronic device, in which an application program is installed, the application program issuing a rendering instruction stream to render a first model in a first image, the method comprising:
intercepting a specific instruction in the rendering instruction stream;
acquiring a central position of the first model according to the specific instruction, wherein the central position is a position of a central point of the first model in the first image;
determining a coloring rate of the first model according to the central position of the first model and a target area of the first image, wherein the coloring rate of the first model is lower than or equal to the coloring rate of the target area;
rendering the first model according to a shading rate of the first model.
2. The image rendering method according to claim 1, wherein the specific instruction includes a first specific instruction, and the acquiring the center position of the first model according to the specific instruction includes:
obtaining vertex data of each vertex of the first model and an MVP matrix corresponding to the first model according to the first specific instruction;
and determining the position of the center point of the first model in the first image according to the vertex data and the MVP matrix.
3. The image rendering method according to claim 1 or 2, wherein the determining the coloring rate of the first model from the center position of the first model and the target area of the first image includes:
acquiring a first distance according to the central position of the first model and a target area of the first image, wherein the first distance is the distance between the central position of the first model and the target area in an observation space or a clipping space;
and determining the coloring rate corresponding to the first model according to the first distance.
4. The image rendering method according to claim 1 or 2, wherein the determining the coloring rate of the first model from the center position of the first model and the target area of the first image includes:
acquiring the category of the first model;
acquiring a first distance, wherein the first distance is a distance between the central position of the first model and the target area in an observation space or a clipping space;
determining a coloring rate of the first model according to the category of the first model and the first distance.
5. The image rendering method of claim 4, wherein the determining a shading rate corresponding to the first model according to the class of the first model and the first distance comprises:
acquiring a specific object category corresponding to the application program;
when the category of the first model is the specific object category, the coloring rate corresponding to the specific object category is used as the coloring rate corresponding to the first model;
and when the category of the first model is not the specific object category, determining the coloring rate corresponding to the first model according to the first distance.
6. The image rendering method of claim 5, wherein the specific instruction comprises a second specific instruction, and wherein the obtaining the class of the first model comprises:
obtaining model information of the first model according to the second specific instruction;
and determining the category of the first model according to the model information of the first model.
7. The image rendering method according to any one of claims 1 to 6, characterized in that the method further comprises:
determining the range of the target area according to the type of the application program;
and determining the target area in the first image according to the range of the target area and the position of a target object in the first image, wherein the target object is related to the attention point of the user.
8. The image rendering method according to any one of claims 1 to 7, wherein when a coloring rate of the first model is lower than a coloring rate of the target region, then the rendering the first model according to the coloring rate of the first model includes:
and rendering the target area according to the highest coloring rate and an image quality enhancement algorithm, and rendering the first model according to the coloring rate of the first model.
9. The image rendering method according to claim 3 or 4, wherein the electronic device includes a central processor and a graphics processor, and wherein the rendering the first model according to the coloring rate of the first model includes:
the central processing unit calls a first variable rate coloring API corresponding to the coloring rate of the first model, and issues a first model rendering instruction to the graphics processor according to the first variable rate coloring API;
the graphics processor renders the first model according to a shading rate of the first model in response to the first model rendering instructions.
10. The image rendering method of claim 9, wherein the first image further comprises a second model, the method further comprising:
acquiring a second distance according to the central position of the second model and a target area of the first image, wherein the second distance is the distance between the central position of the second model and the target area in an observation space or a clipping space;
determining a coloring rate corresponding to the second model according to the second distance, wherein the second distance is larger than the first distance, and the coloring rate of the second model is lower than that of the first model;
rendering the second model according to a coloring rate of the second model.
11. The image rendering method of claim 10, wherein the rendering the second model according to the shading rate of the second model comprises:
the central processing unit calls a second variable rate coloring API corresponding to the coloring rate of the second model, and issues a second model rendering instruction to the graphics processor according to the second variable rate coloring API;
the graphics processor renders the second model according to a shading rate of the second model in response to the second model rendering instructions.
12. An electronic device comprising one or more processors and one or more memories; the one or more memories coupled to the one or more processors, the one or more memories storing computer instructions; the computer instructions, when executed by the one or more processors, cause the electronic device to perform the image rendering method of any one of claims 1 to 11.
13. A computer readable storage medium, characterized in that the computer readable storage medium comprises computer instructions which, when run, perform the image rendering method of any one of claims 1 to 11.
14. A chip system, wherein the chip system comprises a processor and a communication interface; the processor is configured to call up and execute a computer program stored in a storage medium from the storage medium, to perform the image rendering method according to any one of claims 1 to 11.
CN202311005225.1A 2023-08-09 2023-08-09 Image rendering method and related equipment Pending CN117710180A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311005225.1A CN117710180A (en) 2023-08-09 2023-08-09 Image rendering method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311005225.1A CN117710180A (en) 2023-08-09 2023-08-09 Image rendering method and related equipment

Publications (1)

Publication Number Publication Date
CN117710180A true CN117710180A (en) 2024-03-15

Family

ID=90150326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311005225.1A Pending CN117710180A (en) 2023-08-09 2023-08-09 Image rendering method and related equipment

Country Status (1)

Country Link
CN (1) CN117710180A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination