CN118212339A - Texture processing method and device, computer readable storage medium and electronic equipment - Google Patents


Info

Publication number: CN118212339A
Authority: CN (China)
Prior art keywords: texture, rendering, application program interface, target
Legal status: Pending
Application number: CN202211626004.1A
Other languages: Chinese (zh)
Inventors: 吴炜荣, 李宁騛
Current assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202211626004.1A

Landscapes

  • Image Generation (AREA)

Abstract

The present disclosure provides a texture processing method, a texture processing apparatus, a computer-readable storage medium, and an electronic device, and relates to the field of computer technology. The texture processing method includes: detecting an application program interface called during rendering to determine the type of the application program interface; and adding, based on the type of the application program interface and the rendering target of the rendering, storage indication information for a texture, where the storage indication information indicates that the texture is to be stored in the last-level cache (LLC). The method and apparatus can reduce how often memory is accessed during rendering.

Description

Texture processing method and device, computer readable storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a texture processing method, a texture processing apparatus, a computer readable storage medium, and an electronic device.
Background
While a computer system executes rendering tasks, the processor accesses memory frequently, which demands high memory bandwidth and consumes considerable power. For terminal devices performing game rendering in particular, high latency and similar problems may degrade the gaming experience.
Disclosure of Invention
The present disclosure provides a texture processing method, a texture processing apparatus, a computer-readable storage medium, and an electronic device, so as to overcome, at least to some extent, the problem of the processor accessing memory frequently.
According to a first aspect of the present disclosure, there is provided a texture processing method, including: detecting an application program interface called during rendering to determine the type of the application program interface; and adding, based on the type of the application program interface and the rendering target of the rendering, storage indication information for a texture, where the storage indication information indicates that the texture is to be stored in the last-level cache (LLC).
According to a second aspect of the present disclosure, there is provided a texture processing apparatus, including: an interface detection module configured to detect an application program interface called during rendering, so as to determine the type of the application program interface; and a texture storage module configured to add, based on the type of the application program interface and the rendering target of the rendering, storage indication information for a texture, where the storage indication information indicates that the texture is to be stored in the last-level cache (LLC).
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the texture processing method described above.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising a processor; and a memory for storing one or more programs which, when executed by the processor, cause the processor to implement the texture processing method described above.
In the technical solutions provided in some embodiments of the present disclosure, by adding storage indication information to a texture, the texture used during rendering can be stored in the last-level cache (LLC), so that the processor can obtain the texture from the LLC without accessing memory. This reduces how often memory is accessed during rendering, helps lower latency and power consumption, and improves device performance and user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort. In the drawings:
FIG. 1 shows a schematic flow diagram of deferred rendering;
FIG. 2 shows a schematic diagram of the topology of a texture processing scheme of an embodiment of the present disclosure;
FIG. 3 illustrates a schematic diagram of different modes of a last level cache LLC of an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a software architecture of a game rendering of an embodiment of the present disclosure;
FIG. 5 shows a flow diagram of a game engine rendering algorithm;
FIG. 6 shows a flow diagram of another game engine rendering algorithm;
FIG. 7 schematically illustrates a flow chart of a texture processing method according to an exemplary embodiment of the present disclosure;
FIG. 8 schematically illustrates a block diagram of a texture processing apparatus according to an exemplary embodiment of the present disclosure;
FIG. 9 schematically illustrates a block diagram of a texture processing apparatus according to another exemplary embodiment of the present disclosure;
FIG. 10 schematically illustrates a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only and not necessarily all steps are included. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations. In addition, all of the following terms "first," "second," are used for distinguishing purposes only and should not be taken as a limitation of the present disclosure.
During rendering, and especially while a terminal device executes game rendering tasks, two kinds of scenes arise. In the first, a frame is rendered in stages, that is, multiple rendering stages are required before the frame content is produced. The second is post-processing, in which the GPU (Graphics Processing Unit) performs post-processing operations on a frame buffer in memory.
The pipeline of the game engine Unity supports deferred rendering. Compared with forward rendering, deferred rendering reduces the amount of GPU lighting computation and helps improve performance, but it increases the frequency of memory reads and writes.
Deferred rendering is an example in which rendering one frame requires multiple rendering stages, and existing mobile platforms have corresponding hardware designs and software features to optimize this flow, known for example as Framebuffer Fetch or Pixel Local Storage. With a reasonable design, the tile contents produced by one rendering stage can be read directly by the next stage, so the data never round-trips through memory, saving memory bandwidth and power.
FIG. 1 shows a schematic flow diagram of deferred rendering. Referring to FIG. 1, memory may hold a diffuse reflection map, a normal map, an illumination map, and so on; a frame image is obtained through the G-Buffer (Geometry Buffer), shading, and combining stages in the GPU, each of which involves a tile buffer.
Fig. 2 shows a schematic diagram of the topology of a texture processing scheme of an embodiment of the present disclosure. Referring to fig. 2, the topology may include an IP core, a bus, a last-level cache (LLC), and memory. Introducing a last-level cache into the terminal device's SoC (System on Chip) reduces the latency of the IP core's memory accesses, improving performance, and reduces how often the IP core accesses memory, lowering power consumption.
Typically, the size, partitioning, and mode of the last-level cache are configurable. Referring to FIG. 3, the LLC modes may include a cache mode, an SPM (Scratchpad Memory) mode, and a hybrid mode combining the two. SPM behaves like memory: it may have no backing store in main memory, and software can allocate SPM space the same way it allocates memory. In addition, which IP core the last-level cache is assigned to is also controllable.
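The configurable size, partitioning, and mode described above can be sketched as a small C model. This is a minimal sketch under stated assumptions: the mode names mirror the three modes of FIG. 3, but the partitioning policy, the struct layout, and the function name llc_configure are all illustrative inventions, not details from the patent.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative model of a configurable last-level cache (LLC). */
typedef enum { LLC_MODE_CACHE, LLC_MODE_SPM, LLC_MODE_HYBRID } llc_mode;

typedef struct {
    llc_mode mode;
    size_t total_bytes; /* total LLC capacity                      */
    size_t spm_bytes;   /* portion exposed as software-managed SPM */
} llc_config;

/* Partition the LLC: cache mode exposes no SPM, SPM mode exposes all of
 * it, and hybrid mode carves out the requested amount (clamped to the
 * total); the remainder keeps behaving as an ordinary cache. */
static llc_config llc_configure(llc_mode mode, size_t total, size_t spm_request)
{
    llc_config cfg;
    cfg.mode = mode;
    cfg.total_bytes = total;
    switch (mode) {
    case LLC_MODE_CACHE:
        cfg.spm_bytes = 0;
        break;
    case LLC_MODE_SPM:
        cfg.spm_bytes = total;
        break;
    default: /* LLC_MODE_HYBRID */
        cfg.spm_bytes = spm_request < total ? spm_request : total;
        break;
    }
    return cfg;
}
```

In hybrid mode, the cache portion is simply total_bytes minus spm_bytes; the actual split granularity on a real SoC would be platform-defined.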
Given the characteristics of the rendering pipeline, the embodiments of the present disclosure make full use of the characteristics of the last-level cache and provide a scheme for performing image post-processing in the LLC.
Fig. 4 shows a schematic diagram of a software architecture of game rendering of an embodiment of the present disclosure. To reduce the memory access frequency, the processing scheme of the embodiments of the present disclosure may include driver extensions in the GPU user-mode driver and texture locating in the Android framework layer. The driver extensions include, but are not limited to, extensions of GLES and Vulkan, both of which are graphics APIs (Application Programming Interfaces).
Referring to fig. 4, it should be understood that once the driver is extended, a game can use the extended content to implement the corresponding functions. In this scenario, where the game application uses the API extensions on its own, the present disclosure's scheme of storing data in the last-level cache requires no change to the Android framework layer. For scenarios where the game cannot be changed or there is no right to change it, that is, where the game does not use the API extensions, the Android framework layer can be modified after the driver extension so that the corresponding content is still stored in the LLC.
It should be appreciated that the driver extensions made in advance by the embodiments of the present disclosure conform to the standards and definitions of the API itself and are part of the GPU user-mode driver. The content of each driver extension relates to the last-level cache and includes the information added to cause data to be stored in the LLC. The GLES and Vulkan driver extensions are described separately below.
For the GLES driver extension, the following extension content may be included:
Extension content 1, query extension:
const char*glGetString(GL_EXTENSIONS)
A return string containing "GL_OPPO_LLC" indicates that this OPPO LLC extension is supported.
Extension content 2, enabling/disabling the OPPO LLC function:
glEnable(GL_OPPO_LLC),glDisable(GL_OPPO_LLC)
Extension content 3, obtaining the maximum LLC SPM size available to the application, with the value saved in data:
void glGetIntegerv(GL_MAX_OPPO_LLC_SPM,GLint*data)
Extension content 4, the application notifies the driver to store a texture in the LLC. A compatible bit GL_TEXTURE_OPPO_LLC is supported in the first parameter (GLenum target) of the standard glTexImage2D and glTexStorage2D functions, for example:
void glTexImage2D(GL_TEXTURE_2D|GL_TEXTURE_OPPO_LLC,…);
void glTexStorage2D(GL_TEXTURE_2D|GL_TEXTURE_OPPO_LLC,…)
The embodiments of the present disclosure are directed to the post-processing part of the GPU rendering flow and use textures as attachments of frame buffer objects; externally uploaded textures are excluded.
Extension content 5, notifying the driver that the texture attachments bound to the frame buffer are invalidated:
void glInvalidateFramebuffer(GL_TEXTURE_2D|GL_TEXTURE_OPPO_LLC,…)
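The compatible-bit mechanism of extension content 4 can be sketched in plain C. Note the assumptions: GL_TEXTURE_2D below uses the real GLES enum value, but GL_TEXTURE_OPPO_LLC is the patent's hypothetical vendor token given an arbitrary illustrative bit value, and target_wants_llc is a made-up helper showing how a driver might separate the hint from the standard target, not a real API.

```c
#include <assert.h>

#define GL_TEXTURE_2D       0x0DE1      /* standard GLES target enum    */
#define GL_TEXTURE_OPPO_LLC 0x80000000u /* hypothetical vendor hint bit */

/* Split an extended target argument such as
 * (GL_TEXTURE_2D | GL_TEXTURE_OPPO_LLC) into the standard target and a
 * flag telling the driver to place the texture in the last-level cache. */
static int target_wants_llc(unsigned int target, unsigned int *base_target)
{
    int wants = (target & GL_TEXTURE_OPPO_LLC) != 0;
    *base_target = target & ~GL_TEXTURE_OPPO_LLC; /* strip the hint bit */
    return wants;
}
```

A call like glTexImage2D(GL_TEXTURE_2D | GL_TEXTURE_OPPO_LLC, …) would then reach the driver, which strips the bit before normal texture setup and records the LLC storage hint.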
For the Vulkan driver extension, the following extension content may be included:
Extension content 1, query extension:
void vkEnumerateDeviceExtensionProperties(physicalDevice,…)
Output parameters of the function containing "VK_OPPO_LLC" indicate that this OPPO LLC extension is supported.
Extension content 2, device creation: the ppEnabledExtensionNames pointer of the VkDeviceCreateInfo structure (the second parameter of the vkCreateDevice function) adds the "VK_OPPO_LLC" extension string.
Extension content 3, obtaining the maximum LLC size available for application:
void vkGetPhysicalDeviceProperties(VkPhysicalDevice,VkPhysicalDeviceProperties*)
The return value may be stored in the VkPhysicalDeviceLimits field of VkPhysicalDeviceProperties, with a new uint32_t maxLLCSize field.
Extension content 4, when vkAllocateMemory is used for memory allocation, the memoryTypeIndex field of the VkMemoryAllocateInfo structure can carry the compatible type VK_MEMORY_PROPERTY_OPPO_LLC, used together with VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT to ensure that the allocated memory is in the LLC SPM.
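The intent of extension content 4 can be illustrated with the usual Vulkan memory-type selection loop, written here as self-contained C. Assumptions: VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT uses the real Vulkan value, VK_MEMORY_PROPERTY_OPPO_LLC is the patent's hypothetical vendor flag with an arbitrary illustrative value, and find_memory_type is an illustrative helper rather than a Vulkan API call.

```c
#include <assert.h>
#include <stdint.h>

#define VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT 0x00000001u /* real Vulkan flag   */
#define VK_MEMORY_PROPERTY_OPPO_LLC         0x00010000u /* hypothetical flag  */

/* Pick the first memory type whose property flags contain all required
 * flags; requesting DEVICE_LOCAL | OPPO_LLC would steer the allocation
 * toward LLC SPM, per the extension described above. */
static int find_memory_type(const uint32_t *property_flags, uint32_t count,
                            uint32_t required)
{
    for (uint32_t i = 0; i < count; ++i)
        if ((property_flags[i] & required) == required)
            return (int)i;
    return -1; /* no suitable memory type advertised */
}
```

The returned index would be placed in VkMemoryAllocateInfo::memoryTypeIndex before calling vkAllocateMemory, exactly as with standard Vulkan memory types.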
It should be understood that the above-described API driven extensions are merely exemplary representations, and that the present disclosure is not limited to the type of API and the extension content.
With the API driver extensions described above, storing data in the last-level cache can be achieved in scenarios where the game application uses the API extensions on its own. In addition, the Android framework layer can be modified to store data in the LLC.
Since the texture processing scheme of the embodiments of the present disclosure relates to the rendering flow of the game engine Unity, game engine rendering algorithms are first described, taking FIGS. 5 and 6 as examples.
The screen-space algorithms of the game engine Unity all involve screen-space rendering techniques: full-screen post-processing algorithms such as SSAO, Bloom, SSS, and SSR; real-time shadow techniques such as Screen Space Shadowmap; environment interaction techniques such as real-time terrain generation and real-time footprint generation; and screen-space antialiasing techniques such as TAA and SMAA. The processes associated with them in the embodiments of the present disclosure can be illustrated with reference to FIG. 5.
In fig. 5, drawing command 1 first corresponds to off-screen rendering target 1, with a texture attached as that rendering target's color attachment or depth attachment. Next, drawing command 2 samples the texture and renders to rendering target 0. Rendering target 0 may then be presented on the display.
The Grab Pass of the game engine Unity is a rendering process that grabs, in the current shader, all pixels already rendered to the current rendering target and, after related computation, draws them back onto the same rendering target. It is often used when part of the objects already drawn on the current screen needs further processing, such as refraction of underwater objects or distortion of certain objects.
For the procedure of the Unity Grab Pass, reference is made to FIG. 6.
In fig. 6, drawing command 1 first corresponds to off-screen rendering target 1. When the Grab Pass is triggered, the current drawing result is copied once on the GPU side; the copy is called the Grab texture. Next, drawing command 2 samples the Grab texture and renders to rendering target 0. Rendering target 0 may then be presented on the display.
Next, a texture processing method according to an embodiment of the present disclosure is described, taking GLES as an example. The method is implemented by a terminal device; that is, the terminal device may perform each step of the texture processing method. The terminal device may be, for example, a smartphone, a tablet computer, a personal computer, a smart wearable device, or a server; the present disclosure does not limit the type of terminal device.
Fig. 7 schematically shows a flow chart of a texture processing method of an exemplary embodiment of the present disclosure. Referring to fig. 7, the texture processing method may include the steps of:
S72, detecting an application program interface called during rendering to determine the type of the application program interface;
S74, adding, based on the type of the application program interface and the rendering target of the rendering, storage indication information for a texture, where the storage indication information indicates that the texture is to be stored in the last-level cache (LLC).
According to some embodiments of the present disclosure, for the post-processing scenario corresponding to FIG. 5, the detected application program interface is the program interface that binds a texture to a frame buffer object as a color attachment or depth attachment. Here, texture is a graphics term; its data may include image data, depth data of objects in the screen coordinate system, or other types of data. A color attachment may be used to store the color data (e.g., RGB values) of each pixel of the drawing result when the terminal device draws according to a rendering instruction. A depth attachment may be used to store the depth data of each pixel of the drawing result.
First, the terminal device may obtain a rendering instruction and determine whether the rendering target corresponding to the rendering instruction includes an off-screen rendering target. A rendering target in the embodiments of the present disclosure refers to a rendering buffer allocated by the GPU.
Specifically, the off-screen rendering target may be marked to obtain first identification information, for example by using the address of the rendering target's buffer as the first identification information. In this case, if the rendering instruction contains the first identification information, the terminal device determines that the rendering target corresponding to the rendering instruction includes an off-screen rendering target; if it does not, the terminal device determines that no off-screen rendering target is included.
When the rendering target corresponding to the rendering instruction includes an off-screen rendering target, the texture can be found from the context.
Specifically, the texture may be marked to obtain second identification information, and then found from the context using that second identification information.
Next, storage indication information may be added for the texture. As described for the driver extension above, the storage indication information is added to the texture through the application program interface's driver and the added extension content, so that the texture is stored in the last-level cache. Specifically, the storage indication information is GL_TEXTURE_OPPO_LLC.
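The off-screen detection and tagging just described can be sketched as follows. All names and struct layouts here are illustrative assumptions: the first identification information is reduced to an is_offscreen flag, the storage indication information to a flag bit on the texture record, and GL_TEXTURE_OPPO_LLC reuses the patent's hypothetical token with an arbitrary value.

```c
#include <assert.h>

#define GL_TEXTURE_OPPO_LLC 0x80000000u /* hypothetical storage-hint bit */

typedef struct {
    unsigned int target_id;
    int is_offscreen;        /* stands in for the first identification info  */
} render_target;

typedef struct {
    unsigned int texture_id; /* stands in for the second identification info */
    unsigned int flags;      /* where the storage indication is recorded     */
} texture_record;

/* If the rendering instruction's target is marked off-screen, tag the
 * bound texture so the driver stores it in the LLC instead of memory. */
static void process_rendering_instruction(const render_target *rt,
                                          texture_record *tex)
{
    if (rt->is_offscreen)
        tex->flags |= GL_TEXTURE_OPPO_LLC;
}
```

Textures bound to on-screen targets are left untagged and follow the ordinary memory path.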
According to further embodiments of the present disclosure, for the Grab Pass scenario corresponding to FIG. 6, the detected application program interface is a program interface related to texture copying. Copying transfers data from a copy source to a copy destination, each of which may be a buffer area. In this case, the texture is the destination texture corresponding to the copy destination.
First, the terminal device may determine the source texture of the copy source and the destination texture of the copy destination corresponding to the application program interface. Since the copy operation acts on the data itself, the copy source's data is identical to the copy destination's.
Next, when the texture features of the source texture are the same as those of the texture bound to the on-screen rendering target, and the rendering target of the rendering instruction is the same as the on-screen rendering target, if an instruction contained in the rendering instruction samples the destination texture, storage indication information is added for the destination texture while copying the next frame's texture, so that the texture is stored in the last-level cache.
Texture features in the embodiments of the present disclosure include texture attributes such as size, height, width, format, and data type, which the present disclosure does not limit.
Specifically, it may first be determined whether the texture features of the source texture match those of the texture bound to the on-screen rendering target. If they differ, the subsequent determination process for adding storage indication information is terminated.
If the texture features match, it is then determined whether the rendering target of the rendering instruction is the same as the on-screen rendering target. If the two rendering targets differ, the subsequent determination process is terminated.
If the two rendering targets are the same, it is then determined whether an instruction contained in the rendering instruction samples the destination texture. If not, the subsequent determination process is terminated.
If an instruction contained in the rendering instruction samples the destination texture, storage indication information is added for the destination texture while copying the next frame's texture, so that the texture is stored in the last-level cache. Specifically, the storage indication information is GL_TEXTURE_OPPO_LLC.
Textures for which no storage indication information is added after termination may be stored in memory as usual.
It will be appreciated that, across consecutive frames, the texture feature comparison, the rendering target comparison, and the check of whether the destination texture is sampled may be performed frame by frame. If all conditions are satisfied for a frame, the storage indication information is configured for that frame; if any condition fails, that frame is configured for memory storage.
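The per-frame decision for the Grab Pass scenario, i.e. the three checks above performed in order, can be sketched as a predicate. The feature fields and the function names features_equal and should_store_in_llc are illustrative assumptions, not names from the patent.

```c
#include <assert.h>

/* Illustrative texture-feature record (size, format, data type). */
typedef struct { int width, height, format, data_type; } tex_features;

static int features_equal(tex_features a, tex_features b)
{
    return a.width == b.width && a.height == b.height &&
           a.format == b.format && a.data_type == b.data_type;
}

/* The three checks above, in order; any failure falls back to plain
 * memory storage (returns 0), all passing selects the LLC (returns 1). */
static int should_store_in_llc(tex_features source, tex_features onscreen_bound,
                               int cmd_target_id, int onscreen_target_id,
                               int samples_dest_texture)
{
    if (!features_equal(source, onscreen_bound)) return 0;
    if (cmd_target_id != onscreen_target_id)     return 0;
    if (!samples_dest_texture)                   return 0;
    return 1;
}
```

Evaluating this predicate once per frame yields exactly the frame-by-frame configuration described above: LLC storage when it returns 1, ordinary memory storage otherwise.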
By combining the determination of the application program interface with that of the rendering target, the embodiments of the present disclosure add, when the conditions are satisfied, the storage indication information obtained from the driver extension to the texture, indicating that the texture is stored in the last-level cache. The processor can then obtain the texture from the LLC without accessing memory, which reduces the memory access frequency during rendering, lowers latency and power consumption, and improves device performance and user experience.
It should be noted that although the steps of the methods in the present disclosure are depicted in the accompanying drawings in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all illustrated steps be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
Further, in this example embodiment, a texture processing apparatus is also provided.
Fig. 8 schematically shows a block diagram of a texture processing apparatus of an exemplary embodiment of the present disclosure. Referring to fig. 8, the texture processing apparatus 8 according to an exemplary embodiment of the present disclosure may include an interface detection module 81 and a texture storage module 83.
Specifically, the interface detection module 81 may be configured to detect an application program interface called during rendering, so as to determine the type of the application program interface; the texture storage module 83 may be configured to add, based on the type of the application program interface and the rendering target of the rendering, storage indication information for a texture, where the storage indication information indicates that the texture is to be stored in the last-level cache (LLC).
According to an exemplary embodiment of the present disclosure, the application program interface is the program interface that binds a texture to a frame buffer object as a color attachment or depth attachment. In this case, the texture storage module 83 may be configured to: obtain a rendering instruction and, when the rendering target corresponding to the rendering instruction includes an off-screen rendering target, find the texture from the context; and add the storage indication information for the texture.
According to an example embodiment of the present disclosure, the texture storage module 83 may be configured to perform: marking a rendering target of off-screen rendering to obtain first identification information; under the condition that the rendering instruction contains the first identification information, determining that the rendering target corresponding to the rendering instruction comprises the rendering target of off-screen rendering.
According to an example embodiment of the present disclosure, the texture storage module 83 may be configured to perform: marking the texture when the application program interface is detected so as to obtain second identification information; wherein the texture is found from the context by the second identification information.
According to an exemplary embodiment of the present disclosure, the application program interface is a program interface related to texture copying, and the texture is the destination texture corresponding to the copy destination. In this case, the texture storage module 83 may be configured to: determine the source texture of the copy source and the destination texture of the copy destination corresponding to the application program interface; and, when the texture features of the source texture are the same as those of the texture bound to the on-screen rendering target and the rendering target of the rendering instruction is the same as the on-screen rendering target, if an instruction contained in the rendering instruction samples the destination texture, add the storage indication information for the destination texture while copying the next frame's texture.
According to an example embodiment of the present disclosure, the texture storage module 83 may be configured to add the storage indication information to the texture through the driver of the application program interface and the extension content for adding the storage indication information.
According to an exemplary embodiment of the present disclosure, referring to fig. 9, the texture processing apparatus 9 may further include a driving expansion module 91 with respect to the texture processing apparatus 8.
Specifically, the drive extension module 91 may be configured to extend the driver of the application program interface in advance, where the content of the driver extension relates to the last-level cache (LLC) and includes content related to adding the storage indication information.
Since each functional module of the texture processing apparatus in the embodiment of the present disclosure is the same as that in the above-described method embodiment, a detailed description thereof will be omitted.
Fig. 10 shows a schematic diagram of an electronic device suitable for implementing exemplary embodiments of the present disclosure. The terminal device of the exemplary embodiments of the present disclosure may be configured in the form of fig. 10. It should be noted that the electronic device shown in fig. 10 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present disclosure.
The electronic device of the present disclosure includes at least a processor and a memory for storing one or more programs that, when executed by the processor, enable the processor to implement the texture processing method of the exemplary embodiments of the present disclosure.
Specifically, as shown in fig. 10, the electronic device 100 may include: processor 1010, internal memory 1021, external memory interface 1022, universal serial bus (Universal Serial Bus, USB) interface 1030, charge management module 1040, power management module 1041, battery 1042, antenna 1, antenna 2, mobile communication module 1050, wireless communication module 1060, audio module 1070, sensor module 1080, display 1090, camera module 1091, indicator 1092, motor 1093, keys 1094, and subscriber identity module (Subscriber Identification Module, SIM) card interface 1095, among others. The sensor module 1080 may include, among other things, a depth sensor, a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
It is to be understood that the structure illustrated in the embodiments of the present disclosure does not constitute a specific limitation on the electronic device 100. In other embodiments of the present disclosure, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 1010 may include one or more processing units, such as: the processor 1010 may include an application processor (Application Processor, AP), a modem processor, a graphics processor (Graphics Processing Unit, GPU), an image signal processor (Image Signal Processor, ISP), a controller, a video codec, a digital signal processor (Digital Signal Processor, DSP), a baseband processor and/or a neural network processor (Neural-network Processing Unit, NPU), and the like. The different processing units may be separate devices or may be integrated in one or more processors. In addition, a memory may be provided in the processor 1010 for storing instructions and data.
The electronic device 100 may implement a photographing function through an ISP, a camera module 1091, a video codec, a GPU, a display 1090, an application processor, and the like. In some embodiments, the electronic device 100 may include 1 or N camera modules 1091, where N is a positive integer greater than 1, and if the electronic device 100 includes N cameras, one of the N cameras is a master camera.
The internal memory 1021 may be used to store computer executable program code including instructions. The internal memory 1021 may include a storage program area and a storage data area. The external memory interface 1022 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100.
The present disclosure also provides a computer-readable storage medium that may be included in the electronic device described in the above embodiments; or may exist alone without being incorporated into the electronic device.
The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable storage medium may transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The computer-readable storage medium carries one or more programs which, when executed by such an electronic device, cause the electronic device to implement the methods described in the embodiments of the present disclosure.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to cause a computing device (such as a personal computer, a server, a terminal device, or a network device) to perform the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are only schematic illustrations of processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A texture processing method, comprising:
Detecting an application program interface called in a rendering process to determine the type of the application program interface;
And adding storage indication information for the texture according to the class of the application program interface and the rendering target of rendering, wherein the storage indication information is used for indicating that the texture is to be stored in the last level cache LLC.
2. The texture processing method according to claim 1, wherein the application program interface is a program interface that binds the texture to a frame buffer object as a color attachment or a depth attachment; wherein adding storage indication information for the texture in combination with the class of the application program interface and the rendering target of the rendering comprises:
Acquiring a rendering instruction, and searching the texture from the context under the condition that a rendering target corresponding to the rendering instruction comprises a rendering target of off-screen rendering;
And adding the storage indication information for the texture.
3. The texture processing method according to claim 2, further comprising:
Marking a rendering target of off-screen rendering to obtain first identification information;
and under the condition that the rendering instruction contains the first identification information, determining that the rendering target corresponding to the rendering instruction comprises the rendering target of off-screen rendering.
4. The texture processing method according to claim 2, further comprising:
When the application program interface is detected, marking the texture to obtain second identification information;
and searching the texture from the context through the second identification information.
5. The texture processing method according to claim 1, wherein the application program interface is a program interface related to a copy of a texture, the texture being a destination texture corresponding to a copy destination; wherein, in combination with the class of the application program interface and the rendering target of the rendering, storage indication information is added for the texture, including:
Determining a source texture of a copy source and a destination texture of a copy destination corresponding to the application program interface;
And in the case that the texture features of the source texture are the same as the texture features of the texture bound to the rendering target of on-screen rendering, and the rendering target of the rendering instruction is the same as the rendering target of on-screen rendering, if an instruction contained in the rendering instruction samples the destination texture, adding the storage indication information for the destination texture in the process of copying the texture of the next frame.
6. The texture processing method according to any one of claims 1 to 5, wherein adding storage indication information for the texture comprises:
And adding the storage indication information to the texture through the driver of the application program interface and the storage indication information.
7. The texture processing method according to claim 6, further comprising:
Performing driver extension on the application program interface in advance;
Wherein the content of the driver extension is related to the last level cache LLC, and the content of the driver extension includes content related to adding the storage indication information.
8. A texture processing apparatus, comprising:
An interface detection module, configured to detect an application program interface called in a rendering process, so as to determine the class of the application program interface;
And a texture storage module, configured to add storage indication information for the texture in combination with the class of the application program interface and the rendering target of rendering, wherein the storage indication information is used for indicating that the texture is to be stored in the last level cache LLC.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the texture processing method according to any one of claims 1 to 7.
10. An electronic device, comprising:
A processor;
A memory for storing one or more programs that, when executed by the processor, cause the processor to implement the texture processing method of any one of claims 1 to 7.
CN202211626004.1A 2022-12-15 2022-12-15 Texture processing method and device, computer readable storage medium and electronic equipment Pending CN118212339A (en)

Publications (1)

Publication Number Publication Date
CN118212339A true CN118212339A (en) 2024-06-18



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination