CN117152320A - Image processing method and electronic device - Google Patents
- Publication number
- CN117152320A (application number CN202310145970.XA)
- Authority
- CN
- China
- Prior art keywords
- source code
- shader source
- shader
- program
- target application
- Prior art date
- Legal status: Granted (the listed status is an assumption by Google Patents, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The embodiments of this application provide an image processing method and an electronic device. The method comprises the following steps: a terminal device obtains a target file of a target application, where the target file includes first shader source code and second shader source code; the first shader source code comprises shader source code whose probability is greater than a preset value, and the probability is related to one or more of the following pieces of feature information: scene data related to the running process of the target application, graphics, dynamic-effect data of the graphics, the time required to compile the shader source code, and the space occupied by the shader source code. The terminal device precompiles the first shader source code in the target file and does not precompile the second shader source code. Therefore, when the first shader source code is needed to shade a graphic, the terminal device can use the precompiled first shader source code, saving the time of compiling it and reducing the stutter on the terminal device caused by overly long shader compilation.
Description
Technical Field
The present application relates to the field of terminal technologies, and in particular, to an image processing method and an electronic device.
Background
With the development of terminal technology, the images displayed by terminal devices have become richer and their quality keeps improving. A terminal device may render an image using a graphics processing unit (GPU) and display the rendered image on a display screen. During GPU rendering, the terminal device may use a shader to shade the image.
When the program corresponding to the shader source code is not stored on the terminal device, the terminal device compiles the shader source code into a binary program the GPU can recognize, and the rendering engine can then render the image based on the compiled program. During GPU rendering, the long time needed to render an image may cause stutter; for example, when the terminal device installs an application and starts it for the first time, compiling the shader source code takes a long time, which causes the terminal device to stutter.
Disclosure of Invention
The embodiments of this application provide an image processing method and an electronic device, applied in the field of terminal technologies. By precompiling part of the shader source code into a program recognizable by the GPU, the GPU can use that program when the shader source code is needed to shade a graphic, saving the time of compiling the shader source code and thereby reducing the stutter scenarios caused by long compilation times.
In a first aspect, an embodiment of this application provides an image processing method. The method comprises the following steps: a terminal device obtains a target file of a target application, where the target file includes first shader source code and second shader source code; the first shader source code comprises shader source code whose probability is greater than a preset value, and the probability is related to one or more of the following pieces of feature information: scene data related to the running process of the target application, graphics, dynamic-effect data of the graphics, the time required to compile the shader source code, and the space occupied by the shader source code. The terminal device precompiles the first shader source code in the target file and does not precompile the second shader source code. In this way, when the terminal device shades the graphics of the first shader source code, it can invoke the first program, reducing the time the terminal device spends compiling the first shader source code and thereby the probability of stutter; when the terminal device shades the graphics of the second shader source code, the probability that compiling the second shader source code causes stutter is also low. Moreover, because the terminal device precompiles only part of the shader source code, the amount of shader source code that needs precompilation is reduced, lowering the occupied resource space and the power consumption.
In a possible implementation, the probability is derived from the assignment of the feature information: when the shader source code is needed in the scene data related to the running process of the target application, the assignment of that scene data is 1; otherwise it is 0. If the first shader source code draws a graphic, the graphic's assignment is 1; otherwise it is 0. The dynamic-effect data of a graphic with a dynamic effect is assigned 1; without a dynamic effect it is assigned 0. The assignment of the time required to compile the shader source code is that time itself, and the assignment of the space occupied by the shader source code is that space usage. In this way, the electronic device can accurately predict the probability of a shader source code from its related information.
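The assignment scheme above can be sketched as a simple feature-encoding routine (a hypothetical illustration; the function and parameter names are assumptions, not part of the patent):

```python
# Hypothetical sketch of the feature assignment described above:
# binary features are encoded as 1/0, while the compile duration and
# space usage keep their measured values.
def encode_features(needed_in_scene, draws_graphic, has_dynamic_effect,
                    compile_ms, space_kb):
    return [
        1 if needed_in_scene else 0,      # scene data needs this shader
        1 if draws_graphic else 0,        # shader draws a graphic
        1 if has_dynamic_effect else 0,   # graphic has a dynamic effect
        float(compile_ms),                # compile duration keeps its value
        float(space_kb),                  # space usage keeps its value
    ]
```

The resulting vector can then be fed into the regression analysis described below.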
In one possible implementation, before the terminal device precompiles the first shader source code in the target file, the method includes: the terminal device creates a precompilation thread, which is different from the main thread of the running target application; the terminal device obtains the operating parameters of the central processing unit (CPU), which include at least one of the following: the number of processes running on the CPU and/or the working frequency of the CPU. The terminal device precompiling the first shader source code in the target file includes: when the terminal device determines that the CPU's operating parameters meet a preset condition, it precompiles the first shader source code in the precompilation thread to obtain a first program corresponding to the first shader source code, where the first program is a program recognizable by the graphics processor (GPU). The preset condition includes at least one of the following: the number of processes running on the CPU is below a preset process-count threshold, and/or the working frequency of the CPU is above a preset working-frequency threshold. In this way, the terminal device does not increase the system load while performing precompilation.
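The preset-condition check above can be sketched as follows. The thresholds are illustrative assumptions, and combining both conditions with "and" is one possible variant, since the patent allows either condition alone:

```python
# Hedged sketch: gate precompilation on CPU load, as described above.
# Threshold values and parameter names are illustrative assumptions.
def should_precompile(running_processes, cpu_freq_mhz,
                      max_processes=100, min_freq_mhz=1800):
    # Precompile only when the CPU is not busy (few running processes)
    # and is running fast enough (high working frequency).
    return running_processes < max_processes and cpu_freq_mhz > min_freq_mhz
```

When this returns False, a scheduler could instead lower the precompilation thread's priority and retry later, matching the delayed-precompilation behavior described in the next implementation.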
In one possible implementation, after the terminal device obtains the CPU's operating parameters, the method further includes: if the CPU's operating parameters do not meet the preset condition, the terminal device lowers the priority of the precompilation thread, so that the terminal device delays precompiling the first shader source code. In this way, the terminal device does not increase the system load while performing precompilation.
In one possible implementation, after the precompilation module precompiles the first shader source code, the method includes: the precompilation module stores the first program in a file system and/or a memory, where the memory includes the GPU program memory. In this way, the terminal device can later find the first program of the first shader source code in the file system and/or the memory.
In one possible implementation, the method further includes: the terminal device starts the target application; the target application issues a drawing instruction to the hardware-accelerated drawing user interface (HWUI); HWUI converts the drawing instruction into third shader source code and synchronizes it to the rendering engine, where the third shader source code includes the first shader source code and/or the second shader source code; the rendering engine looks up in the memory whether a precompiled program exists for the third shader source code; if the memory includes such a program, the rendering engine reports the program in the memory to the GPU. Thus, when the third shader source code has been precompiled, the terminal device can find the corresponding program in the storage space and use it without compiling, shortening shader compilation time and accelerating GPU rendering.
In one possible implementation, after the rendering engine looks up in the memory whether a precompiled program exists for the third shader source code, the method further includes: if the memory does not contain the program, the rendering engine searches for it in the file system; if the file system includes the program, the rendering engine reports the program in the file system to the GPU; or, if the file system does not include the program, the terminal device compiles the third shader source code into a program, stores the program in the memory and/or the file system, and the rendering engine reports the program to the GPU. Thus, when the third shader source code has not been precompiled, the terminal device compiles it; this process has only a small probability of causing stutter, so stutter on the terminal device is reduced.
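The memory-then-file-system-then-compile lookup order described above can be sketched as follows (a minimal illustration; the class and method names are assumptions):

```python
# Hedged sketch of the lookup order described above: check the in-memory
# cache first, then the file system; compile only on a miss, then store
# the result in both places for later reuse.
class ProgramCache:
    def __init__(self, compile_fn):
        self.memory = {}          # in-memory program cache
        self.disk = {}            # stands in for the file system
        self.compile_fn = compile_fn

    def get_program(self, source):
        if source in self.memory:            # fastest path: memory hit
            return self.memory[source]
        if source in self.disk:              # second path: file-system hit
            self.memory[source] = self.disk[source]
            return self.memory[source]
        program = self.compile_fn(source)    # miss: compile (may cause jank)
        self.memory[source] = program        # cache for future frames
        self.disk[source] = program
        return program
```

Because the compile step runs at most once per shader source, later frames that reuse the same source never pay the compilation cost again.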
In a second aspect, an embodiment of this application provides an image processing method. The method comprises the following steps: an electronic device obtains the shader source code of a target application; the electronic device screens the shader source code to obtain first shader source code, where the amount of first shader source code is less than the total amount of shader source code, and the first shader source code comprises shader source code whose probability is greater than a preset value, the probability being related to one or more of the following pieces of feature information: scene data related to the running process of the target application, graphics, dynamic-effect data of the graphics, the time required to compile the shader source code, and the space occupied by the shader source code; the electronic device classifies and packages the first shader source code and second shader source code into a target file of the target application, where the second shader source code comprises the shader source code of the target application excluding the first shader source code. In this way, after the target application is released, a terminal device downloads and installs the target application and obtains the first shader source code. The terminal device does not need to precompile all the shader source code, reducing the occupied resource space and the compilation time when the first shader source code is used.
In one possible implementation, the electronic device screening the shader source code for the first shader source code comprises the following steps: the electronic device identifies the feature information of each piece of shader source code; the electronic device assigns values to the feature information; the electronic device performs regression analysis on the assigned feature information to obtain the probability of each piece of shader source code; and the electronic device screens out the shader source code whose probability is higher than the preset value to obtain the first shader source code. Thus, the electronic device can accurately predict the probability of a shader source code from its related information.
In one possible implementation, when the shader source code is needed in the scene data related to the running process of the target application, the assignment of that scene data is 1; otherwise it is 0. If the first shader source code draws a graphic, the graphic's assignment is 1; otherwise it is 0. The dynamic-effect data of a graphic with a dynamic effect is assigned 1; without a dynamic effect it is assigned 0. The assignment of the time required to compile the shader source code is that time itself, and the assignment of the space occupied by the shader source code is that space usage. Thus, the electronic device can accurately predict the probability of a shader source code from its related information.
In one possible implementation, the probability satisfies the following logistic-regression formula: y = 1 / (1 + e^(-(WᵀX + b))), wherein y is the probability; X is the assigned feature information of the shader source code; Wᵀ is the weight coefficient corresponding to the feature information of the shader source code; and b is the offset.
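Assuming the standard logistic-regression form, the probability computation and the screening against the preset value can be sketched as follows (the weights, bias, and threshold are illustrative assumptions):

```python
import math

# Hedged sketch of the screening step: logistic regression over the
# assigned feature vector. Weights, bias, and threshold are assumptions.
def probability(features, weights, bias):
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))    # y = 1 / (1 + e^-(W^T X + b))

def screen_shaders(shaders, weights, bias, threshold=0.5):
    # Keep only shader sources whose probability exceeds the preset value;
    # `shaders` is a list of (shader_id, feature_vector) pairs.
    return [sid for sid, feats in shaders
            if probability(feats, weights, bias) > threshold]
```

Shaders that pass the threshold would be packaged as the first shader source code; the rest become the second shader source code.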
In a third aspect, an embodiment of the present application provides a terminal device, which may also be referred to as a terminal (terminal), a User Equipment (UE), a Mobile Station (MS), a Mobile Terminal (MT), or the like. The terminal device may be a mobile phone, a smart television, a wearable device, a tablet (Pad), a computer with wireless transceiving function, a Virtual Reality (VR) terminal device, an augmented reality (augmented reality, AR) terminal device, a wireless terminal in industrial control (industrial control), a wireless terminal in unmanned driving (self-driving), a wireless terminal in teleoperation (remote medical surgery), a wireless terminal in smart grid (smart grid), a wireless terminal in transportation safety (transportation safety), a wireless terminal in smart city (smart city), a wireless terminal in smart home (smart home), or the like.
The terminal device includes: a processor and a memory; the memory stores computer-executable instructions; the processor executes the computer-executable instructions stored in the memory to cause the terminal device to perform the methods of the first and second aspects.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements the method as in the first and second aspects.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program which, when run, causes a computer to perform the methods as in the first and second aspects.
In a sixth aspect, embodiments of the present application provide a chip comprising a processor for invoking a computer program in a memory to perform the methods as in the first and second aspects.
It should be understood that, the third aspect to the sixth aspect of the present application correspond to the technical solutions of the first aspect and the second aspect of the present application, and the advantages obtained by each aspect and the corresponding possible embodiments are similar, and are not repeated.
Drawings
FIG. 1 is a flow diagram of compiled shader source code in a possible implementation;
FIG. 2 is a schematic flow diagram of a terminal device using shader source code in a possible implementation;
fig. 3 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
Fig. 4 is a schematic software structure diagram of a terminal device according to an embodiment of the present application;
fig. 5 is a schematic software structure diagram of another terminal device according to an embodiment of the present application;
fig. 6 is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 7 is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 8 is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 9 is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 10 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 11 is a flowchart of an image processing method according to an embodiment of the present application;
fig. 12 is a schematic view of a scenario of an image processing method according to an embodiment of the present application;
fig. 13 is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 14 is a schematic view of a scenario of an image processing method according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
For purposes of clarity in describing the embodiments of the present application, the words "exemplary" or "such as" are used herein to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of" and similar expressions mean any combination of these items, including any combination of single or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or multiple.
In the embodiments of this application, "at the time of …" may refer to the instant a situation occurs, or to a period of time after it occurs; this is not specifically limited. In addition, the display interfaces provided in the embodiments of this application are only examples, and a display interface may contain more or less content.
The terminal device may render an image using the GPU and display the rendered image on a display screen. During GPU rendering, the terminal device may use a shader to shade the image. A shader can be understood as a programmable program running on the GPU that implements image rendering; by running the shader program, the GPU shades the image. When the program corresponding to the shader source code is not stored on the terminal device, the terminal device compiles the shader into a binary program the GPU can recognize, and the rendering engine can render the image based on the compiled program. During GPU rendering, the long time needed to render an image may cause stutter; for example, when the terminal device installs an application and starts it for the first time, the terminal device needs to compile the shader source code of the graphics to be displayed, and compiling it takes a long time, causing the terminal device to stutter.
The following describes, with reference to fig. 1, a flow of executing GPU rendering by a terminal device in the above possible implementation and a cause of a stuck in the GPU rendering process, as shown in fig. 1:
When performing GPU rendering, the terminal device may instruct the shader to perform shading operations according to the rendering instructions. In a possible implementation, the terminal device may use shaders to shade the vertices and/or fragments of an image. A vertex may be a vertex coordinate of the graphic to be rendered in the image, and a fragment can be understood as a unit made up of one or more points of the graphic. The shader information issued by an application may include vertex-shader information and fragment-shader (also called pixel-shader) information, to achieve complete rendering of the image.
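For concreteness, a minimal vertex/fragment shader source pair of the kind referred to above might look like this (held here as Python strings; illustrative only, since real applications ship far more complex GLSL):

```python
# Minimal GLSL ES shader sources of the kind discussed above.
# These are the human-readable strings that the terminal device must
# compile into GPU-recognizable binary programs.
VERTEX_SRC = """
attribute vec4 a_position;      // vertex coordinates of the graphic
void main() {
    gl_Position = a_position;   // pass the vertex through unchanged
}
"""

FRAGMENT_SRC = """
precision mediump float;
void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);  // fill fragments with red
}
"""
```

The vertex source handles per-vertex positions while the fragment source assigns per-fragment colors, matching the vertex-shader/fragment-shader split described above.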
The GPU rendering process is described below taking an open graphics library (Open Graphics Library, openGL) rendering platform as an example.
Illustratively, S101, a shader is created.
The terminal device may issue the OpenGL ES API glCreateShader to instruct OpenGL for Embedded Systems (OpenGL ES) to create a shader object. It can be understood that OpenGL ES creates a block of memory for writing shaders in the OpenGL Shading Language (GLSL).
S102, attaching a vertex source to the shader to obtain a vertex shader.
The vertex shader is used to perform the image's model transform, projection transform, view transform, lighting, and so on. The terminal device may issue the OpenGL ES API glShaderSource to instruct OpenGL ES to attach and/or bind a vertex source to the shader. Shader sources may include a vertex source and a fragment source. A shader source may be a character string the CPU can recognize and may be used to define a shader object; for example, when a vertex source is attached to the shader object, the shader object from step S101 is defined as a vertex shader.
S103, compiling the vertex shader.
The terminal device may issue the OpenGL ES API glCompileShader to instruct OpenGL ES to compile the vertex shader. It can be understood that the vertex source is a human-readable string the CPU can recognize; the terminal device needs to compile the vertex source into a binary the GPU can recognize, so that the GPU can later perform rendering operations using the compiled shader source code.
S104, creating a shader.
S105, attaching the fragment source to the shader to obtain the fragment shader.
Fragment shaders may be used to calculate the colors of the image, obtain textures, fill pixels with color, and so on.
S106, compiling the fragment shader.
The principles of steps S104-S106 are similar to those of steps S101-S103, and reference is made to the text descriptions at steps S101-S103, which are not repeated here.
S107, creating a program.
The terminal device may issue OpenGL ES APIs: the glCreateProgram instruction may be used to instruct the creation of an empty program (program). The program created by the glCreateProgram instruction can be used to attach the shader.
S108, attaching the compiled vertex shader and the compiled fragment shader to the program.
The terminal device may issue OpenGL ES APIs: the glatatchloader instruction may be used to indicate that a compiled shader is to be attached to a program.
S109, linking the program to the application program.
The terminal device may issue OpenGL ES APIs: the glLinkProgram instruction may be used to instruct linking the program to the application. The application may be an application running on the terminal device.
S110, running a program.
The terminal device may issue the OpenGL ES API glUseProgram, which may be used to instruct running the program. After binding the shaders to the program, the terminal device can invoke the vertex shader and the fragment shader according to the instructions issued by the application, and achieve the shading of the image in the application by running the compiled shader sources in the shaders.
S111, rendering to a frame buffer (FrameBuffer).
After the terminal device uses the shader to render the image model, openGL renders the image model onto a frame buffer object (frame buffer object, FBO) so that the subsequent terminal device can send and display the image.
It can be understood that, after the GPU renders a graphic, the terminal device may retain the shading result of that graphic in the image. For the same graphic, OpenGL does not compile repeatedly: when the terminal device needs to render the graphic again, it can reuse the previous shading result. For a new graphic that the terminal device has not rendered, the terminal device needs to compile that graphic's shader before use. Taking Surface graphics rendering as an example, the process by which the terminal device invokes shading results may be as shown in fig. 2:
S201, the terminal equipment starts an application program.
When the terminal device receives a trigger operation on the application's icon, the terminal device starts the application and creates an EGL Surface. The EGL Surface may be used to store the images that need to be displayed on the display screen while the application runs.
S202, the terminal device acquires the compiled shader source code from a file system and stores it into the Blobcache.
The file system may be a disk, in which the compiled programs of one or more graphics may be stored. The terminal device can load these programs into the Blobcache. The Blobcache may be understood as a memory in which the programs of multiple graphics may be stored.
It will be appreciated that the terminal device may have run the application multiple times before this run. Accordingly, the file system may store related information of the images rendered during previous runs of the application; for example, the related information of an image may be a program.
S203, when a drawing instruction for a graphic is received, the terminal device queries, according to the drawing instruction, whether a program corresponding to the graphic exists in the program cache.
The drawing instruction may be a DrawOp. The program cache may be understood as a memory that may be used to store the programs of graphics in the images rendered by the GPU and to send those programs to the GPU. When the terminal device needs to draw a certain graphic, the terminal device can issue a DrawOp for the graphic. After receiving the DrawOp, the terminal device searches for the program of the graphic in the program cache.
It can be understood that when the terminal device has just started the application, the program cache may be empty; as the application runs, the terminal device may continuously write programs into the program cache.
S204, if a program corresponding to the graphic exists in the program cache, execute step S209.
The presence of the program corresponding to the graphic in the program cache indicates that the program of the graphic has been loaded. The terminal device can report the program in the program cache to the GPU so that the GPU can use the program to shade the graphic.
S205, if the program corresponding to the graphic does not exist in the program cache, query whether the program corresponding to the graphic exists in the Blobcache.
S206, if the program corresponding to the graphic exists in the Blobcache, store the program corresponding to the graphic into the program cache, and execute step S209.
If the program corresponding to the graphic exists in the Blobcache, the terminal device loads the program of the graphic into the program cache, and then reports the program in the program cache to the GPU.
S207, if the program corresponding to the graphic does not exist in the Blobcache, the terminal device compiles the shader of the graphic to obtain the program corresponding to the graphic.
S208, store the program corresponding to the graphic into the file system, the program cache, and the Blobcache respectively.
For a new graphic, the terminal device may perform shader compilation on it and save the result for the next use. The shader compilation process may refer to steps S101-S108, which are not repeated here.
S209, report the program and rendering instructions in the program cache to the GPU.
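The lookup order in steps S203-S209 (program cache first, then Blobcache, then a fresh compile that backfills all three stores) can be sketched in Python; the function and parameter names below are illustrative, not from the patent.

```python
def get_program(graphic, program_cache, blobcache, file_system, compile_shader):
    """Illustrative lookup order from steps S203-S209 (names are hypothetical)."""
    if graphic in program_cache:                       # S204: fastest path
        return program_cache[graphic]
    if graphic in blobcache:                           # S205/S206: promote to program cache
        program_cache[graphic] = blobcache[graphic]
        return program_cache[graphic]
    program = compile_shader(graphic)                  # S207: slow path, shader compilation
    file_system[graphic] = program                     # S208: persist for future runs
    blobcache[graphic] = program
    program_cache[graphic] = program
    return program                                     # S209: report to the GPU
```

On a second request for the same graphic, the program-cache branch is taken and `compile_shader` is never called again; after a restart that clears only the program cache, the Blobcache branch still avoids recompilation.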
It can be seen that in some scenarios, for example: installing an application, starting it for the first time, loading a new drawing page during application use where the page has color gamut changes, rounded-corner changes, alpha changes, and the like, or the terminal device clearing the cache, the terminal device needs to perform shader compilation on new graphics (graphics whose corresponding programs are not stored in the file system and/or the program cache) when rendering an image. The compilation process is, for example: compiling the vertex shader in step S103 and compiling the fragment shader in step S104. Usually, compiling one graphic takes about 7 ms, but when a frame of image contains multiple new graphics, the terminal device often takes tens to hundreds of milliseconds to execute the shader compilation process, so that the rendering time is too long when the GPU renders the frame, which in turn causes frame loss and stuttering when the terminal device displays the image.
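The timing claim above can be made concrete with a little arithmetic: at the text's figure of roughly 7 ms of compilation per new graphic, only a few new graphics in one frame already exceed a 60 Hz frame budget (the 60 Hz refresh rate is an assumption for illustration, not stated in the patent).

```python
COMPILE_MS_PER_GRAPHIC = 7.0        # per-graphic shader compile time cited in the text
FRAME_BUDGET_MS = 1000.0 / 60       # ~16.7 ms per frame at an assumed 60 Hz refresh

def compile_overhead_ms(new_graphics: int) -> float:
    """Total shader-compilation time added to one frame."""
    return new_graphics * COMPILE_MS_PER_GRAPHIC

def frame_will_drop(new_graphics: int) -> bool:
    """True when compilation alone already exceeds the frame budget."""
    return compile_overhead_ms(new_graphics) > FRAME_BUDGET_MS
```

Three new graphics cost 21 ms of compilation, past the ~16.7 ms budget, which matches the frame-loss symptom described above.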
In a possible implementation, the terminal device precompiles all the shader source code under the platform directory of the operating platform and stores the precompiled result files. Upon receiving a rendering instruction, the precompilation component may read the already compiled result file, thereby reducing the shader compilation time. However, this method has some problems. For example: after the terminal device acquires the shader source code, the precompilation component precompiles all of the acquired shader source code, and the larger number of program files occupies part of the memory. For another example, the precompilation component may rely on the host process of the application to perform shader precompilation, and when the amount of shader source code is large, more resources of the host process of the application are occupied.
In view of this, an embodiment of the present application provides an image processing method, which screens out the shader source code of graphics related to stutter scenes, precompiles that shader source code, and stores the corresponding program files. A stutter scene can be understood as a scene in which the terminal device stutters because compiling the shader source code takes a long time.
Therefore, when the terminal device needs to render a graphic, it can use the precompiled program file without executing the compilation process for the program file, which reduces the time the terminal device spends compiling this part of the shader source code and reduces stuttering. For the shader source code that is not precompiled, the probability that it causes stuttering is small, so even if the terminal device compiles this shader source code with the conventional compilation flow, the terminal device rarely stutters. Therefore, the method can shorten the compilation time of the shader source code and reduce stuttering of the terminal device; moreover, precompiling only part of the shader source code can relieve the memory pressure caused by precompiling all of it.
In order to better understand the embodiments of the present application, the structure of the terminal device of the embodiments of the present application is described below. Fig. 3 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
It is understood that the terminal device may also be referred to as a terminal, user equipment (UE), mobile station (MS), mobile terminal (MT), etc. The terminal device may be a mobile phone with a display screen, a smart TV, a wearable device, a tablet (Pad), a computer with wireless transceiving functions, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, or the like. The embodiment of the application does not limit the specific technology and specific device form adopted by the terminal device.
As shown in fig. 3, the terminal device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, an indicator 192, a camera 193, a display 194, and the like.
It will be appreciated that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the terminal device. In other embodiments of the application, the terminal device may include more or fewer components than illustrated, some components may be combined, some components may be split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. Wherein the different processing units may be separate devices or may be integrated in one or more processors. A memory may also be provided in the processor 110 for storing instructions and data.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the terminal device, or to transfer data between the terminal device and a peripheral device. It can also be used to connect a headset and play audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. The power management module 141 is used for connecting the charge management module 140 and the processor 110.
The wireless communication function of the terminal device may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Antennas in the terminal device may be used to cover single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G or the like applied on a terminal device. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation.
The wireless communication module 160 may provide solutions for wireless communication applied on the terminal device, including wireless local area network (wireless local area networks, WLAN) (e.g., a wireless fidelity (wireless fidelity, Wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), etc.
The terminal device implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. In some embodiments, the terminal device may include 1 or N display screens 194, N being a positive integer greater than 1.
The terminal device may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The camera 193 is used to capture still images or video. In some embodiments, the terminal device may include 1 or N cameras 193, N being a positive integer greater than 1.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to realize expansion of the memory capability of the terminal device. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer-executable program code that includes instructions. The internal memory 121 may include a storage program area and a storage data area.
The terminal device may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output, and also to convert an analog audio input into a digital audio signal. The speaker 170A, also referred to as a "loudspeaker", is used to convert audio electrical signals into sound signals. The terminal device can play music or take hands-free calls through the speaker 170A. The receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals. When the terminal device answers a call or a voice message, the voice can be heard by placing the receiver 170B close to the ear. The earphone interface 170D is used to connect a wired earphone.
The microphone 170C, also referred to as a "mic", is used to convert sound signals into electrical signals. In the embodiment of the present application, the terminal device may receive, based on the microphone 170C, the sound signal for waking up the terminal device and convert it into an electrical signal that may be processed later; the terminal device may have at least one microphone 170C.
The sensor module 180 may include one or more of the following sensors, for example: pressure sensors, gyroscopic sensors, barometric pressure sensors, magnetic sensors, acceleration sensors, distance sensors, proximity sensors, fingerprint sensors, temperature sensors, touch sensors, ambient light sensors, or bone conduction sensors, among others.
The keys 190 include a power key, volume keys, etc. The keys 190 may be mechanical keys or touch keys. The terminal device may receive key inputs and generate key signal inputs related to user settings and function control of the terminal device. The indicator 192 may be an indicator light and may be used to indicate charging status, battery level changes, messages, missed calls, notifications, etc.
The software system of the terminal device may adopt a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, a cloud architecture, or the like, which will not be described herein.
In the embodiment of the application, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 4 and 5 are block diagrams of software structures of the electronic device 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each with clear roles and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system may include: an application layer (applications), an application framework layer (application framework), a Libs & Services layer, and a kernel layer (kernel), which may also be called the driver layer.
In one possible implementation, the terminal device may execute the image processing method provided by the embodiment of the present application based on a shader pre-compilation module of the SurfaceFlinger module in the Libs & Services layer. The software architecture of the terminal device shown in fig. 4 is suitable for the scenario of GPU composition.
As shown in fig. 4, the application layer may include a series of application packages. Application packages may include cameras, calendars, maps, phones, music, settings, mailboxes, videos, games, and the like.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 4, the application framework layer may include a window manager, a content provider, a resource manager, a view system, a notification manager, a camera access interface, and the like.
The window manager is used for managing window programs. The window manager may obtain the display screen size, determine whether there is a status bar, lock the screen, capture the screen, etc.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows the application to display notification information in the status bar and can be used to convey notification-type messages, which may disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify that a download is complete, to give message alerts, etc. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the screen, such as notifications of background-running applications, or notifications in the form of a dialog window on the screen. For example, text information is prompted in the status bar, a prompt tone sounds, the terminal device vibrates, or an indicator light blinks.
The Libs & Services layer may include a plurality of library modules and service modules. For example, the Libs & Services layer may include an HWUI module, a Skia module, a SurfaceFlinger module, and an OpenGL module, where the SurfaceFlinger module may include a shader pre-compilation module. The Android system can load the corresponding modules for the device hardware, so that the application framework layer can access the device hardware.
The HWUI (hardware-accelerated user interface) module is used for 2D hardware-accelerated rendering. The HWUI module may use OpenGL ES to perform GPU hardware drawing to improve the drawing performance of the whole system, mainly in the following ways: rendering (OpenGLRenderer), display list rendering (DisplayListRenderer), and deferred display lists (DeferredDisplayList).
The Skia module (the underlying image library of the Android system, Skia Graphics Library) is a rendering engine for processing underlying images. The terminal device may call an application programming interface (API) provided by Skia to draw and/or render images.
The SurfaceFlinger module is used for layer composition; SurfaceFlinger includes two layer composition modes: hardware composition (hardware composer, HWC) and GPU composition. In one possible implementation, GPU composition may be used in embodiments of the present application. The GPU composition process can be understood as SurfaceFlinger drawing onto more complex surfaces to form new surfaces.
The shader pre-compilation module, which may be a sub-module of the SurfaceFlinger module, may be used to precompile the shaders of graphics. In one possible implementation, the image processing method provided by the embodiment of the application can be applied to the GPU composition process.
The OpenGL module is a graphics program interface for rendering 2D and 3D vector graphics with hardware; the Skia module may transmit the precompiled shader source code to the GPU via the OpenGL module.
The kernel layer is a layer between hardware and software. The kernel layer is used for driving the hardware so that the hardware works. The kernel layer may contain camera device drivers, display drivers, audio drivers, etc.
Alternatively, in one possible implementation, the terminal device may execute the image processing method provided by the embodiment of the present application based on a shader pre-compilation module in the GPU rendering program of the application layer. The software architecture of the terminal device shown in fig. 5 is suitable for the scenario of GPU rendering.
As shown in fig. 5, the application layer may include a series of application packages. The application packages may include cameras, calendars, maps, phones, music, settings, videos, games, a shader pre-compilation module, and the like.
The shader pre-compilation module is used to compile shader data into binary programs in advance. In one possible implementation, in an embodiment of the present application, the shader pre-compilation module may be provided as an application package (apk) in the application layer, and the terminal device may activate and use the shader pre-compilation module when the GPU renders images.
The application framework layer may refer to the description in fig. 4, and will not be described in detail herein.
The Libs & Services layer may include a plurality of library modules and service modules. For example, the Libs & Services layer may include an HWUI module, a Skia module, and an OpenGL module. The Android system can load the corresponding modules for the device hardware, so that the application framework layer can access the device hardware.
The Libs & Services layer may refer to the description in fig. 4 and will not be described here.
The kernel layer is a layer between hardware and software. The kernel layer is used for driving the hardware so that the hardware works. The kernel layer may contain camera device drivers, display drivers, audio drivers, etc.
In the embodiment of the application, after the terminal device starts an application, the application can issue a rendering instruction, which can be used to instruct the GPU to call a rendering function to perform image rendering, generate a corresponding image, and display it on the display screen of the terminal device. The rendering instruction may include shader information, texture information, etc. of the image to be rendered. A frame of an image to be rendered may include a plurality of graphics; for example, the graphics may be primitives (points, lines, circles, triangles, and/or rectangles, etc.) that make up the image. Taking the status bar as an example, the graphic may be a rectangle constituting the status bar.
In the embodiment of the application, when the terminal device renders a certain graphic while an application runs, the program corresponding to the shader source code of the graphic is stored in the storage space (file system and memory) of the terminal device; when the terminal device re-runs the application and needs to render the graphic, the terminal device can call the program of the graphic from the storage space, without compiling the shader source code of the graphic to obtain the program. The program may be a binary program.
For graphics that have not been rendered by the terminal device, and/or graphics that have been rendered but whose rendering data has been cleared, the terminal device needs to compile the shader source code of the graphics into a binary program recognizable by the GPU when rendering them. The shader source code may also be called a shader resource, and is a string that includes the vertex information and fragment information of a graphic. The objects to which the image processing method in the embodiment of the application applies may be such graphics not yet rendered by the terminal device and/or graphics whose rendering data is not stored.
The image processing method provided by the embodiment of the application is described in detail below with reference to the accompanying drawings. The "at … …" in the embodiment of the present application may be an instant when a certain situation occurs, or may be a period of time after a certain situation occurs, which is not particularly limited.
An image processing method according to an embodiment of the present application is described below with reference to fig. 6. As shown in fig. 6:
Illustratively, S601, the terminal device obtains a target file of the target application, where the target file includes a first shader source code and a second shader source code.
The target application can be an application program newly downloaded by the terminal equipment or an application program installed on the terminal equipment; the target file may be an installation file for installing the application program, or may be a patch package, a new version file, or the like for updating the installed application program.
The first shader source code includes shader source code whose probability is greater than a preset value, the probability being related to one or more of the following pieces of characteristic information: the scene data related to the running process of the target application, the graphic, the dynamic effect data of the graphic, the time required to compile the shader source code, the space occupied by the first shader source code, and the number of times the terminal device calls the first shader source code within a preset time. The first shader source code may include one or more pieces of shader source code. The second shader source code includes shader source code whose probability is less than or equal to the preset value; the second shader source code may include one or more pieces of shader source code.
The scene data related to the running process of the target application may relate to scenes during that running process, for example: a scene in which a click operation is received and/or a scene in which a slide operation is received. The graphic may be the shape of the graphic, for example: triangle, circle, rectangle, etc. The dynamic effect data of the graphic may be animation data relating to effects such as scaling, transparency, translation, and/or rotation of the graphic. The time required to compile the shader source code is used to characterize how long the terminal device takes to compile the first shader source code into the binary first program.
When the terminal device downloads and installs the target application and/or updates the target application, the terminal device may acquire the target file of the target application. The target file carries the shader source code used to shade the graphics to be rendered by the target application. The shader source code may be divided into the first shader source code and the second shader source code.
It should be noted that the embodiment of the present application describes, as an example, part of the feature information related to the probability; the feature information may also be represented by other parameters, for example, the complexity of the first shader source code and the number of times the first shader source code is called within a preset time. In embodiments of the present application the probability may be calculated using some or all of the above feature information, which is not limited herein.
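As one hedged illustration of how the screening between the first and second shader source code might work, the sketch below scores each shader source with a toy weighted sum of feature information and splits the sources at a preset value. The weights, feature names, and threshold here are invented for illustration; the patent's claims mention regression analysis but do not fix a formula.

```python
# Invented feature weights; the real probability model is not specified in the text.
WEIGHTS = {
    "compile_time_ms": 0.05,   # longer compiles -> more likely to cause stutter
    "call_count": 0.02,        # frequently called shaders matter more
    "source_size_kb": 0.01,    # larger sources tend to compile more slowly
}

def stutter_probability(features: dict) -> float:
    """Toy stand-in for the probability computed from the feature information."""
    score = sum(WEIGHTS.get(name, 0.0) * value for name, value in features.items())
    return min(score, 1.0)

def split_shader_sources(sources: dict, preset_value: float = 0.5):
    """Partition sources into (first, second) shader source code per the preset value."""
    first, second = [], []
    for name, features in sources.items():
        (first if stutter_probability(features) > preset_value else second).append(name)
    return first, second
```

A complex, frequently called shader lands in the first (precompiled) group, while a trivial one stays in the second group and is compiled on demand.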
S602, the terminal equipment precompiles the first shader source code by utilizing the target file, and does not precompile the second shader source code.
The terminal device may obtain the first shader source code in the target file, precompile the first shader source code, and not precompile the second shader source code. The precompilation process can be understood as: before the terminal device performs the shading operation on the graphic corresponding to the first shader source code, the first shader source code is compiled in advance into a binary first program recognizable by the GPU.
It can be understood that the probability of the first shader source code being higher than the preset value indicates that a scene in which shader source code compilation takes too long is likely to occur while the terminal device shades graphics with the first shader source code. The terminal device may precompile the first shader source code to obtain the first program. When the terminal device subsequently shades a graphic with the first shader source code, it can acquire the already compiled first program without executing the compilation flow that compiles the first shader source code into the first program, thereby reducing the time consumed in compiling the first shader source code.
The probability of the second shader source code being lower than or equal to the preset value indicates that a scene in which shader source code compilation takes too long is unlikely to occur while the terminal device shades graphics with the second shader source code. The terminal device may compile the second shader source code into a second program when shading the graphic with the second shader source code. Because the probability of the second shader source code is lower, the probability that compiling it causes the terminal device to stutter is also lower. To reduce the occupation of resource space by precompiled shader source code, the image processing method provided by the embodiment of the application precompiles the first shader source code and does not precompile the second shader source code; the second shader source code is compiled when the terminal device performs shading processing on the graphic corresponding to it.
In one possible implementation, the first shader source code may be marked with an identifier; the terminal device obtains the first shader source code through the identifier and performs centralized precompilation on the first shader source code marked with the identifier. In another possible implementation, the first shader source code is stored under a target path, and the storage paths of the first shader source code and the second shader source code are different; the terminal device may read the first shader source code from the target path and the second shader source code from the native path. The embodiment of the application does not limit the method for distinguishing the first shader source code from the second shader source code.
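Either distinguishing scheme is easy to sketch; the marker string and directory names below are hypothetical, since the patent does not specify them.

```python
PRECOMPILE_MARK = "//@precompile"       # hypothetical identifier on first shader source
TARGET_DIR = "shaders/precompile/"      # hypothetical target path for first shader source

def is_first_by_identifier(source_text: str) -> bool:
    """Scheme 1: the first shader source carries an identifier in its text."""
    return source_text.lstrip().startswith(PRECOMPILE_MARK)

def is_first_by_path(path: str) -> bool:
    """Scheme 2: the first shader source lives under a dedicated target path."""
    return path.startswith(TARGET_DIR)
```

Both checks let the pre-compilation module collect only the first shader source code for centralized precompilation while leaving the second shader source code on the conventional compile-on-use flow.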
According to the image processing method provided by the embodiment of the application, the terminal device acquires the target file of the target application, where the target file includes the first shader source code and the second shader source code, and the terminal device precompiles the first shader source code using the target file and does not precompile the second shader source code. In this way, when the terminal device shades the graphic of the first shader source code, it can call the first program, which reduces the time the terminal device spends compiling the first shader source code and reduces the probability of stuttering; when the terminal device shades the graphic of the second shader source code, the probability that the compilation flow of the second shader source code causes a stutter is also low. Furthermore, the terminal device precompiles only part of the shader source code, which reduces the amount of shader source code that needs precompilation and reduces the occupied resource space and power consumption.
Steps S601 to S602 are further described below in connection with steps S701 to S705. Fig. 7 is a schematic flowchart of an image processing method according to an embodiment of the present application, as shown in fig. 7:
S701, the terminal device downloads the target application and acquires a target file of the target application.
When the terminal device receives an operation for installing the target application, it starts downloading and installing the target application. The terminal device may acquire target files of the target application such as installation files, patch packages, and new-version files. The first shader source code and the second shader source code are pre-distinguished in these target files.
In some embodiments, the terminal device may store the acquired first shader source code and second shader source code under different storage paths; for example, the terminal device stores the first shader source code in a directory of the platform on which the pre-compiling module operates, and stores the second shader source code under other directories. The terminal device may also store the first shader source code and the second shader source code under the same path.
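As an illustrative sketch of the path-based distinction above (the directory names and the `is_first` flag are assumptions made for this example, not the embodiment's actual layout), the following Python code stores pre-distinguished shader sources under two directories and lets a pre-compiling module read only the target path:

```python
import os
import tempfile

def store_shader_sources(sources, target_path, original_path):
    """Store pre-distinguished shader sources under separate directories.
    `sources` maps a file name to (code, is_first); is_first marks shader
    source code intended for precompilation (hypothetical marker)."""
    os.makedirs(target_path, exist_ok=True)
    os.makedirs(original_path, exist_ok=True)
    for name, (code, is_first) in sources.items():
        directory = target_path if is_first else original_path
        with open(os.path.join(directory, name), "w") as f:
            f.write(code)

def load_first_shader_sources(target_path):
    """The pre-compiling module reads only from the target path."""
    result = {}
    for name in sorted(os.listdir(target_path)):
        with open(os.path.join(target_path, name)) as f:
            result[name] = f.read()
    return result

root = tempfile.mkdtemp()
target = os.path.join(root, "precompile")    # target path (first shaders)
original = os.path.join(root, "shaders")     # original path (second shaders)
store_shader_sources({"a.glsl": ("void main(){}", True),
                      "b.glsl": ("// second shader", False)},
                     target, original)
first = load_first_shader_sources(target)
second_files = sorted(os.listdir(original))
```

Keeping the two sets under distinct paths lets the pre-compiling module enumerate its inputs without inspecting any per-file marker.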
S702, the terminal device precompiles the first shader source code to obtain a first program.
The first shader source code is source code recognizable by the CPU; for example, it may be in OpenGL Shading Language (GLSL) format, High-Level Shading Language (HLSL) format, or the like. The first program is a program recognizable by the GPU, for example a binary program or a program in another GPU-recognizable format, and corresponds to the first shader source code. In one possible implementation, during the period in which the target application is installed, the terminal device may centrally precompile some or all of the first shader source code in the target file. When executing the rendering process, the GPU needs the first shader source code to be processed into a GPU-recognizable language before it can shade the graphics corresponding to the first shader source code. In the embodiment of the present application, the pre-compiling module executes the pre-compiling flow: after the target application is installed, the pre-compiling module obtains the first shader source code from its storage path and compiles it into the first program.
It will be appreciated that the process of precompiling the first shader source code is similar to the process of compiling the second shader source code. In general, however, the terminal device precompiles the first shader source code earlier than it compiles the second shader source code, which ensures that, while the target application runs, the first program can be called to complete the shading of the graphics without executing any operation of compiling the first shader source code.
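Step S702 can be sketched as a pure function from first shader source code to a keyed program table. The hash-derived key and the stand-in `fake_compile` are assumptions; a real implementation would invoke the GPU driver's compiler:

```python
import hashlib

def fake_compile(source: str) -> bytes:
    # Stand-in for the real GLSL/HLSL-to-GPU-binary compiler; the actual
    # driver interface is not modeled here.
    return b"BIN:" + source.encode("utf-8")

def precompile(first_shader_sources):
    """Centrally precompile first shader source code: each source is
    compiled once and stored under a tag derived from the source."""
    programs = {}
    for source in first_shader_sources:
        key = hashlib.sha256(source.encode("utf-8")).hexdigest()
        programs[key] = fake_compile(source)
    return programs

table = precompile(["void main(){}", "void main(){ /* blur */ }"])
key0 = hashlib.sha256("void main(){}".encode("utf-8")).hexdigest()
```

Deriving the tag from the source itself means a later lookup needs only the same derivation, not the source's identity.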
S703, the pre-compiling module of the terminal device stores the first program into a file system and/or a memory.
The file system may be used to store the first program and to retrieve the stored first program later; for example, the file system may reside on a magnetic disk, a solid-state disk, or the like. The memory may be used to store GPU programs; for example, the memory may be a program cache.
In some embodiments, when storing the first program, the pre-compiling module may set a tag for the first program that is associated with the first shader source code, such as a program ID or a key. Subsequently, when the GPU needs to render the graphics, the first program can be located based on the tag without accessing the first shader source code. The embodiment of the present application does not limit the method for establishing the association between the first shader source code and the first program.
It can be understood that, when the first program is stored in the program cache, the GPU may load the first program directly from the program cache when the terminal device uses it; when the first program is stored on the disk, the terminal device first loads the first program from the disk into the program cache, and the GPU then loads it from the program cache.
The embodiment of the present application provides an image processing method: the terminal device downloads the target application and acquires the target file of the target application; the pre-compiling module of the terminal device precompiles the first shader source code to obtain a first program; and the pre-compiling module stores the first program into the file system and/or the memory. In this way, the terminal device implements the pre-compiling flow of the first shader source code, the GPU can subsequently shade the graphics using the first program, and the compiling time of the first shader source code is shortened.
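The storage behavior of step S703 and the subsequent lookups can be modeled as a two-level store, with the program cache as memory and a dict standing in for the file system; this is a sketch, not the actual cache implementation:

```python
class ProgramStore:
    """Two-level store for compiled shader programs: a program cache
    (memory) backed by a file system (both modeled as dicts)."""

    def __init__(self):
        self.program_cache = {}   # memory: what the GPU loads from
        self.file_system = {}     # disk/SSD: persistent copy

    def save(self, key, program):
        # The pre-compiling module stores the program at both levels.
        self.file_system[key] = program
        self.program_cache[key] = program

    def load(self, key):
        # Hit in the program cache: the GPU can use the program directly.
        if key in self.program_cache:
            return self.program_cache[key]
        # Miss (e.g. the cache was cleared): reload from the file system
        # into the program cache, then hand the program to the GPU.
        if key in self.file_system:
            self.program_cache[key] = self.file_system[key]
            return self.program_cache[key]
        return None

store = ProgramStore()
store.save("key1", b"prog1")
store.program_cache.clear()      # simulate the program cache being cleared
reloaded = store.load("key1")    # falls back to the file system
```

The fallback in `load` mirrors the disk-to-program-cache reload described above.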
Optionally, in order to reduce the pressure that the precompilation of shader source code places on the main thread of the application program, before step S702 in which the pre-compiling module of the terminal device precompiles the first shader source code to obtain the first program, the method further includes:
Illustratively, S705, the terminal device creates a precompiled thread, and the pre-compiling module runs on the precompiled thread.
The precompiled thread, which may be named Shader Compile Thread, is distinct from the main thread of the target application, and there may be one or more such threads. Through the precompiled threads, the terminal device can execute the pre-compiling flow of the shader source code in a multi-task, distributed, parallel manner, which improves the efficiency with which the terminal device precompiles the shader source code. Moreover, because the precompiled thread handling the pre-compiling flow is an independent thread, the pre-compiling module does not occupy the resources of the main thread of the target application while the pre-compiling flow executes.
In one possible implementation, the terminal device may create the precompiled thread while downloading the target application and acquiring its target file; for example, when the terminal device is downloading the target application and the CPU is in a good operating condition, the terminal device may create the precompiled thread and precompile the first shader source code into the first program.
In another possible implementation, the terminal device may create the precompiled thread only after the target application has been downloaded and its target file acquired; for example, when the terminal device downloads the target application while the CPU is in a poor operating condition, the terminal device may refrain from creating the precompiled thread. Once the operating condition of the CPU improves, the terminal device creates the precompiled thread and precompiles the first shader source code into the first program.
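A minimal sketch of step S705, running the pre-compiling work on a thread separate from the application's main thread; the thread name and the toy compile function are assumptions:

```python
import threading

def precompile_on_thread(sources, compile_fn, programs):
    """Start a dedicated precompiled thread (distinct from the target
    application's main thread) that compiles every given source."""
    def worker():
        for key, src in sources.items():
            programs[key] = compile_fn(src)
    t = threading.Thread(target=worker, name="ShaderCompileThread")
    t.start()
    return t

programs = {}
t = precompile_on_thread({"blur": "src_a", "glow": "src_b"},
                         lambda s: s.upper(), programs)
# The main thread is free to do other work here before joining.
t.join()
```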
Illustratively, the determination of when to create the precompiled thread may be as shown in step S706:
S706, the terminal device acquires the operating parameters of the CPU; the operating parameters of the CPU include the number of processes running on the CPU and/or the operating frequency of the CPU.
The operating parameters of the CPU reflect its operating condition: the number of processes running on the CPU reflects the load of the system, and the operating frequency of the CPU reflects its operating speed.
S707, when the operating parameters of the CPU meet a preset condition, the terminal device may execute step S702.
Illustratively, the preset condition includes at least one of the following: the number of processes running on the CPU is lower than a preset process-number threshold; and/or the operating frequency of the CPU is higher than a preset operating-frequency threshold.
If the number of processes running on the CPU is lower than the preset process-number threshold, the CPU is handling few processes, and the precompiled thread in the embodiment of the present application will not occupy the resources of other running processes. If the operating frequency of the CPU is higher than the preset operating-frequency threshold, the CPU is running fast enough to support the precompiled thread.
In one possible implementation, the terminal device may set the priority of the precompiled thread; for example, when the terminal device allows the precompiled thread to run, its priority may be set to the same priority as the threads running on the CPU. In this way, the terminal device can process the precompiled thread in parallel with the other threads.
It can be appreciated that, in the embodiment of the present application, other operating parameters of the terminal may also serve as the criterion for determining whether the terminal device runs the precompiled thread; this is not limited in the embodiment of the present application.
S708, when the operating parameters of the CPU do not meet the preset condition, the terminal device lowers the priority of the precompiled thread so that the pre-compiling module delays the precompilation of the first shader source code.
For example, when the number of processes running on the CPU is higher than or equal to the preset process-number threshold and/or the operating frequency of the CPU is lower than or equal to the preset operating-frequency threshold, the terminal device sets the priority of the precompiled thread lower than the priority of the processes running on the CPU.
If the number of processes running on the CPU is greater than or equal to the preset process-number threshold, the CPU is handling many processes, and the precompiled thread in the embodiment of the present application could occupy the resources of other running processes. If the operating frequency of the CPU is lower than or equal to the preset operating-frequency threshold, the CPU load may be large, or the operating frequency may be limited by a rise in CPU temperature. At this point, the terminal device may choose not to run the precompiled thread, or to run only a portion of the precompiled threads.
Because the terminal device sets the priority of the precompiled thread lower than that of the processes running on the CPU, the CPU executes the other processes first; once those processes finish and the CPU load is small, the terminal device runs the precompiled thread.
According to the image processing method provided by the embodiment of the present application, the terminal device acquires the operating parameters of the CPU; when the operating parameters of the CPU meet the preset condition, the terminal device runs the precompiled thread; when they do not, the terminal device lowers the priority of the precompiled thread so that the pre-compiling module delays the precompilation of the first shader source code. In this way, the terminal device avoids increasing the system load while executing the pre-compiling flow.
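Steps S706 to S708 amount to a priority decision driven by the CPU's operating parameters. The sketch below models the preset condition as "few running processes AND high operating frequency"; the embodiment also allows either criterion alone, and all thresholds and priority values here are invented for illustration:

```python
def precompiled_thread_priority(num_processes, freq_mhz,
                                process_threshold=100,
                                freq_threshold_mhz=1500,
                                normal_priority=5,
                                lowered_priority=1):
    """Return the priority to assign to the precompiled thread based on
    the CPU's operating parameters (all numbers are assumptions)."""
    if num_processes < process_threshold and freq_mhz > freq_threshold_mhz:
        return normal_priority    # S707: run the precompilation normally
    return lowered_priority       # S708: defer behind running processes

light_load = precompiled_thread_priority(10, 2000)   # few processes, fast CPU
many_procs = precompiled_thread_priority(200, 2000)  # heavy process load
throttled = precompiled_thread_priority(10, 800)     # thermally limited CPU
```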
In some embodiments, the terminal device may use the precompiled first program in the process of starting and running the target application. The following describes, with reference to fig. 8, the flow in which the GPU uses the first program corresponding to the first shader source code according to an embodiment of the present application.
In one possible implementation, when the terminal device has installed the target application and starts it, the GPU needs to render the graphics in the target application. The graphics may include graphics corresponding to the first shader source code and graphics corresponding to the second shader source code. When a graphic corresponds to the first shader source code, the terminal device may execute the flow shown in fig. 8:
S801, the terminal device starts the target application.
S802, the target application of the terminal device issues a drawing instruction.
The drawing instruction may be a DrawOp instruction. The target application of the terminal device issues the drawing instruction of the graphic to the HWUI module. It can be appreciated that, when the shader source code of the graphic to be rendered by the GPU is the first shader source code, the drawing instruction of the graphic may be converted into the corresponding first shader source code by the HWUI.
S803, the rendering engine of the terminal device searches the memory, according to the drawing instruction, for the first program corresponding to the first shader source code.
The rendering engine may be a SKIA module. After the HWUI module converts the drawing instruction into the first shader source code, it sends the first shader source code to the SKIA module. The SKIA module may look up the first program in the program cache based on a tag (e.g., a key value) of the first shader source code.
S804, if the first program is stored in the memory, the rendering engine reports the first program to the GPU.
It will be appreciated that the terminal device stores the first program in the memory after precompiling the first shader source code, so the rendering engine can find the first program in the memory and report it to the GPU.
S805, the GPU of the terminal equipment colors the graph based on the first program.
Alternatively, if the first program is not stored in the memory, the rendering engine searches the file system for the first program corresponding to the first shader source code and reports the first program to the GPU.
In some scenarios, the terminal device may have cleared the cached content in the program cache of the GPU. In that case, the rendering engine cannot find the first program in the memory, and the terminal device continues to look for the first program in the file system. After the rendering engine finds the first program, step S805 is performed.
In one possible implementation, the SKIA module cannot read the first program from the file system directly. When the target application of the terminal device issues a drawing instruction, the terminal device loads the first program from the file system into a block cache, for example a Blob cache. The SKIA module searches the Blob cache for the first program; because the pre-compiling module stored the first program to the file system after precompiling the first shader source code, the SKIA module can find the first program in the Blob cache. After obtaining the first program, the SKIA module loads it from the Blob cache into the program cache and reports the first program in the program cache to the GPU.
The embodiment of the present application provides an image processing method: when the GPU renders the graphics corresponding to the first shader source code, the rendering engine looks up the first program in the memory and/or the file system and reports the first program to the GPU; the GPU then renders the graphics based on the first program. This saves the time of compiling the first shader source code and shortens the rendering time of the GPU.
In another possible implementation, when the terminal device has installed the target application and starts it, the GPU needs to render the graphics in the target application. The graphics may include graphics corresponding to the first shader source code and graphics corresponding to the second shader source code. When a graphic corresponds to the second shader source code, the terminal device may execute the flow shown in fig. 9:
S901, the terminal device starts the target application.
S902, the target application of the terminal device issues a drawing instruction.
The drawing instruction may be a DrawOp instruction. The target application of the terminal device issues the drawing instruction of the graphic to the HWUI module. It can be appreciated that, when the shader source code of the graphic to be rendered by the GPU is the second shader source code, the drawing instruction of the graphic may be converted into the corresponding second shader source code by the HWUI.
S903, the rendering engine of the terminal device, searching according to the drawing instruction, does not find a second program corresponding to the second shader source code in the memory.
The rendering engine may be a SKIA module. After the HWUI module converts the drawing instruction into the second shader source code, it sends the second shader source code to the SKIA module. The SKIA module may look up the second program in the program cache based on a tag (e.g., a key value) of the second shader source code.
It will be appreciated that, because the terminal device does not precompile the second shader source code, the rendering engine cannot find a second program corresponding to the second shader source code in the memory.
S904, the rendering engine does not find a second program corresponding to the second shader source code in the file system.
In one possible implementation, the SKIA module cannot read programs from the file system directly. When the target application of the terminal device issues a drawing instruction, the terminal device loads the programs in the file system into a block cache, for example a Blob cache. The SKIA module searches the Blob cache for the second program; because the pre-compiling module did not precompile the second shader source code, no second program is stored in the file system at this time, and the lookup fails.
S905, the terminal equipment compiles the second shader source code and stores the second program into a file system and a memory.
The terminal device compiles the second shader source code into a binary second program, and stores the second program on the disk and in the program cache.
S906, the rendering engine reports the second program to the GPU.
S907, the GPU of the terminal equipment colors the graph based on the second program.
The embodiment of the present application provides an image processing method: when the GPU renders the graphics corresponding to the second shader source code, the terminal device compiles the second shader source code to obtain a second program, and the GPU renders the graphics based on the second program. Because the probability that the compiling process of the second shader source code causes the terminal device to stutter is low, the user experience is not affected.
It should be noted that steps S801 to S805 and steps S901 to S907 describe the image processing method of the embodiment of the present application using, as an example, the first start of the target application after the terminal device installs it. The embodiment of the present application is equally applicable to other scenarios, for example: the terminal device renders a newly drawn page for the first time after installing the target application, where color gamut changes, rounded-corner changes, and alpha changes appear in the page; or the terminal device has installed and used the target application, and the caches related to the target application are then cleared. The applicable scenarios are not listed exhaustively in the embodiment of the present application.
It will be appreciated that the embodiments shown in fig. 8 and fig. 9 describe the flow in which the terminal device uses a precompiled program, taking the first shader source code and the second shader source code respectively as examples. During actual operation of the terminal device, the drawing instruction issued by the target application may not specifically distinguish the first shader source code from the second shader source code; instead, the terminal device searches the storage space for a precompiled program according to the drawing instruction. Exemplarily, as shown in fig. 10:
S1001, the terminal device starts the target application.
S1002, the target application issues a drawing instruction to the HWUI module.
S1003, the HWUI converts the drawing instruction into third shader source code and synchronizes the third shader source code to the rendering engine. The third shader source code includes the first shader source code and/or the second shader source code.
The third shader source code is a shader source code analyzed by the terminal equipment according to the drawing instruction, and the third shader source code can be the first shader source code and/or the second shader source code.
S1004, the rendering engine searches the memory for a precompiled program corresponding to the third shader source code.
S1005, if the memory contains the program, the rendering engine reports the program in the memory to the GPU.
S1006, otherwise, if the memory does not contain the program, the rendering engine searches the file system for the program.
S1007, if the file system contains the program, the rendering engine reports the program in the file system to the GPU.
In one possible implementation, the third shader source code may be shader source code (the first shader source code) having a probability higher than a preset value, and since the precompiled thread precompiled the third shader source code and stores the program of the third shader source code in the file system and/or the memory, the rendering engine may find the program of the third shader source code in the memory and/or the file system. The rendering engine reports the programs obtained from the memory or file system to the GPU.
S1008, or, if the file system does not contain the program, the terminal device compiles the third shader source code into a program and stores the program into the memory and/or the file system; the rendering engine reports the program to the GPU.
In one possible implementation, the third shader source code may be shader source code (second shader source code) having a probability below a preset value, and since the precompiled thread does not precompiled the third shader source code, the rendering engine may not find programs of the third shader source code in memory and/or in the file system. The terminal device performs a conventional shader source code compilation process to compile the third shader source code into a program, as shown in steps S207-S208.
S1009, the GPU colors the graphics based on the program.
The execution principle of steps S1001 to S1009 is similar to that of fig. 8 and fig. 9; for details of steps S1001 to S1009, refer to the embodiments shown in fig. 8 and fig. 9, which are not repeated here.
The embodiment of the present application provides an image processing method: the terminal device starts the target application; the target application issues a drawing instruction to the HWUI; the HWUI converts the drawing instruction into third shader source code and synchronizes it to the rendering engine, where the third shader source code includes the first shader source code and/or the second shader source code; the rendering engine searches the memory for a precompiled program corresponding to the third shader source code; if the memory contains the program, the rendering engine reports the program in the memory to the GPU; if not, the rendering engine searches the file system for the program; if the file system contains the program, the rendering engine reports the program in the file system to the GPU; otherwise, the terminal device compiles the third shader source code into a program, stores the program into the memory and/or the file system, and the rendering engine reports the program to the GPU. In this way, when the third shader source code is the first shader source code, the time of compiling it is saved and the rendering time of the GPU is shortened; when the third shader source code is the second shader source code, the probability that its compiling process causes the terminal device to stutter is low, and the user experience is not affected.
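The unified flow of steps S1004 to S1009 can be condensed into a single lookup function: program cache first, then file system, then compile on demand and persist. The names and the compile counter are illustrative:

```python
def get_program(source, memory, file_system, compile_fn, stats):
    """Fig. 10 lookup chain, sketched: `memory` models the program cache
    and `file_system` models the on-disk program store."""
    if source in memory:               # S1005: hit in the program cache
        return memory[source]
    if source in file_system:          # S1006/S1007: hit in the file system
        memory[source] = file_system[source]
        return memory[source]
    stats["compiled"] = stats.get("compiled", 0) + 1
    program = compile_fn(source)       # compile the third shader source
    memory[source] = program           # ...and persist at both levels
    file_system[source] = program
    return program

mem, fs, stats = {}, {"precompiled_src": b"P1"}, {}
hit = get_program("precompiled_src", mem, fs, lambda s: s.encode(), stats)
miss = get_program("second_src", mem, fs, lambda s: s.encode(), stats)
again = get_program("second_src", mem, fs, lambda s: s.encode(), stats)
```

Note that a non-precompiled source is compiled only once; the second request is served from the program cache.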
The foregoing embodiment describes a pre-compiling mechanism in the terminal device, and a process of intelligently screening the first shader source code by the electronic device on the development side is described below with reference to fig. 11 to 12.
Fig. 11 is a schematic flow chart of an image processing method according to an embodiment of the present application, where the schematic flow chart is shown in fig. 11:
S1101, the electronic device acquires the shader source code of the target application.
The electronic device may be a device on the development side, for example a personal computer, a cell phone, a tablet, or the like. The target application may be an application to be published to a server; it may be an unpublished application or an application that has been published but needs to be updated.
In developing the target application, the electronic device may use a command-line tool to obtain the shader source code of the target application.
S1102, the electronic equipment screens the first shader source codes from the shader source codes.
The number of the first shader source codes is smaller than the number of the shader source codes; the first shader source code includes: shader source code with probability greater than a preset value, the probability is related to one or more of the following feature information: scene data, graphics, dynamic effect data of the graphics, time length required for compiling the shader source code and space occupation of the shader source code related to the running process of the target application.
The electronic device may calculate a probability based on the one or more feature information, and screen out a shader source code having a probability greater than a preset value, to obtain a first shader source code. After the electronic equipment screens out the first shader source codes from the shader source codes, the rest shader source codes can be second shader source codes; and/or the electronic equipment calculates the probability based on the one or more items of characteristic information, and screens out the shader source codes with the probability smaller than or equal to a preset value to obtain a second shader source code.
It can be appreciated that a probability of the first shader source code higher than the preset value indicates that a scene in which the compiling time of the shader source code is too long is likely to occur while the graphics are shaded using the first shader source code. The electronic device therefore screens out the first shader source code. After the terminal device subsequently installs the target application, it can compile the first shader source code in advance to obtain the first program; when the terminal device shades graphics using the first shader source code, it can fetch the already compiled first program without executing the compiling flow that turns the first shader source code into the first program, which reduces the time consumed in compiling the first shader source code.
It should be noted that, the embodiment of the present application exemplarily describes part of feature information related to probability, and the feature information may also be characterized by other parameters, for example, complexity of shader source codes. The probability may be calculated using some or all of the above-described feature information in embodiments of the present application, which are not limited in this regard.
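As one hypothetical way to combine the listed feature information into a probability (the logistic form, the weights, and the threshold below are invented for illustration and are not the regression analysis of the embodiment):

```python
import math

def shader_probability(features, weights=None):
    """Toy score mapping feature information to a value in (0, 1)."""
    if weights is None:
        weights = {"compile_ms": 0.01, "size_kb": 0.005,
                   "calls_per_day": 0.002, "has_animation": 0.5}
    z = sum(weights.get(k, 0.0) * v for k, v in features.items()) - 2.0
    return 1.0 / (1.0 + math.exp(-z))

def screen_first_shaders(shaders, preset_value=0.5):
    """Split shaders into first (precompile) and second (do not)."""
    first, second = [], []
    for name, feats in shaders.items():
        (first if shader_probability(feats) > preset_value
         else second).append(name)
    return first, second

heavy = {"compile_ms": 500, "size_kb": 200,
         "calls_per_day": 1000, "has_animation": 1}
tiny = {"compile_ms": 10, "size_kb": 5,
        "calls_per_day": 1, "has_animation": 0}
first, second = screen_first_shaders({"heavy": heavy, "tiny": tiny})
```

A slow-to-compile, frequently used, animated shader scores above the preset value and is selected for precompilation; a trivial one is not.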
S1103, the electronic device classifies and packages the first shader source code and the second shader source code into a target file of the target application; the second shader source code is the shader source code of the target application other than the first shader source code.
The target file may be an installation file of the target application, or may be a patch package, a new version file, etc. of the target application. The target file of the target application comprises a first shader source code and a second shader source code. In the process of packaging and publishing the target application, the electronic equipment can package the target file carrying the first shader source code and the second shader source code; and uploading the packaged target file to a server which can be accessed and downloaded by the terminal equipment of the user for release, so that the terminal equipment can download the target application.
Classified packaging can be understood as follows: in the process of packaging the target application, the electronic device can distinguish the shader source code through configuration files, identification information, path information, and the like, thereby classifying the shader source code into the first shader source code and the second shader source code. The embodiment of the present application does not limit the method by which the electronic device classifies and packages the first shader source code and the second shader source code.
In some possible implementations, as shown in fig. 12, the electronic device may store the first shader source code in a preset file library, and when the electronic device first issues the target application, the electronic device may package an installation package of the target application and a configuration file (profile) of the first shader source code together, and upload the package and the configuration file to the cloud server; the terminal equipment downloads a target application from a cloud server and acquires an installation package of the target application and a configuration file of a first shader source code; and the terminal equipment feeds back the installation information of the target application to the cloud server.
In some cases, the target application may need to be updated, or the electronic device may update the set of shader source code whose probability is higher than the preset value. The electronic device may then screen out fourth shader source code, whose probability is higher than the preset value and which differs from the first shader source code. When the electronic device determines that the fourth shader source code is absent from the preset file library, it may store the fourth shader source code in the preset file library and upload a configuration file (optimized profile) of the fourth shader source code to the cloud server, which synchronizes the fourth shader source code to the terminal device.
Thus, the electronic equipment can update the first shader source code, and accuracy of the scheme is improved.
According to the image processing method provided by the embodiment of the present application, the electronic device acquires the shader source code of the target application, screens the first shader source code out of the shader source code, and classifies and packages the first shader source code and the second shader source code into the target file of the target application. In this way, after the target application is published, the terminal device downloads and installs the target application and obtains the first shader source code. The terminal device does not need to precompile all the shader source code, which reduces the occupied resource space and reduces the compiling time during use of the first shader source code.
The following describes, with reference to fig. 13, the process by which the electronic device intelligently screens out the first shader source code according to an embodiment of the present application. As shown in fig. 13:
S1301: the electronic device acquires the shader source code in the target application.
Step S1301 may refer to the related description at step S1101, and will not be described here again.
S1302: the electronic device identifies the characteristic information of the shader source code.
The characteristic information of the shader source code includes at least one of: scene data related to the running process of the target application, the graphic corresponding to the shader source code, dynamic effect data of the graphic, the duration required to compile the shader source code, the space occupation of the shader source code, and the number of times the shader source code is called within a preset duration.
For example, the scene data related to the running process of the target application may relate to the following scenes: a scene of receiving a click operation, a scene of receiving a slide operation, a scene of starting the target application, and/or a scene of exiting the target application, etc. The graphic corresponding to the shader source code may be the shape of the graphic, for example: circles (DrawCircle), rectangles (DrawRect), points (DrawPoints), lines (DrawLine), etc. The dynamic effect data of the graphic may be animator data relating to the scaling, transparency, translation and/or rotation effects of the graphic, etc. The duration required to compile the shader source code characterizes how long it takes to compile the shader source code into a binary program. The space occupation of the shader source code may be the file size of the shader source code. The number of times the shader source code is called within the preset duration may be a statistical count of how often the shader source code is used over a period of time.
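As an illustrative sketch only, the categories of characteristic information listed above could be recorded per shader source as a simple record; all field names and values here are hypothetical assumptions, not part of the original scheme:

```python
# Hypothetical record of the characteristic information of one shader
# source, mirroring the categories described above. All field names
# and values are illustrative assumptions.
shader_a_features = {
    "scene_start": True,        # used in the target-application start scene
    "scene_exit": False,        # not used in the exit scene
    "shape": "DrawRRect",       # graphic drawn: a rounded rectangle
    "effects": ["translate"],   # dynamic-effect (animator) data
    "compile_ms": 18,           # duration to compile into a binary program
    "size_kb": 300,             # space occupation of the source code
    "calls_per_kh": 20,         # calls per thousand hours
}
```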
S1303: the electronic device assigns values to the characteristic information of the shader source code.
The electronic device encodes the characteristic information of the shader source code and assigns values to the characteristic information, obtaining numerical characteristic information.
In one possible implementation, for discrete characteristic information, a Hash algorithm or a One-Hot encoding algorithm may be used for assignment. For example, the discrete characteristic information may include: scene data related to the running process of the target application, the graphic corresponding to the shader source code, and the dynamic effect data of the graphic. After assignment, this part of the characteristic information can be represented in binary form as 0 or 1.
For example, the electronic device may assign a value to the scene data of the shader source code based on whether the shader source code is used in the scene related to the running process of the target application. When the shader source code is needed in a scene related to the running process of the target application, the assignment of that scene data includes 1; when the shader source code is not needed, the assignment includes 0. The embodiments of the application describe discrete characteristic information in the form of 0 or 1; other numerical values may also be used for characterization.
Specifically, for example: the target application uses shader source code A in the starting process, so the electronic device assigns the scene data of the target application starting process for shader source code A the value 1. The target application does not use shader source code A in the exiting process, so the electronic device assigns the scene data of the target application exiting process for shader source code A the value 0.
For example, the electronic device may assign a value to a graphic based on whether the graphic corresponds to the shader source code. For example: the assignment of a graphic drawn by the first shader source code includes 1; the assignment of a graphic not drawn by the first shader source code includes 0. The graphic drawn by shader source code A is a rounded rectangle, so DrawRRect is assigned the value 1 and DrawCircle is assigned the value 0.
For example, the electronic device may assign a value to the dynamic effect data of a graphic based on whether the dynamic effect corresponds to the graphic. For example: the assignment of dynamic effect data for a graphic with a dynamic effect includes 1; the dynamic effect data for a graphic without that dynamic effect includes 0. The dynamic effect corresponding to shader source code A is a translation, so the translation effect is assigned the value 1 and the scaling effect is assigned the value 0.
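The discrete assignments above can be sketched as a simple one-hot encoding; the vocabularies and helper function below are assumptions for illustration, not the patent's actual encoding:

```python
# Illustrative vocabularies; the real scheme's categories may differ.
SCENES = ["start", "exit", "click", "slide"]
SHAPES = ["DrawCircle", "DrawRRect", "DrawPoints", "DrawLine"]
EFFECTS = ["scale", "alpha", "translate", "rotate"]

def one_hot(used, vocabulary):
    # Assign 1 to every vocabulary entry the shader source uses, 0 otherwise.
    return [1 if item in used else 0 for item in vocabulary]

# Shader source A: used in the start scene, draws a rounded rectangle,
# and has a translation dynamic effect.
vec = (one_hot({"start"}, SCENES)
       + one_hot({"DrawRRect"}, SHAPES)
       + one_hot({"translate"}, EFFECTS))
# vec -> [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0]
```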
For continuous characteristic information, the electronic device may assign the true value or a normalized value. The continuous characteristic information may include: the duration required to compile the shader source code, the space occupation of the shader source code, and the number of times the shader source code is called within the preset duration.
For example: the assignment of the duration required to compile the shader source code includes that duration; the assignment of the space occupation of the shader source code includes that space occupation; and the assignment of the number of calls within the preset duration includes that number of calls.
In one possible implementation, the electronic device assigns the true value to the characteristic information of the shader source code. Illustratively, the duration required to compile the shader source code is a, and the electronic device assigns that duration the value a; the space occupation of the shader source code is b, and the electronic device assigns it the value b; the number of times the shader source code is called within the preset duration is c, and the electronic device assigns it the value c.
For example: the duration required to compile shader source code A is 18 ms; the space occupation of shader source code A is 300 k; and shader source code A is called 20 times per thousand hours within the preset duration.
In another possible implementation, the electronic device normalizes the true values of the characteristic information of the shader source code and assigns the normalized values. It will be appreciated that, in some embodiments, when the electronic device recognizes that a true value of the characteristic information is large, assigning the true value directly would lengthen the model-based computation over that characteristic information. Normalizing the true values of the characteristic information can also improve the accuracy of the computation result. The embodiments of the application do not limit the method used for normalization.
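One common normalization that could be used here is min-max scaling; the sketch below assumes that method purely for illustration, since the embodiment does not fix a particular one:

```python
def normalize(values):
    # Min-max normalize the true values of one continuous feature so
    # that the model inputs fall in [0, 1].
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

compile_ms = [18, 20]           # true compile durations for sources A and B
scaled = normalize(compile_ms)  # [0.0, 1.0]
```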
Table 1 illustrates part of the above characteristic information. As shown in table 1:
TABLE 1 Assigned characteristic information of shader source code
Shader source code | Start scene | Slide scene | DrawRRect | DrawCircle | Compile duration (ms) | Space occupation (k) | Calls per thousand hours
---|---|---|---|---|---|---|---
A | 1 | 0 | 1 | 0 | 18 | 300 | 20
B | 0 | 1 | 0 | 1 | 20 | 100 | 300
For example: table 1 may represent: the shader source A can be applied to a target application starting process, the shape of a graph which can be drawn by the shader source A is a rounded rectangle, the time required by the shader source A to be compiled into a program is 18ms, the space occupation of the shader source A is 300k, and the number of times of calling the shader source A every thousand hours is 20.
Table 1 may also represent: the shader source B can be applied to the sliding process of the target application, the shape of the graphic which can be drawn by the shader source B is circular, the time required for compiling the shader source B into a program is 20ms, the space occupation of the shader source B is 100k, and the number of times of calling the shader source B every thousand hours is 300.
It will be appreciated that table 1 takes shader source code in the same target application as an example; when multiple target applications are to be published, the names of the multiple target applications may also be included in table 1. The above table only exemplarily illustrates a method of assigning values to the characteristic information of the shader source code in the embodiments of the application; other characteristic information and other methods may also be used for assignment, which is not limited in the embodiments of the application.
S1304: the electronic device performs regression analysis on the assigned characteristic information of the shader source code to obtain the probability of the shader source code.
The electronic device substitutes the assigned characteristic information of the shader source code into a logistic regression model to obtain the probability of the shader source code. The probability of the shader source code may satisfy the following formula:
y = 1 / (1 + e^(-(W^T x + b)))    (1)
wherein x is the assigned characteristic information of the shader source code; y is the probability of the shader source code; W^T is the weight coefficient corresponding to the characteristic information of the shader source code; and b is the offset.
The probability of the shader source code may also satisfy the following formula:
y = 1 / (1 + e^(-(w^T x + x^T W x)))    (2)
wherein x is the assigned characteristic information of the shader source code, and x^T is the transpose of x; y is the probability of the shader source code; w^T is the weight coefficient corresponding to the characteristic information of the shader source code when formula (2) is used; and W is the weight coefficient corresponding to the combined characteristic information obtained by combining x with x^T.
Other algorithms may also be used in embodiments of the application to calculate the probability, for example: the logistic regression (LR) algorithm, the factorization machine (FM) algorithm, the field-aware factorization machine (FFM) algorithm, the wide & deep learning (WDL) algorithm, the deep factorization machine (DeepFM) algorithm, the deep & cross network (DCN) algorithm, the extreme deep factorization machine (xDeepFM) algorithm, and the like. The specific algorithm is not limited in the embodiments of the application.
It will be appreciated that W^T may be a preset value; the electronic device may set the weight coefficient for each item of characteristic information of the shader source code according to the degree to which that item contributes to target-application stuttering during shader source code compilation. For example, in the embodiments of the application, the scene data related to the running process of the target application may be emphasized, so that the weight coefficient of that scene data may be higher than the weight coefficients of the other characteristic information.
The embodiments of the application do not limit the method of setting W^T; the methods and data for statistics and analysis are not unique, so the embodiments of the application do not limit how the weight coefficients of the characteristic information are set.
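Formula (1) can be sketched as follows; the weights, offset, and feature vector are toy values chosen only to illustrate the heavier weighting of scene data suggested above, not values from the original scheme:

```python
import math

def shader_probability(x, w, b):
    # Formula (1): logistic regression over the assigned characteristic
    # information x, with weight coefficients w and offset b.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy feature vector: [start scene, exit scene, DrawRRect,
#                      DrawCircle, normalized compile duration, call rate]
x = [1, 0, 1, 0, 1.0, 0.02]
# Assumed weights: the scene-data entries are weighted more heavily.
w = [2.0, 2.0, 0.5, 0.5, 0.6, 1.0]
b = -1.0
p = shader_probability(x, w, b)  # a probability in (0, 1)
```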
S1305: the electronic device screens out the shader source code with a probability higher than the preset value to obtain the first shader source code.
The electronic device may calculate, based on steps S1302-S1304, the probability of any shader source code acquired in step S1301. The electronic device may then filter the shader source code according to whether its probability exceeds the preset value.
In one possible implementation, the electronic device may rank the probabilities of the plurality of shader source codes; the electronic device screens out the shader source code whose probability is higher than the preset value to obtain the first shader source code; and the electronic device removes the first shader source code from the shader source code to obtain the second shader source code.
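The ranking and screening of step S1305 can be sketched as below; the function name, data, and threshold are illustrative assumptions:

```python
def screen(probabilities, preset_value):
    # Rank shader sources by probability, keep those above the preset
    # value as the first shader source code, and treat the remainder as
    # the second shader source code.
    ranked = sorted(probabilities.items(), key=lambda kv: kv[1], reverse=True)
    first = [name for name, p in ranked if p > preset_value]
    second = [name for name, p in ranked if p <= preset_value]
    return first, second

probs = {"A": 0.89, "B": 0.95, "C": 0.12}
first, second = screen(probs, preset_value=0.5)
# first -> ["B", "A"]; second -> ["C"]
```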
According to the image processing method provided by the embodiment of the application, the electronic device acquires the shader source code in the target application; the electronic device identifies the characteristic information of the shader source code; the electronic device assigns values to the characteristic information of the shader source code; the electronic device performs regression analysis on the assigned characteristic information to obtain the probability of the shader source code; and the electronic device screens out the shader source code with a probability higher than the preset value to obtain the first shader source code. In this way, the electronic device can accurately predict the probability of the shader source code from the relevant information of the shader source code.
The above embodiments respectively introduce the processes of the electronic device intelligently screening the first shader source code, the terminal device precompiling the first shader source code, and the terminal device using the precompiled first shader source code. The overall flow of the target application from development to use is described below in conjunction with fig. 14, as shown in fig. 14:
When the electronic device develops the target application, it acquires the shader source code in the target application through a command line tool and, using the method shown in steps S1302-S1305, intelligently screens out from the shader source code the first shader source code whose probability is greater than the preset value. The electronic device then packages the target file of the target application, which includes the first shader source code, and publishes it to the server.
The terminal device may download the target application from the server. During or after installation of the target application, the terminal device acquires the first shader source code and precompiles it into a first program; the terminal device stores the first program in the memory and/or the file system. When the terminal device needs to use the first shader source code, it may obtain the first program from the memory and send the first program to the GPU.
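The run-time lookup described above (memory first, then the file system, then compilation as a last resort) can be sketched as a two-level cache; everything here, including the stand-in compiler, is a hypothetical illustration rather than the terminal device's actual implementation:

```python
memory_cache = {}   # stands in for GPU program memory
file_system = {}    # stands in for the persisted program files

def compile_shader(source):
    # Stand-in for the real shader compiler.
    return ("binary", source)

def get_program(source):
    if source in memory_cache:        # fastest path: program already in memory
        return memory_cache[source]
    if source in file_system:         # second path: load the persisted binary
        memory_cache[source] = file_system[source]
        return memory_cache[source]
    program = compile_shader(source)  # slow path: compile on demand
    memory_cache[source] = program
    file_system[source] = program
    return program

# Precompiling the first shader source at install time...
file_system["first_shader"] = compile_shader("first_shader")
# ...turns its first run-time use into a lookup instead of a compile.
prog = get_program("first_shader")
```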
Taking the terminal device starting the target application and exiting the target application as examples: in these two scenes, the terminal device can reduce the probability of a stutter scene caused by compiling shader source code; the probability can be reduced by 66.6%, a significant effect.
The embodiments of the application describe the image processing method using a GPU rendering scene as an example. The image processing method provided by the embodiments of the application can also be applied to a GPU composition scene; the principles of the two schemes are similar, so the image processing method in the GPU composition scene is not repeated here.
The image processing method according to the embodiment of the present application has been described above, and an apparatus for performing the image processing method according to the embodiment of the present application is described below. It will be appreciated by those skilled in the art that the methods and apparatus may be combined and referred to, and that the related apparatus provided in the embodiments of the present application may perform the steps in the image processing method described above.
As shown in fig. 15, fig. 15 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application, where the image processing apparatus may be a terminal device in the embodiment of the present application, or may be a chip or a chip system in the terminal device.
As shown in fig. 15, the image processing apparatus 1500 may be used in a communication device, a circuit, a hardware component, or a chip, and includes: a display unit 1501, and a processing unit 1502. Wherein the display unit 1501 is used for supporting the steps of display performed by the image processing apparatus 1500; the processing unit 1502 is used to support the image processing apparatus 1500 to execute steps of information processing.
In a possible implementation, the image processing apparatus 1500 may also include a communication unit 1503. Specifically, the communication unit is configured to support the image processing apparatus 1500 to perform the steps of transmitting data and receiving data. The communication unit 1503 may be an input or output interface, a pin, or a circuit, among others.
In a possible embodiment, the image processing apparatus may further include: a storage unit 1504. The processing unit 1502 and the storage unit 1504 are connected by a line. The storage unit 1504 may include one or more memories, which may be one or more devices, devices in a circuit for storing programs or data. The storage unit 1504 may exist independently and is connected to the processing unit 1502 provided in the image processing apparatus through a communication line. The storage unit 1504 may also be integrated with the processing unit 1502.
The storage unit 1504 may store computer-executed instructions of the method in the terminal device to cause the processing unit 1502 to execute the method in the above-described embodiment. The storage unit 1504 may be a register, a cache, a RAM, or the like, and the storage unit 1504 may be integrated with the processing unit 1502. The storage unit 1504 may be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, and the storage unit 1504 may be independent of the processing unit 1502.
The image processing method provided by the embodiment of the application can be applied to the electronic equipment with the communication function. The electronic device includes a terminal device, and specific device forms and the like of the terminal device may refer to the above related descriptions, which are not repeated herein.
An embodiment of the application provides a terminal device, comprising: a processor and a memory; the memory stores computer-executable instructions; the processor executes the computer-executable instructions stored in the memory to cause the terminal device to perform the method described above.
The embodiment of the application provides a chip. The chip comprises a processor for invoking a computer program in a memory to perform the technical solutions in the above embodiments. The principle and technical effects of the present application are similar to those of the above-described related embodiments, and will not be described in detail herein.
The embodiment of the application also provides a computer readable storage medium. The computer-readable storage medium stores a computer program. The computer program realizes the above method when being executed by a processor. The methods described in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer readable media can include computer storage media and communication media and can include any medium that can transfer a computer program from one place to another. The storage media may be any target media that is accessible by a computer.
In one possible implementation, the computer readable medium may include RAM, ROM, compact disc read-only memory (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can carry or store the desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Embodiments of the present application provide a computer program product comprising a computer program which, when executed, causes a computer to perform the above-described method.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing detailed description of the application has been presented for purposes of illustration and description, and it should be understood that the foregoing is by way of illustration and description only, and is not intended to limit the scope of the application.
Claims (14)
1. An image processing method, comprising:
the method comprises the steps that terminal equipment obtains a target file of a target application, wherein the target file comprises first shader source codes and second shader source codes, the first shader source codes comprise shader source codes with probability larger than a preset value, and the probability is related to one or more of the following characteristic information: scene data, graphics, dynamic effect data of the graphics, time length required by compiling the shader source codes and space occupation of the shader source codes related to the running process of the target application;
the terminal device precompiled the first shader source code with the target file and not precompiled the second shader source code.
2. The method of claim 1, wherein the probability is derived based on a valuation of the characteristic information, and wherein the valuation of the target application run-related scene data includes 1 when the shader source code is required in the target application run-related scene data; when the shader source code is not needed in the scene data related to the target application running process, assigning the scene data related to the target application running process comprises 0; the graphic assignment drawn by the first shader source code includes 1; the graph assignment that the first shader source code does not draw includes 0; the dynamic effect data assignment of the graph with dynamic effect comprises 1; the dynamic effect data of the graph without dynamic effect includes 0; compiling an assignment of a required duration of the shader source code to include the required duration; the assignment of the space occupation of the shader source code includes the space occupation.
3. The method of claim 2, wherein before the terminal device precompiles the first shader source code using the target file, the method comprises:
the terminal device creates a precompiled thread; the precompiled thread is different from a main thread in the running of the target application;
the terminal device acquires the operating parameters of a central processing unit (CPU); the operating parameters of the CPU include at least one of: the number of processes running in the CPU and/or the working frequency of the CPU;
the terminal device precompiles the first shader source code using the target file, including:
when the terminal equipment determines that the running parameters of the CPU meet preset conditions, the terminal equipment precompiles the first shader source codes in the precompilation thread to obtain a first program corresponding to the first shader source codes; wherein the first program comprises a program recognizable by a graphics processor GPU; the preset conditions include at least one of the following: the number of processes in the running process of the CPU is lower than a preset process number threshold; and/or the working frequency of the CPU is higher than a preset working frequency threshold value.
4. A method according to claim 3, characterized in that after the terminal device has acquired the operating parameters of the central processing unit CPU, it further comprises:
If the running parameters of the CPU do not meet the preset conditions, the terminal equipment reduces the priority of the precompiled thread, so that the terminal equipment delays precompiled the first shader source codes.
5. The method of claim 3, after the precompiling module precompiles the first shader source code, comprising:
the pre-compiling module stores the first program into a file system and/or a memory; the memory includes GPU program memory.
6. The method as recited in claim 5, further comprising:
the terminal equipment starts the target application;
the target application issues a drawing instruction to a hardware-accelerated drawing user interface HWUI;
the HWUI converts the drawing instruction into third shader source code and synchronizes the third shader source code to a rendering engine; the third shader source code includes the first shader source code and/or the second shader source code;
the rendering engine searches in a memory for whether a precompiled program of the third shader source code exists;
if the program is included in the memory, the rendering engine reports the program in the memory to the GPU.
7. The method of claim 6, wherein after the rendering engine searches in the memory for whether the precompiled program of the third shader source code exists, the method further comprises:
if the memory does not contain the program, the rendering engine searches the file system for the program;
if the file system comprises the program, the rendering engine reports the program in the file system to the GPU; or if the file system does not include the program, the terminal device compiles the third shader source code into the program and stores the program in the memory and/or the file system; the rendering engine reports the program to the GPU.
8. An image processing method, comprising:
the electronic equipment acquires a shader source code of a target application;
the electronic equipment screens the shader source codes to obtain first shader source codes; the number of first shader source code is less than the number of shader source code, the first shader source code comprising: shader source code having a probability greater than a preset value, the probability being related to one or more of the following characteristic information: scene data, graphics, dynamic effect data of the graphics, time length required by compiling the shader source codes and space occupation of the shader source codes related to the running process of the target application;
the electronic device classifies and packages the first shader source code and the second shader source code into a target file of the target application; the second shader source code comprises the shader source code of the target application from which the first shader source code has been removed.
9. The method of claim 8, wherein the electronic device screening the shader source code to obtain the first shader source code comprises:
the electronic equipment identifies characteristic information of the shader source code;
the electronic device assigns values to the characteristic information of the shader source code;
the electronic equipment carries out regression analysis on the assigned characteristic information of the shader source codes to obtain the probability of the shader source codes;
and the electronic equipment screens out the shader source codes with the probability higher than a preset value to obtain the first shader source codes.
10. The method of claim 9, wherein when the shader source code is required in the target application run-related scene data, the assignment of the target application run-related scene data includes 1; when the shader source code is not needed in the scene data related to the target application running process, assigning the scene data related to the target application running process comprises 0; the graphic assignment drawn by the first shader source code includes 1; the graph assignment that the first shader source code does not draw includes 0; the dynamic effect data assignment of the graph with dynamic effect comprises 1; the dynamic effect data of the graph without dynamic effect includes 0; compiling an assignment of a required duration of the shader source code to include the required duration; the assignment of the space occupation of the shader source code includes the space occupation.
11. The method according to any one of claims 8-10, wherein the probability satisfies the following formula:
y = 1 / (1 + e^(-(W^T x + b)))
wherein y is the probability; x is the assigned characteristic information of the shader source code; W^T is the weight coefficient corresponding to the characteristic information of the shader source code; and b is the offset.
12. A terminal device, comprising: a processor and a memory;
the memory stores computer-executable instructions;
the processor executing computer-executable instructions stored in the memory to cause the terminal device to perform the method of any one of claims 1-7 or 8-11.
13. A computer readable storage medium storing a computer program, which when executed by a processor implements the method of any one of claims 1-7 or 8-11.
14. A computer program product comprising a computer program which, when run, causes a computer to perform the method of any of claims 1-7 or 8-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310145970.XA CN117152320B (en) | 2023-02-15 | 2023-02-15 | Image processing method and electronic device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117152320A true CN117152320A (en) | 2023-12-01 |
CN117152320B CN117152320B (en) | 2024-08-06 |
Family
ID=88903348
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310145970.XA Active CN117152320B (en) | 2023-02-15 | 2023-02-15 | Image processing method and electronic device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117152320B (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB201506372D0 (en) * | 2015-04-15 | 2015-05-27 | Fabius Aidan | Method and systems for generating shaders to emulate a fixed-function graphics pipeline |
US20190362459A1 (en) * | 2018-05-24 | 2019-11-28 | Canon Kabushiki Kaisha | Information processing device and method of controlling same, and non-transitory computer readable medium |
CN110609688A (en) * | 2019-09-19 | 2019-12-24 | 网易(杭州)网络有限公司 | Processing method and processing device of shader, storage medium and processor |
CN114527984A (en) * | 2020-11-23 | 2022-05-24 | 杭州海康威视数字技术股份有限公司 | Shader generation method and device, player and storage medium |
CN112381918A (en) * | 2020-12-03 | 2021-02-19 | 腾讯科技(深圳)有限公司 | Image rendering method and device, computer equipment and storage medium |
US20230033306A1 (en) * | 2020-12-03 | 2023-02-02 | Tencent Technology (Shenzhen) Company Limited | Image rendering method and apparatus, computer device, and storage medium |
CN112882694A (en) * | 2021-01-29 | 2021-06-01 | 中国建设银行股份有限公司 | Program compiling method and device, electronic equipment and readable storage medium |
CN113485709A (en) * | 2021-06-15 | 2021-10-08 | 荣耀终端有限公司 | Application optimization method and device and electronic equipment |
CN113971072A (en) * | 2021-11-15 | 2022-01-25 | 腾讯数码(天津)有限公司 | Information processing method, device, equipment, storage medium and computer program product |
CN114663272A (en) * | 2022-02-22 | 2022-06-24 | 荣耀终端有限公司 | Image processing method and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN117152320B (en) | 2024-08-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114669047B (en) | Image processing method, electronic equipment and storage medium | |
CN114979785B (en) | Video processing method, electronic device and storage medium | |
CN116028149B (en) | Window rendering method, system, device, storage medium and computer program product | |
CN113747199A (en) | Video editing method, video editing apparatus, electronic device, storage medium, and program product | |
CN113687816A (en) | Method and device for generating executable code of operator | |
CN113038141B (en) | Video frame processing method and electronic equipment | |
CN111031377B (en) | Mobile terminal and video production method | |
CN116700601B (en) | Memory optimization method, equipment and storage medium | |
CN112231029A (en) | Frame animation processing method applied to theme | |
CN117152320B (en) | Image processing method and electronic device | |
CN116089368B (en) | File searching method and related device | |
CN110659024A (en) | Graphic resource conversion method, apparatus, electronic device and storage medium | |
CN117290004A (en) | Component preview method and electronic equipment | |
CN113642010B (en) | Method for acquiring data of extended storage device and mobile terminal | |
CN113254132A (en) | Application display method and related device | |
CN113138815A (en) | Image processing method and device and terminal | |
CN116672707B (en) | Method and electronic device for generating game prediction frame | |
CN117689796B (en) | Rendering processing method and electronic equipment | |
CN116688494B (en) | Method and electronic device for generating game prediction frame | |
CN117971305B (en) | Upgrading method of operating system, server and electronic equipment | |
CN116708334B (en) | Notification message display method and electronic equipment | |
CN116743908B (en) | Wallpaper display method and related device | |
CN116089320B (en) | Garbage recycling method and related device | |
CN117813600A (en) | Application control method, device, electronic equipment and storage medium | |
CN116991532A (en) | Virtual machine window display method, electronic equipment and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||