WO2024083014A1 - Interface generation method and electronic device

Interface generation method and electronic device

Info

Publication number
WO2024083014A1
Authority
WO
WIPO (PCT)
Prior art keywords: rendering, electronic device, tree, instructions, task amount
Application number: PCT/CN2023/124034
Other languages: English (en), French (fr)
Inventors: 陈健, 李煜, 余谭其, 吉星春, 周耀颖, 王昕鹏
Original Assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2024083014A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 — Arrangements for executing specific programs
    • G06F 9/451 — Execution arrangements for user interfaces

Definitions

  • the present application relates to the field of electronic technology, and in particular to an interface generation method and an electronic device.
  • the resolution and refresh rate of the screens of electronic devices are getting higher and higher; the resolution of the screen determines the number of pixels contained in a frame of the interface, and the refresh rate determines the time available to generate a frame of the interface.
  • before the electronic device displays a first frame interface, it needs to expend computing resources to generate the first frame interface; before it displays a second frame interface, it needs to expend computing resources again to generate the second frame interface.
  • when the electronic device fails to generate the second frame interface in time, the content displayed on the screen freezes. To ensure that the second frame interface can be generated in time, the electronic device often increases the CPU operating frequency to improve its computing power, which raises the energy consumed to generate a frame of the interface and reduces the energy efficiency of interface generation.
  • the embodiment of the present application provides an interface generation method and an electronic device.
  • the interface generation method provided by the embodiments of the present application splits a rendering tree into multiple sub-rendering trees and then converts the multiple sub-rendering trees into rendering instructions in parallel, which reduces the time taken to convert the sub-rendering trees into rendering instructions and hence the time taken to generate an interface, avoiding freezes, frame drops, and the like in the interface.
  • in a first aspect, an embodiment of the present application provides an interface generation method, applied to an electronic device running a first application, the method comprising: the electronic device generates a first rendering tree, the first rendering tree comprising drawing operations for generating a frame interface of the first application; the electronic device splits the first rendering tree to obtain N sub-rendering trees, where N is greater than 1; the electronic device converts the N sub-rendering trees into first rendering instructions in parallel, where a rendering instruction is an instruction in a rendering engine, an image processing library, or a GPU driver; the electronic device generates a frame interface of the first application based on the first rendering instructions.
  • the rendering tree is split into multiple sub-rendering trees, and then the multiple sub-rendering trees are converted into rendering instructions in parallel, which reduces the time it takes to convert the sub-rendering trees into rendering instructions, thereby reducing the time it takes to generate an interface and avoiding freezes, frame drops, etc. in the interface.
  • the electronic device splits the first rendering tree to obtain N sub-rendering trees, specifically including: the electronic device determines a first task amount, the first task amount is the task amount of the first rendering tree, and the first task amount is used to indicate the time consumption or calculation amount of converting the first rendering tree into the first rendering instruction; in response to the electronic device determining that the first task amount is greater than a first threshold, the electronic device splits the first rendering tree to obtain the N sub-rendering trees.
  • in this way, when the task amount of the rendering tree is greater than the threshold, the rendering tree is split into multiple sub-rendering trees to reduce the conversion time; when the task amount is not greater than the threshold, the rendering tree may not be split.
  • after the electronic device determines the first task amount, the method further includes: the electronic device determines M based on the first task amount and the first threshold, where M is less than or equal to N and M is an integer greater than or equal to the ratio of the first task amount to the first threshold; the electronic device determines an integer greater than or equal to M as N.
  • in this way, splitting the rendering tree may first determine the minimum number of sub-rendering trees that need to be split out, and then determine how to split the rendering tree; the number of sub-rendering trees may be determined by the threshold and the task amount of the rendering tree, as in the sketch below.
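A minimal sketch of this step, assuming the task amount and the threshold share a unit (e.g. estimated microseconds of conversion work); the function and variable names are illustrative, not taken from the patent:

```cpp
#include <cmath>

// Derive the number of sub-rendering trees N from the rendering tree's
// task amount and a threshold, as described above.
int ComputeSubTreeCount(double firstTaskAmount, double firstThreshold) {
    // M is the smallest integer >= firstTaskAmount / firstThreshold; any
    // N >= M keeps each sub-tree's task amount under the threshold if the
    // split is balanced (total / N <= total / M <= threshold).
    int m = static_cast<int>(std::ceil(firstTaskAmount / firstThreshold));
    return m > 1 ? m : 1;  // no split is needed when the ratio is at most 1
}
```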
  • the electronic device determines the first task amount, specifically including: the electronic device determines the first task amount by determining the task amount of drawing operations in the first rendering tree.
  • the first task amount may be determined in a variety of ways.
  • the N sub-rendering trees include a second rendering tree and a third rendering tree, and a difference between the second task amount and the third task amount is less than a difference threshold, where the second task amount is the task amount of the second rendering tree and is used to measure the time consumption or amount of calculation for converting the second rendering tree into rendering instructions, and the third task amount is the task amount of the third rendering tree and is used to measure the time consumption or amount of calculation for converting the third rendering tree into rendering instructions.
  • the sub-rendering trees can be further split so that the task amount of each sub-rendering tree is less than a threshold.
  • the electronic device splits the first rendering tree to obtain N sub-rendering trees, specifically including: the electronic device determines that a root rendering node of the first rendering tree has N child nodes, the child nodes being rendering nodes directly connected to the root rendering node; the electronic device splits the first rendering tree into the N sub-rendering trees.
  • the rendering tree may be split according to its data structure to obtain multiple rendering sub-trees.
  • the electronic device splits the first rendering tree to obtain N sub-rendering trees, specifically including: the electronic device divides the interface of the first application into N areas; the electronic device splits the first rendering tree based on the N areas to obtain N sub-rendering trees, and the N sub-rendering trees correspond one-to-one to the N areas.
  • the interface may be divided into N regions first, and then the rendering tree may be split according to the regions to obtain N sub-rendering trees.
  • the electronic device splits the first rendering tree to obtain N sub-rendering trees, specifically including: the electronic device determines a first task amount, the first task amount is the task amount of the first rendering tree, the first task amount is used to measure the time or calculation amount of the first rendering tree being converted into rendering instructions, and the first task amount is greater than a first threshold; the electronic device determines that the root rendering node of the first rendering tree has K child nodes, and K is less than N; the electronic device splits the first rendering tree into K sub-rendering trees; after the electronic device determines that the task amount of the fourth rendering tree is greater than the first threshold, the electronic device splits the fourth rendering tree to obtain N-K+1 rendering sub-trees, the K sub-rendering trees include the fourth rendering tree, and the task amounts of the N rendering sub-trees are all less than the first threshold.
  • in this way, the rendering tree and its sub-trees can be split until the task amount of each sub-rendering tree is less than the threshold, ensuring that the time for converting any sub-rendering tree into rendering instructions does not time out and the interface is generated in time; a recursive sketch of this splitting follows.
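A minimal recursive sketch of such splitting, under the simplifying assumptions that each node's task parameter already holds the task amount of the sub-tree rooted at it, and with all structure and function names invented for illustration:

```cpp
#include <memory>
#include <vector>

struct RenderNode {
    double taskAmount = 0.0;  // task amount of the sub-tree rooted at this node
    std::vector<std::shared_ptr<RenderNode>> children;
};

// Collects sub-rendering trees whose task amounts are below the threshold,
// splitting oversized sub-trees again along their children.
void SplitUnderThreshold(const std::shared_ptr<RenderNode>& root,
                         double threshold,
                         std::vector<std::shared_ptr<RenderNode>>& out) {
    if (root->taskAmount <= threshold || root->children.empty()) {
        out.push_back(root);  // small enough, or a leaf that cannot be split
        return;
    }
    // NOTE: a real implementation would also assign the root's own drawing
    // operations to one of the resulting sub-trees; omitted here.
    for (const auto& child : root->children) {
        SplitUnderThreshold(child, threshold, out);
    }
}
```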
  • the electronic device converts the N sub-rendering trees into a first rendering instruction in parallel, specifically including: the electronic device fills the instructions converted from the N sub-rendering trees into N buffers through N threads respectively; the electronic device submits the instructions of the N buffers to a first buffer, and the instructions in the first buffer are the first rendering instructions.
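A minimal sketch of this fan-out/fan-in, assuming a placeholder instruction type and a stubbed per-sub-tree conversion routine (neither is from the patent): N threads each fill their own buffer, and the buffers are then submitted to one buffer in sub-tree order.

```cpp
#include <string>
#include <thread>
#include <vector>

using InstructionBuffer = std::vector<std::string>;  // stand-in for GPU/engine commands

// Hypothetical conversion routine: walks one sub-tree and emits instructions.
InstructionBuffer ConvertSubTree(int subTreeIndex) {
    return {"instructions-for-subtree-" + std::to_string(subTreeIndex)};
}

InstructionBuffer ConvertInParallel(int n) {
    std::vector<InstructionBuffer> buffers(n);  // one buffer per thread
    std::vector<std::thread> threads;
    threads.reserve(n);
    for (int i = 0; i < n; ++i) {
        threads.emplace_back([i, &buffers] { buffers[i] = ConvertSubTree(i); });
    }
    for (auto& t : threads) t.join();

    // Submit the N buffers into the first buffer, preserving sub-tree order
    // so the original drawing order is kept.
    InstructionBuffer first;
    for (auto& buf : buffers) {
        first.insert(first.end(), buf.begin(), buf.end());
    }
    return first;
}
```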
  • in a second aspect, an embodiment of the present application provides an interface generation method, applied to an electronic device running a first application, the method comprising: the electronic device generates a first rendering tree through a first process, the first rendering tree including drawing operations for generating a frame interface of the first process; the electronic device splits the first rendering tree into a second rendering tree and a third rendering tree, the second rendering tree including some of the drawing operations in the first rendering tree, the third rendering tree including some of the drawing operations in the first rendering tree, and the second rendering tree being different from the third rendering tree; the electronic device converts the second rendering tree into a first rendering instruction through a first thread, the first rendering instruction is stored in a first buffer, and a rendering instruction is an instruction in a rendering engine, an image processing library, or a GPU driver; the electronic device converts the third rendering tree into a second rendering instruction through a second thread, and the second rendering instruction is stored in a second buffer; the electronic device generates a frame interface of the first process based on the first rendering instruction and the second rendering instruction.
  • the electronic device can split the rendering tree generated by the process into multiple rendering trees, generate multiple groups of rendering instructions through different threads, and finally generate a frame of interface based on the multiple groups of rendering instructions. Since the threads are parallel, the time required to convert the rendering tree into rendering instructions can be reduced.
  • the electronic device determines a first task amount, where the first task amount is the task amount of the first rendering tree, and the first task amount is used to measure the time or computational amount of converting the first rendering tree into rendering instructions, and the first task amount is greater than a first threshold.
  • in this way, when the task amount of the rendering tree is greater than the threshold, the rendering tree is split into multiple sub-rendering trees to reduce the conversion time; when the task amount is not greater than the threshold, the rendering tree may not be split.
  • the electronic device generates a frame interface of the first process based on the first rendering instruction and the second rendering instruction, specifically including: the first rendering instruction is located in a first buffer held by the first thread, the second rendering instruction is located in a second buffer held by the second thread, and the electronic device submits the rendering instructions in the first buffer and the rendering instructions in the second buffer to a third buffer; the electronic device generates a frame interface of the first process based on the third buffer.
  • the electronic device may submit rendering instructions in multiple buffers to one buffer, thereby driving the GPU to generate an interface.
  • the third buffer is the second buffer, or the third buffer is the first buffer.
  • in a third aspect, an embodiment of the present application provides an interface generation method, applied to an electronic device running a first application, the method comprising: the electronic device generates a first rendering tree, the first rendering tree including drawing operations for generating a frame interface of the first application; the electronic device determines a first task amount, the first task amount being the task amount of the first rendering tree and used to measure the time or calculation amount of converting the first rendering tree into rendering instructions, where a rendering instruction is an instruction in a rendering engine, an image processing library, or a GPU driver; if the first task amount is greater than a first threshold, the electronic device changes the CPU operating frequency from a first frequency to a second frequency, the second frequency being higher than the first frequency; the electronic device generates a frame interface of the first application based on the first rendering tree; during the process of generating the frame interface of the first application, the CPU operates at the second frequency.
  • in this way, the electronic device can decide whether to adjust the CPU frequency according to the relationship between the task amount of the rendering tree and the threshold: if the task amount of the rendering tree is greater than the threshold, the CPU operates at a higher frequency so that the rendering tree can be converted into rendering instructions in time, as in the sketch below.
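A minimal sketch of such frequency selection, assuming a Linux cpufreq userspace governor; the sysfs path, required privileges, and kHz values are illustrative assumptions, and a real device would typically act through the kernel governor or vendor interfaces instead:

```cpp
#include <fstream>

// Picks the higher frequency only when the rendering tree's task amount
// exceeds the threshold, as described above.
void ConfigureCpuFrequency(double taskAmount, double threshold,
                           long firstKhz, long secondKhz) {
    long targetKhz = (taskAmount > threshold) ? secondKhz : firstKhz;
    // Requires the "userspace" governor and sufficient privileges.
    std::ofstream f("/sys/devices/system/cpu/cpufreq/policy0/scaling_setspeed");
    if (f) {
        f << targetKhz;
    }
}
```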
  • the electronic device generates a frame interface of the first application based on the first rendering tree, specifically including: the electronic device splits the first rendering tree to obtain N sub-rendering trees, where N is an integer greater than 1; the electronic device converts the N sub-rendering trees into first rendering instructions in parallel; the electronic device generates a frame interface of the first application based on the first rendering instructions.
  • the electronic device may further split the rendering tree into multiple sub-rendering trees, and convert the multiple sub-rendering trees into rendering instructions in parallel to reduce the duration of converting the rendering tree into rendering instructions.
  • in a fourth aspect, an embodiment of the present application provides an interface generation method, applied to an electronic device running a first application, the method comprising: the electronic device generates a first rendering tree, the first rendering tree comprising drawing operations for generating a frame interface of the first application; the electronic device traverses different parts of the first rendering tree through multiple different threads to generate a first rendering instruction, where a rendering instruction is an instruction in a rendering engine, an image processing library, or a GPU driver; the electronic device generates a frame interface of the first application based on the first rendering instruction.
  • the electronic device traverses the rendering tree in different orders and converts the rendering tree into rendering instructions in parallel through multiple threads, thereby reducing the latency of converting the rendering tree into rendering instructions and ensuring that the electronic device can generate an interface in a timely manner.
  • before the electronic device traverses the first rendering tree in different orders through multiple different threads to generate the first rendering instruction, the method further includes: the electronic device determines a first task amount, the first task amount being the task amount of the first rendering tree and used to measure the time or computational amount of converting the first rendering tree into the first rendering instruction; the electronic device determines that the first task amount is greater than a first threshold.
  • the rendering tree is converted into rendering instructions in parallel through multiple threads, thereby reducing the conversion time.
  • the electronic device traverses the first rendering tree in different orders through multiple different threads to generate the first rendering instruction, specifically including: the electronic device traverses the first part of the first rendering tree through the first thread and saves the generated second rendering instruction in the first buffer; the electronic device traverses the second part of the first rendering tree through the second thread and saves the generated third rendering instruction in the second buffer; the electronic device submits the rendering instructions in the first buffer and the rendering instructions in the second buffer to the third buffer to obtain the first rendering instruction.
  • rendering instructions generated by conversion of different threads may be located in different buffers, and finally the rendering instructions in different buffers are submitted to the same buffer, thereby driving the GPU to generate an interface.
  • the electronic device submits the rendering instructions in the first buffer and the rendering instructions in the second buffer to the third buffer to obtain the first rendering instruction, specifically including: the rendering instructions in the first buffer include second rendering instructions and third rendering instructions, and the instructions in the second buffer include fourth rendering instructions; the instructions in the third buffer are arranged in the following order: second rendering instructions, fourth rendering instructions, and third rendering instructions.
  • in this way, the order of the rendering instructions can be adjusted, thereby restoring the dependencies between the rendering nodes, as in the sketch below.
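A minimal sketch of this reordering, assuming buffer 1 holds the second and third rendering instructions (a parent's work split around a child's sub-tree) and buffer 2 holds the fourth (the child's work); all types are placeholders:

```cpp
#include <string>
#include <vector>

using Instr = std::string;  // stand-in for a rendering instruction

// Submits the instructions to the third buffer in the order
// second, fourth, third, restoring the parent/child drawing dependency.
std::vector<Instr> MergeRestoringOrder(const std::vector<Instr>& second,
                                       const std::vector<Instr>& fourth,
                                       const std::vector<Instr>& third) {
    std::vector<Instr> merged;
    merged.reserve(second.size() + fourth.size() + third.size());
    merged.insert(merged.end(), second.begin(), second.end());
    merged.insert(merged.end(), fourth.begin(), fourth.end());
    merged.insert(merged.end(), third.begin(), third.end());
    return merged;
}
```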
  • in a fifth aspect, an embodiment of the present application provides an electronic device, comprising: one or more processors and a memory; the memory is coupled to the one or more processors, the memory is used to store computer program code, the computer program code includes computer instructions, and the one or more processors call the computer instructions to cause the electronic device to execute: the electronic device generates a first rendering tree, the first rendering tree including drawing operations for generating a frame interface of the first application; the electronic device splits the first rendering tree to obtain N sub-rendering trees, where N is greater than 1; the electronic device converts the N sub-rendering trees into first rendering instructions in parallel, where a rendering instruction is an instruction in a rendering engine, an image processing library, or a GPU driver; the electronic device generates a frame interface of the first application based on the first rendering instructions.
  • the one or more processors are specifically used to call the computer instruction to enable the electronic device to execute: the electronic device determines a first task amount, the first task amount is the task amount of the first rendering tree, and the first task amount is used to indicate the time consumption or calculation amount of converting the first rendering tree into the first rendering instruction; in response to the electronic device determining that the first task amount is greater than a first threshold, the electronic device splits the first rendering tree to obtain the N sub-rendering trees.
  • the one or more processors are also used to call the computer instructions to cause the electronic device to execute: the electronic device determines M based on the first task amount and the first threshold, and M is less than or equal to N, and M is an integer greater than or equal to the ratio of the first task amount to the first threshold; the electronic device determines the integer greater than or equal to M as N.
  • the one or more processors are specifically used to call the computer instruction to cause the electronic device to execute: the electronic device determines the first task amount by determining the task amount of the drawing operations in the first rendering tree.
  • the N sub-rendering trees include a second rendering tree and a third rendering tree, and a difference between the second task amount and the third task amount is less than a difference threshold, where the second task amount is the task amount of the second rendering tree and is used to measure the time consumption or amount of calculation for converting the second rendering tree into rendering instructions, and the third task amount is the task amount of the third rendering tree and is used to measure the time consumption or amount of calculation for converting the third rendering tree into rendering instructions.
  • the one or more processors are specifically used to call the computer instructions to cause the electronic device to execute: the electronic device determines that the root rendering node of the first rendering tree has N child nodes, the child nodes being rendering nodes directly connected to the root rendering node; the electronic device splits the first rendering tree into N sub-rendering trees.
  • the one or more processors are specifically used to call the computer instructions to cause the electronic device to execute: the electronic device divides the interface of the first application into N areas; the electronic device splits the first rendering tree based on the N areas to obtain N sub-rendering trees, and the N sub-rendering trees correspond one-to-one to the N areas.
  • the one or more processors are specifically used to call the computer instructions to enable the electronic device to execute: the electronic device determines a first task amount, the first task amount is the task amount of the first rendering tree, the first task amount is used to measure the time or calculation amount of the first rendering tree being converted into rendering instructions, and the first task amount is greater than a first threshold; the electronic device determines that the root rendering node of the first rendering tree has K child nodes, and K is less than N; the electronic device splits the first rendering tree into K sub-rendering trees; after the electronic device determines that the task amount of the fifth rendering tree is greater than the first threshold, the electronic device splits the fifth rendering tree to obtain N-K+1 rendering sub-trees, the K sub-rendering trees include the fifth rendering tree, and the task amounts of the N rendering sub-trees are all less than the first threshold.
  • the one or more processors are specifically used to call the computer instructions to enable the electronic device to execute: the electronic device fills the instructions converted from the N sub-rendering trees into N buffers through N threads respectively; the electronic device submits the instructions of the N buffers to a first buffer, and the instructions in the first buffer are the first rendering instructions.
  • in a sixth aspect, an embodiment of the present application provides an electronic device, comprising: one or more processors and a memory; the memory is coupled to the one or more processors, the memory is used to store computer program code, the computer program code includes computer instructions, and the one or more processors call the computer instructions to cause the electronic device to execute: the electronic device generates a first rendering tree through a first process, the first rendering tree including drawing operations for generating a frame interface of the first process; the electronic device splits the first rendering tree into a second rendering tree and a third rendering tree, the second rendering tree including part of the drawing operations in the first rendering tree, the third rendering tree including part of the drawing operations in the first rendering tree, and the second rendering tree being different from the third rendering tree; the electronic device converts the second rendering tree into a first rendering instruction through a first thread, the first rendering instruction is stored in a first buffer, and a rendering instruction is an instruction in a rendering engine, an image processing library, or a GPU driver; the electronic device converts the third rendering tree into a second rendering instruction through a second thread, and the second rendering instruction is stored in a second buffer; the electronic device generates a frame interface of the first process based on the first rendering instruction and the second rendering instruction.
  • the one or more processors are further used to call the computer instruction to cause the electronic device to execute: the electronic device determines a first task amount, the first task amount is the task amount of the first rendering tree, the first task amount is used to measure the time or computational amount of converting the first rendering tree into rendering instructions, and the first task amount is greater than a first threshold.
  • the one or more processors are also used to call the computer instructions to enable the electronic device to execute: the first rendering instruction is located in the first buffer held by the first thread, the second rendering instruction is located in the second buffer held by the second thread, and the electronic device submits the instructions in the first buffer and the rendering instructions in the second buffer to the third buffer; the electronic device generates a frame interface of the first process based on the third buffer.
  • the third buffer is the second buffer, or the third buffer is the first buffer.
  • in a seventh aspect, an embodiment of the present application provides an electronic device, comprising: one or more processors and a memory; the memory is coupled to the one or more processors, the memory is used to store computer program code, the computer program code includes computer instructions, and the one or more processors call the computer instructions to cause the electronic device to execute: the electronic device generates a first rendering tree, the first rendering tree including drawing operations for generating a frame interface of the first application; the electronic device determines a first task amount, the first task amount being the task amount of the first rendering tree and used to measure the time or calculation amount of converting the first rendering tree into rendering instructions, where a rendering instruction is an instruction in a rendering engine, an image processing library, or a GPU driver; if the first task amount is greater than a first threshold, the electronic device changes the CPU operating frequency from a first frequency to a second frequency, the second frequency being higher than the first frequency; the electronic device generates a frame interface of the first application based on the first rendering tree; during the process of generating the frame interface, the CPU operates at the second frequency.
  • the one or more processors are specifically configured to call the computer instructions to cause the electronic device to execute: the electronic device splits the first rendering tree to obtain N sub-rendering trees, where N is an integer greater than 1; the electronic device converts the N sub-rendering trees into first rendering instructions in parallel; the electronic device generates a frame interface of the first application based on the first rendering instructions.
  • in an eighth aspect, an embodiment of the present application provides an electronic device, comprising: one or more processors and a memory; the memory is coupled to the one or more processors, the memory is used to store computer program code, the computer program code includes computer instructions, and the one or more processors call the computer instructions to cause the electronic device to execute: the electronic device generates a first rendering tree, the first rendering tree including drawing operations for generating a frame interface of the first application; the electronic device traverses different parts of the first rendering tree through multiple different threads to generate a first rendering instruction, where a rendering instruction is an instruction in a rendering engine, an image processing library, or a GPU driver; the electronic device generates a frame interface of the first application based on the first rendering instruction.
  • the one or more processors are further used to call the computer instruction to cause the electronic device to execute: the electronic device determines a first task amount, the first task amount is the task amount of the first rendering tree, and the first task amount is used to measure the time or computational amount of converting the first rendering tree into the first rendering instruction; the electronic device determines that the first task amount is greater than a first threshold.
  • the one or more processors are specifically used to call the computer instructions to enable the electronic device to execute: the electronic device traverses the first part of the first rendering tree through a first thread, and saves the generated second rendering instructions in a first buffer; the electronic device traverses the second part of the first rendering tree through a second thread, and saves the generated third rendering instructions in a second buffer; the electronic device submits the rendering instructions in the first buffer and the rendering instructions in the second buffer to the third buffer to obtain the first rendering instructions.
  • the one or more processors are specifically used to call the computer instructions to cause the electronic device to execute: the rendering instructions of the first buffer include second rendering instructions and third rendering instructions, and the instructions of the second buffer include fourth rendering instructions; the instructions of the third buffer are arranged in the following order: second rendering instructions, fourth rendering instructions, and third rendering instructions.
  • in a ninth aspect, an embodiment of the present application provides a chip system, which is applied to an electronic device; the chip system includes one or more processors, which are used to call computer instructions so that the electronic device executes the method described in the first aspect, the second aspect, the third aspect, the fourth aspect, or any possible implementation of the first, second, third, or fourth aspect.
  • in a tenth aspect, an embodiment of the present application provides a computer program product comprising instructions.
  • when the computer program product runs on an electronic device, the electronic device executes the method described in the first aspect, the second aspect, the third aspect, the fourth aspect, or any possible implementation of the first, second, third, or fourth aspect.
  • in an eleventh aspect, an embodiment of the present application provides a computer-readable storage medium comprising instructions.
  • when the instructions run on an electronic device, the electronic device executes the method described in the first aspect, the second aspect, the third aspect, the fourth aspect, or any possible implementation of the first, second, third, or fourth aspect.
  • the electronic devices provided in the fifth, sixth, seventh, and eighth aspects, the chip system provided in the ninth aspect, the computer program product provided in the tenth aspect, and the computer storage medium provided in the eleventh aspect are all used to execute the methods provided in the embodiments of the present application; for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods, which are not repeated here.
  • FIG. 1 is an exemplary schematic diagram of an application generating a bitmap provided in an embodiment of the present application.
  • FIG. 2 is an exemplary schematic diagram of a process of generating an interface of an electronic device provided in an embodiment of the present application.
  • FIG. 3 is an exemplary schematic diagram of an interface generation method provided in an embodiment of the present application.
  • FIG. 4 is an exemplary schematic diagram of determining the task amount of a rendering tree provided in an embodiment of the present application.
  • FIG. 5A and FIG. 5B are exemplary schematic diagrams of splitting a rendering tree provided in an embodiment of the present application.
  • FIG. 6A and FIG. 6B are exemplary schematic diagrams of splitting a rendering tree provided in an embodiment of the present application.
  • FIG. 7 is another exemplary schematic diagram of splitting a rendering tree provided in an embodiment of the present application.
  • FIG. 8 is an exemplary schematic diagram of converting a rendering tree into GPU instructions in parallel provided in an embodiment of the present application.
  • FIG. 9A and FIG. 9B are another exemplary schematic diagram of a process of generating an interface of an electronic device provided in an embodiment of the present application.
  • FIG. 10 is another exemplary schematic diagram of the process of the interface generation method provided in an embodiment of the present application.
  • FIG. 11 is an exemplary schematic diagram of adjusting CPU computing power based on the task amount of a rendering tree provided in an embodiment of the present application.
  • FIG. 12 is another exemplary schematic diagram of the interface generation method provided in an embodiment of the present application.
  • FIG. 13 is an exemplary schematic diagram of a rendering thread traversing a rendering tree in different orders provided in an embodiment of the present application.
  • FIG. 14 is an exemplary schematic diagram of submitting GPU instructions to an instruction queue provided in an embodiment of the present application.
  • FIG. 15 is an exemplary schematic diagram of the hardware structure of an electronic device provided in an embodiment of the present application.
  • FIG. 16 is an exemplary schematic diagram of the software structure of an electronic device provided in an embodiment of the present application.
  • the terms "first" and "second" are used for descriptive purposes only and are not to be understood as suggesting or implying relative importance or implicitly indicating the number of indicated technical features.
  • a feature defined as "first" or "second" may explicitly or implicitly include one or more of the features; in the description of the embodiments of the present application, unless otherwise specified, "plurality" means two or more.
  • the graphical user interface (GUI) is the medium for interaction and information exchange between an application and the user; every time a vertical synchronization signal arrives, the electronic device needs to generate the interface of the foreground application.
  • the frequency of the vertical synchronization signal is related to the refresh rate of the screen of the electronic device. For example, the frequency of the vertical synchronization signal is the same as the refresh rate of the screen of the electronic device.
  • to generate the interface of an application, the electronic device requires the application to render and generate a bitmap by itself and pass the bitmap to the surface compositor (SurfaceFlinger); that is, the application, as a producer, performs drawing to generate the bitmap and stores it in the buffer queue (BufferQueue) provided by the surface compositor, while the surface compositor, as a consumer, continuously obtains the bitmaps generated by the application from the BufferQueue; the bitmap is located on a surface generated by the application, and the surface is filled into the BufferQueue.
  • after the surface compositor obtains the bitmaps of the visible applications, the surface compositor and the hardware compositing strategy module (HWC) determine how to composite the bitmaps as layers.
  • after the surface compositor and/or the hardware compositing strategy module performs bitmap composition, the composited bitmap is filled into the frame buffer and passed to the display subsystem (DSS); after getting the composited bitmap, the DSS can display it on the screen.
  • the frame buffer can be an on-screen buffer; a bitmap on the surface compositor can also be called a layer.
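A minimal sketch of the producer/consumer handoff described above, using a toy thread-safe queue; SurfaceFlinger's real BufferQueue API differs, and the type names here are placeholders:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <vector>

using Bitmap = std::vector<unsigned char>;  // placeholder pixel buffer

class BufferQueue {
    std::queue<Bitmap> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    // Producer side: the application queues a rendered bitmap.
    void Queue(Bitmap b) {
        std::lock_guard<std::mutex> lk(m_);
        q_.push(std::move(b));
        cv_.notify_one();
    }
    // Consumer side: the surface compositor acquires the next bitmap.
    Bitmap Acquire() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        Bitmap b = std::move(q_.front());
        q_.pop();
        return b;
    }
};
```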
  • FIG. 1 is an exemplary schematic diagram of an application generating a bitmap provided in an embodiment of the present application.
  • after receiving a vertical synchronization signal (Vsync), the application starts to generate a bitmap.
  • the specific process can be divided into three steps, namely step S101, step S102, and step S103.
  • S101: The main thread traverses the views of the application and saves the drawing operations of each view into a newly generated rendering tree.
  • the main thread invalidates the view hierarchy.
  • the UI thread traverses the views of the application through the measurement method call (measure()), layout method call (layout()), and drawing method call (draw()), determines and saves the drawing operation of each view, and records the view and the drawing operation involved in the view (such as drawline) into the drawing instruction list (displaylist) of the rendering node (RenderNode) of the rendering tree.
  • the data saved in the drawing instruction list can be a drawing operation structure (DrawOP or DrawListOP).
  • the view is the basic element that constitutes the application interface, and a control on the interface can correspond to one or more views.
  • within the drawing method call, the UI thread of the application also reads the content carried by the view into memory, for example, the image carried by an image view (imageview) or the text carried by a text view (textview).
  • the UI thread of the application determines the operation of reading the content carried by the view into the memory and records it in the drawing instruction list.
  • the drawing operation structure in the drawing instruction list can also be called a drawing instruction.
  • the drawing operation structure is a data structure used to draw graphics, such as drawing lines, drawing rectangles, drawing text, etc.
  • the drawing operation structure is converted by the rendering engine into API calls of the image processing library, i.e., interface calls in the OpenGLES library, the Vulkan library, or the Metal library.
  • drawline will be encapsulated as DrawLineOp, which is a data structure containing drawing data such as line length, width and other information.
  • DrawLineOp will be further encapsulated as interface calls in the OpenGLES library, Vulkan library, and Metal library, and then obtain GPU instructions.
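A minimal sketch of such a drawing operation structure, with illustrative fields; the real hwui/Skia record types differ:

```cpp
// A DrawLineOp-style record: a plain data structure that captures one
// drawing operation and its drawing data, to be replayed later as
// image-processing-library calls and ultimately GPU instructions.
struct DrawLineOp {
    float startX, startY;  // line start point
    float endX, endY;      // line end point
    float strokeWidth;     // line width
    unsigned int color;    // ARGB color
};
```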
  • the interface calls in the Skia library, the OpenGLES library, the Vulkan library, and/or the Metal library are uniformly referred to as rendering instructions; that is, the rendering tree is converted into rendering instructions by the rendering thread, and the rendering instructions are then further converted into GPU instructions that the GPU can recognize and process.
  • OpenGLES library, Vulkan library, and Metal library can be collectively referred to as image processing library or graphics rendering library.
  • in the process of generating a frame interface, the electronic device generates rendering instructions through the OpenGLES library, the Vulkan library, or the Metal library.
  • the image processing library provides API and driver support for graphics rendering.
  • DrawOP can be stored in the stack of the application in a chain data structure.
  • the drawing instruction list may be a buffer, which records all drawing operation structures or identifiers of all drawing operations included in one frame interface of the application, such as addresses, serial numbers, etc.
  • the rendering tree is a data structure generated by the UI thread and used to generate the application interface.
  • the rendering tree may include multiple rendering nodes, each of which includes rendering attributes and a drawing instruction list.
  • the rendering tree records part or all of the information for generating a frame of the application interface.
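A minimal sketch of this data structure, with assumed field names; each rendering node pairs rendering attributes with a drawing instruction list, and the root node plus its descendants form the rendering tree:

```cpp
#include <memory>
#include <vector>

struct DrawOp {        // drawing operation structure, e.g. a DrawLineOp
    int kind = 0;      // which drawing operation this record represents
};

struct RenderProperties {  // rendering attributes of a node
    float x = 0, y = 0, width = 0, height = 0;  // position and size
    float alpha = 1.0f;                          // transparency
};

struct RenderNode {
    RenderProperties properties;                        // rendering attributes
    std::vector<std::shared_ptr<DrawOp>> displayList;   // drawing instruction list
    std::vector<std::shared_ptr<RenderNode>> children;  // child rendering nodes
};
```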
  • the UI thread may only traverse the view of the dirty area (also referred to as the area that needs to be redrawn) to generate a differential rendering tree.
  • the rendering thread may determine the rendering tree to be used for the current frame interface rendering by differentially comparing the rendering tree with the rendering tree used for the previous frame rendering.
  • S102: The main thread synchronizes the rendering tree to the rendering thread, where the rendering tree is located in the stack of the application.
  • the UI thread passes/synchronizes the render tree to the render thread, where the render tree is located in the stack of the process corresponding to the application.
  • S103: The rendering thread executes the drawing instructions in the rendering tree to generate a bitmap.
  • the rendering thread first obtains a hardware canvas (HardwareCanvas) and performs drawing operations in the rendering tree on the hardware canvas to generate a bitmap.
  • the hardware canvas is located in the surface held by the application, and the surface carries bitmaps or other formats of data for storing image information.
  • the rendering thread sends the surface carrying the bitmap to the surface compositor.
  • the rendering thread sends the generated bitmap to the surface compositor through the surface to participate in layer synthesis.
  • Step S101 can be considered as the construction phase, which is mainly responsible for determining the size, position, transparency and other properties of each view in the application.
  • during the construction phase, the drawLine in the view can be encapsulated into a DrawLineOp, which contains drawing data such as the length and width of the line, and can also contain the interface call of the underlying graphics processing library corresponding to the DrawLineOp, which is used to call the underlying graphics library to generate a bitmap in the rendering phase.
  • step S103 can be considered as the rendering stage, which is mainly responsible for traversing the rendering nodes of the rendering tree and performing drawing operations on each rendering node, thereby generating a bitmap on the hardware canvas.
  • the rendering thread calls the underlying graphics processing library, such as the OpenGLES library, the Vulkan library, the Metal library, etc., and then calls the GPU to complete the rendering to generate a bitmap.
  • FIG. 2 is an exemplary schematic diagram of a process of generating an interface for an electronic device provided in an embodiment of the present application.
  • the process of generating the first frame interface corresponds to steps 1, 2, 3, 4, and 5 in FIG. 2; the process of generating the second frame interface corresponds to steps 6, 7, and 5 in FIG. 2.
  • the process of the electronic device generating the first frame interface includes: after receiving the vertical synchronization signal, the UI thread of the application generates a rendering tree, as shown in 1 in FIG. 2; after receiving the rendering tree, the rendering thread of the application converts the drawing instruction lists and rendering attributes in the rendering tree into GPU instructions, such as Vulkan library instructions, OpenGLES library instructions, Metal instructions, and other instructions that can be recognized and processed by the GPU driver, as shown in 2 in FIG. 2; after receiving the GPU instructions, the GPU or GPU driver generates a bitmap, as shown in 3 in FIG. 2;
  • the surface compositor and/or the hardware synthesis strategy module receives the bitmap and performs layer synthesis using the bitmap as a layer, as shown in 4 in Figure 2;
  • the display subsystem receives the synthesized bitmap from the surface compositor and/or the hardware synthesis strategy module, and then sends it for display, as shown in 5 in Figure 2.
  • the process of the electronic device generating the second frame interface includes: after the UI thread of the application receives the vertical synchronization signal, the UI thread of the application generates a rendering tree, as shown in 6 in Figure 2; after receiving the rendering tree, the rendering thread of the application needs to convert the drawing instruction list and rendering attributes in the rendering tree into GPU instructions, such as Vulkan library instructions, OpenGLES library instructions, Metal instructions and other instructions that can be recognized and processed by the GPU driver, as shown in 7 in Figure 2.
  • when the rendering thread fails to convert the rendering tree into GPU instructions in time, the bitmap cannot be passed to the surface compositor and/or the hardware compositing strategy module in time; after receiving the vertical synchronization signal, the surface compositor and/or the hardware compositing strategy module performs layer composition and then sends the result for display.
  • since the surface compositor and/or the hardware compositing strategy module does not receive the application's interface for the second frame, it uses the application's interface from the first frame as the application's interface in the second frame to perform layer composition and then sends it for display, as shown in 5 in FIG. 2.
  • the reason why the rendering thread cannot convert the rendering tree into GPU instructions in time can be summarized as: the mismatch between the load of the rendering tree corresponding to the interface of the application to be generated and the computing power of the electronic device.
  • the load of the rendering tree can be expressed in many ways, which are not limited here, for example, the amount of calculation of the rendering tree converted into GPU instructions, the memory usage of the rendering tree, etc.
  • the matching of the load of the rendering tree and the computing power of the electronic device means that when the computing power of the electronic device is within a certain range, the rendering tree can always be converted into GPU instructions within a preset time; on the contrary, when the time for the rendering tree to be converted into GPU instructions exceeds the preset time, the interface of the application will freeze or drop frames. At this time, the load of the rendering tree does not match the computing power of the electronic device.
  • the computing power of the electronic device may be increased by increasing the frequency of the CPU, so that the rendering thread can always convert the rendering tree into GPU instructions in a timely manner.
  • embodiments of the present application provide an interface generation method and an electronic device.
  • in the interface generation method provided in the embodiments of the present application, the electronic device can have a built-in load scoring model.
  • the UI thread or rendering thread or unified rendering process of the application can determine the load of the rendering tree converted into GPU instructions based on the load scoring model, and the electronic device further selects an appropriate CPU frequency based on the load.
  • the unified rendering process is a process independent of the application, which is used to receive the rendering tree generated by the UI threads of different applications.
  • the application and the unified rendering process complete data interaction through inter-process communication (IPC).
  • the interface generation method determines the load of the rendering tree converted into GPU instructions through a load scoring model, and then selects an appropriate CPU frequency so that the rendering thread or unified rendering process can execute the conversion of GPU instructions in a timely manner, while reducing the power consumption increase caused by increasing the CPU frequency.
  • the interface generation method provided in the embodiments of the present application can split the rendering tree of the application, and the multiple split rendering trees are converted into GPU instructions by different threads respectively.
  • the interface generation method provided in the embodiments of the present application may modify the order of traversing the rendering tree so that a rendering tree may be traversed simultaneously by multiple threads to generate GPU instructions.
  • the interface generation method provided in the embodiment of the present application reduces the time consumption of converting the rendering tree into GPU instructions through multi-threaded parallelism, thereby reducing the probability of interface freezes. Moreover, without considering the overhead of multi-threaded parallelism, if the low-frequency energy efficiency of the CPU of the electronic device is relatively high, the rendering tree can be converted into GPU instructions through multi-threaded parallelism after reducing the frequency, and the energy efficiency of generating a frame of interface can be improved without increasing the time consumption of generating a frame of interface.
  • FIG. 3 is an exemplary schematic diagram of the process of the interface generation method provided in an embodiment of the present application.
  • after receiving a vertical synchronization signal, the UI thread of the application generates a rendering tree corresponding to the current frame interface, and determines the task amount of the rendering tree during the process of generating the rendering tree.
  • the rendering tree corresponding to the current frame interface generated by the UI thread of the application can refer to the text description in Figure 1 above, which will not be repeated here.
  • when the UI thread of the application generates a rendering tree, it can determine the task amount of the rendering tree by determining the task amount of each drawing operation or drawing operation structure.
  • the task amount of the rendering tree is used to represent the task amount of converting the rendering tree into GPU instructions during the generation of the current frame interface.
  • the task amount of the rendering tree is positively correlated with the time it takes to convert the rendering tree into GPU instructions.
  • the load scoring model may be a task model table as shown below.
  • a task model table may be stored locally in the electronic device or in a cloud accessible to the electronic device.
  • the task model table contains task quantity scores corresponding to different drawing operation structures.
  • the task model table is shown in Tables 1 and 2 below.
  • Table 1 and Table 2 are exemplary schematic tables of a task model table provided in an embodiment of the present application.
  • the task model table stores the correspondence between drawing operations or drawing operation structures and task amounts.
  • as shown in Table 1, the task amount corresponding to DrawRect(parameter 1, parameter 2) is F1(parameter 1, parameter 2), the task amount corresponding to DrawImage(parameter 3, parameter 4) is F2(parameter 3, parameter 4), and the task amount corresponding to ClipRect(parameter 5) is F3(parameter 5), where F1(), F2(), and F3() are task amount calculation functions corresponding to the different drawing operations or drawing operation structures.

Table 1

| Drawing operation structure | Task amount |
|---|---|
| DrawRect(parameter 1, parameter 2) | F1(parameter 1, parameter 2) |
| DrawImage(parameter 3, parameter 4) | F2(parameter 3, parameter 4) |
| ClipRect(parameter 5) | F3(parameter 5) |
  • the task model table may store the correspondence between the drawing operation or drawing operation structure and the time consumption of converting the drawing operation or drawing operation structure into GPU instructions under different CPU computing capabilities, as shown in Table 2.
  • as shown in Table 2, the time consumption corresponding to DrawRect is T1, the time consumption corresponding to DrawImage is T2, and the time consumption corresponding to ClipRect is T3, where T1(), T2(), and T3() are time consumption calculation functions corresponding to the different drawing operations or drawing operation structures.

Table 2

| Drawing operation structure | Time consumption |
|---|---|
| DrawRect | T1 |
| DrawImage | T2 |
| ClipRect | T3 |
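A minimal sketch of looking up task amounts in such a table; the scoring functions below stand in for F1/F2/F3 and are invented for illustration only:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

using Params = std::vector<double>;  // parameters of a drawing operation
using TaskFn = std::function<double(const Params&)>;

// Maps a drawing operation structure's name to its task amount function.
std::map<std::string, TaskFn> BuildTaskModelTable() {
    return {
        {"DrawRect",  [](const Params& p) { return p[0] * p[1] * 0.001; }},
        {"DrawImage", [](const Params& p) { return p[0] * p[1] * 0.004; }},
        {"ClipRect",  [](const Params&)   { return 0.5; }},
    };
}

double TaskAmountOf(const std::map<std::string, TaskFn>& table,
                    const std::string& op, const Params& params) {
    auto it = table.find(op);
    return it != table.end() ? it->second(params) : 0.0;  // unknown op: no cost
}
```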
  • the task model table may be generated by offline testing. For example, developers of terminal manufacturers may test each drawing operation or drawing operation structure and record the time consumption or task amount to generate the task model table.
  • the task model table can be generated and updated online.
  • the operating system on the electronic device records the time or amount of tasks for different drawing operations or drawing operation structures in the process of generating the interface, and then records and updates the task model table in real time.
  • depending on the condition of the electronic device, the task model tables of electronic devices of the same model can be different, thereby more accurately evaluating the load of a frame of the interface.
  • the condition of the electronic device may include the degree of aging of the electronic device, etc., which is not limited here.
  • rendering attributes may also be involved in the calculation of the task volume. This is because in the subsequent interface call process converted into the image processing library, the rendering attributes of the rendering node may also act on the drawing operations in the drawing instruction list of the rendering node, thereby affecting the converted GPU instructions.
  • the input parameters of the drawing operation may not be considered: different drawing operations correspond to different task amounts, while the same drawing operation corresponds to the same task amount even if its input parameters differ.
  • the task amount of the rendering tree can be saved as a separate parameter, and the separate parameter and the rendering tree are passed to the rendering thread or the unified rendering process.
  • the UI thread of the application determines the task amount of each rendering node in the process of generating the rendering tree, and the task amount of each rendering node can be saved in the rendering node, as shown in Figure 4.
  • the parameter used to save the task amount of the rendering node can be called a task parameter.
  • FIG. 4 is an exemplary schematic diagram of determining the task amount of a rendering tree provided by an embodiment of the present application.
  • the view structure corresponding to the current frame interface of the application is as follows: the subviews of the root view (view container 0) are view container 1 and view container 2; view container 1 has several subviews; and the subviews of view container 2 include view 22.
  • in the process of traversing the views, the UI thread of the application generates a rendering tree; during this process, the UI thread determines the task amount of each rendering node based on the task model table and saves it in the task parameter of the corresponding rendering node. For example, if the UI thread has traversed the root view and view container 1, the rendering tree under generation includes the root rendering node and rendering node 1, and task parameters are added to the root rendering node and rendering node 1. In FIG. 4, the root rendering node is the rendering node corresponding to the root view, and rendering node 1 is the rendering node corresponding to view container 1.
  • each rendering node of the rendering tree includes a task parameter, which stores the task amount of the rendering node.
  • the UI thread, the rendering thread or the rendering process can determine the task amount of the rendering node based on the task parameter in the rendering node.
  • the task parameters in the root rendering node of the rendering tree may also store the task amount of the entire rendering tree.
  • the task parameters in the rendering node may also store the task amount of a child rendering tree with the rendering node as the root rendering node.
  • the UI thread may determine the task amount of the differential rendering tree and save the task amount of the differential rendering tree in the task parameters.
  • After the differential rendering tree is passed to the rendering thread or the unified rendering process, the rendering thread or the unified rendering process generates the rendering tree corresponding to the current frame interface based on the differential rendering tree and the rendering tree corresponding to the previous frame interface, and then determines the task amount of the rendering tree corresponding to the current frame interface.
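  • As a minimal sketch (structure and field names hypothetical), the task parameter described above could be held directly in the rendering node, with the task amount of a subtree computed bottom-up as the node's own amount plus the amounts of all of its children:

```cpp
// Hypothetical sketch of a rendering node carrying a task parameter.
#include <cstdint>
#include <memory>
#include <vector>

struct RenderNode {
    uint32_t taskAmount = 0;                        // task parameter of this node
    std::vector<std::unique_ptr<RenderNode>> children;
};

// Task amount of the subtree rooted at `node`; the aggregate could also be
// written back into the node's task parameter, as described for FIG. 4.
uint32_t SubtreeTaskAmount(const RenderNode& node) {
    uint32_t total = node.taskAmount;
    for (const auto& child : node.children) total += SubtreeTaskAmount(*child);
    return total;
}
```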
  • the UI thread, rendering thread or unified rendering process of the application determines whether the task volume of the rendering tree is greater than the task volume threshold 1. If the task volume of the rendering tree is greater than the task volume threshold 1, the UI thread, rendering thread or unified rendering process of the application splits the rendering tree to obtain multiple sub-rendering trees.
  • the conditions that the multiple sub-rendering trees after splitting need to meet are: the task volume of any of the multiple sub-rendering trees after splitting is less than the task volume threshold 1.
  • the task volume threshold and the task volume threshold 1 can both be called the first threshold.
  • the UI thread, rendering thread or unified rendering process of the application determines whether the time consumption of the rendering tree is greater than time threshold 1. If the time consumption of the rendering tree is greater than time threshold 1, the UI thread, rendering thread or unified rendering process of the application splits the rendering tree to obtain multiple sub-rendering trees. The condition that the multiple sub-rendering trees after splitting need to meet is: the time taken to convert any of the multiple sub-rendering trees into GPU instructions is less than time threshold 1. The time threshold 1 may be related to the screen refresh rate; for example, the higher the screen refresh rate, the smaller the time threshold 1.
  • the workload of the rendering tree and the time consumption of converting the rendering tree into GPU instructions can play the same role, so they can be replaced with each other.
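  • Since time threshold 1 may be related to the screen refresh rate, one possible derivation is sketched below; the safety factor is an assumption added for illustration, not part of the embodiment.

```cpp
// Hypothetical sketch: per-frame time budget from the refresh rate. A
// higher refresh rate yields a smaller threshold, as the text describes.
double TimeThresholdMs(double refreshRateHz, double safetyFactor = 0.8) {
    return 1000.0 / refreshRateHz * safetyFactor;   // e.g. 120 Hz -> ~6.7 ms
}
```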
  • the data transmitted or synchronized by the UI thread of the application to the rendering thread or the unified rendering process are multiple sub-rendering trees.
  • splitting the rendering tree may take into account the concurrency overhead of threads and/or the dependencies between rendering nodes.
  • the dependency of rendering nodes refers to the need for the drawing operation of the child node to be performed after the drawing operation of the parent node due to the parent-child relationship of the rendering nodes.
  • the parent-child relationship of the rendering nodes may affect whether the interface can be generated correctly, because when the drawing operation of the parent node and the drawing operation in the child node operate on the same pixel, the drawing operation of the parent node needs to be executed before the drawing operation in the child node. On the contrary, when the drawing operation of the parent node and the drawing operation in the child node do not operate on the same pixel, the drawing operation of the parent rendering node is independent of the drawing operation in the child node and has no dependency.
  • the rendering tree is split so that the amount of tasks (or time consumption) of any sub-rendering tree is less than a task amount threshold 1 (or a time threshold 1).
  • the overhead of thread concurrency can also be characterized by the amount of tasks or time consumption.
  • the rendering tree is split so that the sum of the task volume (or time consumption) of any sub-rendering tree and the task volume (or time consumption) of thread concurrency is less than a task volume threshold 1 (or a time threshold 1).
  • When it is determined that the task amount of the rendering tree is greater than the task amount threshold, the operating system can adjust the computing power of the CPU, for example by adjusting the operating frequency of the CPU.
  • the method of adjusting the frequency of the CPU based on the amount of tasks of the rendering tree can refer to the text description corresponding to Figures 10 and 11 below, which will not be repeated here.
  • FIG. 5A and FIG. 5B are exemplary schematic diagrams of splitting a rendering tree provided in an embodiment of the present application.
  • S501: The UI thread, rendering thread or unified rendering process of the application determines whether the task volume of the rendering tree is greater than task volume threshold 1; if so, step S502 is executed; if not, the process ends.
  • S502: Split the rendering tree into N sub-rendering trees according to the dependency relationship between the root rendering node and the child nodes of the root rendering node.
  • the rendering tree is split into N child rendering trees.
  • the child nodes of the root rendering node can be used as the root nodes of the N child rendering trees respectively.
  • a child rendering tree has a root rendering node, that is, the root rendering node is used as the root node of the child rendering tree.
  • the child node is a node directly connected to the root node. In the rendering tree, the root node is the root rendering node.
  • S503: Determine whether the task amount of each of the sub-rendering trees is less than or equal to task amount threshold 1.
  • the UI thread, rendering thread or unified rendering process of the application determines whether the task amount of any split sub-rendering tree is greater than task amount threshold 1; if so, step S502 is executed on that sub-rendering tree; if not, the process ends.
  • when determining the task amount, the overhead of thread concurrency needs to be added to the workload of each sub-rendering tree.
  • the manner of splitting the sub-rendering tree may refer to the text description in step S502 or FIG. 5B , which will not be described in detail here.
  • the child nodes of the root rendering node of the rendering tree with a task volume of 200 are rendering node 51 and rendering node 52; the child nodes of rendering node 51 are rendering node 512 and rendering node 513; the child nodes of rendering node 52 are rendering node 521, rendering node 522, and rendering node 523, and the child node of rendering node 522 is rendering node 5221.
  • a sub-rendering tree with a task amount of 40 and a sub-rendering tree with a task amount of 160 are obtained.
  • the root node of the sub-rendering tree with a task amount of 40 is the root rendering node
  • the child node of the root rendering node is rendering node 51
  • the child nodes of rendering node 51 are rendering nodes 512 and rendering nodes 513.
  • the root node of the rendering tree with a task amount of 160 is rendering node 52
  • the child nodes of rendering node 52 are rendering nodes 521, rendering node 522 and rendering node 523
  • the child node of rendering node 522 is rendering node 5221.
  • Then, a second split is performed on the sub-rendering tree with a task amount of 160.
  • After the second split, sub-rendering trees with task amounts of 70, 50 and 40 are obtained.
  • the root node of the sub-rendering tree with a task amount of 70 is rendering node 52, and the child node of rendering node 52 is rendering node 521;
  • the root node of the rendering tree with a task amount of 50 is rendering node 522, and the child node of rendering node 522 is rendering node 5221;
  • the root node of the rendering tree with a task amount of 40 is rendering node 523.
  • After the second split, four sub-rendering trees are obtained, and the task volume of each of the four sub-rendering trees is less than task volume threshold 1; further, in the four sub-rendering trees, the dependency relationships of the rendering nodes are preserved.
  • the GPU instructions corresponding to the four sub-rendering trees only need to be submitted to the command buffer in the command queue in sequence to restore the complete GPU instructions.
  • That the dependency of the rendering nodes is preserved means that, since no new parent-child relationship arises between the rendering nodes, the internal order of the GPU instructions corresponding to each sub-rendering tree does not change; further, after the GPU instructions are submitted to the command queue, the order of all GPU instructions in the command queue does not change. It can be understood that splitting the rendering tree into sub-rendering trees whose task volumes are less than the task volume threshold ensures that, in the subsequent process of converting the rendering tree into GPU instructions, the working time of any thread will not exceed the time corresponding to the task volume threshold, thereby avoiding interface frame drops, interface freezes, and the like.
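  • A minimal sketch of the splitting flow of steps S501-S503 is shown below, reusing the RenderNode type sketched earlier. It simplifies FIG. 5B by detaching all children of an oversized root (the figures detach only the oversized sub-trees), and it omits the bookkeeping that keeps the sub-rendering trees in submission order.

```cpp
// Hypothetical sketch: split a rendering tree until every sub-rendering
// tree's task amount is at or below the threshold (leaves are kept as-is).
#include <cstdint>
#include <deque>
#include <memory>
#include <vector>

std::vector<std::unique_ptr<RenderNode>> SplitRenderTree(
        std::unique_ptr<RenderNode> root, uint32_t threshold) {
    std::vector<std::unique_ptr<RenderNode>> done;
    std::deque<std::unique_ptr<RenderNode>> pending;
    pending.push_back(std::move(root));
    while (!pending.empty()) {
        auto tree = std::move(pending.front());
        pending.pop_front();
        if (SubtreeTaskAmount(*tree) <= threshold || tree->children.empty()) {
            done.push_back(std::move(tree));        // small enough, or a leaf
            continue;
        }
        for (auto& child : tree->children)          // each child becomes the
            pending.push_back(std::move(child));    // root of its own sub-tree
        tree->children.clear();
        done.push_back(std::move(tree));            // old root keeps only its
    }                                               // own drawing operations
    return done;
}
```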
  • FIG. 6A and FIG. 6B are exemplary schematic diagrams of a split rendering tree provided in an embodiment of the present application.
  • S601 Determine whether the task volume of the rendering tree is greater than a task volume threshold 1.
  • the UI thread, rendering thread or unified rendering process of the application determines whether the task volume of the rendering tree is greater than task volume threshold 1; if so, step S602 is executed; if not, the process ends.
  • S602: Split the rendering tree into N sub-rendering trees according to the dependency relationship between the root rendering node and the child nodes of the root rendering node.
  • That is, the rendering tree is split into N sub-rendering trees.
  • the child nodes of the root rendering node can be used as the root nodes of the N child rendering trees respectively.
  • a child rendering tree has a root rendering node, that is, the root rendering node is used as the root node of the child rendering tree.
  • the child node is a rendering node directly connected to the root node.
  • S603: Determine whether the task amount of each sub-rendering tree satisfies the constraint relationship.
  • the UI thread, rendering thread or unified rendering process of the application determines whether the task volume of each sub-rendering tree satisfies the constraint relationship; if so, the process ends; if not, step S604 is executed.
  • In some embodiments, the minimum number of sub-rendering trees required may first be determined from the task volume of the rendering tree and task volume threshold 1. For example, if the task volume of the rendering tree is 200 and task volume threshold 1 is 30, the minimum number of sub-rendering trees required is determined to be 7. After splitting into 7 sub-rendering trees, it is determined whether each sub-rendering tree satisfies the constraint relationship shown below. If the constraint relationship is not satisfied, the sub-rendering tree continues to be split; if the constraint relationship is satisfied, the sub-rendering tree is no longer split.
  • Let the workload of the rendering tree be load_total, workload threshold 1 be load_threshold, the thread concurrency overhead be cost, and the workload of the i-th sub-rendering tree be load_i, where i indexes the sub-rendering trees and N is the number of concurrent threads (the number of sub-rendering trees). The constraint relationship that needs to be satisfied is then load_i + cost ≤ load_threshold for every i (1 ≤ i ≤ N), and the number of sub-rendering trees N may be taken as N = ⌈load_total/load_threshold⌉.
  • To balance the load, the UI thread, rendering thread or unified rendering process of the application can move rendering nodes from a sub-rendering tree with a high workload to a sub-rendering tree with a low workload. However, moving rendering nodes may destroy the dependency between the rendering nodes.
  • Alternatively, the application's UI thread, rendering thread or unified rendering process can further split the sub-rendering trees with a high workload.
  • The rendering tree in FIG. 6B is the same as the rendering tree in FIG. 5B.
  • the task volume of the rendering node 52 is 40 and the task volume of the rendering node 521 is 30.
  • In this case, the constraint condition cannot be satisfied by moving rendering nodes, so the sub-rendering tree with a task volume of 70 is further split.
  • the task volumes of the split sub-rendering trees are 40, 40, 30, 50, and 40, respectively, which meet the constraints.
  • This load-balanced method of splitting the rendering tree balances the workloads of the different sub-rendering trees after splitting, so as to avoid a short-board effect when the rendering tree is later converted into GPU instructions, that is, to avoid the conversion time being extended because the workload of an individual sub-rendering tree is excessive.
  • rendering nodes with a sibling relationship may be split preferentially.
  • the drawing operations of a rendering node may be split into different sub-rendering trees.
  • In the process of splitting a rendering tree or a sub-rendering tree, if a rendering node does not depend on any other rendering node, the rendering node can be moved to any sub-rendering tree. For example, in FIG. 5B, if the root rendering node is a transparent rendering node, rendering node 51 does not depend on any rendering node, and rendering node 52 does not depend on any rendering node.
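  • Using the quantities defined for the constraint relationship above, a minimal sketch of the check performed in step S603 might look as follows (function names hypothetical):

```cpp
// Hypothetical sketch: every sub-rendering tree's workload plus the thread
// concurrency overhead must stay within workload threshold 1.
#include <cstdint>
#include <vector>

bool SatisfiesConstraint(const std::vector<uint32_t>& subTreeLoads,
                         uint32_t cost, uint32_t loadThreshold) {
    for (uint32_t load : subTreeLoads)
        if (load + cost > loadThreshold) return false;  // move or split further
    return true;
}
```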
  • FIG. 7 is another exemplary schematic diagram of a split rendering tree provided in an embodiment of the present application.
  • the interface to be generated by the application and displayed in the next frame is interface 701.
  • the application window can be divided into mutually non-occluding areas in a variety of ways, and the content displayed in different areas is the display content corresponding to the different sub-rendering trees after splitting.
  • the interface 701 (excluding the status bar) is divided into area 1 and area 2; the view corresponding to area 1 includes view container 1 and child nodes of view container 1; the view corresponding to area 2 includes view container 2 and child nodes of view container 2 such as view 22.
  • the UI thread of the application can directly generate two child rendering trees, namely, child rendering tree 1 corresponding to area 1 and child rendering tree 2 corresponding to area 2.
  • the rendering tree is divided into sub-rendering tree 1 and sub-rendering tree 2 according to the division of regions.
  • the DrawOP may be directly split.
  • S303: The application creates multiple rendering threads and converts multiple sub-rendering trees into GPU instructions in parallel.
  • the application creates multiple rendering threads, or the unified rendering process creates multiple sub-rendering threads, and then converts multiple sub-rendering trees into GPU instructions in parallel.
  • the number of rendering threads is consistent with the number of sub-rendering trees after the splitting or the number of sub-rendering threads is consistent with the number of sub-rendering trees after the splitting.
  • FIG. 8 is an exemplary schematic diagram of converting a rendering tree into GPU instructions in parallel according to an embodiment of the present application.
  • sub-rendering tree 1 to sub-rendering tree N are obtained.
  • Different threads, such as rendering threads or sub-rendering threads, traverse different sub-rendering trees to obtain GPU instructions.
  • the unified rendering process, the UI thread or the rendering thread of the application can also apply for N command buffers from the command buffer pool, such as command buffer 1 to command buffer N in FIG8 , and the N command buffers are used to store GPU instructions corresponding to different sub-rendering trees.
  • the thread when traversing the rendering tree, the thread first encapsulates the drawing operation into a drawing operation structure (e.g., DrawOP), and then converts it into GPU instructions, such as interface calls in the OpenGLES library, Vulkan library, and Metal library.
  • After different threads obtain the GPU instructions, they submit the GPU instructions in each command buffer to the buffer in the instruction queue (such as the Primarily Buffer) in sequence.
  • the sequence is the same as the traversal order of the rendering nodes in the rendering tree; that is, if the traversal order of the rendering nodes in sub-rendering tree 1 is earlier than that of the rendering nodes in sub-rendering tree N, the GPU instructions from command buffer 1 in the Primarily Buffer will be executed first, and the GPU instructions from command buffer N in the Primarily Buffer will be executed later.
  • Submitting data in different command buffers to the buffer in the instruction queue can be implemented in the following multiple ways, which are not limited here.
  • For example, when the electronic device converts the rendering tree into GPU instructions through the Vulkan library, the GPU instructions in command buffer 1 to command buffer N can be submitted to the Primarily Buffer in the instruction queue through the vkQueueSubmit method call. Since this involves data synchronization from multiple command buffers to the buffer in the instruction queue, the electronic device can complete the synchronization through a semaphore.
  • the electronic device may also move data of multiple command buffers to a buffer in the instruction queue by means of pointer operations.
  • the electronic device may also move the data of multiple command buffers to the buffer in the instruction queue by copying.
  • command buffer 1 can be a buffer in an instruction queue, such as a Primarily Buffer, and GPU instructions in other command buffers need to be submitted to command buffer 1.
  • N address ranges may be divided in the Primarily Buffer to carry GPU instructions corresponding to different sub-rendering trees. For example, 0x000000-0x0000FF is the address range corresponding to the first command buffer, and 0x000100-0x0001FF is the address range corresponding to the second command buffer.
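  • A minimal sketch of the parallel conversion and in-order submission described above is given below. It uses standard-library threads rather than a concrete rendering API; RecordSubTree is a stub standing in for real command-buffer recording (e.g., via the Vulkan library), and the byte-vector CommandBuffer is a placeholder, not an actual GPU type. The RenderNode type is reused from the earlier sketch.

```cpp
// Hypothetical sketch: convert sub-rendering trees to instruction buffers
// on worker threads, then append the buffers to one queue buffer in
// sub-tree order so the overall instruction order matches a sequential
// traversal.
#include <cstdint>
#include <future>
#include <vector>

using CommandBuffer = std::vector<uint8_t>;  // placeholder for a real buffer

// Stub: a real implementation would traverse the sub-tree, encapsulate
// drawing operations (e.g., DrawOPs), and emit GPU instructions.
CommandBuffer RecordSubTree(const RenderNode& root) {
    CommandBuffer cb;
    cb.push_back(static_cast<uint8_t>(root.children.size()));
    return cb;
}

CommandBuffer ConvertInParallel(const std::vector<RenderNode*>& subTrees) {
    std::vector<std::future<CommandBuffer>> jobs;
    for (RenderNode* tree : subTrees)       // one worker per sub-rendering tree
        jobs.push_back(std::async(std::launch::async,
                                  [tree] { return RecordSubTree(*tree); }));
    CommandBuffer primary;                  // the buffer in the instruction queue
    for (auto& job : jobs) {                // collect in traversal order
        CommandBuffer cb = job.get();
        primary.insert(primary.end(), cb.begin(), cb.end());
    }
    return primary;
}
```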
  • After the bitmap is generated, the bitmap will be obtained by the surface synthesizer and/or the hardware synthesis strategy module, and then participate in layer synthesis as a parameter.
  • the synthesized bitmap will be obtained by the display subsystem for display.
  • After the electronic device executes the interface generation method shown in FIG. 3, the process of the electronic device generating an interface is shown in FIG. 9A and FIG. 9B.
  • FIG. 9A and FIG. 9B are another exemplary schematic diagram of the process of generating an interface of an electronic device provided in an embodiment of the present application.
  • As shown at 7 in FIG. 9A, since the electronic device converts the rendering tree into GPU instructions through two rendering threads (such as rendering thread 1 and rendering thread 2 in FIG. 9A), the latency of converting the rendering tree into GPU instructions is reduced; the GPU can promptly receive the GPU instructions to generate a bitmap, as shown at 8 in FIG. 9A; then, the GPU can pass the generated bitmap to the surface synthesizer and/or the hardware synthesis strategy module, as shown at 9 in FIG. 9A; finally, the display subsystem can display the bitmap after layer synthesis, as shown at 10 in FIG. 9A, and there will be no interface freeze.
  • The difference between FIG. 9A and FIG. 9B is that the process of converting the rendering tree into GPU instructions can be performed by the rendering sub-threads of the unified rendering process, as shown at 7 in FIG. 9B.
  • The rendering sub-threads of the unified rendering process can be rendering sub-thread 1 and rendering sub-thread 2 in FIG. 9B; the contents that are the same between FIG. 9A and FIG. 9B are not repeated here.
  • the rendering tree may not be split; instead, the computing power of the CPU may be improved to reduce the time consumption of converting the rendering tree into GPU instructions.
  • FIG. 10 is another exemplary schematic diagram of the process of the interface generation method provided in an embodiment of the present application.
  • After receiving a vertical synchronization signal, the UI thread of the application generates a rendering tree corresponding to the current frame interface, and determines the task amount of the rendering tree during the process of generating the rendering tree.
  • step S1001 can refer to the text description of step S301 above, which will not be repeated here.
  • the UI thread, rendering thread or unified rendering process of the application passes the task amount of the rendering tree to the operating system, and the operating system adjusts the computing power of the CPU based on the task amount of the rendering tree, for example, by adjusting the frequency of the CPU.
  • FIG. 11 is an exemplary schematic diagram of adjusting CPU computing power based on the amount of tasks of a rendering tree provided in an embodiment of the present application.
  • For example, when the task amount of the rendering tree is 0-100, the task amount of the rendering tree may not be sent to the operating system, or the operating system does not adjust the CPU frequency after receiving the task amount of the rendering tree; when the task amount of the rendering tree is 101-200, the operating system adjusts the CPU frequency to frequency 1 after receiving the task amount of the rendering tree; when the task amount of the rendering tree is 201-300, the operating system adjusts the CPU frequency to frequency 2 after receiving the task amount of the rendering tree, wherein frequency 2 is higher than frequency 1.
  • the operating system then adjusts the computing power of the CPU back to the state before the adjustment, for example, the default frequency in FIG. 11.
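  • The frequency ladder described above and shown in FIG. 11 could be expressed as a simple mapping, sketched below; the enum names are hypothetical and the boundaries follow the example ranges above.

```cpp
// Hypothetical sketch of the task-amount-to-frequency mapping in FIG. 11.
#include <cstdint>

enum class CpuFreq { Default, Freq1, Freq2 };

CpuFreq SelectCpuFreq(uint32_t taskAmount) {
    if (taskAmount <= 100) return CpuFreq::Default;  // 0-100: no adjustment
    if (taskAmount <= 200) return CpuFreq::Freq1;    // 101-200: frequency 1
    return CpuFreq::Freq2;                           // 201 and above: frequency 2
}
```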
  • the CPU frequency can be reduced.
  • For example, when the task volume of the rendering tree is 101-200, the operating system determines to process the rendering tree in parallel through three threads after receiving the task volume of the rendering tree, and adjusts the CPU frequency to frequency 3; when the task volume of the rendering tree is 201-300, the operating system determines to process the rendering tree in parallel through five threads after receiving the task volume of the rendering tree, and adjusts the CPU frequency to frequency 4, wherein frequency 3 is lower than frequency 4.
  • step S1003 can refer to FIG. 1 and the text description of step S304, which will not be repeated here.
  • the rendering tree may not be split, but the rendering tree may be traversed by multiple threads in different traversal orders, thereby reducing the time consumption of converting the rendering tree into GPU instructions.
  • FIG. 12 is another exemplary schematic diagram of the interface generation method provided in an embodiment of the present application.
  • After receiving the vertical synchronization signal, the UI thread of the application generates a rendering tree corresponding to the current frame interface, and determines the task amount of the rendering tree during the process of generating the rendering tree.
  • step S1201 can refer to the text description in step S301, and will not be repeated here.
  • The number of rendering threads can be determined by the workload of the rendering tree. For example, let the workload of the rendering tree be load_total, the workload threshold be load_threshold, and the number of rendering threads be N; then N = ⌈load_total/load_threshold⌉, where ⌈·⌉ is a round-up function. Different rendering threads traverse the rendering tree in different orders to generate GPU instructions, as shown in FIG. 13 and FIG. 14.
  • the number of rendering threads may also be a preconfigured fixed value.
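  • A minimal sketch of the round-up computation of the number of rendering threads follows; the upper cap on the thread count is an assumption added for illustration.

```cpp
// Hypothetical sketch: N = ceil(load_total / load_threshold), clamped.
#include <cstdint>

uint32_t RenderThreadCount(uint32_t loadTotal, uint32_t loadThreshold,
                           uint32_t maxThreads = 8) {   // cap is an assumption
    uint32_t n = (loadTotal + loadThreshold - 1) / loadThreshold;  // round up
    if (n < 1) n = 1;
    if (n > maxThreads) n = maxThreads;
    return n;
}
```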
  • FIG. 13 is an exemplary schematic diagram of a rendering thread traversing a rendering tree in different orders according to an embodiment of the present application.
  • the root node of the rendering tree is the root rendering node
  • the child nodes of the root rendering node are rendering node 1 and rendering node 2
  • the child nodes of rendering node 1 are rendering node 11 and rendering node 12
  • the child nodes of rendering node 2 are rendering node 21, rendering node 22, and rendering node 23, and the child node of rendering node 22 is rendering node 221.
  • the traversal order of thread 1 is: root rendering node, rendering node 1, rendering node 11 and rendering node 12.
  • the traversal order of thread 2 is: rendering node 2, rendering node 21, rendering node 22, rendering node 23 and rendering node 221.
  • For example, when thread 1 has traversed rendering node 1, rendering node 11, and rendering node 12, thread 2 has traversed to rendering node 21, and thread 2 then continues to traverse rendering node 22, rendering node 23, and rendering node 221. Then, according to the content shown in FIG. 8, the GPU instructions generated by different threads traversing the rendering nodes are located in different command buffers and need to be submitted to the buffer of the instruction queue in sequence.
  • FIG. 14 is an exemplary schematic diagram of submitting GPU instructions to an instruction queue provided in an embodiment of the present application.
  • the GPU instructions corresponding to the rendering node 23 are saved in command buffer 2.
  • the order between different GPU instructions needs to be adjusted.
  • GPU instructions corresponding to the root rendering node, rendering node 1, rendering node 11, and rendering node 12 are submitted from command buffer 2 to the buffer in the command queue; then, all GPU instructions in command buffer 1 are submitted to the buffer in the command queue; finally, the remaining GPU instructions in command buffer 2 (i.e., the GPU instructions corresponding to rendering node 23) are submitted to the buffer in the command queue.
  • all GPU instructions of command buffer 2 and command buffer 1 may be submitted to the buffer of the command queue in sequence.
  • the dependency relationship of the rendering nodes of the rendering tree may not need to be reconstructed: if rendering node 23 has a dependency relationship with rendering node 2, the dependency relationship of the rendering nodes of the rendering tree is not reconstructed; if rendering node 23 has no dependency relationship with rendering node 2, the dependency relationship of the rendering nodes of the rendering tree is reconstructed.
  • step S1203 can refer to the text description of step S304 above, which will not be repeated here.
  • FIG. 15 is an exemplary schematic diagram of the hardware structure of an electronic device provided in an embodiment of the present application.
  • the electronic device can be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, as well as a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device and/or a smart city device.
  • PDA personal digital assistant
  • AR augmented reality
  • VR virtual reality
  • AI artificial intelligence
  • the electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, and a subscriber identification module (SIM) card interface 195.
  • SIM subscriber identification module
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiments of the present invention does not constitute a specific limitation on the electronic device.
  • the electronic device may include more or fewer components than shown in the figure, or combine certain components, or split certain components, or arrange the components differently.
  • the components shown in the figure may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processor (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • AP application processor
  • GPU graphics processor
  • ISP image signal processor
  • DSP digital signal processor
  • NPU neural-network processing unit
  • Different processing units may be independent devices or integrated in one or more processors.
  • the controller can generate operation control signals according to the instruction operation code and timing signal to complete the control of instruction fetching and execution.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or cyclically used. If the processor 110 needs to use the instruction or data again, it may be directly called from the memory. This avoids repeated access, reduces the waiting time of the processor 110, and thus improves the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • I2C inter-integrated circuit
  • I2S inter-integrated circuit sound
  • PCM pulse code modulation
  • UART universal asynchronous receiver/transmitter
  • MIPI mobile industry processor interface
  • GPIO general-purpose input/output
  • SIM subscriber identity module
  • USB universal serial bus
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple groups of I2C buses.
  • the processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 communicates with the touch sensor 180K through the I2C bus interface to realize the touch function of the electronic device.
  • the I2S interface can be used for audio communication.
  • the processor 110 can include multiple I2S buses.
  • the processor 110 can be coupled to the audio module 170 via the I2S bus to achieve communication between the processor 110 and the audio module 170.
  • the audio module 170 can transmit an audio signal to the wireless communication module 160 via the I2S interface to achieve the function of answering a call through a Bluetooth headset.
  • the PCM interface can also be used for audio communication, sampling, quantizing and encoding analog signals.
  • the audio module 170 and the wireless communication module 160 can be coupled via a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 via the PCM interface to realize the function of answering calls via a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit an audio signal to the wireless communication module 160 through the UART interface to implement the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193.
  • the MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), etc.
  • the processor 110 and the camera 193 communicate through the CSI interface to realize the shooting function of the electronic device.
  • the processor 110 and the display screen 194 communicate through the DSI interface to realize the display function of the electronic device.
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, etc.
  • the GPIO interface can also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically can be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 130 can be used to connect a charger to charge an electronic device, and can also be used to transfer data between an electronic device and a peripheral device. It can also be used to connect headphones to play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices, etc.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration and does not constitute a structural limitation of the electronic device.
  • the electronic device may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from a wired charger through the USB interface 130.
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of an electronic device. While the charging management module 140 is charging the battery 142, it may also power the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle number, battery health status (leakage, impedance), etc.
  • the power management module 141 can also be set in the processor 110.
  • the power management module 141 and the charging management module 140 can also be set in the same device.
  • the wireless communication function of the electronic device can be implemented through antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, modem processor and baseband processor.
  • Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve the utilization of the antennas.
  • antenna 1 can be reused as a diversity antenna for a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide solutions for wireless communications including 2G/3G/4G/5G, etc., applied to electronic devices.
  • the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), etc.
  • the mobile communication module 150 may receive electromagnetic waves from the antenna 1, and perform filtering, amplification, and other processing on the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 may also amplify the signal modulated by the modulation and demodulation processor, and convert it into electromagnetic waves for radiation through the antenna 1.
  • at least some of the functional modules of the mobile communication module 150 may be arranged in the processor 110.
  • at least some of the functional modules of the mobile communication module 150 may be arranged in the same device as at least some of the modules of the processor 110.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs a sound signal through an audio device (not limited to a speaker 170A, a receiver 170B, etc.), or displays an image or video through a display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be set in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions for electronic devices, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), etc.
  • WLAN wireless local area networks
  • BT Bluetooth
  • GNSS global navigation satellite system
  • FM frequency modulation
  • NFC near field communication
  • IR infrared
  • the wireless communication module 160 can be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signal and filters it, and sends the processed signal to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2.
  • the antenna 1 of the electronic device is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • GSM global system for mobile communications
  • GPRS general packet radio service
  • CDMA code division multiple access
  • WCDMA wideband code division multiple access
  • TD-SCDMA time division code division multiple access
  • LTE long term evolution
  • the electronic device implements the display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, which connects the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), etc.
  • the electronic device may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device can realize the shooting function through ISP, camera 193, video codec, GPU, display screen 194 and application processor.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter is opened, and the light is transmitted to the camera photosensitive element through the lens. The light signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converts it into an image visible to the naked eye.
  • the ISP can also perform algorithm optimization on the noise and brightness of the image.
  • the ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP can be set in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) phototransistor.
  • CMOS complementary metal oxide semiconductor
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to be converted into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard RGB, YUV or other format.
  • the electronic device may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to processing digital image signals, they can also process other digital signals. For example, when an electronic device selects a frequency point, a digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital videos.
  • Electronic devices can support one or more video codecs. In this way, electronic devices can play or record videos in multiple coding formats, such as: Moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • MPEG Moving Picture Experts Group
  • NPU is a neural network (NN) computing processor.
  • NN neural network
  • the internal memory 121 may include one or more random access memories (RAM) and one or more non-volatile memories (NVM).
  • RAM random access memories
  • NVM non-volatile memories
  • Random access memory may include static random-access memory (SRAM), dynamic random-access memory (DRAM), synchronous dynamic random-access memory (SDRAM), double data rate synchronous dynamic random-access memory (DDR SDRAM, for example, the fifth generation DDR SDRAM is generally referred to as DDR5 SDRAM), etc.;
  • SRAM static random-access memory
  • DRAM dynamic random-access memory
  • SDRAM synchronous dynamic random-access memory
  • DDR SDRAM double data rate synchronous dynamic random-access memory
  • Non-volatile memory can include disk storage devices and flash memory.
  • Flash memory can be divided into NOR FLASH, NAND FLASH, 3D NAND FLASH, etc. according to the operating principle; can be divided into single-level cell (SLC), multi-level cell (MLC), triple-level cell (TLC), quad-level cell (QLC), etc. according to the storage unit potential level; can be divided into universal flash storage (UFS), embedded multi media Card (eMMC), etc. according to the storage specification.
  • SLC single-level cell
  • MLC multi-level cell
  • TLC triple-level cell
  • QLC quad-level cell
  • UFS universal flash storage
  • eMMC embedded multi media Card
  • the random access memory can be directly read and written by the processor 110, and can be used to store executable programs (such as machine instructions) of the operating system or other running programs, and can also be used to store user and application data, etc.
  • the non-volatile memory may also store executable programs and user and application data, etc., and may be loaded into the random access memory in advance for direct reading and writing by the processor 110 .
  • the external memory interface 120 can be used to connect to an external non-volatile memory to expand the storage capacity of the electronic device.
  • the external non-volatile memory communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music and videos are stored in the external non-volatile memory.
  • the electronic device can implement audio functions such as music playing and recording through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone jack 170D, and the application processor.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 can be arranged in the processor 110, or some functional modules of the audio module 170 can be arranged in the processor 110.
  • the speaker 170A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal.
  • the electronic device can listen to music or listen to a hands-free call through the speaker 170A.
  • the receiver 170B, also called an "earpiece", is used to convert an audio electrical signal into a sound signal.
  • the voice can be received by placing the receiver 170B close to the human ear.
  • The microphone 170C, also called a "mic", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can put his mouth close to the microphone 170C and make a sound, so as to input the sound signal into the microphone 170C.
  • the electronic device can be provided with at least one microphone 170C. In other embodiments, the electronic device can be provided with two microphones 170C, which can not only collect sound signals but also realize noise reduction function. In other embodiments, the electronic device can also be provided with three, four or more microphones 170C to realize the collection of sound signals, noise reduction, identification of sound sources, and realization of directional recording function, etc.
  • the earphone interface 170D is used to connect a wired earphone.
  • the earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • OMTP open mobile terminal platform
  • CTIA cellular telecommunications industry association of the USA
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A can be set on the display screen 194.
  • the capacitive pressure sensor may comprise at least two parallel plates of conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device determines the intensity of the pressure based on the change in capacitance. When a touch operation acts on the display screen 194, the electronic device detects the intensity of the touch operation through the pressure sensor 180A. The electronic device can also calculate the touch position based on the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities can correspond to different operation instructions. For example: when a touch operation with a touch operation intensity less than the first pressure threshold acts on the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
  • the gyro sensor 180B can be used to determine the motion posture of the electronic device.
  • the angular velocity of the electronic device around three axes (i.e., the x, y, and z axes) may be determined through the gyro sensor 180B.
  • the gyro sensor 180B can be used for anti-shake shooting. For example, when the shutter is pressed, the gyro sensor 180B detects the angle of the electronic device shaking, calculates the distance that the lens module needs to compensate based on the angle, and allows the lens to offset the shaking of the electronic device through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device calculates the altitude through the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device can use the magnetic sensor 180D to detect the opening and closing of the flip leather case.
  • When the electronic device is a flip phone, the electronic device can detect the opening and closing of the flip cover according to the magnetic sensor 180D; then, according to the detected opening and closing state of the leather case or the flip cover, features such as automatic unlocking upon opening can be set.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device in all directions (generally three axes). When the electronic device is stationary, it can detect the magnitude and direction of gravity. It can also be used to identify the posture of the electronic device and is applied to applications such as horizontal and vertical screen switching and pedometers.
  • the distance sensor 180F is used to measure the distance.
  • the electronic device can measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device can use the distance sensor 180F to measure the distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device emits infrared light outward through the light emitting diode.
  • the electronic device uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device. When insufficient reflected light is detected, the electronic device can determine that there is no object near the electronic device.
  • the electronic device can use the proximity light sensor 180G to detect when the user holds the electronic device close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather case mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the ambient light brightness.
  • the electronic device can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device is in a pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints. Electronic devices can use the collected fingerprint characteristics to achieve fingerprint unlocking, access application locks, fingerprint photography, fingerprint call answering, etc.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device reduces the performance of a processor located near the temperature sensor 180J to reduce power consumption and implement thermal protection.
  • the electronic device when the temperature is lower than another threshold, the electronic device heats the battery 142 to avoid abnormal shutdown of the electronic device due to low temperature.
  • the electronic device boosts the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
  • the touch sensor 180K is also called a "touch control device”.
  • the touch sensor 180K can be set on the display screen 194.
  • the touch sensor 180K and the display screen 194 form a touch screen, also called a "touch control screen”.
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K can also be set on the surface of the electronic device, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can obtain a vibration signal. In some embodiments, the bone conduction sensor 180M can obtain a vibration signal of a vibrating bone block of the vocal part of the human body. The bone conduction sensor 180M can also contact the human pulse to receive a blood pressure beat signal. In some embodiments, the bone conduction sensor 180M can also be set in an earphone and combined into a bone conduction earphone.
  • the audio module 170 can parse out a voice signal based on the vibration signal of the vibrating bone block of the vocal part obtained by the bone conduction sensor 180M to realize a voice function.
  • the application processor can parse the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M to realize a heart rate detection function.
  • the key 190 includes a power key, a volume key, etc.
  • the key 190 can be a mechanical key or a touch key.
  • the electronic device can receive key input and generate key signal input related to user settings and function control of the electronic device.
  • Motor 191 can generate vibration prompts.
  • Motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • touch operations acting on different areas of the display screen 194 can also correspond to different vibration feedback effects.
  • different application scenarios (for example: time reminders, receiving messages, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power changes, messages, missed calls, notifications, etc.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be connected to and separated from the electronic device by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195.
  • the electronic device can support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards can be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 can also be compatible with external memory cards.
  • the electronic device interacts with the network through the SIM card to implement functions such as calls and data communications.
  • the electronic device uses an eSIM, i.e., an embedded SIM card.
  • the eSIM card can be embedded in the electronic device and cannot be separated from the electronic device.
  • FIG. 16 is an exemplary schematic diagram of the software structure of the electronic device provided in an embodiment of the present application.
  • the software system of the electronic device may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture.
  • the embodiment of the present invention takes the Android system of the layered architecture as an example to exemplify the software structure of the electronic device.
  • the layered architecture divides the software into several layers, each with clear roles and division of labor.
  • the layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, from top to bottom: the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides application programming interface (API) and programming framework for the applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
  • the window manager is used to manage window programs.
  • the window manager can obtain the display screen size, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • the data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying images, etc.
  • the view system can be used to build applications.
  • a display interface can be composed of one or more views.
  • a display interface including a text notification icon can include a view for displaying text and a view for displaying images.
  • the phone manager is used to provide communication functions for electronic devices, such as the management of call status (including answering, hanging up, etc.).
  • the resource manager provides various resources for applications, such as localized strings, icons, images, layout files, video files, and so on.
  • the notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages and can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also be a notification that appears in the system top status bar in the form of a chart or scroll bar text, such as notifications of applications running in the background, or a notification that appears on the screen in the form of a dialog window. For example, a text message is displayed in the status bar, a prompt sound is emitted, an electronic device vibrates, an indicator light flashes, etc.
  • the view system includes a rendering tree task amount estimation module, which determines the task amount of the rendering tree during the process of generating the rendering tree or after the rendering tree is generated.
  • the view system includes a rendering tree splitting module, which can split the rendering tree in different ways, wherein different rendering trees are traversed by different threads to generate GPU instructions.
  • the CPU scheduling module may adjust the computing power of the CPU based on the amount of tasks of the rendering tree, such as adjusting the frequency of the CPU.
  • Android Runtime includes core libraries and virtual machines. Android Runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the function that needs to be called by the Java language, and the other part is the Android core library.
  • the application layer and the application framework layer run in a virtual machine.
  • the virtual machine executes the Java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules. For example: browser engine (webkit), rendering engine, surface compositor, hardware synthesis strategy module, media library (Media Libraries), image processing library (for example: OpenGL ES), rendering engine (such as Skia library), etc.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as static image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the image processing library is used to implement 3D graphics drawing, image rendering, etc.
  • a rendering engine is a drawing engine for 2D graphics.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer includes display driver, camera driver, audio driver, sensor driver, etc.
  • the display subsystem includes a display driver.
  • the term "when" may be interpreted to mean “if" or “after" or “in response to determining" or “in response to detecting", depending on the context.
  • the phrases “upon determining" or “if (the stated condition or event) is detected” may be interpreted to mean “if determining" or “in response to determining" or “upon detecting (the stated condition or event)” or “in response to detecting (the stated condition or event)", depending on the context.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions can be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions can be transmitted from a website site, a computer, a server or a data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, wireless, microwave, etc.) mode to another website site, computer, server or data center.
  • the computer-readable storage medium can be any available medium that a computer can access or a data storage device such as a server, a data center, etc. that contains one or more available media integration.
  • the available medium can be a magnetic medium, (e.g., a floppy disk, a hard disk, a tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid-state hard disk), etc.
  • the processes can be completed by a computer program to instruct the relevant hardware, and the program can be stored in a computer-readable storage medium.
  • the program When the program is executed, it can include the processes of the above-mentioned method embodiments.
  • the aforementioned storage medium includes: ROM or random access memory RAM, magnetic disk or optical disk and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Generation (AREA)

Abstract

This application discloses an interface generation method and an electronic device, relating to the field of electronic technology. The interface generation method provided by this application includes: splitting a rendering tree into multiple sub-rendering trees, converting the multiple sub-rendering trees into rendering instructions in parallel through multiple threads, and then generating the interface based on the rendering instructions, thereby reducing the latency of converting the rendering tree into rendering instructions and avoiding frame drops and freezes in the interface.

Description

Interface generation method and electronic device
This application claims priority to the Chinese patent application No. 202211281435.9, entitled "Interface generation method and electronic device", filed with the China National Intellectual Property Administration on October 19, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of electronic technology, and in particular to an interface generation method and an electronic device.
Background
With the development of technology, the resolution and refresh rate of the screens of electronic devices are getting higher and higher, where the resolution of the screen affects the number of pixels contained in a frame of the interface, and the refresh rate affects the time available to generate a frame of the interface.
Before the electronic device displays a first frame, it needs to expend computing resources to generate that first frame; before it displays a second frame, it needs to expend computing resources again to generate that second frame.
When the electronic device fails to generate the second frame in time, the content displayed on its screen freezes. To ensure that the second frame can be generated in time, the electronic device often raises the CPU operating frequency to increase its computing power, which increases the energy consumed to generate a frame and reduces the energy efficiency of interface generation.
Summary
Embodiments of this application provide an interface generation method and an electronic device. The interface generation method provided by the embodiments of this application splits a rendering tree into multiple sub-rendering trees and then converts the multiple sub-rendering trees into rendering instructions in parallel, reducing the time taken to convert the sub-rendering trees into rendering instructions, thereby reducing the time taken to generate the interface and avoiding freezes, frame drops, and the like.
In a first aspect, an embodiment of this application provides an interface generation method, applied to an electronic device running a first application. The method includes: the electronic device generates a first rendering tree, the first rendering tree including drawing operations for generating a frame of the first application's interface; the electronic device splits the first rendering tree into N sub-rendering trees, N being greater than 1; the electronic device converts the N sub-rendering trees into first rendering instructions in parallel, rendering instructions being instructions in a rendering engine, an image processing library, or a GPU driver; the electronic device generates a frame of the first application's interface based on the first rendering instructions.
In the above embodiment, the rendering tree is split into multiple sub-rendering trees, which are then converted into rendering instructions in parallel, reducing the time taken to convert the sub-rendering trees into rendering instructions, thereby reducing the time taken to generate the interface and avoiding freezes, frame drops, and the like.
With reference to some embodiments of the first aspect, in some embodiments, the electronic device splitting the first rendering tree into N sub-rendering trees specifically includes: the electronic device determines a first task amount, the first task amount being the task amount of the first rendering tree and indicating the time or computation required to convert the first rendering tree into the first rendering instructions; in response to the electronic device determining that the first task amount is greater than a first threshold, the electronic device splits the first rendering tree into the N sub-rendering trees.
In the above embodiment, when the rendering tree's task amount is greater than the threshold, converting the rendering tree into rendering instructions takes a long time; only in that case is the rendering tree split into multiple sub-rendering trees to reduce the time. When the rendering tree's task amount is below the threshold, the rendering tree need not be split.
With reference to some embodiments of the first aspect, in some embodiments, after the electronic device determines the first task amount, the method further includes: the electronic device determines M based on the first task amount and the first threshold, M being less than or equal to N, and M being an integer greater than or equal to the ratio of the first task amount to the first threshold; the electronic device determines an integer greater than or equal to M as N.
In the above embodiment, splitting the rendering tree may first determine the minimum number of sub-rendering trees needed, which in turn determines how the rendering tree is split; this number can be determined from the threshold and the rendering tree's task amount.
With reference to some embodiments of the first aspect, in some embodiments, the electronic device determining the first task amount specifically includes: the electronic device determines the first task amount by determining the task amounts of the drawing operations in the first rendering tree.
In the above embodiment, the first task amount can be determined in multiple ways.
With reference to some embodiments of the first aspect, in some embodiments, the N sub-rendering trees include a second rendering tree and a third rendering tree, and the difference between a second task amount and a third task amount is less than a difference threshold, the second task amount being the task amount of the second rendering tree and measuring the time or computation required to convert the second rendering tree into rendering instructions, and the third task amount being the task amount of the third rendering tree and measuring the time or computation required to convert the third rendering tree into rendering instructions.
In the above embodiment, after the minimum number of sub-rendering trees is determined and that number of sub-rendering trees is split out, the sub-rendering trees may be split further so that the task amount of every sub-rendering tree is below the threshold.
With reference to some embodiments of the first aspect, in some embodiments, the electronic device splitting the first rendering tree into N sub-rendering trees specifically includes: the electronic device determines that the root render node of the first rendering tree has N child nodes, a child node being a render node directly connected to the root render node; the electronic device splits the first rendering tree into N sub-rendering trees.
In the above embodiment, multiple rendering subtrees can be obtained by splitting along the data structure of the rendering tree.
With reference to some embodiments of the first aspect, in some embodiments, the electronic device splitting the first rendering tree into N sub-rendering trees specifically includes: the electronic device divides the interface of the first application into N regions; the electronic device splits the first rendering tree into N sub-rendering trees based on the N regions, the N sub-rendering trees corresponding one-to-one to the N regions.
In the above embodiment, N regions can first be laid out on the interface, and the rendering tree is then split by region into N sub-rendering trees.
With reference to some embodiments of the first aspect, in some embodiments, the electronic device splitting the first rendering tree into N sub-rendering trees specifically includes: the electronic device determines a first task amount, the first task amount being the task amount of the first rendering tree and measuring the time or computation required to convert the first rendering tree into rendering instructions, the first task amount being greater than a first threshold; the electronic device determines that the root render node of the first rendering tree has K child nodes, K being less than N; the electronic device splits the first rendering tree into K sub-rendering trees; after the electronic device determines that the task amount of a fourth rendering tree is greater than the first threshold, the electronic device splits the fourth rendering tree into N−K+1 rendering subtrees, the K sub-rendering trees including the fourth rendering tree, and the task amounts of the N rendering subtrees all being less than the first threshold.
In the above embodiment, both the rendering tree and its subtrees can be split so that every rendering subtree's task amount is below the threshold, ensuring that converting each rendering subtree into rendering instructions does not overrun its time budget and the interface is generated in time.
With reference to some embodiments of the first aspect, in some embodiments, the electronic device converting the N sub-rendering trees into first rendering instructions in parallel specifically includes: the electronic device fills the instructions converted from the N sub-rendering trees into N buffers through N threads; the electronic device submits the instructions of the N buffers into a first buffer, the instructions in the first buffer being the first rendering instructions.
In the above embodiment, after the multiple rendering subtrees are converted into multiple sets of rendering instructions, the rendering instructions must still be merged into one buffer; the merged instructions are then submitted to the GPU to drive the GPU to generate the interface.
In a second aspect, an embodiment of this application provides an interface generation method, applied to an electronic device running a first application. The method includes: the electronic device generates a first rendering tree through a first process, the first rendering tree including drawing operations for generating a frame of the first process's interface; the electronic device splits the first rendering tree into a second rendering tree and a third rendering tree, the second rendering tree including part of the drawing operations in the first rendering tree, the third rendering tree including part of the drawing operations in the first rendering tree, and the second and third rendering trees being different; the electronic device converts the second rendering tree into first rendering instructions through a first thread, the first rendering instructions being stored in a first buffer, rendering instructions being instructions in a rendering engine, an image processing library, or a GPU driver; the electronic device converts the third rendering tree into second rendering instructions through a second thread, the second instructions being stored in a second buffer; the electronic device generates a frame of the first process's interface based on the first rendering instructions and the second rendering instructions.
In the above embodiment, the electronic device can split the rendering tree generated by a process into multiple rendering trees, generate multiple sets of rendering instructions through different threads, and finally generate a frame of the interface based on the multiple sets of rendering instructions; since the threads run in parallel, the time taken to convert the rendering tree into rendering instructions is reduced.
With reference to some embodiments of the second aspect, in some embodiments, the electronic device determines a first task amount, the first task amount being the task amount of the first rendering tree and measuring the time or computation required to convert the first rendering tree into rendering instructions, the first task amount being greater than a first threshold.
In the above embodiment, when the rendering tree's task amount is greater than the threshold, converting the rendering tree into rendering instructions takes a long time; only in that case is the rendering tree split into multiple sub-rendering trees to reduce the time; when the task amount is below the threshold, the rendering tree need not be split.
With reference to some embodiments of the second aspect, in some embodiments, the electronic device generating a frame of the first process's interface based on the first instructions and the second instructions specifically includes: the first rendering instructions are located in the first buffer held by the first thread, the second rendering instructions are located in the second buffer held by the second thread, and the electronic device submits the instructions in the first buffer and the rendering instructions in the second buffer into a third buffer; the electronic device generates a frame of the first process's interface based on the third buffer.
In the above embodiment, the electronic device can submit the rendering instructions in multiple buffers into a single buffer, which then drives the GPU to generate the interface.
With reference to some embodiments of the second aspect, in some embodiments, the third buffer is the second buffer, or the third buffer is the first buffer.
In a third aspect, an embodiment of this application provides an interface generation method, applied to an electronic device running a first application. The method includes: the electronic device generates a first rendering tree, the first rendering tree including drawing operations for generating a frame of the first application's interface; the electronic device determines a first task amount, the first task amount being the task amount of the first rendering tree and measuring the time or computation required to convert the first rendering tree into rendering instructions, the first task amount being greater than a first threshold, rendering instructions being instructions in a rendering engine, an image processing library, or a GPU driver; if the first task amount is greater than the first threshold, the electronic device configures the CPU operating frequency to change from a first frequency to a second frequency, the second frequency being higher than the first frequency; the electronic device generates a frame of the first application's interface based on the first rendering tree; while the electronic device generates the frame of the first application's interface, the electronic device operates at the second frequency.
In the above embodiment, the electronic device can decide whether to adjust the CPU frequency based on how the rendering tree's task amount compares to the threshold; if the task amount is greater than the threshold, the CPU operates at a higher frequency so that the rendering tree can be converted into rendering instructions in time.
With reference to some embodiments of the third aspect, in some embodiments, the electronic device generating a frame of the first application's interface based on the first rendering tree specifically includes: the electronic device splits the first rendering tree into N sub-rendering trees, N being an integer greater than 1; the electronic device converts the N sub-rendering trees into first rendering instructions in parallel; the electronic device generates a frame of the first application's interface based on the first rendering instructions.
In the above embodiment, the electronic device can also split the rendering tree into multiple sub-rendering trees and convert them into rendering instructions in parallel, reducing the time taken to convert the rendering tree into rendering instructions.
In a fourth aspect, an embodiment of this application provides an interface generation method, applied to an electronic device running a first application. The method includes: the electronic device generates a first rendering tree, the first rendering tree including drawing operations for generating a frame of the first application's interface; the electronic device traverses different parts of the first rendering tree through multiple different threads to generate first rendering instructions, rendering instructions being instructions in a rendering engine, an image processing library, or a GPU driver; the electronic device generates a frame of the first application's interface based on the first rendering instructions.
In the above embodiment, the electronic device traverses the rendering tree in different orders and converts the rendering tree into rendering instructions in parallel through multiple threads, reducing the latency of the conversion and ensuring that the electronic device can generate the interface in time.
With reference to some embodiments of the fourth aspect, in some embodiments, before the electronic device traverses the first rendering tree in different orders through multiple different threads to generate the first instructions, the method further includes: the electronic device determines a first task amount, the first task amount being the task amount of the first rendering tree and measuring the time or computation required to convert the first rendering tree into the first rendering instructions; the electronic device determines that the first task amount is greater than a first threshold.
In the above embodiment, when the rendering tree's task amount is greater than the threshold, converting the rendering tree into rendering instructions takes a long time; only in that case is the rendering tree converted into rendering instructions in parallel by multiple threads, reducing the conversion time.
With reference to some embodiments of the fourth aspect, in some embodiments, the electronic device traversing the first rendering tree in different orders through multiple different threads to generate the first instructions specifically includes: the electronic device traverses a first part of the first rendering tree through a first thread and stores the generated second rendering instructions in a first buffer; the electronic device traverses a second part of the first rendering tree through a second thread and stores the generated third rendering instructions in a second buffer; the electronic device submits the rendering instructions of the first buffer and the rendering instructions in the second buffer into a third buffer to obtain the first rendering instructions.
In the above embodiment, the rendering instructions generated by different threads can reside in different buffers; finally, the rendering instructions in the different buffers are submitted into the same buffer, which drives the GPU to generate the interface.
With reference to some embodiments of the fourth aspect, in some embodiments, the electronic device submitting the rendering instructions of the first buffer and the rendering instructions in the second buffer into a third buffer to obtain the first rendering instructions specifically includes: the rendering instructions of the first buffer include second rendering instructions and third rendering instructions, and the instructions of the second buffer include fourth rendering instructions; the instructions of the third buffer are arranged in the order: the second rendering instructions, the fourth rendering instructions, the third rendering instructions.
In the above embodiment, when the rendering instructions from different buffers are submitted into the same buffer, the order of the rendering instructions can be adjusted to restore the dependencies between render nodes.
In a fifth aspect, an embodiment of this application provides an electronic device, including: one or more processors and a memory; the memory is coupled to the one or more processors and stores computer program code comprising computer instructions; the one or more processors invoke the computer instructions to cause the electronic device to perform: generating a first rendering tree, the first rendering tree including drawing operations for generating a frame of the first application's interface; splitting the first rendering tree into N sub-rendering trees, N being greater than 1; converting the N sub-rendering trees into first rendering instructions in parallel, rendering instructions being instructions in a rendering engine, an image processing library, or a GPU driver; generating a frame of the first application's interface based on the first rendering instructions.
With reference to some embodiments of the fifth aspect, in some embodiments, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: determining a first task amount, the first task amount being the task amount of the first rendering tree and indicating the time or computation required to convert the first rendering tree into the first rendering instructions; in response to determining that the first task amount is greater than a first threshold, splitting the first rendering tree into the N sub-rendering trees.
With reference to some embodiments of the fifth aspect, in some embodiments, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: determining M based on the first task amount and the first threshold, M being less than or equal to N and being an integer greater than or equal to the ratio of the first task amount to the first threshold; determining an integer greater than or equal to M as N.
With reference to some embodiments of the fifth aspect, in some embodiments, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: determining the first task amount by determining the task amounts of the drawing operations in the first rendering tree.
With reference to some embodiments of the fifth aspect, in some embodiments, the N sub-rendering trees include a second rendering tree and a third rendering tree, and the difference between a second task amount and a third task amount is less than a difference threshold, the second task amount being the task amount of the second rendering tree and measuring the time or computation required to convert the second rendering tree into rendering instructions, and the third task amount being the task amount of the third rendering tree and measuring the time or computation required to convert the third rendering tree into rendering instructions.
With reference to some embodiments of the fifth aspect, in some embodiments, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: determining that the root render node of the first rendering tree has N child nodes, a child node being a render node directly connected to the root render node; splitting the first rendering tree into N sub-rendering trees.
With reference to some embodiments of the fifth aspect, in some embodiments, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: dividing the interface of the first application into N regions; splitting the first rendering tree into N sub-rendering trees based on the N regions, the N sub-rendering trees corresponding one-to-one to the N regions.
With reference to some embodiments of the fifth aspect, in some embodiments, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: determining a first task amount, the first task amount being the task amount of the first rendering tree and measuring the time or computation required to convert the first rendering tree into rendering instructions, the first task amount being greater than a first threshold; determining that the root render node of the first rendering tree has K child nodes, K being less than N; splitting the first rendering tree into K sub-rendering trees; after determining that the task amount of a fifth rendering tree is greater than the first threshold, splitting the fifth rendering tree into N−K+1 rendering subtrees, the K sub-rendering trees including the fifth rendering tree, and the task amounts of the N rendering subtrees all being less than the first threshold.
With reference to some embodiments of the fifth aspect, in some embodiments, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: filling the instructions converted from the N sub-rendering trees into N buffers through N threads respectively; submitting the instructions of the N buffers into a first buffer, the instructions in the first buffer being the first rendering instructions.
In a sixth aspect, an embodiment of this application provides an electronic device, including: one or more processors and a memory; the memory is coupled to the one or more processors and stores computer program code comprising computer instructions; the one or more processors invoke the computer instructions to cause the electronic device to perform: generating a first rendering tree through a first process, the first rendering tree including drawing operations for generating a frame of the first process's interface; splitting the first rendering tree into a second rendering tree and a third rendering tree, each including part of the drawing operations in the first rendering tree, the second and third rendering trees being different; converting the second rendering tree into first rendering instructions through a first thread, the first rendering instructions being stored in a first buffer, rendering instructions being instructions in a rendering engine, an image processing library, or a GPU driver; converting the third rendering tree into second rendering instructions through a second thread, the second instructions being stored in a second buffer; generating a frame of the first process's interface based on the first rendering instructions and the second rendering instructions.
With reference to some embodiments of the sixth aspect, in some embodiments, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: determining a first task amount, the first task amount being the task amount of the first rendering tree and measuring the time or computation required to convert the first rendering tree into rendering instructions, the first task amount being greater than a first threshold.
With reference to some embodiments of the sixth aspect, in some embodiments, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: the first rendering instructions being located in the first buffer held by the first thread and the second rendering instructions being located in the second buffer held by the second thread, submitting the instructions in the first buffer and the rendering instructions in the second buffer into a third buffer; generating a frame of the first process's interface based on the third buffer.
With reference to some embodiments of the sixth aspect, in some embodiments, the third buffer is the second buffer, or the third buffer is the first buffer.
In a seventh aspect, an embodiment of this application provides an electronic device, including: one or more processors and a memory; the memory is coupled to the one or more processors and stores computer program code comprising computer instructions; the one or more processors invoke the computer instructions to cause the electronic device to perform: generating a first rendering tree, the first rendering tree including drawing operations for generating a frame of the first application's interface; determining a first task amount, the first task amount being the task amount of the first rendering tree and measuring the time or computation required to convert the first rendering tree into rendering instructions, the first task amount being greater than a first threshold, rendering instructions being instructions in a rendering engine, an image processing library, or a GPU driver; if the first task amount is greater than the first threshold, configuring the CPU operating frequency to change from a first frequency to a second frequency, the second frequency being higher than the first frequency; generating a frame of the first application's interface based on the first rendering tree; while the electronic device generates the frame of the first application's interface, operating at the second frequency.
With reference to some embodiments of the seventh aspect, in some embodiments, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: splitting the first rendering tree into N sub-rendering trees, N being an integer greater than 1; converting the N sub-rendering trees into first rendering instructions in parallel; generating a frame of the first application's interface based on the first rendering instructions.
In an eighth aspect, an embodiment of this application provides an electronic device, including: one or more processors and a memory; the memory is coupled to the one or more processors and stores computer program code comprising computer instructions; the one or more processors invoke the computer instructions to cause the electronic device to perform: generating a first rendering tree, the first rendering tree including drawing operations for generating a frame of the first application's interface; traversing different parts of the first rendering tree through multiple different threads to generate first rendering instructions, rendering instructions being instructions in a rendering engine, an image processing library, or a GPU driver; generating a frame of the first application's interface based on the first rendering instructions.
With reference to some embodiments of the eighth aspect, in some embodiments, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: determining a first task amount, the first task amount being the task amount of the first rendering tree and measuring the time or computation required to convert the first rendering tree into the first rendering instructions; determining that the first task amount is greater than a first threshold.
With reference to some embodiments of the eighth aspect, in some embodiments, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: traversing a first part of the first rendering tree through a first thread and storing the generated second rendering instructions in a first buffer; traversing a second part of the first rendering tree through a second thread and storing the generated third rendering instructions in a second buffer; submitting the rendering instructions of the first buffer and the rendering instructions in the second buffer into a third buffer to obtain the first rendering instructions.
With reference to some embodiments of the eighth aspect, in some embodiments, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: the rendering instructions of the first buffer including second rendering instructions and third rendering instructions, and the instructions of the second buffer including fourth rendering instructions; the instructions of the third buffer being arranged in the order: the second rendering instructions, the fourth rendering instructions, the third rendering instructions.
In a ninth aspect, an embodiment of this application provides a chip system applied to an electronic device; the chip system includes one or more processors configured to invoke computer instructions to cause the electronic device to perform the method described in the first aspect, the second aspect, the third aspect, the fourth aspect, or any possible implementation of the first, second, third, or fourth aspect.
In a tenth aspect, an embodiment of this application provides a computer program product containing instructions which, when run on an electronic device, cause the electronic device to perform the method described in the first aspect, the second aspect, the third aspect, the fourth aspect, or any possible implementation of the first, second, third, or fourth aspect.
In an eleventh aspect, an embodiment of this application provides a computer-readable storage medium including instructions which, when run on an electronic device, cause the electronic device to perform the method described in the first aspect, the second aspect, the third aspect, the fourth aspect, or any possible implementation of the first, second, third, or fourth aspect.
It can be understood that the electronic devices provided in the fifth, sixth, seventh, and eighth aspects, the chip system provided in the ninth aspect, the computer program product provided in the tenth aspect, and the computer storage medium provided in the eleventh aspect are all used to perform the methods provided by the embodiments of this application. Therefore, the beneficial effects they can achieve can be found in the beneficial effects of the corresponding methods and are not repeated here.
Brief Description of the Drawings
FIG. 1 is an exemplary schematic diagram of an application generating a bitmap according to an embodiment of this application;
FIG. 2 is an exemplary schematic diagram of the process of an electronic device generating an interface according to an embodiment of this application;
FIG. 3 is an exemplary schematic diagram of the interface generation method according to an embodiment of this application;
FIG. 4 is an exemplary schematic diagram of determining the task amount of a rendering tree according to an embodiment of this application;
FIGS. 5A and 5B are exemplary schematic diagrams of splitting a rendering tree according to an embodiment of this application;
FIGS. 6A and 6B are exemplary schematic diagrams of splitting a rendering tree according to an embodiment of this application;
FIG. 7 is another exemplary schematic diagram of splitting a rendering tree according to an embodiment of this application;
FIG. 8 is an exemplary schematic diagram of converting rendering trees into GPU instructions in parallel according to an embodiment of this application;
FIGS. 9A and 9B are another exemplary schematic diagram of the process of an electronic device generating an interface according to an embodiment of this application;
FIG. 10 is another exemplary schematic diagram of the flow of the interface generation method according to an embodiment of this application;
FIG. 11 is an exemplary schematic diagram of adjusting CPU computing power based on the task amount of a rendering tree according to an embodiment of this application;
FIG. 12 is another exemplary schematic diagram of the interface generation method according to an embodiment of this application;
FIG. 13 is an exemplary schematic diagram of rendering threads traversing a rendering tree in different orders according to an embodiment of this application;
FIG. 14 is an exemplary schematic diagram of submitting GPU instructions to an instruction queue according to an embodiment of this application;
FIG. 15 is an exemplary schematic diagram of the hardware structure of an electronic device according to an embodiment of this application;
FIG. 16 is an exemplary schematic diagram of the software structure of an electronic device according to an embodiment of this application.
Detailed Description
The terms used in the following embodiments of this application are only for the purpose of describing specific embodiments and are not intended to limit this application. As used in the specification and the appended claims of this application, the singular forms "a", "an", "the", "the above", and "this" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in this application refers to and encompasses any and all possible combinations of one or more of the listed items.
Hereinafter, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as implying relative importance or implicitly indicating the number of the indicated technical features. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the embodiments of this application, unless otherwise specified, "multiple" means two or more.
The term "user interface (UI)" in the following embodiments of this application is the medium interface for interaction and information exchange between an application or operating system and the user; it converts between the internal form of information and a form acceptable to the user. A user interface is source code written in a specific computer language such as Java or extensible markup language (XML); the interface source code is parsed and rendered on the electronic device and finally presented as content the user can recognize. The common form of a user interface is the graphical user interface (GUI), a user interface related to computer operation and displayed graphically. It may consist of visual interface elements such as text, icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets displayed on the screen of the electronic device.
For ease of understanding, related terms and concepts involved in the embodiments of this application are introduced first below. The terms used in the embodiments of the present invention are only used to explain specific embodiments of the present invention and are not intended to limit the present invention.
The interface, as the medium for interaction and information exchange between an application and the user, must be regenerated for the foreground application each time a vertical synchronization signal arrives. The frequency of the vertical synchronization signal is related to the refresh rate of the electronic device's screen; for example, the frequency of the vertical synchronization signal equals the screen's refresh rate.
That is, each time before the electronic device refreshes the content displayed on the screen, it must generate the foreground application's interface so that the newly generated interface is presented to the user when the screen refreshes.
To generate an application's interface, the application itself must render and generate a bitmap and pass its bitmap to the surface compositor (SurfaceFlinger). That is, the application, as a producer, draws and generates a bitmap and stores it in the buffer queue (BufferQueue) provided by the surface compositor; the surface compositor, as a consumer, continuously obtains the bitmaps generated by applications from the BufferQueue. The bitmap is located on a surface generated by the application, and the surface is filled into the BufferQueue.
After the surface compositor obtains the bitmaps of the visible applications, it determines, together with the hardware composition strategy module (Hardware Composer, HWC), how the bitmaps are composited as layers.
After the surface compositor and/or the hardware composition strategy module perform bitmap composition, they fill the composited bitmap into a frame buffer and pass it to the display subsystem (DSS); once the DSS has the composited bitmap, it can display it on the screen. The frame buffer may be an on-screen buffer. A bitmap may also be called a layer on the surface compositor.
The process of the application generating a bitmap is shown in FIG. 1 below.
FIG. 1 is an exemplary schematic diagram of an application generating a bitmap according to an embodiment of this application.
As shown in FIG. 1, after receiving the vertical synchronization signal (Vsync), the application starts generating the bitmap; the specific steps can be divided into three steps, namely step S101, step S102, and step S103.
S101: The main thread traverses the application's views and saves the drawing operations of each view into a newly generated rendering tree.
The main thread (UI thread) invalidates the view hierarchy; the UI thread traverses the application's views through the measure method call (measure()), the layout method call (layout()), and the draw method call (draw()), determines and saves the drawing operations of each view, and records each view and the drawing operations it involves (such as drawLine) into the drawing instruction list (display list) of a render node (RenderNode) of the rendering tree. The data stored in the drawing instruction list may be drawing operation structures (DrawOp or DrawListOp).
A view is a basic element making up an application's interface; one control on the interface may correspond to one or more views.
Optionally, in some embodiments of this application, within the draw method call the application's UI thread also reads the content carried on a view into memory, for example the image carried by an image view (imageview) or the text carried by a text view (textview). Alternatively, within the draw method call, the application's UI thread determines the operation of reading a view's content into memory and records it into the drawing instruction list. The drawing operation structures in the drawing instruction list may also be called drawing instructions.
A drawing operation structure is a data structure used to draw graphics, for example drawing a line, drawing a rectangle, or drawing text. When a render node is traversed, its drawing operation structures are converted by the rendering engine into API calls of the image processing library, i.e., interface calls in the OpenGL ES, Vulkan, or Metal library. For example, in the rendering engine (the Skia library), drawLine is wrapped as a DrawLineOp, a data structure containing drawing data such as the line's length and width. The DrawLineOp is further wrapped into interface calls of the OpenGL ES, Vulkan, or Metal library, yielding GPU instructions. Hereinafter, interface calls in the Skia library, OpenGL ES library, Vulkan library, and/or Metal library are collectively called rendering instructions. That is, the rendering tree is converted by the rendering thread into rendering instructions, which are further converted into GPU instructions that the GPU can recognize and process. The OpenGL ES, Vulkan, and Metal libraries may collectively be called image processing libraries or graphics rendering libraries; during generation of a frame of the interface, the electronic device generates rendering instructions through the OpenGL ES, Vulkan, or Metal library. The image processing library provides graphics rendering APIs, driver support, and so on.
DrawOps may be stored as a chained data structure on the application's stack.
The drawing instruction list may be a buffer recording all the drawing operation structures included in one frame of the application's interface, or the identifiers (such as addresses or sequence numbers) of all the drawing operations. When the application has multiple windows or is displayed on different display areas, multiple rendering trees corresponding to the multiple windows must be generated independently.
The rendering tree is a data structure generated by the UI thread for generating the application interface; it may include multiple render nodes, each including render properties and a drawing instruction list. The rendering tree records part or all of the information for generating one frame of the application's interface.
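A minimal C++ sketch of the data structures just described may help fix the picture; the type and field names below (DrawOp, RenderProperties, RenderNode, taskAmount) are illustrative assumptions, not the actual Android or Skia definitions:

```cpp
// Sketch of a render node: render properties plus a display list of recorded
// draw operations; nodes form a tree mirroring the view hierarchy.
#include <memory>
#include <vector>

struct DrawOp {                      // one recorded drawing operation
    enum class Kind { Line, Rect, Text } kind;
    std::vector<float> args;         // e.g. endpoints and width for a line
};

struct RenderProperties {            // applied to every op in the node's list
    float alpha = 1.0f;
    float translateX = 0.0f, translateY = 0.0f;
};

struct RenderNode {
    RenderProperties props;
    std::vector<DrawOp> displayList;                    // recorded draw ops
    std::vector<std::unique_ptr<RenderNode>> children;  // child render nodes
    int taskAmount = 0;              // "task parameter" filled in later (S301)
};

// Recording, as the UI thread would do inside draw():
void recordDemo(RenderNode& node) {
    node.displayList.push_back({DrawOp::Kind::Line, {0, 0, 100, 0, 2.0f}});
    node.displayList.push_back({DrawOp::Kind::Rect, {10, 10, 90, 50}});
}
```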
Optionally, in some embodiments of this application, the UI thread may traverse only the views in the dirty region (which may also be called the region that needs to be redrawn), generating a differential rendering tree. After the differential rendering tree is passed/synchronized to the rendering thread, the rendering thread can determine the rendering tree to be used for rendering this frame from the differential rendering tree and the rendering tree used for rendering the previous frame.
S102: The main thread synchronizes the rendering tree to the rendering thread; the rendering tree is located in the application's stack.
The UI thread passes/synchronizes the rendering tree to the rendering thread (Render Thread), where the rendering tree resides in the stack of the process corresponding to the application.
S103: The rendering thread executes the drawing instructions in the rendering tree to generate a bitmap.
The rendering thread first obtains a hardware canvas (HardwareCanvas) and performs the drawing operations in the rendering tree on that canvas, thereby generating a bitmap. The hardware canvas is located in the surface held by the application; the surface carries a bitmap or data in another format for storing image information.
S104: The rendering thread sends the surface carrying the bitmap to the surface compositor.
The rendering thread sends the generated bitmap through the surface to the surface compositor to participate in layer composition.
Step S101 can be regarded as the construction phase, mainly responsible for determining properties such as the size, position, and transparency of each view in the application. For example, drawLine in a view can be wrapped during construction into a DrawLineOp containing drawing data such as the line's length and width, and possibly the corresponding interface call of the underlying graphics processing library, used in the rendering phase to call the underlying graphics library to generate the bitmap.
Similarly, step S103 can be regarded as the rendering phase, mainly responsible for traversing the render nodes of the rendering tree and performing each render node's drawing operations, thereby generating the bitmap on the hardware canvas; in this process, the rendering thread calls the underlying graphics processing library, such as the OpenGL ES, Vulkan, or Metal library, and thereby calls the GPU to complete rendering and generate the bitmap.
FIG. 2 is an exemplary schematic diagram of the process of an electronic device generating an interface according to an embodiment of this application.
As shown in FIG. 2, the process of generating the first frame consists of ①, ②, ③, ④, and ⑤ in FIG. 2; the process of generating the second frame consists of ⑥, ⑦, and ⑤ in FIG. 2.
The process of the electronic device generating the first frame includes: after the application's UI thread receives the vertical synchronization signal, the UI thread generates the rendering tree, as in ① of FIG. 2; upon receiving the rendering tree, the application's rendering thread must convert the drawing instruction lists and render properties in the rendering tree into GPU instructions, such as Vulkan, OpenGL ES, or Metal instructions that the GPU driver can recognize and process, as in ② of FIG. 2; after receiving the GPU instructions, the GPU or GPU driver generates the bitmap, as in ③ of FIG. 2; the surface compositor and/or hardware composition strategy module receive the bitmap and perform layer composition with the bitmap as a layer, as in ④ of FIG. 2; the display subsystem receives the composited bitmap from the surface compositor and/or hardware composition strategy module and sends it for display, as in ⑤ of FIG. 2.
The process of the electronic device generating the second frame includes: after the application's UI thread receives the vertical synchronization signal, the UI thread generates the rendering tree, as in ⑥ of FIG. 2; upon receiving the rendering tree, the rendering thread must convert the drawing instruction lists and render properties in the rendering tree into GPU instructions, such as Vulkan, OpenGL ES, or Metal instructions that the GPU driver can recognize and process, as in ⑦ of FIG. 2.
However, while generating the second frame, the rendering thread takes too long to convert the drawing instruction lists and render properties in the rendering tree into instructions the GPU can recognize and process, and the bitmap cannot be passed to the surface compositor and/or hardware composition strategy module in time. Upon receiving the vertical synchronization signal, the surface compositor and/or hardware composition strategy module perform layer composition for display; since they did not receive the application's interface for the second frame, they use the application's interface from the first frame as the application's interface in the second frame for layer composition and display, as in ⑤ of FIG. 2.
Obviously, because the rendering thread failed to convert the render properties and drawing instruction lists in the rendering tree into GPU instructions in time, the GPU cannot generate the bitmap in time, causing the application's interface to freeze.
Across different scenarios, the reason the rendering thread cannot convert the rendering tree into GPU instructions in time can be summarized as: a mismatch between the load of the rendering tree corresponding to the application interface to be generated and the computing power of the electronic device. The load of a rendering tree can be expressed in multiple ways, without limitation here, for example the computation required to convert the rendering tree into GPU instructions, or the rendering tree's memory footprint. The load of the rendering tree matching the computing power of the electronic device means: with the electronic device's computing power within some range, the rendering tree can always be converted into GPU instructions within a preset time; conversely, when the conversion of the rendering tree into GPU instructions takes longer than the preset time, the application's interface freezes or drops frames, and the rendering tree's load and the electronic device's computing power are mismatched.
Optionally, in some embodiments of this application, the CPU frequency can be raised to increase the computing power of the electronic device so that the rendering thread can always convert the rendering tree into GPU instructions in time.
However, raising the CPU frequency increases the power consumption of the electronic device and reduces the energy efficiency of generating a frame of the interface. Furthermore, before the rendering thread converts the rendering tree into GPU instructions, it cannot determine the load of this conversion and thus cannot choose an appropriate CPU frequency.
On this basis, embodiments of this application provide an interface generation method and an electronic device.
The interface generation method provided by the embodiments of this application can build a load scoring model into the electronic device; while the application's UI thread generates the rendering tree, or after the UI thread has generated it, the UI thread, the rendering thread, or the unified rendering process can determine, based on the load scoring model, the load of converting the rendering tree into GPU instructions; the electronic device then selects an appropriate CPU frequency based on that load.
The unified rendering process (unirender) is a process independent of applications, used to receive the rendering trees generated by the UI threads of different applications. Applications and the unified rendering process exchange data through inter-process communication (IPC). For details about the unified rendering process, refer to the patent application No. 2021114105136 entitled "Interface generation method and electronic device" and the patent application No. 202111410643X entitled "Interface generation method and electronic device", which are not repeated here.
It can be understood that the interface generation method provided by the embodiments of this application determines, through the load scoring model, the load of converting the rendering tree into GPU instructions and then selects an appropriate CPU frequency, so that the rendering thread or the unified rendering process can finish converting the GPU instructions in time while reducing the extra power consumption brought by raising the CPU frequency.
Optionally, in some embodiments of this application, the interface generation method provided by the embodiments of this application can split the application's rendering tree, and the multiple rendering trees obtained by splitting are converted into GPU instructions by different threads.
Optionally, in some embodiments of this application, the interface generation method provided by the embodiments of this application can modify the traversal order of the rendering tree so that multiple threads can traverse one rendering tree simultaneously to generate GPU instructions.
It can be understood that the interface generation method provided by the embodiments of this application reduces the time taken to convert the rendering tree into GPU instructions through multi-threaded parallelism, thereby reducing the probability of interface freezes. Moreover, ignoring the overhead of multi-threading, if the electronic device's CPU has high energy efficiency at low frequencies, the frequency can be lowered and the rendering tree converted into GPU instructions by multiple threads in parallel, improving the energy efficiency of generating a frame without increasing the time it takes.
The interface generation method and electronic device provided by the embodiments of this application are exemplarily introduced below with reference to FIG. 3.
FIG. 3 is an exemplary schematic diagram of the flow of the interface generation method according to an embodiment of this application.
S301: After receiving the vertical synchronization signal, the application's UI thread generates the rendering tree corresponding to this frame and determines the rendering tree's task amount while generating it.
For the application's UI thread generating the rendering tree corresponding to this frame, refer to the description of FIG. 1 above, which is not repeated here.
While generating the rendering tree, the application's UI thread can determine the rendering tree's task amount by determining the task amount of each drawing operation or drawing operation structure. The rendering tree's task amount characterizes the work of converting the rendering tree into GPU instructions during the generation of this frame.
With the same CPU computing power, for example the same frequency, the rendering tree's task amount is positively correlated with the time taken to convert the rendering tree into GPU instructions.
The more complex the GPU instructions, the longer it takes to convert the rendering tree into GPU instructions; moreover, the more complex the GPU instructions, the longer the GPU takes to render the bitmap. Therefore, the rendering tree's task amount can be considered positively correlated with the time to generate this frame.
The load scoring model can be a task model table as shown below.
The electronic device locally, or a cloud that the electronic device can access, can store a task model table containing the task amount scores corresponding to different drawing operation structures, as shown in Tables 1 and 2 below.
Tables 1 and 2 are exemplary illustrations of a task model table according to an embodiment of this application.
Table 1
As shown in Table 1, the task model table stores correspondences between drawing operations or drawing operation structures and task amounts; for example, DrawRect(param1, param2) corresponds to task amount F1(param1, param2), DrawImage(param3, param4) corresponds to task amount F2(param3, param4), and ClipRect(param5) corresponds to task amount F3(param5), where F1(), F2(), and F3() are the task amount calculation functions corresponding to different drawing operations or drawing operation structures.
Optionally, in some embodiments of this application, the task model table can store correspondences between drawing operations or drawing operation structures and the time taken to convert them into GPU instructions under different CPU computing power, as shown in Table 2.
Table 2
As shown in Table 2, DrawRect(param1, param2) corresponds to time T1(CPU parameter, param1, param2), DrawImage(param3, param4) corresponds to time T2(CPU parameter, param3, param4), and ClipRect(param5) corresponds to time T3(CPU parameter, param5), where T1(), T2(), and T3() are the time calculation functions corresponding to different drawing operations or drawing operation structures.
The task model table can be generated by offline testing; for example, a device vendor's developers can test each drawing operation or drawing operation structure and record the time or task amount, thereby generating the task model table.
Alternatively, the task model table can be generated and updated online; for example, while the user is using the electronic device, the operating system on the electronic device records the time or task amount of different drawing operations or drawing operation structures during interface generation and records and updates the task model table in real time. In that case, since the conditions of electronic devices of the same model differ, the task model tables of devices of the same model can differ, evaluating the load of a frame more accurately. A device's condition can include its degree of aging, among others, which is not limited here.
Optionally, in some embodiments of this application, render properties also participate in the task amount calculation, because during the subsequent conversion into interface calls of the image processing library, a render node's render properties are also applied to the drawing operations in the render node's drawing instruction list, affecting the converted GPU instructions.
Optionally, in some embodiments of this application, the input parameters of drawing operations can be ignored: different drawing operations correspond to different task amounts, while the same drawing operation, even with different input parameters, corresponds to the same task amount.
Optionally, in some embodiments of this application, after the application's UI thread determines the rendering tree's task amount, it can save the rendering tree's task amount as a separate parameter and pass that separate parameter together with the rendering tree to the rendering thread or the unified rendering process.
Optionally, in some embodiments of this application, if the application's UI thread determines each render node's task amount while generating the rendering tree, it can save each render node's task amount into the render node, as shown in FIG. 4. The parameter used to store a render node's task amount can be called the task parameter.
FIG. 4 is an exemplary schematic diagram of determining the task amount of a rendering tree according to an embodiment of this application.
As shown in FIG. 4, the view structure corresponding to the application's current frame is: the root view (view container 0) has child views view container 1 and view container 2; view container 1 has several child views; view container 2's child views include view 22.
While the application's UI thread traverses the views, the UI thread generates the rendering tree; during generation, the UI thread determines each render node's task amount based on the task model table and then saves each render node's task amount in the corresponding node's task parameter. For example, once the UI thread has traversed the root view and view container 1, the rendering tree under generation includes the root render node and render node 1, with task parameters added to both. In FIG. 4, the root render node is the render node corresponding to the root view, and render node 1 is the render node corresponding to view container 1.
After the application's UI thread finishes traversing the views, every render node of the rendering tree includes a task parameter holding that render node's task amount. In subsequent processing, the UI thread, the rendering thread, or the rendering process can determine a render node's task amount from the task parameter in the render node.
Optionally, in some embodiments of this application, besides holding that render node's own task amount, the task parameter in the rendering tree's root render node can also hold the task amount of the whole rendering tree. Going further, for any render node in the rendering tree, its task parameter can hold, besides that node's own task amount, the task amount of the sub-rendering tree rooted at that node.
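As a hedged illustration of S301 (building on the RenderNode sketch above; the table contents and cost values are assumptions, not measured data), the following C++ sketch scores each node's display list against a task-model table and propagates subtree totals, so the root's task parameter ends up holding the whole tree's task amount:

```cpp
// Score one draw op against a stand-in for Table 1: a fixed score per op
// kind, ignoring parameters (a simplification the text explicitly allows).
#include <unordered_map>

int opTaskAmount(const DrawOp& op) {
    static const std::unordered_map<int, int> table = {
        {static_cast<int>(DrawOp::Kind::Line), 1},
        {static_cast<int>(DrawOp::Kind::Rect), 2},
        {static_cast<int>(DrawOp::Kind::Text), 5},
    };
    return table.at(static_cast<int>(op.kind));
}

// Returns the subtree task amount and stores it in each node's taskAmount,
// so every node holds the task amount of the subtree rooted at it.
int scoreSubtree(RenderNode& node) {
    int total = 0;
    for (const DrawOp& op : node.displayList) total += opTaskAmount(op);
    for (auto& child : node.children) total += scoreSubtree(*child);
    node.taskAmount = total;
    return total;
}
```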
Optionally, in some embodiments of this application, the UI thread can determine the differential rendering tree's task amount and save it in the task parameter. After the differential rendering tree is passed to the rendering thread or the unified rendering process, the rendering thread or unified rendering process generates the rendering tree corresponding to this frame from the differential rendering tree and the rendering tree corresponding to the previous frame, thereby determining this frame's rendering tree, and then determines this frame's rendering tree's task amount.
S302: When the rendering tree's task amount is greater than the task amount threshold, the application splits the rendering tree into multiple sub-rendering trees based on the task amount.
If the render nodes' task parameters are used to determine the rendering tree's task amount, as in Table 1 above, the application's UI thread, rendering thread, or unified rendering process judges whether the rendering tree's task amount is greater than task amount threshold 1; if so, it splits the rendering tree into multiple sub-rendering trees. The condition the split sub-rendering trees must satisfy is: the task amount of any one of the split sub-rendering trees is less than task amount threshold 1.
Both the task amount threshold and task amount threshold 1 can be called the first threshold.
Alternatively, if the render nodes' task parameters are used to determine the time to convert the rendering tree into GPU instructions, as in Table 2 above, the application's UI thread, rendering thread, or unified rendering process judges whether the rendering tree's task amount is greater than time threshold 1; if so, it splits the rendering tree into multiple sub-rendering trees. The condition the split sub-rendering trees must satisfy is: converting any one of them into GPU instructions takes less than time threshold 1. Time threshold 1 can be related to the screen refresh rate; for example, the higher the refresh rate, the smaller time threshold 1.
Hereinafter, the description continues with the rendering tree's task amount as the example; the rendering tree's task amount and the time to convert the rendering tree into GPU instructions play the same role, so the two are interchangeable.
Optionally, in some embodiments of this application, when the application's UI thread splits the rendering tree into multiple sub-rendering trees, the data the UI thread passes or synchronizes to the rendering thread or unified rendering process is the multiple sub-rendering trees.
Optionally, in some embodiments of this application, splitting the rendering tree can take into account thread concurrency overhead and/or dependencies between render nodes. A render node dependency means that, due to the parent-child relationships between render nodes, a child node's drawing operations must execute after the parent node's drawing operations. The parent-child relationship between render nodes may affect whether the interface can be generated correctly, because when the parent node's drawing operations and the child node's drawing operations both operate on the same pixel, the parent's drawing operations must execute before the child's. Conversely, when the parent's and child's drawing operations do not both operate on the same pixel, the parent render node's drawing operations are independent of the child node's drawing operations, with no dependency.
Optionally, in some embodiments of this application, ignoring thread concurrency overhead, the rendering tree is split so that any one sub-rendering tree's task amount (or time) is less than task amount threshold 1 (or time threshold 1). The thread concurrency overhead can also be characterized by a task amount or a time.
Optionally, in some embodiments of this application, taking thread concurrency overhead into account, the rendering tree is split so that the sum of any one sub-rendering tree's task amount (or time) and the thread concurrency task amount (or time) is less than task amount threshold 1 (or time threshold 1).
Optionally, in some embodiments of this application, when the rendering tree's task amount is determined to be greater than the task amount threshold, the operating system can adjust the CPU's computing power, for example adjusting the CPU's operating frequency. Alternatively, when the determined time to convert the rendering tree into GPU instructions exceeds the time threshold, the operating system can adjust the CPU's computing power, for example adjusting the CPU's operating frequency. For adjusting the CPU frequency based on the rendering tree's task amount, refer to the descriptions of FIG. 10 and FIG. 11 below, not repeated here.
Several ways of splitting the rendering tree are exemplarily introduced below:
(1) Splitting so that each sub-rendering tree's task amount is below the task amount threshold.
FIGS. 5A and 5B are exemplary schematic diagrams of splitting a rendering tree according to an embodiment of this application.
S501: Judge whether the rendering tree's task amount is greater than task amount threshold 1.
The application's UI thread, rendering thread, or unified rendering process determines whether the rendering tree's task amount is greater than task amount threshold 1; if so, step S502 is performed; if not, the process ends.
S502: Divide into N sub-rendering trees according to the dependencies between the root render node and the root render node's child nodes.
If the rendering tree's root render node has N child nodes, the rendering tree is split into N sub-rendering trees. The root render node's child nodes can each serve as the root nodes of the N sub-rendering trees. One of the N sub-rendering trees holds the root render node, i.e., the root render node serves as that sub-rendering tree's root node. A child node is a node directly connected to the root node; in the rendering tree, the root node is the root render node.
S503: Judge whether the task amount of each sub-rendering tree among all the sub-rendering trees is less than or equal to task amount threshold 1.
The application's UI thread, rendering thread, or unified rendering process determines whether any split sub-rendering tree's task amount is greater than task amount threshold 1; if so, step S504 is performed; if not, the process ends.
Optionally, in some embodiments of this application, when thread parallelism overhead is considered, each sub-rendering tree's task amount must additionally include one thread-parallelism overhead.
S504: Split the sub-rendering trees whose task amount is greater than task amount threshold 1.
For how to split a sub-rendering tree, refer to step S502 or the description of FIG. 5B, not repeated here.
Suppose task amount threshold 1 is 100. As shown in FIG. 5B, the root render node of a rendering tree with task amount 200 has child nodes render node 51 and render node 52; render node 51's children are render nodes 512 and 513; render node 52's children are render nodes 521, 522, and 523, and render node 522's child is render node 5221.
After the first split, a sub-rendering tree with task amount 40 and a sub-rendering tree with task amount 160 are obtained. The sub-rendering tree with task amount 40 is rooted at the root render node, whose child is render node 51; render node 51's children are render nodes 512 and 513. The rendering tree with task amount 160 is rooted at render node 52, whose children are render nodes 521, 522, and 523, with render node 5221 as the child of render node 522.
Since the rendering tree with task amount 160 has a task amount greater than task amount threshold 1, a second split is performed.
After the second split, a rendering tree with task amount 70, a rendering tree with task amount 50, and a rendering tree with task amount 40 are obtained: the rendering tree with task amount 70 is rooted at render node 52, whose child is render node 521; the rendering tree with task amount 50 is rooted at render node 522, whose child is render node 5221; the rendering tree with task amount 40 is rooted at render node 523.
After the second split, four sub-rendering trees are obtained, each with a task amount below task amount threshold 1; further, in the four sub-rendering trees, the dependencies of the render nodes are all preserved. After the four sub-rendering trees are converted into GPU instructions, the complete GPU instructions can be recovered simply by submitting the GPU instructions corresponding to the four sub-rendering trees in order into the command buffer in the command queue (Command Queue); for details, refer to the description of step S303 below, not repeated here.
The render node dependencies being preserved means: since no new parent-child relationships between render nodes are added, the internal order of each sub-rendering tree's corresponding GPU instructions does not change; further, after the GPU instructions are submitted to the command queue, the order of all GPU instructions in the command queue does not change either. It can be understood that splitting so that each sub-rendering tree's task amount is below the task amount threshold ensures that, in the subsequent conversion of the rendering tree into GPU instructions, no thread's working time exceeds the time corresponding to the task amount threshold, thereby avoiding frame drops, interface freezes, and the like.
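As a rough illustration of split method (1), the sketch below (reusing the RenderNode type assumed earlier; this is a simplified model, not the actual implementation) repeatedly detaches the direct children of any subtree whose task amount exceeds the threshold; each parent piece is emitted before its children, so the dependency order survives:

```cpp
// Split a rendering tree into subtrees whose task amounts are all at or
// below `threshold` (where possible), by peeling children off oversized
// subtrees. The returned order keeps every parent piece before its children.
#include <deque>

std::vector<std::unique_ptr<RenderNode>> splitByThreshold(
        std::unique_ptr<RenderNode> root, int threshold) {
    std::vector<std::unique_ptr<RenderNode>> subtrees;
    std::deque<std::unique_ptr<RenderNode>> pending;
    pending.push_back(std::move(root));
    while (!pending.empty()) {
        auto t = std::move(pending.front());
        pending.pop_front();
        if (t->taskAmount > threshold && !t->children.empty()) {
            // Detach each child as its own subtree; the parent piece keeps
            // only its own display list, which must still be replayed first.
            for (auto& c : t->children) {
                t->taskAmount -= c->taskAmount;
                pending.push_back(std::move(c));
            }
            t->children.clear();
        }
        subtrees.push_back(std::move(t));   // parent pieces precede children
    }
    return subtrees;
}
```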
(2) Load-balanced splitting.
FIGS. 6A and 6B are exemplary schematic diagrams of splitting a rendering tree according to an embodiment of this application.
S601: Judge whether the rendering tree's task amount is greater than task amount threshold 1.
The application's UI thread, rendering thread, or unified rendering process determines whether the rendering tree's task amount is greater than task amount threshold 1; if so, step S602 is performed; if not, the process ends.
S602: Divide into N sub-rendering trees according to the dependencies between the root render node and the root render node's child nodes.
If the root render node has N child nodes, the rendering tree is split into N rendering trees. The root render node's child nodes can each serve as the root nodes of the N sub-rendering trees. One of the N sub-rendering trees holds the root render node, i.e., the root render node serves as that sub-rendering tree's root node. A child node is a render node directly connected to the root node.
S603: Judge whether the task amount of each sub-rendering tree among all the sub-rendering trees satisfies the constraint.
The application's UI thread, rendering thread, or unified rendering process judges whether the sub-rendering trees' task amounts satisfy the constraint; if so, the process ends; if not, step S604 is performed.
Optionally, in some embodiments of this application, the minimum number of sub-rendering trees needed can first be determined from the rendering tree's task amount and task amount threshold 1. For example, if the rendering tree's task amount is 200 and task amount threshold 1 is 30, the minimum number of sub-rendering trees needed is determined to be 7. After 7 rendering trees are obtained by splitting, each sub-rendering tree is checked against the constraint shown below; if the constraint is not satisfied, the sub-rendering trees are split further; once the constraint is satisfied, splitting stops.
For example, let the rendering tree's task amount be load_total, let task amount threshold 1 be load_threshold, let the thread concurrency overhead be cost, and let the task amount of the i-th sub-rendering tree be load_i, where i denotes the i-th sub-rendering tree and N is the number of concurrent threads (the number of sub-rendering trees). Then the constraints load_total/N − ε ≤ load_i ≤ load_total/N + ε and load_i + cost ≤ load_threshold must be satisfied for every i.
Without considering the thread concurrency overhead, the constraints are load_total/N − ε ≤ load_i ≤ load_total/N + ε and load_i ≤ load_threshold.
If load_threshold = 80, load_total = 200, N = 5, and ε = 10, then load_total/N = 40; that is, the task amounts of all sub-rendering trees must lie between 30 and 50 for the constraint to be satisfied.
If N = 2, the size of N (the number of rendering trees) is first adjusted so that load_total/N ≤ load_threshold, and only afterwards is it judged whether load_total/N − ε ≤ load_i ≤ load_total/N + ε is satisfied.
Optionally, in some embodiments of this application, the number of splits of the rendering tree can also be determined first from load_total and load_threshold. If load_threshold = 80 and load_total = 200, N can be obtained by computing N = ⌈load_total/load_threshold⌉ = ⌈200/80⌉ = 3, where ⌈·⌉ is the round-up (ceiling) function.
S604: Move render nodes or further split sub-rendering trees.
The application's UI thread, rendering thread, or unified rendering process can move render nodes, moving render nodes from a sub-rendering tree with a high task amount into a sub-rendering tree with a low task amount. Note that after render nodes are moved, the dependencies between render nodes may be broken.
Alternatively, the application's UI thread, rendering thread, or unified rendering process splits the sub-rendering tree with the high task amount.
FIG. 6B shows the same rendering tree as FIG. 5B. Following the method shown in FIG. 6A, with load_threshold = 80, load_total = 200, and ε = 11, in the case N = 4 the constraint is 50 − 11 ≤ load_i ≤ 50 + 11. Since the second split produces a sub-rendering tree with task amount 70, the constraint is not satisfied and render nodes must be moved or sub-rendering trees split further. Render node 52 has a task amount of 40 and render node 521 a task amount of 30, so the constraint cannot be satisfied by moving render nodes; the sub-rendering tree with task amount 70 is therefore split further.
In the case N = 5, the split sub-rendering trees' task amounts are 40, 40, 30, 50, and 40, satisfying the constraint.
It can be understood that the load-balanced method of splitting the rendering tree, by balancing the task amounts of the different split sub-rendering trees, avoids the short-board effect in the subsequent conversion of the rendering tree into GPU instructions, i.e., avoids an individual sub-rendering tree with an excessive task amount prolonging the conversion of the rendering tree into GPU instructions.
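A hedged C++ sketch of the load-balancing check described above follows; the function name and the exact placement of the overhead term are assumptions:

```cpp
// Verify a candidate split against method (2): N must be at least
// ceil(load_total / load_threshold), every subtree load must stay under the
// threshold, and every load must lie within load_total/N ± epsilon.
#include <cmath>
#include <vector>

bool balancedSplitOk(const std::vector<int>& subtreeLoads,
                     int loadThreshold, double epsilon) {
    int loadTotal = 0;
    for (int l : subtreeLoads) loadTotal += l;
    const int n = static_cast<int>(subtreeLoads.size());
    const int minN = static_cast<int>(
        std::ceil(static_cast<double>(loadTotal) / loadThreshold));
    if (n < minN) return false;             // must first increase N
    const double target = static_cast<double>(loadTotal) / n;
    for (int l : subtreeLoads) {
        if (l > loadThreshold) return false;               // hard cap
        if (std::abs(l - target) > epsilon) return false;  // balance window
    }
    return true;
}
// With the FIG. 6B numbers, loads {40, 40, 30, 50, 40}, threshold 80, and
// epsilon 11: target is 40, all loads fall in [29, 51], so the check passes.
```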
Optionally, in some embodiments of this application, when splitting the rendering tree or splitting sub-rendering trees, render nodes with sibling relationships can be split apart preferentially.
Optionally, in some embodiments of this application, when splitting the rendering tree or splitting sub-rendering trees, if a render node's render properties are empty, that render node's drawing operations can be split into different sub-rendering trees.
Optionally, in some embodiments of this application, when splitting the rendering tree or splitting sub-rendering trees, if a render node does not depend on any render node, that render node can be moved arbitrarily into different rendering subtrees. For example, in FIG. 5B, if the root render node is a transparent render node, render node 51 depends on no render node, and render node 52 depends on no render node.
(3) Splitting by division of display regions.
FIG. 7 is another exemplary schematic diagram of splitting a rendering tree according to an embodiment of this application.
The interface that the application is to generate and display in the next frame is interface 701. Mutually non-occluding regions can be laid out on the application's window in multiple ways; the content displayed in the different regions is the display content corresponding to the different split sub-rendering trees.
For example, interface 701 (excluding the status bar) is divided into region 1 and region 2; the views corresponding to region 1 include view container 1 and view container 1's child nodes; the views corresponding to region 2 include view container 2 and view container 2's child nodes such as view 22. The application's UI thread can then directly generate two sub-rendering trees while generating the rendering tree: sub-rendering tree 1 corresponding to region 1 and sub-rendering tree 2 corresponding to region 2.
Alternatively, after the application's UI thread generates the rendering tree, the rendering tree is divided into sub-rendering tree 1 and sub-rendering tree 2 according to the division of the regions.
(4) Splitting the DrawOps.
Optionally, in some embodiments of this application, after the rendering thread or the rendering sub-threads of the unified rendering process traverse the rendering tree to obtain the chain-stored DrawOps, the DrawOps can be split directly.
S303: The application creates multiple rendering threads and converts the multiple sub-rendering trees into GPU instructions in parallel.
The application creates multiple rendering threads, or the unified rendering process creates multiple rendering sub-threads, and the multiple sub-rendering trees are then converted into GPU instructions in parallel.
The number of rendering threads matches the number of split sub-rendering trees, or the number of rendering sub-threads matches the number of split sub-rendering trees.
FIG. 8 is an exemplary schematic diagram of converting rendering trees into GPU instructions in parallel according to an embodiment of this application.
As shown in FIG. 8, after the rendering tree is split, sub-rendering tree 1 through sub-rendering tree N are obtained.
Different threads, such as rendering threads or rendering sub-threads, each traverse a different sub-rendering tree to obtain GPU instructions. The unified rendering process, the application's UI thread, or the rendering thread can also request N command buffers from a command buffer pool, such as command buffer 1 through command buffer N in FIG. 8; the N command buffers are used to store the GPU instructions corresponding to the different sub-rendering trees.
Optionally, in some embodiments of this application, while traversing the rendering tree, a thread first wraps the drawing operations into drawing operation structures (for example DrawOps) and then converts them into GPU instructions, such as interface calls in the OpenGL ES, Vulkan, or Metal library.
After the different threads obtain the GPU instructions, the GPU instructions in each command buffer are submitted in order into a buffer in the instruction queue (such as the primary buffer). The order matches the traversal order of the render nodes in the rendering tree: if the render nodes in sub-rendering tree 1 are all traversed earlier than those in sub-rendering tree N, the GPU instructions from command buffer 1 in the primary buffer are executed first, and the GPU instructions from command buffer N in the primary buffer are executed later. Submitting the data of the different command buffers into the buffer in the instruction queue can be implemented in multiple ways, such as the following, without limitation here.
Optionally, in some embodiments of this application, when the electronic device converts the rendering tree into GPU instructions through the Vulkan library, the GPU instructions in command buffer 1 or command buffer N can, for example, be submitted to the primary buffer in the instruction queue through a vkQueueSubmit method call. Since data synchronization from multiple command buffers into the buffer in the instruction queue is involved, the electronic device can complete the synchronization through semaphores.
Optionally, in some embodiments of this application, the electronic device can also move the data of the multiple command buffers into the buffer in the instruction queue through pointer operations.
Optionally, in some embodiments of this application, the electronic device can also move the data of the multiple command buffers into the buffer in the instruction queue by copying.
Optionally, in some embodiments of this application, command buffer 1 can itself be the buffer in the instruction queue, such as the primary buffer; the GPU instructions in the other command buffers then need to be submitted into command buffer 1.
Alternatively, optionally, in some embodiments of this application, instead of requesting N command buffers from the command buffer pool, N address ranges can be laid out in the primary buffer to carry the GPU instructions corresponding to the different sub-rendering trees. For example, 0x000000-0x0000FF is the address range corresponding to the first command buffer and 0x000100-0x0001FF the address range corresponding to the second command buffer.
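For the Vulkan variant mentioned above, one plausible arrangement is sketched below under stated assumptions: the device, queue, one command pool per thread (Vulkan command pools are externally synchronized), and a recordSubtree() traversal routine are taken as given, and render-pass state is omitted. Each thread records one secondary command buffer for its sub-rendering tree; the main thread then replays them in subtree order inside the primary command buffer before a single vkQueueSubmit:

```cpp
#include <thread>
#include <vector>
#include <vulkan/vulkan.h>

void recordSubtree(VkCommandBuffer cb, const RenderNode& subtree);  // assumed

void renderInParallel(VkDevice device, VkQueue queue,
                      std::vector<VkCommandPool>& pools,  // one pool per thread
                      VkCommandBuffer primary,
                      std::vector<RenderNode*>& subtrees) {
    const size_t n = subtrees.size();
    std::vector<VkCommandBuffer> secondaries(n);
    std::vector<std::thread> workers;
    for (size_t i = 0; i < n; ++i) {
        workers.emplace_back([&, i] {
            VkCommandBufferAllocateInfo ai{
                VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO, nullptr,
                pools[i], VK_COMMAND_BUFFER_LEVEL_SECONDARY, 1};
            vkAllocateCommandBuffers(device, &ai, &secondaries[i]);
            VkCommandBufferInheritanceInfo inh{
                VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO};
            VkCommandBufferBeginInfo begin{
                VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO, nullptr, 0, &inh};
            vkBeginCommandBuffer(secondaries[i], &begin);
            recordSubtree(secondaries[i], *subtrees[i]);  // subtree -> GPU ops
            vkEndCommandBuffer(secondaries[i]);
        });
    }
    for (auto& w : workers) w.join();
    // Replay in subtree order so the render-node dependencies are preserved.
    VkCommandBufferBeginInfo begin{VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO};
    vkBeginCommandBuffer(primary, &begin);
    vkCmdExecuteCommands(primary, static_cast<uint32_t>(n), secondaries.data());
    vkEndCommandBuffer(primary);
    VkSubmitInfo si{VK_STRUCTURE_TYPE_SUBMIT_INFO};
    si.commandBufferCount = 1;
    si.pCommandBuffers = &primary;
    vkQueueSubmit(queue, 1, &si, VK_NULL_HANDLE);
}
```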
S304: After the GPU executes the GPU instructions to generate the bitmap, the surface compositor and/or hardware composition strategy module perform layer composition on the bitmap for display.
After the bitmap is generated, the bitmap is obtained by the surface compositor and/or hardware composition strategy module and participates in layer composition; the composited bitmap is obtained by the display subsystem and sent for display.
After the electronic device executes the interface generation method shown in FIG. 3, the process of the electronic device generating the interface is as shown in FIGS. 9A and 9B.
FIGS. 9A and 9B are another exemplary schematic diagram of the process of an electronic device generating an interface according to an embodiment of this application.
Comparing FIG. 9A with FIG. 2 (content identical to FIG. 2 is not repeated): in ⑦ of FIG. 9A, since the electronic device converts the rendering tree into GPU instructions through two rendering threads (rendering thread 1 and rendering thread 2 in FIG. 9A), the latency of converting the rendering tree into GPU instructions is reduced; the GPU can receive the GPU instructions in time and generate the bitmap, as in ⑧ of FIG. 9A; the GPU can then pass the generated bitmap to the surface compositor and/or hardware composition strategy module, as in ⑨ of FIG. 9A; finally, the display subsystem can send the bitmap that has participated in layer composition for display, as in ⑩ of FIG. 9A, without interface freezing.
FIG. 9B differs from FIG. 9A in that the process of converting the rendering tree into GPU instructions can be performed by the rendering sub-threads of the unified rendering process, as in ⑦ of FIG. 9B. For example, the rendering sub-threads of the unified rendering process can be rendering sub-thread 1 and rendering sub-thread 2 in FIG. 9B; content identical between FIG. 9A and FIG. 9B is not repeated.
The flow of the interface generation method in one embodiment of this application has been introduced above with reference to FIG. 3. Flows of other interface generation methods different from FIG. 3 are introduced below.
Optionally, in some embodiments of this application, after the rendering tree's task amount is determined, the rendering tree need not be split; instead, the time taken to convert the rendering tree into GPU instructions is reduced by increasing the CPU's computing power.
FIG. 10 is another exemplary schematic diagram of the flow of the interface generation method according to an embodiment of this application.
S1001: After receiving the vertical synchronization signal, the application's UI thread generates the rendering tree corresponding to this frame and determines the rendering tree's task amount while generating it.
For the content of step S1001, refer to the description of step S301 above, not repeated here.
S1002: When the rendering tree's task amount is greater than the task amount threshold, adjust the CPU's computing power based on the rendering tree's task amount.
When the rendering tree's task amount is greater than the task amount threshold, the application's UI thread, rendering thread, or unified rendering process passes the rendering tree's task amount to the operating system. The operating system adjusts the CPU's computing power based on the rendering tree's task amount, for example adjusting the CPU frequency.
FIG. 11 is an exemplary schematic diagram of adjusting CPU computing power based on the task amount of the rendering tree according to an embodiment of this application.
For example, as shown in FIG. 11, when the rendering tree's task amount is 0-100, the rendering tree's task amount need not be sent to the operating system, or the operating system does not adjust the CPU frequency upon receiving it; when the task amount is 101-200, the operating system adjusts the CPU frequency to frequency 1 upon receiving the task amount; when the task amount is 201-300, the operating system adjusts the CPU frequency to frequency 2 upon receiving the task amount, where frequency 2 is higher than frequency 1.
After the rendering tree is converted into GPU instructions, the operating system adjusts the CPU's computing power back to the state before adjustment, such as the default frequency in FIG. 11.
Optionally, in some embodiments of this application, when the rendering tree is split and traversed by multiple threads, the CPU frequency can be lowered. For example, when the rendering tree's task amount is 101-200, the operating system, upon receiving it, decides to process the rendering tree in parallel with three threads and adjusts the CPU frequency to frequency 3; when the task amount is 201-300, the operating system, upon receiving it, decides to process the rendering tree in parallel with five threads and adjusts the CPU frequency to frequency 4, where frequency 3 is lower than frequency 4.
It can be understood that adjusting the CPU's computing power can reduce the time taken to convert the rendering tree into GPU instructions, thereby reducing the probability of interface freezes and frame drops.
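As a hedged illustration of S1002, the sketch below maps a task amount to a frequency tier following FIG. 11; the tier values and the sysfs write are assumptions (a production system would typically go through the platform's DVFS or scheduler service, and the write shown requires the userspace cpufreq governor and root privileges):

```cpp
#include <fstream>

// Map the frame's task amount to a target frequency; 0 means "leave default".
long frequencyForTaskAmount(int taskAmount) {
    if (taskAmount <= 100) return 0;         // tier 0: no adjustment
    if (taskAmount <= 200) return 1'400'000; // kHz, "frequency 1" (assumed)
    return 2'000'000;                        // kHz, "frequency 2" (assumed)
}

void applyCpuFrequencyKHz(long khz) {
    if (khz == 0) return;
    // Assumes the userspace governor is active on cpu0; illustrative only.
    std::ofstream f("/sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed");
    if (f) f << khz;
}
```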
S1003: After the rendering tree is converted into GPU instructions, the bitmap is generated, and the surface compositor and/or hardware composition strategy module perform layer composition on the bitmap for display.
For the content of step S1003, refer to FIG. 1 and the description in step S304, not repeated here.
Optionally, in some embodiments of this application, after the rendering tree's task amount is determined, the rendering tree need not be split; instead, multiple threads traverse the rendering tree in different traversal orders, thereby reducing the time taken to convert the rendering tree into GPU instructions.
FIG. 12 is another exemplary schematic diagram of the interface generation method according to an embodiment of this application.
S1201: After receiving the vertical synchronization signal, the application's UI thread generates the rendering tree corresponding to this frame and determines the rendering tree's task amount while generating it.
For step S1201, refer to the description in step S301, not repeated here.
S1202: When the rendering tree's task amount is greater than the task amount threshold, multiple rendering threads traverse the rendering tree in parallel in different orders and convert the rendering tree into GPU instructions.
The number of rendering threads can be determined from the rendering tree's task amount. For example, let the rendering tree's task amount be load_total, the task amount threshold be load_threshold, and the number of rendering threads be N; then N = ⌈load_total/load_threshold⌉, where ⌈·⌉ is the round-up (ceiling) function. Different rendering threads traverse the rendering tree in different orders to generate GPU instructions, as shown in FIG. 13 and FIG. 14.
Optionally, in some embodiments of this application, the number of rendering threads can also be a preconfigured fixed value.
FIG. 13 is an exemplary schematic diagram of rendering threads traversing the rendering tree in different orders according to an embodiment of this application.
As shown in FIG. 13, the rendering tree's root node is the root render node, whose children are render node 1 and render node 2; render node 1's children are render nodes 11 and 12; render node 2's children are render nodes 21, 22, and 23, and render node 22's child is render node 221.
First, thread 1's traversal order is: the root render node, render node 1, render node 11, and render node 12. Thread 2's traversal order is: render node 2, render node 21, render node 22, render node 23, and render node 221.
Then, by the time thread 1 has finished traversing the root render node, render node 1, render node 11, and render node 12, thread 2 has traversed as far as render node 21.
Optionally, in some embodiments of this application, when thread 1 has finished traversing render node 1, render node 11, and render node 12 and thread 2 has traversed as far as render node 21, thread 2 continues traversing render node 22, render node 23, and render node 221. Then, following what is shown in FIG. 8, the GPU instructions generated by the different threads traversing the render nodes reside in different command buffers and must be submitted in order into the buffer of the instruction queue.
Optionally, in some embodiments of this application, when thread 1 has finished traversing render node 1, render node 11, and render node 12 and thread 2 has traversed as far as render node 21, thread 2's traversal order continues as render node 22, render node 23, and render node 221, while thread 1 traverses from the other end in the order render node 23, render node 22, and render node 221. When traversed this way, render node 23 is traversed by thread 1, while render node 22 and render node 221 are traversed by thread 2. Then, following what is shown in FIG. 14, the GPU instructions of the different command buffers are submitted into the buffer of the instruction queue.
FIG. 14 is an exemplary schematic diagram of submitting GPU instructions to the instruction queue according to an embodiment of this application.
Since render node 23 is traversed by thread 1, the GPU instructions corresponding to render node 23 are stored in command buffer 2 together with thread 1's earlier output. To reconstruct the dependencies of the render nodes in the rendering tree, the order between the different GPU instructions must be adjusted while submitting the GPU instructions of command buffer 1 and command buffer 2 to the command queue.
For example, the GPU instructions corresponding to the root render node, render node 1, render node 11, and render node 12 are first submitted from command buffer 2 into the buffer in the command queue; then all GPU instructions in command buffer 1 are submitted into the buffer in the command queue; finally, the remaining GPU instructions in command buffer 2 (i.e., the GPU instructions corresponding to render node 23) are submitted into the buffer of the command queue.
Optionally, in some embodiments of this application, all GPU instructions of command buffer 2 and command buffer 1 can also be submitted in sequence into the buffer of the command queue. In that case, the dependencies of the rendering tree's render nodes may not be reconstructed: if render node 23 has a dependency on render node 2, the dependencies of the rendering tree's render nodes are not reconstructed; if render node 23 has no dependency on render node 2, the dependencies are reconstructed.
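The reordering of FIG. 14 can be sketched as a merge of two recorded instruction spans; in the sketch below the instruction streams are modeled as opaque vectors, and the offset where render node 23's instructions begin is assumed to be known from recording:

```cpp
#include <vector>

using GpuInstructions = std::vector<int>;   // opaque GPU instruction stream

// Merge two command buffers into the command-queue buffer so that a node
// recorded early by the helper thread is replayed after its dependencies.
GpuInstructions mergeForQueue(const GpuInstructions& buffer2,
                              size_t node23Offset,  // where node 23 starts
                              const GpuInstructions& buffer1) {
    GpuInstructions queue;
    // 1) buffer 2 up to node 23: root render node, nodes 1, 11, 12
    queue.insert(queue.end(), buffer2.begin(), buffer2.begin() + node23Offset);
    // 2) all of buffer 1: nodes 2, 21, 22, 221
    queue.insert(queue.end(), buffer1.begin(), buffer1.end());
    // 3) the rest of buffer 2: node 23, which must follow its parent, node 2
    queue.insert(queue.end(), buffer2.begin() + node23Offset, buffer2.end());
    return queue;
}
```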
S1203: After the GPU executes the GPU instructions to generate the bitmap, the surface compositor and/or hardware composition strategy module perform layer composition on the bitmap for display.
For the content of step S1203, refer to the description in step S304 above, not repeated here.
The hardware structure and software architecture of the electronic device provided by the embodiments of this application are introduced below.
FIG. 15 is an exemplary schematic diagram of the hardware structure of the electronic device according to an embodiment of this application.
The electronic device may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, an in-vehicle device, a smart home device, and/or a smart city device; the embodiments of this application place no special restriction on the specific type of the electronic device.
The electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the structure illustrated in the embodiments of the present invention does not constitute a specific limitation on the electronic device. In other embodiments of this application, the electronic device may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or may be integrated into one or more processors.
The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of fetching and executing instructions.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. This memory can hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory, avoiding repeated access and reducing the waiting time of the processor 110, thereby improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus including a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, a charger, a flash, the camera 193, etc., through different I2C bus interfaces. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface, implementing the touch function of the electronic device.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit audio signals to the wireless communication module 160 through the I2S interface, implementing the function of answering calls through a Bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing, and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, implementing the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, the UART interface is typically used to connect the processor 110 and the wireless communication module 160. For example: the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function. In some embodiments, the audio module 170 may transmit audio signals to the wireless communication module 160 through the UART interface, implementing the function of playing music through a Bluetooth headset.
The MIPI interface may be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193. The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), etc. In some embodiments, the processor 110 and the camera 193 communicate through the CSI interface to implement the shooting function of the electronic device. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, etc. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, etc.
The USB interface 130 is an interface conforming to USB standard specifications, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, etc. The USB interface 130 may be used to connect a charger to charge the electronic device, and may also be used to transmit data between the electronic device and peripheral devices. It may also be used to connect headsets and play audio through the headsets. The interface may also be used to connect other electronic devices, such as AR devices.
It can be understood that the interface connection relationships between the modules illustrated in the embodiments of the present invention are only illustrative and do not constitute a structural limitation on the electronic device. In other embodiments of this application, the electronic device may also adopt interface connection methods different from those of the above embodiments, or a combination of multiple interface connection methods.
The charging management module 140 is used to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger through the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device. While charging the battery 142, the charging management module 140 may also supply power to the electronic device through the power management module 141.
The power management module 141 is used to connect the battery 142, the charging management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, the wireless communication module 160, etc. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may also be provided in the same device.
The wireless communication function of the electronic device may be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, etc.
The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device may be used to cover a single or multiple communication frequency bands. Different antennas may also be multiplexed to improve antenna utilization. For example: the antenna 1 may be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antennas may be used in combination with a tuning switch.
The mobile communication module 150 can provide solutions for wireless communication including 2G/3G/4G/5G applied on the electronic device. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc. The mobile communication module 150 can receive electromagnetic waves via the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves for radiation through the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 150 and at least some modules of the processor 110 may be provided in the same device.
The modem processor may include a modulator and a demodulator. The modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.) or displays images or videos through the display screen 194. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and provided in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 can provide solutions for wireless communication applied on the electronic device, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, etc. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 can also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves for radiation through the antenna 2.
In some embodiments, the antenna 1 of the electronic device is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or the satellite based augmentation systems (SBAS).
The electronic device implements display functions through the GPU, the display screen 194, the application processor, etc. The GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, etc. The display screen 194 includes a display panel. The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), etc. In some embodiments, the electronic device may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device can implement shooting functions through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, etc.
The ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter opens, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the camera's photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithmic optimization on the image's noise and brightness. The ISP can also optimize parameters such as exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture static images or videos. An object generates an optical image through the lens and projects it onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and then passes the electrical signal to the ISP to convert it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals; besides digital image signals, it can also process other digital signals. For example, when the electronic device selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency point energy.
The video codec is used to compress or decompress digital video. The electronic device can support one or more video codecs. Thus, the electronic device can play or record videos in multiple encoding formats, for example: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor; by drawing on the structure of biological neural networks, for example the transmission pattern between neurons in the human brain, it processes input information quickly and can also continuously self-learn. Applications such as intelligent cognition of the electronic device can be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The internal memory 121 may include one or more random access memories (RAM) and one or more non-volatile memories (NVM).
Random access memory may include static random-access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM; for example, fifth-generation DDR SDRAM is generally called DDR5 SDRAM), etc.
Non-volatile memory may include magnetic disk storage devices and flash memory.
Flash memory can be divided by operating principle into NOR FLASH, NAND FLASH, 3D NAND FLASH, etc.; by storage cell potential levels into single-level cell (SLC), multi-level cell (MLC), triple-level cell (TLC), quad-level cell (QLC), etc.; and by storage specification into universal flash storage (UFS), embedded multimedia card (eMMC), etc.
Random access memory may be read and written directly by the processor 110; it may be used to store the executable programs (such as machine instructions) of the operating system or other running programs, and may also be used to store user and application data, etc.
Non-volatile memory may also store executable programs, user and application data, etc., which may be loaded into random access memory in advance for direct reading and writing by the processor 110.
The external memory interface 120 may be used to connect external non-volatile memory to expand the storage capability of the electronic device. The external non-volatile memory communicates with the processor 110 through the external memory interface 120 to implement data storage functions, for example saving files such as music and videos in the external non-volatile memory.
The electronic device can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output, and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110, or some functional modules of the audio module 170 may be provided in the processor 110.
The speaker 170A, also called a "horn", is used to convert audio electrical signals into sound signals. The electronic device can listen to music or hands-free calls through the speaker 170A.
The receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals. When the electronic device answers a call or a voice message, the receiver 170B can be placed close to the ear to hear the voice.
The microphone 170C, also called a "mic" or "mouthpiece", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C. The electronic device may be provided with at least one microphone 170C. In other embodiments, the electronic device may be provided with two microphones 170C to implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device may be provided with three, four, or more microphones 170C to implement sound signal collection and noise reduction, and also to identify sound sources and implement directional recording functions, etc.
The headset jack 170D is used to connect wired headsets. The headset jack 170D may be the USB interface 130, or a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense pressure signals and can convert pressure signals into electrical signals. In some embodiments, the pressure sensor 180A may be provided on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates with conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device determines the intensity of the pressure based on the change in capacitance. When a touch operation acts on the display screen 194, the electronic device detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device may also calculate the touch position based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with an intensity below a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation with an intensity at or above the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
The gyroscope sensor 180B may be used to determine the motion posture of the electronic device. In some embodiments, the angular velocities of the electronic device around three axes (i.e., the x, y, and z axes) may be determined through the gyroscope sensor 180B. The gyroscope sensor 180B may be used for image stabilization during shooting. Exemplarily, when the shutter is pressed, the gyroscope sensor 180B detects the shaking angle of the electronic device, calculates from the angle the distance the lens module needs to compensate, and lets the lens counteract the shaking of the electronic device through reverse motion, achieving image stabilization. The gyroscope sensor 180B may also be used in navigation and motion-sensing gaming scenarios.
The barometric pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device calculates altitude from the air pressure value measured by the barometric pressure sensor 180C, assisting positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device can use the magnetic sensor 180D to detect the opening and closing of a flip leather case. In some embodiments, when the electronic device is a flip phone, the electronic device can detect the opening and closing of the flip cover according to the magnetic sensor 180D, and then, according to the detected opening/closing state of the case or the cover, set features such as automatic unlocking upon flip opening.
The acceleration sensor 180E can detect the magnitude of the electronic device's acceleration in all directions (generally three axes). When the electronic device is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device and is applied to applications such as landscape/portrait switching and pedometers.
The distance sensor 180F is used to measure distance. The electronic device can measure distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device can use the distance sensor 180F to measure distance to achieve fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device emits infrared light outward through the light-emitting diode. The electronic device uses the photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device; when insufficient reflected light is detected, the electronic device can determine that there is no object near the electronic device. The electronic device can use the proximity light sensor 180G to detect that the user is holding the electronic device close to the ear during a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G can also be used in leather-case mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense ambient light brightness. The electronic device can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness. The ambient light sensor 180L can also be used to automatically adjust the white balance when taking photos. The ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device is in a pocket, preventing accidental touches.
The fingerprint sensor 180H is used to collect fingerprints. The electronic device can use the collected fingerprint characteristics to implement fingerprint unlocking, accessing application locks, fingerprint photography, fingerprint call answering, etc.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device reduces the performance of a processor located near the temperature sensor 180J to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device heats the battery 142 to avoid abnormal shutdown of the electronic device due to low temperature. In some other embodiments, when the temperature is below yet another threshold, the electronic device boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
The touch sensor 180K is also called a "touch device". The touch sensor 180K may be provided on the display screen 194; the touch sensor 180K and the display screen 194 form a touch screen, also called a "touch-control screen". The touch sensor 180K is used to detect touch operations acting on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation can be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be provided on the surface of the electronic device, at a position different from that of the display screen 194.
The bone conduction sensor 180M can obtain vibration signals. In some embodiments, the bone conduction sensor 180M can obtain the vibration signal of the vibrating bone block of the human vocal part. The bone conduction sensor 180M can also contact the human pulse and receive blood pressure beat signals. In some embodiments, the bone conduction sensor 180M can also be provided in a headset, combined into a bone conduction headset. The audio module 170 can parse out a voice signal based on the vibration signal of the vibrating bone block of the vocal part obtained by the bone conduction sensor 180M, implementing a voice function. The application processor can parse heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M, implementing a heart rate detection function.
The key 190 includes a power key, a volume key, etc. The key 190 may be a mechanical key or a touch key. The electronic device can receive key input and generate key signal input related to user settings and function control of the electronic device.
The motor 191 can generate vibration prompts. The motor 191 can be used for incoming call vibration prompts and can also be used for touch vibration feedback. For example, touch operations acting on different applications (such as photographing and audio playback) can correspond to different vibration feedback effects. For touch operations acting on different areas of the display screen 194, the motor 191 can also correspond to different vibration feedback effects. Different application scenarios (for example: time reminders, receiving messages, alarm clocks, games, etc.) can also correspond to different vibration feedback effects. Touch vibration feedback effects can also support customization.
The indicator 192 may be an indicator light, which may be used to indicate the charging status and power changes, and may also be used to indicate messages, missed calls, notifications, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact with and separation from the electronic device. The electronic device can support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time; the types of the multiple cards can be the same or different. The SIM card interface 195 can also be compatible with different types of SIM cards. The SIM card interface 195 can also be compatible with external memory cards. The electronic device interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device uses an eSIM, i.e., an embedded SIM card; the eSIM card can be embedded in the electronic device and cannot be separated from the electronic device.
FIG. 16 is an exemplary schematic diagram of the software structure of the electronic device according to an embodiment of this application.
The software system of the electronic device may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. The embodiments of the present invention take the Android system with a layered architecture as an example to exemplarily describe the software structure of the electronic device.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, from top to bottom: the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in FIG. 16, the application packages may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer. The application framework layer includes some predefined functions.
As shown in FIG. 16, the application framework layer may include a window manager, content providers, a view system, a telephony manager, a resource manager, a notification manager, etc.
The window manager is used to manage window programs. The window manager can obtain the display screen size, determine whether there is a status bar, lock the screen, capture the screen, etc.
Content providers are used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls, such as controls for displaying text and controls for displaying images. The view system can be used to build applications. A display interface may be composed of one or more views. For example, a display interface including a short message notification icon may include a view displaying text and a view displaying an image.
The telephony manager is used to provide communication functions of the electronic device, for example the management of call status (including connecting, hanging up, etc.).
The resource manager provides various resources for applications, such as localized strings, icons, images, layout files, and video files.
The notification manager enables applications to display notification information in the status bar; it can be used to convey notification-type messages and can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc. The notification manager can also present notifications in the system top status bar in the form of charts or scroll bar text, such as notifications of applications running in the background, or notifications appearing on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is emitted, the electronic device vibrates, the indicator light flashes, etc.
Optionally, in some embodiments of this application, the view system includes a rendering tree task amount estimation module, which determines the rendering tree's task amount during the generation of the rendering tree or after the rendering tree is generated.
Optionally, in some embodiments of this application, the view system includes a rendering tree splitting module, which can split the rendering tree in different ways; the different rendering trees are traversed by different threads to generate GPU instructions.
Optionally, in some embodiments of this application, the CPU scheduling module can adjust the CPU's computing power based on the rendering tree's task amount, for example adjusting the CPU frequency.
The Android Runtime includes core libraries and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core libraries consist of two parts: one part is the functions that the Java language needs to call, and the other part is the Android core libraries.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system libraries may include multiple functional modules, for example: a browser engine (webkit), a rendering engine, a surface compositor, a hardware composition strategy module, media libraries (Media Libraries), an image processing library (for example: OpenGL ES), a rendering engine (such as the Skia library), etc.
The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as static image files, etc. The media libraries can support a variety of audio and video encoding formats, for example: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
The image processing library is used to implement three-dimensional graphics drawing, image rendering, etc.
The rendering engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. The kernel layer includes a display driver, a camera driver, an audio driver, a sensor driver, etc.
Optionally, in some embodiments of this application, the display subsystem includes the display driver.
As used in the above embodiments, depending on the context, the term "when" may be interpreted to mean "if", "after", "in response to determining", or "in response to detecting". Similarly, depending on the context, the phrases "upon determining" or "if (the stated condition or event) is detected" may be interpreted to mean "if determining", "in response to determining", "upon detecting (the stated condition or event)", or "in response to detecting (the stated condition or event)".
In the above embodiments, implementation may be in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (such as coaxial cable, optical fiber, digital subscriber line) or wireless (such as infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state drive), etc.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments may be completed by a computer program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as ROM, random access memory (RAM), magnetic disks, or optical disks.

Claims (23)

  1. An interface generation method, characterized in that it is applied to an electronic device running a first application, the method comprising:
    the electronic device generating a first rendering tree, the first rendering tree comprising drawing operations for generating a frame of the first application's interface;
    the electronic device splitting the first rendering tree into N sub-rendering trees, N being greater than 1;
    the electronic device converting the N sub-rendering trees into first rendering instructions in parallel, rendering instructions being instructions in a rendering engine, an image processing library, or a GPU driver;
    the electronic device generating a frame of the first application's interface based on the first rendering instructions.
  2. The method according to claim 1, characterized in that the electronic device splitting the first rendering tree into N sub-rendering trees specifically comprises:
    the electronic device determining a first task amount, the first task amount being the task amount of the first rendering tree and used to indicate the time or computation required to convert the first rendering tree into the first rendering instructions;
    in response to the electronic device determining that the first task amount is greater than a first threshold, the electronic device splitting the first rendering tree into the N sub-rendering trees.
  3. The method according to claim 2, characterized in that after the electronic device determines the first task amount, the method further comprises:
    the electronic device determining M based on the first task amount and the first threshold, M being less than or equal to N, and M being an integer greater than or equal to the ratio of the first task amount to the first threshold;
    the electronic device determining an integer greater than or equal to M as N.
  4. The method according to claim 2, characterized in that the electronic device determining the first task amount specifically comprises:
    the electronic device determining the first task amount by determining the task amounts of the drawing operations in the first rendering tree.
  5. The method according to any one of claims 1-4, characterized in that the N sub-rendering trees comprise a second rendering tree and a third rendering tree, the difference between a second task amount and a third task amount being less than a difference threshold, the second task amount being the task amount of the second rendering tree and used to measure the time or computation required to convert the second rendering tree into rendering instructions, and the third task amount being the task amount of the third rendering tree and used to measure the time or computation required to convert the third rendering tree into rendering instructions.
  6. The method according to claim 1 or 2, characterized in that the electronic device splitting the first rendering tree into N sub-rendering trees specifically comprises:
    the electronic device determining that the root render node of the first rendering tree has N child nodes, a child node being a render node directly connected to the root render node;
    the electronic device splitting the first rendering tree into N sub-rendering trees.
  7. The method according to claim 1 or 2, characterized in that the electronic device splitting the first rendering tree into N sub-rendering trees specifically comprises:
    the electronic device dividing the interface of the first application into N regions;
    the electronic device splitting the first rendering tree into N sub-rendering trees based on the N regions, the N sub-rendering trees corresponding one-to-one to the N regions.
  8. The method according to claim 1, characterized in that the electronic device splitting the first rendering tree into N sub-rendering trees specifically comprises:
    the electronic device determining a first task amount, the first task amount being the task amount of the first rendering tree and used to measure the time or computation required to convert the first rendering tree into rendering instructions, the first task amount being greater than a first threshold;
    the electronic device determining that the root render node of the first rendering tree has K child nodes, K being less than N;
    the electronic device splitting the first rendering tree into K sub-rendering trees;
    after the electronic device determines that the task amount of a fourth rendering tree is greater than the first threshold, the electronic device splitting the fourth rendering tree into N−K+1 rendering subtrees, the K sub-rendering trees comprising the fourth rendering tree, the task amounts of the N rendering subtrees all being less than the first threshold.
  9. The method according to any one of claims 1-7, characterized in that the electronic device converting the N sub-rendering trees into first rendering instructions in parallel specifically comprises:
    the electronic device filling the instructions converted from the N sub-rendering trees into N buffers through N threads respectively;
    the electronic device submitting the instructions of the N buffers into a first buffer, the instructions in the first buffer being the first rendering instructions.
  10. An interface generation method, characterized in that it is applied to an electronic device on which a first process runs, the method comprising:
    the electronic device generating a first rendering tree through the first process, the first rendering tree comprising drawing operations for generating a frame of the first process's interface;
    the electronic device splitting the first rendering tree into a second rendering tree and a third rendering tree, the second rendering tree comprising part of the drawing operations in the first rendering tree, the third rendering tree comprising part of the drawing operations in the first rendering tree, the second rendering tree and the third rendering tree being different;
    the electronic device converting the second rendering tree into first rendering instructions through a first thread, the first rendering instructions being stored in a first buffer, rendering instructions being instructions in a rendering engine, an image processing library, or a GPU driver;
    the electronic device converting the third rendering tree into second rendering instructions through a second thread, the second instructions being stored in a second buffer;
    the electronic device generating a frame of the first process's interface based on the first rendering instructions and the second rendering instructions.
  11. The method according to claim 10, characterized in that the method further comprises:
    the electronic device determining a first task amount, the first task amount being the task amount of the first rendering tree and used to measure the time or computation required to convert the first rendering tree into rendering instructions, the first task amount being greater than a first threshold.
  12. The method according to claim 10 or 11, characterized in that the electronic device generating a frame of the first process's interface based on the first instructions and the second instructions specifically comprises:
    the first rendering instructions being located in the first buffer held by the first thread and the second rendering instructions being located in the second buffer held by the second thread, the electronic device submitting the instructions in the first buffer and the rendering instructions in the second buffer into a third buffer;
    the electronic device generating a frame of the first process's interface based on the third buffer.
  13. The method according to claim 12, characterized in that the third buffer is the second buffer, or the third buffer is the first buffer.
  14. An interface generation method, characterized in that it is applied to an electronic device on which a first application runs, the method comprising:
    the electronic device generating a first rendering tree, the first rendering tree comprising drawing operations for generating a frame of the first application's interface;
    the electronic device determining a first task amount, the first task amount being the task amount of the first rendering tree and used to measure the time or computation required to convert the first rendering tree into rendering instructions, the first task amount being greater than a first threshold, rendering instructions being instructions in a rendering engine, an image processing library, or a GPU driver;
    if the first task amount is greater than the first threshold, the electronic device configuring the CPU operating frequency to change from a first frequency to a second frequency, the second frequency being higher than the first frequency;
    the electronic device generating a frame of the first application's interface based on the first rendering tree;
    during the process of the electronic device generating the frame of the first application's interface, the electronic device operating at the second frequency.
  15. The method according to claim 14, wherein the electronic device generating a frame of the first application's interface based on the first rendering tree specifically comprises:
    the electronic device splitting the first rendering tree into N sub-rendering trees, N being an integer greater than 1;
    the electronic device converting the N sub-rendering trees into first rendering instructions in parallel;
    the electronic device generating a frame of the first application's interface based on the first rendering instructions.
  16. An interface generation method, characterized in that it is applied to an electronic device on which a first application runs, the method comprising:
    the electronic device generating a first rendering tree, the first rendering tree comprising drawing operations for generating a frame of the first application's interface;
    the electronic device traversing different parts of the first rendering tree through multiple different threads to generate first rendering instructions, rendering instructions being instructions in a rendering engine, an image processing library, or a GPU driver;
    the electronic device generating a frame of the first application's interface based on the first rendering instructions.
  17. The method according to claim 16, characterized in that before the electronic device traverses the first rendering tree in different orders through multiple different threads to generate the first instructions, the method further comprises:
    the electronic device determining a first task amount, the first task amount being the task amount of the first rendering tree and used to measure the time or computation required to convert the first rendering tree into the first rendering instructions;
    the electronic device determining that the first task amount is greater than a first threshold.
  18. The method according to claim 16 or 17, characterized in that the electronic device traversing the first rendering tree in different orders through multiple different threads to generate the first instructions specifically comprises:
    the electronic device traversing a first part of the first rendering tree through a first thread, storing the generated second rendering instructions in a first buffer;
    the electronic device traversing a second part of the first rendering tree through a second thread, storing the generated third rendering instructions in a second buffer;
    the electronic device submitting the rendering instructions of the first buffer and the rendering instructions in the second buffer into a third buffer to obtain the first rendering instructions.
  19. The method according to claim 18, characterized in that the electronic device submitting the rendering instructions of the first buffer and the rendering instructions in the second buffer into a third buffer to obtain the first rendering instructions specifically comprises:
    the rendering instructions of the first buffer comprising second rendering instructions and third rendering instructions, the instructions of the second buffer comprising fourth rendering instructions;
    the instructions of the third buffer being arranged in the order: the second rendering instructions, the fourth rendering instructions, the third rendering instructions.
  20. An electronic device, characterized in that the electronic device comprises: one or more processors and a memory;
    the memory being coupled to the one or more processors, the memory being used to store computer program code, the computer program code comprising computer instructions, and the one or more processors invoking the computer instructions to cause the electronic device to perform the method according to any one of claims 1-19.
  21. A chip system, characterized in that the chip system is applied to an electronic device, the chip system comprising one or more processors, the processors being used to invoke computer instructions to cause the electronic device to perform the method according to any one of claims 1-19.
  22. A computer-readable storage medium comprising instructions, characterized in that when the instructions run on an electronic device, the electronic device is caused to perform the method according to any one of claims 1-19.
  23. A computer program product comprising instructions, characterized in that when the computer program product runs on an electronic device, the electronic device is caused to perform the method according to any one of claims 1-19.
PCT/CN2023/124034 2022-10-19 2023-10-11 Interface generation method and electronic device WO2024083014A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211281435.9 2022-10-19
CN202211281435.9A CN117806745A (zh) 2022-10-19 Interface generation method and electronic device

Publications (1)

Publication Number Publication Date
WO2024083014A1 true WO2024083014A1 (zh) 2024-04-25

Family

ID=90432198

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/124034 WO2024083014A1 (zh) 2022-10-19 2023-10-11 界面生成方法及电子设备

Country Status (2)

Country Link
CN (1) CN117806745A (zh)
WO (1) WO2024083014A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210264648A1 (en) * 2018-06-15 2021-08-26 Swiftclass Sa Graphics rendering
CN110764849A (zh) * 2018-07-25 2020-02-07 优视科技有限公司 Rendering method and device for a user interface, client apparatus, and electronic device
CN113138655A (zh) * 2021-04-02 2021-07-20 Oppo广东移动通信有限公司 Method and apparatus for adjusting processor frequency, electronic device, and storage medium
CN114443197A (zh) * 2022-01-24 2022-05-06 北京百度网讯科技有限公司 Interface processing method and apparatus, electronic device, and storage medium
CN114748873A (zh) * 2022-06-14 2022-07-15 北京新唐思创教育科技有限公司 Interface rendering method, apparatus, device, and storage medium

Also Published As

Publication number Publication date
CN117806745A (zh) 2024-04-02

Similar Documents

Publication Publication Date Title
WO2021036735A1 (zh) Method for displaying a user interface and electronic device
CN115473957B (zh) Image processing method and electronic device
WO2020093988A1 (zh) Image processing method and electronic device
WO2020253758A1 (zh) User interface layout method and electronic device
WO2022199509A1 (zh) Method for an application to perform drawing operations, and electronic device
WO2020155875A1 (zh) Display method for an electronic device, graphical user interface, and electronic device
CN116048933B (zh) Smoothness detection method
WO2023066165A1 (zh) Animation effect display method and electronic device
WO2023016014A1 (zh) Video editing method and electronic device
WO2023071482A1 (zh) Video editing method and electronic device
WO2024083014A1 (zh) Interface generation method and electronic device
WO2024082987A1 (zh) Interface generation method and electronic device
WO2024061292A1 (zh) Interface generation method and electronic device
WO2024083009A1 (zh) Interface generation method and electronic device
WO2024046010A1 (zh) Interface display method, device, and system
WO2022262291A1 (zh) Method and system for invoking image data of an application, electronic device, and storage medium
WO2023066177A1 (zh) Animation effect display method and electronic device
WO2024067551A1 (zh) Interface display method and electronic device
WO2023246783A1 (zh) Method for adjusting device power consumption, and electronic device
US20240069845A1 (en) Focus synchronization method and electronic device
WO2023001208A1 (zh) Multi-file synchronization method and electronic device
WO2022166550A1 (zh) Data transmission method and electronic device
WO2023179123A1 (zh) Bluetooth audio playback method, electronic device, and storage medium
WO2023051036A1 (zh) Method and apparatus for loading shaders
CN116166257A (zh) Interface generation method and electronic device