WO2024082987A1 - Interface generation method and electronic device

Interface generation method and electronic device

Info

Publication number
WO2024082987A1
Authority
WO
WIPO (PCT)
Prior art keywords
interface
electronic device
animation
rendering
rendering instruction
Prior art date
Application number
PCT/CN2023/123558
Other languages
English (en)
French (fr)
Inventor
廖恒
陆壬淼
李煜
刘道勇
周越海
徐俊
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2024082987A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/451 - Execution arrangements for user interfaces
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/04 - Texture mapping

Definitions

  • the present application relates to the field of electronic technology, and in particular to an interface generation method and an electronic device.
  • the resolution and refresh rate of the screens of electronic devices are getting higher and higher; the resolution of the screen determines the number of pixels contained in one frame of the interface, and the refresh rate determines the time available to generate one frame of the interface.
  • Before the electronic device displays a first frame interface, it needs to expend computing resources to generate the first frame interface; before the electronic device displays a second frame interface, it needs to expend computing resources again to generate the second frame interface.
  • When the electronic device fails to generate the second frame interface in time, the content displayed on the screen of the electronic device will freeze. In order to ensure that the second frame interface can be generated in time, the electronic device often increases the CPU operating frequency to improve the computing power of the electronic device, which leads to higher energy consumption for generating a frame of the interface and reduces the energy efficiency of interface generation.
  • the embodiments of the present application provide an interface generation method and an electronic device. Considering the continuity of an animation, after the electronic device generates a first rendering instruction, the first rendering instruction can be directly updated based on the logic of the animation, and the interfaces during the animation process can be generated based on the updated first rendering instruction. Furthermore, the interface generation method provided by the embodiments of the present application can improve the energy efficiency of interface generation by reducing the number of times the application traverses its views and renders to generate bitmaps.
  • an embodiment of the present application provides an interface generation method, which is applied to an electronic device on which a first application is installed, the method comprising: after the electronic device receives a first operation, the electronic device determines the description information of the interface during a first animation process, and the first operation is used to trigger the electronic device to display the first animation through the first application; the electronic device generates a first rendering instruction, the first rendering instruction is data to be executed by a GPU to generate a first interface, and the first interface is a frame interface during the first animation process; the electronic device updates the first rendering instruction to a second rendering instruction based on the description information of the interface during the animation process; the electronic device generates the second interface based on the second rendering instruction, and the second interface is different from the first interface, and the second interface is a frame interface during the first animation process.
  • in this way, the electronic device directly modifies the first rendering instruction into the second rendering instruction through the logic of the animation, thereby reducing the overhead of generating the second rendering instruction and improving the energy efficiency of interface generation.
  • the electronic device updates the first rendering instruction to the second rendering instruction based on the description information of the interface during the animation process, specifically including: the electronic device determines a first parameter based on the description information of the interface during the first animation process, and the first parameter is used to describe the changed properties of the first control in the second interface, and the first control is a view whose display effect changes in the second interface; the electronic device updates the first rendering instruction based on the first parameter to obtain the second rendering instruction.
  • the electronic device may determine the changed control, determine the changed property of the second control, and then modify the first rendering instruction to the second rendering instruction.
  • before the electronic device receives the first operation, the method also includes: the electronic device displays a desktop, the desktop includes a first control, and the first control corresponds to a first application; the electronic device receives the first operation, specifically including: the electronic device detects that a user clicks on the first control; after the electronic device receives the first operation, the method also includes: the first animation is a startup animation of the first application, and in the first animation, the position and size of the second control change; the electronic device determines a first parameter based on the description information of the interface during the first animation, specifically including: the electronic device determines a first position, the first parameter includes the first position, the first position is the position of the second control in a second interface, and the second interface is a non-first frame interface of the first animation; the electronic device updates the first rendering instruction based on the first parameter to obtain the second rendering instruction, specifically including: the electronic device obtains the second rendering instruction by modifying the vertex position of the second control in the first rendering instruction to the first position.
  • the position and size of the second control change, and the second rendering instruction corresponding to the second interface can be obtained by modifying the first rendering instruction, and then the second interface is generated based on the second rendering instruction.
  • the electronic device obtains the second rendering instruction by modifying the vertex position of the second control in the first rendering instruction to the first position, specifically including: the electronic device updates the vertex position used by the first method call to the first position based on the first parameter to obtain the second rendering instruction, the first method call being a method call to be executed by the GPU to draw the second control.
  • the second rendering instruction can be obtained by modifying the input parameters in the method call for drawing the second control in the first rendering instruction.
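  • For illustration only, the following Java sketch shows one way such an in-place update could look: a recorded draw call keeps its vertex positions as mutable input parameters, and only those parameters are overwritten for the next frame. The class and method names (RecordedDrawCall, updateVertexPositions) are hypothetical and not taken from the embodiment.

```java
// Hypothetical sketch: a recorded draw call whose input parameters can be patched
// in place, so the method call itself does not have to be re-recorded per frame.
final class RecordedDrawCall {
    // Vertex positions (x, y pairs) used when the call is executed by the GPU.
    private final float[] vertexPositions;

    RecordedDrawCall(float[] vertexPositions) {
        this.vertexPositions = vertexPositions.clone();
    }

    // Overwrite only the vertex positions ("first position" of the animated control);
    // everything else about the draw call stays identical to the first rendering instruction.
    void updateVertexPositions(float[] newPositions) {
        System.arraycopy(newPositions, 0, vertexPositions, 0, newPositions.length);
    }

    float[] currentVertexPositions() {
        return vertexPositions.clone();
    }
}
```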
  • the first parameter is used to describe the color, vertex position, transparency and/or scaling of the first view.
  • the first parameter may include the color, vertex position, transparency and/or scaling ratio, etc., used to describe the view involved in the animation, which is not limited here.
  • the first parameter is written into a first data structure by the first application, and the first data structure is bound to a rendering pipeline; in the rendering pipeline, the first parameter is read by the electronic device to modify the rendering instruction.
  • the first parameter needs to be passed to the rendering pipeline so that it can be read by the electronic device when modifying the rendering instruction.
  • the first data structure is a uniform.
  • the first parameter may be located in a uniform and then participate in the rendering pipeline.
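  • As a minimal illustration of placing a per-frame parameter in a uniform, the Java/OpenGL ES sketch below uploads a translation, scale and alpha value to a shader uniform; the uniform name u_controlTransform, the parameter packing and the program handle are assumptions made for the example and must match whatever the shaders actually declare. The call must run on a thread with a current GL context.

```java
import android.opengl.GLES20;

final class UniformAnimationParameter {
    // Write the per-frame animation parameter ("first parameter") into a uniform that
    // is bound to the rendering pipeline; the recorded draw calls remain untouched.
    static void uploadFrameParameter(int shaderProgram, float translateX, float translateY,
                                     float scale, float alpha) {
        GLES20.glUseProgram(shaderProgram);
        // "u_controlTransform" is an assumed uniform declared in the vertex/fragment shader.
        int location = GLES20.glGetUniformLocation(shaderProgram, "u_controlTransform");
        if (location >= 0) {
            GLES20.glUniform4f(location, translateX, translateY, scale, alpha);
        }
    }
}
```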
  • the method also includes: after the electronic device receives the first operation, the electronic device configures an update task, and the update task is configured in the GPU driver; the electronic device updates the first rendering instruction based on the first parameter to obtain the second rendering instruction, specifically including: the electronic device replaces the second parameter in the first rendering instruction with the first parameter through the update task to obtain the second rendering instruction, and the second parameter is used to describe the properties of the first control in the first interface.
  • an update task may be configured in the GPU driver, and before driving the GPU to generate the second interface, the second parameter in the rendering instruction is replaced with the first parameter, thereby generating a second rendering instruction.
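  • The patent places the update task in the GPU driver; the Java sketch below only illustrates the replacement idea at a higher level, with hypothetical names (UpdateTask, parameter slots). It swaps the old per-frame parameter (second parameter) for the new one (first parameter) immediately before the instruction is handed to the GPU.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of an update task: before each frame is submitted, the parameter
// describing the control in the previous frame (second parameter) is replaced with the
// parameter describing the control in the next frame (first parameter).
final class UpdateTask {
    // Parameter slots referenced by the recorded rendering instruction, keyed by name.
    private final Map<String, float[]> parameterSlots = new ConcurrentHashMap<>();

    void registerSlot(String slotName, float[] secondParameter) {
        parameterSlots.put(slotName, secondParameter);
    }

    // Runs once per frame, before the GPU executes the instruction.
    void runBeforeSubmit(String slotName, float[] firstParameter) {
        parameterSlots.put(slotName, firstParameter);
    }

    float[] resolve(String slotName) {
        return parameterSlots.get(slotName);
    }
}
```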
  • the method also includes: the electronic device determines, based on the description information of the interface during the first animation process, that the background image or foreground image of the first control in the first interface is different from the background image or foreground image of the first control in the second interface; the electronic device loads a first texture, which is the background image or foreground image of the first control in the second interface, and the first parameter includes an identifier of the first texture, which is the background image or foreground image of the first view in the second interface.
  • that is, the animation may also involve updating the texture of a view.
  • since the second rendering instruction is not generated by the electronic device based on the rendering tree after traversing the views, the electronic device also needs to read the texture into memory and pass the texture identifier to the rendering pipeline through the first parameter.
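  • A minimal Java/OpenGL ES sketch of this texture update is shown below, assuming the new background or foreground image is available as a Bitmap and that the shader declares a sampler uniform named u_controlTexture; both assumptions are illustrative. The calls must run on a thread with a current GL context.

```java
import android.graphics.Bitmap;
import android.opengl.GLES20;
import android.opengl.GLUtils;

final class TextureUpdate {
    // Load the new background/foreground image of the control as a texture and return
    // its identifier, so the identifier can be handed to the rendering pipeline
    // through the per-frame parameter.
    static int loadFirstTexture(Bitmap imageOfSecondInterface) {
        int[] ids = new int[1];
        GLES20.glGenTextures(1, ids, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ids[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, imageOfSecondInterface, 0);
        return ids[0];
    }

    // Bind the texture identifier for drawing and tell the shader which unit to sample.
    static void bindAsFirstParameter(int shaderProgram, int textureId) {
        GLES20.glUseProgram(shaderProgram);
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
        int sampler = GLES20.glGetUniformLocation(shaderProgram, "u_controlTexture");
        GLES20.glUniform1i(sampler, 0);
    }
}
```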
  • the method calls of the first rendering instruction and the second rendering instruction are the same, the resources in the first rendering instruction and the resources in the second rendering instruction are different, the resources in the first rendering instruction are the variables or fixed values used by the method call in the first rendering instruction when it is executed, and the resources in the second rendering instruction are the variables or fixed values used by the method call in the second rendering instruction when it is executed.
  • the method calls of the first rendering instruction and the second rendering instruction are the same, but the resources used by the method calls are different.
  • the electronic device determines the description information of the interface during the first animation process, specifically including: after the electronic device receives the first operation, the electronic device determines that the triggered animation is the first animation; the electronic device determines the view involved in the first animation, and the view involved in the first animation is the view whose display content changes during the first animation process; the electronic device determines the attributes of the view involved in the first animation in one or more frames of the interface during the first animation process.
  • the electronic device can determine the description information of the interface during the triggered animation process, and then determine the first parameter. Secondly, since there is no need to wait until the next frame of the interface is generated to know the properties of the view in the next frame of the interface, there is no need for the UI thread and the rendering thread to participate in the animation process.
  • an embodiment of the present application provides an interface generation method, which is applied to an electronic device on which a first application is installed, the method comprising: after the electronic device receives a first operation, the first application generates a first rendering tree, the first rendering tree stores drawing operations for generating a first interface, the first interface is the first frame interface of the first application in a first animation, the first operation is used to trigger the electronic device to display the first animation through the first application, the position of a first control changes between the first interface and the second interface, the first control is at the first position in the second interface, and the second interface is a non-first frame interface in the first animation; the first application converts the first rendering tree into a first rendering instruction; the electronic device calls the GPU to generate the first interface based on the first rendering instruction; the electronic device updates the first parameter in the first rendering instruction to the first position to obtain a second rendering instruction; and the electronic device calls the GPU to generate the second interface based on the second rendering instruction.
  • during the generation of the first frame interface of the animation, the electronic device generates a first rendering instruction based on the rendering tree; during the generation of the other frame interfaces of the animation, the electronic device updates the first rendering instruction to a second rendering instruction and then generates those frame interfaces.
  • the electronic device does not need to generate a rendering tree corresponding to the second rendering instruction, nor does it need to generate the second rendering instruction based on the rendering tree, thereby improving the energy efficiency of interface generation during the animation process.
  • the electronic device determines the description information of the interface during the first animation process, and the description information of the interface during the first animation process includes the second position.
  • that is, the electronic device also needs to determine the properties of the views in the second frame interface before the second frame interface starts to be generated, and then update the first rendering instruction to the second rendering instruction.
  • after the electronic device receives the first operation, the electronic device configures an update task, and the update task is configured in the GPU driver; the electronic device updates the first parameter in the first rendering instruction to the first position to obtain the second rendering instruction, specifically including: the electronic device updates the first parameter in the first rendering instruction to the first position through the update task to obtain the second rendering instruction.
  • an update task can be configured in the GPU driver to update the first rendering instruction to the second rendering instruction; after the second rendering instruction is generated, the GPU generates the second frame interface.
  • the method also includes: the electronic device determines, based on the description information of the interface during the first animation process, that the background image or foreground image of the first control in the first interface is different from the background image or foreground image of the first control in the second interface; the electronic device loads a first texture, which is the background image or foreground image of the first control in the second interface, and the first parameter includes an identifier of the first texture, which is the background image or foreground image of the first view in the second interface.
  • that is, the animation may also involve updating the texture of a view.
  • since the second rendering instruction is not generated by the electronic device based on the rendering tree after traversing the views, the electronic device also needs to read the texture into memory and pass the texture identifier to the rendering pipeline through the first parameter.
  • an embodiment of the present application provides an electronic device, on which a first application is installed, and the electronic device includes: one or more processors and a memory; the memory is coupled to the one or more processors, the memory is used to store computer program code, the computer program code includes computer instructions, and the one or more processors call the computer instructions to enable the electronic device to execute: after the electronic device receives a first operation, the electronic device determines the description information of the interface during the first animation process, and the first operation is used to trigger the electronic device to display the first animation through the first application; the electronic device generates a first rendering instruction, the first rendering instruction is data to be executed by a GPU to generate a first interface, and the first interface is a frame interface during the first animation process; the electronic device updates the first rendering instruction to a second rendering instruction based on the description information of the interface during the animation process; the electronic device generates the second interface based on the second rendering instruction, the second interface is different from the first interface, and the second interface is a frame interface during the first animation process.
  • the one or more processors are specifically used to call the computer instruction to cause the electronic device to execute: the electronic device determines a first parameter based on the description information of the interface during the first animation process, and the first parameter is used to describe the changed properties of the first control in the second interface, and the first control is a view whose display effect changes in the second interface; the electronic device updates the first rendering instruction based on the first parameter to obtain the second rendering instruction.
  • the one or more processors are further used to call the computer instruction so that the electronic device executes: the electronic device displays a desktop, the desktop includes a first control, and the first control corresponds to a first application; the one or more processors are specifically used to call the computer instruction so that the electronic device executes: the electronic device detects that a user clicks on the first control; the one or more processors are further used to call the computer instruction so that the electronic device executes: the first animation is a startup animation of the first application, and in the first animation, the position and size of the second control change; the one or more processors are specifically used to call the computer instruction so that the electronic device executes: the electronic device determines a first position, the first parameter includes the first position, the first position is the position of the second control in a second interface, and the second interface is a non-first frame interface of the first animation; the electronic device obtains the second rendering instruction by modifying the vertex position of the second control in the first rendering instruction to the first position.
  • the one or more processors are specifically used to call the computer instruction to cause the electronic device to execute: the electronic device updates the vertex position used by the first method call based on the first parameter to the first position to obtain the second rendering instruction, and the first method call is a method call used to be instructed by the GPU to draw the second control.
  • the first parameter is used to describe the color, vertex position, transparency and/or scaling of the first view.
  • the first parameter is written into the first data structure by the first application.
  • the first data structure is bound to a rendering pipeline; in the rendering pipeline, the first parameter is read by the electronic device to modify a rendering instruction.
  • the first data structure is a uniform.
  • the one or more processors are also used to call the computer instruction to enable the electronic device to execute: after the electronic device receives the first operation, the electronic device configures an update task, and the update task is configured in the GPU driver; the electronic device updates the first rendering instruction based on the first parameter to obtain the second rendering instruction, specifically including: the electronic device replaces the second parameter in the first rendering instruction with the first parameter through the update task to obtain the second rendering instruction, and the second parameter is used to describe the properties of the first control in the first interface.
  • the one or more processors are also used to call the computer instructions to cause the electronic device to execute: the electronic device determines, based on the description information of the interface during the first animation process, that the background image or foreground image of the first control in the first interface is different from the background image or foreground image of the first control in the second interface; the electronic device loads a first texture, which is the background image or foreground image of the first control in the second interface, and the first parameter includes an identifier of the first texture, which is the background image or foreground image of the first view in the second interface.
  • the method calls of the first rendering instruction and the second rendering instruction are the same, the resources in the first rendering instruction and the resources in the second rendering instruction are different, the resources in the first rendering instruction are the variables or fixed values used by the method call in the first rendering instruction when it is executed, and the resources in the second rendering instruction are the variables or fixed values used by the method call in the second rendering instruction when it is executed.
  • the one or more processors are specifically used to call the computer instructions to cause the electronic device to execute: after the electronic device receives the first operation, the electronic device determines that the triggered animation is the first animation; the electronic device determines the view involved in the first animation, and the view involved in the first animation is the view whose display content changes during the first animation; the electronic device determines the properties of the view involved in the first animation in one or more frames of the interface during the first animation.
  • an embodiment of the present application provides an electronic device, on which a first application is installed, and the electronic device includes: one or more processors and a memory; the memory is coupled to the one or more processors, the memory is used to store computer program code, the computer program code includes computer instructions, and the one or more processors call the computer instructions to enable the electronic device to execute: after the electronic device receives a first operation, the first application generates a first rendering tree, the first rendering tree stores drawing operations for generating a first interface, the first interface is the first frame interface of the first application in a first animation, the first operation is used to trigger the electronic device to display the first animation through the first application, the position of the first control changes between the first interface and the second interface, the first control is at the first position in the second interface, and the second interface is a non-first frame interface in the first animation; the first application converts the first rendering tree into a first rendering instruction; the electronic device calls the GPU to generate the first interface based on the first rendering instruction; the electronic device updates the first parameter in the first rendering instruction to the first position to obtain a second rendering instruction; and the electronic device calls the GPU to generate the second interface based on the second rendering instruction.
  • the one or more processors are also used to call the computer instruction to cause the electronic device to execute: after the electronic device receives the first operation, the electronic device determines the description information of the interface during the first animation process, and the description information of the interface during the first animation process includes the second position.
  • the one or more processors are also used to call the computer instruction to cause the electronic device to execute: after the electronic device receives the first operation, the electronic device configures an update task, and the update task is configured in the GPU driver; the electronic device updates the first parameter in the first rendering instruction to the first position to obtain the second rendering instruction, specifically including: the electronic device updates the first parameter in the first rendering instruction to the first position through the update task to obtain the second rendering instruction.
  • the one or more processors are also used to call the computer instructions to cause the electronic device to execute: the electronic device determines, based on the description information of the interface during the first animation process, that the background image or foreground image of the first control in the first interface is different from the background image or foreground image of the first control in the second interface; the electronic device loads a first texture, which is the background image or foreground image of the first control in the second interface, and the first parameter includes an identifier of the first texture, which is the background image or foreground image of the first view in the second interface.
  • an embodiment of the present application provides a chip system, which is applied to an electronic device, and the chip system includes one or more processors, which are used to call computer instructions so that the above-mentioned electronic device executes the method described in the first aspect, the second aspect, any possible implementation method in the first aspect, and any possible implementation method in the second aspect.
  • an embodiment of the present application provides a computer program product comprising instructions, which, when the computer program product is run on an electronic device, enables the electronic device to perform a method as described in the first aspect, the second aspect, any possible implementation of the first aspect, and any possible implementation of the second aspect.
  • an embodiment of the present application provides a computer-readable storage medium, including instructions, which, when executed on an electronic device, cause the electronic device to perform the method described in the first aspect, the second aspect, any possible implementation of the first aspect, or any possible implementation of the second aspect.
  • the electronic device provided in the third aspect and the fourth aspect, the chip system provided in the fifth aspect, the computer program product provided in the sixth aspect, and the computer storage medium provided in the seventh aspect are all used to execute the method provided in the embodiments of the present application. Therefore, for the beneficial effects that can be achieved, reference may be made to the beneficial effects of the corresponding method, which are not repeated here.
  • FIG. 1 is an exemplary schematic diagram of an application generating a bitmap provided in an embodiment of the present application.
  • FIG. 2A and FIG. 2B are exemplary schematic diagrams of interface changes during an animation process provided in an embodiment of the present application.
  • FIG. 3 is an exemplary schematic diagram of interface generation during the animation process provided in an embodiment of the present application.
  • FIG. 4A and FIG. 4B are exemplary schematic diagrams of a UI thread executing a measurement method call and a layout method call to determine the position and size of a control that is not an animation object, provided by an embodiment of the present application.
  • FIG. 5 is an exemplary schematic diagram of the process of the interface generation method provided in an embodiment of the present application.
  • FIG. 6A is an exemplary schematic diagram of updating a GPU instruction by updating a vertex position provided in an embodiment of the present application.
  • FIG. 6B to FIG. 6E are other exemplary schematic diagrams of updating GPU instructions by updating vertex positions provided in an embodiment of the present application.
  • FIGS. 7A-7D are exemplary schematic diagrams of the data flow in the interface generation method provided in an embodiment of the present application.
  • FIG. 8A and FIG. 8B are exemplary schematic diagrams of a method for implementing three-dimensional animation provided by an embodiment of the present application.
  • FIG. 9 is an exemplary schematic diagram of the hardware structure of an electronic device provided in an embodiment of the present application.
  • FIG. 10 is an exemplary schematic diagram of the software architecture of the electronic device provided in an embodiment of the present application.
  • FIG. 11A and FIG. 11B are exemplary schematic diagrams of the data flow and software module interaction of the interface generation method provided in an embodiment of the present application.
  • the terms "first" and "second" are used for descriptive purposes only and are not to be understood as suggesting or implying relative importance or implicitly indicating the number of the indicated technical features.
  • a feature defined as "first" or "second" may explicitly or implicitly include one or more of the features, and in the description of the embodiments of the present application, unless otherwise specified, "plurality" means two or more.
  • the interface is a medium for interaction and information exchange between the application and the user. Every time a vertical synchronization signal arrives, the electronic device needs to generate the interface of the foreground application.
  • the frequency of the vertical synchronization signal is related to the refresh rate of the screen of the electronic device. For example, the frequency of the vertical synchronization signal is the same as the refresh rate of the screen of the electronic device.
  • to generate the interface of an application, the electronic device requires the application to render and generate a bitmap by itself and to pass that bitmap to the surface compositor (SurfaceFlinger). That is, the application, as the producer, performs drawing to generate the bitmap and stores the bitmap in the buffer queue (BufferQueue) provided by the surface compositor; the surface compositor, as the consumer, continuously obtains the bitmaps generated by the application from the BufferQueue. The bitmap is located on a surface generated by the application, and the surface is filled into the BufferQueue.
  • after the surface compositor obtains the bitmaps of the visible applications, the surface compositor and the hardware compositing strategy module (HWC) determine how to composite the bitmaps as layers.
  • after the surface compositor and/or the hardware compositing strategy module performs bitmap composition, the surface compositor and/or the hardware compositing strategy module fills the composited bitmap into the frame buffer and passes it to the display subsystem (DSS). After obtaining the composited bitmap, the DSS can display it on the screen.
  • the frame buffer may be an on-screen buffer. The bitmap on the surface compositor may also be called a layer.
  • FIG. 1 is an exemplary schematic diagram of an application generating a bitmap provided in an embodiment of the present application.
  • After receiving the vertical synchronization signal (Vsync), the application starts to generate a bitmap.
  • the specific steps can be divided into four steps, namely step S101 , step S102 , step S103 and step S104 .
  • S101 The main thread traverses the views of the application and saves the drawing operation of each view into a newly generated rendering tree.
  • specifically, the main thread (UI thread) invalidates the view hierarchy, traverses the application's views through measurement method calls (measure()), layout method calls (layout()) and drawing method calls (draw(), also called drawing recording method calls), determines and saves the drawing operations of each view, and records each view and the drawing operations involved in the view (such as drawLine) into the drawing instruction list (displaylist) of the corresponding rendering node (RenderNode) of the rendering tree.
  • the data saved in the drawing instruction list can be a drawing operation structure (DrawOP or DrawListOP).
  • the view is the basic element that constitutes the application interface, and a control on the interface can correspond to one or more views.
  • within the drawing method call, the UI thread of the application also reads the content carried by the view into memory, for example, the image carried by an image view (imageview) and the text carried by a text view (textview).
  • the UI thread of the application determines the operation of reading the content carried by the view into the memory and records it in the drawing instruction list.
  • the drawing operation structure in the drawing instruction list can also be called a drawing instruction.
  • the drawing operation structure is a data structure used to draw graphics, such as drawing lines, drawing rectangles, drawing text, etc.
  • the drawing operation structure is converted by the rendering engine into an API call of the image processing library, that is, an interface call in the OpenGLES library, the Vulkan library or the Metal library.
  • for example, drawing a line may be encapsulated as a DrawLineOp, which is a data structure containing drawing data such as the length and width of the line.
  • the DrawLineOp is further encapsulated as an interface call in the OpenGLES library, the Vulkan library or the Metal library, from which GPU instructions are obtained.
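  • The following Java sketch illustrates the idea of such a drawing operation structure and its later conversion into graphics-library calls; the field layout and the OpenGL ES translation are simplified assumptions (a real DrawLineOp also carries paint, color and transform state).

```java
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Hypothetical sketch of a DrawLineOp-style drawing operation: the drawing data is
// recorded when the view is traversed, and only converted into graphics-library calls
// in the rendering phase.
final class DrawLineOp {
    final float x0, y0, x1, y1;
    final float lineWidth;

    DrawLineOp(float x0, float y0, float x1, float y1, float lineWidth) {
        this.x0 = x0; this.y0 = y0; this.x1 = x1; this.y1 = y1;
        this.lineWidth = lineWidth;
    }

    // Rendering phase: translate the recorded data into OpenGL ES calls.
    // Assumes a current GL context and "positionAttrib" as the shader's vertex attribute.
    void execute(int positionAttrib) {
        FloatBuffer vertices = ByteBuffer.allocateDirect(4 * Float.BYTES)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        vertices.put(new float[]{x0, y0, x1, y1}).position(0);
        GLES20.glLineWidth(lineWidth);
        GLES20.glVertexAttribPointer(positionAttrib, 2, GLES20.GL_FLOAT, false, 0, vertices);
        GLES20.glEnableVertexAttribArray(positionAttrib);
        GLES20.glDrawArrays(GLES20.GL_LINES, 0, 2);
    }
}
```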
  • GPU instructions are used to call the GPU to generate a bitmap.
  • the OpenGLES library, Vulkan library, and Metal library can be collectively referred to as an image processing library or a graphics rendering library.
  • in the process of generating a frame of the interface, the electronic device generates rendering instructions through the OpenGLES library, the Vulkan library or the Metal library.
  • the image processing library provides APIs and driver support for graphics rendering.
  • GPU instructions can also be called rendering instructions.
  • DrawOP can be stored in the stack of the application in a chain data structure.
  • the drawing instruction list may be a buffer, which records all drawing operation structures or identifiers of all drawing operations included in one frame interface of the application, such as addresses, serial numbers, etc.
  • the rendering tree is a data structure generated by the UI thread and used to generate the application interface.
  • the rendering tree may include multiple rendering nodes, each of which includes rendering attributes and a drawing instruction list.
  • the rendering tree records part or all of the information for generating a frame of the application interface.
  • the main thread may only traverse the view of the dirty area (also referred to as the area that needs to be redrawn) to generate a differential rendering tree.
  • the rendering thread may determine the rendering tree to be used for the current frame interface rendering by differentially comparing the rendering tree with the rendering tree used for the previous frame rendering.
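  • A minimal Java sketch of such a rendering-tree data structure is given below; the field names and the Runnable stand-in for drawing operations are illustrative assumptions, not the actual RenderNode implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a rendering tree: each render node holds rendering properties
// and a drawing instruction list (display list), plus child nodes.
final class SketchRenderNode {
    // Rendering properties of the node (bounds and transparency are shown as examples).
    float left, top, right, bottom;
    float alpha = 1f;
    // Drawing instruction list recorded by the UI thread; each entry stands for one
    // recorded drawing operation (DrawOP).
    final List<Runnable> displayList = new ArrayList<>();
    final List<SketchRenderNode> children = new ArrayList<>();

    // Rendering phase: traverse the node and its children and replay each recorded
    // drawing operation in order.
    void execute() {
        for (Runnable drawOp : displayList) {
            drawOp.run();
        }
        for (SketchRenderNode child : children) {
            child.execute();
        }
    }
}
```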
  • S102 The main thread synchronizes the rendering tree to the rendering thread, and the rendering tree is located in the stack of the application.
  • the UI thread passes/synchronizes the render tree to the render thread, where the render tree is located in the stack of the process corresponding to the application.
  • S103 The rendering thread executes the drawing instructions in the rendering tree to generate a bitmap.
  • the rendering thread first obtains a hardware canvas (HardwareCanvas) and performs drawing operations in the rendering tree on the hardware canvas to generate a bitmap.
  • the hardware canvas is located in the surface held by the application, and the surface carries bitmaps or other formats of data for storing image information.
  • S104 The rendering thread sends the surface carrying the bitmap to the surface compositor.
  • the rendering thread sends the generated bitmap to the surface compositor through the surface to participate in layer synthesis.
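  • For illustration, the Java sketch below uses the public Android Surface API to obtain a hardware-accelerated canvas, draw into it, and post the buffer so that it can be consumed by the surface compositor; the single white rectangle stands in for the drawing operations of the rendering tree.

```java
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.Surface;

final class HardwareCanvasFrame {
    // Obtain the hardware canvas from the application's surface, replay the drawing
    // operations on it, and post the result into the BufferQueue for composition.
    static void drawOneFrame(Surface surface) {
        Canvas canvas = surface.lockHardwareCanvas();
        try {
            Paint paint = new Paint();
            paint.setColor(Color.WHITE);
            canvas.drawRect(0f, 0f, canvas.getWidth(), canvas.getHeight(), paint);
        } finally {
            surface.unlockCanvasAndPost(canvas); // queues the buffer for the compositor
        }
    }
}
```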
  • Step S101 can be considered as the construction phase, which is mainly responsible for determining the size, position, transparency and other properties of each view in the application.
  • for example, in the construction phase, a drawLine operation in a view can be encapsulated into a DrawLineOp, which contains drawing data such as the length and width of the line, and may also contain the interface call of the underlying image processing library corresponding to the DrawLineOp, which is used to call the underlying graphics library to generate a bitmap in the rendering phase.
  • step S103 can be considered as the rendering stage, which is mainly responsible for traversing the rendering nodes of the rendering tree and performing drawing operations on each rendering node, thereby generating a bitmap on the hardware canvas.
  • the rendering thread calls the underlying graphics processing library through the rendering engine, such as the OpenGLES library (or OpenGL library), Vulkan library, Metal library, etc., and then calls the GPU to complete the rendering to generate a bitmap.
  • FIG. 2A and FIG. 2B are exemplary schematic diagrams of interface changes during an animation process provided in an embodiment of the present application.
  • the interface displayed by the electronic device is interface 201, which may be the interface of a desktop application.
  • the electronic device displays controls 2A01, 2A02, 2A03, and 2A04, wherein controls 2A02, 2A03, and 2A04 are sub-controls of control 2A01.
  • Control 2A01 may be called a folder icon, control 2A02 is an icon control corresponding to a game application, control 2A03 is an icon control corresponding to a flashlight application, and control 2A04 is an icon control corresponding to a gallery application.
  • After the user clicks on control 2A01, the size and position of control 2A01 change, as shown in interface 202. Clicking on control 2A01 may mean clicking on an area of control 2A01 that does not include controls 2A02, 2A03, and 2A04.
  • As the size and position of control 2A01 change, the sizes and positions of controls 2A02, 2A03, and 2A04 also change.
  • the change of the interface shown in the interface 202 is an animation in an embodiment of the present application.
  • the interface changes during the application startup process are interface 203, interface 204, interface 205 and interface 206.
  • interface 204 and interface 205 are interface changes involved in an animation in the embodiment of the present application.
  • the interface 203 is the interface of the desktop application, and the interface 203 includes a control 2B01, which is the icon of the reading application. After the user clicks the control 2B01, the gallery application starts, and the startup process is shown in the interfaces 204, 205 and 206.
  • in interface 204, control 2B02 appears at the position of control 2B01 and continues to expand; in interface 205, control 2B02 continues to expand to the screen size (which may not include the status bar); finally, the electronic device displays interface 206, which is the startup interface (StartingWindow) of the application.
  • the process of the continuous expansion of control 2B02 is a kind of animation in the embodiment of the present application, which can be called a startup animation.
  • the interface 206 may also be the main interface of the application, that is, the main interface of the application is displayed after the startup animation ends.
  • the main interface may be the interface corresponding to the MainActivity.
  • control 2A01 corresponds to one view; in a more general case, control 2A01 corresponds to multiple views.
  • FIG. 3 is an exemplary schematic diagram of interface generation during the animation process provided in an embodiment of the present application.
  • the process of generating an interface during the animation process may include the following five steps, namely: step S301, step S302, step S303, step S304 and step S305.
  • the process of generating the first frame interface during the animation process may include: step S301, step S302, step S303, step S304 and step S305;
  • the process of generating a non-first frame interface during the animation process may include: step S302, step S303, step S304 and step S305.
  • Step S301 Create animation event 1.
  • Animation events can be created by the UI thread at any time, depending on the logic of the application. For example, animation events can be created after receiving user input, message events sent to the application by other threads or processes, or network data request updates. Animation events include the internal logic for implementing animation effects, such as the end conditions of animation effects and the modification amount of view properties in each frame during the duration of the animation effect.
  • a callback will be registered on the UI thread (equivalent to registering the animation event), such as registering a callback on the choreographer of the UI thread.
  • This callback is used to trigger the UI thread to process the animation event each time the UI thread receives a vertical synchronization signal (Vsync), and to modify the properties of the view according to the logic of the animation event.
  • Vsync vertical synchronization signal
  • the UI thread will actively cancel the callback registered by the animation event in the UI thread according to the logic of the animation event.
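  • A minimal Java sketch of this callback mechanism using the public Choreographer API is shown below; the duration-based end condition and the property modification left as a comment are illustrative, since the real logic depends on the animation event. The callback must be registered on a thread with a Looper, typically the UI thread.

```java
import android.view.Choreographer;

final class AnimationEventCallback implements Choreographer.FrameCallback {
    private final long animationEndTimeNanos;

    AnimationEventCallback(long durationMillis) {
        this.animationEndTimeNanos = System.nanoTime() + durationMillis * 1_000_000L;
    }

    // Register the callback so it is triggered on the next vertical synchronization signal.
    void register() {
        Choreographer.getInstance().postFrameCallback(this);
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        // Modify the properties of the animation object for this frame here,
        // according to the logic of the animation event.
        if (frameTimeNanos < animationEndTimeNanos) {
            // End condition not reached: re-register for the next Vsync.
            Choreographer.getInstance().postFrameCallback(this);
        }
        // Otherwise the callback is simply not re-posted, i.e. the animation event
        // cancels its own callback once its end condition is met.
    }
}
```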
  • Animation event 1 is used to modify the properties of an animation object after receiving a vertical synchronization signal 1.
  • the animation object includes one or more views.
  • it should be noted that the animation object may not include all of the controls that the user perceives as changing.
  • for example, the animation object is control 2A01, while the changed controls also include control 2A02, control 2A03, and control 2A04.
  • control 2A02, control 2A03, and control 2A04 are the changed controls affected by the animation object. That is, the animation event does not modify the properties of the views corresponding to control 2A02, control 2A03, and control 2A04, but the UI thread of the application adjusts the properties of the views corresponding to control 2A02, control 2A03, and control 2A04 according to the constraint relationship determined by the layout.
  • for details about adjusting the properties of the views corresponding to control 2A02, control 2A03, and control 2A04 according to the constraint relationship determined by the layout, refer to the text description of step S303 below, which is not repeated here.
  • after receiving the vertical synchronization signal, the UI thread of the application will process the input event (CALLBACK_INPUT), animation event (CALLBACK_ANIMATION), traversal event (CALLBACK_TRAVERSAL) and submission event (CALLBACK_COMMIT) in sequence.
  • when the UI thread of the application processes an animation event (e.g., doCallbacks(CALLBACK_ANIMATION)), it will modify the properties of the view according to the logic of the animation event. After executing the code in animation event 1, the properties of the animation object will be modified. For example, in the scene shown in FIG. 2A, if the view corresponding to control 2A01 is view 401 in FIG. 4A, the width, height and position of view 401 are different before and after the modification, and view 401 is an animation object.
  • the UI thread of the application executes measurement method calls, layout method calls, and drawing method calls.
  • for specific content, please refer to the text description corresponding to FIG. 1 above, which is not repeated here.
  • the size and position of the view need to be re-determined based on the layout of the view.
  • FIG. 4A and FIG. 4B are exemplary schematic diagrams of a UI thread executing a measurement method call and a layout method call to determine the position and size of a control that is not an animation object, provided by an embodiment of the present application.
  • Control 2A01 corresponds to view 401, control 2A02 corresponds to view 402, control 2A03 corresponds to view 403, and control 2A04 corresponds to view 404.
  • after determining that the width of view 401 becomes 80 dp, the UI thread of the application determines, based on the constraint relationship that "the width of view 402 is 1/8 of the width of view 401", that the width of view 402 becomes 10 dp.
  • the width of view 402, the length and width of view 403, and the length and width of view 404 may also be determined by constraints in a layout file (e.g., an XML file).
  • the position of view 402, the position of view 403, and the position of view 404 may also be determined by constraint relationships in a layout file (e.g., an XML file).
  • the interface of the application includes views 405, 406, and 407, and the horizontal spacing between views 405, 406, and 407 is fixed (the width direction of the views is the horizontal direction, and the fixed horizontal spacing is the constraint relationship). For example, the horizontal spacing is 5 dp.
  • the logic of the animation event is to change the width of view 406 (view 406 is an animation object) from B1 to B2, where B2 is greater than B1, and B1 is greater than 0.
  • after the UI thread of the application changes the width of view 406 to B2, it also needs to change the position of view 407 to ensure that the horizontal spacing between view 406 and view 407 remains fixed.
  • the UI thread of the application can determine the position change of view 407. For example, the UI thread of the application can determine that the X-axis position of view 407 becomes x1.
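  • The two constraint examples above can be made concrete with a short calculation; the concrete left edge of view 406 and the value of B2 below are illustrative numbers, and only the 1/8 ratio and the 5 dp spacing come from the text.

```java
final class ConstraintExample {
    public static void main(String[] args) {
        // Constraint from FIG. 4A: the width of view 402 is 1/8 of the width of view 401.
        float view401Width = 80f;                         // dp, after the animation modifies view 401
        float view402Width = view401Width / 8f;           // = 10 dp

        // Fixed-spacing constraint: views 406 and 407 keep a 5 dp horizontal gap.
        float spacing = 5f;                               // dp
        float view406Left = 20f;                          // dp, illustrative value
        float b2 = 60f;                                   // dp, new width of view 406 (B2 > B1)
        float view407Left = view406Left + b2 + spacing;   // new X-axis position of view 407 ("x1")

        System.out.println("view 402 width = " + view402Width + " dp");
        System.out.println("view 407 left  = " + view407Left + " dp");
    }
}
```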
  • that is, after receiving the vertical synchronization signal, the UI thread also needs to perform measurement, layout and drawing recording; in other words, during the generation of each frame of the interface in the animation process, the UI thread and the rendering thread of the application need to work continuously.
  • S304 Receive a rendering tree, initialize a GPU rendering context, generate a GPU instruction corresponding to the rendering tree, and instruct the GPU to generate a bitmap.
  • after receiving the rendering tree, the rendering thread converts the drawing instruction list and rendering attributes in the rendering nodes of the rendering tree into corresponding GPU instructions.
  • the GPU instruction is a method call in an image processing library, or the GPU instruction is a method call in a rendering engine.
  • the GPU instruction is an instruction received by a GPU driver.
  • after receiving the GPU instructions corresponding to the rendering tree, the GPU driver or the GPU executes the instructions to generate a bitmap.
  • the embodiment of the present application provides an interface generation method and an electronic device.
  • the interface generation method provided by the embodiment of the present application directly updates the GPU instruction or the data corresponding to the GPU instruction in the GPU driver layer, the image processing library or the rendering engine, thereby generating the next frame interface in the animation process.
  • the GPU instruction or the data corresponding to the GPU instruction before the update can be called the first rendering instruction, and the GPU instruction or the data corresponding to the GPU instruction after the update can be called the second rendering instruction.
  • the animation module in the view system can determine the description information of the interface during the animation process, and then the animation module or other functional modules can generate update information based on the description information of the interface during the animation process.
  • the update information is used to update the GPU instructions.
  • updating the GPU instructions means: for example, updating the first rendering instruction to the second rendering instruction.
  • the description information of the interface during the animation process is used to describe the view and the attributes of the view whose attributes change in each frame interface except the first frame interface during the animation process.
  • after the operating system generates a first rendering instruction corresponding to the first frame interface of the animation, it can modify the input parameters of the method call for rendering the first control in the first rendering instruction, thereby generating each frame interface except the first frame interface during the animation process.
  • the animation module may synchronize the position of the view to the UI thread of the application through an interface, thereby enabling the UI thread of the application to determine the position of the control.
  • the animation module can synchronize textures from the UI thread of the application through an interface to update the background image or foreground image of the view, etc., which is not limited here.
  • the operation of synchronizing texture resources can also be performed by the GPU.
  • the interface generation method provided in the embodiment of the present application does not require the UI thread to execute drawing recording calls or generate new rendering trees during the animation process. Furthermore, the rendering thread does not need to convert drawing operations into GPU instructions, thereby improving the energy efficiency of interface generation.
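  • The per-frame flow described above can be summarized by the following Java sketch, in which the rendering-instruction interface, the linear interpolation and the position parameters are all assumptions made for the example; the real update may equally target a uniform, a vertex buffer or a driver-level parameter slot.

```java
// Hypothetical end-to-end sketch: during the animation, the UI thread does not re-traverse
// the views; instead, the parameters of the already generated rendering instruction are
// updated every frame and the instruction is resubmitted to the GPU.
final class AnimationFrameDriver {
    interface RenderingInstruction {
        void setControlVertexPosition(float left, float top, float right, float bottom);
        void submitToGpu();
    }

    private final RenderingInstruction firstRenderingInstruction;
    private final float startLeft, startTop, endLeft, endTop, width, height;
    private final long durationMillis;

    AnimationFrameDriver(RenderingInstruction instruction,
                         float startLeft, float startTop, float endLeft, float endTop,
                         float width, float height, long durationMillis) {
        this.firstRenderingInstruction = instruction;
        this.startLeft = startLeft;
        this.startTop = startTop;
        this.endLeft = endLeft;
        this.endTop = endTop;
        this.width = width;
        this.height = height;
        this.durationMillis = durationMillis;
    }

    // Called once per vertical synchronization signal with the elapsed animation time.
    void onFrame(long elapsedMillis) {
        float t = Math.min(1f, (float) elapsedMillis / durationMillis); // animation progress
        float left = startLeft + (endLeft - startLeft) * t;
        float top = startTop + (endTop - startTop) * t;
        // Turn the first rendering instruction into the "second rendering instruction" by
        // overwriting the vertex position of the animated control, then let the GPU draw.
        firstRenderingInstruction.setControlVertexPosition(left, top, left + width, top + height);
        firstRenderingInstruction.submitToGpu();
    }
}
```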
  • FIG. 5 is an exemplary schematic diagram of the process of the interface generation method provided in an embodiment of the present application.
  • the interface generation method provided by the embodiment of the present application may include the following nine steps in the process of generating an interface during the animation process, namely: step S501 to step S509.
  • the process of generating the first frame interface during the animation process may include: step S501, step S502, step S503, step S504 and step S505;
  • the process of generating a non-first frame interface during the animation process may include: step S506, step S507, step S508 and step S509.
  • optionally, step S506 and step S507 may be performed only during the process of the application generating the first frame interface during the animation process.
  • alternatively, step S506 and step S507 may be performed during the process of the application generating a non-first frame interface during the animation process.
  • for example, step S506 and step S507 may be executed when the application generates the second frame interface during the animation process, but may no longer be executed when other frame interfaces are generated during the animation process.
  • step S507 and step S508 may be executed by other threads instead of rendering threads.
  • the other threads may not be threads of the application program, but threads maintained by the operating system.
  • the other threads may be threads of a unified rendering process.
  • the unified rendering process (UniRender) is a process independent of the application program, which obtains the rendering trees of one or more applications through cross-process communication, and calls the GPU to generate a bitmap after synthesizing the rendering trees.
  • whether step S506 and step S507 need to be repeatedly executed depends on the content of the description information of the interface during the animation process. For example, if the description information of the interface during the animation process only indicates how the attributes of the view are modified in the current frame interface, the steps need to be repeatedly executed; if the description information of the interface during the animation process indicates how the attributes of the view are modified in each frame interface during the animation process, the steps can be executed only once.
  • the declarative animation event may be the same as or different from the animation event in Figure 3.
  • the declarative animation event is only used to distinguish it from the animation event in Figure 3 in name, and does not refer to any substantive content.
  • Declarative animation events can be created at any time, depending on the logic of the application. For example, animation events can be created after receiving user input, message events sent to the application by other threads or processes, or network data request updates. Animation events include the internal logic for implementing animation effects, such as the end conditions of animation effects and the amount of modification to view properties for each frame during the duration of the animation effect.
  • the declarative animation event is different from the animation event in Figure 3, wherein the declarative animation event needs to declare the logic of the animation during the animation process.
  • the content of the declarative animation event declaration includes: description information of the animation end interface and the duration of the animation; for another example, the content of the declarative animation event declaration includes: description information of the animation end interface and the step amount of the animation; for another example, the content of the declarative animation event declaration includes: duration of the animation, step amount of the animation.
  • the step amount of the animation may include the change in the attributes of the view in the current frame interface and the previous frame interface.
  • the declarative animation event may not register a callback. Since the declarative animation event has declared the logic of the animation during the animation process, that is, it declares the way to modify the properties of the view in each frame interface during the animation process, during the animation process (not the first frame interface), there is no additional information in the declarative animation event that needs to be processed by the UI thread of the application, so the callback registered before the animation event can be cancelled.
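  • As a non-limiting illustration of the declared content listed above, the following Kotlin sketch models a declarative animation event; the class and field names are assumptions made for this example only.

```kotlin
// Hypothetical declarative animation event: any two of the three declared items
// (end-interface description, duration, per-frame step) are enough to drive the animation.
data class DeclarativeAnimationEvent(
    val targetViewId: Int,
    val endStateDescription: Map<String, Float>? = null, // description of the animation end interface
    val durationMs: Long? = null,                        // duration of the animation
    val stepPerFrame: Map<String, Float>? = null         // step amount of the animation
)

// The three combinations mentioned above:
val byEndAndDuration = DeclarativeAnimationEvent(
    targetViewId = 1, endStateDescription = mapOf("translationX" to 300f), durationMs = 300L)
val byEndAndStep = DeclarativeAnimationEvent(
    targetViewId = 1, endStateDescription = mapOf("translationX" to 300f),
    stepPerFrame = mapOf("translationX" to 20f))
val byDurationAndStep = DeclarativeAnimationEvent(
    targetViewId = 1, durationMs = 300L, stepPerFrame = mapOf("translationX" to 20f))
```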
  • the declarative animation event is the same as the animation event in FIG3, and both specify the animation object and the modification method of the animation object.
  • The UI thread of the application needs to determine at least two of the description information of the animation end interface, the animation step amount, and the animation duration based on the declarative animation event.
  • the declarative animation event is the same as the animation event in FIG. 3 , and after receiving the vertical synchronization signal, the UI thread of the application can determine the animation description information of the current frame interface through measurement and layout.
  • After receiving the vertical synchronization signal, the UI thread of the application determines the logic of animation 1 from the declaration content of declarative animation event 1, and further determines the properties of the views in the first frame interface during the animation process.
  • the UI thread of the application determines the description information of the animation end interface from the declaration content of the declarative animation event 1, and then determines the properties of the view in the animation end interface.
  • In step S502, if the UI thread of the application determines the properties of the views in the first frame interface during the animation process, the rendering tree is the rendering tree corresponding to the first frame interface during the animation process.
  • In step S502, if the UI thread of the application determines the properties of the views in the animation end interface, the rendering tree is the rendering tree corresponding to the animation end interface.
  • the method of updating the GPU instruction can refer to the text description in step S507 and step S508 below, which will not be repeated here.
  • S504 Receive a rendering tree, initialize a GPU rendering context, generate a GPU instruction corresponding to the rendering tree, and instruct the GPU to generate a bitmap.
  • the rendering thread receives the rendering tree, initializes the GPU rendering context, generates the GPU instructions corresponding to the rendering tree, and instructs the GPU to generate the bitmap.
  • the process can refer to the text description in Figure 1 above, which will not be repeated here.
  • S505 Execute GPU instructions to generate a bitmap.
  • the GPU driver executes GPU instructions.
  • the process of generating a bitmap can refer to the text description of Figure 1 above, which will not be repeated here.
  • According to their functions, GPU instructions can be divided into resources (such as uniforms, textures, meshes, etc.) and pipelines.
  • the pipeline includes shaders, etc.
  • The process of the GPU executing GPU instructions can also be referred to as the GPU executing a GPU rendering task (for example, job desc); that is, a GPU rendering task can be the specific operation of calling the GPU to perform rendering to generate a bitmap.
  • A pipeline may also be called a rendering pipeline, an assembly line, a graphics pipeline, or an image pipeline.
  • the pipeline is the process by which the GPU generates a bitmap from resources and commands sent by the CPU, which involves a series of preset method calls, such as shaders, rasterization, etc.
  • A shader can be a special function for a graphics hardware device (such as a GPU), that is, a small program compiled specifically for the GPU.
  • A uniform is a limited, read-only variable in the image processing library; it is assigned by the application and passed to the image processing library. A uniform can save parameters such as the color, transparency and size ratio of different views in the current frame interface. After the uniform is bound to the pipeline, it can participate in the process of calling the GPU to generate the bitmap corresponding to the pipeline, as sketched below.
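  • The following sketch illustrates the role of a uniform, assuming an OpenGL ES based image processing library; the shader source and the uniform name uAlpha are illustrative and not taken from the embodiments above.

```kotlin
import android.opengl.GLES30

// Illustrative fragment shader: uAlpha is read-only inside the shader and is assigned
// by the application through the image processing library.
const val FRAGMENT_SHADER_SRC = """
    #version 300 es
    precision mediump float;
    uniform float uAlpha;
    out vec4 fragColor;
    void main() {
        fragColor = vec4(1.0, 0.0, 0.0, uAlpha);
    }
"""

// Assuming `program` is an already linked and currently used GL program on a valid context,
// only this uniform value needs to be rewritten between frames; the pipeline stays the same.
fun setFrameAlpha(program: Int, alpha: Float) {
    val location = GLES30.glGetUniformLocation(program, "uAlpha")
    GLES30.glUniform1f(location, alpha)
}
```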
  • S506 Generate description information of the interface during the animation process.
  • the UI thread of the application needs to generate description information of the interface during the animation process.
  • The description information of the interface during the animation process includes the views whose properties change and the properties of those views in each frame interface other than the first frame interface during the animation process; alternatively, the description information of the interface during the animation process includes the views whose properties change and the properties of those views in each frame interface during the animation process.
  • the description information of the interface during the animation process includes the properties of the view in each frame of the interface during the animation process.
  • the description information of the interface during the animation process includes describing the amount of change of the properties of the view during the animation process.
  • The description information of the interface may not include the values of attributes such as the length and width of the view. It is understandable that the length, width and other attributes of the view are parameters provided for the convenience of application developers, whereas in the rendering process the input parameters of a drawing operation may not be these attributes; for example, for a drawing operation that draws a rectangle, the required input parameters may be the vertex positions. Therefore, the description information of the interface may not include the values of the length, width and other attributes of the view.
  • the description information of the interface during the animation process includes the vertex position change information of the control 2A01, wherein the vertex position change information can be used to determine the size and position of the control 2A01.
  • the vertex position can refer to the text description corresponding to FIG. 6A below.
  • the UI thread of the application needs to determine the view whose properties change based on the content of the animation event; in addition, the UI thread of the application also needs to determine the value of the property of the view whose properties change in each frame of the interface except the first frame of the interface during the animation process. In this case, the UI thread of the application also needs to execute the layout method call and the measurement method call after receiving the vertical synchronization signal.
  • If the description information of the interface during the animation process is only used to determine, for the next frame interface, the views whose attributes change and the attributes of those views, then during the animation process the UI thread of the application needs to generate the description information of the interface during the animation process, that is, the description information of the interface of the current frame, after receiving the vertical synchronization signal.
  • the description information of the interface during the animation process can be determined by the UI thread of the application program traversing the view. That is, in this case, the description information of the interface during the animation process is the description information of the interface of this frame.
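  • The two forms of description information discussed above can be pictured with the following hedged Kotlin sketch; all type and field names are assumptions for illustration.

```kotlin
// Illustrative structure of the description information of the interface during the animation.
data class ViewPropertyChange(
    val viewId: Int,
    val property: String,   // e.g. "alpha", "rotation", "vertexPosition"
    val value: Float
)

data class AnimationInterfaceDescription(
    // Variant 1: property values for every frame of the animation, generated once.
    val wholeAnimation: List<List<ViewPropertyChange>>? = null,
    // Variant 2: only the changes for the current (next) frame, regenerated after each vsync.
    val currentFrameOnly: List<ViewPropertyChange>? = null
)
```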
  • S507 Generate update information based on the description information of the interface during the animation process.
  • the animation module may generate update information based on the description information of the interface during the animation process and transmit the update information to the rendering engine and the image processing library.
  • the update information may be instructions at the Skia library level, instructions at the Vulkan library, OpenGL ES library or Metal library level, or instructions that can be recognized and executed by the GPU driver.
  • the update information may be information used to update resources in the GPU instructions in the current frame interface.
  • the update information is used in the embodiment of the present application to update the GPU instructions of the previous frame interface (or the GPU instructions of the first frame interface of the animation) to the GPU instructions corresponding to the current frame interface.
  • the update information is used to convert the Skia library, Vulkan library, OpenGL ES library or Metal library level instructions corresponding to the previous frame interface into the corresponding Skia library, Vulkan library, OpenGL ES library or Metal library level instructions in the current frame interface.
  • the specific conversion process can be referred to the text description of step S508 below, which will not be repeated here.
  • the update information can be implemented through a uniform and an additional GPU task.
  • the GPU task can be called an update task, which can be configured in the GPU driver, and the update task can receive and parse the description information of the interface during the animation process, and update the value in the uniform based on the description information of the interface during the animation process.
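  • A minimal sketch of such an update task follows, modeling the uniforms as a simple name-to-value map; the names are illustrative, and the real update task would be configured in the GPU driver as described above.

```kotlin
// The cached GPU instructions reference resources; here they are modeled as a name-to-value map.
typealias UniformStore = MutableMap<String, Float>

// Per-frame description entry: which resource to touch and its new value.
data class FrameChange(val uniformName: String, val newValue: Float)

class UpdateTask(private val uniforms: UniformStore) {
    // Invoked once per frame: only resource values change, the recorded pipeline stays valid.
    fun onFrame(changes: List<FrameChange>) {
        for (c in changes) uniforms[c.uniformName] = c.newValue
    }
}

fun main() {
    val uniforms: UniformStore = mutableMapOf("uAlpha" to 1.0f)
    val task = UpdateTask(uniforms)
    task.onFrame(listOf(FrameChange("uAlpha", 0.9f)))  // frame N
    task.onFrame(listOf(FrameChange("uAlpha", 0.8f)))  // frame N + 1
}
```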
  • the update information includes description information of each frame interface during the animation process, and the update information can directly participate in the generation process of the first frame interface during the animation process, and then directly participate in the subsequent interface generation process.
  • the content corresponding to the description information of the first frame interface in the update information takes effect and participates in the GPU rendering and bitmap generation process.
  • If the description information of the interface during the animation process only includes the description information of the interface of the current frame, the UI thread or the rendering thread needs to continuously refresh the content of the update information.
  • S508 Generate GPU instructions corresponding to the current frame interface based on the update information and the GPU instructions corresponding to the previous frame interface.
  • the GPU or GPU driver updates the GPU instructions corresponding to the previous frame interface to the GPU instructions corresponding to the current frame interface based on the update information.
  • Alternatively, the rendering thread updates the corresponding instructions at the Skia library level, OpenGL ES library level, Vulkan library level, or Metal library level.
  • the change between two adjacent frames of the interface may be a change in position, size, texture, color, and/or transparency of the view in the interface.
  • the pipeline in the GPU instruction may not change, but only the resources in the GPU instruction may change.
  • the drawing operation of the previous frame interface, the first frame interface, or the last frame interface may be partially or completely updated at the rendering engine to the drawing operation of the current frame interface.
  • FIG. 6A is an exemplary schematic diagram of updating a GPU instruction by updating a vertex position provided in an embodiment of the present application.
  • the update information may act on the vertex position of the drawing operation corresponding to the previous frame interface.
  • T = (bottom2/bottom1, left2/left1, right2/right1, top2/top1).
  • T may be determined according to the description information of the interface during the animation process.
  • T = (bottom2/bottom1, left2/left1, right2/right1, top2/top1) does not mean that the animation description information must include the vertex position of the view 402 in the (i+k)th frame of the animation; this is only an exemplary description. If the animation description information does include the vertex position of the view 402 in the (i+k)th frame of the animation, the vertex position can be assigned directly (a small sketch of both cases follows).
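  • A small sketch of both cases; the field names and numeric values are illustrative only.

```kotlin
// Obtaining the current frame's vertex position from the previous frame's vertex position:
// either by applying the per-edge factors T, or by assigning the vertex position directly
// when the animation description information already carries it.
data class VertexPosition(val bottom: Float, val left: Float, val right: Float, val top: Float)

fun applyT(prev: VertexPosition, t: VertexPosition) = VertexPosition(
    bottom = prev.bottom * t.bottom,   // t.bottom = bottom2 / bottom1, etc.
    left = prev.left * t.left,
    right = prev.right * t.right,
    top = prev.top * t.top
)

fun main() {
    val v1 = VertexPosition(bottom = 400f, left = 100f, right = 300f, top = 200f)
    val t = VertexPosition(bottom = 1.1f, left = 1.0f, right = 1.2f, top = 1.0f)
    val v2ByScaling = applyT(v1, t)                         // derived from T
    val v2Direct = VertexPosition(440f, 100f, 360f, 200f)   // carried directly in the description
    println("$v2ByScaling vs $v2Direct")
}
```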
  • the rotation of the view can be achieved by modifying the value of the input parameter degree of the drawing operation canvas.rotate(degree).
  • the drawing operation for achieving the rotation of the view is not limited to the drawing operation canvas.rotate().
  • the transparency of the view can be adjusted by modifying the value of the input parameter alpha of the drawing operation paint.setalpha(alpha).
  • the drawing operation for changing the transparency of the view is not limited to the drawing operation paint.setalpha.
  • the color of the view can be adjusted by modifying the value of the input parameter color of the drawing operation paint.setcolor(color).
  • the drawing operation for changing the color of the view is not limited to the drawing operation paint.setcolor.
  • the change of vertex position, the change of input parameter degree, the change of input parameter alpha and the change of input parameter color can all be update information in the implementation of the present application. That is, uniform can carry the vertex position, input parameter degree, input parameter alpha and input parameter color of different views during the animation process.
  • For example, degree = [0°, 10°, 20°, 30°].
  • For another example, degree = [0°].
  • In the latter example, the update task is used to increase the value of degree by 10° before the bitmap is generated each time, so that after the GPU driver executes the GPU rendering task, the rotation angle of the drawn graphic increases by 10°, as sketched below.
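  • The following sketch reproduces this behavior with the Canvas API named above; here the +10° update is done on the CPU purely for illustration, whereas in the embodiments above it would be performed by the update task when the GPU driver executes the rendering task.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Paint

fun drawRotatedFrames() {
    val bitmap = Bitmap.createBitmap(200, 200, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(bitmap)
    val paint = Paint()
    var degree = 0f
    // Each iteration stands for one frame's drawing; in practice each frame draws into its own buffer.
    repeat(4) {                          // frames with degree = 0°, 10°, 20°, 30°
        canvas.save()
        canvas.rotate(degree, 100f, 100f)
        canvas.drawRect(60f, 60f, 140f, 140f, paint)
        canvas.restore()
        degree += 10f                    // the "update task": +10° before the next frame
    }
}
```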
  • the GPU instructions of the previous frame interface may be updated at the GPU driver layer to be the GPU instructions of the current frame interface.
  • FIGS. 6B-6E are other exemplary schematic diagrams of updating GPU instructions by updating vertex positions provided in an embodiment of the present application.
  • FIG. 6B exemplarily introduces the process of generating the first frame interface of the animation
  • FIGS. 6C and 6D exemplarily introduce the process of generating the non-first frame interface of the animation
  • FIG. 6E exemplarily introduces the process of updating the GPU instructions.
  • In the process of generating the first frame interface of the animation, first, the application generates a rendering tree, such as rendering tree 1 in FIG. 6B. Then, after receiving rendering tree 1, thread 1 converts the rendering tree into GPU instruction 1 and writes GPU instruction 1 into a command buffer, such as command buffer 1 in FIG. 6B, where thread 1 may be a rendering thread. Then, after all GPU instructions are written into command buffer 1, the data of command buffer 1 is submitted to a command queue. Finally, the GPU executes the commands in the command queue to generate bitmap 1, which is the bitmap corresponding to the first frame interface of the animation.
  • the command buffer can be obtained from the command buffer pool (commandbuffer pool).
  • the application or operating system can update the GPU instruction 1 in the command buffer 1 to the GPU instruction 2, and submit the data in the command buffer 1 to the command queue after the update; then, the GPU executes the commands in the command queue to generate bitmap 2, which is the bitmap corresponding to the non-first frame interface of the animation.
  • the application or operating system can update the GPU instruction 1 in the command queue to GPU instruction 2; then, the GPU executes the commands in the command queue to generate bitmap 2, which is the bitmap corresponding to the non-first frame interface of the animation.
  • the GPU instruction can be divided into two parts, namely, the identification of the method call and the identification of the resource
  • the GPU instruction can be updated by updating the identification of the resource.
  • the method call can be equivalent to or can belong to the pipeline mentioned above.
  • the identification of the resource can be updated, wherein the resource can be equivalent to the input parameter of the method call, or the resource can be equivalent to the variable, fixed value, etc. required to be used in the method call.
  • method call 1 is a call for drawing a rectangle
  • resource 1 can be vertex position 1 such as (bottom1, left1, right1, top1)
  • resource 11 can be vertex position 2 such as (bottom2, left2, right2, top2).
  • GPU instruction 1 is used to draw a rectangle at vertex position 1
  • GPU instruction 2 is used to draw a rectangle at vertex position 2, thereby realizing the interface change shown in FIG6A .
  • The method call portion of the GPU instruction before and after the update, i.e., the pipeline portion, does not change.
  • The UI thread or rendering thread of the application does not need to expend computing resources on the interface changes, and the load of the interface during the animation process is borne by the GPU.
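  • The split of a GPU instruction into the identification of a method call and the identification of a resource, and the reuse of the same command buffer with only the resource identification changed, can be modeled with the following hedged sketch; all types and identifiers are hypothetical.

```kotlin
// Simplified model: a GPU instruction = identification of a method call (pipeline part)
// + identification of a resource (e.g. a vertex position).
data class GpuInstruction(val methodCallId: String, var resourceId: String)

class CommandBuffer(val instructions: MutableList<GpuInstruction> = mutableListOf())

fun submit(buffer: CommandBuffer) {
    // Stand-in for submitting the command buffer to the command queue for GPU execution.
    println(buffer.instructions)
}

fun main() {
    val buffer = CommandBuffer()
    // First frame: draw a rectangle at vertex position 1.
    buffer.instructions += GpuInstruction(methodCallId = "drawRect", resourceId = "vertexPosition1")
    submit(buffer)                                   // GPU generates bitmap 1

    // Non-first frame: only the resource identification changes; the pipeline part does not.
    buffer.instructions[0].resourceId = "vertexPosition2"
    submit(buffer)                                   // GPU generates bitmap 2
}
```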
  • S509 Execute GPU instructions to generate a bitmap.
  • the GPU executes GPU instructions to generate a bitmap of the application in the current frame interface.
  • the bitmap is stored in the framebuffer of the BufferQueue and then participates in layer synthesis for display.
  • FIGS. 7A-7D are exemplary schematic diagrams of the data flow in the interface generation method provided in an embodiment of the present application.
  • the UI thread of the application traverses the view to generate a rendering tree corresponding to the view; and the UI thread of the application determines the description information of the interface during the animation process.
  • GPU instruction 1 is a GPU instruction corresponding to the bitmap of the application in the first frame interface. After receiving GPU instruction 1, the GPU generates bitmap 1.
  • the UI thread of the application configures an update task to the GPU driver through the image processing library, and the update task can receive the interface description information during the animation process.
  • the GPU driver can generate update information based on the update task.
  • In the process of generating a non-first frame interface of the animation, for example, in the process of generating the Nth frame interface of the animation, the GPU driver generates GPU instruction 2 based on the update information and GPU instruction 1, and then generates a bitmap based on GPU instruction 2. Alternatively, the GPU driver generates GPU instruction 2 based on GPU instruction 1 during the execution of the update task.
  • the process of generating GPU instruction 2 can refer to the text description in Figures 6A to 6E above, which will not be repeated here.
  • GPU driver can generate GPU instruction 3 based on GPU instruction 1 and update information, and then generate a bitmap based on GPU instruction 3.
  • GPU instruction 3 is an instruction corresponding to the bitmap of the application in the N+1 frame interface of the animation.
  • the GPU driver may generate GPU instruction 3 based on GPU instruction 2 and update information, and then generate a bitmap based on GPU instruction 3.
  • GPU instruction 3 is an instruction corresponding to the bitmap of the application in the N+1 frame interface of the animation.
  • the UI thread of the application traverses the view to generate a rendering tree corresponding to the view; and the UI thread of the application determines the description information of the interface during the animation process.
  • GPU instruction 1 is a GPU instruction corresponding to the bitmap of the application in the first frame interface.
  • the UI thread of the application configures update information through the image processing library based on the description information of the interface during the animation process, and passes the update information to the GPU driver.
  • In the process of generating a non-first frame interface of the animation, for example, in the process of generating the Nth frame interface of the animation, the GPU driver generates GPU instruction 2 based on the update information and GPU instruction 1, and then generates a bitmap based on GPU instruction 2.
  • the process of generating GPU instruction 2 can refer to the text description in Figures 6A to 6E above, which will not be repeated here.
  • the UI thread of the application traverses the view to generate a rendering tree corresponding to the view; and the UI thread of the application determines the description information of the interface during the animation process.
  • GPU instruction 1 is a GPU instruction corresponding to the bitmap of the application in the first frame interface. After receiving GPU instruction 1, the GPU generates a bitmap.
  • the UI thread of the application determines the description information of the interface during the animation, that is, determines the description information of the interface of this frame, and converts the description information of the interface into update information through the image processing library; the GPU driver generates GPU instruction 2 based on the update information and GPU instruction 1, and then generates a bitmap based on GPU instruction 2.
  • the UI thread of the application can configure a GPU task to the GPU driver through the image processing library as shown in Figure 7A, and the GPU task is used to refresh the update information.
  • the UI thread and the rendering thread of the application program may update the texture and then write the identifier of the updated texture into the update information.
  • the application may create a new thread to be responsible for texture updating.
  • the application may preload multiple textures into the memory and save the identifiers of the multiple textures in the update information.
  • the application needs to load the texture into memory during the generation of each frame of the interface and save the texture identifier in the update information.
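  • One of the options above, preloading textures and then only switching identifiers in the update information, might look like the following sketch; the data structures are assumptions for illustration.

```kotlin
import android.graphics.Bitmap

// Preload textures before the animation starts; per frame only the identifier in the
// update information changes, so no texture is decoded or uploaded during the animation.
val preloadedTextures: List<Bitmap> =
    (0 until 4).map { Bitmap.createBitmap(64, 64, Bitmap.Config.ARGB_8888) } // placeholder content

val updateInfo = mutableMapOf<String, Any>()

fun onAnimationFrame(frameIndex: Int) {
    // Only the texture identifier is written into the update information each frame.
    updateInfo["textureId"] = frameIndex % preloadedTextures.size
}
```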
  • the rendering thread does not need to convert the rendering tree into GPU instructions, which reduces the CPU load during the interface generation process and can improve the energy efficiency of the interface generation during the animation process.
  • the interface generation method provided in the embodiment of the present application can improve the energy efficiency of interface generation during the animation process
  • the interface generation method provided in the embodiment of the present application can be applied in the field of three-dimensional (pseudo-three-dimensional) animation.
  • FIG. 8A and FIG. 8B are exemplary schematic diagrams of a method for implementing three-dimensional animation provided by an embodiment of the present application.
  • As shown in FIG. 8A, there is a spherical object in interface 1 during the three-dimensional animation process and in interface 2 during the three-dimensional animation process.
  • the change in the spatial position of the spherical object causes the light intensity on the surface of the spherical object to change.
  • The pixels on the surface of the spherical object in interface 1 during the three-dimensional animation process are different from the pixels on the surface of the spherical object in interface 2 during the three-dimensional animation process, and the relationship between them is not a simple translation, rotation, or color change.
  • the rendering engine provides a custom shader to the application, which has built-in pixels of any frame interface in one or more three-dimensional animation processes or drawing operations for generating the interface (such as instructions at the skia library level).
  • the update information also acts on the color calculation process.
  • The color changes in the one or more built-in 3D animations (such as the different brightness of the surface of the spherical object caused by the change of light direction in FIG. 8A) can be determined by the update information.
  • Since the pixel values in the next frame interface of the 3D animation can be determined based on the update information (or the drawing operations for generating the next frame interface), there is no need to save the complete 3D model in the GPU, and there is no need for the GPU to determine the pixel values in the next frame interface of the 3D animation based on the 3D model.
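  • One possible form of such a custom shader is sketched below: the built-in pixels of a frame are sampled from a texture, and the update information only refreshes a brightness uniform, so the color calculation changes between frames while the shader (the pipeline) stays fixed. The shader source and uniform names are illustrative.

```kotlin
// Illustrative fragment shader: the "built-in pixels" of a frame are sampled from uBakedFrame,
// and the update information only refreshes uBrightness between frames.
const val SPHERE_FRAGMENT_SHADER = """
    #version 300 es
    precision mediump float;
    uniform sampler2D uBakedFrame;   // built-in pixels of a frame of the 3D animation
    uniform float uBrightness;       // refreshed by the update information each frame
    in vec2 vTexCoord;
    out vec4 fragColor;
    void main() {
        vec4 baked = texture(uBakedFrame, vTexCoord);
        fragColor = vec4(baked.rgb * uBrightness, baked.a);
    }
"""
```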
  • For example, M2 = T1 * M1.
  • the transformation matrix T1 is a two-dimensional matrix.
  • the calculation amount of the two-dimensional matrix is much smaller than that of the three-dimensional model, which can reduce the calculation amount of the interface generation in the three-dimensional animation process.
  • update information such as T1 can be determined offline and saved on the electronic device.
  • a frame interface in a three-dimensional animation may be generated by updating the position of pixels (the position of the drawing operation on the canvas).
  • For example, the update information includes the transformation function f1().
  • f1() can be used to change the positions of the internal elements of the pixel matrix within the matrix.
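  • A toy sketch of this idea follows: the current frame's pixel matrix M2 is obtained from the previous frame's pixel matrix M1 by a two-dimensional transform, here a simple f1 that shifts elements one column inside the matrix; the concrete transform is illustrative only.

```kotlin
// f1 moves the internal elements of the pixel matrix: a one-column circular shift as a toy example.
fun f1(m1: Array<IntArray>): Array<IntArray> {
    val rows = m1.size
    val cols = m1[0].size
    return Array(rows) { r -> IntArray(cols) { c -> m1[r][(c + cols - 1) % cols] } }
}

fun main() {
    val m1 = arrayOf(intArrayOf(1, 2, 3), intArrayOf(4, 5, 6))
    val m2 = f1(m1)   // no 3D model is needed to obtain the next frame's pixels
    println(m2.joinToString { it.joinToString(prefix = "[", postfix = "]") })
}
```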
  • FIG. 9 is an exemplary schematic diagram of the hardware structure of an electronic device provided in an embodiment of the present application.
  • the electronic device can be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, as well as a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device and/or a smart city device.
  • the electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, and a subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiments of the present invention does not constitute a specific limitation on the electronic device.
  • the electronic device may include more or fewer components than shown in the figure, or combine certain components, or split certain components, or arrange the components differently.
  • the components shown in the figure may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processor (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • Different processing units may be independent devices or integrated in one or more processors.
  • the controller can generate operation control signals according to the instruction operation code and timing signal to complete the control of instruction fetching and execution.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or cyclically used. If the processor 110 needs to use the instruction or data again, it may be directly called from the memory. This avoids repeated access, reduces the waiting time of the processor 110, and thus improves the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple groups of I2C buses.
  • the processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 communicates with the touch sensor 180K through the I2C bus interface to realize the touch function of the electronic device.
  • the I2S interface can be used for audio communication.
  • the processor 110 can include multiple I2S buses.
  • the processor 110 can be coupled to the audio module 170 via the I2S bus to achieve communication between the processor 110 and the audio module 170.
  • the audio module 170 can transmit an audio signal to the wireless communication module 160 via the I2S interface to achieve the function of answering a call through a Bluetooth headset.
  • the PCM interface can also be used for audio communication, sampling, quantizing and encoding analog signals.
  • the audio module 170 and the wireless communication module 160 can be coupled via a PCM bus interface.
  • The audio module 170 can also transmit audio signals to the wireless communication module 160 via the PCM interface, so as to realize the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit an audio signal to the wireless communication module 160 through the UART interface to implement the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193.
  • the MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), etc.
  • the processor 110 and the camera 193 communicate through the CSI interface to realize the shooting function of the electronic device.
  • the processor 110 and the display screen 194 communicate through the DSI interface to realize the display function of the electronic device.
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, etc.
  • the GPIO interface can also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, etc.
  • the USB interface 130 is an interface that complies with USB standard specifications, and specifically can be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 130 can be used to connect a charger to charge an electronic device, and can also be used to transfer data between an electronic device and a peripheral device. It can also be used to connect headphones to play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices, etc.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration and does not constitute a structural limitation of the electronic device.
  • the electronic device may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from a wired charger through the USB interface 130.
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of an electronic device. While the charging management module 140 is charging the battery 142, it may also power the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle number, battery health status (leakage, impedance), etc.
  • the power management module 141 can also be set in the processor 110.
  • the power management module 141 and the charging management module 140 can also be set in the same device.
  • the wireless communication function of the electronic device can be implemented through antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, modem processor and baseband processor.
  • Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve the utilization of the antennas.
  • antenna 1 can be reused as a diversity antenna for a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide solutions for wireless communications including 2G/3G/4G/5G, etc., applied to electronic devices.
  • the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), etc.
  • the mobile communication module 150 may receive electromagnetic waves from the antenna 1, and perform filtering, amplification, and other processing on the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 may also amplify the signal modulated by the modulation and demodulation processor, and convert it into electromagnetic waves for radiation through the antenna 1.
  • at least some of the functional modules of the mobile communication module 150 may be arranged in the processor 110.
  • at least some of the functional modules of the mobile communication module 150 may be arranged in the same device as at least some of the modules of the processor 110.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs a sound signal through an audio device (not limited to a speaker 170A, a receiver 170B, etc.), or displays an image or video through a display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be set in the same device as the mobile communication module 150 or other functional modules.
  • The wireless communication module 160 can provide solutions for wireless communication applied to the electronic device, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite systems (GNSS), etc.
  • the wireless communication module 160 can be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the frequency of the electromagnetic wave signal and performs filtering, and sends the processed signal to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2.
  • the antenna 1 of the electronic device is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a Beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS) and/or a satellite based augmentation system (SBAS).
  • the electronic device implements the display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, which connects the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
  • The display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), MiniLED, MicroLED, Micro-OLED, quantum dot light-emitting diodes (QLED), etc.
  • the electronic device may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device can realize the shooting function through ISP, camera 193, video codec, GPU, display screen 194 and application processor.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter is opened, and the light is transmitted to the camera photosensitive element through the lens. The light signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converts it into an image visible to the naked eye.
  • the ISP can also perform algorithm optimization on the noise and brightness of the image.
  • the ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP can be set in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to be converted into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard RGB, YUV or other format.
  • the electronic device may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to processing digital image signals, they can also process other digital signals. For example, when an electronic device selects a frequency point, a digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital videos.
  • Electronic devices can support one or more video codecs. In this way, electronic devices can play or record videos in multiple coding formats, such as: Moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • NPU is a neural network (NN) computing processor.
  • the internal memory 121 may include one or more random access memories (RAM) and one or more non-volatile memories (NVM).
  • Random access memory can include static random-access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM; for example, the fifth generation DDR SDRAM is generally referred to as DDR5 SDRAM), etc.
  • Non-volatile memory can include disk storage devices and flash memory.
  • Flash memory can be divided into NOR FLASH, NAND FLASH, 3D NAND FLASH, etc. according to the operating principle; can be divided into single-level cell (SLC), multi-level cell (MLC), triple-level cell (TLC), quad-level cell (QLC), etc. according to the storage unit potential level; can be divided into universal flash storage (UFS), embedded multi media Card (eMMC), etc. according to the storage specification.
  • the random access memory can be directly read and written by the processor 110, and can be used to store executable programs (such as machine instructions) of the operating system or other running programs, and can also be used to store user and application data, etc.
  • the non-volatile memory may also store executable programs and user and application data, etc., and may be loaded into the random access memory in advance for direct reading and writing by the processor 110 .
  • the external memory interface 120 can be used to connect to an external non-volatile memory to expand the storage capacity of the electronic device.
  • the external non-volatile memory communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music and videos are stored in the external non-volatile memory.
  • the electronic device can implement audio functions such as music playing and recording through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone jack 170D, and the application processor.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 can be arranged in the processor 110, or some functional modules of the audio module 170 can be arranged in the processor 110.
  • The speaker 170A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal.
  • the electronic device can listen to music or listen to a hands-free call through the speaker 170A.
  • The receiver 170B, also called an "earpiece", is used to convert an audio electrical signal into a sound signal.
  • the voice can be received by placing the receiver 170B close to the human ear.
  • The microphone 170C, also called a "mic", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the electronic device can be provided with at least one microphone 170C. In other embodiments, the electronic device can be provided with two microphones 170C, which can not only collect sound signals but also realize noise reduction function. In other embodiments, the electronic device can also be provided with three, four or more microphones 170C to realize the collection of sound signals, noise reduction, identification of sound sources, and realization of directional recording function, etc.
  • the earphone interface 170D is used to connect a wired earphone.
  • the earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A can be set on the display screen 194.
  • The capacitive pressure sensor may include at least two parallel plates with conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device determines the intensity of the pressure based on the change in capacitance. When a touch operation acts on the display screen 194, the electronic device detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device can also calculate the touch position according to the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities can correspond to different operation instructions. For example: when a touch operation with a touch operation intensity less than the first pressure threshold acts on the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
  • the gyroscope sensor 180B can be used to determine the motion posture of the electronic device.
  • The angular velocity of the electronic device around three axes (i.e., the x, y, and z axes) can be determined through the gyroscope sensor 180B.
  • the gyroscope sensor 180B can be used for anti-shake shooting. For example, when the shutter is pressed, the gyroscope sensor 180B detects the angle of the electronic device shaking, calculates the distance that the lens module needs to compensate based on the angle, and allows the lens to offset the shaking of the electronic device through reverse movement to achieve anti-shake.
  • the gyroscope sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device calculates the altitude through the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device can use the magnetic sensor 180D to detect the opening and closing of the flip leather case.
  • When the electronic device is a flip phone, the electronic device can detect the opening and closing of the flip cover according to the magnetic sensor 180D. Then, according to the detected opening and closing state of the leather case or the opening and closing state of the flip cover, the flip cover can be automatically unlocked.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device in all directions (generally three axes). When the electronic device is stationary, it can detect the magnitude and direction of gravity. It can also be used to identify the posture of the electronic device and is applied to applications such as horizontal and vertical screen switching and pedometers.
  • the distance sensor 180F is used to measure the distance.
  • the electronic device can measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device can use the distance sensor 180F to measure the distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device emits infrared light outward through the light emitting diode.
  • the electronic device uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device. When insufficient reflected light is detected, the electronic device can determine that there is no object near the electronic device.
  • the electronic device can use the proximity light sensor 180G to detect when the user holds the electronic device close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather case mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the ambient light brightness.
  • the electronic device can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device is in a pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints. Electronic devices can use the collected fingerprint characteristics to achieve fingerprint unlocking, access application locks, fingerprint photography, fingerprint call answering, etc.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device reduces the performance of a processor located near the temperature sensor 180J to reduce power consumption and implement thermal protection.
  • When the temperature is lower than another threshold, the electronic device heats the battery 142 to avoid abnormal shutdown of the electronic device due to low temperature.
  • the electronic device boosts the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
  • the touch sensor 180K is also called a "touch control device”.
  • the touch sensor 180K can be set on the display screen 194.
  • the touch sensor 180K and the display screen 194 form a touch screen, also called a "touch control screen”.
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K can also be set on the surface of the electronic device, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can obtain a vibration signal. In some embodiments, the bone conduction sensor 180M can obtain a vibration signal of a vibrating bone block of the vocal part of the human body. The bone conduction sensor 180M can also contact the human pulse to receive a blood pressure beat signal. In some embodiments, the bone conduction sensor 180M can also be set in an earphone and combined into a bone conduction earphone.
  • the audio module 170 can parse out a voice signal based on the vibration signal of the vibrating bone block of the vocal part obtained by the bone conduction sensor 180M to realize a voice function.
  • the application processor can parse the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M to realize a heart rate detection function.
  • the key 190 includes a power key, a volume key, etc.
  • the key 190 can be a mechanical key or a touch key.
  • the electronic device can receive key input and generate key signal input related to user settings and function control of the electronic device.
  • Motor 191 can generate vibration prompts.
  • Motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • touch operations acting on different areas of the display screen 194 can also correspond to different vibration feedback effects.
  • different application scenarios (for example: time reminders, receiving messages, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power changes, messages, missed calls, notifications, etc.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be connected to and separated from the electronic device by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195.
  • the electronic device can support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards can be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 can also be compatible with external memory cards.
  • the electronic device interacts with the network through the SIM card to implement functions such as calls and data communications.
  • the electronic device uses an eSIM, i.e., an embedded SIM card.
  • the eSIM card can be embedded in the electronic device and cannot be separated from the electronic device.
  • FIG. 10 is an exemplary schematic diagram of the software architecture of the electronic device provided in an embodiment of the present application.
  • the software system of the electronic device may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture.
  • the embodiment of the present invention takes the Android system of the layered architecture as an example to exemplify the software structure of the electronic device.
  • the layered architecture divides the software into several layers, each with clear roles and division of labor.
  • the layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, from top to bottom: the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides application programming interface (API) and programming framework for the applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
  • the window manager is used to manage window programs.
  • the window manager can obtain the display screen size, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • the data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying images, etc.
  • the view system can be used to build applications.
  • a display interface can be composed of one or more views.
  • a display interface including a text notification icon can include a view for displaying text and a view for displaying images.
  • the phone manager is used to provide communication functions for electronic devices, such as the management of call status (including answering, hanging up, etc.).
  • the resource manager provides various resources for applications, such as localized strings, icons, images, layout files, video files, and so on.
  • the notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages and can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also be a notification that appears in the system top status bar in the form of a chart or scroll bar text, such as notifications of applications running in the background, or a notification that appears on the screen in the form of a dialog window. For example, a text message is displayed in the status bar, a prompt sound is emitted, an electronic device vibrates, an indicator light flashes, etc.
  • Android Runtime includes core libraries and virtual machines. Android Runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the function that needs to be called by the Java language, and the other part is the Android core library.
  • the application layer and the application framework layer run in a virtual machine.
  • the virtual machine executes the Java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules, for example: a browser engine (webkit), a rendering engine (such as the Skia library), a surface compositor, a hardware synthesis strategy module, a media library (Media Libraries), an image processing library (such as OpenGL ES), etc.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as static image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the image processing library is used to implement 3D graphics drawing, image rendering, etc.
  • a rendering engine is a drawing engine for 2D graphics.
  • the rendering engine provides one or more preset three-dimensional animation effects, and these effects are implemented by custom shaders (a minimal shader sketch is given after this list).
  • the kernel layer is the layer between hardware and software.
  • the kernel layer includes display driver, camera driver, audio driver, sensor driver, etc.
  • FIG. 11A and FIG. 11B are exemplary schematic diagrams of the data flow and software module interaction of the interface generation method provided in an embodiment of the present application.
  • the application configures the animation through the animation module in the view system.
  • the application saves each drawing operation as a drawing operation structure by calling the rendering engine, and then converts the drawing operation structures into calls to the image processing library (a recording sketch is given after this list).
  • the image processing library sends the GPU instruction corresponding to the interface to the GPU driver, and the GPU driver generates a rendering task for rendering and generating a bitmap.
  • the content shown in FIG. 11A above can be considered as an exemplary introduction to the data flow and software module interaction of the interface generation method during the first frame interface generation process.
  • the application also needs to determine the description information of the interface during the animation process based on the view system, and then pass this description information to the GPU driver; the GPU driver correspondingly generates an update task to update the uniform, for example, by fetching the uniform from GPU memory and modifying it.
  • alternatively, the application determines the description information of the interface during the animation process based on the view system, derives update information from that description information, and passes the update information to the GPU driver; the update task generated by the GPU driver then updates the uniform based on the update information, and thereby updates the GPU instructions (a uniform-update sketch is given after this list).
  • when the animation module determines that the animation involves a texture update, it can configure a texture update task for the GPU through the image processing library.
  • the texture update task updates the texture identifier in the GPU instruction, so that the updated texture is rendered into the bitmap when the GPU executes the GPU instruction (a texture-upload sketch is given after this list).
  • the term "when" may be interpreted to mean “if" or “after" or “in response to determining" or “in response to detecting", depending on the context.
  • the phrases “upon determining" or “if (the stated condition or event) is detected” may be interpreted to mean “if determining" or “in response to determining" or “upon detecting (the stated condition or event)” or “in response to detecting (the stated condition or event)", depending on the context.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions can be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions can be transmitted from a website site, a computer, a server or a data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, wireless, microwave, etc.) mode to another website site, computer, server or data center.
  • the computer-readable storage medium can be any available medium that a computer can access, or a data storage device such as a server or data center that integrates one or more available media.
  • the available medium can be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid-state drive), etc.
  • the processes can be completed by a computer program instructing the related hardware, and the program can be stored in a computer-readable storage medium.
  • the programs can include the processes of the above-mentioned method embodiments.
  • the aforementioned storage media include: ROM or random access memory RAM, magnetic disk or optical disk and other media that can store program codes.
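
To make the bullet about preset three-dimensional animation effects a little more concrete, the following is a minimal sketch of a custom fragment shader compiled from a string. The GLSL body (a time-driven shading sweep controlled by a uTime uniform) and all names are illustrative assumptions; the patent only states that the preset effects are implemented by custom shaders.

```cpp
#include <GLES3/gl3.h>

// Hypothetical preset "3D" effect: the GLSL body below is a made-up
// time-driven shading sweep, not an effect taken from the patent.
static const char* kEffectFragmentSrc = R"(#version 300 es
precision mediump float;
uniform float uTime;   // advanced each frame by the update task
in vec2 vUV;
out vec4 fragColor;
void main() {
    float shade = 0.5 + 0.5 * sin(uTime + vUV.x * 6.2831);
    fragColor = vec4(vec3(shade), 1.0);
})";

GLuint compilePresetEffectShader() {
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &kEffectFragmentSrc, nullptr);
    glCompileShader(fs);
    return fs;  // linking into a program object is omitted for brevity
}
```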
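
The bullet about saving drawing operations as drawing operation structures can be sketched with the public Skia recording API. This is a simplified illustration under the assumption that a GPU-backed SkSurface is available; the internal DrawOP/DrawListOP structures and the driver-side rendering task are only approximated by the record-and-replay pattern shown here.

```cpp
#include "include/core/SkCanvas.h"
#include "include/core/SkPaint.h"
#include "include/core/SkPicture.h"
#include "include/core/SkPictureRecorder.h"
#include "include/core/SkSurface.h"

// Record the draw operations of one frame; nothing is rasterized yet,
// the calls are stored as draw-op structures inside the recording.
sk_sp<SkPicture> recordFrame() {
    SkPictureRecorder recorder;
    SkCanvas* canvas = recorder.beginRecording(SkRect::MakeWH(1080, 2340));

    SkPaint paint;
    paint.setColor(SK_ColorBLUE);
    canvas->drawRect(SkRect::MakeLTRB(100, 200, 500, 600), paint);

    return recorder.finishRecordingAsPicture();
}

// Replaying the recording on a GPU-backed surface is what produces the GPU
// work that the driver turns into a rendering task for the bitmap.
void renderFrame(SkSurface* gpuSurface, const sk_sp<SkPicture>& picture) {
    gpuSurface->getCanvas()->drawPicture(picture);
    gpuSurface->flushAndSubmit();  // some Skia releases expose this on GrDirectContext instead
}
```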
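
The uniform-update path can be sketched with plain OpenGL ES calls. The uniform name uCtrlRect and the packing of a control's rectangle into one vec4 are assumptions made for illustration; the patent only states that the update task rewrites a uniform that the bound render pipeline reads.

```cpp
#include <GLES3/gl3.h>

// Per-frame update of the animated control's geometry: no draw call is
// re-recorded, only the uniform read by the render pipeline is rewritten.
void updateControlRect(GLuint program, float left, float top,
                       float right, float bottom) {
    glUseProgram(program);
    // "uCtrlRect" is a hypothetical vec4 uniform consumed by the vertex shader.
    GLint location = glGetUniformLocation(program, "uCtrlRect");
    if (location >= 0) {
        glUniform4f(location, left, top, right, bottom);
    }
}
```

On every vertical synchronization signal during the animation, only a call like this (driven by the configured update task) changes; the previously generated rendering instruction is reused unchanged.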
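
When the animation also changes a control's foreground or background map, the texture identifier referenced by the rendering instruction has to be swapped. A minimal OpenGL ES sketch, assuming the new image has already been decoded into a pixel buffer in memory:

```cpp
#include <GLES3/gl3.h>

// Upload a newly decoded foreground/background image and return the texture
// id that the texture update task writes into the rendering instruction.
GLuint uploadReplacementTexture(const void* pixels, int width, int height) {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}
```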

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This application discloses an interface generation method and an electronic device, and relates to the field of electronic technology. The interface generation method provided by this application includes: in the process in which the electronic device displays an animation through an application, taking the continuity of the interface during the animation into account, after a first rendering instruction is generated and one frame of the animation is generated based on the first rendering instruction, the first rendering instruction is directly modified into a second rendering instruction based on the logic of the animation, and another frame of the animation is generated based on the second rendering instruction, without the UI thread performing measurement, layout, or draw recording, and without the render thread generating every frame of the interface during the animation, thereby improving the energy efficiency ratio of interface generation. Here, the continuity of the interface means that, although properties of a control such as its size, position, transparency and/or color change during the animation, the control exists throughout the animation.

Description

界面生成方法及电子设备
本申请要求在2022年10月19日提交中国国家知识产权局、申请号为202211281882.4的中国专利申请的优先权,发明名称为“界面生成方法及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及电子技术领域,尤其涉及界面生成方法及电子设备。
背景技术
随着电子技术的发展,电子设备的屏幕的分辨率和刷新率越来越高,其中,屏幕的分辨率影响一帧界面中包含的像素,刷新率影响生成一帧界面的时间。
在电子设备显示第一帧界面前,电子设备需要花费计算资源生成该第一帧界面;在电子设备显示第二帧界面前,电子设备需要重新花费计算资源生成该第二帧界面。
当电子设备未及时生成该第二帧界面时,电子设备的屏幕上显示的内容会发生卡顿。而电子设备为了保障能够及时生成该第二帧界面,往往通过提高CPU的工作频率以提升电子设备的计算能力,进而导致电子设备生成一帧界面的能耗较高,降低了界面生成的能效比。
发明内容
本申请实施例提供了界面生成方法及电子设备,考虑到动画的连续性,在电子设备生成第一渲染指令后,可以直接基于动画的逻辑更新第一渲染指令,并基于更新后的第一渲染指令生成动画过程中的界面。进而,本申请实施例提供的界面生成方法,可以通过减少应用程序遍历视图和渲染生成位图的次数,进而提升生成界面的能效比。
第一方面,本申请实施例提供了一种界面生成方法,应用于电子设备,该电子设备上安装有第一应用,该方法包括:在该电子设备接收到第一操作后,该电子设备确定第一动画过程中界面的描述信息,该第一操作用于触发该电子设备通过该第一应用显示该第一动画;该电子设备生成第一渲染指令,渲染指令为用于被GPU执行以生成第一界面的数据,该第一界面为该第一动画过程中的一帧界面;该电子设备基于该动画过程中界面的描述信息更新该第一渲染指令为第二渲染指令;该电子设备基于该第二渲染指令生成该第二界面,该第二界面与该第一界面不同,该第二界面为该第一动画过程中的一帧界面。
在上述实施例中,电子设备通过基于动画的逻辑直接修改第一渲染指令为第二渲染指令,减少了生成第二指令的开销,进而提升了生成界面的能效比。
结合第一方面的一些实施例,在一些实施例中,该电子设备基于该动画过程中界面的描述信息更新该第一渲染指令为第二渲染指令,具体包括:该电子设备基于该第一动画过程中界面的描述信息确定第一参数,该第一参数用于描述第一控件在该第二界面中的发生变化的属性,该第一控件为在该第二界面中显示效果发生变化的视图;该电子设备基于该第一参数更新该第一渲染指令得到该第二渲染指令。
在上述实施例中,电子设备可以确定发生变化的控件,并确定第二控件发生变化的属性,然后修改第一渲染指令为第二渲染指令。
结合第一方面的一些实施例,在一些实施例中,在该电子设备接收到第一操作前,该方法还包括:该电子设备显示桌面,该桌面包括第一控件,该第一控件对应于第一应用;该电子设备接收到第一操作,具体包括:该电子设备检测到用户点击该第一控件;在该电子设备接收到第一操作后,该方法还包括:该第一动画为该第一应用的启动动画,在该第一动画中,第二控件的位置和大小发生变化;该电子设备基于该第一动画过程中界面的描述信息确定第一参数,具体包括:该第一电子设备确定第一位置,该第一参数包括该第一位置,该第一位置为该第二控件在第二界面中的位置,该第二界面为该第一动画的非第一帧界面;该电子设备基于该第一参数更新该第一渲染指令得到该第二渲染指令,具体包括:该电子设备通过将该第一渲染指令中的该第二控件的顶点位置修改为该第一位置得到该第二渲染指令。
在上述实施例中,应用程序的启动动画过程中,第二控件的位置和大小发生变化,则可以通过修改第一渲染指令,得到第二界面对应的第二渲染指令,进而基于第二渲染指令生成第二界面。
结合第一方面的一些实施例,在一些实施例中,该电子设备通过将该第一渲染指令中的该第二控件的顶点位置修改为该第一位置得到该第二渲染指令,具体包括:该电子设备基于该第一参数更新第一方法调用所使用的顶点位置为该第一位置以得到该第二渲染指令,该第一方法调用为用于被GPU指令以绘制该第 二控件的方法调用。
在上述实施例中,由于第二控件的位置和大小发生变化,则可以通过修改第一渲染指令中绘制第二控件的方法调用中的输入参数以得到第二渲染指令。
结合第一方面的一些实施例,在一些实施例中,该第一参数用于描述该第一视图的颜色、顶点位置、透明度和/或缩放比例。
在上述实施例中,第一参数可以包括用于描述动画涉及的视图的颜色、顶点位置、透明度和/或缩放比例等,在此不作限定。
结合第一方面的一些实施例,在一些实施例中,该第一参数被该第一应用写入第一数据结构体中,该第一数据结构体与渲染管线绑定;在该渲染管线中,该第一参数被该电子设备读取以修改渲染指令。
在上述实施例中,第一参数需要传递到渲染管线中才能在修改渲染指令时被电子设备读取。
结合第一方面的一些实施例，在一些实施例中，该第一数据结构体为uniform。
在上述实施例中,第一参数可以位于uniform中,进而参与到渲染管线中。
结合第一方面的一些实施例,在一些实施例中,该方法还包括:在该电子设备接收到第一操作后,该电子设备配置更新任务,该更新任务被配置在GPU驱动中;该电子设备基于该第一参数更新该第一渲染指令得到该第二渲染指令,具体包括:该电子设备通过更新任务将该第一渲染指令中的第二参数替换为该第一参数以得到该第二渲染指令,该第二参数用于描述该第一控件在该第一界面中的属性。
在上述实施例中,可以在GPU驱动中配置更新任务,在驱动GPU生成第二界面前,替换渲染指令中的第二参数为第一参数,进而生成第二渲染指令。
结合第一方面的一些实施例,在一些实施例中,该方法还包括:该电子设备基于该第一动画过程中界面的描述信息确定该第一控件在该第一界面中的背景贴图或前景贴图与该第一控件在该第二界面中的背景贴图或前景贴图不同;该电子设备加载第一纹理,该第一纹理为该第一控件在该第二界面中的背景贴图或前景贴图,该第一参数包括该第一纹理的标识,该第一纹理为该第一视图在该第二界面中的背景贴图或前景贴图。
在上述实施例中,当动画还涉及视图的纹理更新时,由于第二渲染指令不是由电子设备在遍历视图后基于渲染树生成的,故还需要电子设备读取纹理内存中,并将纹理的标识通过第一参数传递到渲染管线中。
结合第一方面的一些实施例,在一些实施例中,该第一渲染指令和该第二渲染指令的方法调用相同,该第一渲染指令中的资源和该第二渲染指令中的资源不同,该第一渲染指令中的资源为该第一渲染指令中的方法调用在被执行时所使用的变量或固定值,该第二渲染指令中的资源为该第二渲染指令中的方法调用在被执行时所使用的变量或固定值。
结合第一方面的一些实施例,在一些实施例中,该电子设备接收到第一操作后,该电子设备确定第一动画过程中界面的描述信息,具体包括:该电子设备接收到该第一操作后,该电子设备确定被触发的动画为该第一动画;该电子设备确定该第一动画涉及的视图,该第一动画涉及的视图为在该第一动画过程中显示内容发生变化的视图;该电子设备确定该第一动画过程中一帧或多帧界面中该第一动画涉及的视图的属性。
在上述实施例中,第一渲染指令和第二渲染指令的方法调用相同,而方法调用所使用的资源不同。
结合第一方面的一些实施例,在一些实施例中,该电子设备接收到第一操作后,该电子设备确定第一动画过程中界面的描述信息,具体包括:该电子设备接收到该第一操作后,该电子设备确定被触发的动画为该第一动画;该电子设备确定该第一动画涉及的视图,该第一动画涉及的视图为在该第一动画过程中显示内容发生变化的视图;该电子设备确定该第一动画过程中一帧或多帧界面中该第一动画涉及的视图的属性。
在上述实施例中,电子设备可以确定被触发动画过程中界面的描述信息,进而确定第一参数。其次,由于无需等到要生成下一帧界面时才知道下一帧界面中视图的属性,所以在动画过程中无需UI线程和渲染线程参与。
第二方面,本申请实施例提供了一种界面生成方法,应用于电子设备,该电子设备上安装有第一应用,该方法包括:在该电子设备接收到第一操作后,该第一应用生成第一渲染树,该第一渲染树保存有用于生成第一界面的绘制操作,该第一界面为第一动画中的该第一应用的第一帧界面,该第一操作用于触发该电子设备通过该第一应用显示该第一动画,在该第一界面和第二界面中第一控件的位置发生变化,在该第二界面中该第一控件在第一位置,该第二界面为该第一动画中的非第一帧界面;该第一应用将该第一渲染树转换为第一渲染指令;该电子设备基于该第一渲染指令调用GPU生成该第一界面;该电子设备将该第一渲 染指令中的第一参数更新为该第一位置以得到第二渲染指令;该电子设备基于该第二渲染指令调用GPU生成该第二界面。
在上述实施例中,在动画的第一帧界面生成过程中,电子设备基于渲染树生成第一渲染指令;在动画的其他帧界面的生成过程中,电子设备更新第一渲染指令为第二渲染指令,进而生成其他帧界面,无需电子设备生成与第二渲染指令对应的渲染树,也无需电子设备基于渲染树生成第二渲染指令,提升了动画过程中界面生成的能效比。
结合第二方面的一些实施例,在一些实施例中,在该电子设备接收到第一操作后,该电子设备确定该第一动画过程中界面的描述信息,该第一动画过程中界面的描述信息包括该第二位置。
在上述实施例中,电子设备还需要在第二帧界面开始生成前,确定第二帧界面视图的属性,进而更新第一渲染指令为第二渲染指令。
结合第二方面的一些实施例,在一些实施例中,在该电子设备接收到第一操作后,该电子设备配置更新任务,该更新任务被配置在GPU驱动中;该电子设备将该第一渲染指令中的第一参数更新为该第一位置以得到第二渲染指令,具体包括:该电子设备通过该更新任务将该第一渲染指令中的该第一参数更新为该第一位置以得到该第二渲染指令。
在上述实施例中,可以更改在GPU驱动中配置更新任务,进而更新第一渲染指令为第二渲染指令;在生成第二渲染指令后,GPU生成第二帧界面。
结合第二方面的一些实施例,在一些实施例中,该方法还包括:该电子设备基于该第一动画过程中界面的描述信息确定该第一控件在该第一界面中的背景贴图或前景贴图与该第一控件在该第二界面中的背景贴图或前景贴图不同;该电子设备加载第一纹理,该第一纹理为该第一控件在该第二界面中的背景贴图或前景贴图,该第一参数包括该第一纹理的标识,该第一纹理为该第一视图在该第二界面中的背景贴图或前景贴图。
在上述实施例中,当动画还涉及视图的纹理更新时,由于第二渲染指令不是由电子设备在遍历视图后基于渲染树生成的,故还需要电子设备读取纹理内存中,并将纹理的标识通过第一参数传递到渲染管线中。
第三方面,本申请实施例提供了一种电子设备,该电子设备上安装有第一应用,该电子设备包括:一个或多个处理器和存储器;该存储器与该一个或多个处理器耦合,该存储器用于存储计算机程序代码,该计算机程序代码包括计算机指令,该一个或多个处理器调用该计算机指令以使得该电子设备执行:在该电子设备接收到第一操作后,该电子设备确定第一动画过程中界面的描述信息,该第一操作用于触发该电子设备通过该第一应用显示该第一动画;该电子设备生成第一渲染指令,渲染指令为用于被GPU执行以生成第一界面的数据,该第一界面为该第一动画过程中的一帧界面;该电子设备基于该动画过程中界面的描述信息更新该第一渲染指令为第二渲染指令;该电子设备基于该第二渲染指令生成该第二界面,该第二界面与该第一界面不同,该第二界面为该第一动画过程中的一帧界面。
结合第三方面的一些实施例,在一些实施例中,该一个或多个处理器,具体用于调用该计算机指令以使得该电子设备执行:该电子设备基于该第一动画过程中界面的描述信息确定第一参数,该第一参数用于描述第一控件在该第二界面中的发生变化的属性,该第一控件为在该第二界面中显示效果发生变化的视图;该电子设备基于该第一参数更新该第一渲染指令得到该第二渲染指令。
结合第三方面的一些实施例,在一些实施例中,该一个或多个处理器,还用于调用该计算机指令以使得该电子设备执行:该电子设备显示桌面,该桌面包括第一控件,该第一控件对应于第一应用;该一个或多个处理器,具体用于调用该计算机指令以使得该电子设备执行:该电子设备检测到用户点击该第一控件;该一个或多个处理器,还用于调用该计算机指令以使得该电子设备执行:该第一动画为该第一应用的启动动画,在该第一动画中,第二控件的位置和大小发生变化;该一个或多个处理器,具体用于调用该计算机指令以使得该电子设备执行:该第一电子设备确定第一位置,该第一参数包括该第一位置,该第一位置为该第二控件在第二界面中的位置,该第二界面为该第一动画的非第一帧界面;该电子设备通过将该第一渲染指令中的该第二控件的顶点位置修改为该第一位置得到该第二渲染指令。
结合第三方面的一些实施例,在一些实施例中,该一个或多个处理器,具体用于调用该计算机指令以使得该电子设备执行:该电子设备基于该第一参数更新第一方法调用所使用的顶点位置为该第一位置以得到该第二渲染指令,该第一方法调用为用于被GPU指令以绘制该第二控件的方法调用。
结合第三方面的一些实施例,在一些实施例中,该第一参数用于描述该第一视图的颜色、顶点位置、透明度和/或缩放比例。
结合第三方面的一些实施例，在一些实施例中，该第一参数被该第一应用写入第一数据结构体中，该第一数据结构体与渲染管线绑定；在该渲染管线中，该第一参数被该电子设备读取以修改渲染指令。
结合第三方面的一些实施例，在一些实施例中，该第一数据结构体为uniform。
结合第三方面的一些实施例,在一些实施例中,该一个或多个处理器,还用于调用该计算机指令以使得该电子设备执行:在该电子设备接收到第一操作后,该电子设备配置更新任务,该更新任务被配置在GPU驱动中;该电子设备基于该第一参数更新该第一渲染指令得到该第二渲染指令,具体包括:该电子设备通过更新任务将该第一渲染指令中的第二参数替换为该第一参数以得到该第二渲染指令,该第二参数用于描述该第一控件在该第一界面中的属性。
结合第三方面的一些实施例,在一些实施例中,该一个或多个处理器,还用于调用该计算机指令以使得该电子设备执行:该电子设备基于该第一动画过程中界面的描述信息确定该第一控件在该第一界面中的背景贴图或前景贴图与该第一控件在该第二界面中的背景贴图或前景贴图不同;该电子设备加载第一纹理,该第一纹理为该第一控件在该第二界面中的背景贴图或前景贴图,该第一参数包括该第一纹理的标识,该第一纹理为该第一视图在该第二界面中的背景贴图或前景贴图。
结合第三方面的一些实施例,在一些实施例中,该第一渲染指令和该第二渲染指令的方法调用相同,该第一渲染指令中的资源和该第二渲染指令中的资源不同,该第一渲染指令中的资源为该第一渲染指令中的方法调用在被执行时所使用的变量或固定值,该第二渲染指令中的资源为该第二渲染指令中的方法调用在被执行时所使用的变量或固定值。
结合第三方面的一些实施例,在一些实施例中,该一个或多个处理器,具体用于调用该计算机指令以使得该电子设备执行:该电子设备接收到该第一操作后,该电子设备确定被触发的动画为该第一动画;该电子设备确定该第一动画涉及的视图,该第一动画涉及的视图为在该第一动画过程中显示内容发生变化的视图;该电子设备确定该第一动画过程中一帧或多帧界面中该第一动画涉及的视图的属性。
第四方面,本申请实施例提供了一种电子设备,该电子设备上安装有第一应用,该电子设备包括:一个或多个处理器和存储器;该存储器与该一个或多个处理器耦合,该存储器用于存储计算机程序代码,该计算机程序代码包括计算机指令,该一个或多个处理器调用该计算机指令以使得该电子设备执行:在该电子设备接收到第一操作后,该第一应用生成第一渲染树,该第一渲染树保存有用于生成第一界面的绘制操作,该第一界面为第一动画中的该第一应用的第一帧界面,该第一操作用于触发该电子设备通过该第一应用显示该第一动画,在该第一界面和第二界面中第一控件的位置发生变化,在该第二界面中该第一控件在第一位置,该第二界面为该第一动画中的非第一帧界面;该第一应用将该第一渲染树转换为第一渲染指令;该电子设备基于该第一渲染指令调用GPU生成该第一界面;该电子设备将该第一渲染指令中的第一参数更新为该第一位置以得到第二渲染指令;该电子设备基于该第二渲染指令调用GPU生成该第二界面。
结合第四方面的一些实施例,在一些实施例中,该一个或多个处理器,还用于调用该计算机指令以使得该电子设备执行:在该电子设备接收到第一操作后,该电子设备确定该第一动画过程中界面的描述信息,该第一动画过程中界面的描述信息包括该第二位置。
结合第四方面的一些实施例,在一些实施例中,该一个或多个处理器,还用于调用该计算机指令以使得该电子设备执行:在该电子设备接收到第一操作后,该电子设备配置更新任务,该更新任务被配置在GPU驱动中;该电子设备将该第一渲染指令中的第一参数更新为该第一位置以得到第二渲染指令,具体包括:该电子设备通过该更新任务将该第一渲染指令中的该第一参数更新为该第一位置以得到该第二渲染指令。
结合第四方面的一些实施例,在一些实施例中,该一个或多个处理器,还用于调用该计算机指令以使得该电子设备执行:该电子设备基于该第一动画过程中界面的描述信息确定该第一控件在该第一界面中的背景贴图或前景贴图与该第一控件在该第二界面中的背景贴图或前景贴图不同;该电子设备加载第一纹理,该第一纹理为该第一控件在该第二界面中的背景贴图或前景贴图,该第一参数包括该第一纹理的标识,该第一纹理为该第一视图在该第二界面中的背景贴图或前景贴图。
第五方面,本申请实施例提供了一种芯片系统,该芯片系统应用于电子设备,该芯片系统包括一个或多个处理器,该处理器用于调用计算机指令以使得上述电子设备执行如第一方面、第二方面、第一方面中任一可能的实现方式以及第二方面中任一可能的实现方式描述的方法。
第六方面,本申请实施例提供一种包含指令的计算机程序产品,当上述计算机程序产品在电子设备上运行时,使得上述电子设备执行如第一方面、第二方面、第一方面中任一可能的实现方式以及第二方面中任一可能的实现方式描述的方法。
第七方面,本申请实施例提供一种计算机可读存储介质,包括指令,当上述指令在电子设备上运行时,使得上述电子设备执行如第一方面、第二方面、第一方面中任一可能的实现方式以及第二方面中任一可能 的实现方式描述的方法。
可以理解地,上述第四方面和第五方面提供的电子设备、第五方面提供的芯片系统、第六方面提供的计算机程序产品和第七方面提供的计算机存储介质均用于执行本申请实施例所提供的方法。因此,其所能达到的有益效果可参考对应方法中的有益效果,此处不再赘述。
附图说明
图1为本申请实施例提供的应用程序生成位图的一个示例性示意图。
图2A和图2B为本申请实施例提供的动画过程中界面变化的一个示例性示意图。
图3为本申请实施例提供的动画过程中界面生成的一个示例性示意图。
图4A和图4B为本申请实施例提供的UI线程执行测量方法调用和布局方法调用确定非动画对象的控件的位置和大小的一个示例性示意图。
图5为本申请实施例提供的界面生成方法的流程的一个示例性示意图。
图6A为本申请实施例提供的通过更新顶点位置更新GPU指令的一个示例性示意图。
图6B-图6E为本申请实施例提供的通过更新顶点位置更新GPU指令的另一个示例性示意图。
图7A-图7D为本申请实施例提供的界面生成方法中数据流程的一个示例性示意图。
图8A和图8B为本申请实施例提供的通过实现三维动画方法的一个示例性示意图。
图9为本申请实施例提供的电子设备的硬件结构的一个示例性示意图。
图10为本申请实施例提供的电子设备的软件架构的一个示例性示意图。
图11A和图11B为本申请实施例提供的界面生成方法的数据流动与软件模块交互的一个示例性示意图。
具体实施方式
本申请以下实施例中所使用的术语只是为了描述特定实施例的目的,而并非旨在作为对本申请的限制。如在本申请的说明书和所附权利要求书中所使用的那样,单数表达形式“一个”、“一种”、“该”、“上述”、“该”和“这一”旨在也包括复数表达形式,除非其上下文中明确地有相反指示。还应当理解,本申请中使用的术语“和/或”是指并包含一个或多个所列出项目的任何或所有可能组合。
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为暗示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征,在本申请实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
本申请以下实施例中的术语“用户界面(user interface,UI)”,是应用程序或操作系统与用户之间进行交互和信息交换的介质接口,它实现信息的内部形式与用户可以接受形式之间的转换。用户界面是通过java、可扩展标记语言(extensible markup language,XML)等特定计算机语言编写的源代码,界面源代码在电子设备上经过解析,渲染,最终呈现为用户可以识别的内容。用户界面常用的表现形式是图形用户界面(graphic user interface,GUI),是指采用图形方式显示的与计算机操作相关的用户界面。它可以是在电子设备的显示屏中显示的文本、图标、按钮、菜单、选项卡、文本框、对话框、状态栏、导航栏、Widget等可视的界面元素。
为了便于理解,下面先对本申请实施例涉及的相关术语及相关概念进行介绍。本发明的实施方式部分使用的术语仅用于对本发明的具体实施例进行解释,而非旨在限定本发明。
界面作为应用程序与用户之间的交互和信息交互的介质接口,在每一次垂直同步信号到来时,电子设备需要为前台的应用程序生成该应用程序的界面。其中,垂直同步信号的频率与电子设备的屏幕的刷新率有关,例如垂直同步信号的频率与电子设备的屏幕的刷新率相同。
即每次电子设备刷新屏幕上显示的内容前,都需要为前台应用生成该应用程序的界面,以在屏幕刷新时向用户展现应用程序的新生成的界面。
其中,电子设备生成应用程序的界面需要应用程序自己渲染生成位图(bitmap),将自己的位图传递给表面合成器(SurfaceFlinger)。即,应用程序作为生产者执行绘制生成位图,将该位图存入表面合成器提供的缓冲队列(BufferQueue)中;表面合成器作为消费者不断的从BufferQueue获取应用程序生成的位图。其中,位图位于应用程序生成的surface上,该surface会被填入BufferQueue中。
在表面合成器获得可见的应用程序的位图后,表面合成器与硬件合成策略模块(Hardware Composer,HWC)确定位图作为图层(layer)的图层合成的方式。
在表面合成器和/或硬件合成策略模块执行位图合成后,由表面合成器和/或硬件合成策略模块将合成后的位图填入帧缓冲(Frame Buffer)中传递给显示子系统(Display Subsystem,DSS),DSS在拿到合成后的位图可以将该合成后的位图显示到屏幕上。该帧缓冲可以是在屏缓冲(on-screenbuffer)。其中,位图在表面合成器上也可以称为图层。
其中,应用程序生成位图的过程如下图1所示。
图1为本申请实施例提供的应用程序生成位图的一个示例性示意图。
如图1所示,应用程序在接收到垂直同步信号(Vsync)后,开始生成位图,具体的步骤可以分为四步,分别为步骤S101、步骤S102、步骤S103和步骤S104。
S101:主线程遍历应用程序的视图,将每个视图的绘制操作保存至新生成的渲染树中。
主线程(UI线程,UI Thread)让视图结构(viewhierarchy)失效,UI线程通过测量方法调用(measure())、布局方法调用(layout())、绘制方法调用(draw(),或可以称为绘制录制方法调用)遍历该应用程序的视图(view),确定并保存每个视图的绘制操作,并将视图和该视图涉及的绘制操作如(如drawline)录制到渲染树的渲染节点(RenderNode)的绘制指令列表(displaylist)中。其中,绘制指令列表中保存的数据可以为绘制操作结构体(DrawOP或DrawListOP)。
其中,视图是构成应用程序界面的基本元素,界面上的一个控件可以对应于一个或多个视图。
可选的,在本申请一些实施方式中,在绘制方法调用内,应用程序的UI线程还会读取视图上承载的内容至内存中。例如,图片视图(imageview)承载的图片,文本视图(textview)承载的文本。或者,在绘制方法调用内,应用程序的UI线程确定读取视图上承载的内容至内存中的操作,并录制到绘制指令列表中。绘制指令列表中的绘制操作结构体也可以称为绘制指令。
其中,绘制操作结构体为一个数据结构体,用于绘制图形,例如绘制线条、绘制矩形、绘制文本等。绘制操作结构体在渲染节点被遍历时会通过渲染引擎进而转换为图像处理库的API调用,OpenGLES库、Vulkan库、Metal库中的接口调用。例如在渲染引擎(Skia库)中,drawline会被封装为DrawLineOp,DrawLineOp是一个数据结构体,数据结构体里面包含有绘制的数据如线的长度、宽度等信息。DrawLineOp进一步会被封装为OpenGLES库、Vulkan库、Metal库中的接口调用,进而得到GPU指令。GPU指令用于调用GPU以生成位图。其中,OpenGLES库、Vulkan库、Metal库可以统称为图像处理库或图形渲染库,在一帧界面的生成过程中,电子设备通过OpenGLES库、Vulkan库或Metal库生成渲染指令。图像处理库提供图形渲染的API以及驱动支持等。
其中,GPU指令也可以称为渲染指令。
其中,DrawOP可以以链式的数据结构存储在应用程序的栈中。
其中,绘制指令列表可以是一个缓冲区,该缓冲区中记录有应用程序一帧界面所包括的所有绘制操作结构体或是所有绘制操作的标识,如地址、序号等。当应用程序有多个窗口、或者在不同的显示区域(display)上显示时,需要独立地生成与多个窗口对应的多个渲染树。
其中,渲染树是UI线程生成的,用于生成应用程序界面的一个数据结构体,渲染树可以包括多个渲染节点,每个渲染节点包括渲染属性和绘制指令列表。渲染树记录有生成应用程序一帧界面的部分或所有信息。
可选地,在本申请一些实施方式中,主线程可以只对脏区域(也可以称为需要被重绘的区域)的视图执行遍历,生成差分渲染树。其中,差分渲染树在被传递/同步到渲染线程后,渲染线程可以差分渲染树和上一帧渲染使用的渲染树确定本帧界面渲染需要使用的渲染树。
S102:主线程将渲染树同步到渲染线程,渲染树位于应用程序的栈中。
UI线程将渲染树传递/同步给渲染线程(Render Thread),其中,渲染树位于应用程序对应的进程的栈(stack)中。
S103:渲染线程执行渲染树中的绘制指令,生成位图。
渲染线程首先获取一个硬件画布(HardwareCanvas),并在该硬件画布上执行渲染树中的绘制操作,进而生成位图。其中,该硬件画布位于该应用程序持有的surface中,该surface中承载有位图或者其他格式的用于保存图像信息的数据。
S104:渲染线程发送承载位图的表面至表面合成器。
渲染线程通过表面将生成的位图发送到表面合成器上以参与图层合成。
可以认为步骤S101为构建阶段,主要负责确定该应用程序中每个视图的大小、位置、透明度等属性。例如,视图中的drawLine,构建中可以被封装成一个DrawLineOp,里面包含有绘制的数据如线的长度、宽度等,还可以包含有底层图像处理库的DrawLineOp对应的接口调用,用于在渲染阶段调用底层图形库生成位图。
类似的,可以认为步骤S103为渲染阶段,主要负责遍历渲染树的渲染节点,并执行每个渲染节点的绘制操作,进而在硬件画布上生成位图,在该过程中,渲染线程通过渲染引擎调用底层图形处理库,如OpenGLES库(或OpenGL库)、Vulkan库、Metal库等,进而调用GPU完成渲染以生成位图。
在大多数场景中,为了符合消费者的视觉习惯,应用程序显示的内容的变化往往是连续的而不是跳变的,也就是说在连续的多帧界面中,一般会出现相同的显示内容,例如下文中的图2A和图2B所示。
图2A和图2B为本申请实施例提供的动画过程中界面变化的一个示例性示意图。
如图2A所示,电子设备显示的界面为界面201,界面201可以是桌面应用程序的界面。在界面201中,电子设备显示有控件2A01、控件2A02、控件2A03、控件2A04,其中,控件2A02、控件2A03和控件2A04为控件2A01的子控件。控件2A01可以被称为文件夹图标,控件2A02为游戏应用程序对应的图标控件,控件2A03为手电筒应用程序对应的图标控件,控件2A04为图库对应的图标控件。
在用户点击控件2A01后控件2A01的大小位置发生变化如界面202所示。其中,点击控件2A01可以是点击控件2A01中的不包括控件2A02、控件2A03和控件2A04的区域。
在界面202中,随着控件2A01的大小位置发生变化,控件2A02、控件2A03和控件2A04的大小和位置也发生变化。
可选地,在本申请一些实施方式中,在用户点击界面201中的控件2A01后,界面202所示的界面的变化为本申请实施例中的一种动画。
如图2B所示,应用程序启动过程中的界面变化依次为界面203、界面204、界面205和界面206。其中,界面204和界面205即为本申请实施例中的一种动画涉及的界面变化。
其中,界面203为桌面应用程序的界面,界面203包括控件2B01,控件2B01为阅读应用程序的图标。在用户点击控件2B01后,图库应用程序启动,启动过程如界面204、界面205和界面206所示。
在界面204中,在控件2B01位置上出现新的控件2B02,且控件2B02不断扩大;在界面205中,控件2B02不断扩大至屏幕大小(可以不包括状态栏);最后,电子设备显示界面205,界面205为应用程序的启动界面(StartingWindow)。其中,控件2B02的不断扩大的过程即为本申请实施例的动画的一种,可以称该动画为启动动画。
可选地,在本申请一些实施方式中,界面205也可以为应用程序的主界面,即在启动动画结束后显示应用程序的主界面。其中,主界面可以为MainActivity对应的界面。
在上文中的图2A所示的动画中,应用程序的开发者只需指定控件2A01的在动画中的变化逻辑,即修改控件2A01在动画过程中的每一帧界面的视图属性。类似的,在上文中图2B所示的启动动画中,应用程序的开发者也需要指定每一帧界面中控件2B02的属性。
为了方便说明,认为控件2A01与一个视图对应;在更一般的情况下,控件2A01对应于多个视图。
图3为本申请实施例提供的动画过程中界面生成的一个示例性示意图。
如图3所示,动画过程中界面生成的过程可以包括如下五个步骤,分别为:步骤S301、步骤S302、步骤S303、步骤S304和步骤S305。其中,动画过程中的第一帧界面生成的过程可以包括:步骤S301、步骤S302、步骤S303、步骤S304和步骤S305;动画过程中的非第一帧界面生成的过程可以包括:步骤S302、步骤S303、步骤S304和步骤S305。
步骤S301:创建动画事件1。
动画事件可以在任意时刻被UI线程创建,与应用程序的逻辑有关,例如,可以是在接收到用户的输入、其他线程或进程向该应用程序发送的消息事件、网络数据请求更新后创建动画事件。动画事件中包括有实现动画效果的内部逻辑,例如动画效果的结束条件、动画效果持续时间内每一帧中视图属性的修改量。
动画事件在创建后,都会在UI线程注册回调(相当于注册动画事件),如UI线程的编舞者(Choregrapher)上注册回调,该回调用于UI线程在每次接收到垂直同步信号(Vsync)后,触发UI线程处理该动画事件,按照动画事件的逻辑修改视图的属性。
其中,在动画效果结束时,UI线程会按照动画事件的逻辑主动注销掉动画事件在UI线程注册的回调。
例如,在图2A所示的场景中,应用程序的UI线程在接收到用户点击控件2A01的操作后,会生成动画事件,如动画事件1。动画事件1用于在接收到垂直同步信号1后,修改动画对象的属性。其中,动画对象包括一个或多个视图。
可选地,在本申请一些实施方式中,动画对象并不包括用户感知到的发生变化的控件。例如,在图2A所示的场景中,动画对象为控件2A01,发生变化的控件还包括控件2A02、控件2A03和控件2A04。其中,控件2A02、控件2A03和控件2A04为被动画对象影响的发生变化的控件。即,动画事件并不修改控件2A02、控件2A03和控件2A04对应的视图的属性,而是由应用程序的UI线程根据布局确定的约束关系调整控件2A02、控件2A03和控件2A04对应的视图的属性。其中,应用程序的UI线程根据布局确定的约束关系调整控件2A02、控件2A03和控件2A04对应的视图的属性可以参考下文中的步骤S303的文字描述,此处不再赘述。
S302:接收到垂直同步信号后,触发动画事件1的回调,按照动画事件1的逻辑修改视图的属性。
应用程序的UI线程的在接收到垂直同步信号后,会依次处理输入事件(CALLBACK_INPUT)、动画事件(CALLBACK_ANIMATION)、遍历事件(CALLBACK_TRAVERSAL)和提交事件(CALLBACK_COMMIT)。
应用程序的UI线程在处理动画事件(如,doCallbacks(CALLBACK_ANIMATION))的过程中,会依据动画事件的逻辑修改视图的属性,在执行动画事件1中的代码后,动画对象的属性会被修改。例如,在图2A所示的场景中,若控件2A01对应的视图为图4A中的视图401,则修改前后视图401的宽、高和位置均不同,其中,视图401为动画对象。
S303:测量、布局、绘制录制以生成渲染树。
应用程序的UI线程执行测量方法调用、布局方法调用、绘制方法调用,具体的内容可以参考上文中图1对应的文字描述,此处不再赘述。
其中,在测量方法调用和布局方法调用中,需要基于视图的布局重新确定视图的大小和位置。
图4A和图4B为本申请实施例提供的UI线程执行测量方法调用和布局方法调用确定非动画对象的控件的位置和大小的一个示例性示意图。
如图4A所示,图4A中所示的内容与图2A所示的场景对应。其中,控件2A01与视图401对应,控件2A02与视图402对应,控件2A03与视图403对应、控件2A04与视图404对应。
在动画过程中,若视图401被应用程序的UI线程修改属性,如视图401的长度(高度)从40dp修改为80dp(视图401的长度为40dp时,视图402的长度为5dp),在应用程序的UI线程执行测量方法调用和布局方法调用的过程中,应用程序的UI线程基于“视图402的宽度为视图401宽度的1/8”这一约束关系,当视图401的长度变为80dp后,确定视图402的宽度变为10dp。
类似的,未在图4A中示出的视图402的宽度、视图403的长度和宽度、视图404的长度的宽度也可以通过布局文件(如XML文件)中的约束关系确定。
类似的,未在图4A中示出的视图402的位置、视图403的位置、视图404的位置也可以通过布局文件(如XML文件)中的约束关系确定。
如图4B所示,应用程序的界面包括视图405、视图406和视图407,并且视图405、视图406和视图407的水平(视图的宽度方向为水平)间隔固定(水平间隔固定为约束关系)。例如,水平间隔为5dp。
若动画事件的逻辑为将视图406(视图406为动画对象)的宽度从B1修改为B2,其中,B2大于B1大于0。在应用程序的UI线程将视图406的宽度修改为B2后,还需要修改视图407的位置以保证视图406和视图407的水平间隔固定。应用程序的UI线程在执行测量方法调用和布局方法调用后,可以确定视图407的位置变化。例如,应用程序的UI线程可以确定视图407的X轴位置变为x1。
故,在动画的过程中,UI线程在接收到垂直同步信号后,还需要执行测量、布局和绘制录制,即执行在动画的过程中的每一帧界面的生成过程中,应用程序的UI线程和渲染线程还需要不断地工作。
S304:接收渲染树、初始化GPU渲染的上下文,生成渲染树对应的GPU指令,指示GPU生成位图。
渲染线程在接收到渲染树后,会将渲染树中的渲染节点中的绘制指令列表和渲染属性转换为对应GPU指令。
可选地,在本申请一些实施方式中,GPU指令为图像处理库中的方法调用,或者GPU指为渲染引擎中的方法调用。
可选地,在本申请一些实施方式中,GPU指令GPU驱动接收到的指令。
S305:执行渲染树对应的GPU指令,生成位图。
GPU驱动或GPU在接收到渲染树对应的GPU指令后,执行该指令以生成位图。
结合上文图3、图4A和图4B所示的内容,由于动画过程中的界面变化是连续而非突变的,动画过程中不同帧的界面对应的渲染树的相似度较高,所以动画过程中不同帧的界面对应的GPU指令有相同的部分。
值得说明的是,在动画过程中,由于不同帧的界面对应的GPU指令有相同的部分,即应用程序的UI线程和渲染线程在动画过程中浪费了资源去生成相同的内容,降低了界面生成的能效比。
基于此,本申请实施例提供了界面生成方法及电子设备,本申请实施例提供的界面生成方法通过直接在GPU驱动层、图像处理库或渲染引擎中更新GPU指令或GPU指令对应的数据,进而生成动画过程中的下一帧界面。其中,更新前的GPU指令或GPU指令对应的数据可以称为第一渲染指令,更新后的GPU指令或GPU指令对应的数据可以称为第二渲染指令。
可选地,在本申请一些实施方式中,视图系统中的动画模块可以确定动画过程中界面的描述信息,进而由动画模块或其他功能模块基于该动画过程中界面的描述信息生成更新信息。该更新信息用于更新GPU指令。其中,更新GPU指令是指:例如,将第一渲染指令更新为第二渲染指令。其中,动画过程中界面的描述信息用于描述动画过程中除第一帧界面外的每一帧界面中属性变化的视图以及视图的属性。
例如,在动画过程中,第一控件的位置和大小发生变化,则第一控件在每一帧界面中的大小和位置存储在更新信息中,则操作系统在生成与动画的第一帧界面对应的第一渲染指令后,可以通过修改第一渲染指令中渲染第一控件的方法调用的输入参数,进而生成动画过程中的除第一帧界面外的每一帧界面。
可选的,在本申请一些实施方式中,该动画模块可以通过接口向应用程序的UI线程同步视图的位置,进而使得应用程序的UI线程可以确定控件的位置。
可选的,在本申请一些实施方式中,该动画模块可以通过接口从应用程序的UI线程同步纹理(texture),用于更新视图的背景贴图或前景贴图等,在此不做限定。或者,同步纹理资源这一操作也可以由GPU执行。
可以理解的是,本申请实施例提供的界面生成方法,在动画过程中,不需要UI线程执行绘制录制调用,不需要UI线程生成新的渲染树,进一步的,不需要渲染线程将绘制操作转换为GPU指令,所以可以提升界面生成的能效比。
下面结合图5所示的内容示例性的介绍本申请实施例提供的界面生成方法的流程。
图5为本申请实施例提供的界面生成方法的流程的一个示例性示意图。
如图5所示,本申请实施例提供的界面生成方法在动画过程中生成界面的过程可以包括如下九个步骤,分别为:步骤S501至步骤S509。其中,动画过程中的第一帧界面生成的过程可以包括:步骤S501、步骤S502、步骤S503、步骤S504和步骤S505;动画过程中的非第一帧界面生成的过程可以包括:步骤S506、步骤S507、步骤S508和步骤S509。
可选地,在本申请一些实施方式中,步骤S506和步骤S507可以仅在应用程序生成动画过程中的第一帧界面的过程中执行。
可选地,在本申请一些实施方式中,步骤S506和步骤S507可以在应用程序生成动画过程中的非第一帧界面的过程中执行。
可选地,在本申请一些实施方式中,步骤S506和步骤S507可以在应用程序生成动画过程中的第二帧界面的过程中执行,而在动画过程中生成其他帧界面的过程中不再执行。
可选地,在本申请一些实施方式中,步骤S507和步骤S508可以不由渲染线程执行而是由其他线程执行。其中,其他线程可以不是应用程序的线程,而是操作系统维护的线程。例如,其他线程可以是统一渲染进程的线程。其中,统一渲染进程(UniRender)为与应用程序独立的进程,通过跨进程通信方式获取一个或多个应用的渲染树,并在合成渲染树后调用GPU生成位图。
其中,步骤S507和步骤S506是否需要重复执行,取决于动画过程中界面的描述信息包括的内容。例如,若动画过程中界面的描述信息的内容仅用于指示本帧界面中视图的属性的修改方式,则需要重复执行;若动画过程中界面的描述信息的内容用于指示动画过程中每一帧界面中视图的属性的修改方式,则可以仅执行一次。
S501:创建声明式动画事件1。
其中,声明式动画事件与图3中的动画事件可以一样也可以不一样,在本申请实施例中,声明式动画事件仅用于在名称上与图3中的动画事件进行区分,并不指代任何实质内容。
声明式动画事件可以在任意时刻创建,与应用程序的逻辑有关,例如,可以是在接收到用户的输入、其他线程或进程向该应用程序发送的消息事件、网络数据请求更新后创建动画事件。动画事件中包括有实现动画效果的内部逻辑,例如动画效果的结束条件、动画效果持续时间内每一帧对视图属性的修改量。
可选地,在本申请一些实施方式中,声明式动画事件与图3中的动画事件不同,其中,声明式动画事件需要声明动画过程中动画的逻辑。例如,声明式动画事件声明的内容包括:动画结束界面的描述信息和动画的持续时间;又例如,声明式动画事件声明的内容包括:动画结束界面的描述信息和动画的步进量;又例如,声明式动画事件声明的内容包括:动画的持续时间、动画的步进量。其中,动画的步进量可以包括本帧界面与上一帧界面中视图的属性的变化量。
可选地,在本申请一些实施方式中,在动画的过程中(非第一帧界面),声明式动画事件可以不注册回调。由于声明式动画事件已经声明了动画过程中动画的逻辑,即声明了动画过程中对每一帧界面中视图的属性的修改方式,故在动画过程中(非第一帧界面),声明式动画事件中没有额外的信息需要被应用程序的UI线程处理,故可以注销动画事件之前注册的回调。
可选地,在本申请一些实施方式中,声明式动画事件与图3中的动画事件相同,均指明动画对象以及动画对象的修改方式。在该情况下,应用程序的UI线程需要基于该声明式动画对象事件确定动画结束界面的描述信息、动画的步进量和动画的持续时间中的至少两个。
可选地,在本申请一些实施方式中,声明式动画事件与图3中的动画事件相同,则应用程序的UI线程在接收到垂直同步信号后,可以通过测量、布局确定本帧界面的动画描述信息。
S502:接收到垂直同步信号后,从声明式动画事件1中获取本帧界面的动画描述信息,确定视图的属性。
应用程序的UI线程接收到垂直同步信号后,从声明式动画事件1的声明内容中确定动画1的逻辑,进而确定动画过程中的第一帧界面中的视图的属性。
可选地,在本申请一些实施方式中,应用程序的UI线程在接收到垂直同步信号后,从声明式动画事件1的声明内容中确定动画结束界面的描述信息,进而确定动画结束界面中的视图的属性。
S503:测量、布局、绘制录制以生成渲染树。
应用程序的UI线程生成渲染树的过程可以参考图1中的文字描述,此处不再赘述。
其中,在步骤S502中,若应用程序的UI线程确定的是动画过程中的第一帧界面中的视图的属性,则该渲染树为动画过程中的第一帧界面对应的渲染树。
其中,在步骤S502中,若应用程序的UI线程确定的是动画结束界面中的视图的属性,则该渲染树为动画结束界面对应的渲染树。
若,应用程序的UI线程在步骤S502中确定的是结束界面的描述信息,则该渲染树为结束界面对应的渲染树。在该情况下,需要将结束界面对应GPU指令更新动画的第一帧界面的GPU指令。其中,更新GPU指令的方法可以参考后文中步骤S507、步骤S508中的文字描述,此处不再赘述。
S504:接收渲染树、初始化GPU渲染的上下文,生成渲染树对应的GPU指令,指示GPU生成位图。
渲染线程接收渲染树,初始化GPU渲染的上下文,生成渲染树对应的GPU指令,指示GPU生成位图的过程可以参考上文中图1中的文字描述,此处不再赘述。
S505:执行GPU指令,生成位图。
GPU驱动执行GPU指令,生成位图的过程中可以参考上文中图1的文字描述,此处不再赘述。
可选地,在本申请一些实施方式中,可以将GPU指令按照功能划分为:资源(如uniform、纹理(texture)、网格(mesh)等)和管线(pipeline)。其中,管线(pipeline)包括着色器(shader)等。其中,GPU执行GPU指令的过程也可以被称为GPU执行GPU渲染任务(例如,job desc),即GPU渲染任务可以用于调用GPU执行渲染以生成位图的具体操作。
管线也可以称为渲染管线、流水线、渲染流水线、图像管线或图像流水线。
其中,管线为GPU将资源和CPU发送的命令生成位图的过程,在该过程中涉及到一系列预置的方法调用,如着色器、光栅化等。其中,着色器可以为图形硬件设备(如GPU)所指向的一类特殊函数,即专为GPU编译的一种小型程序。
其中，uniform为图像处理库中的限定只读变量，由应用程序赋值后传递到图像处理库中，其中uniform中可以保存有本帧界面中不同视图的颜色、透明度、大小比例等参数。uniform与管线绑定后，可以参与到管线对应的调用GPU生成位图的过程中。
S506:生成动画过程中界面的描述信息。
应用程序的UI线程需要生成动画过程中界面的描述信息。其中,动画过程中界面的描述信息包括描述动画过程中除第一帧界面外的每一帧界面中属性变化的视图以及视图的属性,或者,动画过程中界面的描述信息包括动画过程中每一帧界面中属性变化的视图以及视图的属性。
可选地,在本申请一些实施方式中,动画过程中界面的描述信息包括动画过程中每一帧界面中视图的属性。
可选地,在本申请一些实施方式中,动画过程中界面的描述信息包括描述动画过程中视图的属性的变化量。
可选地,在本申请一些实施方式中,界面的描述信息可以不包括视图的长、宽等属性的数值。可以理解的是,由于视图的长、宽等属性是为了方便应用程序开发者配置的参数,而在渲染过程中,绘制操作的输入参数可以不是视图的长、宽等属性,例如对于绘制矩形这一绘制操作来说,需要的输入参数可以是顶点位置,所以界面的描述信息可以不包括视图的长、宽等属性的数值。
例如,动画过程中界面的描述信息包括控件2A01顶点位置变化信息,其中顶点位置变化信息可以用于确定控件2A01的大小和位置。其中,顶点位置可以参考下文中图6A对应的文字描述。
若声明式动画事件(相当于图3中的动画事件)仅指定了动画对象在动画过程中的属性变化,应用程序的UI线程需要结合动画事件的内容确定属性变化的视图;除此之外,应用程序的UI线程还需要确定动画过程中除第一帧界面外的每一帧界面属性变化的视图的属性的数值。在该情况下,应用程序的UI线程还需要在接收到垂直同步信号后,执行布局方法调用和测量方法调用。
可选地,在本申请一些实施方式中,动画过程中界面的描述信息仅用于确定下一帧界面中属性变化的视图以及视图的属性。若动画过程中界面的描述信息仅用于确定下一帧界面中属性变化的视图以及视图的属性,则在动画过程中,应用程序的UI线程需要在接收到垂直同步信号后,生成动画过程中界面的描述信息,即本帧界面的描述信息。
可选地,在本申请一些实施方式中,动画过程中界面的描述信息可以由应用程序的UI线程遍历视图以确定。即,在该情况下,动画过程中界面的描述信息为本帧界面的描述信息。
S507:基于动画过程中界面的描述信息生成更新信息。
在动画模块生成动画过程中界面的描述信息后,可以由动画模块基于该动画过程中界面的描述信息生成更新信息并传递到渲染引擎、图像处理库中。
其中,更新信息可以为Skia库级别的指令,也可以是Vulkan库、OpenGL ES库或Metal库级别的指令,也可以是GPU驱动可以识别并执行的指令。
其中,更新信息可以是本帧界面中用于更新GPU指令中资源的信息。更新信息在本申请实施例中用于将上一帧界面的GPU指令(或动画第一帧界面的GPU指令)更新为本帧界面对应的GPU指令。
可选地,在本申请一些实施方式中,更新信息用于将上一帧界面对应的Skia库、Vulkan库、OpenGL ES库或Metal库级别的指令转换为本帧界面中的对应的Skia库、Vulkan库、OpenGL ES库或Metal库级别的指令。具体的转换过程可以参考下文步骤S508的文字描述,此处不再赘述。
可选地,在本申请一些实施方式中,其中更新信息的实现可以通过uniform和一个额外的GPU任务完成。其中,该GPU任务可以称为更新任务,该更新任务可以配置在GPU驱动中,该更新任务可以接收并解析动画过程中界面的描述信息,并基于该动画过程中界面的描述信息更新uniform中的值。
可选地,在本申请一些实施方式中,更新信息包括动画过程中每一帧界面的描述信息,则更新信息可以直接参与到动画过程中第一帧界面的生成过程中,进而直接参与到后续的界面生成过程中。例如,在生成第I帧界面的过程中,更新信息中的与第I帧界面的描述信息对应的内容生效,参与到GPU渲染生成位图过程中。
可选地,在本申请一些实施方式中,若动画过程中界面的描述信息仅包括本帧界面的描述信息,则UI线程在动画过程中不断生成动画过程中界面的描述信息后,UI线程或渲染线程还需要不断地刷新更新信息的内容。
S508:基于更新信息和上一帧界面对应的GPU指令生成本帧界面对应的GPU指令。
GPU或GPU驱动基于更新信息将上一帧界面对应的GPU指令更新为本帧界面对应的GPU指令。
可选地,在本申请一些实施方式中,当更新信息为Skia库、Vulkan库、OpenGL ES库或Metal库级别的指令,则由渲染线程更新GPU指令对应为Skia库级别的指令、OpenGL ES库、Vulkan库或Metal库级别的指令。
其中,在动画过程中,相邻两帧界面的变化可以为界面中的视图的位置变化、大小变化、贴图变化、颜色变化和/或透明度变化等。对应的,在动画过程中,GPU指令中的管线可以没有发生变化,而仅GPU指令中的资源发生变化。
下面介绍几种更新GPU指令或等价更新GPU指令的方式。
可选地,在本申请一些实施方式中,可以在渲染引擎处部分或全部地更新上一帧界面或第一帧界面或最后一帧界面的绘制操作为本帧界面的绘制操作。
图6A为本申请实施例提供的通过更新顶点位置更新GPU指令的一个示例性示意图。
例如,在图2A所示的场景中,视图401的变化为大小变化和位置变化。若视图401为一个矩形,则在动画的第i帧中,其中i大于1,该视图401对应的绘制操作包括drawrect(rect r,paint p),其中r为矩形(rect)对象,p为画笔(paint)对象;r中包括参数c,参数c用于确定矩形的位置,如c=(bottom1,left1,right1,top1)。
其中,更新信息可以作用于上一帧界面对应的绘制操作的顶点位置,例如,更新信息使得c=c.*T,.*为矩阵点乘。其中,T=(bottom2/bottom1,left2/left1,right2/right1,top2/top1)。其中,T可以根据动画过程中界面的描述信息确定。
值得说明的是,T=(bottom2/bottom1,left2/left1,right2/right1,top2/top1)不代表动画描述信息中一定会包括动画的第i+k帧中视图402的顶点位置,这里只是示例性的说明。若,动画描述信息中包括动画的第i+k帧中视图402的顶点位置,则可以直接对c赋值。
在动画的第i+k帧中,其中k大于0,该视图401对应的绘制操作包括drawrect(rect r,paint p),r中参数c的值变为c=(bottom2,left2,right2,top2)。
可选地,在本申请一些实施方式中,可以通过修改canvas.rotate(degree)这一绘制操作的输入参数degree的值,实现视图的旋转。其中,实现视图的旋转的绘制操作并不局限于canvas.rotate()这一绘制操作。
可选地,在本申请一些实施方式中,可以通过修改paint.setalpha(alpha)这一绘制操作的输入参数alpha的值,实现视图的透明度的调整。其中,实现视图的透明度变化的绘制操作并不局限于paint.setalpha这一绘制操作。
可选地,在本申请一些实施方式中,可以通过修改paint.setcolor(color)这一绘制操作的输入参数color的值,实现视图的颜色的调整。其中,实现视图的颜色变化的绘制操作并不局限于paint.setcolor这一绘制操作。
在上述可选的实施方式中,顶点位置的变化、输入参数degree的变化、输入参数alpha的变化和输入参数color的变化均可以为本申请实施方式中的更新信息。即,uniform可以承载动画过程中不同视图的顶点位置、输入参数degree、输入参数alpha、输入参数color。
例如,在uniform中,degree=[0°,10°,20°,30°]。又例如,degree=[0],上文中的更新任务用于在每次生成位图前将degree的值增加10°。在GPU驱动执行GPU渲染任务后,画出的图形的旋转角度就会增加10°。
可选的,在本申请一些实施方式中,可以在GPU驱动层更新上一帧界面的GPU指令为本帧界面的GPU指令。
图6B-图6E为本申请实施例提供的通过更新顶点位置更新GPU指令的另一个示例性示意图。
其中,图6B示例性的介绍动画的第一帧界面的生成过程,图6C和图6D示例性的介绍动画的非第一帧界面的生成过程,图6E示例性的介绍更新GPU指令的过程。
如图6B所示,在动画的第一帧界面的生成过程中,首先,应用程序生成渲染树,如图6B中的渲染树1;然后,线程1接收渲染树1后将渲染树转换为GPU指令1,并将GPU指令1写入命令缓冲(commandbuffer),如图6B中的命令缓冲1,其中,线程1可以是渲染线程;再然后,在GPU指令被全部写入命令缓冲1后,命令缓冲1的数据会被提交到命令队列(commandqueue)中;最后,GPU执行命令队列中的命令,生成位图1,位图1为动画的第一帧界面对应的位图。
其中,命令缓冲可以从命令缓冲池(commandbuffer pool)中申请获得。
如图6C所示,在动画的非第一帧界面的生成过程中,应用程序或操作系统可以将命令缓冲1中的GPU指令1更新为GPU指令2,并在更新后将命令缓冲1中的数据提交到命令队列中;然后,GPU执行命令队列中的命令,生成位图2,位图2为动画的非第一帧界面对应的位图。
如图6D所示,在动画的非第一帧界面的生成过程中,应用程序或操作系统可以将命令队列中的GPU指令1更新为GPU指令2;然后,GPU执行命令队列中的命令,生成位图2,位图2为动画的非第一帧界面对应的位图。
如图6E所示,在动画的非第一帧界面的生成过程中,由于GPU指令可以被划分为两部分,分别为方法调用的标识和资源的标识,故可以通过更新资源的标识进而更新GPU指令。其中,方法调用可以相当于或者可以属于上文中的管线。在更新GPU指令的过程中,可以只更新资源的标识,其中,资源可以相当于方法调用的输入参数,或者,资源可以相当于方法调用中需要使用到的变量、固定值等。
例如,在图6E中,在将GPU指令1更新为GPU指令2的过程中,将资源1的标识更改为资源11的标识。若在GPU指令1中资源1的标识被方法调用1使用,则在GPU指令2中资源11的标识被方法调用1使用。结合图6A所示的内容,方法调用1为绘制矩形的调用,资源1可以为顶点位置1如(bottom1,left1,right1,top1),资源11可以为顶点位置2如(bottom2,left2,right2,top2)。则,GPU指令1用于在顶点位置1绘制一个矩形,GPU指令2用于在顶点位置2绘制一个矩形,进而实现如图6A所示的界面变化。
可选地,在本申请一些实施方式中,更新前后的GPU指令的方法调用部分,即管线部分不发生变化。
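
A hedged, Vulkan-flavoured sketch of the idea illustrated in FIG. 6B to FIG. 6E: the recorded command buffer is resubmitted as-is and only the buffer memory its commands reference is rewritten, so the method calls stay the same while the resources change. The buffer layout, the use of host-visible memory, and the surrounding setup are assumptions made for illustration; they are not taken from the patent.

```cpp
#include <vulkan/vulkan.h>
#include <cstring>

// The command buffer recorded for the first frame is submitted again
// unchanged; only the host-visible buffer memory that its draw commands
// reference is rewritten, which corresponds to replacing resource 1 (资源1)
// with resource 11 (资源11) while the method-call (pipeline) part is kept.
void renderNextAnimationFrame(VkDevice device, VkQueue queue,
                              VkCommandBuffer recordedCmdBuf,
                              VkDeviceMemory rectMemory,
                              const float newRect[4]) {
    void* mapped = nullptr;
    vkMapMemory(device, rectMemory, 0, sizeof(float) * 4, 0, &mapped);
    std::memcpy(mapped, newRect, sizeof(float) * 4);
    vkUnmapMemory(device, rectMemory);

    VkSubmitInfo submit{};
    submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = 1;
    submit.pCommandBuffers = &recordedCmdBuf;
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
}
```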
可以理解的是,在动画的第二帧界面直至动画结束前,应用程序的UI线程或渲染线程无需为界面的变化准备计算资源,动画过程中界面的负载由GPU承担。
S509:执行GPU指令,生成位图。
GPU执行GPU指令,生成本帧界面中应用程序的位图,该位图存储在BufferQueue的framebuffer上,进而参与图层合成以送显。
图7A-图7D为本申请实施例提供的界面生成方法中数据流程的一个示例性示意图。
如图7A所示,在动画的第一帧界面的生成过程中,应用程序的UI线程遍历视图生成与视图对应的渲染树;并且,应用程序的UI线程确定动画过程中界面的描述信息。
渲染线程接收到渲染树后,通过图像处理库将渲染树转换为GPU指令1,GPU指令1为对应于第一帧界面中应用程序的位图的GPU指令。GPU接收到GPU指令1后,生成位图1。
应用程序的UI线程通过图像处理库向GPU驱动配置更新任务,该更新任务可以接收动画过程中的界面描述信息。GPU驱动可以基于该更新任务生成更新信息。
在动画的非第一帧界面的生成过程中,例如,在动画的第N帧界面的生成过程中,GPU驱动基于更新信息和GPU指令1生成GPU指令2,然后基于GPU指令2生成位图。或者,GPU驱动在执行更新任务的过程中,基于GPU指令1生成GPU指令2。其中,生成GPU指令2的过程中可以参考上文中图6A-图6E中的文字描述,此处不再赘述。
在动画的第N+1帧界面的生成过程中,GPU驱动可以基于GPU指令1和更新信息生成GPU指令3,然后基于GPU指令3生成位图。其中,GPU指令3为对应于动画的第N+1帧界面中应用程序的位图的指令。
或者,在动画的第N+1帧界面的生成过程中,GPU驱动可以基于GPU指令2和更新信息生成GPU指令3,然后基于GPU指令3生成位图。其中,GPU指令3为对应于动画的第N+1帧界面中应用程序的位图的指令。
如图7B所示,在动画的第一帧界面的生成过程中,应用程序的UI线程遍历视图生成与视图对应的渲染树;并且,应用程序的UI线程确定动画过程中界面的描述信息。
渲染线程接收到渲染树后,通过图像处理库将渲染树转换为GPU指令1,GPU指令1为对应于第一帧界面中应用程序的位图的GPU指令。
应用程序的UI线程基于动画过程中界面的描述信息通过图像处理库配置更新信息,并将更新信息传递到GPU驱动。
在动画的非第一帧界面的生成过程中,例如,在动画的第N帧界面的生成过程中,GPU驱动基于更新信息和GPU指令1生成GPU指令2,然后基于GPU指令2生成位图。其中,生成GPU指令2的过程中可以参考上文中图6A-图6E中的文字描述,此处不再赘述。
如图7C所示,在动画的第一帧界面的生成过程中,应用程序的UI线程遍历视图生成与视图对应的渲染树;并且,应用程序的UI线程确定动画过程中界面的描述信息。
渲染线程接收到渲染树后,通过图像处理库将渲染树转换为GPU指令1,GPU指令1为对应于第一帧界面中应用程序的位图的GPU指令。GPU接收到GPU指令1后,生成位图。
在动画的非第一帧界面的生成过程中,例如,在动画的第N帧界面的生成过程中,应用程序的UI线程确定动画过程中界面的描述信息,即确定本帧界面的描述信息,并通过图像处理库将界面的描述信息转变为更新信息;GPU驱动基于更新信息和GPU指令1生成GPU指令2,然后基于GPU指令2生成位图。
可选地,在本申请一些实施方式中,应用程序的UI线程确定动画过程中界面的描述信息后,即确定本帧界面的描述信息后,可以按照图7A所示的,应用程序的UI线程通过图像处理库向GPU驱动配置GPU任务,该GPU任务用于刷新更新信息。
如图7D所示,与图7C所示内容不同的是,若动画涉及到纹理更新,则应用程序的UI线程和渲染线程可以更新纹理后将更新后的纹理的标识写入更新信息。
可选地,在本申请一些实施方式中,应用程序可以建立新的线程负责纹理更新。
可选地,在本申请一些实施方式中,若动画过程中界面的描述信息包括动画过程中多帧界面的描述信息,则应用程序可以将多个纹理预加载在内存中,并将多个纹理的标识保存在更新信息中。
可选地,在本申请一些实施方式中,若动画过程中界面的描述信息只包括动画过程中本帧界面的描述信息,则应用程序需要在每一帧界面的生成过程中将纹理加载在内存中,并将该纹理的标识保存在更新信息中。
可以理解的是,在本申请实施例提供的界面生成方法中,在动画的非第一帧界面的生成过程中,渲染线程不需要将渲染树转换为GPU指令,降低了界面生成过程中CPU的负载,可以提高动画过程中界面生成的能效比。
由于本申请实施例提供的界面生成方法可以提高动画过程中界面生成的能效比，故本申请实施例提供的界面生成方法可以应用在三维（伪三维）动画领域。
图8A和图8B为本申请实施例提供的通过实现三维动画方法的一个示例性示意图。
如图8A所示,在三维动画过程中的界面1和三维动画过程中的界面2中存在一个球形物体,在光线方向不动(光源位置不动)的情况下,球形物体的空间位置变化导致球形物体表面的光照强度发生变化。
即,在三维动画过程中的界面1中球形物体表面的像素与三维动画过程中的界面1球形物体表面的像素不同,且不是简单的平移、旋转、颜色变化关系。
在本申请实施例中,渲染引擎向应用程序提供一个自定义的着色器,该自定义的着色器内置有一个或多个三维动画过程中的任意一帧界面的像素或生成该界面的绘制操作(如skia库级别的指令)。
其中,由于着色器用于计算颜色,则更新信息同样作用于颜色的计算过程。其中,不同的内置的一个或多个三维动画中颜色变化(如图8A中的光线方向变化导致的球形物体表面明暗不同)可以通过更新信息确定。
由于可以基于更新信息确定三维动画中的下一帧界面中像素的值(或生成下一帧界面的绘制操作),就无需在GPU中保存完整的三维模型以及无需GPU基于三维模型确定三维动画中的下一帧界面中像素的值。
例如,设三维动画过程中界面1中球形物体的像素矩阵为M1,三维动画过程中界面2中球形物体的像素矩阵为M2,其中更新信息中包括变换矩阵T1。则,M2可以通过M2=T1*M1计算得到。
其中,变换矩阵T1为二维矩阵,二维矩阵的计算量远远小于三维模型的计算量,进而可以降低三维动画过程中界面生成的计算量。
可选地,在本申请一些实施方式中,更新信息如T1可以离线确定并保存在电子设备上。
可选地，在本申请一些实施方式中，如图8B所示，也可以通过更新像素的位置（绘制操作在画布上的位置），进而生成三维动画中的一帧界面。
例如,设三维动画过程中界面1中球形物体的像素矩阵为M1,三维动画过程中界面2中球形物体的像素矩阵为M2,其中更新信息中包括变换函数f1()。则,M2可以通过M2=f1(M1)计算得到。其中,f1()可以用于更改像素矩阵的内部元素在矩阵中的位置。
可以理解的是,由于不需要保存三维动画中涉及的物体的三维模型,可以将三维动画的计算复杂度从三维降低为二维,并且可以节约内存空间,进而提高了界面生成的能效比。
下面介绍本申请实施例提供的电子设备的硬件结构和软件架构。
图9为本申请实施例提供的电子设备的硬件结构的一个示例性示意图。
电子设备可以是手机、平板电脑、桌面型计算机、膝上型计算机、手持计算机、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本,以及蜂窝电话、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)设备、虚拟现实(virtual reality,VR)设备、人工智能(artificial intelligence,AI)设备、可穿戴式设备、车载设备、智能家居设备和/或智慧城市设备,本申请实施例对该电子设备的具体类型不作特殊限制。
电子设备可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本发明实施例示意的结构并不构成对电子设备的具体限定。在本申请另一些实施例中,电子设备可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pul se code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。在一些实施例中,处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器180K,充电器,闪光灯,摄像头193等。例如:处理器110可以通过I2C接口耦合触摸传感器180K,使处理器110与触摸传感器180K通过I2C总线接口通信,实现电子设备的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线 通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备充电,也可以用于电子设备与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备的结构限定。在本申请另一些实施例中,电子设备也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波执行滤波,放大等处理,传送至调制解调处理器执行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global  navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其执行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度执行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备在频点选择时,数字信号处理器用于对频点能量执行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备可以支持一种或多种视频编解码器。这样,电子设备可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
内部存储器121可以包括一个或多个随机存取存储器(random access memory,RAM)和一个或多个非易失性存储器(non-volatile memory,NVM)。
随机存取存储器可以包括静态随机存储器(static random-access memory,SRAM)、动态随机存储器(dynamic random access memory,DRAM)、同步动态随机存储器(synchronous dynamic random access  memory,SDRAM)、双倍资料率同步动态随机存取存储器(double data rate synchronous dynamic random access memory,DDR SDRAM,例如第五代DDR SDRAM一般称为DDR5SDRAM)等;
非易失性存储器可以包括磁盘存储器件、快闪存储器(flash memory)。
快闪存储器按照运作原理划分可以包括NOR FLASH、NAND FLASH、3D NAND FLASH等,按照存储单元电位阶数划分可以包括单阶存储单元(single-level cell,SLC)、多阶存储单元(multi-level cell,MLC)、三阶储存单元(triple-level cell,TLC)、四阶储存单元(quad-level cell,QLC)等,按照存储规范划分可以包括通用闪存存储(英文:universal flash storage,UFS)、嵌入式多媒体存储卡(embedded multi media Card,eMMC)等。
随机存取存储器可以由处理器110直接执行读写,可以用于存储操作系统或其他正在运行中的程序的可执行程序(例如机器指令),还可以用于存储用户及应用程序的数据等。
非易失性存储器也可以存储可执行程序和存储用户及应用程序的数据等,可以提前加载到随机存取存储器中,用于处理器110直接执行读写。
外部存储器接口120可以用于连接外部的非易失性存储器,实现扩展电子设备的存储能力。外部的非易失性存储器通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部的非易失性存储器中。
电子设备可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备可以设置至少一个麦克风170C。在另一些实施例中,电子设备可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。电子设备根据电容的变化确定压力的强度。当有触摸操作作用于显示屏194,电子设备根据压力传感器180A检测所述触摸操作强度。电子设备也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器180B可以用于确定电子设备的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器180B检测电子设备抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备的抖动,实现防抖。陀螺仪传感器180B还可以用于导航,体感游戏场景。
气压传感器180C用于测量气压。在一些实施例中,电子设备通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器180D包括霍尔传感器。电子设备可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当电子设备是翻盖机时,电子设备可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器180E可检测电子设备在各个方向上(一般为三轴)加速度的大小。当电子设备静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器180F,用于测量距离。电子设备可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备可以利用距离传感器180F测距以实现快速对焦。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备通过发光二极管向外发射红外光。电子设备使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备附近有物体。当检测到不充分的反射光时,电子设备可以确定电子设备附近没有物体。电子设备可以利用接近光传感器180G检测用户手持电子设备贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器180L用于感知环境光亮度。电子设备可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备是否在口袋里,以防误触。
指纹传感器180H用于采集指纹。电子设备可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,电子设备利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,电子设备执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备对电池142加热,以避免低温导致电子设备异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备对电池142的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器180K,也称“触控器件”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器180M也可以设置于耳机中,结合成骨传导耳机。音频模块170可以基于所述骨传导传感器180M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器180M获取的血压跳动信号解析心率信息,实现心率检测功能。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备可以接收按键输入,产生与电子设备的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备的接触和分离。电子设备可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。电子设备通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备中,不能和电子设备分离。
图10为本申请实施例提供的电子设备的软件架构的一个示例性示意图。
电子设备的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本发明实施例以分层架构的Android系统为例,示例性说明电子设备的软件结构。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图10所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图10所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:浏览器引擎(webkit)、渲染引擎(如skia库)、表面合成器,硬件合成策略模块,媒体库(Media Libraries),图像处理库(例如:OpenGL ES),渲染引擎(如Skia库)等。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
图像处理库用于实现三维图形绘图,图像渲染等。
渲染引擎是2D绘图的绘图引擎。
可选地,在本申请一些实施方式中,渲染引擎提供一个或多个预置的三维动画效果,该三维动画效果通过自定义着色器实现。
内核层是硬件和软件之间的层。内核层包括显示驱动,摄像头驱动,音频驱动,传感器驱动等。
以图10所示的软件架构为例,示例性的介绍本申请实施例中界面生成方法的数据流动与软件模块交互过程。
图11A和图11B为本申请实施例提供的界面生成方法的数据流动与软件模块交互的一个示例性示意图。
如图11A所示,应用程序通过视图系统中的动画模块配置动画。应用程序的通过调用渲染引擎将绘制操作保存为绘制操作结构体,然后通过图像处理库将绘制操作结构体转换为图像处理库中调用。然后,图像处理库将接口对应GPU指令下发到GPU驱动中,GPU驱动生成渲染任务,用于渲染生成位图。
上文中图11A所示的内容可以认为是第一帧界面生成过程中界面生成方法的数据流动与软件模块交互的示例性介绍。
除此之外，在图11A所示的内容中，应用程序还需要基于视图系统确定动画过程中界面的描述信息；进而将动画过程中界面的描述信息传递到GPU驱动，GPU驱动对应地生成更新任务更新uniform，例如，从GPU内存中获取uniform并修改。
或者，应用程序还需要基于视图系统确定动画过程中界面的描述信息；基于动画过程中的界面描述信息生成更新信息，并将更新信息传递到GPU驱动，GPU驱动对应生成的更新任务基于更新信息去更新uniform，进而更新GPU指令。
图11B与图11A所示内容不同的是,动画模块在确定动画涉及纹理更新后,可以通过图像处理库向GPU配置纹理更新任务,动画模块或应用程序获取新的纹理后,由纹理更新任务更新GPU指令中纹理的标识,进而在GPU执行GPU指令中将更新后的纹理渲染到位图上。
上述实施例中所用,根据上下文,术语“当…时”可以被解释为意思是“如果…”或“在…后”或“响应于确定…”或“响应于检测到…”。类似地,根据上下文,短语“在确定…时”或“如果检测到(所陈述的条件或事件)”可以被解释为意思是“如果确定…”或“响应于确定…”或“在检测到(所陈述的条件或事件)时”或“响应于检测到(所陈述的条件或事件)”。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。该计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行该计算机程序指令时,全部或部分地产生按照本申请实施例该的流程或功能。该计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。该计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,该计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线)或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。该计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。该可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如DVD)、或者半导体介质(例如固态硬盘)等。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,该流程可以由计算机程序来指令相关的硬件完成,该程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法实施例的流程。而前述的存储介质包括:ROM或随机存储记忆体RAM、磁碟或者光盘等各种可存储程序代码的介质。

Claims (19)

  1. 一种界面生成方法,其特征在于,应用于电子设备,所述电子设备上安装有第一应用,所述方法包括:
    在所述电子设备接收到第一操作后,所述电子设备确定第一动画过程中界面的描述信息,所述第一操作用于触发所述电子设备通过所述第一应用显示所述第一动画;
    所述电子设备生成第一渲染指令,渲染指令为用于被GPU执行以生成第一界面的数据,所述第一界面为所述第一动画过程中的一帧界面;
    所述电子设备基于所述动画过程中界面的描述信息更新所述第一渲染指令为第二渲染指令;
    所述电子设备基于所述第二渲染指令生成所述第二界面,所述第二界面与所述第一界面不同,所述第二界面为所述第一动画过程中的一帧界面。
  2. 根据权利要求1所述的方法,其特征在于,所述电子设备基于所述动画过程中界面的描述信息更新所述第一渲染指令为第二渲染指令,具体包括:
    所述电子设备基于所述第一动画过程中界面的描述信息确定第一参数,所述第一参数用于描述第一控件在所述第二界面中的发生变化的属性,所述第一控件为在所述第二界面中显示效果发生变化的视图;
    所述电子设备基于所述第一参数更新所述第一渲染指令得到所述第二渲染指令。
  3. 根据权利要求2所述的方法,其特征在于,
    在所述电子设备接收到第一操作前,所述方法还包括:所述电子设备显示桌面,所述桌面包括第一控件,所述第一控件对应于第一应用;
    所述电子设备接收到第一操作,具体包括:所述电子设备检测到用户点击所述第一控件;
    在所述电子设备接收到第一操作后,所述方法还包括:所述第一动画为所述第一应用的启动动画,在所述第一动画中,第二控件的位置和大小发生变化;
    所述电子设备基于所述第一动画过程中界面的描述信息确定第一参数,具体包括:所述第一电子设备确定第一位置,所述第一参数包括所述第一位置,所述第一位置为所述第二控件在第二界面中的位置,所述第二界面为所述第一动画的非第一帧界面;
    所述电子设备基于所述第一参数更新所述第一渲染指令得到所述第二渲染指令,具体包括:所述电子设备通过将所述第一渲染指令中的所述第二控件的顶点位置修改为所述第一位置得到所述第二渲染指令。
  4. 根据权利要求3所述的方法,其特征在于,所述电子设备通过将所述第一渲染指令中的所述第二控件的顶点位置修改为所述第一位置得到所述第二渲染指令,具体包括:
    所述电子设备基于所述第一参数更新第一方法调用所使用的顶点位置为所述第一位置以得到所述第二渲染指令,所述第一方法调用为用于被GPU指令以绘制所述第二控件的方法调用。
  5. 根据权利要求2所述的方法,其特征在于,所述第一参数用于描述所述第一视图的颜色、顶点位置、透明度和/或缩放比例。
  6. 根据权利要求2所述的方法,其特征在于,所述第一参数被所述第一应用写入第一数据结构体中,所述第一数据结构体与渲染管线绑定;在所述渲染管线中,所述第一参数被所述电子设备读取以修改渲染指令。
  7. 根据权利要求6所述的方法，其特征在于，所述第一数据结构体为uniform。
  8. 根据权利要求2-7中任一项所述的方法,其特征在于,所述方法还包括:
    在所述电子设备接收到第一操作后,所述电子设备配置更新任务,所述更新任务被配置在GPU驱动中;
    所述电子设备基于所述第一参数更新所述第一渲染指令得到所述第二渲染指令,具体包括:所述电子设备通过更新任务将所述第一渲染指令中的第二参数替换为所述第一参数以得到所述第二渲染指令,所述第二参数用于描述所述第一控件在所述第一界面中的属性。
  9. 根据权利要求2-7中任一项所述的方法,其特征在于,所述方法还包括:
    所述电子设备基于所述第一动画过程中界面的描述信息确定所述第一控件在所述第一界面中的背景贴图或前景贴图与所述第一控件在所述第二界面中的背景贴图或前景贴图不同;
    所述电子设备加载第一纹理,所述第一纹理为所述第一控件在所述第二界面中的背景贴图或前景贴图,所述第一参数包括所述第一纹理的标识,所述第一纹理为所述第一视图在所述第二界面中的背景贴图或前景贴图。
  10. 根据权利要求1-7中任一项所述的方法,其特征在于,所述第一渲染指令和所述第二渲染指令的方法调用相同,所述第一渲染指令中的资源和所述第二渲染指令中的资源不同,所述第一渲染指令中的资源为所述第一渲染指令中的方法调用在被执行时所使用的变量或固定值,所述第二渲染指令中的资源为所述第二渲染指令中的方法调用在被执行时所使用的变量或固定值。
  11. 根据权利要求1-7中任一项所述的方法,其特征在于,所述电子设备接收到第一操作后,所述电子设备确定第一动画过程中界面的描述信息,具体包括:
    所述电子设备接收到所述第一操作后,所述电子设备确定被触发的动画为所述第一动画;
    所述电子设备确定所述第一动画涉及的视图,所述第一动画涉及的视图为在所述第一动画过程中显示内容发生变化的视图;
    所述电子设备确定所述第一动画过程中一帧或多帧界面中所述第一动画涉及的视图的属性。
  12. 一种界面生成方法,其特征在于,应用于电子设备,所述电子设备上安装有第一应用,所述方法包括:
    在所述电子设备接收到第一操作后,所述第一应用生成第一渲染树,所述第一渲染树保存有用于生成第一界面的绘制操作,所述第一界面为第一动画中的所述第一应用的第一帧界面,所述第一操作用于触发所述电子设备通过所述第一应用显示所述第一动画,在所述第一界面和第二界面中第一控件的位置发生变化,在所述第二界面中所述第一控件在第一位置,所述第二界面为所述第一动画中的非第一帧界面;
    所述第一应用将所述第一渲染树转换为第一渲染指令;
    所述电子设备基于所述第一渲染指令调用GPU生成所述第一界面;
    所述电子设备将所述第一渲染指令中的第一参数更新为所述第一位置以得到第二渲染指令;
    所述电子设备基于所述第二渲染指令调用GPU生成所述第二界面。
  13. 根据权利要求12所述的方法,其特征在于,所述方法还包括:
    在所述电子设备接收到第一操作后,所述电子设备确定所述第一动画过程中界面的描述信息,所述第一动画过程中界面的描述信息包括所述第二位置。
  14. 根据权利要求13所述的方法,其特征在于,所述方法还包括:在所述电子设备接收到第一操作后,所述电子设备配置更新任务,所述更新任务被配置在GPU驱动中;
    所述电子设备将所述第一渲染指令中的第一参数更新为所述第一位置以得到第二渲染指令,具体包括:所述电子设备通过所述更新任务将所述第一渲染指令中的所述第一参数更新为所述第一位置以得到所述第二渲染指令。
  15. 根据权利要求13所述的方法,其特征在于,所述方法还包括:
    所述电子设备基于所述第一动画过程中界面的描述信息确定所述第一控件在所述第一界面中的背景贴图或前景贴图与所述第一控件在所述第二界面中的背景贴图或前景贴图不同;
    所述电子设备加载第一纹理,所述第一纹理为所述第一控件在所述第二界面中的背景贴图或前景贴图,所述第一参数包括所述第一纹理的标识,所述第一纹理为所述第一视图在所述第二界面中的背景贴图或前景贴图。
  16. 一种电子设备,其特征在于,所述电子设备包括:一个或多个处理器和存储器;
    所述存储器与所述一个或多个处理器耦合，所述存储器用于存储计算机程序代码，所述计算机程序代码包括计算机指令，所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行如权利要求1至15中任一项所述的方法。
  17. 一种芯片系统,其特征在于,所述芯片系统应用于电子设备,所述芯片系统包括一个或多个处理器,所述处理器用于调用计算机指令以使得所述电子设备执行如权利要求1至15中任一项所述的方法。
  18. 一种计算机可读存储介质,包括指令,其特征在于,当所述指令在电子设备上运行时,使得所述电子设备执行如权利要求1至15中任一项所述的方法。
  19. 一种包含指令的计算机程序产品,其特征在于,当所述计算机程序产品在电子设备上运行时,使得所述电子设备执行如权利要求1至15中任一项所述的方法。
PCT/CN2023/123558 2022-10-19 2023-10-09 界面生成方法及电子设备 WO2024082987A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211281882.4A CN117909000A (zh) 2022-10-19 2022-10-19 界面生成方法及电子设备
CN202211281882.4 2022-10-19

Publications (1)

Publication Number Publication Date
WO2024082987A1 true WO2024082987A1 (zh) 2024-04-25

Family

ID=90695242

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/123558 WO2024082987A1 (zh) 2022-10-19 2023-10-09 界面生成方法及电子设备

Country Status (2)

Country Link
CN (1) CN117909000A (zh)
WO (1) WO2024082987A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130328898A1 (en) * 2012-06-06 2013-12-12 Apple Inc. Render Tree Caching
CN107071556A (zh) * 2017-04-18 2017-08-18 腾讯科技(深圳)有限公司 一种界面渲染方法和装置
CN108073339A (zh) * 2016-11-16 2018-05-25 阿里巴巴集团控股有限公司 浮层展示方法、客户端和电子设备
CN112199149A (zh) * 2020-10-16 2021-01-08 维沃移动通信有限公司 界面渲染方法、装置及电子设备


Also Published As

Publication number Publication date
CN117909000A (zh) 2024-04-19

Similar Documents

Publication Publication Date Title
WO2022258024A1 (zh) 一种图像处理方法和电子设备
WO2020253758A1 (zh) 一种用户界面布局方法及电子设备
WO2020093988A1 (zh) 一种图像处理方法及电子设备
WO2022199509A1 (zh) 应用执行绘制操作的方法及电子设备
CN113761427A (zh) 自适应生成卡片的方法、终端设备和服务器
CN116048933B (zh) 一种流畅度检测方法
WO2023093776A1 (zh) 界面生成方法及电子设备
WO2023066165A1 (zh) 动画效果显示方法及电子设备
WO2023093779A1 (zh) 界面生成方法及电子设备
WO2023005751A1 (zh) 渲染方法及电子设备
WO2023016014A1 (zh) 视频编辑方法和电子设备
WO2023071482A1 (zh) 视频编辑方法和电子设备
WO2024082987A1 (zh) 界面生成方法及电子设备
WO2024083014A1 (zh) 界面生成方法及电子设备
WO2024083009A1 (zh) 界面生成方法及电子设备
WO2024061292A1 (zh) 界面生成方法及电子设备
WO2023066177A1 (zh) 动画效果显示方法及电子设备
WO2022262291A1 (zh) 应用的图像数据调用方法、系统、电子设备及存储介质
WO2024046010A1 (zh) 一种界面显示方法、设备及系统
WO2023051036A1 (zh) 加载着色器的方法和装置
WO2023246783A1 (zh) 调整设备功耗的方法及电子设备
WO2024066976A1 (zh) 控件显示方法及电子设备
WO2024067551A1 (zh) 界面显示方法及电子设备
CN116166257A (zh) 界面生成方法及电子设备
US20240184504A1 (en) Screen projection method and system, and related apparatus