CN117724779A - Method for generating interface image and electronic equipment - Google Patents

Method for generating interface image and electronic equipment

Publication number: CN117724779A
Application number: CN202310688308.9A
Authority: CN (China)
Prior art keywords: layer, attribute information, interface, application, service
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 杨胜利, 田孝斌
Current Assignee / Original Assignee: Honor Device Co Ltd (the listed assignees may be inaccurate)
Application filed by Honor Device Co Ltd
Priority to CN202310688308.9A

Abstract
The application provides a method for generating an interface image and an electronic device, and relates to the field of terminal technologies. At a first moment, before a layer is generated, the SurfaceFlinger service applies in advance for the cache corresponding to the layer according to the layer's attribute information. When the SurfaceFlinger service generates the layer in response to a Vsync signal, the pre-applied cache is directly bound to the buffer queue corresponding to the layer, so no real-time memory application is needed at layer-generation time. Because the layer's cache is applied for in advance, only the binding between the cache and the buffer queue is performed when the layer is generated; the time to draw the layer is therefore not affected by the time spent applying for memory, the problem of the interface image not being ready when the next Vsync signal arrives is avoided, and the probability of frame loss in the displayed picture is reduced.

Description

Method for generating interface image and electronic equipment
Technical Field
The present disclosure relates to the field of terminal technologies, and in particular, to a method and an electronic device for generating an interface image.
Background
The display of an electronic device is made up of interface images shown frame by frame, and each frame typically includes at least one layer. The system and applications of the electronic device prepare the resources for each layer, generate the layers, and, after rendering and composition, display the interface images frame by frame on the screen. During layer generation, memory needs to be applied for separately for each layer for the related processing. In scenarios with larger layer changes, there is a certain probability that applying for memory takes a long time, making interface-image preparation take too long. If the interface image is not ready when the electronic device refreshes the screen, frames of the displayed picture are lost.
Disclosure of Invention
The embodiments of the present application provide a method for generating an interface image and an electronic device, which can solve the problem of layer generation for an interface image taking too long, and reduce the probability of frame loss in the displayed picture of the electronic device.
In order to achieve the above purpose, the embodiments of the present application adopt the following technical solutions:
In a first aspect, a method for generating an interface image is provided. The method is applied to an electronic device on which a first application is installed, and an operating system of the electronic device includes a surface composition service and a composition rendering component. The method includes: the surface composition service acquires attribute information of a first layer of a first interface image of the first application; for example, the attribute information may include at least one of a layer width, a layer height, and pixel information of the interface image. The surface composition service applies for a first cache at a first moment according to the attribute information of the first layer; the surface composition service binds the first cache to a buffer queue corresponding to the first layer at a second moment (a moment when a first vertical synchronization (Vsync) signal is received); the surface composition service generates all the layers according to the buffer queues corresponding to all the layers contained in the first interface image; and the composition rendering component renders and composes all the layers to generate the first interface image.
The first moment is earlier than the second moment. The cache (BufferCache) corresponding to the first layer is applied for in advance, at the first moment, according to the attribute information of the first layer. When the surface composition service (the SurfaceFlinger service) receives the first Vsync signal and generates the first layer, the pre-applied BufferCache is directly bound to the buffer queue (BufferQueue) corresponding to the first layer, and the layer content can then be filled into the BufferCache according to the graphic data of the first layer. No Buffer needs to be applied for in real time when the first layer is generated. Because the first layer's BufferCache is applied for in advance, only the binding between the BufferCache and the BufferQueue is performed when the first layer is generated, after which the BufferCache is filled with content; the time to draw the first layer is not affected by the time spent applying for memory, and the problem of the first interface image not being ready when the next Vsync signal (a second Vsync signal) arrives is avoided.
With reference to the first aspect, in one implementation manner, the surface composition service applying for the first cache according to the attribute information of the first layer includes: the surface composition service determines a first value according to the attribute information of the first layer, where the first value is the size of the space occupied by the first cache; and the surface composition service applies for the first cache based on the first value.
The size of the space occupied by the Buffer corresponding to a layer is determined from the layer's attribute information, so the surface composition service can apply for the layer's Buffer only after acquiring the layer's attribute information.
With reference to the first aspect, in one implementation manner, the method further includes: displaying the first interface image on a screen of the electronic device at a third moment; the third moment is a moment when a second Vsync signal is issued, and the second Vsync signal is the Vsync signal following the first Vsync signal.
That is, layer generation starts when the first Vsync signal is issued, and the interface image needs to be ready before the second Vsync signal is issued, so that the interface image can be displayed when the second Vsync signal arrives and the frame-loss phenomenon is avoided.
With reference to the first aspect, in one implementation manner, the first moment is a moment when an Activity corresponding to the first interface image is created or started.
During cold start of the first application, when an onCreate event or an onStart event of the Activity corresponding to an application interface of the first application is detected, a BufferCache corresponding to each layer, such as the application interface layer, the status bar layer, the navigation bar layer, and the wallpaper layer, is applied for. When the SurfaceFlinger service receives the Vsync signal and generates the layers, each pre-applied BufferCache only needs to be bound to its corresponding BufferQueue, and no real-time memory application is needed at layer-generation time. The time spent applying for memory does not affect the speed of generating the interface image. Therefore, the problem of the interface image not being ready when the next Vsync signal arrives is avoided, and the probability of frame loss in the displayed picture of the electronic device is reduced.
In one implementation, the first layer is an application interface layer, and the surface composition service obtains attribute information of the first layer from a process of the first application.
In another implementation, the first layer is a status bar layer, a navigation bar layer, or a wallpaper layer, and the surface composition service obtains attribute information of the first layer from a process of the operating system.
In some implementations, when an onCreate event or an onStart event of an Activity corresponding to an application interface of a first application is detected, the first application sends attribute information of an application interface layer to a surface composition service. In some implementations, the surface composition service obtains attribute information of an application interface layer of the first application by learning.
With reference to the first aspect, in one implementation, the first moment is a moment when a landscape-to-portrait switch event, a portrait-to-landscape switch event, a folding-screen unfolding event, or a folding-screen folding event is detected.
While an application is running, when the electronic device switches between landscape and portrait, or its folding screen is unfolded or folded, the SurfaceFlinger service applies for the BufferCache corresponding to the changed layer (the first layer). When the SurfaceFlinger service receives the Vsync signal and generates the layers, the pre-applied BufferCaches only need to be bound to their corresponding BufferQueues, and no real-time memory application is needed at layer-generation time. The time spent applying for memory does not affect the speed of generating the interface image. Therefore, the problem of the interface image not being ready when the next Vsync signal arrives is avoided, and the probability of frame loss in the displayed picture of the electronic device is reduced.
In some embodiments, the first moment is a moment when a landscape-to-portrait switch event is detected, the first layer is an application interface layer, and the surface composition service applies for the first cache according to the attribute information of the application interface layer in the portrait state of the electronic device.
In some embodiments, the first moment is a moment when a portrait-to-landscape switch event is detected, the first layer is an application interface layer, and the surface composition service applies for the first cache according to the attribute information of the application interface layer in the landscape state of the electronic device.
In some embodiments, the first moment is a moment when a folding-screen unfolding event is detected, the first layer includes at least one of a status bar layer, a navigation bar layer, and a wallpaper layer, and the surface composition service applies for the first cache according to the attribute information of the first layer in the unfolded state of the electronic device's folding screen.
In some embodiments, the first moment is a moment when a folding-screen folding event is detected, the first layer includes at least one of a status bar layer, a navigation bar layer, and a wallpaper layer, and the surface composition service applies for the first cache according to the attribute information of the first layer in the folded state of the electronic device's folding screen.
In a second aspect, an electronic device is provided, having a function of implementing the method for generating an interface image according to the first aspect. The function can be realized by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the function described above.
In a third aspect, an electronic device is provided, comprising: the device comprises a processor, a memory and a display screen; the memory is configured to store computer-executable instructions that, when executed by the electronic device, cause the electronic device to perform the method of any of the first aspects.
In a fourth aspect, there is provided an electronic device comprising: a processor; the processor is configured to perform the method according to any of the first aspects above according to instructions in a memory after being coupled to the memory and reading the instructions in the memory.
In a fifth aspect, there is provided a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of any of the first aspects above.
In a sixth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any of the first aspects above.
In a seventh aspect, an apparatus is provided (for example, the apparatus may be a system-on-a-chip), comprising a processor configured to support an electronic device in implementing the functions referred to in the first aspect above. In one possible design, the apparatus further includes a memory for storing program instructions and data necessary for the electronic device. When the apparatus is a chip system, it may consist of a chip, or may include the chip and other discrete devices.
For the technical effects of any design manner of the second aspect to the seventh aspect, reference may be made to the technical effects of the corresponding design manners of the first aspect; they are not repeated here.
Drawings
Fig. 1 is a schematic diagram of an example of a scenario to which a method for generating an interface image according to an embodiment of the present application is applicable;
fig. 2 is a schematic diagram of an example of a scenario to which the method for generating an interface image provided in the embodiment of the present application is applicable;
FIG. 3 is a flow chart of drawing a frame of interface image;
FIG. 4 is a diagram of message tracing (Trace) information during the process of rendering a frame of an interface image;
FIG. 5 is a schematic diagram of a cause analysis of a SurfaceFlinger Buffer application taking a long time;
FIG. 6 is a schematic diagram of a method for generating an interface image according to an embodiment of the present disclosure;
fig. 7 is a schematic hardware structure of an electronic device according to an embodiment of the present application;
FIG. 8 is a flowchart of a method for generating an interface image according to an embodiment of the present disclosure;
FIG. 9 is a flowchart of a method for generating an interface image according to an embodiment of the present disclosure;
FIG. 10 is a flowchart of a method for generating an interface image according to an embodiment of the present disclosure;
FIG. 11 is a flowchart of a method for generating an interface image according to an embodiment of the present disclosure;
FIG. 12 is a flowchart of a method for generating an interface image according to an embodiment of the present disclosure;
FIG. 13 is a flowchart of a method for generating an interface image according to an embodiment of the present disclosure;
fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the description of the embodiments of the present application, the terminology used in the embodiments below is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used in the specification of this application and the appended claims, the singular forms "a," "an," and "the" are intended to include expressions such as "one or more," unless the context clearly indicates otherwise. It should also be understood that in the various embodiments herein below, "at least one" and "one or more" mean one, two, or more than two. The term "and/or" describes an association relationship of associated objects and means that three relationships are possible; for example, A and/or B may represent: A alone, both A and B, and B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise. The term "coupled" includes both direct and indirect connections, unless stated otherwise. The terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The interface image displayed on the electronic device may include one or more layers, such as a status bar layer, a navigation bar layer, an application interface layer, a wallpaper layer, and the like. The number of layers included in each of the different interface images may be different. Illustratively, as shown in FIG. 1, the electronic device displays a desktop interface, the desktop interface image including a status bar layer, a navigation bar layer, an application interface layer, and a wallpaper layer. Illustratively, as shown in FIG. 2, the electronic device displays a video call interface, the video call interface image including a status bar layer and an application interface layer. The status bar layer, the navigation bar layer and the wallpaper layer are generated and rendered by a system process of the electronic device, and the application interface layer is generated and rendered by an application process.
The screen of the electronic device refreshes at a certain frequency, and one frame of interface image is displayed on each refresh. In the embodiments of the present application, the time interval between displaying two adjacent frames of interface images is referred to as a frame interval. For example, the refresh frequency of a mobile phone screen is typically 60 Hz or 120 Hz. A refresh frequency of 60 Hz means 60 refreshes per second, i.e., the time interval between displaying two adjacent frames of interface images is about 16.67 ms (one frame interval is 16.67 ms); a refresh frequency of 120 Hz means 120 refreshes per second, i.e., the interval is about 8.33 ms (one frame interval is 8.33 ms).
Each time a frame of interface image is displayed, the screen issues a vertical synchronization (Vsync) signal to the system process and the application processes. The system process and the application processes need to prepare (draw) the next frame of interface image before the next Vsync signal arrives, so that the next frame can be displayed normally when the screen refreshes next.
By way of example, fig. 3 shows a schematic flow chart for rendering a frame of interface image.
As shown in fig. 3, the interface includes a status bar, an application interface, and a navigation bar. After the screen issues the Vsync signal, the surface composition service (SurfaceFlinger) applies for a Buffer for the status bar layer and adds it to the status bar layer's buffer queue (BufferQueue); applies for a Buffer for the navigation bar layer and adds it to the navigation bar layer's BufferQueue; and applies for a Buffer for the application interface layer and adds it to the application interface layer's BufferQueue. The system process puts the graphic data of the status bar layer into the status bar layer's Buffer and the graphic data of the navigation bar layer into the navigation bar layer's Buffer. The application process puts the graphic data of the application interface layer into the application interface layer's Buffer.
SurfaceFlinger traverses all the BufferQueues; the graphic data in the most recently generated Buffer of each BufferQueue is used to generate the content of the corresponding layer. The BufferQueue of the status bar layer, the BufferQueue of the navigation bar layer, and the BufferQueue of the application interface layer are then combined into one BufferQueue, and the composition rendering component is called to render and compose them into the interface image. SurfaceFlinger sends the generated interface image to the display driver so that the interface can be displayed on the screen when the next Vsync signal arrives.
In some scenarios, SurfaceFlinger's Buffer application takes a long time, so that the next frame of interface image is not ready when the next Vsync signal arrives, and frames of the displayed picture are lost. By way of example, FIG. 4 shows message tracing (Trace) information during the rendering of a frame of interface image. As shown in fig. 4, while the interface image is being drawn, the memory allocation service applies to the kernel for a block of cache. The command response duration of communication message 1 (HwBinder 1) is abnormal: requesting a block of approximately 10 MB of buffer takes about 34.5 ms, far exceeding one frame interval (16.67 ms or 8.33 ms). This makes generating the interface image take too long; the interface image cannot be prepared before the next Vsync signal arrives, causing frame loss in the displayed picture.
One reason a SurfaceFlinger Buffer application can take a long time is described below with reference to fig. 5.
As shown in FIG. 5, the desktop interface of the mobile phone includes 4 layers: a status bar, an application interface, wallpaper, and a navigation bar. SurfaceFlinger maintains a corresponding number of BufferQueues according to the number of layers; as shown in FIG. 5, SurfaceFlinger maintains 4 BufferQueues, each containing 3 Buffers.
After the screen issues the Vsync signal, in response to calls from the system process and the application process, SurfaceFlinger calls the memory application interface of the encapsulated memory manager (ION) to apply to the kernel for Buffers, used respectively to hold the graphic data of the status bar layer, the application interface layer, the wallpaper layer, and the navigation bar layer.
SurfaceFlinger needs to complete the drawing of the interface image within one frame interval (16.67 ms or 8.33 ms); that is, the duration of SurfaceFlinger's Buffer application cannot exceed one frame interval. Otherwise, drawing cannot be completed before the next Vsync signal arrives.
In one implementation manner, because the size of the space SurfaceFlinger applies for from the kernel is determined by the attribute information of the interface image (such as the width and height of a layer and the pixel information of the interface image), and the attribute information of different applications' interface images differs, the Buffer is applied for in real time, according to the layer attribute information issued by the system process or the application process, when SurfaceFlinger is called to generate the layer. The time taken to apply for the Buffer directly affects the speed of drawing the layer. If applying for the Buffer takes too long, the interface image may not be ready when the next Vsync signal arrives.
In the current implementation, each layer of the interface corresponds to one BufferQueue, and the more layers the interface includes, the greater the number of BufferQueues maintained by SurfaceFlinger. Illustratively, as shown in FIG. 5, the desktop interface of the mobile phone includes 4 layers, and correspondingly SurfaceFlinger maintains 4 BufferQueues. SurfaceFlinger needs to apply to the kernel for 4 blocks of memory at a time, and each block must be memory with contiguous physical addresses. Generally, the Buffer corresponding to one layer is about 10 MB. SurfaceFlinger thus applies for many blocks of memory in one concentrated request (for example, 4 blocks), the Buffer corresponding to each layer is large (about 10 MB), and each requested Buffer must be physically contiguous. It is difficult for the kernel to allocate memory satisfying such a request, so the memory application can take a long time. The more layers an interface image includes and the higher the screen refresh frequency, the greater the probability of the problem occurring. And when the system load is high, system memory is tight and contiguous memory is scarce, making the problem even more likely.
Generally, in scenarios with more layers, memory application is slower and takes longer, and the problem occurs with higher probability. For example, when an application is cold-started, all layers of the interface image need to be drawn, and the resources applied for are at their largest and most complete; after the application has started, the layers no longer change. For another example, when the electronic device switches between landscape and portrait, or its screen is folded or unfolded, all layers of the interface image need to be redrawn. When layers are redrawn, the problem of SurfaceFlinger's Buffer application taking a long time occurs probabilistically.
An embodiment of the present application provides a method for generating an interface image, in which the attribute information of a layer is acquired in advance (before the layer is generated) and the cache (BufferCache) corresponding to the layer is applied for according to that attribute information. When the SurfaceFlinger service generates the layer, the pre-applied BufferCache is directly bound to the BufferQueue corresponding to the layer, and the layer content is then filled into the BufferCache according to the layer's graphic data. There is no need to apply for the Buffer in real time at layer-generation time. Because the layer's BufferCache is applied for in advance, only the binding between the BufferCache and the BufferQueue is performed when the layer is generated, after which the BufferCache is filled with content to generate the interface image; the time to draw the layer is not affected by the time spent applying for memory, and the problem of the interface image not being ready when the next Vsync signal arrives is avoided.
Illustratively, as shown in fig. 6 (a), in the current implementation, when SurfaceFlinger generates a layer, it applies for the Buffer corresponding to the layer and fills in the layer content. The time spent applying for memory can leave the interface image unready when the Vsync signal arrives.
As shown in fig. 6 (b), in the method for generating an interface image provided in this embodiment of the present application, the cache (BufferCache) corresponding to a layer is applied for at a first moment, when the Activity corresponding to the application's interface is created (onCreate) or started (onStart). Then, at a second moment, for example when SurfaceFlinger generates the layer, the applied BufferCache is bound to the BufferQueue corresponding to the layer, and the layer content is filled into the BufferCache.
The first moment is earlier than the second moment. For example, in general, the moment at which an onCreate event occurs for the interface's Activity (the first moment) is 50 ms-60 ms earlier than the moment at which the layer is generated (the second moment). Even if the memory application takes a long time (e.g., 34.5 ms), it can be completed by the time the layer is generated (the second moment). In this way, a long memory application no longer makes interface-image generation too slow, and the interface image is ready when the next Vsync signal arrives.
Here onCreate and onStart belong to the lifecycle of an Activity, which is briefly described below. The lifecycle of an Activity includes onCreate, onStart, onResume, onPause, onStop, onRestart, and onDestroy.
onCreate: executed when an Activity is first loaded. The onCreate event is executed when the interface is first started, and is executed again if the Activity is destroyed and later reloaded.
onStart: executed after the onCreate event. If the user leaves the interface and re-enters it after a period of time (with the Activity not destroyed), the onCreate event is skipped and the onStart event is executed directly.
onResume: executed after the onStart event; or, after the interface is switched to the background, when the user views the interface again (with the Activity not destroyed and the onStop event not executed), the onCreate and onStart events are skipped and the onResume event is executed directly.
onPause: invoked when the interface is switched to the background.
onStop: executed after the onPause event. If the interface is not returned to for a period of time, the onStop event of the interface's Activity is executed. The onStop event is also executed if the user directly navigates away from the current page.
onRestart: after the onStop event has been executed, if the interface and the application's process have not been destroyed by the system and the user re-enters the interface, the onRestart event of the interface's Activity is executed; onCreate is then skipped and the onStart event is executed directly.
onDestroy: executed when the Activity is destroyed. After the onStop event has executed, the Activity is destroyed if the interface is not returned to again.
The method provided by the embodiment of the application can be applied to the electronic equipment comprising the display screen. The electronic device may include a mobile phone, a tablet computer, a notebook computer, a personal computer (personal computer, PC), an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a handheld computer, a netbook, an intelligent home device (such as an intelligent television, a smart screen, a large screen, an intelligent sound box, an intelligent air conditioner, etc.), a personal digital assistant (personal digital assistant, PDA), a wearable device (such as an intelligent watch, an intelligent bracelet, etc.), a vehicle-mounted device, a virtual reality device, etc., which is not limited in this embodiment.
Fig. 7 is a schematic structural diagram of the electronic device 100. Wherein the electronic device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset interface 170D, sensor module 180, keys 190, display 191, indicator 192, camera 193, etc. Wherein the sensor module 180 may include a touch sensor, a temperature sensor, a distance sensor, etc.
It is to be understood that the structure illustrated in the present embodiment does not constitute a specific limitation on the electronic apparatus 100. In other embodiments, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and command center of the electronic device 100. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the connection relationship between the modules illustrated in this embodiment is only illustrative, and does not limit the structure of the electronic device. In other embodiments, the electronic device may also use different interfacing manners in the foregoing embodiments, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 191, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The electronic device 100 implements display functions through a GPU, a display screen 191, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 191 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 191 is used to display images, videos, or the like. The display 191 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (OLED), an active-matrix organic light emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 191, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the electronic device may include 1 or N cameras 193, N being a positive integer greater than 1. In the present embodiment, the camera 193 may be used to capture video images.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, and so on.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device may play or record video in a variety of encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of electronic devices can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc. In the embodiment of the present application, the audio module 170 may be used to collect audio in the recorded video.
The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110. The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The receiver 170B, also referred to as an "earpiece," is used to convert audio electrical signals into sound signals. The microphone 170C, also referred to as a "mike," is used to convert sound signals into electrical signals. The earphone interface 170D is used to connect a wired earphone. The headset interface 170D may be the USB interface 130, a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, audio, video, etc. files are stored in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 implements various functional applications and data processing of the electronic device by executing instructions stored in the internal memory 121. For example, in an embodiment of the present application, the processor 110 executes instructions stored in the internal memory 121, and the internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function, an image playing function, etc.). The data storage area may store data created during use of the electronic device (e.g., video files), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
In the embodiment of the present application, the electronic device 100 is an electronic device that can run an operating system and install application programs. Optionally, the operating system run by the electronic device may be an Android system or another operating system, which is not limited in this embodiment.
The method for generating the interface image according to the embodiment of the present application will be described in detail with reference to the accompanying drawings.
Scene one:
During an application cold start, all layers of the interface image need to be redrawn, so the resources requested are the most numerous and the most complete, and the probability that applying for memory takes a long time is high.
In the method for generating an interface image provided in the embodiment of the present application, during the application cold start process, when an onCreate event or an onStart event of the Activity corresponding to the application interface is detected, the BufferCache corresponding to each layer is applied for in advance. When the SurfaceFlinger service generates a layer, the pre-applied BufferCache only needs to be bound to the corresponding BufferQueue. The time spent applying for memory therefore does not affect the speed of generating the interface image.
As shown in fig. 8, an exemplary method for generating an interface image according to an embodiment of the present application may include:
S801, a surface synthesis (SurfaceFlinger) service acquires attribute information of each layer contained in a frame of interface image of the electronic device.
Illustratively, if the interface image contains an application interface layer, the SurfaceFlinger service obtains attribute information of the application interface layer. Optionally, if the interface image includes at least one of a status bar layer, a navigation bar layer, and a wallpaper layer, the SurfaceFlinger service further obtains attribute information of at least one of the status bar layer, the navigation bar layer, and the wallpaper layer.
S802, upon detecting a creation (onCreate) event or a start (onStart) event of the Activity corresponding to the application interface of the application, the application process notifies the surface synthesis (SurfaceFlinger) service to apply for the buffer (BufferCache) corresponding to each layer of the interface image.
S803, the surface synthesis (SurfaceFlinger) service applies for caches (BufferCache) corresponding to all the layers according to attribute information of all the layers contained in the interface image of the electronic equipment.
In one implementation, the surface synthesis (SurfaceFlinger) service applies for the cache (BufferCache) corresponding to the application interface layer according to the attribute information of the application interface layer. If attribute information of the status bar layer exists, the SurfaceFlinger service also applies for the cache (BufferCache) corresponding to the status bar layer according to that attribute information; if attribute information of the navigation bar layer exists, the SurfaceFlinger service also applies for the cache (BufferCache) corresponding to the navigation bar layer according to that attribute information; if attribute information of the wallpaper layer exists, the SurfaceFlinger service also applies for the cache (BufferCache) corresponding to the wallpaper layer according to that attribute information.
S804, upon receiving the Vsync signal, the surface synthesis (SurfaceFlinger) service puts the graphics data of each layer into the buffer (BufferCache) corresponding to that layer, and binds the buffer (BufferCache) of each layer to the buffer queue (BufferQueue) of the corresponding layer.
In one implementation, after receiving the Vsync signal, the surface synthesis (SurfaceFlinger) service determines the size of the space occupied by the buffer corresponding to a layer according to the stored attribute information of the layer, and looks up the corresponding pre-applied buffer (BufferCache) according to that size. The SurfaceFlinger service puts the graphics data of each layer into the cache (BufferCache) corresponding to the layer, and binds the cache (BufferCache) of each layer to the buffer queue (BufferQueue) of the corresponding layer.
S805, the surface synthesis (SurfaceFlinger) service generates the application interface layer. Optionally, the SurfaceFlinger service also generates at least one of a status bar layer, a wallpaper layer, and a navigation bar layer.
S806, a composition rendering component (Composer) renders and composites each layer to generate the electronic device interface image.
S807, when the next Vsync signal arrives, the display driver displays the electronic device interface image on the screen.
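The S801–S807 flow can be sketched as follows. This is a simplified, hypothetical Python model (the real SurfaceFlinger is a C++ system service, and names such as `LayerAttr` and `precache` are invented for illustration). The key point it demonstrates is that allocation happens at S803, before any Vsync, so that the Vsync handler only looks up a pre-allocated buffer by size and binds it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LayerAttr:
    """Layer attribute information: name, width, height, pixel format size."""
    name: str
    width: int
    height: int
    bytes_per_pixel: int = 4   # assumption: e.g. an RGBA_8888-like format

    def buffer_size(self):
        return self.width * self.height * self.bytes_per_pixel

class SurfaceFlingerModel:
    def __init__(self):
        self.attrs = {}       # layer name -> LayerAttr               (S801)
        self.precache = {}    # buffer size -> pre-allocated bytearray (S803)
        self.queues = {}      # layer name -> bound buffer             (S804)

    def save_layer_attrs(self, attrs):                 # S801
        for a in attrs:
            self.attrs[a.name] = a

    def preallocate(self):                             # S802/S803: on onCreate/onStart
        # Simulated kernel allocation: the slow step, done ahead of time.
        # (Toy model: buffers are keyed by size, as in the lookup of S804;
        # same-size layers would share a key here, a simplification.)
        for a in self.attrs.values():
            self.precache[a.buffer_size()] = bytearray(a.buffer_size())

    def on_vsync(self, graphics_data):                 # S804/S805
        for name, data in graphics_data.items():
            size = self.attrs[name].buffer_size()
            buf = self.precache[size]      # look up the pre-applied cache
            buf[:len(data)] = data         # put graphics data into the cache
            self.queues[name] = buf        # bind cache to the layer's queue
```

Usage: after `save_layer_attrs` and `preallocate`, a call to `on_vsync` never allocates memory; it only fills and binds buffers that already exist.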
In this process, the surface synthesis (SurfaceFlinger) service may obtain attribute information of each layer included in a frame interface image of the electronic device in a plurality of different manners.
In some implementations, a software development kit (software development kit, SDK) is provided for the application process to invoke. When the application process detects that the onCreate event or the onStart event of the application interface Activity is executed, it invokes the SDK to transmit the attribute information of the application interface layer to the SurfaceFlinger service. The attribute information of a layer may include the width and height of the layer, pixel information of the interface image, and the like.
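The app-side hand-off can be pictured as below. This is a hypothetical sketch: the SDK entry point `report_layer_attrs` and the `FakeSurfaceFlinger` receiver are invented names standing in for the SDK interface and the SurfaceFlinger service described above.

```python
# Hypothetical sketch of the app-side SDK call: on onCreate/onStart, the
# application process passes the layer's attribute information (width,
# height, pixel format) to the SurfaceFlinger service.

class LayerAttrSDK:
    def __init__(self, surface_flinger):
        self.sf = surface_flinger

    def report_layer_attrs(self, width, height, pixel_format):
        self.sf.receive_attrs({"width": width, "height": height,
                               "pixel_format": pixel_format})

class FakeSurfaceFlinger:
    """Stands in for the SurfaceFlinger service receiving attribute info."""
    def __init__(self):
        self.received = []
    def receive_attrs(self, attrs):
        self.received.append(attrs)

# In the application's onCreate (or onStart) handler:
sf = FakeSurfaceFlinger()
sdk = LayerAttrSDK(sf)
sdk.report_layer_attrs(width=1080, height=2340, pixel_format="RGBA_8888")
```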
Illustratively, as shown in FIG. 9, the application is launched in response to the user clicking an application icon on the desktop. When the onCreate event (or onStart event) of the application interface Activity is executed, the application process invokes the SDK to transmit the attribute information of the application interface layer to the SurfaceFlinger service. The SurfaceFlinger service thus obtains the attribute information of the application interface layer.
Optionally, the SurfaceFlinger service also obtains attribute information of the status bar layer, the wallpaper layer, and the navigation bar layer from the system process.
Thus, the SurfaceFlinger service acquires the attribute information of each layer contained in a frame of interface image of the electronic device. Optionally, the SurfaceFlinger service stores the attribute information of each layer contained in the frame of interface image, which may include attribute information of the application interface layer, the status bar layer, the wallpaper layer, the navigation bar layer, and the like. It will be appreciated that interface images of different applications may include different numbers of display layers. For example, in some examples, the interface image includes an application interface layer and a status bar layer, and the SurfaceFlinger service obtains the attribute information of the application interface layer through the SDK interface and the attribute information of the status bar layer from the system process. In other examples, the interface image includes only an application interface layer, and the SurfaceFlinger service obtains its attribute information through the SDK interface. In still other examples, the interface image includes an application interface layer, a status bar layer, and a navigation bar layer, and the SurfaceFlinger service obtains the attribute information of the application interface layer through the SDK interface, and the attribute information of the status bar layer and the navigation bar layer from the system process.
In one implementation, when the application is first cold started, the application process invokes the SDK to transmit the attribute information of the application interface layer to the SurfaceFlinger service, and the SurfaceFlinger service stores the attribute information of each layer of the interface image. The application process then sends a first command to the SurfaceFlinger service, the first command being used to notify the SurfaceFlinger service to apply for the cache corresponding to each layer. On a subsequent cold start of the application, the application process detects that the onCreate event or onStart event of the application interface Activity is executed, and sends the first command to the SurfaceFlinger service. Of course, in another implementation, the application process may instead call the SDK to transmit the attribute information of the application interface layer to the SurfaceFlinger service at each cold start. The embodiments of the present application are not limited in this regard.
The SurfaceFlinger service receives the first command and applies for the cache (BufferCache) corresponding to each layer according to the stored attribute information of each layer contained in the interface image. For example, the SurfaceFlinger service determines, according to the attribute information of the application interface layer, that the space occupied by the corresponding buffer (BufferCache) is a first value, and applies to the kernel for a buffer whose occupied space is the first value as the buffer corresponding to the application interface layer. Similarly, the SurfaceFlinger service determines, according to the attribute information of the status bar layer, that the space occupied by the corresponding buffer (BufferCache) is a second value, and applies to the kernel for a buffer whose occupied space is the second value as the buffer corresponding to the status bar layer; determines, according to the attribute information of the navigation bar layer, that the space occupied by the corresponding buffer (BufferCache) is a third value, and applies to the kernel for a buffer whose occupied space is the third value as the buffer corresponding to the navigation bar layer; and determines, according to the attribute information of the wallpaper layer, that the space occupied by the corresponding buffer (BufferCache) is a fourth value, and applies to the kernel for a buffer whose occupied space is the fourth value as the buffer corresponding to the wallpaper layer.
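The first through fourth values are simply per-layer buffer sizes derived from each layer's attribute information. Assuming a 4-bytes-per-pixel format (e.g. RGBA_8888) and illustrative layer dimensions — neither is specified by the source — the computation might look like:

```python
BYTES_PER_PIXEL = 4  # assumption: an RGBA_8888-like pixel format

def cache_size(attr):
    # Occupied space of a layer's buffer, derived from its attribute info.
    return attr["width"] * attr["height"] * BYTES_PER_PIXEL

# Illustrative attribute information for the four layer types.
layers = {
    "app_interface":  {"width": 1080, "height": 2340},
    "status_bar":     {"width": 1080, "height": 100},
    "navigation_bar": {"width": 1080, "height": 120},
    "wallpaper":      {"width": 1080, "height": 2340},
}

# The first..fourth values: per-layer sizes requested from the kernel in
# advance, one buffer per layer.
sizes = {name: cache_size(attr) for name, attr in layers.items()}
```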
The screen periodically issues Vsync signals to the application process and the system process. When the first Vsync signal is received, the surface synthesis (SurfaceFlinger) service puts the graphics data of the application interface layer into the pre-applied buffer (BufferCache) corresponding to the application interface layer, and binds that buffer (BufferCache) to the buffer queue (BufferQueue) corresponding to the application interface layer. The SurfaceFlinger service then generates the application interface layer.
If the interface image includes any of a status bar layer, a wallpaper layer, and a navigation bar layer, the surface synthesis (SurfaceFlinger) service also generates the corresponding layer. If the interface image includes a status bar layer, the SurfaceFlinger service puts the graphics data of the status bar layer into the pre-applied buffer (BufferCache) corresponding to the status bar layer, and binds that buffer to the buffer queue (BufferQueue) corresponding to the status bar layer; if the interface image includes a navigation bar layer, the SurfaceFlinger service puts the graphics data of the navigation bar layer into the pre-applied buffer (BufferCache) corresponding to the navigation bar layer, and binds that buffer to the buffer queue (BufferQueue) corresponding to the navigation bar layer; if the interface image includes a wallpaper layer, the SurfaceFlinger service puts the graphics data of the wallpaper layer into the pre-applied buffer (BufferCache) corresponding to the wallpaper layer, and binds that buffer to the buffer queue (BufferQueue) corresponding to the wallpaper layer.
A composition rendering component (Composer) renders and composes each layer of the interface image to generate an electronic device interface image. The display driver displays the electronic device interface image on the screen when the next Vsync signal (second Vsync signal) arrives.
In one implementation, the surface synthesis (SurfaceFlinger) service clears the saved attribute information of each layer contained in the interface image of an application when the application is uninstalled from the electronic device.
In other embodiments, for applications that do not use an SDK, the SurfaceFlinger service obtains attribute information of the application interface layer through dynamic learning.
For example, as shown in fig. 10, during the first N cold starts of an application, the memory application method shown in fig. 6 (a) is used to apply for the layer caches. That is, after the attribute information of the application interface layer issued by the application process is received, the layer cache is applied for at the time the layer is generated. Here N is a preset value, for example, N=5.
After receiving the attribute information of the application interface layer issued by the application process, the SurfaceFlinger service learns the attribute information of the application interface layer. In one implementation, if it is determined that the attribute information of the application interface layer issued by the same application process n times (n ≤ N) is the same, the attribute information of the application interface layer of that application process is obtained.
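The dynamic-learning rule can be sketched as: record the attribute information reported at each cold start, and once the same attributes have been reported n times in a row, treat them as learned for that process. This is an illustrative model; the class name, the threshold parameter, and the "consecutive identical reports" criterion are assumptions filling in details the source leaves open.

```python
class AttrLearner:
    """Sketch of dynamic learning of a process's layer attribute info."""

    def __init__(self, n_required=3):
        self.n_required = n_required   # n: identical consecutive reports needed
        self.history = {}              # process name -> reported attribute dicts
        self.learned = {}              # process name -> learned attribute dict

    def report(self, process, attrs):
        """Record one cold start's report; return True once attrs are learned."""
        if process in self.learned:
            return True
        h = self.history.setdefault(process, [])
        h.append(attrs)
        recent = h[-self.n_required:]
        if len(recent) == self.n_required and all(a == attrs for a in recent):
            # Attribute info has been stable across n cold starts: learned.
            self.learned[process] = attrs
            return True
        return False
```

Once `report` returns True, subsequent cold starts can pre-allocate caches from `learned` without waiting for the application process to issue attribute information.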
Optionally, the SurfaceFlinger service also obtains attribute information of the status bar layer, the wallpaper layer, and the navigation bar layer from the system process.
Thus, the SurfaceFlinger service acquires attribute information of each layer contained in one frame of interface image of the electronic equipment. Optionally, the SurfaceFlinger service stores attribute information of each layer included in a frame of interface image, which may include attribute information of an application interface layer, attribute information of a status bar layer, attribute information of a wallpaper layer, attribute information of a navigation bar layer, and the like.
Referring to FIG. 11, upon a subsequent cold start of the application, the application process detects that the onCreate event or onStart event of the application interface Activity is executed, and sends a first command to the SurfaceFlinger service. The SurfaceFlinger service receives the first command and applies for the cache (BufferCache) corresponding to each layer according to the stored attribute information of each layer contained in the interface image. For example, the SurfaceFlinger service determines, according to the attribute information of the application interface layer, that the space occupied by the corresponding buffer (BufferCache) is a first value, and applies to the kernel for a buffer whose occupied space is the first value as the buffer corresponding to the application interface layer; determines, according to the attribute information of the status bar layer, that the space occupied by the corresponding buffer (BufferCache) is a second value, and applies to the kernel for a buffer whose occupied space is the second value as the buffer corresponding to the status bar layer; determines, according to the attribute information of the navigation bar layer, that the space occupied by the corresponding buffer (BufferCache) is a third value, and applies to the kernel for a buffer whose occupied space is the third value as the buffer corresponding to the navigation bar layer; and determines, according to the attribute information of the wallpaper layer, that the space occupied by the corresponding buffer (BufferCache) is a fourth value, and applies to the kernel for a buffer whose occupied space is the fourth value as the buffer corresponding to the wallpaper layer.
The screen periodically issues Vsync signals to the application process and the system process. When the first Vsync signal is received, the surface synthesis (SurfaceFlinger) service puts the graphics data of the application interface layer into the pre-applied buffer (BufferCache) corresponding to the application interface layer, and binds that buffer (BufferCache) to the buffer queue (BufferQueue) corresponding to the application interface layer. The SurfaceFlinger service then generates the application interface layer.
If the interface image includes any of a status bar layer, a wallpaper layer, and a navigation bar layer, the surface synthesis (SurfaceFlinger) service generates the corresponding layer. If the interface image includes a status bar layer, the SurfaceFlinger service puts the graphics data of the status bar layer into the pre-applied buffer (BufferCache) corresponding to the status bar layer, and binds that buffer to the buffer queue (BufferQueue) corresponding to the status bar layer; if the interface image includes a navigation bar layer, the SurfaceFlinger service puts the graphics data of the navigation bar layer into the pre-applied buffer (BufferCache) corresponding to the navigation bar layer, and binds that buffer to the buffer queue (BufferQueue) corresponding to the navigation bar layer; if the interface image includes a wallpaper layer, the SurfaceFlinger service puts the graphics data of the wallpaper layer into the pre-applied buffer (BufferCache) corresponding to the wallpaper layer, and binds that buffer to the buffer queue (BufferQueue) corresponding to the wallpaper layer.
A composition rendering component (Composer) renders and composes each layer of the interface image to generate an electronic device interface image. The display driver displays the electronic device interface image on the screen when the next Vsync signal (second Vsync signal) arrives.
In one implementation, the surface synthesis (SurfaceFlinger) service clears the saved attribute information of each layer contained in the interface image of an application when the application is uninstalled from the electronic device.
In the method for generating an interface image provided in the embodiment of the present application, during the application cold start process, when an onCreate event or an onStart event of the Activity corresponding to the application interface is detected, the BufferCache corresponding to each layer, such as the application interface layer, the status bar layer, the navigation bar layer, and the wallpaper layer, is applied for in advance. When the SurfaceFlinger service receives the Vsync signal and generates the layers, each pre-applied BufferCache only needs to be bound to its corresponding BufferQueue, and there is no need to apply for memory in real time when generating a layer. The time spent applying for memory therefore does not affect the speed of generating the interface image. In this way, the problem that the interface image is not ready when the next Vsync signal arrives is avoided, and the probability of frame loss on the display screen of the electronic device is reduced.
Scene II:
When the method is applied while an application runs in the foreground, if the electronic device switches between landscape and portrait, or a folding screen of the electronic device is unfolded or folded, some layers of the interface image of the electronic device change and need to be redrawn, and the probability that applying for memory takes a long time is high. For example, when a landscape/portrait switch event occurs, the application interface layer of the interface image changes and needs to be redrawn. As another example, when a folding-screen unfolding or folding event occurs, the status bar layer, the navigation bar layer, and the wallpaper layer of the interface image change and need to be redrawn.
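A back-of-envelope calculation shows why allocation on the critical path risks frame loss. The numbers below (60 Hz refresh, 10 ms of drawing, 8 ms of slow allocation) are assumptions for illustration, not figures from the patent:

```python
# At 60 Hz the Vsync interval is ~16.67 ms. If real-time memory application
# lands on the drawing path, the frame can miss the next Vsync and be dropped;
# with a pre-applied cache the allocation cost is zero on this path.
VSYNC_INTERVAL_MS = 1000 / 60          # ≈ 16.67 ms per frame at 60 Hz

def frame_ready_in_time(draw_ms, alloc_on_path_ms):
    return draw_ms + alloc_on_path_ms <= VSYNC_INTERVAL_MS

print(frame_ready_in_time(10, 8))      # False: allocation pushes past Vsync
print(frame_ready_in_time(10, 0))      # True: cache was applied for in advance
```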
In the method for generating an interface image, when a landscape/portrait switch event, a folding-screen unfolding event, or a folding-screen folding event is detected while the application is running, a BufferCache is applied for in advance for each changed layer. When the SurfaceFlinger service generates a layer, the pre-applied BufferCache only needs to be bound to the corresponding BufferQueue. The time spent applying for memory does not affect the speed of generating the interface image.
As shown in fig. 12, an exemplary method for generating an interface image according to an embodiment of the present application may include:
S901, in a first stage, the surface synthesis (SurfaceFlinger) service learns the attribute information of each layer of the electronic device interface image that changes when a landscape/portrait switch event, a folding-screen unfolding event, or a folding-screen folding event occurs.
The first stage is a learning stage, in which the SurfaceFlinger service learns the attribute information of each layer of the electronic device interface image that changes when a landscape/portrait switch event, a folding-screen unfolding event, or a folding-screen folding event occurs. For example, the first stage is a preset period of time, or the first S runs (S is a preset value) of the application on the electronic device.
In the first stage, when a landscape/portrait switch event, a folding-screen unfolding event, or a folding-screen folding event occurs while the application is running, the layer cache is applied for using the memory application method of the conventional technology; that is, the layer cache is applied for when the layer is generated.
Referring to fig. 13, in one example, a landscape-to-portrait switch event occurs while the application is running. After the SurfaceFlinger service receives the attribute information of the application interface layer in the portrait state of the electronic device issued by the application process, the SurfaceFlinger service applies for a cache for the application interface layer according to that attribute information and adds the cache to the buffer queue of the application interface layer. The SurfaceFlinger service thereby learns the attribute information of the application interface layer in the portrait state of the electronic device. In one implementation, if m landscape-to-portrait switch events occur while the application is running and the attribute information of the application interface layer in the portrait state issued by the application process is the same each time, the attribute information of the application interface layer of the application process in the portrait state is acquired. Here m is a preset value, for example m=5.
In another example, a portrait-to-landscape switch event occurs while the application is running. After the SurfaceFlinger service receives the attribute information of the application interface layer in the landscape state of the electronic device issued by the application process, the SurfaceFlinger service applies for a cache for the application interface layer according to that attribute information and adds the cache to the buffer queue of the application interface layer. The SurfaceFlinger service thereby learns the attribute information of the application interface layer in the landscape state of the electronic device. In one implementation, if m portrait-to-landscape switch events occur while the application is running and the attribute information of the application interface layer in the landscape state issued by the application process is the same each time, the attribute information of the application interface layer of the application process in the landscape state is acquired.
In yet another example, the interface image includes at least one of a status bar layer, a navigation bar layer, and a wallpaper layer. When a folding-screen unfolding event occurs while the application is running, after the SurfaceFlinger service receives at least one of the status bar layer attribute information, the navigation bar layer attribute information, and the wallpaper layer attribute information in the unfolded state of the folding screen of the electronic device issued by a system process, the SurfaceFlinger service applies for a cache for each corresponding layer according to that attribute information and adds the cache to the buffer queue of the corresponding layer. The SurfaceFlinger service thereby learns at least one of the status bar layer attribute information, the navigation bar layer attribute information, and the wallpaper layer attribute information in the unfolded state of the folding screen. In one implementation, if m folding-screen unfolding events occur while the application is running and the attribute information issued by the system process in the unfolded state is the same each time, at least one of the status bar layer attribute information, the navigation bar layer attribute information, and the wallpaper layer attribute information in the unfolded state of the folding screen is acquired.
In yet another example, the interface image includes at least one of a status bar layer, a navigation bar layer, and a wallpaper layer. When a folding-screen folding event occurs while the application is running, after the SurfaceFlinger service receives at least one of the status bar layer attribute information, the navigation bar layer attribute information, and the wallpaper layer attribute information in the folded state of the folding screen of the electronic device issued by a system process, the SurfaceFlinger service applies for a cache for each corresponding layer according to that attribute information and adds the cache to the buffer queue of the corresponding layer. The SurfaceFlinger service thereby learns at least one of the status bar layer attribute information, the navigation bar layer attribute information, and the wallpaper layer attribute information in the folded state of the folding screen. In one implementation, if m folding-screen folding events occur while the application is running and the attribute information issued by the system process in the folded state is the same each time, at least one of the status bar layer attribute information, the navigation bar layer attribute information, and the wallpaper layer attribute information in the folded state of the folding screen is acquired.
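The first-stage learning check described in the examples above — attribute information for a layer/state pair is only treated as acquired after the same values have been reported m times — can be sketched as follows. The `AttributeLearner` class and its fields are assumptions for illustration, not the patent's implementation:

```python
# Illustrative sketch of the learning stage: record each reported attribute
# set; once the same attributes have been observed m consecutive times for a
# given (layer, device-state) pair, save them as learned.
class AttributeLearner:
    def __init__(self, m=5):
        self.m = m              # preset value, e.g. m = 5
        self._runs = {}         # (layer, state) -> (last_attrs, run_length)
        self.learned = {}       # (layer, state) -> acquired attribute info

    def observe(self, layer, state, attrs):
        key = (layer, state)
        last, count = self._runs.get(key, (None, 0))
        count = count + 1 if attrs == last else 1  # reset if attrs differ
        self._runs[key] = (attrs, count)
        if count >= self.m:
            self.learned[key] = attrs              # attribute info acquired

learner = AttributeLearner(m=3)
for _ in range(3):  # three identical landscape-to-portrait reports
    learner.observe("app", "portrait", {"w": 1080, "h": 2340})
print(("app", "portrait") in learner.learned)  # True
```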
In this way, the SurfaceFlinger service acquires the attribute information of the changed layers among the layers contained in the electronic device interface image when a landscape/portrait switch event, a folding-screen unfolding event, or a folding-screen folding event occurs. Optionally, the SurfaceFlinger service stores the attribute information of each layer contained in the interface image, which may include the attribute information of the application interface layer, the status bar layer, the wallpaper layer, the navigation bar layer, and the like, and may further include the attribute information of each layer contained in the interface image in the portrait state, the landscape state, the folding-screen unfolded state, and the folding-screen folded state of the electronic device.
S902, in a second stage, when a landscape-to-portrait switch event, a portrait-to-landscape switch event, a folding-screen unfolding event, or a folding-screen folding event is detected, the application process notifies the surface synthesis (SurfaceFlinger) service to apply for the buffer (BufferCache) corresponding to each changed layer.
The second stage is the stage after the attribute information of each layer contained in the interface image in the portrait state, the landscape state, the folding-screen unfolded state, and the folding-screen folded state of the electronic device has been acquired.
In one example, the application process detects a landscape-to-portrait switch event, a portrait-to-landscape switch event, a folding-screen unfolding event, or a folding-screen folding event, and sends the event to the surface synthesis (SurfaceFlinger) service.
S903, the surface synthesis (SurfaceFlinger) service applies for the buffer (BufferCache) corresponding to each changed layer according to the saved attribute information of that layer, and updates the saved attribute information of the changed layer.
In one implementation, when a landscape-to-portrait switch event is received, the SurfaceFlinger service applies for the BufferCache corresponding to the application interface layer according to the saved attribute information of the application interface layer in the portrait state of the electronic device. Optionally, the SurfaceFlinger service also releases the BufferCache corresponding to the application interface layer applied for in the previous landscape state. Further, the SurfaceFlinger service updates the saved attribute information of the application interface layer to the attribute information of the application interface layer in the portrait state.
In one implementation, when a portrait-to-landscape switch event is received, the SurfaceFlinger service applies for the BufferCache corresponding to the application interface layer according to the saved attribute information of the application interface layer in the landscape state of the electronic device. Optionally, the SurfaceFlinger service also releases the BufferCache corresponding to the application interface layer applied for in the previous portrait state. Further, the SurfaceFlinger service updates the saved attribute information of the application interface layer to the attribute information of the application interface layer in the landscape state.
In one implementation, when a folding-screen unfolding event is received, the SurfaceFlinger service applies for the BufferCache corresponding to each layer according to at least one of the saved status bar layer attribute information, navigation bar layer attribute information, and wallpaper layer attribute information in the unfolded state of the folding screen of the electronic device. Optionally, the SurfaceFlinger service also releases the BufferCache corresponding to at least one of the status bar layer, the navigation bar layer, and the wallpaper layer applied for in the previous folded state of the folding screen. Further, the SurfaceFlinger service updates the saved attribute information of at least one of the status bar layer, the navigation bar layer, and the wallpaper layer to the corresponding attribute information in the unfolded state of the folding screen.
In one implementation, when a folding-screen folding event is received, the SurfaceFlinger service applies for the BufferCache corresponding to each layer according to at least one of the saved status bar layer attribute information, navigation bar layer attribute information, and wallpaper layer attribute information in the folded state of the folding screen of the electronic device. Optionally, the SurfaceFlinger service also releases the BufferCache corresponding to at least one of the status bar layer, the navigation bar layer, and the wallpaper layer applied for in the previous unfolded state of the folding screen. Further, the SurfaceFlinger service updates the saved attribute information of at least one of the status bar layer, the navigation bar layer, and the wallpaper layer to the corresponding attribute information in the folded state of the folding screen.
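The implementations of S903 above follow one pattern: on a state-change event, apply for a cache sized from the saved attribute information for the new state, release the cache belonging to the previous state, and update the saved information. A minimal sketch, assuming a `StateCacheManager` class invented for this example and 4 bytes per pixel:

```python
# Hedged sketch of S903 (not the patent's implementation): caches keyed by
# layer, replaced whenever the device state changes.
class StateCacheManager:
    def __init__(self, saved_attrs):
        self.saved_attrs = saved_attrs   # state -> {layer: (width, height)}
        self.current = {}                # layer -> (state, BufferCache stand-in)

    def on_event(self, new_state):
        # For each layer that changes with the device state, apply for a new
        # cache sized from the saved attribute info for the new state; the
        # overwritten entry stands in for releasing the previous state's cache.
        for layer, (w, h) in self.saved_attrs[new_state].items():
            self.current[layer] = (new_state, bytearray(w * h * 4))

attrs = {
    "portrait":  {"app": (1080, 2340)},
    "landscape": {"app": (2340, 1080)},
}
mgr = StateCacheManager(attrs)
mgr.on_event("portrait")                 # landscape-to-portrait switch
mgr.on_event("landscape")                # portrait-to-landscape switch
state, buf = mgr.current["app"]
print(state, len(buf))                   # landscape 10108800
```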
S904, when the Vsync signal is received, the surface synthesis (SurfaceFlinger) service puts the graphics data of each layer into the buffer (BufferCache) corresponding to that layer, and binds each layer's BufferCache to the buffer queue (BufferQueue) of the corresponding layer.
In one implementation, after receiving the Vsync signal, the SurfaceFlinger service determines the space occupied by the cache corresponding to each layer according to the saved attribute information of that layer, and looks up the corresponding pre-applied BufferCache according to that space size. The SurfaceFlinger service then puts the graphics data of each layer into the corresponding BufferCache, and binds each BufferCache to the BufferQueue of the corresponding layer.
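The size-based lookup in S904 can be sketched as below. This is an assumption-laden illustration: the helper names, the list of pre-applied buffers, and the 4-bytes-per-pixel size formula are invented for the example:

```python
# Minimal sketch of S904: compute the space a layer's cache needs from its
# saved attribute info, find a pre-applied cache of that size, put the layer's
# graphics data into it, and bind it to the layer's BufferQueue.
def required_size(attrs):
    return attrs["w"] * attrs["h"] * 4    # width * height * bytes-per-pixel

def bind_layer(attrs, pre_applied, buffer_queue, graphics_data):
    size = required_size(attrs)
    buf = next(b for b in pre_applied if len(b) == size)  # look up by size
    buf[:len(graphics_data)] = graphics_data              # place layer data
    buffer_queue.append(buf)                              # bind to the queue
    return buf

pre_applied = [bytearray(1080 * 120 * 4), bytearray(1080 * 2340 * 4)]
queue = []
buf = bind_layer({"w": 1080, "h": 120}, pre_applied, queue, b"\x01\x02")
print(len(buf), len(queue))  # 518400 1
```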
S905, the surface synthesis (SurfaceFlinger) service generates the application interface layer. Optionally, the SurfaceFlinger service also generates at least one of a status bar layer, a wallpaper layer, and a navigation bar layer.
S906, the composition rendering component (Composer) renders and composes each layer to generate the electronic device interface image.
S907, when the next Vsync signal arrives, the display driver displays the electronic device interface image on the screen.
In one implementation, when the application is uninstalled from the electronic device, the surface synthesis (SurfaceFlinger) service clears the stored attribute information of each layer contained in the interface image of the application in the portrait state, the landscape state, the folding-screen unfolded state, and the folding-screen folded state of the electronic device.
In the method for generating an interface image, when the electronic device switches between landscape and portrait, or the folding screen is unfolded or folded while the application is running, the SurfaceFlinger service applies for the BufferCache corresponding to each changed layer in advance. When the SurfaceFlinger service receives the Vsync signal and generates the layers, each pre-applied BufferCache only needs to be bound to its corresponding BufferQueue; no memory needs to be applied for in real time while the layers are generated. The time spent applying for memory does not affect the speed of generating the interface image. Therefore, the problem of the interface image not being ready when the next Vsync signal arrives is avoided, and the probability of frame loss in the display picture of the electronic device is reduced.
It may be understood that, in order to implement the above-mentioned functions, the electronic device provided in the embodiments of the present application includes corresponding hardware structures and/or software modules that perform each function. Those of skill in the art will readily appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The embodiment of the application may divide the functional modules of the electronic device according to the method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
In one example, please refer to fig. 14, which shows a possible structural schematic diagram of the electronic device involved in the above embodiment. The electronic device 1400 includes: a processing unit 1410, a storage unit 1420, and a display unit 1430.
The processing unit 1410 is configured to control and manage the operation of the electronic device 1400.
The memory unit 1420 is used to store program codes and data of the electronic device 1400.
The display unit 1430 is used to display an interface of the electronic device 1400.
Of course, the unit modules in the electronic device 1400 include, but are not limited to, the processing unit 1410, the storage unit 1420, and the display unit 1430.
Optionally, an audio unit, a communication unit, etc. may also be included in the electronic device 1400. The audio unit is used for collecting audio, playing audio and the like. The communication unit is used to support the electronic device 1400 to communicate with other devices.
The processing unit 1410 may be a processor or a controller, such as a central processing unit (central processing unit, CPU), a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (application-specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA) or other programmable logic device, transistor logic device, hardware components, or any combination thereof. The memory unit 1420 may be a memory. The display unit 1430 may be a display screen or the like. The audio unit may include a microphone, a speaker, etc. The communication unit may comprise a mobile communication unit and/or a wireless communication unit.
For example, the processing unit 1410 may be a processor (e.g., the processor 110 shown in fig. 7), the storage unit 1420 may be a memory (e.g., the internal memory 121 shown in fig. 7), and the display unit 1430 may be a display screen (e.g., the display screen 191 shown in fig. 7). The audio unit may be an audio module (such as audio module 170 shown in fig. 7). The communication units may include a mobile communication unit (such as the mobile communication module 150 shown in fig. 7) and a wireless communication unit (such as the wireless communication module 160 shown in fig. 7). The electronic device 1400 provided in the embodiment of the present application may be the electronic device 100 shown in fig. 7. Wherein the processors, memory, display screen, etc. may be coupled together, for example, via a bus.
Embodiments of the present application also provide a chip system including at least one processor and at least one interface circuit. The processors and interface circuits may be interconnected by wires. For example, the interface circuit may be used to receive signals from other devices (e.g., a memory of an electronic apparatus). For another example, the interface circuit may be used to send signals to other devices (e.g., processors). The interface circuit may, for example, read instructions stored in the memory and send the instructions to the processor. The instructions, when executed by a processor, may cause an electronic device to perform the various steps of the embodiments described above. Of course, the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
The embodiment of the application also provides a computer readable storage medium, which comprises computer instructions, when the computer instructions run on the electronic device, the electronic device is caused to execute the functions or steps executed by the mobile phone in the embodiment of the method.
The present application also provides a computer program product, which when run on a computer, causes the computer to perform the functions or steps performed by the mobile phone in the above-mentioned method embodiments.
It will be apparent to those skilled in the art from this description that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely a specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. A method of generating an interface image for an electronic device having a first application installed thereon, an operating system of the electronic device comprising a surface synthesis service and a composition rendering component, the method comprising:
the surface synthesis service acquires attribute information of a first layer of a first interface image; the first interface image is an interface image of the first application, and the attribute information comprises at least one of layer width, layer height and pixel information of the interface image;
at a first moment, the surface synthesis service applies for a first cache according to the attribute information of the first layer;
at a second moment, the surface synthesis service binds the first cache to a buffer queue corresponding to the first layer; the second moment is the moment when the surface synthesis service receives a first vertical synchronization (Vsync) signal, and the first moment is earlier than the second moment;
The surface synthesis service generates all layers according to buffer queues corresponding to all layers contained in the first interface image;
and the composition rendering component renders and synthesizes all the layers to generate the first interface image.
2. The method of claim 1, wherein the surface composition service applying for the first cache based on the attribute information of the first layer comprises:
the surface synthesis service determines a first value according to the attribute information of the first layer, wherein the first value is the size of the space occupied by the first cache;
and the surface synthesis service applies for a first cache according to the first value.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
displaying the first interface image on a screen of the electronic device at a third moment; the third moment is the moment when a second Vsync signal arrives, and the second Vsync signal is the next Vsync signal after the first Vsync signal.
4. The method according to any one of claims 1-3, wherein the first moment is the moment when the Activity corresponding to the first interface image is created or started.
5. The method of claim 4, wherein the first layer is an application interface layer, and wherein the surface composition service obtaining attribute information of the first layer of the first interface image comprises:
the surface synthesis service obtains attribute information of the first layer from a process of the first application.
6. The method of claim 4, wherein the first layer is a status bar layer, a navigation bar layer, or a wallpaper layer, and wherein the surface composition service obtaining attribute information of the first layer of the first interface image comprises:
the surface synthesis service obtains attribute information of the first layer from a process of the operating system.
7. The method according to any one of claims 1-3, wherein the first moment is the moment when a landscape-to-portrait switch event, a portrait-to-landscape switch event, a folding-screen unfolding event, or a folding-screen folding event is detected.
8. The method of claim 7, wherein the first moment is the moment when a landscape-to-portrait switch event is detected, the first layer is an application interface layer, and the surface synthesis service applying for the first cache according to the attribute information of the first layer comprises:
the surface synthesis service applying for the first cache according to the attribute information of the application interface layer in a portrait state of the electronic device.
9. The method of claim 7, wherein the first moment is the moment when a portrait-to-landscape switch event is detected, the first layer is an application interface layer, and the surface synthesis service applying for the first cache according to the attribute information of the first layer comprises:
the surface synthesis service applying for the first cache according to the attribute information of the application interface layer in a landscape state of the electronic device.
10. The method of claim 7, wherein the first moment is the moment when a folding-screen unfolding event is detected, the first layer comprises at least one of a status bar layer, a navigation bar layer, and a wallpaper layer, and the surface synthesis service applying for the first cache according to the attribute information of the first layer comprises:
the surface synthesis service applying for the first cache according to the attribute information of the first layer in an unfolded state of a folding screen of the electronic device.
11. The method of claim 7, wherein the first moment is the moment when a folding-screen folding event is detected, the first layer comprises at least one of a status bar layer, a navigation bar layer, and a wallpaper layer, and the surface synthesis service applying for the first cache according to the attribute information of the first layer comprises:
the surface synthesis service applying for the first cache according to the attribute information of the first layer in a folded state of a folding screen of the electronic device.
12. An electronic device, the electronic device comprising: the device comprises a processor, a memory and a display screen, wherein the processor, the display screen and the memory are coupled; the memory is used for storing computer program codes; the computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any of claims 1-11.
13. A computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 1-11.
CN202310688308.9A 2023-06-09 2023-06-09 Method for generating interface image and electronic equipment Pending CN117724779A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310688308.9A CN117724779A (en) 2023-06-09 2023-06-09 Method for generating interface image and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310688308.9A CN117724779A (en) 2023-06-09 2023-06-09 Method for generating interface image and electronic equipment

Publications (1)

Publication Number Publication Date
CN117724779A (en) 2024-03-19

Family

ID=90207513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310688308.9A Pending CN117724779A (en) 2023-06-09 2023-06-09 Method for generating interface image and electronic equipment

Country Status (1)

Country Link
CN (1) CN117724779A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination