WO2022021895A1 - An image processing method and electronic device
An image processing method and electronic device
- Publication number
- WO2022021895A1 (PCT/CN2021/081367)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- layer
- electronic device
- time
- event
- buffer
- Prior art date
Classifications
- G06T11/60—Editing figures and text; Combining figures or text
- G06T11/206—Drawing of charts or graphs
- G06F9/451—Execution arrangements for user interfaces
- G06F9/52—Program synchronisation; Mutual exclusion, e.g. by means of semaphores
- G06F9/544—Buffers; Shared memory; Pipes
- G06F9/546—Message passing systems or structures, e.g. queues
- G06T1/60—Memory management
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
- G06F2209/548—Queue
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06T2200/24—Indexing scheme for image data processing or generation, in general, involving graphical user interfaces [GUIs]
Description
- the embodiments of the present application relate to the technical field of image processing, and in particular, to an image processing method and an electronic device.
- the high frame rate display of electronic devices is also a development trend.
- the frame rate of electronic devices has evolved from 60 hertz (Hz) to 90 Hz and on to 120 Hz.
- however, the higher the frame rate of an electronic device, the more likely frame loss becomes, which leads to incoherent display content and degrades the user experience. Therefore, how to reduce or even avoid frame dropping when an electronic device displays images is an urgent problem to be solved.
- the embodiments of the present application provide an image processing method and an electronic device, which can reduce the possibility of frame loss when the electronic device displays an image, and can ensure the smoothness of the displayed image on the display screen, thereby improving the user's visual experience.
- an embodiment of the present application provides an image processing method, which can be applied to an electronic device.
- the electronic device draws the first layer, renders the first layer, and caches the rendered first layer in the SF cache queue.
- if the electronic device finishes drawing the first layer before the first moment, the electronic device may draw the second layer before the first moment, render the second layer, and cache the rendered second layer in the SF buffer queue.
- SF is short for Surface Flinger.
- the above-mentioned first moment is the moment when the first vertical synchronization signal for triggering the electronic device to draw the second layer arrives.
- in this way, before the arrival of the next first vertical synchronization signal, the electronic device can continue with the next layer-drawing task (drawing the second layer) after completing the current one (drawing the first layer), instead of waiting for the first vertical synchronization signal to arrive before drawing the second layer. That is, the electronic device can use the idle period of the UI thread to perform the next layer-drawing task in advance. In this way, layer drawing and rendering tasks are completed ahead of time, the possibility of frame loss when the electronic device displays an image is reduced, the smoothness of the displayed image on the display screen is ensured, and the user's visual experience is improved.
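- to make the timing concrete, the following is a minimal sketch of this draw-ahead idea in plain Java. The names (FrameScheduler, draw, render, sfQueue) are illustrative stand-ins for the patent's UI thread, Render thread, and SF buffer queue, not an actual implementation.

```java
// A minimal sketch of the "draw ahead" idea, not the patent's actual code.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class FrameScheduler {
    private final BlockingQueue<String> sfQueue = new ArrayBlockingQueue<>(3);

    // Conventional behavior: one layer per vertical synchronization period.
    void onVsyncConventional(int frame) throws InterruptedException {
        sfQueue.put(render(draw(frame)));              // then idle until the next VSYNC
    }

    // Patented behavior: when frame N finishes before the next VSYNC,
    // immediately draw and render frame N+1 in the idle period.
    void onVsyncAdvance(int frame) throws InterruptedException {
        sfQueue.put(render(draw(frame)));
        if (sfQueue.remainingCapacity() > 0) {         // idle time plus free buffer space
            sfQueue.put(render(draw(frame + 1)));      // draw ahead instead of waiting
        }
    }

    private String draw(int frame)  { return "layer-" + frame; }   // UI thread work
    private String render(String l) { return l + "-rendered"; }    // Render thread work
}
```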
- in a possible design, the electronic device may draw the second layer immediately after finishing the first layer, before the first moment. Specifically, if the electronic device finishes drawing the first layer before the first moment, the electronic device, in response to the end of drawing of the first layer, draws the second layer, renders the second layer, and caches the rendered second layer in the SF buffer queue.
- This design method provides a specific method for the electronic device to draw the second layer in advance.
- in another possible design, the electronic device may not immediately start to draw the second layer in response to the end of drawing of the first layer.
- instead, the electronic device can start to draw the second layer at the second moment, render the second layer, and cache the rendered second layer in the SF buffer queue.
- the second moment is the moment at which a preset percentage of the signal period of the first vertical synchronization signal that triggered the drawing of the first layer has elapsed; the preset percentage is less than 1, and the second moment is before the first moment.
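- a small illustrative computation of the second moment follows, under assumed nanosecond timestamps; the names are mine, not the patent's.

```java
// Illustration of the "second moment"; names and units are assumptions.
class SecondMoment {
    static long secondMomentNs(long vsyncAppNs, long periodNs, double presetPct) {
        // presetPct < 1, so this lands strictly before the first moment,
        // which is the next VSYNC arrival at vsyncAppNs + periodNs.
        return vsyncAppNs + (long) (presetPct * periodNs);
    }
}
```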
- This design method provides a specific method for the electronic device to draw the second layer in advance.
- the electronic device may also finish drawing the first layer after the second moment but before the first moment.
- the electronic device may, in response to the end of drawing of the first layer, draw the second layer, render the second layer, and cache the rendered second layer in the SF cache queue. That is, the electronic device may draw the second layer immediately after the drawing of the first layer is completed.
- This design method provides a specific method for the electronic device to draw the second layer in advance.
- the electronic device may draw the second layer in advance in response to the first UI event.
- the electronic device may receive the first UI event.
- the first UI event is used to trigger the electronic device to display preset image content or display image content in a preset manner.
- the first UI event includes any one of the following: the electronic device receives a fling (throw-slide) operation input by the user, the electronic device receives the user's click operation on a preset control in a foreground application, or a UI event is automatically triggered by the electronic device.
- the electronic device draws the first layer, renders the first layer, and caches the rendered first layer in the SF cache queue.
- before drawing in advance, the electronic device can determine whether the above-mentioned SF buffer queue has enough buffer space for the layers drawn and rendered in advance. Specifically, the electronic device can determine the buffer space of the SF buffer queue and the number of buffered frames in the SF buffer queue, where a buffered frame is a layer cached in the SF buffer queue; it then calculates the difference between the buffer space of the SF buffer queue and the number of buffered frames to obtain the remaining buffer space of the SF buffer queue.
- if the remaining buffer space of the SF buffer queue is greater than the first preset threshold, the electronic device finishes drawing the first layer before the first moment, draws the second layer before the first moment, renders the second layer, and caches the rendered second layer in the SF buffer queue.
- that is, the electronic device draws and renders layers in advance only when the remaining buffer space of the SF buffer queue is greater than the first threshold, i.e., when the remaining buffer space is sufficient to cache the layers drawn and rendered in advance.
- in this way, the electronic device avoids the frame loss that drawing and rendering layers in advance would cause when the buffer space in the SF buffer queue is insufficient, reduces the possibility of frame loss when the electronic device displays an image, ensures the coherence of the displayed image on the display screen, and improves the user's visual experience.
- in another possible design, if the remaining buffer space of the SF buffer queue is less than the second preset threshold, the electronic device draws the second layer in response to the first vertical synchronization signal, renders the second layer, and caches the rendered second layer in the SF buffer queue.
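- the following is a minimal sketch of this buffer-space check; the concrete capacity and threshold values are placeholders, since the patent only fixes the comparison logic.

```java
// Sketch of the SF-queue occupancy check; values below are assumptions.
class SfQueuePolicy {
    static final int FIRST_THRESHOLD = 2;   // first preset threshold
    static final int SECOND_THRESHOLD = 1;  // second preset threshold

    static String decide(int bufferSpace, int bufferedFrames) {
        int remaining = bufferSpace - bufferedFrames;   // remaining buffer space
        if (remaining > FIRST_THRESHOLD) {
            return "draw second layer in advance";      // enough room to draw ahead
        } else if (remaining < SECOND_THRESHOLD) {
            return "wait for the vertical sync signal"; // fall back to signal-driven drawing
        }
        return "keep current behavior";
    }
}
```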
- in another possible design, the electronic device may dynamically set the buffer space of the SF buffer queue. Specifically, after the electronic device finishes drawing the first layer before the first moment, draws the second layer before the first moment, renders the second layer, and caches the rendered second layer in the SF buffer queue, the method of the embodiment of the present application may further include: the electronic device sets the buffer space of the SF buffer queue to M+p frames.
- M is the size of the cache space of the SF cache queue before setting
- p is the number of frames dropped by the electronic device within a preset time, or p is a preset positive integer.
- the electronic device dynamically sets the buffer space of the SF buffer queue, which can expand the buffer space of the SF Buffer.
- this can solve the problem that layer overflow in the SF Buffer affects the coherence of the image displayed by the electronic device, and can improve the coherence of the displayed image.
- in another possible design, if M+p is greater than the preset upper limit N, the electronic device sets the buffer space of the SF buffer queue to N frames. In this design, the electronic device sets an upper limit on the buffer space of the SF buffer queue.
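- a minimal sketch combining the two sizing rules above; the method and variable names are assumptions for illustration.

```java
// Resize rule sketch: expand by the dropped-frame count p, capped at N.
class SfQueueResize {
    static int newBufferSpace(int m, int p, int n) {
        return Math.min(m + p, n);   // M + p frames, but never more than N frames
    }
}
```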
- in another possible design, if the electronic device uses the native Android animation algorithm to calculate the movement distance of a layer and draws the layer according to that movement distance, the image displayed on the display screen of the electronic device is prone to jitter.
- the electronic device may calculate the movement distance of the corresponding layer according to the signal period of the first vertical synchronization signal and draw the layer according to the movement distance.
- specifically, drawing the second layer by the electronic device includes: the electronic device calculates the movement distance of the second layer according to the signal period of the first vertical synchronization signal, and draws the second layer according to the movement distance of the second layer; the movement distance of the second layer is the distance the image content in the second layer has moved compared to the image content in the first layer.
- the electronic device calculating the movement distance of the second layer according to the signal period of the first vertical synchronization signal, and drawing the second layer according to that movement distance,
- may include: the electronic device calculates the processing time of the second layer according to the signal period of the first vertical synchronization signal; calculates the movement distance of the second layer according to the processing time of the second layer; and draws the second layer according to the movement distance of the second layer.
- taking the second layer as the i-th layer, its processing time is p_(i-1) + T_(i-1), where i ≥ 2 and i is a positive integer.
- p_(i-1) is the processing time of the (i-1)-th layer;
- T_(i-1) is the signal period of the first vertical synchronization signal used to trigger the electronic device to draw the (i-1)-th layer.
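- a sketch of this recursion in Java follows, assuming p_1 = 0 for the first layer of the animation; the mapping from processing time to pixels (a constant velocity here) is a placeholder, since the interpolation function is not fixed in this passage.

```java
// Timing recursion p_i = p_(i-1) + T_(i-1), plus a placeholder distance mapping.
class MoveDistance {
    // periodsNs[k] holds T_(k+1): the sync-signal period used for layer k+1.
    static long processingTimeNs(long[] periodsNs, int i) {
        long p = 0;                         // p_1 = 0 for the first layer
        for (int k = 0; k < i - 1; k++) {
            p += periodsNs[k];              // p_i = p_(i-1) + T_(i-1)
        }
        return p;
    }

    // Constant velocity is an assumption standing in for the interpolator.
    static float distancePx(long processingTimeNs, float pxPerSecond) {
        return pxPerSecond * processingTimeNs / 1_000_000_000f;
    }
}
```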
- the electronic device may receive an interruption event for triggering the electronic device to stop displaying the image content corresponding to the first UI event.
- the electronic device can receive the second UI event.
- the second UI event is an interruption event used to trigger the electronic device to stop displaying the image content corresponding to the first UI event.
- the electronic device may stop drawing the layer of the first UI event.
- the electronic device deletes the layer of the first UI event buffered in the SF buffer queue in response to the second vertical synchronization signal.
- the second vertical synchronization signal is used to trigger the electronic device to synthesize the rendered layer to obtain an image frame.
- the electronic device may draw the third layer of the second UI event, render the third layer, and cache the rendered third layer in the SF buffer queue.
- the electronic device stops drawing the layers of the first UI event in response to the second UI event; after that, in response to the second vertical synchronization signal, the layers of the first UI event cached in the SF buffer queue are deleted. In this way, the electronic device can display the image content of the second UI event as soon as possible, the touch response delay is reduced, and the hand-following performance of the electronic device is improved.
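- the following sketch strings these steps together; the queue representation and layer-naming scheme are hypothetical, chosen only to show the ordering of the steps.

```java
// Interruption flow sketch; names and data shapes are assumptions.
import java.util.ArrayDeque;
import java.util.Deque;

class InterruptionHandler {
    private final Deque<String> sfQueue = new ArrayDeque<>();
    private boolean firstEventActive = true;

    void onSecondUiEvent() {
        firstEventActive = false;          // stop drawing layers of the first UI event
    }

    // Second vertical sync signal: drop stale layers before composing.
    void onVsyncSf() {
        if (!firstEventActive) {
            sfQueue.removeIf(layer -> layer.startsWith("firstEvent"));
        }
        // ...compose the remaining rendered layers into an image frame...
    }

    // First vertical sync signal: draw the third layer of the second UI event.
    void onVsyncApp() {
        if (!firstEventActive) sfQueue.add("secondEvent-layer3-rendered");
    }
}
```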
- in another possible design, the method in this embodiment of the present application may further include: the electronic device redraws the fourth layer, so as to roll the layer-drawing logic of the electronic device back to the fourth layer, and obtains the processing time of the fourth layer.
- the fourth layer is the layer following the layer corresponding to the image frame being displayed by the electronic device when the second UI event is received; alternatively, the fourth layer includes both the layer corresponding to the image frame being displayed when the second UI event is received and the layer following it.
- the electronic device no longer renders the fourth layer, and the processing time of the fourth layer is used by the electronic device to calculate the moving distance of the fourth layer.
- redrawing the fourth layer to roll the layer-drawing logic back to it avoids a large jump in the image content displayed by the electronic device, improves the coherence of the displayed image content, and enhances the user experience.
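- a minimal sketch of this rollback step, under the assumption that the fourth layer's processing time follows the same p_i = p_(i-1) + T_(i-1) recursion; the helper name is hypothetical.

```java
// Rollback sketch: the fourth layer is redrawn only to reset the drawing
// logic; it is deliberately not rendered and not queued.
class Rollback {
    static long redrawFourthLayer(long prevProcessingTimeNs, long periodNs) {
        long fourthPTimeNs = prevProcessingTimeNs + periodNs; // p_i = p_(i-1) + T_(i-1)
        // drawing happens here; rendering and SF-queue caching are skipped
        return fourthPTimeNs; // seeds the fourth layer's movement-distance calculation
    }
}
```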
- embodiments of the present application provide an electronic device, where the electronic device includes a display screen, a memory, and one or more processors.
- the display screen and memory are coupled to the processor.
- the display screen is used to display the image generated by the processor.
- the memory is used to store computer program code comprising computer instructions.
- when the computer instructions are executed by the processor, the electronic device is caused to perform the following operations: draw the first layer, render the first layer, and cache the rendered first layer in the SF buffer queue; and, if the drawing of the first layer finishes before the first moment, draw the second layer before the first moment, render the second layer, and cache the rendered second layer in the SF buffer queue.
- the above-mentioned first moment is the moment when the first vertical synchronization signal for triggering the electronic device to draw the second layer arrives.
- in a possible design, when the computer instructions are executed by the processor, the electronic device further performs the following steps: finish drawing the first layer before the first moment, and, in response to the end of drawing of the first layer, draw the second layer, render the second layer, and cache the rendered second layer in the SF buffer queue.
- in another possible design, when the computer instructions are executed by the processor, the electronic device further performs the following steps: finish drawing the first layer before the second moment, start drawing the second layer at the second moment, render the second layer, and cache the rendered second layer in the SF buffer queue.
- the second moment is the moment at which a preset percentage of the signal period of the first vertical synchronization signal that triggered the drawing of the first layer has elapsed; the preset percentage is less than 1, and the second moment is before the first moment.
- in another possible design, when the computer instructions are executed by the processor, the electronic device further performs the following steps: finish drawing the first layer after the second moment but before the first moment, and, in response to the end of drawing of the first layer, draw the second layer, render the second layer, and cache the rendered second layer in the SF buffer queue.
- in another possible design, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: receive a first UI event, where the first UI event is used to trigger the display screen to display preset image content or display image content in a preset manner; the first UI event includes any of the following: the electronic device receives a fling (throw-slide) operation input by the user, the electronic device receives the user's click operation on a preset control in a foreground application, or a UI event is automatically triggered by the electronic device; and, in response to the first UI event, draw the first layer, render the first layer, and cache the rendered first layer in the SF buffer queue.
- in another possible design, when the computer instructions are executed by the processor, the electronic device further performs the following steps: determine the buffer space of the SF buffer queue and the number of buffered frames in the SF buffer queue, a buffered frame being a layer cached in the SF buffer queue; calculate the difference between the buffer space of the SF buffer queue and the number of buffered frames to obtain the remaining buffer space of the SF buffer queue; and, if the remaining buffer space of the SF buffer queue is greater than the first preset threshold, finish drawing the first layer before the first moment, draw the second layer before the first moment, render the second layer, and cache the rendered second layer in the SF buffer queue.
- in another possible design, when the computer instructions are executed by the processor, the electronic device further performs the following steps: if the remaining buffer space of the SF buffer queue is less than the second preset threshold, draw the second layer in response to the first vertical synchronization signal, render the second layer, and cache the rendered second layer in the SF buffer queue.
- in another possible design, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following step: set the buffer space of the SF buffer queue to M+p frames.
- M is the size of the cache space of the SF cache queue before setting;
- p is the number of frames dropped by the electronic device within a preset time, or p is a preset positive integer.
- in another possible design, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following step: if M+p is greater than the preset upper limit N, set the buffer space of the SF buffer queue to N frames.
- in another possible design, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: calculate the movement distance of the second layer according to the signal period of the first vertical synchronization signal, and draw the second layer according to the movement distance of the second layer; the movement distance of the second layer is the distance the image content in the second layer has moved compared to the image content in the first layer.
- specifically, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: calculate the processing time of the second layer according to the signal period of the first vertical synchronization signal; calculate the movement distance of the second layer according to the processing time of the second layer; and draw the second layer according to the movement distance of the second layer.
- taking the second layer as the i-th layer, its processing time is p_(i-1) + T_(i-1), where i ≥ 2 and i is a positive integer.
- p_(i-1) is the processing time of the (i-1)-th layer.
- T_(i-1) is the signal period of the first vertical synchronization signal used to trigger the electronic device to draw the (i-1)-th layer.
- in another possible design, when the computer instructions are executed by the processor, the electronic device further performs the following steps: receive a second UI event; in response to the second UI event, stop drawing the layers of the first UI event; in response to the second vertical synchronization signal, delete the layers of the first UI event cached in the SF buffer queue, where the second vertical synchronization signal is used to trigger the electronic device to synthesize rendered layers to obtain an image frame; and, in response to the first vertical synchronization signal, draw the third layer of the second UI event, render the third layer, and cache the rendered third layer in the SF buffer queue.
- the second UI event is an interruption event used to trigger the electronic device to stop displaying the image content corresponding to the first UI event.
- in another possible design, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following step: redraw the fourth layer, so as to roll the layer-drawing logic of the electronic device back to the fourth layer, and obtain the processing time of the fourth layer.
- the electronic device no longer renders the fourth layer, and the processing time of the fourth layer is used by the electronic device to calculate the moving distance of the fourth layer.
- the fourth layer is the layer following the layer corresponding to the image frame being displayed on the display screen when the second UI event is received; alternatively, the fourth layer includes both the layer corresponding to the image frame being displayed on the display screen when the second UI event is received and the layer following it.
- the present application provides a chip system, which can be applied to an electronic device including a memory and a display screen.
- the chip system includes one or more interface circuits and one or more processors.
- the interface circuit and the processor are interconnected by wires.
- the interface circuit is configured to receive signals from the memory described above and send the signals to the processor, the signals comprising computer instructions stored in the memory.
- when the processor executes the computer instructions, the electronic device executes the method described in the first aspect and any possible design manner thereof.
- the present application provides a computer-readable storage medium comprising computer instructions.
- when the computer instructions are executed on the electronic device, the electronic device is caused to perform the method described in the first aspect and any possible design manner thereof.
- the present application provides a computer program product, which, when the computer program product runs on a computer, causes the computer to execute the method described in the first aspect and any possible design manners thereof.
- FIG. 1 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application;
- FIG. 2A is a schematic diagram of vertical synchronization signals provided by an embodiment of the present application;
- FIG. 2B is a schematic diagram of a software processing flow in which an electronic device displays an image in response to a touch operation according to an embodiment of the present application;
- FIG. 2C is a schematic diagram of the principle of layer drawing, rendering, synthesis, and image frame display performed by an electronic device in the conventional technology;
- FIG. 3 is a flowchart of an image processing method provided by an embodiment of the present application;
- FIG. 4A is a schematic diagram of the principle of layer drawing, rendering, synthesis, and image frame display performed by an electronic device according to an embodiment of the present application;
- FIG. 4B is a flowchart of an image processing method provided by an embodiment of the present application;
- FIG. 5A is a schematic diagram of the principle of layer drawing, rendering, synthesis, and image frame display performed by an electronic device provided by an embodiment of the present application;
- FIG. 5B is a flowchart of another image processing method provided by an embodiment of the present application;
- FIG. 6 is a schematic diagram of a method for caching layers in the SF Buffer provided by an embodiment of the present application;
- FIG. 7A is a schematic diagram of a method for caching layers in a Frame Buffer provided by an embodiment of the present application;
- FIG. 7B is a sequence diagram, captured by the SysTrace tool, of an electronic device drawing multi-frame layers in the conventional technology;
- FIG. 7C is a sequence diagram, captured by the SysTrace tool, of an electronic device drawing multi-frame layers in an embodiment of the present application;
- FIG. 7D is another sequence diagram, captured by the SysTrace tool, of the electronic device drawing multi-frame layers in an embodiment of the present application;
- FIG. 8A is a schematic diagram of a display interface of an electronic device provided by an embodiment of the present application;
- FIG. 8B is a schematic diagram of another display interface of the electronic device provided by an embodiment of the present application;
- FIG. 8C is a schematic diagram of another display interface of the electronic device provided by an embodiment of the present application;
- FIG. 9 is a schematic diagram of the principle of layer drawing, rendering, synthesis, and image frame display performed by another electronic device according to an embodiment of the present application;
- FIG. 10A is a schematic diagram of another method for caching layers in the SF Buffer provided by an embodiment of the present application;
- FIG. 10B is a schematic diagram of another method for caching layers in the SF Buffer provided by an embodiment of the present application;
- FIG. 10C is a schematic diagram of another method for caching layers in the SF Buffer provided by an embodiment of the present application;
- FIG. 10D is a schematic diagram of another method for caching layers in the SF Buffer provided by an embodiment of the present application;
- FIG. 10E is a schematic diagram of changes of the cached frames in the SF Buffer while an electronic device draws multi-frame layers in the conventional technology;
- FIG. 10F is a schematic diagram of changes of the cached frames in the SF Buffer while the electronic device draws multi-frame layers in an embodiment of the present application;
- FIG. 11A is a schematic diagram of the principle of layer drawing, rendering, synthesis, and image frame display performed by another electronic device according to an embodiment of the present application;
- FIG. 11B is a schematic diagram of the variation of the movement distance of a layer provided by an embodiment of the present application;
- FIG. 12 is a schematic diagram of the principle of layer drawing, rendering, synthesis, and image frame display performed by another electronic device according to an embodiment of the present application;
- FIG. 13 is a flowchart of another image processing method provided by an embodiment of the present application;
- FIG. 14 is a flowchart of another image processing method provided by an embodiment of the present application;
- FIG. 15 is a schematic diagram of the principle of layer drawing, rendering, synthesis, and image frame display performed by another electronic device according to an embodiment of the present application;
- FIG. 16A is a schematic diagram of another method for caching layers in the SF Buffer provided by an embodiment of the present application;
- FIG. 16B is a schematic diagram of another method for caching layers in the SF Buffer provided by an embodiment of the present application;
- FIG. 16C is a schematic diagram of another method for caching layers in the SF Buffer provided by an embodiment of the present application;
- FIG. 16D is a schematic diagram of another method for caching layers in the SF Buffer provided by an embodiment of the present application;
- FIG. 17 is a schematic diagram of the principle of layer drawing, rendering, synthesis, and image frame display performed by another electronic device according to an embodiment of the present application;
- FIG. 18A is a schematic diagram of another method for caching layers in the SF Buffer provided by an embodiment of the present application;
- FIG. 18B is a schematic diagram of another method for caching layers in the SF Buffer provided by an embodiment of the present application;
- FIG. 19 is a schematic diagram of the principle of layer drawing, rendering, synthesis, and image frame display performed by another electronic device provided by an embodiment of the present application;
- FIG. 20 is a schematic diagram of the principle of layer drawing, rendering, synthesis, and image frame display performed by another electronic device according to an embodiment of the present application;
- FIG. 21 is a schematic diagram of another method for caching layers in the SF Buffer provided by an embodiment of the present application;
- FIG. 22A is a schematic diagram of the principle of layer drawing, rendering, synthesis, and image frame display performed by another electronic device provided by an embodiment of the present application;
- FIG. 22B is a schematic diagram of another method for caching layers in the SF Buffer provided by an embodiment of the present application;
- FIG. 23 is a schematic structural diagram of a chip system provided by an embodiment of the present application.
- the terms “first” and “second” are used for descriptive purposes only, and should not be construed as indicating or implying relative importance or implicitly indicating the number of indicated technical features.
- a feature defined as “first” or “second” may expressly or implicitly include one or more of that feature.
- unless otherwise specified, “plural” means two or more.
- An embodiment of the present application provides an image processing method, which can be applied to an electronic device including a display screen (for example, a touch screen).
- the aforementioned electronic device may be a cell phone, tablet computer, desktop computer, laptop computer, handheld computer, netbook, ultra-mobile personal computer (UMPC), cellular phone, personal digital assistant (PDA), augmented reality (AR) or virtual reality (VR) device, or other device including a display screen (such as a touch screen); the specific form of the electronic device is not specially limited in the embodiments of the present application.
- FIG. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application.
- the electronic device 100 may include a processor 110 , an external memory interface 120 , an internal memory 121 , a universal serial bus (USB) interface 130 , a charging management module 140 , a power management module 141 , and a battery 142 , Antenna 1, Antenna 2, Mobile Communication Module 150, Wireless Communication Module 160, Audio Module 170, Speaker 170A, Receiver 170B, Microphone 170C, Headphone Interface 170D, Sensor Module 180, Key 190, Motor 191, Indicator 192, Camera 293 , a display screen 194, and a subscriber identification module (subscriber identification module, SIM) card interface 195 and the like.
- the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
- the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100 .
- the electronic device 100 may include more or fewer components than shown, or some components may be combined, or some components may be split, or a different arrangement of components.
- the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
- the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, a neural-network processing unit (NPU), and the like. Different processing units may be independent devices, or may be integrated in one or more processors.
- the controller may be the nerve center and command center of the electronic device 100 .
- the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
- a memory may also be provided in the processor 110 for storing instructions and data.
- the memory in the processor 110 is a cache memory. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory. Repeated accesses are avoided and the waiting time of the processor 110 is reduced, thereby improving the efficiency of the system.
- the processor 110 may include one or more interfaces.
- the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
- the interface connection relationship between the modules illustrated in this embodiment is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
- the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
- the charging management module 140 is used to receive charging input from the charger. While the charging management module 140 charges the battery 142 , it can also supply power to the electronic device through the power management module 141 .
- the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
- the power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 293, and the wireless communication module 160.
- the power management module 141 may also be provided in the processor 110 .
- the power management module 141 and the charging management module 140 may also be provided in the same device.
- the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
- Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
- Each antenna in the electronic device 100 may be used to cover a single communication frequency band or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization.
- the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network.
- the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
- the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
- the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
- the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
- the modem processor may include a modulator and a demodulator.
- the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
- the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
- the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
- the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
- the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
- the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
- the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
- the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
- the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
- the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
- the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
- the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
- the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
- the GPU is used to perform mathematical and geometric calculations for graphics rendering.
- Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
- Display screen 194 is used to display images, videos, and the like.
- the display screen 194 includes a display panel.
- the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
- the display screen 194 in this embodiment of the present application may be a touch screen. That is, the touch sensor 180K is integrated in the display screen 194 .
- the touch sensor 180K may also be referred to as a “touch panel”. That is, the display screen 194 may include a display panel and a touch panel; the touch sensor 180K and the display screen 194 form a touch screen, also referred to as a “touchscreen”.
- the touch sensor 180K is used to detect a touch operation on or near it. After the touch sensor 180K detects a touch operation, a driver of the kernel layer (such as a TP driver) can transmit it to the upper layer to determine the type of the touch event. Visual output related to the touch operation may be provided through the display screen 194.
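- for context only (this listener is standard Android API, not part of the patented method), touch events delivered up from the kernel TP driver arrive in application code as MotionEvent objects:

```java
// Standard Android touch listener, shown to illustrate where framework-
// delivered touch events arrive in application code.
import android.view.MotionEvent;
import android.view.View;

class TouchLogger implements View.OnTouchListener {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        switch (event.getActionMasked()) {
            case MotionEvent.ACTION_DOWN: return true; // finger touches the screen
            case MotionEvent.ACTION_MOVE: return true; // finger slides
            case MotionEvent.ACTION_UP:   return true; // finger lifts; a fling may follow
        }
        return false;
    }
}
```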
- the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the location where the display screen 194 is located.
- the electronic device 100 can realize the shooting function through the ISP, the camera 293, the video codec, the GPU, the display screen 194 and the application processor.
- the ISP is used to process the data fed back by the camera 293 .
- Camera 293 is used to capture still images or video.
- the digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals.
- Video codecs are used to compress or decompress digital video.
- the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in various encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and so on.
- the NPU is a neural-network (NN) computing processor.
- Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
- the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100 .
- the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function, for example, to save files such as music and videos in the external memory card.
- Internal memory 121 may be used to store computer executable program code, which includes instructions.
- the processor 110 executes various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121 .
- the processor 110 may execute instructions stored in the internal memory 121, and the internal memory 121 may include a program storage area and a storage data area.
- the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
- the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 100 and the like.
- the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
- the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
- the audio module 170 is used to convert digital audio information into an analog audio signal output, and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. The speaker 170A is used to convert audio electrical signals into sound signals. The receiver 170B, also referred to as the “earpiece”, is used to convert audio electrical signals into sound signals. The microphone 170C, also called the “mic”, is used to convert sound signals into electrical signals. The earphone jack 170D is used to connect wired earphones.
- the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
- the pressure sensor 180A may be provided on the display screen 194 .
- a capacitive pressure sensor may consist of at least two parallel plates of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
- the electronic device 100 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
- the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
- touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions.
- the electronic device 100 may acquire the pressing force of the user's touch operation through the pressure sensor 180A.
- the keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys.
- the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
- Motor 191 can generate vibrating cues.
- the motor 191 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback.
- the indicator 192 can be an indicator light, which can be used to indicate the charging state, the change of the power, and can also be used to indicate a message, a missed call, a notification, and the like.
- the SIM card interface 195 is used to connect a SIM card.
- a SIM card can be brought into contact with or separated from the electronic device 100 by being inserted into or pulled out of the SIM card interface 195.
- the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
- the SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card and so on.
- the vertical synchronization signal 1, the vertical synchronization signal 2, and the vertical synchronization signal 3 mentioned above are described below.
- Vertical synchronization signal 1 such as VSYNC_APP.
- the vertical synchronization signal 1 can be used to trigger the drawing of one or more layers and render the drawn layers. That is to say, the above vertical synchronization signal 1 may be used to trigger the UI thread to draw one or more layers, and the Render thread will render the one or more layers drawn by the UI thread.
- Vertical synchronization signal 2: for example, VSYNC_SF.
- the vertical synchronization signal 2 can be used to trigger layer composition of one or more layers to be rendered to obtain image frames. That is to say, the above-mentioned vertical synchronization signal 2 can be used to trigger the composition thread to perform layer composition on one or more layers rendered by the Render thread to obtain an image frame.
- Vertical synchronization signal 3: for example, HW_VSYNC.
- the vertical synchronization signal 3 can be used to trigger hardware to refresh and display image frames.
- the vertical synchronization signal 3 is a hardware signal triggered by the display screen driving of the electronic device.
- the signal period T3 of the vertical synchronization signal 3 (eg HW_VSYNC) is determined according to the frame rate of the display screen of the electronic device.
- the signal period T3 of the vertical synchronization signal 3 is the inverse of the frame rate of the display screen (eg, LCD or OLED) of the electronic device.
- the frame rate of the display screen of the electronic device may be any value such as 60 Hz, 70 Hz, 75 Hz, 80 Hz, 90 Hz, or 120 Hz.
- the electronic device may support multiple different frame rates.
- the frame rate of the electronic device can be switched between the different frame rates mentioned above.
- the frame rate described in the embodiments of this application is the frame rate currently used by the electronic device. That is, the signal period of the vertical synchronization signal 3 is the inverse of the frame rate currently used by the electronic device.
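- as a hedged illustration of this relationship (not part of the claimed method), the sketch below derives the signal period T3 from the refresh rate reported by the Android Display API; the class name VsyncPeriod is hypothetical.

```java
import android.view.Display;

// Minimal sketch: T3, the period of the HW_VSYNC-like signal, is the inverse
// of the frame rate currently used by the display (eg 60, 90, or 120 Hz).
public final class VsyncPeriod {
    public static long periodNanos(Display display) {
        float frameRateHz = display.getRefreshRate(); // current frame rate in Hz
        return (long) (1_000_000_000L / frameRateHz); // T3 = 1 / frame rate
    }
}
```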
- the vertical synchronization signal 3 in this embodiment of the present application is a periodic discrete signal.
- the vertical sync signal 1 and the vertical sync signal 2 are generated based on the vertical sync signal 3 , that is, the vertical sync signal 3 may be the signal source of the vertical sync signal 1 and the vertical sync signal 2 .
- the vertical synchronization signal 1 and the vertical synchronization signal 2 are synchronized with the vertical synchronization signal 3 . Therefore, the signal periods of the vertical synchronization signal 1 and the vertical synchronization signal 2 are the same as the signal period of the vertical synchronization signal 3, and the phases are the same.
- the signal period of the vertical synchronization signal 1 and the signal period of the vertical synchronization signal 2 are the same as the signal period of the vertical synchronization signal 3 .
- the phases of the vertical synchronization signal 1 , the vertical synchronization signal 2 , and the vertical synchronization signal 3 match. It can be understood that, in an actual implementation process, a certain phase error may exist between the vertical synchronization signal 1, the vertical synchronization signal 2, and the vertical synchronization signal 3 due to various factors (eg, processing performance). It should be noted that, when understanding the method of the embodiment of the present application, the above-mentioned phase error is ignored.
- the above-mentioned vertical synchronization signal 1 , vertical synchronization signal 2 and vertical synchronization signal 3 are all periodic discrete signals.
- as shown in FIG. 2A, a vertical synchronization signal 1, a vertical synchronization signal 2, and a vertical synchronization signal 3 each arrive once per signal period.
- the signal periods of the vertical synchronization signal 1, the vertical synchronization signal 2, and the vertical synchronization signal 3 can all be referred to as the synchronization period T_Z. That is to say, the synchronization period in the embodiment of the present application is the inverse of the frame rate of the electronic device.
- the names of the vertical synchronization signals may be different in different systems or architectures. For example, the name of the vertical synchronization signal for triggering the drawing of one or more layers (ie, the vertical synchronization signal 1) may not be VSYNC_APP. However, whatever the name of the vertical synchronization signal is, as long as it is a synchronization signal with a similar function that conforms to the technical idea of the method provided by the embodiments of the present application, it should be covered within the protection scope of the present application.
- the definitions of the above-mentioned vertical synchronization signals may also be different.
- the definition of the vertical synchronization signal 1 may be: the vertical synchronization signal 1 may be used to trigger the rendering of one or more layers;
- the definition of the vertical synchronization signal 2 may be: the vertical synchronization signal 2 may be used to trigger the generation of image frames according to one or more layers;
- the definition of the vertical synchronization signal 3 can be: the vertical synchronization signal 3 can be used to trigger the display of image frames.
- the definition of the vertical synchronization signal is not limited. However, no matter what the definition of the vertical synchronization signal is, as long as it is a synchronization signal with similar functions and conforms to the technical idea of the method provided by the embodiments of the present application, it should be covered within the protection scope of the present application.
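- for readers familiar with Android, the following minimal sketch shows how a drawing task can be re-armed on every VSYNC pulse through the real Choreographer API; the VsyncDrawScheduler class and its drawLayers hook are hypothetical placeholders, not the method of the present application.

```java
import android.view.Choreographer;

// Minimal sketch: run a layer-drawing task on each VSYNC pulse, standing in
// for "vertical synchronization signal 1 triggers the UI thread to draw".
public class VsyncDrawScheduler implements Choreographer.FrameCallback {
    public void start() {
        Choreographer.getInstance().postFrameCallback(this);
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        drawLayers(frameTimeNanos);                          // draw, then hand off to rendering
        Choreographer.getInstance().postFrameCallback(this); // re-arm for the next VSYNC
    }

    private void drawLayers(long frameTimeNanos) {
        // Application-specific layer drawing would go here (hypothetical hook).
    }
}
```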
- taking the case where the above-mentioned display screen is a touch screen and the user's operation on the display screen is a touch operation as an example, the following describes the software processing flow of the electronic device during the process from "the user's finger inputs a touch operation on the touch screen" to "the touch screen displays the image corresponding to the touch operation".
- the electronic device may include: a touch panel (TP)/TP driver (Driver) 10, an Input framework (ie, Input Framework) 20, a UI framework (ie, UI Framework) 30, a Display framework (ie, Display Framework) 40, and a hardware display module 50.
- the software processing flow of the electronic device may include the following steps (1) to (5).
- Step (1) After the TP in the TP IC/TP driver 10 collects the touch operation of the user's finger on the TP of the electronic device, the TP driver reports the corresponding touch event to the Event Hub.
- Step (3) The UI thread (such as Do Frame) in the UI framework 30 draws one or more layers corresponding to the touch event; the rendering (Render) thread (such as Draw Frame) renders the one or more layers.
- the above UI thread is a thread in a central processing unit (Central Processing Unit, CPU) of the electronic device.
- a Render thread is a thread in the GPU of an electronic device.
- Step (4) The synthesis thread (Surface Flinger) in the Display framework 40 performs layer synthesis on the one or more drawn layers (ie, the one or more rendered layers) to obtain an image frame.
- Step (5) The liquid crystal display (Liquid Crystal Display, LCD) driver of the hardware display module 50 receives the synthesized image frame, and the LCD displays the synthesized image frame. After the LCD displays the image frame, the image displayed by the LCD can be perceived by the human eye.
- in response to the user's touch operation on the TP or a UI event, the UI framework can call the UI thread to draw one or more layers corresponding to the touch event after the arrival of the vertical synchronization signal 1, and then call the Render thread to render the one or more layers; then, the hardware composer (Hardware Composer, HWC) can call the composition thread to perform layer composition on the one or more drawn layers (that is, the one or more rendered layers) after the arrival of the vertical synchronization signal 2 to obtain an image frame; finally, the hardware display module can refresh and display the above image frame on the LCD after the arrival of the vertical synchronization signal 3.
- the above UI event may be triggered by a user's touch operation on the TP.
- the UI event may be triggered automatically by the electronic device.
- the foreground application of the electronic device automatically switches the screen, the above UI event may be triggered.
- the foreground application is the application corresponding to the interface currently displayed on the display screen of the electronic device.
- the TP may periodically detect the user's touch operation. After the TP detects the touch operation, it can wake up the vertical synchronization signal 1 and vertical synchronization signal 2 to trigger the UI framework to perform layer drawing and rendering based on the vertical synchronization signal 1, and the hardware synthesis HWC to perform layer synthesis based on the vertical synchronization signal 2.
- the detection period of the TP for detecting the touch operation is the same as the signal period T3 of the vertical synchronization signal 3 (eg HW_VSYNC).
- the UI framework periodically performs layer drawing and rendering based on the vertical synchronization signal 1; the hardware composer HWC performs periodic layer synthesis based on the vertical synchronization signal 2; and the LCD periodically refreshes and displays the image frames based on the vertical synchronization signal 3.
- the electronic device may drop frames during the process of layer drawing, rendering, composition, and image frame refresh display. Specifically, a blank image may be displayed during the process in which the display screen refreshes and displays image frames. In this way, the coherence and smoothness of the displayed image on the display screen will be affected, thereby affecting the user's visual experience.
- at time t1, a vertical synchronization signal 1 arrives; in response to the vertical synchronization signal 1 at time t1, the electronic device performs "draw_1" and "render_1"; at time t2, a vertical synchronization signal 2 arrives; in response to the vertical synchronization signal 2 at time t2, the electronic device executes "image frame composition_1"; at time t3, a vertical synchronization signal 3 arrives; in response to the vertical synchronization signal 3 at time t3, the electronic device executes "image frame display_1".
- as shown in FIG. 2C, if the UI thread takes a long time to draw a layer, "drawing" and "rendering" cannot be completed within one synchronization period; the Render thread taking a long time to render the layer also causes "drawing" and "rendering" to be unable to be completed within one synchronization period (not shown in the figure).
- in that case, the image displayed on the display screen has a frame loss phenomenon, that is, the display screen will display a blank image.
- by the method of the embodiment of the present application, it is possible to avoid the phenomenon of frame loss in the displayed image, so as to prevent the display screen from displaying a blank image. That is to say, the method of the embodiment of the present application can reduce the possibility of frame loss when the electronic device displays an image, and can ensure the smoothness of the displayed image on the display screen, thereby improving the user's visual experience.
- the execution body of the method provided in the embodiment of the present application may be an apparatus for processing an image.
- the apparatus may be any of the above-mentioned electronic devices (for example, the apparatus may be the electronic apparatus 100 shown in FIG. 1 ).
- the apparatus may also be a central processing unit (Central Processing Unit, CPU) of the electronic device, or a control module in the electronic device for executing the method provided by the embodiment of the present application.
- the method provided by the embodiments of the present application is introduced by taking the above-mentioned method for image processing performed by an electronic device (such as a mobile phone) as an example.
- in the embodiments of the present application, the vertical synchronization signal 1 (such as the VSYNC_APP signal) is the first vertical synchronization signal, the vertical synchronization signal 2 (such as the VSYNC_SF signal) is the second vertical synchronization signal, and the vertical synchronization signal 3 (such as the HW_VSYNC signal) is the third vertical synchronization signal.
- the embodiments of the present application provide an image processing method.
- the image processing method may include S301-S302.
- S301: The electronic device draws the first layer, renders the first layer, and caches the rendered first layer in the SF buffer queue.
- S302: If the electronic device finishes drawing the first layer before the first time, the electronic device draws the second layer before the first time, renders the second layer, and caches the rendered second layer in the SF buffer queue.
- the above-mentioned first layer may be drawn by the electronic device at the moment when a vertical synchronization signal 1 arrives.
- the first layer may be layer 1 drawn by the electronic device by executing "Draw_1" shown in FIG. 4A
- the layer 1 is drawn by the electronic device in response to the vertical synchronization signal 1 at time t1; that is, drawing of layer 1 starts at time t1.
- the second layer may be layer 2 drawn by executing “draw_2” after the electronic device executes “draw_1” shown in FIG. 4A or FIG. 5A to draw layer 1 .
- the above-mentioned first layer may be drawn after the drawing of one frame of layers is completed and before the arrival of the next vertical synchronization signal 1 .
- the first layer may be layer 2 drawn by the electronic device executing "draw_2" shown in FIG. 4A .
- the second layer may be layer 3 drawn by executing “draw_3” after the electronic device executes “draw_2” shown in FIG. 4A to draw layer 2 .
- the above-mentioned layer 2 (ie, the first layer) may be drawn by the electronic device at time t1.4, after the drawing of the above-mentioned layer 1 is completed (that is, after the electronic device finishes executing the above-mentioned "draw_1") and before the arrival of the vertical synchronization signal 1 at time t2.
- time t1.4 is after time t1 and before time t2.
- the time t1.4 shown in FIG. 4A is the same time as the time t_x shown in FIG. 2C, that is, the moment at which the electronic device completes "draw_1".
- the first layer may be layer 3 drawn by the electronic device executing "draw_3" shown in FIG. 4A .
- the second layer may be the layer 4 drawn by the electronic device executing “Draw_4” after drawing the layer 3 by executing “Draw_3” shown in FIG. 4A .
- the above-mentioned layer 3 (ie, the first layer) may be drawn by the electronic device at time t2.4, after the drawing of the above-mentioned layer 2 is completed (that is, after the electronic device finishes executing the above-mentioned "draw_2") and before the arrival of the vertical synchronization signal 1 at time t3.
- time t2.4 is after time t2 and before time t3.
- the first moment is the moment when the vertical synchronization signal 1 for triggering the electronic device to draw the second layer arrives.
- for example, if the first layer is layer 1 drawn by the electronic device executing "draw_1" shown in FIG. 4A, and the second layer is layer 2 drawn by the electronic device executing "draw_2" shown in FIG. 4A, then the above-mentioned first time is time t2 shown in FIG. 4A; in the conventional technology, the vertical synchronization signal 1 at time t2 is used to trigger the electronic device to execute "draw_2" to draw layer 2.
- similarly, if the first layer is layer 2 drawn by the electronic device executing "draw_2" shown in FIG. 4A, and the second layer is layer 3 drawn by the electronic device executing "draw_3" shown in FIG. 4A, then the above-mentioned first time is time t3 shown in FIG. 4A; in the conventional technology, the vertical synchronization signal 1 at time t3 is used to trigger the electronic device to execute "draw_3" to draw layer 3.
- in the conventional technology, the UI thread of the electronic device periodically draws layers based on the vertical synchronization signal 1. Therefore, after the electronic device executes S301, even if the UI thread of the electronic device has completed the drawing of the first layer, the UI thread will not draw the second layer as long as the vertical synchronization signal 1 is not detected; the UI thread of the electronic device will not start to draw the second layer until the next vertical synchronization signal 1 arrives.
- for example, as shown in FIG. 2C, at time t1, a vertical synchronization signal 1 arrives; in response to the vertical synchronization signal 1 at time t1, the UI thread of the electronic device can execute "draw_1" to draw layer 1 (ie, the first layer), and then the Render thread of the electronic device executes "render_1" to render layer 1.
- the UI thread completes "draw_1" at time t_x shown in FIG. 2C, that is, completes the drawing task of the first layer.
- in the conventional technology, not until the vertical synchronization signal 1 arrives at time t2 can the UI thread execute "draw_2" to draw layer 2 (ie, the second layer) and the Render thread execute "render_2" to render layer 2.
- the above-mentioned idle period of the UI thread (the period of ⁇ t1 shown in FIG. 2C ) can be used to draw the second layer in advance.
- in this way, the drawing task of the second layer can be completed in advance, which increases the possibility that the electronic device completes "render_2" before the arrival of the vertical synchronization signal 2 at time t3 shown in FIG. 2C.
- this embodiment of the present application introduces specific methods for the electronic device to perform S302.
- in some embodiments, when the electronic device finishes drawing the first layer before the first time, it can start drawing the second layer immediately after the drawing of the first layer is completed, and render the second layer.
- the foregoing S302 may include S302a.
- S302a: If the electronic device finishes drawing the first layer before the first time, the electronic device, in response to the completion of drawing the first layer, draws the second layer, renders the second layer, and caches the rendered second layer in the SF buffer queue.
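- the following is a minimal sketch of the S302a behavior under assumed names (EagerDrawLoop, draw, render); it illustrates the idea rather than a definitive implementation: when the first layer finishes drawing before the next VSYNC_APP pulse, the second layer is drawn immediately instead of waiting for that pulse.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal sketch of S302a with hypothetical names, not the framework API.
public class EagerDrawLoop {
    static final class Layer { final int id; Layer(int id) { this.id = id; } }

    private final Queue<Layer> sfBufferQueue = new ArrayDeque<>(); // FIFO of rendered layers
    private int nextLayerId = 1;

    // Called when a VSYNC_APP pulse (vertical synchronization signal 1) arrives.
    public void onVsyncApp(long nextVsyncTimeNanos) {
        Layer first = draw();                  // eg "draw_1"
        sfBufferQueue.add(render(first));      // eg "render_1"
        if (System.nanoTime() < nextVsyncTimeNanos) {
            // Drawing of the first layer ended before the next VSYNC_APP pulse,
            // so draw the second layer right away (S302a).
            Layer second = draw();             // eg "draw_2"
            sfBufferQueue.add(render(second)); // eg "render_2"
        }
    }

    private Layer draw()          { return new Layer(nextLayerId++); }
    private Layer render(Layer l) { return l; } // stands in for the Render thread
}
```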
- for example, at time t1, a vertical synchronization signal 1 arrives; in response to the vertical synchronization signal 1 at time t1, the UI thread of the electronic device can execute "draw_1" to draw layer 1 (that is, the first layer), and then the Render thread of the electronic device executes "render_1" to render layer 1.
- the UI thread completes "draw_1" at time t1.4 shown in FIG. 4A, that is, the drawing of layer 1 is completed.
- then, the UI thread may start to execute "draw_2" to draw layer 2 (ie, the second layer) from time t1.4, and the Render thread executes "render_2" to render layer 2, instead of waiting for the vertical synchronization signal 1 at time t2 and only starting to execute "draw_2" to draw layer 2 at time t2.
- the UI thread completes "draw_2" at time t2.4 shown in FIG. 4A, that is, the drawing of layer 2 (ie, the first layer) is completed.
- then, the UI thread may start to execute "draw_3" to draw layer 3 (ie, the second layer) at time t2.4, and the Render thread executes "render_3" to render layer 3, instead of waiting for the vertical synchronization signal 1 at time t3 and only starting to execute "draw_3" to draw layer 3 at time t3.
- the UI thread completes "draw_3" at time t3.4 shown in FIG. 4A, that is, the drawing of layer 3 (ie, the first layer) is completed.
- then, the UI thread may start to execute "draw_4" to draw layer 4 (ie, the second layer) at time t3.4, and the Render thread executes "render_4" to render layer 4, instead of waiting for the vertical synchronization signal 1 at time t4 and only starting to execute "draw_4" to draw layer 4 at time t4.
- in this way, "draw_2" and "render_2" may be completed before the arrival of the vertical synchronization signal 2 at time t3.
- then, the electronic device (such as the composition thread of the electronic device) can perform "image frame composition_2" in response to the vertical synchronization signal 2 at time t3, so that the electronic device (such as the LCD of the electronic device) can execute "image frame display_2" in response to the vertical synchronization signal 3 at time t4. In this way, in the synchronization period from time t4 to time t5 shown in FIG. 2C, the problem of frame loss in the displayed image on the display screen (ie, the display screen displaying a blank image) can be avoided.
- in some other embodiments, the electronic device does not necessarily start to draw the second layer immediately in response to the end of drawing the first layer.
- the foregoing S302 may include S302b-S302c.
- S302b: If the electronic device finishes drawing the first layer before the second time, the electronic device starts to draw the second layer from the second time, renders the second layer, and caches the rendered second layer in the SF buffer queue.
- the second time is the time at which a preset percentage of the signal period of the vertical synchronization signal 1 used for triggering the electronic device to draw the first layer has elapsed, and the preset percentage is less than 1.
- the preset percentage may be any value such as 50%, 33.33%, or 40%.
- the preset percentage may be pre-configured in the electronic device, or may be set by the user in the electronic device. In the following embodiments, the method of the embodiments of the present application is described by taking the preset percentage equal to 33.33% (ie, 1/3) as an example.
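- a small sketch of this calculation under assumed names: the second time is the arrival time of the VSYNC_APP pulse that triggered drawing of the first layer, plus the preset percentage of that pulse's signal period.

```java
// Minimal sketch (assumed names): computing the "second time" of S302b.
public final class SecondTime {
    public static long secondTimeNanos(long vsyncTimeNanos, long periodNanos,
                                       double presetPercentage) {
        // presetPercentage must be less than 1, eg 1.0 / 3.0 as in the example below.
        return vsyncTimeNanos + (long) (periodNanos * presetPercentage);
    }

    public static void main(String[] args) {
        long period = 11_111_111L; // ~11.11 ms (90 Hz) expressed in nanoseconds
        long t1 = 0L;              // VSYNC_APP pulse at time t1
        System.out.println(secondTimeNanos(t1, period, 1.0 / 3.0)); // ~3.70 ms after t1
    }
}
```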
- for example, the vertical synchronization signal 1 at time t1 shown in FIG. 5A is used to trigger the electronic device to execute "draw_1" to draw layer 1 (ie, the first layer); the second time is time t1/3, where the period from time t1 to time t1/3 is the preset percentage of the signal period T1 of the vertical synchronization signal 1 at time t1; for example, the period from time t1 to time t1/3 is equal to 1/3 of T1 (ie, T1/3).
- in this example, the first time is time t2 shown in FIG. 5A, and the second time is time t1/3 shown in FIG. 5A, which is before time t2.
- the electronic device executes "draw_1" and finishes drawing layer 1 (ie, the first layer) at time t1.5, which is before time t1/3 (ie, the second time). That is, the electronic device finishes drawing layer 1 before time t1/3 (ie, the second time). Therefore, the electronic device may execute S302b, and execute "draw_2" to draw layer 2 (ie, the second layer) from time t1/3 (ie, the second time).
- similarly, the vertical synchronization signal 1 at time t2 shown in FIG. 5A is used to trigger the electronic device to execute "draw_2" to draw layer 2 (ie, the first layer); the second time is time t2/3, where the duration from time t2 to time t2/3 is the preset percentage of the signal period T2 of the vertical synchronization signal 1 at time t2; for example, the duration from time t2 to time t2/3 is equal to 1/3 of T2.
- in this example, the first time is time t3 shown in FIG. 5A, and the second time is time t2/3 shown in FIG. 5A, which is before time t3.
- the electronic device executes "draw_2" and finishes drawing layer 2 (ie, the first layer) at time t2.5, which is before time t2/3 (ie, the second time). That is, the electronic device finishes drawing layer 2 before time t2/3 (ie, the second time). Therefore, the electronic device may execute S302b, and execute "draw_3" to draw layer 3 (ie, the second layer) from time t2/3 (ie, the second time).
- S302c: If the electronic device finishes drawing the first layer after the second time and before the first time, the electronic device, in response to the completion of drawing the first layer, draws the second layer, renders the second layer, and caches the rendered second layer in the SF buffer queue.
- for example, the vertical synchronization signal 1 at time t3 shown in FIG. 5A is used to trigger the electronic device to execute "draw_3" to draw layer 3 (ie, the first layer); the second time is time t3/3, where the period from time t3 to time t3/3 is the preset percentage of the signal period T3 of the vertical synchronization signal 1 at time t3; for example, the period from time t3 to time t3/3 is equal to T3/3.
- in this example, the first time is time t4 shown in FIG. 5A, and the second time is time t3/3 shown in FIG. 5A, which is before time t4.
- the electronic device executes "draw_3" and finishes drawing layer 3 at time t3.5, where time t3.5 is after time t3/3 (ie, the second time) and before time t4 (ie, the first time). Therefore, the electronic device may execute S302c: in response to finishing drawing layer 3 at time t3.5, the electronic device executes "draw_4" to draw layer 4 (ie, the second layer) from time t3.5.
- the electronic device may cache the rendered layer in the SF buffer queue (Buffer).
- the SF Buffer can cache the rendered layers in a queue according to the principle of first-in, first-out.
- for example, the Render thread of the electronic device executes "render_1" shown in FIG. 5A to obtain the rendered layer 1, and inserts the rendered layer 1 into the SF Buffer; then, the Render thread executes "render_2" shown in FIG. 5A to obtain the rendered layer 2, and inserts the rendered layer 2 into the SF Buffer; subsequently, the Render thread executes "render_3" shown in FIG. 5A to obtain the rendered layer 3, and inserts the rendered layer 3 into the SF Buffer.
- as shown in FIG. 6, the SF Buffer caches layer 1, layer 2, and layer 3 according to the first-in, first-out principle. That is to say, the layers in the SF Buffer shown in FIG. 6 are enqueued in the order of layer 1, layer 2, layer 3, and dequeued in the same order.
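- the first-in, first-out behavior described above can be illustrated with the following minimal sketch; the 3-frame capacity is an assumption for the demo, not the buffer size the system actually configures.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal FIFO sketch of the SF Buffer: layers leave in the order they entered.
public class SfBufferDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> sfBuffer = new ArrayBlockingQueue<>(3);
        sfBuffer.put("layer 1"); // enqueued in rendering order
        sfBuffer.put("layer 2");
        sfBuffer.put("layer 3");
        System.out.println(sfBuffer.take()); // "layer 1" is dequeued first for composition
        System.out.println(sfBuffer.take()); // "layer 2"
        System.out.println(sfBuffer.take()); // "layer 3"
    }
}
```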
- the method of this embodiment of the present application may further include S303 - S304 .
- S303: In response to the vertical synchronization signal 2, the electronic device performs layer composition on the layers buffered in the SF buffer queue to obtain an image frame, and buffers the synthesized image frame.
- S304: The electronic device refreshes and displays the buffered image frame in response to the vertical synchronization signal 3.
- for example, at time t2 shown in FIG. 4A or FIG. 5A, a vertical synchronization signal 2 arrives; in response to the vertical synchronization signal 2 at time t2, the composition thread of the electronic device can execute "image frame composition_1" to perform layer composition on the rendered layer 1 to obtain image frame 1. At time t3 shown in FIG. 4A or FIG. 5A, a vertical synchronization signal 3 arrives; in response to the vertical synchronization signal 3 at time t3, the LCD of the electronic device can execute "image frame display_1" to refresh and display the above-mentioned image frame 1.
- at time t3 shown in FIG. 4A or FIG. 5A, a vertical synchronization signal 2 arrives; in response to the vertical synchronization signal 2 at time t3, the composition thread of the electronic device can execute "image frame composition_2" to perform layer composition on the rendered layer 2 to obtain image frame 2. At time t4 shown in FIG. 4A or FIG. 5A, a vertical synchronization signal 3 arrives; in response to the vertical synchronization signal 3 at time t4, the LCD of the electronic device can execute "image frame display_2" to refresh and display the above image frame 2.
- at time t4 shown in FIG. 4A or FIG. 5A, a vertical synchronization signal 2 arrives; in response to the vertical synchronization signal 2 at time t4, the composition thread of the electronic device can execute "image frame composition_3" to perform layer composition on the rendered layer 3 to obtain image frame 3. At time t5 shown in FIG. 4A or FIG. 5A, a vertical synchronization signal 3 arrives; in response to the vertical synchronization signal 3 at time t5, the LCD of the electronic device can execute "image frame display_3" to refresh and display the above image frame 3.
- the "cached layer” described in S303 refers to the layer cached in the above-mentioned SF Buffer, such as the layer cached in the SF Buffer as shown in FIG. 6 .
- for example, in response to the vertical synchronization signal 2 at time t2 shown in FIG. 4A or FIG. 5A, the composition thread of the electronic device can obtain layer 1 from the SF Buffer shown in FIG. 6 (ie, layer 1 is dequeued), and execute "image frame composition_1" to perform layer composition on the rendered layer 1 to obtain image frame 1.
- “buffering the image frame” described in S303 refers to buffering the synthesized image frame into a frame (Frame) Buffer.
- the Frame Buffer can buffer image frames in a queue according to the principle of first-in, first-out.
- the image frame 1 obtained by the composition thread of the electronic device executing "image frame composition_1" shown in FIG. 4A or FIG. 5A may be inserted into the Frame Buffer shown in FIG. 7A .
- then, the composition thread of the electronic device executes "image frame composition_2" shown in FIG. 4A or FIG. 5A to obtain image frame 2, which is also inserted into the Frame Buffer shown in FIG. 7A; subsequently, the composition thread of the electronic device executes "image frame composition_3" shown in FIG. 4A or FIG. 5A to obtain image frame 3, which is inserted into the Frame Buffer shown in FIG. 7A.
- the Frame Buffer buffers image frame 1, image frame 2, and image frame 3 according to the first-in, first-out principle. That is to say, the image frames in the Frame Buffer shown in FIG. 7A are enqueued in the order of image frame 1, image frame 2, image frame 3, and dequeued in the same order. That is, when the electronic device executes S304, in response to the vertical synchronization signal 3, it can refresh and display the image frames buffered in the Frame Buffer according to the first-in, first-out principle.
- in the conventional technology, the UI thread of the electronic device performs a layer drawing task only when triggered by the vertical synchronization signal 1; that is, within one synchronization period, the UI thread can perform only one layer drawing task.
- by the method of this embodiment of the present application, the UI thread does not need to wait for the vertical synchronization signal 1 to execute a layer drawing task; the UI thread can execute multiple layer drawing tasks in one synchronization period (ie, within one frame). Specifically, as shown in FIG. 4A or FIG. 5A, after the UI thread finishes executing one layer drawing task, it can use the idle period to execute the next layer drawing task in advance; in this way, the UI thread can perform multiple layer drawing tasks within one synchronization period (ie, one frame).
- FIG. 7B shows a timing diagram, captured by a person skilled in the art using the Android general-purpose SysTrace tool, of the electronic device drawing multiple frames of layers when the electronic device executes the solution of the conventional technology.
- FIG. 7C shows a timing diagram, captured by a person skilled in the art using the SysTrace tool, of the electronic device drawing the above-mentioned multi-frame layers when the electronic device executes the solution of the embodiment of the present application.
- the signal period of the vertical synchronization signal 1 is 11.11ms.
- in the conventional technology, the electronic device draws one frame of layers in response to one vertical synchronization signal 1, and draws the next frame of layers in response to the next vertical synchronization signal 1. Therefore, the frame interval between two adjacent frames of layers is equal to the signal period of the vertical synchronization signal 1 (eg, 11.11 ms).
- if the drawing duration of one frame of layers is longer than the above signal period, the frame interval between this layer and the next frame of layers will be longer than the signal period of the vertical synchronization signal 1 (eg, 11.11 ms).
- therefore, in the conventional technology, the frame interval between two adjacent frames of layers will not be smaller than the signal period of the vertical synchronization signal 1.
- as shown in FIG. 7B, the frame interval between two adjacent layers is greater than or equal to 11.11 ms; for example, the frame interval between two adjacent layers is 11.35 ms, and 11.35 ms > 11.11 ms.
- by the method of this embodiment of the present application, the electronic device can draw the next frame of layers in response to the completion of drawing one frame of layers, without waiting for the vertical synchronization signal 1. Therefore, the frame interval between two adjacent frames of layers can be smaller than the signal period of the vertical synchronization signal 1 (eg, 11.11 ms).
- of course, if the drawing duration of one frame of layers is too long, the frame interval between this layer and the next frame of layers may still be greater than or equal to the signal period of the vertical synchronization signal 1 (eg, 11.11 ms). That is to say, by implementing the solutions of the embodiments of the present application, the frame interval between two adjacent frames of layers may be smaller than the signal period of the vertical synchronization signal 1.
- for example, as shown in FIG. 7C, the frame interval between two adjacent layers is 1.684 ms, and 1.684 ms < 11.11 ms.
- after the electronic device finishes executing one layer drawing task, it can continue to execute the next layer drawing task, instead of waiting for the arrival of the vertical synchronization signal 1 to execute the next layer drawing task. That is to say, the electronic device can use the idle period of the UI thread (the period of Δt1 shown in FIG. 2C) to execute the next layer drawing task in advance. In this way, the layer drawing and rendering tasks can be completed in advance, the possibility of frame loss when the electronic device displays the image can be reduced, the smoothness of the displayed image on the display screen can be ensured, and the user's visual experience can be improved.
- in response to a user's touch operation on the TP or a UI event, the electronic device can start the above-mentioned process of layer drawing, rendering, composition, and image frame display based on the vertical synchronization signals.
- the electronic device may also respond to a user's touch operation on the TP or a UI event, and start the above-mentioned process of layer drawing, rendering, composition, and image frame display based on the vertical synchronization signal.
- the solution of the embodiment of the present application differs from the conventional technology in that: after starting the above process, the electronic device no longer performs the layer drawing task based on the vertical synchronization signal 1; instead, in response to the completion of the previous layer drawing task, it continues to execute the next layer drawing task.
- it should be noted that the electronic device does not perform layer drawing, rendering, composition, and image frame display according to the process of S301-S304 for all touch operations or UI events.
- only in response to certain touch operations or UI events does the electronic device perform layer drawing, rendering, composition, and image frame display according to the process of S301-S304.
- the method in this embodiment of the present application may further include: the electronic device receives the first UI event.
- in response to the first UI event, the electronic device may wake up the vertical synchronization signals and execute S301-S304.
- the first UI event is used to trigger the electronic device to display preset image content or display image content in a preset manner.
- the above-mentioned preset image content or image content displayed in a preset manner may be referred to as "deterministic animation".
- the above-mentioned first UI event may be a user operation received by the electronic device.
- the first UI event is a user operation (such as a touch operation, etc.) that can trigger the electronic device to display predefined image content. That is to say, the image content displayed by the electronic device triggered by the first UI event can be predetermined by the electronic device. Therefore, the electronic device can use the idle period of the UI thread to perform the layer drawing task in advance.
- the above-mentioned first UI event may be a Fling operation (also called a Fling gesture) input by the user on a display screen (eg, a touch screen) of the electronic device.
- when the electronic device receives the Fling gesture input by the user, the user's finger slides across the display screen and then leaves it. After the finger leaves the display screen, the animation displayed on the display screen still slides with "inertia" in the sliding direction of the finger until it stops. That is to say, the electronic device can calculate, according to the inertia of the Fling gesture, the image content to be displayed by the electronic device.
- the electronic device may use the idle period of the UI thread to perform the layer drawing task in advance.
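- as a hedged illustration of how such "inertia" can be modeled on Android, the sketch below uses the real OverScroller API to compute the per-frame content offset of a Fling ahead of time; the FlingModel class and its return convention are hypothetical.

```java
import android.content.Context;
import android.widget.OverScroller;

// Minimal sketch: the post-lift sliding of a Fling gesture is commonly modeled
// with OverScroller, so the content to display in later frames can be computed
// in advance from the start position and velocity of the gesture.
public class FlingModel {
    private final OverScroller scroller;

    public FlingModel(Context context) {
        this.scroller = new OverScroller(context);
    }

    // startY and velocityY come from the touch events of the Fling gesture.
    public void startFling(int startY, int velocityY, int minY, int maxY) {
        scroller.fling(0, startY, 0, velocityY, 0, 0, minY, maxY);
    }

    // Returns the scroll offset for the current frame, or -1 once the
    // animation has stopped (a convention chosen for this sketch).
    public int nextOffset() {
        return scroller.computeScrollOffset() ? scroller.getCurrY() : -1;
    }
}
```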
- FIG. 7D shows a sequence diagram of drawing a multi-frame layer by an electronic device captured by a person skilled in the art using the SysTrace tool in the process that the electronic device receives and responds to the above-mentioned Fling operation.
- the process in which the electronic device receives and responds to the Fling operation can be divided into four stages shown in FIG. 7D: falling (Down), sliding (Move), lifting (Up), and fling (Fling).
- Down shown in FIG. 7D means that the user's finger falls on the display screen (such as a touch screen) of the electronic device, and the electronic device can detect that the user's finger is down (Down).
- the Move shown in FIG. 7D means that the user's finger slides on the display screen after falling on the display screen, and the electronic device can detect the sliding (Move) of the user's finger.
- the Fling shown in FIG. 7D means that after the user lifts the finger, the animation displayed on the display screen still slides in the direction of the finger sliding with the "inertia".
- the electronic device can draw the layers in advance.
- as shown in FIG. 7D, the frame interval between two adjacent layers during the period from time t_o to time t_p, and the frame interval between two adjacent layers during the period from time t_p to time t_q, are smaller than the frame interval between two adjacent layers at other times.
- at other times, the frame interval is equal to the signal period of the vertical synchronization signal 1. It can be seen that, in the Fling stage shown in FIG. 7D, the electronic device draws at least two layers in advance.
- the above-mentioned first UI event may also be a user's click operation on a preset control in the foreground application.
- the foreground application is the application corresponding to the interface currently displayed on the display screen of the electronic device.
- the image content to be displayed by the electronic device is predefined. Therefore, the electronic device can use the idle period of the UI thread to perform the layer drawing task in advance.
- the mobile phone displays the call record interface 801 of the phone application shown in (a) of FIG. 8A .
- the above-mentioned first UI event may be the user's click operation on the preset control "Contacts" 802 in the call record interface 801. The user's click operation on the preset control "Contacts" 802 is used to trigger the mobile phone to display the contacts interface, such as the contacts interface 803 shown in (b) of FIG. 8A.
- the contacts interface is predefined. Therefore, in response to the user's click operation on the preset control "Contacts" 802, the mobile phone can wake up the vertical synchronization signals and execute the method of the embodiment of the present application.
- the mobile phone displays the main interface 804 shown in (a) of FIG. 8B .
- the main interface 804 includes the icon 805 of the setting application.
- the above-mentioned first UI event may be a user's click operation on the icon 805 of the setting application shown in (a) of FIG. 8B .
- the user's click operation on the icon 805 of the settings application (ie, the preset control) is used to trigger the mobile phone to display the settings interface, such as the settings interface 806 shown in (b) of FIG. 8B.
- the settings interface 806 is predefined. Therefore, in response to the user's click operation on the icon 805 of the setting application, the mobile phone can wake up the vertical synchronization signal, and execute the method of the embodiment of the present application.
- in response to the user's click operation on some function options in the settings interface, the interface displayed by the mobile phone is also predefined. For example, the mobile phone may display the mobile network settings interface.
- the mobile network settings interface is predefined. Therefore, in response to the user's click operation on some function options in the setting interface, the mobile phone can wake up the vertical synchronization signal, and execute the method of the embodiment of the present application.
- the mobile phone displays the main interface 804 shown in (a) of FIG. 8C .
- the main interface 804 includes an icon 807 of a video application.
- the above-mentioned first UI event may be the user's click operation on the icon 807 of the **video application shown in (a) of FIG. 8C .
- the user's click operation on the icon 807 of the **video application (ie, the preset control) is used to trigger the mobile phone to display the advertisement page of the **video application shown in (b) of FIG. 8C.
- the advertisement page of the **video application is predefined. Therefore, in response to the user's click operation on the icon 807 of the **video application, the mobile phone can wake up the vertical synchronization signals and execute the method of the embodiment of the present application.
- the above-mentioned first UI event may be a UI event automatically triggered by the electronic device.
- the foreground application of the electronic device automatically switches the screen, the above UI event may be triggered.
- the foreground application is the application corresponding to the interface currently displayed on the display screen of the electronic device.
- in other words, when the electronic device displays a "deterministic animation" in response to the above-mentioned first UI event, it can perform layer drawing, rendering, composition, and image frame display according to the process of S301-S304.
- the possibility of frame loss when the electronic device displays an image can be reduced, the smoothness of the displayed image on the display screen can be ensured, and the user's visual experience can be improved.
- in some other embodiments, when a preset function of the electronic device is enabled or the electronic device enters a preset mode, the electronic device can perform layer drawing, rendering, composition, and image frame display according to the process of S301-S304.
- the above preset function may also be referred to as an advance drawing function, a preprocessing function, or a smart layer processing function, or the like.
- the above preset mode may also be referred to as an advance drawing mode, a preprocessing mode, or an intelligent layer processing mode, or the like.
- the electronic device may enable the above-mentioned preset function or enter the above-mentioned preset mode in response to the user's operation of opening a preset option in the electronic device.
- the above preset option may be a function switch of a setting interface of the electronic device.
- the layers rendered by the Render thread of the electronic device are buffered in the SF Buffer, and the synthesizing thread sequentially performs layer synthesis on the layers buffered in the SF Buffer in response to the vertical synchronization signal 2.
- for example, if the SF Buffer of the electronic device can only cache 2 frames of layers, there may be a problem that the layers drawn and rendered in advance by the electronic device cannot be cached in the SF Buffer. In this case, the layers drawn and rendered in advance by the electronic device will overflow because the cache space of the SF Buffer is insufficient.
- FIG. 9 shows a schematic diagram of layer drawing, rendering, composition, and image frame display in the method of the embodiment of the present application; FIG. 10A and FIG. 10B show the enqueuing and dequeuing of layers in the SF Buffer during the process of the method shown in FIG. 9.
- the UI thread of the electronic device can execute “Draw_A” to draw layer A in response to the vertical synchronization signal 1 at time t1, and then the Render thread can execute “Render_A” to render layer A.
- the Render thread of the electronic device finishes executing "Render_A" at time tA shown in FIG. 9 or FIG. 10A.
- the rendered layer A is queued in the SF Buffer.
- in response to the vertical synchronization signal 2 at time t2, the composition thread of the electronic device can execute "image frame composition_A"; therefore, at time t2, as shown in FIG. 10A, layer A is dequeued from the SF Buffer, and "image frame composition_A" is executed by the composition thread.
- the Render thread finishes executing "render_B" at time tB; therefore, as shown in FIG. 10B, the rendered layer B is enqueued in the SF Buffer at time tB.
- the Render thread finishes executing "render_C" at time tC; therefore, as shown in FIG. 10B, the rendered layer C is enqueued in the SF Buffer at time tC.
- the Render thread finishes executing "render_D" at time tD; therefore, as shown in FIG. 10B, the rendered layer D will be enqueued in the SF Buffer at time tD.
- when the next vertical synchronization signal 2 (ie, the next vertical synchronization signal 2 after time t2) arrives, the layer B shown in FIG. 10B will be dequeued from the SF Buffer, and "image frame composition_B" will be executed by the composition thread. That is to say, when layer D is enqueued in the SF Buffer at time tD, layer B has not yet been dequeued by the composition thread executing "image frame composition_B".
- in this case, layer D being enqueued in the SF Buffer at time tD will cause layer B to be squeezed out of the SF Buffer at time tD; that is, layer B overflows from the SF Buffer.
- as a result, in the next frame (that is, the synchronization period from time t4 to time t5), the display screen directly executes "image frame display_C" to refresh and display image frame C, instead of refreshing and displaying the image frame B corresponding to "render_B"; a frame loss phenomenon occurs, which affects the continuity of the displayed image of the electronic device and affects the user's visual experience.
- to avoid this, the electronic device can also expand the buffer space of the SF Buffer. For example, the electronic device can set the buffer space of the SF Buffer to M+p frames.
- the size of the buffer space of the SF Buffer may be determined according to the number of frames dropped by the electronic device within a preset time.
- M is the size of the buffer space of the SF Buffer before setting
- p is the number of frames dropped by the electronic device within the preset time.
- the electronic device can count the number of dropped frames in the process of executing the first UI event by the electronic device within the preset time, and set the size of the buffer space of the SF Buffer (ie, M+p) according to the counted number of dropped frames p.
- the above preset time may be one week, one day or half a day before the electronic device receives the first UI event this time.
- M is the size of the buffer space of the SF Buffer before setting
- p is a preset positive integer.
- the specific value of p can be pre-configured in the electronic device; or, can be set by the user.
- p can be equal to any positive integer such as 1, 2, or 3.
- in response to the completion of the rendering of the second layer, if the SF Buffer is not enough to cache a new layer, the electronic device can expand the SF Buffer to increase its buffer space.
- an upper limit N of the buffer space of the SF Buffer can also be set.
- the electronic device can set the buffer space of the SF Buffer to N frames at most. That is, when M+p is greater than the preset upper limit value N, the electronic device can set the buffer space of the SF Buffer to N frames.
- the specific value of N can be pre-configured in the electronic device; or, can be set by the user. For example, N can be equal to any positive integer such as 5, 6, 8, 10, etc.
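- a small sketch (assumed names) of the sizing rule described above: expand the SF Buffer from M to M+p frames, capped at the preset upper limit N.

```java
// Minimal sketch (assumed names) of the SF Buffer sizing rule.
public final class SfBufferSizing {
    public static int expandedSize(int m, int p, int n) {
        // m: buffer size (frames) before expansion; p: counted dropped frames
        // or a preset positive integer; n: preset upper limit of the SF Buffer.
        return Math.min(m + p, n);
    }

    public static void main(String[] args) {
        System.out.println(expandedSize(3, 2, 8));  // 5 frames
        System.out.println(expandedSize(3, 10, 8)); // capped at 8 frames
    }
}
```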
- the electronic device may pre-configure the size of the buffer space of the SF Buffer. For example, in response to the above-mentioned first UI event, the electronic device may pre-configure the size of the buffer space (that is, M+p) of the SF Buffer according to the first UI event.
- M+p can be equal to any positive integer such as 5, 6, 8, 10, etc.
- FIG. 10E shows a schematic diagram, captured by a person skilled in the art using the general-purpose SysTrace tool, of the changes of the buffered frames in the SF Buffer when the electronic device executes the conventional technical solution.
- FIG. 10F shows a schematic diagram of changes of cached frames in the SF Buffer captured by a person skilled in the art using the SysTrace tool when the electronic device executes the solutions of the embodiments of the present application.
- each upward arrow shown in FIG. 10E and FIG. 10F is used to indicate that a buffered frame is added to the SF Buffer; each downward arrow shown in FIG. 10E and FIG. 10F is used to indicate that a buffered frame is removed from the SF Buffer.
- as shown in FIG. 10E, when the electronic device executes the conventional technical solution, at most one buffered frame can be added to the SF Buffer in each signal period. Moreover, the number of buffered frames in the SF Buffer does not exceed three.
- specifically, in each synchronization period shown in FIG. 10E, the SF Buffer adds one buffered frame and then removes one buffered frame, and the number of buffered frames in the SF Buffer does not exceed three.
- as shown in FIG. 10F, when the electronic device executes the method of the embodiment of the present application, multiple buffered frames can be added to the SF Buffer in each signal period. Moreover, the number of buffered frames in the SF Buffer may exceed three.
- for example, as shown in FIG. 10F, at first the SF Buffer includes at least two buffered frames; then, the SF Buffer removes one buffered frame and adds two buffered frames, so that the SF Buffer includes at least three buffered frames; subsequently, the SF Buffer again removes one buffered frame and adds two buffered frames, so that the SF Buffer includes at least four buffered frames.
- in order to prevent layer overflow in the SF Buffer from affecting the continuity of the image displayed by the electronic device, in this embodiment of the present application, the electronic device can determine, before executing the above-mentioned S302, whether the SF Buffer has enough buffer space to cache the layers drawn and rendered in advance. Specifically, before S302, the method in this embodiment of the present application may further include S1001-S1002.
- S1001: The electronic device determines the buffer space of the SF Buffer and the number of buffered frames in the SF Buffer.
- the cache space of the SF Buffer refers to the maximum number of layers that can be cached in the SF Buffer.
- the number of cached frames in the SF Buffer refers to the number of layers currently cached in the SF Buffer.
- S1002: The electronic device calculates the difference between the buffer space of the SF Buffer and the number of buffered frames in the SF Buffer to obtain the remaining buffer space of the SF Buffer.
- the buffer space of the SF Buffer is 3 frames, and the number of buffered frames in the SF Buffer is 2 frames; then, the remaining buffer space of the SF Buffer is 1 frame.
- after S1002, if the remaining buffer space of the SF Buffer is greater than the first preset threshold, the electronic device may execute S302. It can be understood that if the remaining buffer space of the SF Buffer is greater than the first preset threshold, it means that the remaining buffer space of the SF Buffer is sufficient to cache the layers drawn and rendered in advance. In this case, the electronic device may execute S302 to draw and render the layer in advance.
- after S1002, if the remaining buffer space of the SF Buffer is less than the second preset threshold, it means that the remaining buffer space of the SF Buffer is not enough to cache the layer drawn and rendered in advance. In this case, the electronic device will not execute S302 to draw and render the layer in advance; instead, according to the conventional technology, the electronic device draws the second layer in response to the vertical synchronization signal 1, renders the second layer, and caches the rendered second layer in the SF Buffer.
- that is to say, before each advance drawing, the electronic device can execute S1001-S1002. After S1002, if the remaining buffer space of the SF Buffer is greater than the first preset threshold, the electronic device may execute S302 to draw and render the layer in advance; if the remaining buffer space of the SF Buffer is less than the second preset threshold, the electronic device will not execute S302 to draw and render the layer in advance, but instead draws and renders the layer in response to the vertical synchronization signal 1.
- the electronic device can execute S301-S304.
- in conclusion, the electronic device executes the method of the embodiment of the present application to draw and render the layer in advance only when the remaining buffer space of the SF Buffer is greater than the first preset threshold, that is, when the remaining buffer space of the SF Buffer is sufficient to cache the layers drawn and rendered in advance. In this way, the problem of frame loss caused by drawing and rendering layers in advance when the buffer space of the SF Buffer is insufficient can be reduced, the possibility of frame loss when the electronic device displays an image can be reduced, the continuity of the displayed image on the display screen can be ensured, and the user's visual experience can be improved.
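- the S1001-S1002 check can be summarized by the following minimal sketch (names and threshold values are assumptions): the layer is drawn and rendered in advance only when the remaining buffer space of the SF Buffer exceeds the first preset threshold.

```java
// Minimal sketch (assumed names and thresholds) of the S1001-S1002 gate.
public final class AdvanceDrawGate {
    public static boolean mayDrawInAdvance(int bufferSpaceFrames,
                                           int bufferedFrames,
                                           int firstThresholdFrames) {
        int remaining = bufferSpaceFrames - bufferedFrames; // S1002
        return remaining > firstThresholdFrames;            // condition for executing S302
    }

    public static void main(String[] args) {
        // Buffer space of 3 frames, 2 frames already cached, threshold of 0 frames:
        System.out.println(mayDrawInAdvance(3, 2, 0)); // true: 1 frame remains
        System.out.println(mayDrawInAdvance(3, 3, 0)); // false: the buffer is full
    }
}
```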
- in the conventional technology, the native animation algorithm calculates the movement distance of the layer according to the time when the UI thread starts to draw the layer, and draws the layer according to the movement distance of the layer.
- when the above method is used to calculate the movement distance, the image displayed on the display screen of the electronic device is prone to jitter.
- for example, as shown in FIG. 11A, the electronic device executes "draw_a" to draw layer a in response to the vertical synchronization signal 1 at time t1.
- the electronic device can calculate the moving distance of the layer a according to the time when the electronic device starts to draw the layer a (ie, time t1), and draw the layer a according to the moving distance of the layer a.
- the movement distance of a layer refers to the movement distance of the image content in the layer compared to the image content in the previous frame layer.
- the electronic device starts to execute "draw_b" to draw layer b at time tb. The electronic device can calculate the moving distance of layer b according to the time when the electronic device starts to draw layer b (ie, time tb), and draw layer b according to the moving distance.
- the electronic device starts to execute "draw_c" to draw layer c at time tc. The electronic device can calculate the moving distance of layer c according to the time when the electronic device starts to draw layer c (ie, time tc), and draw layer c according to the moving distance.
- the electronic device starts to execute "draw_d" to draw layer d at time td. The electronic device can calculate the moving distance of layer d according to the time when the electronic device starts to draw layer d (ie, time td), and draw layer d according to the moving distance.
- if the time taken to draw one frame of layers is too long (as shown in FIG. 11A, the time taken to draw layer c is too long), not only will the problem of frame loss occur, but the time difference between the time when the electronic device starts to draw the next frame of layers (such as layer d) and the time when the electronic device starts to draw layer c will also be too large.
- the signal period of the vertical synchronization signal 1 is 16.67 ms.
- as shown in FIG. 11A, the time difference between time tc and time td is too large; for example, the time difference is greater than 18.17 ms.
- the difference between the time when the layer d starts to be drawn and the time when the electronic device starts to draw the layer c and the synchronization period is too large.
- The synchronization period (i.e., the signal period of the vertical synchronization signal 1) is the reciprocal of the frame rate of the electronic device.
- In this case, the electronic device refreshes and displays, at a fixed interval (i.e., one synchronization period), multiple frames of images whose movement distances differ from one another. For example, assume the synchronization period is 11.1 ms.
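- As a quick arithmetic check of the reciprocal relationship above (the 90 Hz figure is only an illustrative frame rate consistent with an 11.1 ms period, not a value stated in this embodiment):

```latex
T_{\mathrm{sync}} = \frac{1}{f},\qquad
f = 90\,\mathrm{Hz} \;\Rightarrow\; T_{\mathrm{sync}} = \tfrac{1}{90}\,\mathrm{s} \approx 11.1\,\mathrm{ms},\qquad
f = 60\,\mathrm{Hz} \;\Rightarrow\; T_{\mathrm{sync}} \approx 16.67\,\mathrm{ms}.
```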
- If the native animation algorithm calculates the movement distance according to the time when each layer starts to be drawn, as shown in FIG. 11A, the display effect of the electronic device is: in the frame of image corresponding to "draw_a" the train travels at a constant speed; in the frame corresponding to "draw_b" the train travels at a constant speed; in the frame corresponding to "draw_c" the train suddenly accelerates; and in the frame corresponding to "draw_d" the train suddenly decelerates. That is, the display of the electronic device shakes.
- the electronic device may selectively calculate the movement distance of the layer based on the synchronization period of the electronic device or the time when the layer starts to be drawn.
- the method for drawing the second layer by the electronic device in S302 may include S1101.
- the electronic device calculates the movement distance of the second layer according to the signal period of the vertical synchronization signal 1, and draws the second layer according to the movement distance of the second layer.
- the movement distance of the second layer is the movement distance of the image content in the second layer compared to the image content in the first layer.
- the above S1101 may include S1101a-S1101b.
- the electronic device calculates the processing time of the second layer according to the signal period of the vertical synchronization signal 1.
- the electronic device calculates the movement distance of the second layer according to the processing time of the second layer, and draws the second layer according to the movement distance of the second layer.
- When the second layer is the i-th layer drawn by the electronic device in response to the first UI event, the processing time of the second layer is p_{i-1} + T_{i-1}, where i ≥ 2 and i is a positive integer; p_{i-1} is the processing time of the (i-1)-th layer, and T_{i-1} is the signal period of the vertical synchronization signal 1 used to trigger the electronic device to draw the (i-1)-th layer.
- The layer a drawn by the electronic device executing "draw_a" shown in FIG. 11A is the first layer drawn by the electronic device in response to the first UI event; the layer b drawn by "draw_b" is the second layer drawn in response to the first UI event; the layer c drawn by executing "draw_c" shown in FIG. 11A is the third layer drawn in response to the first UI event; and the layer d drawn by executing "draw_d" shown in FIG. 11A is the fourth layer drawn by the electronic device in response to the first UI event.
- Specifically, p_1 is the time when the electronic device starts to draw the above layer a (time t_1 shown in FIG. 11A), so the electronic device can calculate the movement distance of layer a according to time t_1 and draw layer a according to that distance. Then, p_2 = p_1 + T_1 is time t_2 shown in FIG. 11A, so the electronic device can calculate the movement distance of layer b according to time t_2 and draw layer b according to that distance.
- Similarly, p_2 + T_2 is time t_3 shown in FIG. 11A, so the processing time p_3 of layer c is time t_3; the electronic device can calculate the movement distance of layer c according to time t_3 and draw layer c according to that distance. And p_3 + T_3 is time t_4 shown in FIG. 11A, so the processing time p_4 of layer d is time t_4; the electronic device can calculate the movement distance of layer d according to time t_4 and draw layer d according to that distance.
- In other words, the electronic device calculates the movement distance of each layer according to the processing time of the layer, and the time difference between the processing time of one frame layer and the processing time of the previous frame layer is equal to the signal period of the vertical synchronization signal 1 (i.e., the above synchronization period).
- For example, the time difference between the processing time t_2 of layer b and the processing time t_1 of layer a is T_1, which equals the synchronization period; the time difference between the processing time t_3 of layer c and the processing time t_2 of layer b is T_2, which equals the synchronization period.
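- To make S1101a-S1101b concrete, the following minimal sketch keeps a per-layer processing-time clock and derives a movement distance from it. The FIFO queue mirrors the time cache queue mentioned below, and the linear pxPerMs mapping plus all names are assumptions for illustration only; the patent does not specify the animation curve:

```java
import java.util.ArrayDeque;
import java.util.Deque;

final class PeriodicAnimationClock {
    private final Deque<Long> timeCacheQueue = new ArrayDeque<>(); // first-in, first-out
    private long lastProcessingTimeMs = -1;

    /** S1101a: p_1 is the first draw-start time; afterwards p_i = p_{i-1} + T_{i-1}. */
    long processingTimeMs(long drawStartMs, long vsyncPeriodMs) {
        lastProcessingTimeMs = (lastProcessingTimeMs < 0)
                ? drawStartMs
                : lastProcessingTimeMs + vsyncPeriodMs;
        timeCacheQueue.addLast(lastProcessingTimeMs); // cache each processing time
        return lastProcessingTimeMs;
    }

    /** S1101b: movement distance from processing time (assumed linear motion). */
    static float movementDistance(long processingTimeMs, long animStartMs, float pxPerMs) {
        return (processingTimeMs - animStartMs) * pxPerMs;
    }
}
```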
- In another implementation, the processing time of the second layer is Max(p_{i-1} + T_{i-1}, p_i'), where i ≥ 2 and i is a positive integer. Here, p_{i-1} is the processing time of the (i-1)-th layer; T_{i-1} is the signal period of the vertical synchronization signal 1 used to trigger the electronic device to draw the (i-1)-th layer; and p_i' is the time when the electronic device starts to draw the i-th layer. p_1 is the processing time of the first layer, and it is equal to the time when the electronic device starts to draw the first layer.
- For example, the processing time of layer a is the time when the electronic device starts to draw layer a (i.e., time t_1 shown in FIG. 11A). That is to say, the time p_1 when the electronic device starts to draw the first layer is time t_1 shown in FIG. 11A. In this way, the electronic device can calculate the movement distance of layer a according to time t_1, and draw layer a according to that distance.
- The processing time p_2 of layer b is Max(p_1 + T_1, p_2'), where p_2' is the time t_b when the electronic device starts to draw the second layer. Since p_1 is time t_1 shown in FIG. 11A, p_1 + T_1 is time t_2 shown in FIG. 11A. Because t_2 is greater than t_b (i.e., p_2'), the processing time p_2 of layer b is time t_2. In this way, the electronic device can calculate the movement distance of layer b according to time t_2, and draw layer b according to that distance.
- The processing time p_3 of layer c is Max(p_2 + T_2, p_3'), where p_3' is the time t_c when the electronic device starts to draw the third layer. Since p_2 is time t_2 shown in FIG. 11A, p_2 + T_2 is time t_3 shown in FIG. 11A. Because t_3 is greater than t_c (i.e., p_3'), the processing time p_3 of layer c is time t_3. In this way, the electronic device can calculate the movement distance of layer c according to time t_3, and draw layer c according to that distance.
- The processing time p_4 of layer d is Max(p_3 + T_3, p_4'), where p_4' is the time t_d when the electronic device starts to draw the fourth layer. Since p_3 is time t_3 shown in FIG. 11A, p_3 + T_3 is time t_4 shown in FIG. 11A. Because t_d (i.e., p_4') is greater than t_4, the processing time p_4 of layer d is time t_d (i.e., p_4'). In this way, the electronic device can calculate the movement distance of layer d according to time t_d, and draw layer d according to that distance.
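- The Max-based variant can be sketched as a one-line change to the same kind of clock (illustrative only, not the patent's code):

```java
final class MaxAnimationClock {
    private long lastProcessingTimeMs = -1;

    /** Processing time of the i-th layer: Max(p_{i-1} + T_{i-1}, p_i'). */
    long processingTimeMs(long drawStartMs /* p_i' */, long vsyncPeriodMs /* T_{i-1} */) {
        lastProcessingTimeMs = (lastProcessingTimeMs < 0)
                ? drawStartMs // p_1 equals the start time of the first layer
                : Math.max(lastProcessingTimeMs + vsyncPeriodMs, drawStartMs);
        return lastProcessingTimeMs;
    }
}
```

- Taking the maximum keeps the clock periodic when layers are drawn ahead of time (p_i' earlier than p_{i-1} + T_{i-1}), while falling back to the actual start time when one frame, such as layer c in FIG. 11A, takes abnormally long.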
- the electronic device may calculate the processing time of the second layer in the above manner, and save the processing time of the second layer in the time buffer queue of the electronic device.
- the above-mentioned time cache queue can cache the processing time of each layer according to the principle of first-in, first-out.
- the electronic device can selectively calculate the moving distance of the layer according to the time when the layer is started to be drawn or the processing time of the layer. In this way, it can be ensured that the time difference between the processing time of most layers and the processing time of the previous frame is equal to the signal period of the vertical synchronization signal (ie, the above synchronization period).
- For example, the time difference between the processing time t_2 of layer b and the processing time t_1 of layer a is T_1, which equals the synchronization period; the time difference between the processing time t_3 of layer c and the processing time t_2 of layer b is T_2, which equals the synchronization period. In this way, the possibility that the display of the electronic device shakes can be reduced.
- The method of this implementation can reduce the possibility of jitter when the electronic device displays an image; however, frame loss may still inevitably occur when the electronic device takes too long to draw certain layers. For example, as shown in FIG. 11A, the electronic device takes a long time to execute "draw_c" to draw layer c, which causes the electronic device to drop a frame during time t_5 - t_6. In this case, the time difference between the processing time of the next frame layer (such as layer d) and the processing time of this frame layer (such as layer c) differs from the synchronization period.
- For example, the time difference between the processing time t_d of layer d and the processing time t_3 of layer c is the duration from time t_3 to time t_d, which is greater than the synchronization period T_3.
- In general, however, drawing a layer does not take such a long time, so the possibility of this situation occurring is very low.
- FIG. 11B includes a schematic diagram of the change in the movement distance of each layer when the electronic device uses the native animation algorithm to calculate the movement distance of each layer shown in FIG. 11A.
- FIG. 11B also includes a schematic diagram of the change in the movement distance of each layer when the electronic device performs S1101 to calculate the movement distance of each of the layers shown in FIG. 11A.
- the abscissa in (b) of FIG. 11B and (c) of FIG. 11B is the frame number of each layer, and the ordinate is the movement distance of each layer.
- the frame number of the layer is used to indicate that the layer is the nth layer drawn by the electronic device, and n is a positive integer.
- In (b) of FIG. 11B, point 1102 represents the movement distance of layer c drawn by the electronic device executing "draw_c" shown in FIG. 11A, and point 1103 represents the movement distance of layer d drawn by the electronic device executing "draw_d" shown in FIG. 11A. When the native animation algorithm is used to calculate the movement distances of layer c and layer d, the movement distance of the previous frame, layer c (the movement distance represented by point 1102), suddenly becomes larger, while the movement distance of the next frame, layer d (the movement distance represented by point 1103), suddenly becomes smaller; that is, the displayed picture shakes.
- The foregoing method is described below with reference to FIG. 12, in combination with the process in which the electronic device draws layers in advance and expands the SF Buffer.
- As shown in FIG. 12, the electronic device can start the vertical synchronization signal (i.e., VSYNC). In response to the VSYNC at time t_1, the UI thread of the electronic device can draw layer 1, and the Render thread renders the drawn layer 1. At time t_x1 after time t_1, the UI thread has finished drawing layer 1; the UI thread can then draw layer 2, and the drawn layer 2 is rendered by the Render thread.
- After the Render thread finishes rendering layer 1, layer 1 can be cached in the SF Buffer. Before time t_s1, the number of layers cached in the SF Buffer is 0; therefore, after the Render thread caches layer 1 in the SF Buffer at time t_s1, the number of layers cached in the SF Buffer becomes 1.
- the UI thread has finished drawing layer 2; the UI thread can draw layer 3, and the drawn layer 3 is rendered by the Render thread.
- The synthesis thread can read the above layer 1 from the SF Buffer, perform layer composition on layer 1, and obtain image frame 1; that is, layer 1 is dequeued from the SF Buffer, and the number of layers cached in the SF Buffer becomes 0.
- the layer 2 can be cached in the SF Buffer, and the number of layers buffered in the SF Buffer becomes 1.
- the UI thread has finished drawing layer 3; the UI thread can draw layer 4, and the drawn layer 4 is rendered by the Render thread.
- the layer 3 can be cached in the SF Buffer, and the number of layers buffered in the SF Buffer becomes 2.
- When the VSYNC at time t_3 arrives, the LCD of the electronic device refreshes and displays image frame 1, and the synthesis thread can read layer 2 from the SF Buffer, perform layer composition on layer 2, and obtain image frame 2; that is, layer 2 is dequeued from the SF Buffer. Accordingly, at time t_3 shown in FIG. 12, the number of layers buffered in the SF Buffer could become 1; however, at time t_3, after the Render thread finishes rendering layer 4, layer 4 is cached in the SF Buffer. Therefore, at time t_3, the number of layers cached in the SF Buffer is still 2.
- VSYNC arrives, the UI thread draws layer 5, and the layer 5 is rendered by the Render thread.
- When the VSYNC at time t_4 arrives, the LCD of the electronic device refreshes and displays image frame 2; the synthesis thread can read layer 3 from the SF Buffer, perform layer composition on layer 3, and obtain image frame 3; that is, layer 3 is dequeued from the SF Buffer. Therefore, at time t_4 shown in FIG. 12, the number of layers buffered in the SF Buffer can become 2. And, in response to the VSYNC at time t_4, the UI thread can draw layer 6, and the drawn layer 6 is rendered by the Render thread.
- It can be understood that if layer 6 rendered by the Render thread were cached in the SF Buffer, the number of layers in the SF Buffer would reach the upper limit. Therefore, after the UI thread finishes drawing layer 6 after time t_4 and before the VSYNC at time t_5 arrives, the UI thread does not draw a layer in advance. At time t_s5 shown in FIG. 12, after the Render thread finishes rendering layer 6, layer 6 can be cached in the SF Buffer, and the number of layers buffered in the SF Buffer becomes 3.
- When the VSYNC at time t_5 arrives, the LCD of the electronic device refreshes and displays image frame 3; the synthesis thread can read layer 4 from the SF Buffer, perform layer composition on layer 4, and obtain image frame 4; that is, layer 4 is dequeued from the SF Buffer. Therefore, at time t_5 shown in FIG. 12, the number of layers buffered in the SF Buffer can become 2. And, in response to the VSYNC at time t_5, the UI thread can draw layer 7, and the drawn layer 7 is rendered by the Render thread.
- It can be understood that if layer 7 rendered by the Render thread were cached in the SF Buffer, the number of layers in the SF Buffer would reach the upper limit. Therefore, after the UI thread finishes drawing layer 7 after time t_5 and before the VSYNC at time t_6 arrives, the UI thread does not draw a layer in advance. At time t_s6 shown in FIG. 12, after the Render thread finishes rendering layer 7, layer 7 can be cached in the SF Buffer, and the number of layers buffered in the SF Buffer becomes 3.
- When the VSYNC at time t_6 arrives, the LCD of the electronic device refreshes and displays image frame 4; the synthesis thread can read layer 5 from the SF Buffer, perform layer composition on layer 5, and obtain image frame 5; that is, layer 5 is dequeued from the SF Buffer. Therefore, at time t_6 shown in FIG. 12, the number of layers buffered in the SF Buffer can become 2. And, in response to the VSYNC at time t_6, the UI thread can draw layer 8, and the drawn layer 8 is rendered by the Render thread. It can be understood that if layer 8 rendered by the Render thread were cached in the SF Buffer, the number of layers in the SF Buffer would reach the upper limit.
- the UI thread will not draw the layer in advance.
- the layer 8 can be cached in the SF Buffer, and the number of layers buffered in the SF Buffer becomes 3.
- In some embodiments, that the electronic device finishes drawing the first layer before the first moment and draws the second layer before the first moment may include: if the electronic device finishes drawing the first layer before the first moment, the electronic device generates an XSYNC (which may also be referred to as an XSYNC signal) before the first moment, and the electronic device draws the second layer in response to the XSYNC.
- For example, as shown in FIG. 12, the electronic device draws layer 2 in response to the XSYNC at time t_x1, draws layer 3 in response to the XSYNC at time t_x2, and draws layer 4 in response to the XSYNC at time t_x3.
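- A minimal sketch of the XSYNC idea: when the UI thread finishes a frame before the next hardware VSYNC, it posts a software-generated signal to itself to start the next draw early. The Handler-based structure and all names are assumptions, not the patent's implementation:

```java
import android.os.Handler;
import android.os.Looper;

final class XsyncScheduler {
    private final Handler uiHandler = new Handler(Looper.getMainLooper());

    /** Called when the UI thread finishes drawing a layer before the next VSYNC. */
    void onLayerDrawn(boolean beforeFirstMoment, boolean bufferHasRoom, Runnable drawNext) {
        if (beforeFirstMoment && bufferHasRoom) {
            uiHandler.post(drawNext); // the "XSYNC": start the next draw immediately
        }
        // Otherwise the next layer is drawn in response to the real VSYNC.
    }
}
```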
- the electronic device may receive an interruption event for triggering the electronic device to stop displaying the image content corresponding to the first UI event.
- the SF Buffer may also cache the layers drawn and rendered in advance by the electronic device.
- The following describes how the electronic device processes the layers of the first UI event buffered in the SF Buffer after the electronic device receives the above interrupt event.
- the layer cached in the SF Buffer may not be deleted.
- the method in this embodiment of the present application may further include S1301-S1302.
- The electronic device receives a second UI event, where the second UI event is an interrupt (Down) event used to trigger the electronic device to stop displaying the image content corresponding to the first UI event.
- the above-mentioned second UI event may be a user operation (eg, a touch operation) that can trigger the electronic device to display image content different from the above-mentioned first UI event. That is, the image content displayed by the electronic device triggered by the second UI event is different from the image content displayed by the electronic device triggered by the first UI event.
- The above second UI event may be a UI event that triggers the electronic device to display another "deterministic animation", or a UI event that triggers the electronic device to display any image content other than the above "deterministic animation".
- In response to the second UI event, the electronic device stops drawing the layers of the first UI event; and, in response to the vertical synchronization signal 1, the electronic device draws a third layer of the second UI event, renders the third layer, and caches the rendered third layer in the SF cache queue.
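- One way to picture this interrupt handling (hedged; all names are illustrative, not the patent's code):

```java
final class UiEventInterrupter {
    private boolean firstUiEventActive = true;

    /** A Down event (the second UI event) interrupts the animation. */
    void onDownEvent() {
        firstUiEventActive = false; // stop drawing layers of the first UI event
    }

    /** On vertical synchronization signal 1, draw the new event's layer instead. */
    void onVsync1(Runnable drawThirdLayer) {
        if (!firstUiEventActive) {
            drawThirdLayer.run(); // draw, render, then enqueue into the SF Buffer
        }
    }
}
```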
- the electronic device receives a Down event (ie, the second UI event) at time t Down .
- In response to the Down event, the UI thread of the electronic device stops drawing the layers of the first UI event (e.g., layer 9 following layer 8 shown in FIG. 12); and, in response to the vertical synchronization signal 1 (such as the VSYNC at time t_7), the UI thread draws layer 1', and the Render thread renders the drawn layer 1'.
- When the VSYNC at time t_7 arrives, the LCD of the electronic device refreshes and displays image frame 5, and the synthesis thread can read layer 6 from the SF Buffer, perform layer composition on layer 6, and obtain image frame 6; that is, layer 6 is dequeued from the SF Buffer. Therefore, at time t_7 shown in FIG. 12, the number of layers buffered in the SF Buffer can become 2.
- the layer 1 ′ can be cached in the SF Buffer, and the number of layers buffered in the SF Buffer becomes 3.
- When the VSYNC at time t_8 arrives, the LCD of the electronic device refreshes and displays image frame 6; the synthesis thread can read layer 7 from the SF Buffer, perform layer composition on layer 7, and obtain image frame 7; that is, layer 7 is dequeued from the SF Buffer. Therefore, at time t_8 shown in FIG. 12, the number of layers buffered in the SF Buffer can become 2. And, in response to the VSYNC at time t_8, the UI thread can draw layer 2', and the drawn layer 2' is rendered by the Render thread. At time t_s9 shown in FIG. 12, after the Render thread finishes rendering layer 2', layer 2' can be cached in the SF Buffer, and the number of layers buffered in the SF Buffer becomes 3.
- When the VSYNC at time t_9 arrives, the LCD of the electronic device refreshes and displays image frame 7; the synthesis thread can read layer 8 from the SF Buffer, perform layer composition on layer 8, and obtain image frame 8; that is, layer 8 is dequeued from the SF Buffer. Therefore, at time t_9 shown in FIG. 12, the number of layers buffered in the SF Buffer can become 2. And, in response to the VSYNC at time t_9, the UI thread can draw layer 3', and the drawn layer 3' is rendered by the Render thread. At time t_s10 shown in FIG. 12, after the Render thread finishes rendering layer 3', layer 3' can be cached in the SF Buffer, and the number of layers buffered in the SF Buffer becomes 3.
- When the next VSYNC arrives, the synthesis thread can read layer 1' from the SF Buffer, perform layer composition on layer 1', and obtain image frame 1'; that is, layer 1' is dequeued from the SF Buffer.
- layer 1 ′, layer 2 ′ and layer 3 ′ are all third layers.
- When the electronic device receives the Down event at time t_Down, there are 2 frame layers (layer 6 and layer 7) cached in the SF Buffer, and the Render thread is rendering layer 8.
- When the UI thread starts to draw the layers of the Down event at time t_7, there are 3 frame layers (layer 6, layer 7 and layer 8) cached in the SF Buffer.
- It can be seen that, in this implementation, the electronic device does not delete the layers of the first UI event cached in the SF Buffer (such as layer 6, layer 7 and layer 8); instead, it continues to composite the layers in the SF Buffer in response to VSYNC, and refreshes the display with the composited image frames.
- However, the above solution of not deleting the layers of the first UI event cached in the SF Buffer may cause the electronic device to delay displaying the image content of the second UI event, because many layers of the first UI event are cached in the SF Buffer.
- In this case, the touch response delay of the electronic device is relatively large, and the touch-following performance of the electronic device is poor.
- the delay time from "the user's finger inputs a touch operation on the touch screen” to "the touch screen displays an image corresponding to the touch operation and is perceived by the human eye” may be referred to as a touch response delay.
- The touch-following performance of an electronic device can be reflected in the length of the touch response delay.
- The better the touch-following performance of the electronic device, the better the user experience of controlling the electronic device through touch operations, and the smoother the interaction feels.
- the electronic device can delete some or all of the layers cached in the SF Buffer.
- the electronic device can delete some of the layers cached in the SF Buffer. Specifically, as shown in FIG. 14 , after the above S1302, the electronic device may not execute S303-S304, but execute S1303.
- After receiving the second UI event, the electronic device determines, in response to the vertical synchronization signal 2, whether the SF buffer queue includes a layer of the first UI event.
- If the SF cache queue includes a layer of the first UI event, the electronic device can execute S1304 and S303-S304; if the SF cache queue does not include a layer of the first UI event, the electronic device can execute S303-S304.
- the electronic device deletes the layer of the first UI event cached in the SF cache queue.
- For example, assume that P frame layers buffered in the SF buffer queue (i.e., the SF Buffer) are layers of the first UI event. The electronic device may delete Q frame layers from the P frame layers cached in the SF cache queue, perform layer composition on the frame layer at the head of the SF cache queue after the Q frame layers are deleted to obtain an image frame, and buffer the composited image frame. Here, the P frame layers are layers of the first UI event, Q ≤ P, and both P and Q are positive integers.
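- A sketch of this deletion step under stated assumptions (the queue abstraction and names are illustrative; the real SF Buffer is managed by SurfaceFlinger, not by application code):

```java
import java.util.Deque;

final class StaleLayerDropper {
    /**
     * Deletes up to Q layers of the first UI event from the head of the queue,
     * then returns the new head layer for composition (or null if empty).
     */
    static Layer dropAndPeek(Deque<Layer> sfQueue, int q) {
        while (q-- > 0 && !sfQueue.isEmpty() && sfQueue.peekFirst().isFirstUiEvent()) {
            sfQueue.pollFirst(); // dequeue and delete a stale layer
        }
        return sfQueue.peekFirst(); // head layer to composite into the next image frame
    }

    interface Layer { boolean isFirstUiEvent(); }
}
```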
- the electronic device receives a Down event (ie, the second UI event) at time t Down .
- the process of layer drawing, layer rendering, layer composition and image frame display is the same as the process shown in FIG. 12 .
- After time t_Down, the process in which the electronic device draws and renders layer 1', layer 2' and layer 3' is the same as the process shown in FIG. 12, and details are not described again in this embodiment of the present application.
- the electronic device can determine whether the SF buffer queue includes the layer of the first UI event .
- In one case, the electronic device may execute S1304 to delete one frame layer of the first UI event cached in the SF Buffer at a time, that is, Q = 1.
- For example, in response to the vertical synchronization signal 2, the electronic device (such as the synthesis thread of the electronic device) can delete, from the 3 frame layers cached in the SF Buffer, the 1 frame layer at the head of the SF cache queue (that is, layer 6), and then perform layer composition on the new head frame layer of the SF cache queue (that is, layer 7) to obtain image frame 7, and buffer the composited image frame 7.
- In other words, layer 6 is deleted from the SF Buffer, and layer 7 is dequeued from the SF Buffer for compositing image frame 7, so that only layer 8 is left in the SF Buffer; the number of layers buffered in the SF Buffer becomes 1.
- the layer 1 ′ can be cached in the SF Buffer, and the number of layers buffered in the SF Buffer becomes 2.
- the electronic device executes S1303, and can determine that the layer 8 of the first UI event is buffered in the SF Buffer.
- Then, the electronic device (such as the composition thread of the electronic device) can execute S1304 to delete layer 8 and perform layer composition on layer 1'. Layer 8 is dequeued from the SF Buffer and deleted, and layer 1' is dequeued from the SF Buffer for compositing image frame 1'; the number of layers buffered in the SF Buffer becomes 0.
- the layer 2 ′ can be cached in the SF Buffer, and the number of layers buffered in the SF Buffer becomes 1.
- the electronic device executes S1303, and can determine that only layer 2' of the second UI event is cached in the SF Buffer, and the layer of the first UI event is not cached.
- Therefore, the electronic device (such as the composition thread of the electronic device) can execute S1305 to perform layer composition on layer 2'; layer 2' is dequeued from the SF Buffer for compositing image frame 2', and the number of layers buffered in the SF Buffer becomes 0.
- the layer 3 ′ can be cached in the SF Buffer, and the number of layers buffered in the SF Buffer becomes 1.
- the electronic device executes S1303, and can determine that only the layer 3' of the second UI event is cached in the SF Buffer, and the layer of the first UI event is not cached.
- Therefore, the electronic device (such as the composition thread of the electronic device) can execute S1305 to perform layer composition on layer 3'; at time t_10, layer 3' is dequeued from the SF Buffer for compositing image frame 3', and the number of layers buffered in the SF Buffer becomes 0.
- In another case, the electronic device executes S1304 to delete multiple frame layers of the first UI event buffered in the SF Buffer at a time, that is, Q ≥ 2.
- For example, Q = 2. In response to the vertical synchronization signal 2, the electronic device can delete, from the 3 frame layers cached in the SF Buffer, the 2 frame layers at the head of the SF cache queue (i.e., layer 6 and layer 7), and, in accordance with S1304, perform layer composition on the new head frame layer (i.e., layer 8).
- the layer 1 ′ can be cached in the SF Buffer, and the number of layers buffered in the SF Buffer becomes 1.
- the electronic device executes S1303, and can determine that only layer 1' of the second UI event is cached in the SF Buffer, and the layer of the first UI event is not cached.
- Therefore, the electronic device can execute S1305 to perform layer composition on layer 1'; layer 1' is dequeued from the SF Buffer for compositing image frame 1', and the number of layers buffered in the SF Buffer becomes 0.
- the layer 2 ′ can be cached in the SF Buffer, and the number of layers buffered in the SF Buffer becomes 1.
- the electronic device executes S1303, and can determine that only layer 2' of the second UI event is cached in the SF Buffer, and the layer of the first UI event is not cached.
- The electronic device may execute S1305 to perform layer composition on layer 2'; at time t_9, layer 2' is dequeued from the SF Buffer for compositing image frame 2', and the number of layers buffered in the SF Buffer becomes 0.
- the layer 3 ′ can be cached in the SF Buffer, and the number of layers buffered in the SF Buffer becomes 1.
- the electronic device executes S1303, and can determine that only the layer 3' of the second UI event is cached in the SF Buffer, and the layer of the first UI event is not cached.
- The electronic device may execute S1305 to perform layer composition on layer 3'; at time t_10, layer 3' is dequeued from the SF Buffer for compositing image frame 3', and the number of layers buffered in the SF Buffer becomes 0.
- the electronic device can process multiple frames of layers of the first UI event at one time in response to a vertical synchronization signal 2 (such as the above-mentioned VSYNC).
- In another case, the electronic device may add a first mark bit to each layer of the above first UI event (that is, the UI event corresponding to the "deterministic animation"). In this way, the layers with the first mark bit cached in the SF Buffer can be deleted.
- the method in this embodiment of the present application may further include S1901-S1902 and S1301-S1302. Wherein, after S1902, the electronic device may execute S303-S304.
- The electronic device sets a first mark bit for each frame layer of the first UI event, where the first mark bit is used to indicate that the corresponding layer is a layer of the first UI event.
- For example, after finishing drawing a frame layer of the first UI event, the UI thread of the electronic device may add a first mark bit to that frame layer. For example, when the electronic device executes S301, after the UI thread finishes drawing the first layer, the UI thread may add a first mark bit to the first layer; when the electronic device executes S302, after the UI thread finishes drawing the second layer, the UI thread may add a first mark bit to the second layer.
- After receiving the second UI event, the electronic device, in response to the vertical synchronization signal 2, deletes the layers in the SF buffer queue for which the first mark bit is set.
- Specifically, S1902 may include: in response to the second UI event, the electronic device triggers a preset query event; in response to the preset query event, the electronic device sets a second mark bit, and removes the second mark bit when the SF cache queue no longer includes a layer with the first mark bit. The second mark bit is used to trigger the electronic device to delete, in response to the vertical synchronization signal 2, the layers in the SF buffer queue for which the first mark bit is set.
- That is, while the second mark bit is set, the electronic device deletes the layers with the first mark bit in the SF cache queue in response to the vertical synchronization signal 2; after the electronic device removes the second mark bit, in response to the vertical synchronization signal 2 the electronic device no longer performs the operation of "deleting the layers with the first mark bit in the SF buffer queue", and instead continues to perform layer composition on the layers cached in the SF Buffer.
- In response to the second UI event, the UI thread of the electronic device can trigger a preset query event to the synthesis thread. In response to the preset query event, the synthesis thread can delete the layers with the first mark bit in the SF cache queue when the vertical synchronization signal 2 arrives, and remove the second mark bit when the SF cache queue no longer includes a layer with the first mark bit.
- the above-mentioned second marker bit may also be referred to as a Delete marker bit.
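- A compact sketch of the mark-bit mechanism described above (the field names and queue abstraction are assumptions for illustration):

```java
import java.util.Deque;
import java.util.Iterator;

final class MarkBitCleaner {
    private boolean deleteMark; // the second ("Delete") mark bit

    void onDownEvent() { deleteMark = true; } // preset query event sets the Delete mark

    /** Runs in the composition path on each vertical synchronization signal 2. */
    void onVsync2(Deque<MarkedLayer> sfQueue) {
        if (!deleteMark) return;
        for (Iterator<MarkedLayer> it = sfQueue.iterator(); it.hasNext(); ) {
            if (it.next().firstMarkBit) it.remove(); // drop first-UI-event layers
        }
        // Clear the Delete mark once no marked layer remains in the queue.
        boolean anyMarked = sfQueue.stream().anyMatch(l -> l.firstMarkBit);
        if (!anyMarked) deleteMark = false;
    }

    static final class MarkedLayer { boolean firstMarkBit; }
}
```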
- the electronic device receives a Down event (ie, the second UI event) at time t Down .
- Before time t_Down, the process of layer drawing, layer rendering, layer composition and image frame display is the same as that shown in FIG. 12.
- After receiving the second UI event, in response to the vertical synchronization signal 2, the electronic device can delete the layers in the SF buffer queue for which the first mark bit is set.
- For example, 3 frame layers (layer 6, layer 7 and layer 8) are cached in the SF Buffer. Layer 6, layer 7 and layer 8 are layers of the first UI event, and all of them are provided with the first mark bit. Therefore, the electronic device can delete layer 6, layer 7 and layer 8.
- After layer 6, layer 7 and layer 8 are deleted, the number of layers cached in the SF Buffer becomes 0; therefore, the composition thread does not perform layer composition in response to the VSYNC (such as the vertical synchronization signal 2) at time t_7 shown in FIG. 19.
- The LCD of the electronic device can refresh and display image frame 5.
- Because the electronic device (such as the composition thread of the electronic device) performs no layer composition and caches no new image frame in the SF Buffer, in response to the VSYNC (including the vertical synchronization signal 3) at time t_8 shown in FIG. 19, the LCD of the electronic device can only continue to display image frame 5.
- It should be noted that the electronic device may need to process multiple VSYNC signals (e.g., the vertical synchronization signal 2) before completely deleting the cached layers in the SF Buffer for which the first mark bit is set.
- the electronic device receives a Down event (ie, the second UI event) at time t Down .
- Before time t_Down, the processes of layer drawing, layer rendering, layer composition and image frame display are the same as those shown in FIG. 12.
- When the VSYNC (such as the vertical synchronization signal 2) arrives at time t_7 shown in FIG. 20, the Render thread has not finished rendering layer 8; therefore, in response to the VSYNC at time t_7 shown in FIG. 20 (such as the vertical synchronization signal 2), the composition thread can only delete layer 6 and layer 7 cached in the SF Buffer.
- When the VSYNC (e.g., the vertical synchronization signal 2) arrives at time t_8 shown in FIG. 20, the Render thread has finished rendering layer 8 and buffers layer 8 into the SF Buffer; therefore, in response to the VSYNC (e.g., the vertical synchronization signal 2) at time t_8 shown in FIG. 20, the synthesis thread can delete layer 8 buffered in the SF Buffer. Moreover, when the VSYNC arrives at time t_8 shown in FIG. 20, the Render thread has also finished rendering layer 1' and buffers layer 1' into the SF Buffer; therefore, in response to the VSYNC at time t_8 shown in FIG. 20, the composition thread can perform layer composition on layer 1' to obtain image frame 1'.
- That is, the electronic device processes 2 VSYNC signals (the VSYNC at time t_7 and the VSYNC at time t_8) before completely deleting the cached layers in the SF Buffer for which the first mark bit is set.
- In the above embodiment, the electronic device deletes the layers of the first UI event buffered in the SF Buffer in response to the vertical synchronization signal 2. In this way, after the next vertical synchronization signal 2 arrives, the electronic device can directly composite the layers of the interrupt event. Thus, the touch response delay of the electronic device in response to the second UI event can be shortened, and the touch-following performance of the electronic device can be improved.
- It should be noted that, in the above embodiment, the electronic device calculates the movement distance of each layer according to the processing time of the layer, and may cache the processing time of each layer in the time cache queue. After the electronic device executes the above process and deletes the layers of the first UI event cached in the SF Buffer, if the layer-drawing logic of the electronic device does not fall back to the frame layer (e.g., layer 5) preceding the first deleted frame layer (such as layer 6), a large jump may occur in the image content displayed by the electronic device, affecting the user experience.
- For example, the electronic device deletes layer 6, layer 7 and layer 8 cached in the SF Buffer. After the electronic device deletes layer 6, layer 7 and layer 8, the image frame displayed by the electronic device is image frame 5 corresponding to layer 5.
- the UI thread of the electronic device has already processed to layer 8. That is to say, the processing logic of the UI thread has reached layer 8.
- If the electronic device calculates the processing time of the next frame layer according to the processing time of layer 8, and then calculates the movement distance according to that processing time, the display of the electronic device will jump directly from the movement distance corresponding to layer 5 to the movement distance corresponding to layer 8, and the image content displayed by the electronic device will jump greatly. Based on this, in the method of this embodiment of the present application, the electronic device may redraw the fourth layer, so as to roll the layer-drawing logic of the electronic device back to the fourth layer, and obtain the processing time of the fourth layer.
- the fourth layer is a layer next to the layer corresponding to the image frame being displayed by the electronic device when the electronic device receives the second UI event.
- the UI thread of the electronic device receives a Down event (ie, the second UI event) at time t Down .
- the electronic device displays image frame 4 .
- the fourth layer is the next frame layer of the layer 4 corresponding to the image frame 4, that is, the layer 5.
- the electronic device can redraw layer 5 to return the logic of drawing layers by the electronic device to layer 5 .
- the fourth layer includes a layer corresponding to the image frame being displayed by the electronic device when the electronic device receives the second UI event, and a layer next to the layer corresponding to the image frame being displayed by the electronic device.
- the UI thread of the electronic device receives a Down event (ie, the second UI event) at time t Down .
- the electronic device displays image frame 4 .
- the fourth layer includes the layer 4 corresponding to the image frame 4 and the next frame layer (ie, the layer 5 ) of the layer 4 corresponding to the image frame 4 .
- the electronic device can redraw layer 4 and layer 5 to return the logic of drawing layers by the electronic device to layer 4 and layer 5 .
- the electronic device will no longer render the fourth layer, and the processing time of the fourth layer is used by the electronic device to calculate the movement distance of the fifth layer. For example, as shown in FIG. 20 , after time t Down , the electronic device does not render layer 5 again. For another example, as shown in FIG. 22A , after time t Down , the electronic device does not render layers 4 and 5 again.
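- A rough sketch of this rollback (hedged; layer indices and structure are illustrative assumptions):

```java
final class DrawLogicRollback {
    private long lastProcessingTimeMs;

    /**
     * Rolls the UI thread's drawing logic back to the fourth layer: the layer is
     * redrawn to re-seed the animation clock, but is not rendered or queued, so
     * nothing new reaches the SF Buffer.
     */
    void rollbackTo(int fourthLayerIndex, long redrawStartMs) {
        redrawWithoutRender(fourthLayerIndex); // e.g., layer 5 when frame 4 is shown
        lastProcessingTimeMs = redrawStartMs;  // seeds the next layer's movement distance
    }

    private void redrawWithoutRender(int layerIndex) {
        // UI-thread draw pass only; the Render thread is intentionally skipped.
    }
}
```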
- The following describes, with reference to the implementation in which the first mark bit is added to the layers of the first UI event (that is, the UI event corresponding to the "deterministic animation") and the layers in the SF Buffer are deleted in response to the interrupt event (that is, the second UI event), how the electronic device determines the fourth layer. Specifically, the electronic device can, in response to the above preset query event, query the number of layers with the first mark bit cached in the SF Buffer and the number of layers waiting to be cached into the SF cache queue when the electronic device receives the second UI event, and calculate the sum H of the queried numbers. Then, the electronic device can determine the above fourth layer according to the calculated H.
- Specifically, the composition thread of the electronic device can query the number of layers buffered in the SF Buffer for which the first mark bit is set, and the number of layers waiting to be cached into the SF cache queue when the UI thread of the electronic device receives the second UI event, and calculate the sum H of the queried numbers.
- the UI thread of the electronic device receives a Down event (ie, the second UI event) at time t Down .
- In response to the Down event, the UI thread can trigger a preset query event to the synthesis thread. The synthesis thread queries that the number of layers cached in the SF Buffer at time t_Down for which the first mark bit is set (such as layer 6 and layer 7) is 2, and that the number of layers waiting to be cached into the SF cache queue when the UI thread receives the second UI event (layer 8) is 1. Therefore, H = 2 + 1 = 3.
- The fourth layer can be the (H+h)-th frame layer counted, when the electronic device receives the second UI event, from the frame layer at the tail of the SF cache queue in the direction from the tail of the queue to the head of the queue; where h = 0, or h takes the values in {0, 1} in sequence.
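- Illustrative arithmetic for locating the fourth layer from H (the indices follow the FIG. 19/FIG. 20 example; the helper below and its inclusive counting convention are assumptions chosen so that those figures' numbers work out):

```java
final class FourthLayerLocator {
    /**
     * H = layers with the first mark bit cached in the SF Buffer + layers waiting
     * to be cached when the Down event arrives. Counting (H+h) frames inclusively
     * from the queue tail toward (and past) the head yields the fourth layer.
     */
    static int fourthLayerIndex(int tailLayerIndex, int flaggedInBuffer,
                                int waitingToCache, int h) {
        int H = flaggedInBuffer + waitingToCache; // e.g., 2 + 1 = 3 at time t_Down
        return tailLayerIndex - (H + h) + 1;      // tail = layer 7: h=0 -> 5, h=1 -> 4
    }
}
```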
- the layers buffered in the SF Buffer are as shown in FIG. 21 or FIG. 22B .
- As shown in FIG. 21 or FIG. 22B, layer 6 and layer 7 are cached in the SF Buffer; layer 6 is at the head of the queue, and layer 7 is at the tail of the queue.
- When h = 0, the fourth layer is layer 5 shown in FIG. 19 or FIG. 20; that is, the fourth layer is the (H+0)-th frame layer counted, when the electronic device receives the second UI event (i.e., at time t_Down), from the frame layer at the tail of the SF cache queue in the direction from the tail of the queue to the head of the queue.
- the UI thread of the electronic device can redraw layer 5 during the period from time t Down to time t7.
- When h takes the values in {0, 1} in sequence, the fourth layer is layer 4 and layer 5 shown in FIG. 22A; that is, the fourth layer includes the (H+0)-th and (H+1)-th frame layers counted, when the electronic device receives the second UI event (i.e., at time t_Down), from the frame layer at the tail of the SF cache queue in the direction from the tail of the queue to the head of the queue. In this case, the UI thread of the electronic device can redraw layer 4 and layer 5 during the period from time t_Down to time t_7.
- It should be noted that the electronic device (such as the UI thread of the electronic device) redraws the fourth layer (e.g., layer 5 shown in FIG. 19 or FIG. 20, or layer 4 and layer 5 shown in FIG. 22A); however, the electronic device (such as the Render thread of the electronic device) will not render the fourth layer. For example, as shown in FIG. 19 or FIG. 20, after the UI thread finishes drawing layer 5 at time t_7, the Render thread does not render layer 5. For another example, as shown in FIG. 22A, after the UI thread finishes drawing layer 4 and layer 5 at time t_7, the Render thread does not render layer 4 and layer 5.
- the purpose of redrawing the fourth layer by the electronic device is to return the logic of drawing the layer of the electronic device (ie, the processing logic of the UI thread) to the fourth layer.
- the processing time of this fourth layer is used to calculate the movement distance. It can be understood that by returning the logic of the drawing layer of the electronic device to the fourth layer, and calculating the movement distance according to the processing time of the fourth layer, large jumps in the image content displayed by the electronic device can be avoided.
- Assume that the animation displayed by the electronic device in response to the first UI event is a directional animation (e.g., an animation of an object moving in one direction). In this case, if the layer-drawing logic rolls back directly from layer 8 to layer 4, the movement from layer 8 to layer 4 implies that the object moves in the direction opposite to the movement direction of the object in the directional animation; however, the movement from layer 4 to layer 5 implies that the object moves in the same direction as the object in the directional animation. Therefore, by first redrawing layer 4 and then redrawing layer 5, the problem that the movement direction of the object appears opposite to the movement direction of the object in the directional animation can be alleviated.
- In this embodiment, after the electronic device deletes the layers of the first UI event cached in the SF Buffer, the electronic device can redraw the fourth layer of the first UI event. In this way, the continuity of the image content displayed by the electronic device can be improved, and the user experience can be improved.
- Some embodiments of the present application provide an electronic device, which may include: a display screen (eg, a touch screen), a memory, and one or more processors.
- the display screen, memory and processor are coupled.
- the memory is used to store computer program code comprising computer instructions.
- When the processor executes the computer instructions, the electronic device can perform the functions or steps performed by the electronic device in the foregoing method embodiments.
- For the structure of the electronic device, reference may be made to the structure of the electronic device 100 shown in FIG. 1.
- An embodiment of the present application further provides a chip system. As shown in FIG. 23, the chip system 2300 includes at least one processor 2301 and at least one interface circuit 2302.
- the processor 2301 and the interface circuit 2302 may be interconnected by wires.
- the interface circuit 2302 may be used to receive signals from other devices, such as the memory of an electronic device.
- the interface circuit 2302 may be used to send signals to other devices (eg, the processor 2301 or a touch screen of an electronic device).
- the interface circuit 2302 can read the instructions stored in the memory and send the instructions to the processor 2301 .
- When the processor 2301 executes the instructions, the electronic device can be caused to perform the steps in the foregoing embodiments.
- the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
- Embodiments of the present application further provide a computer storage medium. The computer storage medium includes computer instructions; when the computer instructions run on the above electronic device, the electronic device is caused to perform the functions or steps performed by the electronic device in the foregoing method embodiments.
- Embodiments of the present application further provide a computer program product, which, when the computer program product runs on a computer, enables the computer to perform each function or step performed by the electronic device in the foregoing method embodiments.
- the computer may be the electronic device described above.
- the disclosed apparatus and method may be implemented in other manners.
- the device embodiments described above are only illustrative.
- the division of the modules or units is only a logical function division. In actual implementation, there may be other division methods.
- For example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
- The mutual coupling, direct coupling or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components shown as units may be one physical unit or multiple physical units, that is, they may be located in one place, or may be distributed to multiple different places . Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
- the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
- If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium.
- Based on this understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
- The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Abstract
An image processing method and an electronic device, relating to the field of image processing technologies. The specific solution includes: the electronic device draws a first layer, renders the first layer, and caches the rendered first layer in an SF cache queue (S301); if the electronic device finishes drawing the first layer before a first moment, the electronic device draws a second layer before the first moment, renders the second layer, and caches the rendered second layer in the SF cache queue (S302); where the first moment is the moment at which the first vertical synchronization signal used to trigger the electronic device to draw the second layer arrives.
Description
This application claims priority to Chinese Patent Application No. 202010762068.9, filed with the China National Intellectual Property Administration on July 31, 2020 and entitled "Image Processing Method and Electronic Device", which is incorporated herein by reference in its entirety.
The embodiments of the present application relate to the field of image processing technologies, and in particular, to an image processing method and an electronic device.
With the development of electronic technologies, the performance of various electronic devices (such as mobile phones) is getting better and better, and consumers have increasingly high requirements on the human-computer interaction performance of electronic products. Among these, the visual continuity of the content displayed by an electronic device is an important aspect of human-computer interaction performance.
High-frame-rate display is also a development trend for electronic devices. For example, the frame rate of electronic devices has developed from 60 hertz (Hz) to 90 Hz, and then to 120 Hz. However, the higher the frame rate of an electronic device, the more likely the problem of frame loss occurs, which leads to discontinuity of the displayed content and affects the user experience. Therefore, how to reduce or even avoid frame loss when an electronic device displays images is an urgent problem to be solved.
Summary of the invention
Embodiments of the present application provide an image processing method and an electronic device, which can reduce the possibility of frame loss when the electronic device displays images and ensure the smoothness of the images displayed on the display screen, thereby improving the user's visual experience.
To achieve the foregoing objective, the present application adopts the following technical solutions:
According to a first aspect, an embodiment of the present application provides an image processing method, which may be applied to an electronic device. In the method, the electronic device draws a first layer, renders the first layer, and caches the rendered first layer in an SF cache queue. If the electronic device finishes drawing the first layer before a first moment, the electronic device may draw a second layer before the first moment, render the second layer, and cache the rendered second layer in the SF cache queue. Here, SF stands for Surface Flinger. The above first moment is the moment at which the first vertical synchronization signal used to trigger the electronic device to draw the second layer arrives.
In the present application, after completing one layer-drawing task (i.e., finishing drawing the first layer) before the next first vertical synchronization signal arrives, the electronic device can continue to execute the next layer-drawing task (i.e., drawing the second layer), instead of waiting for that first vertical synchronization signal to arrive before drawing the second layer. In other words, the electronic device can use the idle period of the UI thread to execute the next layer-drawing task in advance. In this way, layer drawing and rendering tasks can be completed in advance, the possibility of frame loss when the electronic device displays images can be reduced, and the smoothness of the images displayed on the display screen can be ensured, thereby improving the user's visual experience.
In a possible design of the first aspect, the electronic device may draw the second layer immediately after finishing drawing the first layer before the first moment. Specifically, that the electronic device finishes drawing the first layer before the first moment, draws the second layer before the first moment, renders the second layer, and caches the rendered second layer in the SF cache queue may include: the electronic device finishes drawing the first layer before the first moment, and, in response to the end of drawing of the first layer, draws the second layer, renders the second layer, and caches the rendered second layer in the SF cache queue. This design provides a specific manner in which the electronic device draws the second layer in advance.
In another possible design of the first aspect, even if the electronic device finishes drawing the first layer before the first moment, the electronic device does not necessarily start to draw the second layer immediately in response to the end of drawing of the first layer.
Specifically, if the electronic device finishes drawing the first layer before a second moment, the electronic device may start to draw the second layer from the second moment, render the second layer, and cache the rendered second layer in the SF cache queue. The second moment is the moment at which a preset percentage of the signal period of the first vertical synchronization signal used to trigger the electronic device to draw the first layer has elapsed, where the preset percentage is less than 1 and the second moment is before the first moment.
That is, if the electronic device finishes drawing the first layer before the second moment, the electronic device does not draw the second layer immediately, but waits until the second moment arrives to start drawing the second layer. This design provides a specific manner in which the electronic device draws the second layer in advance.
In another possible design of the first aspect, the electronic device may also finish drawing the first layer before the first moment and after the second moment. In this case, the electronic device may, in response to the end of drawing of the first layer, draw the second layer, render the second layer, and cache the rendered second layer in the SF cache queue. That is, the electronic device may draw the second layer immediately after the drawing of the first layer ends. This design provides a specific manner in which the electronic device draws the second layer in advance.
In another possible design of the first aspect, the electronic device may draw the second layer in advance in response to a first UI event. Specifically, the electronic device may receive the first UI event, which is used to trigger the electronic device to display preset image content or to display image content in a preset manner. The first UI event includes any one of the following: the electronic device receives a fling operation input by the user, the electronic device receives the user's tap on a preset control in a foreground application, or a UI event automatically triggered by the electronic device. In response to the first UI event, the electronic device draws the first layer, renders the first layer, and caches the rendered first layer in the SF cache queue.
In another possible design of the first aspect, to prevent layer overflow in the SF cache queue from affecting the continuity of the images displayed by the electronic device, in this embodiment of the present application, before drawing the second layer in advance, the electronic device may determine whether the SF cache queue has sufficient cache space for caching the layers drawn and rendered in advance. Specifically, the electronic device may determine the cache space of the SF cache queue and the number of cached frames in the SF cache queue, where a cached frame is a layer cached in the SF cache queue, and then calculate the difference between the cache space of the SF cache queue and the number of cached frames to obtain the remaining cache space of the SF cache queue. If the remaining cache space of the SF cache queue is greater than a first preset threshold and the electronic device finishes drawing the first layer before the first moment, the electronic device draws the second layer before the first moment, renders the second layer, and caches the rendered second layer in the SF cache queue.
In the present application, the electronic device draws and renders layers in advance when the remaining cache space of the SF cache queue is greater than the first space threshold, that is, when the remaining cache space of the SF cache queue is sufficient to cache the layers drawn and rendered in advance. In this way, the problem of frame loss caused by drawing and rendering layers in advance when the cache space of the SF cache queue is insufficient can be reduced, the possibility of frame loss when the electronic device displays images can be lowered, and the continuity of the images displayed on the display screen can be ensured, thereby improving the user's visual experience.
In another possible design of the first aspect, if the remaining cache space of the SF cache queue is less than a second preset threshold, the electronic device draws the second layer in response to the first vertical synchronization signal, renders the second layer, and caches the rendered second layer in the SF cache queue.
In another possible design of the first aspect, the electronic device may dynamically set the cache space of the SF cache queue. Specifically, before the electronic device finishes drawing the first layer before the first moment and draws the second layer before the first moment, renders the second layer, and caches the rendered second layer in the SF cache queue, the method in this embodiment of the present application may further include: the electronic device sets the cache space of the SF cache queue to M+p frames, where M is the size of the cache space of the SF cache queue before the setting, and p is the number of frames dropped by the electronic device within a preset time, or p is a preset positive integer.
By dynamically setting the cache space of the SF cache queue, the electronic device can expand the cache space of the SF Buffer. In this way, the problem that layer overflow in the SF Buffer affects the continuity of the images displayed by the electronic device can be solved, and the continuity of the images displayed by the electronic device can be improved.
In another possible design of the first aspect, if M+p is greater than a preset upper limit N, the electronic device sets the cache space of the SF cache queue to N frames. In this design, the electronic device sets an upper limit for the cache space of the SF cache queue.
To reduce the possibility of jitter on the display of the electronic device, the electronic device may calculate the movement distance of the corresponding layer according to the signal period of the first vertical synchronization signal, and draw the layer according to that movement distance. Specifically, that the electronic device draws the second layer includes: the electronic device calculates the movement distance of the second layer according to the signal period of the first vertical synchronization signal, and draws the second layer according to the movement distance of the second layer, where the movement distance of the second layer is the movement distance of the image content in the second layer compared with the image content in the first layer. With the method of the present application, the possibility of jitter on the display of the electronic device can be reduced.
In another possible design of the first aspect, the method in which the electronic device calculates the movement distance of the second layer according to the signal period of the first vertical synchronization signal and draws the second layer according to the movement distance of the second layer may include: the electronic device calculates the processing time of the second layer according to the signal period of the first vertical synchronization signal, calculates the movement distance of the second layer according to the processing time of the second layer, and draws the second layer according to the movement distance of the second layer.
When the second layer is the i-th layer drawn by the electronic device in response to the first UI event, the processing time of the second layer is p_{i-1} + T_{i-1}, where i ≥ 2 and i is a positive integer; p_{i-1} is the processing time of the (i-1)-th layer, and T_{i-1} is the signal period of the first vertical synchronization signal used to trigger the electronic device to draw the (i-1)-th layer. This design provides a specific manner in which the electronic device calculates the movement distance of the second layer.
In another possible design of the first aspect, the electronic device may receive an interrupt event used to trigger the electronic device to stop displaying the image content corresponding to the above first UI event. For example, the electronic device may receive a second UI event, where the second UI event is an interrupt event used to trigger the electronic device to stop displaying the image content corresponding to the first UI event. In response to the second UI event, the electronic device may stop drawing the layers of the first UI event. Then, in response to a second vertical synchronization signal, the electronic device deletes the layers of the first UI event cached in the SF cache queue, where the second vertical synchronization signal is used to trigger the electronic device to composite the rendered layers to obtain an image frame. In response to the first vertical synchronization signal, the electronic device may draw a third layer of the second UI event, render the third layer, and cache the rendered third layer in the SF cache queue.
The electronic device stops drawing the layers of the first UI event in response to the second UI event, and then deletes the layers of the first UI event cached in the SF cache queue in response to the second vertical synchronization signal. In this way, the electronic device can display the image content of the second UI event as soon as possible, which can reduce the touch response delay and improve the touch-following performance of the electronic device.
In another possible design of the first aspect, after the electronic device receives the second UI event and before the electronic device, in response to the first vertical synchronization signal, draws the third layer of the second UI event, renders the third layer, and caches the rendered third layer in the SF cache queue, the method in this embodiment of the present application may further include: the electronic device redraws a fourth layer, so as to roll the layer-drawing logic of the electronic device back to the fourth layer, and obtains the processing time of the fourth layer. The fourth layer is the frame layer next to the layer corresponding to the image frame being displayed by the electronic device when the electronic device receives the second UI event; or the fourth layer includes the layer corresponding to the image frame being displayed by the electronic device when the electronic device receives the second UI event, and the frame layer next to that layer.
It should be noted that the electronic device no longer renders the fourth layer, and the processing time of the fourth layer is used by the electronic device to calculate the movement distance of the fourth layer.
By redrawing the fourth layer to roll the layer-drawing logic of the electronic device back to the fourth layer, large jumps in the image content displayed by the electronic device can be avoided, the continuity of the image content displayed by the electronic device can be improved, and the user experience can be improved.
According to a second aspect, an embodiment of the present application provides an electronic device, which includes a display screen, a memory, and one or more processors. The display screen, the memory, and the processor are coupled. The display screen is configured to display images generated by the processor, and the memory is configured to store computer program code, where the computer program code includes computer instructions. When the computer instructions are executed by the processor, the electronic device is caused to perform the following operations: drawing a first layer, rendering the first layer, and caching the rendered first layer in an SF cache queue; and, having finished drawing the first layer before a first moment, drawing a second layer before the first moment, rendering the second layer, and caching the rendered second layer in the SF cache queue, where the above first moment is the moment at which the first vertical synchronization signal used to trigger the electronic device to draw the second layer arrives.
In a possible design of the second aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: finishing drawing the first layer before the first moment, and, in response to the end of drawing of the first layer, drawing the second layer, rendering the second layer, and caching the rendered second layer in the SF cache queue.
In another possible design of the second aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: finishing drawing the first layer before a second moment, drawing the second layer from the second moment, rendering the second layer, and caching the rendered second layer in the SF cache queue. The second moment is the moment at which a preset percentage of the signal period of the first vertical synchronization signal used to trigger the electronic device to draw the first layer has elapsed, where the preset percentage is less than 1 and the second moment is before the first moment.
In another possible design of the second aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: finishing drawing the first layer before the first moment and after the second moment, and, in response to the end of drawing of the first layer, drawing the second layer, rendering the second layer, and caching the rendered second layer in the SF cache queue.
In another possible design of the second aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: receiving a first UI event, where the first UI event is used to trigger the display screen to display preset image content or to display image content in a preset manner, and the first UI event includes any one of the following: the electronic device receives a fling operation input by the user, the electronic device receives the user's tap on a preset control in a foreground application, or a UI event automatically triggered by the electronic device; and, in response to the first UI event, drawing the first layer, rendering the first layer, and caching the rendered first layer in the SF cache queue.
In another possible design of the second aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: determining the cache space of the SF cache queue and the number of cached frames in the SF cache queue, where a cached frame is a layer cached in the SF cache queue; calculating the difference between the cache space of the SF cache queue and the number of cached frames to obtain the remaining cache space of the SF cache queue; and, if the remaining cache space of the SF cache queue is greater than a first preset threshold and the first layer is drawn before the first moment, drawing the second layer before the first moment, rendering the second layer, and caching the rendered second layer in the SF cache queue.
In another possible design of the second aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: if the remaining cache space of the SF cache queue is less than a second preset threshold, drawing the second layer in response to the first vertical synchronization signal, rendering the second layer, and caching the rendered second layer in the SF cache queue.
In another possible design of the second aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: setting the cache space of the SF cache queue to M+p frames, where M is the size of the cache space of the SF cache queue before the setting, and p is the number of frames dropped by the electronic device within a preset time, or p is a preset positive integer.
In another possible design of the second aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: if M+p is greater than a preset upper limit N, setting the cache space of the SF cache queue to N frames.
In another possible design of the second aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: calculating the movement distance of the second layer according to the signal period of the first vertical synchronization signal, and drawing the second layer according to the movement distance of the second layer, where the movement distance of the second layer is the movement distance of the image content in the second layer compared with the image content in the first layer.
In another possible design of the second aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: calculating the processing time of the second layer according to the signal period of the first vertical synchronization signal; and calculating the movement distance of the second layer according to the processing time of the second layer, and drawing the second layer according to the movement distance of the second layer. When the second layer is the i-th layer drawn by the electronic device in response to the first UI event, the processing time of the second layer is p_{i-1} + T_{i-1}, where i ≥ 2 and i is a positive integer; p_{i-1} is the processing time of the (i-1)-th layer, and T_{i-1} is the signal period of the first vertical synchronization signal used to trigger the electronic device to draw the (i-1)-th layer.
In another possible design of the second aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: receiving a second UI event; in response to the second UI event, stopping drawing the layers of the first UI event; in response to a second vertical synchronization signal, deleting the layers of the first UI event cached in the SF cache queue, where the second vertical synchronization signal is used to trigger the electronic device to composite the rendered layers to obtain an image frame; and, in response to the first vertical synchronization signal, drawing a third layer of the second UI event, rendering the third layer, and caching the rendered third layer in the SF cache queue. The second UI event is an interrupt event used to trigger the electronic device to stop displaying the image content corresponding to the first UI event.
In another possible design of the second aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: redrawing a fourth layer to roll the layer-drawing logic of the electronic device back to the fourth layer, and obtaining the processing time of the fourth layer. The electronic device no longer renders the fourth layer, and the processing time of the fourth layer is used by the electronic device to calculate the movement distance of the fourth layer. The fourth layer is the frame layer next to the layer corresponding to the image frame being displayed on the display screen when the second UI event is received; or the fourth layer includes the layer corresponding to the image frame being displayed on the display screen when the second UI event is received, and the frame layer next to that layer.
According to a third aspect, the present application provides a chip system, which may be applied to an electronic device including a memory and a display screen. The chip system includes one or more interface circuits and one or more processors. The interface circuit and the processor are interconnected by wires. The interface circuit is configured to receive a signal from the memory and send the signal to the processor, where the signal includes the computer instructions stored in the memory. When the processor executes the computer instructions, the electronic device performs the method according to the first aspect and any one of its possible designs.
According to a fourth aspect, the present application provides a computer-readable storage medium including computer instructions. When the computer instructions run on an electronic device, the electronic device is caused to perform the method according to the first aspect and any one of its possible designs.
According to a fifth aspect, the present application provides a computer program product. When the computer program product runs on a computer, the computer is caused to perform the method according to the first aspect and any one of its possible designs.
It can be understood that, for the beneficial effects that can be achieved by the electronic device according to the second aspect and any one of its possible designs, the chip system according to the third aspect, the computer-readable storage medium according to the fourth aspect, and the computer program product according to the fifth aspect, reference may be made to the beneficial effects of the first aspect and any one of its possible designs, and details are not described herein again.
图1为本申请实施例提供的一种电子设备的硬件结构示意图;
图2A为本申请实施例提供的一种垂直同步信号的示意图;
图2B为本申请实施例提供的一种电子设备响应于触摸操作显示图像的软件处理流程示意图;
图2C为常规技术中的一种电子设备进行图层绘制、渲染、合成以及图像帧显示的原理示意图；
图3为本申请实施例提供的一种图像处理方法流程图;
图4A为本申请实施例提供的一种电子设备进行图层绘制、渲染、合成以及图像帧显示的原理示意图;
图4B为本申请实施例提供的一种图像处理方法流程图;
图5A为本申请实施例提供的一种电子设备进行图层绘制、渲染、合成以及图像帧显示的原理示意图；
图5B为本申请实施例提供的另一种图像处理方法流程图;
图6为本申请实施例提供的一种SF Buffer缓存图层的方法示意图;
图7A为本申请实施例提供的一种Frame Buffer缓存图层的方法示意图;
图7B为SysTrace工具抓取的常规技术中电子设备绘制多帧图层的一种时序图;
图7C为SysTrace工具抓取的本申请实施例中电子设备绘制多帧图层的一种时序图;
图7D为SysTrace工具抓取的本申请实施例中电子设备绘制多帧图层的另一种时序图;
图8A为本申请实施例提供的电子设备的一种显示界面示意图;
图8B为本申请实施例提供的电子设备的另一种显示界面示意图;
图8C为本申请实施例提供的电子设备的另一种显示界面示意图;
图9为本申请实施例提供的另一种电子设备进行图层绘制、渲染、合成以及图像帧显示的原理示意图;
图10A为本申请实施例提供的另一种SF Buffer缓存图层的方法示意图;
图10B为本申请实施例提供的另一种SF Buffer缓存图层的方法示意图;
图10C为本申请实施例提供的另一种SF Buffer缓存图层的方法示意图;
图10D为本申请实施例提供的另一种SF Buffer缓存图层的方法示意图;
图10E为常规技术中电子设备绘制多帧图层的过程中,SF Buffer中缓存帧的变化示意图;
图10F为本申请实施例中电子设备绘制多帧图层的过程中,SF Buffer中缓存帧的变化示意图;
图11A为本申请实施例提供的另一种电子设备进行图层绘制、渲染、合成以及图像帧显示的原理示意图;
图11B为本申请实施例提供的一种图层的运动距离变化示意图;
图12为本申请实施例提供的另一种电子设备进行图层绘制、渲染、合成以及图像帧显示的原理示意图;
图13为本申请实施例提供的另一种图像处理方法流程图;
图14为本申请实施例提供的另一种图像处理方法流程图;
图15为本申请实施例提供的另一种电子设备进行图层绘制、渲染、合成以及图像帧显示的原理示意图;
图16A为本申请实施例提供的另一种SF Buffer缓存图层的方法示意图;
图16B为本申请实施例提供的另一种SF Buffer缓存图层的方法示意图;
图16C为本申请实施例提供的另一种SF Buffer缓存图层的方法示意图;
图16D为本申请实施例提供的另一种SF Buffer缓存图层的方法示意图;
图17为本申请实施例提供的另一种电子设备进行图层绘制、渲染、合成以及图像帧显示的原理示意图;
图18A为本申请实施例提供的另一种SF Buffer缓存图层的方法示意图;
图18B为本申请实施例提供的另一种SF Buffer缓存图层的方法示意图;
图19为本申请实施例提供的另一种电子设备进行图层绘制、渲染、合成以及图像帧显示的原理示意图;
图20为本申请实施例提供的另一种电子设备进行图层绘制、渲染、合成以及图像帧显示的原理示意图;
图21为本申请实施例提供的另一种SF Buffer缓存图层的方法示意图;
图22A为本申请实施例提供的另一种电子设备进行图层绘制、渲染、合成以及图像帧显示的原理示意图;
图22B为本申请实施例提供的另一种SF Buffer缓存图层的方法示意图;
图23为本申请实施例提供的一种芯片系统的结构示意图。
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
本申请实施例提供一种图像处理方法,该方法可以应用于包括显示屏(如触摸屏)的电子设备。通过该方法,可以降低电子设备显示图像时出现丢帧的可能性,可以保证显示屏显示图像的流畅性,从而提升用户的视觉体验。
示例性的,上述电子设备可以是手机、平板电脑、桌面型、膝上型、手持计算机、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本,以及蜂窝电话、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)\虚拟现实(virtual reality,VR)设备等包括显示屏(如触摸屏)的设备,本申请实施例对该电子设备的具体形态不作特殊限制。
下面将结合附图对本申请实施例的实施方式进行详细描述。
请参考图1,为本申请实施例提供的一种电子设备100的结构示意图。如图1所示,电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头293,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中,传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本实施例示意的结构并不构成对电子设备100的具体限定。在另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器，用于存储指令和数据。在一些实施例中，处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据，可从所述存储器中直接调用。避免了重复存取，减少了处理器110的等待时间，因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
可以理解的是,本实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头293,和无线通信模块160等供电。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络)，蓝牙(bluetooth,BT)，全球导航卫星系统(global navigation satellite system,GNSS)，调频(frequency modulation,FM)，近距离无线通信技术(near field communication,NFC)，红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波，将电磁波信号调频以及滤波处理，将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号，对其进行调频，放大，经天线2转为电磁波辐射出去。
在一些实施例中，电子设备100的天线1和移动通信模块150耦合，天线2和无线通信模块160耦合，使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM)，通用分组无线服务(general packet radio service,GPRS)，码分多址接入(code division multiple access,CDMA)，宽带码分多址(wideband code division multiple access,WCDMA)，时分码分多址(time-division code division multiple access,TD-SCDMA)，长期演进(long term evolution,LTE)，BT，GNSS，WLAN，NFC，FM，和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS)，全球导航卫星系统(global navigation satellite system,GLONASS)，北斗卫星导航系统(beidou navigation satellite system,BDS)，准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像，视频等。该显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD)，有机发光二极管(organic light-emitting diode,OLED)，有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED)，柔性发光二极管(flex light-emitting diode,FLED)，Miniled，MicroLed，Micro-oLed，量子点发光二极管(quantum dot light emitting diodes,QLED)等。
其中，本申请实施例中的显示屏194可以是触摸屏。即该显示屏194中集成了触摸传感器180K。该触摸传感器180K也可以称为“触控面板”。也就是说，显示屏194可以包括显示面板和触摸面板，由触摸传感器180K与显示屏194组成触摸屏，也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器180K检测到触摸操作后，可以由内核层的驱动(如TP驱动)传递给上层，以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中，触摸传感器180K也可以设置于电子设备100的表面，与显示屏194所处的位置不同。
电子设备100可以通过ISP,摄像头293,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。ISP用于处理摄像头293反馈的数据。摄像头293用于捕获静态图像或视频。数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1, MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡，例如Micro SD卡，实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信，实现数据存储功能。例如将音乐，视频等文件保存在外部存储卡中。内部存储器121可以用于存储计算机可执行程序代码，所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令，从而执行电子设备100的各种功能应用以及数据处理。例如，在本申请实施例中，处理器110可以通过执行存储在内部存储器121中的指令，使得电子设备100执行本申请实施例提供的图像处理方法。内部存储器121可以包括存储程序区和存储数据区。其中，存储程序区可存储操作系统，至少一个功能所需的应用程序(比如声音播放功能，图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据，电话本等)等。此外，内部存储器121可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件，闪存器件，通用闪存存储器(universal flash storage,UFS)等。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。耳机接口170D用于连接有线耳机。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。电子设备100根据电容的变化确定压力的强度。当有触摸操作作用于显示屏194,电子设备100根据压力传感器180A检测所述触摸操作强度。电子设备100也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。本申请实施例中,电子设备100可以通过压力传感器180A获取用户的触摸操作的按压力度。
按键190包括开机键，音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入，产生与电子设备100的用户设置以及功能控制有关的键信号输入。马达191可以产生振动提示。马达191可以用于来电振动提示，也可以用于触摸振动反馈。指示器192可以是指示灯，可以用于指示充电状态，电量变化，也可以用于指示消息，未接来电，通知等。SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195，或从SIM卡接口195拔出，实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口，N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡，Micro SIM卡，SIM卡等。
以下介绍上述垂直同步信号1、垂直同步信号2和垂直同步信号3。
垂直同步信号1:如VSYNC_APP。该垂直同步信号1可以用于触发绘制一个或多个图层,并渲染绘制的图层。也就是说,上述垂直同步信号1可用于触发UI线程绘制一个或多个图层,并由Render线程对UI线程绘制的一个或多个图层进行渲染。
垂直同步信号2:如VSYNC_SF。该垂直同步信号2可以用于触发对渲染的一个或多个图层进行图层合成得到图像帧。也就是说,上述垂直同步信号2可用于触发合成线程对Render线程渲染的一个或多个图层进行图层合成得到图像帧。
垂直同步信号3:如HW_VSYNC。该垂直同步信号3可以用于触发硬件刷新显示图像帧。
其中,垂直同步信号3是由电子设备的显示屏驱动触发的一个硬件信号。本申请实施例中,垂直同步信号3(如HW_VSYNC)的信号周期T3是根据电子设备的显示屏的帧率确定的。具体的,垂直同步信号3的信号周期T3是电子设备的显示屏(如LCD或OLED)的帧率的倒数。
例如，电子设备的显示屏的帧率可以为60赫兹（Hz）、70Hz、75Hz、80Hz、90Hz或者120Hz等任一值。以帧率是60Hz为例，上述垂直同步信号3的信号周期为1/60=0.01667秒（s）=16.667毫秒（ms）。以帧率是90Hz为例，上述垂直同步信号3的信号周期为1/90=0.01111秒（s）=11.11毫秒（ms）。需要注意的是，电子设备可能支持多个不同的帧率，电子设备的帧率可以在上述不同的帧率之间切换。本申请实施例中所述的帧率是电子设备当前所使用的帧率。即垂直同步信号3的信号周期是电子设备当前所使用的帧率的倒数。
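作为示意，下面用一小段Java演示“信号周期=帧率的倒数”的换算(仅为帮助理解的示例，类名与方法名均为本文假设)：

```java
// 示例：根据显示屏当前帧率计算垂直同步信号的信号周期(毫秒)
public class VsyncPeriodDemo {
    // 信号周期 = 帧率的倒数；入参为帧率(Hz)，返回周期(ms)
    static double periodMillis(double frameRateHz) {
        return 1000.0 / frameRateHz;
    }

    public static void main(String[] args) {
        System.out.printf("60Hz  -> %.3f ms%n", periodMillis(60));  // 约16.667ms
        System.out.printf("90Hz  -> %.3f ms%n", periodMillis(90));  // 约11.111ms
        System.out.printf("120Hz -> %.3f ms%n", periodMillis(120)); // 约8.333ms
    }
}
```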
需要注意的是,本申请实施例中的垂直同步信号3是一个周期性离散信号。例如,如图2A所示,每间隔一个信号周期就会有一个由硬件驱动触发的垂直同步信号3。垂直同步信号1和垂直同步信号2是基于垂直同步信号3产生的,即垂直同步信号3可以是垂直同步信号1和垂直同步信号2的信号源。或者,垂直同步信号1和垂直同步信号2与垂直同步信号3同步。故垂直同步信号1和垂直同步信号2的信号周期与垂直同步信号3的信号周期相同,且相位一致。例如,如图2A所示,垂直同步信号1的信号周期,垂直同步信号2的信号周期,与垂直同步信号3的信号周期相同。并且,如图2A所示,垂直同步信号1、垂直同步信号2,以及垂直同步信号3的相位一致。可以理解的是,实际实施过程中,垂直同步信号1、垂直同步信号2,以及垂直同步信号3之间可能会因为各种因素(如处理性能)存在一定的相位误差。需要注意的是,在理解本申请实施例的方法时,上述相位误差被忽略。
综上所述，上述垂直同步信号1、垂直同步信号2和垂直同步信号3均为周期性离散信号。例如，如图2A所示，每间隔一个信号周期就会有一个垂直同步信号1，每间隔一个信号周期就会有一个垂直同步信号2，每间隔一个信号周期就会有一个垂直同步信号3。上述垂直同步信号1、垂直同步信号2和垂直同步信号3的信号周期都可以称为同步周期T_Z。也就是说，本申请实施例中的同步周期是电子设备的帧率的倒数。
需要注意的是,在不同的系统或者架构中,垂直同步信号的名称可能不同。例如, 在一些系统或者架构中,上述用于触发绘制一个或多个图层的垂直同步信号(即垂直同步信号1)的名称可能不是VSYNC_APP。但是,无论垂直同步信号的名称是什么,只要是具备类似功能的同步信号,符合本申请实施例提供的方法的技术思路,都应涵盖在本申请的保护范围之内。
并且,在不同的系统或者架构中,对上述垂直同步信号的定义也可能不同。例如,在另一些系统或架构中,上述垂直同步信号1的定义可以为:垂直同步信号1可以用于触发渲染一个或多个图层;垂直同步信号2的定义可以为:垂直同步信号2可以用于触发根据一个或多个图层生成图像帧;垂直同步信号3的定义可以为:垂直同步信号3可以用于触发显示图像帧。本申请实施例中,对垂直同步信号的定义不作限定。但是,无论对垂直同步信号做何种定义,只要是具备类似功能的同步信号,符合本申请实施例提供的方法的技术思路,都应涵盖在本申请的保护范围之内。
为了便于理解,本申请实施例这里结合图2B,以上述显示屏是触摸屏,用户在显示屏的操作是触摸操作为例,介绍从“用户手指在触摸屏输入触摸操作”到“触摸屏显示该触摸操作对应的图像”过程中,电子设备的软件处理流程。
如图2B所示,电子设备可以包括:触控面板(touch panel,TP)/TP驱动(Driver)10、Input框架(即Input Framework)20、UI框架(即UI Framework)30、Display框架(即Display Framework)40和硬件显示模块50。
如图2B所示，电子设备的软件处理流程可以包括以下步骤(1)-步骤(5)。步骤(1)：TP IC/TP驱动10中的TP采集用户手指对电子设备的TP的触摸操作后，TP驱动向Event Hub上报相应的触摸事件。步骤(2)：Input框架20的Input Reader线程可以从Event Hub中读取触摸事件，然后向Input Dispatcher线程发送该触摸事件；由Input Dispatcher线程向UI框架30中的UI线程上传该触摸事件。步骤(3)：UI框架30中的UI线程(如Do Frame)绘制该触摸事件对应的一个或多个图层；渲染(Render)线程(如Draw Frame)对一个或多个图层进行图层渲染。其中，上述UI线程是电子设备的中央处理器(Central Processing Unit,CPU)中的线程。Render线程是电子设备的GPU中的线程。步骤(4)：Display框架40中的合成线程(Surface Flinger)对绘制的一个或多个图层(即渲染后的一个或多个图层)进行图层合成得到图像帧。步骤(5)：硬件显示模块50的液晶显示面板(Liquid Crystal Display,LCD)驱动可接收合成的图像帧，由LCD显示合成的图像帧。LCD显示图像帧后，LCD显示的图像可被人眼感知。
一般而言,响应于用户对TP的触摸操作或者UI事件,UI框架可以在垂直同步信号1到来后,调用UI线程绘制触控事件对应的一个或多个图层,再调用Render线程以对该一个或多个图层进行渲染;然后,硬件合成(Hardware Composer,HWC)可以在垂直同步信号2到来后,调用合成线程对绘制的一个或多个图层(即渲染后的一个或多个图层)进行图层合成得到图像帧;最后,硬件显示模块可以在垂直同步信号3到来后,在LCD刷新显示上述图像帧。其中,上述UI事件可以是由用户对TP的触摸操作触发的。或者,该UI事件可以是由电子设备自动触发的。例如,电子设备的前台应用自动切换画面时,可以触发上述UI事件。前台应用是电子设备的显示屏当前显示的界面对应的应用。
其中,TP可以周期性检测用户的触摸操作。TP检测到触摸操作后,可以唤醒上述垂直同步信号1和垂直同步信号2,以触发UI框架基于垂直同步信号1进行图层绘制和渲染,硬件合成HWC基于垂直同步信号2进行图层合成。其中,TP检测触摸操作的检测周期与垂直同步信号3(如HW_VSYNC)的信号周期T3相同。
需要注意的是,UI框架是基于垂直同步信号1周期性的进行图层绘制和渲染的;硬件合成HWC是基于垂直同步信号2周期性的进行图层合成的;LCD是基于垂直同步信号3周期性的进行图像帧刷新的。
其中,电子设备响应于上述垂直同步信号1、垂直同步信号2和垂直同步信号3,进行图层的绘制、渲染、合成和刷新显示图像帧的过程中,可能会出现丢帧的现象。具体的,显示屏刷新显示图像帧的过程中,可能会显示一帧空白图像。这样,会影响显示屏显示图像的连贯性和流畅性,从而影响用户的视觉体验。
例如，如图2C所示，在t_1时刻，一个垂直同步信号1到来；响应于t_1时刻的垂直同步信号1，电子设备执行“绘制_1”和“渲染_1”；在t_2时刻，一个垂直同步信号2到来；响应于t_2时刻的垂直同步信号2，电子设备执行“图像帧合成_1”；在t_3时刻，一个垂直同步信号3到来；响应于t_3时刻的垂直同步信号3，电子设备执行“图像帧显示_1”。如图2C所示，在t_2时刻，一个垂直同步信号1到来；响应于t_2时刻的垂直同步信号1，电子设备执行“绘制_2”和“渲染_2”。如图2C所示，由于“绘制_2”所需时长较大，导致“绘制_2”和“渲染_2”无法在一个同步周期(如t_2时刻到t_3时刻这一同步周期)内完成。即电子设备在t_3时刻的垂直同步信号2到来之前，未完成“渲染_2”；因此，电子设备只能等待t_4时刻的垂直同步信号2到来，响应于t_4时刻的垂直同步信号2，执行“图像帧合成_2”。如此，电子设备也只能等待t_5时刻的垂直同步信号3到来，响应于t_5时刻的垂直同步信号3，电子设备执行“图像帧显示_2”。
同样的,Render线程渲染图层所花费的时长较大,也会导致“绘制”和“渲染”无法在一个同步周期内完成(附图未示出)。
由图2C可知，在t_4时刻-t_5时刻这一同步周期，显示屏显示图像出现丢帧现象，即显示屏会显示一帧空白图像。而通过本申请实施例的方法，可以避免显示图像出现丢帧现象，以避免显示屏显示一帧空白图像。也就是说，通过本申请实施例的方法可以降低电子设备显示图像时出现丢帧的可能性，可以保证显示屏显示图像的流畅性，从而提升用户的视觉体验。
示例性的,本申请实施例提供的方法的执行主体可以是用于处理图像的装置。该装置可以是上述电子设备中的任一种(例如,该装置可以为图1所示的电子设备100)。或者,该装置还可以为电子设备的中央处理器(英文:Central Processing Unit,简称:CPU),或者电子设备中的用于执行本申请实施例提供的方法的控制模块。
本申请实施例中以上述电子设备(如手机)执行图像处理方法为例,介绍本申请实施例提供的方法。其中,本申请实施例中的垂直同步信号1(如VSYNC_APP信号)是第一垂直同步信号,垂直同步信号2(如VSYNC_SF信号)是第二垂直同步信号,垂直同步信号3(如HW_VSYNC信号)是第三垂直同步信号。
本申请实施例提供一种图像处理方法。如图3所示,该图像处理方法可以包括 S301-S302。
S301、电子设备绘制第一图层,并渲染第一图层,在SF队列缓存渲染后的第一图层。
S302、电子设备在第一时刻之前绘制完第一图层,在第一时刻之前电子设备绘制第二图层,并渲染第二图层,在SF缓存队列缓存渲染后的第二图层。
在本申请实施例的一种情况下,上述第一图层可以是电子设备在一个垂直同步信号1到来的时刻开始绘制的。
例如，第一图层可以是电子设备执行图4A所示的“绘制_1”所绘制的图层1，该图层1是电子设备响应于t_1时刻的垂直同步信号1，在t_1时刻开始绘制的。第二图层可以是电子设备执行图4A或图5A所示的“绘制_1”绘制完图层1之后，执行“绘制_2”所绘制的图层2。
在本申请实施例的另一种情况下,上述第一图层可以是一帧图层绘制结束后,在下一个垂直同步信号1到来之前绘制的。
例如，第一图层可以是电子设备执行图4A所示的“绘制_2”所绘制的图层2。第二图层可以是电子设备执行图4A所示的“绘制_2”绘制完图层2之后，执行“绘制_3”所绘制的图层3。其中，上述图层2(即第一图层)可以是电子设备在上述图层1绘制结束(即电子设备执行完上述“绘制_1”)后，在t_2时刻的垂直同步信号1到来之前，于t_1.4时刻绘制的。其中，t_1.4时刻在t_1时刻之后，在t_2时刻之前。图4A所示的t_1.4时刻与图2C所示的t_x时刻是同一时刻。在图2C所示的t_x时刻，电子设备完成“绘制_1”。
又例如，第一图层可以是电子设备执行图4A所示的“绘制_3”所绘制的图层3。第二图层可以是电子设备执行图4A所示的“绘制_3”绘制完图层3之后，执行“绘制_4”所绘制的图层4。其中，上述图层3(即第一图层)可以是电子设备在上述图层2绘制结束(即电子设备执行完上述“绘制_2”)后，在t_3时刻的垂直同步信号1到来之前，于t_2.4时刻绘制的。其中，t_2.4时刻在t_2时刻之后，在t_3时刻之前。
其中,第一时刻是用于触发电子设备绘制第二图层的垂直同步信号1到来的时刻。
例如，在第一图层是电子设备执行图4A所示的“绘制_1”所绘制的图层1，第二图层是电子设备执行图4A所示的“绘制_2”所绘制的图层2的情况下，上述第一时刻是图4A所示的t_2时刻；在常规技术中，该t_2时刻的垂直同步信号1用于触发电子设备执行“绘制_2”绘制图层2。
又例如，在第一图层是电子设备执行图4A所示的“绘制_2”所绘制的图层2，第二图层是电子设备执行图4A所示的“绘制_3”所绘制的图层3的情况下，上述第一时刻是图4A所示的t_3时刻；在常规技术中，该t_3时刻的垂直同步信号1用于触发电子设备执行“绘制_3”绘制图层3。
一般而言,电子设备的UI线程是基于垂直同步信号1周期性地进行图层的绘制。因此,在常规技术中,电子设备执行S301,即使电子设备的UI线程已经完成第一图层的绘制,但是如果没有检测到垂直同步信号1,电子设备的UI线程是不会绘制第二图层的。电子设备的UI线程在下一个垂直同步信号1到来后,才会开始绘制第二图层。
例如，如图2C所示，在t_1时刻，一个垂直同步信号1到来；响应于t_1时刻的垂直同步信号1，电子设备的UI线程可执行“绘制_1”绘制图层1(即第一图层)，然后由电子设备的Render线程执行“渲染_1”渲染图层1。UI线程在图2C所示的t_x时刻完成“绘制_1”，即完成第一图层的绘制任务。但是，如图2C所示，在t_2时刻，下一个垂直同步信号1到来；响应于t_2时刻的垂直同步信号1，UI线程才可以执行“绘制_2”绘制图层2(即第二图层)，Render线程执行“渲染_2”渲染图层2。
以第一时刻是图2C所示的t_2时刻为例。如图2C所示，如果电子设备在t_2时刻之前的t_x时刻绘制完第一图层(即执行完“绘制_1”)；常规技术中，响应于t_2时刻的垂直同步信号1，才开始绘制第二图层(即执行“绘制_2”)。如此，在图2C所示的t_x时刻-t_2时刻这段时间(如Δt1)内，UI线程处于空闲状态。
本申请实施例中，可以利用UI线程的上述空闲时段(如图2C所示的Δt1这段时间)提前绘制第二图层。如此，可以提前完成第二图层的绘制任务，可以提升电子设备在图2C所示的t_3时刻的垂直同步信号2到来之前完成“渲染_2”的可能性。如此，降低电子设备显示图像时出现丢帧的可能性，可以保证显示屏显示图像的流畅性。具体的，电子设备可以执行S302。本申请实施例这里介绍电子设备执行S302的具体方法。
在本申请实施例的一种实现方式中,电子设备在第一时刻之前绘制完第一图层时,可以在第一图层绘制结束后,立即开始绘制第二图层,并渲染第二图层。具体的,如图4B所示,上述S302可以包括S302a。
S302a、电子设备在第一时刻之前绘制完第一图层,电子设备响应于第一图层绘制结束,绘制第二图层,并渲染第二图层,在所述SF缓存队列缓存渲染后的第二图层。
例如，如图4A所示，在t_1时刻，一个垂直同步信号1到来；响应于t_1时刻的垂直同步信号1，电子设备的UI线程可执行“绘制_1”绘制图层1(即第一图层)，然后由电子设备的Render线程执行“渲染_1”渲染图层1。UI线程在图4A所示的t_1.4时刻完成“绘制_1”，即绘制完图层1。响应于UI线程在图4A所示的t_1.4时刻完成“绘制_1”，UI线程可以从t_1.4时刻开始执行“绘制_2”绘制图层2(即第二图层)，Render线程执行“渲染_2”渲染图层2。而不需要等待t_2时刻的垂直同步信号1，在t_2时刻才开始执行“绘制_2”绘制图层2。
又例如，UI线程在图4A所示的t_2.4时刻完成“绘制_2”，即绘制完图层2(即第一图层)。响应于UI线程在图4A所示的t_2.4时刻完成“绘制_2”，UI线程可以从t_2.4时刻开始执行“绘制_3”绘制图层3(即第二图层)，Render线程执行“渲染_3”渲染图层3。而不需要等待t_3时刻的垂直同步信号1，在t_3时刻才开始执行“绘制_3”绘制图层3。
又例如，UI线程在图4A所示的t_3.4时刻完成“绘制_3”，即绘制完图层3(即第一图层)。响应于UI线程在图4A所示的t_3.4时刻完成“绘制_3”，UI线程可以从t_3.4时刻开始执行“绘制_4”绘制图层4(即第二图层)，Render线程执行“渲染_4”渲染图层4。而不需要等待t_4时刻的垂直同步信号1，在t_4时刻才开始执行“绘制_4”绘制图层4。
如此，如图4A所示，可以在t_3时刻的垂直同步信号2到来之前，完成“绘制_2”和“渲染_2”。这样，可以使得电子设备(如电子设备的合成线程)可以响应于t_3时刻的垂直同步信号2，执行“图像帧合成_2”，进而使得电子设备(如电子设备的LCD)可以响应于t_4时刻的垂直同步信号3，执行“图像帧显示_2”。如此，便可以解决图2C所示的t_4时刻-t_5时刻这一同步周期，显示屏显示图像出现丢帧现象(即显示屏会显示一帧空白图像)的问题。
在本申请实施例的另一种实现方式中,即使电子设备在第一时刻之前绘制完第一图层,电子设备响应于第一图层绘制结束,不一定会立即开始绘制第二图层。具体的,如图5B所示,上述S302可以包括S302b-S302c。
S302b、电子设备在第二时刻之前绘制完第一图层,电子设备从第二时刻开始绘制第二图层,并渲染第二图层,在SF缓存队列缓存渲染后的第二图层。
其中,第二时刻是用于触发电子设备绘制第一图层的垂直同步信号1的信号周期的预设百分比的耗时时刻,该预设百分比小于1。例如,预设百分比可以为50%、33.33%或者40%等任一数值。该预设百分比可以预先配置在电子设备中,也可以由用户在电子设备中设置。以下实施例中,以预设百分比等于33.33%(即1/3)为例,介绍本申请实施例的方法。
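该“第二时刻”的计算可用如下Java片段示意(简化示例；预设百分比取1/3，方法名与变量名均为本文假设)：

```java
// 示例：第二时刻 = 触发绘制第一图层的垂直同步信号1到来时刻 + 信号周期 × 预设百分比
public class SecondMomentDemo {
    static double secondMoment(double vsyncTimeMs, double periodMs, double presetPercent) {
        return vsyncTimeMs + periodMs * presetPercent; // presetPercent须小于1
    }

    public static void main(String[] args) {
        double t1 = 0.0;        // t_1时刻：垂直同步信号1到来
        double period = 16.67;  // 信号周期T_1(ms)，以60Hz为例
        // 预设百分比为1/3时，第二时刻约在t_1之后5.56ms处
        System.out.printf("第二时刻 = %.2f ms%n", secondMoment(t1, period, 1.0 / 3));
    }
}
```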
例如，常规技术中，图5A所示的t_1时刻的垂直同步信号1用于触发电子设备执行“绘制_1”绘制图层1(即第一图层)；上述第二时刻是t_{1/3}时刻，即t_1时刻的垂直同步信号1的信号周期T_1的预设百分比的耗时时刻。具体的，t_1时刻-t_{1/3}时刻这段时长是T_1的预设百分比，如t_1时刻-t_{1/3}时刻这段时长等于T_1的1/3(即T_1/3)。在该示例中，第一时刻是图5A所示的t_2时刻，第二时刻是图5A所示的t_{1/3}时刻，t_{1/3}时刻在t_2时刻之前。
如图5A所示，电子设备执行“绘制_1”在t_1.5时刻绘制完图层1(即第一图层)，t_1.5时刻在t_{1/3}时刻(即第二时刻)之前。也就是说，电子设备在t_{1/3}时刻(即第二时刻)之前绘制完图层1。因此，电子设备可以执行S302b，从t_{1/3}时刻(即第二时刻)开始执行“绘制_2”绘制图层2(即第二图层)。
又例如，常规技术中，图5A所示的t_2时刻的垂直同步信号1用于触发电子设备执行“绘制_2”绘制图层2(即第一图层)；上述第二时刻是t_{2/3}时刻，即t_2时刻的垂直同步信号1的信号周期T_2的预设百分比的耗时时刻。具体的，t_2时刻-t_{2/3}时刻这段时长等于T_2的1/3，即t_2时刻-t_{2/3}时刻这段时长是T_2的预设百分比。在该实施例中，第一时刻是图5A所示的t_3时刻，第二时刻是图5A所示的t_{2/3}时刻，t_{2/3}时刻在t_3时刻之前。
如图5A所示，电子设备执行“绘制_2”在t_2.5时刻绘制完图层2(即第一图层)，t_2.5时刻在t_{2/3}时刻(即第二时刻)之前。也就是说，电子设备在t_{2/3}时刻(即第二时刻)之前绘制完图层2。因此，电子设备可以执行S302b，从t_{2/3}时刻(即第二时刻)开始执行“绘制_3”绘制图层3(即第二图层)。
S302c、电子设备在第一时刻之前,第二时刻之后绘制完所述第一图层,电子设备响应于第一图层绘制结束,绘制第二图层,并渲染第二图层,在SF缓存队列缓存渲染后的第二图层。
例如，常规技术中，图5A所示的t_3时刻的垂直同步信号1用于触发电子设备执行“绘制_3”绘制图层3(即第一图层)；上述第二时刻是t_{3/3}时刻，即t_3时刻的垂直同步信号1的信号周期T_3的预设百分比的耗时时刻。具体的，t_3时刻-t_{3/3}时刻这段时长等于T_3/3，即t_3时刻-t_{3/3}时刻这段时长是T_3的预设百分比。在该实施例中，第一时刻是图5A所示的t_4时刻，第二时刻是图5A所示的t_{3/3}时刻，t_{3/3}时刻在t_4时刻之前。
如图5A所示，电子设备执行“绘制_3”在t_3.5时刻绘制完图层3；其中，t_3.5时刻在t_{3/3}时刻(即第二时刻)之后，在t_4时刻(即第一时刻)之前。因此，电子设备可以执行S302c，响应于电子设备在t_3.5时刻绘制完图层3，在t_3.5时刻执行“绘制_4”绘制图层4(即第二图层)。
本申请实施例中,电子设备可以在SF缓存队列(Buffer)中缓存渲染后的图层。该SF Buffer可以以队列的方式,按照先进先出的原则缓存渲染后的图层。
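为便于理解SF缓存队列“先进先出”的行为，下面用Java的ArrayDeque给出一个简化示意(队列元素、容量等均为本文假设的简化模型，并非SurfaceFlinger的实际实现)：

```java
import java.util.ArrayDeque;

// 示例：以先进先出队列模拟SF Buffer缓存渲染后的图层
public class SfBufferDemo {
    public static void main(String[] args) {
        ArrayDeque<String> sfBuffer = new ArrayDeque<>();
        // Render线程渲染完成后，渲染后的图层依次入队
        sfBuffer.addLast("图层1");
        sfBuffer.addLast("图层2");
        sfBuffer.addLast("图层3");
        // 合成线程响应垂直同步信号2，按入队顺序从队首取出图层进行图层合成
        while (!sfBuffer.isEmpty()) {
            System.out.println("图层合成: " + sfBuffer.pollFirst());
        }
    }
}
```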
例如，结合图5A，如图6所示，电子设备的Render线程执行图5A所示的“渲染_1”得到渲染后的图层1；Render线程可将渲染后的图层1插入SF Buffer；然后，电子设备的Render线程执行图5A所示的“渲染_2”得到渲染后的图层2；Render线程可将渲染后的图层2插入SF Buffer；随后，电子设备的Render线程执行图5A所示的“渲染_3”得到渲染后的图层3；Render线程可将渲染后的图层3插入SF Buffer。其中，SF Buffer按照先进先出的原则缓存图层1、图层2和图层3。也就是说，图6所示的SF Buffer中的图层按照图层1、图层2、图层3的顺序入队，并按照图层1、图层2、图层3的顺序出队。
如图3、图4B或图5B所示，在上述S301或S302之后，本申请实施例的方法还可以包括S303-S304。
S303、电子设备响应于垂直同步信号2,对SF缓存队列中缓存的图层进行图层合成得到图像帧,并缓存合成的图像帧。
S304、电子设备响应于垂直同步信号3,刷新显示缓存的图像帧。
示例性的，在图4A或图5A所示的t_2时刻，一个垂直同步信号2到来；响应于t_2时刻的垂直同步信号2，电子设备的合成线程可执行“图像帧合成_1”对渲染的图层1进行图层合成，得到图像帧1；在图4A或图5A所示的t_3时刻，一个垂直同步信号3到来；响应于t_3时刻的垂直同步信号3，电子设备的LCD可执行“图像帧显示_1”，刷新显示上述图像帧1。
在图4A或图5A所示的t_3时刻，一个垂直同步信号2到来；响应于t_3时刻的垂直同步信号2，电子设备的合成线程可执行“图像帧合成_2”对渲染的图层2进行图层合成，得到图像帧2；在图4A或图5A所示的t_4时刻，一个垂直同步信号3到来；响应于t_4时刻的垂直同步信号3，电子设备的LCD可执行“图像帧显示_2”，刷新显示上述图像帧2。
在图4A或图5A所示的t_4时刻，一个垂直同步信号2到来；响应于t_4时刻的垂直同步信号2，电子设备的合成线程可执行“图像帧合成_3”对渲染后的图层3进行图层合成，得到图像帧3；在图4A或图5A所示的t_5时刻，一个垂直同步信号3到来；响应于t_5时刻的垂直同步信号3，电子设备的LCD可执行“图像帧显示_3”，刷新显示上述图像帧3。
本申请实施例中，S303中所述的“缓存的图层”是指上述SF Buffer中缓存的图层，如图6所示的SF Buffer中缓存的图层。例如，响应于图4A或图5A所示的t_2时刻的垂直同步信号2，电子设备的合成线程可从图6所示的SF Buffer中获取图层1(即图层1从SF Buffer中出队)，执行“图像帧合成_1”对渲染的图层1进行图层合成，得到图像帧1。
其中，S303中所述的“缓存该图像帧”是指将合成的图像帧缓存至帧(Frame)Buffer。其中，该Frame Buffer可以以队列的方式，按照先进先出的原则缓存图像帧。例如，电子设备的合成线程执行图4A或图5A所示的“图像帧合成_1”得到的图像帧1可插入图7A所示的Frame Buffer。然后，电子设备的合成线程执行图4A或图5A所示的“图像帧合成_2”得到的图像帧2可继续插入图7A所示的Frame Buffer；随后，电子设备的合成线程执行图4A或图5A所示的“图像帧合成_3”得到的图像帧3可插入图7A所示的Frame Buffer。
其中，Frame Buffer按照先进先出的原则缓存图像帧1、图像帧2和图像帧3。也就是说，图7A所示的Frame Buffer中的图像帧按照图像帧1、图像帧2、图像帧3的顺序入队，并按照图像帧1、图像帧2、图像帧3的顺序出队。即电子设备执行S304，响应于垂直同步信号3，可按照先进先出的原则刷新显示缓存在Frame Buffer中的图像帧。
综上所述，在常规技术中，如图2C所示，电子设备的UI线程执行图层绘制任务，都是由垂直同步信号1触发的；UI线程在一个同步周期(即一帧内)只能执行一个图层绘制任务。而本申请实施例中，UI线程执行图层绘制任务则不需要垂直同步信号1的触发；UI线程在一个同步周期(即一帧内)可以执行多个图层绘制任务。具体的，如图4A或图5A所示，UI线程在执行完一个图层绘制任务后，可以利用空闲时段，提前执行下一个图层绘制任务；这样，UI线程便可以在一个同步周期(即一帧内)执行多个图层绘制任务。
请参考图7B，其示出电子设备执行常规技术的方案时，本领域技术人员使用安卓通用的SysTrace工具抓取的电子设备绘制多帧图层的时序图。请参考图7C，其示出电子设备执行本申请实施例的方案时，本领域技术人员使用SysTrace工具抓取的电子设备绘制上述多帧图层的时序图。本申请实施例这里通过对比图7B和图7C，可以区分本申请实施例的方案与常规技术方案。其中，SysTrace工具的详细描述可以参考常规技术中的相关描述，这里不予赘述。
假设电子设备的屏幕刷新率为90Hz,垂直同步信号1的信号周期为11.11ms。执行常规技术的方案,电子设备响应于一个垂直同步信号1绘制一帧图层,响应于下一个垂直同步信号1绘制下一帧图层。因此,相邻两帧图层之间的帧间隔等于垂直同步信号1的信号周期(如11.11ms)。当一帧图层的绘制时长大于上述信号周期时,该图层与下一帧图层之间的帧间隔会大于垂直同步信号1的信号周期(如11.11ms)。也就是说,执行常规技术的方案,相邻两帧图层之间的帧间隔不会小于垂直同步信号1的信号周期。如图7B所示,相邻两个图层之间的帧间隔均大于或等于11.11ms,如相邻两个图层之间的帧间隔为11.35ms,11.35ms>11.11ms。
执行本申请实施例的方案,电子设备响应于绘制完一帧图层,不需要等待垂直同步信号1,便可以绘制下一帧图层。因此,相邻两帧图层之间的帧间隔小于垂直同步信号1的信号周期(如11.11ms)。当一帧图层的绘制时长较大时,该图层与下一帧图层之间的帧间隔可能会大于或等于垂直同步信号1的信号周期(如11.11ms)。也 就是说,执行本申请实施例的方案,相邻两帧图层之间的帧间隔会出现小于垂直同步信号1的信号周期的情况。如图7C所示,相邻两个图层之间的帧间隔为1.684ms,1.684ms<11.11ms。
本申请实施例中,电子设备执行完一个图层绘制任务后,可继续执行下一个图层绘制任务,而不是等待垂直同步信号1到来后才执行下一个图层绘制任务。也就是说,电子设备可以利用UI线程的空闲时段(如图2C所示的Δt1这段时间)提前执行下一个图层绘制任务。这样,可以提前完成图层的绘制和渲染任务,可以降低电子设备显示图像时出现丢帧的可能性,可以保证显示屏显示图像的流畅性,从而提升用户的视觉体验。
常规技术中,响应于用户对TP的触摸操作或者UI事件,电子设备可启动上述基于垂直同步信号进行图层绘制、渲染、合成和图像帧显示的流程。本申请实施例中,电子设备也可以响应于用户对TP的触摸操作或者UI事件,启动上述基于垂直同步信号进行图层绘制、渲染、合成和图像帧显示的流程。本申请实施例的方案与常规技术不同的是:在启动上述流程后,电子设备则可以不再基于垂直同步信号1执行图层的绘制任务;而是响应于前一个图层绘制任务完成,继续执行下一个图层绘制任务。
但是,本申请实施例中,电子设备并不是针对所有的触摸操作或者UI事件,均按照S301-S304的流程进行图层绘制、渲染、合成和图像帧显示。本申请实施例中,电子设备在上述触摸操作或者UI事件触发电子设备显示的图像为“确定性动画”的情况下,可以按照S301-S304的流程,进行图层绘制、渲染、合成和图像帧显示。
具体的,在上述S301之前,本申请实施例的方法还可以包括:电子设备接收到第一UI事件。响应于该第一UI事件,电子设备可以唤醒垂直同步信号。在唤醒垂直同步信号后,电子设备便可以执行S301-S304。该第一UI事件用于触发电子设备显示预设图像内容或者以预设方式显示图像内容。其中,上述预设图像内容或者以预设方式显示的图像内容可以称为“确定性动画”。
在一种实现方式中,上述第一UI事件可以是电子设备接收的用户操作。在这种实现方式中,该第一UI事件是一种可以触发电子设备显示预先定义好的图像内容的用户操作(如触摸操作等)。也就是说,该第一UI事件触发电子设备所显示的图像内容,是电子设备可以预先确定的。因此,电子设备可以利用UI线程的空闲时段提前执行图层绘制任务。
举例来说，上述第一UI事件可以是用户在电子设备的显示屏(如触摸屏)输入的抛滑(Fling)操作(也称为Fling手势)。Fling手势是指：用户的手指贴合显示屏滑动，手指离开显示屏后，显示屏显示的动画仍然随“惯性”向手指滑动方向滑动，直到停止的过程。也就是说，电子设备可以按照Fling手势的惯性滑动，计算出电子设备将要显示的图像内容。在这种情况下，电子设备可以利用UI线程的空闲时段提前执行图层绘制任务。
示例性的,请参考图7D,其示出电子设备接收并响应上述抛滑(Fling)操作的过程中,本领域技术人员使用SysTrace工具抓取的电子设备绘制多帧图层的时序图。
其中,电子设备接收并响应Fling操作可以分为图7D所示的落下(Down)、滑动(Move)、抬起(Up)和抛滑(Fling)四个阶段。图7D所示的Down是指用户的手指 落在电子设备的显示屏(如触摸屏)上,电子设备可以检测到用户手指落下(Down)。图7D所示的Move是指用户的手指在显示屏落下后,在显示屏上滑动,电子设备可以检测到用户手指的滑动(Move)。图7D所示的Up是指用户的手指在显示屏滑动一段距离后,离开显示屏,即手指从显示屏抬起,电子设备可以检测到用户手指的抬起(Up)。图7D所示的Fling是指用户的手指抬起后,显示屏显示的动画仍然随“惯性”向手指滑动方向滑动。
可以理解，当用户手指抬起(Up)后，Fling的轨迹可以按照用户手指抬起前的滑动操作根据滑动惯性确定，即Fling的轨迹是可以预估出来的。因此，在图7D所示的Fling阶段，电子设备可以提前绘制图层。如图7D所示，t_o时刻-t_p时刻相邻的两个图层之间的帧间隔，以及t_p时刻-t_q时刻相邻的两个图层之间的帧间隔较小，小于其他时刻相邻的两个图层之间的帧间隔。其中，上述其他时刻相邻的两个图层之间的帧间隔中，帧间隔等于垂直同步信号1的信号周期。由此可见，在图7D所示的Fling阶段，电子设备提前绘制了至少两个图层。
举例来说,上述第一UI事件还可以是用户对前台应用中预设控件的点击操作。前台应用是电子设备的显示屏当前显示的界面对应的应用。响应于用户对该预设控件的点击操作,电子设备所要显示的图像内容是预先定义好的。因此,电子设备可以利用UI线程的空闲时段提前执行图层绘制任务。
例如，以电子设备是手机为例，该手机显示图8A中的(a)所示的电话应用的通话记录界面801。上述第一UI事件可以是用户对该通话记录界面801中的预设控件“通讯录”802的点击操作。用户对预设控件“通讯录”802的点击操作，用于触发手机显示通讯录界面，如图8A中的(b)所示的通讯录界面803。该通讯录界面是预先定义好的。因此，响应于该用户对预设控件“通讯录”802的点击操作，手机可唤醒垂直同步信号，执行本申请实施例的方法。
又例如,以电子设备是手机为例,该手机显示图8B中的(a)所示的主界面804。该主界面804中包括设置应用的图标805。上述第一UI事件可以是用户对图8B中的(a)所示的设置应用的图标805的点击操作。用户对设置应用的图标805(即预设控件)的点击操作,用于触发手机显示设置界面,如图8B中的(b)所示的设置界面806。该设置界面806是预先定义好的。因此,响应于该用户对设置应用的图标805的点击操作,手机可唤醒垂直同步信号,执行本申请实施例的方法。
并且,响应于用户对设置界面806中部分功能选项(如移动网络选项,或者锁屏和密码选项)的点击操作,手机所显示的界面也是预先定义好的。如响应于用户对设置界面806中移动网络选项的点击操作,手机可显示移动网络设置界面。该移动网络设置界面是预先定义好的。因此,响应于该用户对设置界面中部分功能选项的点击操作,手机可唤醒垂直同步信号,执行本申请实施例的方法。
又例如,以电子设备是手机为例,该手机显示图8C中的(a)所示的主界面804。该主界面804中包括**视频应用的图标807。上述第一UI事件可以是用户对图8C中的(a)所示的**视频应用的图标807的点击操作。用户对**视频应用的图标807(即预设控件)的点击操作,用于触发手机显示该**视频应用的首页。一般而言,手机显示该**视频应用的首页之前,可显示图8C中的(b)所示的该**视频应用的广告页面。 该**视频应用的广告页面是预先定义好的。因此,响应于该用户对**视频应用的图标807的点击操作,手机可唤醒垂直同步信号,执行本申请实施例的方法。
在另一种实现方式中,上述第一UI事件可以是电子设备自动触发的UI事件。例如,电子设备的前台应用自动切换画面时,可以触发上述UI事件。前台应用是电子设备的显示屏当前显示的界面对应的应用。
本申请实施例中,电子设备响应于上述第一UI事件,显示“确定性动画”的情况下,可以按照S301-S304的流程,进行图层绘制、渲染、合成和图像帧显示。这样,可以在保证电子设备显示内容准确的前提下,降低电子设备显示图像时出现丢帧的可能性,保证显示屏显示图像的流畅性,从而提升用户的视觉体验。
在另一些实施例中,电子设备开启预设功能后或者进入预设模式后,可以按照S301-S304的流程,进行图层绘制、渲染、合成和图像帧显示。例如,上述预设功能还可以称为提前绘制功能、预先处理功能或者智能图层处理功能等。上述预设模式还可以称为提前绘制模式、预先处理模式或者智能图层处理模式等。
其中,电子设备可以响应于用户对电子设备中预设选项的开启操作,开启上述预设功能或者进入上述预设模式。例如,上述预设选项可以是电子设备的设置界面的功能开关。
由上述实施例可知:电子设备的Render线程渲染的图层缓存在SF Buffer中,由合成线程响应于垂直同步信号2,依次对SF Buffer中缓存的图层进行图层合成。一般而言,电子设备的SF Buffer中可缓存最多N帧的图层,如N=2或N=3。但是,针对本申请实施例的上述方案而言,如果电子设备的SF Buffer只能缓存2帧的图层;那么,则可能会出现SF Buffer中无法缓存电子设备提前绘制并渲染的图层的问题。如此,则会出现电子设备提前绘制并渲染的图层,因为SF Buffer的缓存不足而溢出的现象。
例如，请参考图9、图10A-图10D；图9示出本申请实施例的方法中图层绘制、渲染、合成和图像帧显示的原理示意图；图10A-图10D示出在电子设备执行图9所示的方法的过程中，SF Buffer中图层的入队和出队情况。
如图9所示，电子设备的UI线程响应于t_1时刻的垂直同步信号1可执行“绘制_A”绘制图层A，然后Render线程可执行“渲染_A”对图层A进行渲染。电子设备的Render线程在图9或图10A所示的t_A时刻执行完“渲染_A”。在t_A时刻，如图10A所示，渲染后的图层A在SF Buffer入队。
如图9所示，在t_B时刻之前的t_2时刻，垂直同步信号2到来，响应于t_2时刻的垂直同步信号2，电子设备的合成线程可执行“图像帧合成_A”；因此，在t_2时刻，如图10A所示，图层A从SF Buffer出队，由合成线程执行“图像帧合成_A”。
在图9所示的t_B时刻，Render线程执行完“渲染_B”；因此，如图10B所示，渲染后的图层B于t_B时刻在SF Buffer入队。在图9所示的t_C时刻，Render线程执行完“渲染_C”；因此，如图10B所示，渲染后的图层C于t_C时刻在SF Buffer入队。
并且，在图9所示的t_D时刻，Render线程执行完“渲染_D”；因此，如图10B所示，渲染后的图层D将于t_D时刻在SF Buffer入队。但是，在t_D时刻之后的t_3时刻，下一个垂直同步信号2(即t_2时刻之后的下一个垂直同步信号2)才会到来，图10B所示的图层B才会从SF Buffer出队，由合成线程执行“图像帧合成_B”。也就是说，图层D于t_D时刻在SF Buffer入队时，图层B还未由合成线程执行“图像帧合成_B”而出队。在这种情况下，如图10C所示，图层D于t_D时刻在SF Buffer入队，会导致图层B于t_D时刻从SF Buffer出队，即图层B于t_D时刻从SF Buffer溢出。
如此，响应于图9所示的t_3时刻的垂直同步信号2，则只能是图10D所示的图层C从SF Buffer出队，由合成线程执行“图像帧合成_C”。由图9可知，由于图10C所示的图层B的溢出，则会导致电子设备在t_3时刻-t_4时刻这一同步周期执行“图像帧显示_A”刷新显示图像帧A之后，在下一帧(即t_4时刻-t_5时刻这一同步周期)直接执行“图像帧显示_C”刷新显示图像帧C；而不会刷新显示“渲染_B”对应的图像帧B，出现丢帧的现象，影响电子设备显示图像的连贯性，影响用户视觉体验。
为了解决SF Buffer中图层溢出而影响电子设备显示图像的连贯性的问题,电子设备还可以对SF Buffer的缓存空间进行扩容。例如,电子设备可以将SF Buffer的缓存空间设置为M+p帧。
在一些实施例中,SF Buffer的缓存空间的大小(即M+p)可以是根据该电子设备在预设时间内丢帧的数量确定的。其中,M为设置前SF Buffer的缓存空间的大小;p为电子设备在预设时间内丢帧的数量。
具体的,电子设备可以统计该电子设备在预设时间内执行第一UI事件的过程中丢帧的数量,根据统计的丢帧数量p,设置SF Buffer的缓存空间大小(即M+p)。例如,上述预设时间可以是截止电子设备本次接收第一UI事件之前的一周、一天或者半天内。
在另一些实施例中,M为设置前SF Buffer的缓存空间的大小,p为预设的正整数。p的具体数值可以预先配置在电子设备中;或者,可以由用户设置。例如,p可以等于1、2或3等任一正整数。
在该实施例中,响应于上述第二图层渲染完成,如果SF Buffer中不足以缓存新的图层,电子设备则可以对该SF Buffer进行扩容,增大该SF Buffer。其中,电子设备每次对SF Buffer进行扩容,可将SF Buffer的缓存空间增大p帧。以电子设备预先配置SF Buffer可缓存2帧图层(即M=2),p=1为例。电子设备可以将SF Buffer扩容,使SF Buffer可缓存3帧图层,即M+p=3。
本申请实施例中,可设置SF Buffer的上限值N。具体的,该电子设备最多可将SF Buffer的缓存空间设置为N帧。也就是说,当M+p大于预设上限值N时,电子设备可以将SF Buffer的缓存空间设置为N帧。N的具体数值可以预先配置在电子设备中;或者,可以由用户设置。例如,N可以等于5、6、8、10等任一正整数。
在另一些实施例中,电子设备可以预先配置SF Buffer的缓存空间的大小。例如,电子设备响应于上述第一UI事件,可以按照该第一UI事件,预先配置SF Buffer的缓存空间的大小(即M+p)。例如,M+p可以等于5、6、8、10等任一正整数。
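上述扩容逻辑可以概括为“新容量=min(M+p, N)”，如下Java片段所示(简化示意，方法名与取值均为本文假设)：

```java
// 示例：将SF缓存队列的缓存空间设置为M+p帧，且不超过预设上限N帧
public class BufferResizeDemo {
    static int newBufferSize(int m, int p, int n) {
        return Math.min(m + p, n); // 若M+p大于预设上限值N，则取N
    }

    public static void main(String[] args) {
        int n = 5; // 预设上限值N
        System.out.println("M=2, p=1 -> " + newBufferSize(2, 1, n) + " 帧"); // 3帧
        System.out.println("M=4, p=3 -> " + newBufferSize(4, 3, n) + " 帧"); // 触及上限，5帧
    }
}
```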
示例性的，请参考图10E，其示出电子设备执行常规技术的方案时，本领域技术人员使用安卓通用的SysTrace工具抓取的SF Buffer中缓存帧的变化示意图。请参考图10F，其示出电子设备执行本申请实施例的方案时，本领域技术人员使用SysTrace工具抓取的SF Buffer中缓存帧的变化示意图。本申请实施例这里通过对比图10E和图10F，可以分析出本申请实施例的方案与常规技术方案中SF Buffer的不同变化，以说明电子设备执行本申请实施例的方法可以对SF Buffer进行扩容。
需要说明的是,图10E和图10F所示的每个向上的箭头用于表示SF Buffer中增加一个缓存帧;图10E和图10F所示的每个向下的箭头用于表示SF Buffer中减少一个缓存帧。
其中,电子设备执行常规技术的方案,SF Buffer中缓存帧在每个信号周期只能增加一个缓存帧。并且,电子设备执行常规技术的方案,SF Buffer中缓存帧的数量不超过3个。
例如，在图10E所示的t_1时刻-t_2时刻这一信号周期，SF Buffer增加了1个缓存帧，然后又减少了1个缓存帧，SF Buffer中缓存帧的数量不超过3个。在图10E所示的t_2时刻-t_3时刻这一信号周期，SF Buffer增加了1个缓存帧，然后又减少了1个缓存帧，SF Buffer中缓存帧的数量不超过3个。在图10E所示的t_3时刻-t_4时刻这一信号周期，SF Buffer增加了1个缓存帧，然后又减少了1个缓存帧，SF Buffer中缓存帧的数量不超过3个。
其中，电子设备执行本申请实施例的方法，SF Buffer中缓存帧在每个信号周期可以增加多个缓存帧。并且，电子设备执行本申请实施例的方法，SF Buffer中缓存帧的数量可以超过3个。
例如，在图10F所示的t_a时刻-t_b时刻这一信号周期，SF Buffer增加了2个缓存帧，SF Buffer中至少包括2个缓存帧。在图10F所示的t_b时刻-t_c时刻这一信号周期，SF Buffer减少了1个缓存帧，增加了2个缓存帧，SF Buffer中至少包括3个缓存帧。在图10F所示的t_c时刻-t_d时刻这一信号周期，SF Buffer减少了1个缓存帧，增加了2个缓存帧，SF Buffer中至少包括4个缓存帧。
在另一些实施例中,为了避免SF Buffer中图层溢出而影响电子设备显示图像的连贯性,本申请实施例中,电子设备在执行上述S302之前,可以判断上述SF Buffer是否有足够的缓存空间可用于缓存电子设备提前绘制并渲染的图层。具体的,在S302之前,本申请实施例的方法还可以包括:S1001-S1002。
S1001、电子设备确定SF Buffer的缓存空间和该SF Buffer中缓存帧的数量。
其中,SF Buffer的缓存空间是指SF Buffer中最多能够缓存的图层的数量。该SF Buffer中缓存帧的数量是指该SF Buffer中当前已经缓存的图层的数量。
S1002、电子设备计算SF Buffer的缓存空间与SF Buffer中缓存帧的数量的差值,得到SF Buffer的剩余缓存空间。
例如,假设SF Buffer的缓存空间为3帧,SF Buffer中缓存帧的数量为2帧;那么,SF Buffer的剩余缓存空间则为1帧。
在S1002之后,若SF Buffer的剩余缓存空间大于第一预设门限值,电子设备则可以执行S302。可以理解,如果SF Buffer的剩余缓存空间大于第一预设门限值,则表示SF Buffer的剩余缓存空间足以缓存提前绘制并渲染的图层。在这种情况下,电子设备则可以执行S302,提前绘制并渲染图层。
在S1002之后，若SF Buffer的剩余缓存空间小于第二预设门限值，则表示SF Buffer的剩余缓存空间不足以缓存提前绘制并渲染的图层。在这种情况下，电子设备则不会执行S302，提前绘制并渲染图层；而是按照常规技术中的方式，响应于垂直同步信号1，绘制第二图层，并渲染第二图层，在SF Buffer缓存渲染后的第二图层。
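S1001-S1002及其后的判断逻辑可用如下Java片段示意(门限值取值与命名均为本文假设)：

```java
// 示例：根据SF缓存队列的剩余缓存空间决定是否提前绘制下一帧图层
public class PreDrawDecisionDemo {
    static String decide(int capacity, int cachedFrames, int firstThreshold, int secondThreshold) {
        int remaining = capacity - cachedFrames; // S1002：剩余缓存空间=缓存空间-缓存帧数量
        if (remaining > firstThreshold) {
            return "执行S302：提前绘制并渲染第二图层";
        }
        if (remaining < secondThreshold) {
            return "按常规方式：响应第一垂直同步信号再绘制";
        }
        return "维持当前节奏";
    }

    public static void main(String[] args) {
        // 假设缓存空间3帧，第一预设门限值0，第二预设门限值1
        System.out.println(decide(3, 2, 0, 1)); // 剩余1帧 -> 提前绘制
        System.out.println(decide(3, 3, 0, 1)); // 剩余0帧 -> 等待垂直同步信号
    }
}
```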
需要说明的是,本申请实施例中,电子设备每次在绘制完一个图层(即第一图层)后,在绘制下一个图层(即第二图层)之前,都可以执行S1001-S1002。在S1002之后,若SF Buffer的剩余缓存空间大于第一预设门限值,电子设备则可以执行S302,提前绘制并渲染图层。在S1002之后,若SF Buffer的剩余缓存空间小于第二预设门限值,电子设备则不会执行S302,提前绘制并渲染图层;而是响应于垂直同步信号1,绘制并渲染图层。在电子设备响应于垂直同步信号1,绘制并渲染图层,并将渲染后的图层缓存至SF Buffer的过程中,如果电子设备再一次接收到上述第一UI事件,电子设备可以执行S301-S304。
本申请实施例中，电子设备在SF Buffer的剩余缓存空间大于第一预设门限值的情况下，即SF Buffer的剩余缓存空间足以缓存提前绘制并渲染的图层的情况下，执行本申请实施例的方法提前绘制并渲染图层。这样，可以减少由于SF Buffer的缓存空间不足时提前绘制并渲染图层而丢帧的问题，可以降低电子设备显示图像时出现丢帧的可能性，可以保证显示屏显示图像的连贯性，从而提升用户的视觉体验。
一般而言，安卓的动画原生算法是按照UI线程开始绘制图层的时间来计算该图层的运动距离，并根据该图层的运动距离绘制该图层的。但是，针对本申请实施例中电子设备利用UI线程的空闲时段提前绘制图层的方案而言，采用上述方式计算运动距离，电子设备的显示画面容易出现抖动的现象。
例如，如图11A所示，电子设备响应于t_1时刻的垂直同步信号1执行“绘制_a”绘制图层a。其中，采用安卓的动画原生算法，电子设备可以按照电子设备开始绘制图层a的时间(即t_1时刻)来计算该图层a的运动距离，根据该图层a的运动距离绘制图层a。其中，一个图层的运动距离是指该图层中的图像内容相比于上一帧图层中的图像内容的运动距离。
如图11A所示，电子设备在t_b时刻开始执行“绘制_b”绘制图层b。其中，采用安卓的动画原生算法，电子设备可以按照电子设备开始绘制图层b的时间(即t_b时刻)来计算图层b的运动距离，根据该运动距离绘制图层b。
如图11A所示，电子设备在t_c时刻开始执行“绘制_c”绘制图层c。其中，采用安卓的动画原生算法，电子设备可以按照电子设备开始绘制图层c的时间(即t_c时刻)来计算图层c的运动距离，根据该运动距离绘制图层c。
如图11A所示，电子设备在t_d时刻开始执行“绘制_d”绘制图层d。其中，采用安卓的动画原生算法，电子设备可以按照电子设备开始绘制图层d的时间(即t_d时刻)来计算图层d的运动距离，根据该运动距离绘制图层d。
其中，如果绘制一帧图层所花费的时长过大(如图11A所示，绘制图层c花费的时长过大)，则不仅会出现丢帧的问题，还会导致电子设备开始绘制下一帧图层(如图层d)的时间与电子设备开始绘制图层c的时间差过大。例如，假设垂直同步信号1的信号周期为16.67ms。如图11B中的(a)所示，t_c时刻与t_d时刻的时间差过大，该时间差大于18.17ms。进而，会导致开始绘制图层d的时间与电子设备开始绘制图层c的时间差与同步周期(即垂直同步信号1的信号周期)相差过大。
采用安卓的动画原生算法，按照上述开始绘制图层的时间计算该图层的运动距离时，开始绘制一帧图层的时间与开始绘制上一帧图层的时间的时间差和同步周期(即垂直同步信号1的信号周期)相差越大，这一帧图层的运动距离则越大。
但是,电子设备刷新显示每一帧图像(即一个图像帧)的时长是固定的,为一个同步周期。该同步周期(即垂直同步信号1的信号周期)是电子设备的帧率的倒数。
如此，电子设备以固定的时长(即一个同步周期)分别刷新显示运动距离不同的多帧图像，则会出现显示画面抖动的现象。示例性的，以电子设备的帧率为90Hz为例，则同步周期为11.1ms。以电子设备需要显示火车匀速行驶的动态图像为例，动画原生算法是按照图11A所示的开始绘制各个图层的时间来计算运动距离，电子设备的显示效果为：图11A所示的“绘制_a”对应的一帧图像中火车匀速行驶，“绘制_b”对应的一帧图像中火车匀速行驶，“绘制_c”对应的一帧图像中火车突然加速行驶，“绘制_d”对应的一帧图像中火车突然减速行驶。即电子设备的显示画面出现抖动的现象。
由此可见，上述按照开始绘制图层的时间计算运动距离的方式也不适用于本方案。本申请实施例中，电子设备可以选择性地基于电子设备的同步周期或者开始绘制图层的时间来计算图层的运动距离。具体的，上述S302中电子设备绘制第二图层的方法可以包括S1101。
S1101、电子设备根据垂直同步信号1的信号周期,计算第二图层的运动距离,并根据第二图层的运动距离绘制第二图层。
其中,第二图层的运动距离是第二图层中的图像内容相比于第一图层中的图像内容的运动距离。示例性的,上述S1101可以包括S1101a-S1101b。
S1101a、电子设备根据垂直同步信号1的信号周期,计算第二图层的处理时间。
S1101b、电子设备根据第二图层的处理时间计算第二图层的运动距离,并根据第二图层的运动距离绘制第二图层。
在该实施例的一种实现方式中，当第二图层是电子设备响应于第一UI事件绘制的第i个图层时，第二图层的处理时间为p_{i-1}+T_{i-1}，i≥2，i为正整数。该p_{i-1}为第i-1个图层的处理时间，该T_{i-1}为用于触发电子设备绘制第i-1个图层的垂直同步信号1的信号周期。
示例性的,假设电子设备执行图11A所示的“绘制_a”所绘制的图层a是电子设备响应于第一UI事件绘制的第1个图层;电子设备执行图11A所示的“绘制_b”所绘制的图层b是电子设备响应于第一UI事件绘制的第2个图层;电子设备执行图11A所示的“绘制_c”所绘制的图层c是电子设备响应于第一UI事件绘制的第3个图层;电子设备执行图11A所示的“绘制_d”所绘制的图层d是电子设备响应于第一UI事件绘制的第4个图层。
例如，当第二图层是上述图层b(即电子设备响应于第一UI事件绘制的第2个图层，i=2)时，图层b的处理时间为p_2=p_1+T_1。其中，p_1是电子设备开始绘制上述图层a的时间(如图11A所示的t_1时刻)；那么，p_2则是图11A所示的t_2时刻。如此，电子设备可以按照t_2时刻计算图层b的运动距离，并根据该运动距离绘制图层b。电子设备可以按照t_1时刻计算图层a的运动距离，并根据该运动距离绘制图层a。
又例如，当第二图层是上述图层c(电子设备响应于第一UI事件绘制的第3个图层，i=3)时，图层c的处理时间p_3=p_2+T_2。其中，p_2+T_2为图11A所示的t_3时刻，图层c的处理时间p_3为图11A所示的t_3时刻。如此，电子设备可以按照t_3时刻计算图层c的运动距离，并根据该运动距离绘制图层c。
又例如，当第二图层是上述图层d(电子设备响应于第一UI事件绘制的第4个图层，i=4)时，图层d的处理时间p_4=p_3+T_3。其中，p_3+T_3为图11A所示的t_4时刻，图层d的处理时间p_4为t_4时刻。如此，电子设备可以按照t_4时刻计算图层d的运动距离，并根据该运动距离绘制图层d。
该实现方式中，电子设备可以按照图层的处理时间计算该图层的运动距离。这样，可以保证一帧图层的处理时间与上一帧图层的处理时间的时间差等于垂直同步信号的信号周期(即上述同步周期)。例如，上述图层b的处理时间t_2与图层a的处理时间t_1的时间差为T_1，等于同步周期T_1；图层c的处理时间t_3与图层b的处理时间t_2的时间差为T_2，等于同步周期T_2。这样，可以减少电子设备的显示画面出现抖动现象的可能性。
在该实施例的另一种实现方式中，当第二图层是电子设备响应于第一UI事件绘制的第i个图层时，第二图层的处理时间为Max(p_{i-1}+T_{i-1}, p_i′)，i≥2，i为正整数。该p_{i-1}为第i-1个图层的处理时间，该T_{i-1}为用于触发电子设备绘制第i-1个图层的垂直同步信号1的信号周期；p_i′为电子设备开始绘制第i个图层的时间。
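该取较大值的逻辑可用如下Java片段示意(时间以ms标量表示，变量命名为本文假设)：

```java
// 示例：第i个图层(i>=2)的处理时间 p_i = Max(p_{i-1} + T_{i-1}, p_i′)
public class MaxProcessTimeDemo {
    static double processTime(double prevProcessTime, double period, double drawStartTime) {
        return Math.max(prevProcessTime + period, drawStartTime);
    }

    public static void main(String[] args) {
        double period = 16.67; // 垂直同步信号1的信号周期(ms)
        double p1 = 0.0;       // 第1个图层的处理时间 = 其开始绘制时间
        // 图层b在t_2之前就已开始绘制 -> 取p_1+T_1
        System.out.printf("p_2 = %.2f ms%n", processTime(p1, period, 5.0));     // 16.67
        // 某帧因上一帧绘制耗时过大而很晚才开始绘制 -> 取实际开始绘制时间
        System.out.printf("p_4 = %.2f ms%n", processTime(33.34, period, 60.0)); // 60.00
    }
}
```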
其中，p_1为第1个图层的处理时间，该第1个图层的处理时间等于电子设备开始绘制第1个图层的时间。例如，图层a的处理时间为电子设备开始绘制该图层a的时间(即图11A所示的t_1时刻)。也就是说，电子设备开始绘制第1个图层的时间p_1为图11A所示的t_1时刻。如此，电子设备可以按照t_1时刻计算图层a的运动距离，并根据该运动距离绘制图层a。
例如，当第二图层是上述图层b(电子设备响应于第一UI事件绘制的第2个图层，i=2)时，图层b的处理时间p_2为Max(p_1+T_1, p_2′)。其中，p_2′为电子设备开始绘制第2个图层的时间t_b。由于p_1为图11A所示的t_1时刻；因此，p_1+T_1为图11A所示的t_2时刻。由于t_2大于t_b(即p_2′)；因此，图层b的处理时间p_2为t_2时刻。如此，电子设备可以按照t_2时刻计算图层b的运动距离，并根据该运动距离绘制图层b。
又例如，当第二图层是上述图层c(电子设备响应于第一UI事件绘制的第3个图层，i=3)时，图层c的处理时间p_3为Max(p_2+T_2, p_3′)。其中，p_3′为电子设备开始绘制第3个图层的时间t_c。由于p_2为图11A所示的t_2时刻；因此，p_2+T_2为图11A所示的t_3时刻。由于t_3大于t_c(即p_3′)；因此，图层c的处理时间p_3为t_3时刻。如此，电子设备可以按照t_3时刻计算图层c的运动距离，并根据该运动距离绘制图层c。
又例如，当第二图层是上述图层d(电子设备响应于第一UI事件绘制的第4个图层，i=4)时，图层d的处理时间p_4为Max(p_3+T_3, p_4′)。其中，p_4′为电子设备开始绘制第4个图层的时间t_d。由于p_3为图11A所示的t_3时刻；因此，p_3+T_3为图11A所示的t_4时刻。由于t_d(即p_4′)大于t_4；因此，图层d的处理时间p_4为t_d时刻(即p_4′)。如此，电子设备可以按照t_d时刻计算图层d的运动距离，并根据该运动距离绘制图层d。
电子设备可以采用上述方式计算第二图层的处理时间，并在电子设备的时间缓存队列中保存该第二图层的处理时间。其中，上述时间缓存队列可以按照先进先出的原则缓存各个图层的处理时间。
该实现方式中，电子设备可以选择性地按照开始绘制图层的时间或者图层的处理时间计算该图层的运动距离。这样，可以保证大部分图层的处理时间与上一帧图层的处理时间的时间差等于垂直同步信号的信号周期(即上述同步周期)。例如，上述图层b的处理时间t_2与图层a的处理时间t_1的时间差为T_1，等于同步周期T_1；图层c的处理时间t_3与图层b的处理时间t_2的时间差为T_2，等于同步周期T_2。这样，可以减少电子设备的显示画面出现抖动现象的可能性。
虽然采用该实现方式的方法，可以减少电子设备显示图像时出现丢帧的可能性；但是，不可避免地会因为电子设备绘制部分图层所花费的时长较大，而出现丢帧的可能性。例如，如图11A所示，电子设备执行“绘制_c”所绘制的图层c花费的时长较大，导致电子设备在t_5时刻-t_6时刻出现丢帧。在这种情况下，下一帧图层(如图层d)的处理时间与这一帧图层(如图层c)的处理时间的时间差与同步周期会存在差异。例如，图层d的处理时间t_d与图层c的处理时间t_3的时间差为t_3时刻-t_d时刻这段时间，大于同步周期T_3。但是，一般情况下，电子设备绘制图层花费的时长不会如此大，因此出现这种情况的可能性很低。
示例性的，请参考图11B中的(b)，其示出电子设备按照安卓的动画原生算法计算图11A所示的各个图层的运动距离时，各个图层的运动距离的变化示意图。请参考图11B中的(c)，其示出电子设备执行S1101计算图11A所示的各个图层的运动距离时，各个图层的运动距离的变化示意图。
其中,图11B中的(b)和图11B中的(c)中的横坐标为各个图层的帧数,纵坐标为各个图层的运动距离。其中,图层的帧数用于指示该图层是电子设备绘制的第n个图层,n为正整数。
在图11B中的(b)所示的虚线框1101中，点1102用于表示电子设备执行图11A所示的“绘制_c”所绘制的图层c的运动距离，点1103用于表示电子设备执行图11A所示的“绘制_d”所绘制的图层d的运动距离。按照安卓的动画原生算法，计算图层c和图层d的运动距离，会出现图11B中的(b)所示的前一帧图层c的运动距离(点1102表示的运动距离)较大，而下一帧图层d的运动距离(点1103表示的运动距离)突然变小的现象，即画面抖动的现象。
而电子设备执行S1101计算图11A所示的各个图层的运动距离，则不会出现图11B中的(b)所示的抖动现象。例如，如图11B中的(c)所示，虚线框1104中的黑色曲线较为平滑，没有出现相邻图层的运动距离剧烈变化的现象。
综上所述,采用本申请实施例的方法,可以减少电子设备的显示画面出现抖动现象的可能性。
示例性的，本申请实施例这里结合图12所示的电子设备提前绘制图层，并对SF Buffer进行扩容的过程，介绍上述方法。
如图12所示，电子设备接收到上述第一UI事件后，可以启动垂直同步信号(即VSYNC)；响应于t_1时刻的VSYNC，电子设备的UI线程可以绘制图层1，并由Render线程渲染绘制的图层1；在t_1时刻之后的t_x1时刻，UI线程已绘制完图层1；UI线程可以绘制图层2，并由Render线程渲染绘制的图层2。
Render线程在图12所示的t_s1时刻渲染完图层1，可以将该图层1缓存至SF Buffer。如图12所示，在t_1时刻-t_s1时刻这段时间，SF Buffer中未缓存图层，即SF Buffer中缓存的图层的数量为0。因此，Render线程在t_s1时刻将该图层1缓存至SF Buffer后，SF Buffer中缓存的图层数量变为1。在图12所示的t_x2时刻，UI线程已绘制完图层2；UI线程可以绘制图层3，并由Render线程渲染绘制的图层3。
在图12所示的t_2时刻，VSYNC到来，合成线程可以从SF Buffer中读出上述图层1，对图层1进行图层合成，得到图像帧1；即图层1从SF Buffer中出队，SF Buffer中缓存的图层的数量变为0。在图12所示的t_s2时刻，Render线程渲染完图层2，可以将该图层2缓存至SF Buffer，SF Buffer中缓存的图层的数量变为1。在图12所示的t_x3时刻，UI线程已绘制完图层3；UI线程可以绘制图层4，并由Render线程渲染绘制的图层4。在图12所示的t_s3时刻，Render线程渲染完图层3，可以将该图层3缓存至SF Buffer，SF Buffer中缓存的图层的数量变为2。
在图12所示的t_3时刻，VSYNC到来，电子设备的LCD刷新显示图像帧1；并且，合成线程可以从SF Buffer中读出图层2，对图层2进行图层合成，得到图像帧2；即图层2从SF Buffer中出队。因此，在图12所示的t_3时刻，SF Buffer中缓存的图层的数量可以变为1；但是，在t_3时刻，Render线程渲染完图层4，可以将该图层4缓存至SF Buffer。因此，在t_3时刻，SF Buffer中缓存的图层的数量仍为2。在图12所示的t_3时刻，VSYNC到来，UI线程绘制图层5，由Render线程渲染绘制的图层5。
其中，假设SF Buffer中最多可缓存3帧图层。在t_3时刻，SF Buffer中已经缓存了2帧图层；并且，在t_3时刻，UI线程开始绘制图层5。如果Render线程渲染绘制的图层5缓存至SF Buffer；那么，SF Buffer中的图层数量则可以达到上限值。因此，在t_3时刻后，UI线程绘制完图层5之后，t_4时刻的VSYNC到来之前，UI线程不会提前绘制图层。在图12所示的t_s4时刻，Render线程渲染完图层5，可以将该图层5缓存至SF Buffer，SF Buffer中缓存的图层的数量变为3。
在图12所示的t_4时刻，VSYNC到来，电子设备的LCD刷新显示图像帧2；合成线程可以从SF Buffer中读出图层3，对图层3进行图层合成，得到图像帧3；即图层3从SF Buffer中出队。因此，在图12所示的t_4时刻，SF Buffer中缓存的图层的数量可以变为2。并且，响应于t_4时刻的VSYNC，UI线程可以绘制图层6，由Render线程渲染绘制的图层6。可以理解，如果Render线程渲染绘制的图层6缓存至SF Buffer；那么，SF Buffer中的图层数量则可以达到上限值。因此，在t_4时刻后，UI线程绘制完图层6之后，t_5时刻的VSYNC到来之前，UI线程不会提前绘制图层。在图12所示的t_s5时刻，Render线程渲染完图层6，可以将该图层6缓存至SF Buffer，SF Buffer中缓存的图层的数量变为3。
在图12所示的t_5时刻，VSYNC到来，电子设备的LCD刷新显示图像帧3；合成线程可以从SF Buffer中读出图层4，对图层4进行图层合成，得到图像帧4；即图层4从SF Buffer中出队。因此，在图12所示的t_5时刻，SF Buffer中缓存的图层的数量可以变为2。并且，响应于t_5时刻的VSYNC，UI线程可以绘制图层7，由Render线程渲染绘制的图层7。可以理解，如果Render线程渲染绘制的图层7缓存至SF Buffer；那么，SF Buffer中的图层数量则可以达到上限值。因此，在t_5时刻后，UI线程绘制完图层7之后，t_6时刻的VSYNC到来之前，UI线程不会提前绘制图层。在图12所示的t_s6时刻，Render线程渲染完图层7，可以将该图层7缓存至SF Buffer，SF Buffer中缓存的图层的数量变为3。
在图12所示的t_6时刻，VSYNC到来，电子设备的LCD刷新显示图像帧4；合成线程可以从SF Buffer中读出图层5，对图层5进行图层合成，得到图像帧5；即图层5从SF Buffer中出队。因此，在图12所示的t_6时刻，SF Buffer中缓存的图层的数量可以变为2。并且，响应于t_6时刻的VSYNC，UI线程可以绘制图层8，由Render线程渲染绘制的图层8。可以理解，如果Render线程渲染绘制的图层8缓存至SF Buffer；那么，SF Buffer中的图层数量则可以达到上限值。因此，在t_6时刻后，UI线程绘制完图层8之后，t_7时刻的VSYNC到来之前，UI线程不会提前绘制图层。在图12所示的t_s7时刻，Render线程渲染完图层8，可以将该图层8缓存至SF Buffer，SF Buffer中缓存的图层的数量变为3。
需要说明的是，本申请实施例中，电子设备在第一时刻之前绘制完第一图层，电子设备在第一时刻之前绘制第二图层，可以包括：若电子设备在第一时刻之前绘制完第一图层，则电子设备在第一时刻之前生成XSYNC(也称为XSYNC信号)；电子设备响应于XSYNC，绘制第二图层。例如，如图12所示，电子设备响应于t_x1时刻的XSYNC，绘制图层2；电子设备响应于t_x2时刻的XSYNC，绘制图层3；电子设备响应于t_x3时刻的XSYNC，绘制图层4。
可以理解的是,电子设备可能会接收到用于触发电子设备停止显示上述第一UI事件对应的图像内容的中断事件。此时,SF Buffer中可能还缓存有电子设备提前绘制并渲染的图层。以下实施例中介绍电子设备接收到上述中断事件时,如何处理SF Buffer中缓存的第一UI事件对应的图层。
在一些实施例中,电子设备接收到上述中断事件后,可以不删除SF Buffer中缓存的图层。具体的,如图13所示,在上述S303之前,本申请实施例的方法还可以包括S1301-S1302。
S1301、电子设备接收第二UI事件,该第二UI事件是用于触发电子设备停止显示第一UI事件对应的图像内容的中断(Down)事件。
其中,上述第二UI事件可以是一种可以触发电子设备显示与上述第一UI事件不同的图像内容的用户操作(如触摸操作)。也就是说,该第二UI事件触发电子设备所显示的图像内容,与第一UI事件触发电子设备所显示的图像内容不同。
需要说明的是,上述第二UI事件可以是触发电子设备显示的图像为“确定性动画”的UI事件;也可以是触发电子设备显示除上述“确定性动画”之外的其他任一图像内容的UI事件。
可以理解,电子设备响应于第一UI事件显示对应的图像内容的过程中,如果接收到另一个UI事件(如第二UI事件),则表示用户想要操作电子设备显示其他的图像内容(即第二UI事件对应的图层内容)。
S1302、电子设备响应于第二UI事件,停止绘制第一UI事件的图层,并响应于垂直同步信号1,绘制第二UI事件的第三图层,渲染第三图层,在SF缓存队列中缓存渲染后的第三图层。
例如，如图12所示，电子设备在t_Down时刻接收到Down事件(即第二UI事件)。响应于该Down事件，电子设备的UI线程停止绘制第一UI事件的图层(如图12所示的图层8之后的图层9)；响应于垂直同步信号1(如t_7时刻的VSYNC)，UI线程绘制图层1′，Render线程渲染绘制的图层1′。
并且，响应于t_7时刻的VSYNC，电子设备的LCD刷新显示图像帧5；合成线程可以从SF Buffer中读出图层6，对图层6进行图层合成，得到图像帧6；即图层6从SF Buffer中出队。因此，在图12所示的t_7时刻，SF Buffer中缓存的图层的数量可以变为2。在图12所示的t_s8时刻，Render线程渲染完图层1′，可以将该图层1′缓存至SF Buffer，SF Buffer中缓存的图层的数量变为3。
在图12所示的t_8时刻，VSYNC到来，电子设备的LCD刷新显示图像帧6；合成线程可以从SF Buffer中读出图层7，对图层7进行图层合成，得到图像帧7；即图层7从SF Buffer中出队。因此，在图12所示的t_8时刻，SF Buffer中缓存的图层的数量可以变为2。并且，响应于t_8时刻的VSYNC，UI线程可以绘制图层2′，由Render线程渲染绘制的图层2′。在图12所示的t_s9时刻，Render线程渲染完图层2′，可以将该图层2′缓存至SF Buffer，SF Buffer中缓存的图层的数量变为3。
在图12所示的t_9时刻，VSYNC到来，电子设备的LCD刷新显示图像帧7；合成线程可以从SF Buffer中读出图层8，对图层8进行图层合成，得到图像帧8；即图层8从SF Buffer中出队。因此，在图12所示的t_9时刻，SF Buffer中缓存的图层的数量可以变为2。并且，响应于t_9时刻的VSYNC，UI线程可以绘制图层3′，由Render线程渲染绘制的图层3′。在图12所示的t_s10时刻，Render线程渲染完图层3′，可以将该图层3′缓存至SF Buffer，SF Buffer中缓存的图层的数量变为3。
在图12所示的t_10时刻，VSYNC到来，合成线程可以从SF Buffer中读出图层1′，对图层1′进行图层合成，得到图像帧1′；即图层1′从SF Buffer中出队。
其中，上述图层1′、图层2′和图层3′均为第三图层。如图12所示，电子设备在t_Down时刻接收到Down事件时，SF Buffer中缓存有2帧图层(图层6和图层7)；并且，Render线程正在渲染图层8。UI线程在t_7时刻开始绘制Down事件的图层时，SF Buffer中缓存有3帧图层(图层6、图层7和图层8)。
由图12和上述描述可知:在该实施例中,电子设备接收到Down事件后,可以不删除SF Buffer中缓存的第一UI事件的图层(如图层6、图层7和图层8);而是继续响应于VSYNC合成SF Buffer中的图层,并刷新显示合成的图像帧。
可以理解的是，采用上述不删除SF Buffer中缓存的第一UI事件的图层的方案，可能会因为SF Buffer中缓存了较多第一UI事件的图层，而导致电子设备延迟显示第二UI事件的图像内容，电子设备的触摸响应延迟较大，电子设备的跟手性能较差。其中，从“用户手指在触摸屏输入触摸操作”到“触摸屏显示该触摸操作对应的图像被人眼感知”的延迟时间可以称为触摸响应延迟。电子设备的跟手性能可以体现为触摸响应延迟的长度。具体的，触摸响应延迟越长，跟手性能越差；触摸响应延迟越短，跟手性能越好。其中，电子设备的跟手性能越好，用户通过触摸操作控制电子设备的使用体验越好，感觉越流畅。
为了缩短电子设备的触摸响应延迟，提升电子设备的跟手性能，在另一些实施例中，电子设备接收到上述中断事件后，可以删除SF Buffer中缓存的部分或全部图层。
在该实施例中,电子设备可以删除SF Buffer中缓存的部分图层。具体的,如图14所示,在上述S1302之后,电子设备可以不执行S303-S304,而是执行S1303。
S1303、从接收到第二UI事件开始,电子设备响应于垂直同步信号2,判断SF缓存队列中是否包括第一UI事件的图层。
具体的,在S1303之后,若SF缓存队列中包括第一UI事件的图层,电子设备可以执行S1304和S303-S304;若SF缓存队列不包括第一UI事件的图层,电子设备可以执行S303-S304。
S1304、电子设备删除SF缓存队列中缓存的第一UI事件的图层。
其中,假设SF缓存队列(即SF Buffer)中缓存的P帧图层为第一UI事件的图层。在一些实施例中,电子设备可以删除SF缓存队列中缓存的该P帧图层中的Q帧图层,并对删除上述Q帧图层后SF缓存队列的队首的一帧图层进行图层合成得到图像帧,并缓存合成的图像帧。其中,P帧图层为第一UI事件的图层,Q≤P,P和Q均为正整数。
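下面的Java片段示意“删除P帧中队首的Q帧，再合成新队首一帧”的处理(以Q=2为例；队列与元素类型均为本文假设的简化模型)：

```java
import java.util.ArrayDeque;

// 示例：中断事件到来时，删除SF缓存队列队首的Q帧第一UI事件图层，再合成新队首图层
public class DropFramesDemo {
    public static void main(String[] args) {
        ArrayDeque<String> sfBuffer = new ArrayDeque<>();
        sfBuffer.addLast("图层6");
        sfBuffer.addLast("图层7");
        sfBuffer.addLast("图层8"); // P=3帧均为第一UI事件的图层

        int q = 2; // 本次垂直同步信号2到来时删除的帧数，Q<=P
        for (int i = 0; i < q && !sfBuffer.isEmpty(); i++) {
            System.out.println("删除: " + sfBuffer.pollFirst());
        }
        // 对删除后SF缓存队列队首的一帧图层进行图层合成
        if (!sfBuffer.isEmpty()) {
            System.out.println("图层合成: " + sfBuffer.pollFirst());
        }
    }
}
```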
例如，如图15或图17所示，电子设备在t_Down时刻接收到Down事件(即第二UI事件)。其中，图15或图17中电子设备接收到Down事件之前，进行图层绘制、图层渲染、图层合成和图像帧显示的过程与图12所示的过程相同，图15或图17中电子设备绘制并渲染图层1′、图层2′和图层3′的过程与图12所示的过程相同，本申请实施例这里不再赘述。
从图15或图17所示的t_Down时刻接收到Down事件开始，响应于t_7时刻的VSYNC(包括垂直同步信号2)，电子设备可以判断SF缓存队列中是否包括第一UI事件的图层。其中，在图15所示的t_Down时刻之后、t_7时刻之前的t_s7时刻，如图16A所示，SF Buffer中缓存了3帧图层，这3帧图层包括图层6、图层7和图层8，图层6、图层7和图层8是第一UI事件的图层。也就是说，结合图15或图17，电子设备执行S1303，可以确定SF缓存队列中包括第一UI事件的图层；并且，SF缓存队列中缓存了3帧第一UI事件的图层，即P=3。
在该实施例的一种实现方式中,电子设备执行S1304可以隔帧删除SF Buffer中缓存的第一UI事件的图层。在该实施例中,Q=1。
例如，在图15所示的t_s7时刻，如图16A所示，SF缓存队列中缓存了3帧图层(包括图层6、图层7和图层8)。图层6、图层7和图层8是第一UI事件的图层。因此，响应于图15所示的t_7时刻的VSYNC，电子设备(如电子设备的合成线程)可以删除SF Buffer中缓存的3帧图层中的1帧图层(即SF缓存队列的队首的图层6)；并且，电子设备(如电子设备的合成线程)可以对删除上述图层6后SF缓存队列的队首的一帧图层(即图层7)进行图层合成得到图像帧7，并缓存合成的图像帧7。例如，如图16B所示，在t_7时刻，图层6从SF Buffer出队被删除，图层7从SF Buffer出队用于合成图像帧7，SF Buffer中只剩下图层8。如图15所示，在t_7时刻，SF Buffer中缓存的图层的数量变为1。
在图15所示的t_s8时刻，Render线程渲染完图层1′，可以将该图层1′缓存至SF Buffer，SF Buffer中缓存的图层的数量变为2。响应于t_s8时刻之后t_8时刻的VSYNC，电子设备执行S1303，可以确定SF Buffer中缓存有第一UI事件的图层8。电子设备(如电子设备的合成线程)可以执行S1304删除图层8，并对图层1′进行图层合成；如图16C所示，在t_8时刻，图层8从SF Buffer出队被删除，图层1′从SF Buffer出队用于合成图像帧1′，SF Buffer中缓存的图层的数量变为0。如图15所示，在t_8时刻，SF Buffer中缓存的图层的数量变为0。
在图15所示的t_s9时刻，Render线程渲染完图层2′，可以将该图层2′缓存至SF Buffer，SF Buffer中缓存的图层的数量变为1。响应于t_s9时刻之后t_9时刻的VSYNC，电子设备执行S1303，可以确定SF Buffer中仅缓存有第二UI事件的图层2′，未缓存第一UI事件的图层。电子设备(如电子设备的合成线程)可以执行S303对图层2′进行图层合成；如图16D所示，在t_9时刻，图层2′从SF Buffer出队用于合成图像帧2′，SF Buffer中缓存的图层的数量变为0。如图15所示，在t_9时刻，SF Buffer中缓存的图层的数量变为0。
在图15所示的t_s10时刻，Render线程渲染完图层3′，可以将该图层3′缓存至SF Buffer，SF Buffer中缓存的图层的数量变为1。响应于t_s10时刻之后t_10时刻的VSYNC，电子设备执行S1303，可以确定SF Buffer中仅缓存有第二UI事件的图层3′，未缓存第一UI事件的图层。电子设备(如电子设备的合成线程)可以执行S303对图层3′进行图层合成；在t_10时刻，图层3′从SF Buffer出队用于合成图像帧3′，SF Buffer中缓存的图层的数量变为0。
在该实施例的另一种实现方式中,在P≥2的情况下,电子设备执行S1304每次可删除SF Buffer中缓存的第一UI事件的多帧图层,即Q≥2。例如,以下实施例中以P=3,Q=2为例,介绍本实施例的方法。
例如，在图17所示的t_s7时刻，如图16A所示，SF缓存队列中缓存了3帧图层(包括图层6、图层7和图层8)。图层6、图层7和图层8是第一UI事件的图层。因此，响应于图17所示的t_7时刻的VSYNC，电子设备(如电子设备的合成线程)可以删除SF Buffer中缓存的3帧图层中的2帧图层(即SF缓存队列的队首的图层6和图层7)；并对删除上述图层6和图层7后SF缓存队列的队首的一帧图层(即图层8)进行图层合成得到图像帧8，并缓存合成的图像帧8。例如，如图18A所示，在t_7时刻，图层6从SF Buffer出队被删除，图层7从SF Buffer出队被删除，图层8从SF Buffer出队用于合成图像帧8，SF Buffer中缓存的图层的数量变为0。如图17所示，在t_7时刻，SF Buffer中缓存的图层的数量变为0。
在图17所示的t_s8时刻，Render线程渲染完图层1′，可以将该图层1′缓存至SF Buffer，SF Buffer中缓存的图层的数量变为1。响应于t_s8时刻之后t_8时刻的VSYNC，电子设备执行S1303，可以确定SF Buffer中仅缓存有第二UI事件的图层1′，未缓存第一UI事件的图层。电子设备可以执行S303对图层1′进行图层合成；如图18B所示，在t_8时刻，图层1′从SF Buffer出队用于合成图像帧1′，SF Buffer中缓存的图层的数量变为0。如图17所示，在t_8时刻，SF Buffer中缓存的图层的数量变为0。
在图17所示的t_s9时刻，Render线程渲染完图层2′，可以将该图层2′缓存至SF Buffer，SF Buffer中缓存的图层的数量变为1。响应于t_s9时刻之后t_9时刻的VSYNC，电子设备执行S1303，可以确定SF Buffer中仅缓存有第二UI事件的图层2′，未缓存第一UI事件的图层。电子设备可以执行S303对图层2′进行图层合成；在t_9时刻，图层2′从SF Buffer出队用于合成图像帧2′，SF Buffer中缓存的图层的数量变为0。
在图17所示的t_s10时刻，Render线程渲染完图层3′，可以将该图层3′缓存至SF Buffer，SF Buffer中缓存的图层的数量变为1。响应于t_s10时刻之后t_10时刻的VSYNC，电子设备执行S1303，可以确定SF Buffer中仅缓存有第二UI事件的图层3′，未缓存第一UI事件的图层。电子设备可以执行S303对图层3′进行图层合成；在t_10时刻，图层3′从SF Buffer出队用于合成图像帧3′，SF Buffer中缓存的图层的数量变为0。
在该实施例中,电子设备响应于一个垂直同步信号2(如上述VSYNC),电子设备可以一次处理第一UI事件的多帧图层。这样,可以缩短电子设备响应第二UI事件的触摸响应延迟,可以提升电子设备的跟手性能。
在另一些实施例中,为了缩短电子设备的触摸响应延迟,提升电子设备的跟手性能,电子设备可以为上述第一UI事件(即“确定性动画”对应的UI事件)的图层添加第一标记位,然后在接收到上述中断事件(即第二UI事件)时,可以删除SF Buffer中缓存的带有第一标记位的图层。
具体的,本申请实施例的方法还可以包括S1901-S1902和S1301-S1302。其中,在S1902之后,电子设备可以执行S303-S304。
S1901、电子设备为第一UI事件的每一帧图层设置第一标记位,该第一标记位用于指示对应的图层是第一UI事件的图层。
其中,电子设备的UI线程可以在绘制完第一UI事件的一帧图层后,为这一帧图层添加第一标记位。例如,电子设备执行S301,UI线程绘制完第一图层后,UI线程可以为该第一图层添加第一标记位。电子设备执行S301,UI线程绘制完第二图层后,UI线程可以为该第二图层添加第一标记位。
S1902、从接收到第二UI事件开始,电子设备响应于垂直同步信号2,删除SF缓存队列中设置有第一标记位的图层。
示例性的,本申请实施例这里介绍S1902的具体实现方法。上述S1902可以包括:响应于第二UI事件,电子设备触发预设查询事件;响应于预设查询事件,电子设备设置第二标记位,并在SF缓存队列中不包括设置有第一标记位的图层时删除第二标记位。其中,该第二标记位用于触发电子设备响应于垂直同步信号2删除SF缓存队列中设置有第一标记位的图层。可以理解,电子设备设置第二标记位后,响应于垂直同步信号2便可以删除SF缓存队列中设置有第一标记位的图层;电子设备删除第二标记位后,响应于垂直同步信号2便可以不执行“删除SF缓存队列中设置有第一标记位的图层”的操作,而是继续对SF Buffer中缓存的图层进行图层合成。
具体的,电子设备的UI线程接收到上述第二UI事件(即中断事件)后,可以向合成线程触发预设查询事件。合成线程响应于该预设查询事件,在接收到垂直同步信号2时便可以删除SF缓存队列中设置有第一标记位的图层,并在SF缓存队列中不包括设置有第一标记位的图层时删除第二标记位。其中,上述第二标记位也可以称为Delete标记位。
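第一标记位与第二标记位的配合可用如下Java片段示意(Layer类、标志位字段均为本文假设的简化模型)：

```java
import java.util.ArrayDeque;

// 示例：为第一UI事件的图层设置第一标记位，中断事件到来后按标记位删除
public class MarkedLayerDemo {
    static class Layer {
        final String name;
        final boolean firstUiEvent; // 第一标记位：该图层是否属于第一UI事件

        Layer(String name, boolean firstUiEvent) {
            this.name = name;
            this.firstUiEvent = firstUiEvent;
        }
    }

    public static void main(String[] args) {
        ArrayDeque<Layer> sfBuffer = new ArrayDeque<>();
        sfBuffer.addLast(new Layer("图层6", true));
        sfBuffer.addLast(new Layer("图层7", true));
        sfBuffer.addLast(new Layer("图层1'", false)); // 第二UI事件的图层

        boolean deleteFlag = true; // 第二标记位：接收到中断事件后由预设查询事件设置
        if (deleteFlag) {
            sfBuffer.removeIf(layer -> layer.firstUiEvent); // 删除设置有第一标记位的图层
            deleteFlag = false; // 队列中已无第一标记位的图层，删除第二标记位
        }
        sfBuffer.forEach(l -> System.out.println("保留: " + l.name));
    }
}
```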
例如，如图19所示，电子设备在t_Down时刻接收到Down事件(即第二UI事件)。其中，图19中电子设备接收到Down事件之前，进行图层绘制、图层渲染、图层合成和图像帧显示的过程与图12所示的过程相同，本申请实施例这里不再赘述。
从图19所示的t_Down时刻接收到Down事件开始，响应于t_7时刻的VSYNC(包括垂直同步信号2)，电子设备(如电子设备的合成线程)可以删除SF缓存队列中设置有第一标记位的图层。其中，在图19所示的t_Down时刻之后、t_7时刻之前的t_s7时刻，如图16A所示，SF Buffer中缓存了3帧图层(包括图层6、图层7和图层8)。图层6、图层7和图层8是第一UI事件的图层，该图层6、图层7和图层8均设置有第一标记位。因此，电子设备(如电子设备的合成线程)可以删除图层6、图层7和图层8。删除图层6、图层7和图层8之后，SF Buffer中缓存的图层的数量变为0；因此，响应于图19所示的t_7时刻的VSYNC(如垂直同步信号2)，合成线程不会执行图层合成。响应于图19所示的t_7时刻的VSYNC(如垂直同步信号3)，电子设备的LCD可以刷新显示图像帧5。由于t_7时刻-t_8时刻这段时间，电子设备(如电子设备的合成线程)没有执行图层合成，也不会在SF Buffer中缓存新的图像帧；因此，响应于图19所示的t_8时刻的VSYNC(包括垂直同步信号3)，电子设备的LCD只能继续显示图像帧5。
需要注意的是,在一些实施例中,电子设备可能需要处理多个VSYNC信号(如垂直同步信号2),才可以完全删除SF Buffer中缓存的设置有第一标记位的图层。
例如，如图20所示，电子设备在t_Down时刻接收到Down事件(即第二UI事件)。其中，图20中电子设备接收到Down事件之前，进行图层绘制、图层渲染、图层合成和图像帧显示的过程与图12所示的过程相同，本申请实施例这里不再赘述。在图20所示的t_7时刻的VSYNC(如垂直同步信号2)到达时，Render线程还未渲染完图层8；因此，响应于图20所示的t_7时刻的VSYNC(如垂直同步信号2)，合成线程只能删除SF Buffer中缓存的图层6和图层7。在图20所示的t_8时刻的VSYNC(如垂直同步信号2)到达时，Render线程已经渲染完图层8，并将图层8缓存至SF Buffer。因此，响应于图20所示的t_8时刻的VSYNC(如垂直同步信号2)，合成线程可以删除SF Buffer中缓存的图层8。并且，在图20所示的t_8时刻的VSYNC(如垂直同步信号2)到达时，Render线程已经渲染完图层1′，并将图层1′缓存至SF Buffer。因此，响应于图20所示的t_8时刻的VSYNC(如垂直同步信号2)，合成线程可以对图层1′进行图层合成得到图像帧1′。
由上述描述可知：在图20中，电子设备处理了2个VSYNC信号(如t_7时刻的VSYNC和t_8时刻的VSYNC)，才完全删除SF Buffer中缓存的设置有第一标记位的图层。
在该实施例中，电子设备可以在接收到中断事件后，响应于一个垂直同步信号2，删除SF Buffer中缓存的第一UI事件的图层。如此，在下一个垂直同步信号2到来后，电子设备便可以直接合成中断事件的图层。这样，可以缩短电子设备响应第二UI事件的触摸响应延迟，可以提升电子设备的跟手性能。
由上述实施例可知：电子设备是按照各个图层的处理时间计算对应图层的运动距离的。并且，电子设备可以将各个图层的处理时间缓存在时间缓存队列中。电子设备执行上述流程，删除SF Buffer中缓存的第一UI事件的图层之后，如果电子设备绘制的图层不回退到电子设备删除的第1帧图层(如图层6)的前一帧图层(如图层5)，则可能会导致电子设备显示的图像内容的大幅度跳变，影响用户体验。
例如，结合上述实施例，如图19或图20所示，电子设备删除了SF Buffer中缓存的图层6、图层7和图层8。电子设备删除了图层6、图层7和图层8之后，该电子设备所显示的图像帧是图层5对应的图像帧5。但是，电子设备的UI线程已经处理到了图层8。也就是说，UI线程的处理逻辑已经到了图层8。如果电子设备按照图层8的处理时间来计算下一帧图层的处理时间，然后根据计算得到的该下一帧图层的处理时间计算运动距离，则会出现电子设备的显示画面由图层5对应的运动距离直接跳变至图层8对应的运动距离，电子设备显示的图像内容出现大幅度跳变。基于此，本申请实施例的方法中，电子设备还可以重新绘制第四图层，以将电子设备绘制图层的逻辑回退至第四图层，并获取该第四图层的处理时间。
其中，该第四图层是电子设备接收到第二UI事件时，电子设备正在显示的图像帧对应的图层的下一帧图层。例如，如图20所示，电子设备的UI线程在t_Down时刻接收到Down事件(即第二UI事件)。在t_Down时刻，电子设备显示图像帧4。第四图层是图像帧4对应的图层4的下一帧图层，即图层5。如图20所示，电子设备可以重新绘制图层5，以将电子设备绘制图层的逻辑回退至图层5。
或者，该第四图层包括电子设备接收到第二UI事件时，电子设备正在显示的图像帧对应的图层，以及电子设备正在显示的图像帧对应的图层的下一帧图层。例如，如图22A所示，电子设备的UI线程在t_Down时刻接收到Down事件(即第二UI事件)。在t_Down时刻，电子设备显示图像帧4。第四图层包括图像帧4对应的图层4和图像帧4对应的图层4的下一帧图层(即图层5)。如图22A所示，电子设备可以重新绘制图层4和图层5，以将电子设备绘制图层的逻辑回退至图层4和图层5。
但是，需要说明的是，电子设备不会再渲染该第四图层，该第四图层的处理时间用于电子设备计算第四图层的运动距离。例如，如图20所示，t_Down时刻之后，电子设备没有再渲染图层5。又例如，如图22A所示，t_Down时刻之后，电子设备没有再渲染图层4和图层5。
在另一些实施例中,结合上述在第一UI事件(即“确定性动画”对应的UI事件)的图层添加第一标记位,响应于中断事件(即第二UI事件),删除SF Buffer中缓存的带有第一标记位的图层的方案,电子设备可以响应于上述预设查询事件,查询SF Buffer中缓存的设置有第一标记位的图层的数量,以及电子设备接收到第二UI事件时待缓存至SF缓存队列中的图层的数量,计算查询到的数量之和H。然后,电子设备可以根据计算得到的H,确定上述第四图层。
示例性的,响应于上述预设查询事件,电子设备的合成线程可以查询SF Buffer中缓存的设置有第一标记位的图层的数量,以及电子设备的UI线程接收到第二UI事件时待缓存至SF缓存队列中的图层的数量,计算查询到的数量之和H。
例如，如图19、图20或图22A所示，电子设备的UI线程在t_Down时刻接收到Down事件(即第二UI事件)。UI线程可以向合成线程触发预设查询事件，合成线程在t_Down时刻查询到SF Buffer中缓存的设置有第一标记位的图层(如图层6和图层7)的数量为2；合成线程在t_Down时刻查询到UI线程接收到第二UI事件时待缓存至SF缓存队列中的图层(图层8)的数量为1。电子设备可以计算查询到的数量之和H=3。
其中,第四图层可以为电子设备接收到第二UI事件时,沿着SF Buffer的队尾向队首的方向,从SF Buffer的队尾的一帧图层开始数的第H+h帧图层;其中,h=0,或者h依次在{0,1}中取值。
其中，在图19、图20或图22A所示的t_Down时刻，SF Buffer中缓存的图层如图21或图22B所示。如图21或图22B所示，SF Buffer中缓存有图层6和图层7；图层6位于队首，图层7位于队尾。
在实现方式(1)中，h=0。结合图19或图20，H=3，H+h=3。在这种实现方式中，第四图层是图19或图20所示的图层5。例如，如图21所示，第四图层为电子设备接收到第二UI事件时(即t_Down时刻)，沿着SF Buffer的队尾向队首的方向，从SF Buffer的队尾的一帧图层(即图层7)开始数的第3(即H+h=3)帧图层，如图21所示的图层5。如图19或图20所示，电子设备的UI线程在t_Down时刻-t_7时刻这段时间可以重新绘制图层5。
在实现方式(2)中，h依次在{0,1}中取值。结合图22A，H=3，H+h依次为3和4。在这种实现方式中，第四图层是图22A所示的图层4和图层5。例如，如图22B所示，第四图层包括电子设备接收到第二UI事件时(即t_Down时刻)，沿着SF Buffer的队尾向队首的方向，从SF Buffer的队尾的一帧图层(即图层7)开始数的第H+h(如3和4)帧图层，如图22B所示的图层4和图层5。如图22A所示，电子设备的UI线程在t_Down时刻-t_7时刻这段时间可以重新绘制图层4和图层5。
需要注意的是，虽然电子设备(如电子设备的UI线程)重新绘制了第四图层(如图19或图20所示的图层5，或图22A所示的图层4和图层5)；但是，电子设备(如电子设备的Render线程)不会再渲染该第四图层。例如，如图19或图20所示，在UI线程于t_7时刻绘制完图层5之后，Render线程没有渲染图层5。又例如，如图22A所示，在UI线程于t_7时刻绘制完图层4和图层5之后，Render线程没有渲染图层4和图层5。
其中,电子设备重新绘制第四图层,是为了将电子设备的绘制图层的逻辑(即UI线程的处理逻辑)回退至第四图层。该第四图层的处理时间用于计算运动距离。可以理解,将电子设备的绘制图层的逻辑回退至第四图层,并按照第四图层的处理时间计算运动距离,可以避免电子设备显示的图像内容出现的大幅度跳变。
需要说明的是,在一些情况下,电子设备响应于第一UI事件所显示的动画是方向性动画(如物体向一个方向运动的动画)。在这种情况下,电子设备的UI线程如图20所示绘制图层8之后,再重新绘制图层5,则按照图层8到图层5的运动方向,物体的运动方向与上述方向性动画中物体的运动方向相反。针对这种情况,采用上述实现方式(2)的方案,先重新绘制图层4,再重新绘制图层5,则可以解决物体的运动方向与上述方向性动画中物体的运动方向相反的问题。如图22A所示,虽然按照图层8到图层4的运动方向,物体的运动方向与上述方向性动画中物体的运动方向相反;但是,按照图层4到图层5的运动方向,物体的运动方向与上述方向性动画中物体的运动方向相同。
本实施例中,电子设备删除SF Buffer中缓存的第一UI事件的图层之后,可以重新绘制第一UI事件的第四图层。这样,则可能提升电子设备显示的图像内容的连贯性,提升用户体验。
本申请一些实施例提供了一种电子设备,该电子设备可以包括:显示屏(如触摸屏)、存储器和一个或多个处理器。该显示屏、存储器和处理器耦合。该存储器用于存储计算机程序代码,该计算机程序代码包括计算机指令。当处理器执行计算机指令时,电子设备可执行上述方法实施例中电子设备执行的各个功能或者步骤。该电子设备的结构可以参考图1所示的电子设备100的结构。
本申请实施例还提供一种芯片系统,如图23所示,该芯片系统2300包括至少一个处理器2301和至少一个接口电路2302。处理器2301和接口电路2302可通过线路互联。例如,接口电路2302可用于从其它装置(例如电子设备的存储器)接收信号。又例如,接口电路2302可用于向其它装置(例如处理器2301或者电子设备的触摸屏)发送信号。示例性的,接口电路2302可读取存储器中存储的指令,并将该指令发送给处理器2301。当所述指令被处理器2301执行时,可使得电子设备执行上述实施例中的各个步骤。当然,该芯片系统还可以包含其他分立器件,本申请实施例对此不作具体限定。
本申请实施例还提供一种计算机存储介质,该计算机存储介质包括计算机指令,当所述计算机指令在上述电子设备上运行时,使得该电子设备执行上述方法实施例中电子设备执行的各个功能或者步骤。
本申请实施例还提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行上述方法实施例中电子设备执行的各个功能或者步骤。该计算机可以是上述电子设备。
通过以上实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时，可以存储在一个可读取存储介质中。基于这样的理解，本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来，该软件产品存储在一个存储介质中，包括若干指令用以使得一个设备(可以是单片机，芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括：U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。
Claims (29)
- 一种图像处理方法,其特征在于,所述方法应用于电子设备,所述方法包括:所述电子设备绘制第一图层,并渲染所述第一图层,在SF缓存队列缓存渲染后的所述第一图层;所述电子设备在第一时刻之前绘制完所述第一图层,在所述第一时刻之前所述电子设备绘制第二图层,并渲染所述第二图层,在所述SF缓存队列缓存渲染后的所述第二图层;其中,所述第一时刻是用于触发所述电子设备绘制所述第二图层的第一垂直同步信号到来的时刻。
- 根据权利要求1所述的方法，其特征在于，所述电子设备在第一时刻之前绘制完所述第一图层，在所述第一时刻之前所述电子设备绘制第二图层，并渲染所述第二图层，在所述SF缓存队列缓存渲染后的所述第二图层，包括：所述电子设备在所述第一时刻之前绘制完所述第一图层，所述电子设备响应于所述第一图层绘制结束，绘制所述第二图层，并渲染所述第二图层，在所述SF缓存队列缓存渲染后的所述第二图层。
- 根据权利要求1所述的方法，其特征在于，所述电子设备在第一时刻之前绘制完所述第一图层，在所述第一时刻之前所述电子设备绘制第二图层，并渲染所述第二图层，在所述SF缓存队列缓存渲染后的所述第二图层，包括：所述电子设备在第二时刻之前绘制完所述第一图层，所述电子设备从所述第二时刻开始绘制所述第二图层，并渲染所述第二图层，在所述SF缓存队列缓存渲染后的所述第二图层；其中，所述第二时刻是用于触发所述电子设备绘制所述第一图层的所述第一垂直同步信号的信号周期的预设百分比的耗时时刻，所述预设百分比小于1，所述第二时刻在所述第一时刻之前。
- 根据权利要求3所述的方法，其特征在于，所述方法还包括：所述电子设备在所述第一时刻之前，所述第二时刻之后绘制完所述第一图层，所述电子设备响应于所述第一图层绘制结束，绘制所述第二图层，并渲染所述第二图层，在所述SF缓存队列缓存渲染后的所述第二图层。
- 根据权利要求1-4中任一项所述的方法，其特征在于，在所述电子设备绘制第一图层，并渲染所述第一图层，在SF缓存队列缓存渲染后的所述第一图层之前，所述方法还包括：所述电子设备接收第一UI事件，所述第一UI事件用于触发所述电子设备显示预设图像内容或者以预设方式显示图像内容；所述第一UI事件包括以下任一种：所述电子设备接收用户输入的抛滑操作，所述电子设备接收用户对前台应用中预设控件的点击操作，所述电子设备自动触发的UI事件；其中，所述电子设备绘制第一图层，并渲染所述第一图层，在SF缓存队列缓存渲染后的第一图层，包括：响应于所述第一UI事件，所述电子设备绘制所述第一图层，并渲染所述第一图层，在所述SF缓存队列缓存渲染后的第一图层。
- 根据权利要求1-5中任一项所述的方法，其特征在于，在所述电子设备在第一时刻之前绘制完所述第一图层，在所述第一时刻之前所述电子设备绘制第二图层，并渲染所述第二图层，在所述SF缓存队列缓存渲染后的所述第二图层之前，所述方法还包括：所述电子设备确定所述SF缓存队列的缓存空间和所述SF缓存队列中缓存帧的数量，所述缓存帧是缓存在所述SF缓存队列中的图层；所述电子设备计算所述SF缓存队列的缓存空间与所述缓存帧的数量的差值，得到所述SF缓存队列的剩余缓存空间；其中，若所述SF缓存队列的剩余缓存空间大于第一预设门限值，在所述第一时刻之前所述电子设备绘制完所述第一图层，所述电子设备则在所述第一时刻之前绘制所述第二图层，并渲染所述第二图层，在所述SF缓存队列缓存渲染后的所述第二图层。
- 根据权利要求6所述的方法，其特征在于，所述方法还包括：若所述SF缓存队列的剩余缓存空间小于第二预设门限值，所述电子设备则响应于所述第一垂直同步信号，绘制所述第二图层，并渲染所述第二图层，在所述SF缓存队列缓存渲染后的所述第二图层。
- 根据权利要求1-7中任一项所述的方法，其特征在于，在所述电子设备在第一时刻之前绘制完所述第一图层，在所述第一时刻之前所述电子设备绘制第二图层，并渲染所述第二图层，在所述SF缓存队列缓存渲染后的所述第二图层之前，所述方法还包括：所述电子设备将所述SF缓存队列的缓存空间设置为M+p帧；其中，M为设置前所述SF缓存队列的缓存空间的大小；p为所述电子设备在预设时间内丢帧的数量，或者，p为预设的正整数。
- 根据权利要求8所述的方法，其特征在于，所述方法还包括：若M+p大于预设上限值N，所述电子设备则将所述SF缓存队列的缓存空间设置为N帧。
- 根据权利要求1-9中任一项所述的方法，其特征在于，所述电子设备绘制第二图层，包括：所述电子设备根据所述第一垂直同步信号的信号周期，计算所述第二图层的运动距离，并根据所述第二图层的运动距离绘制所述第二图层；其中，所述第二图层的运动距离是所述第二图层中的图像内容相比于所述第一图层中的图像内容的运动距离。
- 根据权利要求10所述的方法，其特征在于，所述电子设备根据所述第一垂直同步信号的信号周期，计算所述第二图层的运动距离，并根据所述第二图层的运动距离绘制所述第二图层，包括：所述电子设备根据所述第一垂直同步信号的信号周期，计算所述第二图层的处理时间；其中，当所述第二图层是所述电子设备响应于第一UI事件绘制的第i个图层时，所述第二图层的处理时间为p_{i-1}+T_{i-1}，i≥2，i为正整数；所述p_{i-1}为第i-1个图层的处理时间；所述T_{i-1}为用于触发所述电子设备绘制所述第i-1个图层的第一垂直同步信号的信号周期；所述电子设备根据所述第二图层的处理时间计算所述第二图层的运动距离，并根据所述第二图层的运动距离绘制所述第二图层。
- 根据权利要求1-11中任一项所述的方法，其特征在于，所述方法还包括：所述电子设备接收第二UI事件，所述第二UI事件是用于触发所述电子设备停止显示第一UI事件对应的图像内容的中断事件；其中，所述第一UI事件用于触发所述电子设备显示预设图像内容或者以预设方式显示图像内容，所述第一图层和所述第二图层是所述第一UI事件触发所述电子设备绘制的；所述电子设备响应于所述第二UI事件，停止绘制所述第一UI事件的图层；所述电子设备响应于第二垂直同步信号，删除所述SF缓存队列中缓存的所述第一UI事件的图层；其中，所述第二垂直同步信号用于触发所述电子设备合成渲染后的图层得到图像帧；所述电子设备响应于所述第一垂直同步信号，绘制所述第二UI事件的第三图层，渲染所述第三图层，在所述SF缓存队列中缓存渲染后的所述第三图层。
- 根据权利要求12所述的方法，其特征在于，在所述电子设备接收第二UI事件之后，所述电子设备响应于所述第一垂直同步信号，绘制所述第二UI事件的第三图层，渲染所述第三图层，在所述SF缓存队列中缓存渲染后的第三图层之前，所述方法还包括：所述电子设备重新绘制第四图层，以将所述电子设备绘制图层的逻辑回退至所述第四图层，并获取所述第四图层的处理时间；其中，所述电子设备不再渲染所述第四图层，所述第四图层的处理时间用于所述电子设备计算所述第四图层的运动距离；所述第四图层是所述电子设备接收到所述第二UI事件时，所述电子设备正在显示的图像帧对应的图层的下一帧图层；或者，所述第四图层包括所述电子设备接收到第二UI事件时，所述电子设备正在显示的图像帧对应的图层，以及所述电子设备正在显示的图像帧对应的图层的下一帧图层。
- 一种电子设备,其特征在于,所述电子设备包括显示屏、存储器和一个或多个处理器;所述显示屏、所述存储器与所述处理器耦合;其中,所述显示屏用于显示所述处理器生成的图像,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令;当所述计算机指令被所述处理器执行时,使得所述电子设备执行以下操作:绘制第一图层,并渲染所述第一图层,在SF缓存队列缓存渲染后的所述第一图层;在第一时刻之前绘制完所述第一图层,在所述第一时刻之前所述电子设备绘制第二图层,并渲染所述第二图层,在所述SF缓存队列缓存渲染后的所述第二图层;其中,所述第一时刻是用于触发所述电子设备绘制所述第二图层的第一垂直同步信号到来的时刻。
- 根据权利要求14所述的电子设备，其特征在于，当所述计算机指令被所述处理器执行时，使得所述电子设备还执行以下步骤：在所述第一时刻之前绘制完所述第一图层，响应于所述第一图层绘制结束，绘制所述第二图层，并渲染所述第二图层，在所述SF缓存队列缓存渲染后的所述第二图层。
- 根据权利要求14所述的电子设备，其特征在于，当所述计算机指令被所述处理器执行时，使得所述电子设备还执行以下步骤：在第二时刻之前绘制完所述第一图层，从所述第二时刻开始绘制所述第二图层，并渲染所述第二图层，在所述SF缓存队列缓存渲染后的所述第二图层；其中，所述第二时刻是用于触发所述电子设备绘制所述第一图层的所述第一垂直同步信号的信号周期的预设百分比的耗时时刻，所述预设百分比小于1，所述第二时刻在所述第一时刻之前。
- 根据权利要求16所述的电子设备，其特征在于，当所述计算机指令被所述处理器执行时，使得所述电子设备还执行以下步骤：在所述第一时刻之前，所述第二时刻之后绘制完所述第一图层，响应于所述第一图层绘制结束，绘制所述第二图层，并渲染所述第二图层，在所述SF缓存队列缓存渲染后的所述第二图层。
- 根据权利要求14-17中任一项所述的电子设备，其特征在于，当所述计算机指令被所述处理器执行时，使得所述电子设备还执行以下步骤：接收第一UI事件，所述第一UI事件用于触发所述显示屏显示预设图像内容或者以预设方式显示图像内容；所述第一UI事件包括以下任一种：所述电子设备接收用户输入的抛滑操作，所述电子设备接收用户对前台应用中预设控件的点击操作，所述电子设备自动触发的UI事件；响应于所述第一UI事件，绘制所述第一图层，并渲染所述第一图层，在所述SF缓存队列缓存渲染后的第一图层。
- 根据权利要求14-18中任一项所述的电子设备，其特征在于，当所述计算机指令被所述处理器执行时，使得所述电子设备还执行以下步骤：确定所述SF缓存队列的缓存空间和所述SF缓存队列中缓存帧的数量，所述缓存帧是缓存在所述SF缓存队列中的图层；计算所述SF缓存队列的缓存空间与所述缓存帧的数量的差值，得到所述SF缓存队列的剩余缓存空间；若所述SF缓存队列的剩余缓存空间大于第一预设门限值，在所述第一时刻之前绘制完所述第一图层，则在所述第一时刻之前绘制所述第二图层，并渲染所述第二图层，在所述SF缓存队列缓存渲染后的所述第二图层。
- 根据权利要求19所述的电子设备，其特征在于，当所述计算机指令被所述处理器执行时，使得所述电子设备还执行以下步骤：若所述SF缓存队列的剩余缓存空间小于第二预设门限值，则响应于所述第一垂直同步信号，绘制所述第二图层，并渲染所述第二图层，在所述SF缓存队列缓存渲染后的所述第二图层。
- 根据权利要求14-20中任一项所述的电子设备，其特征在于，当所述计算机指令被所述处理器执行时，使得所述电子设备还执行以下步骤：将所述SF缓存队列的缓存空间设置为M+p帧；其中，M为设置前所述SF缓存队列的缓存空间的大小；p为所述电子设备在预设时间内丢帧的数量，或者，p为预设的正整数。
- 根据权利要求21所述的电子设备，其特征在于，当所述计算机指令被所述处理器执行时，使得所述电子设备还执行以下步骤：若M+p大于预设上限值N，将所述SF缓存队列的缓存空间设置为N帧。
- 根据权利要求14-22中任一项所述的电子设备，其特征在于，当所述计算机指令被所述处理器执行时，使得所述电子设备还执行以下步骤：根据所述第一垂直同步信号的信号周期，计算所述第二图层的运动距离，并根据所述第二图层的运动距离绘制所述第二图层；其中，所述第二图层的运动距离是所述第二图层中的图像内容相比于所述第一图层中的图像内容的运动距离。
- 根据权利要求23所述的电子设备，其特征在于，当所述计算机指令被所述处理器执行时，使得所述电子设备还执行以下步骤：根据所述第一垂直同步信号的信号周期，计算所述第二图层的处理时间；其中，当所述第二图层是所述电子设备响应于第一UI事件绘制的第i个图层时，所述第二图层的处理时间为p_{i-1}+T_{i-1}，i≥2，i为正整数；所述p_{i-1}为第i-1个图层的处理时间；所述T_{i-1}为用于触发所述电子设备绘制所述第i-1个图层的第一垂直同步信号的信号周期；根据所述第二图层的处理时间计算所述第二图层的运动距离，并根据所述第二图层的运动距离绘制所述第二图层。
- 根据权利要求14-24中任一项所述的电子设备，其特征在于，当所述计算机指令被所述处理器执行时，使得所述电子设备还执行以下步骤：接收第二UI事件，所述第二UI事件是用于触发所述电子设备停止显示第一UI事件对应的图像内容的中断事件；其中，所述第一UI事件用于触发所述电子设备显示预设图像内容或者以预设方式显示图像内容，所述第一图层和所述第二图层是所述第一UI事件触发所述电子设备绘制的；响应于所述第二UI事件，停止绘制所述第一UI事件的图层；响应于第二垂直同步信号，删除所述SF缓存队列中缓存的所述第一UI事件的图层；其中，所述第二垂直同步信号用于触发所述电子设备合成渲染后的图层得到图像帧；响应于所述第一垂直同步信号，绘制所述第二UI事件的第三图层，渲染所述第三图层，在所述SF缓存队列中缓存渲染后的所述第三图层。
- 根据权利要求25所述的电子设备，其特征在于，当所述计算机指令被所述处理器执行时，使得所述电子设备还执行以下步骤：重新绘制第四图层，以将所述电子设备绘制图层的逻辑回退至所述第四图层，并获取所述第四图层的处理时间；其中，所述电子设备不再渲染所述第四图层，所述第四图层的处理时间用于所述电子设备计算所述第四图层的运动距离；所述第四图层是接收到所述第二UI事件时，所述显示屏正在显示的图像帧对应的图层的下一帧图层；或者，所述第四图层包括接收到第二UI事件时，所述显示屏正在显示的图像帧对应的图层，以及所述显示屏正在显示的图像帧对应的图层的下一帧图层。
- 一种芯片系统，其特征在于，所述芯片系统应用于包括存储器和显示屏的电子设备；所述芯片系统包括一个或多个接口电路和一个或多个处理器；所述接口电路和所述处理器通过线路互联；所述接口电路用于从所述存储器接收信号，并向所述处理器发送所述信号，所述信号包括所述存储器中存储的计算机指令；当所述处理器执行所述计算机指令时，所述电子设备执行如权利要求1-13中任一项所述的方法。
- 一种计算机可读存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-13中任一项所述的方法。
- 一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1-13中任一项所述的方法。
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/796,126 US20230116975A1 (en) | 2020-07-31 | 2021-03-17 | Image processing method and electronic device |
EP21851485.9A EP4083792A4 (en) | 2020-07-31 | 2021-03-17 | IMAGE PROCESSING METHOD AND ELECTRONIC DEVICE |
MX2023001377A MX2023001377A (es) | 2020-07-31 | 2021-03-17 | Metodo de procesamiento de imagenes y dispositivo electronico. |
CN202180051354.7A CN116075808A (zh) | 2020-07-31 | 2021-03-17 | 一种图像处理方法及电子设备 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010762068.9 | 2020-07-31 | ||
CN202010762068.9A CN114092595B (zh) | 2020-07-31 | 2020-07-31 | 一种图像处理方法及电子设备 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022021895A1 true WO2022021895A1 (zh) | 2022-02-03 |
Family
ID=80037459
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/081367 WO2022021895A1 (zh) | 2020-07-31 | 2021-03-17 | 一种图像处理方法及电子设备 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20230116975A1 (zh) |
EP (1) | EP4083792A4 (zh) |
CN (3) | CN115631258B (zh) |
MX (1) | MX2023001377A (zh) |
WO (1) | WO2022021895A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116055611A (zh) * | 2022-06-24 | 2023-05-02 | 荣耀终端有限公司 | 绘制操作的执行方法、电子设备及可读介质 |
CN116594543A (zh) * | 2023-07-18 | 2023-08-15 | 荣耀终端有限公司 | 显示方法、设备及可读存储介质 |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116723265B (zh) * | 2022-09-14 | 2024-07-16 | 荣耀终端有限公司 | 图像处理方法、可读存储介质、程序产品和电子设备 |
CN116700654B (zh) * | 2022-09-15 | 2024-04-09 | 荣耀终端有限公司 | 一种图像显示方法、装置、终端设备及存储介质 |
CN116700655B (zh) * | 2022-09-20 | 2024-04-02 | 荣耀终端有限公司 | 一种界面显示方法及电子设备 |
CN117891422A (zh) * | 2022-10-13 | 2024-04-16 | 荣耀终端有限公司 | 图像处理方法和电子设备 |
CN116069187B (zh) * | 2023-01-28 | 2023-09-01 | 荣耀终端有限公司 | 一种显示方法及电子设备 |
CN117724779A (zh) * | 2023-06-09 | 2024-03-19 | 荣耀终端有限公司 | 一种生成界面图像的方法及电子设备 |
CN117724781A (zh) * | 2023-07-04 | 2024-03-19 | 荣耀终端有限公司 | 一种应用程序启动动画的播放方法和电子设备 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103517051A (zh) * | 2012-06-28 | 2014-01-15 | 联想(北京)有限公司 | 控制方法和电子设备 |
WO2017030735A1 (en) * | 2015-08-20 | 2017-02-23 | Qualcomm Incorporated | Refresh rate matching with predictive time-shift compensation |
CN108829475A (zh) * | 2018-05-29 | 2018-11-16 | 北京小米移动软件有限公司 | Ui绘制方法、装置及存储介质 |
CN110503708A (zh) * | 2019-07-03 | 2019-11-26 | 华为技术有限公司 | 一种基于垂直同步信号的图像处理方法及电子设备 |
CN110502294A (zh) * | 2019-07-20 | 2019-11-26 | 华为技术有限公司 | 数据处理的方法、装置及电子设备 |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9019300B2 (en) * | 2006-08-04 | 2015-04-28 | Apple Inc. | Framework for graphics animation and compositing operations |
US8786619B2 (en) * | 2011-02-25 | 2014-07-22 | Adobe Systems Incorporated | Parallelized definition and display of content in a scripting environment |
US9874991B2 (en) * | 2013-01-15 | 2018-01-23 | Apple Inc. | Progressive tiling |
US9207986B2 (en) * | 2013-04-11 | 2015-12-08 | Facebook, Inc. | Identifying a next window of idle time to perform pre-generation tasks of content portions outside of the displayable region stored in a message queue |
CN104301795B (zh) * | 2014-09-26 | 2017-10-20 | Sichuan Changhong Electric Co., Ltd. | 3D model-based big data poster information management method for smart TVs |
US10515326B2 (en) * | 2015-08-28 | 2019-12-24 | Exacttarget, Inc. | Database systems and related queue management methods |
CN107369197B (zh) * | 2017-07-05 | 2022-04-15 | Tencent Technology (Shenzhen) Co., Ltd. | Picture processing method, apparatus, and device |
CN109788334A (zh) * | 2019-01-31 | 2019-05-21 | Beijing ByteDance Network Technology Co., Ltd. | Bullet-screen comment processing method and apparatus, electronic device, and computer-readable storage medium |
CN110209444B (zh) * | 2019-03-20 | 2021-07-09 | Huawei Technologies Co., Ltd. | Graphics rendering method and electronic device |
CN109992347B (zh) * | 2019-04-10 | 2022-03-25 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Interface display method and apparatus, terminal, and storage medium |
CN110018759B (zh) * | 2019-04-10 | 2021-01-12 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Interface display method and apparatus, terminal, and storage medium |
CN110489228B (zh) * | 2019-07-16 | 2022-05-17 | Huawei Technologies Co., Ltd. | Resource scheduling method and electronic device |
CN110377264B (zh) * | 2019-07-17 | 2023-07-21 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Layer composition method and apparatus, electronic device, and storage medium |
CN111298443B (zh) * | 2020-01-21 | 2023-06-30 | Guangzhou Huya Technology Co., Ltd. | Game object control method and apparatus, electronic device, and storage medium |
2020
- 2020-07-31 CN CN202211321633.3A patent/CN115631258B/zh active Active
- 2020-07-31 CN CN202010762068.9A patent/CN114092595B/zh active Active

2021
- 2021-03-17 CN CN202180051354.7A patent/CN116075808A/zh active Pending
- 2021-03-17 EP EP21851485.9A patent/EP4083792A4/en active Pending
- 2021-03-17 MX MX2023001377A patent/MX2023001377A/es unknown
- 2021-03-17 US US17/796,126 patent/US20230116975A1/en active Pending
- 2021-03-17 WO PCT/CN2021/081367 patent/WO2022021895A1/zh unknown
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103517051A (zh) * | 2012-06-28 | 2014-01-15 | Lenovo (Beijing) Co., Ltd. | Control method and electronic device |
WO2017030735A1 (en) * | 2015-08-20 | 2017-02-23 | Qualcomm Incorporated | Refresh rate matching with predictive time-shift compensation |
CN108829475A (zh) * | 2018-05-29 | 2018-11-16 | Beijing Xiaomi Mobile Software Co., Ltd. | UI drawing method and apparatus, and storage medium |
CN110503708A (zh) * | 2019-07-03 | 2019-11-26 | Huawei Technologies Co., Ltd. | Vertical synchronization signal-based image processing method and electronic device |
CN110502294A (zh) * | 2019-07-20 | 2019-11-26 | Huawei Technologies Co., Ltd. | Data processing method and apparatus, and electronic device |
Non-Patent Citations (1)
Title |
---|
See also references of EP4083792A4 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116055611A (zh) * | 2022-06-24 | 2023-05-02 | Honor Device Co., Ltd. | Method for executing a drawing operation, electronic device, and readable medium |
CN116055611B (zh) * | 2022-06-24 | 2023-11-03 | Honor Device Co., Ltd. | Method for executing a drawing operation, electronic device, and readable medium |
CN116594543A (zh) * | 2023-07-18 | 2023-08-15 | Honor Device Co., Ltd. | Display method, device, and readable storage medium |
CN116594543B (zh) * | 2023-07-18 | 2024-03-26 | Honor Device Co., Ltd. | Display method, device, and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN116075808A (zh) | 2023-05-05 |
MX2023001377A (es) | 2023-04-27 |
CN115631258B (zh) | 2023-10-20 |
US20230116975A1 (en) | 2023-04-20 |
CN115631258A (zh) | 2023-01-20 |
EP4083792A1 (en) | 2022-11-02 |
EP4083792A4 (en) | 2023-08-16 |
CN114092595A (zh) | 2022-02-25 |
CN114092595B (zh) | 2022-11-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022021895A1 (zh) | | Image processing method and electronic device |
WO2021000921A1 (zh) | | Vertical synchronization signal-based image processing method and electronic device |
WO2020187157A1 (zh) | | Control method and electronic device |
WO2020177585A1 (zh) | | Gesture processing method and device |
US11847992B2 (en) | | Control method based on vertical synchronization signal and electronic device |
CN114579075B (zh) | | Data processing method and related apparatus |
WO2021032097A1 (zh) | | Air gesture interaction method and electronic device |
WO2022068501A1 (zh) | | Vertical synchronization signal-based image processing method and electronic device |
CN114579076B (zh) | | Data processing method and related apparatus |
WO2022089153A1 (zh) | | Vertical synchronization signal-based control method and electronic device |
WO2021027678A1 (zh) | | Vertical synchronization signal-based image processing method and electronic device |
CN116501210A (zh) | | Display method, electronic device, and storage medium |
WO2022068477A1 (zh) | | Event processing method and device |
CN115048012A (zh) | | Data processing method and related apparatus |
WO2024041047A1 (zh) | | Screen refresh rate switching method and electronic device |
WO2023045806A1 (zh) | | Method for calculating position information on a touchscreen, and electronic device |
CN115904184B (zh) | | Data processing method and related apparatus |
CN117711354B (zh) | | Display method, readable storage medium, and electronic device |
WO2024114770A1 (zh) | | Cursor control method and electronic device |
WO2022143094A1 (zh) | | Window page interaction method and apparatus, electronic device, and readable storage medium |
WO2022206709A1 (zh) | | Application component loading method and related apparatus |
CN118363688A (zh) | | Interface rendering method, electronic device, and computer-readable storage medium |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21851485; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2021851485; Country of ref document: EP; Effective date: 20220725 |
NENP | Non-entry into the national phase | Ref country code: DE |