WO2024098871A1 - Data processing method, device and storage medium
- Publication number
- WO2024098871A1 (PCT/CN2023/113128)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- rendering
- frame
- frames
- vsync
- time
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/0485—Scrolling or panning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
Definitions
- the present application relates to the field of image display technology, and in particular to a data processing method, device and storage medium.
- the terminal device can respond to the user's sliding operation on the display screen and make the displayed content slide while the finger moves, or continue sliding after the finger is lifted, so that the user can conveniently browse related content. This content must go through the stages of drawing, rendering, synthesis, and display before it can be presented on the display screen.
- during this process, drawing and rendering may time out and cause frame loss, making the image synthesis and display cycles unstable; because the displayed image is then not produced from consecutive frames, abnormal phenomena such as freezes and jumps may appear in the content shown on the display.
- the present application provides a data processing method, device and storage medium, which aim to solve the stuttering and jumping phenomena caused by drawing and rendering timeouts and the resulting frame loss.
- the present application provides a data processing method.
- the method includes: displaying a first interface of a first application; in response to a sliding operation on the first interface, obtaining an input event corresponding to the sliding operation; obtaining a first VSync signal and drawing and rendering the Nth frame based on a first MOVE event, wherein the first MOVE event is extracted from the input event corresponding to the sliding operation based on the timestamp of the first VSync signal; when the drawing and rendering time of the Nth frame is greater than one VSync signal cycle, obtaining the number of lost frames after the drawing and rendering of the Nth frame is completed, and displaying the Nth frame; selecting the minimum value among the number of lost frames and the set maximum number of insertable frames as the number of inserted frames M; and, before the second VSync signal arrives, drawing and rendering M frames based on a second MOVE event and displaying the M frames; wherein the second VSync signal is the first VSync signal received after the drawing and rendering of the Nth frame is completed, and the second MOVE event is extracted from the input event corresponding to the sliding operation based on the timestamp of the second VSync signal.
- in this way, the lost frames can be made up by one or more supplementary frames, reducing the frame loss caused by missing VSync signals when drawing and rendering time out, so that the content displayed on the screen changes smoothly; this increases the smoothness of the display, reduces jumps, and further improves the user experience.
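- As an illustration only, the following Java-style sketch outlines the frame-supplementation flow summarized above; every identifier (renderFrame, extractMoveEvent, getLostFrameCount, MAX_INSERT_FRAMES, and so on) is hypothetical and does not correspond to a real framework API.

```java
// Hypothetical sketch of the flow described above, not a real framework API.
void onVsync(long vsyncTimestamp, long vsyncPeriod) {
    long tBegin = now();
    renderFrame(extractMoveEvent(vsyncTimestamp));        // draw and render the Nth frame
    long tEnd = now();

    if (tEnd - tBegin > vsyncPeriod) {                    // rendering of frame N timed out
        int lost = getLostFrameCount(tBegin, tEnd, vsyncPeriod);
        int m = Math.min(lost, MAX_INSERT_FRAMES);        // number of frames to insert, M
        for (int i = 0; i < m && !nextVsyncArrived(); i++) {
            renderFrame(extractNextMoveEvent());          // render supplementary frames before
        }                                                 // the second VSync signal arrives
    }
}
```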
- obtaining the number of lost frames includes: determining a first time at which the drawing and rendering of the Nth frame starts and a second time at which it ends; and calculating the number of lost frames according to the first time, the second time, and the set drawing-rendering duration corresponding to the Nth frame, where the set drawing-rendering duration is one VSync signal cycle.
- the first time is, for example, Tbegin mentioned below;
- the second time is, for example, Tend mentioned below;
- the set drawing-rendering duration is, for example, one VSync signal cycle mentioned below;
- the number of lost frames is, for example, M mentioned below.
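- A minimal sketch of one plausible reading of this calculation, using the Tbegin/Tend naming above; the method name and parameters are assumptions, not part of the original text.

```java
// Number of lost frames = number of whole VSync cycles by which the render of frame N overran.
// Illustrative only; the exact formula is not spelled out in the text above.
static long lostFrameCount(long tBeginMs, long tEndMs, double vsyncPeriodMs) {
    double renderDuration = tEndMs - tBeginMs;      // actual drawing-rendering time of frame N
    return (long) (renderDuration / vsyncPeriodMs); // e.g. a 40 ms render at 16.6 ms/cycle loses 2 frames
}
```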
- selecting the minimum value among the number of lost frames and the set maximum number of insertable frames as the number of inserted frames M includes: determining the receiving time of the second VSync signal according to the VSync signal cycle; determining the average drawing-rendering time per frame of the N frames that have completed drawing and rendering, according to the drawing-rendering durations of those N frames; calculating the predicted number of insertable frames according to the receiving time, the second time and the average drawing-rendering time; and selecting the minimum value among the predicted number of insertable frames, the number of lost frames and the set maximum number of insertable frames as the number of inserted frames M.
- the receiving time is, for example, TnextVsyn mentioned below
- the average drawing rendering time is, for example, Taverage mentioned below
- the predicted number of insertable frames is, for example, countAllow mentioned below.
- the predicted number of insertable frames is calculated according to the receiving time, the second time and the average drawing-rendering time as: predicted number of insertable frames = (receiving time − second time) / average drawing-rendering time.
- the predicted number of insertable frames is taken as an integer.
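- The selection of the number of inserted frames M can then be sketched as follows; the identifiers mirror the countAllow/Taverage/TnextVsyn naming above, and everything else is an assumption.

```java
// Illustrative sketch: M = min(predicted insertable frames, lost frames, set maximum).
static int insertFrameCount(long tNextVsyncMs, long tEndMs, double tAverageMs,
                            int lostFrames, int maxInsertFrames) {
    // predicted number of insertable frames: how many average-length renders still fit
    // before the second VSync signal arrives, truncated to an integer
    int countAllow = (int) ((tNextVsyncMs - tEndMs) / tAverageMs);
    return Math.min(countAllow, Math.min(lostFrames, maxInsertFrames));
}
```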
- the method also includes: when the first application is cold started, obtaining the package name of the first application, and determining the application category of the first application according to the package name; when the application category of the first application matches the set application type that supports frame insertion, and the drawing and rendering time of the Nth frame is greater than one VSync signal cycle, then after the drawing and rendering of the Nth frame is completed, executing the steps of obtaining the number of lost frames, selecting the minimum value among the number of lost frames and the set maximum number of insertable frames as the number of inserted frames M, and drawing and rendering M frames based on the second MOVE event before the second VSync signal arrives.
- the method also includes: determining, according to the reporting point information corresponding to the input event, the control in the first interface on which the input event acts; when that control is a RecyclerView control or a ListView control, and the drawing and rendering time of the Nth frame is greater than one VSync signal cycle, then after the drawing and rendering of the Nth frame is completed, executing the steps of obtaining the number of lost frames, selecting the minimum value among the number of lost frames and the set maximum number of insertable frames as the number of inserted frames M, and drawing and rendering M frames based on the second MOVE event before the second VSync signal arrives.
- the method further includes: in the process of drawing and rendering the Nth frame, determining the number of layers to be rendered; when the number of layers is one, and the drawing and rendering time of the Nth frame is greater than one VSync signal cycle, then after the drawing and rendering of the Nth frame is completed, executing the steps of obtaining the number of lost frames, selecting the minimum value among the number of lost frames and the set maximum number of insertable frames as the number of inserted frames M, and drawing and rendering M frames based on the second MOVE event before the second VSync signal arrives.
- the method further includes: when the sliding distance corresponding to the MOVE events extracted from the input event of the sliding operation based on the timestamps of two adjacent VSync signals is greater than a minimum sliding distance threshold, and the drawing and rendering duration of the Nth frame is greater than one VSync signal cycle, then after the drawing and rendering of the Nth frame is completed, executing the steps of obtaining the number of lost frames, selecting the minimum value among the number of lost frames and the set maximum number of insertable frames as the number of inserted frames M, and drawing and rendering M frames based on the second MOVE event before the second VSync signal arrives.
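- The description above presents several alternative conditions for enabling frame insertion (application category, control type, number of layers, sliding distance). The sketch below simply combines them into one check for illustration; all identifiers other than RecyclerView and ListView are hypothetical, and the real implementation may apply the conditions separately.

```java
import android.view.View;
import android.widget.ListView;
import androidx.recyclerview.widget.RecyclerView;

// Hypothetical gating check; in practice the conditions may be used independently.
boolean frameInsertionAllowed(String packageName, View target,
                              int layerCount, float slideDistancePx) {
    boolean appSupported   = isInsertionWhitelisted(packageName);      // category matches the set type
    boolean controlMatches = target instanceof RecyclerView
                          || target instanceof ListView;               // RecyclerView/ListView control
    boolean singleLayer    = layerCount == 1;                          // only one layer to render
    boolean slidFarEnough  = slideDistancePx > MIN_SLIDE_DISTANCE_PX;  // between adjacent VSync timestamps
    return appSupported && controlMatches && singleLayer && slidFarEnough;
}
```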
- the method also includes: when the drawing and rendering time of the Nth frame is not greater than one VSync signal cycle and the Nth frame is the first frame of the drawing and rendering operation, then after the drawing and rendering of the Nth frame is completed, offsetting the Nth frame by a set offset to obtain the N+1th frame; before the second VSync signal arrives, drawing and rendering the N+1th frame and displaying the N+1th frame.
- the Nth frame is the first frame, such as frame 1 described below
- the N+1th frame is frame 1' described below.
- in this way, a frame is pre-inserted for drawing and rendering at the beginning of the sliding operation, so that one more frame is cached in the cache queue; this reduces the situation where no frame is available for synthesis because a subsequent drawing and rendering times out, and reduces display stutters. For example, if only one frame is lost, a smooth transition can be achieved through the inserted frame without stuttering, thereby improving the user experience.
- the method also includes: when the event extracted from the input event corresponding to the sliding operation based on the timestamp of the third VSync signal is a DOWN event, determining that the Nth frame is the first frame of the drawing and rendering operation; wherein the third VSync signal is a VSync signal received before the first VSync signal and is adjacent to the first VSync signal.
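- A sketch of the first-frame pre-insertion described above (frame 1' derived from frame 1 by a set offset); Frame, offsetBy, SET_OFFSET and the other names are hypothetical.

```java
// Hypothetical sketch: pre-insert one frame when the Nth frame is the first frame of the
// sliding operation (a DOWN event preceded the first MOVE) and its render did not time out.
void maybeInsertFirstFrame(Frame frameN, boolean isFirstFrame, boolean renderTimedOut) {
    if (isFirstFrame && !renderTimedOut) {
        Frame framePrime = frameN.offsetBy(SET_OFFSET);   // frame 1' = frame 1 shifted by the set offset
        renderBeforeNextVsync(framePrime);                // drawn and rendered before the second VSync
        enqueueForComposition(framePrime);                // one extra frame cached in the cache queue
    }
}
```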
- the present application provides a terminal device.
- the terminal device includes: a memory and a processor, the memory and the processor are coupled; the memory stores program instructions, and when the program instructions are executed by the processor, the terminal device executes instructions of the method in the first aspect or any possible implementation of the first aspect.
- the second aspect and any implementation of the second aspect correspond to the first aspect and any implementation of the first aspect respectively.
- the technical effects corresponding to the second aspect and any implementation of the second aspect can refer to the technical effects corresponding to the first aspect and any implementation of the first aspect, which will not be repeated here.
- the present application provides a computer-readable medium for storing a computer program, wherein the computer program includes instructions for executing the method in the first aspect or any possible implementation of the first aspect.
- the third aspect and any implementation of the third aspect correspond to the first aspect and any implementation of the first aspect respectively.
- the technical effects corresponding to the third aspect and any one of the implementations of the third aspect can refer to the technical effects corresponding to the first aspect and any one of the implementations of the first aspect, which will not be repeated here.
- the present application provides a computer program, comprising instructions for executing the method in the first aspect or any possible implementation of the first aspect.
- the fourth aspect and any implementation of the fourth aspect correspond to the first aspect and any implementation of the first aspect, respectively.
- the technical effects corresponding to the fourth aspect and any implementation of the fourth aspect can refer to the technical effects corresponding to the above-mentioned first aspect and any implementation of the first aspect, which will not be repeated here.
- the present application provides a chip, the chip comprising a processing circuit and a transceiver pin, wherein the transceiver pin and the processing circuit communicate with each other through an internal connection path, and the processing circuit executes the method in the first aspect or any possible implementation of the first aspect to control the receiving pin to receive a signal and control the sending pin to send a signal.
- the fifth aspect and any implementation of the fifth aspect correspond to the first aspect and any implementation of the first aspect, respectively.
- the technical effects corresponding to the fifth aspect and any implementation of the fifth aspect can refer to the technical effects corresponding to the first aspect and any implementation of the first aspect, which will not be repeated here.
- FIG1 is a schematic diagram showing a hardware structure of a terminal device
- FIG2 is a schematic diagram showing a software structure of a terminal device
- FIG3 is a schematic diagram of an exemplary application scenario
- FIG4 is a schematic diagram of an exemplary application scenario
- FIG5 is a schematic diagram showing an exemplary data processing flow
- FIG6 is a schematic diagram showing an exemplary trend of a data frame during data processing
- FIG. 7 is a schematic diagram showing, by way of example, changes in the content displayed on the interface when no frame is lost;
- FIG8 is a schematic diagram showing an exemplary data processing flow when one frame of data is lost
- FIG9 is a schematic diagram showing exemplary changes in interface display content when a frame of data is lost
- FIG10 is a schematic diagram showing an exemplary data processing flow when multiple frames of data are lost continuously
- FIG11 is a schematic diagram showing exemplary changes in interface display content when multiple frames of data are lost continuously
- FIG12 is a schematic diagram showing, by way of example, functional modules involved in a data processing method provided in an embodiment of the present application.
- FIG13 is a timing diagram exemplarily showing the interaction process between functional modules involved in the data processing method provided in an embodiment of the present application.
- FIG14 is a schematic diagram showing an exemplary data processing flow for first frame insertion
- FIG15 is a schematic diagram showing an exemplary data processing flow for performing frame supplementation when one frame is lost
- FIG16 is a schematic diagram showing an exemplary data processing flow for inserting the first frame and performing frame supplementation when one frame is lost;
- FIG17 is a schematic diagram showing an exemplary data processing flow of inserting a first frame and performing frame supplementation when multiple frames are lost continuously;
- FIG18 is a schematic diagram of another data processing flow for exemplarily performing first frame insertion and frame supplementation when multiple frames are lost continuously;
- FIG19 is a flowchart exemplarily showing a data processing method provided in an embodiment of the present application.
- FIG20 is a schematic diagram showing exemplary specific processing operations included in the drawing, rendering, and synthesis stages
- FIG21 is a flowchart exemplarily illustrating another data processing method provided in an embodiment of the present application.
- FIG22 is a schematic diagram of a first interface of a first application shown as an example
- FIG. 23 is another schematic diagram showing an exemplary first interface of a first application.
- A and/or B in this document merely describes an association relationship between associated objects, indicating that three relationships may exist.
- A and/or B can mean: A exists alone, both A and B exist, or B exists alone.
- first and second in the description and claims of the embodiments of the present application are used to distinguish different objects rather than to describe a specific order of objects.
- a first target object and a second target object are used to distinguish different target objects rather than to describe a specific order of target objects.
- words such as “exemplary” or “for example” are used to indicate examples, illustrations or descriptions. Any embodiment or design described as “exemplary” or “for example” in the embodiments of the present application should not be interpreted as being more preferred or more advantageous than other embodiments or designs. Specifically, the use of words such as “exemplary” or “for example” is intended to present related concepts in a specific way.
- multiple refers to two or more than two.
- multiple processing units refer to two or more processing units; multiple systems refer to two or more systems.
- the hardware structure of the terminal device (such as a mobile phone, a tablet computer, a touch-screen PC, etc.) to which the embodiments of the present application are applicable is first described in conjunction with the accompanying drawings.
- the terminal device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, and a subscriber identification module (SIM) card interface 195, etc.
- the sensor module 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc., which are not listed here one by one and are not limited by this application.
- in this way, the current operation can be accurately determined, as well as the location where the operation is applied, the reporting point information of that location, etc.
- the data processing flow during such an operation is specifically described below.
- sliding without lifting the finger refers to the behavior in which, within a certain application, the user keeps a finger or a stylus moving on the display screen to change the content displayed in the current interface.
- the processor 110 may include one or more processing units, for example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc.
- different processing units may be independent devices or integrated in one or more processors.
- a memory may be provided in the processor 110 for storing instructions and data.
- the memory in the processor 110 is a cache memory.
- the memory may store instructions or data that the processor 110 has just used or cyclically used. If the processor 110 needs to use the instruction or data again, it may be directly called from the memory. This avoids repeated access, reduces the waiting time of the processor 110, and thus improves the efficiency of the system.
- the charging management module 140 is used to receive charging input from a charger.
- the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
- the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
- the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle number, battery health status (leakage, impedance), etc.
- the wireless communication function of the terminal device 100 can be implemented by antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, modem processor and baseband processor, etc. It should be noted that antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
- the mobile communication module 150 can provide a solution for wireless communication including 2G/3G/4G/5G applied to the terminal device 100.
- the mobile communication module 150 can include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), etc.
- the mobile communication module 150 can receive electromagnetic waves from the antenna 1, and filter, amplify, and process the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
- the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and convert it into electromagnetic waves for radiation through the antenna 1.
- at least some of the functional modules of the mobile communication module 150 can be set in the processor 110.
- at least some of the functional modules of the mobile communication module 150 can be set in the same device as at least some of the modules of the processor 110.
- the modulation and demodulation processor may include a modulator and a demodulator.
- the modulator is used to modulate the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
- the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
- the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
- the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
- the modem processor can be an independent device.
- the modem processor can be independent of the processor 110 and be set in the same device as the mobile communication module 150 or other functional modules.
- the wireless communication module 160 can provide wireless communication solutions including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), etc., applied to the terminal device 100.
- the wireless communication module 160 can be one or more devices integrating at least one communication processing module.
- the wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signal and performs filtering, and sends the processed signal to the processor 110.
- the wireless communication module 160 can also receive the signal to be sent from the processor 110, modulate the frequency, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2.
- the terminal device 100 implements the display function through a GPU, a display screen 194, and an application processor.
- the GPU is a microprocessor for image processing, which connects the display screen 194 and the application processor.
- the GPU is used to perform mathematical and geometric calculations for graphics rendering.
- the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
- the display screen 194 is used to display images, videos, etc.
- the display screen 194 includes a display panel.
- the terminal device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
- the terminal device 100 can realize the shooting function through ISP, camera 193, video codec, GPU, display screen 194 and application processor.
- the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the terminal device 100.
- the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music and videos are stored in the external memory card.
- the internal memory 121 can be used to store computer executable program codes, which include instructions.
- the processor 110 executes various functional applications and data processing of the terminal device 100 by running the instructions stored in the internal memory 121.
- the internal memory 121 may include a program storage area and a data storage area.
- the program storage area may store an operating system, an application required for at least one function (such as a sound playback function, an image playback function, etc.), etc.
- the data storage area may store data created during the use of the terminal device 100 (such as audio data, a phone book, etc.), etc.
- the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device, a flash memory device, a universal flash storage (UFS), etc.
- the processor 110 executes various functional applications of the terminal device and the data processing method provided in the present application by running the instructions stored in the internal memory 121 and/or the instructions stored in the memory provided in the processor 110.
- the terminal device 100 can implement audio functions, such as music playing and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor.
- the button 190 includes a power button, a volume button, etc.
- the button 190 may be a mechanical button. It may also be a touch button.
- the terminal device 100 may receive a button input and generate a key signal input related to the user settings and function control of the terminal device 100.
- the motor 191 may generate a vibration prompt.
- the indicator 192 may be an indicator light, which may be used to indicate the charging status, the change in power, or may be used to indicate messages, missed calls, notifications, etc.
- the hardware structure of the terminal device 100 is introduced here. It should be understood that the terminal device 100 shown in FIG1 is only an example. In a specific implementation, the terminal device 100 may have more or fewer components than those shown in the figure, may combine two or more components, or may have different component configurations.
- the various components shown in FIG1 may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application-specific integrated circuits.
- the software structure of the terminal device 100 is described below. Before describing the software structure of the terminal device 100, the architecture that can be adopted by the software system of the terminal device 100 is first described.
- the software system of the terminal device 100 can adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
- the software systems used by the current mainstream terminal devices include but are not limited to Windows systems, Android systems and iOS systems.
- the present application embodiment takes the layered architecture Android system as an example to exemplify the software structure of the terminal device 100.
- FIG. 2 is a software structure block diagram of the terminal device 100 according to an embodiment of the present application.
- the layered architecture of the terminal device 100 divides the software into several layers, each of which has a clear role and division of labor.
- the layers communicate with each other through software interfaces.
- the Android system is divided into four layers, from top to bottom, namely, the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
- the application layer may include a series of application packages. As shown in FIG2 , the application package may include applications such as gallery, settings, text messages, mailboxes, browsers, videos, etc., which are not listed here one by one and are not limited in this application.
- the application framework layer provides application programming interfaces (APIs) and programming frameworks for applications in the application layer.
- programming frameworks can be described as functions/services/frameworks, etc.
- the application framework layer can be divided into the system service framework layer (commonly known as the Framework layer, which is implemented based on the Java language) and the local service framework layer (commonly known as the native layer, which is implemented based on the C or C++ language) based on the programming interface it manages and the implementation language of the programming framework.
- the Framework layer may include a window manager, an input manager, a content provider, a view system, an activity manager, etc., which are not listed one by one here and are not limited in this application.
- the window manager is used to manage window programs.
- the window manager can obtain the display screen size and determine whether there is a status bar, a lock screen, etc.; specifically, in the technical solution provided in this application, it is also used to determine the focus window so as to obtain information such as the focus window layer and the corresponding application package name.
- the input manager (InputManagerService) is used to manage the program of the input device.
- the input manager can determine input operations such as mouse click operations, keyboard input operations, and hand-free sliding operations.
- in the technical solution provided in this application, it is mainly used to determine sliding operations performed without lifting the finger.
- the content provider is used to store and obtain data and make the data accessible to the application.
- the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc., which are not listed here and are not limited by this application.
- the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, etc.
- the view system can be used to build applications.
- a display interface can be composed of one or more views.
- a display interface including a text notification icon can include a view for displaying text and a view for displaying pictures.
- the activity manager is used to manage the life cycle of each application and the navigation back function. It is responsible for creating the Android main thread and maintaining the life cycle of each application.
- the native layer may include an input reader, an input dispatcher, an image synthesis system, etc., which are not listed one by one here and are not limited in this application.
- EventHub is an event monitoring port/function.
- InputReader is the input reader. After receiving an event, InputReader sends the event to the input dispatcher (InputDispatcher).
- after receiving the event sent by InputReader, InputDispatcher distributes the event to the corresponding application (hereinafter referred to as: the application) through the input manager.
- the image synthesis system, namely the surface flinger (hereinafter referred to as the SF thread), is used to control image synthesis and to generate vertical synchronization (Vertical Synchronization, VSync) signals.
- SF threads include: synthesis thread, VSync thread, cache thread (such as queue buffer).
- the synthesis thread is woken up by the VSync signal to perform synthesis; the VSync thread is used to generate the next VSync signal according to a VSync signal request; the cache thread contains one or more cache queues, and each cache queue is used to store the cache data of its corresponding application, such as image data drawn and rendered according to data frames.
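- The per-application cache queue can be pictured with the highly simplified Java model below; the real buffer queue in the SF thread is native code, so this is only a conceptual sketch, not its actual interface.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Conceptual model only: the UI thread enqueues rendered frame image data and the
// synthesis thread dequeues it for composition.
final class FrameCacheQueue {
    private final BlockingQueue<byte[]> slots = new ArrayBlockingQueue<>(3); // bounded buffer slots

    void push(byte[] renderedFrame) throws InterruptedException {
        slots.put(renderedFrame);       // blocks when the queue is full
    }

    byte[] pollForComposition() {
        return slots.poll();            // returns null when no frame is available to synthesize
    }
}
```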
- Android Runtime includes core libraries and virtual machines. Android Runtime is responsible for the scheduling and management of the Android system.
- the core library consists of two parts: one part is the function that needs to be called by the Java language, and the other part is the Android core library.
- the system library can include multiple functional modules, such as image rendering library, image synthesis library, input processing library, media library, etc.
- the image rendering library is used for rendering two-dimensional or three-dimensional images
- the image synthesis library is used for synthesizing two-dimensional or three-dimensional images.
- the application renders the image through the image rendering library, and then the application sends the rendered image to the cache queue of the SF thread.
- the SF thread sequentially obtains a frame of image to be synthesized from the cache queue, and then performs image synthesis through the image synthesis library.
- the input processing library is a library for processing input devices, which can realize mouse, keyboard and hand-free sliding input processing, etc.
- the media library supports playback and recording of a variety of commonly used audio and video formats, as well as static image files, etc.
- the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG and PNG, etc.
- the kernel layer in the Android system is a layer between hardware and software.
- the kernel layer at least includes sensor driver, display driver, audio driver, Bluetooth driver, GPS driver, etc.
- the software structure of the terminal device 100 is introduced here. It can be understood that the layers in the software structure shown in FIG. 2 and the components contained in each layer do not constitute a specific limitation on the terminal device 100. In other embodiments of the present application, the terminal device 100 may include more or fewer layers than shown in the figure, and each layer may include more or fewer components, which is not limited in the present application.
- Frame refers to the smallest unit of a single picture in the interface display.
- a frame can be understood as a still picture. Displaying multiple connected frames quickly and continuously can create the illusion of object movement.
- Frame rate refers to the number of frames that refresh the image in 1 second, which can also be understood as the number of times the graphics processor in the terminal device refreshes the screen per second.
- a high frame rate can produce smoother and more realistic animations. The more frames per second, the smoother the displayed action will be.
- Frame drawing refers to the picture drawing of the display interface.
- the display interface can be composed of one or more views, each view can be drawn by the visual control of the view system, each view is composed of subviews, and a subview corresponds to a small widget in the view, for example, one of the subviews corresponds to a symbol in the picture view.
- Frame rendering refers to coloring the drawn view or adding 3D effects, etc.
- 3D effects can be lighting effects, shadow effects, and texture effects.
- Frame synthesis refers to the process of synthesizing multiple rendered views into a display interface.
- VSync (vertical synchronization): terminal devices generally display based on VSync signals, so as to synchronize the image drawing, rendering, synthesis and screen refresh display processes.
- the VSync signal is a periodic signal
- the VSync signal period can be set according to the screen refresh rate.
- for example, the VSync signal period can be 16.6 ms, that is, the terminal device generates a control signal every 16.6 ms to trigger the VSync signal.
- the VSync signal period can also be 11.1 ms, that is, the terminal device generates a control signal every 11.1 ms to trigger the VSync signal.
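- The quoted periods follow directly from the screen refresh rate, as the small helper below shows (16.6 ms corresponds to 60 Hz and 11.1 ms to 90 Hz).

```java
// VSync period is the reciprocal of the refresh rate:
// 1000 / 60 ≈ 16.6 ms, 1000 / 90 ≈ 11.1 ms, 1000 / 120 ≈ 8.3 ms.
static double vsyncPeriodMs(double refreshRateHz) {
    return 1000.0 / refreshRateHz;
}
```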
- VSync signals include software VSync (VSync-APP or VSync-SF) and hardware VSync (VSync-HW).
- VSync-APP is used to trigger the drawing and rendering process
- VSync-SF is used to trigger the synthesis process
- the hardware VSync signal (VSync-HW) is used to trigger the screen display refresh process.
- software VSync and hardware VSync keep their periods synchronized. Taking a change between 60 Hz and 120 Hz as an example, if VSync-HW switches from 60 Hz to 120 Hz, VSync-APP and VSync-SF change synchronously and also switch from 60 Hz to 120 Hz.
- the kernel layer processes the touch operation into an original input event (including touch coordinates, touch force, timestamp of the touch operation, and other information), and EventHub listens to the original input event and stores it in the kernel layer.
- InputReader reads the original input event from the kernel layer through the input processing library and hands it over to InputDispatcher, which packages the original input event, such as encapsulating it into a set data format, and determines the application corresponding to the original input event, and then reports the original input event to the input manager.
- the input manager parses the information of the original input event (including: operation type and reporting point location, etc.) and determines the focus application based on the current focus, that is, the application corresponding to the original input event, and sends the parsed information to the focus application.
- the focus can be the finger touch point in a touch operation or the click position in a stylus or mouse click operation.
- the focus application is the application running in the foreground of the terminal device or the application corresponding to the touch position in the touch operation.
- the focus application determines the control corresponding to the original input event based on the information of the parsed original input event (for example, the reported point position).
- the WeChat application calls the image rendering library in the system library through the view system to draw and render the image.
- the WeChat application sends the drawn and rendered image to the cache queue of the SF thread.
- the drawn and rendered image is synthesized into the WeChat interface through the image synthesis library in the system library.
- the SF thread uses the display driver of the kernel layer to make the display screen display the corresponding interface of the WeChat application.
- (1) and (2) in FIG3 and (1) and (2) in FIG4 are schematic diagrams of interfaces of a terminal device in different applications in a possible implementation.
- the terminal device may receive an upward sliding operation or a downward sliding operation by a user in the direction of the arrow in the interface of the social application shown in (1) of Figure 3, or in the setting-related interface shown in (2) of Figure 3, or in the document interface shown in (1) of Figure 4, or in the product browsing interface shown in (2) of Figure 4.
- the terminal device receives the sliding operation made by the user, the terminal device performs frame drawing, rendering, synthesis and other processes based on the sliding operation, and finally displays the synthesized picture, so that the content displayed on the display changes as the user's finger moves on the display, that is, the content of the current interface is updated following the movement of the finger.
- the interface display of the display screen of the terminal device usually needs to go through the processes of drawing, rendering and synthesis.
- the interface drawing process of the terminal device may include the processes of background drawing, subview drawing, scroll bar drawing, etc.
- the interface synthesis process of the terminal device may include the processes of vertex processing and pixel processing.
- after the focus application obtains the original input event dispatched by InputDispatcher through InputManagerService, it distributes the original input event.
- the distribution process is, for example, that the drawing rendering thread (such as Choreographer) used for interface drawing in the UI thread corresponding to the focus application initiates a VSync signal request, and draws one frame after receiving the VSync signal.
- t1 to t7 are the time points when the Choreographer in the UI thread receives the VSync signal.
- the VSync signal is sent to the Choreographer by the SF thread, specifically the VSync thread in the SF thread, in each VSync signal cycle after receiving the VSync signal request. Since the VSync signal has a fixed period, the duration between any two time points from t1 to t7 is fixed, that is, one VSync signal cycle.
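- At the application level, this VSync-driven rhythm can be observed through the public Choreographer API, as in the hedged example below; the internal flow described here goes through ViewRootImpl rather than this public callback, and drawAndRenderOneFrame is a hypothetical placeholder.

```java
import android.view.Choreographer;

// Each callback fires once per VSync signal; the next callback must be re-requested
// explicitly, mirroring the per-cycle VSync signal request described above.
void startVsyncDrivenRendering() {
    Choreographer.getInstance().postFrameCallback(new Choreographer.FrameCallback() {
        @Override
        public void doFrame(long frameTimeNanos) {
            drawAndRenderOneFrame(frameTimeNanos);               // placeholder for per-frame work
            Choreographer.getInstance().postFrameCallback(this); // request the next VSync signal
        }
    });
}
```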
- after receiving the VSync signal at time point t1, Choreographer renders frame 1.
- each frame of drawn frame image data will be sent by Choreographer in the UI thread to the cache thread of the SF thread, such as the frame M image data in Figure 6, and the cache thread will cache the frame image data into the cache queue corresponding to the focus application, so that the synthesis thread in the SF thread can take it out from the cache queue for synthesis at the corresponding synthesis time point.
- where N = M − 1, and M is an integer greater than 1.
- Choreographer sends the drawn frame 1 image data to the cache thread of the SF thread, and the cache thread caches the frame 1 image data into the cache queue corresponding to the focus application. Accordingly, when the corresponding synthesis time point is reached, the synthesis thread takes out the frame 1 image data from the cache queue for synthesis. After the synthesis is completed, the terminal device can display the content corresponding to frame 1 on the display screen by calling the display driver of the kernel layer.
- the drawing, rendering, synthesis, and display of frames 2 to 7 received at time points t2 to t7 are similar to those of frame 1 and are not described in detail here.
- each frame lags for a certain length of time from drawing and rendering to synthesis, and for another length of time from synthesis to display; these two lag durations may be the same or different. Therefore, as shown in FIG5, frame 1 starts to be rendered at time point t1, and during the drawing and rendering cycle of frame 1 (for example, the period from t1 to t2 in FIG5) the SF thread synthesizes the frame image data preceding frame 1 (not shown in FIG5), and the display driver drives the display screen to display the content from before the original input event was triggered.
- after completing the drawing and rendering of frame 1, Choreographer will send a request for the next VSync signal to the VSync thread; even if there is still available time in the current cycle, the next frame will not be rendered. Instead, the rendering of frame 2 begins only after the next VSync signal is received, for example at time point t2 in Figure 5.
- here, the lag time from drawing and rendering to synthesis of each frame is taken as one VSync signal cycle as an example.
- accordingly, the synthesis thread will take out the image data of frame 1 from the cache queue and synthesize it.
- correspondingly, the display driver drives the display screen to display the content corresponding to frame 1 starting at time point t3, and the content corresponding to frame 1 is displayed during the entire duration from t3 to t4, until the next VSync signal cycle, when the newly synthesized content is taken out for display.
- FIG5 shows that each frame is rendered within the corresponding cycle.
- the content displayed on the display screen will refresh the interface according to a fixed cycle as the finger/stylus moves.
- take the focus application being a social application as an example.
- the user slides from point P1 to point P4 along the arrow direction on the current interface without letting go of the hand, and the touch sensor in the touch panel receives the operation.
- the kernel layer processes the operation as a raw input event, and its EventHub listens to the raw input event and stores it in the kernel layer.
- InputReader reads the raw input event from the kernel layer through the input processing library and hands it to InputDispatcher, which packages the raw input event, such as encapsulating it into a set data format, and determines the application corresponding to the raw input event, and then reports the raw input event to the input manager.
- the input manager parses the information of the raw input event (including: operation type and reporting point position, etc.) and determines the focus application according to the current focus, and sends the parsed information to the focus application.
- the Choreographer in the focus application will request a VSync signal. After receiving the VSync signal, it will start to render the frame data corresponding to the current reported point, and the synthesis thread will synthesize it, and the display driver will send it to the display.
- the interface content shown in Figure 7 (2) will be displayed.
- Choreographer starts drawing and rendering frame 3.
- if Choreographer has not completed the drawing of frame 3 within the period from t3 to t4, that is, the drawing and rendering of frame 3 has timed out, then when a new VSync signal is received at time point t4, frame 4 is not drawn and rendered because frame 3 has not yet been drawn and rendered.
- because frame 3, whose drawing and rendering started at time point t3, has not been completed, there is no frame 3 image data in the cache queue at time point t4; therefore, the synthesis thread cannot obtain the frame 3 image data from the cache queue at time point t4.
- Choreographer then completes the rendering of frame 3. Since no new VSync signal, i.e., the VSync signal corresponding to time point t5, has been received yet, Choreographer enters a short idle period. After receiving the VSync signal at time point t5, Choreographer starts rendering frame 5. Meanwhile, because the rendering of frame 3 was completed before time point t5, the image data of frame 3 is cached in the cache queue. Therefore, at time point t5, the synthesis thread can take out the image data of frame 3 from the cache queue for synthesis, so that after a lag of one VSync signal cycle, i.e., at time point t6 in FIG8, the display driver sends the content corresponding to frame 3 synthesized by the synthesis thread for display.
- since the content corresponding to frame 2 begins to be displayed at time point t4, and the display driver does not obtain a newly synthesized frame, specifically the content corresponding to frame 3, until time point t6, the content corresponding to frame 2 continues to be displayed from time point t4 to time point t6.
- in the scenario where multiple frames are lost continuously, Choreographer still has not completed the drawing of frame 3 when the subsequent VSync signals arrive.
- since frame 3 has not been drawn and rendered yet, frame 6 is not drawn and rendered either.
- likewise, the synthesis thread cannot obtain the frame 3 image data from the cache queue at time point t7.
- Choreographer completes the rendering of frame 3. Since no new VSync signal, i.e., the VSync signal corresponding to time point t7, has been received, Choreographer enters a short blank period. After receiving the VSync signal at time point t7, Choreographer starts rendering frame 7. At the same time, because the drawing and rendering of frame 3 is completed before time point t7, the image data of frame 3 is cached in the cache queue.
- at time point t7, the synthesis thread can take out the image data of frame 3 from the cache queue for synthesis, so that after a lag of one VSync signal cycle, that is, at time point t8 in Figure 10, the display driver sends the content corresponding to frame 3 synthesized by the synthesis thread for display.
- since the content corresponding to frame 2 starts to be displayed at time point t4, and the display driver does not obtain a newly synthesized frame, specifically the content corresponding to frame 3, until time point t8, the content corresponding to frame 2 is continuously displayed from time point t4 to time point t8. If more frames are lost during the drawing and rendering phase, the same picture stays on the display screen for even longer under the sliding operation performed without lifting the finger. Moreover, because multiple frames are lost, such as frame 4, frame 5 and frame 6, after the content corresponding to frame 3 is displayed, the display driver directly drives the display screen to display the content of frame 7 in the next display cycle; the frames lost in between cause the picture to jump, affecting the user experience.
- the case of the user performing a sliding motion in the interface of a social application and the resulting change of the interface content is still taken as an example.
- the user still slides along the arrow direction from point P1 to point P4, wherein the drawing and rendering of the frame corresponding to point P1 has not timed out, so the interface content corresponding to the finger at point P1 is displayed normally as shown in FIG11 (1).
- the present application provides a data processing method to solve the above-mentioned stuttering and jumping phenomena caused by drawing rendering timeout and frame loss.
- FIG12 is a schematic diagram of functional modules involved in a data processing method provided in an embodiment of the present application, and the locations of these functional modules.
- the functional modules involved may include applications located at the application layer.
- this application is referred to below as the focus application.
- the window manager and input manager are located in the Framework layer
- the SF thread, input reader, and input dispatcher are located in the native layer
- the hardware synthesizer is located in the hardware abstraction layer (HAL layer)
- the display driver and sensor driver are located in the kernel layer, as well as the display screen, sensor and other hardware.
- the implementation of the data processing method will also involve an image rendering library, an image synthesis library, and an input processing library in the system library.
- the SF thread includes a VSync thread, a cache thread, and a composition thread.
- During a sliding operation without lifting the finger, the VSync thread sends a VSync signal to the Choreographer once in each VSync signal cycle, so that the Choreographer starts drawing and rendering a frame after receiving the VSync signal.
- Choreographer needs to first read the original input event (hereinafter referred to as: Input event) from ViewRootImpl and process it according to the timestamp corresponding to the VSync signal (during the sliding operation without letting go of the hand, each reporting point will correspond to a timestamp).
- the acquisition of Input events is to send a request for obtaining reporting point information (carrying the timestamp corresponding to the above-mentioned VSync signal) to InputManagerService through ViewRootImpl, and then InputManagerService transmits the reporting point information acquisition request to InputDispatcher, and InputDispatcher transmits the reporting point information acquisition request to InputReader, and finally InputReader obtains the Input event corresponding to the timestamp carried in the reporting point information acquisition request from the kernel layer, and obtains and saves the reporting point information in the kernel layer when the Input event occurs.
- The InputDispatcher then calls the CallBack registered in it by the focus application, and returns the reporting point information to the ViewRootImpl of the focus application through the InputManagerService. The information is then passed from ViewRootImpl to Choreographer to complete the acquisition of a frame of data.
- Choreographer will pass the currently read Input event to FirstInputManager before obtaining the frame data and starting drawing and rendering.
- Input events for the sliding operation without releasing the finger usually include DOWN events (finger drops), MOVE events (finger moves), and UP events (finger lifts).
- FirstInputManager detects changes in the sequence numbers of the continuously input Input events.
- When FirstInputManager detects that the sequence changes from a DOWN event to a MOVE event, that is, the content displayed on the display screen starts to move for the first time, a new event (Event) can be generated and notified to Choreographer, so that Choreographer continues to draw and render one more frame in the same cycle after drawing and rendering the currently received frame data; that is, after the first frame is completed, a frame is inserted in the cycle where the first frame is located.
- When FirstInputManager detects frame loss, it also calculates the number of frames to be supplemented and the specific frames that need to be supplemented, and then re-triggers the above frame acquisition process to obtain the lost frames for supplementation. A sketch of both triggers follows below.
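- For illustration only, a minimal Java sketch of these two triggers (first-frame insertion on a DOWN-to-MOVE change, and frame supplementation after a drawing and rendering overrun) is given below; FirstInputManager is named in the description above, but the listener interface, fields, and method signatures here are assumptions rather than the actual framework API.

```java
// Illustrative sketch only: the trigger logic described above, with assumed names.
public class FirstInputManager {
    public static final int ACTION_DOWN = 0;
    public static final int ACTION_MOVE = 2;

    public interface Listener {
        void onInsertFirstFrame();          // insert one extra frame in the current VSync cycle
        void onSupplementFrames(int count); // re-trigger frame acquisition for the lost frames
    }

    private final Listener listener;
    private int lastAction = -1;

    public FirstInputManager(Listener listener) {
        this.listener = listener;
    }

    // Called by the drawing and rendering thread with each Input event it is about to draw.
    public void onInputEvent(int action, long frameStartNanos, long frameEndNanos, long vsyncPeriodNanos) {
        // DOWN -> MOVE transition: the displayed content starts to move for the first time,
        // so one more frame is drawn in the same cycle as the first frame.
        if (lastAction == ACTION_DOWN && action == ACTION_MOVE) {
            listener.onInsertFirstFrame();
        }
        // Drawing of the current frame overran one or more VSync periods: frames were lost,
        // so compute how many need to be supplemented and re-trigger frame acquisition.
        long lost = (frameEndNanos - frameStartNanos) / vsyncPeriodNanos - 1;
        if (lost > 0) {
            listener.onSupplementFrames((int) lost);
        }
        lastAction = action;
    }
}
```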
- After Choreographer completes the drawing and rendering of each frame, it transmits the rendered frame image data to the cache thread, which then caches it in the corresponding cache queue.
- the synthesis thread takes out the frame image data from the cache queue for synthesis, and finally transmits the synthesized content to the hardware synthesizer, which calls the display driver to drive the display screen for display, thereby realizing the update of the picture displayed on the display screen.
- FIG. 13 is a timing diagram of the interaction of functional modules involved in the implementation of a data processing method provided in an embodiment of the present application.
- FIG. 13 directly takes as an example the case where a user performs a sliding operation without lifting the finger (an Input event) in a focus application: the sensor detects the sliding operation; the input reader obtains the Input event and the reporting point information corresponding to it, and reports the Input event to the input dispatcher; the input dispatcher obtains the focus application corresponding to the Input event from the window manager and then, according to the package name and other information of the focus application, dispatches the Input event, through the CallBack registered by the focus application in the input dispatcher and via the input manager, to the input thread of the focus application for recording and management; and the focus application initiates a request for a VSync-APP signal to the VSync thread in response to the user's sliding operation. On this basis, the interaction process is described in detail.
- the VSync-APP signal and VSync-SF signal involved in Figure 13 are first explained.
- the time point of receiving the VSync-APP signal is the time point that triggers the drawing and rendering thread to start drawing and rendering, such as each time point from t1 to t7 in Figure 5
- the VSync-SF signal is the time point that triggers the synthesis thread to start synthesis.
- the synthesis processing time point for the drawing and rendering image corresponding to frame 1 can be the t2 time point in Figure 5.
- The request for the VSync-APP signal is initiated by a thread of the focus application, such as the application main thread (UI thread), to the VSync thread (the UI thread is not shown separately in Figure 13; direct initiation by the drawing and rendering thread is taken as an example), and the request for the VSync-SF signal can be initiated by the cache thread to the VSync thread, for example.
- Since the sending periods of the VSync-APP signal and the VSync-SF signal are fixed and related to the frame rate, and the lag between the VSync-APP signal and the VSync-SF signal is also fixed, such as the one VSync signal period mentioned above, the VSync thread generates the VSync-APP signal and the VSync-SF signal according to the VSync signal period corresponding to the current frame rate, and in each VSync signal period sends the generated VSync-APP signal to the drawing and rendering thread and the generated VSync-SF signal to the synthesis thread, as sketched below.
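- For illustration only, a minimal Java sketch of this fixed-period signal generation is shown below; the scheduler, the method names, and the one-period offset between the two signals are assumptions for readability, not the actual SurfaceFlinger implementation.

```java
// Illustrative sketch only: fixed-period VSync-APP / VSync-SF generation with assumed names.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class VsyncThreadSketch {
    public static void main(String[] args) {
        long frameRate = 60;                          // assumed current frame rate
        long periodMicros = 1_000_000L / frameRate;   // one VSync signal period, ~16666 us at 60 Hz

        ScheduledExecutorService vsync = Executors.newSingleThreadScheduledExecutor();
        // VSync-APP is delivered every period to the drawing and rendering thread.
        vsync.scheduleAtFixedRate(() -> sendVsyncApp(System.nanoTime()),
                0, periodMicros, TimeUnit.MICROSECONDS);
        // VSync-SF lags VSync-APP by one signal period in the description above.
        vsync.scheduleAtFixedRate(() -> sendVsyncSf(System.nanoTime()),
                periodMicros, periodMicros, TimeUnit.MICROSECONDS);
        // The program keeps running; in a real system the signals would stop with the animation.
    }

    static void sendVsyncApp(long timestampNanos) { /* deliver to Choreographer / render thread */ }
    static void sendVsyncSf(long timestampNanos)  { /* deliver to the synthesis (SF) thread */ }
}
```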
- the Input events corresponding to the sliding operation can include DOWN events, MOVE events and UP events.
- The drawing and rendering thread sends a request for a VSync-SF signal to the VSync thread, and the VSync thread responds to the request.
- When VSync-APP signal 1 is received, if the finger is moving, the read Input event 1 can be a MOVE event; if the finger has been lifted, it is an UP event.
- This embodiment takes the case where Input event 1 is a MOVE event as an example.
- the input dispatcher will continuously dispatch the received Input events to the input thread through the input manager.
- For Input events, the corresponding reporting frequency depends on the sampling rate of the display. For example, at a sampling rate of 120 Hz, the display driver dispatches data to the input thread roughly every 8 ms and stores it in the event queue for consumption. The drawing and rendering thread consumes the Input events in the event queue according to the VSync signal timestamp, as sketched below.
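- The following is an illustrative Java sketch of this timestamp-based consumption; the queue structure and field names are assumptions, not the actual input framework classes.

```java
// Illustrative sketch only: pick the newest queued report point not later than the VSync timestamp.
import java.util.ArrayDeque;
import java.util.Deque;

class InputEventQueueSketch {
    static class ReportedEvent {
        final long timestampNanos; // report-point timestamp, ~every 8 ms at a 120 Hz report rate
        final float x, y;
        ReportedEvent(long t, float x, float y) { this.timestampNanos = t; this.x = x; this.y = y; }
    }

    private final Deque<ReportedEvent> queue = new ArrayDeque<>();

    void enqueue(ReportedEvent e) { queue.addLast(e); }

    // Consume all events up to the VSync timestamp and return the latest one for drawing.
    ReportedEvent consumeFor(long vsyncTimestampNanos) {
        ReportedEvent latest = null;
        while (!queue.isEmpty() && queue.peekFirst().timestampNanos <= vsyncTimestampNanos) {
            latest = queue.pollFirst();
        }
        return latest; // null means no new report point for this frame
    }
}
```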
- the drawing and rendering thread will obtain the frame data required for drawing and rendering through the input thread, input manager, input dispatcher and input reader in the way of obtaining the point information mentioned above, such as frame 1 mentioned above, and then draw and render frame 1, and cache the drawn and rendered image 1 into the cache queue allocated by the cache thread for the focus application.
- the drawing and rendering thread needs to first initiate a request to the cache thread to allocate a cache queue.
- the cache thread allocates the corresponding cache queue for the focus application and informs the drawing and rendering thread of the address information of the allocated cache queue. In this way, after the drawing and rendering thread completes the drawing and rendering of each frame, it can cache the rendered image into the cache queue according to the address information.
- When drawing and rendering image 1 according to Input event 1, the rendering thread will also send Input event 1 to the interpolation module, and the interpolation module determines whether to perform a frame interpolation operation or a frame supplementation operation.
- The interpolation module detects the types of the two adjacent Input events (in this embodiment, Input event 1 is a MOVE event, and the last received Input event 0, which is adjacent to Input event 1, is a DOWN event), determines that the two adjacent Input events are a DOWN event and a MOVE event respectively, and generates a new event, namely an insertion event. The rendering thread is then notified to process the insertion event, that is, to perform another drawing and rendering operation in the current processing cycle.
- Since the cycle for sending a new VSync-APP signal, such as VSync-APP signal 2, has not yet arrived, the drawing and rendering thread has not received VSync-APP signal 2 and will not obtain Input event 2 corresponding to VSync-APP signal 2, that is, the frame data for drawing and rendering, from the input thread.
- When the interpolation module generates an interpolation event, it can, for example, add a corresponding offset to the position of the previous Input event to generate a new Input event. The new Input event is the interpolation event mentioned above, so that the drawing and rendering thread can perform another drawing and rendering according to the new Input event (interpolation event), as sketched below.
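- A minimal, illustrative Java sketch of generating such an offset event is given below; the offset value, the class shape, and the method names are assumptions and not the actual implementation.

```java
// Illustrative sketch only: build the first-frame interpolation event (frame 1' above) by
// offsetting the previous MOVE event's position; names and offset value are assumptions.
class InterpolationSketch {
    static final float INSERT_OFFSET_PX = 2.0f; // assumed small offset along the sliding direction

    static class MoveEvent {
        final long timestampNanos;
        final float x, y;
        MoveEvent(long t, float x, float y) { this.timestampNanos = t; this.x = x; this.y = y; }
    }

    // The render thread draws the returned event in the same VSync cycle, after the current frame.
    static MoveEvent buildInsertedEvent(MoveEvent last, float dirX, float dirY) {
        return new MoveEvent(last.timestampNanos,
                last.x + dirX * INSERT_OFFSET_PX,
                last.y + dirY * INSERT_OFFSET_PX);
    }
}
```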
- the package name of the application can be obtained first, and then the application category of the application can be determined according to the package name. Finally, it is determined whether the application category of the application matches the application type that supports first frame insertion or frame loss and frame filling, such as whether it is in the set whitelist.
- the drawing and rendering thread sends the currently processed Input event to the interpolation module, so that the interpolation module performs the operations in steps S103, S108, S112, etc.
- the cold start mentioned above means that when the application is started, there is no process of the application in the background. At this time, the system will recreate a new process and assign it to the application.
- This startup method is called a cold start (the application process does not exist in the background).
- the system will recreate a new process and assign it to the application, so it will first create and initialize the application class (Application class), then create and initialize the MainActivity class corresponding to the application that provides an interface for interaction with the user (including a series of measurements, layouts, and drawing), and finally the interface of the application, such as the default homepage displayed after starting the application, will be displayed on the display.
- the security protection program in the terminal device (which may be called IAware APK) is used to obtain the package name.
- application types set in the whitelist mentioned above can be news, instant messaging, shopping, browsers, videos, short videos, forums and other application types, which will not be listed one by one here, and this embodiment does not limit this.
- the method of determining the application type according to the package name may be based on the classification of application types according to the package names of different applications in an application market installed in the terminal device for providing downloadable applications.
- The enable identifier corresponding to the interpolation module can be set to "True", so that when each frame is drawn and rendered and the rendering thread recognizes that the enable identifier corresponding to the interpolation module is "True", the relevant information of the currently processed Input event can be transmitted to the interpolation module, thereby triggering the interpolation module to perform judgment and processing. Conversely, if the enable identifier corresponding to the interpolation module is recognized as "False", the interpolation module will not participate in the entire sliding operation; that is, the first-frame interpolation operation will not be performed, and no frame supplementation will be performed after a frame is lost. A sketch of this gating follows below.
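- For illustration only, a Java sketch of the package-name whitelist and enable-flag gating is shown below; the category names, the lookup source, and the class shape are assumptions rather than the actual system interfaces.

```java
// Illustrative sketch only: cold-start category lookup plus the "enable identifier" gate.
import java.util.Set;

class InterpolationGateSketch {
    private static final Set<String> WHITELISTED_CATEGORIES =
            Set.of("news", "im", "shopping", "browser", "video", "short_video", "forum");

    private boolean enabled; // the "enable identifier" for the interpolation module

    void onColdStart(String packageName) {
        // In the description this classification comes from the installed application market.
        String category = lookupCategoryFromMarket(packageName);
        enabled = category != null && WHITELISTED_CATEGORIES.contains(category);
    }

    boolean shouldForwardToInterpolationModule() {
        return enabled; // "True": forward the current Input event; "False": skip interpolation entirely
    }

    private String lookupCategoryFromMarket(String packageName) {
        // Placeholder lookup; returning null means the category is unknown.
        return null;
    }
}
```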
- the drawing and rendering thread when the drawing and rendering thread recognizes that the enable identifier corresponding to the interpolation module is "True", it can also determine which control in the interface displayed on the display screen the Input event is for based on the reporting point information corresponding to the currently processed Input event. Accordingly, when the control targeted by the current Input event is a RecyclerView control or a ListView control, when the drawing and rendering thread performs drawing and rendering according to the Input event, it will send the currently processed Input event to the interpolation module, so that the interpolation module performs the operations in steps S103, S108, S112, etc.
- the above judgment can further determine whether the interpolation module needs to perform corresponding processing when drawing and rendering each frame, so as to avoid the involvement of the interpolation module in scenarios where drawing timeout and frame loss do not usually occur, thereby reducing the occupancy of terminal device resources and improving the data processing speed.
- the drawing and rendering thread can determine the number of layers to be drawn and rendered during the drawing and rendering of each frame.
- the number of layers is one, that is, in a single-layer drawing and rendering scenario
- the drawing and rendering thread sends the relevant information of the currently processed Input event to the interpolation module for processing.
- the relevant information of the current Input event is not sent to the interpolation module.
- The multi-layer scenario mentioned above is, for example, a scenario in which other controls cover a RecyclerView control or a ListView control.
- a small window displays the live content in the details interface.
- When the rendering thread is currently rendering a single layer of the interface, it can also be determined whether the sliding distance corresponding to two adjacent Input events is greater than the minimum sliding distance threshold TouchSlop corresponding to the sliding operation.
- When drawing and rendering is performed according to the Input event, the rendering thread sends the currently processed Input event to the interpolation module, so that the interpolation module executes the operations in steps S103, S108, S112, etc. A sketch of these per-frame checks follows below.
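- The following Java sketch illustrates the layer-count and TouchSlop checks described above; in a real Android application the TouchSlop value would normally come from ViewConfiguration, and the method shape used here is an assumption.

```java
// Illustrative sketch only: forward the event to the interpolation module only for a
// single-layer scene whose sliding distance exceeds TouchSlop.
class PerFrameGateSketch {
    // touchSlop would normally be ViewConfiguration.get(context).getScaledTouchSlop()
    static boolean shouldForward(int layerCount, float prevX, float prevY,
                                 float curX, float curY, float touchSlop) {
        if (layerCount != 1) {
            return false; // multi-layer scene (e.g. a floating window over the list): skip interpolation
        }
        float dx = curX - prevX;
        float dy = curY - prevY;
        // Compare squared distances to avoid a square root.
        return (dx * dx + dy * dy) > (touchSlop * touchSlop);
    }
}
```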
- the drawing and rendering thread draws and renders the interpolation frame image according to the interpolation frame event, and caches the drawn and rendered interpolation frame image into a cache queue corresponding to the focus application.
- drawing and rendering operation performed based on the frame data extracted from the interpolation event is similar to the drawing and rendering operation performed based on the frame data in the normally received Input event.
- the specific implementation details can be found above and will not be repeated here.
- VSync-APP signal 1 is received at time t1, and during the period from t1 to t2, that is, before receiving VSync-APP signal 2, the drawing and rendering thread will perform drawing and rendering operations on frame 1.
- frame 1' in the interpolation event will also be drawn and rendered during this period. That is, during the period from t1 to t2, two drawn and rendered images will be obtained, namely, the image corresponding to frame 1 and the image corresponding to frame 1'.
- step S106 is executed.
- When the VSync thread sends VSync-APP signal 2 to the drawing and rendering thread, it will also send VSync-SF signal 1 to the synthesis thread.
- After receiving VSync-SF signal 1, the synthesis thread will take out a frame of rendered image from the cache queue and execute step S105.
- After receiving a VSync-SF signal, the synthesis thread will take out from the cache queue a frame of rendered image that was previously put into it and synthesize it. If there is no frame in the queue, no processing is performed.
- The synthesis thread takes out from the cache queue the rendered image 1 corresponding to Input event 1 that was previously cached, and performs synthesis processing on rendered image 1.
- The synthesis thread will send the synthesized image 1 to the display driver, and the display driver will then send it for display, that is, drive the display screen to display the content of image 1, to achieve an update of the picture.
- When the synthesis thread sends the synthesized image 1 to the display driver, it specifically goes through the hardware synthesizer located at the HAL layer shown in Figure 12, and the hardware synthesizer then transmits the synthesized image 1 to the display driver.
- When VSync-SF signal 1 is received at time t2, then between t2 and t3, that is, before receiving VSync-SF signal 2, the synthesis thread will synthesize the image corresponding to frame 1.
- the display driver receives the image corresponding to frame 1 synthesized by the synthesis thread, and can drive the display screen to display the image corresponding to frame 1.
- the drawing rendering thread reads the recorded Input event 2 from the input thread according to the timestamp of the VSync-APP signal 2 (taking the Input event as a MOVE event as an example).
- After drawing and rendering the frame data in Input event 2, the rendering thread will also cache image 2 in the cache queue. At the same time, when drawing and rendering image 2 according to Input event 2, the rendering thread will also send Input event 2 to the interpolation module, and the interpolation module will determine whether to perform a frame interpolation operation.
- the interpolation operation/interpolation event mentioned in this embodiment is specifically an interpolation operation performed on the first frame.
- the frame inserted in this operation is part of the first frame, and the drawing and rendering thread does not need to read new Input events from the input thread.
- The frame supplementation operation/event is specifically an operation performed when a frame is lost due to a drawing and rendering timeout of a certain frame during the movement of the finger.
- the frame inserted in this operation is a lost frame, that is, the drawing and rendering thread needs to read Input events from the input thread.
- these two operations can both be called interpolation operations, and this embodiment does not limit this name.
- the interpolation module detects the types of two adjacent Input events (Input event 1 and Input event 2), determines that the two adjacent Input events are a MOVE event and a MOVE event, and the drawing and rendering of image 2 has not timed out, and does not trigger a frame interpolation operation.
- step S110 is executed.
- the VSync thread will continue to send the corresponding VSync-SF signal to the synthesis thread in each VSync signal cycle. Accordingly, after receiving a new VSync-SF signal, the synthesis thread will also take out the drawn and rendered image at the head of the queue from the cache queue for synthesis processing, and send the synthesized image to the display driver, which will be displayed by the display driver. For example, when receiving VSync-SF signal 2, the synthesis thread will execute step S109, and when receiving VSync-SF signal 3, the synthesis thread will execute step S115, etc.
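- For illustration only, the per-VSync-SF consumption of the cache queue described above can be sketched in Java as follows; the queue and image types are assumptions, and an empty queue simply means that nothing new is composed in that cycle.

```java
// Illustrative sketch only: take the rendered image at the head of the cache queue on each
// VSync-SF signal and compose it, or skip the cycle when the queue is empty.
import java.util.ArrayDeque;
import java.util.Queue;

class CompositionSketch {
    static class RenderedImage { final int frameId; RenderedImage(int id) { frameId = id; } }

    private final Queue<RenderedImage> cacheQueue = new ArrayDeque<>();

    void onVsyncSf() {
        RenderedImage head = cacheQueue.poll();
        if (head == null) {
            return; // no rendered frame available: the display keeps showing the previous content
        }
        compose(head); // then hand the composed frame to the hardware composer / display driver
    }

    private void compose(RenderedImage image) { /* synthesis of the dequeued layer(s) */ }
}
```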
- The synthesis thread takes out the drawn and rendered interpolation image at the head of the queue from the cache queue, and synthesizes the rendered interpolation image.
- the synthesis thread receives the VSync-SF signal 2 at time point t3, at which time the synthesis thread takes out the drawing rendering image corresponding to frame 1' from the cache queue for synthesis processing.
- the display driver will receive the synthesized image corresponding to frame 1' synthesized by the synthesis thread, and then drive the display screen to display the synthesized image corresponding to frame 1'.
- the drawing rendering thread reads the recorded Input event 3 (taking the Input event as a MOVE event as an example) from the input thread according to the timestamp of the VSync-APP signal 3 .
- After drawing and rendering the frame data in Input event 3, the rendering thread will also cache image 3 in the cache queue. At the same time, when drawing and rendering image 3 according to Input event 3, the rendering thread will also send Input event 3 to the interpolation module, and the interpolation module will determine whether to perform a frame interpolation operation.
- The interpolation module detects the types of the two adjacent Input events (Input event 3 and Input event 4), determines that both are MOVE events and that the drawing and rendering of image 3 has timed out (when the drawing and rendering thread receives VSync-APP signal 4, it is still drawing and rendering image 3), and therefore triggers a frame insertion operation.
- the frame insertion operation may be completed before the VSync thread sends the VSync-APP signal 5, that is, after receiving the frame insertion instruction sent by the interpolation module, the drawing and rendering thread directly reads the Input event 4 from the event thread according to the timestamp of the received VSync-APP signal 4, that is, executes step S113, and then executes step S114.
- Input events that appear in the present embodiment such as Input event 0, Input event 1, Input event 2, Input event 3, Input event 4, etc., are generated in chronological order.
- As the user's finger slides on the display screen, the sliding operation without lifting the finger includes DOWN, MOVE1, MOVE2, MOVE3, and UP.
- the Input event 0, Input event 1, Input event 2, Input event 3, and Input event 4 generated in sequence are DOWN event, MOVE1 event, MOVE2 event, MOVE3 event, and UP event, respectively.
- the drawing rendering thread reads the recorded Input event 4 from the input thread according to the timestamp of the VSync-APP signal 4 (the Input event is a MOVE event, such as the MOVE3 event mentioned above as an example).
- the drawing and rendering thread obtains the image 4 through drawing and rendering, it will also cache the rendered image 4 in the cache queue, so that the synthesis thread can retrieve the rendered image 4 from the cache queue when receiving the VSync-SF signal 4.
- the rendered image 4 is synthesized.
- When the drawing and rendering thread has not completed the drawing and rendering of frame 3 between t3 and t4, that is, before receiving VSync-APP signal 4, then after receiving VSync-APP signal 4, the drawing and rendering thread does not read Input event 4 from the event thread according to the timestamp of VSync-APP signal 4, and continues to render frame 3.
- the interpolation module determines that frame supplementation is required.
- the drawing and rendering thread can read frame 4 corresponding to time point t4 from the input thread, and then perform drawing and rendering.
- Since frame 3 has not been drawn and rendered at time point t4, there is no drawn and rendered image of frame 3 in the cache queue for the synthesis thread.
- the synthesis thread does not perform the merge operation at time point t4, and only starts the synthesis process after reading the image of frame 3 after drawing and rendering from the cache queue at time point t5. Accordingly, since the drawing and rendering thread makes up for the lost frame 4, at each subsequent synthesis time point, that is, after receiving the VSync-SF signal, in the absence of drawing and rendering timeout or frame loss, the synthesis thread can sequentially take out the image corresponding to frame 4, the image corresponding to frame 5, the image corresponding to frame 6, the image corresponding to frame 7, and the image corresponding to frame 8 from the cache queue.
- The display driver will only fail to obtain an image synthesized by the synthesis thread at time t5, will continue to display the content corresponding to frame 2 in the VSync signal cycle from t5 to t6, and can then normally update and display the content corresponding to the new frame data in each VSync signal cycle.
- Since the content corresponding to frame 2 is only displayed for one extra VSync signal cycle, which is only a few milliseconds, the user will not feel obvious lag; and since the content corresponding to the sequentially changing frame data is updated normally in each VSync signal cycle, the content displayed on the display will not jump.
- the synthesis thread takes out the rendered image 2 at the head of the queue from the cache queue, and performs synthesis processing on the rendered image 2.
- the synthesis thread receives the VSync-SF signal 3 at time point t4, at which time the synthesis thread takes out the drawing rendering image corresponding to frame 2 from the cache queue for synthesis processing.
- the display driver receives the synthesized image corresponding to frame 2 synthesized by the synthesis thread, and then drives the display screen to display the synthesized image corresponding to frame 2.
- a frame is pre-inserted for drawing and rendering at the beginning of the sliding operation, and then one more frame is cached in the cache queue, reducing the situation of frameless synthesis caused by subsequent drawing and rendering timeout, and reducing display jams. For example, if only one frame is lost, a smooth transition can be achieved through the inserted frame without jamming, thereby improving the user experience.
- the lost frames are made up by one or more frames through frame filling, reducing the frame loss caused by missing the VSync signal due to the drawing rendering timeout, so that the content displayed on the display can change smoothly, increase the smoothness of the display, reduce jumps, and further improve the user experience.
- FIG19 is a flow chart of a data processing method provided in an embodiment of the present application. As shown in FIG19 , the method specifically includes:
- the first application is the focus application currently running in the foreground.
- the first interface is the interface currently displayed by the first application.
- the first interface may be the circle of friends interface shown in FIG. 7 above, and the content displayed in the first interface may be, for example, the screen shown in FIG. 7 (1).
- the input events corresponding to the sliding operation may include a DOWN event, a MOVE event, and an UP event.
- the first MOVE event is extracted from the input event corresponding to the sliding operation based on the timestamp of the first VSync signal
- the Nth frame is an image data frame corresponding to the first MOVE event that needs to be drawn and rendered.
- the first VSync signal mentioned in this embodiment can be, for example, any time point mentioned above, such as a VSync signal received at any time point from t1 to t7 in FIG. 18
- the corresponding first MOVE event is the event corresponding to the time point of the VSync signal, which may be a DOWN event, or a MOVE event, or an UP event in practical applications.
- the interpolation (supplementation) scenario targeted by the data processing method provided in this embodiment is specifically targeted at MOVE events. Therefore, here, the event extracted from the input event corresponding to the sliding operation based on the timestamp of the first VSync signal is taken as an example of a MOVE event.
- the MOVE event extracted from the input event corresponding to the sliding operation based on the timestamp of the first VSync signal is referred to as the first MOVE event.
- each link has a corresponding VSync signal.
- the vertical synchronization signal for triggering the synthesis process is specifically the VSync-SF signal mentioned above
- the vertical synchronization signal for triggering the display screen refresh process is specifically the VSync-HW signal mentioned above.
- two adjacent VSync-APP signals are separated by a first time length
- two adjacent VSync-SF signals are separated by a second time length
- two adjacent VSync-HW signals are separated by a third time length. That is, every first time length, the first application, such as the main application thread (UI thread) mentioned above or directly the drawing and rendering thread, will receive a VSync-APP signal, such as receiving VSync-APP signal 1 at time point t1 mentioned above, receiving VSync-APP signal 2 at time point t2, and so on.
- Every second time duration, the synthesis thread for performing synthesis processing will receive a VSync-SF signal; every third time duration, the display driver will receive a VSync-HW signal.
- the first duration, the second duration, and the third duration may be the same duration, such as the VSync signal period mentioned above.
- the VSync signal period is 16.6ms, that is, every 16.6ms the VSync thread will generate a corresponding VSync-APP signal to send to the drawing and rendering thread, and every 16.6ms the VSync thread will generate a corresponding VSync-SF signal to send to the synthesis thread, and every 16.6ms the VSync thread will generate a corresponding VSync-HW signal to send to the display driver.
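- As a quick, illustrative check of the period figures above (about 16.6 to 16.7 ms at 60 Hz), the relationship between frame rate and VSync signal period can be computed as follows; this snippet is for illustration only.

```java
// Illustrative sketch only: VSync signal period derived from the refresh rate.
class VsyncPeriodSketch {
    public static void main(String[] args) {
        for (int hz : new int[] {60, 90, 120}) {
            System.out.printf("%d Hz -> %.1f ms per VSync period%n", hz, 1000.0 / hz);
        }
    }
}
```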
- the time when the display driver obtains the content synthesized by the synthesis thread to drive the display screen to display lags behind the time when the synthesis thread starts to perform the synthesis operation
- the time when the synthesis thread starts to perform the synthesis operation lags behind the time when the drawing and rendering thread starts to perform the drawing and rendering operation, that is, the sending time of the VSync-HW signal sent by the VSync thread lags behind the sending time of the VSync-SF signal
- the sending time of the VSync-SF signal lags behind the sending time of the VSync-APP signal.
- For example, at time point t1, VSync-APP signal 1 is sent to the drawing and rendering thread; at time point t2, VSync-APP signal 2 is sent to the drawing and rendering thread and VSync-SF signal 1 is sent to the synthesis thread; at time point t3, VSync-APP signal 3 is sent to the drawing and rendering thread, VSync-SF signal 2 is sent to the synthesis thread, and VSync-HW signal 1 is sent to the display driver.
- the first duration, the second duration and the third duration may be different, specifically satisfying the third duration > the second duration > the first duration. In this way, it can be ensured that the next link starts after the previous link is processed, ensuring that the next link can get the data processed by the previous link, such as the display driver can get the content synthesized by the synthesis thread, and the synthesis thread can get the content drawn and rendered by the drawing and rendering thread.
- the premise of triggering the VSync thread to generate VSync-APP signal, VSync-SF signal, VSync-HW signal according to the VSync signal cycle and sending them according to the VSync signal cycle is that a request is initiated to the VSync thread when the input event is a DOWN event, as mentioned above.
- the first application such as the drawing and rendering thread described above, sends a first message (a request for a VSync-APP signal) to the VSync thread in the SF thread.
- the VSync thread can be generated according to the first duration (such as a VSync signal cycle), and in each VSync signal cycle, send a corresponding VSync-APP signal to the drawing and rendering thread of the first application.
- the cache thread in the SF thread can send a second message (a request for a VSync-SF signal) to the VSync thread.
- the VSync thread can generate according to the second duration (such as a VSync signal cycle) and send the corresponding VSync-SF signal to the synthesis thread in each VSync signal cycle.
- the display driver can send a third message (a request for a VSync-HW signal) to the VSync thread.
- the VSync thread can be generated according to the third duration (such as a VSync signal cycle), and send a corresponding VSync-HW signal to the display driver in each VSync signal cycle.
- the relationship between the timestamp of each VSync-APP signal and the reporting point and input event can be found above. Based on this relationship, it can be determined which reporting point corresponds to the input data, and then the image frame corresponding to the reporting point can be obtained. The acquisition of image frames can be found above, and will not be repeated here.
- drawing and rendering can be specifically divided into a drawing phase and a rendering phase.
- the drawing phase specifically includes: input (used to pass input events to the corresponding object for processing), animation (used to calculate the position of each frame of animation), measurement (used to obtain and maintain the size of each view (View) and view group (ViewGrop) according to the setting of control properties in the xml layout file and code), layout (used to determine the display position of the control according to the information obtained by the strategy), and drawing (after the user determines the display position of the control, draw all layers in the application window on the canvas (canvas) to construct drawing instructions).
- The rendering stage specifically includes: synchronization (for synchronizing the drawn drawing instructions from the CPU), rendering (for adjusting the brightness, contrast, saturation, etc. of the drawn layer), and caching (for storing the rendering execution result in the cache queue).
- the implementation method of obtaining the number of lost frames given in step S204 and the number of inserted frames M finally determined in step S205 may be, for example:
- Tbegin is the time when the drawing and rendering thread calls the doFrame interface (the interface used to start drawing and rendering), such as the time point when each VSync-APP signal is received above, such as t1, t2, t3, etc.
- Tend is the actual time point when the drawing and rendering is completed.
- VSync is the VSync signal period.
- count = floor[(Tend − Tbegin)/VSync] − 1.
- set drawing rendering duration refers to the maximum drawing rendering duration corresponding to each frame of data under ideal conditions, such as one VSync signal cycle.
- a minimum value is selected from the number of lost frames count and the set maximum number of insertable frames as the number of frames to be inserted after the current drawing and rendering is completed, that is, the number of insert frames M.
- the maximum number of insertable frames can be set to 2.
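- For illustration only, the lost-frame count and insert-count calculation described above can be expressed as the following Java sketch; the method names are assumptions, and the formula mirrors count = floor[(Tend − Tbegin)/VSync] − 1 with M = min(count, maximum insertable frames).

```java
// Illustrative sketch only: lost-frame count and the capped insert count M.
class FrameLossCountSketch {
    static int lostFrames(long tBeginNanos, long tEndNanos, long vsyncPeriodNanos) {
        // Integer division acts as floor for the non-negative durations used here.
        return (int) ((tEndNanos - tBeginNanos) / vsyncPeriodNanos) - 1;
    }

    static int insertCount(long tBeginNanos, long tEndNanos, long vsyncPeriodNanos, int maxInsertable) {
        int count = lostFrames(tBeginNanos, tEndNanos, vsyncPeriodNanos);
        return Math.max(0, Math.min(count, maxInsertable)); // e.g. maxInsertable = 2 as above
    }
}
```

- For example, with a 16.6 ms VSync period, a frame whose drawing and rendering takes 35 ms gives count = floor(35/16.6) − 1 = 1, so at most one frame is supplemented.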
- the actual remaining time available for frame insertion may be insufficient.
- the number of frames that can be inserted in the remaining time can be estimated, and then a minimum value is selected from the predicted number of insertable frames and M determined in the above manner as the number of frames to be inserted after the current drawing and rendering is completed.
- the specific implementation method can be as follows:
- the receiving time TnextVsyn of the next VSync-APP signal is determined.
- Taking the sending period of the VSync-APP signal as one VSync signal period, for example the duration between any two adjacent time points (t1 and t2) mentioned above, as shown in FIG. 15, when the rendering of frame 3 starting at time point t3 times out and frame 4 is lost, the receiving time TnextVsyn of the next VSync-APP signal may be time point t5 in FIG. 15.
- the rendering time set for each frame is a VSync signal cycle, such as 16.6ms.
- the time required to complete the rendering of frames 1, 2, and 3 is 4.6ms, 5.6ms, and 16.8ms respectively.
- M = min(M (obtained in the above step (3)), countAllow); that is, a minimum value is selected from countAllow, count, and the set maximum number of insertable frames (such as 2) as the number of frames to be inserted after the current drawing and rendering is completed. A sketch combining these calculations follows below.
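- For illustration only, the time-budget check described above can be sketched in Java as follows; countAllow is limited by the integer part of (TnextVsyn − Tend) divided by the average drawing and rendering duration of the completed frames, and the names used are assumptions.

```java
// Illustrative sketch only: cap the insert count by the time left before the next VSync-APP signal.
class InsertBudgetSketch {
    static int allowedByRemainingTime(long tEndNanos, long tNextVsyncNanos, long avgRenderNanos) {
        if (avgRenderNanos <= 0) return 0;
        return (int) ((tNextVsyncNanos - tEndNanos) / avgRenderNanos); // integer part only
    }

    static int finalInsertCount(int m, int countAllow) {
        return Math.max(0, Math.min(m, countAllow));
    }

    // Average drawing and rendering duration over the frames completed so far.
    static long averageRenderNanos(long... renderNanos) {
        long sum = 0;
        for (long d : renderNanos) sum += d;
        return renderNanos.length == 0 ? 0 : sum / renderNanos.length;
    }
}
```

- With the example durations above (4.6 ms, 5.6 ms, and 16.8 ms), the average drawing and rendering duration is 9.0 ms, so the time remaining before the next VSync-APP signal determines how many of the lost frames can actually be inserted.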
- The above-mentioned determination of the relationship between the image data frames to be inserted and the lost image data frames specifically refers to determining which lost frame of image data each inserted image data frame corresponds to; then, according to the time point at which the lost image data frame should theoretically have been drawn and rendered (the timestamp of the VSync-APP signal received at that time point), the corresponding input event is found from the input thread, and the frame in that input event is used as the frame to be inserted and is drawn and rendered.
- the second VSync signal is the first VSync signal received after the Nth frame is rendered, and the second MOVE event is extracted from the input event corresponding to the sliding operation according to the timestamps of all or part of the VSync signals received during the Nth frame rendering process.
- frame 3 is drawn and rendered before time point t6, and the first VSync signal received after frame 3 is drawn and rendered is the VSync signal received at time point t6.
- the second MOVE event is obtained by extracting the timestamps of all or part of the VSync signals received during the rendering process of frame 3, such as from t2 to t6, from the input event corresponding to the sliding operation.
- The second MOVE event is specifically a MOVE event extracted from the input events corresponding to the sliding operation according to the timestamp of the VSync signal received at time point t4, and a MOVE event extracted from the input events corresponding to the sliding operation according to the timestamp of the VSync signal received at time point t5. Accordingly, the M frames finally inserted after the Nth frame (frame 3) are frame 4 and frame 5.
- the data processing method provided in this embodiment when the drawing rendering times out, makes up for the lost frames by one or more frames through frame filling, thereby reducing the frame loss caused by missing the VSync signal due to the drawing rendering timeout, so that the content displayed on the display screen can change smoothly, increase the smoothness of the display, reduce jumps, and further improve the user experience.
- FIG21 is a flow chart of a data processing method provided in an embodiment of the present application. As shown in FIG21 , the method specifically includes:
- S301 displaying a first interface of a first application, wherein the first interface displays a first screen, the first screen includes a first content and a second content, the first content is displayed in a first area of the first interface, and the second content is displayed in a second area of the first interface.
- the first screen is shown in FIG. 22 , wherein the first content is, for example, content related to friend A displayed in the first area shown in FIG. 22 , and the second content is, for example, content related to friend B displayed in the second area shown in FIG. 22 .
- S303 upon receiving a vertical synchronization signal that triggers a drawing and rendering process, determining an input event of a reporting point corresponding to a timestamp of the vertical synchronization signal that triggers the drawing and rendering process.
- The vertical synchronization signal that triggers the drawing and rendering process is the VSync-APP signal, the vertical synchronization signal that triggers the synthesis process is the VSync-SF signal, and the vertical synchronization signal that triggers the display screen refresh process is the VSync-HW signal.
- For the cases in which the interpolation module needs to intervene in the drawing and rendering process to perform interpolation (first-frame interpolation, or frame supplementation after frame loss), please refer to the description after S103 in the above embodiment, which is not repeated here.
- the situation of inserting frames in the drawing and rendering stage can be divided into inserting a frame of image data after the first frame at the beginning of the sliding operation, and inserting one or more frames of image data when the drawing and rendering timeouts during the sliding operation, resulting in frame loss. Therefore, in this embodiment, the frame insertion strategy is determined according to the drawing and rendering time of two adjacent input events and the current input event, which may include:
- When the drawing and rendering time of the current input event exceeds the set drawing and rendering time, the number of frames to be inserted after the current drawing and rendering is completed, as well as the relationship between the image data frames to be inserted and the lost image data frames, are determined, and a frame insertion instruction is generated according to the number and the relationship to obtain the frame insertion strategy, as sketched below.
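- For illustration only, the decision between first-frame interpolation and frame supplementation described above can be sketched in Java as follows; the strategy type and the method shape are assumptions rather than the actual module interface.

```java
// Illustrative sketch only: choose between first-frame insertion and frame supplementation.
class InsertionStrategySketch {
    enum Kind { NONE, FIRST_FRAME_INSERT, SUPPLEMENT_LOST_FRAMES }

    static class Strategy {
        final Kind kind;
        final int insertCount; // only meaningful for SUPPLEMENT_LOST_FRAMES
        Strategy(Kind kind, int insertCount) { this.kind = kind; this.insertCount = insertCount; }
    }

    static Strategy decide(boolean prevWasDown, boolean curIsMove,
                           long renderDurationNanos, long vsyncPeriodNanos, int insertCount) {
        if (prevWasDown && curIsMove) {
            // DOWN -> MOVE: insert one extra frame after the first frame of the slide.
            return new Strategy(Kind.FIRST_FRAME_INSERT, 1);
        }
        if (curIsMove && renderDurationNanos > vsyncPeriodNanos) {
            // Render timeout on a MOVE -> MOVE pair: make up the lost frames.
            return new Strategy(Kind.SUPPLEMENT_LOST_FRAMES, insertCount);
        }
        return new Strategy(Kind.NONE, 0);
    }
}
```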
- the interpolation strategy mentioned in this embodiment is an implementation scenario of the interpolation event, that is, the solution of inserting a frame after the first frame of image data frame mentioned above.
- the specific implementation details can be found above and will not be repeated here.
- the interpolation strategy mentioned in this embodiment is an implementation scenario for generating frame supplement instructions, that is, the scheme of determining the number of interpolation frames after frame loss and then interpolating (supplementing) frames as mentioned above.
- the specific implementation details can be found above and will not be repeated here.
- the determination of the interpolation strategy mentioned in this embodiment is essentially the processing logic of the interpolation module determining whether the first frame interpolation is required, or determining whether the current drawing rendering has timed out and caused frame loss, and then the frame supplement is required.
- In an actual implementation, the program instructions may omit the step of generating an interpolation strategy, and instead directly trigger the first-frame interpolation process when the first-frame interpolation condition is met, and directly determine the number of frames that can be inserted and then insert them when the frame supplementation condition is met.
- the interpolation strategy when the interpolation strategy includes an interpolation event, it can be determined that the interpolation strategy indicates that interpolation is required.
- After the drawing and rendering thread completes the drawing and rendering of the first image frame (such as frame 1 above) to obtain the first drawn and rendered image, before the next VSync-APP signal (such as VSync-APP signal 2 received at time point t2 above) arrives, a frame of image data in the interpolation event (such as part of the data of frame 1 above, namely frame 1') is used as the second image frame; the second image frame is inserted after the Nth frame and is drawn and rendered.
- the interpolation strategy when the interpolation strategy includes a frame insertion instruction, it can be determined that the interpolation strategy indicates that interpolation is required.
- After the drawing and rendering thread completes the drawing and rendering of the first image frame (such as frame 3 above) to obtain the first drawn and rendered image, before the next VSync-APP signal (such as VSync-APP signal 5 received at time point t5 above) arrives, the timestamp corresponding to the input event containing each image frame to be inserted is determined according to the number of frames and the relationship in the frame insertion instruction, such as the timestamp of VSync-APP signal 4 corresponding to frame 4; the image frame data in the input event corresponding to each timestamp is obtained as a second image frame, and each second image frame is drawn and rendered.
- the frame filling operation can be that before the VSync-APP signal 6 received at time point t6 arrives, only frame 4 is selected as the second image frame according to the interpolation strategy, and frame 4 is drawn and rendered first on the premise that it is estimated that it will not affect the drawing of frame 5.
- Alternatively, the frame filling operation can be performed before the arrival of VSync-APP signal 7 received at time point t7: when it is estimated that the drawing and rendering of frame 6 will not be affected, and the remaining time between the end of the drawing and rendering of frame 3 and time point t7 allows two frames of image data to be drawn and rendered, frame 4 and frame 5 can both be used as second image frames and drawn and rendered in turn. After the drawing and rendering of frame 5 is completed, frame 6 can be drawn and rendered within that cycle.
- the interpolation strategy when the interpolation strategy includes an interpolation event, it can be determined that the interpolation strategy indicates that interpolation is required.
- After the drawing and rendering thread completes the drawing and rendering of the first image frame (such as frame 1 above) to obtain the first drawn and rendered image, before the next VSync-APP signal (such as VSync-APP signal 2 received at time point t2 above) arrives, a frame of image data in the interpolation event (such as part of the data of frame 1 above, namely frame 1') is used as the second image frame; the second image frame is inserted after the first image frame and is drawn and rendered.
- the interpolation strategy when the interpolation strategy includes a frame insertion instruction, it can be determined that the interpolation strategy indicates that interpolation is required.
- After the drawing and rendering thread completes the drawing and rendering of the first image frame (such as frame 3 above) to obtain the first drawn and rendered image, before the next VSync-APP signal (such as VSync-APP signal 5 received at time point t5 above) arrives, the timestamp corresponding to the input event containing each image frame to be inserted is determined according to the number of frames and the relationship in the frame insertion instruction, such as the timestamp of VSync-APP signal 4 corresponding to frame 4; the image frame data in the input event corresponding to each timestamp is obtained as a second image frame, and each second image frame is drawn and rendered.
- the frame filling operation can be that before the VSync-APP signal 6 received at time point t6 arrives, only frame 4 is selected as the second image frame according to the interpolation strategy, and frame 4 is drawn and rendered first on the premise that it is estimated that it will not affect the drawing of frame 5.
- Alternatively, the frame filling operation can be performed before the arrival of VSync-APP signal 7 received at time point t7: when it is estimated that the drawing and rendering of frame 6 will not be affected, and the remaining time between the end of the drawing and rendering of frame 3 and time point t7 allows two frames of image data to be drawn and rendered, frame 4 and frame 5 can both be used as second image frames and drawn and rendered in turn. After the drawing and rendering of frame 5 is completed, frame 6 can be drawn and rendered within that cycle.
- When receiving the vertical synchronization signal that triggers the synthesis process, the image synthesis system obtains the first drawn and rendered image, synthesizes it, and obtains a second picture, where the second picture includes the second content and the third content.
- When receiving the vertical synchronization signal that triggers the display screen refresh process, the display driver drives the display screen to display the second picture; following the sliding operation, the first area displays the second content and the second area displays the third content.
- the relevant content of friend B originally displayed in the second area in Figure 22 is displayed in the first area in Figure 23
- the third content (such as the relevant content of friend C in Figures 22 and 23) is displayed in the second area.
- the data processing method provided in this embodiment pre-inserts a frame for drawing and rendering at the initial stage of the sliding operation, and then caches one more frame in the cache queue, thereby reducing the situation of frameless synthesis caused by subsequent drawing and rendering timeout, and reducing display jams. For example, if only one frame is lost, a smooth transition can be achieved through the inserted frame without jamming, thereby improving the user experience.
- the lost frames are made up by one or more frames through frame filling, reducing the frame loss caused by missing the VSync signal due to the drawing rendering timeout, so that the content displayed on the display can change smoothly, increase the smoothness of the display, reduce jumps, and further improve the user experience.
- the terminal device includes hardware and/or software modules corresponding to the execution of each function.
- the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is executed in the form of hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art can use different methods to implement the described functions for each specific application in combination with the embodiments, but such implementation should not be considered to be beyond the scope of this application.
- the data processing methods provided in the above embodiments implemented by the terminal device in the actual application scenario can also be executed by a chip system included in the terminal device, wherein the chip system may include a processor.
- the chip system can be coupled to the memory so that the chip system calls the computer program stored in the memory when it is running to implement the steps executed by the above terminal device.
- the processor in the chip system can be an application processor or a processor other than an application processor.
- an embodiment of the present application also provides a computer-readable storage medium, which stores computer instructions.
- the terminal device executes the above-mentioned related method steps to implement the data processing method in the above-mentioned embodiment.
- an embodiment of the present application further provides a computer program product.
- When the computer program product is run on a terminal device, the terminal device executes the above-mentioned related steps to implement the data processing method in the above-mentioned embodiment.
- An embodiment of the present application also provides a chip (which may also be a component or module), which may include one or more processing circuits and one or more transceiver pins; the transceiver pins and the processing circuit communicate with each other through an internal connection path, and the processing circuit executes the above-mentioned related method steps to implement the data processing method in the above-mentioned embodiment, so as to control the receiving pin to receive signals and control the sending pin to send signals.
- the terminal device, computer-readable storage medium, computer program product or chip provided in the embodiments of the present application are all used to execute the corresponding methods provided above. Therefore, the beneficial effects that can be achieved can refer to the beneficial effects in the corresponding methods provided above, and will not be repeated here.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Software Systems (AREA)
- Computer Graphics (AREA)
- Controls And Circuits For Display Device (AREA)
- Processing Or Creating Images (AREA)
- Image Generation (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A data processing method, a device, and a storage medium. During a sliding operation, the method determines, according to the real-time drawing and rendering situation, whether to perform frame insertion. When frame insertion is determined to be needed, a frame is inserted in advance, or one or more lost frames are supplemented, thereby reducing frameless synthesis caused by subsequent drawing and rendering timeouts, reducing display stuttering, increasing the smoothness of the display, reducing jumps in the displayed content, and improving the user experience.
Description
This application claims priority to Chinese Patent Application No. 202211382250.7, filed with the Chinese Patent Office on November 7, 2022 and entitled "Data Processing Method, Device, and Storage Medium", the entire contents of which are incorporated herein by reference.
This application relates to the field of image display technologies, and in particular, to a data processing method, a device, and a storage medium.
At present, a user can view various types of content through the display screen of a terminal device. When there is a large amount of content, the display screen cannot show all of it at once. In response to a sliding operation performed by the user on the display screen, the terminal device can control the displayed content to slide, either following the finger or not, so that the user can conveniently browse the relevant content. This content needs to go through drawing and rendering, synthesis, and sending for display before it can finally be presented on the display screen.
However, when there is a large amount of content to be displayed, drawing and rendering may time out and frames may be lost, so that the period of the synthesized and displayed images is not fixed and the images are not obtained from consecutive frames, causing abnormal phenomena such as display stuttering and jumping in the content shown on the display screen.
SUMMARY
To solve the above technical problem, this application provides a data processing method, a device, and a storage medium, aiming to solve the stuttering and jumping phenomena caused by drawing and rendering timeouts and frame loss.
In a first aspect, this application provides a data processing method. The method includes: displaying a first interface of a first application; in response to a sliding operation acting on the first interface, obtaining input events corresponding to the sliding operation; obtaining a first VSync signal, and drawing and rendering an Nth frame based on a first MOVE event, where the first MOVE event is extracted from the input events corresponding to the sliding operation based on the timestamp of the first VSync signal; when the drawing and rendering duration of the Nth frame is greater than one VSync signal period, after the drawing and rendering of the Nth frame is completed, obtaining the number of lost frames and displaying the Nth frame; selecting a minimum value from the number of lost frames and a set maximum number of insertable frames as the number of inserted frames M; and before a second VSync signal arrives, drawing and rendering M frames based on a second MOVE event and displaying the M frames; where the second VSync signal is the first VSync signal received after the drawing and rendering of the Nth frame is completed, and the second MOVE event is extracted from the input events corresponding to the sliding operation according to the timestamps of all or some of the VSync signals received during the drawing and rendering of the Nth frame.
In this way, when drawing and rendering times out, one or more lost frames are made up through frame supplementation, which reduces frame loss caused by missing VSync signals due to drawing and rendering timeouts, so that the content shown on the display screen can change smoothly, increasing the smoothness of the display, reducing jumps, and further improving the user experience.
According to the first aspect, obtaining the number of lost frames includes: determining a first time at which the drawing and rendering of the Nth frame starts and a second time at which the drawing and rendering ends; and calculating the number of lost frames according to the first time, the second time, and a set drawing and rendering duration corresponding to the Nth frame, where the set drawing and rendering duration is one VSync signal period.
For example, the first time is, for example, Tbegin described below, the second time is, for example, Tend described below, the set drawing and rendering duration is, for example, one VSync signal period, that is, VSync described below, and the number of lost frames is, for example, M described below.
According to the first aspect or any of the above implementations of the first aspect, the number of lost frames is calculated from the first time, the second time, and the set drawing and rendering duration corresponding to the Nth frame based on the following formula: number of lost frames = floor[(second time − first time) / VSync signal period] − 1.
According to the first aspect or any of the above implementations of the first aspect, selecting a minimum value from the number of lost frames and the set maximum number of insertable frames as the number of inserted frames M includes: determining the receiving time of the second VSync signal according to the VSync signal period; determining the average drawing and rendering duration of each of the N frames that have been drawn and rendered according to their drawing and rendering durations; calculating a predicted number of insertable frames according to the receiving time, the second time, and the average drawing and rendering duration; and selecting a minimum value from the predicted number of insertable frames, the number of lost frames, and the set maximum number of insertable frames as the number of inserted frames M.
For example, the receiving time is, for example, TnextVsyn described below, the average drawing and rendering duration is, for example, Taverage described below, and the predicted number of insertable frames is, for example, countAllow described below.
According to the first aspect or any of the above implementations of the first aspect, the predicted number of insertable frames is calculated from the receiving time, the second time, and the average drawing and rendering duration based on the following formula: predicted number of insertable frames ≤ (receiving time − second time) / average drawing and rendering duration.
For example, the integer part of the predicted number of insertable frames is taken.
According to the first aspect or any of the above implementations of the first aspect, the method further includes: when the first application is cold started, obtaining the package name of the first application; determining the application category of the first application according to the package name; and when the application category of the first application matches a set application type that supports frame insertion, and when the drawing and rendering duration of the Nth frame is greater than one VSync signal period, after the drawing and rendering of the Nth frame is completed, performing the steps of obtaining the number of lost frames, selecting a minimum value from the number of lost frames and the set maximum number of insertable frames as the number of inserted frames, and drawing and rendering M frames based on the second MOVE event before the second VSync signal arrives.
In this way, the data processing process can be accelerated and unnecessary frame insertion processing can be reduced.
According to the first aspect or any of the above implementations of the first aspect, the method further includes: determining, according to the reporting point information corresponding to the input event, the control in the first interface on which the input event acts; and when the control acted on is a RecyclerView control or a ListView control, and when the drawing and rendering duration of the Nth frame is greater than one VSync signal period, after the drawing and rendering of the Nth frame is completed, performing the steps of obtaining the number of lost frames, selecting a minimum value from the number of lost frames and the set maximum number of insertable frames as the number of inserted frames, and drawing and rendering M frames based on the second MOVE event before the second VSync signal arrives.
In this way, the data processing process can be accelerated and unnecessary frame insertion processing can be reduced.
According to the first aspect or any of the above implementations of the first aspect, the method further includes: during the drawing and rendering of the Nth frame, determining the number of layers to be drawn and rendered; and when the number of layers is one, and when the drawing and rendering duration of the Nth frame is greater than one VSync signal period, after the drawing and rendering of the Nth frame is completed, performing the steps of obtaining the number of lost frames, selecting a minimum value from the number of lost frames and the set maximum number of insertable frames as the number of inserted frames, and drawing and rendering M frames based on the second MOVE event before the second VSync signal arrives.
In this way, the data processing process can be accelerated and unnecessary frame insertion processing can be reduced.
According to the first aspect or any of the above implementations of the first aspect, the method further includes: when the sliding distance corresponding to the MOVE events extracted from the input events corresponding to the sliding operation based on the timestamps of two adjacent VSync signals is greater than a minimum sliding distance threshold, and when the drawing and rendering duration of the Nth frame is greater than one VSync signal period, after the drawing and rendering of the Nth frame is completed, performing the steps of obtaining the number of lost frames, selecting a minimum value from the number of lost frames and the set maximum number of insertable frames as the number of inserted frames, and drawing and rendering M frames based on the second MOVE event before the second VSync signal arrives.
In this way, the data processing process can be accelerated and unnecessary frame insertion processing can be reduced.
According to the first aspect or any of the above implementations of the first aspect, the method further includes: when the drawing and rendering duration of the Nth frame is not greater than one VSync signal period and the Nth frame is the first frame of the drawing and rendering operation, after the drawing and rendering of the Nth frame is completed, offsetting the Nth frame by a set offset to obtain an (N+1)th frame; and before the second VSync signal arrives, drawing and rendering the (N+1)th frame and displaying the (N+1)th frame.
For example, when the Nth frame is the first frame, such as frame 1 described below, the (N+1)th frame is frame 1' described below.
In this way, a frame is inserted in advance for drawing and rendering at the beginning of the sliding operation, so that one more frame is cached in the cache queue, which reduces frameless synthesis caused by subsequent drawing and rendering timeouts and reduces display stuttering. For example, when only one frame is lost, a smooth transition can be achieved through the inserted frame without stuttering, improving the user experience.
According to the first aspect or any of the above implementations of the first aspect, the method further includes: when the event extracted from the input events corresponding to the sliding operation based on the timestamp of a third VSync signal is a DOWN event, determining that the Nth frame is the first frame of the drawing and rendering operation; where the third VSync signal is a VSync signal received before and adjacent to the first VSync signal.
第二方面,本申请提供了一种终端设备。该终端设备包括:存储器和处理器,存储器和处理器耦合;存储器存储有程序指令,程序指令由处理器执行时,使得所述终端设备执行第一方面或第一方面的任意可能的实现方式中的方法。
第二方面以及第二方面的任意一种实现方式分别与第一方面以及第一方面的任意一种实现方式相对应。第二方面以及第二方面的任意一种实现方式所对应的技术效果可参见上述第一方面以及第一方面的任意一种实现方式所对应的技术效果,此处不再赘述。
第三方面,本申请提供了一种计算机可读介质,用于存储计算机程序,该计算机程序包括用于执行第一方面或第一方面的任意可能的实现方式中的方法的指令。
第三方面以及第三方面的任意一种实现方式分别与第一方面以及第一方面的任
意一种实现方式相对应。第三方面以及第三方面的任意一种实现方式所对应的技术效果可参见上述第一方面以及第一方面的任意一种实现方式所对应的技术效果,此处不再赘述。
第四方面,本申请提供了一种计算机程序,该计算机程序包括用于执行第一方面或第一方面的任意可能的实现方式中的方法的指令。
第四方面以及第四方面的任意一种实现方式分别与第一方面以及第一方面的任意一种实现方式相对应。第四方面以及第四方面的任意一种实现方式所对应的技术效果可参见上述第一方面以及第一方面的任意一种实现方式所对应的技术效果,此处不再赘述。
第五方面,本申请提供了一种芯片,该芯片包括处理电路、收发管脚。其中,该收发管脚、和该处理电路通过内部连接通路互相通信,该处理电路执行第一方面或第一方面的任一种可能的实现方式中的方法,以控制接收管脚接收信号,以控制发送管脚发送信号。
第五方面以及第五方面的任意一种实现方式分别与第一方面以及第一方面的任意一种实现方式相对应。第五方面以及第五方面的任意一种实现方式所对应的技术效果可参见上述第一方面以及第一方面的任意一种实现方式所对应的技术效果,此处不再赘述。
图1为示例性示出的终端设备的硬件结构示意图;
图2为示例性示出的终端设备的软件结构示意图;
图3为示例性示出的应用场景示意图;
图4为示例性示出的应用场景示意图;
图5为示例性示出的一种数据处理流程的示意图;
图6为示例性示出的数据处理过程中数据帧走向的示意图;
图7为示例性示出的不丢帧情况下界面显示内容的变化示意图;
图8为示例性示出的丢一帧数据时的数据处理流程的示意图;
图9为示例性示出的丢一帧数据时界面显示内容的变化示意图;
图10为示例性示出的连续丢多帧数据时的数据处理流程的示意图;
图11为示例性示出的连续丢多帧数据时界面显示内容的变化示意图;
图12为示例性示出的本申请实施例提供的数据处理方法涉及的功能模块的示意图;
图13为示例性示出的本申请实施例提供的数据处理方法涉及的功能模块之间交互过程的时序图;
图14为示例性示出的进行首帧插帧的数据处理流程的示意图;
图15为示例性示出的丢1帧时进行补帧的数据处理流程的示意图;
图16为示例性示出的进行首帧插帧,同时在丢1帧时进行补帧的数据处理流程
的示意图;
图17为示例性示出的进行首帧插帧,同时在连续丢多帧时进行补帧的数据处理流程的示意图;
图18为示例性示出的进行首帧插帧,同时在连续丢多帧时进行补帧的又一数据处理流程的示意图;
图19为示例性示出的本申请实施例提供的一种数据处理方法的流程图;
图20为示例性示出的绘制、渲染、合成阶段包括的具体处理操作的示意图;
图21为示例性示出的本申请实施例提供的又一种数据处理方法的流程图;
图22为示例性示出的第一应用的第一界面的示意图;
图23为示例性示出的第一应用的第一界面的又一示意图。
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。
本申请实施例的说明书和权利要求书中的术语“第一”和“第二”等是用于区别不同的对象,而不是用于描述对象的特定顺序。例如,第一目标对象和第二目标对象等是用于区别不同的目标对象,而不是用于描述目标对象的特定顺序。
在本申请实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
在本申请实施例的描述中,除非另有说明,“多个”的含义是指两个或两个以上。例如,多个处理单元是指两个或两个以上的处理单元;多个系统是指两个或两个以上的系统。
为了更好的理解本申请实施例提供的技术方案,在对本申请实施例的技术方案说明之前,首先结合附图对本申请实施例适用的终端设备(例如手机、平板电脑、可触控PC机等)的硬件结构进行说明。
参见图1,终端设备100可以包括:处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。
示例性的,在一些实现方式中,传感器模块180可以包括压力传感器,陀螺仪传
感器,气压传感器,磁传感器,加速度传感器,距离传感器,接近光传感器,指纹传感器,温度传感器,触摸传感器,环境光传感器,骨传导传感器等,此处不再一一例举,本申请对此不作限制。
基于上述传感器,在使用终端设备的过程中,当用户对显示屏194作出操作,如单击、双击、滑动、不离手滑动时,便可以精准确定当前作出的操作,以及该操作作用于的位置、所在位置的报点信息等。具体到本申请提供的技术方案中,以不离手滑动操作为例,针对该操作过程中数据的处理流程进行具体说明。关于针对不离手滑动操作产生的数据处理方法的具体细节,详见下文,此处不再赘述。
需要说明的,所谓不离手滑动是指在某一应用内,用户通过手指或手写笔在显示屏上移动,使得当前界面内显示的内容发生变化的行为。
此外,需要说明的是,处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
此外,还需要说明的是,处理器110中还可以设置存储器,用于存储指令和数据。在一些实现方式中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
继续参见图1,示例性的,充电管理模块140用于从充电器接收充电输入。电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。
继续参见图1,示例性的,终端设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。需要说明的是,天线1和天线2用于发射和接收电磁波信号。
继续参见图1,示例性的,移动通信模块150可以提供应用在终端设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实现方式中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实现方式中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
此外,需要说明的是,调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解
调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实现方式中,调制解调处理器可以是独立的器件。在另一些实现方式中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
继续参见图1,示例性的,无线通信模块160可以提供应用在终端设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
此外,还需要说明的是,终端设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
继续参见图1,示例性的,显示屏194用于显示图像,视频等。显示屏194包括显示面板。在一些实现方式中,终端设备100可以包括1个或N个显示屏194,N为大于1的正整数。
此外,还需要说明的是,终端设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
继续参见图1,示例性的,外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展终端设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
继续参见图1,示例性的,内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行终端设备100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储终端设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理器110通过运行存储在内部存储器121的指令,和/或存储在设置于处理器110中的存储器的指令,执行终端设备的各种功能应用以及本申请提供的数据处理方法。
此外,还需要说明的是,终端设备100可以通过音频模块170,扬声器170A,受
话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
继续参见图1,示例性的,按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。终端设备100可以接收按键输入,产生与终端设备100的用户设置以及功能控制有关的键信号输入。马达191可以产生振动提示。指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
关于终端设备100的硬件结构就介绍到此,应当理解的是,图1所示终端设备100仅是一个范例,在具体实现中,终端设备100可以具有比图中所示的更多的或者更少的部件,可以组合两个或多个的部件,或者可以具有不同的部件配置。图1中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。
为了更好的理解图1所示终端设备100的软件结构,以下对终端设备100的软件结构进行说明。在对终端设备100的软件结构进行说明之前,首先对终端设备100的软件系统可以采用的架构进行说明。
具体的,在实际应用中,终端设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。
此外,可理解的,目前主流的终端设备使用的软件系统包括但不限于Windows系统、Android系统和iOS系统。为了便于说明,本申请实施例以分层架构的Android系统为例,示例性说明终端设备100的软件结构。
此外,后续关于本申请实施例提供的数据处理方案,在具体实现中同样适用于其他系统。
参见图2,为本申请实施例的终端设备100的软件结构框图。
如图2所示,终端设备100的分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实现方式中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
其中,应用程序层可以包括一系列应用程序包。如图2所示,应用程序包可以包括图库、设置、短信、邮箱、浏览器、视频等应用程序,此处不再一一列举,本申请对此不作限制。
其中,应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。在一些实现方式中,这些编程接口和编程框架可以描述为函数/服务/框架等。
此外,需要说明的是,应用程序框架层基于其管理的编程接口和编程框架的实现语言,可以划分为系统服务框架层(通常所说的Framework层,这一层基于Java语言实现)和本地服务框架层(通常所说的native层,这一层基于C或C++语言实现)。
继续参见图2,示例性的,Framework层可以包括窗口管理器、输入管理器、内容提供器、视图系统、活动管理器等,此处不再一一列举,本申请对此不作限制。
其中,窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小、判断是
否有状态栏、锁定屏幕等,具体到本申请提供的技术方案中,还用于确定焦点窗口,以获取焦点窗口图层、对应的应用程序包名等信息。
其中,输入管理器(InputManagerService)用于管理输入设备的程序。例如,输入管理器可以确定鼠标点击操作、键盘输入操作和不离手滑动等输入操作。具体到本申请提供的技术方案中,主要用于确定不离手滑动操作。
其中,内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等,此处不再一一列举,本申请对此不作限制。
其中,视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
其中,活动管理器用于管理各个应用程序的生命周期以及导航回退功能。负责Android的主线程创建,各个应用程序的生命周期的维护。
继续参见图2,示例性的,native层可以包括输入读取器、输入派发器、图像合成系统等,此处不再一一列举,本申请对此不作限制。
需要说明的是,终端设备在使用过程中,显示屏会按照设定的周期,每隔几毫秒调用一次事件监听端口/函数(EventHub),如果监听到用户对显示屏作出的操作,如不离手滑动操作(可以看作是一个事件),就将该事件上报给输入读取器(InputReader)。即,InputReader用于从EventHub中读取出事件,或者直接接收EventHub上报的事件。
此外,InputReader在获得事件后,会将事件发送给输入派发器(InputDispatcher)。
其中,InputDispatcher在拿到InputReader发送的事件后,会将该事件通过输入管理器分发给对应的应用程序(后续简称为:应用)。
其中,图像合成系统,即surface flinger(后续表示为SF线程)用于控制图像合成,以及产生垂直同步(Vertical Synchronization,VSync)信号。
可理解的,SF线程包括:合成线程、VSync线程、缓存线程(如queue buffer)。其中,合成线程用于被VSync信号唤醒进行合成;VSync线程用于根据VSync信号请求生成下一个VSync信号;缓存线程中存在一个或多个缓存队列,每一个缓存队列分别用于存放其对应的应用的缓存数据,如根据数据帧绘制渲染出的图像数据。
其中,安卓运行时(Android Runtime)包括核心库和虚拟机。Android Runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
系统库可以包括多个功能模块。例如:图像渲染库、图像合成库、输入处理库、媒体库等。
其中,图像渲染库用于二维或三维图像的渲染;图像合成库用于二维或三维图像的合成。
可能的实现方式中,应用通过图像渲染库对图像进行绘制渲染,然后应用将绘制渲染后的图像发送至SF线程的缓存队列中。每当VSync信号到来时,SF线程从缓存
队列中按顺序获取待合成的一帧图像,然后通过图像合成库进行图像合成。
其中,输入处理库用于处理输入设备的库,可以实现鼠标、键盘和不离手滑动输入处理等。
其中,媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4、H.264、MP3、AAC、AMR、JPG和PNG等。
此外,可理解的,Android系统中的内核层是硬件和软件之间的层。内核层至少包含传感器驱动、显示驱动、音频驱动、蓝牙驱动、GPS驱动等。
关于终端设备100的软件结构就介绍到此,可以理解的是,图2示出的软件结构中的层以及各层中包含的部件,并不构成对终端设备100的具体限定。在本申请另一些实施例中,终端设备100可以包括比图示更多或更少的层,以及每个层中可以包括更多或更少的部件,本申请不做限定。
为了便于理解,示例性的给出部分与本申请实施例相关概念的说明以供参考。
1、帧:是指界面显示中最小单位的单幅画面。一帧可以理解为一幅静止的画面,快速连续地显示多个相连的帧可以形成物体运动的假象。
2、帧率:是指在1秒钟时间里刷新图片的帧数,也可以理解为终端设备中图形处理器每秒钟刷新画面的次数。高的帧率可以得到更流畅和更逼真的动画。每秒钟帧数越多,所显示的动作就会越流畅。
需要说明的是,界面显示帧前通常需要经过绘制、渲染、合成等过程。
3、帧绘制:是指显示界面的图片绘制。显示界面可以由一个或多个视图组成,各个视图可以由视图系统的可视控件绘制,各个视图由子视图组成,一个子视图对应视图中的一个小部件,例如,其中的一个子视图对应图片视图中的一个符号。
4、帧渲染:是将绘制后的视图进行着色操作或增加3D效果等。例如:3D效果可以是灯光效果、阴影效果和纹理效果等。
5、帧合成:是将上述一个或多个渲染后的视图合成为显示界面的过程。
6、垂直同步(vertical synchronization,VSync)信号:用于控制帧的绘制渲染、合成、送显等进程起始的信号。
需要说明的是,为了保证显示的流畅性,避免出现显示卡顿等现象,终端设备一般基于VSync信号进行显示,以对图像的绘制、渲染、合成和屏幕刷新显示等流程进行同步。
可以理解的是,VSync信号为周期性信号,VSync信号周期可以根据屏幕刷新率进行设置,例如,屏幕刷新率为60Hz时,VSync信号周期可以为16.6ms,即终端设备每间隔16.6ms生成一个控制信号使VSync信号周期触发。还例如,屏幕刷新率为90Hz时,VSync信号周期可以为11.1ms,即终端设备每间隔11.1ms生成一个控制信号使VSync信号周期触发。
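示例性的,VSync信号周期与屏幕刷新率之间的换算关系可以用如下示意性的Java代码表示,其中的方法名为便于说明而假设的:

```java
// 示意性代码:根据屏幕刷新率(Hz)计算VSync信号周期(毫秒)
public static double vsyncPeriodMs(double refreshRateHz) {
    // 例如60Hz时约为16.6ms,90Hz时约为11.1ms
    return 1000.0 / refreshRateHz;
}
```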
此外,还需要说明的是,VSync信号包括软件VSync(VSync-APP或VSync-SF)和硬件VSync(VSync-HW)。其中,VSync-APP用于触发绘制渲染流程;VSync-SF用于触发合成流程。硬件VSync信号(VSync-HW)用于触发屏幕显示刷新流程。通常情况下,软件VSync和硬件VSync保持周期同步。以60Hz和120Hz变化为例,若
VSync-HW从60Hz切换到120Hz,VSync-APP、VSync-SF同步变化,从60Hz切换到120Hz。
下面结合应用启动或应用中发生界面切换的场景,示例性说明终端设备100软件以及硬件的工作流程。
示例性的,当触控面板中触摸传感器接收到触摸操作时,内核层将触摸操作加工成原始输入事件(包括触摸坐标,触摸力度,触摸操作的时间戳等信息),EventHub监听到该原始输入事件,将其存储在内核层。InputReader通过输入处理库将原始输入事件从内核层中读取出,并将其交由InputDispatcher,由InputDispatcher对原始输入事件进行包装处理,如封装为设定的数据格式,并确定该原始输入事件对应的应用,进而将原始输入事件上报至输入管理器。输入管理器解析该原始输入事件的信息(包括:操作类型和报点位置等)和根据当前焦点确定焦点应用,即该原始输入事件对应的应用,并将解析后的信息发送至焦点应用。
可理解的,焦点可以是触摸操作中手指触碰点或者手写笔、鼠标点击操作中点击位置。焦点应用为终端设备前台运行的应用或者触摸操作中触碰位置对应的应用。焦点应用根据解析后的原始输入事件的信息(例如,报点位置)确定该原始输入事件所对应的控件。
以该触摸操作是不离手滑动操作,该不离手滑动操作所对应的控件为微信应用的列表控件为例,微信应用通过视图系统,调用系统库中图像渲染库对图像进行绘制渲染。微信应用将绘制渲染后的图像发送至SF线程的缓存队列中。通过系统库中图像合成库将绘制渲染后的图像合成为微信界面。SF线程通过内核层的显示驱动,使得显示屏显示微信应用的相应界面。
下面结合附图,对本申请提供的技术方案适用的应用场景进行说明。示例性的,图3中(1)、(2)所示,图4中(1)、(2)所示为可能的实现中一种终端设备在不同应用下的界面示意图。
示例性的,终端设备可以在图3中(1)所示的社交应用的界面,或在图3中(2)所示的设置相关界面中,或在图4中(1)所示的文档界面,或在图4中(2)所示的商品浏览界面等,接收用户沿箭头方向的向上滑动操作或向下滑动操作,当终端设备接收到用户作出的滑动操作时,终端设备基于滑动操作进行帧绘制、渲染、合成等过程,最终将合成的画面进行送显,使得显示屏显示的内容随着用户手指在显示屏上的移动而发生变化,即跟随手指的移动,更新当前界面的内容。
可以理解的,终端设备的显示屏的界面显示通常需要经过绘制、渲染和合成等过程。示例性的,终端设备的界面绘制过程可以包括背景绘制、子视图的绘制、滚动条的绘制等过程。终端设备的界面合成过程可以包括顶点处理和像素处理等处理过程。
但是,若终端设备在绘制渲染图像(帧)时超时(例如,超过一个VSync周期),会导致其所占用的其他周期中要绘制渲染的图像帧丢失,进而导致终端设备出现显示卡顿、跳变等异常现象。
下面结合图5至图11对终端设备的界面显示涉及的数据处理流程,以及数据处理过程中数据帧的走向,以及出现丢帧时的数据处理流程和界面内容变化进行说明。
示例性的,焦点应用在得到InputDispatcher通过InputManagerService派发的原始
输入事件后,会对原始输入事件进行分发。其分发过程例如为由焦点应用对应的UI线程中用于进行界面绘制的绘制渲染线程(如Choreographer)发起VSync信号请求,并在接收到VSync信号后进行一帧的绘制。基于该前提,参见图5,示例性的,t1至t7为UI线程中的Choreographer接收到VSync信号的时间点。其中,VSync信号由SF线程,具体为SF线程中的VSync线程在接收到VSync信号请求后,在每一个VSync信号周期发送给Choreographer。由于VSync信号有固定的周期,故而t1至t7中任意两个时间点之间的时长是固定的,即一个VSync信号周期。
继续参见图5,示例性的,当在t1时间点接收到VSync信号后,Choreographer对帧1进行绘制渲染。
可理解的,在实际应用中,每一帧绘制好的帧图像数据,会由UI线程中的Choreographer将当前绘制好的帧图像数据,如图6中的帧M图像数据发送至SF线程的缓存线程,由缓存线程将该帧图像数据缓存到该焦点应用对应的缓存队列中,以便SF线程中合成线程在对应的合成时间点,从缓存队列中取出进行合成。其中,N=M-1,M为大于1的整数。
基于图6所示的帧图像数据的流向,继续参见图5,示例性的,在帧1绘制渲染完成后,Choreographer将绘制好的帧1图像数据发送至SF线程的缓存线程,由缓存线程将帧1图像数据缓存到该焦点应用对应的缓存队列中。相应地,在到达对应的合成时间点时,合成线程从缓存队列中取出帧1图像数据进行合成,在完成合成后,终端设备可以通过调用内核层的显示驱动,在显示屏显示帧1对应的内容。
需要说明的是,图5,以及后续图8、图10、图13、图15和图16中,出现在SF线程中的帧数据,如帧1、帧2、帧3等,均指绘制渲染好的与帧数据对应的图像数据;出现在显示驱动中的帧数据如帧1、帧2、帧3等,均指合成得到的与帧数据对应的图片。
继续参见图5,示例性的,t2至t7时间点分别接收到的帧2至帧7的绘制渲染、合成、显示与帧1的类似,此处不再赘述。
此外,需要说明的是,每帧从绘制渲染到合成会滞后一定时长,从合成到显示又会滞后一定时长,这两个滞后时长可以相同,也可以不相同。故而,如图5所示,在t1时间点开始对帧1进行绘制渲染,并在帧1的绘制渲染周期,如图5中t1至t2的时间内,SF线程合成的帧1之前的帧图像数据(图5未示出),显示驱动驱动显示屏显示的是触发原始输入事件前的内容。
示例性的,在帧1绘制渲染完成后,Choreographer会向VSync线程发送请求下一个VSync信号的请求,即便当前周期还有可用时长,也不会进行下一帧的绘制渲染,而是在接收到下一个VSync信号后,如图5中t2时间点时,才开始对帧2绘制渲染。本实施例以每帧从绘制渲染到合成的滞后时间为一个VSync信号周期为例,则在t2时间点,Choreographer开始对帧2进行绘制渲染时,合成线程会从缓存队列中取出帧1图像数据,对帧1图像数据进行合成。
示例性的,对于从合成到显示滞后的时长,仍以一个VSync信号周期为例,由于帧1图像数据的合成是在t2时间点进行的,故而显示驱动驱动显示屏显示帧1对应的内容是在t3时刻开始,并在t3至t4整个时长内显示帧1对应的内容,直到下个VSync
信号周期取出新合成的内容进行显示。
可理解的,图5为每一帧都在对应的周期内绘制渲染完成,对于这种情况,显示屏中显示的内容会随着手指/手写笔的移动,按照固定的周期进行界面的刷新。以焦点应用为社交应用为例,如图7中(1)所示,用户在当前界面沿箭头方向,从P1点不离手滑动至P4点的过程中,触控面板中的触控传感器接收到该操作,由内核层将该操作处理为原始输入事件,EventHub监听到该原始输入事件,将其存储在内核层。InputReader通过输入处理库将原始输入事件从内核层中读取出,并将其交由InputDispatcher,由InputDispatcher对原始输入事件进行包装处理,如封装为设定的数据格式,并确定该原始输入事件对应的应用,进而将原始输入事件上报至输入管理器。输入管理器解析该原始输入事件的信息(包括:操作类型和报点位置等),并根据当前焦点确定焦点应用,将解析后的信息发送至焦点应用。焦点应用中的Choreographer便会请求VSync信号,在接收到VSync信号后开始对当前报点对应的帧数据进行绘制渲染,并由合成线程进行合成,显示驱动进行送显。当手指滑动到P2时,在完成对P2点的帧数据的绘制渲染、合成后,最终会显示图7中(2)所示的界面内容。
可理解的,由于每一帧从绘制渲染到合成,再到显示,以每个阶段的滞后时间为一个VSync信号周期为例,第一帧从绘制渲染到最终的显示,只有两个VSync信号周期,后续在图5所示的数据处理过程下,就会在每一个VSync信号周期自动进行界面内容的切换,即从P1点滑动到P2点时,在用户无感知的情况下会立马切换为图7中(2)的内容,从P2点滑动到P3点时,在用户无感知的情况下会立马切换为图7中(3)的内容,从P3点滑动到P4点时,在用户无感知的情况下会立马切换为图7中(4)的内容,即整个滑动过程,界面内容的变化是比较顺滑的,不存在卡顿、跳变。
然而这仅仅是理想状态下,即绘制渲染过程不超时,不丢帧的情况下。但是,随着用户对应用需求的不断提高,应用的界面中显示的内容越来越多,这就会导致滑动过程中,每一帧数据要绘制渲染的控件、图标等越来越多,进而导致一帧数据无法在一个周期内完成绘制渲染。
如图8所示,在t3时间点,Choreographer对帧3开始绘制渲染,示例性的,在一种可能的情况下,如果在t3至t4这一周期内,Choreographer没有完成对帧3的绘制,即帧3的绘制渲染超时,在t4时间点接收到新的VSync信号时,由于帧3还没有绘制渲染完成,因此不对帧4进行绘制渲染,同时由于t3时间点开始的帧3还没有绘制渲染完,在t4时间点缓存队列中没有帧3图像数据,因此在t4时间点合成线程无法从缓存队列中获取到帧3图像数据。
继续参见图8,示例性的,在t4至t5区间内,Choreographer完成了对帧3的绘制渲染,由于还没有接收到新的VSync信号,即t5时间点对应的VSync信号,Choreographer进入短暂的空白期,在t5时间点接收到VSync信号后,开始对帧5进行绘制渲染,同时由于在t5时间点前,完成了对帧3的绘制渲染,缓存队列中缓存了帧3图像数据,因此在t5时间点,合成线程可以从缓存队列中取出帧3图像数据进行合成,以使显示驱动能够在一个VSync信号周期的滞后时间,即图8中的t6时间点,
将合成线程合成的帧3对应的内容送显。
继续参见图8,示例性的,由于在t4时间点,开始显示帧2对应的内容后,直到t6时间点,显示驱动才得到合成线程合成的新一帧,具体为帧3对应的内容,因此帧2对应的内容会从t4时间点持续显示到t6时间点。
可理解的,图8中每一帧的绘制渲染、合成、送显逻辑与图5所示类似,未在本实施例中详尽说明的地方,可以参见图5所示实施例的部分,此处不再赘述。
对于图8所示的数据处理流程,具体到实际应用中,仍以社交应用的界面中发生不离手滑动,界面内容变化为例。如图9中(1)所示,用户依旧沿箭头方向从P1点不离手滑动至P4点,其中,P1点、P2点分别对应的帧的绘制渲染均未超时,故而手指在P1点时对应的界面内容如图9中(1)所示正常显示,手指在P2点时对应的界面内容如图9中(2)所示也正常显示,但是从P2点滑动到P3点时,P3点对应的帧的绘制渲染超时,导致手指在P3点时对应的界面内容没有发生变化,如图9中(3)所示,依旧为P2点对应的内容,接着在手指从P3点滑动到P4点的过程中,P3点对应的帧完成了绘制渲染,在手指滑动到P4点时,界面会更新为P3点对应的帧的内容,即图9中(4)所示界面内容与正常绘制渲染,未超时的图7中(3)所示P3点的界面内容相同,在丢一帧的情况下,界面至少在两个VSync信号周期内不会发生变化,如果丢帧数量更多,持续的时间就会越长,使得用户感知到卡顿,影响用户体验。
继续参见图8,示例性的,由于帧4丢失了,在t7时间点,显示屏显示的画面会直接从帧3的内容跳变为帧5的内容,在只丢一帧的情况下,界面跳变可能不明显,用户不会感知到,但是如果连续丢了数帧,界面跳变就会很明显,用户就会感知到界面跳变,影响用户体验。
如图10所示,在t3时间点,Choreographer对帧3开始绘制渲染,在t3至t4这一周期内,Choreographer没有完成对帧3的绘制,在t4时间点接收到新的VSync信号时,由于帧3还没有绘制渲染完成,因此不对帧4进行绘制渲染,同时由于t3时间点开始的帧3还没有绘制渲染完,在t4时间点缓存队列中没有帧3图像数据,因此在t4时间点合成线程无法从缓存队列中获取到帧3图像数据。
继续参见图10,示例性的,在t4至t5这一周期内,Choreographer没有完成对帧3的绘制,在t5时间点接收到新的VSync信号时,由于帧3还没有绘制渲染完成,因此不对帧5进行绘制渲染,同时由于t3时间点开始的帧3还没有绘制渲染完,在t5时间点缓存队列中没有帧3图像数据,因此在t5时间点合成线程无法从缓存队列中获取到帧3图像数据。
继续参见图10,示例性的,在t5至t6这一周期内,Choreographer没有完成对帧3的绘制,在t6时间点接收到新的VSync信号时,由于帧3还没有绘制渲染完成,因此不对帧6进行绘制渲染,同时由于t3时间点开始的帧3还没有绘制渲染完,在t6时间点缓存队列中没有帧3图像数据,因此在t6时间点合成线程无法从缓存队列中获取到帧3图像数据。
继续参见图10,示例性的,在t6至t7区间内,Choreographer完成了对帧3的绘制渲染,由于还没有接收到新的VSync信号,即t7时间点对应的VSync信号,Choreographer进入短暂的空白期,在t7时间点接收到VSync信号后,开始对帧7进
行绘制渲染,同时由于在t7时间点前,完成了对帧3的绘制渲染,缓存队列中缓存了帧3图像数据,因此在t7时间点,合成线程可以从缓存队列中取出帧3图像数据进行合成,以使显示驱动能够在一个VSync信号周期的滞后时间,即图10中的t8时间点,将合成线程合成的帧3对应的内容送显。
继续参见图10,示例性的,由于在t4时间点,开始显示帧2对应的内容后,直到t8时间点,显示驱动才得到合成线程合成的新一帧,具体为帧3对应的内容,因此帧2对应的内容会从t4时间点持续显示到t8时间点。如果绘制渲染阶段丢失的帧越多,不离手滑动操作下,显示屏中显示的同一画面的时间就会越长,并且由于丢失了多帧,如帧4、帧5和帧6,因此在帧3对应的内容显示完后,在下一个显示周期,显示驱动会直接驱动显示屏显示帧7的内容,中间丢失的帧会导致画面出现跳变,影响用户体验。
可理解的,图10中每一帧的绘制渲染、合成、送显逻辑与图5所示类似,未在本实施例中详尽说明的地方,可以参见图5所示实施例的部分,此处不再赘述。
对于图10所示的数据处理流程,具体到实际应用中,仍以社交应用的界面中发生不离手滑动,界面内容变化为例。如图11中(1)所示,用户依旧沿箭头方向从P1点不离手滑动至P4点,其中,P1点对应的帧的绘制渲染未超时,故而手指在P1点时对应的界面内容如图11中(1)所示正常显示,如果P1点与P2点之间还有一个位移点,而该点对应的帧的绘制渲染超时,假设该点对应的帧的绘制渲染一直持续到P4点前才完成,则手指从该点移动到P2点,以及从P2点移动到P3点时,界面内容均不发生变化,如图11中(2)和(3)所示,与P1点对应的帧绘制渲染、合成、送显的内容相同,即图11中(1)、(2)和(3)界面内容相同。在手指移动到P4点后,由于P2点和P3点对应的帧已经丢失,故而在P4点对应的帧绘制渲染、合成完成后,在下一周期,显示屏显示的画面会直接从图11中(3),即P1点对应的帧的内容,跳变为图11中(4),P4点对应的帧的内容,使得用户感知该跳变,影响用户体验。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
综上,在某一应用内,用户通过手指、手写笔(后续以手指为例说明)在显示屏上移动时,随着界面内容的增多,每一帧数据要绘制渲染的控件、图标等越来越多,绘制渲染过程会出现超时、丢帧,进而导致显示屏显示的画面出现卡顿、跳变现象。
有鉴于此,本申请提供了一种数据处理方法,以解决上述因绘制渲染超时、丢帧,导致的卡顿、跳变现象。
下面通过具体的实施例对本申请提供的数据处理方法进行详细说明。下面的实施例可以相互结合或独立实施,对于相同或相似的概念或过程可能在某些实施例中不再赘述。
图12为本申请实施例提供的一种数据处理方法涉及的功能模块,以及这些功能模块所在位置的示意图。
如图12所示,针对任意应用内进行的不离手滑动操作过程中,基于本申请实施例提供的数据处理方法,涉及的功能模块可包括位于应用程序层的应用,本实施例将
该应用称为焦点应用,位于Framework层的窗口管理器和输入管理器,位于native层的SF线程、输入读取器、输入派发器,位于硬件抽象层(HAL层)的硬件合成器,位于内核层的显示驱动、传感器驱动,以及显示屏、传感器等硬件。
此外,通过上文针对图2所描述的终端设备的软件架构的描述可知,数据处理方法的实现还会涉及系统库中的图像渲染库、图像合成库,以及输入处理库。
继续参见图12,示例性的,对于每一个焦点应用,在对帧数据进行绘制渲染时,具体会涉及在应用的UI线程中实现/调用的输入线程(ViewRootImplement,可表示为ViewRootImpl)、绘制渲染线程(Choreographer),以及插帧模块(FirstInputManager)。
继续参见图12,示例性的,SF线程包括VSync线程、缓存线程和合成线程。
示例性的,在进行不离手滑动操作时,VSync线程会在每一个VSync信号周期向Choreographer发送一次VSync信号,以使得Choreographer接收到VSync信号后,开始进行一帧的绘制渲染。
具体的,对于每一帧的绘制渲染,Choreographer需要先根据VSync信号对应的时间戳(不离手滑动操作过程中,每一个报点会对应一个时间戳),从ViewRootImpl读取原始输入事件(后续称为:Input事件)并进行处理。具体的,Input事件的获取,例如是通过ViewRootImpl向InputManagerService发送获取报点信息请求(携带上述VSync信号对应的时间戳),再由InputManagerService将该报点信息获取请求传输至InputDispatcher,由InputDispatcher将该报点信息获取请求传输至InputReader,最终由InputReader根据报点信息获取请求中携带的时间戳,从内核层中获取该时间戳对应的Input事件发生时,获取并保存在内核层的报点信息。
相应地,在获取到对应的报点信息后,逐级向上传递至InputDispatcher,由InputDispatcher调用该焦点应用在其内注册的callBack回调,将报点信息通过InputManagerService返回至该焦点应用的ViewRootImpl,再从ViewRootImpl传给Choreographer即可完成一帧数据的获取。
此外,需要说明的是,在本实施例中,为了解决帧数据在绘制渲染过程中出现的超时、丢帧导致的显示屏的画面卡顿、跳变问题,Choreographer在得到帧数据开始绘制渲染前,会将当前读取的Input事件传递至FirstInputManager。
可理解的,对于不离手滑动操作的Input事件,通常包括DOWN事件(手指落下)、MOVE事件(手指移动)、UP事件(手指抬起)。
示例性的,当Choreographer将当前读取的Input事件传递至FirstInputManager后,FirstInputManager会检测持续输入的Input事件变化的序列号。当FirstInputManager检测到DOWN事件变为MOVE事件的序列号,且显示屏中显示的内容开始第一次移动时,可以生成一个新的事件Event,并将新的Event告知Choreographer,使得Choreographer在绘制渲染当前接收到的帧数据后,在同一周期内继续绘制渲染一帧,即在首帧完成绘制渲染后,在首帧所在的周期内再插入一帧。
此外,当FirstInputManager检测到丢帧时,还会计算允许补的帧的数量,以及具体需要补哪几帧,进而重新触发上述获取帧的流程,获取丢失的帧进行补帧。
继续参见图12,示例性的,Choreographer完成每一帧的绘制渲染后,便会将绘制渲染后的帧图像数据传给缓存线程,进而由缓存线程将其缓存到对应的缓存队列,
待到达合成线程的处理周期时,由合成线程从缓存队列中取出帧图像数据进行合成,最终将合成的内容传输给硬件合成器,由硬件合成器调用显示驱动驱动显示屏进行显示,由此便实现了显示屏显示的画面的更新。
关于数据处理过程中,图12中示出的功能模块的具体交互流程,以下结合附图进行说明。
图13为本申请实施例提供的一种数据处理方法的实现过程中,涉及的功能模块交互的时序图。
参见图13,示例性的省去了部分功能模块,如图12中的显示屏、传感器、输入读取器、输入派发器、输入管理器、窗口管理器、硬件合成器等。图13中直接以用户在焦点应用中进行不离手滑动(Input事件)时,传感器监测到该滑动操作,输入读取器获取到Input事件,并得到该Input事件对应的报点信息,将Input事件上报至输入派发器,由输入派发器从窗口管理器中获取到该Input事件对应的焦点应用,进而根据焦点应用对应的包名等信息,通过该焦点应用在输入派发器中注册的CallBack回调,将Input事件经输入管理器派发给该焦点应用的输入线程进行记录管理,并且该焦点应用响应于用户的不离手滑动操作,向VSync线程发起了请求VSync-APP信号的请求为例,对不离手滑动过程中,输入线程、绘制渲染线程、插帧模块、缓存线程、合成线程和显示驱动进行的交互进行具体说明。
此外,在结合图13进行说明之前,首先对图13中涉及的VSync-APP信号和VSync-SF信号进行说明,具体的,在本实施例中接收VSync-APP信号的时间点即为触发绘制渲染线程开始绘制渲染的时间点,如图5中的t1至t7每一个时间点,而VSync-SF信号则是触发合成线程开始合成的时间点,本实施例以VSync-SF信号滞后VSync-APP信号一个VSync信号周期为例,对于帧1对应的绘制渲染图像的合成处理时间点,则可以为图5中的t2时间点。
此外,还需要说明的是,请求VSync-APP信号的请求是由焦点应用对应的线程,如应用主线程(UI线程)向VSync线程发起的(图13中未示出UI线程,直接以从UI线程中的绘制渲染线程发起为例),请求VSync-SF信号的请求,例如可以是由缓存线程向VSync线程发起。
此外,还需要说明的是,由于VSync-APP信号和VSync-SF信号对应的发送周期是固定的,其与帧率相关,并且VSync-APP信号和VSync-SF信号之间的滞后时间也为固定的,如上述所说的一个VSync信号周期,故而VSync线程根据当前的帧率对应的VSync信号周期定时生成VSync-APP信号和VSync-SF信号,并在每一个VSync信号周期,向绘制渲染线程发送生成的VSync-APP信号,向合成线程发送生成的VSync-SF信号即可。
S101,绘制渲染线程在接收到VSync-APP信号1时,根据VSync-APP信号1的时间戳从输入线程中读取记录的Input事件1。
通过上述描述可知,对于滑动操作对应的Input事件,可以包括DOWN事件、MOVE事件和UP事件。以DOWN事件时,绘制渲染线程向VSync线程发送了请求VSync-APP信号的请求,并作出了响应为例,此时在接收到VSync-APP信号1时,如果手指发生了移动,读取的Input事件1可为MOVE事件,如果抬起则为UP事件。
本实施例以Input事件1为MOVE事件为例。
此外,可理解的,在不离手滑动的过程中,输入派发器会将接收到的Input事件,经输入管理器持续派发给输入线程,对应上报频率取决于显示屏的采样率,例如采样率为120Hz,则输入派发器每隔8ms左右将数据派发给输入线程,存入事件队列内等待消耗。绘制渲染线程根据VSync信号时间戳消耗事件队列内的Input事件。
S102,绘制渲染线程在接收到Input事件1(MOVE事件)后,开始对Input事件1(MOVE事件)进行处理。
具体的说,绘制渲染线程会通过上述所说的获取报点信息的方式,通过输入线程、输入管理器、输入派发器和输入读取器获取到需要进行绘制渲染的帧数据,如上所说的帧1,然后对帧1进行绘制渲染,并将绘制渲染好的图像1缓存到缓存线程为该焦点应用分配的缓存队列中。
可理解的,在实际应用中,绘制渲染线程需要先向缓存线程发起分配缓存队列的请求,缓存线程响应于该请求为该焦点应用分配对应的缓存队列,并将分配好的缓存队列的地址信息告知绘制渲染线程,这样绘制渲染线程在完成对每一帧的绘制渲染后,就可以根据地址信息将绘制渲染好的图像缓存到缓存队列。
继续参见图13,示例性的,在本实施例中,绘制渲染线程在根据Input事件1绘制渲染图像1的过程中,还会将Input事件1发送给插帧模块,由插帧模块确定是否进行插帧操作或补帧操作。
S103,插帧模块检测相邻两次Input事件的类型(本实施例以Input事件1为MOVE事件类型,与Input事件1相邻,上一次接收到的Input事件0为DOWN事件为例),确定相邻两次Input事件分别为DOWN事件和MOVE事件,生成插帧事件。
具体的说,在相邻两次Input事件为DOWN事件和MOVE事件时,表明当前发生了第一帧的绘制渲染,即接收到VSync-APP信号1后进行的绘制渲染操作是对首帧数据进行的。为了避免后续出现丢一帧时,显示屏显示的画面出现卡顿,在本实施例提供的数据处理方法中,借助插帧模块生成了新的事件,即插帧事件,并通知绘制渲染线程处理插帧事件,即在当前处理周期内再进行一次绘制渲染操作。
可理解的,由于此时还没有到达发送新的VSync-APP信号,如VSync-APP信号2的周期,绘制渲染线程没有接收到VSync-APP信号2,不会从输入线程获取VSync-APP信号2对应的Input事件2,即进行绘制渲染的帧数据。为了保证本次绘制渲染操作,插帧模块在生成插帧事件时,例如可以在上一Input事件的位置的基础上,增加相应偏移生成新Input事件,该新Input事件即为上述所说的插帧事件,这样绘制渲染线程就可以根据新Input事件(插帧事件)再进行一次绘制渲染。
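示例性的,插帧模块在上一Input事件位置的基础上增加偏移、生成新Input事件(插帧事件)的处理,可以用如下示意性的Java代码帮助理解,其中MotionEvent为Android提供的输入事件类,类名、方法名以及偏移量的取值方式均为便于说明而假设的,并非对实际实现方式的限定:

```java
import android.view.MotionEvent;

// 示意性代码:在上一MOVE事件报点位置的基础上叠加设定偏移量,生成插帧事件
public class InsertedEventBuilder {
    public MotionEvent buildInsertedEvent(MotionEvent lastMoveEvent, float offsetX, float offsetY) {
        // 复制上一Input事件,并在其报点坐标上叠加偏移量,作为插帧事件交给绘制渲染线程再绘制渲染一帧
        MotionEvent inserted = MotionEvent.obtain(lastMoveEvent);
        inserted.offsetLocation(offsetX, offsetY);
        return inserted;
    }
}
```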
此外,需要说明的是,为了加快数据处理过程,减少不必要的插帧处理,在对每一帧进行绘制渲染的过程中,是否要由插帧模块生成插帧事件,以及下文所说的补帧操作,可以按照如下判断逻辑确定。
示例性的,在一些实现方式中,在每一应用冷启动时,可以先获取应用的包名,进而根据包名,确定应用的应用类别,最终通过判断应用的应用类别是否与设定的支持进行首帧插帧,或丢帧补帧的应用类型匹配,如是否在设置的白名单中,在确定应用的应用类别与设定的白名单中的应用类型匹配时,当接收到焦点应用对应的Input
事件,进行绘制渲染时,绘制渲染线程才向插帧模块发送当前处理的Input事件,以使插帧模块执行S103,S108,S112等步骤中的操作。
可理解的,上述所说的冷启动,指当应用启动时,后台没有该应用的进程,这时系统会重新创建一个新的进程分配给该应用,这种启动方式就叫做冷启动(后台不存在该应用进程)。冷启动时,系统会重新创建一个新的进程分配给该应用,所以会先创建和初始化应用程序类(Application类),再创建和初始化该应用对应的提供与用户进行交互的界面的MainActivity类(包括一系列的测量、布局、绘制),最后该应用的界面,如启动应用后默认显示的首界面才会显示在显示屏上。
此外,关于上述所说的在每一应用冷启动时,获取应用的包名的操作,例如可以由终端设备中的安全防护程序(可称为IAware APK)进行包名的获取。
此外,关于上述所说的设定在白名单中的应用类型,例如可以是新闻、即时通讯、购物、浏览器、视频、短视频、论坛等应用类型,此处不再一一例举,本实施例对此不作限制。
此外,关于根据包名确定应用的应用类型的方式,可以是基于终端设备中安装的提供下载应用的应用市场中,根据不同应用的包名对应用类型的划分确定的。
由此,在通过上述方式确定每一应用的应用类别与设定的白名单中应用类型匹配时,可以将插帧模块对应的使能标识符设置为“True”,这样在对每一帧进行绘制渲染时,绘制渲染线程识别到插帧模块对应的使能标识符为“True”时,就可以将当前处理的Input事件的相关信息传输给插帧模块,进而触发插帧模块进行判断处理。反之,若识别到插帧模块对应的使能标识符为“False”,则整个滑动操作过程中,插帧模块都不参与,即不会进行首帧插帧的操作,也不会在丢帧后进行补帧。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
此外,在另一些实现方式中,在绘制渲染线程识别到插帧模块对应的使能标识符为“True”时,还可以根据当前处理的Input事件对应的报点信息,确定Input事件是针对显示屏显示的界面中的哪一控件的。相应地,在当前的Input事件针对的控件为RecyclerView控件或ListView控件时,当绘制渲染线程根据该Input事件进行绘制渲染时,才会向插帧模块发送当前处理的Input事件,以使插帧模块执行S103,S108,S112等步骤中的操作。
可理解的,由于RecyclerView控件和ListView控件中通常会有涉及大量内容的绘制渲染,故而通过上述判断可以进一步确定是否需要在对每一帧进行绘制渲染时,都由插帧模块进行相应的处理,避免通常不会出现绘制超时、丢帧的场景下,插帧模块也参与,从而降低了对终端设备资源的占用,同时也能够提高数据处理速度。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
此外,在另一些实现方式中,在当前的Input事件针对的控件为RecyclerView控件或ListView控件时,绘制渲染线程可以在对每一帧进行绘制渲染的过程中,确定要绘制渲染的图层数量,在图层数量为一个时,即单图层的绘制渲染场景下,绘制渲染线程才将当前处理的Input事件的相关信息发送给插帧模块进行处理。而对于多图层
的场景,为了避免插帧模块的介入导致后续每一帧都延迟,增加其他处理难度,在绘制渲染的图层为多个时,当前Input事件的相关信息不发送给插帧模块。
示例性的,上述所说的多图层场景,例如在RecyclerView控件或ListView控件上覆盖了其他控件的情况,具体到实际应用中例如在使用购物软件查看商品详情时,在详情界面还以小窗口显示了直播内容。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
此外,在另一些实现方式中,在绘制渲染线程当前进行绘制渲染的界面为单图层时,还可以确定相邻两次的Input事件对应的滑动距离是否大于滑动操作对应的最小滑动距离阈值TouchSlop。
相应地,在相邻两次的Input事件对应的滑动距离大于最小滑动距离阈值时,在根据该Input事件进行绘制渲染时,绘制渲染线程才向插帧模块发送当前处理的Input事件,以使插帧模块执行S103,S108,S112等步骤中的操作。
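示例性的,上述几个判断条件(应用类别白名单、控件类型、图层数量、滑动距离)可以串联为一个整体判断,以下为一段示意性的Java代码,其中的方法名、参数名均为便于说明而假设的,并非对实际实现方式的限定:

```java
// 示意性代码:判断是否将当前Input事件的相关信息发送给插帧模块
public static boolean shouldNotifyFrameInserter(boolean enableFlag, boolean isListLikeView,
                                                int layerCount, float moveDistance, float touchSlop) {
    return enableFlag                    // 应用类别与设定的白名单中的应用类型匹配(使能标识符为“True”)
            && isListLikeView            // 作用于的控件为RecyclerView控件或ListView控件
            && layerCount == 1           // 单图层的绘制渲染场景
            && moveDistance > touchSlop; // 相邻两次Input事件对应的滑动距离大于最小滑动距离阈值
}
```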
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
S104,绘制渲染线程根据插帧事件绘制渲染插帧图像,并将绘制渲染好的插帧图像缓存到该焦点应用对应的缓存队列中。
可理解的,根据插帧事件中提取出的帧数据进行的绘制渲染操作,与正常接收到的Input事件中的帧数据进行的绘制渲染操作类似,具体实现细节可以参见上文,此处不再赘述。
以Input事件1中的帧数据为图14中的帧1为例,则在t1时刻接收到VSync-APP信号1,并在t1至t2时间内,即接收到VSync-APP信号2前,绘制渲染线程会对帧1进行绘制渲染操作,同时在对帧1绘制渲染后,在该周期内还会对插帧事件中的帧1’进行绘制渲染。即,在t1至t2这一周期内,会得到两个绘制渲染后的图像,分别为帧1对应的图像和帧1’对应的图像。
继续参见图13,示例性的,在对插入的帧进行绘制渲染后的插帧图像缓存到缓存队列后,如果当前周期还未结束,则绘制渲染线程进入短暂的空白期,如果当前周期结束,即接收到了VSync-APP信号2,则执行步骤S106。
继续参见图13,示例性的,如果合成线程开始合成的时间点滞后Input事件1绘制渲染的时间点一个VSync信号周期,则VSync线程向绘制渲染线程发送VSync-APP信号2的时候,也会向合成线程发送VSync-SF信号1。
相应地,合成线程在接收到VSync-SF信号1后,便会从缓存队列中取出一帧绘制渲染好的图像,执行步骤S105。
示例性的,在接收到VSync-SF信号后,合成线程则会从缓存队列中取出先放入缓存队列的一帧绘制渲染好的图像进行合成,如果队列中没有,则不处理。
S105,合成线程在接收到VSync-SF信号1后,从缓存队列中取出先放入缓存队列的Input事件1对应的绘制渲染好的图像1,并对绘制渲染好的图像1进行合成处理。
继续参见图13,示例性的,在得到合成的图像1后,合成线程便会将合成的图像1发送给显示驱动,进而由显示驱动进行送显,即驱动显示屏显示图像1的内容,实现
画面的更新。
可理解的,通过上述描述可知,合成线程在将合成的图像1发送给显示驱动时,具体是通过图12中示出的位于HAL层的硬件合成器,进而由硬件合成器将合成的图像1传输给显示驱动。
如图14所示,在t2时刻接收到VSync-SF信号1,并在t2至t3时间内,即接收到VSync-SF信号2前,合成线程会对帧1对应的图像进行合成处理。这样在得到帧1对应的合成图像后,在滞后一定时间,如上所说的合成到显示也为一个VSync信号周期时,在t3时间点,显示驱动接收到合成线程合成好的帧1对应的图像,就可以驱动显示屏显示帧1对应的图像了。
S106,绘制渲染线程在接收到VSync-APP信号2,根据VSync-APP信号2的时间戳从输入线程中读取记录的Input事件2(以Input事件为MOVE事件为例)。
S107,绘制渲染线程在接收到Input事件2(MOVE事件)后,开始对Input事件2(MOVE事件)进行处理,即根据Input事件2绘制渲染图像2。
关于对Input事件2的绘制渲染,与Input事件1的类似,具体可以参见上文,此处不再赘述。
继续参见图13,示例性的,在对Input事件2中的帧数据绘制渲染完后,绘制渲染线程还会将图像2缓存到缓存队列。同时绘制渲染线程在根据Input事件2绘制渲染图像2的过程中,还会将Input事件2发送给插帧模块,由插帧模块确定是否进行补帧操作。
需要说明的是,本实施例中所说的插帧操作/插帧事件具体是针对首帧进行的插帧操作,在该操作中插入的帧为首帧的部分内容,绘制渲染线程不需要从输入线程中读取新的Input事件。而补帧操作/补帧事件具体是针对手指移动过程中,因为某一帧的绘制渲染超时导致丢帧时进行的操作,在该操作中插入的帧为丢失的帧,即绘制渲染线程需要从输入线程中读取Input事件。在实际应用中,这两个操作都可以称为插帧操作,本实施例对此名称不作限定。
S108,插帧模块检测相邻两次Input事件的类型(Input事件1与Input事件2),确定相邻两次Input事件为MOVE事件和MOVE事件,且图像2的绘制渲染未超时,不触发补帧操作。
对于这种情况,即正常绘制渲染,不存在绘制渲染超时,丢帧的情况,绘制渲染线程在完成对Input事件2的绘制渲染后,如果当前周期还未结束,则绘制渲染线程进入短暂的空白期,如果当前周期结束,即接收到了VSync-APP信号3,则执行步骤S110。
继续参见图13,示例性的,在绘制渲染线程进行绘制渲染的过程中,VSync线程在每一个VSync信号周期,会继续向合成线程发送对应的VSync-SF信号。相应地,合成线程在接收到新的VSync-SF信号后,也会从缓存队列中取出位于队首的绘制渲染好的图像进行合成处理,并将合成的图像发送给显示驱动,由显示驱动送显,如在接收到VSync-SF信号2时,合成线程会执行步骤S109,在接收到VSync-SF信号3时,合成线程会执行步骤S115等。
S109,合成线程在接收到VSync-SF信号2后,从缓存队列中取出位于队首的绘
制渲染好的插帧图像,并对绘制渲染好的插帧图像进行合成处理。
如图14中,合成线程在t3时间点接收到VSync-SF信号2,此时合成线程会从缓存队列中取出帧1’对应的绘制渲染图像进行合成处理。
相应地,在滞后一个VSync信号周期后,即在t4时间点显示驱动会接收到合成线程合成的帧1’对应的合成图像,进而驱动显示屏显示帧1’对应的合成图像。
S110,绘制渲染线程在接收到VSync-APP信号3,根据VSync-APP信号3的时间戳从输入线程中读取记录的Input事件3(以Input事件为MOVE事件为例)。
S111,绘制渲染线程在接收到Input事件3(MOVE事件)后,开始对Input事件3(MOVE事件)进行处理,即根据Input事件3绘制渲染图像3。
关于对Input事件3的绘制渲染,与Input事件1的类似,具体可以参见上文,此处不再赘述。
继续参见图13,示例性的,在对Input事件3中的帧数据绘制渲染完后,绘制渲染线程还会将图像3缓存到缓存队列。同时绘制渲染线程在根据Input事件3绘制渲染图像3的过程中,还会将Input事件3发送给插帧模块,由插帧模块确定是否进行补帧操作。
S112,插帧模块检测相邻两次Input事件的类型(Input事件2与Input事件3),确定相邻两次Input事件为MOVE事件和MOVE事件,且图像3的绘制渲染超时(在绘制渲染线程接收到VSync-APP信号4时,仍在绘制渲染图像3),触发补帧操作。
示例性的,在一些实现方式中,对于绘制渲染图像3的过程中接收到了VSync-APP信号4,绘制渲染完成图像3后还未接收到VSync-APP信号5的场景,补帧操作可以是在VSync线程下发VSync-APP信号5前完成的,即绘制渲染线程在接收到插帧模块下发的补帧指令后,直接根据已经接收到的VSync-APP信号4的时间戳从输入线程中读取Input事件4,即执行步骤S113,然后执行步骤S114。
此外,需要说明的是,在本实施例中出现的Input事件,如Input事件0、Input事件1、Input事件2、Input事件3、Input事件4等,是按照时间顺序产生的,例如在不离手滑动操作过程包括DOWN、MOVE1、MOVE2、MOVE3、UP时,随着用户手指在显示屏的滑动,按序产生的Input事件0、Input事件1、Input事件2、Input事件3、Input事件4分别为DOWN事件、MOVE1事件、MOVE2事件、MOVE3事件、UP事件。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
S113,绘制渲染线程根据VSync-APP信号4的时间戳从输入线程中读取记录的Input事件4(以Input事件为MOVE事件,如上所说的MOVE3事件为例)。
S114,绘制渲染线程在接收到Input事件4(MOVE事件)后,开始对Input事件4(MOVE事件)进行处理,即根据Input事件4绘制渲染图像4。
关于对Input事件4的绘制渲染,与Input事件1的类似,具体可以参见上文,此处不再赘述。
相应地,绘制渲染线程在绘制渲染得到图像4后,同样会将绘制渲染的图像4缓存到缓存队列中,以便合成线程在接收到VSync-SF信号4时,能够从缓存队列中取
出绘制渲染的图像4进行合成处理。
如图15所示,当绘制渲染线程在t3至t4时间内,即接收到VSync-APP信号4前,没有完成对帧3的绘制渲染,在接收到VSync-APP信号4后,绘制渲染线程不根据VSync-APP信号4的时间戳从输入线程中读取Input事件4,继续对帧3进行绘制渲染,当绘制渲染线程在t5时间点前对帧3完成了绘制渲染,经插帧模块确定需要进行补帧,在剩余时间满足绘制渲染帧4所需的时长时,绘制渲染线程可从输入线程中读取出t4时间点对应的帧4,然后进行绘制渲染。
继续参见图15,示例性的,由于在t4时间点帧3还没有绘制渲染完成,故而缓存队列中没有对帧3绘制渲染完成后的图像,合成线程在t4时间点不执行合成操作,在t5时间点从缓存队列中读取到帧3绘制渲染完成后的图像,才开始进行合成处理,相应地由于绘制渲染线程补上了丢失的帧4,故而在后续每一个合成时间点,即接收到VSync-SF信号后,在没有发生绘制渲染超时、丢帧的情况下,合成线程能够按序从缓存队列中取出帧4对应的图像、帧5对应的图像、帧6对应的图像、帧7对应的图像、帧8对应的图像。
相应地,显示驱动也只在t5时间点没有正常取到合成线程合成的图像,会在t5至t6这一个VSync信号周期内持续显示帧2对应的内容,而后续则可以在每一个VSync信号周期正常更新显示新一帧数据对应的内容。这样,由于帧2对应的内容仅多显示了一个VSync信号周期,只有几毫秒,因此用户不会感觉到明显卡顿,并且后续在每一个VSync信号周期,都会正常更新顺序变化的帧数据对应的内容,因此显示屏显示的内容也不会出现跳变。
此外,需要说明的是,如果在图15中,在t1时间点对帧1(首帧)进行绘制渲染后,插入了帧1’,则在后续对帧3进行绘制渲染时,如果只超了一个VSync信号周期,并且对丢失的帧4进行了补帧,则最终合成线程和显示驱动的处理过程会如图16所示,即每一个VSync信号周期,显示屏显示的内容都会按序更新,并且由于没有丢失帧,因此显示屏显示的内容也不会出现跳变。
S115,合成线程在接收到VSync-SF信号3后,从缓存队列中取出位于队首的绘制渲染好的图像2,并对绘制渲染好的图像2进行合成处理。
如图14中,合成线程在t4时间点接收到VSync-SF信号3,此时合成线程会从缓存队列中取出帧2对应的绘制渲染图像进行合成处理。
相应地,在滞后一个VSync信号周期后,即在t5时间点显示驱动会接收到合成线程合成的帧2对应的合成图像,进而驱动显示屏显示帧2对应的合成图像。
此外,需要说明的是,在图13所示内容的基础上,如果绘制渲染线程在接收到VSync-APP信号5时,仍没有完成图像3的绘制渲染,如图17所示,在t5时间点绘制渲染线程还在对帧3进行绘制渲染,在t6时间点,即接收到VSync-APP信号6时,才完成了对帧3的绘制渲染,由于帧3的绘制渲染占了t3至t6这3个VSync信号周期,在t4至t5原本应该对帧4进行绘制渲染的时间内,没有对帧4进行绘制渲染,帧4丢失,同样在t5至t6原本应该对帧5进行绘制渲染的时间内,没有对帧5进行绘制渲染,帧5丢失。即,如图17所示,在对帧3进行绘制渲染的期间,丢失了帧4和帧5这两帧,为了避免用户明显感知到卡顿,可以在t6时间点接收到VSync-APP信号6时,
先补一帧,然后再对帧6进行绘制渲染。
示例性的,在进行补帧时,可以先判断帧4+帧6的绘制渲染时间是否能够在t6至t7这一时间周期内完成,如果可以,则可以按照上述方式先从输入事件读取t4时间点对应的帧4进行绘制渲染,在对帧4绘制渲染完成后,再正常读取VSync-APP信号6的时间戳原本对应的帧6进行绘制渲染。这样进行补帧后,最终只丢失了一帧(帧5),而后续则可以在每一个VSync信号周期正常更新显示新一帧数据对应的内容。这样,由于帧2对应的内容仅多显示了一个VSync信号周期,只有几毫秒,因此用户不会感觉到明显卡顿,由于补了一帧(帧4),因此显示屏也不会直接从帧3的内容跳变为帧6的内容,中间会有帧4的内容进行过渡,因此跳变也不会很明显。
示例性的,如果在t6至t7这一时间周期内,能够完成对帧4+帧5+帧6的绘制渲染,即可以补两帧,则可以按照上述方式先从输入事件读取t4时间点对应的帧4进行绘制渲染,在帧4绘制渲染后,继续从输入事件读取t5时间点对应的帧5进行绘制渲染,在对帧5绘制渲染完成后,再正常读取VSync-APP信号6的时间戳原本对应的帧6进行绘制渲染,这样在t7时间点接收到VSync-APP信号7,对帧7进行绘制渲染前,在保证帧6不丢失的情况下,还可以将丢失的帧4和帧5都补好,最终合成线程和显示驱动的处理过程会如图18所示,即每一个VSync信号周期,显示屏显示的内容都会按序更新,并且由于没有丢失帧,因此显示屏显示的内容也不会出现跳变。
这样,在滑动操作初期预先插入一帧进行绘制渲染,进而缓存队列中多缓存一帧,减少后续绘制渲染超时导致的无帧合成的情况,减少显示的卡顿,如在仅丢一帧的情况下,可以通过插入的帧平滑过渡,不会出现卡顿,提升用户体验。
此外,在绘制渲染超时时,通过补帧的方式将丢失的帧补回一帧或多帧,减少由于绘制渲染超时错过VSync信号导致的丢帧,使得显示屏显示的内容能够平稳变化,增加显示的流畅性,减少跳变,进一步提升用户体验。
为了更好的理解本申请实施例提供的数据处理方法,针对图13中,绘制渲染线程和插帧模块之间的具体处理逻辑,以及判断是否插帧、补帧,具体补几帧的具体实现细节,以下进行具体说明。
图19为本申请实施例提供的一种数据处理方法的流程示意图。如图19所示,该方法具体包括:
S201,显示第一应用的第一界面。
示例性的,第一应用即为当前处于前台运行的焦点应用。第一界面即为该第一应用当前显示的界面,以第一应用为即时通讯类应用为例,第一界面可以为上述图7中所示的朋友圈界面,在该第一界面中显示的内容例如为图7中(1)所示的画面。
S202,响应作用于第一界面的滑动操作,获取滑动操作对应的输入事件。
示例性的,对于滑动操作对应的输入事件可包括DOWN事件、MOVE事件和UP事件。
此外,关于输入事件的获取,可以参见上文显示屏、传感器、传感器驱动、EventHub、输入读取器、输入派发器、窗口管理器、输入管理器和输入线程之间的交互说明,此处不再赘述。
S203,获取第一VSync信号,并基于第一MOVE事件绘制渲染第N帧。
具体的说,第一MOVE事件是基于第一VSync信号的时间戳从滑动操作对应的输入事件中提取得到的,而第N帧即为第一MOVE事件对应的一帧需要绘制渲染的图像数据帧。
可理解的,本实施例中所说的第一VSync信号,例如可以是上文中任意一个时间点,如图18中t1至t7中任意一个时间点接收到的VSync信号,相应地第一MOVE事件即为该VSync信号的时间点对应的事件,在实际应用中可能是DOWN事件,或者MOVE事件,或者UP事件。本实施例提供的数据处理方法所针对的插帧(补帧)场景具体针对MOVE事件。故而,此处以基于第一VSync信号的时间戳从滑动操作对应的输入事件中提取得到的事件为MOVE事件为例,为了便于区分其他时间接收到的VSync信号的时间戳对应的MOVE事件,此处将基于第一VSync信号的时间戳从滑动操作对应的输入事件中提取得到的MOVE事件称为第一MOVE事件。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
此外,需要说明的是,在绘制渲染、合成、显示的过程中,每一环节都有对应的VSync信号。其中,上述所说的第一VSync信号,以及后续出现的第二VSync信号、第三VSync信号,均指用于触发绘制渲染流程的垂直同步信号,该用于触发绘制渲染流程的垂直同步信号具体为上文所说的VSync-APP信号。
相应地,用于触发合成流程的垂直同步信号具体为上文所说的VSync-SF信号,触发显示屏刷新流程的垂直同步信号具体为上文所说的VSync-HW信号。
此外,还需要说明的是,在实际应用中,相邻两个VSync-APP信号间隔第一时长,相邻两个VSync-SF信号间隔第二时长,相邻两个VSync-HW信号间隔第三时长。即,每隔第一时长,第一应用,如上文所说的应用主线程(UI线程)或者直接是绘制渲染线程就会接收到一个VSync-APP信号,如在上文所说的t1时间点接收到VSync-APP信号1,在t2时间点接收到VSync-APP信号2等。
相应地,每隔第二时长,用于进行合成处理的合成线程,就会接收到一个VSync-SF信号;每隔第三时长,显示驱动就会接收到一个VSync-HW信号。
此外,还需要说明的是,在一些实现方式中,第一时长、第二时长和第三时长可以为相同的时长,如上文所说的VSync信号周期。其与终端设备进行数据处理时的帧率有关,例如对于60Hz的场景,VSync信号周期为16.6ms,即每隔16.6ms,VSync线程就会产生对应的VSync-APP信号发送给绘制渲染线程,同样每隔16.6ms,VSync线程就会产生对应的VSync-SF信号发送给合成线程,同样每隔16.6ms,VSync线程就会产生对应的VSync-HW信号发送给显示驱动。
此外,通过上述描述可知,显示驱动获取合成线程合成的内容驱动显示屏进行显示的时间要滞后合成线程开始执行合成操作的时间,而合成线程开始执行合成操作的时间要滞后绘制渲染线程开始执行绘制渲染操作的时间,即VSync线程发送VSync-HW信号的发送时间要滞后于VSync-SF信号的发送时间,而VSync-SF信号的发送时间又滞后于VSync-APP信号的发送时间。例如以VSync-HW信号的发送时间要滞后于VSync-SF信号的发送时间一个VSync信号周期,VSync-SF信号的发送时间又滞后于VSync-APP信号的发送时间为一个VSync信号周期为例,则VSync线程可
在上文的t1时间点向绘制渲染线程发送VSync-APP信号1,在t2时间点向绘制渲染线程发送VSync-APP信号2,同时向合成线程发送VSync-SF信号1,在t3时间点向绘制渲染线程发送VSync-APP信号3,同时向合成线程发送VSync-SF信号2,向显示驱动发送VSync-HW信号1。
此外,还需要说明的是,在另一些实现方式中,第一时长、第二时长和第三时长可以不相同,具体满足第三时长>第二时长>第一时长。这样,可以保证上一环节处理完后,下一环节才开始,确保下一环节可以拿到上一环节处理得到的数据,如显示驱动可以拿到合成线程合成的内容,合成线程能够拿到绘制渲染线程绘制渲染出的内容。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
此外,触发VSync线程按照VSync信号周期生成VSync-APP信号、VSync-SF信号、VSync-HW信号,并按照VSync信号周期进行发送的前提,如上文所说可以是在输入事件为DOWN事件时,向VSync线程发起请求的。
例如,在基于VSync-APP信号的时间戳从滑动操作对应的输入事件中提取得到的事件为DOWN事件时,第一应用,如上文所述的绘制渲染线程向SF线程中的VSync线程发送第一消息(请求VSync-APP信号的请求)。这样,VSync线程就可以按照第一时长(如一个VSync信号周期)生成,并在每一个VSync信号周期,向第一应用的绘制渲染线程发送对应的VSync-APP信号。
相应地,在基于VSync-APP信号的时间戳从滑动操作对应的输入事件中提取得到的事件为DOWN事件时,SF线程中的缓存线程可以向VSync线程发送第二消息(请求VSync-SF信号的请求)。这样,VSync线程就可以按照第二时长(如一个VSync信号周期)生成,并在每一个VSync信号周期,向合成线程发送对应的VSync-SF信号。
相应地,在基于VSync-APP信号的时间戳从滑动操作对应的输入事件中提取得到的事件为DOWN事件时,显示驱动可以向VSync线程发送第三消息(请求VSync-HW信号的请求)。这样,VSync线程就可以按照第三时长(如一个VSync信号周期)生成,并在每一个VSync信号周期,向显示驱动发送对应的VSync-HW信号。
关于每一个VSync-APP信号的时间戳与报点、输入事件之间的关系可以参见上文,基于该关系便可以确定获取哪一个报点对应的输入数据,进而获取到该报点对应的图像帧。关于图像帧的获取可以参见上文,此处不再赘述。
参见图20,示例性的,绘制渲染,具体可以分为绘制阶段和渲染阶段。其中,在绘制阶段具体包括:输入(用于将输入事件传递给对应的对象进行处理)、动画(用于计算每一帧的动画的位置)、测量(用于根据xml布局文件和代码中对控件属性的设置,获取以及保持每个视图(View)和视图组(ViewGroup)的尺寸)、布局(用于根据策略获得的信息,确定控件的显示位置)、绘制(用于在确定控件的显示位置后,在画布(canvas)上绘制应用程序窗口中的所有图层,构造绘制指令)。
继续参见图20,示例性的,在渲染阶段具体包括:同步(用于从CPU中同步绘制后的绘制指令)、渲染(用于对绘制后的图层进行亮度、对比度、饱和度等的调整)、
存入缓存队列(用于将渲染后的执行结果存入缓存队列)。关于具体的绘制、渲染,本实施例对此不再赘述。
S204,在第N帧的绘制渲染时长大于一个VSync信号周期的情况下,当第N帧绘制渲染完成后,获取丢帧数,并显示第N帧。
结合图18,示例性的,在第N帧为图18中的帧3时,由于帧3的绘制渲染时长占了3个VSync信号周期,在帧3绘制渲染完成后,上文所说的插帧模块会获取丢帧数,同时SF线程在接收到VSync-SF信号后会对帧3绘制渲染后的内容进行合成,而显示驱动则会在接收到VSync-HW信号后,对SF线程合成的帧3的内容进行显示。
S205,从丢帧数和设定的最大可插帧数中选取一个最小值,作为插帧数M。
关于步骤S204中给出的获取丢帧数,以及步骤S205中最终确定的插帧数M的实现方式,例如可以是:
(1)确定第N帧开始绘制渲染的第一时间Tbegin和结束绘制渲染的第二时间Tend。
其中,Tbegin是绘制渲染线程调用doFrame接口(用于开始绘制渲染的接口)的时间,如上文中接收到每一个VSync-APP信号时的时间点,如t1、t2、t3等。Tend即为完成绘制渲染的实际时间点。
(2)根据Tbegin、Tend和第N帧对应的设定的绘制渲染时长(如一个VSync信号周期,后续表示为VSync),计算丢帧数count。
示例性的,在一种实现方式中count=floor[(Tend–Tbegin)/VSync]–1。
需要说明的是,设定的绘制渲染时长是指理想状况下,每一帧数据对应的最大绘制渲染时长,如为一个VSync信号周期。
(3)从丢帧数count和设定的最大可插帧数中选取一个最小值,作为本次绘制渲染完成后要插入的帧数,即插帧数M。
示例性的,为了避免插帧对后续数据处理的影响,可以将最大可插帧数设为2。
即,M=min(count,2)。
示例性的,如果计算出的当前丢了1帧,即count=1,则本次绘制渲染完成后要插入的帧数M=1,如果计算出的count大于2,则本次绘制渲染完成后要插入的帧数M=2。
进一步地,在实际应用中,可能实际剩余的可用于进行补帧操作的时间不足,为了避免根据上述方式确定的M不合适,还可以预估一下剩余时间可插入的帧数,然后根据从预测可插帧数和上述方式确定的M中选择一个最小值,作为本次绘制渲染完成后要插入的帧数。具体实现方式可如下:
(4)根据VSync-APP信号的发送周期,确定下一个VSync-APP信号的接收时间TnextVsyn。
示例性的,以VSync-APP信号的发送周期为一个VSync信号周期,如上文任意两个时间点(t1与t2)之间的时长为例,如图15所示,在t3时间点开始绘制渲染的帧3超时,导致帧4丢失时,下一个VSync-APP信号的接收时间TnextVsyn,可为图15中的t5时间点。
(5)根据已经完成绘制渲染的多帧图像数据实际的绘制渲染时长,确定已完成
绘制渲染的N帧中每一帧的平均绘制渲染时长(T平均)。
例如,为每一帧设定的绘制渲染时长为一个VSync信号周期,如16.6ms,实际作业中帧1、帧2、帧3完成绘制渲染所需的时长分别为4.6ms,5.6ms,16.8ms,则根据这3帧实际的绘制渲染时长,计算出的T平均=(4.6ms+5.6ms+16.8ms)/3=9ms。
(6)根据TnextVsyn、Tend和T平均,计算预测可插帧数countAllow。
示例性的,在一种实现方式中countAllow≤(TnextVsyn-Tend)/T平均。
以TnextVsyn-Tend=16.3ms为例,则countAllow≤16.3ms/9ms≈1.81,取整可知最终countAllow=1。
(7)M=min(M,countAllow),其中M为上述步骤(3)中得到的值,即从countAllow、count和设定的最大可插帧数(如2)中选取一个最小值,作为本次绘制渲染完成后要插入的帧数。
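示例性的,上述步骤(1)至步骤(7)的计算过程可以用如下一段示意性的Java代码概括,其中的方法名、变量名均为便于说明而假设的,时间单位为毫秒,并非对实际实现方式的限定:

```java
// 示意性代码:综合丢帧数count、预测可插帧数countAllow和最大可插帧数,确定插帧数M
public static int calcInsertCount(long tBegin, long tEnd, long tNextVsync,
                                  double vsyncPeriod, double[] finishedFrameDurations) {
    final int maxInsert = 2; // 设定的最大可插帧数
    // 步骤(2):count = floor[(Tend – Tbegin) / VSync] – 1
    int count = (int) Math.floor((tEnd - tBegin) / vsyncPeriod) - 1;
    // 步骤(5):根据已完成绘制渲染的各帧的实际绘制渲染时长,计算平均绘制渲染时长T平均
    double sum = 0;
    for (double duration : finishedFrameDurations) {
        sum += duration;
    }
    double avg = sum / finishedFrameDurations.length;
    // 步骤(6):countAllow ≤ (TnextVsyn – Tend) / T平均,取整数部分
    int countAllow = (int) Math.floor((tNextVsync - tEnd) / avg);
    // 步骤(3)与步骤(7):从count、countAllow和最大可插帧数中选取最小值作为插帧数M
    return Math.min(Math.min(count, maxInsert), countAllow);
}
```

应当理解的是,上述代码仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。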
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
此外,关于上述所说的确定要插入的图像数据帧与丢失的图像数据帧之间的关系,具体是指插入的图像数据帧具体为丢失的哪一帧图像数据,进而根据理论上要对丢失的图像数据帧进行绘制渲染的时间点(该时间点接收到的VSync-APP信号的时间戳),从输入线程中查找到对应的输入事件,进而将该输入事件中的帧作为要插入的帧,并对其进行绘制渲染。
关于要插入的帧对应的时间戳,可以满足Tend-(Tend–Tbegin)%VSync。以VSync=10ms,Tend=45ms,Tbegin=30ms为例,则最终计算出的要插入的帧对应的时间戳=40ms。仍以图15所示为例,在帧3绘制超时,丢失帧4时,根据该方式计算出的要插入的帧对应的时间戳=40ms,即理论上t4时间点对应的帧4。
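示例性的,要插入的帧对应的时间戳的计算可以用如下示意性的Java代码表示,其中的方法名为便于说明而假设的,时间单位为毫秒:

```java
// 示意性代码:计算要补入的帧对应的VSync时间戳
// 例如VSync=10ms、Tend=45ms、Tbegin=30ms时,返回45 - (45 - 30) % 10 = 40ms
public static long insertedFrameTimestamp(long tBegin, long tEnd, long vsyncPeriod) {
    return tEnd - (tEnd - tBegin) % vsyncPeriod;
}
```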
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。对于丢失多帧的场景,同样可以按照上述方式进行处理,此处不再赘述。
S206,在第二VSync信号到达前,基于第二MOVE事件绘制渲染M帧,并显示M帧。
需要说明的是,第二VSync信号是第N帧绘制渲染完成后接收到的首个VSync信号,第二MOVE事件是根据第N帧绘制渲染过程中接收到的全部或部分VSync信号的时间戳从滑动操作对应的输入事件中提取得到的。
结合图18,示例性的,仍以第N帧为帧3为例,则帧3在t6时间点前才绘制渲染完成,而在帧3绘制渲染完成后接收到的首个VSync信号即为t6时间点接收到的VSync信号。
相应地,第二MOVE事件,便为帧3绘制渲染过程中,如t2至t6时间内,接收到的全部或部分VSync信号的时间戳从滑动操作对应的输入事件中提取得到的。
仍以图18所示为例,示例性的,对于允许插入根据第N帧绘制渲染过程中接收到的全部VSync信号的时间戳从滑动操作对应的输入事件中提取得到的第二MOVE事件的场景,第二MOVE事件具体为根据t4时间点接收到的VSync信号的时间戳从滑动操作对应的输入事件中提取得到的MOVE事件,以及根据t5时间点接收到的VSync
信号的时间戳从滑动操作对应的输入事件中提取得到的MOVE事件。相应地,最终在第N帧(帧3)后插入的M帧为帧4和帧5。
关于对插入的M帧的绘制渲染,以及显示的处理与上述第N帧的绘制渲染和显示类似,此处不再赘述。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
由此,本实施例提供的数据处理方法,在绘制渲染超时时,通过补帧的方式将丢失的帧补回一帧或多帧,减少由于绘制渲染超时错过VSync信号导致的丢帧,使得显示屏显示的内容能够平稳变化,增加显示的流畅性,减少跳变,进一步提升用户体验。
图21为本申请实施例提供的一种数据处理方法的流程示意图。如图21所示,该方法具体包括:
S301,显示第一应用的第一界面,第一界面显示了第一画面,第一画面包括第一内容和第二内容,第一内容显示于第一界面的第一区域,第二内容显示于第一界面的第二区域。
示例性的,第一画面例如图22所示,其中第一内容例如为图22所示的第一区域中显示的与朋友A相关的内容,第二内容例如为图22所示的第二区域中显示的与朋友B相关的内容。
S302,在接收到用户在第一界面的滑动操作时,获取滑动操作过程中,每一报点对应的输入事件,输入事件为DOWN事件、MOVE事件和UP事件中任意一种。
关于输入事件的获取,可以参见上文显示屏、传感器、传感器驱动、EventHub、输入读取器、输入派发器、窗口管理器、输入管理器和输入线程之间的交互说明,此处不再赘述。
S303,在接收到触发绘制渲染流程的垂直同步信号时,确定触发绘制渲染流程的垂直同步信号的时间戳对应的报点的输入事件。
关于触发绘制渲染流程的垂直同步信号(VSync-APP信号),以及下文出现的触发合成流程的垂直同步信号(VSync-SF信号)、触发显示屏刷新流程的垂直同步信号(VSync-HW信号)的具体说明详见上述实施例,此处不再赘述。
S304,获取时间戳对应的报点的输入事件中的第一图像帧,对第一图像帧进行绘制渲染,并在绘制渲染的过程中根据相邻两次的输入事件和当前输入事件的绘制渲染时间,确定插帧策略。
关于绘制渲染的具体实现细节详见上文,此处不再赘述。
此外,关于绘制渲染过程中,是否需要插帧模块介入进行插帧(首帧插帧、丢帧后补帧(插帧))判断处理,详见上述实施例中S103后的说明,此处不再赘述。
此外,通过上文描述可知,基于本申请提供的数据处理方法,在绘制渲染阶段插帧的情况,可以分为在滑动操作初期,在首帧后插入一帧图像数据,以及在滑动操作过程中,绘制渲染超时,导致丢帧时,补入一帧或多帧图像数据。故而,本实施例中根据相邻两次的输入事件和当前输入事件的绘制渲染时间,确定插帧策略,可以包括:
在相邻两次的输入事件分别为DOWN事件和MOVE事件时,在上一Input事件的基础上增加偏移,生成新Input事件作为插帧事件,得到插帧策略;
或者,
在相邻两次的输入事件分别为不同的MOVE事件,且当前输入事件的绘制渲染时间超过设定的绘制渲染时长时,确定本次绘制渲染完成后要插入的帧数,以及要插入的图像数据帧与丢失的图像数据帧之间的关系,并根据帧数和关系生成补帧指令,得到插帧策略。
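示例性的,上述两种插帧策略的判断逻辑可以用如下示意性的Java代码帮助理解,其中MotionEvent为Android提供的输入事件类,类名、枚举名、方法名均为便于说明而假设的,并非对实际实现方式的限定:

```java
import android.view.MotionEvent;

// 示意性代码:根据相邻两次输入事件的类型以及当前帧的绘制渲染耗时,确定插帧策略
public class FrameInsertPolicy {
    public enum Strategy { INSERT_FIRST_FRAME, SUPPLEMENT_LOST_FRAMES, NONE }

    public static Strategy decide(int lastAction, int currentAction,
                                  long renderDurationMs, long vsyncPeriodMs) {
        // 相邻两次输入事件分别为DOWN事件和MOVE事件:在首帧绘制渲染完成后插入一帧
        if (lastAction == MotionEvent.ACTION_DOWN && currentAction == MotionEvent.ACTION_MOVE) {
            return Strategy.INSERT_FIRST_FRAME;
        }
        // 相邻两次输入事件均为MOVE事件,且当前帧的绘制渲染时长超过设定的绘制渲染时长:进行补帧
        if (lastAction == MotionEvent.ACTION_MOVE && currentAction == MotionEvent.ACTION_MOVE
                && renderDurationMs > vsyncPeriodMs) {
            return Strategy.SUPPLEMENT_LOST_FRAMES;
        }
        return Strategy.NONE;
    }
}
```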
可理解的,本实施例中所说的插帧策略为插帧事件的实现场景,即为上文中所说的在首帧图像数据帧后,插入一帧的方案,具体实现细节可以参见上文,此处不再赘述。
此外,还应当理解的,本实施例中所说的插帧策略为生成补帧指令的实现场景,即为上文所说的丢帧后确定插帧数,进而插帧(补帧)的方案,具体实现细节可以参见上文,此处不再赘述。
也就是说,本实施例中所说的确定插帧策略,实质是指由插帧模块判断是否需要进行首帧插帧,或者确定当前绘制渲染是否超时导致丢帧,进而需要补帧的处理逻辑。在实际应用中,设备在实现本申请提供的数据处理方法时,程序指令可以不存在生成插帧策略一步,而是在满足首帧插帧条件时直接触发首帧插帧流程,在满足丢帧补帧条件时,直接确定能够插入的帧数进而插帧。
示例性的,在插帧策略中包括插帧事件时,可以确定插帧策略指示需要进行插帧。这种情况下,绘制渲染线程在对第一图像帧(如上文的帧1)绘制渲染完成得到第一绘制渲染图像后,在下一个VSync-APP信号,如上文中t2时间点接收到的VSync-APP信号2到达前,将插帧事件中的一帧图像数据(如上文中帧1中的部分数据,帧1')作为第二图像帧,在第一图像帧后插入第二图像帧,并对第二图像帧进行绘制渲染。
示例性的,在插帧策略中包括补帧指令时,可以确定插帧策略指示需要进行插帧。这种情况下,绘制渲染线程在对第一图像帧(如上文中帧3)绘制渲染完成得到第一绘制渲染图像后,在下一个VSync-APP信号,如在上文中t5时间点接收到的VSync-APP信号5到达前,根据补帧指令中的关系,确定帧数中每一个要插入的图像帧数据所在的输入事件对应的时间戳,如帧4对应的VSync-APP信号4的时间戳;获取每一个时间戳对应的报点的输入事件中的图像帧数据作为第二图像帧,并对每一个第二图像帧进行绘制渲染。
可理解的,对于丢失的帧有帧4时,则补帧的操作可以是在t6时间点接收到的VSync-APP信号6到达前,根据插帧策略仅选取帧4一帧作为第二图像帧,在预估不影响帧5绘制的前提下,先对帧4进行绘制渲染。
此外,可理解的,对于丢失的帧有帧4和帧5时,则补帧的操作可以是在t7时间点接收到的VSync-APP信号7到达前,在预估不影响对帧6绘制渲染的情况下,在帧3绘制渲染完的结束时间至t7时间点之间,如果剩余时间允许对两帧图像数据进行绘制渲染,则可以将帧4、帧5均作为第二图像帧,并依次对帧4、帧5进行绘制渲染,在对帧5绘制渲染完成后,在该周期内接着对帧6进行绘制渲染。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
S305,在插帧策略指示需要进行插帧时,在对第一图像帧绘制渲染完成得到第一
绘制渲染图像后,在下一个触发绘制渲染流程的垂直同步信号到达前,在第一图像帧后插入第二图像帧,并对第二图像帧进行绘制渲染。
示例性的,在插帧策略中包括插帧事件时,可以确定插帧策略指示需要进行插帧。这种情况下,绘制渲染线程在对第一图像帧(如上文的帧1)绘制渲染完成得到第一绘制渲染图像后,在下一个VSync-APP信号,如上文中t2时间点接收到的VSync-APP信号2到达前,将插帧事件中的一帧图像数据(如上文中帧1中的部分数据,帧1’)作为第二图像帧,在第一图像帧后插入第二图像帧,并对第二图像帧进行绘制渲染。
示例性的,在插帧策略中包括补帧指令时,可以确定插帧策略指示需要进行插帧。这种情况下,绘制渲染线程在对第一图像帧(如上文中帧3)绘制渲染完成得到第一绘制渲染图像后,在下一个VSync-APP信号,如在上文中t5时间点接收到的VSync-APP信号5到达前,根据补帧指令中的关系,确定帧数中每一个要插入的图像帧数据所在的输入事件对应的时间戳,如帧4对应的VSync-APP信号4的时间戳;获取每一个时间戳对应的报点的输入事件中的图像帧数据作为第二图像帧,并对每一个第二图像帧进行绘制渲染。
可理解的,对于丢失的帧有帧4时,则补帧的操作可以是在t6时间点接收到的VSync-APP信号6到达前,根据插帧策略仅选取帧4一帧作为第二图像帧,在预估不影响帧5绘制的前提下,先对帧4进行绘制渲染。
此外,可理解的,对于丢失的帧有帧4和帧5时,则补帧的操作可以是在t7时间点接收到的VSync-APP信号7到达前,在预估不影响对帧6绘制渲染的情况下,在帧3绘制渲染完的结束时间至t7时间点之间,如果剩余时间允许对两帧图像数据进行绘制渲染,则可以将帧4、帧5均作为第二图像帧,并依次对帧4、帧5进行绘制渲染,在对帧5绘制渲染完成后,在该周期内接着对帧6进行绘制渲染。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
此外,通过上文描述可知,在实际应用中,可以仅选择在首帧后插入一帧,如在上文中帧1后插入帧1',也可以仅选择在丢帧后,将丢失的部分或全部帧补回来;还可以同时选择在首帧后插入一帧,后续在丢帧后,将丢失的部分或全部帧补回来,具体的实现细节可以参见上文,此处不再赘述。
S306,在接收到触发合成流程的垂直同步信号时,图像合成系统获取第一绘制渲染图像,对第一绘制渲染图像进行合成,得到第二画面,第二画面包括第二内容和第三内容。
S307,在接收到触发显示屏刷新流程的垂直同步信号时,显示驱动驱动显示屏显示第二画面,跟随滑动操作,第一区域显示第二内容,第二区域显示第三内容。
示例性的,在第二画面如图23所示时,其中原本显示在图22中第二区域的朋友B的相关内容就显示到图23中第一区域,而第三内容(如图22、图23中朋友C的相关内容)则显示到了第二区域。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
此外,关于合成线程进行合成操作和显示驱动驱动显示屏显示合成线程合成的内
容的具体细节,可以参见上文,此处不再赘述。
由此,本实施例提供的数据处理方法,在滑动操作初期预先插入一帧进行绘制渲染,进而缓存队列中多缓存一帧,减少后续绘制渲染超时导致的无帧合成的情况,减少显示的卡顿,如在仅丢一帧的情况下,可以通过插入的帧平滑过渡,不会出现卡顿,提升用户体验。
此外,在绘制渲染超时时,通过补帧的方式将丢失的帧补回一帧或多帧,减少由于绘制渲染超时错过VSync信号导致的丢帧,使得显示屏显示的内容能够平稳变化,增加显示的流畅性,减少跳变,进一步提升用户体验。
此外,可以理解的是,终端设备为了实现上述功能,其包含了执行各个功能相应的硬件和/或软件模块。结合本文中所公开的实施例描述的各示例的算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以结合实施例对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
此外,需要说明的是,在实际的应用场景中由终端设备实现的上述各实施例提供的数据处理方法,也可以由终端设备中包括的一种芯片系统来执行,其中,该芯片系统可以包括处理器。该芯片系统可以与存储器耦合,使得该芯片系统运行时调用该存储器中存储的计算机程序,实现上述终端设备执行的步骤。其中,该芯片系统中的处理器可以是应用处理器也可以是非应用处理器的处理器。
另外,本申请实施例还提供一种计算机可读存储介质,该计算机存储介质中存储有计算机指令,当该计算机指令在终端设备上运行时,使得终端设备执行上述相关方法步骤实现上述实施例中的数据处理方法。
另外,本申请实施例还提供了一种计算机程序产品,当该计算机程序产品在终端设备上运行时,使得终端设备执行上述相关步骤,以实现上述实施例中的数据处理方法。
另外,本申请的实施例还提供一种芯片(也可以是组件或模块),该芯片可包括一个或多个处理电路和一个或多个收发管脚;其中,所述收发管脚和所述处理电路通过内部连接通路互相通信,所述处理电路执行上述相关方法步骤实现上述实施例中的数据处理方法,以控制接收管脚接收信号,以控制发送管脚发送信号。
此外,通过上述描述可知,本申请实施例提供的终端设备、计算机可读存储介质、计算机程序产品或芯片均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。
Claims (13)
- 一种数据处理方法,其特征在于,所述方法包括:显示第一应用的第一界面;响应作用于所述第一界面的滑动操作,获取所述滑动操作对应的输入事件;获取第一VSync信号,并基于第一MOVE事件绘制渲染第N帧,所述第一MOVE事件是基于所述第一VSync信号的时间戳从所述滑动操作对应的输入事件中提取得到的;在所述第N帧的绘制渲染时长大于一个VSync信号周期的情况下,当所述第N帧绘制渲染完成后,获取丢帧数,并显示所述第N帧;从所述丢帧数和设定的最大可插帧数中选取一个最小值,作为插帧数M;在第二VSync信号到达前,基于第二MOVE事件绘制渲染M帧,并显示所述M帧;其中,所述第二VSync信号是所述第N帧绘制渲染完成后接收到的首个VSync信号,所述第二MOVE事件是根据所述第N帧绘制渲染过程中接收到的全部或部分VSync信号的时间戳从所述滑动操作对应的输入事件中提取得到的。
- 根据权利要求1所述的方法,其特征在于,所述获取丢帧数,包括:确定所述第N帧开始绘制渲染的第一时间和结束绘制渲染的第二时间;根据所述第一时间、所述第二时间和所述第N帧对应的设定的绘制渲染时长,计算所述丢帧数,所述设定的绘制渲染时长为一个VSync信号周期。
- 根据权利要求2所述的方法,其特征在于,基于下述公式,根据所述第一时间、所述第二时间和所述第N帧对应的设定的绘制渲染时长,计算所述丢帧数:
丢帧数=floor[(第二时间–第一时间)/VSync信号周期]–1。
- 根据权利要求2所述的方法,其特征在于,所述从所述丢帧数和设定的最大可插帧数中选取一个最小值,作为插帧数M,包括:根据所述VSync信号周期,确定第二VSync信号的接收时间;根据已完成绘制渲染的N帧的绘制渲染时长,确定已完成绘制渲染的N帧中每一帧的平均绘制渲染时长;根据所述接收时间、所述第二时间和所述平均绘制渲染时长,计算预测可插帧数;从所述预测可插帧数、所述丢帧数和设定的最大可插帧数中选取一个最小值,作为所述插帧数M。
- 根据权利要求4所述的方法,其特征在于,基于下述公式,根据所述接收时间、所述第二时间和所述平均绘制渲染时长,计算预测可插帧数:
预测可插帧数≤(接收时间-第二时间)/平均绘制渲染时长。
- 根据权利要求1所述的方法,其特征在于,所述方法还包括:在所述第一应用冷启动时,获取所述第一应用的包名;根据所述包名,确定所述第一应用的应用类别;在所述第一应用的应用类别与设定的支持插帧的应用类型匹配时,在所述第N帧的绘制渲染时长大于一个VSync信号周期的情况下,当所述第N帧绘制渲染完成后,执行所述获取丢帧数,所述从所述丢帧数和设定的最大可插帧数中选取一个最小值,作为插帧数,以及所述在第二VSync信号到达前,基于第二MOVE事件绘制渲染M帧的步骤。
- 根据权利要求6所述的方法,其特征在于,所述方法还包括:根据所述输入事件对应的报点信息,确定所述输入事件作用于所述第一界面中的控件;在作用于的所述控件为RecyclerView控件或ListView控件时,在所述第N帧的绘制渲染时长大于一个VSync信号周期的情况下,当所述第N帧绘制渲染完成后,执行所述获取丢帧数,所述从所述丢帧数和设定的最大可插帧数中选取一个最小值,作为插帧数,以及所述在第二VSync信号到达前,基于第二MOVE事件绘制渲染M帧的步骤。
- 根据权利要求7所述的方法,其特征在于,所述方法还包括:在对所述第N帧进行绘制渲染的过程中,确定要绘制渲染的图层数量;在所述图层数量为一个时,在所述第N帧的绘制渲染时长大于一个VSync信号周期的情况下,当所述第N帧绘制渲染完成后,执行所述获取丢帧数,所述从所述丢帧数和设定的最大可插帧数中选取一个最小值,作为插帧数,以及所述在第二VSync信号到达前,基于第二MOVE事件绘制渲染M帧的步骤。
- 根据权利要求8所述的方法,其特征在于,所述方法还包括:在基于相邻两次VSync信号的时间戳从所述滑动操作对应的输入事件中提取得到MOVE事件对应的滑动距离大于最小滑动距离阈值时,在所述第N帧的绘制渲染时长大于一个VSync信号周期的情况下,当所述第N帧绘制渲染完成后,执行所述获取丢帧数,所述从所述丢帧数和设定的最大可插帧数中选取一个最小值,作为插帧数,以及所述在第二VSync信号到达前,基于第二MOVE事件绘制渲染M帧的步骤。
- 根据权利要求1所述的方法,其特征在于,所述方法还包括:在所述第N帧的绘制渲染时长不大于一个VSync信号周期,且所述第N帧为绘制渲染操作的首帧时,当所述第N帧绘制渲染完成后,在所述第N帧的基础上偏移设定的偏移量,得到第N+1帧;在所述第二VSync信号到达前,绘制渲染所述第N+1帧,并显示所述第N+1帧。
- 根据权利要求10所述的方法,其特征在于,所述方法还包括:在基于第三VSync信号的时间戳从所述滑动操作对应的输入事件中提取得到的事件为DOWN事件时,确定所述第N帧为绘制渲染操作的首帧;其中,所述第三VSync信号是所述第一VSync信号前接收到的,与所述第一VSync信号相邻的VSync信号。
- 一种终端设备,其特征在于,所述终端设备包括:存储器和处理器,所述存储器和所述处理器耦合;所述存储器存储有程序指令,所述程序指令由所述处理器执行时,使得所述终端设备执行如权利要求1至11任意一项所述的数据处理方法。
- 一种计算机可读存储介质,其特征在于,包括计算机程序,当所述计算机程序在终端设备上运行时,使得所述终端设备执行如权利要求1至11任意一项所述的数据处理方法。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211382250.7A CN117991961A (zh) | 2022-11-07 | 2022-11-07 | 数据处理方法、设备及存储介质 |
CN202211382250.7 | 2022-11-07 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2024098871A1 true WO2024098871A1 (zh) | 2024-05-16 |
WO2024098871A9 WO2024098871A9 (zh) | 2024-07-04 |
Family
ID=90898044
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/113128 WO2024098871A1 (zh) | 2022-11-07 | 2023-08-15 | 数据处理方法、设备及存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN117991961A (zh) |
WO (1) | WO2024098871A1 (zh) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019047956A1 (zh) * | 2017-09-08 | 2019-03-14 | 中兴通讯股份有限公司 | 一种提高图像流畅度的方法及装置 |
CN113254120A (zh) * | 2021-04-02 | 2021-08-13 | 荣耀终端有限公司 | 数据处理方法和相关装置 |
CN114579075A (zh) * | 2022-01-30 | 2022-06-03 | 荣耀终端有限公司 | 数据处理方法和相关装置 |
CN114764357A (zh) * | 2021-01-13 | 2022-07-19 | 华为技术有限公司 | 界面显示过程中的插帧方法及终端设备 |
-
2022
- 2022-11-07 CN CN202211382250.7A patent/CN117991961A/zh active Pending
-
2023
- 2023-08-15 WO PCT/CN2023/113128 patent/WO2024098871A1/zh unknown
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019047956A1 (zh) * | 2017-09-08 | 2019-03-14 | 中兴通讯股份有限公司 | 一种提高图像流畅度的方法及装置 |
CN114764357A (zh) * | 2021-01-13 | 2022-07-19 | 华为技术有限公司 | 界面显示过程中的插帧方法及终端设备 |
CN113254120A (zh) * | 2021-04-02 | 2021-08-13 | 荣耀终端有限公司 | 数据处理方法和相关装置 |
CN114579075A (zh) * | 2022-01-30 | 2022-06-03 | 荣耀终端有限公司 | 数据处理方法和相关装置 |
Also Published As
Publication number | Publication date |
---|---|
CN117991961A (zh) | 2024-05-07 |
WO2024098871A9 (zh) | 2024-07-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114579075B (zh) | 数据处理方法和相关装置 | |
CN116501210B (zh) | 一种显示方法、电子设备及存储介质 | |
CN113254120B (zh) | 数据处理方法和相关装置 | |
CN114443269B (zh) | 帧率调节方法和相关装置 | |
CN114911336B (zh) | 调整频率的方法、装置、电子设备及可读存储介质 | |
CN116055786B (zh) | 一种显示多个窗口的方法及电子设备 | |
CN115097994B (zh) | 数据处理方法和相关装置 | |
CN111597000A (zh) | 一种小窗口管理方法及终端 | |
CN116089096B (zh) | 负载资源调度方法及电子设备 | |
WO2024041047A1 (zh) | 一种屏幕刷新率切换方法及电子设备 | |
WO2023231655A9 (zh) | 弹幕识别方法和相关装置 | |
WO2022247541A1 (zh) | 一种应用程序动效衔接的方法及装置 | |
CN115934314A (zh) | 一种应用运行方法以及相关设备 | |
WO2024156206A9 (zh) | 一种显示方法及电子设备 | |
CN116708753B (zh) | 预览卡顿原因的确定方法、设备及存储介质 | |
WO2024098871A1 (zh) | 数据处理方法、设备及存储介质 | |
CN117724781A (zh) | 一种应用程序启动动画的播放方法和电子设备 | |
WO2022247542A1 (zh) | 一种动效计算方法及装置 | |
CN116257235B (zh) | 绘制方法及电子设备 | |
CN113079332B (zh) | 移动终端及其录屏方法 | |
CN116414337A (zh) | 帧率切换方法及装置 | |
CN115904184A (zh) | 数据处理方法和相关装置 | |
WO2024087970A1 (zh) | 数据处理方法和相关装置 | |
WO2024198633A1 (zh) | 一种视频切换方法及电子设备 | |
WO2024016798A9 (zh) | 图像显示方法和相关装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23887566 Country of ref document: EP Kind code of ref document: A1 |