WO2021013055A1 - Data processing method, apparatus, and electronic device - Google Patents

Data processing method, apparatus, and electronic device

Info

Publication number
WO2021013055A1
Authority
WO
WIPO (PCT)
Prior art keywords
cycle
thread
timing
processing
task
Prior art date
Application number
PCT/CN2020/102534
Other languages
English (en)
French (fr)
Inventor
王亮
余先宇
李煜
支渠成
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to EP20844411.7A priority Critical patent/EP4002112A4/en
Publication of WO2021013055A1 publication Critical patent/WO2021013055A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/4887Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues involving deadlines, e.g. rate based, periodic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration

Definitions

  • This application relates to the field of electronic technology, and in particular to a data processing method, apparatus, and electronic device.
  • The current mainstream smart terminal operating system is the Android system.
  • In the Android system, when an application runs in the foreground, the interface is refreshed and changes in various ways.
  • In particular, when the user slides several times in succession, a typical problem is that the interface changes alternately fast and slow, and the fluency of the system is poor.
  • The main reason is that multiple threads need to run in the background, including high-load tasks that consume considerable power, so the foreground tasks cannot obtain the required central processing unit (CPU) resources in time. This increases the time consumed by a single frame and causes lag.
  • the present application provides a data processing method, device and electronic device, which can reduce the load of UI threads in some scenarios, shorten the running time of UI threads, and reduce stalls, thereby improving the fluency of the system and improving user experience.
  • In a first aspect, a data processing method is provided: in response to a received trigger signal, the main thread in a first cycle starts to process the timing-related tasks among one or more tasks in the first cycle; the main thread obtains the preprocessing result of the timing-independent task among the one or more tasks, where the preprocessing result is the result of processing the timing-independent task in a second cycle, and the second cycle is a cycle before the first cycle; the main thread combines the results of the timing-related tasks processed in the first cycle with the obtained preprocessing result of the timing-independent task to determine the main-thread processing result; and the main thread sends the main-thread processing result to the rendering thread.
  • In the solution provided by this application, the timing-independent tasks in the UI thread, which may be timing-independent tasks themselves or the Android API interfaces of timing-independent tasks, are identified offline, and the single-frame idle time of the UI thread, or whether the UI thread is in the IDLE state, is identified.
  • When the average running time of a timing-independent task is less than or equal to the single-frame idle time, the single-frame idle time is used to preprocess the timing-independent task: the timing-independent task in the UI thread is "moved" to the single-frame idle time of the UI thread for processing, and the processing result is cached.
  • Alternatively, the IDLE state is used to preprocess the timing-independent tasks: the timing-independent tasks in the UI thread are "moved" to the IDLE-state period of the UI thread for processing, and the processing results are cached.
  • This method can reduce the load on the UI thread in some scenarios and reduce lag, thereby improving the fluency of the system and the user experience.
  • In a possible implementation, before the main thread in the first cycle starts processing the timing-related tasks, the method further includes: processing the timing-independent task in the second cycle to obtain the preprocessing result of the timing-independent task, and caching the preprocessing result of the timing-independent task.
  • The duration of the second cycle includes the main-thread running time and the main-thread idle time.
  • The method further includes: determining that the running time of the timing-independent task is less than or equal to the idle time of the main thread.
  • In this way, the UI running time of the current frame is reduced by preprocessing the timing-independent tasks, and the load of the main thread is reduced, so as to reduce the total time consumed by the UI thread and the rendering thread.
  • The processing of the timing-independent task of the first cycle is moved to the single-frame idle time of the UI thread in the second cycle for preprocessing, and the processing result of the timing-independent task is cached.
  • When the UI thread of the first cycle runs, it can directly obtain the cached result of the timing-independent task, which reduces the time the first-cycle UI thread spends running the timing-independent task and therefore reduces the total time consumed by the UI thread and the rendering thread in the current cycle.
  • the main thread in the second cycle is in an idle state.
  • For some timing-independent tasks, the single-frame idle time of the UI thread cannot be used, because the running time of this type of task may exceed the duration of one cycle. When the processing time required by such a function is long, the preprocessing cannot be performed in the single-frame idle time of the UI thread; instead, the idle (IDLE) state of the UI thread can be used for the processing, and the processing result of the UI thread's timing-independent task is cached. When the time-consuming function would otherwise be executed, the result produced in the IDLE state is fetched directly, thereby reducing the running time of the long function.
  • In a possible implementation, the method further includes: determining the idle time of the main thread in a third cycle, where the main thread in the third cycle is in the running state, and the duration of the third cycle includes the running time of the main thread and the idle time of the main thread; and when the running time of the timing-independent task is greater than the idle time of the main thread in the third cycle, processing the timing-independent task in the second cycle to obtain the preprocessing result of the timing-independent task.
  • Whether the preprocessing of timing-independent tasks is relocated to the single-frame idle time of the UI thread or to its IDLE state may be determined by the system. For example, the system may first determine whether the running time of a timing-independent task is less than or equal to the idle time of the main thread. When the running time of the timing-independent task is less than or equal to the idle time of the UI thread, the timing-independent tasks of the next one or more cycles are moved to the idle time of the UI thread of the current cycle for processing; when the running time of the timing-independent task is greater than the idle time of the UI thread, the timing-independent tasks of the next one or more cycles are moved to the IDLE state of the current UI thread for processing. Alternatively, when the system determines that the UI thread is currently in the IDLE state, the timing-independent tasks of the next one or more cycles may be moved to the IDLE state of the current UI thread for processing, which is not limited in this application.
  • In a possible implementation, that the main thread in the first cycle, in response to the received trigger signal, starts to process the timing-related tasks among the one or more tasks in the first cycle includes: in response to the received trigger signal, stopping the preprocessing flow of the main thread in the first cycle, and starting to process the timing-related tasks among the one or more tasks in the first cycle.
  • A data processing device is provided, comprising: a processing unit, configured to, in response to a received trigger signal, start to process, in a first cycle, the timing-related tasks among one or more tasks in the first cycle; an acquiring unit, configured to acquire the preprocessing result of the timing-independent task among the one or more tasks, where the preprocessing result is the result of processing the timing-independent task in a second cycle, and the second cycle is a cycle before the first cycle; the processing unit is further configured to combine the result of the timing-related tasks processed in the first cycle with the obtained preprocessing result of the timing-independent task to determine the main-thread processing result; and a sending unit, configured to send the main-thread processing result to the rendering thread.
  • In a possible implementation, before the processing unit starts processing the timing-related tasks, the processing unit is further configured to process the timing-independent task in the second cycle to obtain the preprocessing result of the timing-independent task; and the device further includes a buffer unit, configured to buffer the preprocessing result of the timing-independent task.
  • the duration of the second cycle includes the main thread running time and the main thread idle time
  • The processing unit is also used to determine that the running time of the timing-independent task is less than or equal to the idle time of the main thread.
  • the main thread in the second cycle is in an idle state.
  • In a possible implementation, the processing unit is further configured to: determine the idle duration of the main thread in a third cycle, where the main thread in the third cycle is in the running state, and the duration of the third cycle includes the running time of the main thread and the idle time of the main thread; and when the running time of the timing-independent task is greater than the idle time of the main thread in the third cycle, process the timing-independent task in the second cycle to obtain the preprocessing result of the timing-independent task.
  • In a possible implementation, the processing unit is specifically configured to: in response to the received trigger signal, stop the preprocessing flow in the first cycle, and start to process the timing-related tasks among the one or more tasks in the first cycle.
  • the present application provides a device included in an electronic device, and the device has a function of realizing the behavior of the electronic device in the foregoing aspects and possible implementation manners of the foregoing aspects.
  • the function can be realized by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules or units corresponding to the above-mentioned functions. For example, display module or unit, detection module or unit, processing module or unit, etc.
  • The present application provides an electronic device, including: a touch display screen, where the touch display screen includes a touch-sensitive surface and a display; a camera; one or more processors; a memory; a plurality of application programs; and one or more computer programs.
  • one or more computer programs are stored in the memory, and the one or more computer programs include instructions.
  • the electronic device is caused to execute the data processing method in any one of the possible implementations of the foregoing aspects.
  • the present application provides an electronic device, including one or more processors and one or more memories.
  • the one or more memories are coupled with one or more processors, and the one or more memories are used to store computer program codes.
  • the computer program codes include computer instructions.
  • When the computer instructions are executed, the electronic device executes the data processing method in any possible implementation of any of the foregoing aspects.
  • the present application provides a computer storage medium, including computer instructions, which when the computer instructions run on an electronic device, cause the electronic device to execute any one of the possible data processing methods in any of the foregoing aspects.
  • the present application provides a computer program product, which when the computer program product runs on an electronic device, causes the electronic device to perform any one of the possible data processing methods in any of the foregoing aspects.
  • In an eighth aspect, an electronic device is provided, where the electronic device includes a device for executing any one of the possible data processing methods in any one of the foregoing aspects.
  • Figure 1 is a schematic diagram of an example of a graphical user interface display process provided by the present application.
  • Figure 2 is another example of a graphical user interface display flow chart provided by this application.
  • Fig. 3 is a flowchart of an example of a data processing method provided by an embodiment of the present application.
  • Fig. 4 is a schematic diagram of an example of a UI thread processing process provided by an embodiment of the present application.
  • Fig. 5 is a schematic diagram of another example of a UI thread processing process provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a fluency test result provided by an embodiment of the present application.
  • Fig. 7 is a schematic diagram of a data processing device provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of the composition of an example of an electronic device provided by an embodiment of the present application.
  • first and second are only used for descriptive purposes, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features.
  • the features defined with “first” and “second” may explicitly or implicitly include one or more of these features.
  • In the embodiments of this application, "multiple" means two or more; for example, multiple tasks may refer to two or more tasks.
  • The electronic device may be a portable electronic device that also includes other functions, such as a mobile phone, a tablet, or a wearable electronic device with a wireless communication function (such as a smart watch).
  • Portable electronic devices include, but are not limited to, portable electronic devices running Android or other operating systems.
  • the above-mentioned portable electronic device may also be other portable electronic devices, such as a laptop computer. It should also be understood that in some other embodiments, the above electronic device may not be a portable electronic device, but a desktop computer.
  • the electronic device may be a smart home appliance, such as a smart speaker, smart home equipment, and so on. The embodiments of the present application do not impose any restrictions on the specific types of electronic devices.
  • FIG. 1 is a schematic diagram of a display process of an example of a graphical user interface provided by this application
  • FIG. 2 is a display flowchart of another example of a graphical user interface provided by this application.
  • the electronic device creates an Android user interface (UI) thread.
  • the process of displaying a graphical user interface by the system can be divided into three stages.
  • the rendering layer stage can draw the UI to a graphics buffer (Queue buffer).
  • the rendering layer stage can include a main thread and a renderer thread.
  • The main thread is also called the UI thread, and the main thread and the rendering thread are dependent threads. It should be understood that the system does not create a separate thread for each component; UI components in the same process are instantiated in the UI thread, and the system's calls to each component are dispatched from the UI thread.
  • the UI thread may include multiple processing tasks, such as member functions Measure, Draw, Layout, and so on.
  • a DrawFrame command is issued to the rendering thread.
  • the rendering thread contains a task queue (Task Queue).
  • The DrawFrame command sent from the UI thread is stored in the Task Queue of the rendering thread, waiting to be processed by the rendering thread. The function types and tasks included in the UI thread and the rendering thread are not described in detail here.
  • the task processing result of the UI thread needs to be sent to the rendering thread.
  • the DrawFrame command sent by the UI thread to the rendering thread may be generated according to the task processing result of the UI thread.
  • the task processing result of the UI thread may be sent to the rendering thread through the DrawFrame command, which is not limited in this application.
  • the system can periodically generate trigger signals, such as the VSYNC signal shown in Figure 2.
  • the VSYNC signal is the trigger signal for each stage.
  • the VSYNC signal can be equivalently understood as a timer function.
  • the time length between two adjacent VSYNC signals is one frame of the process of displaying the graphical user interface.
  • the UI thread and the renderer thread are executed in sequence.
  • this application will take a 60 Hz display system as an example.
  • the VSYNC signal cycle is 16.67 ms, and the display process of three consecutive cycles is shown in FIG. 2.
  • When the response time of each stage is within 16.67 ms, the system is considered to be smooth, without frame loss or freezes.
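  • The following Java sketch illustrates how such VSYNC-driven, per-cycle UI work is typically scheduled on Android using the Choreographer API; the class name, the handleFrame placeholder, and the 16.67 ms budget check are illustrative assumptions rather than part of the patent.

```java
import android.view.Choreographer;

// Minimal sketch: per-frame UI-thread work driven by the VSYNC pulse.
// Choreographer delivers VSYNC to the application; doFrame() runs once per
// display refresh (about every 16.67 ms on a 60 Hz display).
public final class VsyncDrivenWork implements Choreographer.FrameCallback {

    private static final long FRAME_BUDGET_NS = 16_670_000L; // one 60 Hz cycle

    // Must be called on a thread with a Looper (normally the UI thread).
    public void start() {
        Choreographer.getInstance().postFrameCallback(this);
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        long begin = System.nanoTime();

        handleFrame(); // placeholder for the measure/layout/draw-related tasks

        if (System.nanoTime() - begin > FRAME_BUDGET_NS) {
            // The rendering-layer stage overran one cycle; this frame may be dropped.
        }
        Choreographer.getInstance().postFrameCallback(this); // re-arm for the next VSYNC
    }

    private void handleFrame() {
        // UI-thread tasks of the current cycle go here.
    }
}
```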
  • the layer composition stage is mainly executed in the compositor (SurfaceFlinger, SF), and the layer of the previous stage is synthesized according to the result of the graphics buffer in the rendering layer stage.
  • the running result is drawn to the graphics buffer (Queue buffer), and the second VSYNC signal triggers the system to enter the second cycle.
  • the layer synthesis stage directly synthesizes the layer of the previous stage according to the result of the graphics buffer of the previous stage.
  • Stage 3 Display stage
  • The liquid crystal display (LCD) displays the synthesized data.
  • the LCD display stage of the third cycle displays the layer synthesized in the second cycle.
  • each cycle includes the rendering layer phase, the compositing layer phase, and the display phase.
  • the stages are run in parallel.
  • the composite layer phase composites the result of the graphics buffer of the previous cycle
  • The display phase displays the composite data of the composite-layer phase of the previous cycle. Therefore, the three different processing stages in the three cycles shown in Figure 2 are all part of the task processing started by the same application.
  • The fluency of the system means that the processing time of each frame does not exceed the time specified by the system. For the 60 Hz display system described above, the fluency of the system in the process of displaying the graphical user interface can mean that the processing time of a single frame does not exceed 16.67 ms. For the rendering-layer stage, which includes the UI thread and the rendering thread, if the processing time of a single frame exceeds the specified time, such as 16.67 ms, the system drops frames or freezes, which affects the fluency of the system and degrades the user experience.
  • the CPU resources can include the large and small cores of the CPU that the thread runs, and the CPU frequency of the core where the thread is located.
  • the large core of the CPU has strong processing capability and high power consumption; the small core has weak processing capability and low power consumption.
  • One existing approach is to set different scheduling priorities for different threads. For example, running the UI thread (main thread) on a large CPU core can alleviate lag to a certain extent, but this increases CPU power consumption.
  • Another approach is to improve the operating efficiency of a single frame by raising the frequency point, that is, increasing the processing speed of the CPU, for example, raising the operating frequency of the CPU from 300 MHz to 800 MHz within a certain range. However, it is not necessary to increase the frequency for all user operations, and for lightly loaded threads, increasing the frequency causes additional CPU power consumption.
  • the processing speed of the system is also increased by adding asynchronous threads, that is, by adding additional threads (such as V threads, etc.), the system can process multiple threads in parallel.
  • However, some methods cannot run in such added threads, and moving them there can even cause the system to crash.
  • The Android system stipulates that some tasks can only be processed in the main thread; if these tasks are not processed in the main thread, the thread check fails and the application crashes.
  • the UI thread introduced in Figure 1 and Figure 2 may include multiple processing tasks and functions, such as load functions. Too many load functions and heavy tasks will cause the system to fail to complete the processing tasks within the specified time. This leads to stuttering.
  • this application will provide a data processing method, which improves the fluency of the system by reducing the processing time of the UI thread, avoids jams, and improves user experience.
  • The UI thread includes multiple processing tasks, which include timing-independent tasks and timing-related tasks.
  • A timing-independent task can be understood as a task that does not depend on the previous task and whose running result does not affect the processing of the subsequent tasks.
  • the UI thread includes multiple tasks A, task B, task C, task D, and task E that are executed in sequence.
  • Task C is defined as a timing-independent task
  • Task A, task B, task D, and task E are defined as timing-related tasks.
  • Task C does not depend on task A or task B to run; the other tasks do, for example, task B can run only after task A has run, and task D can run only after task C has run.
  • Although task C is a timing-independent task, it is still a necessary processing task for the current UI thread.
  • the main thread in a cycle can contain multiple time-independent tasks.
  • the following takes a single time-independent task as an example to introduce the data processing method provided by this application.
  • The processing principle of multiple timing-independent tasks is the same as that of a single task and is not repeated in this application.
  • Fig. 3 is a flowchart of an example of a data processing method provided by an embodiment of the present application. Taking an electronic device based on the 60Hz Android system as an example, as shown in FIG. 3, the method may include the following contents:
  • An external event triggers the UI thread to work. It should be understood that the external event here may be an operation event of the user or an event set in advance, which is not limited in this application.
  • the UI thread is started, that is, the UI thread starts to run.
  • the UI thread processes non-preprocessing tasks according to the original system flow.
  • the UI thread may include a pre-processing process and a non-pre-processing process.
  • the pre-processing process can be understood as the processing process of time-independent tasks
  • the non-pre-processing process can be understood as the process of processing time-related tasks necessary for the current UI thread.
  • the processing of task C is called the preprocessing flow
  • The processing of task A, task B, task D, and task E is called the non-preprocessing flow.
  • An external event can trigger the system to send the VSYNC signal, and the VSYNC signal triggers the UI thread in a cycle to start task processing; this is called the UI thread in the first cycle. That is, the UI thread in the first cycle mentioned in this application can be understood as the "currently running UI thread".
  • If the preprocessing flow of the UI thread is running when the VSYNC signal is received, the preprocessing flow of the current UI thread is stopped and step 303 is executed, that is, the UI thread starts working; if the preprocessing flow is not currently running when a VSYNC signal is received, step 303 is entered directly, a new UI thread run is started, and the UI thread handles the non-preprocessing tasks.
  • The preprocessing result of the current timing-independent task can be obtained, where the preprocessing result is the result of processing the timing-independent task in a second cycle, and the second cycle is a cycle before the first cycle.
  • task C may be a timing-independent task of the current first cycle, and task C processes and caches the processing result in a second cycle, where the second cycle is a cycle before the first cycle.
  • task C is preprocessed in the second cycle before the current first cycle, and the preprocessing result of task C is cached.
  • In other words, in the current first cycle, the system can process the timing-independent tasks of one or more cycles after the first cycle, and cache the processing results of those timing-independent tasks.
  • the first cycle and one or more cycles after the first cycle may be multiple cycles formed by continuous VSYNC signals, and this application does not limit the number of preprocessed timing-independent tasks in the first cycle.
  • The processing of task C is the preprocessing flow, and the processing of task A, task B, task D, and task E by the current UI thread is the non-preprocessing flow.
  • the processing result of task C has been processed in the previous cycle (for example, the second cycle) and cached.
  • The system can directly obtain the processing result of the preprocessed task C from the cache; alternatively, after running task A, task B, task D, and task E, the system can directly obtain the processing result of the preprocessed task C from the cache, which is not limited in this application.
  • After the system finishes running task A, task B, task D, and task E, it directly obtains the processing result of the preprocessed task C from the cache. Once the processing result of task C is obtained, the UI thread task ends.
  • In some cases, the remaining time or idle time in a cycle may not be enough to process the timing-independent task (such as task C) of the next cycle (the current first cycle).
  • In this case, the preprocessing flow of the timing-independent task is not performed.
  • In that case, the system cannot obtain the preprocessing result of the timing-independent task C in the current first cycle, and the system continues to process the original task C until the UI thread finishes running.
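  • A minimal Java sketch of this first-cycle flow is shown below; the task and type names (task A to task E, ResultC, and so on) follow the example above but are otherwise assumptions. The cached result of the timing-independent task C is used when a previous cycle preprocessed it, and the original in-frame processing is the fallback.

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the first-cycle processing: timing-related tasks A, B, D, E run as
// usual; the timing-independent task C is taken from the cache if a previous
// cycle preprocessed it, otherwise it is computed in the frame as before.
public final class UiFrameProcessor {

    // Filled by the preprocessing flow of an earlier cycle (single-frame idle
    // time or IDLE state); empty if no preprocessing has run.
    private final AtomicReference<ResultC> cachedC = new AtomicReference<>();

    public FrameResult runFrame(Input input) {
        ResultA a = runTaskA(input);          // timing-related
        ResultB b = runTaskB(a);              // depends on A
        ResultC c = cachedC.getAndSet(null);  // preprocessed result, if any
        if (c == null) {
            c = runTaskC(input);              // fallback: original in-frame processing
        }
        ResultD d = runTaskD(b);              // timing-related
        ResultE e = runTaskE(d);              // timing-related
        return combine(a, b, c, d, e);        // main-thread result sent to the render thread
    }

    // Called from the preprocessing flow of a previous cycle. Assumes the input
    // needed by task C is available ahead of time, as required for a
    // timing-independent task.
    public void preprocessC(Input nextInput) {
        cachedC.set(runTaskC(nextInput));
    }

    // Placeholder types and tasks for the sketch.
    static final class Input {}
    static final class ResultA {} static final class ResultB {} static final class ResultC {}
    static final class ResultD {} static final class ResultE {} static final class FrameResult {}

    private ResultA runTaskA(Input in) { return new ResultA(); }
    private ResultB runTaskB(ResultA a) { return new ResultB(); }
    private ResultC runTaskC(Input in) { return new ResultC(); }
    private ResultD runTaskD(ResultB b) { return new ResultD(); }
    private ResultE runTaskE(ResultD d) { return new ResultE(); }
    private FrameResult combine(ResultA a, ResultB b, ResultC c, ResultD d, ResultE e) {
        return new FrameResult();
    }
}
```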
  • The time remaining in one cycle after the UI thread finishes its processing is referred to as the "single-frame idle time" or "single-frame free time".
  • In step 306, after the UI thread finishes running, the rendering thread can be entered.
  • the system determines whether the UI thread is in an idle state according to the state of the UI thread.
  • the idle state of the UI thread can be understood as the UI thread has sufficient or rich single frame idle time, that is, there is still time remaining after the UI thread runs in a cycle, or the idle state of the UI thread can be understood as the UI The thread enters the IDLE state.
  • Otherwise, the system continues to run the UI thread without performing the preprocessing flow, and after running the UI thread it enters the rendering thread, which is not limited in this application.
  • Fig. 4 is a schematic diagram of an example of a UI thread processing process provided by an embodiment of the present application.
  • the UI thread and the rendering thread process of the rendering layer stage are shown respectively.
  • each cycle can include a UI thread and a rendering thread.
  • The UI thread plus the rendering thread constitutes the processing of one frame.
  • If this processing does not finish within one cycle, the current frame is dropped, causing the user to perceive a stall.
  • Through offline analysis, the system can determine the timing-independent tasks in the UI thread, such as the FunA function shown in Figure 4.
  • The timing-independent task can be a function or an API interface, etc.; the following uses a timing-independent function as an example, and the processing of a timing-independent API interface is the same and is not repeated here.
  • the FunA function is equivalent to the timing-independent task C listed above.
  • The processing of the FunA function is treated as the preprocessing flow of the UI thread; it does not affect the processing of the other tasks of the current UI thread and is moved to another time period for processing.
  • In other words, this solution reduces the UI running time of the current frame and the load of the main thread by preprocessing the timing-independent tasks, thereby reducing the total time consumed by the UI thread and the rendering thread.
  • the timing-independent task of the UI thread in the first cycle is moved to the single frame idle time of the UI thread in the second cycle for processing, and the processing result of the timing-independent task of the UI thread is cached.
  • the second cycle is the previous cycle or previous cycles of the first cycle, which is not limited in this application.
  • the processing procedure of the first cycle of the timing-independent task FunA is moved to the single frame idle time of the UI thread of the second cycle for preprocessing, and the processing result of FunA is cached.
  • When the UI thread of the first cycle runs, the cached result of FunA can be obtained directly, which reduces the time the first-cycle UI thread spends running FunA and thereby reduces the total time consumed by the UI thread and the rendering thread of the current cycle.
  • In the original processing flow, the total time consumed by the UI thread and the rendering thread is T0, where T0 is greater than 16.67 ms; in the processing flow of this solution, the time for the UI thread to run FunA is saved.
  • If the system determines that the processing time of the timing-independent task FunA of the first cycle is less than or equal to (T - T1), that is, the single-frame idle time of the UI thread in the second cycle, the timing-independent task FunA of the first cycle can be moved to the second cycle for processing, and the running result of FunA is cached. It should be understood that this application does not limit the manner of determining the single-frame idle time of the UI thread.
  • the running time of the UI thread can be shortened, the total time consumption of the UI thread and the rendering thread can be further reduced, and the fluency of the system can be improved. At the same time, the process will not affect the interface display of the electronic device.
  • The timing-independent task FunA can be processed in parallel with the rendering thread; in other words, the electronic device can run the timing-independent task FunA of the next cycle during the rendering-thread period, or run the timing-independent task FunA of the next first cycle after the rendering thread of the second cycle has finished, which is not limited in this application.
  • If the system determines that the total processing time of the timing-independent task FunA of the first cycle and of the fourth cycle is less than or equal to t2, the system relocates the timing-independent task FunA of both the first cycle and the fourth cycle to the second cycle for processing, and the running result of FunA is cached.
  • This application does not limit the number of timing-independent tasks FunA processed by the current frame or the number of timing-independent tasks in multiple cycles.
  • the duration of the second cycle includes the running time of the main thread and the idle time of the main thread.
  • the time-independent task in one or more cycles after the second cycle may be moved to the second cycle for processing.
  • the main thread in the second cycle of the preprocessing timing-independent task may be in an idle state.
  • the timing-independent tasks of one or more cycles after the second cycle can be moved to the idle state of the UI thread for processing, and the processing results of the timing-independent tasks of the UI thread can be cached.
  • For some timing-independent tasks, the single-frame idle time of the UI thread cannot be used, because the running time of this type of task may exceed the duration of one cycle. A function whose required processing time is longer than one cycle is referred to here as FunB.
  • For example, FunB may include the function obtainView or the function inflate. Functions of this type take a long time to process, so the single-frame idle time of the UI thread cannot be used for the preprocessing flow; instead, the idle state of the UI thread can be used for processing, and the processing results of the UI thread's timing-independent tasks are cached.
  • Fig. 5 is a schematic diagram of another example of a UI thread processing process provided by an embodiment of the present application.
  • The shaded parts are the cycles in which the system has processing flows.
  • Each such cycle includes the aforementioned UI thread, rendering thread, and so on, and the UI thread is in the running state; the blank parts show the IDLE state in which the system has no processing flow, that is, the UI thread is in the idle state.
  • Using the IDLE state of the UI thread, the time-consuming function FunB that would run after cycle 8 is moved to the idle state of the blank cycles 4 to 7, where the function FunB is preprocessed in turn and the running result is cached. It should be understood that the system can determine over how many cycles the function FunB is run and processed according to the total duration of the IDLE state, which is not limited in this application.
  • In cycle 8, when there is a timed operation, or a user operation that matches the prediction, and the VSYNC signal triggers the UI thread to resume work, the result produced in the IDLE state is fetched directly when the time-consuming function would be executed, thereby reducing the running time of the long function.
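  • On Android, one standard way to run work while the UI thread is in the IDLE state is MessageQueue.IdleHandler, which fires when the main thread's message queue goes idle. The sketch below is an illustrative assumption of how the long function FunB could be preprocessed in slices across idle periods and its result cached; it is not the patent's implementation.

```java
import android.os.Looper;
import android.os.MessageQueue;
import java.util.concurrent.atomic.AtomicReference;

// Sketch: preprocessing a long timing-independent function "FunB" during the
// IDLE state of the UI thread, split into slices so no single slice delays a
// newly arriving frame, and caching the final result.
public final class IdlePreprocessor {

    private static final int SLICE_COUNT = 4;          // e.g. cycles 4 to 7
    private final AtomicReference<Object> cachedFunBResult = new AtomicReference<>();
    private int nextSlice = 0;

    // Must be called on the main (UI) thread.
    public void install() {
        Looper.myQueue().addIdleHandler(new MessageQueue.IdleHandler() {
            @Override
            public boolean queueIdle() {
                runFunBSlice(nextSlice++);
                if (nextSlice >= SLICE_COUNT) {
                    cachedFunBResult.set(finishFunB());
                    return false;                        // done: remove this idle handler
                }
                return true;                             // keep running on later idle periods
            }
        });
    }

    // When the UI thread resumes work (e.g. cycle 8), the cached result is used
    // directly instead of running FunB again; null means FunB must run normally.
    public Object takeFunBResult() {
        return cachedFunBResult.getAndSet(null);
    }

    private void runFunBSlice(int slice) { /* part of the long FunB computation */ }
    private Object finishFunB() { return new Object(); }
}
```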
  • Whether the preprocessing of timing-independent tasks is relocated to the single-frame idle time of the UI thread or to its IDLE state can be determined by the system. For example, the system may first determine whether the running time of a timing-independent task is less than or equal to the idle time of the main thread. When the running time of the timing-independent task is less than or equal to the idle time of the UI thread, the timing-independent tasks of the next one or more cycles are moved to the idle time of the UI thread of the current cycle for processing; when the running time of the timing-independent task is greater than the idle time of the UI thread, the timing-independent tasks of the next one or more cycles are moved to the IDLE state of the current UI thread for processing. Alternatively, when the system determines that the UI thread is currently in the IDLE state, the timing-independent tasks of the next one or more cycles can be moved to the IDLE state of the current UI thread for processing, which is not limited in this application.
  • the total time consumption of the UI thread and the rendering thread can be shortened, and the fluency of the system can be improved.
  • the process can avoid frame loss without affecting the interface display of the electronic device.
  • The description above, with reference to Figure 4 and Figure 5, introduced two processes for shortening the total time consumed by the UI thread and the rendering thread in order to improve the fluency of the system.
  • In the implementation process, the system monitors the state of the UI thread, identifies the timing-independent tasks, and estimates the running time of the timing-independent tasks; it then determines the processing principle for the timing-independent tasks, for example, whether the processing flow of FIG. 4 or that of FIG. 5 is used to process them.
  • The following description takes a timing-independent function as an example; the processing of a timing-independent API interface is the same and is not repeated here:
  • the system monitors the state of the UI thread. Specifically, two pieces of information are mainly monitored, the idle time of a single frame of the UI thread or whether the UI thread is in the IDLE state.
  • the single frame idle time of the UI thread can be obtained by obtaining the running time of the UI thread.
  • the system also monitors whether the UI thread is in the IDLE state, and specifically can determine whether the UI thread is in the IDLE state by monitoring the thread state. Specifically, this application does not limit the manner in which the system obtains and determines the status of the UI thread.
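  • As one possible (API level 24 and above) way to obtain the per-frame running time, from which the single-frame idle time can be derived, the sketch below uses the Android FrameMetrics listener; the class name, field names, and the 60 Hz budget are assumptions for illustration, and the patent does not prescribe this mechanism.

```java
import android.app.Activity;
import android.os.Handler;
import android.os.Looper;
import android.view.FrameMetrics;
import android.view.Window;

// Sketch: observe how long each frame of the rendering-layer stage takes and
// estimate the single-frame idle time as (cycle length - frame time).
public final class FrameTimeMonitor {

    private static final long CYCLE_NS = 16_670_000L;   // one 60 Hz cycle
    private volatile long lastIdleNs;                    // latest idle-time estimate

    public void attach(Activity activity) {
        Window window = activity.getWindow();
        window.addOnFrameMetricsAvailableListener(
                (Window win, FrameMetrics metrics, int dropCount) -> {
                    long totalNs = metrics.getMetric(FrameMetrics.TOTAL_DURATION);
                    lastIdleNs = Math.max(0, CYCLE_NS - totalNs);
                },
                new Handler(Looper.getMainLooper()));
    }

    public long singleFrameIdleTimeNs() {
        return lastIdleNs;
    }
}
```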
  • In step 309 in Figure 3, because the current frame needs to preprocess the timing-independent tasks of the next frame, the time needed to run those timing-independent tasks must be estimated in order to judge whether the single-frame idle time t2 of the UI thread is sufficient to process them.
  • For the function FunA, the running time estimated by the system is t3.
  • When t3 is less than or equal to the single-frame idle time t2, the data processing method introduced in Figure 4 can be used, and the preprocessing flow is performed during the single-frame idle time of the UI thread; or, for the long time-consuming function FunB, the running time estimated by the system is t4.
  • In that case, the data processing method introduced in Figure 5 can be used, and the IDLE state is used for the preprocessing flow.
  • t3 and t4 can be preset in the system or obtained from other devices or networks; if the system cannot obtain them, they can also be determined by referring to the time consumption of similar functions according to function parameters such as type and computational complexity.
  • the system can estimate the time for processing tasks that are not timing dependent.
  • For example, the running times of the timing-independent tasks over a period of time can be collected, and the average running time and the actual fluctuation of the timing-independent tasks can be recorded.
  • The 3σ principle can be adopted, so that the estimated running time of the timing-independent tasks covers 99.98% of cases, thereby improving the accuracy of the estimation.
  • The above monitoring and estimation can occur sequentially or simultaneously; the status of the UI thread can be monitored continuously, while the estimated time for processing the timing-independent tasks only needs to be obtained once.
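  • A minimal sketch of this estimation step (class and method names are assumed) collects recent running-time samples of a timing-independent task and returns the mean plus three standard deviations as a conservative estimate:

```java
// Sketch: estimate the running time of a timing-independent task from recent
// samples using the 3-sigma rule (mean + 3 * standard deviation).
public final class RunTimeEstimator {

    private final long[] samplesNs;   // ring buffer of recent running times (ns)
    private int count;

    public RunTimeEstimator(int capacity) {
        this.samplesNs = new long[capacity];
    }

    public void record(long runTimeNs) {
        samplesNs[count % samplesNs.length] = runTimeNs;
        count++;
    }

    // Returns mean + 3 * sigma over the recorded samples; Long.MAX_VALUE means
    // "no data yet", so the caller should not schedule any preprocessing.
    public long estimateNs() {
        int n = Math.min(count, samplesNs.length);
        if (n == 0) {
            return Long.MAX_VALUE;
        }
        double mean = 0;
        for (int i = 0; i < n; i++) {
            mean += samplesNs[i];
        }
        mean /= n;
        double variance = 0;
        for (int i = 0; i < n; i++) {
            double d = samplesNs[i] - mean;
            variance += d * d;
        }
        variance /= n;
        return (long) (mean + 3 * Math.sqrt(variance));
    }
}
```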
  • the system determines the single frame idle time t2 of the current frame through the status monitoring of the UI thread, or determines whether the UI thread is in the IDLE state; in addition, the system also obtains the running time of the timing-independent tasks, and determines the processing of the timing-independent tasks based on the above information Strategy.
  • When the running time of the timing-independent task is less than or equal to the single-frame idle time t2, the single-frame idle time t2 is used to preprocess the timing-independent task, that is, according to the processing flow of this solution in Figure 4, the timing-independent task of the next cycle is moved to the current frame for processing, and the processing result is cached.
  • If the condition is not met, the timing-independent tasks of the next cycle are not preprocessed in the current frame.
  • Alternatively, the system determines whether the UI thread of the current frame is in the IDLE state and, if so, uses the IDLE state to preprocess the timing-independent tasks, that is, according to the processing flow of this solution in Figure 5, the timing-independent tasks are moved to the IDLE state for preprocessing, and the processing results are cached.
  • If the conditions for processing the timing-independent tasks are met, processing of the timing-independent tasks is started.
  • Here, starting processing means that, after the current UI thread has finished running, the timing-independent tasks are processed directly, in parallel with the rendering thread, or that processing of the timing-independent tasks is started after the UI thread enters the idle state.
  • After the UI thread finishes running, the system can enter the rendering thread; alternatively, the timing-independent tasks and the rendering thread are processed at the same time, and the preprocessing results are cached. This application does not limit this.
  • If the UI thread has ample single-frame idle time or enters the IDLE state, it starts the preprocessing flow and caches the preprocessing results until caching is complete or a signal interrupts the preprocessing flow, at which point it stops and a new UI thread run starts. Alternatively, when the system determines from the status of the UI thread that the UI thread is not idle, the preprocessing flow of the UI thread is not run.
  • step 305 to step 311 can be executed in a loop during the next frame running process, and the preprocessed function result is obtained from the cache, until the UI thread finishes running and enters the rendering thread.
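  • Putting steps 309 to 311 together, the following sketch shows one way the strategy could be selected from the estimated running time, the single-frame idle time t2, and the IDLE-state check; the enum and parameter names are illustrative assumptions.

```java
// Sketch of the strategy selection: prefer the single-frame idle time
// (Figure 4 flow), fall back to the IDLE state (Figure 5 flow), otherwise
// keep the original in-frame processing.
public final class PreprocessPlanner {

    public enum Plan { USE_SINGLE_FRAME_IDLE_TIME, USE_IDLE_STATE, DO_NOT_PREPROCESS }

    public Plan choose(long estimatedRunTimeNs, long singleFrameIdleTimeNs, boolean uiThreadIdle) {
        if (estimatedRunTimeNs <= singleFrameIdleTimeNs) {
            // Figure 4 flow: move the next cycle's timing-independent task into
            // the single-frame idle time of the current cycle and cache the result.
            return Plan.USE_SINGLE_FRAME_IDLE_TIME;
        }
        if (uiThreadIdle) {
            // Figure 5 flow: the task is too long for one frame's idle time, so
            // preprocess it while the UI thread is in the IDLE state.
            return Plan.USE_IDLE_STATE;
        }
        // Neither condition holds: keep the original in-frame processing.
        return Plan.DO_NOT_PREPROCESS;
    }
}
```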
  • In the solution of this application, the timing-independent tasks in the UI thread, which may be timing-independent tasks themselves or the Android API interfaces of timing-independent tasks, are identified offline, and the single-frame idle time of the UI thread, or whether the UI thread is in the IDLE state, is identified.
  • When the average running time of a timing-independent task is less than or equal to the single-frame idle time, the single-frame idle time is used to preprocess the timing-independent task: the timing-independent task in the UI thread is "moved" to the single-frame idle time of the UI thread for processing, and the processing result is cached.
  • Alternatively, the IDLE state is used to preprocess the timing-independent tasks: the timing-independent tasks in the UI thread are "moved" to the IDLE-state period of the UI thread for processing, and the processing results are cached.
  • This method can reduce the load on the UI thread in some scenarios and reduce lag, thereby improving the fluency of the system and the user experience. For example, when the total duration of the UI thread plus the rendering thread exceeds 16.67 ms, the existing system stalls.
  • In this solution, the timing-independent task FunC is preprocessed. If the UI thread plus the rendering thread takes 18 ms and the running time of the timing-independent task FunC is 3 ms, preprocessing FunC brings the total down to 18 - 3 = 15 ms, which is within 16.67 ms, so the frame is no longer dropped.
  • FIG. 6 is a schematic diagram of a fluency test result provided by an embodiment of the present application.
  • A user sliding in WeChat is used as the test scenario, and the sliding speed of the test manipulator is assumed to be 600 millimeters per second (mm/s).
  • The abscissa is the frame length in ms; the dotted line marks the duration of one cycle, 16.67 ms, which represents the time allowed for running one frame; the ordinate is the proportion of frames.
  • the running time of the rendering layer phase (UI thread and rendering thread) of the system of 200,000 frames is counted.
  • The curves represent the distribution of frame lengths of the rendering-layer stage over the 200,000 frames. For example, before optimization, the frame length of the rendering-layer stage stays within 16.67 ms for 94.70% of the 200,000 frames; after optimization, it stays within 16.67 ms for 98.72% of the 200,000 frames.
  • That is, the percentage of frames whose running time is kept within 16.67 ms increases by nearly 4 percentage points, which significantly shortens the running time of the system and improves its fluency.
  • FIG. 7 is a schematic diagram of a data processing device 700 provided by an embodiment of the present application. It can be understood that the data processing device 700 may be the aforementioned electronic device, or a chip or component applied to the electronic device, and each module or unit in the device 700 is used to perform each action or processing procedure introduced in the above method 300.
  • The device 700 includes hardware and/or software modules corresponding to the various functions. As shown in FIG. 7, the device 700 may include the following.
  • The UI thread processing unit 710 is configured to run the UI thread; the UI thread includes multiple processing tasks, such as the member function Draw, loading a layout, and the like.
  • the UI thread processing unit 710 may receive the VSYNC signal to trigger the UI thread to start execution.
  • the UI thread processing unit 710 may include a receiving module 712, an interface module 714, and a processing module 717.
  • The receiving module 712 is used to receive the trigger signal and the data to be processed; the interface module 714 is used to obtain the preprocessing results of the timing-independent tasks from the optimization unit; and the processing module 717 is used to process the timing-related tasks in the data to be processed, combine them with the obtained preprocessing results of the timing-independent tasks to determine the UI thread processing result, and send the UI thread processing result to the rendering thread processing unit 720.
  • the rendering thread processing unit 720 receives the DrawFrame command sent by the UI thread processing unit 710, and performs rendering processing on the received data.
  • the rendering thread processing unit 720 contains a task queue (Task Queue) inside, and the DrawFrame command sent from the UI thread will be stored in the Task Queue of the rendering thread processing unit 720, waiting for processing by the rendering thread.
  • the SF synthesizer 730 is used to synthesize the layers of the previous stage according to the result of the graphics buffer of the rendering thread processing unit 720.
  • SurfaceFlinger starts to synthesize the layer. If the GPU rendering task submitted before is not finished, it will wait for the GPU rendering to complete, and then synthesize. The synthesis phase depends on the GPU to complete.
  • the display unit 740 is used to display the data synthesized by the SF synthesizer 730.
  • After the synthesis is complete, the LCD display module displays the result.
  • In this solution, the composite-layer stage and the LCD display stage shown in FIG. 1 are unchanged; that is, the corresponding rendering thread processing unit 720, SF synthesizer 730, and display unit 740 are the same as the existing ones and are not described again here.
  • optimization unit 750 is used to implement the data processing method introduced in FIGS. 3 to 5.
  • The optimization unit 750 may include a UI thread state detection module 752, a processing module 758, a timing-independent task running time acquisition module 754, and a queue buffer module 756.
  • the state detection module 752 is used to collect relevant information of the UI thread.
  • the running time acquisition module 754 of the timing-independent task is used to acquire the running time of the timing-independent task.
  • The processing module 758 is used to determine, according to the UI thread status information from the UI thread state detection module 752 and the obtained running time of the timing-independent tasks, whether to move the timing-independent tasks to the single-frame idle time of the UI thread or to the IDLE state of the UI thread. For example, by analyzing the acquired data, the main-thread time consumption of the foreground application over a certain period, the thread state, and the time consumption of the timing-independent tasks can be calculated and used to determine whether the current frame (for example, the second cycle) can process the timing-independent tasks.
  • the single frame idle time is used to preprocess the timing irrelevant tasks, and the timing irrelevant tasks in the UI thread are "moved" to the UI thread single frame idle time for processing.
  • use the IDLE state to preprocess the timing-independent tasks, and "migrate" the timing-independent tasks in the UI thread to the IDLE state period of the UI thread for processing.
  • the queue cache module 756 is used to cache the processing results of the timing-independent tasks in the UI thread preprocessing process. When the UI thread runs to the timing-independent tasks again, the results are directly obtained from the queue cache module 756.
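  • As an illustrative sketch only (not the structure disclosed for module 756), a minimal keyed cache that the preprocessing flow fills and the UI thread drains could look like this:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a preprocessing-result cache in the spirit of the queue cache
// module 756: the preprocessing flow stores a result per task, and the UI
// thread removes it when it reaches that task again.
public final class PreprocessResultCache {

    private final Map<String, Object> results = new HashMap<>();

    // Called from the preprocessing flow (single-frame idle time or IDLE state),
    // which in this sketch also runs on the UI thread, so no locking is shown.
    public void put(String taskId, Object result) {
        results.put(taskId, result);
    }

    // Called when the UI thread runs the timing-independent task again; returns
    // null if no preprocessed result exists (fall back to normal processing).
    public Object take(String taskId) {
        return results.remove(taskId);
    }

    // Optionally discard pending results, e.g. when a trigger signal interrupts
    // the preprocessing flow and cached data would become stale.
    public void clear() {
        results.clear();
    }
}
```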
  • the embodiment of the present invention also provides an electronic device or electronic terminal, which, in addition to the module shown in FIG. 7, may also include a baseband processor, a transceiver, a display screen, an input/output device, etc., such as an external memory interface, an internal memory, Universal serial bus (USB) interface, charging management module, power management module, battery, antenna, mobile communication module, wireless communication module, audio module, speaker, receiver, microphone, headphone interface, sensor module, buttons, Motor, indicator, camera, display screen, and subscriber identification module (SIM) card interface, etc.
  • the sensor modules can include pressure sensors, gyroscope sensors, air pressure sensors, magnetic sensors, acceleration sensors, distance sensors, proximity light sensors, fingerprint sensors, temperature sensors, touch sensors, ambient light sensors, bone conduction sensors, etc.
  • the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device or the electronic terminal.
  • the electronic device or the electronic terminal may include more or less components than those illustrated or listed above, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • The electronic terminal can be a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or another electronic device; the embodiments of this application do not impose any restrictions on the specific type of the electronic terminal.
  • An embodiment of the present invention also provides an electronic device, which may be a chip or a circuit that includes only the UI thread processing unit 710, the optimization unit 750, and the rendering thread processing unit 720 shown in FIG. 7, which is not limited in this application.
  • the electronic device includes hardware and/or software modules corresponding to each function.
  • this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or computer software-driven hardware depends on the specific application and design constraint conditions of the technical solution. Those skilled in the art can use different methods for each specific application in combination with the embodiments to implement the described functions, but such implementation should not be considered as going beyond the scope of the present application.
  • the electronic device may be divided into functional modules according to the foregoing method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is illustrative, and is only a logical function division, and there may be other division methods in actual implementation.
  • FIG. 8 shows a possible composition diagram of the electronic device 800 involved in the foregoing embodiment.
  • the electronic device 800 may include: a display unit 801, a detection unit 802, and a processing unit 803.
  • the display unit 801, the detection unit 802, and the processing unit 803 can be used to support the electronic device 800 to perform the data processing methods introduced in the above flowcharts 3 to 5, etc., and/or other processes used in the technology described herein .
  • the electronic device provided in this embodiment is used to execute the aforementioned data processing method, and therefore can achieve the same effect as the aforementioned implementation method.
  • the electronic device may include a processing module, a storage module, and a communication module.
  • the processing module can be used to control and manage the actions of the electronic device, for example, can be used to support the electronic device to execute the steps performed by the display unit 801, the detection unit 802, and the processing unit 803.
  • the storage module can be used to support the electronic device to execute and store program code and data.
  • the communication module can be used to support the communication between the electronic device and other devices.
  • the processing module may be a processor or a controller. It can implement or execute various exemplary logical blocks, modules and circuits described in conjunction with the disclosure of this application.
  • the processor can also be a combination of computing functions, for example, a combination of one or more microprocessors, a combination of digital signal processing (DSP) and a microprocessor, and so on.
  • the storage module may be a memory.
  • the communication module may specifically be a radio frequency circuit, a Bluetooth chip, a Wi-Fi chip, and other devices that interact with other electronic devices.
  • the electronic device involved in this embodiment may be a device having the structure shown in FIG. 1.
  • This embodiment also provides a computer storage medium in which computer instructions are stored, and when the computer instructions run on an electronic device, the electronic device executes the above-mentioned related method steps to implement the data processing method in the above-mentioned embodiment .
  • This embodiment also provides a computer program product, which when the computer program product runs on a computer, causes the computer to execute the above-mentioned related steps to implement the data processing method in the above-mentioned embodiment.
  • the embodiments of the present application also provide a device.
  • the device may specifically be a chip, component or module.
  • the device may include a connected processor and a memory; wherein the memory is used to store computer execution instructions.
  • the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the data processing methods in the foregoing method embodiments.
  • the electronic device, computer storage medium, computer program product, or chip provided in this embodiment are all used to execute the corresponding method provided above. Therefore, the beneficial effects that can be achieved can refer to the corresponding method provided above. The beneficial effects of the method will not be repeated here.
  • the disclosed device and method may be implemented in other ways.
  • The device embodiments described above are only illustrative. For example, the division into modules or units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • The units described as separate components may or may not be physically separate, and the components displayed as units may be one physical unit or multiple physical units; that is, they may be located in one place or distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • Based on this understanding, the technical solutions of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions that cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods in the various embodiments of the present application.
  • The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.

Abstract

A data processing method, apparatus, and electronic device. The method identifies, offline, the timing-independent tasks in the UI thread, and identifies the UI thread's single-frame idle time or determines whether the UI thread is in the IDLE state. When the mean run time of a timing-independent task is less than or equal to the single-frame idle time, the task is preprocessed in the single-frame idle time: the timing-independent task is "moved" out of the UI thread and processed in the UI thread's single-frame idle time, and the processing result is cached. Alternatively, the timing-independent task is preprocessed in the IDLE state: it is "moved" out of the UI thread and processed during the UI thread's IDLE-state period, and the processing result is cached. The method can reduce the load on the UI thread in some scenarios and reduce jank, thereby improving system smoothness and user experience.

Description

数据处理的方法、装置及电子设备
本申请要求于2019年07月20日提交中国专利局、申请号为201910657906.3、申请名称为“数据处理的方法、装置及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及电子技术领域,尤其涉及一种数据处理的方法、装置及电子设备。
背景技术
当前主流的智能终端操作系统为安卓(Android)系统,Android系统中应用程序在前台运行时,有各种界面刷新变化的情况。尤其是,用户操作过程中,连续多次滑动时,典型的问题是界面变化忽快忽慢,系统的流畅性较差,此外,有时会有丢帧卡顿的现象,用户的体验不好。其主要原因是后台需要运行多个线程,包括高负载的任务消耗较多功耗,而前台任务无法及时获得所需的中央处理器(central processing unit,CPU)资源,从而增加单帧运行耗时,导致卡顿。
发明内容
本申请提供一种数据处理的方法、装置及电子设备,能够减轻部分场景下的UI线程的负载,缩短UI线程的运行时间,减少卡顿,从而提升系统的流畅性,提升用户体验。
第一方面,提供了一种数据处理的方法,该方法包括:响应于接收到的触发信号,第一周期内的主线程开始处理该第一周期内的一项或多项任务中的时序有关任务;主线程获取该一项或多项任务中的时序无关任务的预处理结果,该预处理结果是在第二周期内处理该时序无关任务得到的结果,该第二周期是该第一周期之前的周期;主线程结合该第一周期内处理的该时序有关任务的结果和获取到的该时序无关任务的预处理结果,确定该主线程处理结果;主线程向渲染线程发送该主线程处理结果。
本申请提供的方案,通过线下识别UI线程中的时序无关任务,其可以是时序无关任务或者时序无关任务的Android API接口等,并识别UI线程的单帧空闲时间,或者UI线程是否处于IDLE状态。当时序无关任务的运行时间均值小于或等于单帧空闲时间,则利用单帧空闲时间预处理时序无关任务,把UI线程中的时序无关任务“搬迁”到UI线程单帧空闲时间进行处理,并缓存处理结果。或者,利用IDLE状态预处理时序无关任务,将UI线程中的时序无关任务“搬迁”到UI线程的IDLE状态时段做处理,并缓存处理结果。该方法可以减轻部分场景下的UI线程的负载的任务,减少卡顿,从而提升系统的流畅性,提升用户体验。
例如,对于UI线程+渲染线程的总时长超过16.67ms的情况,现有的系统会发生卡顿,采用本发明实施例,比如,将时序无关任务FunC进行预处理,对于UI线程+渲染线程 =18ms,如果UI线程运行为10ms,其中的时序无关任务FunC的运行时间为3ms,渲染线程运行为8ms,那么我们可以利用UI线程运行10ms后的时间,提前处理FunC,那么UI线程+渲染线程=15ms,这样就不会出现丢帧情况,用户体验到的显示是流畅且无卡顿的。
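The arithmetic in this example can be written out directly. The sketch below is a minimal, self-contained illustration using the example's own numbers (a 10 ms UI pass, 3 ms for the timing-independent task FunC, an 8 ms render pass); the class and method names are hypothetical and not part of the described method:

```java
// Frame-budget arithmetic from the example above: moving a 3 ms
// timing-independent task (FunC) out of a 10 ms UI pass turns an
// 18 ms frame into a 15 ms frame, which fits the 60 Hz budget.
public class FrameBudgetExample {
    static final double VSYNC_PERIOD_MS = 1000.0 / 60.0; // ~16.67 ms at 60 Hz

    static boolean fitsBudget(double uiMs, double renderMs) {
        return uiMs + renderMs <= VSYNC_PERIOD_MS;
    }

    public static void main(String[] args) {
        double uiMs = 10, renderMs = 8, funCMs = 3;
        System.out.println("without preprocessing: " + fitsBudget(uiMs, renderMs));           // false (18 ms)
        System.out.println("with FunC preprocessed: " + fitsBudget(uiMs - funCMs, renderMs)); // true (15 ms)
    }
}
```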
结合第一方面,在第一方面的某些实现方式中,该第一周期内的主线程开始处理该时序有关任务之前,该方法还包括:在该第二周期内处理该时序无关任务,得到该时序无关任务的预处理结果;缓存该时序无关任务的预处理结果。
结合第一方面和上述实现方式,在第一方面的某些实现方式中,当该第二周期内的该主线程为运行态时,该第二周期的时长包括主线程运行时长和主线程空闲时长,该方法还包括:确定该时序无关任务的运行时长小于或等于该主线程空闲时长。
在本方案中,通过预处理时序无关任务,减少当前帧的UI运行时间,降低主线程运行的负载,从而实现减少UI线程和渲染线程的总耗时的目的。具体地,将第一周期的时序无关任务的处理过程移动到第二周期的UI线程的单帧空闲时间进行预处理,并将该时序无关任务的处理结果进行缓存。当第一周期的UI线程处理时,直接获取时序无关任务的缓存结果即可,这样就减少了当前的第一周期的UI线程运行时序无关任务的时间,从而减少当前周期的UI线程和渲染线程的总耗时。
结合第一方面和上述实现方式,在第一方面的某些实现方式中,该第二周期内的该主线程为空闲态。
具体地,若一个周期内有时序无关任务耗时特别长,无法利用UI线程的单帧空闲时间,因为可能该类时序无关任务的运行时间已经大于一个周期的时长,该类函数处理所需的时间长,无法利用UI线程的单帧空闲时间进行预处理流程,则可以利用UI线程的空闲状态进行处理,并缓存该UI线程的时序无关任务的处理结果。对于执行该耗时长的函数时,直接取IDLE状态下运行的结果,从而减少长函数的运行时间。
结合第一方面和上述实现方式,在第一方面的某些实现方式中,该方法还包括:确定第三周期内的主线程空闲时长,其中,该第三周期内的该主线程为运行态,该第三周期的时长包括主线程运行时长和主线程空闲时长;当该时序无关任务的运行时长大于该第三周期内的主线程空闲时长时,在该第二周期内处理该时序无关任务,得到该时序无关任务的预处理结果。
应理解,将处理时序无关任务的预处理流程搬迁至UI线程空闲时间或者IDLE状态,可以由系统进行判断。例如,系统可以优先判断是否时序无关任务的运行时长小于或等于所述主线程空闲时长,当满足时序无关任务的运行时长小于或等于所述UI线程空闲时长时,将后一个或多个周期的时序无关任务搬迁到当前周期的UI线程空闲时长进行处理;当不满足时序无关任务的运行时长大于所述UI线程空闲时长时,将后一个或多个周期的时序无关任务搬迁到当前UI线程的IDLE状态进行处理。或者,当系统判断当前有UI线程的IDLE状态时,可以将后一个或多个周期的时序无关任务搬迁到当前UI线程的IDLE状态进行处理,本申请对此不做限定。
结合第一方面和上述实现方式,在第一方面的某些实现方式中,响应于接收到的触发信号,第一周期内的主线程开始处理所述第一周期内的一项或多项任务中的时序有关任务,包括:响应于接收到的触发信号,停止所述第一周期内主线程的预处理流程,开始处 理第一周期内的一项或多项任务中的时序有关任务。
第二方面,提供了一种数据处理的装置,该装置包括:处理单元,响应于接收到的触发信号,在第一周期内开始处理该第一周期内的一项或多项任务中的时序有关任务;获取单元,用于获取该一项或多项任务中的时序无关任务的预处理结果,该预处理结果是在第二周期内处理该时序无关任务得到的结果,该第二周期是该第一周期之前的周期;该处理单元,还用于结合该第一周期内处理的该时序有关任务的结果和获取到的该时序无关任务的预处理结果,确定该主线程处理结果;发送单元,用于向渲染线程发送该主线程处理结果。
结合第二方面,在第二方面的某些实现方式中,该处理单元开始处理该时序有关任务之前,该处理单元还用于:在该第二周期内处理该时序无关任务,得到该时序无关任务的预处理结果;该装置还包括:缓存单元,用于缓存该时序无关任务的预处理结果。
结合第二方面和上述实现方式,在第二方面的某些实现方式中,当该第二周期内的主线程为运行态时,该第二周期的时长包括主线程运行时长和主线程空闲时长,该处理单元还用于:确定该时序无关任务的运行时长小于或等于该主线程空闲时长。
结合第二方面和上述实现方式,在第二方面的某些实现方式中,该第二周期内的主线程为空闲态。
结合第二方面和上述实现方式,在第二方面的某些实现方式中,该处理单元还用于:确定第三周期内的主线程空闲时长,该第三周期内的该主线程为运行态,该第三周期的时长包括主线程运行时长和主线程空闲时长;当该时序无关任务的运行时长大于该第三周期内的主线程空闲时长时,在该第二周期内处理该时序无关任务,得到该时序无关任务的预处理结果。
结合第二方面和上述实现方式,在第二方面的某些实现方式中,该处理单元具体用于:响应于接收到的触发信号,停止在所述第一周期内的预处理流程,开始处理所述第一周期内的一项或多项任务中的时序有关任务。
第三方面,本申请提供了一种装置,该装置包含在电子装置中,该装置具有实现上述方面及上述方面的可能实现方式中电子装置行为的功能。功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。硬件或软件包括一个或多个与上述功能相对应的模块或单元。例如,显示模块或单元、检测模块或单元、处理模块或单元等。
第四方面,本申请提供了一种电子装置,包括:触摸显示屏,其中,触摸显示屏包括触敏表面和显示器;摄像头;一个或多个处理器;存储器;多个应用程序;以及一个或多个计算机程序。其中,一个或多个计算机程序被存储在存储器中,一个或多个计算机程序包括指令。当指令被电子装置执行时,使得电子装置执行上述任一方面任一项可能的实现中的数据处理的方法。
第五方面,本申请提供了一种电子装置,包括一个或多个处理器和一个或多个存储器。该一个或多个存储器与一个或多个处理器耦合,一个或多个存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当一个或多个处理器执行计算机指令时,使得电子装置执行上述任一方面任一项可能的实现中的数据处理的方法。
第六方面,本申请提供了一种计算机存储介质,包括计算机指令,当计算机指令在电子装置上运行时,使得电子装置执行上述任一方面任一项可能的数据处理的方法。
第七方面,本申请提供了一种计算机程序产品,当计算机程序产品在电子装置上运行时,使得电子装置执行上述任一方面任一项可能的数据处理的方法。
第八方面,提供了一种电子装置,其特征在于,该电子装置包括执行上述任一方面任一项可能的数据处理的方法的装置。
附图说明
图1是本申请提供的一例图形用户界面的显示过程示意图。
图2是本申请提供的又一例图形用户界面的显示流程图。
图3是本申请实施例提供的一例数据处理的方法流程图。
图4是本申请实施例提供的一例UI线程的处理过程示意图。
图5是本申请实施例提供的又一例UI线程的处理过程示意图。
图6是本申请实施例提供的一例流畅性测试结果示意图。
图7是本申请实施例提供的一种数据处理的装置的示意图。
图8是本申请实施例提供的一例电子装置的组成示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。
在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,在本申请实施例的描述中,“多个”是指两个或多于两个。
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。例如本申请中的“第一周期”、“第二周期”和“第三周期”等。此外,在本实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上,例如多项任务可以是指两个或两个以上的任务。
本申请实施例提供的方法可以应用于电子装置上,在一些实施例中,电子装置可以是还包含其它功能诸如便携式电子装置,诸如手机、平板电脑、具备无线通讯功能的可穿戴电子装置(如智能手表)等。便携式电子装置的示例性实施例包括但不限于搭载
或者其它操作系统的便携式电子装置。上述便携式电子装置也可以是其它便携式电子装置,诸如膝上型计算机(laptop)等。还应当理解的是,在其他一些实施例中,上述电子装置也可以不是便携式电子装置,而是台式计算机。在一些实施例中,电子装置可以是智能家电,诸如智能音箱、智能家居设备等等。本申请实施例对电子装置的具体类型不作任何限制。
目前的电子装置上可以安装多款第三方应用程序(application,App),例如手机上的支付宝、相册、微信、卡包、设置、相机等多款应用。根据用户的操作,每一种应用都可以在电子装置上呈现不同的图形用户界面(graphical user interface,GUI)。
图1是本申请提供的一例图形用户界面的显示过程示意图,图2是本申请提供的另一例图形用户界面的显示流程图。以基于安卓(Android)系统的电子装置为例。当某个应 用程序启动时,电子装置会创建一个安卓用户界面(android user interface,UI)的线程,结合图1和图2所示,系统显示图形用户界面的过程可以分为三个阶段。
阶段一:渲染图层阶段
渲染图层阶段可以将UI绘制到一个图形缓冲区(Queue buffer),具体地,渲染图层阶段可以包括主线程(main thread)和渲染线程(renderer thread),其中,主线程也叫UI线程(UI thread),主线程和渲染线程为依赖线程。应理解,系统不会为每个组件单独创建线程,在同一个进程里的UI组件都会在UI线程里实例化,系统对每一个组件的调用都从UI线程分发出去。
如图1所示,UI线程可以包括多项处理任务,例如成员函数Measure、Draw、Layout等。当UI线程的任务处理结束时,向渲染线程发出一个DrawFrame命令,渲染线程内部包含一个任务队列(Task Queue),从UI线程发送过来的DrawFrame命令就会保存在渲染线程的Task Queue,等待渲染线程处理,此处对UI线程和渲染线程中所包括的函数类型和任务等不再赘述。
应理解,UI线程的任务处理结果需要发送给渲染线程。可选地,UI线程向渲染线程发送的DrawFrame命令可以是根据UI线程的任务处理结果生成的,换言之,UI线程的任务处理结果可以通过DrawFrame命令发送给渲染线程,本申请对此不做限定。
如图2所示,系统可以周期性产生触发信号,例如图2中示出的VSYNC信号,VSYNC信号为每个阶段的触发信号,换言之,VSYNC信号可以等效的理解为定时器的功能,两个相邻的VSYNC信号之间的时长为显示图形用户界面过程的一帧帧长。如图2所示,当第一个VSYNC信号触发系统开始进入第一个周期的渲染图层阶段,依次执行UI线程和渲染(renderer)线程。
示例性的,本申请将以60Hz的显示系统为例,对于60Hz的显示系统,VSYNC信号周期为16.67ms,图2中示出了三个连续周期的显示流程。一般情况下,当每个阶段的响应时间在16.67ms以内,则认为系统流畅,没有丢帧或者卡顿。
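For reference, the per-frame VSYNC timing described above can be observed on Android through Choreographer, which delivers each frame's VSYNC timestamp. The sketch below is only an illustrative monitor, assuming a fixed 60 Hz display; the class name and log tag are made up for this example:

```java
import android.util.Log;
import android.view.Choreographer;

// Logs frames whose VSYNC-to-VSYNC interval exceeds the 60 Hz budget
// of ~16.67 ms.
public class VsyncMonitor implements Choreographer.FrameCallback {
    private static final long BUDGET_NS = 1_000_000_000L / 60;
    private long lastFrameTimeNanos = -1;

    public void start() {
        Choreographer.getInstance().postFrameCallback(this);
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        if (lastFrameTimeNanos > 0) {
            long intervalNs = frameTimeNanos - lastFrameTimeNanos;
            if (intervalNs > BUDGET_NS) {
                Log.w("VsyncMonitor", "frame interval " + intervalNs / 1_000_000 + " ms");
            }
        }
        lastFrameTimeNanos = frameTimeNanos;
        Choreographer.getInstance().postFrameCallback(this); // keep observing
    }
}
```

Note that start() has to be called from a thread that has a Looper, typically the main thread.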
阶段二:合成图层阶段
合成图层阶段主要在合成器(SurfaceFlinger,SF)中执行,根据渲染图层阶段的图形缓冲区的结果,合成前一阶段的图层。
例如,如图2所示,渲染图层阶段的UI线程和渲染线程运行结束后,将运行结果绘制到图形缓冲区(Queue buffer),第二个VSYNC信号触发系统进入第二个周期,该周期的合成图层阶段直接根据前一阶段的图形缓冲区的结果合成前一阶段的图层。
阶段三:显示阶段
液晶显示屏(liquid crystal display,LCD)显示合成后的数据。例如,如图2所示,当第三个VSYNC信号触发系统进入第三个周期,第三个周期的LCD显示阶段将第二个周期合成的图层进行显示。
应理解,以上介绍的三个不同的阶段,在同一个周期中,三个阶段为并行运行状态,换言之,每一个周期中都包括渲染图层阶段、合成图层阶段和显示阶段,且三个阶段是并行运行的。其中,合成图层阶段合成的是前一个周期的图形缓冲区的结果,显示阶段显示的是前一个周期的合成图层阶段合成的数据,因此,图2中示出的三个周期的三个不同处理阶段针对的是同一个应用程序启动的任务处理过程。
随着智能终端的发展,用户对人机交互的体验、系统流畅性的体验的要求越来越高,外界也越来越关注和重视智能终端等电子装置使用过程中的系统流畅性,各种媒体测评、智能终端设备的发布,都在宣传其系统的流畅性来,从而引导用户的消费体验。
系统的流畅性,指的是每帧的处理时间不超过系统规定的频率时间,结合以上列举的60Hz的显示系统,在显示图形用户界面的过程,系统的流畅性可以指的是单帧的处理时间不超过16.67ms。对于包括UI线程和渲染线程的渲染图层阶段,单帧的处理时间如果超过指定时间,例如16.67ms,系统会发生丢帧或者卡顿现象,影响了系统的流畅性,用户体验差。
应理解,单帧的处理时间超过16.67ms导致系统的流畅性差的重要原因之一是系统在特定时间运行的系统负载或者任务过重,导致其无法在规定的16.67ms的时间内处理完任务,从而导致卡顿。例如,对于包括UI线程和渲染线程的渲染图层阶段,如果UI线程运行的任务过大,就会影响UI线程和渲染线程的耗时,从而导致卡顿。
为了提升了系统的流畅性,电子装置厂商更多的是通过提升中央处理单元(central processing unit,CPU)或图形处理单元(graphic processing units,GPU)的调度优先级来实现。具体地,在用户启动某个应用程序时,对CPU来说,需要运行多个线程,UI线程是多个线程中的一个线程。多个线程的运行会占用CPU资源,CPU资源可以包括线程运行的CPU大核、小核,以及所在核的CPU频率等。其中,CPU的大核处理能力强,功耗大;小核处理能力弱,功耗小。为了提升系统的流畅性,对于不同的UI线程设置不同的调度优先级,例如将UI线程(主线程)运行在CPU的大核上,在一定程度上可以缓解卡顿,但是这样就会增加CPU的功耗。
另一种可能的实现方式中,通过提升频点来提升单帧运行效率,即提升CPU的处理速度。例如,在一定范围内,将CPU的运行速度由300M提升到800M。但是,该过程中,并不是针对所有的用户操作都需要提升频点,对于轻负载的线程,提升频点反而会造成CPU的额外功耗。
此外,在另一种可能的实现方式中,还通过增加异步线程的方式提高系统的处理速度,即通过增加额外的线程(例如V线程等),使得系统可以并行的处理多个线程,但是该方法在线程检测过程中无法运行,甚至导致系统崩溃。例如,对于Android系统,规定有些任务只能在主线程进行处理,如果在线程检测过程中检查到这些任务不在主线程处理会导致应用崩溃。
综上所述,无论是通过提升CPU或GPU频点,还是调整调度的优先级的方式来降低UI线程和渲染线程都会带来额外功耗代价。并且,以上方案必须要求UI线程和渲染线程在单帧的处理时间内(如单帧的处理时间为16.67ms)完成才能不丢帧,如果超过单帧的处理时间都会导致丢帧问题,因此,需要一种提升系统的流畅性的方法,减少卡顿,提升用户体验。
应理解,对于图1和图2中介绍的UI线程,可以包括多项处理任务和函数,例如负载函数等,负载函数过多、任务过重,都会导致系统无法在规定时间内完成处理任务,从而导致卡顿。为了提高系统的流程性,本申请将提供一种数据处理的方法,通过降低UI线程的处理时间来提升系统的流畅性,避免卡顿,提高用户体验。
根据前述的相关介绍,在UI线程包括多项处理任务,其中任务包括时序无关任务和 时序有关任务。具体地,时序无关任务可以理解为不依赖在前的任务且运行结果不会影响在后任务处理的任务。例如,UI线程中包括多个按照顺序执行的任务A、任务B、任务C、任务D和任务E,其中,任务C定义为时序无关任务,任务A、任务B、任务D和任务E定义为时序有关任务。那么,任务C可以不依赖任务A、任务B运行完;而其他的任务,例如任务B,必须依赖任务A的运行完,才能运行任务B,任务D必须在任务C运行完之后才能运行。对于任务C,作为时序无关任务,也是当前UI线程必需的处理任务。一个周期内的主线程中可以包含多个时序无关任务,以下以单个时序无关任务为例介绍本申请提供的数据处理的方法,多个时序无关任务的处理原则与单个相同,本申请不再赘述。
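To make the A–E example concrete, the sketch below models one frame's task list in Java, marking task C as timing-independent so that a cached, preprocessed result can stand in for it, while A, B, D, and E keep their in-frame order. The classes and the map-based representation are illustrative assumptions, not part of the patent's description:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

// A frame's tasks in their in-frame order. Task C is marked
// timing-independent: its result does not depend on A or B, so it can be
// produced in an earlier frame and only its cached result consumed here.
public class FrameTasks {
    static final Map<String, Boolean> TIMING_INDEPENDENT = new LinkedHashMap<>();
    static {
        TIMING_INDEPENDENT.put("A", false);
        TIMING_INDEPENDENT.put("B", false);
        TIMING_INDEPENDENT.put("C", true);   // safe to precompute and cache
        TIMING_INDEPENDENT.put("D", false);  // still runs only after C's result is available
        TIMING_INDEPENDENT.put("E", false);
    }

    // `tasks` must be passed in frame order (e.g. a LinkedHashMap of A..E).
    static void runFrame(Map<String, Supplier<Object>> tasks, Map<String, Object> cache) {
        for (Map.Entry<String, Supplier<Object>> e : tasks.entrySet()) {
            boolean independent = TIMING_INDEPENDENT.getOrDefault(e.getKey(), false);
            Object result = independent && cache.containsKey(e.getKey())
                    ? cache.get(e.getKey())   // reuse the preprocessed result
                    : e.getValue().get();     // otherwise compute in-frame as before
            // ... feed `result` into the rest of the UI pass ...
        }
    }
}
```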
图3是本申请实施例提供的一例数据处理的方法流程图。以基于60Hz的安卓系统的电子装置为例,如图3所示,该方法可以包括以下内容:
301,外部事件触发UI线程工作。应理解,这里外部事件可以是用户的操作事件,也可以是提前设定的事件,本申请对此不做限定。
302,发送信号,停止UI线程的预处理流程。
303,UI线程启动,即UI线程开始运行。
304,UI线程按照原系统流程处理非预处理任务。
应理解,UI线程可以包括预处理流程和非预处理流程。其中,预处理流程可以理解为时序无关任务的处理过程,非预处理流程可以理解为当前UI线程的必需的处理时序有关任务的流程。例如,如前述UI线程中包括任务A、任务B、任务C、任务D和任务E,其中,将任务C的处理过程称为预处理流程,任务A、任务B、任务D和任务E的处理过程称为非预处理流程。
具体地,外部事件可以触发系统发送VSYNC信号,VSYNC信号触发一个周期内的UI线程开始进行任务处理,称为第一周期内的UI线程,即本申请中所说第一周期内的UI线程可以理解为“当前运行的UI线程”。当接收到一个VSYNC信号,如果正在运行UI线程预处理流程,则停止当前UI线程的预处理流程,执行步骤303,UI线程开始工作;如果当前没有运行预处理流程,当接收到一个VSYNC信号,则直接进入步骤303,直接开始运行新的UI线程,UI线程处理非预处理任务。
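Steps 302–304 can be sketched on the Android main thread as follows: the preprocessing work for a later frame is posted to the main-thread Handler, and when the next trigger signal arrives it is removed (or flagged as cancelled) before the frame's timing-dependent tasks run. The class, field, and method names are assumptions of this sketch:

```java
import android.os.Handler;
import android.os.Looper;

// Steps 302-304: on the trigger signal, cancel any pending precompute and
// run the frame's timing-dependent tasks first. All calls are assumed to
// happen on the main thread.
public class FrameDriver {
    private final Handler main = new Handler(Looper.getMainLooper());
    private Runnable pendingPrecompute;              // preprocessing for a later frame
    private volatile boolean precomputeCancelled;

    void onTrigger(Runnable timingDependentWork) {   // e.g. called from a VSYNC callback
        if (pendingPrecompute != null) {
            precomputeCancelled = true;              // step 302: stop the preprocessing flow
            main.removeCallbacks(pendingPrecompute);
            pendingPrecompute = null;
        }
        timingDependentWork.run();                   // steps 303-304: normal UI-thread work
    }

    void schedulePrecompute(Runnable precompute) {
        precomputeCancelled = false;
        pendingPrecompute = precompute;
        main.post(precompute);                       // runs only if the frame leaves slack
    }

    boolean isCancelled() { return precomputeCancelled; }
}
```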
在一种可能的实现方式中,第一周期内的UI线程运行时,可以获取当前的时序无关任务的预处理结果,该预处理结果是在第二周期内处理所述时序无关任务得到的结果,所述第二周期是所述第一周期之前的周期。
示例性的,任务C可以是当前第一周期的时序无关任务,且任务C是在第二周期内处理并缓存处理结果,其中第二周期是第一周期之前的周期。换言之,任务C经过在当前的第一周期之前的第二周期进行预处理,并对任务C的预处理结果进行缓存。
在另一种可能的实现方式中,当渲染图层阶段处理完UI线程的时间在规定的单帧耗时的指定时间(例如16.67ms)内且单帧空闲时间满足继续处理完当前的第一周期之后的一个或多个周期内的时序无关任务时,系统可以在当前的第一周期内处理第一周期之后的一个或多个周期内的时序无关任务,并将该时序无关任务的处理结果进行缓存。其中,第一周期以及第一周期之后的一个或多个周期可以为连续的VSYNC信号形成的多个周期,本申请对第一周期内预处理的时序无关任务的数量不做限定。
305,UI线程处理过程中,从缓存中获取预处理结果。
示例性的,当第一周期内的UI线程中包括多个任务A、任务B、任务C、任务D和任务E,任务C的处理过程为预处理流程,当前UI线程的任务A、任务B、任务D和任务E的处理过程为非预处理流程。任务C的处理结果已经经过上一个周期(例如第二周期)的处理进行缓存,当运行当前第一周期的UI线程时,例如运行完任务A和任务B之后,系统可以直接从缓存中获取预处理的任务C的处理结果;或者,运行完任务A、任务B、任务D和任务E之后,系统可以直接从缓存中获取预处理的任务C的处理结果,本申请对此不做限定。
306,若从缓存中获取成功,则处理非预处理流程,UI线程任务结束。
具体地,运行当前周期的UI线程的过程中,系统如果运行完任务A和任务B之后,直接从缓存中获取预处理的任务C的处理结果,获取完任务C的处理结果后继续运行任务D和任务E,直到UI线程任务结束。
或者,运行当前周期的UI线程的过程中,系统如果运行完任务A、任务B、任务D和任务E之后,直接从缓存中获取预处理的任务C的处理结果,获取完任务C的处理结果后,UI线程任务结束。
307,若从缓存中获取失败,则处理原来的预处理任务。
应理解,系统可能前一个周期(第二周期)的UI线程处理完之后,在该周期内的剩余时间或者空闲时间不足以处理下一个周期(当前的第一周期)的时序无关任务(例如任务C),则不进行时序无关任务的预处理流程,系统在当前的第一周期内,无法获取当前的时序无关任务C的处理结果,此时,系统可以继续处理原来的任务C,直到UI线程运行结束。在本申请中,将一个周期内处理完UI线程的剩余时间成为“单帧空闲时间”或者“单帧空余时长”。
至此步骤306,UI线程运行结束,即可进入渲染线程。
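Steps 305–307 amount to a cache lookup with an in-frame fallback: if the preprocessed result of the timing-independent task is present, it is consumed; otherwise the task is simply computed as in the original flow. A minimal sketch with hypothetical names follows; treating each cached result as consumable once per frame is a design choice of the sketch, not something the text requires:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Steps 305-307: the UI pass asks the cache for the task's preprocessed
// result; on a miss it simply computes the task in-frame as before.
public class PreprocessCache {
    private final Map<String, Object> results = new ConcurrentHashMap<>();

    void put(String taskId, Object result) {       // filled by the preprocessing flow
        results.put(taskId, result);
    }

    Object getOrCompute(String taskId, Supplier<Object> originalTask) {
        Object cached = results.remove(taskId);    // a result is consumed once per frame
        return cached != null ? cached : originalTask.get();
    }
}
```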
应理解,要实现从缓存中获取预处理的结果,需要提前处理时序无关任务,过程如下:
308,系统根据UI线程状态监测,判断UI线程是否为空闲状态。
309,当UI线程为空闲状态时,进行UI线程的预处理流程。
310,缓存预处理的结果。
具体地,UI线程的空闲状态可以理解为UI线程有充分的或者富裕的单帧空闲时间,即在一个周期内UI线程运行结束后还有剩余时间,或者,UI线程的空闲状态可以理解为UI线程进入IDLE状态。
此外,当UI线程不是空闲状态时,系统继续运行UI线程,不进行预处理流程,运行完UI线程,进入渲染线程,本申请对此不做限定。
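One plausible way to realize steps 308–310 on Android is MessageQueue.IdleHandler, whose queueIdle() callback runs on the main thread only when its message queue is empty, which is a reasonable proxy for the UI thread's idle state described above. The sketch assumes the PreprocessCache class from the previous sketch; all other names are illustrative:

```java
import android.os.Looper;
import android.os.MessageQueue;
import java.util.function.Supplier;

// Steps 308-310: when the main thread's message queue goes idle, run the
// preprocessing once and cache its result. Returning false unregisters the
// handler after this one-shot task; returning true would keep it registered
// for further idle periods.
public class IdlePreprocessor implements MessageQueue.IdleHandler {
    private final PreprocessCache cache;
    private final String taskId;
    private final Supplier<Object> task;

    IdlePreprocessor(PreprocessCache cache, String taskId, Supplier<Object> task) {
        this.cache = cache;
        this.taskId = taskId;
        this.task = task;
    }

    void register() {                          // must be called from the main thread
        Looper.myQueue().addIdleHandler(this);
    }

    @Override
    public boolean queueIdle() {
        cache.put(taskId, task.get());         // preprocess and cache (steps 309-310)
        return false;                          // one-shot: unregister after this task
    }
}
```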
图4是本申请实施例提供的一例UI线程的处理过程示意图。如图4所示,在3个不同周期中,分别示出了渲染图层阶段的UI线程和渲染线程过程。其中,在原流程中,每个周期可以包括UI线程和渲染线程。两个相邻的VSYNC信号之间的每个周期的时长T=16.67ms,一个周期为一帧处理进程,换言之,UI线程加渲染线程为一帧处理进程。当UI线程和渲染线程的两者时间之和超过16.67ms时,则会造成当前帧丢帧,导致用户感觉卡顿。
系统可以通过线下分析,确定UI线程中与时序无关的时序无关任务,例如图4中示出的FunA函数,应理解,时序无关的任务,可以是函数或者API接口等,此处以时序无 关任务为例,时序无关API接口处理与之相同,不再赘述。根据前述的相关介绍,该FunA函数相当于前面列举的时序无关任务C,将FunA函数的处理过程作为UI线程的预处理流程,可以不影响当前UI线程的其他任务的处理,而放在其他时段进行处理。为了提高系统的流畅性,在本方案中,通过预处理时序无关任务,减少当前帧的UI运行时间,降低主线程运行的负载,从而实现减少UI线程和渲染线程的总耗时的目的。
在一种可能的实现方式中,把第一周期的UI线程的时序无关任务移动至第二周期的UI线程的单帧空闲时间进行处理,并缓存该UI线程的时序无关任务的处理结果。其中,第二周期是第一周期的前一周期或前几周期,本申请对此不做限定。
具体地,将第一周期的时序无关任务FunA的处理过程移动到第二周期的UI线程的单帧空闲时间进行预处理,并将FunA的处理结果进行缓存。当第一周期的UI线程处理时,直接获取FunA的缓存结果即可,这样就减少了当前的第一周期的UI线程运行FunA的时间,从而减少当前周期的UI线程和渲染线程的总耗时。
示例性的,如图4所示,在原流程中,UI线程和渲染线程的总耗时为T0,且T0大于16.67ms;而在本方案的处理流程中,节省了UI线程运行FunA的时间,UI线程的运行时间缩短到t1(包括处理非预处理流程的时间,以及根据获取到的预处理结果进行计算的时间),总耗时T1=t1+渲染线程的运行时间,且T1小于16.67ms。
此外,UI线程的单帧空闲时间t2=T-t1,如果系统判断第一周期的时序无关任务FunA的处理过程的耗时小于或等于t2,可以将第一周期的时序无关任务FunA搬迁到第二周期进行处理,并将FunA的运行结果进行缓存。
或者,如果系统判断第一周期的时序无关任务FunA的处理过程的耗时小于或等于(T-T1),则可以将第一周期的时序无关任务FunA搬迁到第二周期进行处理,并将FunA的运行结果进行缓存。应理解,本申请对确定UI线程的单帧空闲时间的方式不做限定。
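The FIG. 4 flow can be summarized as: measure this frame's UI-pass time t1, derive the single-frame idle time t2 = T − t1, and, if the estimated run time of FunA fits within t2, post FunA to the main thread so that it runs in the slack while the render thread works on its own thread. The sketch below assumes the frame start time is captured when the UI pass begins; the class and method names are hypothetical:

```java
import android.os.Handler;
import android.os.Looper;
import android.os.SystemClock;

// FIG. 4 idea: after this frame's UI pass ends, the remaining slack
// t2 = T - t1 can host the next frame's FunA, overlapping the render
// thread. Names (FunA, t1, t2) follow the text; the scheduling helper
// itself is only an illustration.
public class SingleFrameSlackScheduler {
    private static final double PERIOD_MS = 1000.0 / 60.0;   // T = 16.67 ms
    private final Handler main = new Handler(Looper.getMainLooper());

    void maybePrecompute(long frameStartMs, double estimatedFunAMs, Runnable funA) {
        double t1 = SystemClock.uptimeMillis() - frameStartMs; // this frame's UI-pass time
        double t2 = PERIOD_MS - t1;                            // single-frame idle time
        if (estimatedFunAMs <= t2) {
            main.post(funA);   // runs in this frame's slack; result is cached for the next frame
        }
    }
}
```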
采用本发明实施例,可以缩短UI线程的运行时间,进一步减少UI线程和渲染线程的总耗时,可以提升系统的流畅性,同时,该过程不会影响电子装置的界面显示。
可选地,当系统判断第一周期的时序无关任务FunA的处理过程的耗时小于或等于t2,将第一周期的时序无关任务FunA搬迁到第二周期进行处理时,该时序无关任务FunA可以和渲染线程并行处理,换言之,电子装置可以在渲染线程时段同时运行后一个周期的时序无关任务FunA;或者,当第二周期的渲染线程运行结束后,再运行后第一周期的时序无关任务FunA,本申请对此不做限定。
可选地,当系统判断第一周期和第四周期的时序无关任务FunA的处理过程的总耗时小于或等于t2时,将第一周期和第四周期的时序无关任务FunA都搬迁到第二周期进行处理,并将FunA的运行结果进行缓存,本申请对当前帧处理的时序无关任务FunA的数量或者多个周期内时序无关任务的数量不做限定。
上述根据图4中介绍的方法,预处理时序无关任务的第二周期内的主线程为运行态时,其中,第二周期的时长包括主线程运行时长和主线程空闲时长,当系统确定所述时序无关任务的运行时长小于或等于所述主线程空闲时长时,可以将第二周期之后的一个或多个周期的时序无关任务搬迁到第二周期进行处理。
在另一种可能的实现方式中,预处理时序无关任务的第二周期内的所述主线程可以为空闲态。换言之,可以将第二周期之后的一个或多个周期的时序无关任务搬迁到UI线程 的空闲态的时段进行处理,并缓存该UI线程的时序无关任务的处理结果。
具体地,若一个周期内有时序无关任务耗时特别长,无法利用UI线程的单帧空闲时间,因为可能该类时序无关任务的运行时间已经大于一个周期的时长,将该类处理所需时长大于一个周期的时长的函数称为FunB,具体地,例如FunB可以包括函数obtaniview或者函数inflate等。该类函数处理所需的时间长,无法利用UI线程的单帧空闲时间进行预处理流程,则可以利用UI线程的空闲状态进行处理,并缓存该UI线程的时序无关任务的处理结果。
图5是本申请实施例提供的又一例UI线程的处理过程示意图。如图5所示,在9个不同周期中,阴影部分是系统有处理进程的周期,每个周期内包括前述介绍的UI线程和渲染线程等,UI线程为运行态;空白部分示出的是系统没有处理进程的空闲(IDLE)状态,即UI线程为空闲态。两个相邻的VSYNC信号之间的每个周期的时长T=16.67ms,一个周期为一帧处理进程。
示例性的,在图5中,原流程示出的函数FunB的处理时长t4大于一个周期的规定时长T=16.67ms,无法利用每个周期中的UI线程的单帧空闲时间,则可以利用UI线程的IDLE状态,即将周期8之后的耗时长的函数FunB移动到空白的周期4至周期7的空闲状态,依次对函数FunB做预处理,并将运行结果进行缓存。应理解,系统可以根据IDLE状态的总时长确定可以运行处理多少个周期的函数FunB,本申请对此不做限定。
当周期8,有定时操作,或者符合预测的用户操作,VSYNC信号触发UI线程重新开始工作时,对于执行该耗时长的函数时,直接取IDLE状态下运行的结果,从而减少长函数的运行时间。
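For the FIG. 5 case, where FunB is too long for any single frame's slack, one assumed approach is to split the work into chunks and drain them only while the main thread stays idle, caching the final result for a later frame such as period 8. The chunking, the queue, and all names below are assumptions of this sketch rather than details given in the text:

```java
import android.os.Looper;
import android.os.MessageQueue;
import java.util.ArrayDeque;
import java.util.Queue;

// FIG. 5 idea: a task too long for one frame's slack (FunB in the text) is
// split into chunks and processed only during the UI thread's IDLE periods;
// each idle callback runs one chunk, and the result is cached when the last
// chunk finishes.
public class IdleChunkedPreprocessor implements MessageQueue.IdleHandler {
    private final Queue<Runnable> chunks = new ArrayDeque<>();
    private final Runnable onComplete;                 // e.g. cache the FunB result

    IdleChunkedPreprocessor(Iterable<Runnable> work, Runnable onComplete) {
        for (Runnable r : work) chunks.add(r);
        this.onComplete = onComplete;
    }

    void register() {                                  // must be called from the main thread
        Looper.myQueue().addIdleHandler(this);
    }

    @Override
    public boolean queueIdle() {
        Runnable chunk = chunks.poll();
        if (chunk != null) chunk.run();                // one chunk per idle period
        if (chunks.isEmpty()) {
            onComplete.run();
            return false;                              // done: unregister
        }
        return true;                                   // keep running in later idle periods
    }
}
```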
应理解,将处理时序无关任务的预处理流程搬迁至UI线程空闲时间或者IDLE状态,可以由系统进行判断。例如,系统可以优先判断是否时序无关任务的运行时长小于或等于所述主线程空闲时长,当满足时序无关任务的运行时长小于或等于所述UI线程空闲时长时,将后一个或多个周期的时序无关任务搬迁到当前周期的UI线程空闲时长进行处理;当不满足时序无关任务的运行时长大于所述UI线程空闲时长时,将后一个或多个周期的时序无关任务搬迁到当前UI线程的IDLE状态进行处理。或者,当系统判断当前有UI线程的IDLE状态时,可以将后一个或多个周期的时序无关任务搬迁到当前UI线程的IDLE状态进行处理,本申请对此不做限定。
通过上述数据处理的方法,缩短了UI线程和渲染线程的总耗时,可以提升系统的流畅性,同时,该过程可以避免丢帧,且不会影响电子装置的界面显示。
以上结合图4和图5介绍了两种缩短UI线程和渲染线程的总耗时以提升系统的流畅性的流程,通过系统对UI线程的状态进行监测,并识别时序无关任务,且对时序无关任务的运行耗时进行估计,再确定时序无关任务的处理原则,例如确定图4或者图5的处理流程,对时序无关任务进行处理。具体地,在该过程中,以时序无关任务为例,可以包括以下过程(时序无关API接口处理与之相同,此处不再赘述):
一、监测UI线程的状态
对应于图3中的步骤308之前,系统监测UI线程的状态。具体地,主要监测两个信息,UI线程单帧空闲时间或者UI线程是否处于IDLE状态。
其中,UI线程的单帧空闲时间可以通过获取UI线程的运行时间获得。
在一种可能的实现方式中,在确定UI线程的单帧空闲时间的过程中,如图4中,UI线程的单帧空闲时间等于单帧总时间(单帧总时间T=16.67ms)减去UI线程运行时间,则t2=T-t1。
此外,系统还监测UI线程是否处于IDLE状态,具体可以通过监听线程状态确定UI线程是否处于IDLE状态。具体地,本申请对系统获取并确定UI线程的状态的方式不做限定。
二、时序无关任务的耗时估计
对应于图3中的步骤309,由于需要当前帧预处理下一帧的时序无关任务,因而需要预估当前帧执行时序无关任务的时间,用于估计UI线程的单帧空闲时间t2是否可以处理时序无关任务。
例如,对于耗时较短的函数FunA,系统预估运行耗时为t3,当t3小于或者等于单帧空闲时间t2时,可以利用图4中介绍的数据处理的方法,利用UI线程的单帧空闲时间进行预处理流程;或者,对于耗时较长的函数FunB等,系统预估运行耗时为t4,当t4大于单帧空闲时间t2时,可以利用图5中介绍的数据处理的方法,利用IDLE状态进行预处理流程。其中t3,t4可以为预先设置在系统中,或者从其他设备/网络侧获取;如系统无法获取或获取失败,还可以根据该函数参数,例如类型,计算复杂度等,参照类似函数的耗时确定。
应理解,系统可以预估处理时序无关任务的时间。在系统预估过程中,为了提高预估的准确性,可选地,可以统计一段时间内时序无关任务的运行时间,记录该时序无关任务的运行时间均值以及实际的波动性。例如,可以采用3δ原则,使得预估得到的时序无关任务的运行时间的均值可以满足99.98%的情况,进而提高预估的准确性。以上一和2可以先后发生,也可以同时进行,还可以对UI线程的状态持续监控,而处理时序无关任务的估计时间只获取一次。
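The run-time estimate described here (a mean over a window plus an allowance for variability, using the 3δ rule) can be maintained incrementally. The sketch below keeps a running mean and standard deviation with Welford's method and exposes mean + 3σ as a conservative upper bound; the class name and the choice of Welford's method are assumptions of this sketch:

```java
// Rolling estimate of a task's run time: mean plus three standard
// deviations, used as a conservative upper bound when deciding whether
// the task fits the idle slack.
public class RuntimeEstimator {
    private long count;
    private double mean;
    private double m2;          // sum of squared deviations (Welford's method)

    void addSample(double runtimeMs) {
        count++;
        double delta = runtimeMs - mean;
        mean += delta / count;
        m2 += delta * (runtimeMs - mean);
    }

    double upperBoundMs() {
        if (count < 2) return Double.MAX_VALUE;   // not enough data: be conservative
        double stdDev = Math.sqrt(m2 / (count - 1));
        return mean + 3 * stdDev;                 // 3-sigma bound
    }
}
```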
三、时序无关任务处理策略
系统通过UI线程的状态监测确定当前帧的单帧空闲时间t2,或者确定了UI线程是否处于IDLE状态;此外,系统并获取了时序无关任务的运行时间,根据以上的信息确定时序无关任务的处理策略。
若满足时序无关任务波动性小并且时序无关任务的运行时间均值小于或等于单帧空闲时间,则利用单帧空闲时间t2预处理时序无关任务,即按照图4的本方案的处理流程,将下一周期的时序无关任务移动到当前帧进行处理,并缓存处理结果。
若时序无关任务波动性大并且时序无关任务的运行时间均值大于单帧空闲时间t2,则不处理下个周期的时序无关任务。
若时序无关任务波动性大并且时序无关任务的运行时间均值大于单帧空闲时间t2,系统确定当前帧UI线程是否处于IDLE状态,则利用IDLE状态预处理时序无关任务,即按照图5的本方案的处理流程,将时序无关任务移动到IDLE状态进行预处理,并缓存处理结果。
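The three cases above reduce to a small decision function: prefer the single-frame slack when the task is short and stable, fall back to IDLE-state preprocessing when the UI thread is idle, and otherwise leave the task in its own frame. The enum and parameter names below are illustrative:

```java
// Decision policy from the three cases above. The thresholds that define
// "low variability" are left to the caller and are not specified by the text.
public class PreprocessPolicy {
    enum Plan { IN_FRAME_SLACK, IN_IDLE_STATE, DO_NOT_PREPROCESS }

    Plan decide(double meanRuntimeMs, boolean lowVariability,
                double singleFrameIdleMs, boolean uiThreadIdle) {
        if (lowVariability && meanRuntimeMs <= singleFrameIdleMs) {
            return Plan.IN_FRAME_SLACK;      // FIG. 4 flow
        }
        if (uiThreadIdle) {
            return Plan.IN_IDLE_STATE;       // FIG. 5 flow
        }
        return Plan.DO_NOT_PREPROCESS;       // process the task in its own frame
    }
}
```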
四、如何处理时序无关任务
若满足处理时序无关任务的条件时,则开始处理时序无关任务,这里的开始处理是指的当前UI线程运行完,直接和渲染线程并行处理时序无关任务。或者在进入空闲状态后 开始处理时序无关任务。
以上是系统对时序无关任务的识别和处理过程,当时序无关任务处理结束后,系统可以进入渲染线程;或者,时序无关任务和渲染线程同时处理,缓存预处理的结果,本申请对此不做限定。
应理解,若UI线程有富裕的单帧空闲时间或者进入IDLE状态,开始预处理流程并缓存预处理的结果,直到缓存完成,或者有信号中断预处理过程时结束运行,并开始新的UI线程。又或者,当系统根据UI线程状态监测,判断UI线程不是空闲状态时,不运行UI线程的预处理流程。
综上,UI线程运行结束,当下一帧运行过程就可以循环执行步骤305至步骤311的操作,从缓存中获取预处理的函数结果,直到UI线程运行结束,进入渲染线程。
上述方案,通过线下识别UI线程中的时序无关任务,其可以是时序无关任务或者时序无关任务的Android API接口等,并识别UI线程的单帧空闲时间,或者UI线程是否处于IDLE状态。当时序无关任务的运行时间均值小于或等于单帧空闲时间,则利用单帧空闲时间预处理时序无关任务,把UI线程中的时序无关任务“搬迁”到UI线程单帧空闲时间进行处理,并缓存处理结果。或者,利用IDLE状态预处理时序无关任务,将UI线程中的时序无关任务“搬迁”到UI线程的IDLE状态时段做处理,并缓存处理结果。该方法可以减轻部分场景下的UI线程的负载的任务,减少卡顿,从而提升系统的流畅性,提升用户体验。例如,对于UI线程+渲染线程的总时长超过16.67ms的情况,现有的系统会发生卡顿,采用本发明实施例,比如,将时序无关任务FunC进行预处理,对于UI线程+渲染线程=18ms,如果UI线程运行为10ms,其中的时序无关任务FunC的运行时间为3ms,渲染线程运行为8ms,那么我们可以利用UI线程运行10ms后的时间,提前处理FunC,那么UI线程+渲染线程=15ms,这样就不会出现丢帧情况,用户体验到的显示是流畅且无卡顿的。
图6是本申请实施例提供的一例流畅性测试结果示意图。如图6所示,以用户滑动微信为测试场景,假设用户滑动的机械手速为600毫米每秒(mm/s)。横坐标为帧长,单位为ms,虚线处横坐标为一个周期的时间长度16.67ms,表示运行一帧的时间长度;纵坐标为比重。具体地,以滑动微信操作为例,统计了20万帧的系统的渲染图层阶段(UI线程和渲染线程)的运行时间。在该统计过程中,优化前的曲线代表了20万帧中系统的渲染图层阶段占据不同帧长的百分比,例如,优化前的20万帧中的94.70%的渲染图层阶段的帧长可以保证在16.67ms之内,优化后的20万帧中的98.72%的渲染图层阶段的帧长可以保证在16.67ms之内,明显经过本申请提供的方法的优化,运行时间控制在16.67ms的百分比提高了近4个百分点,明显缩短了系统的运行时长,提高了系统的流畅性。
结合上述介绍的实施例及相关附图,本申请提供了一种数据处理的装置,图7是本申请实施例提供的一种数据处理的装置700的示意图。可以理解的是,数据处理的装置700可以是前述的电子装置,或者应用于电子装置的芯片或组件,该装置700中各模块或单元分别用于执行上述方法300中介绍的各动作或处理过程,装置700包含了执行各个功能相应的硬件和/或软件模块,如图7所示,700可以包括:
UI线程处理单元710,用于运行UI线程,处理UI线程包括多项处理任务,例如成员函数Draw、负载Layout等。UI线程处理单元710可以接收VSYNC信号,触发UI线程 开始执行。具体的,UI线程处理单元710可以包括接收模块712,接口模块714和处理模块717。其中,接收模块712用于接收触发信号和待处理数据;接口模块714用于从优化单元获取时序无关任务的预处理结果;处理模块717用于处理待处理数据中时序有关的任务,并结合获得的时序无关任务的预处理结果,确定UI线程处理结果,并将UI线程处理结果发给渲染线程处理单元720。
渲染线程处理单元720,接收UI线程处理单元710发送的DrawFrame命令,对接收到的数据进行渲染处理。渲染线程处理单元720内部包含一个任务队列(Task Queue),从UI线程发送过来的DrawFrame命令就会保存在渲染线程处理单元720的Task Queue,等待渲染线程的处理。
SF合成器730,用于根据渲染线程处理单元720的图形缓冲区的结果,合成前一阶段的图层。SurfaceFlinger开始合成图层,如果之前提交的GPU渲染任务没结束,则等待GPU渲染完成,再合成,合成阶段依赖GPU完成。
显示单元740,用于显示SF合成器730合成后的数据。例如LCD显示模块可以进行显示。
本发明实施例中,对图1所示的合成图层阶段和LCD显示阶段没有改动,即对应的渲染线程处理单元720,SF合成器730和显示单元740与已有的相同,此处不再赘述。
优化单元750。在本申请中,通过优化单元750以实现前述图3至图5中给介绍的数据处理的方法,该优化单元750可以包括UI线程状态检测模块752,处理模块758,时序无关任务的运行时间获取模块754以及队列缓存模块756。
其中,状态检测模块752,用于采集UI线程的相关信息。
时序无关任务的运行时间获取模块754,用于获取时序无关任务的运行时间。
处理模块758,用于根据来自UI线程状态检测模块752的UI线程状态信息,以及获取到的时序无关任务的运行时间,确定是否移动时序无关任务到UI线程单帧空闲时间或者UI线程的IDLE状态。例如,通过解析获取的数据,统计一定时间周期内前台应用的主线程耗时,状态以及时序无关任务耗时,用于判断当前帧(例如第二周期)是否可以处理时序无关的任务,当时序无关任务的运行时间均值小于或等于单帧空闲时间,则利用单帧空闲时间预处理时序无关任务,把UI线程中的时序无关任务“搬迁”到UI线程单帧空闲时间进行处理。或者,利用IDLE状态预处理时序无关任务,将UI线程中的时序无关任务“搬迁”到UI线程的IDLE状态时段做处理。
队列缓存模块756,用于缓存UI线程预处理流程中时序无关任务的处理结果,当UI线程再次运行到时序无关任务时,直接从该队列缓存模块756里获取结果。
通过以上新增加的优化单元750的各个模块的协同配合,可以减轻部分场景下的UI线程的负载的任务,减少卡顿,从而提升系统的流畅性,提升用户体验。
本发明实施例还提供一种电子装置或电子终端,其除了图7所示的模块外,还可以包括基带处理器,收发器,显示屏,输入输出装置等,例如外部存储器接口,内部存储器,通用串行总线(universal serial bus,USB)接口,充电管理模块,电源管理模块,电池,天线,移动通信模块,无线通信模块,音频模块,扬声器,受话器,麦克风,耳机接口,传感器模块,按键,马达,指示器,摄像头,显示屏,以及用户标识模块(subscriber identification module,SIM)卡接口等。其中传感器模块可以包括压力传感器、陀螺仪传感器、气压传感 器、磁传感器、加速度传感器、距离传感器、接近光传感器、指纹传感器、温度传感器、触摸传感器、环境光传感器、骨传导传感器等。
可以理解的是,本申请实施例示意的结构并不构成对电子装置或电子终端的具体限定。在本申请另一些实施例中,电子装置或电子终端可以包括比图示或者以上列举的更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
还应理解,这个电子终端可以是手机、平板电脑、可穿戴设备、车载设备、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)等电子设备上,本申请实施例对电子终端的具体类型不作任何限制。
本发明实施例还提供一种电子装置,其可以是一个芯片或电路,其仅包含图7所示的UI线程处理单元710,优化单元750和渲染线程处理单元720,本申请对此不做限定。
可以理解的是,电子装置为了实现上述功能,其包含了执行各个功能相应的硬件和/或软件模块。结合本文中所公开的实施例描述的各示例的算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以结合实施例对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本实施例可以根据上述方法示例对电子装置进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块可以采用硬件的形式实现。需要说明的是,本实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用对应各个功能划分各个功能模块的情况下,图8示出了上述实施例中涉及的电子装置800的一种可能的组成示意图,如图8所示,该电子装置800可以包括:显示单元801、检测单元802和处理单元803。
其中,显示单元801、检测单元802和处理单元803可以用于支持电子装置800执行上述流程图3至图5中介绍的数据处理的方法等,和/或用于本文所描述的技术的其他过程。
需要说明的是,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
本实施例提供的电子装置,用于执行上述数据处理的方法,因此可以达到与上述实现方法相同的效果。
在采用集成的单元的情况下,电子装置可以包括处理模块、存储模块和通信模块。其中,处理模块可以用于对电子装置的动作进行控制管理,例如,可以用于支持电子装置执行上述显示单元801、检测单元802和处理单元803执行的步骤。存储模块可以用于支持电子装置执行存储程序代码和数据等。通信模块,可以用于支持电子装置与其他设备的通信。
其中,处理模块可以是处理器或控制器。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。处理器也可以是实现计算功能的组合,例如包 含一个或多个微处理器组合,数字信号处理(digital signal processing,DSP)和微处理器的组合等等。存储模块可以是存储器。通信模块具体可以为射频电路、蓝牙芯片、Wi-Fi芯片等与其他电子装置交互的设备。
在一个实施例中,当处理模块为处理器,存储模块为存储器时,本实施例所涉及的电子装置可以为具有图1所示结构的设备。
本实施例还提供一种计算机存储介质,该计算机存储介质中存储有计算机指令,当该计算机指令在电子装置上运行时,使得电子装置执行上述相关方法步骤实现上述实施例中的数据处理的方法。
本实施例还提供了一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述相关步骤,以实现上述实施例中的数据处理的方法。
另外,本申请的实施例还提供一种装置,这个装置具体可以是芯片,组件或模块,该装置可包括相连的处理器和存储器;其中,存储器用于存储计算机执行指令,当装置运行时,处理器可执行存储器存储的计算机执行指令,以使芯片执行上述各方法实施例中的数据处理的方法。
其中,本实施例提供的电子装置、计算机存储介质、计算机程序产品或芯片均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
通过以上实施方式的描述,所属领域的技术人员可以了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储 器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。

Claims (17)

  1. 一种数据处理的方法,其特征在于,所述方法包括:
    响应于接收到的触发信号,第一周期内的主线程开始处理所述第一周期内的一项或多项任务中的时序有关任务;
    所述主线程获取所述一项或多项任务中的时序无关任务的预处理结果,所述预处理结果是在第二周期内处理所述时序无关任务得到的结果,所述第二周期是所述第一周期之前的周期;
    所述主线程结合所述第一周期内处理的所述时序有关任务的结果和获取到的所述时序无关任务的预处理结果,确定所述主线程处理结果;
    所述主线程向渲染线程发送所述主线程处理结果。
  2. 根据权利要求1所述的方法,其特征在于,所述第一周期内的主线程开始处理所述时序有关任务之前,所述方法还包括:
    在所述第二周期内处理所述时序无关任务,得到所述时序无关任务的预处理结果;
    缓存所述时序无关任务的预处理结果。
  3. 根据权利要求1或2所述的方法,其特征在于,当所述第二周期内的所述主线程为运行态时,所述第二周期的时长包括主线程运行时长和主线程空闲时长,所述方法还包括:
    确定所述时序无关任务的运行时长小于或等于所述主线程空闲时长。
  4. 根据权利要求1或2所述的方法,其特征在于,所述第二周期内的所述主线程为空闲态。
  5. 根据权利要求4所述的方法,其特征在于,所述方法还包括:
    确定第三周期内的主线程空闲时长,其中,所述第三周期内的所述主线程为运行态,所述第三周期的时长包括主线程运行时长和主线程空闲时长,所述第三周期是所述第一周期之前的周期;
    当所述时序无关任务的运行时长大于所述第三周期内的主线程空闲时长时,在所述第二周期内处理所述时序无关任务,得到所述时序无关任务的预处理结果。
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,所述响应于接收到的触发信号,第一周期内的主线程开始处理所述第一周期内的一项或多项任务中的时序有关任务,包括:
    响应于接收到的触发信号,停止所述第一周期内主线程的预处理流程,开始处理所述第一周期内的一项或多项任务中的时序有关任务。
  7. 一种数据处理的装置,其特征在于,所述装置包括:
    处理单元,响应于接收到的触发信号,在第一周期内开始处理所述第一周期内的一项或多项任务中的时序有关任务;
    获取单元,用于获取所述一项或多项任务中的时序无关任务的预处理结果,所述预处理结果是在第二周期内处理所述时序无关任务得到的结果,所述第二周期是所述第一周期之前的周期;
    所述处理单元,还用于结合所述第一周期内处理的所述时序有关任务的结果和获取到的所述时序无关任务的预处理结果,确定所述主线程处理结果;
    发送单元,用于向渲染线程发送所述主线程处理结果。
  8. 根据权利要求7所述的装置,其特征在于,所述处理单元开始处理所述时序有关任务之前,所述处理单元还用于:
    在所述第二周期内处理所述时序无关任务,得到所述时序无关任务的预处理结果;
    所述装置还包括:
    缓存单元,用于缓存所述时序无关任务的预处理结果。
  9. 根据权利要求7或8所述的装置,其特征在于,当所述第二周期内的主线程为运行态时,所述第二周期的时长包括主线程运行时长和主线程空闲时长,所述处理单元还用于:
    确定所述时序无关任务的运行时长小于或等于所述主线程空闲时长。
  10. 根据权利要求7或8所述的装置,其特征在于,所述第二周期内的主线程为空闲态。
  11. 根据权利要求10所述的装置,其特征在于,所述处理单元还用于:
    确定第三周期内的主线程空闲时长,其中,所述第三周期内的所述主线程为运行态,所述第三周期的时长包括主线程运行时长和主线程空闲时长,所述第三周期是所述第一周期之前的周期;
    当所述时序无关任务的运行时长大于所述第三周期内的主线程空闲时长时,在所述第二周期内处理所述时序无关任务,得到所述时序无关任务的预处理结果。
  12. 根据权利要求7至11中任一项所述的装置,其特征在于,所述处理单元具体用于:
    响应于接收到的触发信号,停止在所述第一周期内的预处理流程,开始处理所述第一周期内的一项或多项任务中的时序有关任务。
  13. 一种电子装置,其特征在于,包括:一个或多个处理器;存储器;多个应用程序;以及一个或多个程序,其中所述一个或多个程序被存储在所述存储器中,当所述一个或者多个程序被所述处理器执行时,使得所述电子装置执行如权利要求1至6中任一项所述的数据处理的方法。
  14. 一种电子装置,其特征在于,包括:显示器,输入输出装置,基带处理电路,使得所述电子装置执行如权利要求1至6中任一项所述的数据处理的方法。
  15. 一种计算机存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子装置上运行时,使得所述电子装置执行如权利要求1至6中任一项所述的数据处理的方法。
  16. 一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1至6中任一项所述的数据处理的方法。
  17. 一种电子装置,其特征在于,所述电子装置包括执行如权利要求1至6任一项所述方法的装置。
PCT/CN2020/102534 2019-07-20 2020-07-17 数据处理的方法、装置及电子设备 WO2021013055A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP20844411.7A EP4002112A4 (en) 2019-07-20 2020-07-17 DATA PROCESSING METHOD AND APPARATUS AND ELECTRONIC DEVICE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910657906.3A CN110502294B (zh) 2019-07-20 2019-07-20 数据处理的方法、装置及电子设备
CN201910657906.3 2019-07-20

Publications (1)

Publication Number Publication Date
WO2021013055A1 true WO2021013055A1 (zh) 2021-01-28

Family

ID=68586749

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/102534 WO2021013055A1 (zh) 2019-07-20 2020-07-17 数据处理的方法、装置及电子设备

Country Status (3)

Country Link
EP (1) EP4002112A4 (zh)
CN (1) CN110502294B (zh)
WO (1) WO2021013055A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116010047A (zh) * 2022-12-12 2023-04-25 爱芯元智半导体(上海)有限公司 线程调度方法、硬件电路及电子设备

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110502294B (zh) * 2019-07-20 2021-08-20 华为技术有限公司 数据处理的方法、装置及电子设备
CN115631258B (zh) * 2020-07-31 2023-10-20 荣耀终端有限公司 一种图像处理方法及电子设备
CN114338952B (zh) * 2020-09-30 2023-07-11 华为技术有限公司 一种基于垂直同步信号的图像处理方法及电子设备
CN115086756B (zh) * 2021-03-10 2024-02-23 北京字跳网络技术有限公司 视频处理方法、装置及存储介质
CN113238842A (zh) * 2021-05-11 2021-08-10 中国第一汽车股份有限公司 任务执行方法、装置及存储介质
CN115549811A (zh) * 2021-06-30 2022-12-30 深圳市瑞图生物技术有限公司 时序控制方法、装置、干化学扫描设备和存储介质
CN114003177B (zh) * 2021-11-05 2024-02-06 青岛海信日立空调系统有限公司 一种空调器、控制系统和控制方法
CN116723265A (zh) * 2022-09-14 2023-09-08 荣耀终端有限公司 图像处理方法、可读存储介质、程序产品和电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105283845A (zh) * 2013-04-11 2016-01-27 脸谱公司 显示对象预生成
CN106354512A (zh) * 2016-09-08 2017-01-25 广州华多网络科技有限公司 用户界面渲染方法及装置
CN107491346A (zh) * 2016-06-12 2017-12-19 阿里巴巴集团控股有限公司 一种应用的任务处理方法、装置及系统
CN109669752A (zh) * 2018-12-19 2019-04-23 北京达佳互联信息技术有限公司 一种界面绘制方法、装置及移动终端
CN109901926A (zh) * 2019-01-25 2019-06-18 平安科技(深圳)有限公司 基于大数据行为调度应用任务的方法、服务器及存储介质
CN110502294A (zh) * 2019-07-20 2019-11-26 华为技术有限公司 数据处理的方法、装置及电子设备

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6894693B1 (en) * 2001-02-09 2005-05-17 Vicarious Visions Inc. Management of limited resources in a graphics system
US8924976B2 (en) * 2011-08-26 2014-12-30 Knu-Industry Cooperation Foundation Task scheduling method and apparatus
CN104574265B (zh) * 2014-12-30 2018-04-17 中科九度(北京)空间信息技术有限责任公司 卫星遥感图像数据的处理方法及装置
CN107015871A (zh) * 2016-12-07 2017-08-04 阿里巴巴集团控股有限公司 一种数据处理方法和装置
CN107193551B (zh) * 2017-04-19 2021-02-02 北京永航科技有限公司 一种生成图像帧的方法和装置
CN107491931A (zh) * 2017-07-12 2017-12-19 浙江大学 一种基于众创设计的设计任务数据分解方法
CN107729094B (zh) * 2017-08-29 2020-12-29 口碑(上海)信息技术有限公司 一种用户界面渲染的方法及装置
US10424041B2 (en) * 2017-12-11 2019-09-24 Microsoft Technology Licensing, Llc Thread independent scalable vector graphics operations
CN109992347B (zh) * 2019-04-10 2022-03-25 Oppo广东移动通信有限公司 界面显示方法、装置、终端及存储介质
CN109996104A (zh) * 2019-04-22 2019-07-09 北京奇艺世纪科技有限公司 一种视频播放方法、装置及电子设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105283845A (zh) * 2013-04-11 2016-01-27 脸谱公司 显示对象预生成
CN107491346A (zh) * 2016-06-12 2017-12-19 阿里巴巴集团控股有限公司 一种应用的任务处理方法、装置及系统
CN106354512A (zh) * 2016-09-08 2017-01-25 广州华多网络科技有限公司 用户界面渲染方法及装置
CN109669752A (zh) * 2018-12-19 2019-04-23 北京达佳互联信息技术有限公司 一种界面绘制方法、装置及移动终端
CN109901926A (zh) * 2019-01-25 2019-06-18 平安科技(深圳)有限公司 基于大数据行为调度应用任务的方法、服务器及存储介质
CN110502294A (zh) * 2019-07-20 2019-11-26 华为技术有限公司 数据处理的方法、装置及电子设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4002112A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116010047A (zh) * 2022-12-12 2023-04-25 爱芯元智半导体(上海)有限公司 线程调度方法、硬件电路及电子设备
CN116010047B (zh) * 2022-12-12 2023-12-15 爱芯元智半导体(宁波)有限公司 线程调度方法、硬件电路及电子设备

Also Published As

Publication number Publication date
EP4002112A1 (en) 2022-05-25
EP4002112A4 (en) 2022-10-05
CN110502294B (zh) 2021-08-20
CN110502294A (zh) 2019-11-26

Similar Documents

Publication Publication Date Title
WO2021013055A1 (zh) 数据处理的方法、装置及电子设备
EP2473914B1 (en) Hardware-based scheduling of graphics processor unit (gpu) work
CN110300328B (zh) 一种视频播放控制方法、装置及可读存储介质
EP3866007A1 (en) Intelligent gpu scheduling in a virtualization environment
CN108140234B (zh) 基于命令流标记的gpu操作算法选择
WO2020073672A1 (zh) 资源调度方法和终端设备
US9715407B2 (en) Computer product, multicore processor system, and scheduling method
WO2014015725A1 (zh) 基于应用效果即时反馈的显卡虚拟化下资源调度系统、方法
WO2004095248A2 (en) Performance scheduling using multiple constraints
JPH11282568A (ja) セルフタイムドシステムの電力消耗の低減装置及びその方法
US9244740B2 (en) Information processing device, job scheduling method, and job scheduling program
CN114610472A (zh) 异构计算中多进程管理方法及计算设备
US11221875B2 (en) Cooperative scheduling of virtual machines
JP5590114B2 (ja) ソフトウェア制御装置、ソフトウェア制御方法、およびソフトウェア制御プログラム
JP5862722B2 (ja) マルチコアプロセッサシステム、マルチコアプロセッサシステムの制御方法、およびマルチコアプロセッサシステムの制御プログラム
CN116795503A (zh) 任务调度方法、任务调度装置、图形处理器及电子设备
CN112114967B (zh) 一种基于服务优先级的gpu资源预留方法
US20140053162A1 (en) Thread processing method and thread processing system
EP2551776B1 (en) Multi-core processor system, control program, and control method
KR101954668B1 (ko) 이종 멀티코어 프로세서를 이용하는 전자장치에서 전력효율을 개선하기 위한 방법 및 장치
US9015720B2 (en) Efficient state transition among multiple programs on multi-threaded processors by executing cache priming program
WO2024037068A1 (zh) 任务调度方法、电子设备及计算机可读存储介质
WO2021196175A1 (en) Methods and apparatus for clock frequency adjustment based on frame latency
WO2021000226A1 (en) Methods and apparatus for optimizing frame response
WO2024066926A1 (zh) 显示方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20844411

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020844411

Country of ref document: EP

Effective date: 20220221