CN111198757A - CPU kernel scheduling method, CPU kernel scheduling device and storage medium
- Publication number
- CN111198757A (application number CN202010011545.8A)
- Authority
- CN
- China
- Prior art keywords
- cluster
- core
- scheduling
- task
- cpu
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/485—Task life-cycle, e.g. stopping, restarting, resuming execution
- G06F9/4856—Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration
- G06F9/4862—Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration the task being a mobile agent, i.e. specifically designed to migrate
- G06F9/4875—Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration the task being a mobile agent, i.e. specifically designed to migrate with migration policy, e.g. auction, contract negotiation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The disclosure relates to a CPU core scheduling method, a CPU core scheduling device and a storage medium. The CPU core scheduling method is applied to a terminal on which a frame drawing application is installed, where the CPU of the terminal supports a multi-core-cluster platform architecture. The CPU core scheduling method includes the following steps: respectively determining, within a frame drawing period, the scheduling delay time of each core in each core cluster of the multi-core cluster; and if the scheduling delay time of every core in a first cluster exceeds a specified delay time threshold, scheduling a specified number of tasks from the first cluster to a second cluster, the second cluster being a cluster that is different from the first cluster and whose performance index meets the requirement of running the specified number of tasks. With the method and device, scheduling delay in heavy-load scenarios can be reduced and stuttering alleviated.
Description
Technical Field
The present disclosure relates to the field of terminal technologies, and in particular, to a CPU core scheduling method, a CPU core scheduling apparatus, and a storage medium.
Background
With the popularization of touch-screen smartphones and the rapid development of mobile phone hardware, the number of applications supported by smartphones keeps growing, and so does the demand on the processing capability of hardware such as the terminal's Central Processing Unit (CPU). For example, mobile games have for years been popular among young people as a daily form of entertainment for relaxing and relieving stress. From the early single-player games to the multiplayer online battle arena (MOBA) games now on the market, game loads have become higher and higher, and the demand on the processing capability of hardware such as the CPU keeps increasing.
Due to the constraints of heat dissipation and power consumption, hardware such as the CPU cannot continuously run at a high-performance operating frequency. To better balance performance against power consumption, CPUs evolved the big.LITTLE architecture, that is, an architecture combining big cores and little cores. By tracking the load changes of threads, tasks are divided into big and small ones: heavy-load big tasks run on big cores (Big cores) to obtain a better performance experience, while light-load small tasks run on little cores (LITTLE cores) to save power.
Furthermore, in order to make full use of the hardware's performance while still taking power consumption and heat into account, some platforms add one or two extra cores on top of the big-core/little-core architecture to provide additional performance support. A new type of multi-core-cluster platform architecture has thus grown out of the traditional dual-cluster big/little architecture, for example the three-cluster (3-cluster) platform architecture, namely the Little-Mid-Big architecture. The Little-Mid-Big architecture comprises a little core cluster (Little cluster) composed of little cores, a medium core cluster (Mid cluster) composed of medium cores, and a big core cluster (Big cluster) composed of big cores. The Little cluster helps save power under low load, the Mid cluster provides performance support, and the Big cluster serves interactive input that is demanding or sensitive to response time and provides extra performance support when the system load is unusually heavy.
However, because of power-consumption considerations and the limits of the hardware's heat-dissipation capability, the Big cluster is not fully utilized in game scenarios. Practical tests show that, when tracing back the causes of frame loss triggered by sudden load increases in game scenarios, hardware resources are often not brought to bear in time, so the computing demand of the game is not met in time, and frame loss is therefore relatively common.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a CPU core scheduling method, a CPU core scheduling apparatus, and a storage medium.
According to a first aspect of the embodiments of the present disclosure, a method for scheduling a CPU core is provided, where the method is applied to a terminal, a frame drawing application is installed on the terminal, and a CPU of the terminal supports a multi-core cluster platform architecture, and the method for scheduling a CPU core includes:
respectively determining the scheduling delay time of each kernel in each kernel cluster in the multi-kernel cluster in a frame drawing period; if the scheduling delay time of each kernel in a first cluster exceeds a specified delay time threshold, scheduling a specified number of tasks in the first cluster to a second cluster; and the second cluster is a cluster different from the first cluster, and the performance index meets the requirement of running the specified number of tasks.
In one embodiment, the method for scheduling a CPU core further includes:
determining a weight value of the tasks running in each core cluster of the multi-core cluster according to the size of the task load and the importance degree of the task; wherein a task with a larger load has a higher weight value than a task with a smaller load, and a task with a higher importance degree has a higher weight value than a task with a lower importance degree.
In another embodiment, scheduling a specified number of tasks in the first cluster into a second cluster comprises:
and scheduling the tasks in the specified number in the first cluster into the second cluster according to the sequence of the weighted values of the tasks from high to low.
In another embodiment, the second cluster is a larger core cluster with a performance index higher than that of the first cluster in the multi-core cluster, and/or the second cluster is a core cluster in which the sum of the scheduling delay times of the cores in the multi-core cluster is smaller than the sum of the scheduling delay times of the cores in the first cluster.
In yet another embodiment, when a new task is enqueued or task scheduling is switched, scheduling delay time of each core in each core cluster in the multi-core cluster is determined in a frame drawing period.
In another embodiment, the method for scheduling a CPU core further includes:
monitoring the number of ready tasks in each core cluster in the multi-core cluster in real time; and if the number of ready tasks in the core cluster exceeds a set task number threshold value and an inactive kernel exists in the core cluster of which the number of ready tasks exceeds the set task number threshold value, activating the inactive kernel.
In yet another embodiment, the multi-core cluster platform architecture is a three-core cluster platform architecture.
According to a second aspect of the embodiments of the present disclosure, there is provided a CPU core scheduling apparatus, which is applied to a terminal, where a frame drawing application is installed on the terminal, and a CPU of the terminal supports a multi-core cluster platform architecture, the CPU core scheduling apparatus includes:
a determining unit, configured to determine scheduling delay times of cores in each core cluster in the multi-core cluster in a frame drawing cycle, respectively; the scheduling unit is used for scheduling a specified number of tasks in a first cluster to a second cluster when the scheduling delay time of each core in the first cluster exceeds a specified delay time threshold; and the second cluster is a cluster different from the first cluster, and the performance index meets the requirement of running the specified number of tasks.
In one embodiment, the scheduling unit is further configured to:
determining a weight value of the tasks running in each core cluster of the multi-core cluster according to the size of the task load and the importance degree of the task; wherein a task with a larger load has a higher weight value than a task with a smaller load, and a task with a higher importance degree has a higher weight value than a task with a lower importance degree.
In another embodiment, the scheduling unit schedules a specified number of tasks in the first cluster to a second cluster as follows:
and scheduling the tasks in the specified number in the first cluster into the second cluster according to the sequence of the weighted values of the tasks from high to low.
In another embodiment, the second cluster is a larger core cluster with a performance index higher than that of the first cluster in the multi-core cluster, and/or the second cluster is a core cluster in which the sum of the scheduling delay times of the cores in the multi-core cluster is smaller than the sum of the scheduling delay times of the cores in the first cluster.
In another embodiment, when a new task is enqueued or task scheduling is switched, the determining unit determines the scheduling delay time of each core in each core cluster in the multi-core cluster in a frame drawing period.
In another embodiment, the scheduling unit is further configured to:
monitoring the number of ready tasks in each core cluster in the multi-core cluster in real time; and when the number of ready tasks in the core cluster exceeds a set task number threshold value and an inactive kernel exists in the core cluster of which the number of ready tasks exceeds the set task number threshold value, activating the inactive kernel.
In yet another embodiment, the multi-core cluster platform architecture is a three-core cluster platform architecture.
According to a third aspect of the embodiments of the present disclosure, there is provided a CPU core scheduling apparatus, including:
a processor; a memory for storing processor-executable instructions; wherein the processor is configured to: executing the method for scheduling a CPU core described in the first aspect or any one of the implementation manners of the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium, where instructions in the storage medium, when executed by a processor of a mobile terminal, enable the mobile terminal to execute the CPU core scheduling method described in the first aspect or any one of the implementation manners of the first aspect.
The technical solution provided by the embodiments of the disclosure can have the following beneficial effects: the scheduling delay time of each core in each core cluster is determined within a frame drawing period, and, based on the scheduling delay time of each core, a specified number of tasks in a core cluster whose scheduling delay time exceeds the specified delay time threshold are scheduled to another core cluster, thereby reducing scheduling delay in heavy-load scenarios, reducing stuttering, and improving the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a method for CPU core scheduling in accordance with an exemplary embodiment.
FIG. 2 is a task profile of Mid cluster and Big cluster shown in accordance with an exemplary embodiment.
FIG. 3 is a task profile of additional task enqueuing shown in accordance with an exemplary embodiment.
FIG. 4 is a task scheduling diagram for Mid cluster and Big cluster shown in an exemplary embodiment.
Fig. 5 is a diagram illustrating a process of kernel activation based on a fixed time window in the related art according to an exemplary embodiment.
FIG. 6 is a diagram illustrating a process for monitoring the number of tasks and activating a kernel in real time based on a frame rendering period in accordance with an exemplary embodiment.
Fig. 7 is a block diagram illustrating a CPU core scheduling apparatus in accordance with an example embodiment.
FIG. 8 is a block diagram illustrating an apparatus for CPU core scheduling in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The CPU core scheduling method provided by the embodiments of the disclosure is applied to a terminal on which a frame drawing application is installed. A frame drawing application has a frame drawing period during its operation. The frame drawing period may be understood as the interval between the display of two adjacent frames of images.
In the embodiments of the present disclosure, the frame drawing application may be an APP installed on the terminal; for example, the frame drawing application may be a game. The following description of the embodiments takes a game as an example of the frame drawing application, but the embodiments of the present disclosure are not limited to the frame drawing application being a game application; it may be another application.
The terminal provided with the frame drawing application in the embodiment of the disclosure supports a multi-core cluster platform architecture, such as a 3-cluster platform architecture.
In the embodiment of the disclosure, the example that the multi-core cluster platform architecture is a 3-cluster platform architecture is taken as an example for explanation.
A system supporting the 3-cluster platform architecture comprises a little core cluster (Little cluster) composed of little cores, a medium core cluster (Mid cluster) composed of medium cores, and a big core cluster (Big cluster) composed of big cores, and the performance indexes of the cores on different clusters differ. The cores on the Big cluster have the highest performance index and can provide the maximum CPU performance, but with the largest power-consumption overhead. The cores on the Little cluster have the best power-consumption behavior, but cannot meet the performance requirements of high-load scenarios. The performance and power consumption of the cores on the Mid cluster lie between Big and Little. To reflect this difference, the scheduler binds different capability indexes to the cores on different clusters. For example, the capability index of a big core is set to 1000, that of a medium core to 800, and that of a little core to 400. Therefore, when a task wakes up for enqueuing, a cluster that can meet its performance requirement can be selected according to the historical load value of the enqueued task, and a suitable core is then selected from that cluster to execute the task. The task load in the operating system is approximately equal to the task's running time within a certain fixed window period, and it is updated at the end of each window period.
For convenience of description, in the embodiments of the present disclosure the load of a task is taken to be equal to the running time of the task, and that time is assumed to be a normalized value, ignoring the operating frequency of the CPU on which the task runs and the instructions-per-cycle (IPC) value.
In the related art, a CPU core scheduling scheme supporting the 3-cluster platform architecture selects a target CPU core with the following logic: when a task wakes up for enqueuing, the clusters are examined in the order Little cluster, then Mid cluster, checking whether the load value of the task being enqueued is less than or equal to the set percentage threshold of the capability index of the cluster under examination. If the load value of the enqueued task is less than or equal to that threshold, the cluster under examination is determined to meet the performance requirement of the current task, and a suitable CPU core is selected from that cluster to execute the task. If the load value of the enqueued task is greater than the set percentage threshold of the capability index of the cluster under examination, a cluster with a better performance index is examined next, until a CPU core meeting the condition is found or the cores in the Big cluster have been examined.
To simplify the description, the embodiments of the present disclosure simplify the model as follows: assume that the window period length for load updates is 100 ms, that the maximum performance index in the system is 1, that the Big cluster can reach 100% of that performance, the Mid cluster at most 80%, and the Little cluster at most 40%.
According to the CPU core scheduling method in the related art, the following scheme may be adopted:
1: When the task load (running time) does not exceed 100 ms (window period length) × 85% (threshold percentage) × 40% (the maximum performance a little core can provide) = 34 ms, the task runs on a core of the Little cluster.
2: When the task load (running time) is greater than 34 ms and does not exceed 100 ms (window period length) × 85% (threshold percentage) × 80% = 68 ms, the task runs on a core of the Mid cluster.
3: When the task load (running time) exceeds 68 ms, the task runs on a core of the Big cluster.
However, when actually running frame drawing applications such as games, combined scenarios are observed at the moment of frame loss in which the core utilization of the Big cluster is very low while the scheduling delay of the Mid cluster is very high. Analysis shows that the main reason is that the conditions for using the big cores are too strict and inflexible, resulting in an uneven distribution of tasks among the different clusters. For example, when a large number of ready tasks with loads between 34 ms and 68 ms appear in the system over a certain period, most tasks back up in the Mid cluster, because the number of such tasks far exceeds the number of available cores in the Mid cluster and they do not meet the condition for being scheduled to the Big cluster. Similarly, when a large number of ready tasks with loads above 68 ms appear in the system over a certain period, tasks back up on the big cores, causing high scheduling delay.
In summary, in the process of scheduling CPU cores on a multi-core-cluster platform architecture, phenomena such as stuttering and frame loss caused by task scheduling delay occur.
In view of this, the embodiments of the present disclosure provide a method for scheduling CPU cores, where the scheduling delay time of each core in each cluster is determined in a frame drawing cycle, and based on the scheduling delay time of each core, a specified number of tasks in the cluster whose scheduling delay time exceeds a specified delay time threshold are scheduled to other clusters, so as to reduce scheduling delay in a heavy load scene, reduce stuttering, and improve use experience.
Fig. 1 is a flowchart illustrating a method for scheduling a CPU core according to an exemplary embodiment, where the method for scheduling a CPU core is used in a terminal, a frame drawing application is installed on the terminal, and a CPU of the terminal supports a multi-core cluster platform architecture, for example, a 3-cluster platform architecture.
As shown in fig. 1, the CPU core scheduling method includes the following steps.
In step S11, the scheduling delay time of each core in each cluster of the multi-core cluster is determined within the frame drawing period.
In step S12, if there is a cluster in which the scheduling delay time of every core exceeds the specified delay time threshold, a specified number of tasks in that cluster are scheduled to other clusters.
For convenience of description in the embodiments of the present disclosure, a cluster in which the scheduling delay time of each core exceeds a specified delay time threshold is referred to as a first cluster. The other cluster to which the task in the first cluster is scheduled is called the second cluster. The second cluster is different from the first cluster, and the performance index can meet the requirement of running the tasks scheduled from the first cluster.
In the embodiments of the disclosure, if there is a first cluster in which the scheduling delay time of every core exceeds the specified delay time threshold, it can be determined that a task backlog on the first cluster is causing the scheduling delay to increase. By scheduling tasks from the first cluster onto the second cluster, the number of ready tasks in each cluster can be actively balanced, thereby avoiding the frame loss caused by uneven task distribution.
The embodiment of the present disclosure will be described below with reference to practical applications.
In an example of the present disclosure, when a new task is enqueued or task scheduling is switched, the scheduling delay time of each core in each cluster of the multiple clusters may be determined within the frame drawing period, so that the CPU cores can be rebalanced promptly whenever a new task arrives or scheduling switches, as sketched below.
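The following is a minimal, hypothetical sketch of such a per-frame-period delay check; the data structures, the 4 ms threshold and all identifiers are assumptions made for illustration and are not taken from the disclosure.

```python
# Illustrative sketch: per-core scheduling delay tracked within one frame drawing
# period, with a check for a cluster whose cores all exceed a delay threshold.
from dataclasses import dataclass, field

DELAY_THRESHOLD_MS = 4.0   # assumed "specified delay time threshold"

@dataclass
class Core:
    delay_ms: float = 0.0    # scheduling delay accumulated in this frame period
    active: bool = True      # whether the core is currently online

@dataclass
class Cluster:
    name: str
    capability: int                          # e.g. 400 / 800 / 1000
    cores: list = field(default_factory=list)
    run_queue: list = field(default_factory=list)

    def overloaded(self) -> bool:
        # Trigger condition from the method: every core's delay exceeds the threshold.
        return bool(self.cores) and all(c.delay_ms > DELAY_THRESHOLD_MS for c in self.cores)

def find_first_cluster(clusters):
    """Return a cluster whose cores all exceed the delay threshold, if any.

    Evaluated when a new task is enqueued or task scheduling is switched.
    """
    for cluster in clusters:
        if cluster.overloaded():
            return cluster
    return None
```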
In another example of the present disclosure, a weight value is determined for the tasks running in each cluster of the multiple clusters according to the size of the task load and the importance degree of the task. A task with a larger load has a higher weight value than a task with a smaller load, and a task with a higher importance degree has a higher weight value than a task with a lower importance degree; in the end, the larger the load and the higher the importance of a task, the larger its weight value.
In the embodiments of the present disclosure, once the weight values of the tasks are determined, a specified number of tasks in the first cluster may be scheduled to the second cluster according to the weight value of each task in the first cluster. For example, the specified number of tasks may be scheduled from the first cluster into the second cluster in descending order of their weight values, as sketched after this paragraph.
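The disclosure does not specify how the weight is computed, only that heavier and more important tasks rank higher; the particular formula and all names below are assumptions made for illustration.

```python
# Hypothetical weight rule and selection of the highest-weight tasks to migrate.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    load_ms: float      # historical running time within the window
    importance: int     # e.g. 0 = ordinary, 1 = important, 2 = critical

def task_weight(task: Task) -> float:
    # Assumed combination: weight grows with load and with importance.
    return task.load_ms * (1 + 0.5 * task.importance)

def pick_tasks_to_migrate(run_queue, count):
    """Return the `count` highest-weight tasks, highest first."""
    return sorted(run_queue, key=task_weight, reverse=True)[:count]

# Example: the heavier / more important tasks are chosen first.
queue = [Task("render", 60, 2), Task("audio", 20, 1), Task("logic", 50, 0)]
assert [t.name for t in pick_tasks_to_migrate(queue, 2)] == ["render", "logic"]
```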
It is understood that the number of tasks scheduled away from the first cluster may be determined according to the performance indexes of the first and second clusters, the scheduling delay times, and the like. For example, if the scheduling delay time of the first cluster is very large and the performance index of the second cluster is high, a larger number of tasks can be scheduled into the second cluster; if the scheduling delay time of the first cluster is shorter and the performance index of the second cluster is lower, a smaller number of tasks can be scheduled into the second cluster.
In one example, to achieve better core scheduling balance, the second cluster may be a cluster whose performance index is higher than that of the first cluster; for example, if the first cluster is the Little cluster, the second cluster may be the Mid cluster or the Big cluster. In an embodiment, the second cluster may be the Big cluster with the highest performance index among the multiple clusters, so that heavy-load tasks are monitored in real time within the frame rendering period and, once a sudden load increase over a short period is detected, the related tasks are migrated directly to the super-large core for processing.
In another example, in order to achieve better core scheduling balance, the second cluster may be a cluster in which the sum of the scheduling delay times of its cores is smaller than the sum of the scheduling delay times of the cores in the first cluster. For example, if the first cluster is the Big cluster, the second cluster may be the Little cluster or the Mid cluster, thereby avoiding the high scheduling delay caused by a task backlog on the big cores. A sketch of this target-cluster choice and of the migration follows.
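A minimal sketch of choosing the second cluster and migrating the highest-weight tasks, building on the hypothetical Cluster/Core structures and the pick_tasks_to_migrate helper sketched above; the tie-breaking rule and the way the task count is chosen are assumptions, since the disclosure leaves them open.

```python
def choose_second_cluster(first, clusters):
    """Pick a different cluster with a higher capability or less total core delay."""
    def total_delay(cluster):
        return sum(core.delay_ms for core in cluster.cores)
    candidates = [c for c in clusters
                  if c is not first and (c.capability > first.capability
                                         or total_delay(c) < total_delay(first))]
    # Among the candidates, prefer the least-delayed one (assumed tie-breaker).
    return min(candidates, key=total_delay, default=None)

def rebalance(first, clusters):
    """Move the highest-weight tasks away from an overloaded first cluster."""
    second = choose_second_cluster(first, clusters)
    if second is None:
        return
    # Assumed heuristic for the "specified number": leave at most one ready task per core.
    count = max(0, len(first.run_queue) - len(first.cores))
    for task in pick_tasks_to_migrate(first.run_queue, count):
        first.run_queue.remove(task)
        second.run_queue.append(task)
```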
In the embodiments of the disclosure, the threads belonging to the frame rendering application are sorted by load size and importance by counting and tracking, in real time, the load changes of those threads within each frame rendering period; the larger the load and the higher the importance of a thread, the larger its weight value. When it is monitored that a task backlog on a certain cluster is increasing the scheduling delay, the number of ready tasks in each cluster is actively balanced, thereby avoiding the frame loss caused by uneven task distribution.
In an exemplary application scenario, the method is applied to a 3-cluster platform architecture. In a certain game scene, the current task distribution of the Mid cluster and the Big cluster is shown in FIG. 2. FIG. 2 is a task distribution diagram of the Mid cluster and the Big cluster according to an exemplary embodiment. In FIG. 2, three cores are running on the Mid cluster: core 0, core 1 and core 2, and the run queues on these cores are task 1 → task 2 → task 3 and task 1 → task 2, respectively. One core is running on the Big cluster: core 0, whose run queue is empty. When task A, task B and task C are woken up and newly enqueued, the loads (running times) of these three tasks are between 34 ms and 68 ms, so under the task-allocation principle of the related art they are queued in the run queues on the Mid cluster, as shown in FIG. 3. According to the CPU core scheduling method provided by the embodiments of the present disclosure, however, the total scheduling delay time of the three cores on the Mid cluster is now relatively high, so a specified number of tasks are extracted and scheduled to the Big cluster; for example, as shown in FIG. 4, task 1 on core 0 and task B on core 1 are scheduled to the Big cluster, and the number of ready tasks in each cluster is actively balanced, thereby avoiding the frame loss caused by uneven task allocation.
Further, in order to better balance the number of ready tasks of each cluster, the number of ready tasks in each cluster may be monitored in real time within a frame drawing period, and if there is a cluster in which the number of ready tasks exceeds a set task number threshold and there is an inactive kernel in the cluster in which the number of ready tasks exceeds the set task number threshold, the inactive kernel is activated.
In the embodiments of the disclosure, the number of ready tasks in each cluster is monitored in real time within the frame drawing period, and when the number of ready tasks exceeds the set task number threshold, more cores are activated in real time to process the tasks. This suits situations in which the number of ready tasks increases suddenly, for example when a scene change in a game causes the number of ready tasks to spike at a certain moment; activating more cores in real time to process the suddenly increased tasks can reduce scheduling delay and frame loss. A sketch of such an enqueue-time check follows.
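A hedged sketch of that check, reusing the hypothetical Cluster and Core structures from the sketch above; the task-count threshold is an assumed value, not one given by the disclosure.

```python
READY_TASK_THRESHOLD = 4    # assumed "set task number threshold"

def on_task_enqueued(cluster, task):
    """Called within the frame drawing period whenever a task joins a run queue."""
    cluster.run_queue.append(task)
    inactive = [core for core in cluster.cores if not core.active]
    if len(cluster.run_queue) > READY_TASK_THRESHOLD and inactive:
        # Bring an idle core online immediately instead of waiting for the end of
        # a fixed time window (compare FIG. 5 with FIG. 6).
        inactive[0].active = True
```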
The following describes the scheduling manner related to the above-mentioned method for monitoring the number of ready tasks and activating an inactive kernel in real time in a frame drawing period, with reference to practical applications.
Fig. 5 is a schematic diagram illustrating the process of core activation based on a fixed time window in the related art. Assuming the fixed time window in FIG. 5 is 100 ms long, the average number of ready tasks in each time window is monitored and cores are dynamically brought online or offline accordingly. In FIG. 5, at the moment the number of tasks suddenly increases, the time point for adjusting which cores are online has not yet been reached, so cores cannot be activated in time to process the suddenly increased tasks, and scheduling delay and frame loss occur.
Fig. 6 is a schematic diagram of the process of monitoring the number of tasks and activating cores in real time based on the frame drawing period in an exemplary embodiment of the present disclosure. In FIG. 6, a new monitoring opportunity is added: when a new task enters a run queue of a cluster within a frame drawing period, the scheduling congestion of that run queue is evaluated, and if it exceeds a certain threshold, an inactive core in the cluster is activated immediately. Activating more cores in real time to process the suddenly increased tasks can reduce scheduling delay and frame loss.
The CPU core scheduling method provided by the embodiments of the disclosure is a general, self-contained method of real-time load monitoring and multi-core invocation. By applying real-time monitoring of heavy-load tasks to frame drawing scenarios such as games, once a sudden, steep load increase within a short period is detected, the related tasks are migrated directly to another suitable cluster for processing, for example to the super-large core, and the number of available cores in the system is adjusted in time according to the real-time number of monitored heavy-load threads, thereby reducing scheduling delay in heavy-load scenarios, improving the gaming experience, and reducing stuttering.
Based on the same conception, the embodiment of the disclosure also provides a CPU core scheduling device.
It is understood that, in order to implement the above functions, the CPU core scheduling apparatus provided in the embodiments of the present disclosure includes a hardware structure and/or a software module corresponding to executing each function. The disclosed embodiments can be implemented in hardware or a combination of hardware and computer software, in combination with the exemplary elements and algorithm steps disclosed in the disclosed embodiments. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
FIG. 7 is a block diagram illustrating a CPU core scheduler in accordance with an exemplary embodiment. Referring to fig. 7, a CPU core scheduling apparatus 700 is applied to a terminal, a frame drawing application is installed on the terminal, and a CPU of the terminal supports a multi-core cluster platform architecture. The CPU core scheduling apparatus 700 includes a determination unit 701 and a scheduling unit 702.
A determining unit 701, configured to determine scheduling delay times of cores in each core cluster in the multi-core cluster in a frame drawing cycle, respectively. A scheduling unit 702, configured to schedule a specified number of tasks in the first cluster to the second cluster when the scheduling delay time of each core in the first cluster exceeds a specified delay time threshold. The second cluster is different from the first cluster, and the performance index meets the requirement of running a specified number of tasks.
In one embodiment, the scheduling unit 702 is further configured to: determine a weight value of the tasks running in each core cluster of the multi-core cluster according to the size of the task load and the importance degree of the task, wherein a task with a larger load has a higher weight value than a task with a smaller load, and a task with a higher importance degree has a higher weight value than a task with a lower importance degree.
In another embodiment, the scheduling unit 702 schedules a specified number of tasks in the first cluster to the second cluster in the order of the weighted values of the tasks from high to low.
In another embodiment, the second cluster is a larger core cluster with a higher performance index than the first cluster in the multi-core cluster, and/or the second cluster is a core cluster in which the sum of the scheduling delay times of the cores in the multi-core cluster is smaller than the sum of the scheduling delay times of the cores in the first cluster.
In another embodiment, when a new task is enqueued or task scheduling is switched, the determining unit 701 determines the scheduling delay time of each core in each core cluster in the multi-core cluster in a frame drawing period.
In another embodiment, the scheduling unit 702 is further configured to: monitor the number of ready tasks in each core cluster of the multi-core cluster in real time; and when the number of ready tasks in a core cluster exceeds the set task number threshold and an inactive core exists in that core cluster, activate the inactive core.
The CPU core scheduling device provided by the embodiments of the disclosure determines the scheduling delay time of each core in each core cluster within a frame drawing period and, based on the scheduling delay time of each core, schedules a specified number of tasks from a core cluster whose scheduling delay time exceeds the specified delay time threshold to another core cluster, thereby reducing scheduling delay in heavy-load scenarios, reducing stuttering, and improving the user experience.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 8 is a block diagram illustrating an apparatus 800 for CPU core scheduling in accordance with an example embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 8, the apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing state assessments of various aspects of the device 800. For example, the sensor assembly 814 may detect the open/closed state of the device 800 and the relative positioning of components such as the display and keypad of the apparatus 800; the sensor assembly 814 may also detect a change in position of the apparatus 800 or of a component of the apparatus 800, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in temperature of the apparatus 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is further understood that the use of "a plurality" in this disclosure means two or more, as other terms are analogous. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. The singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be further understood that the terms "first," "second," and the like are used to describe various information and that such information should not be limited by these terms. These terms are only used to distinguish one type of information from another and do not denote a particular order or importance. Indeed, the terms "first," "second," and the like are fully interchangeable. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure.
It is further to be understood that while operations are depicted in the drawings in a particular order, this is not to be understood as requiring that such operations be performed in the particular order shown or in serial order, or that all illustrated operations be performed, to achieve desirable results. In certain environments, multitasking and parallel processing may be advantageous.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (16)
1. A CPU kernel scheduling method is applied to a terminal, a frame drawing application is installed on the terminal, a CPU of the terminal supports a multi-core cluster platform architecture, and the CPU kernel scheduling method comprises the following steps:
respectively determining the scheduling delay time of each kernel in each kernel cluster in the multi-kernel cluster in a frame drawing period;
if the scheduling delay time of each kernel in a first cluster exceeds a specified delay time threshold, scheduling a specified number of tasks in the first cluster to a second cluster;
and the second cluster is a cluster different from the first cluster, and the performance index meets the requirement of running the specified number of tasks.
2. The CPU core scheduling method according to claim 1, further comprising:
determining a weight value of a running task in each core cluster in the multi-core cluster according to the size and the importance degree of the task load;
the task weight value with a large load is higher than the task weight value with a low load, and the task weight value with a high importance degree is higher than the task weight value with a low importance degree.
3. The CPU core scheduling method of claim 2, wherein scheduling a specified number of tasks in the first cluster into a second cluster comprises:
and scheduling the tasks in the specified number in the first cluster into the second cluster according to the sequence of the weighted values of the tasks from high to low.
4. The CPU core scheduling method according to any one of claims 1 to 3, wherein the second cluster is a larger core cluster with a performance index higher than that of the first cluster in the multi-core cluster, and/or the second cluster is a core cluster in which a sum of scheduling delay times of cores in the multi-core cluster is smaller than that of cores in the first cluster.
5. The method according to claim 1, wherein when a new task is enqueued or task scheduling is switched, the scheduling delay time of each core in each core cluster in the multi-core cluster is determined in a frame drawing cycle.
6. The CPU core scheduling method according to claim 1, further comprising:
monitoring the number of ready tasks in each core cluster in the multi-core cluster in real time;
and if the number of ready tasks in the core cluster exceeds a set task number threshold value and an inactive kernel exists in the core cluster of which the number of ready tasks exceeds the set task number threshold value, activating the inactive kernel.
7. The CPU core scheduling method of claim 1, wherein the multi-core cluster platform architecture is a three-core cluster platform architecture.
8. A CPU kernel scheduling device is applied to a terminal, a frame drawing application is installed on the terminal, a CPU of the terminal supports a multi-core cluster platform architecture, and the CPU kernel scheduling device comprises:
a determining unit, configured to determine scheduling delay times of cores in each core cluster in the multi-core cluster in a frame drawing cycle, respectively;
the scheduling unit is used for scheduling a specified number of tasks in a first cluster to a second cluster when the scheduling delay time of each core in the first cluster exceeds a specified delay time threshold;
and the second cluster is a cluster different from the first cluster, and the performance index meets the requirement of running the specified number of tasks.
9. The CPU core scheduling apparatus of claim 8, wherein the scheduling unit is further configured to:
determining a weight value of a running task in each core cluster in the multi-core cluster according to the size and the importance degree of the task load;
the task weight value with a large load is higher than the task weight value with a low load, and the task weight value with a high importance degree is higher than the task weight value with a low importance degree.
10. The CPU core scheduling apparatus according to claim 9, wherein the scheduling unit schedules a specified number of tasks in the first cluster to a second cluster by:
and scheduling the tasks in the specified number in the first cluster into the second cluster according to the sequence of the weighted values of the tasks from high to low.
11. The CPU core scheduling apparatus according to any one of claims 8 to 10, wherein the second cluster is a larger core cluster with a performance index higher than that of the first cluster in the multi-core cluster, and/or the second cluster is a core cluster in which a sum of scheduling delay times of respective cores in the multi-core cluster is smaller than that of respective cores in the first cluster.
12. The CPU core scheduling device of claim 8, wherein the determining unit determines the scheduling delay time of each core in each core cluster in the multi-core cluster in a frame drawing cycle when a new task is enqueued or task scheduling is switched.
13. The CPU core scheduling device of claim 8, wherein the scheduling unit is further configured to:
monitoring the number of ready tasks in each core cluster in the multi-core cluster in real time;
and when the number of ready tasks in the core cluster exceeds a set task number threshold value and an inactive kernel exists in the core cluster of which the number of ready tasks exceeds the set task number threshold value, activating the inactive kernel.
14. The CPU core scheduling device of claim 8, wherein the multi-core cluster platform architecture is a three-core cluster platform architecture.
15. A CPU core scheduling apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: executing the CPU core scheduling method of any of claims 1 to 7.
16. A non-transitory computer readable storage medium, instructions in which, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the CPU core scheduling method of any one of claims 1 to 7.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010011545.8A | 2020-01-06 | 2020-01-06 | CPU kernel scheduling method, CPU kernel scheduling device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111198757A (en) | 2020-05-26 |
CN111198757B CN111198757B (en) | 2023-11-28 |
Family
ID=70746789
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010011545.8A Active CN111198757B (en) | 2020-01-06 | 2020-01-06 | CPU kernel scheduling method, CPU kernel scheduling device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111198757B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113094172A (en) * | 2021-04-01 | 2021-07-09 | 北京天融信网络安全技术有限公司 | Server management method and device applied to distributed storage system |
CN113391902A (en) * | 2021-06-22 | 2021-09-14 | 未鲲(上海)科技服务有限公司 | Task scheduling method and device and storage medium |
CN114531544A (en) * | 2022-02-11 | 2022-05-24 | 维沃移动通信有限公司 | Recording method, device, equipment and computer storage medium |
WO2022111466A1 (en) * | 2020-11-24 | 2022-06-02 | 北京灵汐科技有限公司 | Task scheduling method, control method, electronic device and computer-readable medium |
WO2022247189A1 (en) * | 2021-05-24 | 2022-12-01 | 北京灵汐科技有限公司 | Core control method and apparatus for many-core system, and many-core system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130196709A1 (en) * | 2012-01-31 | 2013-08-01 | Lg Electronics Inc. | Mobile terminal, controlling method thereof and recording medium thereof |
US20160062798A1 (en) * | 2014-09-01 | 2016-03-03 | Samsung Electronics Co., Ltd. | System-on-chip including multi-core processor and thread scheduling method thereof |
US20160139655A1 (en) * | 2014-11-17 | 2016-05-19 | Mediatek Inc. | Energy Efficiency Strategy for Interrupt Handling in a Multi-Cluster System |
WO2017065629A1 (en) * | 2015-10-12 | 2017-04-20 | Huawei Technologies Co., Ltd. | Task scheduler and method for scheduling a plurality of tasks |
US20180157527A1 (en) * | 2016-12-07 | 2018-06-07 | Mstar Semiconductor, Inc. | Device and method for dynamically adjusting task loading of multi-core processor |
CN110287245A (en) * | 2019-05-15 | 2019-09-27 | 北方工业大学 | Method and system for scheduling and executing distributed ETL (extract transform load) tasks |
Also Published As
Publication number | Publication date |
---|---|
CN111198757B (en) | 2023-11-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111198757B (en) | CPU kernel scheduling method, CPU kernel scheduling device and storage medium | |
US11188961B2 (en) | Service execution method and device | |
JP6567768B2 (en) | Wireless communication radio management with emphasis on power consumption | |
RU2663212C2 (en) | Method and device for starting energy-saving mode | |
CN111240817B (en) | Resource scheduling method, resource scheduling device and storage medium | |
CN103140831B (en) | The system and method for thread is performed at processor | |
CN107783803B (en) | System optimization method and device of intelligent terminal, storage medium and intelligent terminal | |
CN107402813B (en) | Resource allocation method, mobile terminal and computer readable storage medium | |
CN108196482B (en) | Power consumption control method and device, storage medium and electronic equipment | |
CN106020670A (en) | Screen lightening control method, device and electronic equipment | |
CN110890092B (en) | Wake-up control method and device and computer storage medium | |
CN109710330B (en) | Method and device for determining running parameters of application program, terminal and storage medium | |
WO2022262434A1 (en) | Power optimization method and electronic device | |
CN117130773B (en) | Resource allocation method, device and equipment | |
CN111581174A (en) | Resource management method and device based on distributed cluster system | |
CN106095544B (en) | Central processing unit control method and device | |
CN111240835A (en) | CPU working frequency adjusting method, CPU working frequency adjusting device and storage medium | |
CN110856196B (en) | WLAN service optimization method, terminal device and storage medium | |
WO2023227075A1 (en) | Resource management and control method, and electronic device and medium | |
CN113254092B (en) | Processing method, apparatus and storage medium | |
CN112416580A (en) | Method, device and medium for determining optimal resource allocation mode in application runtime | |
CN116954931B (en) | Bandwidth allocation method and device, storage medium and electronic equipment | |
CN113132263A (en) | Method and device for scheduling core processor and storage medium | |
US20150271766A1 (en) | Method, terminal device and system for controlling transmission | |
CN115712489A (en) | Task scheduling method and device for deep learning platform and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |