CN108549574B - Thread scheduling management method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN108549574B
CN108549574B
Authority
CN
China
Prior art keywords
thread
processor
migrating
threads
executing
Prior art date
Legal status
Active
Application number
CN201810200639.2A
Other languages
Chinese (zh)
Other versions
CN108549574A (en)
Inventor
陈奂彣
Current Assignee
Oneplus Technology Shenzhen Co Ltd
Original Assignee
Oneplus Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Oneplus Technology Shenzhen Co Ltd
Priority to CN201810200639.2A
Publication of CN108549574A
Application granted
Publication of CN108549574B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load

Abstract

The application relates to a thread scheduling management method and apparatus, a computer device, and a storage medium. The method comprises the following steps: selecting a source processor (from which threads may be migrated out) and a destination processor (into which threads may be migrated), and judging whether the source processor and the destination processor satisfy a thread migration condition, wherein the source processor comprises one or more thread queues; if yes, searching the threads running in the thread queues on the source processor, and calculating the residual load to be migrated corresponding to each found thread; and selecting a target thread according to the residual load to be migrated, and scheduling the target thread to the destination processor. By adopting the method, over-scheduling caused by migrating a thread whose load is too large can be avoided.

Description

Thread scheduling management method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for thread scheduling management, a computer device, and a storage medium.
Background
With the development of computer technology, threads on the processors of a computer device can be scheduled dynamically while the device runs, so that the load is balanced across the processors and thread execution efficiency is improved. Threads on a processor differ in priority. Existing thread scheduling methods select threads for scheduling only according to priority; because different threads carry different loads, such methods may leave the load unbalanced after scheduling, that is, over-scheduling occurs. How to avoid over-scheduling during thread scheduling has therefore become a technical problem to be solved.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a thread scheduling management method, an apparatus, a computer device, and a storage medium, which can solve the problem of over-scheduling in the thread scheduling process.
A method of thread scheduling management, the method comprising:
selecting a source processor and a destination processor, and judging whether the source processor and the destination processor satisfy a thread migration condition, wherein the source processor comprises one or more thread queues;
if yes, searching the threads running in the thread queues on the source processor, and calculating the residual load to be migrated corresponding to each found thread;
and selecting a target thread according to the residual load to be migrated, and scheduling the target thread to the destination processor.
In one embodiment, the step of searching the threads running in the thread queues on the source processor comprises: acquiring information of the processor core group where the source processor is located and of the processor core group where the destination processor is located; when the source processor and the destination processor are in the same processor core group, acquiring the number of threads in the highest-priority thread queue on the source processor; if there are multiple such threads, searching the thread queues in order of priority from high to low; and if there is only one, searching the thread queues in order of priority from low to high.
In one embodiment, the method further comprises: when the processor core group of the source processor is a small core group and the processor core group of the destination processor is a large core group, executing the following steps: if no thread executing the predetermined task runs on the destination processor, searching the thread queue of threads executing the predetermined task in order of load from high to low; and if no thread executing the predetermined task runs on either the source processor or the destination processor, ending the search.
In one embodiment, the method further comprises: when the processor core group of the source processor is a large core group and the processor core group of the destination processor is a small core group, executing the following steps: if the total load of the threads executing the predetermined task on the source processor is the largest within the large core group, searching the thread queue of threads executing the predetermined task in order of load from low to high; otherwise, ending the search of that thread queue.
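The two core-group embodiments above select opposite search orders for the queue of predetermined-task threads. The following is a minimal sketch of that decision logic; the names (`CoreGroup`, `predetermined_search_order`) and the behavior in cases the text leaves unspecified are illustrative assumptions, not patent terminology.

```python
from enum import Enum

class CoreGroup(Enum):
    LITTLE = "small core group"
    BIG = "large core group"

def predetermined_search_order(src_group, dst_group,
                               src_runs_task, dst_runs_task,
                               src_load_is_max_in_big_group):
    """Return 'high_to_low', 'low_to_high', or None (end the search)
    for the queue of threads executing the predetermined task."""
    if src_group is CoreGroup.LITTLE and dst_group is CoreGroup.BIG:
        if not dst_runs_task and src_runs_task:
            return "high_to_low"   # move the heaviest predetermined-task thread up
        # neither processor runs such a thread (or the destination already
        # does, a case the text does not specify): end the search
        return None
    if src_group is CoreGroup.BIG and dst_group is CoreGroup.LITTLE:
        if src_load_is_max_in_big_group:
            return "low_to_high"   # offload only the lightest such thread
        return None
    return None  # same-group case is handled by step 304
```

A usage example: for a little-core source that runs a predetermined-task thread and a big-core destination that does not, the function returns `"high_to_low"`.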
In one embodiment, the method further comprises: acquiring storage resource information corresponding to a running thread; counting, using the storage resource information, the number of times the running thread accesses an input/output device within a preset period; and if that number is greater than a first threshold, marking the running thread as a first-type thread.
In one embodiment, the method further comprises: acquiring storage resource information corresponding to a running thread; counting, using the storage resource information, the number of times the running thread accesses the memory within a preset period; and if that number is greater than a second threshold, marking the running thread as a second-type thread.
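The two classification embodiments above can be sketched as a simple access-count test. The threshold values and function names below are illustrative assumptions; the patent does not give concrete numbers.

```python
# Hypothetical thresholds for accesses within the preset period
FIRST_THRESHOLD = 100    # I/O-device accesses -> first-type thread
SECOND_THRESHOLD = 1000  # memory accesses     -> second-type thread

def classify_thread(io_accesses, mem_accesses):
    """Return the type labels earned by a running thread, based on its
    I/O-device and memory access counts from the storage resource info."""
    kinds = []
    if io_accesses > FIRST_THRESHOLD:
        kinds.append("first-type")   # I/O-intensive
    if mem_accesses > SECOND_THRESHOLD:
        kinds.append("second-type")  # memory-intensive
    return kinds
```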
A thread scheduling management apparatus, the apparatus comprising:
the selection module is used for selecting a source processor and a destination processor and judging whether the source processor and the destination processor satisfy a thread migration condition, wherein the source processor comprises one or more thread queues;
the searching module is used for searching, if the thread migration condition is satisfied, the threads running in the thread queues on the source processor, and calculating the residual load to be migrated corresponding to each found thread;
and the scheduling module is used for selecting a target thread according to the residual load to be migrated and scheduling the target thread to the destination processor.
A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer program: selecting a source processor and a destination processor, and judging whether the source processor and the destination processor satisfy a thread migration condition, wherein the source processor comprises one or more thread queues; if yes, searching the threads running in the thread queues on the source processor, and calculating the residual load to be migrated corresponding to each found thread; and selecting a target thread according to the residual load to be migrated, and scheduling the target thread to the destination processor.
A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, carrying out the steps of: selecting a source processor and a destination processor, and judging whether the source processor and the destination processor satisfy a thread migration condition, wherein the source processor comprises one or more thread queues; if yes, searching the threads running in the thread queues on the source processor, and calculating the residual load to be migrated corresponding to each found thread; and selecting a target thread according to the residual load to be migrated, and scheduling the target thread to the destination processor.
According to the thread scheduling management method, apparatus, computer device, and storage medium above, the terminal calculates the residual load to be migrated for each thread, selects a target thread according to that residual load, and schedules the target thread to the destination processor. The thread with the most appropriate load can thus be selected for scheduling, over-scheduling caused by migrating an overly loaded thread is avoided, and the reliability of thread scheduling is improved.
Drawings
FIG. 1 is a flow diagram illustrating a method for thread scheduling management in one embodiment;
FIG. 2 is a thread state diagram in one embodiment;
FIG. 3 is a flowchart illustrating the steps of searching the threads in the thread queues running on the source processor in one embodiment;
FIG. 4a is a state diagram of a running thread in one embodiment;
FIG. 4b is a schematic diagram of the thread migration state of FIG. 4a;
FIG. 5 is a block diagram of an apparatus for thread scheduling management in one embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In an embodiment, as shown in FIG. 1, a thread scheduling management method is provided. The method is described here as applied to a terminal and includes the following steps:
Step 102, selecting a source processor and a destination processor, and judging whether the source processor and the destination processor satisfy a thread migration condition, wherein the source processor comprises one or more thread queues.
The processor of the terminal may be at least one of a central processing unit (CPU), a graphics processing unit (GPU), a video processing unit (VPU), and the like. The processor may have a single-core or a multi-core architecture; in a multi-core architecture, each core is regarded as one processor. In this embodiment, the terminal's processor is taken to have a multi-core architecture, and the source processor and the destination processor are each one core.
At any time during operation of the terminal, the threads on the processors may be unevenly loaded: the ready queues of some processors hold too many threads waiting for processor time while other processors run too few, so overall thread processing efficiency is low. To improve efficiency, threads must be scheduled among the processors so that the processors reach load balance.
The terminal selects a source processor and a destination processor. It may pick any two processors at random, or pick two processors in turn according to their address information. The terminal then judges whether the source processor and the destination processor satisfy the thread migration condition. Specifically, the terminal acquires the remaining loadable amount of the destination processor and the remaining loadable amount of the source processor, calculates the difference between them, and determines that the thread migration condition is satisfied when the difference is greater than a preset value. A processor has a fixed total load, namely the maximum thread load it can process; its remaining loadable amount is the difference between its total load and the load of the threads currently running on it.
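The migration-condition test above can be sketched in a few lines. This is a minimal illustration; the function names and the preset value are assumptions for the example, and the patent does not fix concrete units.

```python
PRESET = 10  # preset value for the load difference (hypothetical units)

def remaining_loadable(total_load, current_load):
    """Remaining loadable amount = fixed total load minus the load of
    the threads currently running on the processor."""
    return total_load - current_load

def meets_migration_condition(dst_total, dst_load, src_total, src_load):
    """True when the destination's remaining loadable amount exceeds the
    source's by more than the preset value."""
    diff = (remaining_loadable(dst_total, dst_load)
            - remaining_loadable(src_total, src_load))
    return diff > PRESET
```

For example, a destination with 80 units spare against a source with 5 units spare gives a difference of 75, which exceeds the preset value, so migration is considered.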
The source processor comprises one or more thread queues. Threads in the same queue share the same priority; threads in different queues have different priorities. In this embodiment, a thread executing a predetermined task is denoted a first thread, and a thread woken by the first thread is denoted a second thread. The terminal assigns a first priority to the first thread and a second priority to the second thread. The first priority may be the highest priority; the second priority may be the next-highest or a lower priority. In this embodiment, the first priority is the highest and the second priority is the next highest. Threads executing other tasks are assigned the lowest priority. For example, the terminal assigns priority 1 to a thread executing the predetermined task, priority 2 to a thread woken by it, and priority 3 to threads executing other tasks, where 1, 2, and 3 are successively lower priorities.
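The example priority assignment above can be sketched as follows; the constant names and function are illustrative, not patent terminology, and 1 is the highest priority as in the text.

```python
PRIORITY_PREDETERMINED = 1  # first thread: executes the predetermined task
PRIORITY_WOKEN = 2          # second thread: woken by the first thread
PRIORITY_OTHER = 3          # threads executing other tasks

def assign_priority(executes_predetermined, woken_by_predetermined):
    """Map a thread's role to its queue priority (lower number = higher)."""
    if executes_predetermined:
        return PRIORITY_PREDETERMINED
    if woken_by_predetermined:
        return PRIORITY_WOKEN
    return PRIORITY_OTHER
```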
The predetermined task includes drawing a picture, playing a sound, responding to feedback, and the like. A predetermined task is identified in advance by judging whether a predetermined task identifier exists for a thread running on a processor of the terminal. Each predetermined task has a corresponding predetermined task identifier, prerecorded in a file. The file recording the predetermined task identifiers may be called a thread identifier file and may take the form of a database table. The thread identifier file is stored in a storage resource of the terminal; storage resources include, but are not limited to, registers, caches, memory, and external storage devices.
A thread has several states: a new state, a ready state, a running state, a blocked state, and a dead state. As shown in FIG. 2, a thread is always in one of these states from creation through running to completion. While running, a thread may enter the blocked state for various reasons; a blocked thread yields the processor temporarily and, if its execution has not finished, enters a waiting queue. A blocked thread does not resume running by itself and must be woken to resume. A running thread may also enter the dead state, either by exiting normally when execution ends or by being terminated on an exception. A thread executing the predetermined task may wake multiple threads while it runs. The identifiers of the threads woken by the thread executing the predetermined task are also prerecorded in the thread identifier file; specifically, the identifier of each woken thread is recorded in correspondence with the identifier of the waking thread that executes the predetermined task.
Step 104, if yes, searching the threads running in the thread queues on the source processor, and calculating the residual load to be migrated corresponding to each found thread.
If the source processor and the destination processor satisfy the thread migration condition, the terminal searches the thread queues running on the source processor. The thread migration condition requires that the current load to be migrated exceed a preset value, where the current load to be migrated is the difference between the remaining loadable amount of the destination processor and that of the source processor. In this embodiment, the terminal searches the highest-priority thread queue first. Each time the terminal finds a thread, it calculates the residual load to be migrated corresponding to that thread, namely the difference between the remaining loadable amounts of the destination and source processors that would result if the thread were scheduled to the destination processor. For example, if the remaining loadable amount of the destination processor is m, the remaining loadable amount of the source processor is n, and the load of the found thread is p, the residual load to be migrated corresponding to the thread is (m − p) − (n + p) = m − n − 2p.
Step 106, selecting a target thread according to the residual load to be migrated, and scheduling the target thread to the destination processor.
While searching a thread queue, the terminal judges, for each found thread, whether to select it as the target thread. Specifically, the terminal decides according to the thread's load and the thread's corresponding residual load to be migrated: when the thread's load is less than twice its residual load to be migrated, the terminal selects the thread as the target thread. The scheduler of the terminal then schedules the target thread to the destination processor.
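The residual-load formula of step 104 and the selection rule of step 106 fit together as in the sketch below. The variable names m, n, p follow the example in the text; the loop and function names are illustrative assumptions.

```python
def residual_to_migrate(m, n, p):
    """Residual load to be migrated if a thread of load p moves from the
    source (remaining loadable n) to the destination (remaining loadable m)."""
    return (m - p) - (n + p)  # simplifies to m - n - 2*p

def pick_target(thread_loads, m, n):
    """Return the first thread load that qualifies as the migration target:
    the thread's load must be less than twice its residual load to be
    migrated, so that migrating it cannot overshoot the balance point."""
    for p in thread_loads:
        if p < 2 * residual_to_migrate(m, n, p):
            return p
    return None  # no suitable thread in this queue
```

With m = 50, n = 10, a thread of load 30 is rejected (its residual would be −20), while a thread of load 8 is selected (residual 24, and 8 < 48).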
In this embodiment, the terminal calculates the residual load to be migrated for each thread, selects the target thread according to that residual load, and schedules the target thread to the destination processor, so that the thread with the most appropriate load is selected for scheduling, over-scheduling caused by migrating an overly loaded thread is avoided, and the reliability of thread scheduling is improved.
In one embodiment, the thread scheduling management method further includes: judging whether the threads corresponding to the currently searched thread queue have reached load balance between the source processor and the destination processor; if yes, searching the next thread queue; if not, continuing to search the current thread queue.
Different threads make different demands on the processor; for example, a thread executing the predetermined task must run efficiently, so it should always be given a better processor. Threads also occupy different amounts of processor time. Suppose a thread of priority A occupies time a and a thread of priority B occupies time b, and that the load of three A threads equals the load of one A thread plus four B threads. Under plain load balancing, three A threads could run on one processor while one A thread and four B threads run on another. On the first processor an A thread waits for two other A threads, a waiting time of 2a; on the second, the A thread waits for four B threads, a waiting time of 4b. Since 2a may differ from 4b, the A threads wait for different times and execute with different efficiency on the two processors; that is, threads of the same priority are not balanced in execution efficiency across processors. Threads of each priority must therefore be balanced across the processors, not just the total load.
The terminal judges whether the threads corresponding to the currently searched thread queue have reached load balance between the source processor and the destination processor. Specifically, the terminal obtains the difference between the number of threads in that queue on the source processor and the number in the corresponding queue on the destination processor; if the absolute value of the difference is less than or equal to a preset threshold, the queue is judged to have reached load balance between the two processors, and otherwise it is judged not to have.
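The per-queue balance test, and the "balanced queue moves the search to the next priority" rule, can be sketched as follows. The threshold value and function names are illustrative assumptions.

```python
THRESHOLD = 1  # hypothetical preset threshold for the thread-count difference

def queue_balanced(src_queue_len, dst_queue_len, threshold=THRESHOLD):
    """A priority queue counts as balanced when the thread counts on the
    source and destination processors differ by at most the threshold."""
    return abs(src_queue_len - dst_queue_len) <= threshold

def next_queue_to_search(queues_src, queues_dst, threshold=THRESHOLD):
    """Queues are ordered from highest to lowest priority. Return the index
    of the first queue that is not yet balanced, or None if all are."""
    for i, (s, d) in enumerate(zip(queues_src, queues_dst)):
        if not queue_balanced(len(s), len(d), threshold):
            return i
    return None
```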
Further, if the threads corresponding to the currently searched thread queue have not reached load balance between the source and destination processors, the terminal continues to search the current thread queue: it traverses the queue in the order in which the threads are stored and performs step 104 on each found thread. If the queue has reached load balance, the terminal searches the next thread queue, that is, the queue of the next priority, and performs step 104.
Further, the terminal judges whether the source processor and the destination processor have reached load balance; if yes, it ends the thread search, that is, ends the thread scheduling, and if not, it continues to search the thread queues. Specifically, after scheduling a target thread to the destination processor, the terminal recalculates the current load to be migrated and ends the migration if it is smaller than the preset value.
In this embodiment, if the threads of the currently searched thread queue are balanced between the source and destination processors, the terminal moves on to the next thread queue for scheduling; otherwise it continues searching the current queue. Threads of each priority are thus balanced between the source and destination processors, which effectively improves thread execution efficiency and processor utilization.
In one embodiment, the step of judging whether the source processor and the destination processor satisfy the thread migration condition comprises: when the source processor and the destination processor satisfy either of the following conditions: 1) the destination processor runs a thread executing the predetermined task; 2) the only selectable thread to be migrated on the source processor is a thread executing the predetermined task, and the destination processor is not in an idle state; judging that the thread migration condition is not satisfied; otherwise, judging that the thread migration condition is satisfied.
In this embodiment, the thread migration condition is further restricted with respect to the thread executing the predetermined task, so that this thread obtains an optimal processor with minimal migration and its execution efficiency is preserved. Specifically, the terminal judges whether the source processor and the destination processor satisfy either of the following two conditions:
1) The destination processor runs a thread executing the predetermined task. More precisely, among all processors of the terminal, the destination processor does not have the fewest running threads that execute the predetermined task. To keep a thread executing the predetermined task efficient, its waiting time must be minimized; adding such threads to a processor that already runs them increases that waiting time. Migration is therefore disallowed when the destination processor's count of predetermined-task threads is not the minimum among all processors of the terminal.
2) There is one and only one selectable thread to be migrated on the source processor, it executes the predetermined task, and the destination processor is not in an idle state. The selectable threads to be migrated are the threads on the processor's ready queue. If the ready queue holds exactly one thread, that thread executes the predetermined task, and the destination processor is not idle, then the thread could not run immediately even if scheduled to the destination: it would at least wait for the destination to finish the thread currently occupying it, just as it would wait on the source. Moreover, the terminal's scheduler consumes terminal resources and processing time whenever it migrates a thread, which itself harms execution efficiency. In this case, therefore, no scheduling is performed; that is, condition 2) disallows thread migration.
The terminal judges whether condition 1) is satisfied as follows: the terminal inspects the destination processor, acquires the identifiers of the threads running on it, looks those identifiers up in the thread identifier file, and judges whether any acquired identifier is recorded there. If so, the destination processor runs a thread executing the predetermined task and condition 1) is satisfied; otherwise it is not.
The terminal judges whether condition 2) is satisfied as follows: the terminal inspects the source processor, acquires the identifiers of the threads on its ready queue, looks them up in the thread identifier file, and determines how many of the ready-queue identifiers are recorded there. The terminal also inspects the destination processor and obtains the number of threads on it. If the source processor's ready queue holds a single thread whose identifier is recorded in the thread identifier file, and the destination processor has one or more threads, condition 2) is satisfied; otherwise it is not.
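The two disallow-conditions can be sketched together as a single predicate. Here `predetermined_ids` stands in for the thread identifier file, and all names and signatures are illustrative assumptions.

```python
def migration_disallowed(dst_running_ids, src_ready_ids,
                         dst_thread_count, predetermined_ids):
    """True when condition 1) or 2) holds, i.e. the thread migration
    condition is NOT satisfied."""
    # Condition 1): the destination already runs a predetermined-task thread.
    if any(tid in predetermined_ids for tid in dst_running_ids):
        return True
    # Condition 2): the only candidate on the source's ready queue executes
    # the predetermined task, and the destination is not idle.
    if (len(src_ready_ids) == 1
            and src_ready_ids[0] in predetermined_ids
            and dst_thread_count >= 1):
        return True
    return False
```

For example, with `predetermined_ids = {"ui"}`, migration is disallowed when the destination runs thread `"ui"`, or when the source's only ready thread is `"ui"` and the destination is busy, but allowed when the destination is idle.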
If the source processor and the destination processor satisfy condition 1) or condition 2), the terminal judges that the thread migration condition is not satisfied and ends the thread migration. If neither condition 1) nor condition 2) is satisfied, the terminal judges that the source processor and the destination processor satisfy the thread migration condition and proceeds to step 104.
In this embodiment, migration is ended when the destination processor runs a thread executing the predetermined task, or when the only migration candidate on the source processor executes the predetermined task and the destination processor is not idle. Processor contention for the thread executing the predetermined task is thereby reduced, so that this thread can execute efficiently.
In one embodiment, as shown in FIG. 3, the step of searching the threads in the thread queues running on the source processor comprises:
Step 302, obtaining information of the processor core group where the source processor is located and of the processor core group where the destination processor is located.
For a multi-processor terminal, the processors may form a homogeneous or a heterogeneous architecture. In a homogeneous architecture, all processors belong to the same processor core group and share the same structure and performance, that is, the same clock frequency and therefore the same thread-processing efficiency. In a heterogeneous architecture, the processors are distributed over several processor core groups; processors in different core groups differ in structure, performance and clock frequency, and therefore in thread-processing efficiency. Threads must consequently be scheduled with the processor core groups in mind, so that higher-priority threads are preferentially scheduled to processors with higher processing efficiency.
In this embodiment, the terminal acquires the information of the processor core group in which the source processor is located and that of the core group in which the destination processor is located. Specifically, the terminal acquires the information of the source processor and parses it to obtain the information of its core group, and likewise obtains the information of the destination processor's core group. The terminal then judges, from the two pieces of core group information, whether the source processor and the destination processor belong to the same processor core group.
Step 304: when the source processor and the destination processor are in the same processor core group, acquiring the number of threads in the highest-priority thread queue on the source processor; if there are multiple such threads, searching the thread queues in order of priority from high to low; if there is only one, searching the thread queues in order of priority from low to high.
When the source processor and the destination processor are in the same processor core group, the terminal acquires the number of threads in the highest-priority thread queue on the source processor. When there are multiple such threads, the terminal searches the thread queues from high priority to low; specifically, it selects the highest-priority thread queue and executes step 104. When there is only one such thread, the terminal searches the thread queues from low priority to high and executes step 104.
In this embodiment, when multiple highest-priority threads exist on the source processor, the threads in the highest-priority queue are searched and scheduled first, ensuring that the highest-priority threads are load-balanced across the source and destination processors and that their execution efficiency is preserved.
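The queue-search direction described above can be sketched as follows; the `queues` mapping and the function name are illustrative assumptions, not part of the embodiment.

```python
def queue_search_order(queues):
    """queues maps a priority level (1 = highest) to the list of
    threads queued at that level on the source processor."""
    top = min(queues)  # numerically smallest level = highest priority
    if len(queues[top]) > 1:
        # Several highest-priority threads: search queues from high
        # priority to low so those threads can be load-balanced first.
        return sorted(queues)
    # A single highest-priority thread: leave it in place and search
    # from low priority to high for migration candidates.
    return sorted(queues, reverse=True)
```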
In one embodiment, the step of finding a thread within a thread queue running on the source processor further comprises: step 306, when the core group of the source processor is a small core group and the core group of the destination processor is a large core group, executing the following: if no thread executing the predetermined task runs on the destination processor, searching the queue of threads executing the predetermined task in order of load from high to low; and if no thread executing the predetermined task runs on the source processor either, ending the search.
In a terminal whose processors form a heterogeneous architecture, the processors are generally divided into large cores and small cores, each with its own fixed logic structure comprising logic units such as caches, execution units, instruction-level units and bus interfaces. A core group consisting of large cores is a large core group; one consisting of small cores is a small core group. The logic units of a large core outperform those of a small core; for example, the large core's clock frequency is higher, so the thread execution efficiency of the large core group exceeds that of the small core group. To ensure that higher-priority threads enjoy higher processing efficiency, the differing processing efficiency of the small and large core groups must be considered when scheduling threads.
In this embodiment, when the core group of the source processor is the small core group and that of the destination processor is the large core group, that is, when the source processor's thread-processing efficiency is lower than the destination's, the terminal further determines whether a thread executing the predetermined task exists on the destination processor. Specifically, the terminal acquires the threads on the destination processor, searches the thread identifier file, judges whether those threads' identifiers are recorded in it, and obtains the number of threads executing the predetermined task on the destination processor. If no such thread exists on the destination processor, the terminal searches the source processor's queue of threads executing the predetermined task in order of load from high to low: it acquires those threads and their load amounts, sorts them by load, searches the queue from the largest load to the smallest, and executes step 104. If no thread executing the predetermined task runs on the source processor either, the search ends.
In this embodiment, when threads executing the predetermined task run on the source processor, the one with the largest load is preferentially scheduled to the destination processor, so load balancing of the threads executing the predetermined task across the source and destination processors is achieved with a minimum of scheduling. When a thread executing the predetermined task already exists on the destination processor and none exists on the source processor, the search, and with it the thread scheduling between the two processors, ends; the thread executing the predetermined task therefore runs on the destination processor at maximum efficiency, further improving the execution efficiency of the predetermined task.
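A minimal sketch of the small-core-to-large-core rule above, assuming per-thread loads are available as plain numbers; the dictionary layout and names are illustrative assumptions.

```python
def pick_small_to_big(src_task_loads, dst_task_loads):
    """src_task_loads / dst_task_loads: {thread_id: load} for threads
    executing the predetermined task on the small-core source and the
    large-core destination."""
    if dst_task_loads:
        # The destination large core already runs a predetermined-task
        # thread: end the search, the task sits on the faster core.
        return None
    if not src_task_loads:
        return None  # nothing to migrate on the source either
    # Otherwise migrate the heaviest predetermined-task thread upward.
    return max(src_task_loads, key=src_task_loads.get)
```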
In one embodiment, the step of finding a thread within a thread queue running on the source processor further comprises: step 308, when the core group of the source processor is a large core group and the core group of the destination processor is a small core group, executing the following: if the total load of the threads executing the predetermined task on the source processor is the largest in the large core group, searching the queue of threads executing the predetermined task in order of load from small to large; otherwise, ending the search of that queue.
In this embodiment, when the core group of the source processor is a large core group and that of the destination processor is a small core group, that is, when the source processor's thread-processing efficiency is higher than the destination's, the terminal further examines the threads executing the predetermined task on the source processor. Specifically, the terminal acquires those threads and the load of each, sums the loads to obtain the total load of threads executing the predetermined task, acquires the corresponding totals for the other processors in the terminal, and judges whether the source processor's total is the largest among all processors. If so, the terminal searches the queue of threads executing the predetermined task in order of load from small to large. Otherwise, the terminal ends the search of that queue, instead searches the thread queues in order of priority from low to high, and executes step 104.
In this embodiment, when the total load of the threads executing the predetermined task on the source processor is the largest, the thread executing the predetermined task with the smallest load is scheduled to the destination processor; otherwise no thread executing the predetermined task is scheduled. The threads executing the predetermined task are thus retained to the greatest extent on the source processor, whose execution efficiency is higher, so the predetermined task executes more efficiently.
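The large-core-to-small-core rule can be sketched similarly; the `per_cpu_loads` mapping and the function name are illustrative assumptions.

```python
def pick_big_to_small(per_cpu_loads, src_cpu):
    """per_cpu_loads: {cpu: {thread_id: load}} for threads executing the
    predetermined task on each large-core processor; src_cpu is the source."""
    totals = {cpu: sum(loads.values()) for cpu, loads in per_cpu_loads.items()}
    src = per_cpu_loads[src_cpu]
    # Offload only when the source carries the largest total load of
    # predetermined-task threads; then pick the lightest thread so the
    # heavy ones stay on the more efficient large core.
    if src and totals[src_cpu] == max(totals.values()):
        return min(src, key=src.get)
    return None
```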
In one embodiment, the thread scheduling management method further comprises: acquiring storage resource information corresponding to a running thread; counting, from the storage resource information, the number of times the running thread accesses the input/output device within a preset time; and if that number is greater than a first threshold, recording the running thread as a first-type thread.
Some tasks executed by threads are Input/Output (I/O) intensive. Tasks involving network and disk I/O are I/O-intensive tasks; they are characterized by low processor consumption because I/O is far slower than the processor and memory, so such a task spends most of its time waiting for I/O operations to complete. Threads executing such tasks need to be identified so they can be managed in a targeted way, improving the terminal's running speed.
When the thread is in a running state, the original data required by the thread to run comes from the memory. During the running of a thread, some data may be frequently read and stored in registers and caches. When thread execution ends, these cached data should be written back to memory as appropriate. The storage resources used by the threads in the running state may include registers, caches, memory, and external storage devices, among others. The information of the storage resource may include an address space of the storage resource used by the thread.
The terminal can access the storage resource of the thread according to the thread identifier recorded in the thread identifier file, and acquire the information of the storage resource used by the running thread.
The storage resource information of a thread includes operation data generated while the thread runs, including the data of each access to an I/O device: specifically, the address information of the I/O device for each access and the time of each access. By accessing the thread's storage resources, the terminal extracts a period during which one or more threads executed continuously (the preset time) and counts the number of I/O-device address entries the storage resource information contains within that period; this count is the number of times the thread accessed I/O devices while running during the preset time. There may be one or more kinds of I/O-device address information. The preset time covers only periods in which the thread ran continuously.
In this embodiment, the terminal compares the counted number of I/O-device accesses of the running thread with the first threshold; when the count exceeds the first threshold, the terminal adds a first-type thread entry to the thread identifier file and records the running thread's identifier against it. The first threshold is a preset constant greater than 0, whose value a person skilled in the art can choose according to the specific terminal. A first-type thread is a thread that performs I/O-intensive tasks.
In a multi-processor terminal, several threads may run simultaneously. The terminal then accesses the storage resources of all running threads, obtains the storage resource information corresponding to each, counts the number of times each running thread accesses the input/output device within the preset time, compares each count with the first threshold, and identifies and records the first-type threads among all running threads.
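Counting I/O-device accesses within the preset time can be sketched as follows; the event-tuple layout and the names are assumptions made for illustration.

```python
def count_io_accesses(events, window):
    """events: (timestamp, io_device_address) pairs taken from the
    thread's storage resource information; window: (start, end) of a
    period in which the thread ran continuously (the preset time)."""
    start, end = window
    return sum(1 for ts, _addr in events if start <= ts <= end)

def is_first_type(events, window, first_threshold):
    # More I/O accesses than the first threshold within the preset
    # time marks the thread as a first-type (I/O-intensive) thread.
    return count_io_accesses(events, window) > first_threshold
```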
Further, the terminal prioritizes the threads on the processors. Specifically, the terminal defines four priority levels, 1, 2, 3 and 4, where 1 is the highest and 2, 3 and 4 decrease in order. The terminal sets the priority of a thread executing the predetermined task to 1, that of a first-type thread to 2, and those of the remaining threads to 3 or 4. The terminal's thread scheduler then schedules the threads according to priority.
As shown in FIG. 4a, thread A, running on the first processor (CPU0), is a thread executing the predetermined task and has priority 1; threads B1 and B2, running on the second processor (CPU1), have priority 3. As shown in FIG. 4b, when CPU0 performs better than CPU1 and the terminal needs to dispatch a thread from CPU1 to CPU0, the terminal temporarily raises the priority of thread B2 to 2; the thread scheduler then selects the higher-priority thread B2 from CPU1 and dispatches it to CPU0. Thread B2 can thus preferentially obtain the first processor, on which thread A, the thread that wakes B2, is running, ensuring that the predetermined task executed by thread A is executed efficiently.
With the thread scheduling management method of this embodiment, the terminal counts the number of times a running thread accesses I/O devices within the preset time, compares the count with the first threshold, identifies and records the I/O-intensive task threads, and gives threads executing I/O-intensive tasks a higher priority than other threads. Such threads therefore obtain a processor ahead of other threads, further improving the execution efficiency of the predetermined task.
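The example of FIG. 4a and FIG. 4b can be sketched as follows, assuming smaller numbers mean higher priority; the helper name and the mapping are illustrative assumptions.

```python
def pick_dispatch_candidate(cpu_threads):
    """cpu_threads: {thread_name: priority} on the source CPU,
    where 1 is the highest priority."""
    return min(cpu_threads, key=cpu_threads.get)

# Before the temporary boost both B1 and B2 sit at priority 3; raising
# B2 to 2 makes the scheduler select B2 for dispatch to CPU0.
threads_on_cpu1 = {'B1': 3, 'B2': 2}
```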
In one embodiment, the thread scheduling management method further comprises: acquiring storage resource information corresponding to a running thread; counting, from the storage resource information, the number of times the running thread accesses the memory within a preset time; and if that number is greater than a second threshold, recording the running thread as a second-type thread.
Threads may also execute compute-intensive tasks, which are characterized by a large amount of computation and heavy processor consumption, such as calculating pi or decoding high-definition video. The running efficiency of a thread executing a compute-intensive task depends on how often it obtains a processor, so identifying compute-intensive tasks for targeted management helps improve the terminal's running speed.
When a thread issues an instruction request to a processor, the instruction and its data are temporarily stored in memory and transferred to the processor when the processor is free; that is, every time a thread issues an instruction to the processor, it accesses memory to stage the instruction and related data. Counting the number of memory accesses a thread makes within a certain time therefore yields the number of times it requests the processor within that time, from which it can be determined whether the thread executes a compute-intensive task.
In this embodiment, the terminal accesses the thread's storage resources according to the thread identifiers recorded in the thread identifier file and acquires the information of the storage resources used by the running thread.
The storage resource information of a thread includes operation data generated while the thread runs, including the data of each memory access operation: specifically, the address information of the memory for each access, each piece of which corresponds to one access time. By accessing the thread's storage resources, the terminal extracts a period during which one or more threads executed continuously (the preset time) and counts the number of memory address entries the storage resource information contains within that period; this count is the number of times the thread accessed memory while running during the preset time. There may be one or more kinds of memory address information.
The terminal compares the counted number of memory accesses of the running thread with the second threshold; when the count exceeds the second threshold, the terminal adds a second-type thread entry to the thread identifier file and records the running thread's identifier against it. The second threshold is a preset constant greater than 0, whose value a person skilled in the art can choose according to the specific terminal. A second-type thread is a thread that performs compute-intensive tasks.
In a multi-processor terminal, several threads may run simultaneously. The terminal then accesses the storage resources of all running threads, obtains the storage resource information corresponding to each, counts the number of times each running thread accesses the memory within the preset time, compares each count with the second threshold, and identifies and records the second-type threads among all running threads.
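The two classifications can be combined into one sketch; the counts are assumed to have already been extracted from the thread's storage resource information, and the names are illustrative assumptions.

```python
def classify_thread(io_count, mem_count, first_threshold, second_threshold):
    """Return the type labels earned by a running thread from its
    I/O-device and memory access counts within the preset time."""
    kinds = []
    if io_count > first_threshold:
        kinds.append('first')   # I/O-intensive task thread
    if mem_count > second_threshold:
        kinds.append('second')  # compute-intensive task thread
    return kinds
```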
Further, the terminal prioritizes the threads on the processors. Specifically, the terminal defines four priority levels, 1, 2, 3 and 4, where 1 is the highest and 2, 3 and 4 decrease in order. The terminal sets the priority of a thread executing the predetermined task to 1, that of a second-type thread to 2, and those of the remaining threads to 3 or 4. The terminal's thread scheduler then schedules the threads according to priority.
With the thread scheduling management method of this embodiment, the terminal counts the number of times a running thread accesses the memory within the preset time, compares the count with the second threshold, identifies and records the compute-intensive task threads, and gives threads executing compute-intensive tasks a higher priority than other threads. Such threads therefore obtain a processor ahead of other threads, further improving the execution efficiency of the predetermined task.
It should be understood that although the steps in the flowcharts of FIGS. 1 and 3 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not performed in a strict order and may be performed in other orders. Moreover, at least some of the steps in FIGS. 1 and 3 may comprise multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments; nor need those sub-steps or stages be performed sequentially, as they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided a thread scheduling management apparatus, including: a selection module 510, a lookup module 520, and a scheduling module 530, wherein:
a selecting module 510, configured to select a source processor and a destination processor and to determine whether they satisfy a thread migration condition, wherein the source processor comprises one or more thread queues;
a searching module 520, configured to search, if the migration condition is satisfied, for a thread running in a thread queue on the source processor and to calculate the remaining load to be migrated corresponding to the found thread; and
a scheduling module 530, configured to select a target thread according to the remaining load to be migrated and to schedule the target thread to the destination processor.
In one embodiment, the selection module is further configured to judge that the thread migration condition is not satisfied when the source processor and the destination processor satisfy either of the following conditions: a thread executing the predetermined task runs on the destination processor; or the only selectable thread to be migrated on the source processor is a thread executing the predetermined task and the destination processor is not in an idle state. Otherwise, the selection module judges that the thread migration condition is satisfied.
In one embodiment, the search module is further configured to obtain information of the processor core group in which the source processor is located and of the processor core group in which the destination processor is located; when the two processors are in the same processor core group, to acquire the number of threads in the highest-priority thread queue on the source processor; and, if there are multiple such threads, to search the thread queues from high priority to low, or, if there is only one, from low priority to high.
In one embodiment, the search module is further configured, when the core group of the source processor is the small core group and that of the destination processor is the large core group, to perform the following: if no thread executing the predetermined task runs on the destination processor, search the queue of threads executing the predetermined task in order of load from high to low; and if no thread executing the predetermined task runs on the source processor either, end the search.
In one embodiment, the search module is further configured, when the core group of the source processor is a large core group and that of the destination processor is a small core group, to perform the following: if the total load of the threads executing the predetermined task on the source processor is the largest in the large core group, search the queue of threads executing the predetermined task in order of load from small to large; otherwise, end the search of that queue.
In one embodiment, the thread scheduling management apparatus further comprises a classification module, configured to acquire storage resource information corresponding to a running thread; to count, from the storage resource information, the number of times the running thread accesses the input/output device within a preset time; and, if that number is greater than a first threshold, to record the running thread as a first-type thread.
In one embodiment, the classification module is further configured to acquire storage resource information corresponding to the running thread; to count, from the storage resource information, the number of times the running thread accesses the memory within the preset time; and, if that number is greater than a second threshold, to record the running thread as a second-type thread.
For the specific limitations of the thread scheduling management apparatus, reference may be made to the limitations of the thread scheduling management method above, which are not repeated here. The modules of the thread scheduling management apparatus can be implemented wholly or partly in software, hardware, or a combination thereof. The modules can be embedded in hardware form in, or be independent of, a processor in the computer device, or be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal; its internal structure may be as shown in FIG. 6. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program; the internal memory provides an environment for running them. The network interface of the computer device communicates with external terminals over a network connection. The computer program, when executed by the processor, implements a thread scheduling management method. The display screen of the computer device may be a liquid crystal or electronic ink display screen, and the input device may be a touch layer covering the display screen, a key, trackball or touchpad arranged on the housing of the computer device, or an external keyboard, touchpad or mouse.
Those skilled in the art will appreciate that the architecture shown in FIG. 6 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which they apply; a particular computer device may include more or fewer components than those shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer program: selecting a source processor and a destination processor, and judging whether they satisfy a thread migration condition, wherein the source processor comprises one or more thread queues; if so, searching for threads running in a thread queue on the source processor and calculating the remaining load to be migrated corresponding to the found threads; and selecting a target thread according to the remaining load to be migrated and scheduling the target thread to the destination processor.
In one embodiment, the processor, when executing the computer program, further performs the following steps: judging that the thread migration condition is not satisfied when the source processor and the destination processor satisfy either of the following conditions: a thread executing the predetermined task runs on the destination processor; or the only selectable thread to be migrated on the source processor is a thread executing the predetermined task and the destination processor is not in an idle state; otherwise, judging that the thread migration condition is satisfied.
In one embodiment, the processor, when executing the computer program, further performs the following steps: acquiring information of the processor core group in which the source processor is located and of the processor core group in which the destination processor is located; when the two processors are in the same processor core group, acquiring the number of threads in the highest-priority thread queue on the source processor; if there are multiple such threads, searching the thread queues from high priority to low; if there is only one, searching the thread queues from low priority to high.
In one embodiment, the processor, when executing the computer program, further performs the following steps: when the core group of the source processor is the small core group and that of the destination processor is the large core group, if no thread executing the predetermined task runs on the destination processor, searching the queue of threads executing the predetermined task in order of load from high to low; and if no thread executing the predetermined task runs on the source processor either, ending the search.
In one embodiment, the processor, when executing the computer program, further performs the following steps: when the core group of the source processor is a large core group and that of the destination processor is a small core group, if the total load of the threads executing the predetermined task on the source processor is the largest in the large core group, searching the queue of threads executing the predetermined task in order of load from small to large; otherwise, ending the search of that queue.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring storage resource information corresponding to a running thread; counting the times of accessing the input and output equipment by the running thread within the preset time by using the storage resource information; and if the number of times of accessing the input and output device is greater than a first threshold value, recording the running thread as a first type thread.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring storage resource information corresponding to a running thread; counting, from the storage resource information, the number of times the running thread accesses memory within a predetermined period; and if the number of memory accesses is greater than a second threshold, marking the running thread as a second-type thread.
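Taken together, the two embodiments above classify a running thread from its access counters. A minimal sketch, assuming the counters have already been gathered over the predetermined window from the thread's storage resource information; the function name and the label strings are illustrative, not from the patent.

```python
def classify_thread(io_accesses, mem_accesses, io_threshold, mem_threshold):
    """Label a thread from its per-window access counters.

    A thread exceeding the first threshold of I/O-device accesses is marked
    first-type (I/O-intensive); one exceeding the second threshold of memory
    accesses is marked second-type (memory-intensive). A thread can earn
    both labels, or neither.
    """
    labels = []
    if io_accesses > io_threshold:
        labels.append("first-type (I/O-intensive)")
    if mem_accesses > mem_threshold:
        labels.append("second-type (memory-intensive)")
    return labels
```

Such labels give the scheduler a cheap behavioral signal: an I/O-intensive thread blocks often and wastes a big core, while a memory-intensive one may benefit from staying near its cached data.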
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, performs the steps of: selecting a source (migrating-out) processor and a destination (migrating-in) processor, and determining whether the source processor and the destination processor satisfy a thread migration condition, wherein the source processor comprises one or more thread queues; if so, searching for threads running in the thread queues on the source processor and calculating the remaining load to be migrated corresponding to each thread found; and selecting a target thread according to the remaining load to be migrated, and scheduling the target thread to the destination processor.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: when the source processor and the destination processor satisfy either of the following conditions: a thread executing a predetermined task is running on the destination processor; or the only candidate thread to be migrated on the source processor is a thread executing the predetermined task and the destination processor is not in an idle state; determining that the thread migration condition is not met; otherwise, determining that the thread migration condition is met.
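The migration condition above, together with the "load less than twice the remaining load to be migrated" selection rule of the main method, can be sketched as follows. The `Thread` class and function signatures are assumptions made for illustration; in a real scheduler the remaining load to be migrated would be recomputed per candidate thread rather than passed in as a constant.

```python
from dataclasses import dataclass


@dataclass
class Thread:
    name: str
    load: int
    runs_predetermined_task: bool = False


def migration_allowed(dst_threads, src_candidates, dst_idle):
    """Thread-migration condition from the embodiment (sketch).

    Migration is refused when the destination is already running a
    predetermined-task thread, or when the source's only candidate executes
    the predetermined task while the destination is busy.
    """
    if any(t.runs_predetermined_task for t in dst_threads):
        return False
    if (len(src_candidates) == 1
            and src_candidates[0].runs_predetermined_task
            and not dst_idle):
        return False
    return True


def pick_target(src_candidates, remaining_to_migrate):
    """Select the first thread whose load is under twice the remaining load
    still to be migrated; return None if no candidate qualifies."""
    for t in src_candidates:
        if t.load < 2 * remaining_to_migrate:
            return t
    return None
```

The factor of two mirrors a common load-balancing heuristic: moving a thread somewhat heavier than the imbalance still narrows the gap, but moving one at least twice the imbalance would overshoot and invert it.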
In one embodiment, the computer program, when executed by the processor, further performs the steps of: acquiring information on the processor core group of the source processor and the processor core group of the destination processor; when the source processor and the destination processor are in the same processor core group, acquiring the number of threads in the highest-priority thread queue on the source processor; if there are multiple such threads, searching the thread queues in order of priority from high to low; and if there is only one such thread, searching the thread queues in order of priority from low to high.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: when the core group of the source processor is a little core group and the core group of the destination processor is a big core group, performing the following steps: if no thread executing the predetermined task is running on the destination processor, searching the thread queues for threads executing the predetermined task in order of load from high to low; and if no thread executing the predetermined task is running on either the destination processor or the source processor, ending the search.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: when the core group of the source processor is a big core group and the core group of the destination processor is a little core group, performing the following steps: if the total load of the threads executing the predetermined task on the source processor is the largest within the big core group, searching the thread queues for threads executing the predetermined task in order of thread load from small to large; otherwise, ending the search of the thread queues for threads executing the predetermined task.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: acquiring storage resource information corresponding to a running thread; counting, from the storage resource information, the number of times the running thread accesses input/output devices within a predetermined period; and if the number of input/output accesses is greater than a first threshold, marking the running thread as a first-type thread.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: acquiring storage resource information corresponding to a running thread; counting, from the storage resource information, the number of times the running thread accesses memory within a predetermined period; and if the number of memory accesses is greater than a second threshold, marking the running thread as a second-type thread.
It will be understood by those skilled in the art that all or part of the processes of the above-described method embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not every possible combination of these technical features has been described; nevertheless, as long as a combination contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the scope of protection of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of thread scheduling management, the method comprising:
selecting a source (migrating-out) processor and a destination (migrating-in) processor, and determining whether the source processor and the destination processor satisfy a thread migration condition; wherein the source processor comprises one or more thread queues;
if so, searching for threads running in the thread queues on the source processor, and calculating, for each thread found, the remaining load to be migrated corresponding to that thread; wherein the remaining load to be migrated is the difference between the remaining load of the source processor and the remaining load of the destination processor after the thread is scheduled to the destination processor; and
when the load of a thread is less than twice the remaining load to be migrated corresponding to that thread, selecting the thread as a target thread and scheduling the target thread to the destination processor.
2. The thread scheduling management method of claim 1, wherein the step of determining whether the source processor and the destination processor satisfy the thread migration condition comprises:
when the source processor and the destination processor satisfy either of the following conditions:
a thread executing a predetermined task is running on the destination processor;
the only candidate thread to be migrated on the source processor is a thread executing the predetermined task, and the destination processor is not in an idle state;
determining that the thread migration condition is not met; otherwise, determining that the thread migration condition is met.
3. The method of claim 1, wherein the step of searching for threads running in the thread queues on the source processor comprises:
acquiring information on the processor core group of the source processor and the processor core group of the destination processor;
when the source processor and the destination processor are in the same processor core group, acquiring the number of threads in the highest-priority thread queue on the source processor;
if there are multiple such threads, searching the thread queues in order of priority from high to low; and
if there is only one such thread, searching the thread queues in order of priority from low to high.
4. The thread scheduling management method of claim 3, further comprising:
when the core group of the source processor is a little core group and the core group of the destination processor is a big core group, performing the following steps:
if no thread executing the predetermined task is running on the destination processor, searching the thread queues for threads executing the predetermined task in order of load from high to low; and
if no thread executing the predetermined task is running on either the destination processor or the source processor, ending the search.
5. The thread scheduling management method of claim 3, further comprising:
when the core group of the source processor is a big core group and the core group of the destination processor is a little core group, performing the following steps:
if the total load of the threads executing the predetermined task on the source processor is the largest within the big core group, searching the thread queues for threads executing the predetermined task in order of thread load from small to large; otherwise, ending the search of the thread queues for threads executing the predetermined task.
6. The method of claim 1, further comprising:
acquiring storage resource information corresponding to a running thread;
counting, from the storage resource information, the number of times the running thread accesses input/output devices within a predetermined period; and
if the number of input/output accesses is greater than a first threshold, marking the running thread as a first-type thread.
7. The method of claim 1, further comprising:
acquiring storage resource information corresponding to a running thread;
counting, from the storage resource information, the number of times the running thread accesses memory within a predetermined period; and
if the number of memory accesses is greater than a second threshold, marking the running thread as a second-type thread.
8. A thread scheduling management apparatus, the apparatus comprising:
a selection module, configured to select a source (migrating-out) processor and a destination (migrating-in) processor and determine whether the source processor and the destination processor satisfy a thread migration condition; wherein the source processor comprises one or more thread queues;
a search module, configured to, if the thread migration condition is met, search for threads running in the thread queues on the source processor and calculate, for each thread found, the remaining load to be migrated corresponding to that thread; wherein the remaining load to be migrated is the difference between the remaining load of the source processor and the remaining load of the destination processor after the thread is scheduled to the destination processor; and
a scheduling module, configured to, when the load of a thread is less than twice the remaining load to be migrated corresponding to that thread, select the thread as a target thread and schedule the target thread to the destination processor.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 7 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN201810200639.2A 2018-03-12 2018-03-12 Thread scheduling management method and device, computer equipment and storage medium Active CN108549574B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810200639.2A CN108549574B (en) 2018-03-12 2018-03-12 Thread scheduling management method and device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN108549574A (en) 2018-09-18
CN108549574B (en) 2022-03-15

Family

ID=63516101

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810200639.2A Active CN108549574B (en) 2018-03-12 2018-03-12 Thread scheduling management method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108549574B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109783028B (en) * 2019-01-16 2022-07-15 Oppo广东移动通信有限公司 Optimization method and device for I/O scheduling, storage medium and intelligent terminal
CN109947569B (en) * 2019-03-15 2021-04-06 Oppo广东移动通信有限公司 Method, device, terminal and storage medium for binding core
CN112052077A (en) * 2019-06-06 2020-12-08 北京字节跳动网络技术有限公司 Method, device, equipment and medium for software task management
CN110989933A (en) * 2019-12-05 2020-04-10 北京首汽智行科技有限公司 Message queue RockMq smooth migration method
CN112232770B (en) * 2020-10-17 2021-08-20 成都数字家园科技有限公司 Business information processing method based on smart community and cloud service equipment
CN113553164B (en) * 2021-09-17 2022-02-25 统信软件技术有限公司 Process migration method, computing device and storage medium
CN113918527B (en) * 2021-12-15 2022-04-12 西安统信软件技术有限公司 Scheduling method and device based on file cache and computing equipment
CN117407128A (en) * 2022-07-06 2024-01-16 华为技术有限公司 Task migration method, device, equipment, storage medium and product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5745778A (en) * 1994-01-26 1998-04-28 Data General Corporation Apparatus and method for improved CPU affinity in a multiprocessor system
CN101256515A (en) * 2008-03-11 2008-09-03 浙江大学 Method for implementing load equalization of multicore processor operating system
CN102184125A (en) * 2011-06-02 2011-09-14 首都师范大学 Load balancing method based on program behaviour online analysis under heterogeneous multi-core environment
CN105528330A (en) * 2014-09-30 2016-04-27 杭州华为数字技术有限公司 Load balancing method and device, cluster and many-core processor
CN105955809A (en) * 2016-04-25 2016-09-21 深圳市万普拉斯科技有限公司 Thread scheduling method and system
CN107015862A (en) * 2015-12-22 2017-08-04 英特尔公司 Thread and/or scheduling virtual machine for the core with different abilities

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101286700B1 (en) * 2006-11-06 2013-07-16 삼성전자주식회사 Apparatus and method for load balancing in multi core processor system
CN101345770A (en) * 2008-08-22 2009-01-14 杭州华三通信技术有限公司 Load equalization implementing method, storage control equipment and memory system
US8255644B2 (en) * 2009-05-18 2012-08-28 Lsi Corporation Network communications processor architecture with memory load balancing
CN102834807B (en) * 2011-04-18 2015-09-09 华为技术有限公司 The method and apparatus of multicomputer system load balancing
US9632822B2 (en) * 2012-09-21 2017-04-25 Htc Corporation Multi-core device and multi-thread scheduling method thereof
WO2015070789A1 (en) * 2013-11-14 2015-05-21 Mediatek Inc. Task scheduling method and related non-transitory computer readable medium for dispatching task in multi-core processor system based at least partly on distribution of tasks sharing same data and/or accessing same memory address (es)
CN104951357B (en) * 2014-03-28 2018-06-26 华为技术有限公司 The management method and protocol stack system of concurrent user state protocol stack
CN106686039B (en) * 2015-11-10 2020-07-21 华为技术有限公司 Resource scheduling method and device in cloud computing system
US20170206111A1 (en) * 2016-01-15 2017-07-20 Qualcomm Innovation Center, Inc. Managing processing capacity provided to threads based upon load prediction
CN106980533B (en) * 2016-01-18 2020-04-28 杭州海康威视数字技术股份有限公司 Task scheduling method and device based on heterogeneous processor and electronic equipment
US10523692B2 (en) * 2016-04-08 2019-12-31 Samsung Electronics Co., Ltd. Load balancing method and apparatus in intrusion detection system
CN107315637B (en) * 2016-04-26 2020-07-31 南京中兴新软件有限责任公司 Load balancing method and device of signal processing module


Also Published As

Publication number Publication date
CN108549574A (en) 2018-09-18

Similar Documents

Publication Publication Date Title
CN108549574B (en) Thread scheduling management method and device, computer equipment and storage medium
CN108509260B (en) Thread identification processing method and device, computer equipment and storage medium
US10545789B2 (en) Task scheduling for highly concurrent analytical and transaction workloads
US9448864B2 (en) Method and apparatus for processing message between processors
US9858115B2 (en) Task scheduling method for dispatching tasks based on computing power of different processor cores in heterogeneous multi-core processor system and related non-transitory computer readable medium
US20170090988A1 (en) Granular quality of service for computing resources
CN109766180B (en) Load balancing method and device, storage medium, computing equipment and computing system
US8627325B2 (en) Scheduling memory usage of a workload
CA2463748A1 (en) Method and apparatus for dispatching tasks in a non-uniform memory access (numa) computer system
US6587865B1 (en) Locally made, globally coordinated resource allocation decisions based on information provided by the second-price auction model
JP5345990B2 (en) Method and computer for processing a specific process in a short time
KR20160027541A (en) System on chip including multi-core processor and thread scheduling method thereof
US10768684B2 (en) Reducing power by vacating subsets of CPUs and memory
CN112925616A (en) Task allocation method and device, storage medium and electronic equipment
Usui et al. Squash: Simple qos-aware high-performance memory scheduler for heterogeneous systems with hardware accelerators
CN110795323A (en) Load statistical method, device, storage medium and electronic equipment
CN115617494B (en) Process scheduling method and device in multi-CPU environment, electronic equipment and medium
US7603673B2 (en) Method and system for reducing context switch times
US11875197B2 (en) Management of thrashing in a GPU
CN112114967B (en) GPU resource reservation method based on service priority
JP5243822B2 (en) Workload management in a virtualized data processing environment
KR20200114702A (en) A method and apparatus for long latency hiding based warp scheduling
JP4127354B2 (en) Multiprocessor control program and multiprocessor control method
CN116414555A (en) Method for scheduling cache budgets in a multi-core processing device and device for performing the same
Tang et al. A shared cache-aware Task scheduling strategy for multi-core systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant