CN114968500A - Task scheduling method, device, equipment and storage medium - Google Patents

Task scheduling method, device, equipment and storage medium

Info

Publication number
CN114968500A
Authority
CN
China
Prior art keywords
task
target
core
logic core
type
Prior art date
Legal status
Pending
Application number
CN202110189134.2A
Other languages
Chinese (zh)
Inventor
蒋彪
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110189134.2A
Publication of CN114968500A
Current legal status: Pending

Classifications

    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 2209/484: Precedence (indexing scheme relating to G06F 9/48)
    • G06F 2209/5021: Priority (indexing scheme relating to G06F 9/50)
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses a task scheduling method, apparatus, device, and storage medium. When task scheduling is performed, a first task to be scheduled to run on a target logical core is obtained from the task queue of the target logical core, and the task type of a second task running on the sibling logical core of the target logical core is checked. If the task type of the second task differs from that of the first task, the priority relationship between the two tasks is determined from their task types, and scheduling of the first task is carried out according to that relationship, so that whichever of the first and second tasks has the higher priority runs on its logical core. The method prevents tasks of different priorities from running simultaneously on the two hyper-threads of one physical core, and thus prevents a low-priority task (such as an offline task) from severely interfering with a high-priority task (such as an online task); in other words, it avoids hyper-thread interference and preserves the performance of high-priority tasks.

Description

Task scheduling method, device, equipment and storage medium
Technical Field
The present application relates to the field of cloud computing, and in particular, to a task scheduling method, apparatus, device, and storage medium.
Background
In a data center, virtual machines executing different task types (such as online tasks and offline tasks) are often deployed on the same physical machine at the same time to improve resource utilization and save cost; this is known as hybrid (mixed) deployment. Online tasks are delay-sensitive, while offline tasks are not.
Modern Central Processing Unit (CPU) architectures provide Hyper-Threading (HT) to improve overall hardware capability: one physical core (core) of the CPU carries two hyper-threads, which can be regarded as two logical cores that are siblings of each other, and each logical core can be treated as an independent logical CPU. In a hybrid deployment, one hyper-thread (logical core) may run an online task while the other hyper-thread (logical core) runs an offline task.
Because the two hyper-threads share many hardware resources (such as caches), and an offline task is usually a CPU-consuming task, scheduling an offline task alongside an online task can seriously interfere with the online task and degrade its performance.
Disclosure of Invention
To solve the above technical problem, the present application provides a task scheduling method that ensures a low-priority task yields to a high-priority task during scheduling. It prevents tasks of different priorities from running on the two hyper-threads simultaneously, so that a low-priority task (for example, an offline task) cannot severely interfere with a high-priority task (for example, an online task); that is, it avoids hyper-thread interference and preserves the performance of the high-priority task.
The embodiment of the application discloses the following technical scheme:
in a first aspect, an embodiment of the present application provides a task scheduling method, where the method includes:
acquiring a first task to be scheduled to run on a target logic core from a task queue of the target logic core, wherein the task queue comprises at least one task;
checking the task type of a second task running on a sibling logical core corresponding to the target logical core, wherein the sibling logical core and the target logical core form a hyper-thread pair;
if the task type of the second task is different from the task type of the first task, determining a priority relation between the first task and the second task according to the task type of the first task and the task type of the second task;
and performing task scheduling of the first task according to the priority relationship, so that whichever of the first task and the second task has the higher priority runs on its corresponding logical core.
In a second aspect, an embodiment of the present application provides a task scheduling apparatus, where the apparatus includes an obtaining unit, a checking unit, a determining unit, and a scheduling unit:
the acquiring unit is configured to acquire a first task to be scheduled to run on a target logic core from a task queue of the target logic core, where the task queue includes at least one task;
the checking unit is configured to check a task type of a second task running on a sibling logical core corresponding to the target logical core, where the target logical core and the sibling logical core of the target logical core form a hyper-thread pair;
the determining unit is configured to determine, if the task type of the second task is different from the task type of the first task, a priority relationship between the first task and the second task according to the task type of the first task and the task type of the second task;
the scheduling unit is configured to perform task scheduling of the first task according to the priority relationship, so that whichever of the first task and the second task has the higher priority runs on its corresponding logical core.
In a third aspect, an embodiment of the present application provides an electronic device for task scheduling, where the electronic device includes a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the method of the first aspect according to instructions in the program code.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium for storing program code for executing the method of the first aspect.
According to the technical scheme, the scheduler schedules tasks to run from the task queue of a logical core, for example a target logical core, where the task queue contains at least one task; when task scheduling is performed, a first task to be scheduled to run on the target logical core is obtained from that queue. In a hybrid deployment scenario, tasks of different types and priorities may simultaneously run on the target logical core and the sibling logical core that together form a hyper-thread pair, in which case the low-priority task interferes with the performance of the high-priority task. To avoid this hyper-thread interference, the task type of a second task running on the sibling logical core of the target logical core is checked. If the task type of the second task differs from that of the first task, the priority relationship between the two tasks is determined from their task types, and task scheduling of the first task is performed according to that relationship, so that whichever task has the higher priority runs on its corresponding logical core. In this way, even though hardware resources are shared between the two hyper-threads, a low-priority task yields to a high-priority task during scheduling, and a low-priority task (such as an offline task) is prevented from severely interfering with a high-priority task (such as an online task); that is, hyper-thread interference is avoided and the performance of the high-priority task is preserved.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described here show only some embodiments of the present application; a person of ordinary skill in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a hyper-threading architecture provided in the related art;
fig. 2 is a diagram illustrating a system architecture of a task scheduling method according to an embodiment of the present application;
fig. 3 is a schematic architecture diagram of a server according to an embodiment of the present application;
fig. 4 is a flowchart of a task scheduling method according to an embodiment of the present application;
FIG. 5 is an exemplary diagram of a task scheduling framework provided by an embodiment of the present application;
fig. 6 is a flowchart of task scheduling based on the hyper-thread interference isolation switch module according to an embodiment of the present application;
fig. 7 is an exemplary diagram of offline task scheduling according to an embodiment of the present application;
FIG. 8 is an exemplary diagram of online task scheduling provided by an embodiment of the present application;
fig. 9 is an exemplary diagram of load balancing provided in an embodiment of the present application;
fig. 10 is a flowchart of a task scheduling method according to an embodiment of the present application;
fig. 11 is a structural diagram of a task scheduling apparatus according to an embodiment of the present application;
fig. 12 is a structural diagram of a terminal device according to an embodiment of the present application;
fig. 13 is a block diagram of a server according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings.
In a mixed deployment scenario, the CPU architecture provides a hyper-threading architecture to improve overall hardware capability. As shown in fig. 1, one physical core (core) of the CPU may carry two hyper-threads, hyper-thread 1 and hyper-thread 2, which can be regarded as two logical cores that are siblings of each other; each logical core can be treated as an independent logical CPU. One hyper-thread, e.g. hyper-thread 1, runs online tasks while the other hyper-thread, e.g. hyper-thread 2, runs offline tasks.
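As a side note for readers (not part of the patent itself), the sibling relationship between the two hyper-threads of a physical core is visible to user space; on Linux it can be read from the standard sysfs topology interface, as in the minimal sketch below.

    /* Minimal sketch: print the logical CPUs that share cpu0's physical core
     * on Linux. The sysfs path is the standard kernel topology interface. */
    #include <stdio.h>

    int main(void)
    {
        const char *path =
            "/sys/devices/system/cpu/cpu0/topology/thread_siblings_list";
        char buf[64];
        FILE *f = fopen(path, "r");

        if (!f) {
            perror("fopen");
            return 1;
        }
        if (fgets(buf, sizeof(buf), f))
            /* e.g. "0,8" when cpu0 and cpu8 are hyper-thread siblings */
            printf("siblings of cpu0: %s", buf);
        fclose(f);
        return 0;
    }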
During task scheduling on a logical core, an online task has high priority and an offline task has low priority, so priority control prevents the offline task from interfering with the online task on that core. The offline task runs only on an otherwise idle logical core (that is, only when no online task is running on that logical core), so the online task is unaffected (or affected as little as possible) while the logical CPU utilization improves and cost is reduced.
However, when an online task and an offline task run simultaneously on the two hyper-threads of the same physical core, the sharing of some physical resources means that the running offline task (usually a CPU-consuming task) severely disturbs the performance of the online task. A conventional kernel scheduler does nothing about this hyper-thread interference problem and cannot solve it in a mixed-deployment scenario.
To solve the above technical problem, an embodiment of the present application provides a task scheduling method that, even though hardware resources are shared between the two hyper-threads, ensures a low-priority task yields to a high-priority task during scheduling. It prevents tasks of different priorities from running on the two hyper-threads simultaneously, so that a low-priority task (for example, an offline task) cannot severely interfere with a high-priority task (for example, an online task); that is, it avoids hyper-thread interference and preserves the performance of the high-priority task.
It should be noted that the method provided by the embodiment of the present application can be applied to various mixed-deployment scenarios sensitive to hyper-thread interference, for example one in which an online task and an offline task run on the two logical cores forming a hyper-thread pair (i.e., logical cores that are siblings of each other). The tasks running on two sibling logical cores are not limited to online and offline tasks; they may be tasks of any two task types with different priorities, where the low-priority type is CPU-consuming and can interfere with the performance of the high-priority type.
It should be noted that the method provided in the embodiment of the present application relates to the field of cloud computing. Cloud computing is a computing mode that distributes computing tasks over a resource pool formed by a large number of computers, so that various application systems can obtain computing power, storage space, and information services as needed. The network that provides the resources is referred to as the "cloud". Resources in the cloud appear infinitely expandable to users and can be acquired at any time, used on demand, and expanded at any time. A cloud computing resource pool mainly comprises computing devices (virtualized machines, including operating systems), storage devices, and network devices.
Referring to fig. 2, fig. 2 is a schematic diagram of a system architecture for the task scheduling method according to an embodiment of the present application. The system architecture includes a terminal device 201 and a server 202. A virtual machine may run on the server 202, and tasks run on the virtual machine, that is, on hyper-threads: the two hyper-threads on one physical core of a physical CPU can be regarded as two logical cores, and these two logical cores are sibling logical cores. The hardware resources on server 202 are shared between sibling logical cores.
The task queue of the server 202 includes at least one task waiting to be scheduled, and the task in the task queue may be submitted by the terminal device 201. A scheduler may be deployed on server 202, through which server 202 schedules tasks from a task queue for execution. When performing task scheduling for a single core (i.e., a single logical core such as a target logical core), the server 202 may obtain a first task to be scheduled to run on the target logical core from a task queue of the target logical core.
In a hybrid deployment scenario, tasks of different types and priorities may simultaneously run on the target logical core and its sibling in the hyper-thread pair, in which case a low-priority task may interfere with the performance of a high-priority task. To avoid this hyper-thread interference, the server 202 may check the task type of a second task running on the sibling logical core corresponding to the target logical core. If the task type of the second task differs from that of the first task, the server determines the priority relationship between the first task and the second task according to their task types and performs task scheduling of the first task according to that relationship, so that whichever task has the higher priority runs on its corresponding logical core. This prevents a low-priority task and a high-priority task from running simultaneously on the target logical core and its sibling, prevents the resource consumption of the low-priority task from affecting the performance of the high-priority task, and thus avoids hyper-thread interference.
It should be noted that, the task scheduling method provided by the embodiment of the present application may be executed by the server 202, and accordingly, the scheduler is generally disposed in the server 202. However, in other embodiments of the present application, the terminal device may also have a similar function as the server, so as to execute the task scheduling scheme provided in the embodiments of the present application.
It should be further noted that the number of the terminal devices 201 and the servers 202 in fig. 2 is merely illustrative. According to implementation needs, the server 202 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud computing services. The terminal device 201 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal device 201 and the server 202 may be directly or indirectly connected through wired or wireless communication, and the present application is not limited thereto.
The following description will mainly use a server as an execution subject, and will describe in detail a task scheduling method provided in the embodiments of the present application with reference to the drawings.
Two logical cores forming a hyper-thread pair in the server are sibling logical cores; the task queue of each logical core contains at least one task, and the server schedules tasks from the task queue to run through a scheduler. During task scheduling for each logical core, for example a target logical core, a low-priority task and a high-priority task must not run on the target logical core and its sibling at the same time. In addition, since the server contains multiple logical cores, load balancing may be needed among them; load balancing assigns a task to some logical core, for example the target logical core, so hyper-thread interference must also be considered when deciding whether a task can be scheduled onto the target logical core during load balancing. Logic to avoid hyper-thread interference is therefore added both in the single-core task scheduling process and in the load balancing process.
Based on this, the architecture of the server provided by the embodiment of the present application may be as shown in fig. 3: the server includes a scheduler 300, which, to avoid hyper-thread interference during scheduling, may include a single-core scheduling module 301 and a load balancing module 302. The single-core scheduling module 301 avoids hyper-thread interference through the task scheduling method of this embodiment while a single logical core schedules tasks from its queue to run (the single-core task scheduling process); the load balancing module 302 addresses the load balancing process, using the same task scheduling method to schedule a task onto a target logical core so that load balance is achieved while hyper-thread interference is avoided.
First, the task scheduling method for avoiding hyper-thread interference during single-core task scheduling is described in detail; this process is implemented by the server through the single-core scheduling module 301 of its scheduler.
Referring to fig. 4, fig. 4 shows a flow chart of a task scheduling method, the method comprising:
s401, a first task to be scheduled to run on a target logic core is obtained from a task queue of the target logic core.
For each logical core, e.g. a target logical core, a scheduler schedules tasks from the core's task queue to run on it. In a hybrid deployment scenario, tasks of different task types run on the target logical core and its sibling logical core (sibling CPU), and each task type has a corresponding priority. The task types may include online tasks and offline tasks, or other task types with different priorities.
In this embodiment, to implement the function of avoiding hyper-thread interference, the operating system kernel of the server can be modified according to the logic of the task scheduling method provided by the embodiment of the present application; after the operating system and the virtual machines start, the operating system sets the priorities of processes. Taking online and offline tasks as an example, as shown in fig. 5, the server includes a plurality of logical cores (logical core 1, logical core 2, ..., logical core n) and divides all tasks into online tasks (online task 1, online task 2, ..., online task n) and offline tasks (offline task 1, offline task 2, ..., offline task n). An offline task is a CPU-consuming, delay-insensitive task, while an online task is a non-CPU-consuming, delay-sensitive task. Online and offline tasks have different scheduling policies, and the scheduler is responsible for distributing tasks of the different task types to the logical cores to run according to the specified policies, thereby time-sharing the CPU.
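The patent does not specify how the task type is represented. As a hedged illustration only, a scheduler could carry it as a per-task field together with the Throttled flag described later; all names below are hypothetical.

    /* Illustrative sketch, not the patented implementation: one plausible
     * per-task representation of the task type and Throttled flag. */
    enum task_type {
        TASK_TYPE_ONLINE,   /* delay-sensitive, non-CPU-consuming, high priority */
        TASK_TYPE_OFFLINE,  /* delay-insensitive, CPU-consuming, low priority    */
    };

    struct sched_task {
        int            pid;
        enum task_type type;       /* set by the operating system at startup */
        int            throttled;  /* 1 = suspended to avoid HT interference */
    };

    /* Higher value = higher priority; orders tasks of different types. */
    static inline int task_type_prio(enum task_type t)
    {
        return t == TASK_TYPE_ONLINE ? 1 : 0;
    }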
S402, checking the task type of the second task running on the sibling logical core corresponding to the target logical core.
Based on the foregoing discussion, after the first task to be scheduled onto the target logical core is obtained from the target logical core's task queue, the task type of the second task running on the corresponding sibling logical core is checked first, to avoid a high-priority task and a low-priority task running on the target logical core and its sibling at the same time.
The task scheduling method provided by the embodiment of the present application implements hyper-thread interference isolation because, in some scenarios, a task is sensitive to hyper-thread interference: if such interference exists, normal execution of the task is affected and user experience suffers. For example, if a payment task is disturbed by hyper-thread interference, it executes slowly, and the payment delay degrades the user experience.
However, in some scenarios a task is insensitive to, or can tolerate, hyper-thread interference, and the user then has no need for the hyper-thread interference isolation function. If the function stays enabled, some idle logical cores cannot be used by low-priority tasks (because high-priority tasks run on their siblings), which lowers the overall machine utilization and leaves logical cores underused. For such scenarios, a hyper-thread interference isolation switch module can be provided (for example, 303 in fig. 3) so that the user can choose, according to the scenario, whether to enable the hyper-thread interference isolation function.
If the hyper-thread interference isolation switch module is turned on, hyper-thread interference is avoided through the method above during task scheduling; if it is turned off, the method need not be applied. Therefore, in one possible implementation, before S402 the server determines state information of the hyper-thread interference isolation switch module, the state information indicating whether the module is turned on. If the state information indicates that the module is turned on, the server executes the step shown in S402 after acquiring the first task to be scheduled onto the target logical core from the target logical core's task queue. If the state information indicates that the module is not turned on, then after acquiring the first task, the server executes the scheduler's original standard logic; that is, S402 need not be executed, and the first task is scheduled according to the task execution situation of the target logical core alone.
At the implementation level, the user's configuration can be passed into the operating system scheduler for subsequent scheduling through a user-mode interface provided by the operating system (such as the /proc and /sys virtual file system interfaces provided by the Linux operating system). Referring to fig. 6, a first task is obtained (S601 in fig. 6), and whether the hyper-thread interference isolation switch module is turned on is determined from its state information (S602 in fig. 6); if so, the step shown in S402 is executed, i.e. the processing logic of the hyper-thread interference isolation function is entered (S603 in fig. 6), and if not, the scheduler's original standard logic continues to execute (S604 in fig. 6).
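The patent names the /proc and /sys virtual file systems as possible user-mode interfaces but gives no concrete path. The sketch below therefore assumes a hypothetical procfs entry and models the S602 branch in user space.

    /* Hedged sketch of the S602 check: the procfs path below is hypothetical
     * (the patent only says "/proc and /sys virtual file system interfaces");
     * inside a real kernel this would read a global flag instead of a file. */
    #include <stdio.h>

    static int ht_isolation_enabled(void)
    {
        FILE *f = fopen("/proc/sys/kernel/ht_isolation", "r"); /* hypothetical */
        int on = 0;

        if (f) {
            if (fscanf(f, "%d", &on) != 1)
                on = 0;
            fclose(f);
        }
        return on;  /* 1 = enter isolation logic (S603), 0 = standard logic (S604) */
    }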
S403, if the task type of the second task is different from the task type of the first task, determining the priority relationship between the first task and the second task according to the task type of the first task and the task type of the second task.
S404, performing task scheduling of the first task according to the priority relationship, so that whichever of the first task and the second task has the higher priority runs on its corresponding logical core.
If the server determines that the task type of the second task differs from that of the first task, it determines the priority relationship between the first task and the second task according to their task types and performs task scheduling of the first task according to that relationship, so that the higher-priority task runs on its corresponding logical core.
In general, how task scheduling of the first task is performed according to the priority relationship depends on the task type of the first task.
When no online task is running on the target logical core, an offline task may be selected to run as the first task. If the task type of the first task is an offline task and that of the second task is an online task, the priority relationship is that the second task has higher priority than the first task. In that case, task scheduling of the first task according to the priority relationship means forgoing scheduling the first task onto the target logical core and keeping the second task running on the sibling logical core; that is, hyper-thread interference is avoided through the single-core scheduling avoidance function.
In some cases, a task in the task queue may carry a throttled (Throttled) flag, indicating that it has been suspended and temporarily withheld from execution to avoid hyper-thread interference; such tasks need not be scheduled. Accordingly, after the first task is acquired, it can be determined whether the first task carries the Throttled flag, and if so the task is skipped. If the first task does not carry the Throttled flag, the step of S402 is performed for it. If the second task running on the sibling logical core is determined to be an online task, the scheduler forgoes scheduling the first task and looks for another suitable task to run; if one is found it runs, and if not, the target logical core enters the IDLE state.
Referring to fig. 7, in the schematic diagram shown at 701, a physical core includes hyper-thread 1 and hyper-thread 2; hyper-thread 1 runs an online task, hyper-thread 2 is idle, and the task queue contains one online task and two offline tasks (an online task is denoted VM, an offline task OF). When the target logical core schedules an offline task from the task queue as the first task and is about to run it, see 702 in fig. 7: the offline task (OF) marked by a dashed box on the target logical core is the first task to be scheduled. The task type of the second task running on the sibling logical core, i.e. on hyper-thread 1, is checked first; since the second task is an online task (VM), scheduling the first task onto the target logical core is abandoned, which prevents the offline task from interfering with the online task on the sibling logical core, i.e. prevents hyper-thread interference.
The original publication expresses the corresponding pseudo-code as a figure, which is not reproduced in this text.
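In its place, the following is a reconstruction of the avoidance logic just described, written as a compilable user-space model; every type and helper name is a hypothetical stand-in for scheduler internals, not the patent's actual code.

    /* Reconstruction of the single-core scheduling avoidance path (S401-S404
     * for an offline candidate). All names are hypothetical stand-ins. */
    enum task_type { TASK_TYPE_ONLINE, TASK_TYPE_OFFLINE };

    struct task { enum task_type type; int throttled; };
    struct runqueue { int cpu; };

    struct task *next_candidate(struct runqueue *rq);  /* S401: head of queue */
    struct task *current_on(int cpu);                  /* task running on cpu */
    int sibling_of(int cpu);                           /* HT sibling of cpu   */

    struct task *pick_next_with_avoidance(struct runqueue *rq)
    {
        struct task *first = next_candidate(rq);

        if (!first || first->throttled)
            return NULL;                    /* Throttled tasks are skipped */

        if (first->type == TASK_TYPE_OFFLINE) {
            /* S402: check what the sibling logical core is running. */
            struct task *second = current_on(sibling_of(rq->cpu));

            /* S403/S404: the sibling runs an online (higher-priority) task,
             * so give up this scheduling opportunity rather than disturb it;
             * the caller picks another task or lets the core go idle. */
            if (second && second->type == TASK_TYPE_ONLINE)
                return NULL;
        }
        return first;
    }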
When the task type of the first task is an online task and the second task running on the sibling logical core is checked to be an offline task, the priority relationship is that the first task has higher priority than the second task. In that case, task scheduling of the first task according to the priority relationship means notifying the sibling logical core to suspend the second task and scheduling the first task to run on the target logical core; that is, hyper-thread interference is avoided through the single-core scheduling isolation function. The notification can be implemented via an Inter-Processor Interrupt (IPI).
Referring to fig. 8, in the schematic diagram shown at 801, a physical core includes hyper-thread 1 and hyper-thread 2; hyper-thread 1 is idle, hyper-thread 2 runs an offline task, and the task queue contains two online tasks and one offline task. When the target logical core schedules an online task (VM) from the task queue as the first task and is about to run it, see 802 in fig. 8: the online task marked by a dashed box on the target logical core is the first task to be scheduled, and the task type of the second task running on the sibling logical core, i.e. on hyper-thread 2, is checked first. Since the second task is an offline task, the sibling logical core is notified to suspend the offline task (OF); on receiving the notification, the sibling logical core stops the currently running offline task, for example by moving it out of the sibling core's run queue and setting its Throttled flag. After stopping the offline task, the sibling logical core selects another suitable (non-offline) task to run if one is found; otherwise it enters the IDLE state.
The original publication again expresses the corresponding pseudo-code as figures, which are not reproduced in this text.
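As above, the following compilable user-space model reconstructs the isolation path; the IPI notification is modeled by a hypothetical send_throttle_ipi() helper, and none of these names come from the patent.

    /* Reconstruction of the single-core scheduling isolation path: an online
     * task about to run asks the sibling core to suspend its offline task.
     * All names are hypothetical stand-ins for scheduler internals. */
    enum task_type { TASK_TYPE_ONLINE, TASK_TYPE_OFFLINE };

    struct task { enum task_type type; int throttled; };

    struct task *current_on(int cpu);
    int  sibling_of(int cpu);
    void send_throttle_ipi(int cpu);            /* models the IPI notification */
    void dequeue_task(int cpu, struct task *t); /* remove t from cpu's queue   */

    /* Called on the target logical core when an online task is selected. */
    void isolate_sibling_offline(int cpu)
    {
        int sib = sibling_of(cpu);
        struct task *second = current_on(sib);

        if (second && second->type == TASK_TYPE_OFFLINE)
            send_throttle_ipi(sib);  /* sibling suspends its offline task */
    }

    /* Runs on the sibling core in response to the IPI: stop the offline
     * task, move it off the run queue, and set its Throttled flag. */
    void throttle_ipi_handler(int cpu)
    {
        struct task *cur = current_on(cpu);

        if (cur && cur->type == TASK_TYPE_OFFLINE) {
            dequeue_task(cpu, cur);
            cur->throttled = 1;  /* skipped by pick-next until unthrottled */
        }
    }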
According to the technical scheme above, when task scheduling is performed, a first task to be scheduled onto the target logical core is obtained from that core's task queue, and the task type of the second task running on the sibling logical core is checked. If the two task types differ, the priority relationship between the two tasks is determined from their task types, and scheduling of the first task is performed according to that relationship, so that whichever task has the higher priority runs on its corresponding logical core. In this way, even though hardware resources are shared between the two hyper-threads, a low-priority task yields to a high-priority task during scheduling, and a low-priority task (such as an offline task) is prevented from severely interfering with a high-priority task (such as an online task); that is, hyper-thread interference is avoided and the performance of the high-priority task is preserved.
By avoiding hyper-thread interference, the mixed-deployment requirements of interference-sensitive online services can be met. Mixing offline tasks with online tasks lets otherwise idle CPUs run offline tasks while online service performance is preserved, which raises overall CPU utilization and greatly reduces cost.
In some cases, in a Linux environment, the CPU bandwidth of offline tasks can also be limited (i.e., how long they may run at most within a specified period), which restricts the running of offline (low-priority) tasks to some extent and reduces interference with online (high-priority) services.
Next, we describe how, in the load balancing process, a task is scheduled onto a target logical core through the task scheduling method provided by the embodiment of the present application so that load balance is achieved while hyper-thread interference is avoided. This process is implemented by the server through the load balancing module 302 of its scheduler.
The server may include a plurality of logical cores, among which load balancing is required. The target logical core is one of these logical cores, and the tasks in its task queue are distributed through load balancing.
To avoid hyper-thread interference, in one possible implementation the load is balanced as follows: for each logical core among the multiple logical cores, a target task load is computed from that core's own offline task load and the online task load of its sibling logical core, i.e. target task load = offline task load of the logical core + online task load of the sibling logical core. Load balancing is then performed according to the target task load.
Note that in this embodiment, when balancing offline tasks, the load calculation for a logical core must consider both the core's own offline task load and the online task load of its sibling; that is, the physical core is weighed as a whole. This avoids the situation where a logical core carries little offline load but its sibling carries a heavy online load, and an offline task assigned to that core would then disturb the performance of the online task on the sibling.
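As a small illustration of the combined metric (helper names hypothetical), the load used when balancing offline tasks would then read:

    /* Sketch of the combined load metric: a logical core's target task load
     * is its own offline load plus its sibling's online load, so the physical
     * core is weighed as a whole. Helper names are hypothetical. */
    unsigned long offline_load_of(int cpu);
    unsigned long online_load_of(int cpu);
    int sibling_of(int cpu);

    unsigned long target_task_load(int cpu)
    {
        return offline_load_of(cpu) + online_load_of(sibling_of(cpu));
    }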
Load balancing can be triggered on several occasions: periodically, when the load balancing period elapses; when an idle logical core exists; and when a new task is woken up or created. In the first two cases, load balancing migrates a task (e.g. an offline task) from one logical core to another (e.g. the target logical core); in the third case, it assigns the newly woken or created task (e.g. an offline task) to some logical core, e.g. the target logical core. The different forms of load balancing are described below.
In one possible implementation, load balancing according to the target task load is triggered when the load balancing period elapses or when the target logical core is idle. If the target task in the task queue is an offline task, load balancing proceeds by finding the most heavily loaded logical core according to the target task load; if the difference between that core's target task load and the target logical core's target task load exceeds a preset threshold, the target task is selected from the most heavily loaded core and migrated into the target logical core's task queue. If the difference is below the preset threshold, the migration is abandoned. This process may also be called periodic load balancing back-off.
The load balancing period in this embodiment refers to the scheduler's heartbeat (tick), i.e. the periodic scheduling interval driven by the clock interrupt, which is typically 1 ms.
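A hedged sketch of this periodic back-off follows, under the same hypothetical helpers; the threshold value is illustrative, since the patent only speaks of "a preset threshold".

    /* Sketch of periodic load balancing with back-off: migrate an offline
     * task toward the target core only when the imbalance under the combined
     * metric exceeds a preset threshold. All names are hypothetical. */
    unsigned long target_task_load(int cpu);
    int  busiest_cpu(void);                      /* max target_task_load()  */
    void migrate_one_offline_task(int from, int to);

    #define IMBALANCE_THRESHOLD 2UL              /* illustrative value only */

    void periodic_offline_balance(int target_cpu)
    {
        int busiest = busiest_cpu();
        unsigned long diff;

        if (busiest == target_cpu)
            return;

        diff = target_task_load(busiest) - target_task_load(target_cpu);
        if (diff > IMBALANCE_THRESHOLD)
            migrate_one_offline_task(busiest, target_cpu);
        /* otherwise: back off and leave the task where it is */
    }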
In another possible implementation, when load balancing is triggered by waking up or creating a new task and the target task in the task queue is an offline task, load balancing proceeds by finding an idle logical core according to the target task load; if no online task is running on the sibling of that idle logical core, the idle core is chosen as the target logical core and the target task is placed in its task queue. This process may also be called idle core selection avoidance.
Referring to fig. 9, in the schematic diagram shown at 901, a physical core includes hyper-thread 1 and hyper-thread 2; hyper-thread 1 runs an online task, hyper-thread 2 is idle, and the task queue contains one online task and two offline tasks. When an offline task is woken up or created as the target task and a target logical core must be chosen to run it, the logical cores are traversed and an idle logical core is selected according to the target task load. After an idle logical core is selected (for example, hyper-thread 2 in fig. 9), it is determined whether an online task is running on its sibling logical core (hyper-thread 1); if not, hyper-thread 2 becomes the target logical core and the offline task is assigned to it. If an online task is running on the sibling, assigning the offline task to hyper-thread 2 is abandoned, this idle logical core is skipped, and the search continues.
The corresponding pseudo-code logic is likewise given as figures in the original publication and is not reproduced in this text.
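In place of those figures, the following compilable user-space model reconstructs the idle-core selection avoidance; again, every name is a hypothetical stand-in.

    /* Reconstruction of idle core selection avoidance: when placing a woken
     * or newly created offline task, skip any idle logical core whose sibling
     * is running an online task. All names are hypothetical stand-ins. */
    enum task_type { TASK_TYPE_ONLINE, TASK_TYPE_OFFLINE };
    struct task { enum task_type type; };

    int nr_cpus(void);
    int cpu_is_idle(int cpu);
    int sibling_of(int cpu);
    struct task *current_on(int cpu);

    int select_cpu_for_offline_task(void)
    {
        int cpu;

        for (cpu = 0; cpu < nr_cpus(); cpu++) {
            struct task *sib_cur;

            if (!cpu_is_idle(cpu))
                continue;

            sib_cur = current_on(sibling_of(cpu));

            /* Sibling runs an online task: placing the offline task here
             * would cause hyper-thread interference, so keep searching. */
            if (sib_cur && sib_cur->type == TASK_TYPE_ONLINE)
                continue;

            return cpu;     /* safe idle logical core found */
        }
        return -1;          /* no suitable idle core; fall back elsewhere */
    }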
During load balancing, offline tasks actively avoid logical cores whose siblings run online tasks (when computing the target task load, both the core's own offline load and the sibling's online load are considered). Offline tasks therefore effectively steer clear of cores where online tasks operate while still finding cores to run on, which also prevents offline tasks from being starved.
Next, the task scheduling method provided in the embodiment of the present application is described with reference to a practical application scenario. In a mixed deployment of offline and online tasks under a hyper-threading architecture, the two hyper-threads on one physical core can be regarded as two sibling logical cores, with one hyper-thread (logical core) running an online task and the other running an offline task. When the two run simultaneously on the same physical core, the sharing of some physical resources means that, in hyper-thread-interference-sensitive scenarios, the running offline task severely disturbs the performance of the online task; this is hyper-thread interference. The embodiment of the present application therefore provides a task scheduling method to avoid it; referring to fig. 10, the method includes:
and S1001, acquiring a first task.
S1002, determining, from the state information of the hyper-thread interference isolation switch module, whether the module is turned on; if so, executing S1003, and if not, executing S1008.
S1003, entering the processing logic of the hyper-thread interference isolation function.
S1004, when an offline task is selected to run, checking the type of the task running on the sibling logical core of the target logical core; if it is an online task, actively yielding and giving up this scheduling opportunity.
S1005, when an online task is selected to run, checking the type of the task running on the sibling logical core of the target logical core; if it is an offline task, notifying the sibling logical core to suspend the running offline task, thereby achieving isolation.
S1006, when balancing the load of offline tasks, taking the sum of a logical core's own offline task load and the online task load of its sibling logical core as that logical core's target task load.
S1007, when an idle logical core is found according to the target task load, taking as the target logical core only an idle logical core whose sibling has no online task running.
It is understood that the processing logic of the hyper-thread interference isolation function includes single-core scheduling avoidance (implemented by S1004), single-core scheduling isolation (implemented by S1005), and load balancing avoidance (implemented by S1006 to S1007).
S1008, scheduler standard logic.
The scheduler's standard logic is: when no online task runs on a logical core, an offline task may run there; when an online task needs to be scheduled onto a logical core, only the type of the task running on that logical core is considered, and preemption follows priority; and during offline-task load balancing, a logical core's target task load is computed from its own offline task load alone, without considering the online task load of the sibling logical core.
Based on the task scheduling method provided by the embodiment corresponding to fig. 4, an embodiment of the present application further provides a task scheduling apparatus, referring to fig. 11, the apparatus includes an obtaining unit 1101, a checking unit 1102, a determining unit 1103, and a scheduling unit 1104:
the obtaining unit 1101 is configured to obtain a first task to be scheduled to run on a target logical core from a task queue of the target logical core, where the task queue includes at least one task;
the checking unit 1102 is configured to check a task type of a second task running on a sibling logical core corresponding to the target logical core, where the target logical core and the sibling logical core of the target logical core form a hyper-thread pair;
the determining unit 1103 is configured to determine, if the task type of the second task is different from the task type of the first task, a priority relationship between the first task and the second task according to the task type of the first task and the task type of the second task;
the scheduling unit 1104 is configured to perform task scheduling of the first task according to the priority relationship, so that whichever of the first task and the second task has the higher priority runs on its corresponding logical core.
In a possible implementation manner, the determining unit 1103 is specifically configured to:
if the task type of the first task is an offline task and the task type of the second task is an online task, determining that the priority of the second task is higher than that of the first task;
the scheduling unit 1104 is specifically configured to:
forgoing scheduling the first task onto the target logical core, keeping the second task running on the sibling logical core.
In a possible implementation manner, the determining unit 1103 is specifically configured to:
if the task type of the first task is an online task and the task type of the second task is an offline task, determining that the priority of the first task is higher than that of the second task;
the scheduling unit 1104 is specifically configured to:
notifying the sibling logical core to suspend the second task;
and scheduling the first task to run on the target logic core.
In a possible implementation manner, the determining unit 1103 is further configured to:
determining state information of a hyper-threading interference isolation switch module;
if the state information indicates that the hyper-thread interference isolation function is turned on, after the obtaining unit 1101 obtains a first task to be scheduled to run on a target logical core from a task queue of the target logical core, the checking unit 1102 is triggered to execute the step of checking the task type of a second task running on a sibling logical core of the target logical core.
In one possible implementation manner, the target logical core is one of a plurality of logical cores, and tasks in a task queue of the target logical core are distributed in a load balancing manner.
In one possible implementation, the apparatus further includes a computing unit and an equalizing unit:
the computing unit is configured to compute, for each logic core of the plurality of logic cores, a target task load of each logic core according to an offline task load of the logic core and an online task load of a corresponding sibling logic core;
and the balancing unit is used for carrying out load balancing according to the target task load.
In a possible implementation manner, if the target task in the task queue is an offline task, the balancing unit is configured to:
determining the logic core with the heaviest load according to the target task load;
and if the difference value between the target task load of the logic core with the heaviest load and the target task load of the target logic core is larger than a preset threshold value, selecting the target task from the logic core with the heaviest load and migrating the target task to the task queue of the target logic core.
In a possible implementation manner, the time for performing load balancing according to the target task load is a load balancing period or the target logic core is idle.
In a possible implementation manner, if the target task in the task queue is an offline task, the balancing unit is configured to:
determining an idle logic core according to the target task load;
if no online task is running on the sibling logical core corresponding to the idle logical core, determining the idle logical core as the target logical core;
and distributing the target task to a task queue of the target logic core.
The embodiment of the present application further provides an electronic device for task scheduling, where the electronic device may be a terminal device, and the terminal device is taken as a smart phone as an example:
Fig. 12 is a block diagram of part of the structure of a smartphone serving as the terminal device provided in an embodiment of the present application. Referring to fig. 12, the smartphone includes: a Radio Frequency (RF) circuit 1210, a memory 1220, an input unit 1230, a display unit 1240, a sensor 1250, an audio circuit 1260, a wireless fidelity (WiFi) module 1270, a processor 1280, and a power supply 1290. The input unit 1230 may include a touch panel 1231 and other input devices 1232, the display unit 1240 may include a display panel 1241, and the audio circuit 1260 may include a speaker 1261 and a microphone 1262. Those skilled in the art will appreciate that the smartphone configuration shown in fig. 12 is not limiting; it may include more or fewer components than shown, combine some components, or arrange components differently.
The memory 1220 may be used to store software programs and modules, and the processor 1280 executes various functional applications and data processing of the smart phone by operating the software programs and modules stored in the memory 1220. The memory 1220 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the smartphone, and the like. Further, the memory 1220 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 1280 is a control center of the smart phone, connects various parts of the entire smart phone using various interfaces and lines, and performs various functions of the smart phone and processes data by operating or executing software programs and/or modules stored in the memory 1220 and calling data stored in the memory 1220, thereby integrally monitoring the smart phone. Alternatively, processor 1280 may include one or more processing units; preferably, the processor 1280 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into the processor 1280.
In this embodiment, the processor 1280 in the terminal device may execute the following steps:
acquiring a first task to be scheduled to run on a target logic core from a task queue of the target logic core, wherein the task queue comprises at least one task;
checking the task type of a second task running on the sibling logical core corresponding to the target logical core, wherein the target logical core and its sibling logical core form a hyper-thread pair;
if the task type of the second task is different from the task type of the first task, determining a priority relationship between the first task and the second task according to the task type of the first task and the task type of the second task;
and performing task scheduling of the first task according to the priority relationship, so that whichever of the first task and the second task has the higher priority runs on its corresponding logical core.
The electronic device may also be a server. An embodiment of the present application further provides a server; referring to fig. 13, fig. 13 is a structural diagram of a server 1300 provided in this embodiment. The server 1300 may vary considerably with configuration or performance, and may include one or more Central Processing Units (CPUs) 1322 (e.g., one or more processors), a memory 1332, and one or more storage media 1330 (e.g., one or more mass storage devices) storing an application program 1342 or data 1344. The memory 1332 and the storage medium 1330 may provide transitory or persistent storage. The program stored in the storage medium 1330 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Further, the central processor 1322 may communicate with the storage medium 1330 and execute, on the server 1300, the series of instruction operations in the storage medium 1330.
The server 1300 may also include one or more power supplies 1326, one or more wired or wireless network interfaces 1350, one or more input/output interfaces 1358, and/or one or more operating systems 1341, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
In this embodiment, the central processor 1322 in the server 1300 may perform the following steps:
acquiring a first task to be scheduled to run on a target logical core from a task queue of the target logical core, wherein the task queue comprises at least one task;
checking the task type of a second task running on the sibling logical core corresponding to the target logical core, wherein the target logical core and its sibling logical core form a hyper-thread pair;
if the task type of the second task is different from the task type of the first task, determining a priority relationship between the first task and the second task according to the task type of the first task and the task type of the second task;
and performing task scheduling of the first task according to the priority relationship, so that the higher-priority task of the first task and the second task runs on the corresponding logical core.
According to an aspect of the present application, a computer-readable storage medium is provided for storing program code, where the program code is used to perform the task scheduling method described in the foregoing embodiments.
According to an aspect of the present application, a computer program product or computer program is provided, comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the method provided in the various alternative implementations of the foregoing embodiments.
The terms "first," "second," "third," "fourth," and the like (if any) in the description of the present application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or replacements do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present application.

Claims (15)

1. A method for task scheduling, the method comprising:
acquiring a first task to be scheduled to run on a target logical core from a task queue of the target logical core, wherein the task queue comprises at least one task;
checking the task type of a second task running on a sibling logical core corresponding to the target logical core, wherein the sibling logical core and the target logical core form a hyper-thread pair;
if the task type of the second task is different from the task type of the first task, determining a priority relationship between the first task and the second task according to the task type of the first task and the task type of the second task;
and performing task scheduling of the first task according to the priority relationship, so that the higher-priority task of the first task and the second task runs on the corresponding logical core.
2. The method of claim 1, wherein the determining the priority relationship between the first task and the second task according to the task type of the first task and the task type of the second task comprises:
if the task type of the first task is an offline task and the task type of the second task is an online task, determining that the priority relationship is that the second task has a higher priority than the first task;
and the performing task scheduling of the first task according to the priority relationship comprises:
forgoing scheduling the first task onto the target logical core, and keeping the second task running on the sibling logical core.
3. The method of claim 1, wherein the determining the priority relationship between the first task and the second task according to the task type of the first task and the task type of the second task comprises:
if the task type of the first task is an online task and the task type of the second task is an offline task, determining that the priority relationship is that the first task has a higher priority than the second task;
and the performing task scheduling of the first task according to the priority relationship comprises:
notifying the sibling logical core to suspend the second task;
and scheduling the first task to run on the target logical core.
4. The method according to any one of claims 1-3, wherein before the checking the task type of the second task running on the sibling logical core of the target logical core, the method further comprises:
determining state information of a hyper-thread interference isolation switch module;
and if the state information indicates that the hyper-thread interference isolation function is enabled, performing, after the first task to be scheduled to run on the target logical core is acquired from the task queue of the target logical core, the step of checking the task type of the second task running on the sibling logical core of the target logical core.
5. The method of claim 1, wherein the target logical core is one of a plurality of logical cores, and wherein tasks in the task queue of the target logical core are distributed by load balancing.
6. The method of claim 5, wherein the load balancing comprises:
for each logical core in the plurality of logical cores, calculating a target task load of the logical core according to the offline task load of the logical core and the online task load of its corresponding sibling logical core;
and performing load balancing according to the target task load.
7. The method of claim 6, wherein, if a target task in the task queue is an offline task, the performing load balancing according to the target task load comprises:
determining the most heavily loaded logical core according to the target task load;
and if the difference between the target task load of the most heavily loaded logical core and the target task load of the target logical core is greater than a preset threshold, selecting the target task from the most heavily loaded logical core and migrating it to the task queue of the target logical core.
8. The method of claim 7, wherein the load balancing according to the target task load is performed when a load balancing cycle is reached or when the target logical core is idle.
9. The method of claim 6, wherein, if a target task in the task queue is an offline task, the performing load balancing according to the target task load comprises:
determining an idle logical core according to the target task load;
if no online task is running on the sibling logical core corresponding to the idle logical core, determining the idle logical core as the target logical core;
and distributing the target task to the task queue of the target logical core.
10. A task scheduling apparatus, characterized in that the apparatus comprises an acquiring unit, a checking unit, a determining unit, and a scheduling unit, wherein:
the acquiring unit is configured to acquire a first task to be scheduled to run on a target logical core from a task queue of the target logical core, where the task queue comprises at least one task;
the checking unit is configured to check the task type of a second task running on a sibling logical core corresponding to the target logical core, where the target logical core and its sibling logical core form a hyper-thread pair;
the determining unit is configured to, if the task type of the second task is different from the task type of the first task, determine a priority relationship between the first task and the second task according to the task type of the first task and the task type of the second task;
and the scheduling unit is configured to perform task scheduling of the first task according to the priority relationship, so that the higher-priority task of the first task and the second task runs on the corresponding logical core.
11. The apparatus of claim 10, wherein the determining unit is specifically configured to:
if the task type of the first task is an offline task and the task type of the second task is an online task, determine that the second task has a higher priority than the first task;
and the scheduling unit is specifically configured to:
forgo scheduling the first task onto the target logical core, and keep the second task running on the sibling logical core.
12. The apparatus of claim 10, wherein the determining unit is specifically configured to:
if the task type of the first task is an online task and the task type of the second task is an offline task, determine that the first task has a higher priority than the second task;
and the scheduling unit is specifically configured to:
notify the sibling logical core to suspend the second task;
and schedule the first task to run on the target logical core.
13. The apparatus according to any one of claims 10-12, wherein the determining unit is further configured to:
determine state information of a hyper-thread interference isolation switch module;
and if the state information indicates that the hyper-thread interference isolation function is enabled, trigger, after the acquiring unit acquires the first task to be scheduled to run on the target logical core from the task queue of the target logical core, the checking unit to perform the step of checking the task type of the second task running on the sibling logical core of the target logical core.
14. An electronic device for task scheduling, the electronic device comprising a processor and a memory, wherein:
the memory is configured to store program code and transmit the program code to the processor;
and the processor is configured to perform the method of any one of claims 1-9 according to instructions in the program code.
15. A computer-readable storage medium, characterized in that the computer-readable storage medium is configured to store program code for performing the method of any one of claims 1-9.
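As a rough illustration of claim 4, the sibling-type check can be gated on the state of the hyper-thread interference isolation switch module. The sketch below continues the hypothetical C types from the earlier sketch; ht_isolation_enabled and dequeue_first are assumed stand-ins, not names from the patent.

#include <stdbool.h>

/* Assumed stand-in for the switch module's state information. */
static bool ht_isolation_enabled = true;

/* Assumed helper: take the first schedulable task off a queue. */
struct task *dequeue_first(struct task_queue *q);

static struct task *pick_next_task(struct logical_core *core)
{
    struct task *first = dequeue_first(core->queue);

    /* Isolation function off (or nothing queued): schedule normally,
     * without checking the hyper-thread sibling. */
    if (first == NULL || !ht_isolation_enabled)
        return first;

    return pick_next_task_ht(core, first);  /* sibling-type check */
}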
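Claims 6-9 describe the load-balancing side: each logical core's target task load combines its own offline load with the online load of its hyper-thread sibling; the most heavily loaded core is found and an offline task is migrated when the imbalance exceeds a threshold; and a newly distributed offline task goes to an idle core whose sibling runs no online task. The following self-contained C sketch illustrates those rules under assumed data structures; core_load, NR_CORES, and IMBALANCE_THRESHOLD are all illustrative, not values from the patent.

#define NR_CORES 8
#define IMBALANCE_THRESHOLD 100UL   /* assumed units of load */

struct core_load {
    unsigned long offline_load;         /* offline tasks queued on this core */
    unsigned long sibling_online_load;  /* online tasks on the HT sibling */
};

/* Claim 6: a core's target task load combines its own offline load
 * with the online load of its hyper-thread sibling. */
static unsigned long target_load(const struct core_load *c)
{
    return c->offline_load + c->sibling_online_load;
}

/* Claim 7: if the gap between the most heavily loaded core and this
 * core exceeds a preset threshold, pull an offline task from it.
 * Returns the index of the core to pull from, or -1 if balanced. */
static int core_to_pull_from(const struct core_load cores[], int this_core)
{
    int busiest = 0;
    for (int i = 1; i < NR_CORES; i++)
        if (target_load(&cores[i]) > target_load(&cores[busiest]))
            busiest = i;

    if (busiest != this_core &&
        target_load(&cores[busiest]) - target_load(&cores[this_core])
            > IMBALANCE_THRESHOLD)
        return busiest;
    return -1;
}

/* Claim 9: place a new offline task on an idle logical core whose
 * sibling runs no online task. Returns a core index, or -1. */
static int pick_idle_core_for_offline(const struct core_load cores[])
{
    for (int i = 0; i < NR_CORES; i++)
        if (cores[i].offline_load == 0 && cores[i].sibling_online_load == 0)
            return i;
    return -1;
}

Per claim 8, a real scheduler would run core_to_pull_from when a load-balancing cycle elapses or when the target core goes idle.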
CN202110189134.2A 2021-02-19 2021-02-19 Task scheduling method, device, equipment and storage medium Pending CN114968500A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110189134.2A CN114968500A (en) 2021-02-19 2021-02-19 Task scheduling method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110189134.2A CN114968500A (en) 2021-02-19 2021-02-19 Task scheduling method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114968500A 2022-08-30

Family

ID=82954203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110189134.2A Pending CN114968500A (en) 2021-02-19 2021-02-19 Task scheduling method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114968500A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116737347A (en) * 2023-08-14 2023-09-12 南京翼辉信息技术有限公司 Task scheduling control method
CN116737347B (en) * 2023-08-14 2023-10-13 南京翼辉信息技术有限公司 Task scheduling control method

Similar Documents

Publication Publication Date Title
US11797327B2 (en) Dynamic virtual machine sizing
US10437639B2 (en) Scheduler and CPU performance controller cooperation
US10536392B2 (en) Monitoring data streams and scaling computing resources based on the data streams
WO2023071172A1 (en) Task scheduling method and apparatus, device, storage medium, computer program and computer program product
US9411649B2 (en) Resource allocation method
EP2972851A1 (en) Systems and methods of using a hypervisor with guest operating systems and virtual processors
US11620155B2 (en) Managing execution of data processing jobs in a virtual computing environment
CN112380020A (en) Computing power resource allocation method, device, equipment and storage medium
US9798582B2 (en) Low latency scheduling on simultaneous multi-threading cores
CN111488210B (en) Task scheduling method and device based on cloud computing and computer equipment
US9547576B2 (en) Multi-core processor system and control method
CN114637536A (en) Task processing method, computing coprocessor, chip and computer equipment
CN111597044A (en) Task scheduling method and device, storage medium and electronic equipment
CN114968567A (en) Method, apparatus and medium for allocating computing resources of a compute node
CN111930516B (en) Load balancing method and related device
CN109729113B (en) Method, server system and computer program product for managing dedicated processing resources
CN114968500A (en) Task scheduling method, device, equipment and storage medium
US9436505B2 (en) Power management for host with devices assigned to virtual machines
US20220066827A1 (en) Disaggregated memory pool assignment
CN114661415A (en) Scheduling method and computer system
CN113439260A (en) I/O completion polling for low latency storage devices
US11055137B2 (en) CPU scheduling methods based on relative time quantum for dual core environments
CN113032154B (en) Scheduling method and device for virtual CPU, electronic equipment and storage medium
US11347544B1 (en) Scheduling work items based on declarative constraints
CN113032098B (en) Virtual machine scheduling method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination