CN115617494A - Process scheduling method and device in multi-CPU environment, electronic equipment and medium - Google Patents

Process scheduling method and device in multi-CPU environment, electronic equipment and medium

Info

Publication number
CN115617494A
CN115617494A
Authority
CN
China
Prior art keywords
cpu
task
scheduling
judging whether
task queue
Prior art date
Legal status
Granted
Application number
CN202211552730.3A
Other languages
Chinese (zh)
Other versions
CN115617494B (en)
Inventor
Name withheld at the inventor's request
Current Assignee
Nfs China Software Co ltd
Original Assignee
Nfs China Software Co ltd
Priority date
Filing date
Publication date
Application filed by Nfs China Software Co ltd
Priority to CN202211552730.3A
Publication of CN115617494A
Application granted
Publication of CN115617494B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 — Arrangements for program control, e.g. control units
    • G06F9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 — Multiprogramming arrangements
    • G06F9/48 — Program initiating; program switching, e.g. by interrupt
    • G06F9/4806 — Task transfer initiation or dispatching
    • G06F9/4843 — Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 — Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/50 — Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 — Allocation of resources to service a request
    • G06F9/5027 — Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F9/505 — Allocation of resources to service a request, considering the load
    • G06F9/5083 — Techniques for rebalancing the load in a distributed system
    • G06F9/54 — Interprogram communication
    • G06F9/544 — Buffers; shared memory; pipes
    • Y02D — Climate change mitigation technologies in information and communication technologies
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiment of the application provides a process scheduling method, apparatus, electronic device and medium for a multi-CPU environment. The method specifically comprises the following steps: receiving a scheduling request of a first process for a second process; when the scheduling request requires load balancing and the second process corresponds to a process creation behavior or a process coverage behavior, judging whether the first process and the second process have a synchronization relationship and whether the task queue of the first CPU where the first process is located contains exactly 1 task; when the first process and the second process have a synchronization relationship and the task queue of the first CPU contains exactly 1 task, placing the second process on the first CPU for processing. The synchronization relationship between the first process and the second process characterizes that the first process enters a waiting state after the second process is scheduled. The method and device can improve the interaction performance between the first process and the second process.

Description

Process scheduling method and device in multi-CPU environment, electronic equipment and medium
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a process scheduling method and device, electronic equipment and a medium in a multi-CPU environment.
Background
With the development of computer technology, multiple CPUs may be integrated into a device to improve its task-processing capability. When multiple CPUs are integrated into a device, CPUs need to be allocated to processes so that the tasks corresponding to those processes can be executed efficiently.
At present, when a first process in a multi-CPU environment schedules a second process, the most idle CPU is generally allocated to the second process according to the principle of load balancing.
In practical applications, allocating the most idle CPU to the second process may, in some cases, reduce the performance of the interaction between the first process and the second process.
Disclosure of Invention
The embodiment of the application provides a process scheduling method in a multi-CPU environment, which can improve the interaction performance between a first process and a second process.
Correspondingly, the embodiment of the application also provides a process scheduling device, an electronic device and a machine readable medium in the multi-CPU environment, so as to ensure the implementation and application of the method.
In order to solve the above problem, an embodiment of the present application discloses a process scheduling method in a multi-CPU environment, where the method includes:
receiving a scheduling request of a first process for a second process;
when the scheduling request requires load balancing and the second process corresponds to a process creation behavior or a process coverage behavior, judging whether the first process and the second process have a synchronization relationship and whether the task queue of the first CPU where the first process is located contains exactly 1 task;
when the first process and the second process have a synchronization relationship and the task queue of the first CPU contains exactly 1 task, placing the second process on the first CPU for processing; the synchronization relationship between the first process and the second process characterizes that the first process enters a waiting state after the second process is scheduled.
In order to solve the above problem, an embodiment of the present application discloses a process scheduling apparatus in a multi-CPU environment, where the apparatus includes:
a receiving module, configured to receive a scheduling request of a first process for a second process;
a first judging module, configured to judge, when the scheduling request requires load balancing and the second process corresponds to a process creation behavior or a process coverage behavior, whether the first process and the second process have a synchronization relationship and whether the task queue of the first CPU (central processing unit) where the first process is located contains exactly 1 task;
a first scheduling module, configured to place the second process on the first CPU for processing when the first process and the second process have a synchronization relationship and the task queue of the first CPU contains exactly 1 task; the synchronization relationship between the first process and the second process characterizes that the first process enters a waiting state after the second process is scheduled.
Optionally, the apparatus further comprises:
a first searching module, configured to search for a second CPU in an idle state among the plurality of CPUs when the first process and the second process do not have a synchronization relationship and/or the task queue of the first CPU contains a plurality of tasks;
and a second scheduling module, configured to place the second process on the second CPU for processing.
Optionally, the apparatus further comprises:
a second judging module, configured to judge whether the first process and the second process have an affinity relationship when the scheduling request requires load balancing and the second process corresponds to a process wakeup behavior;
a third judging module, configured to judge, when the first process and the second process have an affinity relationship, whether the first process and the second process have a synchronization relationship and whether the task queue of the first CPU (central processing unit) where the first process is located contains exactly 1 task;
and a third scheduling module, configured to place the second process on the first CPU for processing when the first process and the second process have a synchronization relationship and the task queue of the first CPU contains exactly 1 task.
Optionally, the second determining module includes:
a sharing judging module, configured to judge whether the first process and the second process share memory; and/or
a wakeup-frequency judging module, configured to judge whether the first process and the second process have an affinity relationship according to the process wakeup frequency of the first process within a preset time period.
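The affinity judgment above can be sketched as a simple predicate. This is an illustrative sketch only: the function name, parameters, and the wakeup threshold are all assumptions for illustration, not values or APIs given by the patent.

```python
# Illustrative sketch of the affinity judgment: two processes are
# treated as affine if they share memory, or if the first process
# has woken processes frequently within a recent time window.
WAKEUP_THRESHOLD = 4  # assumed tunable, not specified by the patent


def has_affinity(share_memory, wakeups_in_window):
    """share_memory: bool; wakeups_in_window: wakeup count in the preset period."""
    return share_memory or wakeups_in_window >= WAKEUP_THRESHOLD
```

Either condition alone suffices, matching the "and/or" combination of the two judging modules.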
Optionally, the apparatus further comprises:
a second searching module, configured to search, among the plurality of CPUs, for a third CPU that is adjacent to the first CPU and in an idle state when the first process and the second process do not have a synchronization relationship and/or the task queue of the first CPU contains a plurality of tasks;
and a fourth scheduling module, configured to place the second process on the third CPU for processing.
The embodiment of the application also discloses an electronic device, which comprises: a processor; and a memory having executable code stored thereon that, when executed, causes the processor to perform a method as described in embodiments of the present application.
The embodiment of the application also discloses a machine-readable medium, wherein executable codes are stored on the machine-readable medium, and when the executable codes are executed, a processor is caused to execute the method according to the embodiment of the application.
The embodiment of the application has the following advantages:
In the technical solution of the embodiments of the present application, when the scheduling request requires load balancing and the second process corresponds to a process creation behavior or a process coverage behavior, two judgments are performed: judgment A, whether the first process and the second process have a synchronization relationship; and judgment B, whether the task queue of the first CPU where the first process is located contains exactly 1 task. A synchronization relationship between the first process and the second process characterizes that the first process enters a waiting state after scheduling the second process. The task queue of the first CPU containing exactly 1 task characterizes that the queue holds only the task corresponding to the first process.
According to the embodiments of the present application, when the result of judgment A indicates that the first process and the second process have a synchronization relationship (so the first process will enter a waiting state after scheduling the second process) and the result of judgment B indicates that the task queue of the first CPU contains exactly 1 task, the first CPU will be idle once the first process enters the waiting state. Placing the second process on the first CPU in this case therefore allows the second process to run quickly; that is, the scheduling delay of the second process is reduced. In addition, because the first process and the second process both run on the first CPU, and inter-process communication within the same CPU is generally faster than inter-process communication across different CPUs, the embodiments of the present application can improve the interaction performance between the first process and the second process.
Drawings
FIG. 1 is a flowchart illustrating steps of a process scheduling method in a multi-CPU environment according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating steps of a process scheduling method in a multi-CPU environment according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating steps of a process scheduling method in a multi-CPU environment according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a process scheduling apparatus in a multi-CPU environment according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an apparatus provided in an embodiment of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
In order to make the embodiments of the present application better understood, the following description is provided for the concepts related to the embodiments of the present application:
a task (task) may refer to an activity that is completed by software. A task may be a process or a thread. In short, it refers to a series of operations that collectively achieve a certain purpose. For example, data is read and placed into memory. This task may be implemented as a process or as a thread.
A process is a running activity of a program in a computer on a data set, and is a basic unit for resource allocation and scheduling by a system. For example, when a user runs a program of the user, the system creates a process and allocates resources to the process, including various tables, memory space, disk space, I/O (Input/Output) devices, and the like; then, the process is put into a ready queue of the process; the process scheduler in a multi-CPU environment chooses it, allocates CPU and other related resources to it, and the process is actually running. Therefore, a process is the unit of concurrent execution in the system.
Process states: a process may be in a new, running, ready, waiting, or terminated state, among others. The running state refers to the state in which a process occupies a processor and is executing; the process has acquired a CPU and its program is running. The ready state refers to the state in which a process meets the conditions for running and is waiting for the system to allocate a processor to it.
The waiting state, also called the blocked or sleeping state, refers to the state in which a process does not meet the conditions for running and is waiting for some event to complete; for example, a process temporarily stops running while waiting for that event to occur.
Multi-CPU environment: the number of cores in a CPU today ranges from single core and dual core to 4, 8, or even 10 cores. A multi-core architecture may also distinguish between big and little cores. The cores are differentiated because their performance and power consumption differ, and they are grouped into clusters (little cores in one cluster, big cores in another); the CPU frequencies within the same cluster are currently regulated synchronously.
Load balancing in a multi-CPU environment: to reduce interference between CPUs, there is a task queue on each CPU. In the running process, some CPUs are in a busy state, and some CPUs are in an idle state, so that load balancing is needed. The process of load balancing is a process of transferring tasks from a CPU with a heavy load to a CPU with a relatively light load for execution.
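The migration step described above can be sketched as follows. This is a toy simulation, not kernel code: the per-CPU queues are plain lists, and the imbalance threshold of 2 is an assumption chosen so that a one-task difference is not shuffled back and forth.

```python
# Sketch of one load-balancing step: move a task from the
# most-loaded CPU's queue to the least-loaded CPU's queue.
def rebalance(queues):
    """queues: dict cpu_id -> list of task names. Mutated in place."""
    busiest = max(queues, key=lambda c: len(queues[c]))
    idlest = min(queues, key=lambda c: len(queues[c]))
    # Only migrate when the imbalance is at least 2 tasks; moving a
    # single-task difference would merely relocate the imbalance.
    if len(queues[busiest]) - len(queues[idlest]) >= 2:
        queues[idlest].append(queues[busiest].pop())
    return queues
```

For example, with `{0: ["a", "b", "c"], 1: []}`, one call moves one task from CPU 0 to CPU 1.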
CPU topology: the kernel uses scheduling domains (sched_domain) to describe the hierarchical relationship between CPUs.
Scheduling domain: a set of CPUs that share attributes and scheduling policies and can balance load with each other. Scheduling domains are hierarchical; a multi-level system has multiple levels of domains. Each scheduling domain contains one or more CPU groups (struct sched_group).
For example, a 4-core SOC (System on Chip) includes CPU0, CPU1, CPU2 and CPU3. CPU0 and CPU1 belong to cluster0 and share an L2 cache (second-level cache); CPU2 and CPU3 belong to cluster1 and share an L2 cache. cluster0 or cluster1 can each be considered a scheduling domain containing two scheduling groups, with one CPU per scheduling group. Of course, in other cases (e.g., an 8-core SOC), a scheduling group may contain two or more CPUs. The entire SOC can be considered a higher-level scheduling domain containing two scheduling groups: cluster0 belongs to one scheduling group and cluster1 to the other.
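The 4-core topology above can be modeled as plain data. This is a hypothetical illustration of the two-level hierarchy (names and structure are ours, not the kernel's `sched_domain` layout): each level holds domains, and each domain holds scheduling groups listed as CPU sets.

```python
# Model of the example SOC: two clusters of two CPUs each.
# Level 0 (lowest): one domain per cluster, one CPU per group.
# Level 1 (highest): the whole SOC, one group per cluster.
DOMAINS = [
    [{"name": "cluster0", "groups": [[0], [1]]},
     {"name": "cluster1", "groups": [[2], [3]]}],
    [{"name": "soc", "groups": [[0, 1], [2, 3]]}],
]


def domain_of(cpu, level):
    """Return the scheduling domain containing `cpu` at `level`."""
    for dom in DOMAINS[level]:
        if any(cpu in grp for grp in dom["groups"]):
            return dom
    raise ValueError(f"CPU {cpu} not in any level-{level} domain")
```

For example, CPU1's level-0 domain is cluster0, while its level-1 domain is the whole SOC.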
In the conventional technology, when a first process in a multi-CPU environment schedules a second process, an idle CPU is generally allocated to the second process according to the load-balancing principle. For example, when the second process is a forked (copied) process, the conventional technique uses a slow path (e.g., a find-idlest-CPU function such as the kernel's find_idlest_cpu) to find the most idle CPU and places the second process on that CPU for execution. The fork system call is used to create a new process, called the child process. However, in some cases the first process and the second process need to communicate via a pipe or similar mechanism, and if the second process is placed on an arbitrary idle CPU, the performance of the interaction between the first process and the second process suffers.
To address the technical problem of reduced interaction performance between a first process and a second process, an embodiment of the present application provides a process scheduling method for a multi-CPU environment. The method specifically comprises: receiving a scheduling request of a first process for a second process; when the scheduling request requires load balancing and the second process corresponds to a process creation behavior or a process coverage behavior, judging whether the first process and the second process have a synchronization relationship and whether the task queue of the first CPU (central processing unit) where the first process is located contains exactly 1 task; and when the first process and the second process have a synchronization relationship and the task queue of the first CPU contains exactly 1 task, placing the second process on the first CPU for processing. The synchronization relationship characterizes that the first process enters a waiting state after the second process is scheduled.
In the embodiments of the present application, when the scheduling request requires load balancing and the second process corresponds to a process creation behavior or a process coverage behavior, two judgments are performed: judgment A, whether the first process and the second process have a synchronization relationship; and judgment B, whether the task queue of the first CPU where the first process is located contains exactly 1 task. A synchronization relationship characterizes that the first process enters a waiting state after scheduling the second process. The task queue of the first CPU containing exactly 1 task characterizes that the queue holds only the task corresponding to the first process.
According to the embodiments of the present application, when the result of judgment A indicates a synchronization relationship (so the first process will enter a waiting state after scheduling the second process) and the result of judgment B indicates that the task queue of the first CPU contains exactly 1 task, the first CPU will be idle once the first process enters the waiting state. Placing the second process on the first CPU in this case therefore allows it to run quickly, reducing its scheduling delay. Moreover, because both processes run on the first CPU, and inter-process communication within the same CPU is generally faster than across different CPUs, the embodiments of the present application can improve the interaction performance between the first process and the second process.
The process creation behavior corresponding to the second process in the embodiments of the present application applies to the case where the first process duplicates itself to create the second process, and the process coverage behavior applies to the case where the second process's image replaces (covers) the first process.
It should be noted that performing only one of the two judgments (judgment A or judgment B) risks degrading the performance of the second process. For example, if judgment A is performed without judgment B, then when the task queue of the first CPU contains multiple tasks, the second process must wait for the first CPU to execute those tasks, so its scheduling delay remains large. Conversely, if judgment B is performed without judgment A, the first process may perform other operations after scheduling the second process instead of entering the waiting state; in that case the second process must wait for the first process to finish those operations, so its scheduling delay again remains large.
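The combined placement decision can be sketched as a small function. This is a simulation of the logic described above, not kernel or patent code; the parameter names and the injected `find_idle_cpu` fallback are illustrative assumptions.

```python
# Sketch of the placement decision: judgments A and B together.
def place_second_process(has_sync_relation, first_cpu_queue_len,
                         first_cpu, find_idle_cpu):
    """Return the CPU on which the second process should be placed.

    has_sync_relation    -- judgment A: first process will wait after scheduling
    first_cpu_queue_len  -- judgment B input: tasks queued on the first CPU
    find_idle_cpu        -- fallback load-balancing search (callable)
    """
    if has_sync_relation and first_cpu_queue_len == 1:
        return first_cpu      # fast path: reuse the caller's CPU
    return find_idle_cpu()    # otherwise fall back to load balancing
```

Only when both judgments hold does the second process land on the first CPU; dropping either condition sends it down the load-balancing path, matching the failure cases discussed above.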
Method embodiment 1
Referring to fig. 1, a schematic flowchart illustrating steps of a process scheduling method in a multi-CPU environment according to an embodiment of the present application is shown, where the method may specifically include the following steps:
step 101, receiving a scheduling request of a first process for a second process;
step 102, when the scheduling request requires load balancing and the second process corresponds to a process creation behavior or a process coverage behavior, judging whether the first process and the second process have a synchronization relationship and whether the task queue of the first CPU (central processing unit) where the first process is located contains exactly 1 task;
step 103, when the first process and the second process have a synchronization relationship and the task queue of the first CPU contains exactly 1 task, placing the second process on the first CPU for processing; the synchronization relationship characterizes that the first process enters a waiting state after the second process is scheduled.
The method of the embodiment of the application can be applied to a multi-CPU environment of SOC and is used for carrying out multi-CPU load balancing under the condition of reducing the scheduling delay of the second process.
In step 101, the first process may call a process scheduling function, which in turn calls a process placement function; the call request for the process placement function may be referred to as the scheduling request. The process placement function may execute the method of the embodiments of the present application according to the scheduling request.
The scheduling request may include a scheduling parameter. The scheduling parameters may include: and scheduling scene parameters. The scheduling scenario parameters may include: load balancing parameters and second process scenario parameters, etc. The load balancing parameter may represent that the scheduling request requires load balancing, and the second process scenario parameter may represent a scenario of the second process. For example, the second process scenario parameters may include: a process creation parameter or a process coverage parameter, and the like, where the process creation parameter may characterize a process creation behavior corresponding to the second process, and the process coverage parameter may characterize a process coverage behavior corresponding to the second process.
The process creation behavior may correspond to the fork call described previously. The process override behavior may correspond to exec calls. If the exec call is successful, the calling process will be overwritten and then execution will start from the entry of the new process. This results in a new process, but the process identifier of the new process is the same as the calling process. That is, exec does not create a new process concurrent with the calling process, but replaces the calling process with the new process.
In practical applications, the load balancing parameter and the second process scenario parameter may be set as two independent parameters, or they may be combined into a single parameter. Taking the combined setting as an example, one scheduling scenario parameter may carry both the load balancing parameter and the second process scenario parameter.
The process scheduling function may determine the scheduling parameters included in the scheduling request according to the call parameters carried in the first process's call request for the process scheduling function. For example, if the call parameters include a load balancing parameter, the scheduling parameters may include that load balancing parameter; the scheduling parameters may also include information such as the second process scenario parameter.
In step 102, the scheduling parameter included in the scheduling request may be analyzed to determine whether the scheduling request requires load balancing and whether the second process corresponds to a process creation behavior or a process coverage behavior.
In a specific implementation, if the scheduling parameters include the string corresponding to load balancing, the scheduling request may be considered to require load balancing. Likewise, if the scheduling parameters include the string corresponding to a process creation behavior or a process coverage behavior, the second process may be determined to correspond to that behavior.
When the scheduling request requires load balancing and the second process corresponds to a process creation behavior or a process coverage behavior, the embodiments of the present application may perform two judgments: judgment A, whether the first process and the second process have a synchronization relationship; and judgment B, whether the task queue of the first CPU where the first process is located contains exactly 1 task.
For judgment A, whether the first process and the second process have a synchronization relationship may be determined according to a synchronization flag between them. For example, a flag value of 1 may characterize that the two processes have a synchronization relationship, and a value of 0 that they do not.
In practical applications, the kernel of the operating system may maintain the value of the synchronization flag. The process scheduling function may pass the value of the synchronization flag into the scheduling request in the case of calling the process placement function. Of course, the process placement function may also have the ability to obtain the value of the synchronization flag.
For judgment B, the task queue of the first CPU where the first process is located may be accessed to judge whether it contains exactly 1 task; if so, that single task may be considered to be the task corresponding to the first process.
In step 103, if the first process and the second process have a synchronization relationship and the task queue of the first CPU includes 1 task, the second process is placed in the first CPU for processing, so that the second process executes the corresponding task using the resource of the first CPU.
In another implementation manner of the embodiment of the present application, when the first process and the second process do not have a synchronization relationship and/or when a plurality of tasks are included in a task queue of the first CPU, a second CPU in an idle state may be searched for among the plurality of CPUs, and the second process may be placed in the second CPU for processing.
In a specific implementation, the second CPU in the idle state may be searched for among the plurality of CPUs according to a topology of the CPUs. For example, a scheduling group in an idle state may be sought in the scheduling domain, and then a second CPU in an idle state may be selected within that scheduling group. According to the embodiment of the application, the idle state can be determined according to the number of tasks in the task queue; for example, a task queue containing 0 tasks can indicate that the CPU is in an idle state. Therefore, the second CPU of the embodiment of the present application may be a CPU whose task number is 0.
It should be noted that the scheduling domain in the embodiment of the present application may be a multi-level scheduling domain. If the second CPU in the idle state is not found in the (i+1)-th-level scheduling domain, the scheduling group in the idle state may be sought in the i-th-level scheduling domain, and then the second CPU in the idle state may be selected within that scheduling group, where i may be a positive integer.
For example, a 4-core SOC includes CPU0, CPU1, CPU2 and CPU3, and comprises two levels of scheduling domains: the first-level scheduling domain comprises two scheduling groups, cluster0 and cluster1, and the second-level scheduling domains are the scheduling domains corresponding to cluster0 and cluster1 respectively. In the case that the second CPU in the idle state is not found in the second-level scheduling domain, the scheduling group in the idle state may be sought in the first-level scheduling domain, and then the second CPU in the idle state may be selected within that scheduling group.
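The level-by-level fallback can be sketched as below. The topology mirrors the 4-core example (cluster0 = {CPU0, CPU1}, cluster1 = {CPU2, CPU3}); the text does not specify whether a "scheduling group in an idle state" must be fully idle or merely contain an idle CPU, so the sketch assumes the latter, and all function and variable names are illustrative.

```python
def find_idle_cpu(domains, task_counts):
    """domains: scheduling-domain levels ordered innermost first (level i+1
    before level i); each level is a list of scheduling groups, each group a
    list of CPU ids. task_counts maps CPU id -> run-queue length."""
    for level in domains:                        # fall back level by level
        for group in level:
            # Treat a group as "in an idle state" if it contains a CPU
            # whose task queue holds 0 tasks (one possible reading).
            idle = [c for c in group if task_counts[c] == 0]
            if idle:
                return idle[0]
    return None

# Innermost: the second-level domain of cluster1 (CPU2/CPU3); outermost: the
# first-level domain whose scheduling groups are cluster0 and cluster1.
topology = [
    [[2, 3]],
    [[0, 1], [2, 3]],
]
```

If CPU3 is idle it is found in the inner domain; if only CPU0 is idle, the search falls back to the first-level domain and still succeeds.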
To sum up, in the process scheduling method in a multi-CPU environment according to the embodiment of the present application, when the scheduling request requires load balancing and the second process corresponds to a process creation behavior or a process coverage behavior, the result of judgment A indicating that the first process and the second process have a synchronization relationship can represent that the first process will enter a waiting state after scheduling the second process, and the result of judgment B indicating that the task queue of the first CPU contains 1 task means that the first CPU can be in an idle state once the first process enters the waiting state. Therefore, in this case, placing the second process in the first CPU for processing allows the second process to be run quickly; that is, the scheduling delay of the second process can be reduced. Moreover, because the first process and the second process both run on the first CPU, and the speed of inter-process communication in the same CPU environment is generally higher than that in different CPU environments, the embodiment of the present application can improve the interaction performance between the first process and the second process.
Method example II
Referring to fig. 2, a schematic flowchart illustrating steps of a process scheduling method in a multi-CPU environment according to an embodiment of the present application is shown, where the method may specifically include the following steps:
step 201, receiving a scheduling request of a first process for a second process;
step 202, under the condition that the scheduling request requires load balancing and the second process corresponds to a process creation behavior or a process coverage behavior, judging whether a synchronization relationship exists between the first process and the second process and judging whether a task queue of a first CPU where the first process is located contains 1 task;
step 203, placing the second process in the first CPU for processing under the condition that the first process and the second process have a synchronization relationship and the task queue of the first CPU contains 1 task; the synchronization relationship between the first process and the second process characterizes that the first process will enter a waiting state after scheduling the second process;
with respect to the first embodiment of the method shown in fig. 1, the method of this embodiment may further include:
step 204, under the condition that the scheduling request requires load balancing and the second process corresponds to a process awakening behavior, judging whether the first process and the second process have an affinity relationship;
step 205, under the condition that the first process and the second process have an affinity relationship, determining whether the first process and the second process have a synchronization relationship, and determining whether a task queue of a first CPU in which the first process is located includes 1 task;
step 206, in case that the first process and the second process have a synchronization relationship and the task queue of the first CPU includes 1 task, placing the second process in the first CPU for processing.
In the embodiment of the application, the process creation behavior corresponding to the second process can apply to the case where the first process copies the second process; the process coverage behavior can apply to the case where the second process covers the first process; and the process wake-up behavior can apply to the case where the second process wakes up the first process.
Steps 204 to 206 in the embodiment of the present application may be used to reduce the scheduling delay of the second process when the second process wakes up the first process.
Step 204, when the scheduling request requires load balancing and the second process corresponds to a process wake-up behavior, determining whether the first process and the second process have an affinity relationship, where the corresponding determination method may include:
Judgment mode 1: judging whether the first process and the second process involve sharing; and/or
Judgment mode 2: judging whether the first process and the second process have an affinity relationship according to the number of times the first process performs process wake-ups within a preset time period.
For judgment mode 1, the sharing between the first process and the second process may be the sharing of resources such as environment variables and memory space. For example, if a data pipe is shared between the first process and the second process, the first process and the second process can be considered to have an affinity relationship. Conversely, if the first process and the second process do not involve sharing, they may be considered to have no affinity relationship.
For judgment mode 2, when the number of process wake-ups performed by the first process within the preset time period exceeds a number threshold, the wake-up behavior of the first process may be considered too frequent, and the first process and the second process may be considered to have no affinity relationship. Conversely, when that number does not exceed the threshold, the first process and the second process may be considered to have an affinity relationship.
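The two judgment modes can be combined as sketched below. The text allows mode 1 and/or mode 2; the sketch assumes both must hold, and the threshold value, window length, and all names are illustrative assumptions.

```python
WAKEUP_THRESHOLD = 8   # assumed number threshold for wake-ups per window
WINDOW_SECONDS = 1.0   # assumed length of the preset time period

def has_affinity(shared_resources, recent_wakeups):
    """shared_resources: set of resources (e.g. a data pipe, shared memory,
    environment variables) shared by the two processes; recent_wakeups:
    wake-up count of the first process within the preset window."""
    # Mode 1: any sharing (e.g. a pipe) suggests a cache-affinity benefit.
    shares = len(shared_resources) > 0
    # Mode 2: overly frequent wake-ups suggest the relationship is not special.
    calm = recent_wakeups <= WAKEUP_THRESHOLD
    return shares and calm
```

A process pair that shares a pipe and wakes up infrequently passes; either no sharing or too many wake-ups denies affinity.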
In step 205, if the first process and the second process have an affinity relationship, two determinations are performed: judgment A, judging whether the first process and the second process have a synchronization relationship; and judgment B, judging whether the task queue of the first CPU where the first process is located contains 1 task. The synchronization relationship between the first process and the second process can represent that the first process will enter a waiting state after scheduling the second process. The task queue of the first CPU containing 1 task can represent that the task queue of the first CPU contains only the task corresponding to the first process.
According to the embodiment of the application, the result of judgment A indicating that the first process and the second process have a synchronization relationship can represent that the first process will enter a waiting state after scheduling the second process, and the result of judgment B indicating that the task queue of the first CPU contains 1 task means that the first CPU can be in an idle state once the first process enters the waiting state. Therefore, in this case, placing the second process in the first CPU for processing allows the second process to be run quickly; that is, the scheduling delay of the second process can be reduced. In addition, because the first process and the second process both run on the first CPU, and the speed of inter-process communication in the same CPU environment is generally higher than that in different CPU environments, the embodiment of the present application can improve the interaction performance between the first process and the second process.
In another implementation manner of the present application, when the first process and the second process do not have a synchronization relationship and/or when a plurality of tasks are included in a task queue of the first CPU, a third CPU that is adjacent to the first CPU and is in an idle state may be searched for among the plurality of CPUs, and the second process may be placed in the third CPU for processing.
In a specific implementation, the search may be centered on the first CPU, so as to obtain a third CPU that is closest to the first CPU and is in an idle state.
For example, a 4-core SOC includes: the first CPU is assumed to be CPU0, a third CPU may be first searched in a scheduling domain corresponding to CPU0, and CPU1 is assumed to be in an idle state, and then CPU1 may be used as a CPU for placing a second process. Since the CPU0 and the CPU1 share the L2 cache, data can be read from the common L2 cache of the CPU0 and the CPU1 in the process of executing the task of the second process on the CPU1, and thus the task processing efficiency of the second process can be improved.
In summary, in the process scheduling method in a multi-CPU environment according to the embodiment of the present application, when the scheduling request requires load balancing and the second process corresponds to a process wake-up behavior, it is determined whether the first process and the second process have an affinity relationship. An affinity relationship between the first process and the second process can indicate that resources such as the cache corresponding to the first CPU are meaningful for the second process. Therefore, by determining whether the first process and the second process have an affinity relationship, data can be read from the cache of the first CPU while the task of the second process executes on the first CPU, which can improve the task processing efficiency of the second process.
Further, when the first process and the second process have an affinity relationship, judgment A and judgment B are performed. The result of judgment A indicating that the first process and the second process have a synchronization relationship can represent that the first process will enter a waiting state after scheduling the second process, and the result of judgment B indicating that the task queue of the first CPU contains 1 task means that the first CPU can be in an idle state once the first process enters the waiting state. Therefore, in this case, placing the second process in the first CPU for processing allows the second process to be run quickly; that is, the scheduling delay of the second process can be reduced. In addition, because the first process and the second process both run on the first CPU, and the speed of inter-process communication in the same CPU environment is generally higher than that in different CPU environments, the embodiment of the present application can improve the interaction performance between the first process and the second process.
Method embodiment three
Referring to fig. 3, a schematic flowchart illustrating steps of a process scheduling method in a multi-CPU environment according to an embodiment of the present application is shown, where the method may specifically include the following steps:
step 301, receiving a scheduling request of a first process for a second process;
step 302, judging whether the scheduling request requires load balancing and whether the second process corresponds to a process creating behavior, a process covering behavior or a process awakening behavior, if so, executing step 303;
step 303, determining whether the first process and the second process have an affinity relationship, if yes, executing step 304;
step 304, determining whether a synchronization relationship exists between the first process and the second process, and determining whether a task queue of the first CPU in which the first process is located includes 1 task, if yes, performing step 305;
step 305, determining whether the second process corresponds to a process creation behavior or a process coverage behavior, if so, executing step 306, otherwise, executing step 307;
step 306, placing the second process in the first CPU for processing;
step 307, judging whether the first CPU is idle, if so, executing step 306, otherwise, executing step 308;
step 308, executing the judgment A and the judgment B, specifically, judging whether the first process and the second process have a synchronization relationship, and judging whether a task queue of a first CPU where the first process is located contains 1 task, if so, executing the step 306, otherwise, executing the step 309;
step 309, searching a third CPU which is adjacent to the first CPU and is in an idle state in the plurality of CPUs, and placing the second process in the third CPU for processing.
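The end-to-end flow of steps 301 to 309 can be sketched as one decision function. The `ctx` dictionary bundling the judgment inputs, the behavior labels, and the `"default_path"` fallback for failed checks (which the text leaves unspecified) are all illustrative assumptions.

```python
def schedule(ctx):
    # Step 302: only handle load-balancing requests for the three behaviors.
    if not (ctx["load_balance"] and ctx["behavior"] in ("create", "cover", "wake")):
        return "default_path"
    # Step 303: affinity gate between the first and second process.
    if not ctx["affinity"]:
        return "default_path"
    # Step 304: judgment A (synchronization) and judgment B (single-task queue).
    if not (ctx["sync"] and ctx["first_cpu_tasks"] == 1):
        return "default_path"
    # Steps 305/306: creation or coverage -> place on the first CPU directly.
    if ctx["behavior"] in ("create", "cover"):
        return "first_cpu"
    # Step 307: wake-up path, take the first CPU if it is idle...
    if ctx["first_cpu_idle"]:
        return "first_cpu"
    # Step 308: ...or if judgments A and B still hold on a second execution,
    # which captures changes in the first CPU's task queue in the meantime.
    if ctx["sync_recheck"] and ctx["first_cpu_tasks_recheck"] == 1:
        return "first_cpu"
    # Step 309: otherwise fall back to an adjacent idle CPU.
    return "adjacent_idle_cpu"
```

A creation request that passes steps 302 to 304 goes straight to the first CPU, while a wake-up request whose re-check fails is routed to an adjacent idle CPU.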
Step 303 determines whether the first process and the second process have an affinity relationship; if they do, further determinations decide whether the second process is placed in the first CPU for processing. An affinity relationship between the first process and the second process can indicate that resources such as the cache corresponding to the first CPU are meaningful for the second process. Therefore, by determining whether the first process and the second process have an affinity relationship, data can be read from the cache of the first CPU while the task of the second process executes on the first CPU, which can improve the task processing efficiency of the second process.
The first execution of judgment A and judgment B in step 304 and the second execution of judgment A and judgment B in step 308 can accommodate changes in the task queue of the first CPU. For example, in the case of a newly added task in the task queue of the first CPU, step 308 can capture the corresponding change in time.
To sum up, in the process scheduling method in a multi-CPU environment according to the embodiment of the present application, when the scheduling request requires load balancing and the second process corresponds to a process creation behavior, a process coverage behavior, or a process wake-up behavior, it is determined whether the first process and the second process have an affinity relationship. An affinity relationship between the first process and the second process can indicate that resources such as the cache corresponding to the first CPU are meaningful for the second process. Therefore, by determining whether the first process and the second process have an affinity relationship, data can be read from the cache of the first CPU while the task of the second process executes on the first CPU, which can improve the task processing efficiency of the second process.
Further, when the first process and the second process have an affinity relationship, judgment A and judgment B are performed. The result of judgment A indicating that the first process and the second process have a synchronization relationship can represent that the first process will enter a waiting state after scheduling the second process, and the result of judgment B indicating that the task queue of the first CPU contains 1 task means that the first CPU can be in an idle state once the first process enters the waiting state. Therefore, in this case, placing the second process in the first CPU for processing allows the second process to be run quickly; that is, the scheduling delay of the second process can be reduced. Moreover, because the first process and the second process both run on the first CPU, and the speed of inter-process communication in the same CPU environment is generally higher than that in different CPU environments, the embodiment of the present application can improve the interaction performance between the first process and the second process.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combinations of acts, but those skilled in the art will recognize that the embodiments are not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the embodiments. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and the acts involved are not necessarily required by the embodiments of the application.
On the basis of the foregoing embodiment, this embodiment further provides a process scheduling apparatus in a multi-CPU environment, and with reference to fig. 4, the apparatus may specifically include: a receiving module 401, a first judging module 402 and a first scheduling module 403.
The receiving module 401 is configured to receive a scheduling request of a first process for a second process;
a first determining module 402, configured to determine whether the first process and the second process have a synchronization relationship and whether a task queue of a first CPU in which the first process is located includes 1 task, when the scheduling request requires load balancing and a process creation behavior or a process coverage behavior corresponding to the second process is detected;
a first scheduling module 403, configured to, in a case that a synchronization relationship exists between the first process and the second process and 1 task is included in a task queue of the first CPU, place the second process in the first CPU for processing; the first process and the second process have a synchronous relation, and the first process is characterized to enter a waiting state after the second process is dispatched.
Optionally, the apparatus may further include:
the first searching module is used for searching the second CPU in an idle state in a plurality of CPUs under the condition that the first process and the second process do not have a synchronization relationship and/or a plurality of tasks are contained in a task queue of the first CPU;
and the second scheduling module is used for placing the second process in the second CPU for processing.
Optionally, the apparatus may further include:
the second judgment module is used for judging whether the first process and the second process have an affinity relation or not under the condition that the scheduling request requires load balancing and the second process corresponds to a process awakening behavior;
the third judging module is used for judging whether the first process and the second process have a synchronization relationship and judging whether the task queue of the first CPU where the first process is located contains 1 task, under the condition that the first process and the second process have an affinity relationship;
and the third scheduling module is used for placing the second process in the first CPU for processing under the condition that the first process and the second process have a synchronization relationship and 1 task is contained in a task queue of the first CPU.
Optionally, the second determining module may include:
the sharing judgment module is used for judging whether the first process and the second process are shared or not; and/or
And the awakening frequency judging module is used for judging whether the first process and the second process have an affinity relation or not according to the process awakening frequency of the first process in the preset time period.
Optionally, the apparatus may further include:
the second searching module is used for searching a third CPU which is adjacent to the first CPU and is in an idle state in a plurality of CPUs under the condition that the first process and the second process do not have a synchronous relation and/or a plurality of tasks are contained in a task queue of the first CPU;
and the third scheduling module is used for placing the second process in the third CPU for processing.
To sum up, in the process scheduling apparatus in a multi-CPU environment according to the embodiment of the present application, when the scheduling request requires load balancing and the second process corresponds to a process creation behavior or a process coverage behavior, the result of judgment A indicating that the first process and the second process have a synchronization relationship can represent that the first process will enter a waiting state after scheduling the second process, and the result of judgment B indicating that the task queue of the first CPU contains 1 task means that the first CPU can be in an idle state once the first process enters the waiting state. Therefore, in this case, placing the second process in the first CPU for processing allows the second process to be run quickly; that is, the scheduling delay of the second process can be reduced. Moreover, because the first process and the second process both run on the first CPU, and the speed of inter-process communication in the same CPU environment is generally higher than that in different CPU environments, the embodiment of the present application can improve the interaction performance between the first process and the second process.
Embodiments of the present application provide a non-volatile readable storage medium, where one or more modules (programs) are stored, and when the one or more modules are applied to a device, the one or more modules may cause the device to execute instructions (instructions) of method steps in embodiments of the present application.
Embodiments of the present application provide one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause an electronic device to perform the methods as described in one or more of the above embodiments. In the embodiment of the present application, the electronic device includes various types of devices such as a terminal device and a server (cluster).
Embodiments of the disclosure may be implemented as an apparatus for performing desired configurations using any suitable hardware, firmware, software, or any combination thereof, which may include: and the electronic equipment comprises terminal equipment, a server (cluster) and the like. Fig. 5 schematically illustrates an example apparatus 1100 that may be used to implement various embodiments described herein.
For one embodiment, fig. 5 illustrates an example apparatus 1100 having one or more processors 1102, a control module (chipset) 1104 coupled to at least one of the processor(s) 1102, a memory 1106 coupled to the control module 1104, a non-volatile memory (NVM)/storage 1108 coupled to the control module 1104, one or more input/output devices 1110 coupled to the control module 1104, and a network interface 1112 coupled to the control module 1104.
The processor 1102 may include one or more single-core or multi-core processors, and the processor 1102 may include any combination of general-purpose or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some embodiments, the apparatus 1100 can be implemented as a terminal device, a server (cluster), or the like in the embodiments of the present application.
In some embodiments, the apparatus 1100 may include one or more computer-readable media (e.g., the memory 1106 or the NVM/storage 1108) having instructions 1114 and one or more processors 1102 in combination with the one or more computer-readable media configured to execute the instructions 1114 to implement modules to perform the actions described in this disclosure.
For one embodiment, control module 1104 may include any suitable interface controllers to provide any suitable interface to at least one of the processor(s) 1102 and/or to any suitable device or component in communication with control module 1104.
Control module 1104 may include a memory controller module to provide an interface to memory 1106. The memory controller module may be a hardware module, a software module, and/or a firmware module.
The memory 1106 may be used to load and store data and/or instructions 1114 for the device 1100, for example. For one embodiment, memory 1106 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, the memory 1106 may include a double data rate type four synchronous dynamic random access memory (DDR 4 SDRAM).
For one embodiment, control module 1104 may include one or more input/output controllers to provide an interface to NVM/storage 1108 and input/output device(s) 1110.
For example, NVM/storage 1108 may be used to store data and/or instructions 1114. NVM/storage 1108 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more hard disk drive(s) (HDD (s)), one or more Compact Disc (CD) drive(s), and/or one or more Digital Versatile Disc (DVD) drive (s)).
NVM/storage 1108 may include storage resources that are physically part of the device on which apparatus 1100 is installed, or may be accessible by the device without necessarily being part of the device. For example, NVM/storage 1108 may be accessed over a network via input/output device(s) 1110.
Input/output device(s) 1110 may provide an interface for apparatus 1100 to communicate with any other suitable device; input/output devices 1110 may include communication components, audio components, sensor components, and so forth. Network interface 1112 may provide an interface for device 1100 to communicate over one or more networks, and device 1100 may communicate wirelessly with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, for example by accessing a wireless network based on a communication standard such as Wi-Fi, 2G, 3G, 4G, 5G, or a combination thereof.
For one embodiment, at least one of the processor(s) 1102 may be packaged together with logic for one or more controller(s) (e.g., memory controller module) of the control module 1104. For one embodiment, at least one of the processor(s) 1102 may be packaged together with logic for one or more controller(s) of control module 1104 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 1102 may be integrated on the same die with logic for one or more controller(s) of control module 1104. For one embodiment, at least one of the processor(s) 1102 may be integrated on the same die with logic for one or more controller(s) of control module 1104 to form a system on chip (SoC).
In various embodiments, the apparatus 1100 may be, but is not limited to: a server, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.), among other terminal devices. In various embodiments, the apparatus 1100 may have more or fewer components and/or different architectures. For example, in some embodiments, device 1100 includes one or more cameras, keyboards, liquid Crystal Display (LCD) screens (including touch screen displays), non-volatile memory ports, multiple antennas, graphics chips, application Specific Integrated Circuits (ASICs), and speakers.
The detection device can adopt a main control chip as a processor or a control module, sensor data, position information and the like are stored in a memory or an NVM/storage device, a sensor group can be used as an input/output device, and a communication interface can comprise a network interface.
For the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for relevant points.
The embodiments in the present specification are all described in a progressive manner, and each embodiment focuses on differences from other embodiments, and portions that are the same and similar between the embodiments may be referred to each other.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. The terms "comprises," "comprising," and any other variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or terminal device that comprises the element.
The process scheduling method and apparatus in a multi-CPU environment, the electronic device, and the machine-readable medium provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and embodiments of the present application; the description of the above embodiments is only intended to help in understanding the method and its core ideas. Meanwhile, a person skilled in the art may, following the ideas of the present application, make changes to the specific implementation and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A process scheduling method in a multi-CPU environment, the method comprising:
receiving a scheduling request of a first process for a second process;
under the condition that the scheduling request requires load balancing and the second process corresponds to a process creation behavior or a process overlay behavior, judging whether the first process and the second process have a synchronization relationship and whether a task queue of a first CPU on which the first process is located contains exactly 1 task;
under the conditions that the first process and the second process have a synchronization relationship and the task queue of the first CPU contains exactly 1 task, placing the second process on the first CPU for processing; wherein the synchronization relationship between the first process and the second process indicates that the first process enters a waiting state after the second process is scheduled.
2. The method of claim 1, further comprising:
under the condition that the first process and the second process do not have a synchronization relationship and/or the task queue of the first CPU contains a plurality of tasks, searching the plurality of CPUs for a second CPU in an idle state and placing the second process on the second CPU for processing.
3. The method of claim 1, further comprising:
under the condition that the scheduling request requires load balancing and the second process corresponds to a process wakeup behavior, judging whether the first process and the second process have an affinity relationship;
under the condition that the first process and the second process have an affinity relationship, judging whether the first process and the second process have a synchronization relationship and whether a task queue of a first CPU on which the first process is located contains exactly 1 task;
under the conditions that the first process and the second process have a synchronization relationship and the task queue of the first CPU contains exactly 1 task, placing the second process on the first CPU for processing.
4. The method of claim 3, wherein determining whether the first process has an affinity with the second process comprises:
judging whether the first process and the second process are shared; and/or
judging whether the first process and the second process have an affinity relationship according to the number of process wakeups performed by the first process within a preset time period.
5. The method of claim 3, further comprising:
under the condition that the first process and the second process do not have a synchronization relationship and/or the task queue of the first CPU contains a plurality of tasks, searching the plurality of CPUs for a third CPU that is adjacent to the first CPU and in an idle state, and placing the second process on the third CPU for processing.
6. An apparatus for scheduling processes in a multi-CPU environment, the apparatus comprising:
a receiving module, configured to receive a scheduling request of a first process for a second process;
a first judging module, configured to judge, under the condition that the scheduling request requires load balancing and the second process corresponds to a process creation behavior or a process overlay behavior, whether the first process and the second process have a synchronization relationship and whether a task queue of a first CPU on which the first process is located contains exactly 1 task;
a first scheduling module, configured to place the second process on the first CPU for processing under the conditions that the first process and the second process have a synchronization relationship and the task queue of the first CPU contains exactly 1 task; wherein the synchronization relationship between the first process and the second process indicates that the first process enters a waiting state after the second process is scheduled.
7. The apparatus of claim 6, further comprising:
a first searching module, configured to search a plurality of CPUs for a second CPU in an idle state under the condition that the first process and the second process do not have a synchronization relationship and/or the task queue of the first CPU contains a plurality of tasks;
and a second scheduling module, configured to place the second process on the second CPU for processing.
8. The apparatus of claim 6, further comprising:
a second judging module, configured to judge whether the first process and the second process have an affinity relationship under the condition that the scheduling request requires load balancing and the second process corresponds to a process wakeup behavior;
a third judging module, configured to judge, under the condition that the first process and the second process have an affinity relationship, whether the first process and the second process have a synchronization relationship and whether a task queue of a first CPU on which the first process is located contains exactly 1 task;
and a third scheduling module, configured to place the second process on the first CPU for processing under the conditions that the first process and the second process have a synchronization relationship and the task queue of the first CPU contains exactly 1 task.
9. An electronic device, comprising: a processor; and
memory having stored thereon executable code which, when executed, causes the processor to perform the method of any one of claims 1-5.
10. A machine readable medium having executable code stored thereon, which when executed, causes a processor to perform the method of any of claims 1-5.
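The placement decision of claims 1 and 2 can be sketched in code. The following is a hypothetical illustration only, not the patent's implementation: all names (`CPU`, `Process`, `place_new_process`, the `synchronous` flag) are assumptions introduced for readability.

```python
class CPU:
    """A CPU with its own task queue (run queue). Hypothetical model."""
    def __init__(self, cpu_id):
        self.cpu_id = cpu_id
        self.queue = []

class Process:
    def __init__(self, name, cpu=None, synchronous=False):
        self.name = name
        self.cpu = cpu                  # CPU the process currently runs on
        self.synchronous = synchronous  # waker blocks once the new task runs

def place_new_process(first, second, cpus):
    """Place `second` (just created or exec'd by `first`) on a CPU."""
    first_cpu = first.cpu
    # Claim 1: synchronization relationship + exactly 1 queued task
    # -> the first CPU is about to go idle, so keep the new task local.
    if first.synchronous and len(first_cpu.queue) == 1:
        first_cpu.queue.append(second)
        return first_cpu
    # Claim 2: otherwise search the CPUs for one with an empty queue.
    for cpu in cpus:
        if not cpu.queue:
            cpu.queue.append(second)
            return cpu
    first_cpu.queue.append(second)  # no idle CPU found; stay local
    return first_cpu
```

The design point: when the waker will sleep as soon as the new process is scheduled, migrating the new process to another CPU buys nothing and costs cache warmth, so the scheduler keeps it on the soon-to-be-idle first CPU.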
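The affinity test of claim 4 can be sketched similarly. The threshold, window length, and all identifiers below are assumptions for illustration; the patent does not specify concrete values for the preset time period or the wakeup count.

```python
from collections import deque

WAKEUP_THRESHOLD = 3   # hypothetical: wakeups required within the window
WINDOW_SECONDS = 1.0   # hypothetical length of the preset time period

class WakeupTracker:
    """Counts how often the first process performs wakeups in a sliding window."""
    def __init__(self):
        self._stamps = deque()

    def record(self, now):
        self._stamps.append(now)

    def count_in_window(self, now):
        # Drop timestamps that fell out of the preset time period.
        while self._stamps and now - self._stamps[0] > WINDOW_SECONDS:
            self._stamps.popleft()
        return len(self._stamps)

def has_affinity(processes_share, tracker, now):
    # Claim 4, branch 1: the two processes are shared.
    if processes_share:
        return True
    # Claim 4, branch 2: enough wakeups within the preset time period.
    return tracker.count_in_window(now) >= WAKEUP_THRESHOLD
```

Frequent wakeups within a short window suggest a producer/consumer pair whose working sets overlap, which is why either branch suffices to declare affinity.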
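Claims 3 and 5 together describe the wakeup path. A minimal sketch follows, assuming "adjacent" means nearness of CPU index (the patent does not define adjacency precisely, so the outward search below is an illustrative choice):

```python
def place_on_wakeup(first_cpu, queue_lengths, affine, synchronous):
    """Choose a CPU index for a woken process (sketch of claims 3 and 5).

    first_cpu: index of the CPU running the waking (first) process.
    queue_lengths: run-queue length of each CPU.
    """
    n = len(queue_lengths)
    # Claim 3: affinity + synchronization + exactly 1 queued task
    # -> keep the woken process on the waker's CPU.
    if affine and synchronous and queue_lengths[first_cpu] == 1:
        return first_cpu
    # Claim 5: otherwise search outward for the nearest idle CPU.
    for dist in range(1, n):
        for cand in (first_cpu - dist, first_cpu + dist):
            if 0 <= cand < n and queue_lengths[cand] == 0:
                return cand
    return first_cpu  # fallback: no idle CPU, stay local
```

Preferring an adjacent idle CPU (rather than any idle CPU, as in claim 2) keeps the woken process close to the waker's cache and memory node even when the same-CPU condition fails.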
CN202211552730.3A 2022-12-06 2022-12-06 Process scheduling method and device in multi-CPU environment, electronic equipment and medium Active CN115617494B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211552730.3A CN115617494B (en) 2022-12-06 2022-12-06 Process scheduling method and device in multi-CPU environment, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN115617494A true CN115617494A (en) 2023-01-17
CN115617494B CN115617494B (en) 2023-03-14

Family

ID=84880624

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211552730.3A Active CN115617494B (en) 2022-12-06 2022-12-06 Process scheduling method and device in multi-CPU environment, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN115617494B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5506987A (en) * 1991-02-01 1996-04-09 Digital Equipment Corporation Affinity scheduling of processes on symmetric multiprocessing systems
CN104657222A (en) * 2015-03-13 2015-05-27 浪潮集团有限公司 SMP dispatching system-oriented optimization method
CN109840151A (en) * 2017-11-29 2019-06-04 大唐移动通信设备有限公司 A kind of load-balancing method and device for multi-core processor
CN113515388A (en) * 2021-09-14 2021-10-19 统信软件技术有限公司 Process scheduling method and device, computing equipment and readable storage medium
CN114461404A (en) * 2022-04-01 2022-05-10 统信软件技术有限公司 Process migration method, computing device and readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116737346A (en) * 2023-08-14 2023-09-12 南京翼辉信息技术有限公司 Scheduling configuration system for large and small core processors and implementation method thereof
CN116737346B (en) * 2023-08-14 2023-10-24 南京翼辉信息技术有限公司 Scheduling configuration system for large and small core processors and implementation method thereof

Also Published As

Publication number Publication date
CN115617494B (en) 2023-03-14

Similar Documents

Publication Publication Date Title
EP3155521B1 (en) Systems and methods of managing processor device power consumption
US8190863B2 (en) Apparatus and method for heterogeneous chip multiprocessors via resource allocation and restriction
JP6199477B2 (en) System and method for using a hypervisor with a guest operating system and virtual processor
US7689838B2 (en) Method and apparatus for providing for detecting processor state transitions
KR102197874B1 (en) System on chip including multi-core processor and thread scheduling method thereof
US10977092B2 (en) Method for efficient task scheduling in the presence of conflicts
KR101839646B1 (en) Suspension and/or throttling of processes for connected standby
US20150046679A1 (en) Energy-Efficient Run-Time Offloading of Dynamically Generated Code in Heterogenuous Multiprocessor Systems
CN108549574B (en) Thread scheduling management method and device, computer equipment and storage medium
US20160203083A1 (en) Systems and methods for providing dynamic cache extension in a multi-cluster heterogeneous processor architecture
US20110219373A1 (en) Virtual machine management apparatus and virtualization method for virtualization-supporting terminal platform
US20130152100A1 (en) Method to guarantee real time processing of soft real-time operating system
US10768684B2 (en) Reducing power by vacating subsets of CPUs and memory
CN115617494B (en) Process scheduling method and device in multi-CPU environment, electronic equipment and medium
US20150121392A1 (en) Scheduling in job execution
CN109840151B (en) Load balancing method and device for multi-core processor
US11093441B2 (en) Multi-core control system that detects process dependencies and selectively reassigns processes
CN114064236A (en) Task execution method, device, equipment and storage medium
US7603673B2 (en) Method and system for reducing context switch times
US10740150B2 (en) Programmable state machine controller in a parallel processing system
CN114443255A (en) Thread calling method and device
CN115562830A (en) Host bus adapter tuning method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant