CN113918291A - Multi-core operating system stream task scheduling method, system, computer and medium - Google Patents



Publication number: CN113918291A
Authority: CN (China)
Prior art keywords: task, stream, priority, tasks, storage space
Legal status: Pending
Application number: CN202110965573.8A
Other languages: Chinese (zh)
Inventors: 韩旭 (Han Xu), 陈诺 (Chen Nuo), 张锦南 (Zhang Jinnan), 田锐 (Tian Rui), 程刚 (Cheng Gang)
Current Assignee: Anhui Juhui Technology Development Co ltd
Original Assignee: Anhui Juhui Technology Development Co ltd
Application filed by Anhui Juhui Technology Development Co ltd filed Critical Anhui Juhui Technology Development Co ltd
Publication of CN113918291A

Classifications

    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/5016 Allocation of resources to service a request, the resource being the memory
    • G06F9/5022 Mechanisms to release resources
    • G06F2209/484 Indexing scheme relating to G06F9/48: Precedence
    • G06F2209/5021 Indexing scheme relating to G06F9/50: Priority

Abstract

The invention provides a method, a system, a computer and a medium for scheduling stream tasks of a multi-core operating system. The method comprises: pre-configuring a stream task buffer space, a stream task common queue and a stream task running queue of the operating system; in response to stream data reception, creating a stream task and storing it in an annular (ring-buffer) storage space; reading the stream task, generating a corresponding stream task priority and storing it in a priority storage space; storing the stream task in the stream task common queue; distributing the stream tasks of the common queue to the running queues according to an inter-core task scheduling algorithm; scheduling the stream tasks of the running queues according to a priority scheduling algorithm; and updating the addresses pointed to by an annular storage pointer, a priority storage pointer and a context information storage pointer. The invention ensures balanced scheduling of stream tasks while providing temporary storage for stream data entering the system, improving stream task processing efficiency and overall system performance.

Description

Multi-core operating system stream task scheduling method, system, computer and medium
Technical Field
The invention relates to the technical field of computer balanced scheduling, and in particular to a stream task scheduling method and system based on a multi-core real-time operating system, to computer equipment, and to a storage medium.
Background
Stream data refers to a data sequence that is generated continuously by a large number of data sources and arrives at a data processing end sequentially, massively, quickly and continuously; the processing end must process it incrementally, record by record or over a sliding time window. In other words, stream data is a dynamic data set that grows without bound over time. Stream data is widely used in fields such as network monitoring, sensor networks, aerospace, meteorological measurement and control, and financial services. However, its characteristics (real-time arrival, independent and uncontrolled arrival times, large and unpredictable volume, and the difficulty of retrieving data once it has been processed) pose considerable challenges for the stream data processing end: it must keep pace with the rate at which stream data is generated, process newly arrived data promptly, and continuously produce output, so that users of the stream data receive real-time analysis, can follow the latest trends in the development of events, and can respond to emergencies in time with corrective measures.
However, because of these characteristics, when a conventional multi-core operating system serves as the stream data processing end, problems such as idle core occupation by stream tasks, low CPU affinity and low processing efficiency often arise, and the real-time requirements of stream task processing cannot be met.
Therefore, it is desirable to provide a stream task scheduling method based on a multi-core real-time operating system to improve stream task processing efficiency and system performance.
Disclosure of Invention
The invention aims to provide a stream task scheduling method based on a multi-core real-time operating system that guarantees balanced scheduling of stream tasks while providing temporary storage for stream data entering the system, thereby improving stream task processing efficiency and system performance.
In order to achieve the above object, it is necessary to provide a method, a system, a computer device and a storage medium for scheduling a multi-core operating system stream task in response to the above technical problem.
In a first aspect, an embodiment of the present invention provides a method for scheduling a stream task of a multi-core operating system, where the method includes the following steps:
pre-configuring a stream task buffer space, a stream task common queue and a stream task running queue of an operating system; the stream task buffer space comprises an annular storage space, a priority storage space, a context information storage space and a pointer array space; the pointer array space comprises an annular storage pointer pointing to the annular storage space, a priority storage pointer pointing to the priority storage space and a context information storage pointer pointing to the context information storage space;
in response to streaming data reception, creating a streaming task and storing the streaming task in the annular storage space;
reading the stream task, generating a corresponding stream task priority, storing the stream task priority in the priority storage space, and storing the stream task in the stream task common queue;
distributing the stream tasks in the stream task common queue to the stream task running queue;
and scheduling the stream tasks in the stream task running queue according to the stream task priority, and updating the addresses pointed to by the annular storage pointer, the priority storage pointer and the context information storage pointer.
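As a concrete illustration, the pre-configured buffer layout described in these steps can be sketched as follows (a minimal Python sketch; the class and field names are assumptions for illustration, not from the patent):

```python
class StreamTaskBuffer:
    """Minimal sketch of the stream task buffer space: an annular (ring)
    storage space, a priority storage space, a context information storage
    space, and a pointer array holding one pointer (here, an index) into
    each of the three spaces."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.ring = [None] * capacity        # annular storage space
        self.priorities = [None] * capacity  # priority storage space
        self.contexts = [None] * capacity    # context information storage space
        # pointer array space: indices standing in for pointers A, B, C
        self.ring_ptr = 0
        self.priority_ptr = 0
        self.context_ptr = 0

buf = StreamTaskBuffer(capacity=8)
```

In a real multi-core RTOS these would be contiguous byte regions addressed by raw pointers; Python lists and indices stand in for them here only to make the structure explicit.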
Further, the step of pre-configuring the stream task buffer space, the stream task common queue and the stream task running queue of the operating system includes:
calculating the space occupied by a single stream task according to the word size (number of bits) of the multi-core real-time operating system, and allocating the annular storage space in combination with the number of stream tasks to be cached;
allocating the priority storage space according to the size of the annular storage space and the data packet format of the stream task;
and allocating the context information storage space according to the size of the annular storage space and the interrupt suspension time of the multi-core real-time operating system.
Further, the step of creating a streaming task in response to streaming data reception and storing the streaming task in the annular storage space comprises:
in response to the first stream data reception, allocating the annular storage pointer, the priority storage pointer and the context information storage pointer, and pointing them respectively to the first addresses of the annular storage space, the priority storage space and the context information storage space;
in response to a non-first stream data reception, judging whether the annular storage space is fully occupied, and if so, deleting the stream task cached longest in the annular storage space;
and storing the stream task in the annular storage space, and updating the annular storage pointer to point to the stream task.
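The storage logic of these steps (fill the ring, and once it is fully occupied evict the task cached longest) can be sketched as follows; `RingStore` and its field names are illustrative assumptions:

```python
class RingStore:
    """Sketch of the annular storage space: fixed capacity, with the
    longest-cached stream task evicted when a new task arrives and no
    free slot remains."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = [None] * capacity
        self.head = 0    # index of the longest-cached (oldest) task
        self.count = 0

    def put(self, task):
        if self.count == self.capacity:
            # annular space fully occupied: delete the oldest stream task
            self.head = (self.head + 1) % self.capacity
            self.count -= 1
        idx = (self.head + self.count) % self.capacity
        self.slots[idx] = task
        self.count += 1
        return idx  # the ring storage pointer is updated to this slot

store = RingStore(capacity=3)
for t in ("t1", "t2", "t3", "t4"):
    last = store.put(t)
```

After the fourth `put`, the oldest task `t1` has been dropped, matching the behavior the text describes for short bursts of excess stream data.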
Further, the step of reading the stream task, generating a corresponding stream task priority, storing the stream task priority in the priority storage space, and storing the stream task in the stream task common queue includes:
judging whether the streaming task is a new task, if so, generating the priority of the streaming task according to the header information of the streaming data;
and storing the stream task priority in the priority storage space, updating the priority storage pointer to point to the stream task priority, and storing the stream task corresponding to the priority storage pointer into the stream task common queue.
Further, the step of allocating the stream tasks in the stream task common queue to the stream task running queue includes:
judging whether the stream task common queue is full, and if so, obtaining the task allocation priority of each stream task running queue according to the CPU affinity and the CPU load of that running queue;
and distributing the stream tasks of the stream task common queue to the stream task running queue corresponding to the maximum task allocation priority.
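One way to realize the task allocation priority computed from CPU affinity and CPU load is sketched below; the linear weighting and the field names are assumptions for illustration, since the patent does not fix a concrete formula:

```python
def allocation_priority(affinity, load, w_affinity=0.6, w_load=0.4):
    """Higher CPU affinity raises the allocation priority; higher CPU
    load lowers it. The linear weighting is an illustrative assumption."""
    return w_affinity * affinity - w_load * load

def pick_run_queue(queues):
    """Return the run queue whose task allocation priority is maximal."""
    return max(queues, key=lambda q: allocation_priority(q["affinity"], q["load"]))

queues = [
    {"core": 0, "affinity": 0.9, "load": 0.8},  # good affinity, heavy load
    {"core": 1, "affinity": 0.7, "load": 0.1},  # lighter load wins here
]
chosen = pick_run_queue(queues)
```

With these weights, core 1's lighter load outweighs core 0's higher affinity, so the stream task lands on the less busy core, which is the balancing effect the step aims at.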
Further, the step of scheduling the stream tasks of the stream task running queue according to a priority scheduling algorithm, generating corresponding context information to be stored in the context information storage space, and updating the addresses pointed to by the annular storage pointer, the priority storage pointer and the context information storage pointer includes:
in response to time slice polling or a system interrupt, updating the stream task priority through the priority algorithm according to the stream task priority and the pending time, and storing the address pointed to by the priority storage pointer corresponding to the stream task priority;
selecting a target stream task from the stream task running queue through the priority scheduling algorithm according to the stream task priority;
scheduling the target stream task to a CPU core for processing, and reading the data information of the stream task according to the annular storage pointer, the priority storage pointer and the context information storage pointer;
in response to a system interrupt during the processing of the target stream task, storing the processing result of the target stream task into the annular storage space, and moving the annular storage pointer forward by one position after updating the context information storage pointer to the address pointed to by the current annular storage pointer;
in response to the target stream task entering the ready state again after being interrupted, storing the target stream task into the stream task common queue;
and in response to the completion of the processing of the target stream task, moving the annular storage pointer back by one position, and deleting the target stream task from the annular storage space.
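The scheduling core of these steps, priority aging on each time-slice poll followed by selecting the highest-priority target task, can be sketched as follows; the aging rate and field names are assumptions, since the patent leaves the priority algorithm open:

```python
def update_priorities(run_queue, now, aging_rate=0.01):
    """On a time-slice poll or system interrupt, recompute each stream
    task's priority from its base priority and its pending time (aging),
    so long-waiting tasks are not starved."""
    for task in run_queue:
        task["priority"] = task["base"] + aging_rate * (now - task["ready_at"])

def select_target(run_queue):
    """Choose the stream task with the highest current priority."""
    return max(run_queue, key=lambda t: t["priority"])

rq = [
    {"name": "a", "base": 5.0, "ready_at": 0.0},
    {"name": "b", "base": 4.0, "ready_at": -200.0},  # pending much longer
]
update_priorities(rq, now=100.0)
target = select_target(rq)
```

Here task `b`, despite its lower base priority, overtakes `a` after pending long enough, which is the anti-starvation effect of folding pending time into the priority update.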
Further, the step of selecting a target stream task from the stream task running queue according to the stream task priority further includes:
in response to time slice polling, sequentially selecting the corresponding stream tasks as the target stream task according to the priority order of the stream tasks in the stream task running queue;
and when the stream task running queue is empty, preferentially scheduling stream tasks from the stream task common queue to the stream task running queue for processing through an SMP preemptive priority scheduling algorithm.
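The empty-queue case can be sketched as a priority-ordered fill from the common queue; this is only an illustration of the described behavior, not the patent's actual SMP preemptive algorithm:

```python
def fill_empty_queues(common_queue, run_queues):
    """If a run queue is empty, move the highest-priority ready stream
    task from the common queue into it so that no core sits idle while
    ready tasks wait."""
    for rq in run_queues:
        if not rq and common_queue:
            best = max(common_queue, key=lambda t: t["priority"])
            common_queue.remove(best)
            rq.append(best)

common = [{"name": "x", "priority": 3}, {"name": "y", "priority": 7}]
queues = [[], [{"name": "z", "priority": 5}]]  # core 0 idle, core 1 busy
fill_empty_queues(common, queues)
```

The idle core 0 immediately receives the highest-priority ready task `y`, while the busy core 1 is left untouched.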
In a second aspect, an embodiment of the present invention provides a multi-core operating system stream task scheduling system, where the system includes:
the pre-configuration module is used for pre-configuring a stream task buffer space, a stream task common queue and a stream task running queue of an operating system; the stream task buffer space comprises an annular storage space, a priority storage space, a context information storage space and a pointer array space; the pointer array space comprises an annular storage pointer pointing to the annular storage space, a priority storage pointer pointing to the priority storage space and a context information storage pointer pointing to the context information storage space;
the task creating module is used for responding to stream data receiving, creating stream tasks and storing the stream tasks in the annular storage space;
the task cache module is used for reading the stream tasks, generating corresponding stream task priorities, storing the stream task priorities in the priority storage space and storing the stream tasks in the stream task common queue;
the inter-core distribution module is used for distributing the stream tasks in the stream task common queue to the stream task running queue;
and the task scheduling module is used for scheduling the stream tasks in the stream task running queue according to the stream task priority, and updating the addresses pointed to by the annular storage pointer, the priority storage pointer and the context information storage pointer.
In a third aspect, an embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method when executing the computer program.
In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the steps of the above method.
The present application provides a method, a system, a computer device and a storage medium for scheduling stream tasks of a multi-core operating system. In the method, the operating system is configured in advance with a stream task buffer space comprising an annular storage space, a priority storage space, a context information storage space and a pointer array space, where the pointer array space holds an annular storage pointer, a priority storage pointer and a context information storage pointer pointing to the three storage spaces respectively, together with a corresponding stream task common queue and stream task running queue. In response to stream data reception, a stream task is created and stored in the annular storage space; when the stream task is read, a corresponding stream task priority is generated and stored in the priority storage space, and the stream task is stored in the stream task common queue. After the stream tasks of the common queue are distributed to the running queues according to an inter-core task scheduling algorithm, the stream tasks of the running queues are scheduled according to a priority scheduling algorithm, and the addresses pointed to by the annular storage pointer, the priority storage pointer and the context information storage pointer are updated. Compared with the prior art, the method effectively solves the problems of idle core occupation, low CPU affinity and low processing efficiency that arise when an existing multi-core real-time operating system processes massive stream data tasks: it not only ensures balanced scheduling of stream tasks, but also provides temporary storage for stream data entering the system, improving stream task processing efficiency and system performance.
Drawings
FIG. 1 is a schematic diagram of an application scenario of a multi-core operating system stream task scheduling method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating the scheduling of multi-core OS stream tasks according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a method for scheduling a multi-core operating system stream task according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart illustrating the process of creating a stream task and storing the stream task in the annular storage space in step S12 in FIG. 3;
FIG. 5 is a schematic flow chart illustrating the step S13 in FIG. 3 of generating the priority of the streaming task and adding to the common queue of the streaming task;
FIG. 6 is a schematic flowchart illustrating the step S14 in FIG. 3 of allocating the streaming tasks in the streaming task common queue to the streaming task running queue;
FIG. 7 is a flowchart illustrating a process of dispatching the streaming task in the streaming task execution queue to the CPU core in step S15 in FIG. 3;
FIG. 8 is a schematic flowchart illustrating that in step S152 in FIG. 7, a target stream task is selected by a priority scheduling algorithm according to the stream task priority;
FIG. 9 is a schematic diagram of a stream task scheduling test system for the multi-core real-time operating system according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a stream task scheduling test system for the reference model system in an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of a multi-core real-time operating system stream task scheduling system according to an embodiment of the present invention;
fig. 12 is an internal structural view of a computer device in the embodiment of the present invention.
Detailed Description
In order to make the purpose, technical solution and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. The embodiments described below are only a part of the embodiments of the present invention and are intended to illustrate, not to limit, its scope. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
The method starts from the characteristics of stream tasks: data elements of a data stream arrive continuously; the processing system cannot control their arrival order; the stream may be unbounded in size; and once a data element has been processed it is discarded or archived and is generally not easy to retrieve again (unless it is still in memory). On this basis, the problem of the factors affecting the computational efficiency of a multi-core real-time operating system facing multiple stream tasks is converted into a stream task scheduling problem of the multi-core operating system. The invention creatively introduces a stream task buffer space, a stream task common queue for inter-core task communication and storage of ready-state stream tasks, and stream task running queues for multi-core processing.
The method, system, computer device and storage medium for scheduling stream tasks of a multi-core operating system provided by the invention can be applied to the terminal or the server shown in fig. 1. The terminal can be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices, and the server can be implemented by an independent server or a server cluster formed by a plurality of servers. The server adopts a multi-core real-time operating system which, as shown in fig. 2, comprises the stream task buffer space, the stream task common queue, the stream task running queues and the scheduling kernel, and can generate a stream data processing result; the generated result is sent to the terminal, where the user can view and analyze it. It should be noted that, as shown in fig. 2, the stream task buffer space in the operating system used by the server comprises an annular storage space for storing the data stream, a priority storage space for storing priority information of the data stream, a context information storage space for storing the corresponding system scheduling information, and a pointer array space; the pointer array space holds the annular storage pointer (pointer A) pointing to the annular storage space, the priority storage pointer (pointer B) pointing to the priority storage space, and the context information storage pointer (pointer C) pointing to the context information storage space.
In one embodiment, as shown in fig. 3, a method for scheduling a stream task of a multi-core operating system is provided, which includes the following steps:
s11, pre-configuring a stream task buffer space, a stream task common queue and a stream task running queue of an operating system; the stream task buffer space comprises an annular storage space, a priority storage space, a context information storage space and a pointer array space; the pointer array space comprises an annular storage pointer pointing to the annular storage space, a priority storage pointer pointing to the priority storage space and a context information storage pointer pointing to the context information storage space;
the annular storage space is used for caching the stream task data to be executed by the CPU, the space size of the annular storage space is calculated according to the number of bits of the multi-core real-time operating system, the occupied space of a single stream task is reasonably distributed by combining the number of the stream tasks to be cached, and the space ten times the size of a single stream data packet can be referred; the priority storage space is used for storing the stream task priority corresponding to the stream task of the annular storage space, and the space size is reasonably distributed according to the size of the annular storage space and the data packet format of the stream task; the context information storage space is used for storing system interrupt context information corresponding to the stream task of the annular storage space, and the space size is reasonably distributed according to the size of the annular storage space and the interrupt suspension time of the multi-core real-time operating system. And the annular storage pointer, the priority storage pointer and the context information storage pointer which respectively correspond to the annular storage space, the priority storage space and the context information storage space point to the access position of the annular storage space, the priority storage pointer points to the priority information of the current flow task, and the context information storage pointer points to the system context information required by the flow task interruption and the flow task restart. It should be noted that the allocation method of each storage space in the streaming task buffer space is a priority method in this example, and is not limited to implement the streaming task scheduling of the present invention, that is, the size allocation method of each storage space in the actual implementation process may be selected according to the actual needs of the user.
In this embodiment, the stream task buffer space is designed to store stream data that has entered the stream task computing system but has not yet been processed, that is, it serves as the buffer location for stream data in the system. This not only effectively relieves the pressure that the instability of stream data places on the operating system, but also allows the information of each stream task to be read quickly through the cooperation of the annular storage pointer, the priority storage pointer and the context information storage pointer, saving scheduling and computing time and thus supporting improved stream task processing efficiency.
S12, responding to the receiving of the streaming data, creating a streaming task and storing the streaming task in the annular storage space;
the stream task is that when data is continuously transmitted in a stream mode and a large amount is transmitted in a stream mode in a stream computing environment and each single data stream is received by an operating system communication port, a corresponding stream task is created by an existing task creating mechanism of an operating system and is stored according to the actual occupation condition of a stream task buffer space. As shown in fig. 4, the step S12 of creating a streaming task in response to the streaming data reception and storing the streaming task in the annular storage space includes:
s121, responding to the first stream data receiving, distributing the annular storage pointer, the priority storage pointer and the context information storage pointer, and respectively pointing the storage annular storage pointer, the priority storage pointer and the context information storage pointer to the first addresses of the annular storage space, the priority storage space and the context information storage space;
wherein the first stream data refers to the first stream data received by the communication port of the operating system. Its arrival indicates that the operating system is ready to process a persistent stream task, so the three pointers needed for stream task processing must be allocated in the pointer array space: the annular storage pointer, the priority storage pointer and the context information storage pointer. This ensures that the operating system can quickly and accurately read the information of the stream tasks stored in the stream task buffer space during subsequent processing and scheduling. It should be noted that when the first data stream arrives, each of the three pointers initially points to the first address of its corresponding storage space; the pointers are then updated in real time as the operating system schedules and processes stream tasks.
S122, in response to a non-first stream data reception, judging whether the annular storage space is fully occupied, and if so, deleting the stream task stored longest in the annular storage space;
the size of the annular memory space is determined before the system starts to process the stream tasks, and is not infinitely expanded, and when the number of the stream tasks to be processed entering the operating system does not reach the upper limit of the annular memory space, the storage positions are directly searched according to the shifting sequence of the annular memory space pointer. However, when a large amount of stream data enters the system in a short time, new stream data arrives and the annular storage space does not have a free space for storage, at this time, the stream task data with the longest cache time is discarded actively, that is, the stream task with the longest storage time in the annular storage space is deleted, and the new stream task data is stored, so that the system is effectively ensured to operate continuously and effectively under a short-time excess stream data processing scene.
S123, storing the stream task in the annular storage space, and updating the annular storage pointer to point to the stream task.
After the position in the annular storage space where the stream task is to be stored has been determined through the above steps, the stream task is stored at that position, and the annular storage pointer is updated to the address of the position holding the stream task.
S13, reading the stream tasks, generating corresponding stream task priorities, storing the stream task priorities in the priority storage space, and storing the stream tasks in the stream task common queue;
The structure of a stream task consists of the stream data header information and the stream data content, and one stream task may contain a plurality of task structures, which are sent successively rather than simultaneously. When the data first arrives at an operating system port, the system accesses the header information of the data stream, calculates the corresponding stream task priority through a priority algorithm, stores it in the priority storage space within the stream task buffer space, and updates the priority storage pointer of the pointer array space to point to the stream task priority of that stream task. As shown in fig. 5, the step S13 of reading the stream task, generating a corresponding stream task priority, storing the stream task priority in the priority storage space, and storing the stream task in the stream task common queue includes:
s131, judging whether the streaming task is a new task or not, and if the streaming task is the new task, generating a priority of the streaming task according to header information of the streaming data;
When the stream data enters the operating system for the first time, that is, when a new task is obtained, the system stores the stream data information in the corresponding annular storage space, then reads the header information of the stream data corresponding to the stream task from the annular storage space, and obtains a specific value of the stream task priority for that stream task through a preset priority algorithm, for example by taking a weighted average of the stream data properties and the source information. It should be noted that the above method for generating the stream task priority from the header information of the stream data is only an exemplary description; this embodiment does not limit which priority algorithm is designed on the basis of the stream data header information to calculate the corresponding stream task priority, and the algorithm can be reasonably selected according to actual requirements.
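One possible priority algorithm of the kind described above is a weighted average over fields parsed from the stream data header. The field names and weights here are assumptions for illustration only; the embodiment leaves the concrete algorithm open.

```python
def stream_task_priority(header, w_property=0.6, w_source=0.4):
    # weighted average of a data-property score and a source-information
    # score parsed from the stream data header (hypothetical fields)
    return (w_property * header["property_score"]
            + w_source * header["source_score"])
```

Any monotone combination of header fields would serve; the only requirement is that the same algorithm is applied consistently so priorities are comparable across stream tasks.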
S132, storing the stream task priority in the priority storage space, updating the priority storage pointer to point to the stream task priority, and storing the stream task corresponding to the priority storage pointer into the stream task common queue.
The priority storage pointer behaves like the annular storage pointer: when the corresponding priority storage space stores a new stream task priority, the operating system updates the priority storage pointer to the address storing that stream task priority, so as to ensure efficient and accurate scheduling of subsequent stream tasks and updating of the corresponding stream task priorities. In addition, the stream task common queue is a queue for stream task communication among the multiple cores of the operating system; it stores the new tasks of the whole system and the stream tasks that re-enter the ready state after being interrupted, which are then distributed to a kernel running queue of the operating system for processing based on an inter-core task scheduling algorithm, thereby providing technical support for effectively solving problems such as idle occupation of stream task processing cores and low CPU affinity.
S14, distributing the flow tasks in the flow task common queue to the flow task running queue;
The inter-core task scheduling algorithm is designed based on the CPU affinity and the CPU load of each stream task running queue. For example, the system counts the CPU affinity and the CPU load of each stream task running queue in real time, calculates the priority with which each stream task in the stream task common queue is distributed to different CPU cores for processing, and determines the final stream task running queue for each stream task by combining the affinity between each stream task running queue and each CPU core. As shown in fig. 6, the step S14 of allocating the stream tasks in the stream task common queue to the stream task running queue includes:
s141, judging whether the flow task common queue is full, if so, obtaining task allocation priority of the flow task of each flow task running queue according to CPU affinity and CPU load of the flow task running queue;
The size of the stream task common queue can be reasonably set according to the number of CPU cores of the actual operating system and the processing performance of the system. As described above, the stream task common queue stores new tasks and stream tasks that re-enter the ready state after interruption; when the stream task common queue is full, stream tasks are actively issued to the stream task running queues to wait for system scheduling and CPU processing, so that subsequent new tasks can continue to enter. To solve the problems of idle core occupation and low CPU affinity in conventional stream task processing systems, this embodiment considers both the CPU affinity and the CPU load of the stream task running queues: a priority value for assigning each stream task to each CPU core for processing is calculated, combined with the affinity value between each stream task running queue and each CPU core, and the task allocation priority of each stream task in the stream task common queue for each stream task running queue is obtained as the weighted average of the two values. Ready stream tasks in the stream task common queue are then allocated effectively and reasonably according to their task allocation priority for each stream task running queue.
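The allocation rule of S141 and S142 can be sketched as below. The 0-to-1 scales for affinity and load, and the equal weights, are illustrative assumptions; the patent only requires a weighted combination of the two factors.

```python
def allocation_priority(affinity, load, w_aff=0.5, w_load=0.5):
    # higher affinity for a core and lower CPU load on that core
    # both raise the task allocation priority
    return w_aff * affinity + w_load * (1.0 - load)

def pick_run_queue(task_affinity_per_core, load_per_core):
    scores = [allocation_priority(a, l)
              for a, l in zip(task_affinity_per_core, load_per_core)]
    # S142: assign the task to the run queue with the maximum
    # task allocation priority
    return scores.index(max(scores))
```

Recomputing the scores at each allocation lets a newly idle core attract work immediately, which is the mechanism that counters idle core occupation.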
And S142, distributing the flow tasks of the flow task common queue to the flow task running queue corresponding to the task distribution priority maximum value.
In this embodiment, based on consideration of the CPU affinity and the CPU load of the stream task running queues, the task allocation priority of each stream task in the stream task common queue for each stream task running queue is calculated, and the stream task running queue with the largest task allocation priority is then selected for each stream task to be processed.
S15, according to the flow task priority, scheduling the flow tasks in the flow task running queue, and updating the addresses pointed by the annular storage pointer, the priority storage pointer and the context information storage pointer.
Scheduling of the stream tasks in the running queues adopts an SMP preemptive priority scheduling algorithm combined with time slice polling: based on the stream task priorities of all the stream task running queues, the stream tasks of the stream task running queues are scheduled to the CPU cores for processing. As shown in fig. 7, the step S15 of scheduling the stream tasks in the stream task running queue according to the stream task priority, and updating the addresses pointed to by the annular storage pointer, the priority storage pointer, and the context information storage pointer includes:
S151, in response to time slice polling or a system interrupt, updating the stream task priority according to the stream task priority and the suspension time, and storing the updated stream task priority at the address pointed to by the corresponding priority storage pointer;
The priority algorithm used for updating the stream task priority is set on the principle that a stream task with a large priority value, or one that has not started and has been suspended for a long time, has its priority increased. For example, each time slice polling or system interrupt occurs, the stream task priority in the priority storage space can be updated with the weighted average of the stream task priority and the suspension time. This avoids the problem that only the currently ready tasks with high priority are ever scheduled to a CPU core for processing during subsequent stream task scheduling, and effectively ensures balanced scheduling of the stream tasks in all stream task running queues. It should be noted that the priority algorithm described in this embodiment applies both to a new task and to a stream task that re-enters the ready state after interruption.
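One concrete form of the update rule in S151 is the weighted average named above. The weights are assumptions for illustration; any weighting that grows with suspension time prevents starvation of long-suspended tasks.

```python
def updated_priority(priority, suspend_time, w_p=0.7, w_t=0.3):
    # weighted average of the current stream task priority and the
    # time the task has been suspended: long-suspended tasks rise
    # in priority so they are eventually scheduled
    return w_p * priority + w_t * suspend_time
```

Applying this at every time slice poll or system interrupt is what keeps a burst of high-priority arrivals from monopolizing the CPU cores.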
S152, selecting a target flow task from the flow task running queue according to the flow task priority;
in this embodiment, an SMP preemptive priority scheduling algorithm combining timeslice polling is preferably used, as shown in fig. 8, where the step S152 of selecting a target flow task from the flow task running queue according to the priority of the flow task includes:
s1521, responding to the time slice polling, and sequentially selecting the corresponding flow tasks as the target flow tasks according to the flow task priority sequence of the flow tasks in the flow task running queue;
The stream tasks in the stream task running queue are all ready stream tasks of each CPU core. When no stream task is preempted, at each time slice poll the stream tasks in the stream task running queue are scheduled to the corresponding CPU cores for processing in descending order of their stream task priorities.
S1522, when the flow task running queue is null, the flow tasks of the flow task common queue are scheduled to the flow task running queue for processing through an SMP preemptive priority scheduling algorithm.
As the stream tasks of a stream task running queue are continuously scheduled to a CPU core for execution, a particular stream task running queue may become empty during system scheduling, that is, its waiting queue length is 0. In that case, during time slice polling the operating system preferentially distributes stream tasks of the stream task common queue to the empty queue through the SMP preemptive priority scheduling algorithm; in other words, the stream task running queue acquires a stream task from the stream task common queue and begins to execute it.
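Steps S1521 and S1522 can be sketched together: at each time slice poll the highest-priority ready task is selected, and an empty run queue first pulls a task from the common queue. The dictionary task shape is an illustrative assumption.

```python
def next_target_task(run_queue, common_queue):
    if not run_queue and common_queue:
        # S1522: an empty run queue acquires a stream task from the
        # stream task common queue before the poll proceeds
        run_queue.append(common_queue.pop(0))
    if not run_queue:
        return None
    # S1521: absent preemption, dispatch ready tasks in descending
    # stream-task-priority order at each time slice poll
    run_queue.sort(key=lambda t: t["priority"], reverse=True)
    return run_queue.pop(0)
```

In a real SMP kernel the refill of an empty queue would be done by the scheduler's load balancer rather than inline in the selection routine; the inline form is kept here for brevity.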
Through the above scheduling method of updating the stream task priority and selecting the target stream task from the stream task running queue based on the SMP preemptive priority scheduling algorithm combined with time slice polling, reasonable and balanced stream task scheduling is effectively ensured, and the stream task processing performance of the system is improved.
S153, scheduling the target stream task to a CPU kernel for processing, and reading data information of the stream task according to the annular storage pointer, the priority storage pointer and the context information storage pointer;
when a target stream task is scheduled to a CPU kernel for processing, an SMP preemptive priority scheduling algorithm finds addresses of relevant information of the stream task, which are stored in an annular storage space, a priority storage space and a context information storage space in a stream task buffer space, according to the directions of an annular storage pointer, a priority storage pointer and a context information storage pointer, and further reads required data information for processing and using the stream task.
S154, responding to the system interrupt in the target stream task processing process, storing the processing result of the target stream task into the annular storage space, and moving the annular storage pointer forward by one bit after updating the context information storage pointer to the address pointed by the current annular storage pointer;
While the CPU core is processing the target stream task, task processing may be interrupted by time slice polling or a system interrupt. At this point the current processing result of the target stream task must be stored in the corresponding annular storage space, the corresponding context information generated and stored in the corresponding context information storage space of the stream task buffer space, and, after the context information storage pointer is updated to the address pointed to by the current annular storage pointer, the annular storage pointer moved forward by one position. By updating the storage pointers of the corresponding storage spaces in real time during stream task processing, the pointer's last stopping position is saved whenever context information is stored at time slice polling, so that when a suspended target stream task is resumed, the position from which it should continue executing can be found quickly, improving processing efficiency. It should be noted that when the processing of the target stream task is interrupted, the stream task priority update is also triggered; the update method is the same as above and is not repeated here.
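The pointer discipline of step S154 can be sketched as below, with list indices standing in for addresses; this is an illustrative assumption, not the patent's implementation.

```python
class BufferState:
    def __init__(self, size):
        self.ring = [None] * size      # annular storage space
        self.contexts = [None] * size  # context information storage space
        self.ring_ptr = 0              # annular storage pointer
        self.ctx_ptr = 0               # context information storage pointer

def save_on_interrupt(st, partial_result, context):
    # S154: store the partial processing result at the current ring
    # position and record the task's context alongside it
    st.ring[st.ring_ptr] = partial_result
    st.contexts[st.ring_ptr] = context
    # point the context pointer at the address the ring pointer holds,
    # then move the ring pointer forward by one position
    st.ctx_ptr = st.ring_ptr
    st.ring_ptr = (st.ring_ptr + 1) % len(st.ring)
```

On resume, the saved context pointer leads straight back to the slot holding both the partial result and the context, so no search is needed.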
S155, responding to the interruption of the target flow task and then entering a ready state again, and storing the target flow task into the flow task common queue;
When the target stream task re-enters the ready state after being interrupted, the operating system dispatches it to the stream task common queue, where it waits to be distributed again to a stream task running queue for processing.
And S156, responding to the completion of the target stream task processing, moving the annular storage pointer backward by one bit, and deleting the target stream task from the annular storage space.
After the operating system finishes scheduling and processing the target stream task, the stream task data stored in the annular space no longer needs to be cached and must be deleted, together with the corresponding entries of the priority storage space and the context information storage space and the corresponding information of the annular storage pointer, the priority storage pointer and the context information storage pointer. If a storage pointer of deleted stream task data lies in the middle of the pointer storage area, a gap may appear in the otherwise contiguous pointer space. Although the data storage locations of the annular storage space may be discontinuous storage slices, the annular storage pointers pointing into the annular storage space must be stored in a contiguous space, so the pointer space gap is removed using the singly linked list node deletion method. In addition, it should be noted that completion of the target stream task processing also triggers the stream task priority update described above; the update method is the same as above and is not repeated here.
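The pointer-gap cleanup described above amounts to ordinary singly linked list node deletion: the freed task's pointer node is unlinked so the chain stays contiguous. The node layout here is an illustrative assumption.

```python
class PtrNode:
    def __init__(self, addr, nxt=None):
        self.addr = addr   # address this pointer holds
        self.next = nxt

def delete_pointer(head, addr):
    # unlink the node holding the freed task's address so that no
    # gap remains in the pointer chain (singly linked list deletion)
    dummy = PtrNode(None, head)
    cur = dummy
    while cur.next is not None:
        if cur.next.addr == addr:
            cur.next = cur.next.next
            break
        cur = cur.next
    return dummy.next

def to_list(head):
    out = []
    while head is not None:
        out.append(head.addr)
        head = head.next
    return out
```

The dummy head node lets the first element be deleted by the same code path as any interior element.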
Aiming at the characteristics of stream tasks, this embodiment creatively introduces a stream task buffer space comprising an annular storage space, a priority storage space, a context information storage space and a pointer array space to buffer stream task data; stores ready-state stream tasks in a stream task common queue for multi-core communication; distributes stream tasks to the stream task running queues based on CPU affinity and CPU load; updates priorities in real time, based on the stream task priority and suspension time, at each time slice poll or system interrupt; and schedules the stream tasks in the stream task running queues to the CPU cores for processing with an SMP preemptive priority scheduling algorithm combined with time slice polling. With the four parts of the stream task buffer space working together, the processing of the entire stream task is completed efficiently, effectively solving the problems of idle core occupation, low CPU affinity and low processing efficiency that existing multi-core real-time operating systems face when processing massive stream data tasks. This not only ensures balanced scheduling of the stream tasks, but also provides for temporary storage of stream data entering the system, and improves stream task processing efficiency and system performance.
To verify the effectiveness of the stream data task scheduling method based on a multi-core real-time operating system, the same multi-stream-task tests were executed on the improved multi-core FreeRTOS + SMP system shown in FIG. 9 and on the reference multi-core FreeRTOS + SMP system shown in FIG. 10.
The test used 10 minutes of intercepted stream task data for simulation. Each test task packet contains high-frequency high-flow, high-frequency low-flow, low-frequency low-flow and low-frequency high-flow data at high, medium and low priorities, i.e. each test task contains 12 stream tasks. Test schemes for task creation, interrupt overhead, communication delay and notification overhead were designed with the kernel Benchmark to evaluate the processing performance of the different operating systems, yielding the test results shown in Tables 1-3.
Table 1: Reference FreeRTOS + SMP system stream task processing results

    Test load                           10 packets   20 packets   50 packets   100 packets
    Packets processed (FreeRTOS+SMP)    10           20           36           37
    Wait queue length                   0            0            14           63
    Task processing time                10 min       10 min       18 min       35 min
TABLE 2 results of the processing of the flow tasks of the improved FreeRTOS + SMP system of the present invention
[Table 2 appears as an image (Figure BDA0003222156620000171) in the original publication and is not reproduced here.]
TABLE 3 modified FreeRTOS + SMP System and reference FreeRTOS + SMP System Kernel test results
[Table 3 appears as an image (Figure BDA0003222156620000172) in the original publication and is not reproduced here.]
By comparing the computation speed of the improved FreeRTOS + SMP system with that of the reference FreeRTOS + SMP system, and their processing of different numbers of stream tasks, it can be found that: (1) when the number of stream task packets is small, both the reference system and the improved system complete the tasks effectively; (2) as the number of stream task packets gradually increases, the reference system is limited by its system characteristics and its stream task scheduling and execution efficiency gradually decreases, while the improved FreeRTOS + SMP scheduling system still executes well and achieves real-time computation of the stream tasks; (3) when the number of stream task packets multiplies, the processing ceiling of the reference system peaks at about 37 stream task packets whereas the improved system reaches 69; the wait queue length of the improved system is smaller than that of the reference system, and its stream task completion time is less than half the time required by the reference system; (4) comparing the kernel benchmark results, the improved system creates almost twice as many tasks with only a slight increase in notification cost, interrupt overhead, and communication latency. Together, the stream task tests prove that the stream data task scheduling method based on a multi-core real-time operating system is more efficient in actual stream data computation task processing and effectively ensures balanced scheduling of the stream tasks.
It should be noted that, although the steps in the above flowcharts are shown in the sequence indicated by the arrows, they are not necessarily executed in that sequence. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders.
In one embodiment, as shown in fig. 11, there is provided a multi-core operating system stream task scheduling system, the system comprising:
a pre-configuration module 1, configured to pre-configure a stream task buffer space, a stream task common queue and a stream task running queue of an operating system; the stream task buffer space comprises an annular storage space, a priority storage space, a context information storage space and a pointer array space; the pointer array space comprises an annular storage pointer, a priority storage pointer and a context information storage pointer which respectively point to the annular storage space, the priority storage space and the context information storage space;
a task creating module 2, configured to create a stream task in response to stream data reception, and store the stream task in the annular storage space;
the task cache module 3 is configured to read the stream task, generate a corresponding stream task priority, store the stream task priority in the priority storage space, and store the stream task in the stream task common queue;
the inter-core distribution module 4 is configured to distribute the stream tasks in the stream task common queue to the stream task running queue;
and the task scheduling module 5 is configured to schedule the stream tasks in the stream task running queue according to the stream task priority, and update addresses pointed by the annular storage pointer, the priority storage pointer, and the context information storage pointer.
For specific limitations of a multi-core operating system stream task scheduling system, reference may be made to the above limitations of a multi-core operating system stream task scheduling method, which is not described herein again. All or part of each module in the multi-core operating system stream task scheduling system can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
Fig. 12 shows an internal structure diagram of a computer device in one embodiment, and the computer device may be specifically a terminal or a server. As shown in fig. 12, the computer apparatus includes a processor, a memory, a network interface, a display, and an input device, which are connected through a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method for scheduling tasks of a multi-core operating system stream. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those of ordinary skill in the art that the architecture shown in FIG. 12 is only a block diagram of some of the structures associated with the present solution and does not limit the computer devices to which the present solution may be applied; a particular computer device may include more or fewer components than shown in the drawings, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the steps of the above method being performed when the computer program is executed by the processor.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the above-mentioned method.
To sum up, the embodiments of the present invention provide a multi-core operating system stream task scheduling method, system, computer device, and storage medium, which buffer stream task data in a stream task buffer space comprising an annular storage space, a priority storage space, a context information storage space, and a pointer array space; store ready-state stream tasks in a stream task common queue for multi-core communication; allocate them to stream task running queues based on CPU affinity and CPU load; update priorities in real time, based on the stream task priority and suspension time, at each time slice poll or system interrupt; and schedule the stream tasks in the stream task running queues to the CPU cores for processing with an SMP preemptive priority scheduling algorithm combined with time slice polling. With the four parts of the stream task buffer space working together, processing of the entire stream task is completed efficiently. This effectively solves the problems of idle core occupation, low CPU affinity and low processing efficiency that existing multi-core real-time operating systems face when processing massive stream data tasks, ensures balanced scheduling of the stream tasks, provides for temporary storage of stream data entering the system, and improves stream task processing efficiency and system performance.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above.
The embodiments in this specification are described in a progressive manner, and all the same or similar parts of the embodiments are directly referred to each other, and each embodiment is described with emphasis on differences from other embodiments. In particular, for embodiments of the system, the computer device, and the storage medium, since they are substantially similar to the method embodiments, the description is relatively simple, and in relation to the description, reference may be made to some portions of the description of the method embodiments. It should be noted that, the technical features of the embodiments may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express some preferred embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for those skilled in the art, various modifications and substitutions can be made without departing from the technical principle of the present invention, and these should be construed as the protection scope of the present application. Therefore, the protection scope of the present patent shall be subject to the protection scope of the claims.

Claims (10)

1. A multi-core operating system stream task scheduling method is characterized by comprising the following steps:
pre-configuring a stream task buffer space, a stream task common queue and a stream task running queue of an operating system; the stream task buffer space comprises an annular storage space, a priority storage space, a context information storage space and a pointer array space; the pointer array space comprises an annular storage pointer pointing to the annular storage space, a priority storage pointer pointing to the priority storage space and a context information storage pointer pointing to the context information storage space;
in response to streaming data reception, creating a streaming task and storing the streaming task in the annular storage space;
reading the stream task, generating a corresponding stream task priority, storing the stream task priority in the priority storage space, and storing the stream task in the stream task common queue;
distributing the stream tasks in the stream task common queue to the stream task running queue;
and scheduling the flow tasks in the flow task running queue according to the flow task priority, and updating the addresses pointed by the annular storage pointer, the priority storage pointer and the context information storage pointer.
2. The method for scheduling stream tasks in a multi-core operating system according to claim 1, wherein the step of pre-configuring the stream task buffer space, the stream task common queue and the stream task running queue of the operating system comprises:
calculating the occupation space of a single stream task according to the digit of the multi-core real-time operating system, and distributing the annular storage space by combining the number of the quasi-cache stream tasks;
distributing the priority storage space according to the size of the annular storage space and the data packet format of the stream task;
and distributing the context information storage space according to the size of the annular storage space and the interrupt suspension time of the multi-core real-time operating system.
3. The method of claim 1, wherein the step of creating a stream task in response to stream data reception and storing the stream task in the ring storage space comprises:
responding to the first stream data reception, allocating the annular storage pointer, the priority storage pointer and the context information storage pointer, and pointing the annular storage pointer, the priority storage pointer and the context information storage pointer respectively to the first addresses of the annular storage space, the priority storage space and the context information storage space;
responding to the non-first stream data receiving, judging whether the annular storage space is completely occupied, and if the annular storage space is completely occupied, deleting the stream task with the longest cache time in the annular storage space;
and storing the stream task in the annular storage space, and updating the annular storage pointer to point to the stream task.
4. The method for scheduling stream tasks of a multi-core operating system according to claim 1, wherein the reading the stream tasks, generating corresponding stream task priorities, storing the stream task priorities in the priority storage space, and storing the stream tasks in the stream task common queue comprises:
judging whether the streaming task is a new task, if so, generating the priority of the streaming task according to the header information of the streaming data;
and storing the stream task priority in the priority storage space, updating the priority storage pointer to point to the stream task priority, and storing the stream task corresponding to the priority storage pointer into the stream task common queue.
5. The method for scheduling stream tasks in a multi-core operating system according to claim 1, wherein allocating the stream tasks in the stream task common queue to the stream task running queue comprises:
judging whether the stream task common queue is full, and if so, obtaining the task allocation priority of each stream task running queue according to the CPU affinity and CPU load of that running queue;
and allocating the stream tasks of the stream task common queue to the stream task running queue with the maximum task allocation priority.
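A minimal sketch of the allocation rule in claim 5: score each per-core running queue from CPU affinity and load, then pick the queue with the highest score. The linear weighting is an assumption; the patent only states that both factors are considered.

```python
def allocation_priority(affinity, load, w_affinity=1.0, w_load=1.0):
    """Hypothetical scoring: higher affinity and lower CPU load yield a
    higher task allocation priority. Weights are illustrative."""
    return w_affinity * affinity - w_load * load

def pick_run_queue(run_queues):
    """run_queues: list of (core_id, affinity, load) tuples.
    Returns the core whose running queue has the maximum allocation priority."""
    best = max(run_queues, key=lambda rq: allocation_priority(rq[1], rq[2]))
    return best[0]
```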
6. The method for scheduling stream tasks in a multi-core operating system according to claim 1, wherein scheduling the stream tasks in the stream task running queue according to the stream task priority and updating the addresses pointed to by the ring storage pointer, the priority storage pointer and the context information storage pointer comprises the steps of:
in response to time-slice polling or a system interrupt, updating the stream task priority according to the stream task priority and the suspension time, and storing the updated priority at the address pointed to by the corresponding priority storage pointer;
selecting a target stream task from the stream task running queue according to the stream task priority;
scheduling the target stream task to a CPU core for processing, and reading the data information of the stream task according to the ring storage pointer, the priority storage pointer and the context information storage pointer;
in response to a system interrupt during processing of the target stream task, storing the processing result of the target stream task in the ring storage space, updating the context information storage pointer to the address pointed to by the current ring storage pointer, and then moving the ring storage pointer forward by one position;
in response to the target stream task re-entering the ready state after the interrupt, storing the target stream task in the stream task common queue;
and in response to completion of the target stream task, moving the ring storage pointer back by one position and deleting the target stream task from the ring storage space.
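The priority update in claim 6 combines the current priority with the suspension time, which is the classic aging technique for preventing starvation. The sketch below assumes a linear aging boost per tick of suspension (lower number means higher priority); the actual formula is not disclosed in the claims.

```python
def age_priority(base_priority, suspend_ticks, boost_per_tick=1):
    """Hypothetical aging rule: the longer a task has been suspended, the
    higher (numerically lower) its effective priority becomes."""
    return base_priority - boost_per_tick * suspend_ticks

def select_target(run_queue):
    """run_queue entries: (task, base_priority, suspend_ticks).
    Returns the task with the best (lowest) aged priority."""
    return min(run_queue, key=lambda e: age_priority(e[1], e[2]))[0]
```

With aging, a long-suspended low-priority task can overtake a fresher high-priority one, which is what keeps the running queue starvation-free.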
7. The method for scheduling stream tasks according to claim 6, wherein selecting the target stream task from the stream task running queue according to the stream task priority comprises:
in response to time-slice polling, sequentially selecting the corresponding stream tasks as target stream tasks in order of the stream task priorities of the stream tasks in the stream task running queue;
and when the stream task running queue is empty, preferentially scheduling the stream tasks of the stream task common queue to the stream task running queue for processing through an SMP preemptive priority scheduling algorithm.
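The empty-queue refill in claim 7 can be modelled with a priority queue: when a per-core running queue drains, the highest-priority task waiting in the common queue is moved into it. Modelling the common queue with `heapq` is an assumption for illustration; the patent names only "an SMP preemptive priority scheduling algorithm".

```python
import heapq

def refill_if_empty(run_queue, common_queue):
    """Sketch: when the per-core run queue is empty, preferentially schedule
    the highest-priority stream task from the common queue into it.
    common_queue is a heapified list of (priority, task); lower is better."""
    if not run_queue and common_queue:
        run_queue.append(heapq.heappop(common_queue))
    return run_queue
```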
8. A multi-core operating system stream task scheduling system, the system comprising:
a pre-configuration module, configured to pre-configure a stream task buffer space, a stream task common queue and a stream task running queue of an operating system; the stream task buffer space comprises a ring storage space, a priority storage space, a context information storage space and a pointer array space; the pointer array space comprises a ring storage pointer pointing to the ring storage space, a priority storage pointer pointing to the priority storage space and a context information storage pointer pointing to the context information storage space;
a task creating module, configured to create stream tasks in response to receiving stream data and store the stream tasks in the ring storage space;
a task cache module, configured to read the stream tasks, generate corresponding stream task priorities, store the stream task priorities in the priority storage space and store the stream tasks in the stream task common queue;
an inter-core distribution module, configured to allocate the stream tasks in the stream task common queue to the stream task running queue;
and a task scheduling module, configured to schedule the stream tasks in the stream task running queue according to the stream task priority and update the addresses pointed to by the ring storage pointer, the priority storage pointer and the context information storage pointer.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 7 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202110965573.8A 2021-06-25 2021-08-20 Multi-core operating system stream task scheduling method, system, computer and medium Pending CN113918291A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2021107240434 2021-06-25
CN202110724043 2021-06-25

Publications (1)

Publication Number Publication Date
CN113918291A true CN113918291A (en) 2022-01-11

Family

ID=79233247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110965573.8A Pending CN113918291A (en) 2021-06-25 2021-08-20 Multi-core operating system stream task scheduling method, system, computer and medium

Country Status (1)

Country Link
CN (1) CN113918291A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115951989A (en) * 2023-03-15 2023-04-11 之江实验室 Collaborative flow scheduling numerical simulation method and system based on strict priority


Similar Documents

Publication Publication Date Title
US11558244B2 (en) Improving performance of multi-processor computer systems
CN109218355B (en) Load balancing engine, client, distributed computing system and load balancing method
CN108776934B (en) Distributed data calculation method and device, computer equipment and readable storage medium
US9246840B2 (en) Dynamically move heterogeneous cloud resources based on workload analysis
US8468251B1 (en) Dynamic throttling of access to computing resources in multi-tenant systems
US8424007B1 (en) Prioritizing tasks from virtual machines
EP1750200A2 (en) System and method for executing job step, and computer product
US9513965B1 (en) Data processing system and scheduling method
WO2019056695A1 (en) Task scheduling method and apparatus, terminal device, and computer readable storage medium
US20120297216A1 (en) Dynamically selecting active polling or timed waits
US10733022B2 (en) Method of managing dedicated processing resources, server system and computer program product
US10884778B1 (en) Adjusting dynamically scalable instance hosting based on compute resource usage
US9547576B2 (en) Multi-core processor system and control method
US10037225B2 (en) Method and system for scheduling computing
Tang et al. Data-aware resource scheduling for multicloud workflows: A fine-grained simulation approach
CN114090223A (en) Memory access request scheduling method, device, equipment and storage medium
CN114675964A (en) Distributed scheduling method, system and medium based on Federal decision tree model training
Xu et al. Enhancing performance and energy efficiency for hybrid workloads in virtualized cloud environment
Wen et al. Research and realization of nginx-based dynamic feedback load balancing algorithm
US8141077B2 (en) System, method and medium for providing asynchronous input and output with less system calls to and from an operating system
US8321569B2 (en) Server resource allocation
CN113918291A (en) Multi-core operating system stream task scheduling method, system, computer and medium
CN115525400A (en) Method, apparatus and program product for managing multiple computing tasks on a batch basis
US9405470B2 (en) Data processing system and data processing method
US11474868B1 (en) Sharded polling system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination