CN114579285A - Task running system and method and computing device - Google Patents
- Publication number
- CN114579285A (publication number); CN202210463083.2A (application number)
- Authority
- CN
- China
- Prior art keywords
- interrupt
- task
- real
- time
- highest
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4812—Task transfer initiation or dispatching by interrupt, e.g. masked
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/48—Indexing scheme relating to G06F9/48
- G06F2209/484—Precedence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5021—Priority
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Debugging And Monitoring (AREA)
Abstract
The invention discloses a task running system, a task running method and a computing device. The task running system comprises a preemptive kernel, an interrupt preprocessing module, and a real-time running time domain and a general running time domain arranged above the interrupt preprocessing module. When the real-time running time domain receives an interrupt signal, it interrupts the real-time task being executed and performs interrupt processing, then acquires the most urgent real-time task from the real-time task queue so that it is executed immediately. When the general running time domain receives an interrupt signal, it acquires the highest-priority computing task from the computing-task queue, interrupts the low-priority computing task being executed and performs interrupt processing, so that the highest-priority computing task is executed immediately. The technical scheme of the invention enables real-time tasks and computing tasks to run in a mixed manner.
Description
Technical Field
The present invention relates to the field of edge computing and operating systems, and in particular, to a task running system, a task running method, and a computing device.
Background
Cloud computing technology improves the utilization of hardware resources by managing resources separately from the applications that use them, greatly reducing the cost of IT information systems while improving system availability. At present, mainstream large-scale Internet services are built on infrastructure provided by cloud computing.
With the development of 5G technology, the Internet is extending into deeper and wider fields, and the topological distance between network terminals and the network center keeps growing, which degrades the responsiveness of network applications. A network terminal sits at the boundary between the digital world and the physical world; network terminal devices usually have real-time requirements and need to complete a computing task and feed back the result within a bounded time.
In the prior art, the computing capability of network terminal devices is generally weak for reasons such as cost, so the computing resources of the cloud center must be used to complete the core part of a computing task, with the remaining part completed by the network terminal device. However, because the cloud center is far from the network terminal, the network transmission involved in this interaction harms real-time performance. A new computing tier therefore needs to be introduced between the cloud center and the network terminal, that is, at the edge of the cloud, to resolve the contradiction between computing resources and physical distance.
Existing operating systems include real-time operating systems and general-purpose operating systems. Network terminal equipment generally adopts a real-time operating system to guarantee the real-time behavior of its tasks, while a cloud-center server generally adopts a general-purpose operating system to maximize the utilization of computing resources.
An edge computing server node, however, needs to run both computing tasks and real-time tasks, so neither an existing real-time operating system alone nor a general-purpose operating system alone can meet its requirements.
Therefore, a task execution system with mixed time domain characteristics is needed to solve the problems in the prior art.
Disclosure of Invention
To this end, the present invention provides a task execution system, a task execution method and a computing device to solve, or at least alleviate, the problems described above.
According to an aspect of the present invention, there is provided a task execution system adapted to be deployed in an edge computing server. The system includes a preemptive kernel, an interrupt preprocessing module arranged above the preemptive kernel, and a real-time running time domain and a general running time domain arranged above the interrupt preprocessing module, wherein: the interrupt preprocessing module is adapted to respond to an interrupt signal generated by hardware, determine the interrupt type from the interrupt signal, and send the interrupt signal to the corresponding running time domain for processing according to that type; the real-time running time domain is adapted, when an interrupt signal is received, to interrupt the real-time task being executed and perform interrupt processing, and to acquire the most urgent real-time task from the real-time task queue so that it is executed immediately; the general running time domain is adapted, when an interrupt signal is received, to acquire the highest-priority computing task from the computing-task queue, interrupt the low-priority computing task being executed and perform interrupt processing, so that the highest-priority computing task is executed immediately.
Optionally, in the task execution system according to the present invention, the real-time running time domain includes: a fast interrupt module adapted to interrupt the real-time task being executed and perform interrupt processing when an interrupt signal is received; a real-time scheduling module adapted to acquire the most urgent real-time task from the real-time task queue using a real-time scheduling algorithm so that it is executed immediately; and a real-time operation module adapted to provide memory management services for the most urgent real-time task.
Optionally, in the task execution system according to the present invention, the fast interrupt module is further adapted to: send the interrupt signal to the processor, so that the processor looks up the corresponding interrupt handler in the fast interrupt vector table, and the interrupt handler interrupts the real-time task being executed and performs interrupt processing.
Optionally, in the task execution system according to the present invention, the general running time domain includes: a general scheduling module adapted to acquire the highest-priority computing task from the computing-task queue using a fair scheduling algorithm when an interrupt signal is received, so that it is executed immediately; a threaded interrupt module adapted to interrupt the low-priority computing task and perform interrupt processing; and a general operation module adapted to provide memory management services for the highest-priority computing task.
Optionally, in the task execution system according to the present invention, the threaded interrupt module is further adapted to: send the interrupt signal to the processor, so that the processor converts the interrupt signal into a corresponding interrupt request and looks up one or more interrupt handlers associated with that interrupt request in the interrupt request registry; and sequentially wake the processing threads corresponding to those interrupt handlers, so that the low-priority computing task is interrupted and interrupt processing is performed by those processing threads.
Optionally, in the task execution system according to the present invention, the fair scheduling algorithm includes a CFS scheduling algorithm.
Optionally, in the task execution system according to the present invention, the real-time scheduling algorithm includes a minimum slack priority scheduling algorithm.
Optionally, in the task execution system according to the present invention, the interrupt preprocessing module is further adapted to: and acquiring interrupt source information from the interrupt signal, and determining the interrupt type according to the interrupt source information.
According to another aspect of the present invention, there is provided a task execution method executed in a task execution system, the system including a preemptive kernel, an interrupt preprocessing module arranged above the preemptive kernel, and a real-time running time domain and a general running time domain arranged above the interrupt preprocessing module. The method includes the following steps: the interrupt preprocessing module responds to an interrupt signal generated by hardware, determines the interrupt type from the interrupt signal, and sends the interrupt signal to the corresponding running time domain for processing according to that type; when the real-time running time domain receives an interrupt signal, it interrupts the real-time task being executed and performs interrupt processing, then acquires the most urgent real-time task from the real-time task queue so that it is executed immediately; and when the general running time domain receives an interrupt signal, it acquires the highest-priority computing task from the computing-task queue, interrupts the low-priority computing task and performs interrupt processing, so that the highest-priority computing task is executed immediately.
According to yet another aspect of the invention, there is provided a computing device comprising: at least one processor; and a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor and comprise instructions for performing the task execution method described above.
According to still another aspect of the present invention, there is provided a readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform the task execution method described above.
According to the technical scheme of the invention, by providing both a real-time running time domain and a general running time domain, the task running system can meet the requirement of running real-time tasks and computing tasks in a mixed manner. The interrupt preprocessing module responds to an interrupt signal generated by hardware and dispatches it, according to the interrupt type, to the corresponding running time domain for processing. When the real-time running time domain receives an interrupt signal, it immediately interrupts the real-time task being executed and performs interrupt processing, then selects the most urgent real-time task from the CPU's real-time task queue so that the CPU executes it immediately. When the general running time domain receives an interrupt signal, it first acquires the highest-priority computing task from the CPU's computing-task queue, and may then interrupt the low-priority computing task being executed and perform interrupt processing so that the CPU immediately executes the highest-priority computing task. The task running system therefore has mixed time-domain characteristics: it gives priority to the most urgent real-time task and to the highest-priority computing task, and can schedule and execute real-time tasks and computing tasks at the same time. It can thus satisfy the demands of mixed services for both computing capability and real-time control capability, which is exactly what an edge computing server node requires for the mixed running of real-time tasks and computing tasks. In addition, because the resources of an edge computing server node are limited, running real-time and computing tasks together on one platform allows those limited resources to be used fully and optimally. Moreover, deploying the task running system on the edge computing server node connects the cloud-center server with the network terminal devices into a more complete network information system, alleviating the slow response caused by the long physical distance between the cloud-center server and the network terminal.
The foregoing is only an overview of the technical solutions of the present invention. Embodiments of the invention are described below so that the technical means of the invention can be understood more clearly, and so that the above and other objects, features and advantages of the invention become more readily apparent.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 shows a schematic diagram of a task execution system 120 deployed in an edge computing server 100, according to one embodiment of the invention;
FIG. 2 shows a schematic diagram of a computing device 200, according to one embodiment of the invention;
FIG. 3 shows a flow diagram of a task execution method 300 according to one embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a schematic diagram of a task execution system 120 deployed in an edge computing server 100 according to an embodiment of the present invention.
As shown in fig. 1, the edge computing server 100 may include a hardware layer 110, a task execution system 120, and an application layer 130. In some embodiments, the task execution system 120 may be a part of the operating system, that is, the task execution system 120 is included in the operating system of the edge computing server 100. In still other embodiments, the operating system of the edge computing server 100 may be implemented as the task execution system 120 of the present invention.
Specifically, the application layer 130 may include one or more applications; the present invention does not limit the kind or number of applications, and developers may also develop applications based on actual business needs. Each application can call an interface provided by the task execution system 120 to request the task execution system to run a task. In one embodiment, the applications include, for example, applications for hybrid environment monitoring, hybrid traffic debugging, hybrid traffic analysis, and the like.
The hardware layer 110 may provide a hardware environment for the task execution system and the applications. The hardware layer 110 may include a processor (CPU) and internal memory, and may further include external hardware devices such as a network card, a hard disk, and a keyboard.
The task execution system 120 may provide an execution environment for one or more tasks that the application requests to execute, including real-time tasks and computing-type tasks.
The task execution system 120 according to the present invention can simultaneously execute a real-time task and a computational task. It should be noted that the real-time task is a task that needs to respond in a predetermined time, for example, a task of controlling a traffic light signal. The computing task is a task which needs to process a large amount of data and has high requirement on computing capacity, such as audio and video processing, database application and the like.
Real-time tasks and computing tasks have different requirements in interrupt handling, scheduling, memory management and other respects. In interrupt handling, a real-time task relies on interrupt processing to let the most urgent task execute as soon as possible, whereas a computing task should not have the currently executing task interrupted frequently. In scheduling, a real-time task needs to be scheduled as soon as its triggering event arrives, whereas a computing service should not have the currently executing task scheduled out frequently. In memory management, a real-time task needs to reside entirely in physical memory rather than being swapped out to virtual memory, whereas a computing task needs to make full use of virtual memory to compute over large amounts of data.
In view of the requirements of real-time tasks and computing tasks in terms of interrupt handling, scheduling processing, memory management, etc., the present invention provides a task execution system 120 capable of scheduling and executing real-time tasks and computing tasks simultaneously.
According to an embodiment of the present invention, as shown in fig. 1, the task execution system 120 includes a preemptive kernel 121, an interrupt preprocessing module 122 disposed above the preemptive kernel 121, and a runtime system disposed above the interrupt preprocessing module 122. The runtime system includes a real-time runtime domain 123 and a general runtime domain 124 (non-real-time runtime domain), where the real-time runtime domain 123 may provide a runtime environment for a real-time task, and the general runtime domain 124 may provide a runtime environment for a computing task.
It should be noted that the preemptive kernel 121 of the present invention adopts a fully preemptive kernel to guarantee basic real-time response capability and unified scheduling of resources. Each running time domain uses a completely independent core software stack, so the two running time domains are isolated from each other, ensuring that real-time tasks and computing tasks with different timing characteristics run independently.
In one embodiment, as shown in fig. 1, the task execution system 120 further includes an application domain management module 126 located above the real-time running time domain 123 and the general running time domain 124. The application domain management module 126 provides a unified domain control interface for the applications of the upper application layer, so that an application requests execution of a task by calling the domain control interface. The application domain management module 126 receives task execution requests sent by one or more applications and assigns each task to the corresponding time domain for execution according to its type: a real-time task is allocated to the real-time running time domain 123, which provides a real-time running environment for it, and a computing task is allocated to the general running time domain 124, which provides a general running environment for it.
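By way of illustration only, the following minimal C sketch shows how such a unified domain control interface might route a submitted task to the real-time or general running time domain according to its declared type. The task structure, function names and the two stub queues are assumptions introduced for the example; they are not part of the disclosed embodiment.

```c
#include <stdio.h>

/* Hypothetical task descriptor and domain entry points; all identifiers
 * here are illustrative assumptions. */
enum task_type { TASK_REALTIME, TASK_COMPUTE };

struct task {
    const char *name;
    enum task_type type;
};

static void rt_domain_enqueue(const struct task *t)      /* real-time time domain 123 */
{
    printf("real-time domain <- %s\n", t->name);
}

static void general_domain_enqueue(const struct task *t) /* general time domain 124 */
{
    printf("general domain   <- %s\n", t->name);
}

/* Unified domain control interface: route a submitted task by its type. */
static void domain_submit(const struct task *t)
{
    if (t->type == TASK_REALTIME)
        rt_domain_enqueue(t);
    else
        general_domain_enqueue(t);
}

int main(void)
{
    struct task control = { "traffic-light-control", TASK_REALTIME };
    struct task video   = { "video-transcode",       TASK_COMPUTE  };

    domain_submit(&control);
    domain_submit(&video);
    return 0;
}
```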
According to an embodiment of the present invention, each module in the real-time running time domain 123 is configured to prioritize the real-time task with the highest urgency, and each module in the general running time domain 124 is configured to prioritize the computing task with the highest priority.
Specifically, the preemptive kernel 121 receives interrupt signals generated by hardware, i.e., hardware interrupt signals. A hardware interrupt signal is generated automatically by a hardware device (e.g., network card, hard disk, keyboard) communicatively coupled to the task execution system 120. The preemptive kernel 121 then passes the interrupt signal to the interrupt preprocessing module 122, which assigns it to the corresponding running time domain for processing.
The interrupt preprocessing module 122 responds to an interrupt signal generated by hardware, determines the interrupt type from the interrupt signal, and assigns the interrupt signal to the corresponding running time domain (the real-time running time domain 123 or the general running time domain 124) for processing according to that type. Here the interrupt signal carries interrupt source information; the interrupt preprocessing module 122 obtains the interrupt source information from the interrupt signal and determines the interrupt type from it. The interrupt type determines whether the most urgent real-time task or the highest-priority computing task needs to be executed immediately.
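As a rough illustration of this dispatch step, the sketch below classifies an interrupt signal by its source information and forwards it to the matching running time domain. The source-identifier table and the print statements are placeholders; a real implementation would consult hardware-specific interrupt source data.

```c
#include <stddef.h>
#include <stdio.h>

/* Illustrative interrupt signal: only the interrupt source information
 * inspected by the preprocessing module is modelled here. */
struct irq_signal {
    int source_id;
};

enum irq_domain { IRQ_DOMAIN_REALTIME, IRQ_DOMAIN_GENERAL };

/* Assumed classification table: sources whose interrupts belong to the
 * real-time running time domain (e.g. fieldbus or sensor controllers). */
static const int realtime_sources[] = { 3, 7, 12 };

static enum irq_domain classify_irq(const struct irq_signal *sig)
{
    for (size_t i = 0; i < sizeof realtime_sources / sizeof realtime_sources[0]; i++)
        if (sig->source_id == realtime_sources[i])
            return IRQ_DOMAIN_REALTIME;
    return IRQ_DOMAIN_GENERAL;
}

/* Called for every hardware interrupt signal: determine the interrupt
 * type and dispatch the signal to the matching running time domain. */
void preprocess_irq(const struct irq_signal *sig)
{
    if (classify_irq(sig) == IRQ_DOMAIN_REALTIME)
        printf("IRQ source %d -> real-time running time domain\n", sig->source_id);
    else
        printf("IRQ source %d -> general running time domain\n", sig->source_id);
}
```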
When the real-time running time domain 123 receives the interrupt signal, it first immediately interrupts the real-time task being executed and performs interrupt processing, and then uses a real-time scheduling algorithm to obtain the most urgent real-time task from the processor's (CPU's) real-time task queue, so that the processor executes it immediately. In this case the most urgent real-time task preempts the processor. In other words, on receiving an interrupt signal the real-time running time domain 123 first performs interrupt processing on the executing task and then performs scheduling.
When the general running time domain 124 receives the interrupt signal, it first obtains the highest-priority computing task from the processor's computing-task queue, for example based on a fair scheduling algorithm. It may then interrupt the low-priority computing task being executed and perform interrupt processing so that the processor immediately executes the highest-priority computing task.
More precisely, after selecting the highest-priority computing task from the computing-task queue, the general running time domain 124 determines whether the selected task is the computing task already being executed. If it is not, the executing computing task is by definition a low-priority computing task; that low-priority task (i.e., the previously highest-priority one) is interrupted and interrupt processing is performed so that the processor immediately switches to the highest-priority computing task, which thereby preempts the processor.
If the selected highest-priority computing task is the one already being executed, there is no need to interrupt it or perform interrupt processing, which avoids interrupting computing tasks frequently. In other words, on receiving an interrupt signal the general running time domain 124 first performs scheduling and then decides whether interrupt processing is required.
According to an embodiment of the present invention, the real-time running time domain 123 includes a fast interrupt module 1231, a real-time scheduling module 1232 and a real-time operation module 1233. While the most urgent real-time task is running, the real-time operation module 1233 provides memory management services for it.
The fast interrupt module 1231 runs on the processor and, when an interrupt signal is received, interrupts the real-time task being executed and performs interrupt processing.
Further, the fast interrupt module 1231 may be bound to an interrupt handler. On receiving an interrupt signal, the fast interrupt module 1231 sends it to the processor; the processor looks up the corresponding interrupt handler (i.e., the handler bound to the fast interrupt module 1231) in the fast interrupt vector table and delivers the interrupt signal to it, so that the interrupt handler interrupts the real-time task being executed and performs the interrupt processing.
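A minimal sketch of such a fast interrupt path is shown below, assuming the fast interrupt vector table can be modelled as a simple array of handler pointers indexed by vector number; the names and the table size are illustrative only.

```c
#include <stddef.h>

/* Illustrative fast interrupt vector table: the vector number indexes an
 * array of handler pointers, so one lookup and one call are enough to
 * interrupt the running real-time task and enter its handler. */
#define FAST_VECTORS 64

typedef void (*fast_isr_t)(void);

static fast_isr_t fast_vector_table[FAST_VECTORS];

/* Bind a handler to a vector of the fast interrupt module (assumed API). */
int bind_fast_isr(unsigned int vector, fast_isr_t isr)
{
    if (vector >= FAST_VECTORS)
        return -1;
    fast_vector_table[vector] = isr;
    return 0;
}

/* Dispatch path taken when the fast interrupt module forwards a signal. */
void dispatch_fast_irq(unsigned int vector)
{
    if (vector < FAST_VECTORS && fast_vector_table[vector] != NULL)
        fast_vector_table[vector]();   /* handler preempts the real-time task */
}
```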
It should be noted that the interrupt processing refers to a processing procedure in which, when a new task requiring priority execution occurs, the processor temporarily suspends execution of the current task and executes the new task (for example, the real-time task with the highest urgency or the calculation-type task with the highest priority in the above-described embodiment).
The real-time scheduling module 1232 may obtain the most urgent real-time task from the real-time task queue using a real-time scheduling algorithm, so that the processor immediately executes the most urgent real-time task.
In one embodiment, the real-time scheduling algorithm may be implemented as a minimum-slack-first (least-laxity-first) scheduling algorithm; that is, the most urgent real-time task is obtained from the real-time task queue using the minimum-slack-first scheduling algorithm. This algorithm determines the priority of a task by its urgency, expressed as slack: the more urgent a real-time task is, the higher the priority it is given, so that the most urgent real-time task runs first. In the real-time task queue the tasks are ordered by slack from low to high (that is, by urgency from high to low), and the task with the lowest slack (highest urgency) is placed at the head of the queue and scheduled first. The slack is calculated as: slack of a real-time task = time by which it must complete − its remaining run time − current time. According to this algorithm, when a real-time task's slack drops to 0 the real-time scheduling module 1232 must schedule it immediately so that it preempts the processor at once, guaranteeing that the task completes by its deadline.
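The selection rule can be illustrated with the following C sketch, which computes the slack exactly as defined above and returns the queued task with the minimum slack. The task fields and function names are assumptions made for the example.

```c
#include <stddef.h>

/* Illustrative real-time task record; field names are assumptions. */
struct rt_task {
    const char *name;
    long deadline;    /* absolute time by which the task must complete */
    long remaining;   /* execution time the task still needs           */
};

/* slack = time by which it must complete - remaining run time - current time */
static long slack(const struct rt_task *t, long now)
{
    return t->deadline - t->remaining - now;
}

/* Minimum-slack-first selection: return the queued task with the lowest
 * slack, i.e. the most urgent one; slack 0 means it must run at once. */
const struct rt_task *pick_most_urgent(const struct rt_task *queue, size_t n, long now)
{
    const struct rt_task *best = NULL;

    for (size_t i = 0; i < n; i++)
        if (best == NULL || slack(&queue[i], now) < slack(best, now))
            best = &queue[i];

    return best;
}
```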
According to an embodiment of the present invention, the general running time domain 124 includes a general scheduling module 1242, a threaded interrupt module 1241 and a general operation module 1243. The general operation module 1243 provides memory management services for the highest-priority computing task while it executes.
When an interrupt signal is received, the general scheduling module 1242 selects the highest-priority computing task from the processor's computing-task queue using a fair scheduling algorithm, so that the processor executes it immediately. In one implementation, the fair scheduling algorithm may be the CFS (Completely Fair Scheduler) algorithm: the general scheduling module 1242 always selects, as the highest-priority computing task, the computing task that has so far received the least running time, so that the task that has made the least progress gets more opportunities to run.
If the highest-priority computing task is the one already being executed, it simply continues to execute and does not need to be interrupted.
If the highest-priority computing task is not the one already being executed, in other words the executing computing task is a low-priority computing task, the threaded interrupt module 1241 interrupts the executing low-priority computing task and performs interrupt processing.
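The two decisions described above — picking the task that has received the least running time, then preempting only if it differs from the task already executing — can be sketched as follows. The vruntime field mirrors the bookkeeping used by CFS-style schedulers; all identifiers are illustrative.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative computing-task record: "vruntime" follows the CFS idea of
 * accumulated (weighted) runtime; a smaller value means the task has
 * received less CPU time so far. */
struct compute_task {
    const char *name;
    unsigned long long vruntime;
};

/* Fair selection: pick the queued task that has run the least. */
static struct compute_task *pick_highest_priority(struct compute_task *queue, size_t n)
{
    struct compute_task *best = NULL;

    for (size_t i = 0; i < n; i++)
        if (best == NULL || queue[i].vruntime < best->vruntime)
            best = &queue[i];

    return best;
}

/* Decide whether the running task must be interrupted: preemption (and the
 * associated interrupt processing) happens only when the selected task is
 * not the one already executing. */
bool needs_preemption(const struct compute_task *running,
                      struct compute_task *queue, size_t n,
                      struct compute_task **next_out)
{
    struct compute_task *next = pick_highest_priority(queue, n);

    *next_out = next;
    return next != NULL && next != running;
}
```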
In one embodiment, the threaded interrupt module 1241 runs on the processor. It interrupts an executing low-priority computing task and performs interrupt processing as follows: the interrupt signal is sent to the processor, which converts it into a corresponding interrupt request (IRQ) and looks up one or more interrupt handlers associated with that request in the interrupt request registry. The processor then sequentially wakes the processing threads corresponding to those interrupt handlers, so that the executing low-priority computing task is interrupted and interrupt processing is performed by those processing threads.
Specifically, there may be several interrupt handlers associated with one interrupt request. As the processor searches the interrupt request registry, each time it finds an associated interrupt handler it wakes the processing thread corresponding to that handler, so that the executing low-priority computing task is interrupted and the interrupt is handled by that thread; it waits for that thread to finish, then looks up the next associated interrupt handler in the registry, wakes the corresponding thread and waits for it to finish. This repeats until the processing threads of all interrupt handlers associated with the interrupt request have finished, at which point the interrupt processing is complete.
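The sequential hand-off described above can be illustrated with the following sketch of an interrupt request registry. For brevity the per-handler threads are collapsed into direct, ordered calls; in the actual threaded interrupt mechanism each handler would run in its own woken thread and be waited on before the next is woken.

```c
#include <stddef.h>

/* Illustrative interrupt request registry; all names are assumptions. */
struct irq_handler {
    int irq;                    /* interrupt request number this handler serves */
    void (*handle)(int irq);    /* work performed in the handler's thread       */
};

#define MAX_HANDLERS 16

static struct irq_handler registry[MAX_HANDLERS];
static size_t registry_len;

int register_irq_handler(int irq, void (*handle)(int))
{
    if (registry_len >= MAX_HANDLERS)
        return -1;
    registry[registry_len].irq = irq;
    registry[registry_len].handle = handle;
    registry_len++;
    return 0;
}

/* Handle one interrupt request: find every registered handler associated
 * with it and run each to completion before moving on to the next, which
 * mirrors "wake one processing thread, wait for it, then wake the next". */
void run_threaded_irq(int irq)
{
    for (size_t i = 0; i < registry_len; i++)
        if (registry[i].irq == irq)
            registry[i].handle(irq);
}
```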
In addition, in an embodiment, as shown in fig. 1, a domain resource management module 125 may be further disposed between the real-time running time domain 123 and the general running time domain 124, where the domain resource management module 125 is configured to separate resources of the real-time running time domain 123 and the general running time domain 124, and ensure isolation of resources and characteristics between the real-time running time domain 123 and the general running time domain 124.
According to the task execution system 120 of the present invention, real-time tasks and computing tasks can be run in a mixed manner by providing a real-time running time domain and a general running time domain. The interrupt preprocessing module responds to an interrupt signal generated by hardware and dispatches it, according to the interrupt type, to the corresponding running time domain for processing. When the real-time running time domain receives an interrupt signal, it immediately interrupts the real-time task being executed and performs interrupt processing, then selects the most urgent real-time task from the CPU's real-time task queue so that the CPU executes it immediately. When the general running time domain receives an interrupt signal, it first acquires the highest-priority computing task from the CPU's computing-task queue, and may then interrupt the executing low-priority computing task and perform interrupt processing so that the CPU immediately executes the highest-priority computing task. The task execution system therefore has mixed time-domain characteristics: it gives priority to the most urgent real-time task and to the highest-priority computing task, and can schedule and execute real-time tasks and computing tasks at the same time. It can thus satisfy the demands of mixed services for both computing capability and real-time control capability, which is exactly what an edge computing server node requires for the mixed running of real-time tasks and computing tasks. In addition, because the resources of an edge computing server node are limited, running real-time and computing tasks together on one platform allows those limited resources to be used fully and optimally. Moreover, deploying the task execution system on the edge computing server node connects the cloud-center server with the network terminal devices into a more complete network information system, alleviating the slow response caused by the long physical distance between the cloud-center server and the network terminal.
According to one embodiment of the invention, the edge computing server 100 may be implemented as the computing device 200 described below.
FIG. 2 shows a schematic diagram of a computing device 200, according to one embodiment of the invention.
As shown in FIG. 2, in a basic configuration 202, a computing device 200 typically includes a system memory 206 and one or more processors 204. A memory bus 208 may be used for communication between the processor 204 and the system memory 206.
Depending on the desired configuration, the processor 204 may be any type of processor, including but not limited to: a microprocessor (µP), a microcontroller (µC), a digital signal processor (DSP), or any combination thereof. The processor 204 may include one or more levels of cache, such as a level-one cache 210 and a level-two cache 212, a processor core 214, and registers 216. An example processor core 214 may include an arithmetic logic unit (ALU), a floating-point unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An example memory controller 218 may be used with the processor 204, or in some implementations the memory controller 218 may be an internal part of the processor 204.
Depending on the desired configuration, system memory 206 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. System memory 206 may include an operating system 220, one or more applications 222, and program data 224. The application 222 is actually a plurality of program instructions that direct the processor 204 to perform corresponding operations. In some embodiments, application 222 may be arranged to cause processor 204 to operate with program data 224 on an operating system.
Computing device 200 also includes storage device 232, storage device 232 including removable storage 236 and non-removable storage 238.
Computing device 200 may also include a storage interface bus 234. The storage interface bus 234 enables communication from the storage devices 232 (e.g., removable storage 236 and non-removable storage 238) to the basic configuration 202 via the bus/interface controller 230. Operating system 220, applications 222, and at least a portion of program data 224 may be stored on removable storage 236 and/or non-removable storage 238, and loaded into system memory 206 via storage interface bus 234 and executed by one or more processors 204 when computing device 200 is powered on or applications 222 are to be executed.
Computing device 200 may also include an interface bus 240 that facilitates communication from various interface devices (e.g., output devices 242, peripheral interfaces 244, and communication devices 246) to the basic configuration 202 via the bus/interface controller 230. The exemplary output device 242 includes an image processing unit 248 and an audio processing unit 250. They may be configured to facilitate communication with various external devices such as a display or speakers via one or more a/V ports 252. Example peripheral interfaces 244 can include a serial interface controller 254 and a parallel interface controller 256, which can be configured to facilitate communications with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 258. An example communication device 246 may include a network controller 260, which may be arranged to facilitate communications with one or more other computing devices 262 over a network communication link via one or more communication ports 264.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer-readable instructions, data structures or program modules in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media. A "modulated data signal" is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired or dedicated-wire network, and various wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) or other wireless media. The term computer-readable media as used herein may include both storage media and communication media.
In an embodiment in accordance with the invention, the computing device 200 is configured to perform a task execution method 300 in accordance with the invention. The operating system of the computing device 200 includes a plurality of program instructions for performing the task execution method 300 of the present invention, such that the computing device performs the hybrid execution of real-time tasks and computing-type tasks by performing the task execution method 300 of the present invention.
In one embodiment of the invention, the task execution system 120 is included in the operating system of the computing device 200, and the task execution system 120 is configured to perform the task execution method 300 according to the invention.
FIG. 3 shows a flow diagram of a task execution method 300 according to one embodiment of the invention. The method 300 is suitable for execution in the task execution system 120, which may be deployed in an edge computing server 100 (e.g., the aforementioned computing device 200).
As shown in FIG. 3, the method 300 includes steps S310 to S330.
In step S310, the interrupt preprocessing module 122, in response to an interrupt signal generated by hardware, determines the interrupt type from the interrupt signal and assigns the interrupt signal to the corresponding running time domain (the real-time running time domain 123 or the general running time domain 124) for processing according to that type. The interrupt signal carries interrupt source information; the interrupt preprocessing module 122 obtains this information from the signal and determines the interrupt type from it. The interrupt type determines whether the most urgent real-time task or the highest-priority computing task needs to be executed immediately.
In step S320, when the real-time running time domain 123 receives the interrupt signal, it first immediately interrupts the real-time task being executed and performs interrupt processing, and then uses a real-time scheduling algorithm to obtain the most urgent real-time task from the CPU's real-time task queue so that the CPU executes it immediately, the most urgent real-time task thereby preempting the CPU. In other words, on receiving an interrupt signal the real-time running time domain 123 first performs interrupt processing on the executing task and then performs scheduling.
In step S330, when the general running time domain 124 receives the interrupt signal, it first obtains the highest-priority computing task from the CPU's computing-task queue, for example based on a fair scheduling algorithm, and may then interrupt the executing low-priority computing task and perform interrupt processing so that the CPU immediately executes the highest-priority computing task.
More precisely, after the highest-priority computing task is selected from the computing-task queue, it is necessary to determine whether the selected task is the computing task already being executed. If it is not, the executing computing task is a low-priority computing task; that task (i.e., the previously highest-priority one) is interrupted and interrupt processing is performed so that the CPU immediately switches to the highest-priority computing task, which thereby preempts the CPU.
If the selected highest-priority computing task is the one already being executed, there is no need to interrupt it or perform interrupt processing, which avoids interrupting computing tasks frequently. In other words, on receiving an interrupt signal the general running time domain 124 first performs scheduling and then decides whether interrupt processing is required.
It should be noted that, for specific implementation of the task execution method 300, reference may be made to the foregoing description of the task execution system 120, and details are not described here.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media such as removable hard drives, USB flash drives, floppy disks, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The memory is configured to store the program code; the processor is configured to perform the task execution method of the present invention according to the instructions in the program code stored in the memory.
By way of example, and not limitation, readable media may comprise readable storage media and communication media. Readable storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. Combinations of any of the above are also included within the scope of readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with examples of this invention. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Additionally, some of the embodiments are described herein as a method or combination of method elements that can be implemented by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed by way of illustration and not limitation with respect to the scope of the invention, which is defined by the appended claims.
Claims (11)
1. A task execution system adapted to be deployed in an edge computing server, the system comprising a preemptive kernel, an interrupt preprocessing module arranged above the preemptive kernel, and a real-time running time domain and a general running time domain arranged above the interrupt preprocessing module, wherein:
the interrupt preprocessing module is suitable for responding to an interrupt signal generated by hardware, determining an interrupt type according to the interrupt signal, and sending the interrupt signal to a corresponding running time domain for processing according to the interrupt type;
the real-time running time domain is suitable for, when an interrupt signal is received, interrupting the real-time task being executed and performing interrupt processing, and acquiring the most urgent real-time task from the real-time task queue so as to immediately execute it;
the general running time domain is suitable for, when an interrupt signal is received, acquiring the highest-priority computing task from the computing-task queue, interrupting the low-priority computing task and performing interrupt processing so as to immediately execute the highest-priority computing task.
2. The system of claim 1, wherein the real-time running time domain comprises:
a fast interrupt module suitable for interrupting the real-time task being executed and performing interrupt processing when the interrupt signal is received;
the real-time scheduling module is suitable for acquiring a real-time task with the highest emergency degree from the real-time task queue by using a real-time scheduling algorithm so as to immediately execute the real-time task with the highest emergency degree;
and the real-time operation module is suitable for providing memory management service for the real-time task with the highest emergency degree.
3. The system of claim 2, wherein the fast interrupt module is further adapted to:
send the interrupt signal to a processor, so that the processor looks up the corresponding interrupt handler in a fast interrupt vector table, and the interrupt handler interrupts the real-time task being executed and performs interrupt handling.
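The fast-interrupt path in claim 3 can be pictured as a direct table lookup. Below is a small, self-contained C sketch under that assumption; the table size, vector numbers, and handler names are invented for the example and are not taken from the patent.

```c
#include <stdio.h>

#define FAST_VECTORS 8  /* illustrative table size */

typedef void (*irq_handler_t)(int irq);

static void timer_handler(int irq)  { printf("fast handler: timer  (irq %d)\n", irq); }
static void sensor_handler(int irq) { printf("fast handler: sensor (irq %d)\n", irq); }

/* Hypothetical fast interrupt vector table: one handler per vector, looked
 * up directly so the executing real-time task can be preempted with
 * minimal latency. */
static irq_handler_t fast_vector_table[FAST_VECTORS] = {
    [0] = timer_handler,
    [3] = sensor_handler,
};

void fast_irq_entry(int irq)
{
    irq_handler_t h = (irq < FAST_VECTORS) ? fast_vector_table[irq] : NULL;
    if (h)
        h(irq);  /* run the handler immediately in interrupt context */
    else
        printf("spurious fast irq %d ignored\n", irq);
}

int main(void)
{
    fast_irq_entry(0);
    fast_irq_entry(3);
    fast_irq_entry(5);
    return 0;
}
```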
4. The system of any one of claims 1-3, wherein the general runtime domain comprises:
a general scheduling module adapted to acquire the highest-priority compute task from the compute task queue using a fair scheduling algorithm when an interrupt signal is received, so as to execute the highest-priority compute task immediately;
a threaded interrupt module adapted to interrupt the lower-priority compute task and perform interrupt handling; and
a general runtime module adapted to provide memory management services for the highest-priority compute task.
5. The system of claim 4, wherein the threaded interrupt module is further adapted to:
send the interrupt signal to a processor, so that the processor converts the interrupt signal into a corresponding interrupt request and looks up one or more interrupt handlers associated with the interrupt request in an interrupt request registry; and
sequentially wake one or more handler threads corresponding to the one or more interrupt handlers, so as to interrupt the lower-priority compute task and perform interrupt handling via the one or more handler threads.
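Claims 4 and 5 describe deferring interrupt work to handler threads, in the spirit of Linux threaded IRQs. The pthread sketch below is only an illustration of that pattern under simplified assumptions (a single pending flag and a single handler thread), not the patent's actual mechanism.

```c
/* Minimal threaded-interrupt sketch: the "hard" IRQ path only marks the
 * request pending and wakes a handler thread; the deferred work then runs
 * in thread context, where it can be scheduled below real-time tasks. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wake = PTHREAD_COND_INITIALIZER;
static bool irq_pending = false;
static bool shutting_down = false;

/* Deferred half: runs in an ordinary (lower-priority) thread. */
static void *irq_thread_fn(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!shutting_down) {
        while (!irq_pending && !shutting_down)
            pthread_cond_wait(&wake, &lock);
        if (irq_pending) {
            irq_pending = false;
            pthread_mutex_unlock(&lock);
            printf("handler thread: processing deferred interrupt work\n");
            pthread_mutex_lock(&lock);
        }
    }
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* "Top half": would run in interrupt context; here it only flags and wakes. */
static void raise_irq(void)
{
    pthread_mutex_lock(&lock);
    irq_pending = true;
    pthread_cond_signal(&wake);
    pthread_mutex_unlock(&lock);
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, irq_thread_fn, NULL);
    raise_irq();
    usleep(100 * 1000);          /* give the handler thread time to run */
    pthread_mutex_lock(&lock);
    shutting_down = true;
    pthread_cond_signal(&wake);
    pthread_mutex_unlock(&lock);
    pthread_join(tid, NULL);
    return 0;
}
```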
6. The system of claim 4, wherein
the fair scheduling algorithm comprises a Completely Fair Scheduler (CFS) algorithm.
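Claim 6 names a CFS-style fair scheduler. As a rough illustration of the underlying idea, the toy C sketch below always runs the task with the smallest virtual runtime and charges higher-weighted tasks less per time slice; the task names, weights, and slice length are made up, and the real CFS uses a red-black tree and nice-to-weight tables rather than a linear scan.

```c
#include <stdio.h>

/* Toy fairness sketch in the spirit of CFS: pick the runnable task with the
 * smallest virtual runtime, then charge it for the slice it consumed,
 * weighted so that higher-weight tasks accumulate vruntime more slowly. */
struct task {
    const char *name;
    double      vruntime;  /* virtual runtime in ms */
    double      weight;    /* larger weight = higher priority */
};

static struct task *pick_next(struct task *tasks, int n)
{
    struct task *best = &tasks[0];
    for (int i = 1; i < n; i++)
        if (tasks[i].vruntime < best->vruntime)
            best = &tasks[i];
    return best;
}

int main(void)
{
    struct task tasks[] = {
        { "analytics",  0.0, 2.0 },
        { "log-upload", 0.0, 1.0 },
    };
    const double slice_ms = 4.0;

    for (int tick = 0; tick < 6; tick++) {
        struct task *t = pick_next(tasks, 2);
        printf("tick %d: run %-10s (vruntime %.1f)\n", tick, t->name, t->vruntime);
        t->vruntime += slice_ms / t->weight;  /* heavier tasks are charged less */
    }
    return 0;
}
```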
7. The system of claim 2 or 3, wherein
the real-time scheduling algorithm comprises a minimum-slack-first (least laxity first) scheduling algorithm.
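Claim 7's minimum-slack policy picks the ready task whose deadline leaves the least spare time after its remaining work (slack = deadline - now - remaining execution time). A minimal C sketch of that selection rule follows; the task set, deadlines, and remaining-time values are illustrative only.

```c
#include <stdio.h>

/* Illustrative least-slack-first pick: the ready real-time task with the
 * smallest slack is considered the most urgent and is run next. */
struct rt_task {
    const char *name;
    double deadline;   /* absolute deadline, ms */
    double remaining;  /* remaining execution time, ms */
};

static const struct rt_task *most_urgent(const struct rt_task *q, int n, double now)
{
    const struct rt_task *best = &q[0];
    double best_slack = best->deadline - now - best->remaining;
    for (int i = 1; i < n; i++) {
        double slack = q[i].deadline - now - q[i].remaining;
        if (slack < best_slack) {
            best = &q[i];
            best_slack = slack;
        }
    }
    return best;
}

int main(void)
{
    const struct rt_task queue[] = {
        { "motor-control", 20.0, 5.0 },  /* slack = 20 - 0 - 5 = 15 */
        { "sensor-fusion", 12.0, 8.0 },  /* slack = 12 - 0 - 8 =  4 -> most urgent */
        { "heartbeat",     50.0, 1.0 },  /* slack = 50 - 0 - 1 = 49 */
    };
    const struct rt_task *t = most_urgent(queue, 3, 0.0);
    printf("most urgent task: %s\n", t->name);
    return 0;
}
```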
8. The system of any one of claims 1-3, wherein the interrupt pre-processing module is further adapted to:
acquire interrupt source information from the interrupt signal, and determine the interrupt type according to the interrupt source information.
9. A task running method executed in a task running system, the system comprising a preemptive kernel, an interrupt pre-processing module arranged above the preemptive kernel, and a real-time runtime domain and a general runtime domain arranged above the interrupt pre-processing module, the method comprising the steps of:
the interrupt pre-processing module responding to an interrupt signal generated by hardware, determining an interrupt type according to the interrupt signal, and sending the interrupt signal to the corresponding runtime domain for processing according to the interrupt type;
when the real-time runtime domain receives an interrupt signal, interrupting the real-time task being executed and performing interrupt handling, and acquiring the most urgent real-time task from a real-time task queue so as to execute that task immediately; and
when the general runtime domain receives an interrupt signal, acquiring the highest-priority compute task from the compute task queue, and interrupting the lower-priority compute task and performing interrupt handling so that the highest-priority compute task is executed immediately.
10. A computing device, comprising:
at least one processor; and
a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor, the program instructions comprising instructions for performing the method of claim 9.
11. A readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform the method of claim 9.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210927215.2A CN115168013A (en) | 2022-04-29 | 2022-04-29 | Task running system and method and computing device |
CN202210463083.2A CN114579285B (en) | 2022-04-29 | 2022-04-29 | Task running system and method and computing device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210463083.2A CN114579285B (en) | 2022-04-29 | 2022-04-29 | Task running system and method and computing device |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210927215.2A Division CN115168013A (en) | 2022-04-29 | 2022-04-29 | Task running system and method and computing device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114579285A true CN114579285A (en) | 2022-06-03 |
CN114579285B CN114579285B (en) | 2022-09-06 |
Family
ID=81778540
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210927215.2A Pending CN115168013A (en) | 2022-04-29 | 2022-04-29 | Task running system and method and computing device |
CN202210463083.2A Active CN114579285B (en) | 2022-04-29 | 2022-04-29 | Task running system and method and computing device |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210927215.2A Pending CN115168013A (en) | 2022-04-29 | 2022-04-29 | Task running system and method and computing device |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN115168013A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070260447A1 (en) * | 2005-07-12 | 2007-11-08 | Dino Canton | Virtual machine environment for interfacing a real time operating system environment with a native host operating system |
CN101819426A (en) * | 2009-02-27 | 2010-09-01 | 中国科学院沈阳计算技术研究所有限公司 | Method for synchronizing kernel data of real-time system and non-real-time system of industrial Ethernet numerical control system |
CN101894045A (en) * | 2010-06-18 | 2010-11-24 | 阳坚 | Real-time Linux operating system |
US20200310855A1 (en) * | 2019-03-28 | 2020-10-01 | Amazon Technologies, Inc. | Verified isolated run-time environments for enhanced security computations within compute instances |
CN111736905A (en) * | 2020-08-12 | 2020-10-02 | 江苏深瑞汇阳能源科技有限公司 | Edge computing terminal meeting real-time service requirement |
CN112131741A (en) * | 2020-09-22 | 2020-12-25 | 西安电子科技大学 | Real-time double-kernel single-machine semi-physical simulation architecture and simulation method |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023246044A1 (en) * | 2022-06-23 | 2023-12-28 | 哲库科技(北京)有限公司 | Scheduling method and apparatus, chip, electronic device, and storage medium |
WO2023246042A1 (en) * | 2022-06-23 | 2023-12-28 | 哲库科技(北京)有限公司 | Scheduling method and apparatus, chip, electronic device, and storage medium |
CN116483549A (en) * | 2023-06-25 | 2023-07-25 | 清华大学 | Task scheduling method and device for intelligent camera system, camera and storage medium |
CN116483549B (en) * | 2023-06-25 | 2023-09-19 | 清华大学 | Task scheduling method and device for intelligent camera system, camera and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114579285B (en) | 2022-09-06 |
CN115168013A (en) | 2022-10-11 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN114579285B (en) | Task running system and method and computing device | |
US20210073169A1 (en) | On-chip heterogeneous ai processor | |
EP2701074B1 (en) | Method, device, and system for performing scheduling in multi-processor core system | |
CN109697122B (en) | Task processing method, device and computer storage medium | |
US9563585B2 (en) | System and method for isolating I/O execution via compiler and OS support | |
WO2024021489A1 (en) | Task scheduling method and apparatus, and kubernetes scheduler | |
US7366814B2 (en) | Heterogeneous multiprocessor system and OS configuration method thereof | |
CN109840149B (en) | Task scheduling method, device, equipment and storage medium | |
WO2021212965A1 (en) | Resource scheduling method and related device | |
US20240160474A1 (en) | Multi-core processor task scheduling method, and device and storage medium | |
CN115167996A (en) | Scheduling method and device, chip, electronic equipment and storage medium | |
EP4386554A1 (en) | Instruction distribution method and device for multithreaded processor, and storage medium | |
CN112925616A (en) | Task allocation method and device, storage medium and electronic equipment | |
CN113132456A (en) | Edge cloud cooperative task scheduling method and system based on deadline perception | |
CN111158875B (en) | Multi-module-based multi-task processing method, device and system | |
CN111831408A (en) | Asynchronous task processing method and device, electronic equipment and medium | |
CN104598311A (en) | Method and device for real-time operation fair scheduling for Hadoop | |
CN113010301B (en) | User-defined measured priority queues | |
CN117311939A (en) | Client request processing method, computing device and storage medium | |
CN107634978B (en) | Resource scheduling method and device | |
CN114911538A (en) | Starting method of running system and computing equipment | |
CN112486638A (en) | Method, apparatus, device and storage medium for executing processing task | |
CN114816703A (en) | Task processing method, device, equipment and medium | |
CN113485810A (en) | Task scheduling execution method, device, equipment and storage medium | |
CN114911597A (en) | Switching method of operation system and computing equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |