CN116244073A - Resource-aware task allocation method for a mixed-criticality partitioned real-time operating system - Google Patents

Resource-aware task allocation method for a mixed-criticality partitioned real-time operating system

Info

Publication number
CN116244073A
Authority
CN
China
Prior art keywords
task
level
tasks
key
resource
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310095960.XA
Other languages
Chinese (zh)
Inventor
赵帅
苏若娴
徐菡志
陈刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202310095960.XA
Publication of CN116244073A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

The invention discloses a resource-aware task allocation method for a mixed-criticality partitioned real-time operating system, comprising the following steps: grouping the tasks according to their resource-access behavior; allocating the high-criticality tasks according to the grouping result while ensuring that the allocation is schedulable in the high-criticality system mode; allocating the low-criticality tasks according to the grouping result while ensuring that the allocation is schedulable in the low-criticality system mode; and, given the allocated high-criticality and low-criticality tasks, using a task migration method to guarantee schedulability across the system mode switch, thereby completing the task allocation. The invention improves the overall schedulability of the system and can be widely applied in the field of computer technology.

Description

Resource-aware task allocation method for a mixed-criticality partitioned real-time operating system
Technical Field
The invention relates to the field of computer technology, and in particular to a resource-aware task allocation method for a mixed-criticality partitioned real-time operating system.
Background
Mixed-criticality systems are embedded real-time systems that integrate multiple functions on the same computing platform in order to meet space, power, and cost constraints. Such systems are widely used in the automotive, aerospace, and other industries. In such an integrated system, applications with different certification requirements and different importance (criticality) coexist and share system resources. A mixed-criticality system must guarantee that high-criticality tasks execute correctly and meet their timing constraints, while low-criticality tasks may be abandoned in certain situations to protect the high-criticality tasks. Mixed-criticality systems typically have two system operating modes, low criticality and high criticality. The system starts in the low-criticality mode, in which low-criticality tasks run together with high-criticality tasks using the execution-time budgets of the low-criticality mode. When some task overruns its budget, the system switches to the high-criticality mode, suspends the low-criticality tasks, and lets the high-criticality tasks run with their larger execution-time budgets.
In a multi-core mixed-criticality system, low-criticality and high-criticality tasks share various resources on the platform, such as code segments, data, memory, and I/O devices. To ensure data integrity and consistency, tasks must access these shared resources in a mutually exclusive manner under locks. Spin locks, as defined by the AUTOSAR standard, are widely used to protect the correctness of computations on shared resources, and real-time systems use spin-lock-based resource sharing protocols to manage access to shared resources and to provide an upper bound on the time a task spends waiting for and executing on a shared resource. However, with spin locks, tasks on different processors that request the same resource busy-wait (spin) on their own processors until the requested resource is obtained. This causes long blocking times and seriously undermines the schedulability of the system.
In real-time systems, response time analysis is the mainstream schedulability analysis method: it first computes the worst-case response time of each task and then compares this value with the task's deadline. In current schedulability analyses of spin-lock-based resource sharing protocols, by analyzing the resource-access blocking within a given time window, three blocking terms (direct spin blocking, indirect spin blocking and arrival blocking) can be bounded precisely, giving an upper bound on the time tasks spend waiting for and executing shared resources; incorporating these terms into the response time equation yields tighter analysis results.
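For reference, a standard fixed-priority response-time recurrence with blocking terms has the following shape. This is the textbook form, not the patent's exact analysis; the concrete bounds on the arrival and spin blocking terms depend on the particular spin-lock protocol.

```latex
% Generic fixed-priority response-time recurrence with blocking terms
% (textbook form; the exact bounds on the blocking terms are protocol-specific).
R_i^{(n+1)} = C_i + B_i^{\text{arrival}} + S_i^{\text{spin}}
            + \sum_{\tau_j \in \mathrm{hp}(i)} \left\lceil \frac{R_i^{(n)}}{T_j} \right\rceil C_j,
\qquad R_i^{(0)} = C_i, \quad \text{iterate until } R_i^{(n+1)} = R_i^{(n)} \le D_i .
```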
In recent years, resource-aware task allocation methods for multiprocessor systems have attracted attention as a way to reduce resource contention between processors. The idea is to reduce contention by localizing shared resources, i.e. allocating tasks that access the same shared resource to the same core as far as possible. In mixed-criticality systems, however, little work has taken resource awareness as the primary design goal of the task allocation algorithm. At the same time, the frequent shared-resource accesses that are common in real systems lead to long task blocking times and poor system schedulability.
Current task allocation algorithms for mixed-criticality systems have the following problems:
Existing work commonly uses heuristic algorithms to allocate tasks, with allocation rules that mainly consider task utilization and deadlines, and usually only the utilization at the highest criticality level (i.e. the maximum utilization). This leads to a pessimistic estimate of system utilization and therefore reduces the schedulability of the system.
Some work considers the utilization difference between criticality levels and can improve overall schedulability. However, none of these schemes takes the impact of shared-resource accesses into account, so that in actual operation the system may become unschedulable due to excessive blocking times.
Disclosure of Invention
In view of this, an embodiment of the invention provides a resource-aware task allocation method for a mixed-criticality partitioned real-time operating system, so as to improve the schedulability of the system.
One aspect of the invention provides a resource-aware task allocation method for a mixed-criticality partitioned real-time operating system, comprising:
grouping the tasks according to their resource-access behavior;
allocating the high-criticality tasks according to the task grouping result while ensuring that the allocation result is schedulable in the high-criticality system mode;
allocating the low-criticality tasks according to the task grouping result while ensuring that the allocation result is schedulable in the low-criticality system mode;
and, according to the allocated high-criticality and low-criticality tasks, using a task migration method to guarantee schedulability across the system mode switch, thereby completing the task allocation.
Optionally, grouping the tasks according to their resource-access behavior comprises:
sorting the shared resources in descending order of the total number of accesses to obtain a resource ordering;
grouping the tasks according to the resource ordering;
and further dividing each resource task group and the independent task group according to the criticality level of the tasks.
Optionally, grouping the tasks according to the resource ordering comprises:
starting from the first resource, if a task requests the resource, adding it to the task group corresponding to that resource; if the task has already joined a group, it joins no other task group; all tasks that do not request any resource form a separate, independent group;
further dividing each resource task group and the independent task group according to the criticality level of the tasks comprises:
further dividing each resource task group and the independent task group to obtain, for each resource, a high-criticality task group and a low-criticality task group, as well as an independent high-criticality task group and an independent low-criticality task group.
Optionally, allocating the high-criticality tasks according to the task grouping result while ensuring that the allocation result is schedulable in the high-criticality system mode comprises:
obtaining, from the task grouping result, the high-criticality task group corresponding to each resource and the independent high-criticality group, and sorting the tasks in the task group corresponding to each resource in ascending order of their utilization at the high criticality level;
traversing the groups starting from the task group of the resource with the largest total number of accesses;
sorting the processors in ascending order of their utilization at the high criticality level;
allocating the tasks in the current task group in turn, assigning each task to the first schedulable processor in processor order, until the task allocation of the high-criticality task groups corresponding to all resources is completed;
sorting the tasks in the independent task group in descending order of their utilization at the high criticality level and, before each independent task is allocated, sorting the processors in ascending order of their utilization at the high criticality level and assigning the task to the first schedulable processor in processor order.
Optionally, allocating the low-criticality tasks according to the task grouping result while ensuring that the allocation result is schedulable in the low-criticality system mode comprises:
obtaining, from the task grouping result, the low-criticality task group corresponding to each resource and the independent low-criticality task group, and sorting the tasks in the task group corresponding to each resource in ascending order of their utilization at the low criticality level;
traversing the groups starting from the task group of the resource with the largest total number of accesses;
sorting the processors, namely placing the processors on which high-criticality tasks access the current resource first, sorted in ascending order of their utilization at the low criticality level, and sorting the remaining processors in ascending order of their utilization at the low criticality level;
allocating the tasks in the current task group in turn, assigning each task to the first schedulable processor in processor order, until the task allocation of the low-criticality task groups corresponding to all resources is completed;
sorting the tasks in the independent task group in descending order of their utilization at the low criticality level and, before each independent task is allocated, sorting the processors in ascending order of their utilization at the low criticality level, traversing the processors, and assigning the task to the first schedulable processor.
Optionally, using a task migration method, according to the allocated high-criticality and low-criticality tasks, to guarantee schedulability across the system mode switch and complete the task allocation comprises:
calculating the response time of each high-criticality task during the mode switch and checking whether that response time exceeds its deadline;
when the response time of a task exceeds its deadline, checking the causes of the overrun in turn until the task becomes schedulable;
and completing the scheduling of all high-criticality tasks.
Optionally, when the response time of a task exceeds its deadline, checking the causes of the overrun in turn until the task becomes schedulable comprises:
when the arrival blocking time of a high-criticality task at the mode switch is larger than its arrival blocking time at the high criticality level, obtaining the resource that causes the maximum arrival blocking, and performing the following steps:
if no high-criticality task accesses that resource on the processor to which the current task belongs and only low-criticality tasks access it: sorting the processors, namely placing first the processors that can cause spin blocking to these low-criticality tasks and sorting the remaining processors in descending order of slack time; migrating, in processor order, all low-criticality tasks that access the resource on that processor to the first processor that satisfies the condition;
if a high-criticality task accesses that resource on the processor to which the current task belongs: migrating all low-criticality tasks that access the resource on the processor causing the arrival blocking of the task to the processor where the current task is located;
if, after the above steps, the current task is still not schedulable, continuing with the following steps:
if the spin-blocking time due to resource accesses is greater than 0, traversing by resource index and checking, processor by processor, whether the low-criticality tasks accessing that resource on each processor cause spin blocking to the task, and migrating the low-criticality tasks accessing the resource on that processor to the core where the current task is located; the migration is performed only if it introduces no newly overrunning task;
stopping when the current task becomes schedulable or all resources have been traversed.
Another aspect of the embodiments of the invention further provides a resource-aware task allocation device for a mixed-criticality partitioned real-time operating system, comprising:
a first module for grouping the tasks according to their resource-access behavior;
a second module for allocating the high-criticality tasks according to the task grouping result while ensuring that the allocation result is schedulable in the high-criticality system mode;
a third module for allocating the low-criticality tasks according to the task grouping result while ensuring that the allocation result is schedulable in the low-criticality system mode;
and a fourth module for using a task migration method, according to the allocated high-criticality and low-criticality tasks, to guarantee schedulability across the system mode switch and complete the task allocation.
Another aspect of the embodiment of the invention also provides an electronic device, which includes a processor and a memory;
the memory is used for storing programs;
the processor executes the program to implement the method as described above.
Another aspect of the embodiments of the present invention also provides a computer-readable storage medium storing a program that is executed by a processor to implement a method as described above.
Embodiments of the present invention also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the foregoing method.
The embodiment of the invention groups the tasks according to their resource-access behavior; allocates the high-criticality tasks according to the task grouping result while ensuring that the allocation result is schedulable in the high-criticality system mode; allocates the low-criticality tasks according to the task grouping result while ensuring that the allocation result is schedulable in the low-criticality system mode; and, according to the allocated high-criticality and low-criticality tasks, uses a task migration method to guarantee schedulability across the system mode switch, thereby completing the task allocation. The invention can improve the overall schedulability of the system.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart illustrating overall steps provided in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The following first explains the related terms that may appear in the present invention:
Fixed-priority scheduling (FPS): a scheduling method used in real-time systems. Task priorities are assigned statically before the system runs and remain fixed throughout the life cycle of the tasks. At run time, tasks with higher priority are scheduled to run first.
Semi-partitioned scheduling (semi-partitioned scheme): a real-time scheduling scheme for multiprocessors. Tasks are statically allocated to processors before the system runs and are scheduled independently on each processor. Unlike fully partitioned scheduling, however, semi-partitioned scheduling allows some tasks to migrate to pre-specified processors under certain conditions.
Sporadic task (sporadic task model): a computing task with explicit timing constraints whose jobs do not arrive with a fixed period but are constrained by a minimum inter-arrival time; in the worst case it can be treated as a periodic task.
Shared resources (shared resources): hardware or software resources that multiple tasks may request at the same time but that must be accessed exclusively. To ensure data integrity, each resource is protected by a designated lock. A task may access a resource only after acquiring the corresponding lock; if the resource is currently held, tasks requesting it must wait until it becomes available again.
Schedulability (schedulability): the property that all tasks in the system complete within their deadlines.
Schedulability analysis (schedulability test): a set of mathematical tools that compute the worst-case response time (i.e. from release to completion) of a task on a target system. The worst-case response time typically comprises several major parts, such as the worst-case execution time (WCET) of the task itself, the interference from local higher-priority tasks, and the blocking time caused by accesses to shared resources (explained below).
Spin blocking (spin delay): in multiprocessor systems that use spin-lock-based resource sharing protocols, a resource access from a remote processor delays a task requesting the same resource on the local processor, which spin-waits. A task requesting a resource that is directly delayed by remote tasks suffers direct spin blocking; a lower-priority task that is transitively delayed because a higher-priority local task is delayed by remote tasks suffers indirect spin blocking.
Arrival blocking (arrival blocking): in systems based on non-preemptive sections or priority-boosting techniques, a high-priority task can be blocked on arrival by a local low-priority task that it cannot preempt.
To address the problems in the prior art, the invention provides a resource-aware task allocation method that greatly reduces the blocking time of shared-resource accesses and improves the schedulability of the system.
Specifically, one aspect of the embodiments of the invention provides a resource-aware task allocation method for a mixed-criticality partitioned real-time operating system, comprising:
grouping the tasks according to their resource-access behavior;
allocating the high-criticality tasks according to the task grouping result while ensuring that the allocation result is schedulable in the high-criticality system mode;
allocating the low-criticality tasks according to the task grouping result while ensuring that the allocation result is schedulable in the low-criticality system mode;
and, according to the allocated high-criticality and low-criticality tasks, using a task migration method to guarantee schedulability across the system mode switch, thereby completing the task allocation.
Optionally, grouping the tasks according to their resource-access behavior comprises:
sorting the shared resources in descending order of the total number of accesses to obtain a resource ordering;
grouping the tasks according to the resource ordering;
and further dividing each resource task group and the independent task group according to the criticality level of the tasks.
Optionally, grouping the tasks according to the resource ordering comprises:
starting from the first resource, if a task requests the resource, adding it to the task group corresponding to that resource; if the task has already joined a group, it joins no other task group; all tasks that do not request any resource form a separate, independent group;
further dividing each resource task group and the independent task group according to the criticality level of the tasks comprises:
further dividing each resource task group and the independent task group to obtain, for each resource, a high-criticality task group and a low-criticality task group, as well as an independent high-criticality task group and an independent low-criticality task group.
Optionally, allocating the high-criticality tasks according to the task grouping result while ensuring that the allocation result is schedulable in the high-criticality system mode comprises:
obtaining, from the task grouping result, the high-criticality task group corresponding to each resource and the independent high-criticality group, and sorting the tasks in the task group corresponding to each resource in ascending order of their utilization at the high criticality level;
traversing the groups starting from the task group of the resource with the largest total number of accesses;
sorting the processors in ascending order of their utilization at the high criticality level;
allocating the tasks in the current task group in turn, assigning each task to the first schedulable processor in processor order, until the task allocation of the high-criticality task groups corresponding to all resources is completed;
sorting the tasks in the independent task group in descending order of their utilization at the high criticality level and, before each independent task is allocated, sorting the processors in ascending order of their utilization at the high criticality level and assigning the task to the first schedulable processor in processor order.
Optionally, allocating the low-criticality tasks according to the task grouping result while ensuring that the allocation result is schedulable in the low-criticality system mode comprises:
obtaining, from the task grouping result, the low-criticality task group corresponding to each resource and the independent low-criticality task group, and sorting the tasks in the task group corresponding to each resource in ascending order of their utilization at the low criticality level;
traversing the groups starting from the task group of the resource with the largest total number of accesses;
sorting the processors, namely placing the processors on which high-criticality tasks access the current resource first, sorted in ascending order of their utilization at the low criticality level, and sorting the remaining processors in ascending order of their utilization at the low criticality level;
allocating the tasks in the current task group in turn, assigning each task to the first schedulable processor in processor order, until the task allocation of the low-criticality task groups corresponding to all resources is completed;
sorting the tasks in the independent task group in descending order of their utilization at the low criticality level and, before each independent task is allocated, sorting the processors in ascending order of their utilization at the low criticality level, traversing the processors, and assigning the task to the first schedulable processor.
Optionally, using a task migration method, according to the allocated high-criticality and low-criticality tasks, to guarantee schedulability across the system mode switch and complete the task allocation comprises:
calculating the response time of each high-criticality task during the mode switch and checking whether that response time exceeds its deadline;
when the response time of a task exceeds its deadline, checking the causes of the overrun in turn until the task becomes schedulable;
and completing the scheduling of all high-criticality tasks.
Optionally, when the response time of a task exceeds its deadline, checking the causes of the overrun in turn until the task becomes schedulable comprises:
when the arrival blocking time of a high-criticality task at the mode switch is larger than its arrival blocking time at the high criticality level, obtaining the resource that causes the maximum arrival blocking, and performing the following steps:
if no high-criticality task accesses that resource on the processor to which the current task belongs and only low-criticality tasks access it: sorting the processors, namely placing first the processors that can cause spin blocking to these low-criticality tasks and sorting the remaining processors in descending order of slack time; migrating, in processor order, all low-criticality tasks that access the resource on that processor to the first processor that satisfies the condition;
if a high-criticality task accesses that resource on the processor to which the current task belongs: migrating all low-criticality tasks that access the resource on the processor causing the arrival blocking of the task to the processor where the current task is located;
if, after the above steps, the current task is still not schedulable, continuing with the following steps:
if the spin-blocking time due to resource accesses is greater than 0, traversing by resource index and checking, processor by processor, whether the low-criticality tasks accessing that resource on each processor cause spin blocking to the task, and migrating the low-criticality tasks accessing the resource on that processor to the core where the current task is located; the migration is performed only if it introduces no newly overrunning task;
stopping when the current task becomes schedulable or all resources have been traversed.
Another aspect of the embodiments of the invention further provides a resource-aware task allocation device for a mixed-criticality partitioned real-time operating system, comprising:
a first module for grouping the tasks according to their resource-access behavior;
a second module for allocating the high-criticality tasks according to the task grouping result while ensuring that the allocation result is schedulable in the high-criticality system mode;
a third module for allocating the low-criticality tasks according to the task grouping result while ensuring that the allocation result is schedulable in the low-criticality system mode;
and a fourth module for using a task migration method, according to the allocated high-criticality and low-criticality tasks, to guarantee schedulability across the system mode switch and complete the task allocation.
Another aspect of the embodiment of the invention also provides an electronic device, which includes a processor and a memory;
the memory is used for storing programs;
the processor executes the program to implement the method as described above.
Another aspect of the embodiments of the present invention also provides a computer-readable storage medium storing a program that is executed by a processor to implement a method as described above.
Embodiments of the present invention also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the foregoing method.
The following describes the specific implementation of the present invention in detail with reference to the drawings of the specification:
the present invention will be based on a hybrid critical system (Mixed-Criticality System) that contains two system modes (System Execution Mode, L.epsilon. { HI, LO }) at high and low critical levels. The system comprises a set of identical processors Φ (symmetric multiprocessor) and a set of sporadic periodic tasks Γ (sporadic tasks). The system adopts a semi-partitioned fixed priority scheduling method (semi-partitioned fixed-priority scheme).
In the system, each task (task, τ i Representation) is defined by period, deadline, worst execution time estimate, priority, key level, allocation scheme (assignment), and Migration scheme (Migration), i.e.)
Figure BDA0004071691900000091
Figure BDA0004071691900000092
Unlike normal periodic tasks, worst-case execution estimation of hybrid critical tasks presents a vector characteristic, i.e., with different worst-case time estimation in different system execution modes, higher-level system modes will estimate the execution time of the task in a more conservative manner (C i (HI)>C i (LO)). The worst response time of a task in mode L is denoted as R i (L) the utilization of the task in mode L is denoted +.>
Figure BDA0004071691900000093
Processor->
Figure BDA0004071691900000094
The utilization in mode L is denoted +.>
Figure BDA0004071691900000095
At the same time, the system also comprises a group of shared resources (the resource set is denoted by R) protected by the spin lock. For one resource in the system (in r k Representation), c k (L) represents performing r at critical level L k Worst execution time estimate of (c), again assuming c k (HI)>c k (LO)。
Figure BDA0004071691900000096
Representing task τ i Accessing resource r during one run k Is a number of times (1); function F (-) represents the set of resources accessed by a given task and function G (-) represents the set of tasks accessing the given resource. To->
Figure BDA0004071691900000097
Representing resource r k Corresponding task grouping,/- >
Figure BDA0004071691900000098
Representing a task grouping that does not request resources.
The mixed-criticality model assumes that the system starts at the low criticality level and that all tasks are scheduled with their low-criticality budgets. During actual operation, if a task's running time exceeds its budget in this mode, the system switches to the high criticality level and suspends all low-criticality tasks.
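To make the model concrete, the following is a minimal sketch of the task and resource parameters as data structures. It is illustrative only: the Task and Resource classes, their field names, and the helper total_accesses are assumptions, not notation from the patent.

```python
from dataclasses import dataclass, field

HI, LO = "HI", "LO"  # system execution modes

@dataclass
class Task:
    """Sporadic mixed-criticality task (illustrative field names)."""
    tid: int
    period: float            # T_i, also the minimum inter-arrival time
    deadline: float          # D_i
    wcet: dict               # C_i(L): {"LO": c_lo, "HI": c_hi}; c_hi >= c_lo for HI tasks
    priority: int            # fixed priority
    crit: str                # criticality level, "HI" or "LO"
    accesses: dict = field(default_factory=dict)  # N_i^k: resource id -> accesses per job

    def utilization(self, mode: str) -> float:
        """u_i(L) = C_i(L) / T_i."""
        return self.wcet[mode] / self.period

@dataclass
class Resource:
    """Spin-lock-protected shared resource r_k."""
    rid: int
    cs_len: dict             # c_k(L): worst-case critical-section length per mode

def total_accesses(resource: Resource, tasks: list) -> int:
    """N_k: total number of accesses to the resource over all tasks."""
    return sum(t.accesses.get(resource.rid, 0) for t in tasks)
```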
As shown in FIG. 1, the method comprises four core steps: first, the tasks are grouped based on their resource-access behavior; then the high-criticality tasks are allocated according to the grouping result while ensuring that the allocation is schedulable in the high-criticality system mode; then the low-criticality tasks are allocated while ensuring that the allocation is schedulable in the low-criticality system mode; finally, a task migration method is used to guarantee schedulability during the system mode switch.
The specific steps of the invention are as follows:
step 1: task grouping:
step 1.1, the shared resources are ordered from big to small according to the total number of accessed times, namely
Figure BDA0004071691900000099
Step 2.1: the tasks are grouped according to the resource ordering of step 1.1. Starting from the first resource, if a task needs to request the resource r k Then add the resource r k Corresponding task grouping
Figure BDA0004071691900000101
If a task has joined the group, no further task groups are joined. Tasks not requesting task resources are all grouped as an independent group +. >
Figure BDA0004071691900000102
Step 3.1: dividing each resource task group and independent task group according to the key level to obtain each resource r k Corresponding high-critical task group
Figure BDA0004071691900000103
And low critical task group->
Figure BDA0004071691900000104
And an independent high-level task group +.>
Figure BDA0004071691900000105
And an independent low-critical task group +.>
Figure BDA0004071691900000106
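A minimal sketch of the Step 1 grouping follows, building on the Task/Resource sketch above; the function name and return structure are assumptions made for illustration.

```python
from collections import defaultdict

def group_tasks(tasks, resources):
    """Step 1 sketch: group tasks by the most contended resource they access,
    then split every group by criticality level (uses Task, Resource, HI/LO and
    total_accesses from the model sketch above)."""
    # Step 1.1: sort resources by total access count N_k, descending.
    ordered = sorted(resources, key=lambda r: total_accesses(r, tasks), reverse=True)

    groups = defaultdict(list)   # resource id -> task group Gamma_{r_k}
    assigned = set()

    # Step 1.2: a task joins the group of the first (most accessed) resource it requests.
    for res in ordered:
        for t in tasks:
            if t.tid not in assigned and t.accesses.get(res.rid, 0) > 0:
                groups[res.rid].append(t)
                assigned.add(t.tid)

    # Tasks requesting no resources form the independent group.
    independent = [t for t in tasks if t.tid not in assigned]

    # Step 1.3: split each group into high- and low-criticality sub-groups.
    def split(group):
        return ([t for t in group if t.crit == HI], [t for t in group if t.crit == LO])

    by_level = {rid: split(g) for rid, g in groups.items()}
    return ordered, by_level, split(independent)
```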
Step 2: allocation of high-key-level tasks:
step 2.1: high-key task group corresponding to each resource based on task grouping (algorithm 1)
Figure BDA0004071691900000107
And an independent high-level group +.>
Figure BDA0004071691900000108
The tasks in the task group corresponding to each resource are according to the utilization ratio u under the high key level i (HI) ascending sort.
Step 2.2: from the total number of accessed times N k The task group corresponding to the largest resource starts traversing.
Step 2.3: the processor is subjected to utilization ratio under high key
Figure BDA0004071691900000109
Ordering from small to large (processors with less load can be assigned to more tasks of the same task group);
step 2.4: tasks in the current task group are sequentially distributed, the tasks are distributed to a first schedulable processor according to the sequence of the processors until task distribution of the high-key-level task group corresponding to all resources is completed, and schedulability analysis in the high-key-level mode is used for schedulability test.
And repeatedly executing the steps 2.3 and 2.4 until the task allocation of the high-key-level task group corresponding to all the resources is completed.
Step 2.5: tasks within an independent task group are according to the utilization u at a high critical level i (HI) sort in descending order, before each allocation of independent tasks, sort processors according to the utilization ratio under high key from small to large, allocate the processors to the first schedulable processor according to the processor order, and the schedulability test uses the schedulability analysis under the high key level mode.
Step 3: allocation of low-critical-level tasks:
step 3.1: task group (algorithm 1) based on low key level task group corresponding to each resource obtained by task group
Figure BDA00040716919000001010
And an independent low-level task group +.>
Figure BDA00040716919000001011
The tasks in the task group corresponding to each resource are according to the utilization ratio u under the low key level i (LO) ascending sort.
Step 3.2: from the total number of accessed times N k The task group corresponding to the largest resource starts traversing.
Step 3.3: the processors are ordered. Processors with high critical tasks accessing the resource are arranged in front and at low critical utilization
Figure BDA00040716919000001012
Ordering from small to large, the remaining processors are ordered according to the utilization under low key +.>
Figure BDA00040716919000001013
Ordering from small to large.
Step 3.4: tasks in the current task group are sequentially distributed, tasks are distributed to a first schedulable processor according to the sequence of the processors, and schedulability analysis in a low-key-level mode is used for schedulability test.
And (3) repeating the step (3.3) and the step (3.4) until the task allocation of the low-key-level task group corresponding to all the resources is completed.
Step 3.5: tasks within an independent task group are according to the utilization u at low key level i (LO) descending order, before each allocation of independent tasks, the processors are ordered from small to large according to the utilization ratio under low key, the processors are traversed, the first schedulable processor is allocated, and the schedulability test uses the schedulability analysis under the low key level mode.
Step 4: task migration at the mode switch:
At run time, a mixed-criticality system is upgraded (i.e. switches modes) when some task runs beyond its worst-case execution time estimate. After the upgrade, the low-criticality tasks are suspended while the high-criticality tasks run with their more pessimistic worst-case execution times. Because of the shared resources, a low-criticality task that holds a shared resource at the moment of the mode switch is suspended only after it finishes the current resource access, whereas a low-criticality task that holds no shared resource is suspended immediately.
When bounding the worst-case response time $R_i$ during the mode switch, a more pessimistic assumption is used to obtain a safe bound: at the mode switch, each low-criticality task is assumed to run at the lowest priority and to be suspended only after it has accessed each resource it requires once more, i.e. for a low-criticality task $\tau_j$, every resource $r_k \in F(\tau_j)$ may still be accessed once. (The original formula image for this bound is not reproduced here.)
Following the high-criticality allocation step, the high-criticality tasks have been allocated so that they are schedulable in the high-criticality mode, i.e. $R_i(HI) < D_i$. During the mode switch, however, resource accesses by low-criticality tasks can block the high-criticality tasks, so that some of them become unschedulable. The invention resolves this impact on the high-criticality tasks by migrating low-criticality tasks, thereby guaranteeing the schedulability of the high-criticality tasks during the mode switch.
First, it is determined whether a high-criticality task misses its deadline because of the interference from low-criticality tasks at the mode switch:
$R_i = R_i(HI) + I_i + B_i$
where $I_i$ is the spin-blocking time that low-criticality tasks on remote processors cause to $\tau_i$, and $B_i$ is the additional arrival-blocking time experienced by $\tau_i$. Both blocking terms can be computed with existing blocking-time analysis methods. In the following, $\xi_{m,k}$ denotes the blocking caused to $\tau_i$ by the low-criticality tasks that access resource $r_k$ on processor $P_m$, $F_A(\tau_i)$ denotes the set of resources that may cause arrival blocking to $\tau_i$, and $B_i(HI)$ denotes the arrival-blocking time experienced by $\tau_i$ in the high-criticality mode. (The original formula images bounding $I_i$ and $B_i$ in terms of these quantities are not reproduced here.)
The migration algorithm proceeds as follows:
Step 4.1: using $R_i = R_i(HI) + I_i + B_i$, compute the response time $R_i$ of each high-criticality task at the mode switch and check whether it exceeds the deadline $D_i$. Based on the updated $R_i$ values, define the slack time of each core. (The original formula image defining the slack is not reproduced here.)
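The patent's slack formula is only available as an image. A definition consistent with how the slack is used below (cores with more slack are preferred targets for migrated tasks) would be the minimum deadline margin over the tasks on the core; this is an assumption, not the patent's formula:

```latex
% Assumed per-core slack definition (the patent's own formula is only an image):
% the smallest remaining deadline margin over the tasks assigned to core P_m.
S_m = \min_{\tau_i \in P_m} \left( D_i - R_i \right)
```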
Step 4.2: if there is a task tau i Overtime, checking the overtime reason in turn;
step 4.2.1: if B is i >0, get cause maximum arrival block B i Resource r of (2) k The treatment is carried out in two cases:
1) If at task tau i On the processor to which it belongs, no high critical task accesses the resource, only low critical tasks access the resource: processors are ordered, processors that can cause spin blocking for these low critical tasks are ordered before, and the remaining processors are ordered from big to small in relaxation time. In processor order, the processors are loadedAll accessing the resource r k Is migrated to the first processor that can meet the condition. The condition that can migrate is that there is no newly added timeout task.
2) If at task tau i On the processor to which it belongs, there is a highly critical task to access the resource: migrating all low critical level task attempts on processors that caused arrival blocking for the task to access the resource to task τ i A processor located therein. The condition that can migrate is that there is no newly added timeout task.
Repeating step 4.2.1 until B i =0 or R i <D i . If the migration is unsuccessful, step 4.2.2 is entered.
Step 4.2.2: if I i >0, traversing according to the resource number k, and checking each processor in turn according to the processor number m
Figure BDA0004071691900000123
Whether or not there is an access resource r k Low critical level task-to-task tau i Causing spin blocking, i.e.)>
Figure BDA0004071691900000124
If->
Figure BDA0004071691900000125
The processor is +.>
Figure BDA0004071691900000126
Up access resource r k Low critical task migration to task τ i The core is located. And if the newly added task does not timeout, executing migration. When task tau i Schedulable (R) i <D i ) Or stop when all resources have been traversed.
Step 4.3: if the execution of step 4.2 is finished, the task is schedulable, then step 4.2 is continued until all high critical level tasks are schedulable (R i <D i )。
In summary, compared with the prior art, the invention has the following features:
1. The scheme takes the influence of shared resources into account and localizes heavily contended resources as far as possible, thereby reducing resource-access conflicts between cores, reducing task blocking times and improving the schedulability of the system.
2. The scheme simultaneously considers the worst-case execution times of tasks at the different criticality levels of a mixed-criticality system (i.e. a task has a different utilization at each criticality level, and the higher the criticality level, the higher the utilization); it provides task allocation for each mode together with a migration method for the mode switch, guaranteeing the schedulability of the system in all operating scenarios.
By allocating tasks so that heavily contended shared resources are localized, the invention reduces resource blocking time and improves the schedulability of the system. The invention takes the criticality-level characteristics of a mixed-criticality system into account and guarantees the schedulability of each criticality mode during allocation; it uses task migration to guarantee schedulability during the mode switch, makes reasonable use of the system capacity, and improves the overall schedulability of the system.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the invention is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the described functions and/or features may be integrated in a single physical device and/or software module or one or more functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the invention, which is to be defined in the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present invention has been described in detail, the present invention is not limited to the embodiments described above, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present invention, and these equivalent modifications or substitutions are included in the scope of the present invention as defined in the appended claims.

Claims (10)

1. A resource-aware task allocation method for a mixed-criticality partitioned real-time operating system, characterized by comprising the following steps:
grouping the tasks according to their resource-access behavior;
allocating the high-criticality tasks according to the task grouping result while ensuring that the allocation result is schedulable in the high-criticality system mode;
allocating the low-criticality tasks according to the task grouping result while ensuring that the allocation result is schedulable in the low-criticality system mode;
and, according to the allocated high-criticality and low-criticality tasks, using a task migration method to guarantee schedulability across the system mode switch, thereby completing the task allocation.
2. The resource-aware task allocation method for a mixed-criticality partitioned real-time operating system according to claim 1, wherein grouping the tasks according to their resource-access behavior comprises:
sorting the shared resources in descending order of the total number of accesses to obtain a resource ordering;
grouping the tasks according to the resource ordering;
and further dividing each resource task group and the independent task group according to the criticality level of the tasks.
3. The resource-aware task allocation method for a mixed-criticality partitioned real-time operating system according to claim 2, wherein
grouping the tasks according to the resource ordering comprises:
starting from the first resource, if a task requests the resource, adding it to the task group corresponding to that resource; if the task has already joined a group, it joins no other task group; all tasks that do not request any resource form a separate, independent group;
further dividing each resource task group and the independent task group according to the criticality level of the tasks comprises:
further dividing each resource task group and the independent task group to obtain, for each resource, a high-criticality task group and a low-criticality task group, as well as an independent high-criticality task group and an independent low-criticality task group.
4. The resource-aware task allocation method for a mixed-criticality partitioned real-time operating system according to claim 3, wherein allocating the high-criticality tasks according to the task grouping result while ensuring that the allocation result is schedulable in the high-criticality system mode comprises:
obtaining, from the task grouping result, the high-criticality task group corresponding to each resource and the independent high-criticality group, and sorting the tasks in the task group corresponding to each resource in ascending order of their utilization at the high criticality level;
traversing the groups starting from the task group of the resource with the largest total number of accesses;
sorting the processors in ascending order of their utilization at the high criticality level;
allocating the tasks in the current task group in turn, assigning each task to the first schedulable processor in processor order, until the task allocation of the high-criticality task groups corresponding to all resources is completed;
sorting the tasks in the independent task group in descending order of their utilization at the high criticality level and, before each independent task is allocated, sorting the processors in ascending order of their utilization at the high criticality level and assigning the task to the first schedulable processor in processor order.
5. The resource-aware task allocation method for a hybrid key partition real-time operating system according to claim 3, wherein allocating the low-criticality tasks according to the task grouping result, under the condition that the allocation result is schedulable in the low-criticality system mode, comprises:
obtaining, from the task grouping result, the low-criticality task group corresponding to each resource and the independent low-criticality task group, and sorting the tasks in the task group corresponding to each resource in ascending order of their utilization at the low criticality level;
traversing the task groups starting from the group corresponding to the resource with the largest total number of accesses;
sorting the processors such that the processors on which high-criticality tasks access the current resource are placed first, in ascending order of their utilization at the low criticality level, followed by the remaining processors, also in ascending order of their utilization at the low criticality level;
allocating the tasks in the current task group in sequence, assigning each task to the first processor, in the processor order, on which it is schedulable, until the allocation of the low-criticality task groups corresponding to all resources is completed;
and sorting the tasks in the independent low-criticality task group in descending order of their utilization at the low criticality level; before each independent task is allocated, sorting the processors in descending order of their utilization at the low criticality level, traversing the processors, and assigning the task to the first processor on which it is schedulable.
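A sketch of the processor ordering that distinguishes claim 5 from claim 4: for each resource, processors already hosting a high-criticality task that accesses the same resource are tried first. `schedulable_lo` and the `util_lo` bookkeeping are assumptions, not recited tests.

```python
def order_processors_for(resource, processors):
    """Claim-5 processor order for one resource's low-criticality group:
    processors already hosting a HI task that accesses the resource come first,
    then the rest; both parts in ascending order of LO-mode utilization."""
    def hosts_hi_user(p):
        return any(t.criticality == "HI" and resource in t.resources for t in p.tasks)

    preferred = sorted((p for p in processors if hosts_hi_user(p)), key=lambda p: p.util_lo)
    others = sorted((p for p in processors if not hosts_hi_user(p)), key=lambda p: p.util_lo)
    return preferred + others

def allocate_low(per_resource_lo, independent_lo, processors, schedulable_lo):
    # Resource groups: tasks ascending by LO-mode utilization, traversed from the
    # group of the most-accessed resource (the list is assumed pre-ordered).
    for resource, lo_group in per_resource_lo:
        procs = order_processors_for(resource, processors)
        for task in sorted(lo_group, key=lambda t: t.util_lo):
            for p in procs:
                if schedulable_lo(task, p):
                    p.tasks.append(task)
                    p.util_lo += task.util_lo
                    break

    # Independent LO tasks: descending by LO-mode utilization; processors re-ordered
    # (descending, as recited) before each allocation.
    for task in sorted(independent_lo, key=lambda t: t.util_lo, reverse=True):
        for p in sorted(processors, key=lambda q: q.util_lo, reverse=True):
            if schedulable_lo(task, p):
                p.tasks.append(task)
                p.util_lo += task.util_lo
                break
```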
6. The resource-aware task allocation method for a hybrid key partition real-time operating system according to claim 1, wherein scheduling the system mode switch by using a task migration method according to the allocated high-criticality tasks and low-criticality tasks, thereby completing the task allocation, comprises:
calculating the response time of each high-criticality task during the mode switch, and checking whether that response time exceeds the task's deadline;
when the response time of a task exceeds its deadline, checking the causes of the overrun in sequence until the task becomes schedulable;
and completing the scheduling of all high-criticality tasks.
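A compact sketch of the outer loop of claim 6. Task objects are assumed to expose a `deadline`; the response-time analysis across the mode switch and the claim-7 remediation are injected as placeholders because their details are not recited here.

```python
def schedule_mode_switch(hi_tasks, processors,
                         response_time_at_switch,  # placeholder: task -> response time across the switch
                         repair_by_migration):     # placeholder: claim-7 remediation by task migration
    """Check every high-criticality task across the HI/LO mode switch."""
    for task in hi_tasks:
        if response_time_at_switch(task) <= task.deadline:
            continue                               # the task meets its deadline during the switch
        # Deadline overrun: examine the causes in turn and migrate low-criticality
        # tasks until the task becomes schedulable (claim 7).
        repair_by_migration(task, processors)
```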
7. The resource-aware task allocation method for a hybrid key partition real-time operating system according to claim 6, wherein, when the response time of a task exceeds its deadline, checking the causes of the overrun in sequence until the task becomes schedulable comprises:
when the arrival blocking time of the high-criticality task at the mode switch is larger than its arrival blocking time at the high criticality level, obtaining the resource that causes the largest arrival blocking and executing the following steps:
if no high-criticality task on the processor to which the current task is assigned accesses the resource and only low-criticality tasks access it: sorting the processors such that the processors that would cause spin blocking to the low-criticality tasks are placed first and the remaining processors follow in descending order of slack time; and migrating all low-criticality tasks that access the resource on that processor to the first processor, in the processor order, that satisfies the condition;
if a high-criticality task on the processor to which the current task is assigned accesses the resource: migrating all low-criticality tasks that access the resource on the processor causing the arrival blocking of the task to the processor on which the current task is located;
if, after the above steps are executed, the current task is still not schedulable, continuing with the following steps:
if the resource access blocking time is greater than 0, traversing the resources in order of resource number and, in order of processor number, checking whether the low-criticality tasks accessing the resource on each processor cause spin blocking to the current task, and migrating the low-criticality tasks that access the resource on that processor to the core on which the current task is located, performing the migration only if the newly added tasks do not cause a deadline overrun;
and stopping when the current task becomes schedulable or all resources have been traversed.
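The remediation order of claim 7 is sketched below. Every timing quantity (arrival blocking, spin blocking, slack, access blocking) is obtained from an injected `analysis` object because the claim relies on analyses whose formulas are not given here; all of its method names, and the `.processor`, `.tasks`, and `.number` attributes, are assumptions made for illustration.

```python
def repair_by_migration(task, processors, analysis):
    """Claim-7 remediation: first reduce arrival blocking, then spin blocking,
    stopping as soon as the task is schedulable or all resources are checked."""
    a = analysis

    def migrate(t, dest):                  # move a low-criticality task to another processor
        t.processor.tasks.remove(t)
        dest.tasks.append(t)
        t.processor = dest

    home = task.processor

    # Step 1: arrival blocking at the mode switch exceeds its HI-mode value.
    if a.arrival_blocking_at_switch(task) > a.arrival_blocking_hi(task):
        res = a.worst_arrival_resource(task)   # resource causing the largest arrival blocking
        local_hi = [t for t in home.tasks if t.criticality == "HI" and res in t.resources]
        local_lo = [t for t in home.tasks if t.criticality == "LO" and res in t.resources]
        if not local_hi and local_lo:
            # Only low-criticality tasks use the resource locally: move them off this core.
            # Processors that already spin-block low-criticality tasks come first,
            # the rest follow in descending order of slack time.
            order = sorted((p for p in processors if p is not home),
                           key=lambda p: (not a.spin_blocks_lo(p, res), -a.slack(p)))
            for p in order:
                if all(a.fits(t, p) for t in local_lo):
                    for t in local_lo:
                        migrate(t, p)
                    break
        elif local_hi:
            # A local HI task also uses the resource: pull the remote low-criticality
            # users from the processor that causes the arrival blocking onto this core.
            blocker = a.blocking_processor(task, res)
            for t in [t for t in blocker.tasks if t.criticality == "LO" and res in t.resources]:
                migrate(t, home)

    # Step 2: still unschedulable and resource-access (spin) blocking remains.
    if not a.schedulable(task) and a.access_blocking(task) > 0:
        for res in sorted(a.resource_ids()):                       # traverse by resource number
            for p in sorted(processors, key=lambda q: q.number):   # then by processor number
                if p is home:
                    continue
                movers = [t for t in p.tasks
                          if t.criticality == "LO" and res in t.resources
                          and a.spin_blocks_task(t, task)]
                if movers and not a.would_overrun(movers, home):
                    for t in movers:
                        migrate(t, home)                           # migrate only if nothing new overruns
            if a.schedulable(task):
                return                                             # stop once the task is schedulable
```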
8. A resource-aware task allocation device for a hybrid key partition real-time operating system, characterized by comprising:
a first module, configured to group the tasks according to the resource access condition;
a second module, configured to allocate the high-criticality tasks according to the task grouping result, under the condition that the allocation result is schedulable in the high-criticality system mode;
a third module, configured to allocate the low-criticality tasks according to the task grouping result, under the condition that the allocation result is schedulable in the low-criticality system mode;
and a fourth module, configured to schedule the system mode switch by using a task migration method according to the allocated high-criticality tasks and low-criticality tasks, thereby completing the task allocation.
9. An electronic device, comprising a processor and a memory;
wherein the memory is configured to store a program;
and the processor, when executing the program, implements the method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the storage medium stores a program which, when executed by a processor, implements the method of any one of claims 1 to 7.
CN202310095960.XA 2023-02-06 2023-02-06 Resource-aware task allocation method for hybrid key partition real-time operating system Pending CN116244073A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310095960.XA CN116244073A (en) 2023-02-06 2023-02-06 Resource-aware task allocation method for hybrid key partition real-time operating system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310095960.XA CN116244073A (en) 2023-02-06 2023-02-06 Resource-aware task allocation method for hybrid key partition real-time operating system

Publications (1)

Publication Number Publication Date
CN116244073A true CN116244073A (en) 2023-06-09

Family

ID=86632493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310095960.XA Pending CN116244073A (en) 2023-02-06 2023-02-06 Resource-aware task allocation method for hybrid key partition real-time operating system

Country Status (1)

Country Link
CN (1) CN116244073A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116430738A (en) * 2023-06-14 2023-07-14 北京理工大学 Self-adaptive dynamic scheduling method of hybrid key system
CN116430738B (en) * 2023-06-14 2023-08-15 北京理工大学 Self-adaptive dynamic scheduling method of hybrid key system

Similar Documents

Publication Publication Date Title
Gai et al. Minimizing memory utilization of real-time task sets in single and multi-processor systems-on-a-chip
US9612868B2 (en) Systems and methods generating inter-group and intra-group execution schedules for instruction entity allocation and scheduling on multi-processors
Devi Soft real-time scheduling on multiprocessors
US8108869B2 (en) System and method for enforcing future policies in a compute environment
US6317774B1 (en) Providing predictable scheduling of programs using a repeating precomputed schedule
Lipari et al. A framework for achieving inter-application isolation in multiprogrammed, hard real-time environments
US9021490B2 (en) Optimizing allocation of computer resources by tracking job status and resource availability profiles
Mohammadi et al. Scheduling algorithms for real-time systems
US8875146B2 (en) Systems and methods for bounding processing times on multiple processing units
Cheng et al. Cross-platform resource scheduling for spark and MapReduce on YARN
US20030061260A1 (en) Resource reservation and priority management
EP2624135B1 (en) Systems and methods for task grouping on multi-processors
JP2017004511A (en) Systems and methods for scheduling tasks using sliding time windows
CN108123980B (en) Resource scheduling method and system
Guo et al. The concurrent consideration of uncertainty in WCETs and processor speeds in mixed-criticality systems
Davis et al. An investigation into server parameter selection for hierarchical fixed priority pre-emptive systems
CN116244073A (en) Resource-aware task allocation method for hybrid key partition real-time operating system
KR20170023280A (en) Multi-core system and Method for managing a shared cache in the same system
CN112925616A (en) Task allocation method and device, storage medium and electronic equipment
Al-Bayati et al. Partitioning and selection of data consistency mechanisms for multicore real-time systems
Al-Bayati et al. Task placement and selection of data consistency mechanisms for real-time multicore applications
Hobbs et al. Optimal soft real-time semi-partitioned scheduling made simple (and dynamic)
Zeng et al. Optimizing stack memory requirements for real-time embedded applications
Nemati et al. Multiprocessor synchronization and hierarchical scheduling
Andrews et al. Survey on job schedulers in hadoop cluster

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination