CN112817762A - Dispatching system based on adaptive automobile open system architecture standard and dispatching method thereof - Google Patents


Info

Publication number
CN112817762A
CN112817762A (application CN202110129172.9A)
Authority
CN
China
Prior art keywords
user
task
cpu
tasks
thread pool
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110129172.9A
Other languages
Chinese (zh)
Inventor
李丰军 (Li Fengjun)
周剑光 (Zhou Jianguang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Automotive Innovation Corp
Original Assignee
China Automotive Innovation Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Automotive Innovation Corp filed Critical China Automotive Innovation Corp
Priority to CN202110129172.9A priority Critical patent/CN112817762A/en
Publication of CN112817762A publication Critical patent/CN112817762A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 5/00 Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F 5/06 Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/48 Indexing scheme relating to G06F9/48
    • G06F 2209/484 Precedence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/5011 Pool
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/5018 Thread allocation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/5021 Priority

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Hardware Redundancy (AREA)

Abstract

The invention discloses a scheduling system based on the adaptive automobile open system architecture (Adaptive AUTOSAR) standard and a scheduling method thereof. The scheduling system comprises a task layer unit, a user layer unit and a core layer unit. The task layer unit comprises user tasks and a core group configuration, wherein a user task is a user operation requirement and the core group configuration assigns user tasks for management. The user layer unit comprises thread pools, and an idle thread pool is allocated to a user task for execution. The core layer unit comprises CPUs, which are bound to the thread pools and run the assigned user tasks. By statically configuring policies and resources, the invention makes user tasks convenient to manage; at the same time, switching between user tasks consumes fewer resources than switching between kernel threads.

Description

Dispatching system based on adaptive automobile open system architecture standard and dispatching method thereof
Technical Field
The invention relates to a scheduling system based on the adaptive automobile open system architecture (Adaptive AUTOSAR) standard, and belongs to the field of edge computing.
Background
With the rapid development of automated driving and intelligent connected-vehicle technologies, the traditional automotive AUTOSAR platform (Classic AUTOSAR) can no longer meet the increasingly complex requirements of automotive electronic control functions on its own. Adaptive AUTOSAR has clear advantages here, especially for service-oriented architecture development and the use of high-performance processors. It is well known that automated driving technology places strict real-time requirements on process and thread scheduling, and products such as ROS/ROS2 exhibit significant instability and delay in process or thread scheduling. Therefore, a user-level scheduler is designed within Adaptive AUTOSAR that is compatible with different operating systems, so that task scheduling for automated driving functions on Adaptive AUTOSAR is better supported and the corresponding requirements for low delay, determinism and real-time performance are met.
Automated driving applications place high demands on the performance and stability of the operating system; in particular, it is difficult to guarantee the stability of an automated driving application developed on a Linux OS, which is not a real-time operating system.
As automated driving applications grow more complex, the requirements on task real-time performance and stability keep rising. If system resources are insufficient, correspondingly high demands are placed on the scheduler design. It is known that scheduling in an operating system resolves the contention between system resources and running tasks. A Linux OS, for example, provides kernel-level scheduling algorithms such as RR, FIFO and CFS, but all three use Linux kernel threads as the basic scheduling unit, and switching between threads at the kernel level is relatively resource-consuming. To realize real-time behaviour, a scheduler first needs priorities, and priorities come with different policies: preemption, RR and FIFO. When the FIFO policy is used, an abnormal task can occupy the CPU indefinitely, so other tasks never get a chance to run.
Disclosure of Invention
The purpose of the invention is as follows: a scheduling system and a scheduling method based on the adaptive automobile open system architecture standard are provided to solve the problems described above.
The technical scheme is as follows: a dispatching system based on adaptive automobile open system architecture standard comprises a task layer unit, a user layer unit and a core layer unit;
the task layer unit comprises user tasks and a core group configuration, wherein a user task is a user operation requirement and the core group configuration assigns user tasks for management;
the user layer unit comprises thread pools, and an idle thread pool is allocated to a user task for execution;
and the core layer unit comprises CPUs, which are bound to the thread pools and run the assigned user tasks.
According to one aspect of the invention, the task layer unit allocates a user task from a task list to an idle thread pool, and the user task starts running. When the task list is a global queue, all user tasks can be placed on the global queue. The task lists are two-dimensional arrays indexed by priority, with one task list as the group of members for each index; this realizes both the function of running higher-priority tasks first and the function of a core group.
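The two-dimensional, priority-indexed task list can be sketched as follows (a minimal illustration in Python; `TaskList` and its methods are hypothetical names, as the patent itself provides no code):

```python
from collections import deque

class TaskList:
    """Two-dimensional task structure: priority is the index,
    and each entry holds the queue of tasks at that priority."""
    def __init__(self, num_priorities=4):
        # index 0 = highest priority
        self.levels = [deque() for _ in range(num_priorities)]

    def push(self, task, priority):
        self.levels[priority].append(task)

    def pop(self):
        # higher-priority (lower-index) tasks run first
        for queue in self.levels:
            if queue:
                return queue.popleft()
        return None  # no runnable task

tl = TaskList()
tl.push("camera_fusion", priority=2)
tl.push("emergency_brake", priority=0)
assert tl.pop() == "emergency_brake"  # priority 0 runs first
assert tl.pop() == "camera_fusion"
```

The outer array index plays the role of the priority, and each inner queue is "one task list as the group of members for the index".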
According to one aspect of the present invention, the user layer unit uses the thread pools as virtual CPUs of the user layer and performs resource management based on the thread pools, that is, the core group configuration binds user tasks to thread pools; since the thread pools correspond one-to-one to the physical CPUs, the user tasks are bound to the configured physical CPUs, and the task lists correspond one-to-one to the core groups.
A scheduling method based on the adaptive automobile open system architecture standard manages user tasks according to priority and resource configuration, and comprises the following specific steps:
step 1, a member function of the scheduling system creates a work pool and reads the user's scheduler configuration file, which contains configuration information such as the core group, CPU settings, CPU binding and priority;
step 2, a thread pool is instantiated and the classic thread pool's context-binding member function is called; this function acquires the task context, i.e. the working context;
step 3, in the context-binding function, loop threads are created by standard traversal, and the loop-thread attributes, including CPU settings, CPU binding, the RR scheduling policy, the FIFO scheduling policy and priority, are set according to the configuration information read from the scheduler file;
step 4, each thread is a loop thread: if idle, it remains in the waiting state; when the scheduler finds a task, it wakes the corresponding thread, or places the task into the corresponding task queue according to its priority to await execution.
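The waiting/waking behaviour of the loop threads in steps 3 and 4 can be sketched with a condition variable (a hedged sketch; `LoopWorker`, `submit` and `shutdown` are hypothetical names, and a real implementation would additionally apply the CPU-binding and priority attributes from step 3):

```python
import threading
from collections import deque

class LoopWorker:
    """A loop thread: waits while idle, runs tasks when woken."""
    def __init__(self):
        self.tasks = deque()
        self.cv = threading.Condition()
        self.results = []
        self.stop = False
        self.thread = threading.Thread(target=self._loop, daemon=True)
        self.thread.start()

    def _loop(self):
        while True:
            with self.cv:
                while not self.tasks and not self.stop:
                    self.cv.wait()          # idle -> waiting state
                if self.stop and not self.tasks:
                    return
                task = self.tasks.popleft()
            self.results.append(task())     # run outside the lock

    def submit(self, task):
        with self.cv:
            self.tasks.append(task)
            self.cv.notify()                # scheduler wakes the thread

    def shutdown(self):
        with self.cv:
            self.stop = True
            self.cv.notify()
        self.thread.join()

w = LoopWorker()
w.submit(lambda: 1 + 1)
w.shutdown()
assert w.results == [2]
```

The `wait()`/`notify()` pair models "if idle, the thread waits; if a task is found, the scheduler wakes the corresponding thread".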
According to one aspect of the invention, in order to implement real-time functionality, priorities are required first, and they come with different policies: preemption, RR and FIFO. The FIFO mode is used, the FIFO scheduling policy is monitored and exception handling is performed, so that an abnormal task cannot occupy the CPU indefinitely and starve the other tasks.
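One simple way to monitor a FIFO-scheduled task for the runaway behaviour described here is a watchdog timeout. The patent does not specify its monitoring mechanism, so the sketch below (with the hypothetical helper `run_with_watchdog`) is only illustrative:

```python
import threading
import time

def run_with_watchdog(task, timeout_s):
    """Run a task in a worker thread; report it as abnormal if it
    exceeds its time budget, so the scheduler can reclaim the CPU."""
    done = threading.Event()

    def wrapper():
        task()
        done.set()

    t = threading.Thread(target=wrapper, daemon=True)
    t.start()
    if not done.wait(timeout_s):            # budget exceeded
        return "abnormal: exceeded budget"
    return "ok"

assert run_with_watchdog(lambda: None, timeout_s=1.0) == "ok"
assert run_with_watchdog(lambda: time.sleep(2), timeout_s=0.1).startswith("abnormal")
```

In the abnormal branch a real scheduler would demote or cancel the task rather than merely report it.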
According to one aspect of the invention, ensuring real-time performance requires not only priorities but also resource configuration: the real-time performance of the whole system is ensured by setting the core group of a process, binding CPUs, binding interrupts and setting CPU affinity.
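On Linux, the CPU-binding part of this resource configuration maps onto the `sched_setaffinity` facility, exposed in Python as `os.sched_setaffinity` (a sketch assuming a Linux host; interrupt binding, which would go through `/proc/irq/<n>/smp_affinity`, is not shown):

```python
import os

def bind_to_core_group(core_group):
    """Pin the calling process to a set of CPUs (the 'core group')."""
    if hasattr(os, "sched_setaffinity"):          # Linux only
        try:
            os.sched_setaffinity(0, core_group)   # 0 = the calling process
            return os.sched_getaffinity(0)
        except OSError:
            pass                                  # CPU not available on this host
    return set(core_group)                        # non-Linux: report the requested binding

# e.g. dedicate CPU 0 to this process's tasks
granted = bind_to_core_group({0})
assert granted == {0}
```

Pinning the scheduler's thread pools this way keeps cache locality and removes cross-CPU migration jitter, which is the point of the core group configuration.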
Has the advantages that: the invention designs a scheduler at the user layer, which makes automated driving applications convenient to manage and guarantees their stability and low delay; switching between user tasks consumes fewer resources than switching between kernel threads; the FIFO scheduling policy is monitored and exception handling is performed, so that an abnormal task cannot occupy the CPU indefinitely and starve the other tasks; and the real-time performance of the whole system is ensured by setting the core group of a process, binding CPUs, binding interrupts and setting CPU affinity.
Drawings
Fig. 1 is a flowchart of a scheduling method based on the adaptive automobile open system architecture standard according to the present invention.
FIG. 2 is a block diagram of the global queue scheduling system based on the architecture standard of the adaptive automobile open system according to the present invention.
Fig. 3 is a block diagram of a core group dispatching system based on the adaptive automobile open system architecture standard according to the present invention.
Fig. 4 is an operation diagram of the dispatching system based on the adaptive automobile open system architecture standard of the invention.
Detailed Description
Example 1
A scheduling system is designed at the user layer, because a user-layer scheduling system can statically configure policies and resources and thus manage user tasks conveniently.
In this embodiment, as shown in fig. 1, a scheduling system based on the adaptive automobile open system architecture standard includes a task layer unit, a user layer unit and a core layer unit;
the task layer unit comprises user tasks and a core group configuration, wherein a user task is a user operation requirement and the core group configuration assigns user tasks for management;
the user layer unit comprises thread pools, and an idle thread pool is allocated to a user task for execution;
and the core layer unit comprises CPUs, which are bound to the thread pools and run the assigned user tasks.
In a further embodiment, as shown in fig. 2, on the Linux platform a running program appears as a process at the user layer, but the core layer unit sees only threads; the minimum scheduling unit of the core layer unit is a thread. Therefore thread pools can be created at the user layer and each bound to a physical CPU: with 4 physical CPUs, 4 thread pools are created and bound to fixed CPUs. The scheduling system can then allocate a user task from the task list to an idle thread pool, and the user task starts running immediately. The user task list is a global queue: all user tasks are placed on the global queue, and the scheduling system takes tasks from the global queue and dispatches them to idle thread pools.
Example 2
A scheduling system is designed at the user layer, because a user-layer scheduling system can statically configure policies and resources and thus manage user tasks conveniently.
In this embodiment, as shown in fig. 1, a scheduling system based on the adaptive automobile open system architecture standard includes a task layer unit, a user layer unit and a core layer unit;
the task layer unit comprises user tasks and a core group configuration, wherein a user task is a user operation requirement and the core group configuration assigns user tasks for management;
the user layer unit comprises thread pools, and an idle thread pool is allocated to a user task for execution;
and the core layer unit comprises CPUs, which are bound to the thread pools and run the assigned user tasks.
In a further embodiment, as shown in FIG. 3, the physical CPUs are visible from the core layer unit only as thread pools, and application tasks likewise see only thread pools. Therefore, from the perspective of user tasks, a thread pool can be regarded as a virtual CPU of the user layer, and resource management can be performed on the basis of thread pools, that is, core group configuration: the core group binds user tasks to particular thread pools. In this case a single global queue is no longer suitable; the task lists must correspond to the core groups, so that there are as many task lists as core groups and each task list corresponds to one core group. A task list is a two-dimensional array indexed by priority, with one task list as the group of members for each index, so that higher-priority user tasks run first and the core group function is realized.
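The core-group variant, one priority-indexed task column per core group, can be modelled as follows (hypothetical names such as `CoreGroupScheduler`; a minimal model of the data layout rather than the patented code):

```python
from collections import deque

NUM_PRIORITIES = 3

class CoreGroupScheduler:
    """One priority-indexed task column per core group; each core
    group is associated with its own set of thread pools / CPUs."""
    def __init__(self, core_groups):
        # core_groups: e.g. {"perception": {0, 1}, "control": {2, 3}}
        self.columns = {
            name: [deque() for _ in range(NUM_PRIORITIES)]
            for name in core_groups
        }
        self.cpus = dict(core_groups)   # group name -> bound CPU set

    def submit(self, group, task, priority):
        self.columns[group][priority].append(task)

    def next_task(self, group):
        # within a core group, higher priority (lower index) first
        for level in self.columns[group]:
            if level:
                return level.popleft()
        return None

s = CoreGroupScheduler({"perception": {0, 1}, "control": {2, 3}})
s.submit("control", "steer", priority=1)
s.submit("control", "brake", priority=0)
assert s.next_task("control") == "brake"   # runs first within its group
assert s.next_task("perception") is None   # columns are independent
```

There are as many task columns as core groups, and priority ordering applies within each column, matching the text above.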
Example 3
A scheduling system is designed at the user layer, because a user-layer scheduling system can statically configure policies and resources and thus manage user tasks conveniently.
In this embodiment, as shown in fig. 1, a scheduling system based on the adaptive automobile open system architecture standard includes a task layer unit, a user layer unit and a core layer unit;
the task layer unit comprises user tasks and a core group configuration, wherein a user task is a user operation requirement and the core group configuration assigns user tasks for management;
the user layer unit comprises thread pools, and an idle thread pool is allocated to a user task for execution;
and the core layer unit comprises CPUs, which are bound to the thread pools and run the assigned user tasks.
As shown in fig. 4, a scheduling method based on the adaptive automobile open system architecture standard manages user tasks according to priority and resource configuration, and specifically comprises the following steps:
step 1, a member function of the scheduling system creates a work pool and reads the user's scheduler configuration file, which contains configuration information such as the core group, CPU settings, CPU binding and priority;
step 2, a thread pool is instantiated and the classic thread pool's context-binding member function is called; this function acquires the task context, i.e. the working context;
step 3, in the context-binding function, loop threads are created by standard traversal, and the loop-thread attributes, including CPU settings, CPU binding, the RR scheduling policy, the FIFO scheduling policy and priority, are set according to the configuration information read from the scheduler file;
step 4, each thread is a loop thread: if idle, it remains in the waiting state; when the scheduler finds a task, it wakes the corresponding thread, or places the task into the corresponding task queue according to its priority to await execution.
In a further embodiment, in order to implement real-time functionality, priorities are required first, and they come with different policies: preemption, RR and FIFO. The FIFO mode is used, the FIFO scheduling policy is monitored and exception handling is performed, so that an abnormal task cannot occupy the CPU indefinitely and starve the other tasks.
In a further embodiment, ensuring real-time performance requires not only priorities but also resource configuration: the real-time performance of the whole system is ensured by setting the core group of a process, binding CPUs, binding interrupts and setting CPU affinity.
In summary, the present invention has the following advantages:
1. by designing a scheduler at the user layer, automated driving applications are convenient to manage, their stability and low delay are guaranteed, and switching between user tasks consumes fewer resources than switching between kernel threads;
2. by monitoring the FIFO scheduling policy and performing exception handling, the situation in which an abnormal task occupies the CPU indefinitely and other tasks never get a chance to run is avoided;
3. the real-time performance of the whole system is ensured by setting the core group of a process, binding CPUs, binding interrupts and setting CPU affinity.
It should be noted that the technical features described in the above embodiments may be combined in any appropriate manner without contradiction. To avoid unnecessary repetition, such combinations are not described in further detail.

Claims (8)

1. A dispatching system based on the adaptive automobile open system architecture standard, characterized in that it comprises a task layer unit, a user layer unit and a core layer unit, wherein
the task layer unit comprises user tasks and a core group configuration, wherein a user task is a user operation requirement and the core group configuration assigns user tasks for management;
the user layer unit comprises thread pools, and an idle thread pool is allocated to a user task for execution;
and the core layer unit comprises CPUs, which are bound to the thread pools and run the assigned user tasks.
2. The scheduling system based on the adaptive automobile open system architecture standard of claim 1, wherein the task layer unit allocates a user task from a task list to an idle thread pool and the user task starts running; when the task list is a global queue, all user tasks are placed on the global queue.
3. The scheduling system based on the adaptive automobile open system architecture standard of claim 1, wherein in the user layer unit the task lists are two-dimensional arrays indexed by priority, with one task list as the group of members for each index, so that higher-priority tasks run first and the core group function is realized.
4. The scheduling system based on the adaptive automobile open system architecture standard of claim 1, wherein each thread pool created by the user layer unit is bound to a physical CPU in one-to-one correspondence; the scheduling system allocates user tasks from a task list to an idle thread pool, and when the task list is a global queue, the scheduling system takes user tasks from the global queue and allocates them to idle thread pools.
5. The dispatching system based on the adaptive automobile open system architecture standard of claim 1, wherein the user layer unit uses the thread pools as virtual CPUs of the user layer and performs resource management based on the thread pools, that is, the core group configuration binds user tasks to thread pools; since the thread pools correspond one-to-one to the physical CPUs, the user tasks are bound to the configured physical CPUs, and the task lists correspond one-to-one to the core groups.
6. A scheduling method based on the adaptive automobile open system architecture standard, characterized in that user tasks are managed according to priority and resource configuration, with the following specific steps:
step 1, a member function of the scheduling system creates a work pool and reads the user's scheduler configuration file, which contains configuration information such as the core group, CPU settings, CPU binding and priority;
step 2, a thread pool is instantiated and the classic thread pool's context-binding member function is called; this function acquires the task context, i.e. the working context;
step 3, in the context-binding function, loop threads are created by standard traversal, and the loop-thread attributes, including CPU settings, CPU binding, the RR scheduling policy, the FIFO scheduling policy and priority, are set according to the configuration information read from the scheduler file;
step 4, each thread is a loop thread: if idle, it remains in the waiting state; when the scheduler finds a task, it wakes the corresponding thread, or places the task into the corresponding task queue according to its priority to await execution.
7. The scheduling method based on the adaptive automobile open system architecture standard of claim 6, characterized in that the FIFO mode is used, the FIFO scheduling policy is monitored and exception handling is performed, so that an abnormal task cannot occupy the CPU indefinitely and starve the other tasks.
8. The dispatching method based on the adaptive automobile open system architecture standard of claim 6, characterized in that the core group of a process is set, CPUs and interrupts are bound and CPU affinity is set, ensuring the real-time performance of the whole system.
CN202110129172.9A 2021-01-29 2021-01-29 Dispatching system based on adaptive automobile open system architecture standard and dispatching method thereof Pending CN112817762A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110129172.9A CN112817762A (en) 2021-01-29 2021-01-29 Dispatching system based on adaptive automobile open system architecture standard and dispatching method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110129172.9A CN112817762A (en) 2021-01-29 2021-01-29 Dispatching system based on adaptive automobile open system architecture standard and dispatching method thereof

Publications (1)

Publication Number Publication Date
CN112817762A (en)

Family

ID=75860324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110129172.9A Pending CN112817762A (en) 2021-01-29 2021-01-29 Dispatching system based on adaptive automobile open system architecture standard and dispatching method thereof

Country Status (1)

Country Link
CN (1) CN112817762A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102360310A (en) * 2011-09-28 2012-02-22 中国电子科技集团公司第二十八研究所 Multitask process monitoring method and system in distributed system environment
CN102541653A (en) * 2010-12-24 2012-07-04 新奥特(北京)视频技术有限公司 Method and system for scheduling multitasking thread pools
CN103473138A (en) * 2013-09-18 2013-12-25 柳州市博源环科科技有限公司 Multi-tasking queue scheduling method based on thread pool
US20150135183A1 (en) * 2013-11-12 2015-05-14 Oxide Interactive, LLC Method and system of a hierarchical task scheduler for a multi-thread system
CN106533982A (en) * 2016-11-14 2017-03-22 西安电子科技大学 Dynamic queue scheduling device and method based on bandwidth borrowing
US20170315831A1 (en) * 2015-01-12 2017-11-02 Yutou Technology (Hangzhou) Co., Ltd. A System for Implementing Script Operation in a Preset Embedded System
CN108804211A (en) * 2018-04-27 2018-11-13 西安华为技术有限公司 Thread scheduling method, device, electronic equipment and storage medium
US20190188034A1 (en) * 2017-12-15 2019-06-20 Red Hat, Inc. Thread pool and task queuing method and system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Jian et al.: "面向通信系统的GPP平台内核调度算法研究" [Research on kernel scheduling algorithms for GPP platforms oriented to communication systems], 《信息技术》 (Information Technology), no. 12, pp. 22-25 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327868A (en) * 2021-12-08 2022-04-12 中汽创智科技有限公司 Dynamic memory regulation and control method, device, equipment and medium
CN114327868B (en) * 2021-12-08 2023-12-26 中汽创智科技有限公司 Memory dynamic regulation and control method, device, equipment and medium
WO2023122891A1 (en) * 2021-12-27 2023-07-06 宁德时代新能源科技股份有限公司 Task scheduling method and multi-core processor system

Similar Documents

Publication Publication Date Title
CN109564528B (en) System and method for computing resource allocation in distributed computing
EP2799990B1 (en) Dynamic virtual machine sizing
CA2704269C (en) Uniform synchronization between multiple kernels running on single computer systems
US6389449B1 (en) Interstream control and communications for multi-streaming digital processors
US7650601B2 (en) Operating system kernel-assisted, self-balanced, access-protected library framework in a run-to-completion multi-processor environment
US9274832B2 (en) Method and electronic device for thread scheduling
US8627325B2 (en) Scheduling memory usage of a workload
US20080229319A1 (en) Global Resource Allocation Control
CN112817762A (en) Dispatching system based on adaptive automobile open system architecture standard and dispatching method thereof
KR101697647B1 (en) Apparatus and Method Managing Migration of Tasks among Cores Based On Scheduling Policy
JP4985662B2 (en) Program and control device
CN111597044A (en) Task scheduling method and device, storage medium and electronic equipment
EP2587374A1 (en) Multi-core system and scheduling method
US10853133B2 (en) Method and apparatus for scheduling tasks to a cyclic schedule
CN109656716B (en) Slurm job scheduling method and system
KR101439355B1 (en) Scheduling method of real-time operating system for vehicle, vehicle ECU using the same, and computer readable recording medium having program of scheduling method
GB2417580A (en) Method for executing a bag of tasks application on a cluster by loading a slave process onto an idle node in the cluster
US11934890B2 (en) Opportunistic exclusive affinity for threads in a virtualized computing system
JPH05108380A (en) Data processing system
JP2000259430A (en) Processing method for computer system
CN116431335B (en) Control group-based container message queue resource quota control method
KR101334842B1 (en) Virtual machine manager for platform of terminal having function of virtualization and method thereof
LU502792B1 (en) Method for implementing adaptive scheduling of user-mode thread pool
CN118689599A (en) Interrupt processing method and device and electronic equipment
Lu et al. Constructing ECU Software Architecture Based on OSEK

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination