CN118034880A - Multi-core scheduling method, device, vehicle, electronic equipment and medium - Google Patents


Info

Publication number
CN118034880A
Authority
CN
China
Prior art keywords
core
task
migration
tasks
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410174062.8A
Other languages
Chinese (zh)
Inventor
胡自成
谢宝友
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guoke Chushi Chongqing Software Co ltd
Original Assignee
Guoke Chushi Chongqing Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guoke Chushi Chongqing Software Co ltd filed Critical Guoke Chushi Chongqing Software Co ltd
Priority to CN202410174062.8A priority Critical patent/CN118034880A/en
Publication of CN118034880A publication Critical patent/CN118034880A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources to service a request
    • G06F9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F9/5038 Allocation of resources to service a request, the resource being a machine, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F9/505 Allocation of resources to service a request, the resource being a machine, considering the load
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/48 Indexing scheme relating to G06F9/48
    • G06F2209/483 Multiproc
    • G06F2209/484 Precedence
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/5021 Priority
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Hardware Redundancy (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The disclosure relates to a multi-core scheduling method, a device, a vehicle, an electronic device, and a medium. The method comprises: acquiring initial task allocation information for each core of a vehicle multi-core processor and the priorities of the tasks on each core; determining a degree of task allocation imbalance according to the initial allocation information and the task priorities, where the degree of imbalance represents an imbalance in at least one of the number of tasks allocated among the cores and the execution durations of tasks of different priorities within each core; and performing task migration scheduling among the cores according to the degree of imbalance, with tasks transferred and migrated between cores through a migration queue. The method can improve the utilization rate of the multi-core processor and achieve multi-core load balancing. In the initial state, a user only needs to assign task priorities according to the importance of each task, without considering the correspondence between tasks and cores when setting the initial allocation, which greatly reduces the difficulty of initial task allocation.

Description

Multi-core scheduling method, device, vehicle, electronic equipment and medium
Description of the divisional application
This application is a divisional application of the parent application with application number 202211716884.1, filed December 29, 2022, entitled "Multi-core scheduling method, device, vehicle, electronic equipment and medium".
Technical Field
The disclosure relates to the field of vehicle task scheduling, and in particular relates to a multi-core scheduling method, a multi-core scheduling device, a vehicle, electronic equipment and a medium.
Background
In the field of vehicles, intelligent automotive applications are becoming more and more popular. AUTOSAR (AUTomotive Open System ARchitecture) is a collaborative development framework for automotive electronic systems, jointly established by automobile manufacturers, parts suppliers, and research and service institutions worldwide, which defines an open standard software architecture for automotive electronic control units (ECUs). AUTOSAR comprises a Classic Platform (CP) and an Adaptive Platform (AP).
For an autonomous vehicle, under limited computing and storage conditions (the computing and storage resources of the vehicle-mounted chip are limited), tasks must respond in time and tasks of higher importance (such as deciding whether to brake according to road conditions) must be processed quickly. At present, the CP of AUTOSAR supports multi-core scheduling. When the CP performs multi-core scheduling, the main scheduling logic is independent scheduling per core: each core can only process the tasks bound to it, and optimal scheduling of tasks within each core is achieved by an algorithm inside that core. However, if the initial task allocation is unreasonable, this scheduling manner easily causes unbalanced core loads, with some cores overloaded and other cores constantly idle, so that the utilization rate of the multi-core processor is low.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a method, an apparatus, a vehicle, an electronic device, and a medium for multi-core scheduling.
According to a first aspect of embodiments of the present disclosure, a multi-core scheduling method is provided. The method comprises: acquiring initial task allocation information for each core of a vehicle multi-core processor and the priorities of the tasks on each core; determining a degree of task allocation imbalance according to the initial allocation information and the task priorities, where the degree of imbalance represents an imbalance in at least one of the number of tasks allocated among the cores and the execution durations of tasks of different priorities within each core; and performing task migration scheduling among the cores according to the degree of imbalance, with tasks migrated between cores through a migration queue. The migration queue is a data structure that stores tasks being migrated between cores; the correspondence between tasks and cores is adjusted through the membership of tasks in this data structure.
In some embodiments, performing task migration scheduling among the cores according to the degree of task allocation imbalance includes: determining, for each core, whether task migration is needed and the corresponding migration state according to the imbalance in the number of tasks allocated among the cores; and, when a first core needs task migration and its migration state is the migrate-out state, migrating one or more first tasks with relatively higher priority from the ready queue of the first core to the migration queue, according to the relative priorities of the tasks in that ready queue.
In some embodiments, the migration queue is maintained based on a spin lock, so that only one core can access the migration queue at a time.
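A minimal sketch of such a queue, assuming a priority-sorted list guarded by a lock (`threading.Lock` stands in here for the spin lock of a real kernel; the class and method names are hypothetical, not the patent's):

```python
import threading

class MigrationQueue:
    """Shared inter-core migration queue. The patent maintains it under a spin
    lock so only one core accesses it at a time; a threading.Lock is used here
    as a stand-in for that mutual exclusion."""

    def __init__(self):
        self._lock = threading.Lock()
        self._tasks = []  # (priority, task_id), kept sorted high-to-low

    def push(self, task_id, priority):
        with self._lock:  # exclusive access, as with the spin lock
            self._tasks.append((priority, task_id))
            self._tasks.sort(key=lambda t: -t[0])

    def pop_highest(self):
        with self._lock:
            return self._tasks.pop(0)[1] if self._tasks else None
```

Because every operation takes the lock, concurrent pushes and pops from different cores serialize, matching the one-core-at-a-time access rule.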
In some embodiments, the above method further comprises: when a second core needs task migration and its migration state is the migrate-in state, migrating one or more second tasks from the migration queue into the ready queue of the second core.
In some embodiments, the tasks in the migration queue are ordered by priority from high to low. Migrating one or more second tasks from the migration queue to the ready queue of the second core includes: based on the priority ordering of the tasks in the migration queue, migrating the second task at a preset priority rank in the migration queue to the ready queue of the second core.
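The in-migration step can be sketched as follows, assuming the migration queue is a plain list of `(task_id, priority)` tuples already sorted from high to low priority (the function name and data layout are illustrative assumptions):

```python
def migrate_in(migration_queue, ready_queue, rank=0):
    """Move the task at the preset priority rank (0 = highest) out of the
    priority-sorted migration queue into the in-migrating core's ready queue.
    Entries are (task_id, priority) tuples, highest priority first."""
    if rank >= len(migration_queue):
        return None
    task = migration_queue.pop(rank)
    ready_queue.append(task)
    ready_queue.sort(key=lambda t: -t[1])  # keep the ready queue priority-ordered too
    return task
```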
In some embodiments, determining, for each core, whether task migration is needed and the corresponding migration state according to the imbalance in the number of tasks allocated among the cores includes: determining the core balancing task number according to the total number of cores and the total number of tasks of the multi-core processor; determining the numerical relation between each core's task allocation number and the core balancing task number; when the relation indicates that a first core's task allocation number is greater than the core balancing task number, determining that the first core needs task migration in the migrate-out state, the number of tasks to migrate out being the difference between its task allocation number and the core balancing task number; when the relation indicates that a second core's task allocation number is smaller than the core balancing task number, determining that the second core needs task migration in the migrate-in state, the number of tasks to migrate in being the difference between the core balancing task number and its task allocation number; and when the relation indicates that a third core's task allocation number is within the balanced range of, or equal to, the core balancing task number, determining that the third core does not need task migration.
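The per-core comparison against the core-balancing task number can be sketched as follows (the state names and the strict-equality balanced case are assumptions for illustration; an embodiment below instead uses a balanced range):

```python
MIGRATE_OUT, MIGRATE_IN, NO_MIGRATION = "out", "in", "none"

def migration_state(core_task_count, balance_count):
    """Classify one core against the core-balancing task number; returns the
    migration state and the number of tasks to move."""
    if core_task_count > balance_count:
        return MIGRATE_OUT, core_task_count - balance_count
    if core_task_count < balance_count:
        return MIGRATE_IN, balance_count - core_task_count
    return NO_MIGRATION, 0
```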
In some embodiments, determining the core balancing task number according to the total number of cores and the total number of tasks of the multi-core processor includes: acquiring the total number of tasks in the active state based on an atomic counter; and calculating the average number of tasks per core from the total number of tasks and the total number of cores, the average being used as the core balancing task number.
In some embodiments, determining the core balancing task number according to the total number of cores and the total number of tasks of the multi-core processor includes: acquiring the total number of tasks in the active state based on an atomic counter; determining a target quantity floating value according to the total number of tasks; calculating the average number of tasks per core from the total number of tasks and the total number of cores; and generating a balanced number range from the average and the target quantity floating value, a core's task number being considered balanced when it falls within this range.
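A sketch of the balanced-range computation. The 10% fraction used here to derive the target quantity floating value is an assumption for illustration; the text only says the value is derived from the total task count:

```python
def balance_range(total_tasks, total_cores, float_fraction=0.1):
    """Balanced range = average tasks per core plus/minus a target quantity
    floating value derived from the total task count (fraction assumed)."""
    average = total_tasks / total_cores
    delta = total_tasks * float_fraction
    return average - delta, average + delta

def is_balanced(core_task_count, total_tasks, total_cores, float_fraction=0.1):
    """A core's task count is considered balanced when it lies in the range."""
    low, high = balance_range(total_tasks, total_cores, float_fraction)
    return low <= core_task_count <= high
```

With the FIG. 2 numbers (8 tasks, 3 cores) the range is roughly 1.87 to 3.47, so cores holding 2 or 3 tasks count as balanced while cores holding 1 or 4 do not.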
In some embodiments, performing task migration scheduling among the cores according to the degree of task allocation imbalance includes: determining whether a target migration task to be migrated exists in each core according to the imbalance in the execution durations of tasks of different priorities within the core, where the target migration task is the task in the ready queue at a target priority whose run time is still zero while the run time of the running task has exceeded a set threshold; if a target migration task exists in a fourth core, determining whether a target migration core exists among the cores other than the fourth core, the target migration core being one whose highest task priority is lower than the priority of the target migration task; and if a target migration core exists, migrating the target migration task to the target migration core.
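These two checks can be sketched as follows (function names, tuple layouts, and the tie-break by smallest highest priority are illustrative assumptions):

```python
def find_starved_task(running_time, threshold, ready_queue):
    """Target migration task: when the running task's run time exceeds the set
    threshold, the highest-priority ready task whose own run time is still zero.
    ready_queue entries are (task_id, priority, run_time), priority-ordered."""
    if running_time <= threshold:
        return None
    for task_id, prio, run_time in ready_queue:
        if run_time == 0:
            return (task_id, prio)
    return None

def find_target_core(task_priority, top_priority_by_core, exclude_core):
    """Target migration core: any other core whose highest task priority is
    lower than the target migration task's priority; when several qualify,
    the one with the smallest such highest priority is chosen."""
    candidates = [(top, core) for core, top in top_priority_by_core.items()
                  if core != exclude_core and top < task_priority]
    return min(candidates)[1] if candidates else None
```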
In some embodiments, migrating the target migration task to the target migration core includes: migrating the target migration task to the migration queue and marking its migration target according to the target migration core; each core accessing the migration queue at its set migration queue access timing; and, when a fifth core accesses the migration queue and determines that a first target migration task marking the fifth core as its migration target exists, migrating that first target migration task from the migration queue into the ready queue of the fifth core.
In some embodiments, the migration queue access timing set for each core includes one of the following: each core is preset with an access time sequence for the migration queue and accesses the queue based on that sequence; or each core accesses the migration queue when it finishes processing its running task; or each core accesses the migration queue when the number of tasks in its ready queue changes; or an inspection flag for inspecting the tasks of the migration queue is set on the migration target, each core determines whether its inspection flag is set, and a core whose flag is set accesses the migration queue.
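The last of these timings, the inspection flag, might look as follows (all names and data layouts are hypothetical):

```python
def poll_migration_queue(core_id, inspect_flags, migration_queue, ready_queues):
    """Inspection-flag timing: a core whose flag is set scans the migration
    queue and pulls every task whose marked migration target is this core.
    migration_queue entries are (task_id, priority, target_core)."""
    if not inspect_flags.get(core_id):
        return []
    pulled = [entry for entry in migration_queue if entry[2] == core_id]
    for entry in pulled:
        migration_queue.remove(entry)
        ready_queues[core_id].append((entry[0], entry[1]))
    inspect_flags[core_id] = False  # flag is consumed once the queue is inspected
    return [entry[0] for entry in pulled]
```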
In some embodiments, marking the migration target corresponding to the target migration task according to the target migration core includes: when a plurality of target migration cores exist, marking all of them as migration targets of the target migration task; or, when a plurality of target migration cores exist, marking the core whose highest task priority is the smallest among them as the migration target of the target migration task.
In some embodiments, the above method further comprises: when no target migration core exists, the fourth core marks the target migration task as being in a to-be-migrated state and continues to monitor whether a target migration core appears among the cores other than the fourth core; if no target migration core appears within a set time period, the to-be-migrated mark of the target migration task is cancelled; and if a target migration core is detected within the set time period, the target migration task is migrated to the migration queue, its migration target is marked as the target migration core, and an inspection flag for inspecting the migration queue tasks is set on the target migration core.
In some embodiments, within each core, the tasks in the ready queue and the running task run according to their priority levels; when a target task with a higher priority than the running task exists in the ready queue, the target task preempts the processor resource, and the core switches from running the original task to running the target task.
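A sketch of this preemption rule, assuming tasks are `(task_id, priority)` tuples and the ready queue is kept sorted from high to low priority (the function is illustrative, not the patent's implementation):

```python
def maybe_preempt(running, ready_queue):
    """Priority preemption within a core: if the head of the priority-ordered
    ready queue outranks the running task, it takes the processor and the
    displaced task re-enters the ready queue."""
    if ready_queue and ready_queue[0][1] > running[1]:
        new_running = ready_queue.pop(0)
        ready_queue.append(running)
        ready_queue.sort(key=lambda t: -t[1])  # keep the queue priority-ordered
        return new_running
    return running
```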
According to a second aspect of embodiments of the present disclosure, an apparatus for multi-core scheduling is provided. The apparatus comprises an information acquisition module, a determining module, and a scheduling module. The information acquisition module is configured to acquire initial task allocation information for each core of a vehicle multi-core processor and the priorities of the tasks on each core. The determining module is configured to determine a degree of task allocation imbalance according to the initial allocation information and the task priorities, where the degree of imbalance represents an imbalance in at least one of the number of tasks allocated among the cores and the execution durations of tasks of different priorities within each core. The scheduling module is configured to perform task migration scheduling among the cores according to the degree of imbalance, with tasks migrated between cores through a migration queue. The migration queue is a data structure that stores tasks being migrated between cores; the correspondence between tasks and cores is adjusted through the membership of tasks in this data structure.
According to a third aspect of embodiments of the present disclosure, there is provided a vehicle storing a set of instructions, the instructions being executed by a system module of the vehicle to implement the multi-core scheduling method provided by the first aspect of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided an electronic device, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to read the executable instructions from the memory and execute them to implement the multi-core scheduling method provided by the first aspect of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of multi-core scheduling provided by the first aspect of the present disclosure.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
According to the initial task allocation information and the priorities of the tasks on each core, the degree of imbalance in at least one of the number of tasks allocated among the cores and the execution durations of tasks of different priorities within each core is determined, and task migration scheduling is performed among the cores accordingly, so that the number of tasks allocated to each core is balanced and/or higher-priority tasks in each core are executed in time with the execution duration they require. On one hand, this improves the utilization rate of the multi-core processor and achieves multi-core load balancing, avoiding situations where some cores sit idle while others run overloaded, or where high-priority tasks on some cores are never executed. On the other hand, the multi-core scheduling method provided by the embodiments of the disclosure adjusts dynamically as tasks run, achieving multi-core load balancing and ensuring timely execution of high-priority tasks; in the initial state, a user (for example, a system developer) only needs to assign task priorities according to the importance of each task, without considering the correspondence between tasks and cores when setting the initial allocation, which greatly reduces the difficulty of initial task allocation.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flowchart illustrating a method of multi-core scheduling, according to an example embodiment.
FIG. 2 is a schematic diagram of initial allocation information of tasks for cores and priorities of tasks on cores in a multi-core processor, according to an example embodiment.
Fig. 3 is a detailed implementation flowchart of step S130 according to an exemplary embodiment.
Fig. 4 is a detailed implementation flowchart of step S310 according to an exemplary embodiment.
FIG. 5A is a diagram of a migration queue and tasks in an active state within cores, according to an example embodiment.
FIG. 5B is a schematic diagram highlighting the running task among the tasks in an active state within each core, according to an exemplary embodiment.
FIG. 5C is a state diagram of a first core migrating a first task to a migration queue according to an example embodiment.
FIG. 5D is a state diagram of a second core migrating a second task to a ready queue according to an example embodiment.
Fig. 6 is a detailed implementation flowchart of step S130 according to another exemplary embodiment.
FIG. 7A is a schematic diagram of the migration queue, highlighting the running task among the tasks in an active state within each core, according to another exemplary embodiment.
FIG. 7B is a state diagram of a fourth core migrating a target-migrating task to a migration queue if it is determined that a target-migrating core exists, according to another example embodiment.
FIG. 7C is a state diagram of a target-migrating core migrating a target-migrating task from a migration queue to a ready queue according to another example embodiment.
Fig. 8 is a detailed implementation flowchart of step S630 according to an exemplary embodiment.
Fig. 9 is a detailed implementation flowchart of step S130 according to yet another exemplary embodiment.
FIG. 10 is a block diagram illustrating an apparatus for multi-core scheduling according to an example embodiment.
FIG. 11 is a block diagram of a vehicle, according to an exemplary embodiment.
Fig. 12 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
Exemplary embodiments will be described in detail below with reference to the accompanying drawings.
It should be noted that the related embodiments and drawings describe only exemplary embodiments provided by the present disclosure, not all of its embodiments, and the present disclosure should not be construed as limited by these exemplary embodiments.
It should be noted that the terms "first," "second," and the like, as used in this disclosure, are used merely to distinguish between different steps, devices, or modules. These terms indicate neither any particular technical meaning nor any necessary sequence or interdependence between the items they modify.
It should be noted that the modifiers "one," "a plurality," and "at least one" used in this disclosure are illustrative rather than limiting. Unless the context clearly indicates otherwise, they should be understood as "one or more."
It should be noted that the term "and/or" is used in this disclosure to describe an association between associated objects and generally indicates three possible relationships. For example, A and/or B may represent: A exists alone, A and B exist simultaneously, or B exists alone.
It should be noted that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. The scope of the present disclosure is not limited by the order of description of the steps in the related embodiments unless specifically stated.
It should be noted that all actions of acquiring signals, information, or data in the present disclosure are performed in compliance with the applicable data protection laws and policies of the relevant jurisdiction and with the authorization of the owner of the corresponding device.
Exemplary method
FIG. 1 is a flowchart illustrating a method of multi-core scheduling, according to an example embodiment.
Referring to fig. 1, a method for multi-core scheduling according to an embodiment of the present disclosure includes the following steps: s110, S120, and S130. The above method may be applied to an electronic device having a multi-core processor, for example, an autonomous vehicle having a multi-core processor, or an electronic device having a multi-core processor and capable of communicating with a vehicle, for performing task scheduling on a vehicle multi-core processor, for example, the electronic device may be an in-vehicle device or a server that provides services for scheduling of a vehicle, or the like.
In step S110, task initial allocation information of each core and priorities of tasks on each core in the vehicle multi-core processor are obtained.
FIG. 2 is a schematic diagram of initial allocation information of tasks for cores and priorities of tasks on cores in a multi-core processor, according to an example embodiment.
The initial allocation information of the task is used for representing the corresponding relation between each task and the cores in the multi-core processor in the starting operation stage of the task. For example, referring to fig. 2, the multi-core processor includes 3 cores, core 0, core 1, and core 2, respectively, wherein core 0 is allocated with task T1, task T2, task T3, and task T4, core 1 is allocated with task T5, and core 2 is allocated with task T6, task T7, and task T8. The number of tasks herein is merely an example, and in an actual vehicle operating environment, the number of tasks correspondingly processed by the multicore processor may vary.
The task initial allocation information may be generated by allocating each task to a core during initialization of the multi-core processor. Specifically, the multi-core processor may perform the initial task allocation according to initialization configuration information from a user (for example, a system developer); e.g. the initialization configuration information declares which tasks are bound to which core.
The priority of each task is pre-assigned; for example, the priority of each task is configured by the user in the initialization configuration information according to the importance of the task. In some embodiments, the priorities are global priorities. For example, referring to fig. 2, the task priorities declared in the initialization configuration information are classified into 5 levels, expressed as level 1 to level 5, where level 1 is the lowest priority and level 5 the highest. The priority of task T1 is 5, the priority of task T6 is 4, the priorities of tasks T2, T3, and T7 are 3, the priorities of tasks T4 and T8 are 2, and the priority of task T5 is 1.
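For illustration, the FIG. 2 configuration can be modeled as follows (the dictionaries are hypothetical data structures standing in for the initialization configuration information, not the patent's own representation):

```python
# Initial task-to-core binding from FIG. 2.
initial_allocation = {
    0: ["T1", "T2", "T3", "T4"],  # core 0
    1: ["T5"],                    # core 1
    2: ["T6", "T7", "T8"],        # core 2
}
# Global priorities: level 1 (lowest) to level 5 (highest).
priority = {"T1": 5, "T2": 3, "T3": 3, "T4": 2,
            "T5": 1, "T6": 4, "T7": 3, "T8": 2}

def tasks_per_core(allocation):
    """Number of tasks initially bound to each core."""
    return {core: len(tasks) for core, tasks in allocation.items()}
```

`tasks_per_core(initial_allocation)` yields the 4/1/3 split that the imbalance examples below are built on.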
In some application scenarios, the executing subject of the method is an autonomous vehicle, which can acquire the initial task allocation information of each core in the vehicle multi-core processor and the priority of each task from the initialization configuration information.
In other application scenarios, the executing subject of the method is an electronic device that has a multi-core processor and can communicate with the vehicle; the electronic device can acquire the initial task allocation information of each core in the vehicle's multi-core processor and the priority of each task from the vehicle's initialization configuration information.
In step S120, a degree of imbalance in task allocation is determined according to the task initial allocation information and the priorities of the tasks on the cores.
The degree of imbalance of task allocation refers to the degree of imbalance of at least one of the number of task allocation among cores and the execution time period of tasks of different priorities in the cores.
For example, in some embodiments, the degree of task allocation imbalance represents the imbalance in the number of tasks allocated among the cores. Referring to fig. 2, the following scenario A exists: core 0 is allocated 4 tasks, core 1 is allocated 1 task, and core 2 is allocated 3 tasks; the task allocation number of core 1 is much smaller than that of cores 0 and 2, so the number of tasks allocated among the cores is unbalanced. Inter-core task scheduling is then required to equalize the number of tasks allocated to each core.
In other embodiments, the degree of imbalance of task allocation indicates the degree of imbalance in the execution durations of tasks with different priorities within each core. Tasks of different priorities are expected to be executed in succession, and the balanced state is that each task is executed in time according to its priority. If the running duration of a running task exceeds a set threshold while the running time of a task of some target priority is still 0, the target-priority task is not being executed in time, and the execution durations of tasks with different priorities are unbalanced.
Taking core 0 as an example: within core 0, tasks generally execute sequentially in priority order, or occupy time slices in priority order, and tasks of the same priority execute in order of wake-up time. Taking sequential execution in priority order as an example, under normal conditions task T1, with priority 5 (the highest in this embodiment), executes first; after T1 completes, T2 executes, then T3, and finally T4, and the execution duration of T1 is generally within 2 s. However, the following scenario B exists: during actual running, task T1 has already executed for 3 s while the running time of task T2 is still 0, so a high-priority task cannot be executed in time, and the execution durations of tasks with different priorities within the core are unbalanced. Inter-core task scheduling is required because task T2 has a high priority and needs to be executed by an expected time (e.g., at the latest, to start execution within 3.1 s, i.e., 100 ms after the set threshold, or to produce an execution result, etc.).
In still other embodiments, the above-described degree of imbalance in task allocation indicates the degree of imbalance in the number of task allocations between cores and the execution durations of different priority tasks within cores, such as the simultaneous presence of scenario a and scenario B of the foregoing examples.
In step S130, task migration scheduling is performed between the cores according to the degree of imbalance in task allocation.
For example, in the example scenario A, task migration scheduling is performed between the cores so that the number of tasks allocated to each core is balanced; in the example scenario B, task migration scheduling is performed between the cores so that tasks with higher priority that have not yet executed in some cores can be migrated to other cores and executed in time, balancing the execution durations of tasks with different priorities within the cores.
Fig. 3 is a detailed implementation flowchart of step S130 according to an exemplary embodiment.
In some embodiments, referring to fig. 3, in the step S130, task migration scheduling is performed between cores according to the degree of imbalance of task allocation, including the following steps S310, S320 and S330.
For the same core, based on the migration state determined in step S310, one of steps S320 or S330 is executed accordingly; across the cores of the multi-core processor, some cores may be migrating tasks out while other cores are migrating tasks in at the same time.
For example, steps S310, S320 and S330 are implemented by a scheduling module located in each core and executed by each core; in some embodiments, the scheduling module of a migrate-out core need only perform steps S310 and S320, and the scheduling module of a migrate-in core need only perform steps S310 and S330.
In step S310, whether task migration is required in each core, and the corresponding migration state, are determined according to the degree of imbalance of the task allocation numbers among the cores.
In some embodiments, the task allocation number of a core H1 may be unbalanced relative to the other cores. If the task allocation number of core H1 is greater than the number in the balanced state, core H1 needs to perform task migration and the corresponding migration state is the migrate-out state; if the task allocation number of core H1 is smaller than the number in the balanced state, core H1 needs to perform task migration and the corresponding migration state is the migrate-in state; if the task allocation number of core H1 is equal to, or within the range of, the number in the balanced state, core H1 need not perform task migration.
Fig. 4 is a detailed implementation flowchart of step S310 according to an exemplary embodiment.
In some embodiments, referring to fig. 4, in step S310, according to the degree of imbalance of the task allocation number between the cores, it is determined whether task migration and a corresponding migration state are needed in each core, and the method includes the following steps: s410, S420, S431, S432 and S433.
In step S410, the number of core balancing tasks is determined according to the total number of cores and the total number of tasks of the multi-core processor.
The core balancing number is the number of tasks corresponding to the balanced state of task allocation among the cores. The core balancing number may be a single determined value or a range of values.
In some embodiments, in the step S410, determining the number of core balancing tasks according to the total number of cores and the total number of tasks of the multi-core processor includes: acquiring the total task number of the active state based on the atomic counter; and calculating the average number of the tasks according to the total number of the tasks and the total number of the cores of the multi-core processor, wherein the average number of the tasks is used as the balanced number of the cores.
An atomic counter is a counter for counting the number of global tasks; all tasks involved in the initialization configuration information can be counted in it. With the atomic counter, the total number of active tasks is obtained safely and quickly, without traversing the task list of each core to count its tasks and summing the per-core counts to compute the total. In other embodiments, it is understood that the total number of tasks may also be obtained by statistical calculation over the task list of each core.
For a vehicle operating system, the state of a task may correspond to a thread state, including, but not limited to: sleeping, ready, running, interrupt, and waiting. An active task is one that is alive and not yet finished, and includes tasks in the running, ready, waiting and blocking states.
The sleeping state: the task exists only in the form of code and has not been handed over to the operating system for management. The ready state: the task is executable and awaiting allocation of processor (CPU) resources. The running state: the task has obtained processor resources and is executing. The interrupt state: in response to an interrupt request, the original processing logic is suspended and an interrupt service routine (ISR) is executed. The waiting state (also referred to as a suspended state): the running task must wait for an event to occur before it can continue; its processor usage is relinquished and it waits.
The core average task number is calculated from the total number of tasks in the active state, active_tasks, and the total core count of the multi-core processor, cores.
As an example, the calculation formula of the core average task number per_core_tasks is expressed as follows:
per_core_tasks = floor(active_tasks / cores) + need_balance, (1)
need_balance = (active_tasks % cores != 0) ? 1 : 0, (2)
where floor() denotes the rounding-down function and need_balance denotes the balance constant. Equation (2) states that the balance constant is 1 when the total task number is not evenly divisible by the total core count, i.e., the remainder is not 0; the balance constant is 0 when the total task number is evenly divisible by the total core count, i.e., the remainder is 0.
For example, for cores 0 to 2 illustrated in fig. 2, if the total number of active tasks active_tasks is 8 and the total core count cores is 3, the core average task number per_core_tasks = floor(8/3) + 1 = 3.
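Equations (1) and (2) can be checked with a short sketch (a direct transcription of the formulas, not the patent's implementation):

```python
import math

def per_core_tasks(active_tasks: int, cores: int) -> int:
    """Equations (1)-(2): the core average task number, rounded up by a
    balance constant of 1 when the tasks do not divide evenly across cores."""
    need_balance = 1 if active_tasks % cores != 0 else 0
    return math.floor(active_tasks / cores) + need_balance

print(per_core_tasks(8, 3))  # Fig. 2 example: floor(8/3) + 1 = 3
```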
In other embodiments, in the step S410, determining the number of core balancing tasks according to the total number of cores and the total number of tasks of the multi-core processor includes: acquiring the total task number of the active state based on the atomic counter; determining a target quantity floating value according to the total task quantity; calculating the average number of tasks of the cores according to the total number of tasks and the total number of cores of the multi-core processor; and generating an equilibrium number range according to the average task number of the cores and the target number floating value, and considering that the task number of the cores is equilibrium when the task number of the cores is in the equilibrium number range.
Determining a target number floating value according to the total task number includes: determining the corresponding target number floating value according to the number interval in which the total task number falls. In some embodiments, a mapping table between number intervals and floating values is pre-stored; for example, a total task number of Z1 to Z2 maps to floating value f1, and a total task number of Z2 to Z3 maps to floating value f2, where Z3 > Z1 and f2 > f1. The interval containing the total task number is looked up in the mapping table to obtain the corresponding target number floating value.
In some embodiments, the balanced number range is generated from the core average task number and the target number floating value as follows: the core average task number per_core_tasks is floated up and down by the target number floating value f0, giving the balanced number range [per_core_tasks - f0, per_core_tasks + f0]. In other embodiments, the floating value f0 may instead be applied upward only, giving the balanced number range [per_core_tasks, per_core_tasks + f0], and so on.
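Both range variants can be sketched as follows (the function and parameter names are illustrative assumptions):

```python
def balance_range(per_core_tasks: int, f0: int, symmetric: bool = True):
    """Balanced number range from the core average task number and the
    target number floating value f0: symmetric floats both up and down;
    otherwise the float is applied upward only."""
    low = per_core_tasks - f0 if symmetric else per_core_tasks
    return (low, per_core_tasks + f0)

print(balance_range(3, 1))         # (2, 4)
print(balance_range(3, 1, False))  # (3, 4)
```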
In step S420, a number relationship between the number of task assignments for each core and the number of core-balanced tasks is determined.
For each core, at any moment while tasks are running, the quantity relationship is one of three cases: greater than, less than, or equal to. Because migration scheduling of inter-core tasks is performed dynamically, the quantity relationship of the same core may change at different times.
In step S431, when the quantity relationship indicates that the task allocation number of a first core is greater than the core balancing task number, it is determined that the first core needs to perform task migration and the corresponding migration state is the migrate-out state; the number of tasks to migrate out is the difference between the task allocation number and the core balancing task number.
For example, the task allocation number of core 0 is 4 and the core balancing task number is 3, so core 0 belongs to the first core, needs to perform task migration, and is in the migrate-out state.
In some embodiments, the number of tasks actually migrated out may be equal to or less than the number calculated from the difference; an example in which fewer tasks are migrated is when a task selected for migration is already running in the core before migration, in which case it is not migrated.
In step S432, when the quantity relationship indicates that the task allocation number of a second core is smaller than the core balancing task number, it is determined that the second core needs to perform task migration and the corresponding migration state is the migrate-in state; the number of tasks to migrate in is the difference between the core balancing task number and the task allocation number.
For example, the task allocation number of core 1 is 1 and the core balancing task number is 3, so core 1 belongs to the second core, needs to perform task migration, and is in the migrate-in state.
In some embodiments, the number of tasks actually migrated in may be equal to or less than the number calculated from the difference; an example in which fewer tasks are migrated is when the number of tasks in the migration queue is smaller than the required migrate-in number.
In step S433, when the quantity relationship indicates that the task allocation number of a third core falls within the range of the core balancing task number or equals the core balancing task number, it is determined that the third core does not need to perform task migration.
For example, the task allocation number of core 2 is 3 and the core balancing task number is 3, so core 2 belongs to the third core and does not need to perform task migration.
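Steps S431 to S433 amount to a three-way comparison against the balanced count, sketched here against a single balanced value (the range variant would substitute an interval test); the names are illustrative:

```python
def migration_state(assigned: int, balanced: int) -> str:
    """Classify a core per steps S431-S433."""
    if assigned > balanced:
        return "migrate-out"  # first core: surplus of assigned - balanced tasks
    if assigned < balanced:
        return "migrate-in"   # second core: deficit of balanced - assigned tasks
    return "none"             # third core: no migration needed

# Fig. 2 example with a balanced count of 3:
for core, n in {"core0": 4, "core1": 1, "core2": 3}.items():
    print(core, migration_state(n, 3))
```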
In step S320, if it is determined that the first core needs to perform task migration and the migration state is the migrate-out state, one or more first tasks with relatively higher priority in the first core's ready queue are migrated to the migration queue, according to the relative priorities of the tasks in that ready queue.
The task to be migrated in each core is described as a first task.
FIG. 5A is a diagram of a migration queue and tasks in an active state within cores, according to an example embodiment. FIG. 5B is a schematic diagram of running task highlighting in an active state within cores according to an exemplary embodiment.
Referring to FIG. 5A, a rectangular box is used to illustrate a migration queue 510 and dashed boxes are used to illustrate tasks in each of cores 0-2 that are active. In some embodiments, the migration queue is a data structure for storing migration tasks between cores, and may be a data list; the transfer and migration of tasks between cores is accomplished by the migration queue 510. Referring to fig. 5B, a fill box and a white box are used to distinguish between the running task and the ready task of each core, where the fill box illustrates a running task and the white box illustrates a ready task, which is located in a ready list. For example, the running task at the current time in core 0 is task T1, and the ready tasks are task T2, task T3, and task T4. The running task at the current time in core 1 is task T5. The running task at the current moment in the core 2 is a task T6, and the ready tasks are a task T7 and a task T8.
FIG. 5C is a state diagram of a first core migrating a first task to a migration queue according to an example embodiment.
Considering the intra-core run-by-priority policy and the inter-core scheduling migration policy, embodiments of the disclosure adopt the following migration policy: tasks are selected from the ready queue for migration, and the running task is never migrated. Interrupting the running task (which is the highest-priority task in its core) incurs a time cost, and resource preemption by a migrated high-priority task could prevent important tasks (such as a task deciding whether to brake according to road conditions, or a task judging the vehicle's driving state from perceived traffic-light information) from being executed in time, causing vehicle malfunctions and safety hazards.
For example, referring to fig. 5C, where core 0 belongs to the first core, the number of tasks to migrate out is 4 - 3 = 1, so 1 first task with relatively higher priority (level 3 here) in core 0's ready queue (containing ready tasks T2, T3 and T4) is migrated to the migration queue. Since tasks T2 and T3 have the same priority, the task with the earlier wake-up time is taken as the migrated task; here the first task is task T2.
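The selection policy above — never the running task, highest ready priority first, earlier wake-up breaking ties — can be sketched as follows (the Task structure and wake-time values are assumptions for illustration):

```python
from typing import NamedTuple

class Task(NamedTuple):
    name: str
    priority: int     # larger number = higher priority
    wake_time: float  # earlier wake-up wins ties

def pick_migrate_out(ready_queue: list) -> Task:
    """Pick the migrate-out candidate from the ready queue only,
    leaving the running task untouched."""
    return max(ready_queue, key=lambda t: (t.priority, -t.wake_time))

# Core 0's ready queue from Fig. 5C (wake times are illustrative):
ready = [Task("T2", 3, 1.0), Task("T3", 3, 2.0), Task("T4", 2, 0.5)]
print(pick_migrate_out(ready).name)  # T2
```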
Note that the migration queue 510 does not hold the actual contents of a task; it adjusts the correspondence between tasks and cores only through the tasks' membership in the data structure. For example, the first core moves tasks from its own ready queue into the migration queue 510, and the second core moves tasks from the migration queue 510 into its own ready queue.
In some embodiments, the migration queue 510 is maintained based on spin locks, and only one core can access the migration queue at a time.
A spin lock is a mechanism for protecting shared resources: at any time there is at most one holder, i.e., at most one execution unit can hold the lock at any time.
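A user-space sketch of the migration queue follows; a `threading.Lock` stands in for the spin lock, and the priority ordering described later (highest-priority task leaves the queue first) is included. This illustrates the data structure's role under stated assumptions, not the patent's kernel implementation:

```python
import heapq
import threading

class MigrationQueue:
    """Inter-core migration queue: holds task identities (not task
    contents), highest priority popped first, one accessor at a time."""
    def __init__(self):
        self._lock = threading.Lock()  # stand-in for the spin lock
        self._heap = []                # entries: (-priority, seq, name)
        self._seq = 0

    def push(self, name: str, priority: int) -> None:
        with self._lock:
            heapq.heappush(self._heap, (-priority, self._seq, name))
            self._seq += 1

    def pop_highest(self):
        with self._lock:
            return heapq.heappop(self._heap)[2] if self._heap else None

mq = MigrationQueue()
mq.push("T4", 2)
mq.push("T2", 3)
print(mq.pop_highest())  # T2 (higher priority leaves the queue first)
```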
In step S330, if it is determined that the second core needs to perform task migration and the migration state is the migrate-in state, one or more second tasks in the migration queue are migrated into the ready queue of the second core.
The tasks migrated into each core are described as second tasks; the second tasks are some or all of the first tasks.
FIG. 5D is a state diagram of a second core migrating a second task to a ready queue according to an example embodiment.
For example, referring to fig. 5D, where core 1 belongs to the second core, the number of tasks to migrate in is 3 - 1 = 2. Since only one task, T2, was previously migrated into the migration queue 510, core 1 in this embodiment migrates task T2 (an example of a second task) from the migration queue 510 into its own ready queue.
In some embodiments, within each core, the tasks in the ready queue and the running task run based on priority; when a target task in the ready queue has a higher priority than the running task, the target task preempts the processor resources, and the core switches from running the current task to running the target task.
For example, at the moment task T2 is migrated in, task T5 inside core 1 is the running task and task T2 is a ready task. Since the priority of task T2 is 3 and the priority of the running task T5 is 1, after core 1 migrates in task T2, during priority-based scheduling task T2 preempts the processor resources of task T5 and starts running, and task T5 enters a blocked state; task T2, indicated by a filled box in fig. 5D, is in the running state.
In some embodiments, the tasks within the migration queue 510 are ordered by priority from high to low. For example, in the case where a plurality of tasks exist in the above-described migration queue, the plurality of tasks are ordered in order of priority from high to low.
Migrating one or more second tasks in the migration queue to a ready queue of the second core, comprising: and based on the priority ordering of the tasks in the migration queue, migrating a second task with the priority ordering in the migration queue at a preset priority to a ready queue of the second core.
By migrating second tasks from the migration queue according to the priority ordering, tasks with higher priority can be executed quickly after being migrated into the second core.
Fig. 6 is a detailed implementation flowchart of step S130 according to another exemplary embodiment.
In other embodiments, referring to fig. 6, in step S130, task migration scheduling is performed between cores according to the degree of imbalance of task allocation, and the method includes the following steps: s610, S620, and S630.
In step S610, it is determined in each core whether there is a target migration task that needs to be migrated, according to the degree of imbalance in the execution durations of the tasks of different priorities within each core.
The target migration task is: when the running duration of the running task exceeds a set threshold, the task in the ready queue corresponding to a target priority whose running time is still zero.
For example, for scenario B of the above example: within core 0, task T1 can generally complete within 2 s. However, during actual running, task T1 has already executed for 3 s (an example of the set threshold) while the running time of task T2 is still 0; the target migration task in core 0 is therefore task T2.
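The detection in step S610 can be sketched as follows (the function name, the per-task run-time dictionary, and the threshold value are illustrative assumptions):

```python
def find_target_migration_task(running_elapsed: float, threshold: float,
                               ready_runtimes: dict):
    """Return the name of a ready task whose accumulated run time is
    still zero while the running task has exceeded its threshold,
    or None when execution durations are still balanced."""
    if running_elapsed <= threshold:
        return None
    for name, runtime in ready_runtimes.items():
        if runtime == 0:
            return name
    return None

# Scenario B sketch: running task at 3 s against an assumed 2 s threshold,
# ready task T2 has not run at all.
print(find_target_migration_task(3.0, 2.0, {"T2": 0, "T3": 0.4}))  # T2
```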
In step S620, if the target migration task exists in a fourth core, it is determined whether a target migration core exists among the cores other than the fourth core, where the highest task priority in the target migration core is lower than the priority of the target migration task.
In some embodiments, this step S620 is performed by the scheduling module of the fourth core. In other embodiments, execution may be performed by a scheduling module of a core other than the fourth core.
Core 0 is an example of the fourth core. In some embodiments, core 0 determines whether a target migration core exists among the other cores to receive the target migration task, by obtaining the highest task priority of each other core.
For example, core 0 determines whether a target migration core exists by traversing the highest task priorities of the other cores. Core 0 obtains the highest task priority in core 1; the priority of the target migration task T2 is level 3, which is higher than the highest task priority in core 1, so it can be determined that a target migration core exists and is core 1. Core 0 also obtains that the highest task priority in core 2 is level 4; the priority of the target migration task T2 is lower than this, which indicates that core 2 is not a target migration core.
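The qualification test in step S620 — a core qualifies when its highest task priority is strictly below the target migration task's priority — can be sketched as follows (names and the core-priority dictionary are illustrative):

```python
def find_target_cores(task_priority: int, core_highest: dict) -> list:
    """Return the cores whose highest task priority is strictly lower
    than the target migration task's priority."""
    return [core for core, p in core_highest.items() if p < task_priority]

# T2 has priority 3; core 1's highest task priority is 1 (T5),
# core 2's is 4 (T6), per the Fig. 2 example:
print(find_target_cores(3, {"core1": 1, "core2": 4}))  # ['core1']
```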
In step S630, if the target migration core exists, the target migration task is migrated to the target migration core.
In some scenarios, there may be one or more target-migrating cores.
FIG. 7A is a schematic diagram of a migration queue and running task highlighting in an active state within each core according to another exemplary embodiment. FIG. 7B is a state diagram of a fourth core migrating a target-migrating task to a migration queue if it is determined that a target-migrating core exists, according to another example embodiment. FIG. 7C is a state diagram of a target-migrating core migrating a target-migrating task from a migration queue to a ready queue according to another example embodiment.
Referring to FIG. 7A, a rectangular box is used to illustrate a migration queue 710, and dashed boxes are used to illustrate tasks in each of cores 0-2 that are active, where filled dashed boxes illustrate running tasks, white dashed boxes illustrate ready tasks, and ready tasks are located in a ready list. Referring to FIG. 7B, it is determined at core 0 (as an example of a fourth core) that there is a target-migrating task T2, and that there is a target-migrating core, core 1, that the target-migrating task T2 is migrated to the migration queue 710. Referring to fig. 7C, the target migrating core 1 has migrated the target migrating task T2 from the migration queue 710 to the ready queue, and the task T2 performs resource preemption on the task T5 that is originally running according to the priority order, so that the core 1 transitions from the running state of the original task T5 to the running state of the task T2.
In some embodiments, the migration queue 710 is maintained based on spin locks, and only one core can access the migration queue at a time.
Fig. 8 is a detailed implementation flowchart of step S630 according to an exemplary embodiment.
In some embodiments, referring to fig. 8, in step S630, the target migration task is migrated into the target migration core, including the following steps: s810, S820, and S830.
In step S810, the target migration task is migrated to the migration queue, and the migration target corresponding to the target migration task is marked according to the target migration core.
For example, referring to FIG. 7B, in the migration queue 710, the target-migrating task T2 is marked with a migration target as core 1.
In some embodiments, marking the migration target corresponding to the target migration task according to the target migration core includes: when there are multiple target migration cores, marking all of the multiple target migration cores as migration targets corresponding to the target migration task.
For example, when there are multiple target migration cores, such as target migration core X and target migration core Y, the target migration task is marked with multiple migration targets. When one of them, say target migration core X, accesses the migration queue 710 under the spin lock first, core X migrates the target migration task in the migration queue 710 into its own ready queue. When the other target migration core Y later accesses the migration queue, the target migration task marked for core Y no longer exists in the queue, so core Y migrates no task. This embodiment ensures that the higher-priority target migration task is executed in time once migrated into a target migration core.
In other embodiments, marking the migration target corresponding to the target migration task according to the target migration core includes: when there are multiple target migration cores, marking the core whose highest task priority is the smallest among the multiple target migration cores as the migration target corresponding to the target migration task.
For example, when a migration target is marked while multiple target migration cores exist, the highest task priorities of the target migration cores are compared using bubble sort, and the core with the lowest highest priority is marked as the migration target. This embodiment ensures that the higher-priority target migration task is executed in time once migrated into the target migration core; meanwhile, because the marked core has the smallest highest priority among the qualifying target migration cores, the task it is currently executing has a lower priority, its execution urgency is lower, and the impact of delaying it is smaller.
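This second marking policy reduces to taking a minimum; a sketch follows (the text mentions bubble sort, but any comparison that yields the minimum marks the same core):

```python
def pick_migration_target(qualifying_cores: dict) -> str:
    """Among qualifying target migration cores, mark the one whose
    highest task priority is smallest (least urgent to disturb)."""
    return min(qualifying_cores, key=qualifying_cores.get)

# Two hypothetical qualifying cores with highest task priorities 2 and 1:
print(pick_migration_target({"coreX": 2, "coreY": 1}))  # coreY
```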
In step S820, each core accesses the migration queue at the migration queue access timing set by each core.
In some embodiments, the migration queue access timing set by each core includes one of the following:
Each core is preset with an access time sequence of a migration queue, and the operation of accessing the migration queue is executed based on the access time sequence; or alternatively
Executing the operation of accessing the migration queue under the condition that each core generates the completion of the operation task processing; or alternatively
Executing the operation of accessing the migration queue under the condition that the number of tasks of each core occurrence ready queue changes; or alternatively
A check flag for inspecting migration-queue tasks is set on the migration target core; each core determines whether its check flag is set, and a core whose check flag is set performs the operation of accessing the migration queue.
In step S830, when the fifth core accesses the migration queue and determines that there is a first target migration task marking the fifth core as a migration target, the first target migration task in the migration queue is migrated into a ready queue of the fifth core.
In some embodiments, there may be one or more first target migration tasks in the migration queue that point to the fifth core.
Fig. 9 is a detailed implementation flowchart of step S130 according to yet another exemplary embodiment.
In step S130, performing task migration scheduling between cores according to the degree of imbalance of task allocation may include, in addition to steps S610 and S620, steps corresponding to the negative branch taken when no target migration core exists: S910, S920 and S930. For simplicity of illustration, only steps S910 to S930 are shown in fig. 9.
In step S910, the fourth core marks the target migration task as a state to be migrated, and continuously monitors whether there is a target migration core in other cores except the fourth core.
In step S920, if it is detected that the target migration core does not exist within the set period of time, the flag of the state to be migrated of the target migration task is canceled.
In step S930, if it is detected that the target migration core exists within the set period of time, the target migration task is migrated to the migration queue, the migration target of the target migration task is marked as the target migration core, and an inspection mark for inspecting the migration queue task is set on the target migration core.
In the embodiment including steps S910 to S930, when the target migration task temporarily has no corresponding target migration core, the to-be-migrated mark set on the task reserves a buffer window of the set duration, during which monitoring continues for a qualifying target migration core. If no target migration core appears within the buffer window, the mark is cancelled, and the target migration task can then be executed, or wait, in its own core according to internal scheduling. If a target migration core appears within the buffer window, the to-be-migrated target migration task can be migrated to the migration queue. With this short-wait strategy, a high-priority task that cannot be executed in its own core can be arranged in time onto another core whose priority condition is satisfied.
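The buffered wait of steps S910 to S930 can be sketched as a bounded poll (the poll callback and the durations are assumptions for illustration):

```python
import time

def wait_for_target_core(poll, buffer_s: float = 0.2,
                         interval_s: float = 0.02):
    """Keep the task in the to-be-migrated state for at most buffer_s;
    poll() returns a target migration core name, or None if none yet.
    Returns the core found within the window, else None (mark cancelled)."""
    deadline = time.monotonic() + buffer_s
    while time.monotonic() < deadline:
        core = poll()
        if core is not None:
            return core
        time.sleep(interval_s)
    return None

print(wait_for_target_core(lambda: "core1"))  # core1 (found immediately)
```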
In some embodiments, within each core, the tasks in the ready queue and the running task run based on priority; when a target task in the ready queue has a higher priority than the running task, the target task preempts the processor resources, and the core switches from running the current task to running the target task.
For example, referring to fig. 7C, the task in core 1's ready queue is the target migration task T2 migrated in from the migration queue 710. Since the priority of task T2 is higher than that of the previously running task T5, during priority-based scheduling task T2 preempts the processor resources of task T5, and core 1 switches from running task T5 to running task T2.
Exemplary apparatus
FIG. 10 is a block diagram illustrating an apparatus for multi-core scheduling according to an example embodiment.
Referring to fig. 10, an apparatus 1000 for multi-core scheduling according to an embodiment of the present disclosure includes: an information acquisition module 1010, a determination module 1020, and a scheduling module 1030.
The information obtaining module 1010 is configured to obtain initial task allocation information of each core and priorities of tasks on each core in the vehicle multi-core processor.
The determining module 1020 is configured to determine a degree of task allocation imbalance according to the initial task allocation information and the priorities of the tasks on each core, where the degree of task allocation imbalance indicates a degree of imbalance in at least one of the number of tasks allocated among the cores and the execution durations of tasks with different priorities within each core.
The scheduling module 1030 is configured to perform task migration scheduling among the cores through a migration queue according to the degree of task allocation imbalance; the migration queue is a data structure for storing tasks migrated among the cores, and the correspondence between tasks and cores is adjusted through the membership of the tasks in the data structure.
In some embodiments, the scheduling module 1030 includes: a number-imbalance migration determining module, an outgoing module, and an incoming module.
The number-imbalance migration determining module is configured to determine, for each core, whether task migration is needed and the corresponding migration state, according to the degree of imbalance of the task allocation numbers among the cores.
In some embodiments, determining for each core whether task migration and a corresponding migration state are needed according to the degree of imbalance of the task allocation numbers among the cores includes: determining a core-balanced task number according to the total number of cores and the total number of tasks of the multi-core processor; determining the numerical relationship between each core's task allocation number and the core-balanced task number; when the relationship indicates that a first core's task allocation number is greater than the core-balanced task number, determining that the first core needs task migration and its migration state is a migrate-out state, the number of tasks to migrate out being the difference between its task allocation number and the core-balanced task number; when the relationship indicates that a second core's task allocation number is smaller than the core-balanced task number, determining that the second core needs task migration and its migration state is a migrate-in state, the number of tasks to migrate in being the difference between the core-balanced task number and its task allocation number; and when the relationship indicates that a third core's task allocation number is within the balanced range or equal to the core-balanced task number, determining that the third core does not need task migration.
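The classification described above can be sketched as follows (illustrative only; the core identifiers and an exact balanced count, rather than the range variant described later, are assumptions):

```python
def classify_cores(task_counts, balanced):
    """Map each core to a migration state by comparing its task count
    with the core-balanced task number:
    ('migrate_out', n) / ('migrate_in', n) / ('none', 0)."""
    states = {}
    for core, count in task_counts.items():
        if count > balanced:
            states[core] = ("migrate_out", count - balanced)
        elif count < balanced:
            states[core] = ("migrate_in", balanced - count)
        else:
            states[core] = ("none", 0)
    return states
```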
In some embodiments, determining the core-balanced task number according to the total number of cores and the total number of tasks of the multi-core processor includes: acquiring the total number of active-state tasks based on an atomic counter; and calculating the average number of tasks per core from the total number of tasks and the total number of cores of the multi-core processor, this average being used as the core-balanced task number.
In some embodiments, determining the core-balanced task number according to the total number of cores and the total number of tasks of the multi-core processor includes: acquiring the total number of active-state tasks based on an atomic counter; determining a target number floating value according to the total number of tasks; calculating the average number of tasks per core from the total number of tasks and the total number of cores of the multi-core processor; and generating a balanced number range from the average and the target number floating value, a core's task number being considered balanced when it falls within this range.
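The range variant might be sketched as follows; the rule deriving the floating value from the total task count is an assumed placeholder, since the embodiment does not fix a specific rule:

```python
def balanced_range(total_tasks, total_cores, float_ratio=0.1):
    """Average tasks per core, widened by a floating value derived from
    the total task count (the derivation rule here is an assumption)."""
    avg = total_tasks / total_cores
    delta = total_tasks * float_ratio / total_cores
    return avg - delta, avg + delta

def is_balanced(count, total_tasks, total_cores, float_ratio=0.1):
    """A core is considered balanced when its task count lies in the range."""
    lo, hi = balanced_range(total_tasks, total_cores, float_ratio)
    return lo <= count <= hi
```

For example, 100 tasks on 4 cores gives an average of 25 with a band of roughly 22.5 to 27.5, so a core with 26 tasks is balanced while one with 30 is not.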
The outgoing module is configured to: when the first core needs task migration and its migration state is the migrate-out state, migrate one or more first tasks with relatively higher priority in the ready queue of the first core to the migration queue according to the relative priorities of the tasks in that ready queue.
The incoming module is configured to: when the second core needs task migration and its migration state is the migrate-in state, migrate one or more second tasks from the migration queue into the ready queue of the second core.
In some embodiments, the migration queue is maintained based on spin locks, and only one core can access the migration queue at a time.
In some embodiments, the tasks in the migration queue are ordered by priority from high to low. Migrating one or more second tasks in the migration queue to the ready queue of the second core includes: based on the priority ordering of the tasks in the migration queue, migrating the second task ranked at a preset position in the migration queue to the ready queue of the second core.
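A priority-ordered migration queue can be sketched as follows; a `threading.Lock` stands in for the spin lock of the preceding paragraph, and the smaller-number-means-higher-priority convention is an assumption:

```python
import heapq
import threading

class MigrationQueue:
    """Migration queue ordered by priority (smaller number = higher
    priority, an assumption). The lock ensures only one core touches
    the queue at a time, standing in for a spin lock."""
    def __init__(self):
        self._heap = []
        self._seq = 0                   # tie-breaker for equal priorities
        self._lock = threading.Lock()

    def push(self, prio, task):
        with self._lock:
            heapq.heappush(self._heap, (prio, self._seq, task))
            self._seq += 1

    def pop_highest(self):
        with self._lock:
            return heapq.heappop(self._heap)[2]   # highest-priority task

    def __len__(self):
        return len(self._heap)
```

In a kernel implementation the lock would be an actual spin lock rather than a sleeping mutex; the Python lock only illustrates the mutual exclusion.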
In other embodiments, the scheduling module 1030 includes: an execution-duration imbalance migration determining module, a target migration core determining module, and a migration module.
The execution-duration imbalance migration determining module is configured to: determine, for each core, whether a target migration task to be migrated exists, according to the degree of imbalance of the execution durations of tasks with different priorities within the core. The target migration task includes a task in the ready queue corresponding to a target priority whose accumulated execution duration is zero, in the case that the running duration of the currently running task exceeds a set threshold.
The target migration core determining module is configured to: if a target migration task exists in a fourth core, determine whether a target migration core exists among the cores other than the fourth core, where the highest priority among the tasks on the target migration core is lower than the priority of the target migration task.
The migration module is configured to: if the target migration core exists, migrate the target migration task into the target migration core.
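The detection performed by this module group can be sketched as follows (the field names and the smaller-number-means-higher-priority convention are assumptions):

```python
def find_target_migration_task(running_time, ready_queue, threshold):
    """If the running task has held the core past `threshold`, pick the
    highest-priority ready task whose accumulated run time is zero;
    otherwise there is no target migration task."""
    if running_time <= threshold:
        return None
    starved = [t for t in ready_queue if t["run_time"] == 0]
    if not starved:
        return None
    return min(starved, key=lambda t: t["prio"])  # smaller prio = higher
```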
In some embodiments, migrating the target migration task to the target migration core includes: migrating the target migration task to the migration queue and marking the migration target corresponding to the target migration task according to the target migration core; having each core access the migration queue at its set migration queue access time; and, when a fifth core accesses the migration queue and determines that a first target migration task marking the fifth core as its migration target exists, migrating that first target migration task from the migration queue into the ready queue of the fifth core.
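The per-core access step can be sketched as follows (the `targets` field, the list-backed queue, and the core identifiers are illustrative assumptions):

```python
def check_migration_queue(core_id, migration_queue):
    """At this core's scheduled access time, take every queued task whose
    migration target marks this core; leave the rest in the queue."""
    taken, remaining = [], []
    for task in migration_queue:
        if core_id in task["targets"]:
            taken.append(task)          # goes into this core's ready queue
        else:
            remaining.append(task)
    migration_queue[:] = remaining      # update the shared queue in place
    return taken
```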
In some embodiments, the migration queue is maintained based on spin locks, and only one core can access the migration queue at a time.
In some embodiments, marking the migration target corresponding to the target migration task according to the target migration core includes: when a plurality of target migration cores exist, marking all of the plurality of target migration cores as migration targets corresponding to the target migration task; or, when a plurality of target migration cores exist, marking the core whose highest task priority is the lowest among the target migration cores as the migration target corresponding to the target migration task.
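The second marking strategy can be sketched as follows; the mapping from core to its top task priority, and the smaller-number-means-higher-priority convention, are assumptions:

```python
def pick_target_core(core_top_prio):
    """`core_top_prio` maps core id -> the smallest priority number among
    that core's tasks (smaller number = higher priority, an assumption).
    Pick the core whose top priority is lowest, i.e. the largest number,
    so the migrated task meets the least competition."""
    return max(core_top_prio, key=core_top_prio.get)
```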
In some embodiments, in addition to the execution-duration imbalance migration determining module, the target migration core determining module, and the migration module, the scheduling module 1030 further includes: a marking and monitoring module and a mark cancelling module.
The marking and monitoring module is configured to: when no target migration core exists, have the fourth core mark the target migration task as being in a to-be-migrated state, and continuously monitor whether a target migration core appears among the cores other than the fourth core.
The mark cancelling module is configured to: when no target migration core appears within a set period of time, cancel the to-be-migrated mark of the target migration task.
The migration module is further configured to: when a target migration core is detected within the set period of time, migrate the target migration task to the migration queue, mark the migration target of the target migration task as that target migration core, and set a check flag for checking tasks in the migration queue on the target migration core.
In some embodiments, within each core, the tasks in the ready queue and the running task are run based on their priority levels; when the ready queue contains a target task with a higher priority than the running task, the target task preempts the processor resource, and the core switches from running the previous task to running the target task.
For further details of this embodiment, reference may be made to the descriptions in the first embodiment, which are not repeated here.
Exemplary vehicle
FIG. 11 is a block diagram of a vehicle, according to an exemplary embodiment.
Referring to fig. 11, a vehicle 1100 provided by an embodiment of the present disclosure may be a fuel vehicle, a hybrid vehicle, an electric vehicle, a fuel-cell vehicle, or another type of vehicle.
Referring to FIG. 11, a vehicle 1100 may include a number of subsystems, such as a drive system 1110, a control system 1120, a perception system 1130, a communication system 1140, an information display system 1150, and a computing processing system 1160. Vehicle 1100 may also include more or fewer subsystems, and each subsystem may also include multiple components, which are not described in detail herein.
The drive system 1110 includes components that provide powered motion to the vehicle 1100. Such as an engine, energy source, transmission, etc.
The control system 1120 includes components that provide control for the vehicle 1100. Such as vehicle control, cabin equipment control, driving assistance control, etc.
The perception system 1130 includes components that provide ambient environment perception for the vehicle 1100. For example, a vehicle positioning system, a laser sensor, a voice sensor, an ultrasonic sensor, an image pickup apparatus, and the like.
The communication system 1140 includes components that provide a communication link for the vehicle 1100. For example, mobile communication networks (e.g., 3G, 4G, 5G networks, etc.), WiFi, Bluetooth, Internet of Vehicles, etc.
The information display system 1150 includes components that provide various information displays for the vehicle 1100. For example, vehicle information display, navigation information display, entertainment information display, and the like.
The computing processing system 1160 includes components that provide data computing and processing capabilities for the vehicle 1100. The computing processing system 1160 may include at least one processor 1161 and memory 1162. Processor 1161 may execute instructions stored in memory 1162.
The processor 1161 may be any conventional processor, such as a commercially available CPU. The processor may also include, for example, a Graphics Processing Unit (GPU), a Field Programmable Gate Array (FPGA), a System on Chip (SoC), an Application Specific Integrated Circuit (ASIC), or a combination thereof. The processor 1161 includes a system module.
The memory 1162 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
In the disclosed embodiment, a set of instructions is stored in the memory 1162, and the system module of the processor 1161 may execute the set of instructions to implement all or part of the steps of the method for multi-core scheduling described in any of the above-described exemplary embodiments.
Exemplary electronic device
Fig. 12 is a block diagram of an electronic device, according to an example embodiment.
Referring to fig. 12, an electronic device 1200 provided by an embodiment of the present disclosure may be a vehicle controller, an in-vehicle terminal, an in-vehicle computer, or other type of electronic device.
Referring to fig. 12, an electronic device 1200 may include at least one processor 1210 and memory 1220. Processor 1210 may execute instructions stored in memory 1220. The processor 1210 is communicatively coupled to the memory 1220 via a data bus. In addition to memory 1220, processor 1210 may be communicatively coupled with input devices 1230, output devices 1240, and communication devices 1250 via a data bus.
Processor 1210 may be any conventional processor, such as a commercially available CPU. The processor may also include, for example, a Graphics Processing Unit (GPU), a Field Programmable Gate Array (FPGA), a System on Chip (SoC), an Application Specific Integrated Circuit (ASIC), or a combination thereof.
The memory 1220 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
In the embodiment of the present disclosure, the memory 1220 has executable instructions stored therein, and the processor 1210 may read the executable instructions from the memory 1220 and execute the instructions to implement all or part of the steps of the method for multi-core scheduling according to any of the above-described exemplary embodiments.
Exemplary computer-readable storage Medium
In addition to the methods and apparatus described above, exemplary embodiments of the present disclosure may also be a computer program product or a computer readable storage medium storing the computer program product. The computer program product comprises computer program instructions executable by a processor to perform all or part of the steps described in any of the methods of the exemplary embodiments described above.
The computer program product may include program code for performing the operations of embodiments of the present application, written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, conventional procedural programming languages such as the "C" programming language, and scripting languages such as Python. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the readable storage medium include: a Static Random Access Memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic or optical disk, or any suitable combination of the foregoing.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method of multi-core scheduling, comprising:
acquiring initial allocation information of tasks of each core and priorities of tasks on each core in a vehicle multi-core processor;
Determining the imbalance degree of task allocation according to the initial allocation information of the tasks and the priorities of the tasks on the cores, wherein the imbalance degree of task allocation represents the imbalance degree of at least one of the number of task allocation among the cores and the execution time of the tasks with different priorities in the cores;
Performing task migration scheduling among the cores through a migration queue according to the degree of task allocation imbalance; the migration queue is a data structure for storing tasks migrated among the cores, and the correspondence between tasks and cores is adjusted through the membership of the tasks in the data structure.
2. The method of claim 1, wherein performing task migration scheduling between cores based on the degree of task allocation imbalance comprises:
According to the imbalance degree of the task allocation quantity among the cores, determining whether task migration and corresponding migration states are needed to be carried out in each core or not respectively;
and migrating, when the first core needs task migration and its migration state is a migrate-out state, one or more first tasks with relatively higher priority in the ready queue of the first core to the migration queue according to the relative priorities of the tasks in that ready queue.
3. The method as recited in claim 2, further comprising:
and migrating, when the second core needs task migration and its migration state is a migrate-in state, one or more second tasks from the migration queue into the ready queue of the second core.
4. A method according to claim 3, wherein tasks within the migration queue are ordered by priority from high to low;
Migrating one or more second tasks in the migration queue to a ready queue of the second core, comprising:
and based on the priority ordering of the tasks in the migration queue, migrating the second task ranked at a preset position in the migration queue to the ready queue of the second core.
5. The method of claim 1, wherein migrating the target-migrating task to the target-migrating core comprises:
The target migration task is migrated to the migration queue, and a migration target corresponding to the target migration task is marked according to the target migration core;
at a migration queue access time set by each core, each core accesses the migration queue;
And under the condition that the fifth core accesses the migration queue and determines that a first target migration task marking the fifth core as a migration target exists, migrating the first target migration task in the migration queue into a ready queue of the fifth core.
6. The method of claim 5, wherein migrating the target-migrating task to the target-migrating core comprises:
The target migration task is migrated to the migration queue, and a migration target corresponding to the target migration task is marked according to the target migration core;
at a migration queue access time set by each core, each core accesses the migration queue;
And under the condition that the fifth core accesses the migration queue and determines that a first target migration task marking the fifth core as a migration target exists, migrating the first target migration task in the migration queue into a ready queue of the fifth core.
7. An apparatus for multi-core scheduling, comprising:
The information acquisition module is used for acquiring the initial allocation information of the tasks of each core and the priority of the tasks on each core in the vehicle multi-core processor;
The determining module is used for determining the imbalance degree of task allocation according to the initial allocation information of the tasks and the priorities of the tasks on the cores, wherein the imbalance degree of task allocation represents the imbalance degree of at least one of the number of task allocation among the cores and the execution time of the tasks with different priorities in the cores;
the scheduling module is used for performing task migration scheduling among the cores according to the imbalance degree of task allocation; the transfer queue is a data structure for storing transfer tasks among the cores, and the corresponding relation between the tasks and the cores is adjusted through the belonging relation of the tasks in the data structure.
8. A vehicle, characterized in that a set of instructions is stored therein, the set of instructions being executed by a system module of the vehicle to implement the method of any one of claims 1-6.
9. An electronic device, comprising:
a processor;
A memory for storing the processor-executable instructions;
The processor is configured to read the executable instructions from the memory and execute the instructions to implement the method of any of claims 1-6.
10. A computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, perform the method of any one of claims 1-6.
CN202410174062.8A 2022-12-29 2022-12-29 Multi-core scheduling method, device, vehicle, electronic equipment and medium Pending CN118034880A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410174062.8A CN118034880A (en) 2022-12-29 2022-12-29 Multi-core scheduling method, device, vehicle, electronic equipment and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202410174062.8A CN118034880A (en) 2022-12-29 2022-12-29 Multi-core scheduling method, device, vehicle, electronic equipment and medium
CN202211716884.1A CN116185582B (en) 2022-12-29 2022-12-29 Multi-core scheduling method, device, vehicle, electronic equipment and medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202211716884.1A Division CN116185582B (en) 2022-12-29 2022-12-29 Multi-core scheduling method, device, vehicle, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN118034880A true CN118034880A (en) 2024-05-14

Family

ID=86447095

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202410174062.8A Pending CN118034880A (en) 2022-12-29 2022-12-29 Multi-core scheduling method, device, vehicle, electronic equipment and medium
CN202211716884.1A Active CN116185582B (en) 2022-12-29 2022-12-29 Multi-core scheduling method, device, vehicle, electronic equipment and medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202211716884.1A Active CN116185582B (en) 2022-12-29 2022-12-29 Multi-core scheduling method, device, vehicle, electronic equipment and medium

Country Status (1)

Country Link
CN (2) CN118034880A (en)

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2792087B1 (en) * 1999-04-07 2001-06-15 Bull Sa METHOD FOR IMPROVING THE PERFORMANCE OF A MULTIPROCESSOR SYSTEM INCLUDING A WORK WAITING LINE AND SYSTEM ARCHITECTURE FOR IMPLEMENTING THE METHOD
US8397235B2 (en) * 2008-10-07 2013-03-12 Futurewei Technologies, Inc. User tolerance based scheduling method for aperiodic real-time tasks
CN104615488B (en) * 2015-01-16 2018-01-19 华为技术有限公司 The method and apparatus of task scheduling in heterogeneous multi-core reconfigurable calculating platform
CN107145388B (en) * 2017-05-25 2020-10-30 深信服科技股份有限公司 Task scheduling method and system under multi-task environment
CN107656813A (en) * 2017-09-29 2018-02-02 上海联影医疗科技有限公司 The method, apparatus and terminal of a kind of load dispatch
CN110633133A (en) * 2018-06-21 2019-12-31 中兴通讯股份有限公司 Task processing method and device and computer readable storage medium
CN109144691B (en) * 2018-07-13 2021-08-20 哈尔滨工程大学 Task scheduling and distributing method for multi-core processor
CN114168352B (en) * 2021-12-30 2022-11-11 科东(广州)软件科技有限公司 Multi-core task scheduling method and device, electronic equipment and storage medium
CN114816747A (en) * 2022-04-21 2022-07-29 国汽智控(北京)科技有限公司 Multi-core load regulation and control method and device of processor and electronic equipment

Also Published As

Publication number Publication date
CN116185582B (en) 2024-03-01
CN116185582A (en) 2023-05-30

Similar Documents

Publication Publication Date Title
CN110832512B (en) System and method for reducing latency in providing transport services
US11822958B2 (en) Method and a device for data transmission between an internal memory of a system-on-chip and an external memory
JP6535713B2 (en) System, method, and program for managing allocation of vehicle
CN110324806B (en) Control device, recording medium, and control method
CN115086438B (en) Task processing method, video processing unit, component and traffic equipment
CN113190282A (en) Android operating environment construction method and device
CN114461396A (en) LXC-based resource scheduling method, device, equipment and storage medium
CN115145210A (en) Method and device for controlling control signal of vehicle, medium and chip
CN116185582B (en) Multi-core scheduling method, device, vehicle, electronic equipment and medium
US11934865B2 (en) Vehicle control system for dynamically updating system components
CN112579271A (en) Real-time task scheduling method, module, terminal and storage medium for non-real-time operating system
CN102929800B (en) Cache consistency protocol derivation processing method
US8793423B2 (en) Servicing interrupt requests in a computer system
US20240054002A1 (en) Vehicle-mounted computer, computer execution method, and computer program
US20220121408A1 (en) Content presentation control device, presentation control method, and non-transitory computer-readable storage medium
CN111737013B (en) Chip resource management method and device, storage medium and system chip
CN115454594A (en) Vehicle domain controller communication signal period optimization method and system and vehicle
CN113888028A (en) Patrol task allocation method and device, electronic equipment and storage medium
US20210173720A1 (en) Dynamically direct compute tasks to any available compute resource within any local compute cluster of an embedded system
CN110262522B (en) Method and apparatus for controlling an autonomous vehicle
CN116880982A (en) Ros2 deterministic scheduling method and device, electronic equipment, storage medium and vehicle
CN115589434B (en) Request processing method, service-oriented system, ECU, vehicle and storage medium
US20240036941A1 (en) Vehicle-mounted computer, computer execution method, and computer program
CN117687763B (en) High concurrency data weak priority processing method and device, electronic equipment and storage medium
WO2024074090A1 (en) Smart cockpit implementation method, smart cockpit, and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination