WO2022141297A1 - Event processing method and apparatus - Google Patents

Event processing method and apparatus

Info

Publication number
WO2022141297A1
WO2022141297A1 PCT/CN2020/141805
Authority
WO
WIPO (PCT)
Prior art keywords
event
processing
data
computing resource
processing task
Prior art date
Application number
PCT/CN2020/141805
Other languages
English (en)
French (fr)
Inventor
唐斌
吴东君
刘道根
陶喆
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2020/141805
Priority to CN202080108269.5A
Publication of WO2022141297A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/54: Interprogram communication

Definitions

  • the present application relates to the field of automatic driving, and in particular, to an event processing method and device.
  • in an existing automatic driving device, the data in the automatic driving system is processed by several processing tasks to generate control instructions, so as to control the automatic driving device to realize automatic driving.
  • the several processing tasks include one or more of sensor processing tasks, perception processing tasks, fusion processing tasks, and regulation processing tasks.
  • Each processing task will have a corresponding trigger event.
  • when a processing task is triggered by its corresponding event, it starts to process the data received from the upstream (a hardware resource or the previous processing task).
  • the scheduler in the autonomous driving system schedules multiple events to trigger the corresponding processing tasks to process data; for example, by scheduling the events corresponding to the sensor processing task, perception processing task, fusion processing task, and regulation processing task, the corresponding processing tasks are triggered to process data and finally generate a control instruction.
  • scheduling of multiple events can be considered as scheduling of processing tasks corresponding to multiple events.
  • the above processing tasks need to process data on computing resources. Before scheduling an event, the scheduler needs to select computing resources so that the processing tasks occupy the computing resources to process the data and generate subsequent output data.
  • the scheduler can schedule processing tasks through a statically orchestrated scheduling strategy (i.e., manual arrangement), or through a scheduling strategy based on the completely fair scheduling algorithm (Completely Fair Scheduler, CFS), which allocates according to the time each task occupies on computing resources.
  • the purpose of the CFS-based scheduling strategy is to make the running time of each processing task on the computing resources relatively fair; it does not guarantee the certainty of the completion time of each processing task, which leads to uncertainty in the running time of the entire automatic driving system, resulting in potential safety hazards for autonomous driving devices.
  • Embodiments of the present application provide an event processing method and apparatus, which are used to improve the certainty of an automatic driving system.
  • an event processing method, comprising: selecting a first event from at least one event; selecting a first computing resource from at least one unoccupied computing resource; and finally processing data corresponding to a processing task according to the processing task corresponding to the first event and the first computing resource.
  • the method provided in the first aspect ensures that one computing resource is only occupied by a processing task corresponding to one event, thereby ensuring that the processing task corresponding to the event can complete the processing of corresponding data within a certain time.
  • the data to be processed by the automatic driving system is determined, and the processing time of the processing tasks corresponding to these data is also determined. Therefore, the time from the input data to the output control command of the automatic driving system is also determined.
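The first-aspect method can be sketched as a small scheduling loop. A minimal sketch, assuming invented data structures and field names (`task_id`, `occupied`, etc.) that the patent does not prescribe:

```python
from collections import deque

def schedule_once(event_queue, resources, tasks):
    """One scheduling round: select a first event and a first (unoccupied)
    computing resource, then run the corresponding processing task on it,
    so that a resource is only ever occupied by one event's task at a time."""
    if not event_queue:
        return None
    free = [r for r in resources if not r["occupied"]]
    if not free:
        return None                      # no unoccupied resource: do nothing
    event = event_queue.popleft()        # the first event
    resource = free[0]                   # the first computing resource
    resource["occupied"] = True          # mark occupied before processing
    try:
        result = tasks[event["task_id"]](event["data"], resource["name"])
    finally:
        resource["occupied"] = False     # release once processing completes
    return result
```

Because the resource is claimed before dispatch and released only afterwards, no other event's task can be placed on it mid-processing, which is the source of the determinism argued above.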
  • processing the data corresponding to the processing task according to the processing task corresponding to the first event and the first computing resource includes: selecting first data from a data queue corresponding to the processing task, determining a first processing function according to the first data, and then processing the first data according to the first processing function and the first computing resource.
  • the data processing can be implemented according to the time when the data enters the data queue, which ensures the data processing time sequence.
  • processing the data corresponding to the processing task according to the processing task corresponding to the first event and the first computing resource further includes: selecting second data from the data queue corresponding to the processing task, determining a second processing function according to the second data, and then processing the second data according to the second processing function and the first computing resource.
  • other data can then be selected from the data queue for processing until all the data to be processed has been processed, so that the processing task can decide, according to scheduling needs, how much data to process, and thereby control the time for which it occupies the computing resource.
  • the method further includes: if a new event that requires execution of the next processing task is generated, adding the new event to the event queue.
  • the generated new event is added to the event queue so that it is also scheduled; in this way, every event that triggers a processing task in the automatic driving system to start working passes through event scheduling, which ensures the certainty of the entire autonomous driving system.
  • before processing the data corresponding to the processing task according to the processing task corresponding to the first event and the first computing resource, the method further includes: setting the first computing resource to an occupied state. Setting the first computing resource to the occupied state prevents other processing tasks from selecting this computing resource while the processing task corresponding to the first event is processing data, thereby avoiding preemption of computing resources between processing tasks; the processing task can then complete the processing of the required data within a certain time, the end-to-end delay is also determined, and the certainty of the entire automatic driving system is improved.
  • the computing resources include computing resources of the CPU.
  • selecting the first event from at least one event includes: selecting an event with the highest priority from the event queue as the first event.
  • events with high priority can be preferentially processed, thereby improving the certainty of the entire automatic driving system.
  • the method further includes: setting the first computing resource to an unoccupied state. At this time, the first computing resource is set to an unoccupied state, so that the computing resource can be selected again in subsequent scheduling.
  • an event processing device, comprising a processing unit configured to: select a first event from at least one event, select a first computing resource from at least one unoccupied computing resource, and process data corresponding to a processing task according to the processing task corresponding to the first event and the first computing resource.
  • the processing unit is specifically configured to: select first data from a data queue corresponding to the processing task, determine a first processing function according to the first data, and process the first data according to the first processing function and the first computing resource.
  • the processing unit is specifically configured to: select second data from the data queue corresponding to the processing task, determine a second processing function according to the second data, and process the second data according to the second processing function and the first computing resource.
  • in the process of processing the first data according to the first processing function and the first computing resource, the processing unit is further configured to: if a new event that requires execution of the next processing task is generated, add the new event to the event queue.
  • the processing unit is further configured to: set the first computing resource to an occupied state.
  • the computing resources include computing resources of the CPU.
  • the processing unit is specifically configured to: select the event with the highest priority from the event queue as the first event.
  • the processing unit is further configured to: set the first computing resource to an unoccupied state.
  • an event processing apparatus, including a processor and an interface, the processor being coupled to a memory through the interface; when the processor executes a computer program or instructions in the memory, any one of the methods provided in the first aspect is executed.
  • an event processing apparatus, comprising: a processor coupled to a memory; the memory is used for storing a computer program; the processor is used for executing the computer program stored in the memory, so that the event processing apparatus executes any one of the methods provided in the first aspect above.
  • a computer-readable storage medium including a computer program, which, when the computer program runs on a computer, causes the computer to execute any one of the methods provided in the first aspect.
  • a computer program product including a computer program that, when the computer program runs on a computer, causes the computer to execute any one of the methods provided in the first aspect.
  • FIG. 1 is a schematic diagram of the architecture of an automatic driving system;
  • FIG. 2 is a schematic diagram of a processing task execution sequence;
  • FIG. 3 is a flowchart of an event processing method provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an automatic driving system according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of another automatic driving system provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of the composition of an event processing apparatus according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a hardware structure of an event processing apparatus according to an embodiment of the present application.
  • "at least one of the following items" or similar expressions refer to any combination of these items, including any combination of a single item or plural items.
  • for example, at least one of a, b, or c can represent: a, b, c, a-b, a-c, b-c, or a-b-c, where each of a, b, and c may be single or multiple.
  • words such as “first” and “second” are used to distinguish the same items or similar items with basically the same function and effect. Those skilled in the art can understand that the words “first”, “second” and the like do not limit the quantity and execution order, and the words “first”, “second” and the like are not necessarily different.
  • Processing task: an execution body that processes data in the automatic driving system.
  • the Linux system is an operating system in the autonomous driving system.
  • the operating system in the automatic driving system may also be other, which is only an example here, and is not limited.
  • Event: used to trigger a processing task to process data; it includes information of the processing task that needs to be triggered (for example, an identifier of the processing task).
  • An event is used to trigger a corresponding processing task to process data
  • the information of the processing task that needs to be triggered included in the event is used to indicate the processing task triggered by the event.
  • the IO event includes sensor processing task information
  • the message event includes sensing processing task information, fusion processing task information, or regulation control processing task information.
  • when the scheduler selects an event for scheduling from the received events, it knows, from the information of the processing task that needs to be triggered included in the event, which processing task needs to be scheduled to start working.
  • Data: the input information required by a processing task for its work.
  • the data can be organized into a data queue and read according to the first-in, first-out rule.
  • the data sent from the upstream (a hardware resource or a processing task) to the downstream (a processing task) is stored in memory in the form of a data queue; a data queue has an identifier (the identifier indicates which processing task the data queue is to be read by), and the downstream can read the required data from the corresponding data queue according to the identifier.
  • the essence of the data transmission is still that data is sent from the upstream to the downstream, but the memory relays the data (that is, the data is placed in memory in the form of a data queue, and the downstream then reads the data of the corresponding data queue from the memory), thereby avoiding the communication overhead caused by sending data directly from upstream to downstream.
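The memory-relayed, identifier-tagged data queues can be illustrated roughly as follows; the class and method names are assumptions, not the patent's implementation:

```python
from collections import defaultdict, deque

class QueueRelay:
    """Upstream does not hand data directly to the downstream task; it appends
    to a queue in memory tagged with the reader's identifier, and the
    downstream task reads from the queue bearing its own identifier."""

    def __init__(self):
        self._queues = defaultdict(deque)   # identifier -> FIFO data queue

    def send(self, reader_id, data):
        """Upstream places data into the queue read by `reader_id`."""
        self._queues[reader_id].append(data)

    def read(self, reader_id):
        """Downstream reads its oldest pending data (first-in, first-out)."""
        q = self._queues[reader_id]
        return q.popleft() if q else None
```

Reading by identifier decouples producer and consumer: the upstream never blocks on the downstream, matching the relay-through-memory design described above.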
  • the methods provided in the embodiments of the present application can be applied to the automatic driving system as shown in FIG. 1 .
  • the autonomous driving system shown in Figure 1 includes sensor data, sensor processing tasks, perception processing tasks, fusion processing tasks, regulation processing tasks, schedulers, and computing resources.
  • Sensor processing tasks can convert data received from sensors into formatted data that the autonomous driving system can recognize.
  • the sensors can be vehicle cameras, lidars, and millimeter-wave radars.
  • Perceptual processing tasks can obtain formatted data from sensor processing tasks, and identify participants, obstacles, traffic signs, etc. in traffic scenes from the formatted data through artificial intelligence (AI) algorithms and traditional algorithms.
  • the participants in the traffic scene include but are not limited to vehicles and pedestrians.
  • the fusion processing task can fuse data such as participants, obstacles, and traffic signs in the traffic scene to provide a complete driving environment for the autonomous driving system.
  • the regulation processing task can decide on output control instructions based on the identified driving environment, traffic rules, driver instructions, and the vehicle's own state information (e.g., its own speed information, orientation information, etc.).
  • the scheduler is used to schedule the processing tasks in the autonomous driving system so that the end-to-end delay (the delay from the sensor input to the output of the control command) is as deterministic as possible.
  • the scheduler may include a scheduler that performs scheduling based on events (this scheduler may be called an event scheduler, a first-level scheduler, etc.), a scheduler inside the processing task that performs scheduling based on data (used to schedule the data in the data queue; this scheduler may be called a secondary scheduler, a data scheduler, etc.), and a Linux scheduler (the scheduler in the Linux system).
  • the computing resources may include a central processing core (Central Processing Unit, CPU) and/or an artificial intelligence processing core (Neural Processing Unit, NPU).
  • the computing resources may also include other computing resources (e.g., digital video pre-processing (Digital Video Pre-Processing, DVPP)).
  • the processing tasks in the autonomous driving system occupy computing resources to process data.
  • the data processing process of the autonomous driving system shown in Figure 1 includes:
  • the scheduler first selects a computing resource from at least one computing resource, then selects an event from at least one event, and then determines, according to the information of the processing task to be triggered carried in the event, which processing task is triggered to process data. Exemplarily, assuming that the selected event is an IO event, the scheduler triggers the sensor processing task to perform data processing according to the sensor processing task information included in the IO event. A processing task generates events and data in the process of processing data; the data is sent to the next processing task, and the event is sent to the scheduler. The scheduler then selects a resource and an event again, and determines, according to the processing task information carried in the event, which processing task is triggered for data processing.
  • the IO events corresponding to the sensor data are generated by hardware IO resources, not by other processing tasks.
  • the hardware IO resource generates IO events corresponding to sensor data, and sends the generated IO events to the scheduler.
  • the execution sequence may be: sensor processing task → perception processing task → fusion processing task → regulation processing task, so that a control instruction is finally obtained.
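The four-stage chain above can be sketched as follows. Only the stage order comes from the text; the stage functions are placeholders, and each completed stage stands in for the event sent back to the scheduler to trigger the next task:

```python
PIPELINE = ["sensor", "perception", "fusion", "regulation"]

STAGES = {
    "sensor":     lambda d: f"formatted({d})",            # sensor data -> formatted data
    "perception": lambda d: f"objects({d})",              # formatted data -> identified objects
    "fusion":     lambda d: f"environment({d})",          # objects -> fused driving environment
    "regulation": lambda d: f"control_instruction({d})",  # environment -> control instruction
}

def run_pipeline(sensor_data):
    data, idx = sensor_data, 0
    while idx < len(PIPELINE):
        task = PIPELINE[idx]
        data = STAGES[task](data)   # the task processes its upstream data
        idx += 1                    # the new event triggers the next task
    return data                     # final output: a control instruction
```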
  • the automatic driving system described in the embodiments of the present application is to more clearly illustrate the technical solutions of the embodiments of the present application, and does not constitute a limitation on the technical solutions provided by the embodiments of the present application.
  • the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
  • the following method 1 or method 2 can be applied to schedule processing tasks or events in an automatic driving system.
  • Static orchestration: engineers manually arrange, in advance, the trigger conditions, running time, running cycle, and scheduling priority of processing tasks.
  • static orchestration is generally applied to the control system of a microcontroller (Micro Controller Unit, MCU) in the traditional vehicle field, where the MCU provides the computing resources and the operating system (Autosar Operating System, Autosar OS) itself acts as the scheduler.
  • the processing tasks are scheduled according to the static orchestration, and processing tasks with higher priority are monitored, so as to ensure the determinism of the entire automatic driving system as much as possible.
  • the scheduler supports high-priority processing tasks preempting the computing resources of low-priority processing tasks; that is, when a high-priority processing task urgently needs to run, the scheduler stops the low-priority processing task so that the high-priority processing task runs on that computing resource first.
  • it is necessary to configure a scheduling strategy for each computing resource. For example, Table 1 shows a static orchestration strategy for a single computing resource, assuming that processing tasks T1, T2, T3, and T4 need to be executed on CPU0.
  • the attributes that need to be configured for these four processing tasks are as follows:
  • Table 1:
    Processing task | Trigger method | Period (ms) / Trigger source | Running time (ms)
    T1 | cycle | 100 | 10
    T2 | cycle | 100 | 20
    T3 | event | Event 1 (event generated by T2) | 20
    T4 | cycle | 100 | 30
  • the processing task T1 occupies CPU0 for processing at 0ms-10ms
  • the processing task T2 occupies CPU0 for processing at 10ms-30ms
  • the processing task T3 occupies CPU0 for processing at 30ms-50ms
  • the processing task T4 occupies CPU0 for processing at 50ms-80ms; that is, T1, T2, T3, and T4 can be executed in the order shown in Fig. 2, and looped in turn.
  • T1, T2, and T4 are triggered at fixed times; for example, in any 100ms cycle, T1 is triggered at 0ms, T2 is triggered at 10ms, and T4 is triggered at 50ms.
  • T3 is triggered by an event generated after T2 completes processing its data; that is, after T2 completes data processing, the generated event is sent to the scheduler, and the scheduler schedules the event so that T3 occupies CPU0 to process the data immediately after T2.
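Under the assumption of back-to-back execution on CPU0 (with T3 placed right after T2 because T2's event triggers it), the Table 1 timeline can be reconstructed with a few lines of code:

```python
# Running times (ms) from Table 1, in the statically arranged execution order.
RUN_ORDER = [("T1", 10), ("T2", 20), ("T3", 20), ("T4", 30)]

def static_timeline(order):
    """Lay tasks out back-to-back, yielding (task, start_ms, end_ms)."""
    timeline, t = [], 0
    for task, runtime in order:
        timeline.append((task, t, t + runtime))
        t += runtime
    return timeline
```

Running this reproduces the 0-10, 10-30, 30-50, 50-80 ms slots described above for one 100ms cycle.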
  • Completely fair scheduling algorithm: the occupied time of every processing task on CPU computing resources is recorded; during scheduling, CPU resources are allocated according to the time each processing task has occupied on CPU computing resources, to ensure fairness of the time all processing tasks occupy CPU computing resources. Comparing the occupied time of each processing task on CPU computing resources is achieved by introducing a virtual runtime (vruntime), as follows:
  • vruntime = actual running time × 1024 / processing task weight.
  • vruntime normalizes the actual running time according to the weight; after normalization, the time each processing task has occupied on the computing resources can be compared directly via vruntime.
  • the weight of a processing task can be configured by the engineer according to the importance of the processing task. For example, if processing task a is the most important, the weight of processing task a is configured to be the highest, so that the vruntime of processing task a is the smallest and it can obtain more running time.
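The formula transcribes directly into code (the constant 1024 is CFS's conventional weight of a nice-0 task; the weights used below are illustrative):

```python
def vruntime(actual_runtime_ms, weight):
    """Virtual runtime as defined above: higher-weight (more important)
    tasks accumulate vruntime more slowly, so CFS picks them sooner."""
    return actual_runtime_ms * 1024 / weight
```

For example, a task with weight 2048 that has actually run 10 ms has the same vruntime as a weight-1024 task that has run only 5 ms, so the important task gets more CPU time.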
  • the completely fair scheduling algorithm makes the time that each processing task occupies computing resources fair; however, when there are more tasks in the automatic driving system, the certainty of the completion time of each task cannot be guaranteed, so the certainty of the end-to-end delay between the input data and the output control instructions of the entire autonomous driving system cannot be guaranteed either.
  • the completely fair scheduling algorithm schedules each computing resource independently, and the load must be balanced among the computing resources periodically; as a result, the load of the computing resources cannot be kept balanced at all times, scheduling cannot always happen in time, and the uncertainty is exacerbated.
  • preemption also occurs between processing tasks, and the sudden interference of other processing tasks affects the normal processing of the preempted processing task and increases the uncertainty.
  • there may be dependency relationships between processing tasks (for example, processing task a and processing task b have a dependency, i.e., processing task a must run before processing task b); if scheduling is based only on the time occupied on computing resources, then scheduling processing task b while processing task a has not finished will cause processing task b to fail to run normally, resulting in invalid scheduling.
  • the embodiment of the present application proposes an event processing method, which can not only run on a higher-level automatic driving system, but also improve the certainty of scheduling of the automatic driving system.
  • the method includes:
  • the execution subject of the steps shown in FIG. 3 may be an automatic driving device, for example, a car, a subway, a high-speed rail, and the like.
  • the first event may be any one of the at least one events, or may be the event with the highest priority.
  • the at least one event may be all events generated by the automatic driving system, or may be a part of events among all the events generated by the automatic driving system (for example, events sent by each processing task to the event scheduler).
  • At least one event may form an event queue, and the events in the event queue may be sorted according to priority from high to low or from low to high.
  • the priority is processing task priority and/or event priority. There are two ways to select an event.
  • Mode 1: when selecting according to processing task priority, the event whose processing task priority is the highest is selected; if only one event is selected, that event is taken as the first event. If multiple events are selected, the event with the highest event priority is selected from them; if one event is selected according to event priority, that event is taken as the first event. If multiple events are still selected according to event priority, the event with the earliest generation time is taken as the first event. Exemplarily, three events with the highest processing task priority (for example, event 1, event 2, and event 3) are first selected according to processing task priority; then two events with the highest event priority (for example, event 1 and event 2) are selected from those three events according to event priority; finally, judging by the time the events were generated, the earlier event (for example, event 1) is selected as the first event.
  • Mode 2: when selecting according to event priority, the event with the highest event priority is selected; if only one event is selected, that event is taken as the first event. If multiple events are selected, the event with the highest processing task priority is selected from them; if one event is selected according to processing task priority, that event is taken as the first event. If multiple events are still selected according to processing task priority, the event with the earliest generation time is taken as the first event. Exemplarily, three events with the highest event priority (for example, event 1, event 2, and event 3) are first selected according to event priority; then two events with the highest processing task priority (for example, event 1 and event 2) are selected from those three events according to processing task priority; finally, judging by the time the events were generated, the earlier event (for example, event 1) is selected as the first event.
  • Mode 1 and Mode 2 are only examples of methods for selecting events. The specific order of selection among processing task priority, event priority, and generation time can be set by the engineer according to the actual situation, which is not limited in this application.
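Mode 1's cascade of tie-breakers can be expressed as a single sort key; the field names here are assumptions, and swapping the first two key components gives Mode 2:

```python
def select_first_event(events):
    """Mode 1: highest processing-task priority first, then highest event
    priority, then earliest generation time. Priorities are negated so that
    'highest' sorts first under min(); time stays ascending so 'earliest' wins."""
    if not events:
        return None
    return min(events,
               key=lambda e: (-e["task_prio"], -e["event_prio"], e["time"]))
```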
  • the event queue may be stored in the memory, and the scheduler in the architecture shown in FIG. 1 may read the event queue from the memory.
  • S301 may be executed by the scheduler in the architecture shown in FIG. 1 .
  • the first computing resource can only be selected from the unoccupied computing resources. If it is selected from the occupied computing resources, it may happen that a high-priority event preempts the computing resource of a low-priority event.
  • if the occupied computing resources are processing low-priority events and a high-priority event arrives, the occupied computing resources would be preempted, and the low-priority event could not be completed within a certain time, which in turn exacerbates the uncertainty of the autonomous driving system. Therefore, selecting the first computing resource from the unoccupied computing resources avoids interference from other events, thereby ensuring that each event can be processed within a certain time.
  • the event scheduler may uniformly schedule events generated by different hardware resources (e.g., hardware IO resources, NPU resources, etc.) and/or events generated by processing tasks on different types of computing resources (e.g., events generated by processing tasks on the computing resources of the NPU and events generated by processing tasks on the computing resources of the CPU).
  • before processing the data corresponding to the processing task according to the processing task corresponding to the first event and the first computing resource, the method further includes: setting the first computing resource to an occupied state, so as to avoid the preemption problem that would arise if the first computing resource were selected again when other events are subsequently scheduled.
  • S302 may be performed by the scheduler in the architecture shown in FIG. 1 .
  • when S303 is specifically implemented, it includes: the secondary scheduler selects the first data from the data queue corresponding to the processing task corresponding to the first event, then determines the first processing function according to the first data, and finally processes the first data according to the first processing function and the first computing resource.
  • that is, the processing task corresponding to the first event selects the first data from its corresponding data queue, determines the first processing function according to the first data, and finally occupies the first computing resource and uses the first processing function to process the first data.
  • the method further includes: the secondary scheduler selects the second data from the data queue corresponding to the processing task corresponding to the first event, then determines the second processing function according to the second data, and finally processes the second data according to the second processing function and the first computing resource.
  • third data can also be selected for processing; which data in the data queue is processed is set by a certain algorithm, which is not limited in this application. When the required data processing is completed, the processing of the data queue corresponding to the processing task corresponding to the first event is completed.
  • the first processing function and the second processing function may be the same or different.
  • A data queue can correspond to one processing function or to multiple processing functions; this can be set by an engineer according to the actual situation and is not limited in this application.
  • The processing task can be a process; in that case, the event has the same correspondence with the process as it has with the processing task.
  • a processing task (also called a process) includes one or more threads. At this time, a thread in the process can process data.
  • The following method A, method B, or method C can be used to select the thread that processes data in the process; the selected thread performs the actions performed by the above processing task.
  • Method A: If the number of threads created in the process is the same as the number of reserved computing resources, and there is a one-to-one correspondence between threads and computing resources (for example, as shown in Table 2 below), determine that the thread corresponding to the computing resource in the process is the thread that processes the data.
  • Table 2 is only an example of the correspondence between events, processing tasks (also called processes), and computing resources.
  • The correspondence between events, processing tasks, and computing resources can be set by engineers according to the actual situation and is not limited in this application.
  • Method B: If the number of threads created in the process is less than the number of reserved computing resources, and one thread corresponds to multiple computing resources, determine that the thread corresponding to the computing resource in the process is the thread that processes the data.
  • Method C: If the relationship between threads and computing resources changes dynamically, determine an unoccupied thread in the process as the thread that processes the data.
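The three thread-selection methods above can be sketched as follows; the thread records and mode flags are illustrative assumptions, not part of the embodiment:

```python
def select_thread(threads, resource, mode):
    """Select the thread that will process data on `resource`.

    threads: list of dicts like {"id": "T1", "resource": "CPU0", "busy": False}
    mode: "A" (one-to-one thread/resource map), "B" (one thread serves
    several resources), or "C" (dynamic: any unoccupied thread).
    """
    if mode in ("A", "B"):
        # Methods A and B: pick the thread bound to this computing resource
        # (in method B, `resource` is one of several listed for the thread).
        for t in threads:
            bound = t["resource"] if isinstance(t["resource"], list) else [t["resource"]]
            if resource in bound:
                return t["id"]
    elif mode == "C":
        # Method C: the thread/resource relationship changes dynamically,
        # so pick any thread that is not currently occupied.
        for t in threads:
            if not t["busy"]:
                return t["id"]
    return None
```

The selected thread then performs the actions of the processing task described above.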
  • For example, the implementation process of S301-S303 may be: the event scheduler selects an event (that is, the first event) from the event queue, and selects an unoccupied computing resource (that is, the first computing resource). Since the first event has been selected, the processing task corresponding to the first event can be determined according to the correspondence between events and processing tasks.
  • the event scheduler determines a thread (referred to as the first thread) from the unoccupied threads in the process, and notifies the Linux scheduler to wake up the first thread, and the secondary scheduler can select a data from the data queue (ie First data), and then determine the processing function (ie, the first processing function) corresponding to the first data according to the corresponding relationship between the data and the processing function, and process the first data on the first computing resource according to the first processing function.
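The S301-S303 flow described above can be condensed into a minimal dispatch round; the event, task, and resource names are hypothetical:

```python
import heapq

def dispatch(event_queue, resources, task_of_event):
    """One scheduling round: pick the highest-priority event (S301),
    pick an unoccupied computing resource (S302), and return the
    processing task that will run on it (S303 is then performed by
    that task's thread).

    event_queue: heap of (priority, event); lower value = higher priority
    resources: dict resource_name -> occupied? (bool)
    task_of_event: event -> processing-task correspondence
    """
    if not event_queue:
        return None
    free = [r for r, occupied in resources.items() if not occupied]
    if not free:
        return None  # no unoccupied computing resource; try again later
    _, event = heapq.heappop(event_queue)   # S301: the first event
    resource = free[0]                      # S302: the first computing resource
    resources[resource] = True              # mark occupied to rule out preemption
    return task_of_event[event], resource
```

Marking the resource occupied before S303 is what prevents a later scheduling round from handing the same resource to another task.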
  • The method for selecting a thread for processing data in a process may also be another method, which can be set by an engineer according to the actual situation and is not limited in this application.
  • the data queue and the processing function also have a corresponding relationship.
  • If a data queue corresponds to one processing function (for example, the data queue A1 shown in Table 3 corresponds only to the processing function F1), all data in that data queue corresponds to that processing function (for example, both the data B1 and the data B2 in the data queue A1 shown in Table 3 correspond to the processing function F1).
  • Alternatively, each piece of data in the data queue may correspond to a different processing function (for example, the data B3 in the data queue A2 shown in Table 3 corresponds to the processing function F1, while the data B4 and B5 correspond to the processing function F2).
  • Table 3 below is only an example of the correspondence between data queues, data, and processing functions. The correspondence between data queues, data, and processing functions is set by the engineer according to the actual situation, and is not limited in this application.
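The Table 3 style correspondence can be modeled as a per-queue default function with per-data overrides; the queue and function names mirror the A1/A2 and F1/F2 examples above:

```python
# Correspondence in the style of Table 3: each queue has a default
# processing function, plus optional per-data overrides (so that
# B4/B5 in queue A2 can map to F2 while B3 keeps F1).
QUEUE_FUNC = {"A1": "F1", "A2": "F1"}
DATA_FUNC_OVERRIDE = {("A2", "B4"): "F2", ("A2", "B5"): "F2"}

def processing_function(queue, data):
    """Determine the processing function for one piece of data."""
    return DATA_FUNC_OVERRIDE.get((queue, data), QUEUE_FUNC[queue])
```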
  • Optionally, when the processing task corresponding to the first event runs in an operating system (for example, the Linux system), the processing task corresponding to the first event should avoid being directly or indirectly interrupted by the Linux system; otherwise, the processing related to the automatic driving system in the processing task may be interrupted by the Linux system, so that the processing task corresponding to the first event cannot be completed within a determined time. To avoid such interference from the operating system, the engineer can remove in advance the factors in the processing task corresponding to the first event that would be interrupted by the operating system.
  • Optionally, the method further includes: waking up the processing task corresponding to the first event, where waking up means selecting a thread in the processing task (also called a process) to wake up; that is, the occupation right of the first computing resource transitions from the Linux scheduler to this thread.
  • Optionally, the method further includes: if a new event is generated and requires the execution of the next processing task, adding the new event to the event queue. That is to say, if the generated new event does not require the execution of the next processing task (for example, the new event is also executed by the current processing task), the new event is not added to the event queue, and the processing task can directly handle the new event.
  • Optionally, the method further includes: setting the first computing resource to an unoccupied state, so that the event scheduler can schedule the computing resource in subsequent scheduling.
  • S303 may be performed by processing tasks in the architecture shown in FIG. 1 (for example, sensor processing tasks, perception processing tasks, fusion processing tasks, regulation processing tasks, etc.).
  • the process includes:
  • S402 can be understood by referring to the above-mentioned S302, and details are not repeated here.
  • If the processing task is a process, the process corresponding to the first event has also been determined.
  • the event scheduler needs to determine a thread from the multiple threads and notify the Linux scheduler to wake up the thread.
  • Before the thread in the processing task corresponding to the first event has finished processing the required data, the event scheduler will not notify the Linux scheduler to wake up other threads during scheduling, which eliminates the influence of the Linux scheduler.
  • Although the Linux scheduler adopts the CFS scheduling policy, the Linux scheduler can select only one thread each time it schedules, so the Linux scheduler merely acts as an executor that transfers computing resources.
  • Optionally, the method includes: reading, from a memory, the data queue formed by data sent from upstream, and then selecting one piece of data from the data queue as the first data.
  • the subsequent event scheduling can select the first computing resource, thereby promoting a virtuous cycle of resource occupation.
  • the above S401-S402 can be executed by the event scheduler
  • the above S403 can be executed jointly by the event scheduler and the Linux scheduler
  • the above S404, S405, S407 and S408 can be executed by the secondary scheduler.
  • The event scheduler selects a computing resource from the unoccupied computing resources, which ensures that a processing task occupies a computing resource that is not occupied by other processing tasks. This avoids multiple processing tasks preempting the same computing resource and avoids mutual interference between events, so that the processing task corresponding to an event can complete the processing of the required data within a determined time, and the end-to-end delay is also determined, thereby improving the certainty of the entire autonomous driving system. In addition, there is no need to statically arrange processing tasks, which greatly improves development efficiency.
  • the method provided by the above embodiment is exemplarily described below through a specific process (refer to FIG. 7 for details).
  • This method can be applied in the architectures shown in Figures 5 and 6 .
  • the difference between FIG. 5 and FIG. 6 is that the location of the event scheduler is different.
  • the event scheduler is placed in a software operating system
  • the event scheduler is placed in a hardware resource.
  • When the event scheduler is placed in a hardware resource, the time for scheduling events can be shortened, and the event scheduler can directly receive events generated by other hardware resources without going through the software operating system, further shortening the time for scheduling events, avoiding the interference of tasks handled by the software operating system, and increasing the certainty of scheduling events.
  • the architectures shown in FIGS. 5 and 6 include hardware resources, event queues, event schedulers, Linux schedulers, and processing tasks (ie, processes, and the processes include threads).
  • Hardware resources include hardware IO resources and computing resources.
  • The hardware IO resources are used to communicate with devices outside the automatic driving system, for example, through an Ethernet (ETH) interface or through Controller Area Network (CAN) communication, and IO events are generated after external data is received.
  • the computing resources include computing resources on the CPU and computing resources on the NPU.
  • the computing resources are used to process events and data contained in the processing tasks corresponding to the events, and generate new events during or after processing the data.
  • the computing resources used by the automatic driving system can be reserved in advance by engineers. In order to improve the certainty of the automatic driving system, the computing resources reserved in advance can be used only by the automatic driving system.
  • the event queue includes all or some of the events sorted by priority from low to high or from high to low. Due to the limited memory of the event queue, it may not be possible to save all events. Therefore, some events can be placed in the event queue according to the policy.
  • the policy can be configured by the engineer, for example, according to the priority rules, the events with higher priority are placed in the event queue first.
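A memory-bounded event queue that keeps higher-priority events, as described above, might look like the following sketch; the capacity and the numeric priority encoding (lower value = higher priority) are assumptions:

```python
import heapq

class BoundedEventQueue:
    """Event queue with limited memory: keeps at most `capacity` events,
    preferring higher-priority ones (a lower number = a higher priority)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []  # entries (priority, seq, event)

    def push(self, priority, event):
        entry = (priority, len(self._heap), event)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
            return True
        # Queue full: place the new event only if it outranks the
        # lowest-priority event currently kept, per the policy above.
        worst = max(self._heap)
        if entry[:1] < worst[:1]:
            self._heap.remove(worst)
            heapq.heapify(self._heap)
            heapq.heappush(self._heap, entry)
            return True
        return False  # event not placed in the event queue

    def pop(self):
        return heapq.heappop(self._heap)[2]  # highest-priority event first
```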
  • the event scheduler is used to select unoccupied computing resources, and is also used to view events in the event queue and select events from the event queue.
  • a processing task (also referred to as a process) includes one or more threads, and the threads perform the functions of the secondary scheduler and processing functions. That is to say, the thread can perform two parts of the function, one part performs the function of the secondary scheduler, and the other part performs the function of the processing function.
  • the Linux scheduler is used to wake up threads in processing tasks.
  • the event scheduler places the events generated by the hardware resources and the events generated by the processing tasks in the event queue according to the priority.
  • the event scheduler checks whether there is an event in the event queue.
  • the event scheduler checks whether there are computing resources of the CPU that are not occupied.
  • the event scheduler selects a first computing resource from the computing resources of the CPU that are not occupied, and sets the first computing resource to an occupied state.
  • the event scheduler selects the event with the highest priority from the event queue as the first event, and determines the processing task corresponding to the first event.
  • the event scheduler determines one thread as the first thread in one or more threads in the processing task (also referred to as a process) corresponding to the first event, and notifies the Linux scheduler to wake up the first thread.
  • the secondary scheduler selects one piece of data as the first data from the data queue corresponding to the processing task corresponding to the first event.
  • the secondary scheduler determines a processing function corresponding to the first data according to the first data.
  • the first thread processes the first data on the first computing resource according to the processing function corresponding to the first data.
  • the generated new event is placed in the event queue.
  • the secondary scheduler selects one piece of data from the data queue corresponding to the processing task corresponding to the first event as the second data.
  • the secondary scheduler determines a processing function corresponding to the second data according to the second data.
  • the first thread processes the second data on the first computing resource according to the processing function corresponding to the second data.
  • the generated new event is placed in the event queue. Subsequently, other data in the data queue can continue to be selected for processing until all the data to be processed has been processed. However, if there is only one piece of data to be processed by the processing task, after the first data is processed, there is no need to select the second data for processing.
  • the secondary scheduler sends a message that the processing task corresponding to the first event has been processed to the event scheduler.
  • the event scheduler receives a message that the processing task corresponding to the first event has been processed.
  • the message that the processing task corresponding to the first event has been processed is used to transfer the occupation right of the first computing resource from the first thread to the event scheduler.
  • the event scheduler sets the first computing resource to an unoccupied state.
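The closing steps of the flow above (the completion message hands the occupation right of the first computing resource back to the event scheduler, which then marks it unoccupied) can be condensed into a minimal sketch; the class and resource names are illustrative:

```python
class EventScheduler:
    """Minimal model of the completion handshake at the end of the flow."""

    def __init__(self, resources):
        # resource name -> occupied? (True while a task runs on it)
        self.resources = resources

    def on_task_done(self, resource):
        """Receive the 'processing task done' message from the secondary
        scheduler: the occupation right of `resource` transfers back to
        the event scheduler, which sets it to the unoccupied state, and
        the resource becomes schedulable again."""
        self.resources[resource] = False
        return [r for r, busy in self.resources.items() if not busy]

sched = EventScheduler({"CPU0": True, "CPU1": True})
free_now = sched.on_task_done("CPU0")  # CPU0 is schedulable again
```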
  • the event processing apparatus includes at least one of corresponding hardware structures and software modules for executing each function.
  • Those skilled in the art should be aware that, with the units and algorithm steps of each example described in conjunction with the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of this application.
  • the event processing apparatus may be divided into functional units according to the foregoing method examples.
  • each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units. It should be noted that the division of units in the embodiments of the present application is illustrative, and is only a logical function division, and other division methods may be used in actual implementation.
  • FIG. 8 shows a possible schematic structural diagram of the event processing apparatus (referred to as the event processing apparatus 80) involved in the above embodiments. The event processing apparatus 80 includes a processing unit 801 and a storage unit 802.
  • the processing unit 801 is used to control and manage the actions of the event processing apparatus.
  • the processing unit 801 is used to execute 301-303 in FIG. 3, 401-409 in FIG. 4, 701-714 in FIG. 7, and/or Actions performed by the event processing apparatus in other processes described in the embodiments of this application.
  • the storage unit 802 is used for storing program codes and data of the event processing apparatus.
  • the event processing apparatus 80 further includes a communication unit 803 .
  • the processing unit 801 may communicate with other network entities through the communication unit 803 .
  • the communication unit 803 may be a hardware IO resource, and the hardware IO resource may communicate with a device other than the event processing apparatus, for example, communicate with ETH or CAN.
  • the event processing apparatus 80 may be a device or a chip or a chip system.
  • the processing unit 801 may be a processor; the communication unit 803 may be a communication interface, a transceiver, or an input interface and/or an output interface.
  • the transceiver may be a transceiver circuit.
  • the input interface may be an input circuit, and the output interface may be an output circuit.
  • the communication unit 803 may be a communication interface, an input interface and/or an output interface, an interface circuit, an output circuit, an input circuit, a pin, or a related circuit on the chip or the chip system, etc.
  • the processing unit 801 may be a processor, a processing circuit, a logic circuit, or the like.
  • the integrated units in FIG. 8 may be stored in a computer-readable storage medium if implemented in the form of software functional modules and sold or used as independent products.
  • the medium includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (processor) to execute all or part of the steps of the methods of the various embodiments of the present application.
  • Storage media for storing computer software products include: a USB flash drive, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or another medium that can store program code.
  • An embodiment of the present application further provides a schematic diagram of a hardware structure of an event processing apparatus.
  • the event processing apparatus includes a processor 901 and, optionally, a memory 902 connected to the processor 901 .
  • the processor 901 may be a CPU, a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the present application.
  • the processor 901 may also include a plurality of CPUs, and the processor 901 may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • a processor herein may refer to one or more devices, circuits, or processing cores for processing data (eg, computer program instructions).
  • the memory 902 can be a ROM or another type of static storage device that can store static information and instructions, a RAM or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including a compact disc, a laser disc, an optical disc, a digital versatile disc, a Blu-ray disc, and the like), a magnetic disk storage medium or another magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer; this is not limited in this embodiment of the present application.
  • the memory 902 may exist independently (in this case, the memory 902 may be located outside the event processing apparatus, or may be located in the event processing apparatus), or may be integrated with the processor 901 . Among them, the memory 902 may contain computer program code.
  • the processor 901 is configured to execute the computer program codes stored in the memory 902, so as to implement the methods provided by the embodiments of the present application.
  • the processor 901 is configured to control and manage the actions of the event processing apparatus, for example, the processor 901 is configured to execute 301-303 in FIG. 3 , 401-409 in FIG. 4 , 701-714 in FIG. 7 , and/or Actions performed by the event processing apparatus in other processes described in the embodiments of this application.
  • the memory 902 is used to store program codes and data of the event processing apparatus.
  • the event processing apparatus further includes a transceiver, or the processor 901 includes a logic circuit and an input interface and/or an output interface.
  • the processor 901 may communicate with other network entities through a transceiver, or through an input interface and/or an output interface.
  • the transceiver, or the input interface and/or the output interface can be hardware IO resources, and the hardware IO resources can communicate with devices outside the event processing apparatus, for example, communicate with ETH or CAN.
  • each step in the method provided in this embodiment may be completed by an integrated logic circuit of hardware in a processor or an instruction in the form of software.
  • the steps of the methods disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • Embodiments of the present application also provide a computer-readable storage medium, including computer-executable instructions, which, when run on a computer, cause the computer to execute any of the foregoing methods.
  • Embodiments of the present application also provide a computer program product, including computer-executable instructions, which, when run on a computer, cause the computer to execute any of the above methods.
  • the embodiments of the present application also provide an event processing apparatus, including: a processor and an interface, the processor is coupled to a memory through the interface, and when the processor executes a computer program or computer-executable instructions in the memory, the above-mentioned embodiments provide the Either method is executed.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner.
  • The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media.
  • The available media may be magnetic media (for example, a floppy disk, a hard disk, or a magnetic tape), optical media (for example, a DVD), semiconductor media (for example, a solid state disk (SSD)), and the like.


Abstract

An event processing method and apparatus, relating to the field of automatic driving. The method includes: selecting a first event from at least one event (S301); selecting a first computing resource from at least one unoccupied computing resource (S302); and processing, according to the processing task corresponding to the first event and the first computing resource, the data corresponding to the processing task (S303). This method ensures that a computing resource is occupied by the processing task corresponding to only one event, thereby ensuring that the processing task corresponding to an event can complete the processing of the corresponding data within a determined time. The data that the automatic driving system needs to process is determined, and the time taken by the corresponding processing tasks to process this data is also determined; therefore, the time from data input to control-instruction output of the automatic driving system is also determined.

Description

Event processing method and apparatus. Technical Field
The present application relates to the field of automatic driving, and in particular, to an event processing method and apparatus.
Background
At present, data in the automatic driving system of an existing automatic driving apparatus is processed by several processing tasks to generate control instructions, so as to control the automatic driving apparatus to realize automatic driving. The several processing tasks include one or more of a sensor processing task, a perception processing task, a fusion processing task, and a regulation processing task. Each processing task has a corresponding trigger event; when a processing task is triggered by its corresponding event, it starts to process the received data from upstream (a hardware resource or the previous processing task). The scheduler in the automatic driving system schedules multiple events to trigger the corresponding processing tasks to process data; for example, by scheduling the events corresponding to the sensor processing task, the perception processing task, the fusion processing task, and the regulation processing task, the corresponding processing tasks are triggered to process data, so as to finally generate control instructions. Because there is a correspondence between events and processing tasks, scheduling multiple events can be regarded as scheduling the processing tasks corresponding to the multiple events. The above processing tasks need to process data on computing resources; before scheduling an event, the scheduler needs to select a computing resource so that the processing task can occupy the computing resource to process data and generate subsequent output data.
At present, the scheduler can schedule processing tasks through a statically arranged (that is, manually set) scheduling policy or a scheduling policy based on the Completely Fair Scheduler (CFS) algorithm, which allocates according to the time of occupying computing resources. The complexity level of automatic driving systems keeps increasing, and the workload of completely static arrangement is too large to be completed manually, so the static arrangement policy cannot be applied. The purpose of the CFS-based scheduling policy is to ensure that the running time of each processing task on the computing resources is relatively fair; it does not guarantee the certainty of the completion time of each processing task, which leads to uncertainty in the running time of the entire automatic driving system and creates safety hazards for the automatic driving apparatus.
Summary
Embodiments of the present application provide an event processing method and apparatus, which are used to improve the certainty of an automatic driving system.
To achieve the above objective, the embodiments of the present application provide the following technical solutions:
According to a first aspect, an event processing method is provided, including: selecting a first event from at least one event; selecting a first computing resource from at least one unoccupied computing resource; and finally processing, according to the processing task corresponding to the first event and the first computing resource, the data corresponding to the processing task. The method provided in the first aspect ensures that a computing resource is occupied by the processing task corresponding to only one event, thereby ensuring that the processing task corresponding to an event can complete the processing of the corresponding data within a determined time. The data that the automatic driving system needs to process is determined, and the time taken by the corresponding processing tasks to process this data is also determined; therefore, the time from data input to control-instruction output of the automatic driving system is also determined.
In a possible implementation, processing the data corresponding to the processing task according to the processing task corresponding to the first event and the first computing resource includes: selecting first data from the data queue corresponding to the processing task, determining a first processing function according to the first data, and then processing the first data according to the first processing function and the first computing resource. In this possible implementation, data can be processed in the order in which the data enters the data queue, ensuring the processing sequence of the data.
In a possible implementation, processing the data corresponding to the processing task according to the processing task corresponding to the first event and the first computing resource further includes: selecting second data from the data queue corresponding to the processing task, determining a second processing function according to the second data, and then processing the second data according to the second processing function and the first computing resource. In this case, after the first data has been processed, other data in the data queue can continue to be selected for processing until all the data to be processed has been processed, so that the processing task can decide, according to scheduling needs, how much data needs to be processed, and control the time of occupying the computing resource.
In a possible implementation, in the process of processing the first data according to the first processing function and the first computing resource, the method further includes: if a new event is generated and requires the next processing task to execute, adding the new event to the event queue. In this possible implementation, the generated new event is added to the event queue so that it is also scheduled; in this way, all events that trigger processing tasks in the automatic driving system to start working go through event scheduling, ensuring the certainty of the entire automatic driving system.
In a possible implementation, before the data corresponding to the processing task is processed according to the processing task corresponding to the first event and the first computing resource, the method further includes: setting the first computing resource to an occupied state. Setting the first computing resource to an occupied state ensures that no other processing task selects this computing resource while the processing task corresponding to the first event is processing data, avoiding preemption of computing resources between processing tasks, so that the processing task can complete the processing of the required data within a determined time, the end-to-end delay is also determined, and the certainty of the entire automatic driving system is improved.
In a possible implementation, the computing resources include computing resources of a CPU.
In a possible implementation, selecting the first event from at least one event includes: selecting the event with the highest priority from the event queue as the first event. In this possible implementation, events with higher priority can be processed first, thereby improving the certainty of the entire automatic driving system.
In a possible implementation, after the data corresponding to the processing task is processed according to the processing task corresponding to the first event and the first computing resource, the method further includes: setting the first computing resource to an unoccupied state, so that subsequent scheduling can select this computing resource again.
According to a second aspect, an event processing apparatus is provided, including a processing unit, where the processing unit is configured to: select a first event from at least one event, select a first computing resource from at least one unoccupied computing resource, and process, according to the processing task corresponding to the first event and the first computing resource, the data corresponding to the processing task.
In a possible implementation, the processing unit is specifically configured to: select first data from the data queue corresponding to the processing task, determine a first processing function according to the first data, and process the first data according to the first processing function and the first computing resource.
In a possible implementation, the processing unit is specifically configured to: select second data from the data queue corresponding to the processing task, determine a second processing function according to the second data, and process the second data according to the second processing function and the first computing resource.
In a possible implementation, in the process of the processing unit processing the first data according to the first processing function and the first computing resource, the processing unit is further configured to: if a new event is generated and requires the next processing task to execute, add the new event to the event queue.
In a possible implementation, the processing unit is further configured to: set the first computing resource to an occupied state.
In a possible implementation, the computing resources include computing resources of a CPU.
In a possible implementation, the processing unit is specifically configured to: select the event with the highest priority from the event queue as the first event.
In a possible implementation, the processing unit is further configured to: set the first computing resource to an unoccupied state.
According to a third aspect, an event processing apparatus is provided, including a processor and an interface, where the processor is coupled to a memory through the interface, and when the processor executes the computer program or instructions in the memory, any method provided in the first aspect is performed.
According to a fourth aspect, an event processing apparatus is provided, including: a processor coupled to a memory; the memory, configured to store a computer program; and the processor, configured to execute the computer program stored in the memory, so that the event processing apparatus performs any method provided in the first aspect.
According to a fifth aspect, a computer-readable storage medium is provided, including a computer program, which, when run on a computer, causes the computer to perform any method provided in the first aspect.
According to a sixth aspect, a computer program product is provided, including a computer program, which, when run on a computer, causes the computer to perform any method provided in the first aspect.
For the technical effects brought by any implementation of the second to sixth aspects, refer to the technical effects brought by the corresponding implementation of the first aspect; details are not repeated here.
It should be noted that, provided the solutions do not conflict, the solutions in the above aspects can be combined.
Brief Description of the Drawings
FIG. 1 is a schematic architectural diagram of an automatic driving system;
FIG. 2 is a schematic diagram of an execution order of processing tasks;
FIG. 3 is a flowchart of an event processing method provided by an embodiment of the present application;
FIG. 4 is a flowchart of another event processing method provided by an embodiment of the present application;
FIG. 5 is a schematic architectural diagram of an automatic driving system provided by an embodiment of the present application;
FIG. 6 is a schematic architectural diagram of another automatic driving system provided by an embodiment of the present application;
FIG. 7 is a flowchart of another event processing method provided by an embodiment of the present application;
FIG. 8 is a schematic composition diagram of an event processing apparatus provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of a hardware structure of an event processing apparatus provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application. In the description of the present application, unless otherwise specified, "/" indicates an "or" relationship between the associated objects; for example, A/B can mean A or B. "And/or" in the present application merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B can mean: A alone, both A and B, or B alone, where A and B can be singular or plural. In addition, in the description of the present application, unless otherwise specified, "multiple" means two or more than two. "At least one of the following items" or a similar expression refers to any combination of these items, including a single item or any combination of plural items. For example, at least one of a, b, or c can mean: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c can be single or multiple. In addition, to clearly describe the technical solutions in the embodiments of the present application, words such as "first" and "second" are used in the embodiments of the present application to distinguish between identical or similar items with basically the same functions and effects. Those skilled in the art can understand that the words "first" and "second" do not limit the quantity or execution order, and that "first" and "second" do not necessarily indicate a difference.
To make the embodiments of the present application clearer, some concepts involved in the present application are briefly introduced.
1. Processing task: can serve as an execution subject for processing data in the automatic driving system.
For example, it can be a processing task module, or a process in a Linux system, where the Linux system is the operating system in the automatic driving system. In actual implementation, the operating system in the automatic driving system can also be another operating system; this is merely an example and is not a limitation.
2. Event: used to trigger a processing task to process data, and includes information about the processing task to be triggered (for example, an identifier of the processing task).
There is a correspondence between events and processing tasks. An event is used to trigger the corresponding processing task to process data, and the information about the processing task to be triggered included in the event indicates the processing task triggered by the event. For example, an IO event includes information about the sensor processing task, and a message event includes information about the perception processing task, the fusion processing task, or the regulation processing task.
That is to say, when the scheduler selects an event from the received events for scheduling, it knows, according to the information about the processing task to be triggered included in the event, which processing task needs to be scheduled to start working.
3. Data: refers to the input information required by a processing task when it works.
Because a processing task needs to process data continuously, the data can be organized as a data queue and read according to the first-in-first-out rule. The data sent from upstream (a hardware resource or a processing task) to downstream (a processing task) is stored in a memory in the form of a data queue, and each data queue has an identifier (the identifier indicates which processing task reads the data queue); the downstream can read the required data from the corresponding data queue according to the identifier. It should be noted that the essence of data transmission is still sending from upstream to downstream, except that the memory relays the data (that is, the data is placed in the memory in the form of a data queue, and the downstream then reads the data in the corresponding data queue from the memory), which avoids the communication overhead caused by the upstream directly sending data to the downstream.
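The memory relay described above (upstream writes a data queue under an identifier, downstream reads it by that identifier in first-in-first-out order) might be sketched as follows; the queue identifier and data names are assumptions:

```python
from collections import deque

# Memory holding data queues, keyed by identifier; the identifier says
# which processing task reads the queue (names here are illustrative).
memory = {"perception_in": deque()}

def upstream_send(queue_id, data):
    memory[queue_id].append(data)        # upstream writes; memory relays

def downstream_read(queue_id):
    return memory[queue_id].popleft()    # downstream reads FIFO by identifier

upstream_send("perception_in", "frame_1")
upstream_send("perception_in", "frame_2")
first = downstream_read("perception_in")  # "frame_1": first in, first out
```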
The above is a brief introduction to some concepts involved in the embodiments of the present application.
The method provided by the embodiments of the present application can be applied to the automatic driving system shown in FIG. 1. The automatic driving system shown in FIG. 1 includes sensor data, a sensor processing task, a perception processing task, a fusion processing task, a regulation processing task, a scheduler, and computing resources.
The sensor processing task can convert the data received from sensors into formatted data that the automatic driving system can recognize. The sensors can be a vehicle-mounted camera, a lidar, or a millimeter-wave radar.
The perception processing task can obtain the formatted data from the sensor processing task, and identify participants, obstacles, traffic signs, and the like in the traffic scene from the formatted data through artificial intelligence (AI) algorithms and traditional algorithms. Participants in the traffic scene include but are not limited to vehicles and pedestrians.
The fusion processing task can fuse data such as participants, obstacles, and traffic signs in the traffic scene, so as to provide a complete driving environment for the automatic driving system.
The regulation processing task can decide and output control instructions based on the identified driving environment, traffic rules, driver instructions, and its own state information (for example, its own speed information and orientation information).
The scheduler is used to schedule the processing tasks in the automatic driving system so that the end-to-end delay (the delay from the sensor input end to the control-instruction output end) is as determined as possible. The scheduler can include a scheduler that schedules based on events (which can be called an event scheduler, a first-level scheduler, and the like), a scheduler inside a processing task that schedules based on data (used to schedule data in the data queue; it can be called a secondary scheduler, a data scheduler, and the like), and the Linux scheduler (the scheduler in the Linux system).
The computing resources can include a central processing unit (CPU) and/or a neural processing unit (NPU). The computing resources can also include other computing resources (for example, digital video pre-processing (DVPP)). The processing tasks in the automatic driving system occupy computing resources to process data.
The data processing process of the automatic driving system shown in FIG. 1 includes:
The scheduler first selects a computing resource from at least one computing resource, then selects an event from at least one event, and then determines, according to the information about the processing task to be triggered in the event, which processing task is triggered to process data. For example, assuming that the selected event is an IO event, the scheduler triggers the sensor processing task to process data according to the information, included in the IO event, about the sensor processing task to be triggered. A processing task generates events and data while processing data; the data goes to the next processing task, and the event goes to the scheduler. The scheduler again selects a resource and an event, and determines, according to the information about the processing task to be triggered in the event, which processing task is triggered to process data. The IO event corresponding to the sensor data is generated by a hardware IO resource, not by another processing task; the hardware IO resource generates the IO event corresponding to the sensor data and sends the generated IO event to the scheduler. After the sensor data is processed by the above four processing tasks in sequence (the execution order can be sensor processing task → perception processing task → fusion processing task → regulation processing task), control instructions are obtained.
The above data processing process of the automatic driving system is only an exemplary description. In actual implementation, other tasks may also participate, fewer processing tasks may be required, or the order between processing tasks may change.
In addition, the automatic driving system described in the embodiments of the present application is intended to explain the technical solutions of the embodiments of the present application more clearly, and does not constitute a limitation on the technical solutions provided by the embodiments of the present application. Those of ordinary skill in the art will know that, as automatic driving systems are upgraded, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
At present, the following method 1 or method 2 can be applied in the automatic driving system to schedule processing tasks or events.
Method 1
Static arrangement: means that an engineer manually arranges in advance the trigger conditions, running time, running period, and scheduling priority of processing tasks. Static arrangement is generally applied to the control system of a micro controller unit (MCU) in the traditional vehicle field, where the MCU provides computing resources for the automatic driving system, and the automatic driving system (Autosar Operating System, Autosar OS) itself acts as the scheduler, schedules processing tasks according to the static arrangement, and monitors multiple higher-priority processing tasks, so as to ensure the certainty of the entire automatic driving system at runtime as much as possible. In this scenario, the scheduler supports a high-priority processing task preempting the computing resources of a low-priority processing task; that is, when a high-priority processing task needs to run urgently, the scheduler stops the low-priority task and runs the high-priority processing task on that computing resource first. In addition, when the MCU supports multiple computing resources, a static arrangement policy needs to be configured for each computing resource. For example, referring to Table 1 for a static arrangement policy for a single computing resource, assuming that processing tasks T1, T2, T3, and T4 need to be executed on CPU0, the attributes that need to be configured for these four processing tasks are as follows:
Table 1
Processing task | Trigger mode | Period (ms) / trigger source | Running time (ms)
T1 | Periodic | 100 | 10
T2 | Periodic | 100 | 20
T3 | Event | Event 1 (event generated by T2) | 20
T4 | Periodic | 100 | 30
According to the above configuration, in each 100 ms period of CPU0, processing task T1 occupies CPU0 for processing during 0 ms-10 ms, processing task T2 occupies CPU0 during 10 ms-30 ms, processing task T3 occupies CPU0 during 30 ms-50 ms, and processing task T4 occupies CPU0 during 50 ms-80 ms; that is, T1, T2, T3, and T4 can be executed in the order shown in FIG. 2 and cycled in sequence. T1, T2, and T4 are triggered at fixed times; for example, T1 is triggered at the 0th ms of any 100 ms period, T2 is triggered at the 10th ms of any 100 ms period, and T4 is triggered at the 50th ms of any 100 ms period. T3 is triggered by the event generated by T2 after T2 finishes processing data; that is, after T2 finishes processing data, it sends the generated event to the scheduler, and the scheduler schedules the event so that immediately after T2, T3 occupies CPU0 to process data.
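Under the Table 1 configuration, the per-period timeline stated above can be reproduced with a short sketch (a simplified model in which the four tasks run back-to-back within each 100 ms period):

```python
# Static arrangement of Table 1: (task, running time in ms), executed
# in order within each 100 ms period; T3 runs right after T2 because
# it is triggered by T2's completion event.
ARRANGEMENT = [("T1", 10), ("T2", 20), ("T3", 20), ("T4", 30)]

def timeline(period_start=0):
    """Return [(task, start_ms, end_ms)] for one 100 ms period."""
    slots, t = [], period_start
    for task, run_time in ARRANGEMENT:
        slots.append((task, t, t + run_time))
        t += run_time
    return slots
```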
方法1存在的问题:
11、由上述针对静态编排的调度策略可知,静待编排的调度策略不能适用于等级较高的自动驾驶系统,因为等级较高的自动驾驶系统中处理任务的数量较多且计算量较大,从而需要的计算资源的数量也较多,因此,在每个计算资源上进行人为编排每个处理任务的触发事件、运行时间、运行周期以及调度的优先级几乎是无法完成的。
12、在实际情况中,每个处理任务实际的运行时间以及运行周期是会变化的,难以按照预期发展,由于调度器会受到整体系统调度的影响(例如,在低优先级的处理任务并没有在该计算资源上运行完毕时,高优先级的处理任务会抢占低优先级处理任务的计算资源,导致低优先级的处理任务暂停处理)会导致部分业务无法正常完成处理,一个处理任务无法正常完成,就会影响下一个处理任务的执行时间,进而导致该计算资源上的每个处理任务运行的时间以及周期均会收到影响,从而降低了自动驾驶系统的确定性。
13、工程师为了保证一个周期内能够正常完成所有处理任务,一般会参考该周期内的各个处理任务的最大运行时间规划,以便为突发情况预留些时间的余量,导致运行周期偏长,增加端到端时延,也会对计算资源造成一定程度上的浪费。
方法2
完全公平调度算法:是指将所有处理任务对CPU计算资源的占用时间记录下来,在调度时,通过各个处理任务对CPU计算资源的占用时间来分配CPU资源,保证所有处理任务对CPU计算资源占用时间的公平性。比较各个处理任务对CPU计算资源的占用时间通过引入虚拟运行时间(vruntime)来实现,该方法如下:
vruntime=实际运行时间*1024/处理任务权重。
实际上vruntime就是根据权重将实际运行时间标准化,标准化之后,比较各个处理任务对计算资源的占用时间直接比较vruntime即可,其中,处理任务权重可以由工程师依据处理任务的重要程度来配置,例如,处理任务a最重要,则将处理任务a的处理任务权重配置得最高,使得处理任务a的vruntime最小,能够获取更多的运行时间。
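上述vruntime公式可以用如下示意性的Python草图直观表达(仅为说明,权重数值为示例假设,并非Linux内核的真实配置):

```python
# 示意性草图:依正文公式 vruntime = 实际运行时间 * 1024 / 处理任务权重
def vruntime(actual_runtime_ms, weight):
    """根据权重把实际运行时间标准化,vruntime越小越优先被调度。"""
    return actual_runtime_ms * 1024 / weight

# 任务a更重要,权重配置得更高;相同实际运行时间下vruntime更小,
# 因而会被优先选中,从而获得更多的运行时间
va = vruntime(10, weight=2048)  # 任务a
vb = vruntime(10, weight=1024)  # 任务b
# va=5.0 < vb=10.0,任务a优先
```

由此可见,权重只影响"公平"的倾斜程度,而不改变"按占用时间公平分配"这一基本原则。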
方法2存在的问题:
21、由上述针对完全公平调度算法的调度策略的描述可知,完全公平调度算法使得每个处理任务占用计算资源的时间是公平的。在这种情况下,自动驾驶系统中的任务越多,越不能保证每个任务完成时间的确定性,因此不能保证整个自动驾驶系统从输入数据到输出控制指令之间端到端时延的确定性。
22、完全公平调度算法是基于每个计算资源所做的独立调度,需要定期在各个计算资源之间做负载均衡,在这种情况下各个计算资源的负载不能时时保持均衡,负载均衡不及时也加剧了不确定性。
23、各个处理任务之间也会发生抢占,突如其来的其他处理任务的干扰会影响被抢占处理任务的正常处理,也加剧了不确定性。
24、各个处理任务之间是存在依赖关系的(例如,处理任务a与处理任务b之间存在依赖关系,即先运行完处理任务a才能运行处理任务b),若仅仅依据占用计算资源的时间进行公平调度,处理任务之间的依赖关系就会被忽略,例如,在未运行完处理任务a时就把处理任务b调度起来,会导致处理任务b无法正常运行,从而造成无效调度。
为了解决方法1和方法2出现的问题,本申请实施例提出了一种事件处理的方法,不仅可以运行在等级较高的自动驾驶系统上,也可以提高自动驾驶系统调度的确定性。如图3所示,该方法包括:
S301、从至少一个事件中选择第一事件。
需要说明的是,图3所示的步骤的执行主体可以为自动驾驶装置,例如,车、地铁、高铁等。
第一事件可以为至少一个事件中的任意一个事件,也可以为优先级最高的事件。至少一个事件可以为自动驾驶系统产生的所有的事件,也可以为自动驾驶系统产生的所有的事件中的部分事件(例如,各个处理任务发送给事件调度器的事件)。
其中,至少一个事件可以组成一个事件队列,事件队列中的事件可以按照优先级由高至低或者优先级由低至高排序。其中,优先级为处理任务优先级和/或事件优先级。选择一个事件的方式可以有以下两种。
方式1、在依据处理任务优先级进行选择的情况下,选择处理任务优先级最高的事件,如果选择出的事件为一个,则将该事件作为第一事件。如果选择出的事件有多个,再从这多个事件中选择事件优先级最高的事件,如果根据事件优先级选择出的事件为一个,则将该事件作为第一事件。如果根据事件优先级选择出的事件还有多个,则选择产生时间最早的事件作为第一事件。示例性的,先依据处理任务优先级选择出了处理任务优先级最高的三个事件(例如,事件1、事件2、事件3),再依据事件优先级选择事件优先级最高的事件,若只能从上述三个事件中选择出两个事件(例如,事件1和事件2),则再依据事件所产生的时间进行判断,优先选择时间较早的一个事件(例如,事件1)作为第一事件。
方式2、在依据事件优先级进行选择的情况下,选择事件优先级最高的事件,如果选择出的事件为一个,将该事件作为第一事件。如果选择出的事件有多个,再从这多个事件中选择处理任务优先级最高的事件,如果根据处理任务优先级选择出的事件为一个,将该事件作为第一事件。如果根据处理任务优先级选择出的事件还有多个,则选择产生时间最早的事件作为第一事件。示例性的,先依据事件优先级选择出了事件优先级最高的三个事件(例如,事件1、事件2、事件3),再依据处理任务优先级选择处理任务优先级最高的事件,若只能从上述三个事件中选择出两个事件(例如,事件1和事件2),则再依据事件所产生的时间进行判断,优先选择时间较早的一个事件(例如,事件1)作为第一事件。
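以方式1为例,上述"先比处理任务优先级、再比事件优先级、最后比产生时间"的选择逻辑可以用如下示意性的Python草图表达。其中`Event`结构、各字段名以及"数值越大优先级越高"的约定均为说明而假设:

```python
# 示意性草图:按"方式1"从事件集合中选出第一事件
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    task_prio: int   # 处理任务优先级,数值越大越高(假设)
    event_prio: int  # 事件优先级,数值越大越高(假设)
    timestamp: float # 产生时间,数值越小表示产生得越早

def select_first_event(events):
    """依次按处理任务优先级(降序)、事件优先级(降序)、产生时间(升序)选择。"""
    return min(events, key=lambda e: (-e.task_prio, -e.event_prio, e.timestamp))

events = [Event("事件1", 3, 5, 1.0), Event("事件2", 3, 5, 2.0), Event("事件3", 3, 4, 0.5)]
first = select_first_event(events)  # 事件1:前两级优先级并列,产生时间更早
```

方式2只需调换比较键中处理任务优先级与事件优先级的先后即可,此处不再重复。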
需要说明的是,上述方式1和方式2仅为选择事件的方式的示例。具体依据处理任务优先级、事件优先级、时间这三者怎样的先后顺序进行选择可以由工程师根据实际情况设置,本申请不作限制。
其中,事件队列可以存储在存储器中,图1所示的架构中的调度器可以从存储器中读取事件队列。
需要说明的是,S301可以由图1所示架构中的调度器执行。
S302、从未被占用的至少一个计算资源中选择第一计算资源。
其中,第一计算资源只能从未被占用的计算资源中选取。如果从已被占用的计算资源中选取,则可能会出现高优先级的事件抢占低优先级的事件的计算资源的情况:当已被占用的计算资源正在处理低优先级的事件时,如果出现高优先级的事件抢占该已被占用的计算资源,低优先级的事件就无法在确定时间内完成,进而加剧自动驾驶系统的不确定性。因此,从未被占用的计算资源中选取第一计算资源可以避免其他事件的干扰,从而保证每个事件都能在确定的时间内完成处理。
S302在具体实现时,事件调度器可以对不同的硬件资源(例如,硬件IO资源、NPU资源等)所产生的事件,和/或,处理任务在不同类型的计算资源上所产生的事件进行统一调度(例如,处理任务在NPU的计算资源上所产生的事件、处理任务在CPU的计算资源上所产生的事件)。由上述可知,本申请实施例可支持对处理任务在不同类型的计算资源上所产生的事件进行统一调度,不再对处理任务在不同类型的计算资源上所产生的事件分别调度,从而降低了不同类型的计算资源之间的通信开销和调度开销。事件调度器对不同的硬件资源所产生的事件进行统一调度时,也具备该有益效果,可参考进行理解,不再赘述。
可选的,在根据第一事件对应的处理任务和第一计算资源对处理任务对应的数据进行处理之前,该方法还包括:将第一计算资源设置为被占用的状态,避免后续调度其他事件时再选择第一计算资源导致发生抢占的问题。
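"从未被占用的资源中选取并立即置为被占用"的做法可以用如下示意性的Python草图理解。其中`ResourcePool`及其方法名均为说明而虚构:

```python
# 示意性草图:选取未被占用的计算资源并标记占用,避免后续调度发生抢占
class ResourcePool:
    def __init__(self, resources):
        self._free = set(resources)   # 未被占用的计算资源
        self._busy = set()            # 已被占用的计算资源

    def acquire(self):
        """选择一个未被占用的资源并标记为被占用;无可用资源时返回None。"""
        if not self._free:
            return None
        res = self._free.pop()
        self._busy.add(res)
        return res

    def release(self, res):
        """处理完成后将资源恢复为未被占用状态,供后续事件调度。"""
        self._busy.discard(res)
        self._free.add(res)

pool = ResourcePool(["CPU0", "CPU1"])
r = pool.acquire()   # r被置为被占用,后续调度其他事件时不会再选中它
pool.release(r)      # 处理完毕后归还
```

由于资源在被选中的同时即被标记占用,调度器天然不会把同一计算资源分配给两个事件,也就不存在抢占。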
需要说明的是,工程师可以将处理任务所需占用的计算资源进行预留,使得这些预留的计算资源上只运行与自动驾驶相关的事件或者处理任务,从而避免其他与自动驾驶系统无关的事件或者处理任务的干扰,进而提高自动驾驶系统的确定性。
需要说明的是,S302可以由图1所示架构中的调度器执行。
S303、根据第一事件对应的处理任务和第一计算资源对处理任务对应的数据进行处理。具体的,第一事件对应的处理任务可以占用第一计算资源对数据进行处理。
可选的,S303在具体实现时,包括:二级调度器从第一事件对应的处理任务所对应的数据队列中选择第一数据,再根据第一数据确定第一处理函数,最后根据第一处理函数和第一计算资源对第一数据进行处理。具体的,第一事件对应的处理任务从所对应的数据队列中选择第一数据,再根据第一数据确定第一处理函数,最后占用第一计算资源,并采用第一处理函数对第一数据进行处理。在后续过程中,该方法还包括:二级调度器从第一事件对应的处理任务所对应的数据队列中选择第二数据,再根据第二数据确定第二处理函数,最后根据第二处理函数和第一计算资源对第二数据进行处理。在该情况下,处理完第二数据还可以选择第三数据进行处理,具体处理数据队列中的多少个数据可以通过一定的算法进行设置,本申请不做限制,直至将所需的数据处理完成后,才完成第一事件对应的处理任务对应的数据队列的处理。需要说明的是,如果处理任务所需处理的数据仅有一个,在处理完第一数据后,则不需要再选择第二数据进行处理。第一处理函数和第二处理函数可以相同也可以不同。需要说明的是,一个数据队列可以对应一个处理函数,也可以对应多个处理函数,可以由工程师依据实际情况进行设置,本申请不作限制。
其中,由于事件需要触发处理任务工作,因此,事件与处理任务存在对应关系。而处理任务可以是进程,那么事件与进程也存在着与处理任务相同的对应关系。处理任务(也可称为进程)包括一个或多个线程,此时,进程中的某个线程可以对数据进行处理,可以通过以下方法A、方法B或方法C在进程中选择处理数据的线程,选择出的线程用于执行上述处理任务执行的动作。
方法A、若进程中创建的线程的数目与预留的计算资源的数目相同,并且线程与计算资源存在一一对应的关系,例如,如下表2所示,则确定进程中的与计算资源对应的线程为处理数据的线程。表2仅为事件、处理任务(也可称为进程)、计算资源之间的对应关系的一种示例,事件、处理任务、计算资源之间的对应关系可以由工程师根据实际情况进行设置,本申请不作限制。
表2
(表2以图片形式给出,示出事件、处理任务(也可称为进程)、线程与计算资源之间对应关系的一种示例)
方法B、若进程中创建的线程的数目小于预留的计算资源的数目,并且一个线程对应多个计算资源,则确定进程中的与计算资源对应的线程为处理数据的线程。
方法C、若线程与计算资源的关系动态变化,则将进程中的未被占用的一个线程确定为处理数据的线程。
示例性的,基于选择出的线程,S301-S303的实现过程可以为:事件调度器从事件队列中选择一个事件(即第一事件),选择一个未被占用的计算资源(即第一计算资源)。由于已经选择出第一事件,可以依据事件与处理任务的对应关系确定第一事件对应的处理任务。事件调度器从进程中的未被占用的线程中确定一个线程(记为第一线程),并通知Linux调度器对第一线程进行唤醒,二级调度器可以从数据队列中选择一个数据(即第一数据),再依据数据与处理函数的对应关系确定第一数据对应的处理函数(即第一处理函数),并根据第一处理函数在第一计算资源上对第一数据进行处理。
在进程中选择处理数据的线程的方法还可以为其他方法,可以由工程师依据实际情况进行设置,本申请不做限制。
需要说明的是,数据队列与处理函数也具有对应关系。当一个数据队列对应一个处理函数时(例如,表3中所示的数据队列A1只对应处理函数F1),该数据队列中的所有数据均对应该处理函数(例如,表3中所示的数据队列A1中的数据B1和数据B2均对应处理函数F1)。当一个数据队列对应多个处理函数时(例如,表3中所示的数据队列A2对应处理函数F1和处理函数F2),那么该数据队列中的各个数据可能对应不同的处理函数(例如,表3中所示的数据队列A2中的数据B3对应处理函数F1、数据B4和数据B5对应处理函数F2)。下表3仅为数据队列、数据、处理函数之间的对应关系的一种示例,数据队列、数据、处理函数的对应关系由工程师根据实际情况进行设置,本申请不作限制。
表3
数据队列 数据 处理函数
A1 B1 F1
A1 B2 F1
A2 B3 F1
A2 B4 F2
A2 B5 F2
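表3所描述的"数据与处理函数的对应关系"可以用如下示意性的Python草图表达。其中处理函数F1、F2的具体行为以及`process_queue`等名称均为说明而假设:

```python
# 示意性草图:按表3的对应关系,根据数据确定处理函数并依次处理数据队列
def f1(data):
    return f"F1({data})"

def f2(data):
    return f"F2({data})"

# 数据 -> 处理函数 的对应关系(对应表3)
HANDLER_OF = {"B1": f1, "B2": f1, "B3": f1, "B4": f2, "B5": f2}

def process_queue(queue):
    """依次从数据队列中取数据,查出其对应的处理函数并处理。"""
    return [HANDLER_OF[data](data) for data in queue]

results = process_queue(["B3", "B4", "B5"])  # 队列A2:B3由F1处理,B4、B5由F2处理
```

一个数据队列对应一个处理函数(如队列A1)只是该映射的特例:队列中所有数据都指向同一个处理函数。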
可选的,在S303之前,该方法还包括:当第一事件对应的处理任务运行在操作系统(例如,Linux系统)中时,第一事件对应的处理任务应避免直接或间接地被Linux系统调用,否则可能造成该处理任务中与自动驾驶系统有关的处理被Linux系统中断,导致第一事件对应的处理任务无法在确定的时间内完成。因此,为了防止在处理任务处理数据的过程中被操作系统中断,从而造成干扰,工程师可以预先将第一事件对应的处理任务中会被操作系统中断的因素去除。
可选的,在S303之前,该方法还包括:将第一事件对应的处理任务唤醒,唤醒就是在处理任务(也可称为进程)中选择一个线程进行唤醒,即将第一计算资源的占用权从Linux调度器过渡至该线程。
可选的,在根据第一处理函数和第一计算资源对第一数据进行处理的过程中,该方法还包括:若产生新的事件需要下一个处理任务执行,将新的事件添加至事件队列中。也就是说,如果产生的新的事件不需要下一个处理任务执行(例如,新的事件仍由该处理任务执行),则不将该新的事件添加至事件队列中,可以由该处理任务直接处理该新的事件。
可选的,在根据第一事件对应的处理任务和第一计算资源对处理任务对应的数据进行处理之后,该方法还包括:将第一计算资源设置为未被占用状态,以便后续事件调度器可以调度该计算资源。
需要说明的是,S303可以由图1所示架构中的处理任务(例如,传感器处理任务、感知处理任务、融合处理任务、规控处理任务等)执行。
需要说明的是,本申请实施例对S301和S302的执行顺序不作限定。
为了使得上述过程更加清楚,以下通过图4对上述过程进行示例性的说明,参见图4,该过程包括:
S401、在事件队列中选择一个事件作为第一事件(例如,事件1)。
S401可参见上述S301进行理解,此处不再赘述。
S402、在未被占用的计算资源中选择一个计算资源作为第一计算资源(例如,计算资源1)。
S402可参见上述S302进行理解,此处不再赘述。
S403、在事件与处理任务的对应关系中找出第一事件对应的处理任务(例如,事件1对应的是处理任务1)。
其中,由于处理任务可以是进程,则第一事件所对应的进程也已确定。
S403在具体实现时,若处理任务(也可称为进程)中包括多个线程,则需要事件调度器从多个线程中确定一个线程并通知Linux调度器将该线程唤醒。
在第一事件对应的处理任务中的该线程没有完成对所需数据的处理之前,事件调度器不会通知Linux调度器在该调度过程中再唤醒其他的线程,可以消除Linux调度器的影响。
其中,虽然Linux调度器采用的是CFS调度策略,但是在本申请中Linux调度器每次调度时只能选择一个线程,从而使得Linux调度器充当一个过渡计算资源的执行者。在该情况下,可以不针对Linux调度器内部的调度策略做更改,从而能够以更简化的方式实现本申请的技术效果。
S404、从接收到的来自上游的数据队列中选择一个数据作为第一数据(例如,数据1)。
在S404的具体实现过程中,该方法包括:从存储器中读取由上游发送的数据组织成的数据队列,再从该数据队列选择一个数据作为第一数据。
S405、在数据与处理函数的对应关系中找出第一数据对应的处理函数(例如,处理函数1)。
数据与处理函数的对应关系可以参见上述相应的描述,此处不再赘述。
S406、在第一计算资源上根据第一数据对应的处理函数对第一数据进行处理。
其中,在S406的处理过程中,若产生新的事件需要下一个处理任务执行,将新的事件添加至事件队列中。
S407、从数据队列中选择一个数据作为第二数据(例如,数据2)。
可参见上述S404进行理解,此处不再赘述。需要说明的是,如果处理任务所需处理的数据仅有一个时,在处理完第一数据后,则不需要再选择第二数据进行处理。
S408、在数据与处理函数的对应关系中找出第二数据对应的处理函数(例如,处理函数2)。
可参见上述S405进行理解,此处不再赘述。
S409、在第一计算资源上根据第二数据对应的处理函数对第二数据进行处理。
其中,在S409的处理过程中,若产生新的事件需要下一个处理任务执行,将新的事件添加至事件队列中。
后续可以继续在数据队列中选择其他数据进行处理,直至所需处理的数据均处理完毕后,通知事件调度器将第一计算资源设置为未被占用的状态。
在该情况下,可以使得后续的事件调度可以选择第一计算资源,促进资源占用的良性循环。
需要说明的是,上述S401-S402可以由事件调度器执行,上述S403由事件调度器和Linux调度器共同执行,上述S404、S405、S407、S408可以由二级调度器执行。
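上述S401-S409的流程可以用如下示意性的Python草图串联理解(仅为说明,并非本申请实施例的实际实现;事件队列、资源集合与各映射关系的组织方式均为简化假设):

```python
# 示意性草图:选事件 -> 选空闲资源 -> 找处理任务 -> 逐个取数据并用对应处理函数处理
import heapq

def scheduler_loop(event_queue, free_cpus, task_of_event, data_queues, handler_of):
    """event_queue为(优先级数值, 事件)的最小堆,数值越小优先级越高(假设)。"""
    log = []
    while event_queue and free_cpus:
        _, event = heapq.heappop(event_queue)    # 对应S401:选择第一事件
        cpu = free_cpus.pop()                    # 对应S402:选择未被占用的计算资源
        task = task_of_event[event]              # 对应S403:确定事件对应的处理任务
        for data in data_queues.get(task, []):   # 对应S404/S407:依次选择数据
            result = handler_of[data](data)      # 对应S405/S406:确定处理函数并处理
            log.append((task, cpu, result))
        free_cpus.add(cpu)                       # 处理完毕后资源恢复为未被占用
    return log

events = [(1, "事件1")]
heapq.heapify(events)
log = scheduler_loop(events, {"CPU0"}, {"事件1": "处理任务1"},
                     {"处理任务1": ["数据1", "数据2"]},
                     {"数据1": lambda d: d + "→已处理", "数据2": lambda d: d + "→已处理"})
```

草图中处理过程产生的新事件再入堆的分支从略,其逻辑与初始事件入队一致。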
本申请实施例提供的方法中,事件调度器在对一个事件进行调度之前,从未被占用的计算资源中选择一个计算资源,可以保证一个处理任务占用未被其他处理任务占用的一个计算资源,避免了多个处理任务抢占同一个计算资源,也就避免了事件之间的互相干扰,从而使得事件对应的处理任务在确定时间内完成对所需数据的处理,还使得端到端时延是确定的,进而提高了整个自动驾驶系统的确定性。并且也无需对处理任务做静态编排,大幅度地提升了开发人员的开发效率。
需要说明的是,本申请实施例对S401和S402的执行顺序不作限定。
为了使得本申请实施例更加的清楚,以下通过一个具体的流程(具体可参见图7)对上述实施例提供的方法做示例性说明。该方法可以应用在图5和图6所示的架构中。图5和图6的区别是事件调度器的位置不同,图5所示的架构中将事件调度器置于软件操作系统中,图6所示的架构中将事件调度器置于硬件资源中。事件调度器作为一个硬件资源时,调度事件的时间可以缩短,并且可以直接接收其他硬件资源所产生的事件,不经过软件操作系统,进一步的缩短了调度事件的时间,还可以避免软件操作系统对处理任务的干扰,提高调度事件的确定性。
图5和图6所示的架构中包括硬件资源、事件队列、事件调度器、Linux调度器、处理任务(即进程,进程中包括线程)。
硬件资源包括硬件IO资源和计算资源。其中,硬件IO资源用于与自动驾驶系统之外的设备进行通信,例如,与以太网卡的接口(Ethernet,ETH),或者控制器局域网(Controller Area Network,CAN)通信,并在接收到外部数据后产生IO事件。计算资源包括CPU上的计算资源与NPU上的计算资源,计算资源用于对事件以及事件对应的处理任务中所包含的数据进行处理,并在处理数据的过程中或者处理之后产生新的事件。另外,自动驾驶系统所用的计算资源可以由工程师预先预留,为了提高自动驾驶系统的确定性,预先预留的计算资源可以仅供自动驾驶系统使用。
事件队列中包括按照优先级由低至高或者由高至低排序的所有或者部分事件。由于事件队列的内存有限,可能无法保存所有事件,因此,可以依据策略将部分事件置于事件队列中。该策略可由工程师进行配置,例如,依据优先级的规则放置,优先级高的事件优先放置在事件队列中。
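"事件队列内存有限、优先级高的事件优先保留"这一策略可以用如下示意性的Python草图理解。其中`BoundedEventQueue`、容量上限以及"优先级数值越大越高"的约定均为说明而假设:

```python
# 示意性草图:容量受限的事件队列,按优先级保留事件
import heapq

class BoundedEventQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []  # 最小堆,堆顶为当前保留事件中优先级最低者

    def push(self, priority, event):
        """容量未满直接入队;已满时仅当新事件优先级高于堆顶时替换之。"""
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, (priority, event))
        elif priority > self._heap[0][0]:
            heapq.heapreplace(self._heap, (priority, event))

    def pop_highest(self):
        """取出当前优先级最高的事件,供事件调度器选择。"""
        prio, event = max(self._heap)
        self._heap.remove((prio, event))
        heapq.heapify(self._heap)
        return event
```

按该策略,当队列已满时,低优先级的新事件会被丢弃,而高优先级的新事件会挤掉已保留的最低优先级事件。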
事件调度器用于选择未被占用的计算资源,还用于查看事件队列中的事件,并从事件队列中选择事件。
处理任务(也可称为进程)中包括一个或多个线程,线程执行二级调度器和处理函数的功能。也就是说,线程可以执行两部分功能,一部分执行二级调度器的功能,一部分执行处理函数的功能。
Linux调度器用于将处理任务中的线程唤醒。
参见图7,包括:
S701、事件调度器将硬件资源产生的事件以及处理任务产生的事件按照优先级置于事件队列中。
S702、事件调度器查看事件队列中是否有事件。
若有,则执行S703。若无,则结束流程。
S703、事件调度器查看有无未被占用的CPU的计算资源。
若有,则执行S704。若无,则结束流程。
S704、事件调度器从未被占用的CPU的计算资源中选择第一计算资源,并将第一计算资源设置为被占用状态。
可参见上述S302进行理解,此处不再赘述。
S705、事件调度器从事件队列中选择优先级最高的事件作为第一事件,并确定第一事件对应的处理任务。
可参见上述S301进行理解,此处不再赘述。
S706、事件调度器在第一事件对应的处理任务(也可称为进程)中的一个或多个线程中确定一个线程作为第一线程,并通知Linux调度器将第一线程唤醒。
可参见上述S403进行理解,此处不再赘述。
S707、二级调度器从第一事件对应的处理任务所对应的数据队列中选择一个数据作为第一数据。
可参见上述S404进行理解,此处不再赘述。
S708、二级调度器根据第一数据确定第一数据对应的处理函数。
可参见上述S405进行理解,此处不再赘述。
S709、第一线程在第一计算资源上根据第一数据对应的处理函数对第一数据进行处理。
其中,在对第一数据进行处理的过程中,若产生新的事件需要下一个处理任务执行,则将所产生的新的事件置于事件队列。
S710、二级调度器从第一事件对应的处理任务所对应的数据队列中选择一个数据作为第二数据。
可参见上述S407进行理解,此处不再赘述。
S711、二级调度器根据第二数据确定第二数据对应的处理函数。
可参见上述S408进行理解,此处不再赘述。
S712、第一线程在第一计算资源上根据第二数据对应的处理函数对第二数据进行处理。
其中,在对第二数据进行处理的过程中,若产生新的事件需要下一个处理任务执行,则将所产生的新的事件置于事件队列。后续可以继续在数据队列中选择其他数据进行处理,直至所需处理的数据均处理完毕,但是如果处理任务所需处理的数据仅有一个时,在处理完第一数据后,则不需要再选择第二数据进行处理。
S713、二级调度器将第一事件对应的处理任务处理完成的消息发送至事件调度器。相应的,事件调度器接收第一事件对应的处理任务处理完成的消息。
其中,第一事件对应的处理任务处理完成的消息用于将第一计算资源的占用权从第一线程再过渡至事件调度器。
S714、事件调度器将第一计算资源设置为未被占用的状态。
上述主要从方法的角度对本申请实施例的方案进行了介绍。可以理解的是,事件处理装置为了实现上述功能,其包含了执行各个功能相应的硬件结构和软件模块中的至少一个。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法示例对事件处理装置进行功能单元的划分,例如,可以对应各个功能划分各个功能单元,也可以将两个或两个以上的功能集成在一个处理单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。需要说明的是,本申请实施例中对单元的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
示例性的,图8示出了上述实施例中所涉及的事件处理装置(记为事件处理装置80)的一种可能的结构示意图,该事件处理装置80包括处理单元801和存储单元802。
处理单元801用于对事件处理装置的动作进行控制管理,例如,处理单元801用于执行图3中的301-303,图4中的401-409,图7中的701-714,和/或本申请实施例中所描述的其他过程中的事件处理装置执行的动作。存储单元802用于存储事件处理装置的程序代码和数据。
可选的,该事件处理装置80还包括通信单元803。处理单元801可以通过通信单元803与其他网络实体通信。例如,通信单元803可以为硬件IO资源,硬件IO资源可以与事件处理装置之外的设备进行通信,例如,与ETH,或者CAN通信。
示例性的,事件处理装置80可以为一个设备也可以为芯片或芯片系统。
当事件处理装置80为一个设备时,处理单元801可以是处理器;通信单元803可以是通信接口、收发器,或,输入接口和/或输出接口。可选地,收发器可以为收发电路。可选地,输入接口可以为输入电路,输出接口可以为输出电路。
当事件处理装置80为芯片或芯片系统时,通信单元803可以是该芯片或芯片系统上的通信接口、输入接口和/或输出接口、接口电路、输出电路、输入电路、管脚或相关电路等。处理单元801可以是处理器、处理电路或逻辑电路等。
图8中的集成的单元如果以软件功能模块的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(processor)执行本申请各个实施例方法的全部或部分步骤。存储计算机软件产品的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
本申请实施例还提供了一种事件处理装置的硬件结构示意图,参见图9,该事件处理装置包括处理器901,可选的,还包括与处理器901连接的存储器902。
处理器901可以是一个CPU、微处理器、特定应用集成电路(application-specific integrated circuit,ASIC),或者一个或多个用于控制本申请方案程序执行的集成电路。处理器901也可以包括多个CPU,并且处理器901可以是一个单核(single-CPU)处理器,也可以是多核(multi-CPU)处理器。这里的处理器可以指一个或多个设备、电路或用于处理数据(例如计算机程序指令)的处理核。
存储器902可以是ROM或可存储静态信息和指令的其他类型的静态存储设备、RAM或者可存储信息和指令的其他类型的动态存储设备,也可以是电可擦可编程只读存储器(electrically erasable programmable read-only memory,EEPROM)、只读光盘(compact disc read-only memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,本申请实施例对此不作任何限制。存储器902可以是独立存在(此时,存储器902可以位于事件处理装置外,也可以位于事件处理装置内),也可以和处理器901集成在一起。其中,存储器902中可以包含计算机程序代码。处理器901用于执行存储器902中存储的计算机程序代码,从而实现本申请实施例提供的方法。
处理器901用于对事件处理装置的动作进行控制管理,例如,处理器901用于执行图3中的301-303,图4中的401-409,图7中的701-714,和/或本申请实施例中所描述的其他过程中的事件处理装置执行的动作。存储器902用于存储事件处理装置的程序代码和数据。
可选的,事件处理装置还包括收发器,或者,处理器901包括逻辑电路以及输入接口和/或输出接口。处理器901可以通过收发器,或,通过输入接口和/或输出接口与其他网络实体通信。例如,收发器,或,输入接口和/或输出接口可以为硬件IO资源,硬件IO资源可以与事件处理装置之外的设备进行通信,例如,与ETH,或者CAN通信。
在实现过程中,本实施例提供的方法中的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。
本申请实施例还提供了一种计算机可读存储介质,包括计算机可执行指令,当其在计 算机上运行时,使得计算机执行上述任一方法。
本申请实施例还提供了一种计算机程序产品,包含计算机可执行指令,当其在计算机上运行时,使得计算机执行上述任一方法。
本申请实施例还提供了一种事件处理装置,包括:处理器和接口,处理器通过接口与存储器耦合,当处理器执行存储器中的计算机程序或计算机可执行指令时,使得上述实施例提供的任意一种方法被执行。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件程序实现时,可以全部或部分地以计算机程序产品的形式来实现。该计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时,全部或部分地产生按照本申请实施例的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,计算机指令可以从一个网站站点、计算机、服务器或者数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可以用介质集成的服务器、数据中心等数据存储设备。可用介质可以是磁性介质(例如,软盘、硬盘、磁带),光介质(例如,DVD)、或者半导体介质(例如固态硬盘(solid state disk,SSD))等。
尽管在此结合各实施例对本申请进行了描述,然而,在实施所要求保护的本申请过程中,本领域技术人员通过查看附图、公开内容、以及所附权利要求书,可理解并实现公开实施例的其他变化。在权利要求中,“包括”(comprising)一词不排除其他组成部分或步骤,“一”或“一个”不排除多个的情况。单个处理器或其他单元可以实现权利要求中列举的若干项功能。相互不同的从属权利要求中记载了某些措施,但这并不表示这些措施不能组合起来产生良好的效果。
尽管结合具体特征及其实施例对本申请进行了描述,显而易见的,在不脱离本申请的保护范围的情况下,可对其进行各种修改和组合。相应地,本说明书和附图仅仅是所附权利要求所界定的本申请的示例性说明,且视为已覆盖本申请范围内的任意和所有修改、变化、组合或等同物。显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的保护范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。

Claims (19)

  1. 一种事件处理方法,其特征在于,包括:
    从至少一个事件中选择第一事件;
    从未被占用的至少一个计算资源中选择第一计算资源;
    根据所述第一事件对应的处理任务和所述第一计算资源对所述处理任务对应的数据进行处理。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述第一事件对应的处理任务和所述第一计算资源对所述处理任务对应的数据进行处理,包括:
    从所述处理任务所对应的数据队列中选择第一数据;
    根据所述第一数据确定第一处理函数;
    根据所述第一处理函数和所述第一计算资源对所述第一数据进行处理。
  3. 根据权利要求2所述的方法,其特征在于,所述根据所述第一事件对应的处理任务和所述第一计算资源对所述处理任务对应的数据进行处理,还包括:
    从所述处理任务所对应的数据队列中选择第二数据;
    根据所述第二数据确定第二处理函数;
    根据所述第二处理函数和所述第一计算资源对所述第二数据进行处理。
  4. 根据权利要求2或3所述的方法,其特征在于,在所述根据所述第一处理函数和所述第一计算资源对所述第一数据进行处理的过程中,所述方法还包括:
    若产生新的事件需要下一个处理任务执行,将所述新的事件添加至事件队列中。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,在所述根据所述第一事件对应的处理任务和所述第一计算资源对所述处理任务对应的数据进行处理之前,所述方法还包括:
    将所述第一计算资源设置为被占用的状态。
  6. 根据权利要求1-5任一项所述的方法,其特征在于,所述计算资源包括中央处理器CPU的计算资源。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,所述从至少一个事件中选择第一事件,包括:
    从事件队列中选择优先级最高的事件作为所述第一事件。
  8. 根据权利要求1-7任一项所述的方法,其特征在于,在所述根据所述第一事件对应的处理任务和所述第一计算资源对所述处理任务对应的数据进行处理之后,所述方法还包括:
    将所述第一计算资源设置为未被占用状态。
  9. 一种事件处理装置,其特征在于,包括:处理单元;所述处理单元,用于:
    从至少一个事件中选择第一事件;
    从未被占用的至少一个计算资源中选择第一计算资源;
    根据所述第一事件对应的处理任务和所述第一计算资源对所述处理任务对应的数据进行处理。
  10. 根据权利要求9所述的装置,其特征在于,所述处理单元,具体用于:
    从所述处理任务所对应的数据队列中选择第一数据;
    根据所述第一数据确定第一处理函数;
    根据所述第一处理函数和所述第一计算资源对所述第一数据进行处理。
  11. 根据权利要求10所述的装置,其特征在于,所述处理单元,具体用于:
    从所述处理任务所对应的数据队列中选择第二数据;
    根据所述第二数据确定第二处理函数;
    根据所述第二处理函数和所述第一计算资源对所述第二数据进行处理。
  12. 根据权利要求10或11所述的装置,其特征在于,在所述处理单元根据所述第一处理函数和所述第一计算资源对所述第一数据进行处理的过程中,所述处理单元,还用于:
    若产生新的事件需要下一个处理任务执行,将所述新的事件添加至事件队列中。
  13. 根据权利要求9-12任一项所述的装置,其特征在于,所述处理单元,还用于:
    将所述第一计算资源设置为被占用的状态。
  14. 根据权利要求9-13任一项所述的装置,其特征在于,所述计算资源包括中央处理器CPU的计算资源。
  15. 根据权利要求9-14任一项所述的装置,其特征在于,所述处理单元,具体用于:
    从事件队列中选择优先级最高的事件作为所述第一事件。
  16. 根据权利要求9-15任一项所述的装置,其特征在于,所述处理单元,还用于:
    将所述第一计算资源设置为未被占用状态。
  17. 一种事件处理装置,其特征在于,包括:处理器,所述处理器与存储器耦合;
    所述存储器,用于存储计算机程序;
    所述处理器,用于执行所述存储器中存储的所述计算机程序,以使得所述事件处理装置执行如权利要求1-8中任一项所述的方法。
  18. 一种计算机可读存储介质,其特征在于,包括计算机程序,当所述计算机程序在计算机上运行时,使得所述计算机执行如权利要求1-8中任一项所述的方法。
  19. 一种计算机程序产品,其特征在于,包括计算机程序,当所述计算机程序在计算机上运行时,使得所述计算机执行如权利要求1-8中任一项所述的方法。
PCT/CN2020/141805 2020-12-30 2020-12-30 事件处理方法和装置 WO2022141297A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/141805 WO2022141297A1 (zh) 2020-12-30 2020-12-30 事件处理方法和装置
CN202080108269.5A CN116670650A (zh) 2020-12-30 2020-12-30 事件处理方法和装置

Publications (1)

Publication Number Publication Date
WO2022141297A1 true WO2022141297A1 (zh) 2022-07-07

Family

ID=82260078


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150268996A1 (en) * 2012-12-18 2015-09-24 Huawei Technologies Co., Ltd. Real-Time Multi-Task Scheduling Method and Apparatus
CN105045658A (zh) * 2015-07-02 2015-11-11 西安电子科技大学 一种利用多核嵌入式dsp实现动态任务调度分发的方法
CN109213143A (zh) * 2017-07-03 2019-01-15 百度(美国)有限责任公司 操作自动驾驶车辆的使用事件循环的集中调度系统
CN111338785A (zh) * 2018-12-18 2020-06-26 北京京东尚科信息技术有限公司 资源调度方法及装置、电子设备、存储介质


Also Published As

Publication number Publication date
CN116670650A (zh) 2023-08-29


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20967636

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202080108269.5

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20967636

Country of ref document: EP

Kind code of ref document: A1