WO2019056263A1 - Computer storage medium and embedded scheduling method and system - Google Patents
- Publication number
- WO2019056263A1 (PCT/CN2017/102701)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- message
- event
- task
- processed
- target
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
Definitions
- the present application relates to the field of computer software, and in particular, to a computer storage medium, an embedded scheduling method, and a system.
- an embedded system is a special-purpose computer system designed for a specific application and fully embedded inside the device it controls. It is application-oriented and based on computer technology, with software and hardware that can be tailored to the application system: a dedicated computer system with strict requirements on function, reliability, cost, size, power consumption, and so on.
- an embedded operating system refers to an operating system for an embedded system, and generally includes hardware-related low-level driver software, a system kernel, device driver interfaces, communication protocols, a graphical interface, standard browsers, and so on.
- Operating systems widely used in the embedded field include embedded Linux, Windows Embedded, VxWorks, and, for smartphones and tablets, Android and iOS.
- the embedded operating system is responsible for all software and hardware resource allocation, task scheduling, control, and coordination of concurrent activities of the embedded system.
- Multi-task scheduling systems, such as embedded scheduling systems, are built into these embedded operating systems to carry out the system's complex scheduling work.
- Such an embedded scheduling system has the advantages of being well designed, optimized, and powerful, and of sparing application developers from designing and developing their own scheduler.
- the present application provides a computer storage medium, an embedded scheduling method, and a system, which are used to solve the problem that existing embedded scheduling solutions cannot simultaneously balance usability, memory resource consumption, and startup time.
- a first aspect of the present application provides an embedded scheduling method, including: traversing the current value of each bit in the first integer corresponding to each task, where tasks are in one-to-one correspondence with first integers, and the bits in a task's first integer are in one-to-one correspondence with the events supported by that task; determining the event corresponding to any bit whose current value is the first value as a current event to be processed; and calling the task that supports the event to be processed, and processing the event to be processed.
- a second aspect of the present application provides an embedded scheduling system, including: a query module configured to traverse the current value of each bit in the first integer corresponding to each task, where tasks are in one-to-one correspondence with first integers, and the bits in a task's first integer are in one-to-one correspondence with the events supported by that task; the query module being further configured to determine the event corresponding to any bit whose current value is the first value as a current event to be processed; and a processing module configured to call the task that supports the event to be processed and process that event.
- a third aspect of the present application provides an embedded scheduling system including at least one processor and a memory; the memory stores computer-executable instructions, and the at least one processor executes the stored instructions to perform the method described above.
- a fourth aspect of the present application provides a computer storage medium storing program instructions that, when executed by a processor, implement the method described above.
- in the computer storage medium, embedded scheduling method, and system provided by the present application, each task is assigned a corresponding integer, each event is assigned a corresponding bit, and the bits corresponding to different events are single-bit values that do not interfere with one another; the value of a bit indicates whether an event corresponding to that bit is currently waiting to be processed.
- during task scheduling, by traversing the values of the bits in the integer corresponding to each task, the current events to be processed can be determined quickly and accurately, and the corresponding tasks are called to process them.
- the scheme adopts event-driven task scheduling, so that all events of a task can be cached in an integer. This achieves good usability with a lightweight, low-resource-consumption scheduling scheme that can be effectively applied to devices short of hardware resources.
- the solution is also easy to implement, consumes few resources, and starts quickly.
- the above solution can be used both for scheduling common tasks and for calling deeply hierarchical systems while reducing system coupling.
- FIGS. 1A-1E are schematic flowcharts of an embedded scheduling method according to Embodiment 1 of the present application.
- FIG. 1F is a schematic diagram of an example of task registration
- FIG. 1G is a schematic diagram of a scheduling process in Embodiment 1 of the present application.
- FIGS. 2A-2B, 2D-2G, and 2I are schematic flowcharts of an embedded scheduling method according to Embodiment 2 of the present application;
- FIG. 2C is a schematic diagram of a scheduling process in Embodiment 2 of the present application.
- FIG. 2H is a diagram showing an example of the message buffering process in Embodiment 2 of the present application.
- FIGS. 3A-3D are schematic structural diagrams of an embedded scheduling system according to Embodiment 3 of the present application.
- FIGS. 4A-4D are schematic structural diagrams of an embedded scheduling system according to Embodiment 4 of the present application.
- FIG. 5 is a schematic structural diagram of an embedded scheduling system according to Embodiment 5 of the present application.
- FIG. 1A is a schematic flowchart of an embedded scheduling method according to Embodiment 1 of the present application. As shown in FIG. 1A, this embodiment provides an embedded scheduling method used to implement a lightweight, low-resource-consumption scheduling scheme. Specifically, the embedded scheduling method includes:
- Step 101: Traverse the current value of each bit in the first integer corresponding to each task, where tasks are in one-to-one correspondence with first integers, and the bits in a task's first integer are in one-to-one correspondence with the events supported by that task;
- Step 102: Determine, in the first integer corresponding to each task, the event corresponding to any bit whose current value is the first value as a current event to be processed;
- Step 103: Call the task that supports the event to be processed, and process the event to be processed.
- the execution body of the embedded scheduling method may be an embedded scheduling system.
- the embedded scheduling system may be a medium storing related execution code, for example a USB flash drive; or it may be a physical device integrated or installed with the related execution code, for example a chip, a smart terminal, or a computer.
- each task is assigned a corresponding integer in advance, and corresponding bits are allocated for events supported by each task.
- the method further includes: assigning a corresponding integer to each task, and assigning the integer bits of the task to the events supported by the task respectively.
- the tasks currently supported for calling are task A, task B, and task C.
- the events supported by task A are A1 and A2; the events supported by task B are B1, B2, and B3;
- the event supported by task C is C1.
- the task A is assigned an integer a
- the task B is assigned an integer b
- the task C is assigned an integer c.
- the integers a, b, and c can all include multiple bits: bit a1 of integer a is assigned to event A1, bit a2 of integer a is assigned to event A2, bit b1 of integer b is assigned to event B1, bit b2 of integer b to event B2, bit b3 of integer b to event B3, and so on. For each task, the bits of its corresponding integer are assigned to the events supported by that task; specifically, tasks correspond one-to-one with integers, and the bits of a task's integer correspond one-to-one with the events supported by that task.
- the different values of the bit corresponding to an event indicate whether the event currently needs to be processed. For example, if an event is received and needs to be processed, the value of its bit is set to a corresponding value to represent that the event currently needs processing; correspondingly, when the bit holds the other value, it represents that the event does not need to be processed.
- the method further includes: if a new event is detected, setting a value of a bit corresponding to the new event to the first value.
- the value of the corresponding bit may be updated to improve the accuracy and reliability of subsequent scheduling.
- the embedded scheduling method may further include: if the processing of the to-be-processed event is completed, setting a bit corresponding to the to-be-processed event to a second value.
- the terms "first value" and "second value" merely indicate that the two values differ; the specific values may be customized, for example the first value may be set to 1 and the second value to 0.
- the new event mentioned here is a newly generated event that needs to be processed.
- the new event may be received from outside the scheduling system, or may be generated internally, triggered by the processing of an earlier event within the scheduling system.
- in short, whenever a newly generated event needs processing, the bit corresponding to that event is set to the corresponding value.
- the value of the bit corresponding to the event may include 1 and 0.
- 1 can be used to indicate that the corresponding event currently needs to be processed;
- 0 is used to indicate that it currently does not.
- the value of the bit corresponding to each event can be initially set to 0.
- if events A1 and B1 need processing, the bit a1 corresponding to event A1 and the bit b1 corresponding to event B1 are set to 1; when subsequent scheduling detects that the values of bits a1 and b1 are 1, it can determine that the events A1 and B1 corresponding to those bits are pending events.
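- as an illustration only, the bit operations described above can be sketched in C. All names here (post_event, complete_event, event_pending, and the bit positions) are hypothetical and merely illustrate setting, clearing, and testing the bit assigned to an event:

```c
#include <stdint.h>

/* Hypothetical illustration: each task owns one integer whose bits
 * map one-to-one onto the events that task supports. */
enum { EVENT_A1 = 0, EVENT_A2 = 1 };   /* bit positions within task A's integer */

static uint32_t task_a_events = 0;      /* all bits start at 0: nothing pending */

/* A new event arrives: set its bit to the "first value" (1). */
static void post_event(uint32_t *events, int bit) {
    *events |= (1u << bit);
}

/* The event has been processed: clear its bit to the "second value" (0). */
static void complete_event(uint32_t *events, int bit) {
    *events &= ~(1u << bit);
}

/* The scheduler's traversal test: is this event pending? */
static int event_pending(uint32_t events, int bit) {
    return (events >> bit) & 1u;
}
```

because each event occupies its own single bit, posting or completing one event cannot interfere with the bits of any other event.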
- FIG. 1B is a schematic flowchart of another embedded scheduling method according to Embodiment 1 of the present application.
- the method further includes:
- Step 105 Allocate the bits of the first integer corresponding to the task to the event supported by the task in a one-to-one correspondence.
- an integer is assigned to each task, and the number of bits in each task's integer matches the number of events supported by that task. Continuing the foregoing example: task A supports two events, so it is assigned an integer a with 2 bits (integer a includes bits [a1][a2]); task B supports three events, so it is assigned an integer b with 3 bits (integer b includes bits [b1][b2][b3]); and task C supports one event, so it is assigned an integer c with 1 bit (integer c includes bit [c1]).
- the bits of each task's integer are assigned one-to-one to the events that task supports.
- for task A, bits a1 and a2 are assigned to events A1 and A2 respectively (or in another order, e.g., a1 to event A2 and a2 to event A1); for task B, bits b1, b2, and b3 are assigned to events B1, B2, and B3 respectively (e.g., b1 to event B2, b2 to event B1, and b3 to event B3); and for task C, bit c1 is assigned to event C1.
- if a new event C1 is detected, the value of its corresponding bit c1 is set to the first value, for example 1; then, by traversing the current value of each bit in the integer corresponding to each task, the event whose bit value is 1 can be determined quickly and accurately. That is, event C1 is determined as a pending event, and the task supporting event C1, namely task C, is called to process event C1.
- the integers corresponding to the tasks may be distributed in various ways, for example discretely or adjacently. Discrete distribution allows the integers to be placed anywhere, so it is more flexible; alternatively, adjacent distribution improves the compactness of the integer layout, further reducing the processing resources consumed when traversing the integers corresponding to the tasks.
- the integers may be generated by establishing an integer array. Specifically, FIG. 1C is a schematic flowchart of yet another embedded scheduling method provided in Embodiment 1 of the present application. On the basis of the first embodiment, step 104 specifically includes:
- establishing a corresponding one-dimensional array and assigning the integers in the array to the tasks on a one-to-one basis; then, for each task, the bits of its integer are assigned one-to-one to the events supported by that task.
- the number of the bits of each integer may be consistent with the number of events supported by the corresponding task.
- a one-dimensional integer array including three integers a, b, and c is established, and the integers a, b, and c are allocated one by one.
- integer a can be assigned to task A
- integer b can be assigned to task B
- integer c can be assigned to task C.
- all events are cached by a one-dimensional array of integers, and each task can be assigned an integer in the array as an event buffer of the events it supports.
- the cache defaults to 0; that is, the initial value of each bit can be set to 0, meaning that no event currently needs processing.
- each event corresponds to one bit of an integer. If an event needs to be processed, the value of its bit is set to 1; after the event has been processed, the bit is set back to 0.
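- a minimal sketch in C of the one-dimensional-array event cache and the scheduling traversal described above; the names event_cache, schedule_once, and the task count are assumptions for illustration. One integer per task serves as that task's event buffer, and one pass over all bits dispatches and clears every pending event:

```c
#include <stdint.h>

#define NUM_TASKS 3

/* One integer per task; each integer is that task's event cache.
 * Zero-initialised, so initially no event needs processing. */
static uint32_t event_cache[NUM_TASKS];

typedef void (*event_handler_t)(int task, int bit);

/* Demo handler used only to observe dispatches in this sketch. */
static int calls;
static void count_handler(int task, int bit) { (void)task; (void)bit; calls++; }

/* One scheduling pass: traverse every bit of every task's integer,
 * hand each set bit (a pending event) to the handler, then clear it. */
static int schedule_once(event_handler_t handle) {
    int processed = 0;
    for (int t = 0; t < NUM_TASKS; t++) {
        for (int bit = 0; bit < 32; bit++) {
            if (event_cache[t] & (1u << bit)) {
                handle(t, bit);                 /* call the owning task */
                event_cache[t] &= ~(1u << bit); /* event processed: back to 0 */
                processed++;
            }
        }
    }
    return processed;
}
```

in a real scheduler this pass would run in the loop of FIG. 1G, sleeping when it finds no set bits.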
- in this implementation, an integer corresponding to each task is generated by establishing a one-dimensional array, thereby reducing processing and time consumption and improving scheduling efficiency, so that the scheduling scheme is better suited to devices with limited hardware resources.
- FIG. 1D is a schematic flowchart of still another embedded scheduling method according to Embodiment 1 of the present application. On the basis of Embodiment 1, before step 105, the method further includes:
- Step 106: Determine the priority of each event; correspondingly, step 105 specifically includes:
- the processing priority of each event may differ: some events need to be processed first, while others may be deferred.
- in this implementation, event processing priority is introduced without specifically configuring resources for priority scheduling, so priority control of event processing is implemented simply and conveniently. Specifically, the priority of each event may be determined as needed; after the priorities are determined, and in combination with the foregoing implementation, bits are allocated to the events while considering both bit order and event priority.
- each event is assigned its corresponding bit according to the principle that bit order is consistent with event priority.
- "consistent" here includes both the same order and the reverse order; that is, any regular mapping between bit order and event priority suffices. For example, bits may be assigned in the same order as event priority, or in the reverse order, and this is not limited here.
- the priority of each event may be determined first, and bits then allocated on the principle that bit order is consistent with event priority, so that when traversing the bits of each integer, determining the events to be processed, and processing them, the pending events may simply be processed in bit order. Because the correspondence between bit order and event priority was introduced when the bits were allocated, subsequent event processing in bit order is in fact event processing in priority order.
- processing events in bit order thus simply and elegantly exploits the properties of the bit sequence to implement priority control of event processing, further saving the resources required for scheduling.
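- one possible sketch of this, assuming the convention that a lower bit index means a higher priority (the application allows the reverse convention too): scanning for the lowest set bit then yields the highest-priority pending event. The function name is illustrative, not prescribed by the application:

```c
#include <stdint.h>

/* Assuming bits were assigned so that bit order matches event priority
 * (here: lower bit index = higher priority), picking the lowest set bit
 * picks the highest-priority pending event of this task's integer. */
static int highest_priority_event(uint32_t events) {
    if (events == 0)
        return -1;              /* nothing pending */
    int bit = 0;
    while (!(events & 1u)) {    /* scan upward to the lowest set bit */
        events >>= 1;
        bit++;
    }
    return bit;
}
```

no priority queue or extra scheduling structure is needed; the bit position itself encodes the priority.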
- the tasks in this solution include any task that can be invoked by the current scheduling system.
- the method further includes: receiving a registration request, where the registration request includes a task processing function of the task; and performing function registration on the task according to the task processing function.
- the specific solution of the function registration can be implemented in combination with the existing task registration scheme, and will not be described here.
- FIG. 1F is an example schematic diagram of task registration. Task registration is completed through data interaction between the task and the scheduler.
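- the registration step might be sketched as follows; the table size, names, and return convention are assumptions for illustration. The scheduler records each task's processing function so that it can later be invoked when one of the task's events fires:

```c
#include <stddef.h>

#define MAX_TASKS 8

typedef void (*task_fn_t)(int event_bit);

/* Registration table: one processing function per registered task. */
static task_fn_t task_table[MAX_TASKS];
static int task_count;

/* Placeholder handler standing in for a real task processing function. */
static void demo_task(int event_bit) { (void)event_bit; }

/* Returns the task id assigned to the registered function,
 * or -1 when the table is full or the request is invalid. */
static int register_task(task_fn_t fn) {
    if (task_count >= MAX_TASKS || fn == NULL)
        return -1;
    task_table[task_count] = fn;
    return task_count++;
}
```

the returned id could then index both the task table and the task's integer in the event cache array.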
- FIG. 1G is an exemplary diagram of a scheduling process in Embodiment 1 of the present invention.
- the system detects whether there is a pending event and, if so, invokes the corresponding task.
- the pending event is processed, and after processing completes, the flow returns to the step of detecting whether there is a pending event.
- scheduling is achieved through this repeating cycle.
- if there is no pending event, a sleep state may be entered to save device power or other consumed resources; after being woken by an existing wake-up mechanism, the system again detects whether there are pending events.
- the embedded scheduling method assigns a corresponding integer to each task and a corresponding bit to each event, with the bits corresponding to different events being single-bit values that do not interfere with one another; the value of a bit is used to indicate whether an event corresponding to that bit is currently waiting to be processed.
- by traversing the values of the bits in the integer corresponding to each task, the current pending events can be determined quickly and accurately, and the corresponding tasks are then called to process them.
- the scheme adopts event-driven task scheduling, so that all events of a task can be cached in an integer. This achieves good usability with a lightweight, low-resource-consumption scheduling scheme that can be effectively applied to devices short of hardware resources.
- the solution is also easy to implement, consumes few resources, and starts quickly. In practical applications, the above solution can be used both for scheduling common tasks and for calling deeply hierarchical systems while reducing system coupling.
- an event may be accompanied by messages associated with it.
- processing such an event requires handling these messages; therefore, a message caching and processing mechanism may also be involved during scheduling.
- FIG. 2A is a schematic flowchart of an embedded scheduling method according to Embodiment 2 of the present application.
- the embedded scheduling method further includes:
- the message is cached to the message pool.
- FIG. 2B is a schematic flowchart of another embedded scheduling method according to Embodiment 2 of the present application.
- 103 may specifically include:
- FIG. 2C is an exemplary diagram of a scheduling process in Embodiment 2 of the present invention.
- FIG. 2C differs from FIG. 1G in that, when a pending event is detected, the corresponding task is called and the system also detects whether there is a message corresponding to the event. If there is, a new event for processing that message is generated and cached; afterwards, pending events are detected again.
- a dynamic memory allocation mechanism may be used to cache messages, that is, memory of a corresponding size is allocated in real time according to the current message size. This solution is highly flexible but resource-intensive and complex to implement.
- fixed-length memory blocks can also be used to store messages, that is, static memory of a preset fixed size buffers the messages. This solution consumes few resources, but because the preset memory size is fixed, a large memory area is often reserved to guarantee enough buffer space, which wastes memory in scenarios with few messages.
- FIG. 2D is a schematic flowchart of another embedded scheduling method according to Embodiment 2 of the present application.
- the embedded scheduling is performed.
- the method also includes:
- 201 may specifically include:
- the partitioning granularity may be set according to needs or experience, for example, 10 memory units are one granularity.
- the partitioning granularity may be single, that is, the message pool is divided into multiple memory blocks of equal size according to one granularity; or the granularity may be multiple, that is, the message pool is divided into multiple memory blocks of unequal sizes. This embodiment does not limit this.
- the memory in the message pool may be equally divided. By dividing the message pool into multiple memory blocks, distributed management of the memory of the message pool is realized.
- the idle target memory block is found from the plurality of memory blocks, and the message is cached to the target memory block.
- the message pool is divided into multiple memory blocks to implement distributed management. If a detected new event includes a message that needs caching, a free memory block is searched for among the multiple memory blocks, and the new event's message is cached into the found block, improving the efficiency of message caching and saving resource consumption.
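- a minimal sketch of such a statically partitioned message pool, assuming an illustrative block size and block count; cache_message and the occupancy array are hypothetical names. No dynamic allocation is needed: the pool is carved into equal fixed-size blocks in advance:

```c
#include <string.h>

#define BLOCK_SIZE  32   /* bytes per block: the partitioning granularity */
#define NUM_BLOCKS  8

/* A static message pool divided into equal fixed-size memory blocks. */
static unsigned char message_pool[NUM_BLOCKS][BLOCK_SIZE];
static unsigned char block_used[NUM_BLOCKS];   /* 0 = idle, 1 = occupied */

/* Find an idle block, copy the message in, and return the block index
 * (or -1 when the pool is full or the message exceeds one block). */
static int cache_message(const void *msg, unsigned len) {
    if (len > BLOCK_SIZE)
        return -1;
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (!block_used[i]) {
            memcpy(message_pool[i], msg, len);
            block_used[i] = 1;   /* mark the block non-idle */
            return i;
        }
    }
    return -1;
}
```

freeing a block after its message is processed is just resetting its entry in the occupancy array, which matches the status-identifier update described in this embodiment.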
- FIG. 2E is a schematic flowchart of another embedded scheduling method according to Embodiment 2 of the present application, on the basis of FIG. 2D.
- the method may further include:
- a state identifier is set for each memory block; when a free memory block needs to be found, the state identifiers are traversed to determine the idle blocks quickly and accurately, improving the efficiency of message caching.
- FIG. 2F is a schematic flowchart of still another embedded scheduling method according to Embodiment 2 of the present application.
- step 203 may specifically include:
- a second integer whose number of bits matches the number of memory blocks is created, and the bits of the second integer are allocated to the memory blocks on a one-to-one basis; that is, each bit represents one memory block. Since bits can be assigned values, the different states of a memory block can be characterized by the different values of its bit. For example, for the bit corresponding to a memory block, a value of 1 characterizes the block as non-idle, and a value of 0 characterizes it as idle.
- the idle or non-idle here is used to indicate whether the memory block is occupied, that is, whether data is cached.
- an unoccupied memory block can be marked as an idle memory block, and a fully or partially occupied memory block as a non-idle memory block.
- each memory block is allocated a corresponding bit whose different values characterize whether the block is idle, thereby representing the block's state simply and effectively, further saving resource consumption and improving the efficiency of message buffering.
- FIG. 2G is a schematic flowchart of another embedded scheduling method according to Embodiment 2 of the present application.
- 2011 may specifically include:
- the message pool is divided into multiple memory blocks, and a status identifier is set for each block, where the identifier may be the bit allocated to that block. If a detected new event includes a message, the status identifiers can be traversed to quickly find a free memory block; the message is cached into the found block and, correspondingly, the block's status identifier is updated to non-idle, to improve the accuracy and reliability of subsequent searches for free blocks.
- based on the status identifier of each memory block, an idle block is quickly determined for message buffering, and the block's status identifier is then updated, improving the efficiency and accuracy of message caching.
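- the status identifiers can themselves be the bits of the second integer described above. The following sketch (names and pool size are illustrative assumptions) claims the first idle block by traversing the state bits, and releases a block after its cached message has been processed:

```c
#include <stdint.h>

#define POOL_BLOCKS 16

/* The "second integer": bit i == 1 means memory block i is occupied
 * (non-idle), bit i == 0 means it is idle. All blocks start idle. */
static uint32_t block_state;

/* Traverse the state bits and claim the first idle block, or -1 if none. */
static int claim_free_block(void) {
    for (int i = 0; i < POOL_BLOCKS; i++) {
        if (!(block_state & (1u << i))) {
            block_state |= (1u << i);   /* mark non-idle before use */
            return i;
        }
    }
    return -1;
}

/* After the cached message is processed, mark its block idle again. */
static void release_block(int i) {
    block_state &= ~(1u << i);
}
```

this reuses exactly the single-bit state technique the scheduler already applies to events, so block bookkeeping costs one integer.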
- the number of memory blocks occupied by a message may vary with its data size. Accordingly, on the basis of the second embodiment, caching the message to the target memory block in step 2011 may include: if the data volume of the message is not greater than the storage capacity of a single target memory block, caching the message into that target block; if the data volume of the message is greater than the storage capacity of a single target memory block, splitting the message into multiple message blocks and caching the multiple message blocks into multiple target memory blocks respectively.
- FIG. 2H is a schematic diagram of a process of message buffering in Embodiment 2 of the present invention.
- when a message arrives, it is first detected whether a free memory block currently exists; if so, the size relationship between the message and the memory block is determined. Specifically, the difference between the block capacity and the message length can be calculated as the result LEN. If LEN is not less than 0, the capacity of the block suffices to cache the whole message, and caching of that message ends; otherwise, the message is split across multiple blocks as described above.
- the present embodiment may be implemented in combination with the foregoing embodiments, for example finding target memory blocks based on the status identifiers and selecting one or more target blocks according to the message's data volume for caching. In this manner, messages of different sizes can be cached, improving the reliability of the message cache.
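- the split rule above can be expressed as a ceiling division: a message no larger than one block occupies one block, and a longer message is split into as many pieces as needed. This helper and its names are purely illustrative:

```c
/* Number of memory blocks needed to cache a message of msg_len bytes
 * when each block holds block_size bytes (ceiling division). */
static unsigned blocks_needed(unsigned msg_len, unsigned block_size) {
    return (msg_len + block_size - 1) / block_size;
}
```

combined with the free-block search, a scheduler would claim this many idle blocks before copying the message pieces in.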
- FIG. 2I is a schematic flowchart of another embedded scheduling method according to Embodiment 2 of the present application.
- the method may further include:
- 1032 may specifically include:
- the message may be cached to an idle memory block.
- the specific cache method may refer to the foregoing solution.
- the message header of the message may also be generated.
- the header includes information about the memory block in which the message is located, for example, may include an identification of a memory block in which the message is stored.
- the memory block in which the message corresponding to the event to be processed is located may be determined based on the message header of each message, and the message corresponding to the event to be processed is extracted from the memory block.
- there may be one or more pieces of memory block information identifying the blocks in which the message is located.
- the memory block may have corresponding bits.
- the identifier of the memory block mentioned herein may also be an identifier of its corresponding bit.
- by generating a message header for the message, the memory block in which the message is located is characterized accurately and simply, so that the cache location of the message can be determined accurately and quickly, improving the efficiency of subsequent message processing.
- in some implementations, the message header of the message further includes information about the event to which the message belongs; correspondingly, step 1034 may specifically include:
- the message may be cached to an idle memory block.
- the specific cache method may refer to the foregoing solution.
- the message header of the message may also be generated.
- the header includes information about the memory block in which the message is located and information about the event to which the message belongs, that is, information about the new event.
- the target message corresponding to the to-be-processed event may be determined based on the event information in the message header of each message, and the target message is extracted according to the memory block information in the message header of the target message, and Process the target message.
- the event may have corresponding bits.
- the identifier of the event mentioned herein may be the identifier of its corresponding bit.
- by generating a message header for the message, the correspondence between the event and the message is characterized accurately and simply, and the cache location of the message can be determined accurately and quickly, improving the efficiency of subsequent message processing.
- in some implementations based on any of the foregoing embodiments, the message header of the message further includes information about the task that supports the event to which the message belongs.
- 1034 may specifically include:
- the message may be cached to an idle memory block.
- the specific cache method may refer to the foregoing solution.
- the message header of the message may also be generated.
- the message header includes information of a memory block in which the message is located and information of a task supporting an event to which the message belongs, that is, information supporting a task of the new event.
- the target message corresponding to the event to be processed may be determined based on the task information in the message header of each message, and the target message is extracted according to the memory block information in the message header of the target message, and The target message is processed.
- the event may have corresponding bits.
- the identifier of the event mentioned herein may be the identifier of its corresponding bit.
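- one hypothetical header layout combining the fields discussed in the foregoing implementations: the memory block information (here a bitmask over block indices), the identifier of the owning event's bit, the supporting task, and a sequence count usable as the message counting information described in this application. All field names and widths are illustrative assumptions, not prescribed by the application:

```c
#include <stdint.h>

/* Hypothetical message header: locates the cached body and ties the
 * message back to its event, its supporting task, and its arrival order. */
typedef struct {
    uint16_t block_mask;   /* bit i set => memory block i holds part of the body */
    uint8_t  event_bit;    /* identifier of the bit corresponding to the event   */
    uint8_t  task_id;      /* task that supports the event the message belongs to */
    uint16_t seq;          /* message counting information (arrival order)       */
} msg_header_t;
```

with such a header, extracting a pending event's messages is a scan over headers filtering on event_bit (or task_id), then reading the blocks named in block_mask.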
- a priority processing mechanism for messages may also be introduced.
- in this implementation, the message header of the message further includes message counting information; correspondingly, step 1033 may specifically include:
- while buffering the multiple messages and generating a header for each message, the header may also be made to include message counting information.
- the message counting information mentioned here may take various forms, as long as it can reflect order, for example, the numbers 1, 2, 3, ....
- the priority of a message may also be set as required; for example, priority is determined by the order in which messages are received, the earlier a message is received, the higher its priority.
- the messages may then be sorted according to the message counting information in each message and processed in turn in that order, with the corresponding task invoked during processing; messages that have been processed can be cleared to save memory space.
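As an illustrative aside (not part of the patent text), the count-ordered processing described above can be sketched in Python; an embedded implementation would typically be in C, and the names `process_in_order`, `count`, and `payload` are hypothetical:

```python
# Minimal sketch of count-ordered message processing (names are illustrative).
# Each cached message carries counting information; messages are processed in
# count order and then cleared to reclaim memory.

def process_in_order(messages, handler):
    """Sort cached messages by their count field, process each, then clear all."""
    processed = []
    # Lower count = received earlier = higher priority in this sketch.
    for msg in sorted(messages, key=lambda m: m["count"]):
        handler(msg["payload"])
        processed.append(msg["count"])
    messages.clear()  # clear processed messages to save memory space
    return processed

if __name__ == "__main__":
    cache = [{"count": 3, "payload": "c"},
             {"count": 1, "payload": "a"},
             {"count": 2, "payload": "b"}]
    print(process_in_order(cache, lambda p: None))  # counts in processing order
    print(cache)                                    # cache is empty afterwards
```

The sorting key is the only policy decision here; reversing it would give newest-first priority instead.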
- the method further includes: setting a status identifier of the memory block where the processed target message is located to be idle.
- the state identifier of the memory block can be updated according to the processing progress of the message, so that the state identifier truly reflects the state of the memory block, thereby improving the accuracy and reliability of the message cache.
- the foregoing embodiments of the message header may be implemented separately or in combination, and the embodiment is not limited thereto.
- the first message and the second message are only used to distinguish messages obtained through different implementation manners; the terms "first" and "second" do not limit the content of the messages. It can be understood that the first message and the second message may be the same message.
- step 1033 may include: if there are multiple target messages, for any target message, invoking the task that supports the to-be-processed event, processing that target message, and clearing the processed target message; then returning to the step of acquiring the target message associated with the to-be-processed event, until the number of target messages is 1, at which point the task supporting the to-be-processed event is invoked, the last target message is processed and cleared, and the bit corresponding to the to-be-processed event is set to the second value.
- in this way, each message of the event is processed in turn, and when the last message of the event is processed, in addition to clearing the message, the value of the bit corresponding to the event is also updated.
- by updating the bit value corresponding to the event, the event state is refreshed in time, improving the accuracy and reliability of scheduling.
- the embedded scheduling method provided in this embodiment divides the message pool into multiple memory blocks. For an event that includes a message, a free memory block can be found to cache the event's message, and the corresponding message is later extracted from that memory block for processing; this enables efficient and reliable management of event messages and effectively saves memory resources.
- the event to be processed may also be processed by polling.
- the method may further include: if there are multiple to-be-processed events, and among them there is an event that includes multiple messages, a polling processing manner is adopted.
- each pending event is taken in turn as the current processing object, the task supporting the processing object is invoked, and the processing object is given a single round of processing, until all pending events have been processed.
- here, a single round of processing means processing a single target.
- invoking the task supporting the processing object and performing a single round of processing on it may include: if the processing object includes a message, invoking the task supporting the processing object to process a single message of the processing object; if the processing object does not include a message, invoking the task supporting the processing object to process the processing object itself.
- under this message processing strategy, if an event corresponds to multiple messages, the remaining messages are skipped after one message of the event is processed and the next event is scheduled; the remaining messages wait for subsequent polling rounds. This effectively prevents scheduling misuse and avoids scheduling deadlocks, for example, a lock caused by a task sending messages to itself.
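As an illustrative aside (not part of the patent text), the one-message-per-event polling strategy can be sketched in Python; the structure `event_queues` and the names `poll_until_done`, `handler` are hypothetical:

```python
# Sketch of the polling strategy: each round handles at most ONE message per
# pending event before moving to the next event; leftover messages wait for
# the next round. This keeps a task that posts messages to itself from
# monopolizing the scheduler.
from collections import deque

def poll_until_done(event_queues, handler):
    """event_queues: dict mapping event -> deque of messages (empty deque for
    a bare event). Handles one target per event per round; returns the number
    of rounds needed to drain everything."""
    rounds = 0
    pending = dict(event_queues)
    while pending:
        for event in list(pending):
            queue = pending[event]
            if queue:
                handler(event, queue.popleft())  # process a single message
            else:
                handler(event, None)             # bare event, no message
            if not queue:
                del pending[event]               # event fully processed
        rounds += 1
    return rounds
```

A queue of three messages thus takes three rounds, interleaved with every other pending event, rather than blocking them.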
- step 1 may be performed first, or step 2 may be performed first, or step 1 and step 2 may be performed at the same time, which is not limited in this embodiment.
- FIG. 3A is a schematic structural diagram of an embedded scheduling system according to Embodiment 3 of the present application; as shown in FIG. 3A, this embodiment provides an embedded scheduling system for implementing a lightweight scheduling scheme with low resource consumption.
- the embedded scheduling system includes:
- the query module 31 is configured to traverse the current values of the bits in the first integer corresponding to each task, where the tasks are in one-to-one correspondence with the first integers, and the bits in the first integer corresponding to a task are in one-to-one correspondence with the events supported by that task;
- the query module 31 is further configured to determine, as the current pending events, the events corresponding to bits whose current value is the first value in the first integers corresponding to the tasks.
- the processing module 32 is configured to invoke the task that supports a to-be-processed event and process the to-be-processed event.
- the embedded scheduling system may be a medium storing related execution code, for example, a USB flash drive; or it may be a physical device integrated or installed with related execution code, for example, a chip, a smart terminal, or a computer.
- each task is assigned a corresponding integer in advance, and corresponding bits are allocated for events supported by each task. Specifically, the different values of the bits corresponding to an event may indicate whether the event currently needs to be processed.
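As an illustrative aside (not part of the patent text), this per-task bit bookkeeping can be sketched in Python; the constant names and the functions `pending_events`, `raise_event`, `clear_event` are hypothetical:

```python
# Sketch of the event bookkeeping: each task owns one integer; bit i of that
# integer holds the "first value" (1 here) when event i is pending and the
# "second value" (0 here) when it is not. Names are illustrative.

def pending_events(task_words):
    """task_words: list of per-task integers. Returns (task_index, event_index)
    pairs for every bit currently set to the pending value."""
    found = []
    for task_idx, word in enumerate(task_words):
        bit = 0
        while word:
            if word & 1:
                found.append((task_idx, bit))
            word >>= 1
            bit += 1
    return found

def raise_event(task_words, task_idx, event_idx):
    """A new event was detected: set its bit to the first value."""
    task_words[task_idx] |= 1 << event_idx

def clear_event(task_words, task_idx, event_idx):
    """Event fully processed: set its bit back to the second value."""
    task_words[task_idx] &= ~(1 << event_idx)
```

The scheduler's main loop then amounts to calling `pending_events` and dispatching each returned pair to the task that supports it.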
- the system may further include: a first update module, configured to: if the processing of the to-be-processed event is completed, set a bit corresponding to the to-be-processed event to a second value. After the processing of the pending event is completed, the first update module may update the value of its corresponding bit to improve the accuracy and reliability of the subsequent scheduling.
- the system further includes: a second update module, configured to: if a new event is detected, set a value of a bit corresponding to the new event to the first value.
- the new event mentioned here is a newly generated event that needs to be processed.
- the second update module sets the bit corresponding to the event to be processed to a corresponding value.
- the system further includes: an allocating module 33, configured to allocate a first integer to each task, where the number of bits of the first integer corresponding to a task is consistent with the number of events supported by that task; the allocating module 33 is further configured to allocate the bits of the first integer corresponding to the task, in one-to-one correspondence, to the events supported by the task.
- for all tasks that can currently be invoked, the allocation module 33 assigns an integer to each task, the number of whose bits is consistent with the number of events supported by that task. After the integers are assigned, for each task, the allocation module 33 assigns the bits of the task's integer, in one-to-one correspondence, to the events supported by the task.
- the integers may be generated by creating an integer array.
- the allocation module 33 includes: a first creating unit 331, configured to establish, according to the number of tasks, a one-dimensional integer array containing a plurality of first integers, the number of which is consistent with the number of tasks;
- a first allocating unit 332, configured to allocate the first integers in the one-dimensional integer array to the tasks in one-to-one correspondence.
- the first creating unit 331 creates a one-dimensional array according to the number of currently available tasks, and the first allocating unit 332 allocates the integers in the array to the tasks on a one-to-one basis.
- the allocation module 33 then, for each task, assigns the bits of the task's integer, in one-to-one correspondence, to the events supported by that task.
- an integer corresponding to each task is generated by establishing a one-dimensional array, which reduces processing and time consumption and improves scheduling efficiency, making the scheduling scheme better suited to devices with limited hardware resources.
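As an illustrative aside (not part of the patent text), the allocation step can be sketched in Python; the function name `allocate` and the input shape are hypothetical:

```python
# Sketch of the allocation step: one first integer per task, created as a
# one-dimensional array, with bit positions handed out one-to-one to each
# task's supported events. Names are illustrative.

def allocate(task_event_names):
    """task_event_names: list where entry t lists the event names task t
    supports. Returns (words, bit_map): words is the one-dimensional integer
    array (one integer per task, all bits initially at the second value), and
    bit_map maps (task_index, event_name) -> bit position."""
    words = [0] * len(task_event_names)      # one first integer per task
    bit_map = {}
    for t, events in enumerate(task_event_names):
        for bit, name in enumerate(events):  # one bit per supported event
            bit_map[(t, name)] = bit
    return words, bit_map
```

In a C implementation the array would typically be a fixed `uint32_t` table, which bounds each task to 32 events per word.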
- the system further includes: an event priority module 34, configured to determine the priority of each event; the allocation module 33 is specifically configured to allocate the bits of the first integer corresponding to the task, in one-to-one correspondence, to the events supported by the task, according to the principle that bit order is consistent with event priority.
- the processing priority of each event may be different.
- the event priority module 34 first determines the priority of each event; after the priorities are determined, in combination with the foregoing embodiment, the allocation module 33 considers both the bit order and the priority of each event while allocating bits, assigning each event a bit according to the principle that bit order is consistent with event priority. "Consistent" here includes both the same order and the reverse order.
- the processing module 32 is specifically configured to: in the bit order of the bits corresponding to the to-be-processed events, sequentially invoke, for each pending event, the task supporting that event and process it.
- processing events in bit order simply and skillfully utilizes the characteristics of the bit sequence to implement priority control of event processing, which further saves the resources required for scheduling.
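As an illustrative aside (not part of the patent text), bit-order priority can be sketched in Python using the lowest-set-bit trick; the convention "lowest bit = highest priority" and the function names are hypothetical:

```python
# Sketch of bit-order priority: if bits were allocated so that bit order is
# consistent with event priority, scanning from the lowest-order set bit
# yields events in priority order with no separate priority queue.

def next_event(word):
    """Return the index of the lowest-order pending bit, or -1 if none.
    word & -word isolates the lowest set bit."""
    if word == 0:
        return -1
    return (word & -word).bit_length() - 1

def drain_in_priority_order(word):
    """Return pending event indices from highest priority (lowest bit) down."""
    order = []
    while word:
        bit = next_event(word)
        order.append(bit)
        word &= ~(1 << bit)  # mark processed (set back to the second value)
    return order
```

On real hardware the same scan is usually a single instruction (e.g. a count-trailing-zeros builtin), which is what makes this scheme cheap.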
- the tasks in this solution include any task that can be invoked by the current scheduling system.
- the system further includes: a receiving module, configured to receive a registration request, where the registration request includes the task-processing function of a task; and a registration module, configured to perform function registration for the task according to the task-processing function.
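As an illustrative aside (not part of the patent text), function registration can be sketched as a simple handler table in Python; the registry shape and the names `register_task`, `dispatch` are hypothetical:

```python
# Sketch of task registration: a registration request carries the task's
# processing function, which the scheduler records so it can later invoke
# the task for its pending events. Names are illustrative.

task_registry = {}

def register_task(task_id, handler):
    """Function registration: remember the task-processing function."""
    if not callable(handler):
        raise TypeError("task processing function must be callable")
    task_registry[task_id] = handler

def dispatch(task_id, event):
    """Invoke the registered task-processing function for an event."""
    return task_registry[task_id](event)
```

In C this would typically be a static array of function pointers indexed by task number rather than a dictionary.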
- the embedded scheduling system provided in this embodiment allocates corresponding bits for events by assigning a corresponding integer to each task, and designates the values of the bits corresponding to different events as single-bit values that do not interfere with each other.
- by traversing the values of the bits in the integer corresponding to each task, the current pending events can be determined quickly and accurately, and the corresponding task is then invoked to process them.
- the scheme adopts an event-driven task scheduling method, so that all events of a task can be cached in an integer, achieving a good use effect and realizing a lightweight, low-resource-consumption scheduling scheme that can be effectively applied to devices with scarce hardware resources.
- the solution is easy to implement, resource consumption is low, and startup is fast. In practical applications, the above solution can be used for scheduling common tasks as well as for calling deep hierarchical systems, and can reduce system coupling.
- an event may be accompanied by messages associated with it.
- processing such an event requires processing these messages; therefore, the scheduling process may also involve a message caching and processing mechanism.
- FIG. 4A is a schematic structural diagram of an embedded scheduling system according to Embodiment 4 of the present application.
- the embedded scheduling system further includes: a cache module 41, configured to, if a new event is detected and the new event includes a message, cache the message to the message pool.
- the processing module 32 includes: a message obtaining unit 321, configured to acquire the target message associated with the to-be-processed event; and a message processing unit 322, configured to invoke the task supporting the pending event, process the target message, and clear the processed target message.
- the processing of the event is completed by processing the message, and the processed message is cleared, thereby effectively saving the memory occupied by the message cache on the basis of implementing the event processing.
- as shown in FIG. 4C, the system further includes: a dividing module 42, configured to divide the memory in the message pool according to a preset partitioning granularity to obtain a plurality of memory blocks; the cache module 41 is configured to, if a new event is detected and the new event includes a message, find a free target memory block among the plurality of memory blocks and cache the message to the target memory block.
- the dividing module 42 divides the message pool into a plurality of memory blocks, implementing distributed management of the message pool's memory.
- when caching a message, the cache module 41 searches the plurality of memory blocks for a free target memory block and caches the message there. In this implementation, the message pool is divided into multiple memory blocks to implement distributed management: if a detected new event includes a message that needs to be cached, a free memory block is found among the plurality of memory blocks and the new event's message is cached into it, which improves the efficiency of the message cache and saves resource consumption.
- the system further includes: an identification module 43, which sets a status identifier for each memory block; the status identifier of a memory block is used to characterize whether the memory block is free.
- the identifying module 43 sets a status identifier for each memory block.
- the status identifier can be in various forms.
- the identification module 43 includes:
- a second creating unit, configured to create a second integer whose number of bits is consistent with the number of the plurality of memory blocks;
- a second allocating unit, configured to allocate the bits of the second integer, in one-to-one correspondence, to the memory blocks, where the status identifier of a memory block is the bit corresponding to that memory block, and the different values of that bit respectively indicate that the memory block is free or not free.
- the second creating unit creates, according to the number of memory blocks, a second integer whose number of bits is consistent with that number, and the second allocating unit assigns the bits of the second integer to the memory blocks on a one-to-one basis.
- each memory block is allocated a corresponding bit, and the different values of that bit characterize whether the memory block is free, thereby simply and effectively characterizing the state of the memory block, further saving resource consumption, and improving the efficiency of message caching.
- the cache module may specifically include: a searching unit, configured to, if a new event is detected and the new event includes a message, traverse the status identifiers of the plurality of memory blocks to find a target memory block whose status identifier is free; and a storage unit, configured to cache the message to the target memory block and set the status identifier of the target memory block to non-free.
- the idle memory block is quickly determined to perform message buffering based on the status identifier of each memory block, and the status identifier of the memory block is updated, thereby improving the efficiency and accuracy of the message cache.
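As an illustrative aside (not part of the patent text), the second-integer block map can be sketched in Python; the convention "bit set = block free" and the names `find_free_block`, `cache_message` are hypothetical (the patent only requires that the two bit values be distinct):

```python
# Sketch of the second-integer block map: the message pool is split into N
# memory blocks and one N-bit integer tracks their states; in this sketch
# bit b = 1 means block b is free.

def find_free_block(block_map, n_blocks):
    """Traverse the status bits; return the index of a free block, or -1."""
    for b in range(n_blocks):
        if block_map & (1 << b):
            return b
    return -1

def cache_message(block_map, n_blocks, pool, message):
    """Find a free block, store the message there, mark the block non-free.
    Returns (new_block_map, block_index)."""
    b = find_free_block(block_map, n_blocks)
    if b < 0:
        raise MemoryError("message pool exhausted")
    pool[b] = message
    return block_map & ~(1 << b), b
```

Clearing a processed message is the inverse: OR the block's bit back into the map so the block reads as free again.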
- the number of memory blocks occupied by the message may be different according to the data size of the message.
- the storage unit is specifically configured to: if the data volume of the message is not greater than the storage capacity of a single target memory block, cache the message to the target memory block; the storage unit is further configured to: if the data volume of the message is greater than the storage capacity of a single target memory block, split the message into a plurality of message blocks and cache them to a plurality of target memory blocks, respectively. In this manner, messages of different sizes can be cached, improving the reliability of the message cache.
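As an illustrative aside (not part of the patent text), the size check and split can be sketched in Python; the block size and the name `split_message` are assumptions, not values from the patent:

```python
# Sketch of splitting an oversized message across blocks: if the message fits
# one block it is stored whole; otherwise it is cut into block-sized chunks
# to be cached in separate free blocks. The granularity is illustrative.

BLOCK_SIZE = 8  # bytes per memory block (assumed partitioning granularity)

def split_message(payload, block_size=BLOCK_SIZE):
    """Return the list of message blocks the payload occupies."""
    if len(payload) <= block_size:
        return [payload]  # data volume fits a single target memory block
    return [payload[i:i + block_size]
            for i in range(0, len(payload), block_size)]
```

Each chunk would then be cached via the free-block search, with the message header recording every block the message occupies.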
- the system further includes: a generating module, configured to generate a message header for the message, where the message header includes information about the memory block in which the message is located.
- the message acquiring unit 321 includes: a processing subunit, configured to determine the target message associated with the to-be-processed event; and an extracting subunit, configured to acquire, from the message header of the target message, the information about the memory block in which the target message is located, and to extract the message cached in that memory block as the target message.
- the memory block in which a message is located is characterized accurately and simply by generating a message header for the message, so that the cache location of the message can be determined accurately and quickly, improving the efficiency of subsequent message processing.
- the message header of the message further includes information about the event to which the message belongs; the processing subunit is specifically configured to determine a first message as the target message, where the message header of the first message includes information about the to-be-processed event.
- the corresponding relationship between the event and the message is accurately and simply characterized, and the cache location of the message can be accurately and quickly determined, thereby improving the efficiency of subsequent message processing.
- the message header of the message further includes information about the task supporting the event to which the message belongs; the processing subunit is specifically configured to determine a second message as the target message, where the message header of the second message includes information about the task supporting the to-be-processed event.
- the corresponding relationship between the event and the message is accurately and simply characterized, and the cache location of the message can be accurately and quickly determined, thereby improving the efficiency of subsequent message processing.
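As an illustrative aside (not part of the patent text), the message header described in these embodiments can be sketched in Python; the field names (`block`, `event`, `task`, `count`) and the lookup function are hypothetical:

```python
# Sketch of the message header: it records the memory block holding the
# message, the event the message belongs to, the task supporting that event,
# and counting information, so the target messages for a pending event can
# be found by matching header fields. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class MessageHeader:
    block: int      # memory block where the message body is cached
    event: str      # event to which the message belongs
    task: str       # task supporting that event
    count: int = 0  # message counting information (processing order)

def find_target(headers, pending_event):
    """Return headers of messages associated with the pending event,
    in counting order (the 'first message' of the foregoing embodiments)."""
    return sorted((h for h in headers if h.event == pending_event),
                  key=lambda h: h.count)
```

Matching on `task` instead of `event` would give the "second message" variant of the lookup.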
- a priority processing mechanism for messages may also be introduced.
- the message header of the message further includes message counting information; the message processing unit 322 is specifically configured to, in the order of the counting information corresponding to the target messages, sequentially invoke the task supporting the to-be-processed event for each target message, process the target message, and clear the processed target message.
- in this way, a priority mechanism for message processing is implemented simply and effectively.
- the status identifier of the memory block where the processed target message is located may also be set to idle.
- the system further includes: a third update module, configured to set a status identifier of the memory block where the processed target message is located to be idle.
- the state identifier of the memory block can be updated according to the processing progress of the message, so that the state identifier truly reflects the state of the memory block, thereby improving the accuracy and reliability of the message cache.
- the event can be determined to be fully processed only after all messages corresponding to the event have been processed, and only then can the value of the bit corresponding to the event be updated.
- the message processing unit 322 is specifically configured to: if there are multiple target messages, for any target message, invoke the task supporting the to-be-processed event, process that target message, and clear the processed target message.
- the message processing unit 322 is further configured to perform the step of acquiring the target message associated with the to-be-processed event again, until the number of target messages is 1.
- the task supporting the to-be-processed event is then invoked, the last target message is processed and cleared, and the bit corresponding to the to-be-processed event is set to the second value.
- the bit value corresponding to the event is updated, the event state is updated in time, and the accuracy and reliability of the scheduling are improved.
- the embedded scheduling system provided in this embodiment divides the message pool into multiple memory blocks. For events including messages, the idle memory block can be searched for the event message, and the event is processed from the memory block. Extracting corresponding messages for processing enables efficient and reliable management of event messages and effectively saves memory resources.
- the event to be processed may also be processed by polling.
- the processing module 32 is specifically configured to: if there are multiple to-be-processed events, and among them there is an event that includes multiple messages, adopt a polling processing manner.
- in the polling manner, each to-be-processed event is taken in turn as the current processing object, the task supporting the processing object is invoked, and the processing object is given a single round of processing, until all pending events have been processed.
- here, a single round of processing means processing a single target.
- the processing module 32 is specifically configured to: if there are multiple to-be-processed events, and among them there is an event that includes multiple messages, adopt the polling manner, taking each to-be-processed event in turn as the current processing object; if the processing object includes a message, the task supporting the processing object is invoked to process a single message of the processing object, until all pending events have been processed.
- the processing module 32 is further specifically configured to: under the same conditions, take each pending event in turn as the current processing object; if the processing object does not include a message, the task supporting the processing object is invoked to process the processing object itself, until all pending events have been processed.
- under this message processing strategy, if an event corresponds to multiple messages, the remaining messages are skipped after one message of the event is processed and the next event is scheduled; the remaining messages wait for subsequent polling rounds. This effectively prevents scheduling misuse and avoids scheduling deadlocks, for example, a lock caused by a task sending messages to itself.
- FIG. 5 is a schematic structural diagram of an embedded scheduling system according to Embodiment 5 of the present invention. As shown in FIG. 5, the system includes: a scheduler, an event management module, and a message management module.
- the event management module is mainly responsible for caching events, for example, events received from an external event source or events triggered during event processing; the message management module includes a message header module and a message pool and is mainly responsible for caching messages; the scheduler is mainly responsible for scheduling events and messages.
- the list in the event management module represents a one-dimensional array storing the integer corresponding to each task. Each unit in the list represents a single integer, each integer corresponds to a task, and the bits in the integer correspond to the events supported by that task.
- the message management module stores a message header of each message, and the message pool is used to cache each message.
- the scheduler performs scheduling based on the events and messages cached in the event management module and the message management module, with reference to the foregoing scheduling scheme.
- the scheduling scheme performed by the scheduler may refer to related content in the foregoing method embodiments, and details are not described herein again.
- the embedded scheduling system provided in this embodiment adopts an event-driven task scheduling manner, so that all events of the task can be buffered by integers, and a lightweight and low resource consumption scheduling scheme can be implemented, which can be effectively applied to devices with insufficient hardware resources. And the solution is easy to implement, resource consumption is low, and startup is fast. In practical applications, the above embedded scheduling scheme can be used for scheduling common tasks as well as for calling deep hierarchical systems, and can reduce system coupling.
- the sixth embodiment of the present application further provides a computer storage medium, which may include: a USB flash drive, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
- the computer storage medium stores program instructions, and the program instructions are used to perform the embedded scheduling method in the above embodiments.
- Embodiment 7 of the present application provides an embedded scheduling system, which may be a terminal device installed with a program running system, such as a mobile phone, a computer, a PAD, or a smart watch.
- the embedded scheduling system includes at least one processor and a memory for storing computer-executable instructions; the number of processors may be one or more, and they may work separately or in cooperation; the processor is configured to execute the computer-executable instructions stored in the memory to implement the embedded scheduling method in the above embodiments.
Abstract
A computer storage medium and an embedded scheduling method and system. The method comprises: traversing the current values of the bits in a first integer corresponding to each task, the tasks corresponding to the first integers on a one-to-one basis, and the bits in the first integer corresponding to a task corresponding to the events supported by that task on a one-to-one basis (S101); determining the event corresponding to a bit whose current value is a first value in the first integer corresponding to each task as a current event to be processed (S102); and invoking the task supporting the event to be processed to process the event to be processed (S103). The method achieves a good usage effect while implementing a lightweight, low-resource-consumption scheduling solution that can be used effectively in devices with scarce hardware resources; the method is easy to implement, resource consumption is low, and startup is fast. In practical applications, the method can be used for ordinary task scheduling and for invoking deep hierarchical systems, and can reduce system coupling.
Description
The present application relates to the field of computer software, and in particular, to a computer storage medium, an embedded scheduling method, and a system.
An embedded system is a special-purpose computer system, fully embedded inside the device it controls and designed for a specific application. It is application-centered and based on computer technology; its software and hardware can be tailored to meet an application system's strict requirements on function, reliability, cost, size, power consumption, and so on. Correspondingly, an embedded operating system (EOS) is an operating system for embedded systems and generally includes hardware-related low-level driver software, a system kernel, device driver interfaces, communication protocols, a graphical interface, standardized browsers, and the like. Operating systems widely used in the embedded field include embedded Linux, Windows Embedded, VxWorks, and, on smartphones and tablets, Android and iOS.
The embedded operating system is responsible for allocating all software and hardware resources of the embedded system, scheduling tasks, and controlling and coordinating concurrent activities. A multi-task scheduling system, such as an embedded scheduling system, is embedded in these operating systems to carry out the system's complex scheduling tasks. Such an embedded scheduling system has the advantages of reasonable design, proper optimization, and powerful functionality, without requiring application developers to design and develop it themselves, and it performs well in use.
However, such embedded systems usually require complex designs, which easily leads to large memory consumption and long startup times for the embedded scheduling system, in turn causing a series of problems. For example, this drawback becomes especially apparent when embedded systems are applied in different fields, particularly the Internet of Things. Due to cost constraints, the hardware resources of systems such as IoT devices are quite limited, so existing embedded systems cannot be applied to these devices for task scheduling, while the embedded scheduling schemes currently applicable to such devices often perform poorly and waste resources severely.
Summary of the Application
The present application provides a computer storage medium, an embedded scheduling method, and a system, which are used to solve the problem that existing embedded scheduling solutions cannot balance usability against memory consumption and startup time.
A first aspect of the present application provides an embedded scheduling method, including: traversing the current values of the bits in the first integer corresponding to each task, where the tasks are in one-to-one correspondence with the first integers, and the bits in a task's first integer are in one-to-one correspondence with the events supported by that task; determining, among the first integers corresponding to the tasks, the events corresponding to bits whose current value is a first value as the current events to be processed; and calling the tasks that support the events to be processed to process those events.
A second aspect of the present application provides an embedded scheduling system, including: a query module configured to traverse the current values of the bits in the first integer corresponding to each task, where the tasks are in one-to-one correspondence with the first integers, and the bits in a task's first integer are in one-to-one correspondence with the events supported by that task, the query module being further configured to determine, among the first integers corresponding to the tasks, the events corresponding to bits whose current value is a first value as the current events to be processed; and a processing module configured to call the tasks that support the events to be processed and process those events.
A third aspect of the present application provides an embedded scheduling system, including at least one processor and a memory, where the memory stores computer-executable instructions, and the at least one processor executes the computer-executable instructions stored in the memory to perform the method described above.
A fourth aspect of the present application provides a computer storage medium storing program instructions that, when executed by a processor, implement the method described above.
With the computer storage medium, embedded scheduling method, and system provided by the present application, each task is assigned a corresponding integer and each event is assigned a corresponding bit, and the value of an event's bit is designed as a single-bit value independent of the others, indicating whether an event corresponding to that bit currently needs to be processed. During subsequent task scheduling, traversing the bit values in the integers corresponding to the tasks makes it possible to determine the current events to be processed quickly and accurately, and then to call the corresponding tasks to process them. The solution adopts event-driven task scheduling, so that all of a task's events can be cached in integers; it achieves a good usage effect while remaining a lightweight, low-resource-consumption scheduling scheme that can be effectively applied to devices with scarce hardware resources, and it is easy to implement, consumes few resources, and starts quickly. In practice, the above solution can be used both for scheduling ordinary tasks and for invoking deeply layered systems, and it can reduce system coupling.
The drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present application, and those of ordinary skill in the art can derive other drawings from them.
FIG. 1A to FIG. 1E are schematic flowcharts of an embedded scheduling method according to Embodiment 1 of the present application;
FIG. 1F is a schematic diagram of an example of task registration;
FIG. 1G is an example diagram of the scheduling process in Embodiment 1 of the present invention;
FIG. 2A to FIG. 2B, FIG. 2D to FIG. 2G, and FIG. 2I are schematic flowcharts of an embedded scheduling method according to Embodiment 2 of the present application;
FIG. 2C is an example diagram of the scheduling process in Embodiment 2 of the present invention;
FIG. 2H is an example diagram of the message caching process in Embodiment 2 of the present invention;
FIG. 3A to FIG. 3D are schematic structural diagrams of an embedded scheduling system according to Embodiment 3 of the present application;
FIG. 4A to FIG. 4D are schematic structural diagrams of an embedded scheduling system according to Embodiment 4 of the present application;
FIG. 5 is an example structural diagram of an embedded scheduling system according to Embodiment 5 of the present invention.
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. The terminology used in the specification is for the purpose of describing specific embodiments only and is not intended to limit the present application. Some implementations of the present application are described in detail below with reference to the drawings. The embodiments and the features in the embodiments described below can be combined with one another where no conflict arises.
FIG. 1A is a schematic flowchart of an embedded scheduling method according to Embodiment 1 of the present application. As shown in FIG. 1A, this embodiment provides an embedded scheduling method used to implement a lightweight, low-resource-consumption scheduling scheme. Specifically, the embedded scheduling method includes:
101: traverse the current values of the bits in the first integer corresponding to each task, where the tasks are in one-to-one correspondence with the first integers, and the bits in a task's first integer are in one-to-one correspondence with the events supported by that task;
102: determine, among the first integers corresponding to the tasks, the events corresponding to bits whose current value is a first value as the current events to be processed;
103: call the tasks that support the events to be processed, and process those events.
Specifically, the embedded scheduling method may be executed by an embedded scheduling system. In practice, the embedded scheduling system may be a medium storing the relevant execution code, for example a USB flash drive; alternatively, it may be a physical device into which the relevant execution code is integrated or installed, for example a chip, a smart terminal, or a computer.
In practice, different tasks support different events; that is, different tasks can handle different events. In this embodiment, each task is assigned a corresponding integer in advance, and the events supported by each task are assigned corresponding bits. Specifically, the method further includes: assigning a corresponding integer to each task, and, within that integer, assigning its bits to the events supported by the task. For example, suppose tasks A, B, and C currently support being called, where task A supports handling events A1 and A2, task B supports events B1, B2, and B3, and task C supports event C1. Correspondingly, task A is assigned integer a, task B integer b, and task C integer c, each of which may include multiple bits. Bit a1 of integer a is assigned to event A1, bit a2 of integer a to event A2, bit b1 of integer b to event B1, bit b2 of integer b to event B2, bit b3 of integer b to event B3, and so on: for each task, the bits of its integer are assigned to the events the task supports. Specifically, tasks and integers are in one-to-one correspondence, and the bits of a task's integer are in one-to-one correspondence with the events supported by that task.
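The task/integer/bit mapping above can be sketched in C as follows. This is a minimal illustration, not code from the application; the enum and macro names are invented for the example.

```c
#include <stdint.h>

/* Illustrative sketch: one 32-bit integer of event flags per task. */
enum { TASK_A, TASK_B, TASK_C, TASK_COUNT };

/* Each event is one bit within its task's integer
 * (bit a1 -> event A1, bit b3 -> event B3, etc.). */
#define EVENT_A1 (1u << 0)   /* bit a1 of integer a */
#define EVENT_A2 (1u << 1)   /* bit a2 of integer a */
#define EVENT_B1 (1u << 0)   /* bit b1 of integer b */
#define EVENT_B2 (1u << 1)   /* bit b2 of integer b */
#define EVENT_B3 (1u << 2)   /* bit b3 of integer b */
#define EVENT_C1 (1u << 0)   /* bit c1 of integer c */

/* One integer per task; a set bit means "this event is pending".
 * Static storage starts zeroed: nothing pending initially. */
static uint32_t event_flags[TASK_COUNT];
```

Because each event occupies a distinct bit, multiple pending events for one task combine with bitwise OR without interfering with one another.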
Specifically, the different values of the bit corresponding to an event can indicate whether that event currently needs to be processed. For example, when an event is received and needs to be processed, the bit corresponding to that event is set to one value to indicate that the event currently needs processing; correspondingly, when the bit holds the other value, it indicates that no such event currently needs processing.
Based on the above scenario, optionally, the method further includes: if a new event is detected, setting the value of the bit corresponding to the new event to the first value. Further optionally, after a pending event has been processed, the value of its corresponding bit may be updated to improve the accuracy and reliability of subsequent scheduling. Correspondingly, the embedded scheduling method may further include: when the processing of the pending event is completed, setting the bit corresponding to that event to a second value. Here, "first" and "second" merely indicate that the two values differ; the specific values can be set as desired, for example, the first value may be 1 and the second value 0.
The new event mentioned here is a newly generated event that needs processing. For example, the new event may be received from outside the scheduling system, or may be triggered inside the scheduling system while an earlier event is being processed. In short, whenever there is a newly generated event to be processed, the bit corresponding to it is set to the appropriate value. Specifically, an event's bit can take the values 1 and 0; optionally, 1 can indicate that the event currently needs processing and 0 that it does not. Correspondingly, continuing the earlier example, the bits of all events may initially be set to 0. Suppose new events A1 and B2 are detected; then bit a1 corresponding to event A1 and bit b2 corresponding to event B2 are set to 1. During subsequent scheduling, upon detecting that bits a1 and b2 hold the value 1, events A1 and B2 corresponding to those bits can be determined to be pending events.
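The set-on-arrival / clear-after-processing behavior described above reduces to three one-line bit operations. A minimal sketch, assuming the first value is 1 and the second value is 0 (function names are illustrative):

```c
#include <stdint.h>

#define NUM_TASKS 3
static uint32_t event_flags[NUM_TASKS];  /* one event integer per task, all bits 0 initially */

/* A new event arrived: set its bit to the first value (1) to mark it pending. */
static void post_event(int task, uint32_t event_bit) {
    event_flags[task] |= event_bit;
}

/* The event has been handled: reset its bit to the second value (0). */
static void clear_event(int task, uint32_t event_bit) {
    event_flags[task] &= ~event_bit;
}

/* Non-zero if the event is currently pending. */
static int event_pending(int task, uint32_t event_bit) {
    return (event_flags[task] & event_bit) != 0;
}
```

Because `|=` and `&= ~` touch only the named bit, other pending events of the same task are unaffected, which is what makes the single-bit values mutually non-interfering.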
Optionally, to further save the resources and space required for scheduling, each task may be assigned an integer whose number of bits matches the number of events the task supports. Correspondingly, as shown in FIG. 1B, which is a schematic flowchart of another embedded scheduling method according to Embodiment 1 of the present application, on the basis of Embodiment 1 the method further includes:
104: assign a first integer to each task, where the number of bits of a task's first integer matches the number of events supported by that task;
105: assign the bits of the task's first integer to the events supported by the task in one-to-one correspondence.
Taking a practical scenario as an example: for all currently callable tasks, each task is assigned an integer whose number of bits matches the number of events that task supports. Continuing the earlier example, task A supports two events, so it is assigned a 2-bit integer a (comprising bits [a1][a2]); task B supports three events, so it is assigned a 3-bit integer b (comprising bits [b1][b2][b3]); and task C supports one event, so it is assigned a 1-bit integer c (for example, comprising bit [c1]). After the integers are assigned, the bits of each task's integer are assigned one-to-one to the events the task supports. Still continuing the example, for task A, bits a1 and a2 are assigned to events A1 and A2 (in either order, for example a1 to event A2 and a2 to event A1); for task B, bits b1, b2, and b3 are assigned to events B1, B2, and B3 (for example, b1 to event B2, b2 to event B1, and b3 to event B3); and for task C, bit c1 is assigned to event C1. Correspondingly, during scheduling, as soon as a new event arises, for example event C1, the bit c1 corresponding to it is set to the first value, for example 1. By subsequently traversing the current values of the bits in each task's integer, the event whose bit is 1, namely event C1, can be quickly and accurately determined as a pending event, whereupon the task supporting event C1, namely task C, is called to process it.
In practice, the integers corresponding to the tasks may be laid out in various ways, for example distributed discretely or placed adjacently. Discrete distribution allows the integers to be placed arbitrarily and is therefore more flexible; alternatively, an adjacent layout can improve the compactness of the integers so as to further reduce processing-resource consumption when the integers corresponding to the tasks are traversed. Correspondingly, in one implementable manner, the integers corresponding to the tasks can be generated by creating an integer array. Specifically, as shown in FIG. 1C, which is a schematic flowchart of yet another embedded scheduling method according to Embodiment 1 of the present application, on the basis of Embodiment 1, step 104 specifically includes:
1041: create, according to the number of tasks, a one-dimensional integer array including a plurality of first integers, the number of first integers matching the number of tasks;
1042: assign the first integers in the one-dimensional integer array to the tasks in one-to-one correspondence.
Specifically, a one-dimensional array is created according to the number of currently callable tasks, and the integers in the array are assigned to the tasks according to the one-to-one assignment principle; afterwards, for each task, the bits of its integer are assigned one-to-one to the events the task supports. Optionally, the number of bits of each integer may match the number of events supported by its task; for the specific scheme, refer to the steps in the foregoing implementation, which are not repeated here. Specifically, continuing the earlier example, in this implementation a one-dimensional integer array including the three integers a, b, and c is created for tasks A, B, and C, and the integers a, b, and c are assigned to tasks A, B, and C one-to-one: integer a to task A, integer b to task B, and integer c to task C. In this implementation, the one-dimensional integer array caches all events: each task is allocated one integer in the array as the event cache for the events it supports. Optionally, the cache may default to 0, that is, the initial value of every bit may be set to 0, indicating that no event currently needs processing. Each event corresponds to one bit of an integer; when an event is received and needs to be processed, the value of its bit is set to 1, and after the event has been processed the bit is set back to 0.
In this implementation, the integers corresponding to the tasks are generated by creating a one-dimensional array, which reduces the consumption of processing resources and time when the integers are subsequently traversed and improves scheduling efficiency, making this scheduling scheme better suited to devices with limited hardware resources.
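Steps 1041/1042 above amount to declaring one contiguous array sized by the task count, so the whole event cache sits in adjacent memory and a single loop can traverse it. A minimal sketch under that reading (names and sizes are illustrative):

```c
#include <stdint.h>

/* One one-dimensional integer array is the event cache for the whole system;
 * task i owns element i. Static storage starts at 0: nothing pending. */
#define NUM_TASKS 3

static uint32_t event_cache[NUM_TASKS];

/* Traverse every task's integer and report whether any event is pending.
 * Because the integers are adjacent, this is a tight linear scan. */
static int any_event_pending(void) {
    for (int i = 0; i < NUM_TASKS; i++) {
        if (event_cache[i] != 0)
            return 1;
    }
    return 0;
}
```

The adjacent layout is what the text calls improved "compactness": the traversal touches one small contiguous region instead of chasing scattered integers.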
Since scheduling in this embodiment first finds the events whose bits are non-zero and then processes them, optionally, once the pending events have been determined, the ordering property of bits can be used to introduce priority control among events: during processing, the bits of an integer are polled in bit order, thereby implementing priority control of event processing. Correspondingly, as shown in FIG. 1D, which is a schematic flowchart of yet another embedded scheduling method according to Embodiment 1 of the present application, on the basis of Embodiment 1, before step 105 the method further includes:
106: determine the priority of each event; correspondingly, step 105 specifically includes:
1051: assign the bits of the task's first integer to the events supported by the task in one-to-one correspondence, following the assignment principle that bit order agrees with event priority.
During actual scheduling, to improve the scheduling effect and user experience, events may have different processing priorities: some events need to be handled first, while others can be deferred. In this solution, based on the fact that events map to bits and that bits have an inherent order, event-processing priority is introduced without dedicating any resources specifically to priority scheduling, so priority control of event processing is achieved simply and conveniently. Specifically, in this implementation the priority of each event can first be determined as needed; then, in combination with the foregoing implementation, when bits are assigned to events, both the bit order and the event priorities are taken into account, and each event is assigned a bit under the principle that bit order agrees with event priority. "Agrees" here means the same or the reverse; any regular mapping between bit order and event priority suffices. For example, bits may be assigned so that bit order matches event priority, or so that it is the reverse of event priority, which is not limited here.
Optionally, based on the above event assignment scheme and assuming bits are assigned so that bit order matches event priority, then during subsequent event processing, as shown in FIG. 1E, on the basis of the implementation shown in FIG. 1D, step 103 specifically includes:
1031: in the bit order of the bits corresponding to the pending events, for each pending event in turn, call the task that supports the pending event and process it.
Specifically, after each task has been assigned its integer, when the bits of the integer are assigned to the events the task supports, the priority of each event can first be determined and the bits then assigned under the principle that bit order agrees with event priority. As a result, when the bits of the integers are later traversed to determine the pending events and process them, the pending events can be processed in bit order. Because the correspondence between bit order and event priority was introduced when the bits were assigned, processing events in bit order is, in effect, processing them in priority order.
In the above implementation, by taking the ordering property of bits and the priorities of the events into account when assigning bits to events, and then processing events in bit order, the bit order is exploited simply and elegantly to achieve priority control of event processing, further saving the resources required for scheduling.
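The bit-order polling of step 1031 can be sketched as a scan from the lowest bit upward; under the assumption (stated in the text as one of the possible conventions) that a lower bit index means higher priority, the first set bit found is the highest-priority pending event. Function and convention are illustrative:

```c
#include <stdint.h>

/* Return the index of the lowest set bit in a task's event integer,
 * i.e. the highest-priority pending event under the convention that
 * bit order matches event priority; -1 if nothing is pending. */
static int next_event_bit(uint32_t flags) {
    for (int bit = 0; bit < 32; bit++) {
        if (flags & (1u << bit))
            return bit;
    }
    return -1;
}
```

No priority queue or sorted structure is needed: the priority ordering is encoded in the bit positions chosen at assignment time, which is the resource saving the paragraph above describes.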
Specifically, the tasks in this solution include the tasks callable by the current scheduling system. In practice, for a task to be callable, it must be registered with the scheduling system in advance. Correspondingly, on the basis of Embodiment 1, the method further includes: receiving a registration request that includes the task's task-processing function, and registering the function of the task according to that task-processing function. The specific function-registration scheme can be implemented in combination with existing task-registration schemes and is not elaborated here. For example, a scheduling system may have multiple callable tasks, as shown in FIG. 1F, which is a schematic diagram of an example of task registration; task registration is completed through data interaction between the tasks and the scheduler.
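The registration step described above can be sketched as a scheduler holding a table of task-processing function pointers. This is a hypothetical illustration of one such scheme, not the application's own API:

```c
#include <stddef.h>
#include <stdint.h>

/* A task's processing function receives the pending-event bit index. */
typedef void (*task_handler_t)(int event_bit);

#define MAX_TASKS 8
static task_handler_t handlers[MAX_TASKS];
static int task_count;

/* Register a task's processing function with the scheduler.
 * Returns the assigned task id, or -1 if the table is full or the
 * handler is invalid. */
static int register_task(task_handler_t handler) {
    if (task_count >= MAX_TASKS || handler == NULL)
        return -1;
    handlers[task_count] = handler;
    return task_count++;
}

/* Illustrative no-op task-processing function. */
static void demo_handler(int event_bit) { (void)event_bit; }
```

The returned id doubles as the task's index into the one-dimensional event-cache array, tying registration to the event storage described earlier.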
It can be understood that the steps in this embodiment describe a single execution pass; in practice, the scheduling system may need to execute the above process in a loop to complete task scheduling and event processing. For example, as shown in FIG. 1G, which is an example diagram of the scheduling process in Embodiment 1 of the present invention, in practice, after the system starts, it checks whether any pending events currently exist; if so, it calls the corresponding tasks to process them, and after processing completes it returns to the step of checking for pending events. Scheduling is thus carried out in a repeating cycle. For the specific scheme of detecting and processing pending events, refer to the relevant execution steps of this solution, which are not repeated here. Optionally, in the above process, if no pending event exists, the system may enter a sleep state to save the device's power or resources, and after being woken by an existing wakeup mechanism it checks again for pending events. This solution provides a simple and reliable access pattern for the scheduler.
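A single pass of the cycle just described (check pending bits, call the supporting task, clear the bit, repeat) can be sketched as follows. All names are illustrative; a real system would wrap this in an infinite loop and sleep when the pass processes nothing:

```c
#include <stdint.h>

#define NUM_TASKS 2

typedef void (*task_handler_t)(int event_bit);

static uint32_t event_cache[NUM_TASKS];   /* one event integer per task */
static task_handler_t handlers[NUM_TASKS];/* registered task functions  */

/* One scheduling pass over the event cache; returns the number of
 * events processed (0 means the system could go to sleep). */
static int schedule_once(void) {
    int processed = 0;
    for (int t = 0; t < NUM_TASKS; t++) {
        while (event_cache[t] != 0) {
            for (int bit = 0; bit < 32; bit++) {
                if (event_cache[t] & (1u << bit)) {
                    if (handlers[t])
                        handlers[t](bit);            /* process the event   */
                    event_cache[t] &= ~(1u << bit);  /* then reset its bit  */
                    processed++;
                }
            }
        }
    }
    return processed;
}
```

The inner bit loop runs lowest bit first, so if bits were assigned with bit order matching priority, each pass also honors event priority.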
With the embedded scheduling method provided by this embodiment, each task is assigned a corresponding integer and each event a corresponding bit, and the value of an event's bit is designed as a single-bit value independent of the others, indicating whether an event corresponding to that bit currently needs processing. During subsequent task scheduling, traversing the bit values in the integers corresponding to the tasks makes it possible to determine the current pending events quickly and accurately, and then to call the corresponding tasks to process them. The solution adopts event-driven task scheduling, so that all of a task's events can be cached in integers; it achieves a good usage effect while remaining a lightweight, low-resource-consumption scheduling scheme effectively applicable to devices with scarce hardware resources, and it is easy to implement, consumes few resources, and starts quickly. In practice, the solution can be used both for scheduling ordinary tasks and for invoking deeply layered systems, and it can reduce system coupling.
In practice, an event may be accompanied by associated messages, and processing such an event involves processing those messages; therefore, the scheduling process may also need a message caching and processing mechanism.
In practice, a message pool can be used to cache various messages. Correspondingly, as shown in FIG. 2A, which is a schematic flowchart of an embedded scheduling method according to Embodiment 2 of the present application, on the basis of Embodiment 1 the embedded scheduling method further includes:
201: if a new event is detected and the new event includes a message, cache the message in the message pool.
Specifically, if a detected new event includes a message, the message is cached in the message pool; when the event is later processed, the message is fetched from the message pool, and processing the message completes the processing of the event. Correspondingly, as shown in FIG. 2B, which is a schematic flowchart of another embedded scheduling method according to Embodiment 2 of the present application, on the basis of the implementation shown in FIG. 2A, step 103 may specifically include:
1032: Acquire a target message associated with the event to be processed.
1033: Call a task that supports the event to be processed, process the target message, and clear the processed target message. In this implementation, for an event that includes a message, the event is processed by processing its message, and the processed message is then cleared, so that on top of realizing event processing, the memory occupied by the message cache is effectively saved.
For example, as shown in FIG. 2C, which is an exemplary diagram of the scheduling flow in Embodiment 2 of the present invention, FIG. 2C differs from FIG. 1G in that, when a pending event is detected, the corresponding task needs to be called and it must be detected whether a message corresponding to the event exists. If a corresponding message exists, a new event for processing the message is generated and the message is cached. Afterwards, the pending events that currently need to be processed are detected again. For the specific implementation of each step, reference may be made to the related content in the method embodiments.
Optionally, during message caching, a dynamic memory allocation mechanism may be used, that is, memory of a matching size is allocated for each message in real time according to its current size. This solution is highly flexible but consumes considerable resources and is relatively complex to implement. Alternatively, fixed-length memory blocks may be used to store messages, that is, a static, fixed-size memory region is preset for caching messages. This solution consumes few resources, but because the preset memory size is fixed, a large region is usually reserved in order to guarantee sufficient cache space, which wastes substantial memory in scenarios with few current messages.
In this regard, in order to implement the message cache reliably and efficiently, as shown in FIG. 2D, which is a schematic flowchart of yet another embedded scheduling method according to Embodiment 2 of the present application, on the basis of Embodiment 2 the embedded scheduling method further includes:
202: Divide the memory in the message pool according to a preset partitioning granularity to obtain multiple memory blocks.
Correspondingly, step 201 may specifically include:
2011: If a new event is detected and the new event includes a message, find a free target memory block among the multiple memory blocks and cache the message in the target memory block.
The partitioning granularity may be set as needed or based on experience; for example, 10 memory units may constitute one granularity unit. Optionally, the partitioning granularity may be single, that is, the message pool is divided evenly into multiple memory blocks of equal size according to one granularity; alternatively, multiple granularities may be used, dividing the message pool into memory blocks of unequal sizes, which this embodiment does not limit. Optionally, to improve the efficiency of message caching, the memory in the message pool may be divided evenly. Dividing the message pool into multiple memory blocks realizes distributed management of the pool's memory: there is no need to allocate memory dynamically, nor to reserve an oversized fixed region, which improves caching efficiency and saves resources. Specifically, when caching a message, a free target memory block is found among the multiple memory blocks and the message is cached in that target memory block.
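The partitioned pool and the "find a free block, then cache" step of 2011 can be sketched as below. The block size, block count, and function names are illustrative assumptions, and the free-block search here is a simple linear scan over a flag array (the bit-based status identifier is a later refinement).

```c
/* Sketch of a message pool divided, at a preset granularity, into equal
 * fixed-size blocks. A message is cached into the first free block found. */
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE   32               /* preset partitioning granularity */
#define BLOCK_COUNT  8
static uint8_t msg_pool[BLOCK_COUNT][BLOCK_SIZE];
static uint8_t block_used[BLOCK_COUNT];   /* 0 = free, 1 = occupied */

/* On a new event carrying a message: find a free block and cache it there.
 * Returns the block index, or -1 if no block is free or the message is
 * larger than one block (the multi-block case is handled separately). */
static int cache_message(const void *msg, size_t len) {
    if (len > BLOCK_SIZE) return -1;
    for (int i = 0; i < BLOCK_COUNT; i++) {
        if (!block_used[i]) {
            memcpy(msg_pool[i], msg, len);
            block_used[i] = 1;
            return i;
        }
    }
    return -1;
}

/* After the event is processed, clear the message by freeing its block. */
static void release_block(int idx) {
    block_used[idx] = 0;
}
```

Freeing a block is just resetting its flag; no heap allocator is involved at any point, which is the resource saving the text describes.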
In this implementation, the message pool is divided into multiple memory blocks for distributed management. If a detected new event includes a message that needs to be cached, a free memory block is found among the multiple memory blocks and the message of the new event is cached in the found block, improving the efficiency of the message cache and saving resource consumption.
Optionally, in order to find a free memory block faster and more accurately, as shown in FIG. 2E, which is a schematic flowchart of another embedded scheduling method according to Embodiment 2 of the present application, on the basis of the implementation shown in FIG. 2D, the method may further include:
203: Set a status identifier for each memory block, the status identifier of a memory block indicating whether that memory block is free.
Specifically, after the message pool is divided into multiple memory blocks, a status identifier is set for each block. Later, when a free memory block needs to be found, it suffices to traverse the status identifiers of the blocks to determine a free one quickly and accurately, improving the efficiency of message caching.
The status identifier may take many forms. Optionally, in order to further reduce resource consumption, as shown in FIG. 2F, which is a schematic flowchart of still another embedded scheduling method according to Embodiment 2 of the present application, on the basis of the implementation shown in FIG. 2E, step 203 may specifically include:
2031: Create a second integer, the number of bits of the second integer being equal to the number of the multiple memory blocks.
2032: Allocate the bits of the second integer to the memory blocks in one-to-one correspondence, the status identifier of a memory block being the bit corresponding to that block, and the different values of that bit respectively indicating that the block is free or not free.
Specifically, after the message pool is divided into multiple memory blocks, a second integer whose bit count equals the number of blocks is created, and the bits of the second integer are allocated to the blocks on a one-to-one basis; in other words, each bit represents one memory block. Because a bit can take different values, those values can represent different states of the block. For example, for the bit corresponding to a certain memory block, a value of 1 may indicate that the block is not free, and a value of 0 that it is free. "Free" and "not free" here describe whether the memory block is occupied, that is, whether data is cached in it. Optionally, an unoccupied memory block may be defined as free, and a fully or partially occupied block as not free.
In this implementation, each memory block is assigned a corresponding bit, and the different values of the bit indicate whether the block is free. This characterizes the state of the memory blocks simply and effectively, further saves resources, and improves the efficiency of message caching.
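The "second integer" status identifier of steps 2031 and 2032 can be sketched as below; a minimal illustration under the convention stated above (bit 1 = occupied, bit 0 = free), with assumed names.

```c
/* Sketch of the "second integer": one bit per memory block, the bit's value
 * indicating whether that block is free (0) or occupied (1). */
#include <stdint.h>

#define POOL_BLOCKS 16
static uint16_t block_flags;   /* bit count matches the block count */

/* Traverse the bits of the second integer to find a free block.
 * Returns the block index, or -1 if every block is occupied. */
static int find_free_block(void) {
    for (int i = 0; i < POOL_BLOCKS; i++)
        if (!(block_flags & (1u << i)))
            return i;
    return -1;
}

static void mark_busy(int i) { block_flags |=  (uint16_t)(1u << i); }
static void mark_free(int i) { block_flags &= (uint16_t)~(1u << i); }
```

The entire occupancy state of a 16-block pool fits in two bytes, which is the resource saving this implementation aims at.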
Based on the above status identifiers, when a message of an event needs to be cached, a free memory block can be found quickly. Specifically, as shown in FIG. 2G, which is a schematic flowchart of another embedded scheduling method according to Embodiment 2 of the present application, on the basis of the foregoing implementations, step 2011 may specifically include:
2012: If a new event is detected and the new event includes a message, traverse the status identifiers of the multiple memory blocks to find a target memory block whose status identifier indicates free.
2013: Cache the message in the target memory block, and set the status identifier of the target memory block to not free.
Specifically, the message pool is divided into multiple memory blocks and status identifiers are set for them, where a status identifier may be the bit allocated to a block. Subsequently, if a detected new event includes a message, the status identifiers can be traversed to quickly find a free block, the message is cached in the found block, and the block's status identifier is correspondingly updated to not free, which improves the accuracy and reliability of subsequent searches for free blocks.
In this implementation, if a detected new event includes a message, a free memory block for caching the message is quickly determined based on the status identifiers of the blocks, and the status identifier of that block is updated, thereby improving the efficiency and accuracy of message caching.
Specifically, when free memory blocks are found for caching, the number of blocks a message occupies may vary with its data size. Accordingly, on the basis of Embodiment 2, caching the message in the target memory block in step 2011 may specifically include: if the data amount of the message is not greater than the storage capacity of a single target memory block, caching the message in that block; if the data amount of the message is greater than the storage capacity of a single target memory block, splitting the message into multiple segments and caching the segments in multiple target memory blocks respectively.
For example, as shown in FIG. 2H, which is an exemplary flowchart of message caching in Embodiment 2 of the present invention, for a given message it is first detected whether a free memory block currently exists. If one exists, the size relationship between the message and the block is determined; specifically, the difference between the block capacity and the message length is computed to obtain a result LEN. If LEN is not less than 0, the block's capacity is sufficient to cache the message and this round of caching ends. If LEN is less than 0, another free block must be found, both blocks are used for caching the message, and the difference between the current capacity (now the total capacity of all blocks used for caching the message) and the message length is computed again to obtain a new LEN. This loop repeats until LEN is not less than 0, at which point the caching flow ends.
It can be understood that, provided the technical features do not conflict, this implementation may be combined with the other implementations described above; for example, on the basis of finding target memory blocks via status identifiers, one or more target blocks may be selected for caching according to the data amount of the message. In this manner, messages of different sizes can all be cached, improving the reliability of the message cache.
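The LEN accumulation loop of FIG. 2H can be sketched as follows. This is an illustrative reading of the flowchart, not the embodiment itself; the names and the simple flag array are assumptions.

```c
/* Sketch of the multi-block caching loop of FIG. 2H: keep adding free
 * blocks until LEN = (capacity accumulated so far) - (message length)
 * is no longer negative. */
#include <stdint.h>

#define BLK_SIZE  16
#define BLK_COUNT 8
static uint8_t blk_busy[BLK_COUNT];   /* 0 = free, 1 = occupied */

/* Reserve enough free blocks for a message of `msg_len` bytes.
 * Fills `out` with the chosen block indices; returns how many blocks were
 * reserved, or -1 if the pool cannot hold the message. */
static int reserve_blocks(int msg_len, int out[BLK_COUNT]) {
    int used = 0;
    int len = -msg_len;                 /* LEN starts at 0 - message length */
    for (int i = 0; i < BLK_COUNT && len < 0; i++) {
        if (!blk_busy[i]) {
            out[used++] = i;
            len += BLK_SIZE;            /* add this block's capacity */
        }
    }
    if (len < 0)                        /* ran out of free blocks: give up */
        return -1;
    for (int k = 0; k < used; k++)      /* commit: mark chosen blocks busy */
        blk_busy[out[k]] = 1;
    return used;
}
```

The blocks reserved for one message need not be contiguous, which is why the message-header mechanism introduced below records where each part of a message lives.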
In practical applications, in schemes that cache event messages by searching for free memory blocks, the messages of different events reside in different blocks, and the blocks storing one message may not be contiguous. Therefore, in order to retrieve the corresponding message during later event processing, as shown in FIG. 2I, which is a schematic flowchart of yet another embedded scheduling method according to Embodiment 2 of the present application, on the basis of Embodiment 2 the method may further include:
205: Generate a message header for the message, the message header including information about the memory block(s) in which the message resides.
Correspondingly, step 1032 may specifically include:
1034: Determine a target message associated with the event to be processed.
1035: Obtain, from the message header of the target message, the information about the memory block(s) in which the target message resides, and extract the message cached in those block(s) as the target message.
Specifically, if a detected new event includes a message, the message may be cached in a free memory block (for the specific caching method, refer to the foregoing schemes). During or after caching, a message header may also be generated for the message; the header includes information about the memory block(s) in which the message resides, for example the identifiers of the blocks storing the message. Subsequently, when a pending event is processed, the blocks holding the corresponding message can be determined from the message headers, and the message corresponding to the pending event can be extracted from those blocks. It can be understood that, depending on the match between the message's data amount and the block size, a message may reside in one or more blocks. Furthermore, based on the foregoing implementations, each memory block has a corresponding bit; accordingly, in one implementation the identifier of a memory block mentioned here may also be the identifier of its corresponding bit.
In this implementation, generating a message header characterizes the memory block(s) in which a message resides accurately and simply, so that the cache location of the message can be determined accurately and quickly, improving the efficiency of subsequent message processing.
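A header recording which (possibly non-contiguous) blocks hold a message, and the reassembly it enables at processing time, can be sketched as follows. The struct fields and names here are illustrative assumptions; the source specifies only that the header contains the block information.

```c
/* Sketch of a message header that records which pool blocks hold the
 * message body, so a message split across non-contiguous blocks can be
 * reassembled when its event is processed. */
#include <stdint.h>
#include <string.h>

#define MH_SIZE   16
#define MH_COUNT  8
static uint8_t mh_pool[MH_COUNT][MH_SIZE];

typedef struct {
    uint8_t  block_ids[MH_COUNT]; /* indices of the blocks storing the body */
    uint8_t  block_num;           /* how many of those entries are used */
    uint16_t msg_len;             /* total message length in bytes */
} msg_header_t;

/* Reassemble the cached message by walking the block list in the header.
 * Returns the number of bytes copied into dst. */
static size_t fetch_message(const msg_header_t *h, uint8_t *dst) {
    size_t copied = 0;
    for (int i = 0; i < h->block_num; i++) {
        size_t chunk = h->msg_len - copied;
        if (chunk > MH_SIZE) chunk = MH_SIZE;   /* last block may be partial */
        memcpy(dst + copied, mh_pool[h->block_ids[i]], chunk);
        copied += chunk;
    }
    return copied;
}
```

In the later refinements the header additionally carries the owning event, the supporting task, and a count field; those would be extra struct members alongside the block list.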
Further, in order to be able to determine the message corresponding to a pending event, in one implementation, on the basis of the implementation shown in FIG. 2I, the message header further includes information about the event to which the message belongs. Correspondingly, step 1034 may specifically include:
1036: Determine a first message to be the target message, the message header of the first message including the information of the event to be processed.
Specifically, if a detected new event includes a message, the message may be cached in a free memory block (for the specific caching method, refer to the foregoing schemes). During or after caching, a message header may be generated that includes both the information about the memory block(s) holding the message and the information of the event to which the message belongs, that is, the new event. Subsequently, when a pending event is processed, the target message corresponding to the pending event can be determined from the event information in the message headers, and the target message can be extracted according to the memory block information in its header and then processed. Furthermore, based on the foregoing implementations, each event may have a corresponding bit; accordingly, in one implementation the identifier of an event mentioned here may be the identifier of its corresponding bit. In this implementation, generating a message header characterizes the correspondence between events and messages accurately and simply, and the cache location of a message can be determined accurately and quickly, improving the efficiency of subsequent message processing.
Optionally, in another implementation, on the basis of any of the foregoing implementations, the message header further includes information about the task that supports the event to which the message belongs. Correspondingly, step 1034 may specifically include:
1037: Determine a second message to be the target message, the message header of the second message including the information of the task that supports the event to be processed.
Specifically, if a detected new event includes a message, the message may be cached in a free memory block (for the specific caching method, refer to the foregoing schemes). During or after caching, a message header may be generated that includes both the information about the memory block(s) holding the message and the information of the task that supports the event to which the message belongs, that is, the task supporting the new event. Subsequently, when a pending event is processed, the target message corresponding to the pending event can be determined from the task information in the message headers, and the target message can be extracted according to the memory block information in its header and then processed. Furthermore, based on the foregoing implementations, each event may have a corresponding bit; accordingly, in one implementation the identifier of an event mentioned here may be the identifier of its corresponding bit. In this implementation, generating a message header characterizes the correspondence between events and messages accurately and simply, and the cache location of a message can be determined accurately and quickly, improving the efficiency of subsequent message processing.
Further, if an event includes multiple messages, a priority mechanism for message processing may be introduced. Optionally, on the basis of any of the foregoing implementations, the message header further includes message count information. Correspondingly, step 1033 may specifically include:
1038: In the order of the count information corresponding to the target messages, for each target message in turn, call a task that supports the event to be processed, process the target message, and clear the processed target message.
Specifically, when a detected new event includes multiple messages, message count information may be included in the headers generated while the messages are cached. The count information can take many forms as long as it reflects an order, for example the numbers 1, 2, 3, and so on. The priority of messages may also be set as needed; for example, priority may be determined by the order in which messages are received, with earlier messages having higher priority. Subsequently, when the messages of a pending event are processed, they can be sorted by their count information and processed one by one in that order, the corresponding task being called during processing. Messages that have been processed can be cleared to save memory.
In this implementation, adding message count information to the message header introduces a priority for message processing, realizing a message-processing priority mechanism simply and effectively.
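Count-ordered selection of the next message to process can be sketched as below: each cached message carries a monotonically increasing count assigned at caching time, and the pending message of an event with the lowest count (earliest received) is processed first. The struct and function names are illustrative.

```c
/* Sketch of count-ordered (priority) message processing: pick the pending
 * message of a given event with the smallest count, i.e. the one cached
 * earliest, as described above. */
#include <stdint.h>

typedef struct {
    uint16_t event_id;  /* event the message belongs to (header field) */
    uint16_t count;     /* count info assigned at caching time: 1, 2, ... */
    int      valid;     /* cleared once the message has been processed */
} queued_msg_t;

/* Return the index of the next message of `event_id` to process
 * (lowest count among still-valid entries), or -1 if none is pending. */
static int next_message(const queued_msg_t *q, int n, uint16_t event_id) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (q[i].valid && q[i].event_id == event_id &&
            (best < 0 || q[i].count < q[best].count))
            best = i;
    }
    return best;
}
```

After the selected message is processed, its `valid` flag is cleared ("clear the processed target message"), so the next call naturally yields the next count in order.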
Further, on the basis of any of the foregoing implementations, after the processed target message is cleared, the method may further include: setting the status identifier of the memory block in which the processed target message resided to free. With this implementation, the status identifiers of the memory blocks can be updated according to the processing progress of messages, so that the identifiers truly and reliably reflect the state of the blocks, improving the accuracy and reliability of the message cache.
In practical applications, the above implementations concerning the message header may be implemented individually or in combination, which this embodiment does not limit. The terms "first message" and "second message" are used only to distinguish messages obtained in different implementations; "first" and "second" do not limit the content of the messages. It can be understood that the first message and the second message may be the same message.
It can be understood that an event can be deemed fully processed only after all of its corresponding messages have been processed, and only then can the value of the bit corresponding to the event be updated. Optionally, on the basis of any of the foregoing implementations, step 1033 may specifically include: if there are multiple target messages, then for any target message, calling a task that supports the event to be processed, processing that target message, and clearing the processed target message; and returning to the step of acquiring a target message associated with the event to be processed until only one target message remains, at which point a task that supports the event to be processed is called, that target message is processed and cleared, and the bit corresponding to the event to be processed is set to the second value. Specifically, in this implementation, if an event being processed includes messages, each of its messages is processed in turn; when the last message of the event is processed, in addition to processing that message, the value of the bit corresponding to the event is also updated. With this implementation, the bit value corresponding to an event is updated in the course of processing the event, ensuring that event states are updated in time and improving the accuracy and reliability of scheduling.
In the embedded scheduling method provided in this embodiment, the message pool is divided into multiple memory blocks. For an event that includes a message, free memory blocks can be found to cache the event's message, and when the event is later processed, the corresponding message is extracted from those blocks for processing. This enables efficient and reliable management of event messages and effectively saves memory resources.
In practical applications, in order to avoid scheduling deadlock, pending events may also be processed by polling. Correspondingly, on the basis of any of the foregoing implementations, step 103 may specifically include: if there are multiple pending events, and among them there is an event that includes multiple messages, adopting a polling approach in which each pending event in turn is taken as the current processing object, a task that supports the processing object is called, and the processing object is given a single round of processing, until the processing of all pending events is completed.
Here, a single round of processing may mean the processing of a single target. For example, calling a task that supports the processing object and giving the processing object a single round of processing may specifically include: if the processing object includes messages, calling a task that supports the processing object to process a single message of the processing object; if the processing object does not include a message, calling a task that supports the processing object to process the processing object itself.
Specifically, in terms of message-processing policy, if an event corresponds to multiple messages, then after one message of the event has been processed, the remaining messages are skipped and scheduling proceeds to the next event, the remaining messages awaiting subsequent polling rounds. This effectively prevents misuse of the scheduler and avoids scheduling deadlock, for example the deadlock caused by a task sending messages to itself.
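The one-message-per-event-per-round polling policy can be sketched as below. The per-event message counters here are a deliberate simplification of the message pool, used only to show the fairness property; all names are illustrative.

```c
/* Sketch of the polling policy described above: in each scheduling round,
 * at most one message of each pending event is processed and the rest are
 * skipped until the next round, so an event with many messages (e.g. a task
 * flooding itself) cannot monopolize the scheduler. */

#define RR_EVENTS 3
static int rr_queue[RR_EVENTS];   /* messages still pending per event */
static int rr_handled[RR_EVENTS]; /* messages processed so far per event */

/* One polling round: a single unit of work per event that has work.
 * Returns how many messages were processed; 0 once everything is drained. */
static int poll_round(void) {
    int work = 0;
    for (int e = 0; e < RR_EVENTS; e++) {
        if (rr_queue[e] > 0) {
            rr_queue[e]--;        /* process one message, skip the rest */
            rr_handled[e]++;
            work++;
        }
    }
    return work;
}
```

The scheduler's outer loop would simply call `poll_round()` until it returns 0, interleaving the events rather than draining any one of them first.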
It should be noted that the drawings show only exemplary implementations. In the foregoing method embodiments, as long as the logic does not conflict, the execution order of the steps is not limited to that shown in the figures. Taking a step one and a step two as an example, step one may be executed before step two, step two may be executed before step one, or the two steps may be executed simultaneously, which this embodiment does not limit.
FIG. 3A is a schematic structural diagram of an embedded scheduling system according to Embodiment 3 of the present application. Referring to FIG. 3A, this embodiment provides an embedded scheduling system for implementing a lightweight, low-resource-consumption scheduling scheme. Specifically, the embedded scheduling system includes:
a query module 31, configured to traverse the current values of the bits in the first integer corresponding to each task, the tasks corresponding to the first integers on a one-to-one basis, and the bits in the first integer corresponding to a task corresponding to the events supported by that task on a one-to-one basis;
the query module 31 being further configured to determine, as current pending events, the events corresponding to the bits whose current value is the first value in the first integers corresponding to the tasks; and
a processing module 32, configured to call a task that supports a pending event and process the pending event.
In practical applications, the embedded scheduling system may be a medium storing the relevant executable code, for example a USB flash drive; alternatively, the embedded scheduling system may be a physical device in which the relevant executable code is integrated or installed, for example a chip, a smart terminal, or a computer.
In practical applications, different tasks support different events; that is, different tasks can handle different events. In this embodiment, each task is assigned a corresponding integer in advance, and a corresponding bit is allocated for each event supported by that task. Specifically, the different values of the bit corresponding to an event indicate whether that event currently needs to be processed.
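The bit-per-event caching described above can be sketched in C as follows. This is an illustrative sketch rather than the patent's implementation: the names, the 32-bit width, and the choice of 1 as the first value and 0 as the second value are all assumptions.

```c
#include <stdint.h>

/* Illustrative sketch: one 32-bit integer per task caches that task's
 * pending events; bit i of the integer corresponds to event i of the task.
 * A bit value of 1 (the "first value") means the event awaits processing;
 * 0 (the "second value") means it does not. */
typedef uint32_t event_flags_t;

/* Mark event `event_bit` of a task as pending (cf. the second update module). */
static inline void event_set(event_flags_t *flags, unsigned event_bit) {
    *flags |= (event_flags_t)1u << event_bit;
}

/* Mark event `event_bit` as handled (cf. the first update module). */
static inline void event_clear(event_flags_t *flags, unsigned event_bit) {
    *flags &= ~((event_flags_t)1u << event_bit);
}

/* Query whether event `event_bit` is currently pending. */
static inline int event_pending(event_flags_t flags, unsigned event_bit) {
    return (int)((flags >> event_bit) & 1u);
}
```

Because every event occupies a single non-interfering bit, setting or clearing one event never disturbs the state of the others.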
Based on the foregoing scenario, the system may further include: a first update module, configured to set the bit corresponding to a pending event to a second value once processing of that pending event is completed. After a pending event has been processed, the first update module updates the value of its corresponding bit, improving the accuracy and reliability of subsequent scheduling.
Optionally, the system further includes: a second update module, configured to set the value of the bit corresponding to a new event to the first value when a new event is detected. A new event here is a newly generated event that needs to be processed; when such an event arises, the second update module sets the bit corresponding to that event to the corresponding value.
In order to further save the resources and space required for scheduling, each task may be allocated an integer whose number of bits matches the number of events the task supports. Correspondingly, as shown in FIG. 3B, on the basis of Embodiment 3, the system further includes: an allocation module 33, configured to allocate a first integer to each task, where the number of bits of the first integer corresponding to a task matches the number of events supported by that task; the allocation module 33 is further configured to assign the bits of the first integer corresponding to the task, in one-to-one correspondence, to the events supported by the task.
Taking a practical scenario as an example: for all tasks that can currently be invoked, the allocation module 33 assigns an integer to each task, where the number of bits of each task's integer matches the number of events supported by that task. After assigning the integers, the allocation module 33 assigns, for each task, the bits of the task's integer to the events supported by that task in a one-to-one correspondence.
In one implementation, the integers corresponding to the tasks may be generated by creating an integer array. Specifically, as shown in FIG. 3C, on the basis of Embodiment 3, the allocation module 33 includes: a first creating unit 331, configured to create, according to the number of tasks, a one-dimensional integer array including a plurality of first integers, where the number of first integers matches the number of tasks; and a first allocating unit 332, configured to assign the first integers in the one-dimensional integer array to the tasks in a one-to-one correspondence.
Specifically, the first creating unit 331 creates a corresponding one-dimensional array according to the number of currently invocable tasks, and the first allocating unit 332 assigns the integers in the array to the tasks on a one-to-one basis; the allocation module 33 then assigns, for each task, the bits of its integer to the events supported by that task in a one-to-one correspondence. By generating the tasks' integers as a one-dimensional array, this implementation reduces the processing resources and time consumed when the integers are subsequently traversed, improving scheduling efficiency and making the scheduling scheme better suited to devices with limited hardware resources.
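The one-dimensional-array arrangement and its traversal can be sketched as follows; the array name, the task count, and the bit-counting helper are illustrative assumptions, not taken from the patent.

```c
#include <stdint.h>

#define NUM_TASKS 3  /* illustrative task count */

/* Sketch of the one-dimensional array of first integers: element i caches
 * the pending-event bits of task i. */
static uint32_t task_events[NUM_TASKS];

/* One scheduling pass: traverse every task's integer and count how many
 * events are currently pending (bits equal to the first value, 1 here). */
static unsigned count_pending(void) {
    unsigned total = 0;
    for (int t = 0; t < NUM_TASKS; ++t) {
        uint32_t bits = task_events[t];
        while (bits) {          /* clear the lowest set bit each iteration */
            bits &= bits - 1;
            ++total;
        }
    }
    return total;
}
```

A single linear pass over a small contiguous array is cheap on constrained hardware, which is the point of caching all events as integers.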
Optionally, after the pending events to be processed are determined, the ordering property of the bits can be used for priority control among events. Correspondingly, as shown in FIG. 3D, on the basis of Embodiment 3, the system further includes: an event priority module 34, configured to determine the priority of each event; and the allocation module 33 is specifically configured to assign the bits of the first integer corresponding to a task, in one-to-one correspondence, to the events supported by that task according to an allocation principle in which the bit order is consistent with the event priorities.
In an actual scheduling process, the processing priorities of the events may differ in order to improve the scheduling effect and user experience. In this implementation, the event priority module 34 first determines the priority of each event; after the priorities are determined, and in combination with the foregoing implementation, the allocation module 33 takes both the bit order and the event priorities into account when allocating bits, assigning each event a bit according to the principle that the bit order is consistent with the event priorities. Here, "consistent" includes being the same or being the reverse.
Optionally, based on the foregoing event allocation scheme, and assuming that bits are allocated according to the principle that the bit order is the same as the event priorities, then during subsequent event processing, on the basis of the implementation shown in FIG. 3D, the processing module 32 is specifically configured to invoke, for each pending event in turn and in the bit order of the bits corresponding to the pending events, a task that supports the pending event and process the pending event. By taking the bit order and the event priorities into account when allocating bits to events, and subsequently processing events in bit order, this implementation simply and elegantly uses the ordering property of bits to achieve priority control over event processing, further saving the resources required for scheduling.
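A minimal sketch of this priority control follows, under the assumption that a lower bit index was assigned to a higher-priority event; the function name and the 32-bit width are illustrative.

```c
#include <stdint.h>

/* Assuming bits were assigned so that a lower bit index means a higher
 * event priority, scanning from bit 0 upward naturally yields the
 * highest-priority pending event first. */
static int highest_priority_event(uint32_t flags) {
    for (unsigned bit = 0; bit < 32; ++bit) {
        if (flags & (1u << bit))
            return (int)bit;   /* index of the next event to process */
    }
    return -1;                 /* no event pending */
}
```

No separate priority queue is needed: the priority ordering is encoded for free in the positions of the bits themselves.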
Specifically, the tasks in this solution include the tasks that can be invoked by the current scheduling system. In practical applications, before a task can be invoked, the task must be registered with the scheduling system in advance. Correspondingly, on the basis of Embodiment 3, the system further includes: a receiving module, configured to receive a registration request, where the registration request includes a task processing function of a task; and a registration module, configured to register the function of the task according to the task processing function.
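One way the registration of task processing functions might look in C is sketched below; the function-pointer table, its size, and all names are assumptions for illustration only.

```c
#include <stddef.h>

/* Illustrative registration mechanism: the registration request carries a
 * task processing function, which the registration module stores in a
 * table so the scheduler can later invoke it by task id. */
typedef void (*task_fn)(unsigned event_bit);

#define MAX_TASKS 8
static task_fn task_table[MAX_TASKS];

/* Register a task's processing function; returns the task id, or -1 if full. */
static int register_task(task_fn fn) {
    for (int i = 0; i < MAX_TASKS; ++i) {
        if (task_table[i] == NULL) {
            task_table[i] = fn;
            return i;
        }
    }
    return -1;
}

/* A no-op processing function used for demonstration. */
static void demo_task(unsigned event_bit) { (void)event_bit; }
```

Once registered, invoking the task that supports a pending event reduces to an indexed call through `task_table`.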
The embedded scheduling system provided in this embodiment assigns each task a corresponding integer and each event a corresponding bit, designing the values of the bits corresponding to the events as mutually independent single-bit values, where the value of a bit indicates whether an event corresponding to that bit currently needs to be processed. Subsequently, during task scheduling, the current pending events can be determined quickly and accurately by traversing the values of the bits in the integers corresponding to the tasks, and the corresponding tasks can then be invoked to process the pending events. This solution adopts event-driven task scheduling so that all events of a task can be cached in an integer, achieving good usability together with a lightweight, low-resource-consumption scheduling scheme that is effectively applicable to devices with scarce hardware resources; moreover, the scheme is easy to implement, consumes few resources, and starts quickly. In practical applications, the above scheme can be used both for scheduling ordinary tasks and for systems with deep call hierarchies, and it can reduce system coupling.
In practical applications, an event may be accompanied by messages associated with it, and processing such an event involves processing those messages. Therefore, a caching and processing mechanism for messages may also be involved in the scheduling process.
In practice, a message pool can be used to cache various messages. Correspondingly, as shown in FIG. 4A, FIG. 4A is a schematic structural diagram of an embedded scheduling system according to Embodiment 4 of the present application. On the basis of Embodiment 3, the embedded scheduling system further includes: a cache module 41, configured to cache a message to the message pool when a new event is detected and the new event includes the message.
Specifically, if a detected new event includes a message, the message is cached in the message pool; when the event is subsequently processed, the message is obtained from the message pool, and processing of the event is completed by processing the message. Correspondingly, as shown in FIG. 4B, on the basis of the implementation shown in FIG. 4A, the processing module 32 includes: a message obtaining unit 321, configured to obtain a target message associated with a pending event; and a message processing unit 322, configured to invoke a task that supports the pending event, process the target message, and clear the processed target message. In this implementation, for an event that includes a message, processing of the event is completed by processing its message, and the processed message is cleared, thereby effectively saving the memory occupied by the message cache while implementing event processing.
To implement message caching reliably and effectively, as shown in FIG. 4C, on the basis of Embodiment 4, the system further includes: a dividing module 42, configured to divide the memory of the message pool according to a preset partitioning granularity to obtain a plurality of memory blocks; and the cache module 41 is specifically configured, when a new event is detected and the new event includes a message, to find a free target memory block among the plurality of memory blocks and cache the message to the target memory block.
The partitioning granularity can be set as needed or according to experience. The dividing module 42 divides the message pool into a plurality of memory blocks, implementing distributed management of the message pool's memory. When caching a message, the cache module 41 finds a free target memory block among the plurality of memory blocks and caches the message to that block. In this implementation, the message pool is divided into a plurality of memory blocks for distributed management; when a detected new event includes a message that needs to be cached, a free memory block is found among the plurality of memory blocks and the new event's message is cached to the found block, improving the efficiency of message caching and saving resource consumption.
Optionally, in order to find free memory blocks more quickly and accurately, as shown in FIG. 4D, on the basis of the implementation shown in FIG. 4C, the system further includes: an identification module 43, configured to set a status identifier for each memory block, where the status identifier of a memory block indicates whether the memory block is free. Specifically, after the dividing module 42 divides the message pool into a plurality of memory blocks, the identification module 43 sets a status identifier for each memory block; subsequently, when the cache module 41 needs to find a free memory block, it only needs to traverse the status identifiers of the memory blocks to quickly and accurately determine which blocks are free, improving the efficiency of message caching.
The status identifier can take various forms. Optionally, in order to further reduce resource consumption, on the basis of the implementation shown in FIG. 4D, the identification module 43 includes:
a second creating unit, configured to create a second integer, where the number of bits of the second integer matches the number of the plurality of memory blocks; and
a second allocating unit, configured to assign the bits of the second integer to the memory blocks in a one-to-one correspondence, where the status identifier of a memory block is the bit corresponding to that memory block, and the different values of that bit respectively indicate that the memory block is free or in use.
Specifically, after the dividing module 42 divides the message pool into a plurality of memory blocks, the second creating unit creates, according to the number of memory blocks, a second integer whose number of bits matches the number of memory blocks, and the second allocating unit assigns the bits of the second integer to the memory blocks on a one-to-one basis.
In this implementation, each memory block is allocated a corresponding bit, and the different values of the bit indicate whether the memory block is free, simply and effectively representing the state of the memory blocks, further saving resource consumption, and improving the efficiency of message caching.
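The "second integer" used as a free-block bitmap might be sketched as follows; the block count, the polarity (1 = free), and all names are illustrative assumptions.

```c
#include <stdint.h>

#define NUM_BLOCKS 16  /* illustrative number of memory blocks */

/* Sketch of the second integer: bit i indicates whether memory block i of
 * the message pool is free (1) or in use (0). */
static uint16_t free_map = 0xFFFFu;  /* all blocks start free */

/* Find a free block, mark it in use, and return its index, or -1 if none. */
static int block_alloc(void) {
    for (int i = 0; i < NUM_BLOCKS; ++i) {
        if (free_map & (1u << i)) {
            free_map &= (uint16_t)~(1u << i);  /* set status to non-free */
            return i;
        }
    }
    return -1;
}

/* Cf. the third update module: mark a block free after its message is cleared. */
static void block_free(int i) {
    free_map |= (uint16_t)(1u << i);
}
```

The entire allocation state of a 16-block pool fits in two bytes, which is exactly the resource saving the implementation aims at.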
Based on the above status identifiers, a free memory block can be found quickly when the message of an event needs to be cached. Specifically, on the basis of the foregoing implementation, the cache module may include: a searching unit, configured to traverse the status identifiers of the plurality of memory blocks when a new event is detected and the new event includes a message, so as to find a target memory block whose status identifier indicates it is free; and a storage unit, configured to cache the message to the target memory block and set the status identifier of the target memory block to non-free. In this implementation, when a detected new event includes a message, a free memory block is quickly determined for message caching based on the status identifiers of the memory blocks, and the status identifier of the chosen block is updated, improving the efficiency and accuracy of message caching.
Specifically, in the process of finding free memory blocks for message caching, the number of memory blocks a message occupies may vary with the data size of the message. Correspondingly, on the basis of Embodiment 4, the storage unit is specifically configured to cache the message to the target memory block if the data volume of the message is not greater than the storage capacity of a single target memory block; the storage unit is further specifically configured, if the data volume of the message is greater than the storage capacity of a single target memory block, to split the message into a plurality of message blocks and cache the message blocks to a plurality of target memory blocks, respectively. In this way, messages of different sizes can be cached, improving the reliability of the message cache.
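The sizing rule above amounts to a ceiling division; a one-function sketch, with an assumed block size of 32 bytes, is:

```c
/* Sketch of the sizing rule: a message no larger than one block occupies a
 * single target block; a larger message is split into ceil(size / BLOCK_SIZE)
 * message blocks cached in separate target blocks. Block size is illustrative. */
#define BLOCK_SIZE 32u

static unsigned blocks_needed(unsigned msg_size) {
    return (msg_size + BLOCK_SIZE - 1u) / BLOCK_SIZE;  /* ceiling division */
}
```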
In practical applications, in order to obtain the corresponding message when processing an event, on the basis of Embodiment 4, the system further includes: a generating module, configured to generate a message header for a message, where the message header includes information about the memory block in which the message is located. The message obtaining unit 321 includes: a processing subunit, configured to determine the target message associated with the pending event; and an extracting subunit, configured to obtain, from the message header of the target message, the information about the memory block in which the target message is located, and extract the message cached in that memory block as the target message. In this implementation, by generating a message header for each message, the memory block in which the message is located is represented accurately and simply, so that the cache location of the message can be determined accurately and quickly, improving the efficiency of subsequent message processing.
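Combining the header fields described in this and the following implementations, a hypothetical message header might look like the struct below; the field names, widths, and lookup helper are assumptions, not the patent's layout.

```c
#include <stdint.h>

/* Hypothetical message header: the block holding the message body, the
 * event (and supporting task) the message belongs to, and count information
 * used for ordering. */
typedef struct {
    uint8_t block_index;  /* memory block in which the message is cached */
    uint8_t event_bit;    /* event to which the message belongs */
    uint8_t task_id;      /* task supporting that event */
    uint8_t count;        /* message count information (processing order) */
} msg_header_t;

/* Locate the target message for a pending event: scan the headers and
 * return the block index of the first matching message, or -1 if none. */
static int find_target_block(const msg_header_t *hdrs, int n,
                             uint8_t task_id, uint8_t event_bit) {
    for (int i = 0; i < n; ++i) {
        if (hdrs[i].task_id == task_id && hdrs[i].event_bit == event_bit)
            return hdrs[i].block_index;
    }
    return -1;
}
```

The header is the only metadata that must be kept per message; the body itself stays in the pool block the header points at.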
Further, in order to determine the message corresponding to a pending event, in one implementation, on the basis of the foregoing implementation, the message header of a message further includes information about the event to which the message belongs; the processing subunit is specifically configured to determine a first message as the target message, where the message header of the first message includes the information about the pending event. In this implementation, by generating message headers, the correspondence between events and messages is represented accurately and simply, and the cache location of a message can be determined accurately and quickly, improving the efficiency of subsequent message processing.
Optionally, in another implementation, on the basis of any of the foregoing implementations, the message header of a message further includes information about the task that supports the event to which the message belongs; the processing subunit is specifically configured to determine a second message as the target message, where the message header of the second message includes information about the task that supports the pending event. In this implementation, by generating message headers, the correspondence between events and messages is represented accurately and simply, and the cache location of a message can be determined accurately and quickly, improving the efficiency of subsequent message processing.
Further, if an event includes multiple messages, a priority processing mechanism for the messages may be introduced. Optionally, on the basis of any of the foregoing implementations, the message header of a message further includes message count information; the message processing unit 322 is specifically configured to invoke, for each target message in turn and in the order of the count information corresponding to the target messages, a task that supports the pending event, process the target message, and clear the processed target message. In this implementation, by adding message count information to the message headers, a priority is introduced into message processing, implementing a message-processing priority mechanism simply and effectively.
Further, on the basis of any of the foregoing implementations, after the processed target message is cleared, the status identifier of the memory block in which the processed target message was located may also be set to free. Correspondingly, the system further includes: a third update module, configured to set the status identifier of the memory block in which the processed target message was located to free. With this implementation, the status identifiers of the memory blocks can be updated according to the processing progress of the messages, so that the status identifiers truly and reliably reflect the state of the memory blocks, improving the accuracy and reliability of the message cache.
It can be understood that an event can be deemed fully processed only after all of its corresponding messages have been processed, at which point the value of the bit corresponding to the event can be updated. Optionally, on the basis of any of the foregoing implementations, the message processing unit 322 is specifically configured, if there are multiple target messages, to invoke, for any one target message, a task that supports the pending event, process that target message, and clear the processed target message; the message processing unit 322 is further specifically configured to perform the step of obtaining the target message associated with the pending event again until only one target message remains, and then to invoke a task that supports the pending event, process the target message, clear the processed target message, and set the bit corresponding to the pending event to the second value. With this implementation, the bit value corresponding to an event is updated during the processing of the event, ensuring that the event state is updated in time and improving the accuracy and reliability of scheduling.
The embedded scheduling system provided in this embodiment divides the message pool into a plurality of memory blocks; for events that include messages, free memory blocks can be found to cache the events' messages, and when an event is subsequently processed, the corresponding messages are extracted from the memory blocks for processing, enabling efficient and reliable management of event messages and effectively saving memory resources.
In practical applications, in order to avoid scheduling deadlock, the pending events may also be processed by polling. Correspondingly, on the basis of any of the foregoing implementations, the processing module 32 is specifically configured, if there are multiple pending events and an event among them includes multiple messages, to take each pending event in turn as the current processing object by polling, invoke a task that supports the processing object, and perform a single round of processing on the processing object, until the processing of all pending events is completed.
The single round of processing may refer to the processing of a single target. For example, the processing module 32 is specifically configured, if there are multiple pending events and an event among them includes multiple messages, to take each pending event in turn as the current processing object by polling; if the processing object includes messages, a task that supports the processing object is invoked to process a single message of the processing object, until the processing of all pending events is completed. The processing module 32 is further specifically configured, if there are multiple pending events and an event among them includes multiple messages, to take each pending event in turn as the current processing object by polling; if the processing object does not include a message, a task that supports the processing object is invoked to process the processing object, until the processing of all pending events is completed. In terms of the message-processing strategy of this implementation, if an event corresponds to multiple messages, then after one message of the event has been processed, the remaining messages are skipped and scheduling moves on to the next event, with the remaining messages waiting for a subsequent scheduling poll. This effectively prevents improper use of the scheduler and avoids scheduling deadlock, for example, a deadlock caused by a task sending messages to itself.
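The one-message-per-event polling strategy can be sketched as follows; `msg_count[]` stands in for the per-event message queues of the pool, and all names and counts are illustrative.

```c
#define NUM_EVENTS 2  /* illustrative number of pending events */

/* Sketch of the polling strategy: each pass over the pending events
 * processes at most ONE message per event, then moves on, so an event with
 * many messages (e.g. a task messaging itself) cannot monopolize or
 * deadlock the scheduler. */
static unsigned msg_count[NUM_EVENTS];  /* pending messages per event */

/* One polling round; returns how many messages were processed. */
static unsigned poll_once(void) {
    unsigned handled = 0;
    for (int e = 0; e < NUM_EVENTS; ++e) {
        if (msg_count[e] > 0) {   /* single processing: one message only */
            --msg_count[e];
            ++handled;
        }
    }
    return handled;
}
```

Messages left over after a round simply wait for the next poll, which is how the remaining messages of a multi-message event get drained without blocking other events.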
FIG. 5 is a diagram showing an example structure of an embedded scheduling system according to Embodiment 5 of the present invention. As shown in FIG. 5, the system includes: a scheduler, an event management module, and a message management module.
The event management module is mainly responsible for caching events; for example, an event may be received from an external event source or triggered during event processing. The message management module includes a message header module and the message pool and is mainly responsible for caching messages; the scheduler is mainly responsible for the scheduling of events and messages. The list in the event management module represents the one-dimensional array storing the integers corresponding to the tasks; each cell in the list represents a single integer, each integer corresponds to one task, and the bits in the integer correspond one-to-one to the events supported by that task. The message management module stores the message header of each message, and the message pool is used to cache the messages.
Specifically, the scheduler performs scheduling according to the scheduling scheme based on the events and messages cached in the event management module and the message management module. For the scheduling scheme executed by the scheduler, reference may be made to the relevant content in the foregoing method embodiments, and details are not repeated here.
The embedded scheduling system provided in this embodiment adopts event-driven task scheduling so that all events of a task can be cached in an integer, implementing a lightweight, low-resource-consumption scheduling scheme that is effectively applicable to devices with scarce hardware resources; moreover, the scheme is easy to implement, consumes few resources, and starts quickly. In practical applications, the above embedded scheduling scheme can be used both for scheduling ordinary tasks and for systems with deep call hierarchies, and it can reduce system coupling.
Embodiment 6 of the present application further provides a computer storage medium, which may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc. Specifically, the computer storage medium stores program instructions, and the program instructions are used for the embedded scheduling method in the above embodiments.
Embodiment 7 of the present application provides an embedded scheduling system, which may be a terminal device on which a program running system is installed, for example, a mobile phone, a computer, a PAD, or a smart watch. The embedded scheduling system includes at least one processor and a memory, where the memory is configured to store computer-executable instructions; the number of processors may be one or more, and the processors may work separately or cooperatively; the processor is configured to execute the computer-executable instructions stored in the memory to implement the embedded scheduling method in the above embodiments.
The technical solutions and technical features in the above embodiments may be used individually or in combination, provided they do not conflict with one another; as long as such use does not exceed the knowledge of a person skilled in the art, it falls within equivalent embodiments protected by the present application.
Claims (48)
- An embedded scheduling method, comprising: traversing the current values of the bits in a first integer corresponding to each task, wherein the tasks are in one-to-one correspondence with the first integers, and the bits in the first integer corresponding to a task are in one-to-one correspondence with the events supported by that task; determining, among the first integers corresponding to the tasks, the events corresponding to bits whose current value is a first value as the current events to be processed; and invoking the task that supports the event to be processed to process the event to be processed.
- The method according to claim 1, further comprising: assigning a first integer to each task, wherein the number of bits of the first integer corresponding to a task is equal to the number of events supported by the task; and assigning the bits of the first integer corresponding to the task to the events supported by the task in a one-to-one correspondence.
- The method according to claim 2, wherein before assigning the bits of the first integer corresponding to the task to the events supported by the task in a one-to-one correspondence, the method further comprises: determining the priority of each event; and the assigning comprises: assigning the bits of the first integer corresponding to the task to the events supported by the task in a one-to-one correspondence, according to an allocation principle in which the bit order is consistent with the priority of the events.
- The method according to claim 3, wherein invoking the task that supports the event to be processed and processing the event to be processed comprises: invoking, for each event to be processed in turn according to the bit order of the bits corresponding to the events to be processed, the task that supports the event to be processed, and processing the event to be processed.
- The method according to any one of claims 2-4, wherein assigning a first integer to each task comprises: establishing, according to the number of tasks, a one-dimensional integer array comprising a plurality of first integers, the number of first integers being equal to the number of tasks; and assigning the first integers in the one-dimensional integer array to the tasks in a one-to-one correspondence.
- The method according to any one of claims 1-5, further comprising: if the processing of the event to be processed is completed, setting the bit corresponding to the event to be processed to a second value.
- The method according to any one of claims 1-6, further comprising: if a new event is detected, setting the value of the bit corresponding to the new event to the first value.
- The method according to any one of claims 1-7, further comprising: if a new event is detected and the new event includes a message, caching the message in a message pool.
- The method according to claim 8, wherein invoking the task that supports the event to be processed and processing the event to be processed comprises: obtaining a target message associated with the event to be processed; and invoking the task that supports the event to be processed, processing the target message, and clearing the processed target message.
- The method according to claim 8 or 9, further comprising: dividing the memory in the message pool according to a preset division granularity to obtain a plurality of memory blocks; wherein caching the message in the message pool if a new event is detected and the new event includes a message comprises: if a new event is detected and the new event includes a message, finding a free target memory block among the plurality of memory blocks and caching the message in the target memory block.
- The method according to claim 10, further comprising: setting a status identifier for each memory block, the status identifier of a memory block indicating whether the memory block is free.
- The method according to claim 11, wherein setting a status identifier for each memory block comprises: creating a second integer, the number of bits of the second integer being equal to the number of the plurality of memory blocks; and assigning the bits of the second integer to the memory blocks in a one-to-one correspondence, wherein the status identifier of a memory block is the bit corresponding to that memory block, and the different values of the bit respectively indicate that the memory block is free or not free.
- The method according to claim 11 or 12, wherein finding a free target memory block among the plurality of memory blocks and caching the message in the target memory block if a new event is detected and the new event includes a message comprises: if a new event is detected and the new event includes a message, traversing the status identifiers of the plurality of memory blocks to find a target memory block whose status identifier indicates free; and caching the message in the target memory block and setting the status identifier of the target memory block to not free.
- The method according to any one of claims 10-13, wherein caching the message in the target memory block comprises: if the data amount of the message is not greater than the storage capacity of a single target memory block, caching the message in the target memory block; and if the data amount of the message is greater than the storage capacity of a single target memory block, splitting the message into a plurality of message blocks and caching the plurality of message blocks in a plurality of target memory blocks, respectively.
- The method according to any one of claims 10-14, further comprising: generating a message header of the message, the message header including information about the memory block in which the message is located; wherein obtaining the target message associated with the event to be processed comprises: determining the target message associated with the event to be processed; and obtaining, from the message header of the target message, the information about the memory block in which the target message is located, and extracting the message cached in that memory block as the target message.
- The method according to claim 15, wherein the message header further includes information about the event to which the message belongs; and determining the target message associated with the event to be processed comprises: determining a first message as the target message, the message header of the first message including the information of the event to be processed.
- The method according to claim 15 or 16, wherein the message header further includes information about the task that supports the event to which the message belongs; and determining the target message associated with the event to be processed comprises: determining a second message as the target message, the message header of the second message including the information of the task that supports the event to be processed.
- The method according to any one of claims 15-17, wherein the message header further includes message count information; and invoking the task that supports the event to be processed, processing the target message, and clearing the processed target message comprises: invoking, for each target message in turn according to the order of the count information corresponding to the target messages, the task that supports the event to be processed, processing the target message, and clearing the processed target message.
- The method according to any one of claims 11-18, wherein after clearing the processed target message, the method further comprises: setting the status identifier of the memory block in which the processed target message was located to free.
- The method according to any one of claims 9-19, wherein invoking the task that supports the event to be processed, processing the target message, and clearing the processed target message comprises: if there are a plurality of target messages, invoking, for any one target message, the task that supports the event to be processed, processing that target message, and clearing the processed target message; and returning to the step of obtaining the target message associated with the event to be processed until the number of target messages is one, then invoking the task that supports the event to be processed, processing the target message, clearing the processed target message, and setting the bit corresponding to the event to be processed to the second value.
- The method according to any one of claims 1-20, wherein invoking the task that supports the event to be processed and processing the event to be processed comprises: if there are a plurality of events to be processed and among them there is an event that includes a plurality of messages, taking each event to be processed in turn as the current processing object in a polling manner, invoking the task that supports the processing object, and performing a single round of processing on the processing object, until the processing of all events to be processed is completed.
- The method according to claim 21, wherein invoking the task that supports the processing object and performing a single round of processing on the processing object comprises: if the processing object includes a message, invoking the task that supports the processing object to process a single message of the processing object; and if the processing object does not include a message, invoking the task that supports the processing object to process the processing object.
- The method according to any one of claims 1-22, further comprising: receiving a registration request, the registration request including a task processing function of a task; and performing function registration for the task according to the task processing function.
- An embedded scheduling system, comprising: a query module, configured to traverse the current values of the bits in a first integer corresponding to each task, wherein the tasks are in one-to-one correspondence with the first integers, and the bits in the first integer corresponding to a task are in one-to-one correspondence with the events supported by that task; the query module being further configured to determine, among the first integers corresponding to the tasks, the events corresponding to bits whose current value is a first value as the current events to be processed; and a processing module, configured to invoke the task that supports the event to be processed and process the event to be processed.
- The system according to claim 24, further comprising: an allocation module, configured to assign a first integer to each task, the number of bits of the first integer corresponding to a task being equal to the number of events supported by the task; the allocation module being further configured to assign the bits of the first integer corresponding to the task to the events supported by the task in a one-to-one correspondence.
- The system according to claim 25, further comprising: an event priority module, configured to determine the priority of each event; wherein the allocation module is specifically configured to assign the bits of the first integer corresponding to the task to the events supported by the task in a one-to-one correspondence, according to an allocation principle in which the bit order is consistent with the priority of the events.
- The system according to claim 26, wherein the processing module is specifically configured to invoke, for each event to be processed in turn according to the bit order of the bits corresponding to the events to be processed, the task that supports the event to be processed, and process the event to be processed.
- The system according to any one of claims 25-27, wherein the allocation module comprises: a first creation unit, configured to establish, according to the number of tasks, a one-dimensional integer array comprising a plurality of first integers, the number of first integers being equal to the number of tasks; and a first allocation unit, configured to assign the first integers in the one-dimensional integer array to the tasks in a one-to-one correspondence.
- The system according to any one of claims 24-28, further comprising: a first update module, configured to set the bit corresponding to the event to be processed to a second value if the processing of the event to be processed is completed.
- The system according to any one of claims 24-29, further comprising: a second update module, configured to set the value of the bit corresponding to a new event to the first value if the new event is detected.
- The system according to any one of claims 24-30, further comprising: a caching module, configured to cache a message in a message pool if a new event is detected and the new event includes the message.
- The system according to claim 31, wherein the processing module comprises: a message obtaining unit, configured to obtain a target message associated with the event to be processed; and a message processing unit, configured to invoke the task that supports the event to be processed, process the target message, and clear the processed target message.
- The system according to claim 31 or 32, further comprising: a division module, configured to divide the memory in the message pool according to a preset division granularity to obtain a plurality of memory blocks; wherein the caching module is specifically configured to, if a new event is detected and the new event includes a message, find a free target memory block among the plurality of memory blocks and cache the message in the target memory block.
- The system according to claim 33, further comprising: an identification module, configured to set a status identifier for each memory block, the status identifier of a memory block indicating whether the memory block is free.
- The system according to claim 34, wherein the identification module comprises: a second creation unit, configured to create a second integer, the number of bits of the second integer being equal to the number of the plurality of memory blocks; and a second allocation unit, configured to assign the bits of the second integer to the memory blocks in a one-to-one correspondence, wherein the status identifier of a memory block is the bit corresponding to that memory block, and the different values of the bit respectively indicate that the memory block is free or not free.
- The system according to claim 34 or 35, wherein the caching module comprises: a search unit, configured to, if a new event is detected and the new event includes a message, traverse the status identifiers of the plurality of memory blocks to find a target memory block whose status identifier indicates free; and a storage unit, configured to cache the message in the target memory block and set the status identifier of the target memory block to not free.
- The system according to any one of claims 33-36, wherein the storage unit is specifically configured to cache the message in the target memory block if the data amount of the message is not greater than the storage capacity of a single target memory block; and the storage unit is further specifically configured to, if the data amount of the message is greater than the storage capacity of a single target memory block, split the message into a plurality of message blocks and cache the plurality of message blocks in a plurality of target memory blocks, respectively.
- The system according to any one of claims 33-37, further comprising: a generation module, configured to generate a message header of the message, the message header including information about the memory block in which the message is located; wherein the message obtaining unit comprises: a processing subunit, configured to determine the target message associated with the event to be processed; and an extraction subunit, configured to obtain, from the message header of the target message, the information about the memory block in which the target message is located, and extract the message cached in that memory block as the target message.
- The system according to claim 38, wherein the message header further includes information about the event to which the message belongs; and the processing subunit is specifically configured to determine a first message as the target message, the message header of the first message including the information of the event to be processed.
- The system according to claim 38 or 39, wherein the message header further includes information about the task that supports the event to which the message belongs; and the processing subunit is specifically configured to determine a second message as the target message, the message header of the second message including the information of the task that supports the event to be processed.
- The system according to any one of claims 38-40, wherein the message header further includes message count information; and the message processing unit is specifically configured to invoke, for each target message in turn according to the order of the count information corresponding to the target messages, the task that supports the event to be processed, process the target message, and clear the processed target message.
- The system according to any one of claims 34-41, further comprising: a third update module, configured to set the status identifier of the memory block in which the processed target message was located to free.
- The system according to any one of claims 32-42, wherein the message processing unit is specifically configured to, if there are a plurality of target messages, invoke, for any one target message, the task that supports the event to be processed, process that target message, and clear the processed target message; and the message processing unit is further specifically configured to perform the step of obtaining the target message associated with the event to be processed again until the number of target messages is one, then invoke the task that supports the event to be processed, process the target message, clear the processed target message, and set the bit corresponding to the event to be processed to the second value.
- The system according to any one of claims 24-43, wherein the processing module is specifically configured to, if there are a plurality of events to be processed and among them there is an event that includes a plurality of messages, take each event to be processed in turn as the current processing object in a polling manner, invoke the task that supports the processing object, and perform a single round of processing on the processing object, until the processing of all events to be processed is completed.
- The system according to claim 44, wherein the processing module is specifically configured to, if there are a plurality of events to be processed and among them there is an event that includes a plurality of messages, take each event to be processed in turn as the current processing object in a polling manner and, if the processing object includes a message, invoke the task that supports the processing object to process a single message of the processing object, until the processing of all events to be processed is completed; and the processing module is further specifically configured to, under the same conditions, take each event to be processed in turn as the current processing object in a polling manner and, if the processing object does not include a message, invoke the task that supports the processing object to process the processing object, until the processing of all events to be processed is completed.
- The system according to any one of claims 24-45, further comprising: a receiving module, configured to receive a registration request, the registration request including a task processing function of a task; and a registration module, configured to perform function registration for the task according to the task processing function.
- An embedded scheduling system, comprising: at least one processor and a memory; wherein the memory stores computer-executable instructions, and the at least one processor executes the computer-executable instructions stored in the memory to perform the method according to any one of claims 1-23.
- A computer storage medium, wherein the computer storage medium stores program instructions, and the program instructions, when executed by a processor, implement the method according to any one of claims 1-23.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201780001291.8A CN109819674B (en) | 2017-09-21 | 2017-09-21 | Computer storage medium, embedded scheduling method and system |
PCT/CN2017/102701 WO2019056263A1 (en) | 2017-09-21 | 2017-09-21 | Computer storage medium and embedded scheduling method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2017/102701 WO2019056263A1 (en) | 2017-09-21 | 2017-09-21 | Computer storage medium and embedded scheduling method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019056263A1 true WO2019056263A1 (en) | 2019-03-28 |
Family
ID=65809975
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/102701 WO2019056263A1 (en) | 2017-09-21 | 2017-09-21 | Computer storage medium and embedded scheduling method and system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109819674B (en) |
WO (1) | WO2019056263A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111581827A (en) * | 2020-05-09 | 2020-08-25 | 中国人民解放军海军航空大学 | Event interaction method and system for distributed simulation |
CN113688067A (en) * | 2021-08-30 | 2021-11-23 | 上海汉图科技有限公司 | Data writing method, data reading method and device |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110417910B (en) * | 2019-08-07 | 2022-04-22 | 北京达佳互联信息技术有限公司 | Notification message sending method, device, server and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1285549A (en) * | 2000-10-23 | 2001-02-28 | 大唐电信科技股份有限公司微电子分公司 | Method for realizing intelligent card embedded software adopting logic interval chained list addressing |
EP2267724A1 (en) * | 2009-06-26 | 2010-12-29 | STMicroelectronics Rousset SAS | EEPROM memory architecture optimised for embedded memories |
CN102508632A (en) * | 2011-09-30 | 2012-06-20 | 飞天诚信科技股份有限公司 | Method and device for realizing multiplication in embedded system |
CN102521042A (en) * | 2011-12-16 | 2012-06-27 | 中船重工(武汉)凌久电子有限责任公司 | Quick text switching method for DSP (digital signal processor) based on Harvard structure |
CN105589760A (en) * | 2015-12-21 | 2016-05-18 | 中国电子科技集团公司第十一研究所 | Task timeout protection method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2003083693A1 (en) * | 2002-04-03 | 2003-10-09 | Fujitsu Limited | Task scheduler in distributed processing system |
ES2718801T3 (en) * | 2007-06-19 | 2019-07-04 | Optis Cellular Tech Llc | Procedures and systems for planning resources in a telecommunications system |
CN104166590A (en) * | 2013-05-20 | 2014-11-26 | 阿里巴巴集团控股有限公司 | Task scheduling method and system |
CN104318165A (en) * | 2014-11-05 | 2015-01-28 | 何宗彬 | Tailorable safety real-time embedded operating system |
- 2017-09-21 CN CN201780001291.8A patent/CN109819674B/en active Active
- 2017-09-21 WO PCT/CN2017/102701 patent/WO2019056263A1/en active Application Filing
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111581827A (en) * | 2020-05-09 | 2020-08-25 | 中国人民解放军海军航空大学 | Event interaction method and system for distributed simulation |
CN111581827B (en) * | 2020-05-09 | 2023-04-21 | 中国人民解放军海军航空大学 | Event interaction method and system for distributed simulation |
CN113688067A (en) * | 2021-08-30 | 2021-11-23 | 上海汉图科技有限公司 | Data writing method, data reading method and device |
Also Published As
Publication number | Publication date |
---|---|
CN109819674B (en) | 2022-04-26 |
CN109819674A (en) | 2019-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6449872B2 (en) | Efficient packet processing model in network environment and system and method for supporting optimized buffer utilization for packet processing | |
JP2018533122A (en) | Efficient scheduling of multiversion tasks | |
US9329899B2 (en) | Parallel execution of parsed query based on a concurrency level corresponding to an average number of available worker threads | |
CN113243005A (en) | Performance-based hardware emulation in on-demand network code execution systems | |
CN104008013B (en) | A kind of nuclear resource distribution method, device and many-core system | |
KR100958303B1 (en) | A System and A Method for Dynamic Loading and Execution of Module Devices using Inter-Core-Communication Channel in Multicore system environment | |
TW201731253A | Quantum key distribution method and device | |
US20170207958A1 (en) | Performance of Multi-Processor Computer Systems | |
US20130151747A1 (en) | Co-processing acceleration method, apparatus, and system | |
CN107562685B (en) | Method for data interaction between multi-core processor cores based on delay compensation | |
TWI831729B (en) | Method for processing multiple tasks, processing device and heterogeneous computing system | |
KR101859188B1 (en) | Apparatus and method for partition scheduling for manycore system | |
WO2019056263A1 (en) | Computer storage medium and embedded scheduling method and system | |
CN101826003A (en) | Multithread processing method and device | |
CN111857992B (en) | Method and device for allocating linear resources in Radosgw module | |
CN112698959A (en) | Multi-core communication method and device | |
JP4862056B2 (en) | Virtual machine management mechanism and CPU time allocation control method in virtual machine system | |
CN113010453A (en) | Memory management method, system, equipment and readable storage medium | |
CN110245027B (en) | Inter-process communication method and device | |
CN114860449A (en) | Data processing method, device, equipment and storage medium | |
CN109412973A (en) | Audio processing method and device and storage medium | |
CN118363542B (en) | Dynamic storage management method, device, equipment and medium during task running | |
US9489327B2 (en) | System and method for supporting an efficient packet processing model in a network environment | |
CN115391042B (en) | Resource allocation method and device, electronic equipment and storage medium | |
CN114168233A (en) | Data processing method, device, server and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17925994; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 17925994; Country of ref document: EP; Kind code of ref document: A1 |