CN113934528A - Task differentiation scheduling method, device and system of Internet of things and storage medium - Google Patents


Info

Publication number
CN113934528A
Authority
CN
China
Prior art keywords
task
interval
buffer
cache
internet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111403609.XA
Other languages
Chinese (zh)
Inventor
刘阳
郑凛
王琳
刘贝彦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jixiang Technology Zhejiang Co Ltd
Original Assignee
Jixiang Technology Zhejiang Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jixiang Technology Zhejiang Co Ltd filed Critical Jixiang Technology Zhejiang Co Ltd
Publication of CN113934528A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals

Abstract

Embodiments of the invention disclose a task differentiation scheduling method and device for the Internet of Things, an Internet of Things system, and a storage medium. The method comprises: monitoring the task state in a multi-core Internet of Things system, where each execution core of the system corresponds to one segment of a first-in first-out cache queue; when the input of a latest task is monitored, caching the latest task in a first cache interval so that it is correspondingly allocated to the execution core corresponding to the first cache interval, the first cache interval being the cache interval with the fewest current remaining tasks; and when an idle cache interval appears, migrating at least one cached task from a second cache interval to the idle cache interval so that it is correspondingly allocated to the execution core corresponding to the idle cache interval, where the idle cache interval is a cache interval whose tasks have been emptied and the second cache interval is the cache interval with the most current remaining tasks. This scheme improves the data-processing efficiency of multi-core task scheduling in the Internet of Things system.

Description

Task differentiation scheduling method, device and system of Internet of things and storage medium
Technical Field
The embodiment of the invention relates to the technical field of networks, in particular to a task differentiated scheduling method, device, equipment and storage medium of the Internet of things.
Background
The Internet of Things is regarded as a major development and transformation opportunity in the information field, and is widely expected to bring revolutionary change, with far-reaching influence on fields such as industry, agriculture, property management, city administration, and fire safety. Technically, however, the Internet of Things is not merely a change in how data is transmitted; it also differs markedly from traditional communication. For example, a characteristic of the large-scale Internet of Things is that a large number of users sporadically transmit very small packets, unlike conventional cellular communication.
To meet the task scheduling requirements of the Internet of Things, a high-performance embedded node is usually designed for the large-scale Internet of Things to process the collected data in parallel, and a multi-core processing mode may even be adopted to realize task scheduling.
The inventors found that, during task scheduling in a multi-core processing mode in a large-scale Internet of Things, a single task may be scheduled among multiple execution cores repeatedly, producing a large amount of useless scheduling; scheduling efficiency is therefore low, and handling task scheduling of different priorities is complex.
Disclosure of Invention
The invention provides a task differentiation scheduling method, device, and system of the Internet of Things and a storage medium, aiming to solve the technical problems that, in the prior art, the scheduling efficiency of multi-core task scheduling in the Internet of Things is low and handling task scheduling of different priorities is complex.
In a first aspect, an embodiment of the present invention provides a task differentiation scheduling method for an internet of things, which is used in a multi-core internet of things system, and includes:
monitoring the task state in the multi-core Internet of Things system, wherein each execution core of the multi-core Internet of Things system is correspondingly allocated a cache interval, the cache intervals have different lengths, and each cache interval is one segment of a first-in first-out cache queue in the multi-core Internet of Things system;
when the input of a latest task is monitored, acquiring the task priority of the latest task, wherein the task priority comprises a first priority and a second priority;
if the task priority of the latest task is the first priority, caching the latest task to a first cache interval so as to correspondingly allocate the latest task to the execution core corresponding to the first cache interval, wherein the first cache interval is the cache interval with the fewest current remaining tasks;
if the task priority of the latest task is the second priority, caching the latest task to a secondary cache interval so as to correspondingly allocate the latest task to the execution core corresponding to the secondary cache interval, wherein the secondary cache interval is the cache interval with the longest current remaining cache queue.
Further, the method further comprises:
when it is monitored that an idle buffer interval appears, migrating at least one cached task from a second buffer interval to the idle buffer interval so as to correspondingly allocate it to the execution core corresponding to the idle buffer interval, wherein the idle buffer interval is a buffer interval whose tasks have been emptied, and the second buffer interval is the buffer interval with the most current remaining tasks.
Further, when it is monitored that an idle buffer interval occurs, migrating at least one task buffer from a second buffer interval to the idle buffer interval, including:
when the occurrence of an idle cache interval is monitored, repeatedly identifying the current second cache interval and migrating cached tasks one by one from it to the idle cache interval, until the number of tasks in the idle cache interval reaches a preset threshold or the number of tasks in every cache interval is not higher than the preset threshold.
Further, when there are a plurality of second buffer intervals, a task buffer is randomly migrated from one second buffer interval to the idle buffer interval.
Further, when a plurality of first buffer intervals exist, the latest task is randomly buffered to one of the first buffer intervals.
In a second aspect, an embodiment of the present invention further provides a task differentiation scheduling device for an internet of things, which is used in a multi-core internet of things system, and includes:
the state monitoring unit is used for monitoring the task state in the multi-core Internet of things system, each execution core of the multi-core Internet of things system is correspondingly allocated with a buffer interval with different lengths, and the buffer interval is one section of a first-in first-out buffer queue in the multi-core Internet of things system;
the priority judging unit is used for acquiring the task priority of the latest task when the latest task is monitored to be input, wherein the task priority comprises a first priority and a second priority;
the first cache unit is used for caching the latest task to a first cache interval, so as to correspondingly allocate it to the execution core corresponding to the first cache interval, if the task priority of the latest task is the first priority, wherein the first cache interval is the cache interval with the fewest current remaining tasks;
and the second cache unit is used for caching the latest task to a secondary cache interval, so as to correspondingly allocate it to the execution core corresponding to the secondary cache interval, if the task priority of the latest task is the second priority, wherein the secondary cache interval is the cache interval with the longest current remaining cache queue.
Further, the apparatus further includes:
and the task migration unit is used for migrating, when the appearance of an idle cache interval is monitored, at least one cached task from a second cache interval to the idle cache interval so as to correspondingly allocate it to the execution core corresponding to the idle cache interval, wherein the idle cache interval is a cache interval whose tasks have been emptied, and the second cache interval is the cache interval with the most current remaining tasks.
Further, when it is monitored that an idle buffer interval occurs, migrating at least one task buffer from a second buffer interval to the idle buffer interval, including:
when the occurrence of an idle cache interval is monitored, repeatedly identifying the current second cache interval and migrating cached tasks one by one from it to the idle cache interval, until the number of tasks in the idle cache interval reaches a preset threshold or the number of tasks in every cache interval is not higher than the preset threshold.
Further, when there are a plurality of second buffer intervals, a task buffer is randomly migrated from one second buffer interval to the idle buffer interval.
Further, when a plurality of first buffer intervals exist, the latest task is randomly buffered to one of the first buffer intervals.
In a third aspect, an embodiment of the present invention further provides an internet of things system, including:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the internet of things system is enabled to implement the task differentiation scheduling method of the internet of things according to any one of the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for task differentiated scheduling of the internet of things according to the first aspect.
The task differentiation scheduling method and device of the Internet of Things, the Internet of Things system, and the storage medium monitor the task state in the multi-core Internet of Things system, where each execution core is correspondingly allocated a cache interval, the cache intervals have different lengths, and each cache interval is one segment of a first-in first-out cache queue in the system. When the input of a latest task is monitored, the task priority of the latest task is acquired, the task priority including a first priority and a second priority. If the task priority of the latest task is the first priority, the latest task is cached in a first cache interval to be correspondingly allocated to the execution core corresponding to that interval, the first cache interval being the cache interval with the fewest current remaining tasks; if the task priority is the second priority, the latest task is cached in a secondary cache interval to be correspondingly allocated to the execution core corresponding to that interval, the secondary cache interval being the cache interval with the longest current remaining cache queue. In this scheme, a corresponding cache interval is allocated to each execution core, and when a latest task is received it is cached in a cache interval chosen by a priority-dependent decision mechanism, completing allocation to a suitable execution core. This reduces the switching involved in task allocation, improves the data-processing efficiency of multi-core task scheduling in the Internet of Things system, and simplifies the scheduling of tasks with different priorities.
Drawings
Fig. 1 is a flowchart of a task differentiation scheduling method of the internet of things according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a task differentiation scheduling device of the internet of things according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an internet of things system according to a third embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are for purposes of illustration and not limitation. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
It should be noted that, for the sake of brevity, this description does not exhaust all alternative embodiments, and it should be understood by those skilled in the art after reading this description that any combination of features may constitute an alternative embodiment as long as the features are not mutually inconsistent.
The following examples are described in detail.
Example one
Fig. 1 is a flowchart of a task differentiation scheduling method of the internet of things according to an embodiment of the present invention. The task differential scheduling method for the internet of things provided in the embodiment may be executed by various operating devices for task differential scheduling of the internet of things, the operating devices may be implemented in a software and/or hardware manner, and the operating devices may be composed of two or more physical entities or may be composed of one physical entity.
Specifically, referring to fig. 1, the task differentiation scheduling method for the internet of things specifically includes:
step S101: and monitoring the task state in the multi-core Internet of things system, wherein each execution core of the multi-core Internet of things system is correspondingly allocated with a buffer interval with different lengths, and the buffer interval is one section of a first-in first-out buffer queue in the multi-core Internet of things system.
In the architecture of an Internet of Things system, the sink node is a key component. In a specific implementation, the multi-core Internet of Things system is designed around an embedded multi-core processor whose execution cores can operate simultaneously, bringing higher processing efficiency to data collection in the multi-core Internet of Things system under limited resource configurations.
For an embedded multi-core processor, each execution core cannot process all of its allocated tasks at once; that is, tasks allocated to an Internet of Things node may need to queue, and queued tasks are temporarily cached in a first-in first-out cache queue. In the prior art, while queuing, a task may be repeatedly scheduled and switched to different execution cores according to the cores' actual processing progress, which amounts to performing invalid scheduling during the task-scheduling process.
In this scheme, to improve scheduling efficiency, the first-in first-out cache queue is segmented, with each segment corresponding to one execution core. A task allocated to a given execution core is first cached in the corresponding cache interval; by pairing execution cores with cache intervals in this way, the association between a task and the execution core that processes it is fixed in a relatively static manner, reducing invalid allocation and scheduling as much as possible. When the first-in first-out cache queue is segmented, the segments are given different lengths; in particular, a small number of cache intervals that can hold only a few tasks should be set aside to accommodate subsequent task scheduling. As for the concrete lengths, every segment may be given a distinct length, or several lengths may be used with each length shared by one or more execution cores; the specific lengths and numbers are not limited here.
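As a minimal Python sketch of the segmented queue just described (the class name, core count, and capacities are illustrative assumptions, not taken from the disclosure), each execution core owns one FIFO segment of a bounded length:

```python
from collections import deque

class SegmentedFifoQueue:
    """Hypothetical model of the segmented FIFO cache queue: one cache
    interval (FIFO segment) per execution core, with differing lengths."""
    def __init__(self, capacities):
        self.capacities = list(capacities)          # max tasks per interval
        self.intervals = [deque() for _ in capacities]

    def remaining_tasks(self, core):
        """Tasks currently queued in a core's cache interval."""
        return len(self.intervals[core])

    def remaining_queue(self, core):
        """Free slots left in a core's cache interval."""
        return self.capacities[core] - len(self.intervals[core])

# Example: two short intervals (3 tasks) and two longer ones (5 tasks).
q = SegmentedFifoQueue([3, 3, 5, 5])
q.intervals[2].append("task-A")   # a task cached for execution core 2
```

The deques model the FIFO behavior of each segment; capacities are tracked separately because the scheme bounds occupancy through its allocation logic rather than the queue itself.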
Step S102: when the input of the latest task is monitored, the task priority of the latest task is obtained, wherein the task priority comprises a first priority and a second priority.
Tasks in the Internet of Things may have different processing priorities according to their processing targets and the requirements of different task types. Generally, two priority levels are distinguished; in actual processing, further levels can be set on this basis according to the actual task types, i.e., the first priority and the second priority can each be subdivided into multiple levels, and execution cores can likewise be apportioned among the multiple levels following the design idea of this scheme. Generally speaking, a higher-priority task has a tighter processing-time requirement and needs an execution core allocated preferentially for fast processing, whereas a lower-priority task has a looser processing-time requirement and can queue behind existing tasks. On this basis, given the cache intervals of different lengths set in step S101, steps S103 and S104 apply different caching mechanisms according to the priority determination, so that higher-priority tasks are handled preferentially.
Step S103: if the task priority of the latest task is the first priority, caching the latest task to a first cache interval so as to correspondingly distribute the latest task to an execution core corresponding to the first cache interval, wherein the first cache interval is the cache interval with the least current residual tasks.
Step S104: if the task priority of the latest task is the second priority, caching the latest task to a secondary cache interval so as to correspondingly allocate it to the execution core corresponding to the secondary cache interval, wherein the secondary cache interval is the cache interval with the longest current remaining cache queue.
For an Internet of Things node, a newly received task must be allocated to some execution core of its embedded multi-core processor. In the existing approach, when every execution core still has tasks pending, the first-in first-out cache queue is managed as a whole, and the assignment of a task to a specific execution core may be adjusted continually as task processing progresses; consequently, while a task sits in the first-in first-out cache queue, the task-processing state of every execution core must be monitored continuously and the allocation repeatedly re-adapted.
In this scheme, cache intervals of different lengths are configured for the execution cores, and tasks are allocated according to the priority of the latest task and the occupancy of the cache intervals, so that tasks requiring preferential treatment are processed first while task allocation remains relatively stable. Specifically, a latest task with the second priority is allocated directly to the execution core with the longest current remaining cache queue; the tasks allocated to each execution core are thus relatively fixed, the basic principle being that the initially allocated execution core performs the processing, which reduces allocation changes while tasks wait. A latest task with the first priority is allocated directly to the cache interval with the fewest current remaining tasks, so that its processing can begin as soon as possible.
Constraining the cache intervals and the allocation mode in this way reduces useless scheduling while meeting the scheduling requirement of higher-priority tasks. Suppose there are currently two execution cores whose cache intervals can hold at most 3 and 5 tasks, respectively. Under the allocation of steps S103 and S104, a second-priority task goes to the execution core with the longest remaining cache queue; since the intervals differ in length, the remaining queue of the longer interval is generally longer, so second-priority tasks are generally allocated to the longer interval. Likewise, the shorter interval generally holds fewer remaining tasks, so first-priority tasks are generally allocated to the shorter interval. Under this policy a second-priority task can still be scheduled to an execution core to await processing, while a first-priority task, even if allocated to a core that already has tasks, faces the fewest pending tasks and a low upper bound on their total number, so its processing is not seriously delayed. Overall, by setting cache intervals of different lengths and allocating tasks of different priorities according to the remaining tasks or remaining cache queues, the data-processing efficiency of multi-core task scheduling in the Internet of Things system is improved, and the scheduling of tasks with different priorities is simplified.
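The allocation rules of steps S103 and S104 can be sketched as a small dispatch function. This is an illustrative Python sketch under stated assumptions: the priority encodings, function name, and the two-core example with capacities 3 and 5 (taken from the worked example above) are hypothetical, not part of the disclosure.

```python
import random
from collections import deque

FIRST, SECOND = 0, 1  # assumed encodings of the two priority levels

def dispatch(task, priority, intervals, capacities):
    """intervals: one FIFO (deque) per execution core; capacities: the
    differing maximum lengths of the corresponding cache intervals."""
    cores = range(len(intervals))
    if priority == FIRST:
        # Step S103: cache interval with the fewest current remaining tasks.
        best = min(len(intervals[c]) for c in cores)
        candidates = [c for c in cores if len(intervals[c]) == best]
    else:
        # Step S104: interval with the longest current remaining cache queue.
        best = max(capacities[c] - len(intervals[c]) for c in cores)
        candidates = [c for c in cores
                      if capacities[c] - len(intervals[c]) == best]
    core = random.choice(candidates)  # random tie-break, as the text describes
    intervals[core].append(task)
    return core

# Worked example: two cores with capacities 3 and 5, core 1 holding 2 tasks.
intervals = [deque(), deque(["t1", "t2"])]
capacities = [3, 5]
dispatch("urgent", FIRST, intervals, capacities)   # fewest tasks: core 0
dispatch("normal", SECOND, intervals, capacities)  # longest free queue: core 1
```

Note how the two rules diverge only when interval lengths differ: with equal capacities, "fewest remaining tasks" and "longest remaining queue" would always pick the same interval.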
In actual processing, several cache intervals may hold the same, fewest number of remaining tasks, i.e., multiple cache intervals may all qualify as the first cache interval; when there are multiple first cache intervals, the latest task is randomly cached in one of them.
Step S105: when the appearance of an idle cache interval is monitored, migrating at least one cached task from a second cache interval to the idle cache interval so as to correspondingly allocate it to the execution core corresponding to the idle cache interval, wherein the idle cache interval is a cache interval whose tasks have been emptied, and the second cache interval is the cache interval with the most current remaining tasks.
While each execution core processes its tasks, processing speed may differ owing to task complexity, data transmission speed, bandwidth allocation, and so on, so the task queues in the cache intervals end up uneven. For example, some execution cores may have finished all their tasks, emptying their cache intervals, while several tasks still queue in the intervals of other cores. At this point one or more tasks can be migrated from a queued cache interval to the idle one, raising overall processing speed and preventing an execution core from sitting idle.
When tasks are migrated, a batch of tasks is not moved from other cache intervals to the idle interval at once; instead, tasks are migrated one by one, with the task counts of all cache intervals checked after each move. Specifically, when an idle cache interval is detected, the current second cache interval is identified repeatedly, and one cached task at a time is migrated from it to the idle interval, until the number of tasks in the idle interval reaches a preset threshold or no cache interval holds more tasks than the preset threshold. During this one-by-one migration, if the formerly idle interval reaches the preset threshold, it already holds a certain number of pending tasks and no further tasks are migrated to it; thereafter it simply receives new tasks through the normal allocation. Meanwhile, to avoid leaving too few tasks elsewhere, migration also stops once outward migration has reduced the other intervals so far that none holds more tasks than the preset threshold.
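The one-by-one migration with its two stopping conditions can be sketched as follows. This is a hedged illustration: the function name, the threshold value, and the choice to move the earliest-cached task (`popleft`) are assumptions — the text explicitly allows either the earliest or the latest cached task to be moved.

```python
import random
from collections import deque

THRESHOLD = 2  # assumed value of the preset threshold

def refill_idle_interval(intervals, idle):
    """Repeatedly find the current second cache interval (most remaining
    tasks, random choice among ties) and migrate one task to the idle
    interval, stopping when the idle interval reaches THRESHOLD tasks or
    no other interval holds more than THRESHOLD tasks."""
    while len(intervals[idle]) < THRESHOLD:
        donors = [c for c in range(len(intervals)) if c != idle]
        most = max(len(intervals[c]) for c in donors)
        if most <= THRESHOLD:
            break  # would drain the other intervals too far
        second = random.choice(
            [c for c in donors if len(intervals[c]) == most])
        # Move the earliest-cached task (an assumption; the latest works too).
        intervals[idle].append(intervals[second].popleft())

# Core 0 has emptied its interval; core 1 queues four tasks, core 2 one.
intervals = [deque(), deque(["a", "b", "c", "d"]), deque(["e"])]
refill_idle_interval(intervals, idle=0)
```

In this run two tasks move from the most-loaded interval to the idle one, after which the idle interval holds `THRESHOLD` tasks and migration stops, leaving core 2's single task untouched.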
In a specific processing procedure, several cache intervals may tie for the most remaining tasks, i.e., there may be multiple second cache intervals. In that case, rather than migrating one task from every second cache interval to the idle interval, one cached task is randomly migrated from a single second cache interval, still in the confirm-then-migrate manner: after each task is moved, the tasks remaining in the idle interval and in the other intervals are re-examined, and migration stops when the set number is reached. The task migrated to the idle interval may be the most recently cached task or the earliest cached one.
Besides the preset threshold, the criterion for stopping migration from the other cache intervals may be a comparison with the idle interval: if the number of remaining tasks in another cache interval exceeds the number of tasks in the idle interval by no more than one, no task is migrated from it.
It should be noted that, in this embodiment, the first cache interval and the second cache interval are not fixed intervals; they are merely labels assigned according to the state of the cache intervals at a given moment, introduced for convenience of description, and they function no differently from the other cache intervals. After the current latest task has been cached in a certain first cache interval, that interval may no longer be the first cache interval when the next latest task arrives. Similarly, an interval is regarded as the idle cache interval throughout a migration procedure, not merely while it contains no task; in terms of the task queuing state, its designation as idle ends once the migration into it completes.
Meanwhile, it should be understood that in this scheme steps S101 to S105 form a whole and are not executed strictly in the order described. When the multi-core Internet of Things system processes tasks, the allocation of latest tasks and the migration of tasks are performed according to the actual monitoring results: when a latest task is monitored, it is cached, and steps S102 to S104 are executed again whenever a further latest task is monitored; when an idle cache interval is monitored, tasks are migrated to it, and step S105 is executed again whenever a further idle cache interval is monitored.
In summary, the task state in the multi-core Internet of Things system is monitored, where each execution core is correspondingly allocated a cache interval, the cache intervals have different lengths, and each cache interval is one segment of a first-in first-out cache queue in the system. When the input of a latest task is monitored, its task priority is acquired; if the task priority is the first priority, the latest task is cached in a first cache interval to be correspondingly allocated to the execution core corresponding to that interval, the first cache interval being the cache interval with the fewest current remaining tasks; if the task priority is the second priority, the latest task is cached in a secondary cache interval to be correspondingly allocated to the execution core corresponding to that interval, the secondary cache interval being the cache interval with the longest current remaining cache queue. In this scheme, a corresponding cache interval is allocated to each execution core, and a received latest task is cached in a cache interval chosen by a priority-dependent decision mechanism, completing allocation to a suitable execution core; this reduces the switching involved in task allocation, improves the data-processing efficiency of multi-core task scheduling in the Internet of Things system, and simplifies the scheduling of tasks with different priorities.
Example two
Fig. 2 is a schematic structural diagram of a task differentiation scheduling device of the internet of things according to a second embodiment of the present invention. Referring to fig. 2, the task differentiation scheduling device of the internet of things includes: a status snooping unit 210, a priority determination unit 220, a first buffer unit 230, and a second buffer unit 240.
The state monitoring unit 210 is configured to monitor the task state in the multi-core internet of things system, where each execution core of the multi-core internet of things system is correspondingly allocated a buffer interval of a different length, and each buffer interval is one segment of a first-in first-out buffer queue in the multi-core internet of things system. The priority judging unit 220 is configured to, when the input of a latest task is monitored, obtain the task priority of the latest task, where the task priority includes a first priority and a second priority. The first cache unit 230 is configured to, if the task priority of the latest task is first priority, cache the latest task to a first cache interval so as to correspondingly allocate it to the execution core corresponding to the first cache interval, where the first cache interval is the cache interval with the fewest current remaining tasks. The second cache unit 240 is configured to, if the task priority of the latest task is second priority, cache the latest task to a secondary cache interval so as to correspondingly allocate it to the execution core corresponding to the secondary cache interval, where the secondary cache interval is the cache interval with the longest current remaining cache queue.
On the basis of the above embodiment, the apparatus further includes:
the task migration unit 250 is configured to, when it is monitored that an idle cache interval occurs, migrate at least one task cache from a second cache interval to the idle cache interval so as to correspondingly allocate it to the execution core corresponding to the idle cache interval, where the idle cache interval is a cache interval whose tasks have been cleared, and the second cache interval is the cache interval with the most current remaining tasks.
On the basis of the foregoing embodiment, when it is monitored that an idle buffer interval occurs, migrating at least one task buffer from a second buffer interval to the idle buffer interval includes:
when it is monitored that an idle cache interval occurs, a second cache interval is determined repeatedly, and task caches are migrated one by one from the second cache interval to the idle cache interval, until the number of tasks in the idle cache interval reaches a preset threshold or the number of tasks in every cache interval is not higher than the preset threshold.
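A minimal sketch of this migration loop, assuming each cache interval is modeled as a Python deque and that migrated tasks are taken from the tail of the donor; the source text does not fix which end of the FIFO segment is moved, so that choice is an assumption here:

```python
from collections import deque

def migrate_on_idle(idle, buffers, threshold):
    """Refill a newly idle cache interval from the fullest intervals,
    one task cache at a time, until the idle interval holds `threshold`
    tasks or no interval holds more than `threshold` tasks."""
    while len(idle) < threshold:
        donors = [b for b in buffers if b is not idle and len(b) > threshold]
        if not donors:
            break                       # every interval is at or below the threshold
        busiest = max(donors, key=len)  # the current "second cache interval"
        idle.append(busiest.pop())      # move one task cache from the donor's tail
```

Because the donor is re-selected on every iteration, the "second cache interval" can change mid-migration if another interval becomes the fullest, matching the "determined repeatedly" wording above.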
On the basis of the above embodiment, when there are a plurality of second buffer intervals, a task buffer is randomly migrated from one second buffer interval to the idle buffer interval.
On the basis of the above embodiment, when there are a plurality of first buffer intervals, the latest task is randomly buffered to one of the first buffer intervals.
The task differentiation scheduling device of the internet of things provided by this embodiment of the invention is included in the task differentiation scheduling equipment of the internet of things, can be used to execute any of the task differentiation scheduling methods of the internet of things provided in Embodiment One, and has corresponding functions and beneficial effects.
Example three
Fig. 3 is a schematic structural diagram of node devices of the internet of things according to a third embodiment of the present invention, where the node devices of the internet of things are used to form a system of the internet of things, so as to comprehensively implement task scheduling in this scheme. As shown in fig. 3, the node apparatus of the internet of things includes a processor 310, a memory 320, an input device 330, an output device 340, and a communication device 350; the number of the processors 310 in the node device of the internet of things may be one or more, and one processor 310 is taken as an example in fig. 3; the processor 310, the memory 320, the input device 330, the output device 340 and the communication device 350 in the node device of the internet of things may be connected through a bus or other manners, and fig. 3 illustrates the connection through the bus as an example.
The memory 320 is a computer-readable storage medium, and can be used to store software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the task differentiation scheduling method of the internet of things in the embodiment of the present invention (for example, the status monitoring unit 210, the priority determining unit 220, the first cache unit 230, and the second cache unit 240 in the task differentiation scheduling apparatus of the internet of things). The processor 310 executes various functional applications and data processing of the node device of the internet of things by running software programs, instructions and modules stored in the memory 320, that is, the task differentiation scheduling method of the internet of things is realized.
The memory 320 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created from use of the node device of the internet of things, and the like. Further, the memory 320 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 320 may further include memory located remotely from the processor 310, which may be connected to the internet of things node device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 330 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the internet of things node device. The output device 340 may include a display device such as a display screen.
The node equipment of the internet of things includes the task differentiation scheduling device of the internet of things, can be used to execute the task differentiation scheduling method of the internet of things provided in any embodiment, and has corresponding functions and beneficial effects.
Example four
Embodiments of the present invention further provide a storage medium containing computer-executable instructions, where the computer-executable instructions are executed by a computer processor to perform relevant operations in the task differentiation scheduling method for the internet of things provided in any embodiment of the present invention, and the storage medium has corresponding functions and beneficial effects.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product.
Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.

The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions.

These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A task differentiation scheduling method of the Internet of things is used for a multi-core Internet of things system and is characterized by comprising the following steps:
monitoring the task state in the multi-core Internet of things system, wherein each execution core of the multi-core Internet of things system is respectively and correspondingly allocated with a buffer interval with different lengths, and the buffer interval is one section of a first-in first-out buffer queue in the multi-core Internet of things system;
when the input of a latest task is monitored, acquiring the task priority of the latest task, wherein the task priority comprises a first priority and a second priority;
if the task priority of the latest task is first priority, caching the latest task to a first cache interval so as to correspondingly distribute the latest task to an execution core corresponding to the first cache interval, wherein the first cache interval is the cache interval with the least current residual tasks;
if the task priority of the latest task is second priority, caching the latest task to a secondary cache interval so as to correspondingly distribute the latest task to an execution core corresponding to the secondary cache interval, wherein the secondary cache interval is the cache interval with the longest current remaining cache queue;
when it is monitored that an idle buffer interval occurs, migrating at least one task cache from a second buffer interval to the idle buffer interval so as to correspondingly allocate it to the execution core corresponding to the idle buffer interval, wherein the idle buffer interval is a buffer interval whose tasks have been cleared, and the second buffer interval is the buffer interval with the most current remaining tasks.
2. The method of claim 1, wherein when it is monitored that a free buffer interval occurs, migrating at least one task buffer from a second buffer interval to the free buffer interval, comprises:
when it is monitored that an idle cache interval occurs, determining a second cache interval repeatedly, and migrating task caches one by one from the second cache interval to the idle cache interval until the number of tasks in the idle cache interval reaches a preset threshold or the number of tasks in every cache interval is not higher than the preset threshold.
3. The method of claim 2, wherein when there are multiple second buffer intervals, a task buffer is randomly migrated from one of the second buffer intervals to the free buffer interval.
4. The method of claim 1, wherein when there are multiple first buffer intervals, the latest task is randomly buffered into one of the first buffer intervals.
5. A task differentiation scheduling device of the Internet of things, characterized by comprising:
the state monitoring unit is used for monitoring the task state in the multi-core Internet of things system, each execution core of the multi-core Internet of things system is correspondingly allocated with a buffer interval with different lengths, and the buffer interval is one section of a first-in first-out buffer queue in the multi-core Internet of things system;
the priority judging unit is used for acquiring the task priority of the latest task when the latest task is monitored to be input, wherein the task priority comprises a first priority and a second priority;
the first cache unit is used for caching the latest task to a first cache interval to correspondingly allocate the latest task to an execution core corresponding to the first cache interval if the task priority of the latest task is first priority, wherein the first cache interval is a cache interval with the least current residual tasks;
the second cache unit is used for caching the latest task to a secondary cache interval to be correspondingly distributed to an execution core corresponding to the secondary cache interval if the task priority of the latest task is second priority, and the secondary cache interval is the cache interval with the longest current remaining cache queue;
and the task migration unit is used for migrating at least one task cache from a second cache interval to the idle cache interval when the idle cache interval appears, so as to correspondingly allocate it to the execution core corresponding to the idle cache interval, wherein the idle cache interval is a cache interval whose tasks have been cleared, and the second cache interval is the cache interval with the most current remaining tasks.
6. The apparatus of claim 5, wherein the migrating at least one task buffer from a second buffer interval to an idle buffer interval when an idle buffer interval is monitored to occur comprises:
when it is monitored that an idle cache interval occurs, determining a second cache interval repeatedly, and migrating task caches one by one from the second cache interval to the idle cache interval until the number of tasks in the idle cache interval reaches a preset threshold or the number of tasks in every cache interval is not higher than the preset threshold.
7. The apparatus of claim 6, wherein when there are multiple second buffer intervals, a task buffer is randomly migrated from one of the second buffer intervals to the free buffer interval.
8. The apparatus of claim 5, wherein the latest task is randomly buffered to one of the first buffer intervals when there are multiple first buffer intervals.
9. An internet of things system, comprising:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the internet of things system to implement the task differentiation scheduling method of the internet of things according to any one of claims 1-4.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the method for task-differentiated scheduling of the internet of things according to any one of claims 1 to 4.
CN202111403609.XA 2020-12-31 2021-11-24 Task differentiation scheduling method, device and system of Internet of things and storage medium Pending CN113934528A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020116416559 2020-12-31
CN202011641655 2020-12-31

Publications (1)

Publication Number Publication Date
CN113934528A true CN113934528A (en) 2022-01-14

Family

ID=79288155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111403609.XA Pending CN113934528A (en) 2020-12-31 2021-11-24 Task differentiation scheduling method, device and system of Internet of things and storage medium

Country Status (1)

Country Link
CN (1) CN113934528A (en)

Similar Documents

Publication Publication Date Title
US11977784B2 (en) Dynamic resources allocation method and system for guaranteeing tail latency SLO of latency-sensitive application
CN113934530A (en) Multi-core multi-queue task cross processing method, device, system and storage medium
AU2015229200B2 (en) Coordinated admission control for network-accessible block storage
US20150295970A1 (en) Method and device for augmenting and releasing capacity of computing resources in real-time stream computing system
CN109697122B (en) Task processing method, device and computer storage medium
CN107454017B (en) Mixed data stream cooperative scheduling method in cloud data center network
KR20140134190A (en) Multicore system and job scheduling method thereof
CN106603692B (en) Data storage method and device in distributed storage system
KR20110080735A (en) Computing system and method
Shen et al. Probabilistic network-aware task placement for mapreduce scheduling
CN113934529A (en) Task scheduling method, device and system of multi-level core and storage medium
CN112162835A (en) Scheduling optimization method for real-time tasks in heterogeneous cloud environment
CN107092649B (en) Real-time stream calculation-oriented non-perception topology replacement method
CN113971085A (en) Method, device, system and storage medium for distinguishing processing tasks by master core and slave core
CN114020440A (en) Multi-stage task classification processing method, device and system and storage medium
CN112925616A (en) Task allocation method and device, storage medium and electronic equipment
CN113010309A (en) Cluster resource scheduling method, device, storage medium, equipment and program product
CN113934528A (en) Task differentiation scheduling method, device and system of Internet of things and storage medium
CN110928649A (en) Resource scheduling method and device
CN112650574A (en) Priority-based task scheduling method, device, system and storage medium
CN108228323B (en) Hadoop task scheduling method and device based on data locality
CN112764895A (en) Task scheduling method, device and system of multi-core Internet of things chip and storage medium
CN113971086A (en) Task scheduling method, device and system based on task relevance and storage medium
CN112764896A (en) Task scheduling method, device and system based on standby queue and storage medium
JP2004046372A (en) Distributed system, resource allocation method, program, and recording medium with which resource allocation program is recorded

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination