CN113934529A - Task scheduling method, device and system of multi-level core and storage medium - Google Patents

Task scheduling method, device and system of multi-level core and storage medium

Info

Publication number
CN113934529A
Authority
CN
China
Prior art keywords
core
task
priority
cache queue
common
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111403628.2A
Other languages
Chinese (zh)
Inventor
刘阳
郑凛
王琳
刘贝彦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jixiang Technology Zhejiang Co Ltd
Original Assignee
Jixiang Technology Zhejiang Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jixiang Technology Zhejiang Co Ltd filed Critical Jixiang Technology Zhejiang Co Ltd
Publication of CN113934529A publication Critical patent/CN113934529A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals

Abstract

The embodiment of the invention discloses a task scheduling method and device of a multi-level core, an Internet of things system and a storage medium. The method comprises the following steps: monitoring the task state in a multi-core Internet of things system, wherein the execution cores of the multi-core Internet of things system comprise a priority core and a common core, and a first-in first-out cache queue in the multi-core Internet of things system is divided into a priority cache queue and a common cache queue; when input of the latest task is monitored, caching the latest task into the priority cache queue or the common cache queue according to the task priority; and when a priority core has completed its current task, acquiring a new task from the priority cache queue for processing, and when a common core has completed its current task, acquiring a new task from the common cache queue for processing. With this scheme, the task allocation process is omitted, the data processing efficiency of multi-core task scheduling in the Internet of things system is improved, and targeted management of different types of tasks is achieved.

Description

Task scheduling method, device and system of multi-level core and storage medium
Technical Field
The embodiment of the invention relates to the technical field of networks, and in particular to a task scheduling method, device and system of a multi-level core, and a storage medium.
Background
The internet of things is regarded as a major development and transformation opportunity in the information field and is widely expected to bring revolutionary change, with an all-round influence on fields such as industry, agriculture, property management, city management, and safety and fire fighting. Technically, however, the internet of things does more than change which entities transmit data; it also differs markedly from traditional communication. For example, a characteristic of the large-scale internet of things is that a large number of users sporadically transmit very small packets, unlike conventional cellular communications.
In order to meet the task scheduling requirements of the internet of things, a high-performance embedded node is usually designed for the large-scale internet of things to process the collected data in parallel, and a multi-core processing mode may even be adopted for task scheduling.
The inventor has found that, during task scheduling under a multi-core processing mode in a large-scale internet of things, a single task may be scheduled back and forth among multiple execution cores several times, producing a large amount of useless scheduling; the scheduling efficiency is therefore low, and the scheduling of different types of tasks is complex.
Disclosure of Invention
The invention provides a task scheduling method, a task scheduling device, a task scheduling system and a storage medium for a multi-level core, and aims to solve the technical problems that in the prior art, the scheduling efficiency of multi-core processing task scheduling of the Internet of things is low, and the scheduling of different types of tasks is complex.
In a first aspect, an embodiment of the present invention provides a task scheduling method for a multi-level core, which is used in a multi-core internet of things system, and includes:
monitoring a task state in the multi-core Internet of things system, wherein an execution core of the multi-core Internet of things system comprises a priority core and a common core, and a first-in first-out cache queue in the multi-core Internet of things system is divided into a priority cache queue and a common cache queue;
when the input of the latest task is monitored, caching the latest task into the priority cache queue or the common cache queue according to the task priority;
and when the priority core has completed its current task, the priority core acquires a new task from the priority cache queue for processing, and when the common core has completed its current task, the common core acquires a new task from the common cache queue for processing.
Further, the method further comprises:
and when the priority core fails to acquire the new task from the priority cache queue, acquiring the new task from the common cache queue for processing.
Further, the method further comprises:
and when the common core fails to acquire the new task from the common cache queue, acquiring the new task from the priority cache queue for processing.
Further, the number of the priority cores is larger than the number of the common cores.
In a second aspect, an embodiment of the present invention further provides a task scheduling device of a multi-level core, which is used in a multi-core internet of things system, and includes:
the state monitoring unit is used for monitoring the task state in the multi-core Internet of things system, the execution core of the multi-core Internet of things system comprises a priority core and a common core, and a first-in first-out cache queue in the multi-core Internet of things system is divided into a priority cache queue and a common cache queue;
the hierarchical cache unit is used for caching the latest task to the priority cache queue or the common cache queue according to the task priority when the input of the latest task is monitored;
and the task obtaining unit is used for obtaining a new task from the priority cache queue for the priority core to process when the priority core has completed its current task, and obtaining a new task from the common cache queue for the common core to process when the common core has completed its current task.
Further, the apparatus further includes:
and the first obtaining unit is used for obtaining a new task from the common cache queue for processing when the priority core fails to obtain a new task from the priority cache queue.
Further, the apparatus further includes:
and the second obtaining unit is used for obtaining a new task from the priority cache queue for processing when the common core fails to obtain the new task from the common cache queue.
Further, the number of the priority cores is larger than the number of the common cores.
In a third aspect, an embodiment of the present invention further provides an internet of things system, including:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the internet of things system implements the task scheduling method of a multi-level core as set forth in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the task scheduling method for a multi-level core according to the first aspect.
The task scheduling method and device of the multi-level core, the Internet of things system and the storage medium monitor the task state in the multi-core Internet of things system, wherein the execution cores of the multi-core Internet of things system comprise a priority core and a common core, and the first-in first-out cache queue in the multi-core Internet of things system is divided into a priority cache queue and a common cache queue; when input of the latest task is monitored, the latest task is cached into the priority cache queue or the common cache queue according to the task priority; and when a priority core has completed its current task, it acquires a new task from the priority cache queue for processing, and when a common core has completed its current task, it acquires a new task from the common cache queue for processing. With this scheme, the first-in first-out cache queue in the multi-core Internet of things system is divided into cache queues by priority, each newly received task is placed into the corresponding cache queue according to its priority, and the execution cores obtain tasks of the corresponding category directly from the corresponding cache queue; the separate task allocation process is thus omitted, the data processing efficiency of multi-core task scheduling in the Internet of things system is improved, and targeted management of different types of tasks is achieved.
Drawings
Fig. 1 is a flowchart of a task scheduling method for a multi-level core according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a task scheduling apparatus of a multi-level core according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an internet of things system according to a third embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are for purposes of illustration and not limitation. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
It should be noted that, for the sake of brevity, this description does not exhaust all alternative embodiments, and it should be understood by those skilled in the art after reading this description that any combination of features may constitute an alternative embodiment as long as the features are not mutually inconsistent.
The following examples are described in detail.
Example one
Fig. 1 is a flowchart of a task scheduling method for a multi-level core according to an embodiment of the present invention. The task scheduling method of the multi-level core provided in the embodiment may be performed by various operating devices for task scheduling of the multi-level core, where the operating devices may be implemented in software and/or hardware, and the operating devices may be formed by two or more physical entities or may be formed by one physical entity.
Specifically, referring to fig. 1, the task scheduling method for a multi-level core specifically includes:
step S101: and monitoring the task state in the multi-core Internet of things system, wherein the execution core of the multi-core Internet of things system comprises a priority core and a common core, and a first-in first-out cache queue in the multi-core Internet of things system is divided into a priority cache queue and a common cache queue.
In the architecture of the internet of things system, the sink node is a key component. In a specific implementation, the multi-core internet of things system is designed around an embedded multi-core processor whose multiple execution cores can operate simultaneously, which brings higher processing efficiency to data collection in the multi-core internet of things system under limited resource configurations.
For an embedded multi-core processor, a processing core cannot process all tasks allocated to it at the same time; that is, tasks allocated to one internet of things node may need to queue, and queued tasks are temporarily cached in a first-in first-out cache queue. In the prior art, a queued task may be repeatedly rescheduled and switched to different execution cores to wait for execution as the actual processing progress of the execution cores changes, which amounts to invalid scheduling work within the task scheduling process.
In this scheme, to improve scheduling efficiency, the first-in first-out cache queue is divided into a priority cache queue and a common cache queue, and the different cache queues correspondingly cache tasks of different priorities. Correspondingly, a priority core first acquires its tasks from the priority cache queue, and a common core first acquires its tasks from the common cache queue. A task destined for a certain type of execution core is therefore first cached in the corresponding cache queue; through this correspondence between execution cores and cache queues, the association between a task and the execution core that will process it is fixed in a relatively static manner, so that invalid task allocation and scheduling is reduced as far as possible. When an execution core needs a task to process, it obtains one directly from the corresponding cache queue.
When the roles of the execution cores are assigned, the number of priority cores can be set larger than the number of common cores, so that priority tasks are guaranteed to be processed quickly.
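To make the structure described in step S101 concrete, the following C++ sketch shows one possible representation of the two cache queues and the two core roles. It is only an illustrative example: the names Task, FifoCacheQueue and CoreRole, the task fields, and the use of a mutex-protected deque are assumptions made here for illustration and are not prescribed by this embodiment.

```cpp
#include <cstdint>
#include <deque>
#include <mutex>
#include <optional>

// Illustrative task record; the embodiment does not prescribe its fields.
struct Task {
    uint32_t id;
    bool     isPriority;  // true: belongs in the priority cache queue
};

// A thread-safe first-in first-out cache queue shared by the execution cores.
class FifoCacheQueue {
public:
    void push(const Task& t) {
        std::lock_guard<std::mutex> lk(m_);
        q_.push_back(t);
    }
    // Non-blocking acquisition: an empty result means the acquisition failed,
    // which is the condition that later triggers the fallback of steps S104/S105.
    std::optional<Task> tryPop() {
        std::lock_guard<std::mutex> lk(m_);
        if (q_.empty()) return std::nullopt;
        Task t = q_.front();
        q_.pop_front();
        return t;
    }
private:
    std::mutex m_;
    std::deque<Task> q_;
};

// Role of an execution core: mainly handles priority tasks or common tasks.
enum class CoreRole { Priority, Common };
```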
Step S102: when input of the latest task is monitored, caching the latest task into the priority cache queue or the common cache queue according to the task priority.
For an internet of things node, when input of the latest task is received, the task needs to be allocated to one of the execution cores in the node's embedded multi-core processor. In the existing processing mode, while every execution core still has tasks waiting to be executed, the first-in first-out cache queue is managed as a single whole, and the allocation of a task to a specific execution core may be adjusted repeatedly as the task processing progress changes; the task processing states of all execution cores therefore have to be monitored continuously while tasks sit in the first-in first-out cache queue, and the allocation adapted again and again. In this scheme, by contrast, the latest task is simply cached into the priority cache queue or the common cache queue according to its priority, and no per-core allocation decision is needed at this point.
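Continuing the sketch above, the routing decision of step S102 could be written as follows; onNewTask is a hypothetical helper name introduced here, not a function defined by this embodiment.

```cpp
// Step S102 as a routing decision: the latest task is cached into the queue
// that matches its priority; no per-core allocation is decided at this point.
void onNewTask(FifoCacheQueue& priorityQueue,
               FifoCacheQueue& commonQueue,
               const Task& latest) {
    if (latest.isPriority) {
        priorityQueue.push(latest);  // priority task waits in the priority cache queue
    } else {
        commonQueue.push(latest);    // common task waits in the common cache queue
    }
}
```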
Step S103: when a priority core has completed its current task, it acquires a new task from the priority cache queue for processing, and when a common core has completed its current task, it acquires a new task from the common cache queue for processing.
During task execution, although all priority tasks are cached in the priority cache queue and all common tasks are cached in the common cache queue, each priority core draws new tasks from the priority cache queue and each common core draws new tasks from the common cache queue, so tasks flow through both queues quickly and the queue waiting time of each task is not too long.
In a specific execution process, it may happen that all tasks of one kind have been processed; to improve the overall data processing efficiency, step S104 and step S105 may further be executed:
step S104: and when the priority core fails to acquire the new task from the priority cache queue, acquiring the new task from the common cache queue for processing.
Step S105: and when the common core fails to acquire the new task from the common cache queue, acquiring the new task from the priority cache queue for processing.
That is, once the tasks in the priority cache queue have all been processed, a priority core can no longer acquire a new task from the priority cache queue, i.e. the acquisition of a new task fails; at this point, to improve overall task processing efficiency, reduce task waiting time and avoid leaving the execution core idle, the core acquires a task from the common cache queue for processing instead. Correspondingly, once the tasks in the common cache queue have all been processed and a common core can no longer acquire a new task from the common cache queue, it may acquire a new task from the priority cache queue.
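Steps S103 to S105 together amount to a per-core acquisition loop: a core tries its own cache queue first and falls back to the other queue only when acquisition from its own queue fails. The sketch below, continuing the example above, shows one way this could look; coreWorkerLoop and the process stub are illustrative assumptions rather than parts of the embodiment.

```cpp
#include <atomic>
#include <thread>

// Placeholder for the application-specific handling of one task.
void process(const Task& t) { (void)t; }

// Per-core acquisition loop covering steps S103 to S105.
void coreWorkerLoop(CoreRole role,
                    FifoCacheQueue& priorityQueue,
                    FifoCacheQueue& commonQueue,
                    const std::atomic<bool>& running) {
    FifoCacheQueue& own   = (role == CoreRole::Priority) ? priorityQueue : commonQueue;
    FifoCacheQueue& other = (role == CoreRole::Priority) ? commonQueue   : priorityQueue;
    while (running.load()) {
        std::optional<Task> next = own.tryPop();   // S103: own cache queue first
        if (!next) next = other.tryPop();          // S104/S105: fall back to the other queue
        if (next) {
            process(*next);                        // execute the acquired task
        } else {
            std::this_thread::yield();             // both queues empty: avoid busy-spinning
        }
    }
}
```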
It should be noted that the priority cores and common cores in this embodiment do not differ in data processing capability; the terms only indicate whether a given execution core mainly processes priority tasks or common tasks, are defined solely by this difference in main processing objects, and are labels used for convenience of description, with no difference in how tasks are processed. Likewise, the priority cache queue and the common cache queue only illustrate that the first-in first-out cache queue can be divided into multiple cache queues and do not limit the number of cache queues; if more priority levels are defined for tasks in the internet of things system, the first-in first-out cache queue can correspondingly be divided into more cache queues.
Meanwhile, it should be understood that steps S101 to S103 form a whole and are not executed strictly in the order described above. When the multi-core internet of things system processes tasks, the placement of the latest task and the handing out of tasks are driven by the actual monitoring results: when the latest task is monitored, it is cached; when an execution core is monitored to have completed its task, a task is acquired from the corresponding cache queue. If latest tasks keep being monitored, step S102 keeps being executed; if task completions of execution cores keep being monitored, step S103 keeps being executed.
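Putting the pieces of the example together, a hypothetical setup routine might start more priority-core workers than common-core workers and then feed monitored tasks to onNewTask as they arrive. The 3:1 split and the task generation below are arbitrary illustrations, not values given by this embodiment.

```cpp
#include <chrono>
#include <functional>
#include <vector>

int main() {
    FifoCacheQueue priorityQueue, commonQueue;
    std::atomic<bool> running{true};

    // More priority cores than common cores, as suggested above.
    std::vector<std::thread> cores;
    for (int i = 0; i < 3; ++i)
        cores.emplace_back(coreWorkerLoop, CoreRole::Priority,
                           std::ref(priorityQueue), std::ref(commonQueue),
                           std::cref(running));
    cores.emplace_back(coreWorkerLoop, CoreRole::Common,
                       std::ref(priorityQueue), std::ref(commonQueue),
                       std::cref(running));

    // Simulated monitoring of task input (step S102): every second task is
    // marked as a priority task here purely for demonstration.
    for (uint32_t id = 0; id < 8; ++id) {
        onNewTask(priorityQueue, commonQueue, Task{id, id % 2 == 0});
    }

    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    running = false;                  // stop the worker loops
    for (auto& c : cores) c.join();
    return 0;
}
```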
In summary, the task state in the multi-core internet of things system is monitored, the execution cores of the multi-core internet of things system comprise a priority core and a common core, and the first-in first-out cache queue in the multi-core internet of things system is divided into a priority cache queue and a common cache queue; when input of the latest task is monitored, the latest task is cached into the priority cache queue or the common cache queue according to the task priority; and when a priority core has completed its current task, it acquires a new task from the priority cache queue for processing, and when a common core has completed its current task, it acquires a new task from the common cache queue for processing. With this scheme, the first-in first-out cache queue in the multi-core internet of things system is divided into cache queues by priority, each newly received task is placed into the corresponding cache queue according to its priority, and the execution cores obtain tasks of the corresponding category directly from the corresponding cache queue; the separate task allocation process is thus omitted, the data processing efficiency of multi-core task scheduling in the internet of things system is improved, and targeted management of different types of tasks is achieved.
Example two
Fig. 2 is a schematic structural diagram of a task scheduling apparatus of a multi-level core according to a second embodiment of the present invention. Referring to fig. 2, the task scheduling apparatus of the multi-level core includes: a state monitoring unit 210, a hierarchical cache unit 220, and a task obtaining unit 230.
The state monitoring unit 210 is configured to monitor the task state in the multi-core internet of things system, where the execution cores of the multi-core internet of things system include a priority core and a common core, and a first-in first-out cache queue in the multi-core internet of things system is divided into a priority cache queue and a common cache queue; the hierarchical cache unit 220 is configured to cache the latest task into the priority cache queue or the common cache queue according to the task priority when input of the latest task is monitored; and the task obtaining unit 230 is configured to have a priority core obtain a new task from the priority cache queue for processing when its current task is completed, and have a common core obtain a new task from the common cache queue for processing when its current task is completed.
On the basis of the above embodiment, the apparatus further includes:
and the first obtaining unit is used for obtaining a new task from the common cache queue for processing when the priority core fails to obtain a new task from the priority cache queue.
On the basis of the above embodiment, the apparatus further includes:
and the second obtaining unit is used for obtaining a new task from the priority cache queue for processing when the common core fails to obtain the new task from the common cache queue.
On the basis of the above embodiment, the number of the priority cores is greater than the number of the common cores.
The task scheduling device of the multi-level core provided by the embodiment of the present invention can be used to execute the task scheduling method of a multi-level core provided by any embodiment of the present invention, and has the corresponding functions and beneficial effects.
EXAMPLE III
Fig. 3 is a schematic structural diagram of node devices of the internet of things according to a third embodiment of the present invention, where the node devices of the internet of things are used to form a system of the internet of things, so as to comprehensively implement task scheduling in this scheme. As shown in fig. 3, the node apparatus of the internet of things includes a processor 310, a memory 320, an input device 330, an output device 340, and a communication device 350; the number of the processors 310 in the node device of the internet of things may be one or more, and one processor 310 is taken as an example in fig. 3; the processor 310, the memory 320, the input device 330, the output device 340 and the communication device 350 in the node device of the internet of things may be connected through a bus or other manners, and fig. 3 illustrates the connection through the bus as an example.
The memory 320 may be used as a computer-readable storage medium for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the task scheduling method of the multi-level core in the embodiment of the present invention (for example, the state monitoring unit 210, the hierarchical cache unit 220, and the task obtaining unit 230 in the task scheduling device of the multi-level core). The processor 310 executes various functional applications and data processing of the node device of the internet of things by running the software programs, instructions and modules stored in the memory 320, that is, implements the task scheduling method of the multi-level core.
The memory 320 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created through use of the node device of the internet of things. Further, the memory 320 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 320 may further include memory located remotely from the processor 310, which may be connected to the internet of things node device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 330 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the internet of things node device. The output device 340 may include a display device such as a display screen.
The node device of the internet of things includes the task scheduling device of the multi-level core, can be used to execute the task scheduling method of a multi-level core provided by any embodiment, and has the corresponding functions and beneficial effects.
Example four
Embodiments of the present invention further provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform relevant operations in a task scheduling method for a multi-level core provided in any embodiment of the present application, and have corresponding functions and advantages.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product.
Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions.
These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A task scheduling method of a multi-level core, used in a multi-core Internet of things system, characterized by comprising the following steps:
monitoring the task state in the multi-core Internet of things system, wherein an execution core of the multi-core Internet of things system comprises a priority core and a common core, a first-in first-out cache queue in the multi-core Internet of things system is divided into a priority cache queue and a common cache queue, and the priority cache queue and the common cache queue are used for caching tasks with different priorities correspondingly;
when the input of the latest task is monitored, caching the latest task into the priority cache queue or the common cache queue according to the task priority;
and when the priority core has completed its current task, the priority core acquires a new task from the priority cache queue for processing, and when the common core has completed its current task, the common core acquires a new task from the common cache queue for processing.
2. The method of claim 1, further comprising:
and when the priority core fails to acquire the new task from the priority cache queue, acquiring the new task from the common cache queue for processing.
3. The method of claim 1, further comprising:
and when the common core fails to acquire the new task from the common cache queue, acquiring the new task from the priority cache queue for processing.
4. The method of claim 1, wherein the number of priority cores is greater than the number of common cores.
5. A task scheduling device of a multi-level core, comprising:
the system comprises a state monitoring unit, a state monitoring unit and a state monitoring unit, wherein the state monitoring unit is used for monitoring the task state in the multi-core Internet of things system, an execution core of the multi-core Internet of things system comprises a priority core and a common core, a first-in first-out cache queue in the multi-core Internet of things system is divided into a priority cache queue and a common cache queue, and the priority cache queue and the common cache queue are used for correspondingly caching tasks with different priorities;
the hierarchical cache unit is used for caching the latest task to the priority cache queue or the common cache queue according to the task priority when the input of the latest task is monitored;
and the task obtaining unit is used for obtaining a new task from the priority cache queue for the priority core to process when the priority core has completed its current task, and obtaining a new task from the common cache queue for the common core to process when the common core has completed its current task.
6. The apparatus of claim 5, further comprising:
and the first obtaining unit is used for obtaining a new task from the common cache queue for processing when the priority core fails to obtain a new task from the priority cache queue.
7. The apparatus of claim 5, further comprising:
and the second obtaining unit is used for obtaining a new task from the priority cache queue for processing when the common core fails to obtain the new task from the common cache queue.
8. The apparatus of claim 5, wherein the number of priority cores is greater than the number of common cores.
9. An internet of things system, comprising:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the internet of things system implements the task scheduling method of a multi-level core as recited in any one of claims 1-4.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a method for task scheduling for a multi-level core according to any one of claims 1 to 4.
CN202111403628.2A 2020-12-31 2021-11-24 Task scheduling method, device and system of multi-level core and storage medium Pending CN113934529A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011641861X 2020-12-31
CN202011641861 2020-12-31

Publications (1)

Publication Number Publication Date
CN113934529A true CN113934529A (en) 2022-01-14

Family

ID=79288190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111403628.2A Pending CN113934529A (en) 2020-12-31 2021-11-24 Task scheduling method, device and system of multi-level core and storage medium

Country Status (1)

Country Link
CN (1) CN113934529A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115016919A (en) * 2022-08-05 2022-09-06 阿里云计算有限公司 Task scheduling method, electronic device and storage medium
CN115016919B (en) * 2022-08-05 2022-11-04 阿里云计算有限公司 Task scheduling method, electronic device and storage medium
CN117389486A (en) * 2023-12-13 2024-01-12 浙江国利信安科技有限公司 Method, computing device and storage medium for real-time processing EPA network data
CN117389486B (en) * 2023-12-13 2024-04-19 浙江国利信安科技有限公司 Method, computing device and storage medium for real-time processing EPA network data

Similar Documents

Publication Publication Date Title
US8424007B1 (en) Prioritizing tasks from virtual machines
US20130212594A1 (en) Method of optimizing performance of hierarchical multi-core processor and multi-core processor system for performing the method
CN109729024B (en) Data packet processing system and method
CN113934530A (en) Multi-core multi-queue task cross processing method, device, system and storage medium
CN113934529A (en) Task scheduling method, device and system of multi-level core and storage medium
US20130152103A1 (en) Preparing parallel tasks to use a synchronization register
CN107818012B (en) Data processing method and device and electronic equipment
CN106569892B (en) Resource scheduling method and equipment
CN110011936B (en) Thread scheduling method and device based on multi-core processor
CN107430526B (en) Method and node for scheduling data processing
JP2014235746A (en) Multi-core device and job scheduling method for multi-core device
US9417924B2 (en) Scheduling in job execution
EP4361808A1 (en) Resource scheduling method and device and computing node
EP2482189A1 (en) Utilization-based threshold for choosing dynamically between eager and lazy scheduling strategies in RF resource allocation
CN113971085A (en) Method, device, system and storage medium for distinguishing processing tasks by master core and slave core
US20170344266A1 (en) Methods for dynamic resource reservation based on classified i/o requests and devices thereof
CN114020440A (en) Multi-stage task classification processing method, device and system and storage medium
Gracioli et al. Two‐phase colour‐aware multicore real‐time scheduler
CN112395056A (en) Embedded asymmetric real-time system and electric power secondary equipment
CN116737370A (en) Multi-resource scheduling method, system, storage medium and terminal
KR101771183B1 (en) Method for managing in-memory cache
CN112486638A (en) Method, apparatus, device and storage medium for executing processing task
CN110928649A (en) Resource scheduling method and device
KR101771178B1 (en) Method for managing in-memory cache
CN113296957B (en) Method and device for dynamically distributing network bandwidth on chip

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination