CN112395063A - Dynamic multithreading scheduling method and system - Google Patents

Dynamic multithreading scheduling method and system

Info

Publication number
CN112395063A
CN112395063A (application CN202011290157.4A)
Authority
CN
China
Prior art keywords
processed
tasks
thread
target
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011290157.4A
Other languages
Chinese (zh)
Other versions
CN112395063B (en)
Inventor
魏龄
罗鸿轩
韩彤
金鑫
李毅
黄博阳
Current Assignee
Electric Power Research Institute of Yunnan Power Grid Co Ltd
Research Institute of Southern Power Grid Co Ltd
Original Assignee
Electric Power Research Institute of Yunnan Power Grid Co Ltd
Research Institute of Southern Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Electric Power Research Institute of Yunnan Power Grid Co Ltd and Research Institute of Southern Power Grid Co Ltd
Priority to CN202011290157.4A
Publication of CN112395063A
Application granted
Publication of CN112395063B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources, the resource being a machine, e.g. CPUs, servers, terminals

Abstract

The application provides a dynamic multithreading scheduling method and system. In the method, target to-be-processed tasks are first allocated to each thread channel; after all thread channels have processed for a preset duration, the target tasks are reallocated to each channel according to its current state and all remaining to-be-processed tasks, and each channel continues with its new target to-be-processed tasks. Reallocation repeats at every preset interval until all to-be-processed tasks are processed. The whole process therefore does not keep the initial allocation throughout but continuously optimizes and reallocates each thread channel's target tasks at the preset interval, which saves thread resources, spares individual to-be-processed tasks from long waiting delays, and yields higher overall processing efficiency.

Description

Dynamic multithreading scheduling method and system
Technical Field
The present application relates to the field of multithread scheduling technologies, and in particular, to a dynamic multithread scheduling method and system.
Background
Multithreading refers to the technique of executing multiple threads concurrently, whether in software or hardware. Many computers in use today can execute more than one thread at a time thanks to hardware support; that is, they can execute multithreaded tasks, which greatly improves overall processing performance.
During execution of a multithreaded workload, the threads must be scheduled. Traditional multithread scheduling is generally static: after tasks are assigned to the threads by an initial allocation method, each thread processes its assigned tasks until all tasks are finished. Because tasks differ in size and threads receive different numbers of tasks, this approach leaves some threads running at high load while others are lightly loaded or idle. It not only wastes thread resources but also forces tasks assigned to high-load threads to endure long waiting delays, so overall processing efficiency is low.
A dynamic multithread scheduling method is therefore needed to solve the prior-art problem that tasks assigned to high-load threads experience long waiting delays, resulting in low overall processing efficiency.
Disclosure of Invention
The application provides a dynamic multithreading scheduling method and system, which can solve the prior-art technical problem that tasks allocated to high-load threads must endure long waiting delays, resulting in low overall processing efficiency.
In a first aspect, an embodiment of the present application provides a dynamic multithreading scheduling method, where the dynamic multithreading scheduling method includes:
acquiring a plurality of tasks to be processed;
acquiring a preset initial state of each thread channel; the initial state comprises a working rate and the number and the size of initial uncompleted tasks;
determining the task to be processed as a target task;
determining a target task to be processed distributed by each thread channel according to the initial states of all thread channels and all target tasks;
after all the thread channels are processed for a preset duration according to the corresponding working rates, acquiring the current state of each thread channel; the current state comprises the number and the size of the current uncompleted tasks and a target processed task; the target processed task is a task which is already processed in the target to-be-processed task;
if the sum of the number of the target processed tasks of all the thread channels is less than the number of the target tasks, determining the remaining tasks to be processed according to the target processed tasks and all the target tasks of all the thread channels;
determining an updated to-be-processed task allocated to each thread channel according to the current states of all thread channels and all remaining to-be-processed tasks, and setting the updated to-be-processed task as the target to-be-processed task;
and setting the remaining to-be-processed tasks as target tasks, and returning to the step of acquiring the current state of each thread channel after all thread channels have processed for the preset duration at their corresponding working rates, until all to-be-processed tasks are processed.
In an implementation manner of the first aspect, the dynamic multithreading scheduling method further includes:
if the sum of the number of target processed tasks of all thread channels is equal to the number of target tasks, the thread scheduling process is ended.
In an implementation manner of the first aspect, the determining, according to the initial states of all the thread channels and all the target tasks, a target to-be-processed task allocated to each thread channel includes:
and determining the target to-be-processed task allocated to each thread channel by adopting a preset initial allocation method according to the initial states of all thread channels and all target tasks.
In an implementation manner of the first aspect, the determining the remaining tasks to be processed according to the target processed tasks and all target tasks of all thread channels includes:
and removing the target processed tasks of all the thread channels from all the target tasks to obtain the remaining tasks to be processed.
In an implementation manner of the first aspect, the determining, according to the current states of all the thread channels and all the remaining tasks to be processed, the updated task to be processed to which each thread channel is allocated includes:
and processing the current states of all the thread channels and all the remaining tasks to be processed by adopting a preset thread scheduling algorithm, and determining the updated tasks to be processed distributed by each thread channel.
In a second aspect, an embodiment of the present application provides a dynamic multithreading scheduling system, which includes a thread processing subsystem and an edge server, where the thread processing subsystem includes a thread processor and a timer, and the thread processor is electrically connected to the timer and communicatively connected to the edge server; wherein:
the timer is used for timing the processing time of all the thread channels;
the thread processor is used for acquiring a plurality of tasks to be processed; acquiring the initial state of each preset thread channel; the initial state comprises a working rate and the number and the size of initial uncompleted tasks; determining the task to be processed as a target task; determining a target task to be processed distributed by each thread channel according to the initial states of all thread channels and all target tasks; after all the thread channels are processed for a preset time length according to the corresponding working rates, the current state of each thread channel is obtained; the current state comprises the number and the size of the current uncompleted tasks and a target processed task; the target processed task is a task which is already processed in the target to-be-processed task; if the sum of the number of the target processed tasks of all the thread channels is less than the number of the target tasks, determining the remaining tasks to be processed according to the target processed tasks and all the target tasks of all the thread channels; sending the current states of all the thread channels and all the remaining tasks to be processed to the edge server;
the edge server is used for determining the updated task to be processed distributed by each thread channel according to the current states of all the thread channels and all the remaining tasks to be processed, and sending the updated task to be processed to the thread processor;
the thread processor is further configured to set the updated to-be-processed task as the target to-be-processed task, set the remaining to-be-processed tasks as target tasks, and return to the step of acquiring the current state of each thread channel after all thread channels have processed for the preset duration at their corresponding working rates, until all to-be-processed tasks are processed.
In one implementation form of the second aspect, the thread processor is further configured to:
if the sum of the number of target processed tasks of all thread channels is equal to the number of target tasks, the thread scheduling process is ended.
In an implementation manner of the second aspect, the thread processor is specifically configured to:
and determining the target to-be-processed task allocated to each thread channel by adopting a preset initial allocation method according to the initial states of all thread channels and all target tasks.
In an implementation manner of the second aspect, the thread processor is specifically configured to:
and removing the target processed tasks of all the thread channels from all the target tasks to obtain the remaining tasks to be processed.
In an implementation manner of the second aspect, the edge server is specifically configured to:
and processing the current states of all the thread channels and all the remaining tasks to be processed by adopting a preset thread scheduling algorithm, and determining the updated tasks to be processed distributed by each thread channel.
In this way, a target to-be-processed task is first allocated to each thread channel according to the channel's initial state and all to-be-processed tasks. After all thread channels have processed for the preset duration, the target tasks are reallocated to each channel according to its current state and all remaining to-be-processed tasks, and each channel continues with its new target tasks; reallocation then repeats at every preset interval until all to-be-processed tasks are processed. The whole process does not keep the initial allocation throughout but continuously optimizes and reallocates each thread channel's target tasks at the preset interval. This avoids the situation in which some threads run at excessive load while others are lightly loaded or idle, saves thread resources, spares the tasks initially allocated to high-load threads from long waiting delays, and yields higher overall processing efficiency.
Drawings
Fig. 1 is a schematic flowchart corresponding to a dynamic multithread scheduling method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a dynamic multithreading scheduling system according to an embodiment of the present application;
fig. 3 is a schematic hardware structure diagram of a dynamic multithreading scheduling system according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
In order to solve the problems of the prior art, an embodiment of the present application provides a dynamic multithreading scheduling method, specifically intended to solve the problem that tasks assigned to high-load threads experience long waiting delays, resulting in low overall processing efficiency. Fig. 1 is a schematic flowchart of the dynamic multithread scheduling method provided by an embodiment of the present application. The method specifically includes the following steps:
step 101, acquiring a plurality of tasks to be processed.
Step 102, obtaining the initial state of each preset thread channel.
Step 103, determining the to-be-processed tasks as target tasks.
Step 104, determining the target to-be-processed task allocated to each thread channel according to the initial states of all thread channels and all target tasks.
Step 105, after all thread channels have processed for a preset duration at their corresponding working rates, acquiring the current state of each thread channel.
Step 106, judging whether the sum of the numbers of target processed tasks of all thread channels is less than the number of target tasks; if so, executing step 107; otherwise, executing step 108.
Step 107, determining the remaining to-be-processed tasks according to the target processed tasks of all thread channels and all target tasks.
Step 108, ending the thread scheduling process.
Step 109, determining the updated to-be-processed task allocated to each thread channel according to the current states of all thread channels and all remaining to-be-processed tasks, and setting the updated task as the target to-be-processed task.
Step 110, setting the remaining to-be-processed tasks as target tasks, and returning to step 105 until all to-be-processed tasks are processed.
Specifically, in step 101, the acquired multiple to-be-processed tasks may be sent by the client.
In step 102, the initial state may include a work rate and the number and size of initial uncompleted tasks.
In steps 103 and 104, a preset initial allocation method may be used to determine the target to-be-processed task allocated to each thread channel according to the initial states of all thread channels and all target tasks. The initial allocation method may be even allocation, or allocation according to the number and size of each thread channel's initial uncompleted tasks; no specific limitation is imposed. Under the latter method, a thread channel with many initial uncompleted tasks is assigned fewer target to-be-processed tasks, and a channel with few initial uncompleted tasks is assigned more.
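As a minimal sketch of the backlog-aware variant of the initial allocation method described above (the Channel structure and the backlog/rate finish-time estimate are illustrative assumptions, not part of the original disclosure):

```python
from dataclasses import dataclass, field

@dataclass
class Channel:
    # Hypothetical per-channel initial state: working rate plus the total
    # size of the channel's initial uncompleted tasks (its backlog).
    rate: float
    backlog: float
    assigned: list = field(default_factory=list)

def allocate(task_sizes, channels):
    """Backlog-aware initial allocation: hand each task (largest first) to
    the channel whose estimated finish time (backlog / rate) is currently
    lowest, so channels with many initial uncompleted tasks receive fewer
    target to-be-processed tasks."""
    for size in sorted(task_sizes, reverse=True):
        ch = min(channels, key=lambda c: c.backlog / c.rate)
        ch.assigned.append(size)
        ch.backlog += size  # the new task adds to that channel's load
    return channels
```

Under this sketch, a channel that starts with a heavy backlog is skipped until the lighter channels have caught up, matching the "fewer target tasks for busier channels" behavior described in the text.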
In step 105, the current state includes the number and size of the current uncompleted tasks and the target processed tasks. A target processed task is a task among the target to-be-processed tasks that has already been processed.
In step 104, a target to-be-processed task has already been allocated to each thread channel, and each channel begins processing its target tasks once its original initial uncompleted tasks are finished. The preset duration is not specifically limited and may be any duration shorter than the time needed to process all tasks. In general, the shorter the preset duration, the more frequent the thread scheduling, the better the allocation of to-be-processed tasks over the whole process, and the shorter the processing time; an excessively short duration, however, increases the scheduling workload, so the preset duration should be set within a suitable range.
In steps 106 to 108, if the sum of the numbers of target processed tasks of all thread channels is less than the number of target tasks, the to-be-processed tasks have not all been processed, and the remaining to-be-processed tasks must be determined according to the target processed tasks of all thread channels and all target tasks. Specifically, the target processed tasks of all thread channels may be removed from all target tasks to obtain the remaining to-be-processed tasks.
If the sum of the numbers of target processed tasks of all thread channels equals the number of target tasks, the thread scheduling process ends.
In general, as long as no new task is allocated to a thread channel, the sum of the numbers of target processed tasks of all thread channels cannot exceed the number of target tasks.
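The removal performed in steps 106 to 108 is essentially a set difference; assuming tasks can be identified by unique IDs (a hypothetical convention for illustration), it might be sketched as:

```python
def remaining_tasks(target_tasks, processed_per_channel):
    """Drop every task already processed by any thread channel from the
    full target-task list; what survives is reallocated next interval."""
    processed = set()
    for channel_done in processed_per_channel:
        processed.update(channel_done)
    # Preserve the original ordering of the surviving tasks.
    return [t for t in target_tasks if t not in processed]
```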
In step 109, the current states of all thread channels and all remaining to-be-processed tasks may be processed with a preset thread scheduling algorithm to determine the updated to-be-processed task allocated to each thread channel.
Specifically, the thread scheduling algorithm may be chosen in various ways, for example first-come-first-served, shortest-job-first, or highest-priority-first scheduling; those skilled in the art may choose according to experience and actual conditions, and no specific limitation is imposed.
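As one concrete option among the algorithms named above, a shortest-job-first reallocation over the remaining tasks might look as follows (the min-heap of channel loads and the numeric task sizes are illustrative assumptions):

```python
import heapq

def reallocate_sjf(remaining_sizes, channel_loads):
    """Shortest-job-first reallocation: repeatedly hand the smallest
    remaining task to the currently least-loaded channel, with channel
    loads tracked in a min-heap."""
    heap = [(load, i) for i, load in enumerate(channel_loads)]
    heapq.heapify(heap)
    plan = {i: [] for i in range(len(channel_loads))}
    for size in sorted(remaining_sizes):     # smallest tasks first
        load, i = heapq.heappop(heap)
        plan[i].append(size)
        heapq.heappush(heap, (load + size, i))
    return plan
```

Because channel loads carry over between intervals, a channel that is already heavily loaded naturally receives fewer (or no) new tasks in the next plan.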
In step 110, the remaining to-be-processed tasks are set as target tasks, and the process returns to step 105 until all to-be-processed tasks are processed, that is, until the sum of the numbers of target processed tasks of all thread channels equals the number of target tasks, at which point the thread scheduling process for all to-be-processed tasks ends.
In this way, a target to-be-processed task is first allocated to each thread channel according to the channel's initial state and all to-be-processed tasks. After all thread channels have processed for the preset duration, the target tasks are reallocated to each channel according to its current state and all remaining to-be-processed tasks, and each channel continues with its new target tasks; reallocation then repeats at every preset interval until all to-be-processed tasks are processed. The whole process does not keep the initial allocation throughout but continuously optimizes and reallocates each thread channel's target tasks at the preset interval. This avoids the situation in which some threads run at excessive load while others are lightly loaded or idle, saves thread resources, spares the tasks initially allocated to high-load threads from long waiting delays, and yields higher overall processing efficiency.
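The periodic reallocation loop of steps 101 through 110 can be sketched end to end; the `allocate` and `run_for` callables are hypothetical stand-ins for a real allocation method and for letting real thread channels run for one preset interval:

```python
def schedule(tasks, channels, interval, allocate, run_for):
    """Dynamic scheduling loop: allocate, let every channel run for one
    preset interval, collect what finished, then reallocate the rest."""
    target = list(tasks)
    allocate(target, channels)
    while True:
        done = run_for(channels, interval)   # tasks finished this interval
        if len(done) == len(target):
            break                            # everything processed: stop
        target = [t for t in target if t not in set(done)]
        allocate(target, channels)           # re-optimize every interval
    return channels
```

Note that the loop only compares counts of finished versus target tasks, exactly as steps 106 through 108 do; the quality of the schedule comes entirely from the `allocate` policy invoked each interval.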
The following are embodiments of the system of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the system of the present application, reference is made to the embodiments of the method of the present application.
Fig. 2 schematically illustrates the structure of a dynamic multithreading scheduling system provided by an embodiment of the present application. As shown in fig. 2, the system implements the dynamic multithread scheduling method; this function may be implemented by hardware, or by hardware executing corresponding software. The dynamic multithreading scheduling system may include a thread processing subsystem 201 and an edge server 202. The thread processing subsystem 201 includes a thread processor 2011 and a timer 2012, and the thread processor 2011 is electrically connected to the timer 2012 and communicatively connected to the edge server 202; wherein:
the timer 2012 counts the processing time of all the thread channels.
A thread processor 2011 configured to obtain a plurality of tasks to be processed; acquiring the initial state of each preset thread channel; the initial state comprises the working rate and the number and the size of initial unfinished tasks; determining the task to be processed as a target task; determining a target task to be processed distributed by each thread channel according to the initial states of all thread channels and all target tasks; after all the thread channels are processed for a preset time length according to the corresponding working rates, the current state of each thread channel is obtained; the current state comprises the number and the size of the current uncompleted tasks and the target processed tasks; the target processed task is a task which is already processed in the target to-be-processed task; if the sum of the number of the target processed tasks of all the thread channels is less than the number of the target tasks, determining the remaining tasks to be processed according to the target processed tasks and all the target tasks of all the thread channels; and sending the current states of all the thread channels and all the remaining tasks to be processed to the edge server.
The edge server 202 is configured to determine, according to the current states of all the thread channels and all the remaining tasks to be processed, an updated task to be processed allocated to each thread channel, and send the updated task to be processed to the thread processor 2011.
The thread processor 2011 is further configured to set the updated to-be-processed task as the target to-be-processed task, set the remaining to-be-processed tasks as target tasks, and return to the step of acquiring the current state of each thread channel after all thread channels have processed for the preset duration at their corresponding working rates, until all to-be-processed tasks are processed.
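The division of labor between the thread processor and the edge server described above might be sketched as a pair of functions; the direct function call stands in for the real network hop, and the state layout and backlog-ordered round-robin policy are illustrative assumptions rather than the disclosed algorithm:

```python
def thread_processor_step(state, remaining, edge_reallocate):
    """Thread-processor side: after each timed interval it reports the
    channel states and leftover tasks; the edge server returns a new
    plan, which the processor installs as each channel's target tasks."""
    plan = edge_reallocate(state, remaining)   # network hop in practice
    for channel_id, tasks in plan.items():
        state[channel_id]["target"] = tasks
    return state

def edge_reallocate(state, remaining):
    """Edge-server side: round-robin over channels ordered by current
    backlog (a deliberately simple stand-in for the preset thread
    scheduling algorithm)."""
    order = sorted(state, key=lambda c: state[c]["backlog"])
    plan = {c: [] for c in state}
    for i, task in enumerate(remaining):
        plan[order[i % len(order)]].append(task)
    return plan
```

Keeping the reallocation decision on the edge server, as the system embodiment does, offloads the optimization work from the thread processor, which only times intervals and reports state.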
In one implementation, the thread processor 2011 is further configured to:
if the sum of the target processed tasks of all thread channels equals the target task number, the thread scheduling process ends.
In one implementation, the thread processor 2011 is specifically configured to:
and determining the target to-be-processed task allocated to each thread channel by adopting a preset initial allocation method according to the initial states of all thread channels and all target tasks.
In one implementation, the thread processor 2011 is specifically configured to:
and removing the target processed tasks of all the thread channels from all the target tasks to obtain the remaining tasks to be processed.
In one implementation, the edge server 202 is specifically configured to:
and processing the current states of all the thread channels and all the remaining tasks to be processed by adopting a preset thread scheduling algorithm, and determining the updated tasks to be processed distributed by each thread channel.
In this way, a target to-be-processed task is first allocated to each thread channel according to the channel's initial state and all to-be-processed tasks. After all thread channels have processed for the preset duration, the target tasks are reallocated to each channel according to its current state and all remaining to-be-processed tasks, and each channel continues with its new target tasks; reallocation then repeats at every preset interval until all to-be-processed tasks are processed. The whole process does not keep the initial allocation throughout but continuously optimizes and reallocates each thread channel's target tasks at the preset interval. This avoids the situation in which some threads run at excessive load while others are lightly loaded or idle, saves thread resources, spares the tasks initially allocated to high-load threads from long waiting delays, and yields higher overall processing efficiency.
Fig. 3 is a schematic hardware structure diagram of a dynamic multithreading scheduling system according to an embodiment of the present application. The thread processing subsystem 201 may specifically include, but is not limited to, a high-performance computer or a cluster of high-performance computers. As shown in fig. 3, the hardware structure provided in the embodiment of the present application includes: a thread processing subsystem 201, an edge server 202, a client 301, and a thread channel 302. The client 301 is used for sending tasks to be processed; the thread channel 302 is used for processing the tasks to be processed. The thread processing subsystem 201 includes a thread processor 2011, a timer 2012, a register 2013, an interface 2014, and a shared memory 2015; the edge server 202 includes a plurality of memories 2021. The thread processor 2011 is configured to obtain the requirement sent by the client 301 and execute preset program instructions so as to implement the dynamic multithreading scheduling method of the foregoing embodiments; the timer 2012 is used for timing the processing of the thread channel 302; the register 2013, the shared memory 2015, and the memory 2021 are used for storing data; the interface 2014 is used for connecting with the memory 2021 and transmitting data. In this embodiment, the thread processor 2011, the timer 2012, the register 2013, the interface 2014, the shared memory 2015, the memory 2021, and the client 301 may be connected through a system bus or in other manners. Those skilled in the art will appreciate that the architecture shown in fig. 3 is a block diagram of only a portion of the architecture associated with the present application and does not limit the hardware architectures to which the present application may be applied; a particular hardware architecture may include more or fewer components than shown, combine certain components, or arrange the components differently.
The embodiment of the present application further provides a storage medium storing a computer program. When at least one processor of the dynamic multithreading scheduling system executes the computer program, the system performs the dynamic multithreading scheduling method described in the foregoing embodiments.
The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM) or a Random Access Memory (RAM).
The present application has been described in detail with reference to specific embodiments and illustrative examples, but the description is not intended to limit the application. Those skilled in the art will appreciate that various equivalent substitutions, modifications or improvements may be made to the presently disclosed embodiments and implementations thereof without departing from the spirit and scope of the present disclosure, and these fall within the scope of the present disclosure. The protection scope of this application is subject to the appended claims.

Claims (10)

1. A dynamic multithreading scheduling method, comprising:
acquiring a plurality of tasks to be processed;
acquiring a preset initial state of each thread channel, the initial state comprising a working rate and the number and size of initially uncompleted tasks;
setting the tasks to be processed as target tasks;
determining, according to the initial states of all thread channels and all target tasks, the target tasks to be processed that are allocated to each thread channel;
after all thread channels have processed for a preset duration at their corresponding working rates, acquiring a current state of each thread channel, the current state comprising the number and size of currently uncompleted tasks and the target processed tasks, wherein the target processed tasks are those among the target tasks to be processed that have already been processed;
if the sum of the numbers of target processed tasks across all thread channels is less than the number of target tasks, determining the remaining tasks to be processed according to the target processed tasks of all thread channels and all target tasks;
determining, according to the current states of all thread channels and all remaining tasks to be processed, the updated tasks to be processed allocated to each thread channel, and setting the updated tasks to be processed as the target tasks to be processed; and
setting the remaining tasks to be processed as the target tasks, and returning to the step of acquiring the current state of each thread channel after all thread channels have processed for the preset duration at their corresponding working rates, until all tasks to be processed have been processed.
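The allocate/process/re-allocate loop of claim 1 can be illustrated with a small simulation. The sketch below is hypothetical: the class names, the greedy largest-task-first allocation rule, and the numeric tolerance are assumptions standing in for the patent's unspecified "preset" methods.

```python
from dataclasses import dataclass, field

EPS = 1e-9  # tolerance below which a task counts as finished

@dataclass
class Task:
    task_id: int
    size: float            # remaining work units

@dataclass
class Channel:
    rate: float            # working rate: work units per unit time
    backlog: list = field(default_factory=list)   # initially uncompleted tasks
    assigned: list = field(default_factory=list)  # target tasks to be processed

def allocate(channels, tasks):
    """Stand-in for the 'preset allocation method': largest task first,
    to the channel with the smallest estimated finish time."""
    for t in sorted(tasks, key=lambda t: t.size, reverse=True):
        ch = min(channels, key=lambda c: (sum(x.size for x in c.backlog) +
                                          sum(x.size for x in c.assigned)) / c.rate)
        ch.assigned.append(t)

def schedule(channels, tasks, dt=1.0):
    """Claim 1's loop: allocate, process for a preset duration dt,
    inspect the current state, then re-allocate whatever is unfinished."""
    targets = list(tasks)
    allocate(channels, targets)
    while True:
        for ch in channels:
            budget = ch.rate * dt                 # work this channel can do in dt
            for queue in (ch.backlog, ch.assigned):
                while queue and budget > EPS:
                    head = queue[0]
                    used = min(budget, head.size)
                    head.size -= used
                    budget -= used
                    if head.size <= EPS:          # a target processed task
                        queue.pop(0)
        remaining = [t for t in targets if t.size > EPS]
        if not remaining:                         # claim 2: all targets processed
            return
        for ch in channels:                       # re-allocate the remainder
            ch.assigned.clear()
        targets = remaining
        allocate(channels, targets)
```

Because every remaining task is re-assigned to some channel each round and each channel spends its full per-round budget while its queues are non-empty, the total outstanding work strictly decreases and the loop terminates.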
2. The dynamic multithreading scheduling method of claim 1, further comprising:
ending the thread scheduling process if the sum of the numbers of target processed tasks across all thread channels is equal to the number of target tasks.
3. The dynamic multithreading scheduling method of claim 2, wherein determining the target tasks to be processed allocated to each thread channel according to the initial states of all thread channels and all target tasks comprises:
determining, by a preset initial allocation method and according to the initial states of all thread channels and all target tasks, the target tasks to be processed allocated to each thread channel.
4. The dynamic multithreading scheduling method of claim 2, wherein determining the remaining tasks to be processed according to the target processed tasks of all thread channels and all target tasks comprises:
removing the target processed tasks of all thread channels from all target tasks to obtain the remaining tasks to be processed.
5. The dynamic multithreading scheduling method of claim 2, wherein determining the updated tasks to be processed allocated to each thread channel according to the current states of all thread channels and all remaining tasks to be processed comprises:
processing the current states of all thread channels and all remaining tasks to be processed with a preset thread scheduling algorithm to determine the updated tasks to be processed allocated to each thread channel.
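The removal step of claim 4 amounts to a set difference keyed on task identity. A minimal sketch, assuming tasks are `(task_id, size)` tuples and each channel reports a list of processed task ids (both representations are assumptions, not the patent's data model):

```python
def remaining_tasks(target_tasks, processed_by_channel):
    """Remove every channel's processed task ids from the target task list,
    yielding the remaining tasks to be processed (claim 4)."""
    done = {tid for channel in processed_by_channel for tid in channel}
    return [(tid, size) for (tid, size) in target_tasks if tid not in done]
```

For example, `remaining_tasks([(1, 2.0), (2, 3.0), (3, 1.0)], [[1], [3]])` returns `[(2, 3.0)]`.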
6. A dynamic multithreading scheduling system, comprising a thread processing subsystem and an edge server, wherein the thread processing subsystem comprises a thread processor and a timer, the thread processor being electrically connected to the timer and communicatively connected to the edge server; wherein:
the timer is configured to time the processing duration of all thread channels;
the thread processor is configured to: acquire a plurality of tasks to be processed; acquire a preset initial state of each thread channel, the initial state comprising a working rate and the number and size of initially uncompleted tasks; set the tasks to be processed as target tasks; determine, according to the initial states of all thread channels and all target tasks, the target tasks to be processed allocated to each thread channel; after all thread channels have processed for a preset duration at their corresponding working rates, acquire a current state of each thread channel, the current state comprising the number and size of currently uncompleted tasks and the target processed tasks, wherein the target processed tasks are those among the target tasks to be processed that have already been processed; if the sum of the numbers of target processed tasks across all thread channels is less than the number of target tasks, determine the remaining tasks to be processed according to the target processed tasks of all thread channels and all target tasks; and send the current states of all thread channels and all remaining tasks to be processed to the edge server;
the edge server is configured to determine, according to the current states of all thread channels and all remaining tasks to be processed, the updated tasks to be processed allocated to each thread channel, and to send the updated tasks to be processed to the thread processor; and
the thread processor is further configured to set the updated tasks to be processed as the target tasks to be processed, set the remaining tasks to be processed as the target tasks, and return to the step of acquiring the current state of each thread channel after all thread channels have processed for the preset duration at their corresponding working rates, until all tasks to be processed have been processed.
7. The dynamic multithreading scheduling system of claim 6, wherein the thread processor is further configured to:
end the thread scheduling process if the sum of the numbers of target processed tasks across all thread channels is equal to the number of target tasks.
8. The dynamic multithreading scheduling system of claim 7, wherein the thread processor is further configured to:
determine, by a preset initial allocation method and according to the initial states of all thread channels and all target tasks, the target tasks to be processed allocated to each thread channel.
9. The dynamic multithreading scheduling system of claim 7, wherein the thread processor is further configured to:
remove the target processed tasks of all thread channels from all target tasks to obtain the remaining tasks to be processed.
10. The dynamic multithreading scheduling system of claim 7, wherein the edge server is specifically configured to:
process the current states of all thread channels and all remaining tasks to be processed with a preset thread scheduling algorithm to determine the updated tasks to be processed allocated to each thread channel.
CN202011290157.4A 2020-11-18 2020-11-18 Dynamic multithreading scheduling method and system Active CN112395063B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011290157.4A CN112395063B (en) 2020-11-18 2020-11-18 Dynamic multithreading scheduling method and system

Publications (2)

Publication Number Publication Date
CN112395063A true CN112395063A (en) 2021-02-23
CN112395063B CN112395063B (en) 2023-01-20

Family

ID=74606399

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107832146A (en) * 2017-10-27 2018-03-23 北京计算机技术及应用研究所 Thread pool task processing method in highly available cluster system
CN109814998A (en) * 2019-01-22 2019-05-28 中国联合网络通信集团有限公司 A kind of method and device of multi-process task schedule

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张多利 (Zhang Duoli) et al.: "粗粒度多核系统任务级多线程调度研究" [Research on task-level multithread scheduling in coarse-grained multi-core systems], 《微电子学与计算机》 [Microelectronics & Computer] *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926282A (en) * 2021-03-25 2021-06-08 中国科学院微电子研究所 Electronic design automation EDA simulation method and device
CN112926282B (en) * 2021-03-25 2024-03-01 中国科学院微电子研究所 Electronic design automation EDA simulation method and device

Similar Documents

Publication Publication Date Title
CN109582455B (en) Multithreading task processing method and device and storage medium
KR101651871B1 (en) Job Allocation Method on Multi-core System and Apparatus thereof
US20160306680A1 (en) Thread creation method, service request processing method, and related device
US20190319895A1 (en) Resource Scheduling Method And Apparatus
CN109564528B (en) System and method for computing resource allocation in distributed computing
CN109582447B (en) Computing resource allocation method, task processing method and device
US8677362B2 (en) Apparatus for reconfiguring, mapping method and scheduling method in reconfigurable multi-processor system
CN105760234A (en) Thread pool management method and device
US9037703B1 (en) System and methods for managing system resources on distributed servers
US10733024B2 (en) Task packing scheduling process for long running applications
US11438271B2 (en) Method, electronic device and computer program product of load balancing
CN106775975B (en) Process scheduling method and device
CN112395063B (en) Dynamic multithreading scheduling method and system
CN111625339A (en) Cluster resource scheduling method, device, medium and computing equipment
CN114816709A (en) Task scheduling method, device, server and readable storage medium
US10877790B2 (en) Information processing apparatus, control method and storage medium
CN114461385A (en) Thread pool scheduling method, device and equipment and readable storage medium
CN109819674B (en) Computer storage medium, embedded scheduling method and system
CN111143063A (en) Task resource reservation method and device
CN112214299A (en) Multi-core processor and task scheduling method and device thereof
CN109189581B (en) Job scheduling method and device
CN110175078B (en) Service processing method and device
EP2413240A1 (en) Computer micro-jobs
CN115495249A (en) Task execution method of cloud cluster
CN112685158B (en) Task scheduling method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant