CN112380024A - Thread scheduling method based on distributed counting - Google Patents

Thread scheduling method based on distributed counting

Info

Publication number
CN112380024A
CN112380024A
Authority
CN
China
Prior art keywords
executed
task
tasks
execution
amount
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110064735.0A
Other languages
Chinese (zh)
Other versions
CN112380024B (en)
Inventor
胡奇韬
杨象笋
禹雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tiandao Jinke Co ltd
Original Assignee
Tiandao Jinke Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tiandao Jinke Co ltd filed Critical Tiandao Jinke Co ltd
Priority to CN202110064735.0A priority Critical patent/CN112380024B/en
Publication of CN112380024A publication Critical patent/CN112380024A/en
Application granted granted Critical
Publication of CN112380024B publication Critical patent/CN112380024B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/52 Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F 9/524 Deadlock detection or avoidance

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a thread scheduling method based on distributed counting, and relates to the technical field of computer applications. The method comprises the following steps: setting a counting number, the counting number being the total amount of a batch of tasks to be executed; delivering the tasks to be executed to a message queue; suspending the thread; distributing the tasks to be executed in the message queue to a plurality of execution servers; decrementing the counting number by 1 whenever any execution server completes a task to be executed; monitoring the counting number; and releasing the thread suspension when the counting number reaches 0. By counting in this way, the invention tracks the execution progress of tasks across a plurality of execution servers, i.e. a single synchronized counter counts a large batch of tasks as they are executed across servers.

Description

Thread scheduling method based on distributed counting
Technical Field
The invention relates to the technical field of computer application, in particular to a thread scheduling method based on distributed counting.
Background
With the popularization and development of the internet, users' business and data have grown explosively, and a single application runs into severe performance bottlenecks and limitations. Completing batch tasks then requires applications on multiple hosts, which is what distributed systems provide: when continuous tasks are processed in batches, the work is split into multiple concurrent tasks executed simultaneously, improving execution efficiency.
At the same time, however, a distributed system introduces many problems that a single machine does not have, for example: 1. task counting across hosts cannot be achieved; 2. because the hosts differ in performance, their task execution capabilities differ, and balancing the task load among them is a hard problem; 3. if any one of the hosts goes down, tasks risk failing to execute; that is, compared with a single host, a distributed system increases the risk of task execution failure.
Disclosure of Invention
In order to solve at least one of the above technical problems, an object of the present invention is to provide a thread scheduling method based on distributed counting.
To achieve this object, the invention provides the following technical scheme:
a thread scheduling method based on distributed counting comprises the following steps:
s1, setting the counting number, wherein the counting number is the total amount of a certain batch of tasks to be executed;
s2, delivering the task to be executed to the message queue;
s3, suspending the thread;
s4, distributing the tasks to be executed in the message queue to a plurality of execution servers;
s5, when any execution server completes a task to be executed, the counting number is decreased by 1;
s6, monitoring the counting number;
s7, when the counting number is 0, releasing the thread suspension.
Compared with the prior art, the invention has the beneficial effects that: the invention realizes the statistics of the task execution progress in a plurality of execution servers in a counting mode, namely realizes the counting of a large batch of tasks in the cross-server execution process by one synchronous counter.
Further, in S1, a batch number for the batch of tasks to be executed is generated at the same time as the counting number is set; correspondingly, in S2, the batch number is delivered to the message queue together with each task to be executed, so that batches of tasks can be distinguished.
Further, in S4, the plurality of execution servers are load balanced while tasks are allocated; the specific steps are as follows:
S41, calculating, for each execution server, the amount of tasks still to be executed and the real-time task processing capability, where the real-time task processing capability is the amount of tasks the execution server processes per unit time;
and S42, newly allocating amounts of tasks to be executed to the execution servers such that, for every execution server, the ratio of the sum of its existing amount of tasks to be executed and its newly allocated amount to its task processing capability is the same. Tasks are thus allocated in proportion to the actual performance of each execution server, achieving load balancing.
Further, the existing amount of tasks to be executed, the amount of tasks processed per unit time, and the newly allocated amount of tasks may each be measured either as a number of tasks or as a total instruction length of the tasks. When the instruction lengths of the tasks to be executed are roughly equal, their processing times are also roughly equal, so the load of an execution server can be measured simply by the number of tasks; when instruction lengths differ, processing times differ as well, so measuring load by total instruction length characterizes it more accurately than counting tasks.
Further, S43 follows S42: when any execution server has no tasks left to execute, it captures tasks to be executed from the remaining execution servers, achieving a second round of load balancing.
Further, the amount of tasks to be captured is determined from the real-time task processing capabilities of the remaining execution servers and their existing amounts of tasks to be executed, specifically as follows:
S431, calculating the real-time task processing capability of each execution server;
S432, dividing each execution server's current amount of tasks to be executed by its real-time task processing capability to obtain its remaining processing time;
S433, taking the minimum of the remaining processing times as the reference time;
S434, multiplying the reference time by each execution server's real-time task processing capability to obtain that server's reference task amount;
S435, the amount of tasks the idle execution server captures from each remaining execution server is that server's current amount of tasks to be executed minus its reference task amount.
Further, in S5, when an execution server completes a task to be executed, it sends a feedback message back to the message queue, and the message queue deletes the corresponding task upon receiving the feedback. This gives the message queue persistence: a task to be executed is not lost if an execution server suddenly goes down.
Further, in S5, when any execution server completes a task to be executed, the execution result is stored in a database or shared cache for summarization; after the thread suspension is released, the summarized execution results are obtained from the database or cache.
Further, in S7, a timeout period is set; if the counting number has still not been observed to reach 0 when the task execution time hits the timeout, the thread suspension is released anyway, avoiding deadlock.
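As a concrete illustration of the monitoring and timeout behavior of S6 and S7, the following is a minimal Java sketch; the CounterClient interface and its get method are hypothetical stand-ins, since the invention does not tie the counting middleware to a specific product.

```java
// Hypothetical read-only view of the counting middleware.
interface CounterClient {
    long get(String batchKey);   // current counting number for a batch
}

public final class BatchWaiter {
    /** Blocks until the counting number reaches 0 or the timeout elapses. */
    public static void awaitBatch(CounterClient counter, String batchKey,
                                  long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (counter.get(batchKey) > 0) {          // S6: monitor the counting number
            if (System.currentTimeMillis() >= deadline) {
                return;                              // timeout: release the suspension anyway
            }
            Thread.sleep(100);                       // simple polling interval
        }
        // S7: counting number is 0, all tasks in the batch have completed
    }
}
```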
Further, S0 is also included before S1, and the tasks to be executed are assembled and placed into an array.
Drawings
FIG. 1 is a system architecture block diagram according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating thread scheduling according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The first embodiment is as follows:
referring to fig. 1, the present embodiment provides a thread scheduling system based on distributed counting, which includes a scheduling server, a message middleware, a load balancer, a counting middleware, and a plurality of execution servers.
The scheduling server contains a scheduling thread whose work divides, in order, into three stages: a pre-task, the batch task, and a post-task. Processing the batch task is the problem this embodiment addresses; the pre-task and post-task are merely its pre-condition and post-condition, are the same as in the prior art, and have little bearing on the present scheme, so they are not described here. Specifically, the scheduling server comprises the following components:
A task assembly component, which assembles the batch of tasks to be executed before execution and places them into an array. It should be noted that the instruction of a task to be executed is an executable code script or a character string in xml or json format; this embodiment does not mandate the instruction format, as long as an instruction executor can parse and identify it. The maximum instruction length is 128k; if this must be exceeded, the message queue configuration can be modified.
A count setting component, which sets the counting number, the number being the number of tasks to be executed in the array. After the count is set successfully, a counter key is generated; the key is the batch number of this batch of tasks to be executed and distinguishes it from other batches. The counting itself is done in the counting middleware.
A task delivery component, which delivers the tasks to be executed in the array one by one to a message queue, the message queue being a message middleware deployed in distributed fashion. Tasks to be executed in the message middleware are distinguished as "task ij", where i is the batch number, i.e. the counter's key, and j is the serial number of the task; as shown in fig. 1, "task 12" denotes the 2nd task in the 1st batch of tasks to be executed.
A thread suspension component, which suspends the scheduling thread once the tasks are delivered.
A counting monitor, which monitors the count value in the counting middleware.
A thread release component, which releases the thread suspension when the monitored count value reaches 0.
The message middleware is deployed as a distributed cluster to ensure high availability. It also stores the tasks to be executed in files, using sequential reads and writes to preserve read/write performance while making the messages durable.
The message middleware follows a Topic and Queue design: a Topic is a category of messages, and a Queue is a physical storage unit for messages; one Topic is divided across several Queues, and different Queues are stored on different physical disks.
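To make the cooperation of these components concrete, here is a minimal Java sketch of the scheduling-server side, under stated assumptions: CounterClient and QueueClient are hypothetical interfaces standing in for the counting middleware and message middleware, which the embodiment does not tie to specific products.

```java
import java.util.List;

// Hypothetical middleware clients; the embodiment names no concrete APIs.
interface CounterClient {
    void set(String batchKey, long count);   // create the counter for a batch
    long get(String batchKey);               // read the current counting number
}

interface QueueClient {
    void send(String topic, String payload); // deliver one task to the message queue
}

public class SchedulingServer {
    private final CounterClient counter;
    private final QueueClient queue;

    public SchedulingServer(CounterClient counter, QueueClient queue) {
        this.counter = counter;
        this.queue = queue;
    }

    /** Runs one batch: set the count, deliver "task ij" messages, wait for zero. */
    public void runBatch(String batchKey, List<String> instructions) throws InterruptedException {
        counter.set(batchKey, instructions.size());   // count = total tasks in the batch
        for (int j = 0; j < instructions.size(); j++) {
            // the batch number travels with each task so batches can be distinguished
            queue.send("tasks", "task " + batchKey + (j + 1) + ":" + instructions.get(j));
        }
        while (counter.get(batchKey) > 0) {           // thread stays suspended
            Thread.sleep(100);                        // until the count reaches 0
        }
    }
}
```

Suspension is modeled here as simple polling; the thread release component corresponds to the loop exiting once the count reaches 0.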
The execution server starts a thread pool to receive tasks to be executed from the message middleware; each thread in the pool processes one task, and once a task is finished the thread is reused to process the next one. After each task to be executed completes, the counting middleware is invoked and the count under the counter's key is decremented.
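The execution-server loop just described might look as follows; TaskSource, Countdown and TaskMessage are hypothetical stand-ins for the middleware clients, and receive() is assumed to block until a task message arrives.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

record TaskMessage(String batchKey, String instruction) {}

interface TaskSource {
    TaskMessage receive();           // pull the next task from the message middleware
    void ack(TaskMessage m);         // feedback message; the queue then deletes the task
}

interface Countdown {
    void decrement(String batchKey); // count down in the counting middleware by key
}

public class ExecutionWorkers {
    public static void start(TaskSource source, Countdown countdown, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                while (true) {                          // thread is reused task after task
                    TaskMessage m = source.receive();   // one thread processes one task
                    execute(m.instruction());           // parse and run the instruction
                    source.ack(m);                      // so a crash before ack loses nothing
                    countdown.decrement(m.batchKey());  // count down under the counter key
                }
            });
        }
    }

    private static void execute(String instruction) {
        // interpret the code script / xml / json instruction here
    }
}
```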
The load balancer balances the load across the execution servers, as follows:
calculating, for each execution server, the amount of tasks still to be executed and the real-time task processing capability, where the real-time task processing capability is the amount of tasks the execution server processes per unit time;
and newly allocating amounts of tasks to be executed to the execution servers such that, for every execution server, the ratio of the sum of its existing amount of tasks to be executed and its newly allocated amount to its task processing capability is the same. Tasks are thus allocated in proportion to the actual performance of each execution server, achieving load balancing.
Furthermore, the load balancer implements a work-stealing mechanism, specifically:
when any execution server has no tasks left to execute, it captures tasks to be executed from the remaining execution servers, achieving a second round of load balancing. The amount of tasks captured is determined from the real-time task processing capabilities of the remaining execution servers and their existing amounts of tasks to be executed, as follows:
calculating the real-time task processing capability of each execution server;
dividing each execution server's current amount of tasks to be executed by its real-time task processing capability to obtain its remaining processing time;
taking the minimum of the remaining processing times as the reference time;
multiplying the reference time by each execution server's real-time task processing capability to obtain that server's reference task amount;
and capturing, from each remaining execution server, an amount of tasks equal to that server's current amount of tasks to be executed minus its reference task amount.
The counting middleware is an in-memory database with a key-value structure that communicates with the application servers over sockets; its key is the counter's key and its value is the counting number. Whenever a countdown occurs, the decr(key) method is called on the corresponding key and the count is reduced by 1; decr is a thread-synchronized method, ensuring sequential execution when multiple threads call it simultaneously. For high availability, the counting middleware is deployed in master-slave mode with the key-value data written to files; if the master server goes down, a slave server can be promoted to master and restore the service state by reading the key-values from file.
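The embodiment specifies only a socket-based in-memory key-value database; Redis is one store with exactly these semantics, so as an assumption the sketch below uses the Jedis client, whose DECR command is atomic on the server side.

```java
import redis.clients.jedis.Jedis;

// A sketch of the countdown call against a Redis-like counting middleware;
// the host name and underflow policy are illustrative assumptions.
public class CountingMiddlewareClient {
    private final Jedis jedis = new Jedis("counter-host", 6379); // assumed address

    public synchronized void decrement(String batchKey) {
        // DECR is atomic server-side, so countdowns arriving concurrently from
        // many execution servers are applied sequentially, as the text requires.
        long remaining = jedis.decr(batchKey);
        if (remaining < 0) {
            throw new IllegalStateException("count underflow for batch " + batchKey);
        }
    }
}
```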
Example two:
Referring to fig. 2, the present embodiment provides a thread scheduling method based on distributed counting, comprising the following steps:
S0, assembling the tasks to be executed and putting them into an array. The instruction of a task to be executed is an executable code script or a character string in xml or json format; this embodiment does not mandate the instruction format, as long as an instruction executor can parse and identify it. The maximum instruction length is 128k; if this must be exceeded, the message queue configuration can be modified.
S1, setting the counting number, which is the total amount of this batch of tasks to be executed; at the same time, generating the counter's key, which is the batch number of the batch of tasks; correspondingly, in S2, the batch number is delivered to the message queue together with the tasks to be executed, so as to distinguish batches. Specifically, a task to be executed is named "task ij", where i is the batch number, i.e. the counter's key, and j is the serial number of the task; for example, "task 12" is the 2nd task of the 1st batch of tasks to be executed.
S2, delivering the tasks: the tasks to be executed are delivered to the message queue. The message queue is a message middleware deployed in distributed fashion.
The message middleware is deployed as a distributed cluster to ensure high availability. It also stores the tasks to be executed in files, using sequential reads and writes to preserve read/write performance while making the messages durable.
The message middleware follows a Topic and Queue design: a Topic is a category of messages, and a Queue is a physical storage unit for messages; one Topic is divided across several Queues, and different Queues are stored on different physical disks.
S3, suspending the thread;
S4, distributing the tasks to be executed in the message queue to a plurality of execution servers, load balancing the execution servers while allocating tasks; the specific steps are as follows:
S41, calculating, for each execution server, the amount of tasks still to be executed and the real-time task processing capability, where the real-time task processing capability is the amount of tasks the execution server processes per unit time.
The existing amount of tasks to be executed, the amount of tasks processed per unit time, and the newly allocated amount of tasks may each be measured either as a number of tasks or as a total instruction length of the tasks. When the instruction lengths of the tasks to be executed are roughly equal, their processing times are also roughly equal, so the load of an execution server can be measured simply by the number of tasks; when instruction lengths differ, processing times differ as well, so measuring load by total instruction length characterizes it more accurately than counting tasks.
Taking the instruction length of the tasks as the measure, the calculation is as follows:

L^{(i)} = L_alloc^{(i)} - L_done^{(i)}

C^{(i)} = L_T^{(i)} / T

where the superscript (i) denotes the i-th execution server; L^{(i)} is the sum of the instruction lengths of the tasks still to be executed on the i-th execution server; L_alloc^{(i)} is the sum of the instruction lengths of all tasks the message queue has historically allocated to the i-th execution server; L_done^{(i)} is the sum of the instruction lengths of the tasks the i-th execution server has historically executed; C^{(i)} is the real-time task processing capability of the i-th execution server; T is a statistical period, which may be 10 min, 15 min, 0.5 h or 1 h depending on the specific tasks; and L_T^{(i)} is the sum of the instruction lengths of the tasks executed within the statistical period ending at the current point in time. It should be noted that when a task to be executed completes on an execution server, the server sends a feedback message back to the message queue; which tasks have been executed is determined from these feedback messages, and their instruction lengths are then summed.
S42, newly allocating amounts of tasks to be executed to the execution servers such that, for every execution server, the ratio of the sum of its existing amount of tasks to be executed and its newly allocated amount to its task processing capability is the same. Tasks are thus allocated in proportion to the actual performance of each execution server, achieving load balancing.
Specifically, in step S421, the total instruction length L_total is calculated as the instruction length L_q of the tasks waiting to be allocated in the message queue plus the instruction lengths of the tasks still to be executed on each execution server:

L_total = L_q + Σ_i L^{(i)}

S422, the common execution time t of all execution servers is calculated:

t = L_total / Σ_i C^{(i)}

S423, the instruction length ΔL^{(i)} of the tasks newly allocated to each execution server is calculated:

ΔL^{(i)} = t · C^{(i)} - L^{(i)}

The tasks to be executed in the message queue are then distributed to the execution servers in sequence, the i-th execution server receiving newly allocated tasks of total instruction length ΔL^{(i)}.
S424, it is worth mentioning that if the computed ΔL^{(i)} of one or more execution servers is less than 0, those execution servers already hold more than their share of tasks to be executed; they are excluded, and the load-balanced distribution over the remaining execution servers is repeated as in steps S421 to S423 (a sketch of this computation follows).
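A Java sketch of the S421 to S424 computation, using instruction lengths as the measure; array and method names are illustrative. Here pending[i] is L^{(i)}, capability[i] is C^{(i)}, and queued is the unallocated instruction length L_q in the message queue.

```java
public final class AllocationPlanner {
    /** Returns the instruction length of newly allocated work per execution server. */
    public static double[] allocate(double[] pending, double[] capability, double queued) {
        int n = pending.length;
        double[] share = new double[n];
        boolean[] excluded = new boolean[n];
        while (true) {
            double totalLen = queued, totalCap = 0;            // S421: total length
            for (int i = 0; i < n; i++) {
                if (excluded[i]) continue;
                totalLen += pending[i];
                totalCap += capability[i];
            }
            if (totalCap == 0) return share;                   // nothing left to balance
            double t = totalLen / totalCap;                    // S422: common finish time
            boolean redo = false;
            for (int i = 0; i < n; i++) {
                if (excluded[i]) continue;
                share[i] = t * capability[i] - pending[i];     // S423: t * C(i) - L(i)
                if (share[i] < 0) {                            // S424: already overloaded
                    excluded[i] = true;
                    share[i] = 0;
                    redo = true;                               // redo S421-S423 without it
                }
            }
            if (!redo) return share;
        }
    }
}
```

As a sanity check, the shares of the non-excluded servers always sum to the queued length, since t times the sum of their capabilities equals the queued length plus the sum of their pending lengths.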
S43, when no task to be executed exists in any execution server, capturing the task to be executed from the other execution servers; and the second load balance is realized. Capturing the amount of the tasks to be executed, wherein the amount is determined according to the real-time task processing capacity of other execution servers and the amount of the existing tasks to be executed, and the method specifically comprises the following steps:
s431, calculating the real-time task processing capacity of each execution server
C^{(i)};
S432, dividing each execution server's current amount of tasks to be executed by its real-time task processing capability to obtain its remaining processing time t^{(i)}:

t^{(i)} = L^{(i)} / C^{(i)}

S433, taking the minimum of the remaining processing times as the reference time t_ref:

t_ref = min_i t^{(i)}

S434, multiplying the reference time by each execution server's real-time task processing capability to obtain that server's reference task amount L_ref^{(i)}:

L_ref^{(i)} = t_ref · C^{(i)}

S435, the amount of tasks the idle execution server captures from each remaining execution server is that server's current amount of tasks to be executed minus its reference task amount (sketched below):

ΔL_steal^{(i)} = L^{(i)} - L_ref^{(i)}
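A Java sketch of the S431 to S435 computation; names are illustrative. The arrays describe the remaining (busy) execution servers: reading the text, the idle server itself must be excluded from the minimum in S433, otherwise its zero remaining time would make every reference amount zero.

```java
public final class WorkStealingPlanner {
    /** Returns how much pending work the idle server grabs from each busy server. */
    public static double[] amountsToGrab(double[] pending, double[] capability) {
        int n = pending.length;
        double tRef = Double.MAX_VALUE;
        for (int i = 0; i < n; i++) {
            double remaining = pending[i] / capability[i];  // S432: t(i) = L(i) / C(i)
            tRef = Math.min(tRef, remaining);               // S433: reference time
        }
        double[] grab = new double[n];
        for (int i = 0; i < n; i++) {
            double reference = tRef * capability[i];        // S434: L_ref(i) = t_ref * C(i)
            grab[i] = pending[i] - reference;               // S435: L(i) - L_ref(i)
        }
        return grab;                                        // every entry is >= 0 by S433
    }
}
```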
S5, when any execution server completes a task to be executed, the counting number is decreased by 1; and the execution server sends a feedback message back to the message queue, and the message queue deletes the corresponding task to be executed after receiving the feedback message. The persistent function of the message queue is realized, and the task to be executed cannot be lost due to sudden downtime of one execution server. It should be noted that when any execution server completes a task to be executed, the execution results are stored in the database or the shared cache for summarization, and the summarization of the execution results is obtained from the database or the cache until the thread suspension is removed.
S6, monitoring the counting number;
s7, when the counting number is 0, releasing the thread suspension. If the execution time of the task reaches the overtime time, the thread suspension is still removed if the counting number is not monitored to be 0; avoiding the generation of deadlock.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (10)

1. A thread scheduling method based on distributed counting is characterized by comprising the following steps:
s1, setting the counting number, wherein the counting number is the total amount of a certain batch of tasks to be executed;
s2, delivering the task to be executed to the message queue;
s3, suspending the thread;
s4, distributing the tasks to be executed in the message queue to a plurality of execution servers;
s5, when any execution server completes a task to be executed, the counting number is decreased by 1;
s6, monitoring the counting number;
s7, when the counting number is 0, releasing the thread suspension.
2. The thread scheduling method based on distributed counting according to claim 1, wherein in S1, a batch number of the batch of tasks to be executed is generated while the counting number is set; and in S2, the batch number is delivered to the message queue together with the tasks to be executed.
3. The method according to claim 1, wherein in S4, the plurality of execution servers are load balanced while tasks are allocated; the specific steps are as follows:
S41, calculating, for each execution server, the amount of tasks still to be executed and the real-time task processing capability, where the real-time task processing capability is the amount of tasks the execution server processes per unit time;
and S42, newly allocating amounts of tasks to be executed to the execution servers such that, for every execution server, the ratio of the sum of its existing amount of tasks to be executed and its newly allocated amount to its task processing capability is the same.
4. The method according to claim 3, wherein the existing amount of tasks to be executed, the amount of tasks processed per unit time, and the newly allocated amount of tasks are each measured as a number of tasks or as a total instruction length of the tasks.
5. The method according to claim 3 or 4, wherein said S42 is followed by S43, and when there is no task waiting to be executed in any execution server, the task waiting to be executed is fetched from the rest of the execution servers.
6. The thread scheduling method based on distributed counting according to claim 5, wherein the amount of tasks to be captured is determined from the real-time task processing capabilities of the remaining execution servers and their existing amounts of tasks to be executed, specifically as follows:
S431, calculating the real-time task processing capability of each execution server;
S432, dividing each execution server's current amount of tasks to be executed by its real-time task processing capability to obtain its remaining processing time;
S433, taking the minimum of the remaining processing times as the reference time;
S434, multiplying the reference time by each execution server's real-time task processing capability to obtain that server's reference task amount;
S435, the amount of tasks the idle execution server captures from each remaining execution server is that server's current amount of tasks to be executed minus its reference task amount.
7. The thread scheduling method according to claim 1, wherein in S5, when the execution server completes a task to be executed, the execution server sends back a feedback message to the message queue, and the message queue receives the feedback message and deletes the corresponding task to be executed.
8. The thread scheduling method according to claim 1, wherein in S5, when any execution server completes a task to be executed, the execution result is stored in a database or shared cache for summarization, and after the thread suspension is released, the summarized execution results are obtained from the database or cache.
9. The thread scheduling method according to claim 1, wherein a timeout period is further set in S7, and if the number of counts is not monitored to be 0 when the execution time of the task reaches the timeout period, the thread suspension is still released.
10. The method for thread scheduling based on distributed counting according to claim 1, wherein step S1 is preceded by a step S0, in which the tasks to be executed are assembled and placed into an array.
CN202110064735.0A 2021-01-18 2021-01-18 Thread scheduling method based on distributed counting Active CN112380024B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110064735.0A CN112380024B (en) 2021-01-18 2021-01-18 Thread scheduling method based on distributed counting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110064735.0A CN112380024B (en) 2021-01-18 2021-01-18 Thread scheduling method based on distributed counting

Publications (2)

Publication Number Publication Date
CN112380024A true CN112380024A (en) 2021-02-19
CN112380024B CN112380024B (en) 2021-05-25

Family

ID=74581974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110064735.0A Active CN112380024B (en) 2021-01-18 2021-01-18 Thread scheduling method based on distributed counting

Country Status (1)

Country Link
CN (1) CN112380024B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8943353B2 (en) * 2013-01-31 2015-01-27 Hewlett-Packard Development Company, L.P. Assigning nodes to jobs based on reliability factors
CN106095585A (en) * 2016-06-22 2016-11-09 中国建设银行股份有限公司 Task requests processing method, device and enterprise information system
CN106878369A (en) * 2016-08-29 2017-06-20 阿里巴巴集团控股有限公司 A kind of method and device for business processing
CN110287245A (en) * 2019-05-15 2019-09-27 北方工业大学 Method and system for scheduling and executing distributed ETL (extract transform load) tasks
CN110287033A (en) * 2019-07-03 2019-09-27 网易(杭州)网络有限公司 Batch tasks processing method, device, system, equipment and readable storage medium storing program for executing
CN111078510A (en) * 2018-10-18 2020-04-28 北京国双科技有限公司 Method and device for recording task processing progress
CN111158889A (en) * 2020-01-02 2020-05-15 中国银行股份有限公司 Batch task processing method and system
CN111694663A (en) * 2020-06-02 2020-09-22 中国工商银行股份有限公司 Load balancing method, device and system for server cluster
CN112000445A (en) * 2020-07-08 2020-11-27 苏宁云计算有限公司 Distributed task scheduling method and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115858133A (en) * 2023-03-01 2023-03-28 北京仁科互动网络技术有限公司 Batch data processing method and device, electronic equipment and storage medium
CN115858133B (en) * 2023-03-01 2023-05-02 北京仁科互动网络技术有限公司 Batch data processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112380024B (en) 2021-05-25

Similar Documents

Publication Publication Date Title
US9262228B2 (en) Distributed workflow in loosely coupled computing
US7523196B2 (en) Session monitoring using shared memory
EP1679602B1 (en) Shared memory based monitoring for application servers
US7689989B2 (en) Thread monitoring using shared memory
US20170359240A1 (en) System and method for supporting a selection service in a server environment
US9582312B1 (en) Execution context trace for asynchronous tasks
JP5088234B2 (en) Message association processing apparatus, method, and program
US7596790B2 (en) Allocating computing resources in a distributed environment
US7698602B2 (en) Systems, methods and computer products for trace capability per work unit
US20060143595A1 (en) Virtual machine monitoring using shared memory
CN110795254A (en) Method for processing high-concurrency IO based on PHP
US10437645B2 (en) Scheduling of micro-service instances
CN111880934A (en) Resource management method, device, equipment and readable storage medium
US9313267B2 (en) Using a same program on a local system and a remote system
CN112380024B (en) Thread scheduling method based on distributed counting
US20120324194A1 (en) Adjusting the amount of memory allocated to a call stack
CN113485812B (en) Partition parallel processing method and system based on large-data-volume task
US7792896B2 (en) Heterogeneous two-phase commit test engine
CN115437766A (en) Task processing method and device
Guo et al. Decomposing and executing serverless applications as resource graphs
CN114237910A (en) Client load balancing implementation method and device
CN114661475A (en) Distributed resource scheduling method and device for machine learning
CN114385351A (en) Cloud management platform load balancing performance optimization method, device, equipment and medium
CN112115118B (en) Database pressure measurement optimization method and device, storage medium and electronic equipment
CN115934335A (en) Task processing method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant