CN115016915A - Task scheduling method, device, computer equipment, storage medium and program product

Task scheduling method, device, computer equipment, storage medium and program product

Info

Publication number
CN115016915A
Authority
CN
China
Prior art keywords
task
candidate
candidate task
service node
node
Prior art date
Legal status
Pending
Application number
CN202210757280.5A
Other languages
Chinese (zh)
Inventor
胡文涛
王卓成
李逶
陈鹏翼
Current Assignee
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority to CN202210757280.5A
Publication of CN115016915A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 - Techniques for rebalancing the load in a distributed system
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present application relates to the field of big data, and in particular to a task scheduling method, apparatus, computer device, storage medium, and program product. The method comprises the following steps: acquiring all candidate tasks in a distributed system and the task concurrency number of each candidate task, where a candidate task is a task in the distributed system on which a fault-tolerance mechanism has currently been applied; acquiring the minimum number of idle threads among the key service nodes corresponding to each candidate task; and adjusting the task concurrency number of each candidate task according to the minimum number of idle threads of its key service nodes to obtain a target task concurrency number for each candidate task, the target task concurrency number being the number of instances of the candidate task that each corresponding key service node is able to execute. With this method, the risk that a candidate task blocks again when it is re-executed can be reduced.

Description

Task scheduling method, device, computer equipment, storage medium and program product
Technical Field
The present application relates to the field of big data technologies, and in particular, to a task scheduling method, apparatus, computer device, storage medium, and program product.
Background
In a distributed system, a single business function is provided by multiple servers (nodes) or multiple services. When a service node fails or the network becomes abnormal, a long timeout can cause the whole distributed system to avalanche.
To address this, the related art improves the overall fault tolerance of the system to avoid such avalanches. Common fault-tolerance schemes include service degradation, circuit breaking (fusing), and service throttling (current limiting). Circuit breaking is passive timeout handling and belongs to after-the-fact fault tolerance; excessive retries and after-the-fact handling lead to meaningless system occupation. Degradation and throttling are in-process fault-tolerance mechanisms: a degraded or throttled task is restarted after a certain time, but when it is re-executed the resource occupation of the service nodes on its call chain cannot be accurately estimated, so the re-executed task may still be blocked.
Disclosure of Invention
In view of the above, it is desirable to provide a task scheduling method, apparatus, computer device, storage medium, and program product capable of reducing the risk that a candidate task blocks again when it is re-executed.
In a first aspect, the present application provides a task scheduling method. The method comprises the following steps:
acquiring all candidate tasks and task concurrency quantity of each candidate task in a distributed system; the candidate task represents a task which is executed with a fault tolerance mechanism currently in the distributed system;
acquiring the minimum idle thread number of the key service node corresponding to each candidate task;
according to the minimum idle thread number of each key service node, adjusting the task concurrency number corresponding to the candidate task to obtain the target task concurrency number of each candidate task; the target task concurrency number is the number of tasks which can be executed by each key service node and correspond to the candidate tasks.
In one embodiment, before obtaining the minimum number of idle threads of the key service node corresponding to each candidate task, the method further includes:
acquiring a service calling hierarchical relationship, wherein the service calling hierarchical relationship comprises a service node calling chain of each task;
determining service nodes with preset node identifications in service node calling chains of all tasks in the service calling hierarchical relationship as key service nodes; the node identification is used for identifying the service node with the task scheduling blocking probability being greater than the preset probability value.
In one embodiment, the obtaining the minimum number of idle threads of the key service node corresponding to each candidate task includes:
constructing a service node classification tree according to each key service node in the service calling hierarchical relationship; the service node classification tree comprises node intervals corresponding to all key service nodes and the real-time idle thread number of each node interval;
and determining the minimum idle thread number of the key service node corresponding to each candidate task according to the service node classification tree.
In one embodiment, determining the minimum number of idle threads of the key service node corresponding to each candidate task according to the service node classification tree includes:
inquiring the real-time idle thread number of the key service node corresponding to each candidate task from the service node classification tree;
and aiming at any candidate task, determining the minimum value in the real-time idle thread number of the key service node corresponding to the candidate task as the minimum idle thread number of the key service node corresponding to the candidate task.
In one embodiment, determining the minimum number of idle threads of the key service node corresponding to each candidate task according to the service node classification tree includes:
for any candidate task, inquiring a target node interval to which a key service node corresponding to the candidate task belongs from a service node classification tree;
and determining the minimum value of the real-time idle thread number of the target node interval as the minimum idle thread number of the key service node corresponding to the candidate task.
In one embodiment, after the task concurrency number corresponding to the candidate task is adjusted to obtain the target task concurrency number of each candidate task, the method further includes:
and aiming at any candidate task, updating the real-time idle thread number of a target node interval corresponding to the candidate task in the service node classification tree according to the target task concurrency number of the candidate task, and marking the updated target node interval.
In one embodiment, the adjusting the task concurrency number corresponding to the candidate task according to the minimum idle thread number of each key service node to obtain the target task concurrency number of each candidate task includes:
comparing the task concurrency number of each candidate task with the minimum idle thread number of the key service node corresponding to each candidate task to obtain a comparison result of each candidate task;
and adjusting the task concurrency number corresponding to the candidate task according to the comparison result of each candidate task to obtain the target task concurrency number of each candidate task.
In one embodiment, the adjusting the task concurrency number corresponding to each candidate task according to the comparison result of each candidate task to obtain the target task concurrency number of each candidate task includes:
aiming at any candidate task, if the comparison result of the candidate task is that the task concurrency number of the candidate task is larger than the minimum idle thread number of the key service node corresponding to the candidate task, the task concurrency number of the candidate task is adjusted to be the same as the minimum idle thread number; the minimum idle thread number is the target task concurrency number of the candidate tasks.
In a second aspect, the present application further provides a task scheduling device. The device includes:
the first acquisition module is used for acquiring all candidate tasks and task concurrency quantity of each candidate task in the distributed system; the candidate task represents a task which is executed with a fault tolerance mechanism in the distributed system;
the second acquisition module is used for acquiring the minimum idle thread number of the key service node corresponding to each candidate task;
the adjusting module is used for adjusting the task concurrency number corresponding to the candidate task according to the minimum idle thread number of each key service node to obtain the target task concurrency number of each candidate task; the target task concurrency number is the number of tasks which can be executed by each key service node and correspond to the candidate tasks.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory and a processor, the memory stores a computer program, and the processor realizes the following steps when executing the computer program:
acquiring all candidate tasks and task concurrency quantity of each candidate task in a distributed system; the candidate task represents a task which is executed with a fault tolerance mechanism currently in the distributed system;
acquiring the minimum idle thread number of the key service node corresponding to each candidate task;
according to the minimum idle thread number of each key service node, adjusting the task concurrency number of the corresponding candidate task to obtain the target task concurrency number of each candidate task; the target task concurrency number is the number of tasks which can be executed by each key service node and correspond to the candidate tasks.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring all candidate tasks and task concurrency quantity of each candidate task in a distributed system; the candidate task represents a task which is executed with a fault tolerance mechanism currently in the distributed system;
acquiring the minimum idle thread number of the key service node corresponding to each candidate task;
according to the minimum idle thread number of each key service node, adjusting the task concurrency number corresponding to the candidate task to obtain the target task concurrency number of each candidate task; the target task concurrency number is the number of tasks which can be executed by each key service node and correspond to the candidate tasks.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, performs the steps of:
acquiring all candidate tasks and task concurrency quantity of each candidate task in a distributed system; the candidate task represents a task which is executed with a fault tolerance mechanism in the distributed system;
acquiring the minimum idle thread number of the key service node corresponding to each candidate task;
according to the minimum idle thread number of each key service node, adjusting the task concurrency number corresponding to the candidate task to obtain the target task concurrency number of each candidate task; the target task concurrency number is the number of tasks which can be executed by each key service node and correspond to the candidate tasks.
According to the task scheduling method, the task scheduling device, the computer equipment, the storage medium and the program product, the task concurrency number of each candidate task is adjusted according to the minimum idle thread number of the key service node of each candidate task, and scheduling and planning of all candidate tasks are realized; after the task concurrency number of each candidate task is adjusted, for any candidate task, each key service node corresponding to the candidate task can execute the candidate task with the target task concurrency number, the effect of planning in advance through estimating the node thread resource occupation condition is achieved, and the probability of blocking of the key service nodes during execution of each candidate task can be effectively reduced in the execution process.
Drawings
FIG. 1 is a diagram of an application environment of a task scheduling method in one embodiment;
FIG. 2 is a flowchart illustrating a task scheduling method according to an embodiment;
FIG. 3 is a flow diagram illustrating the steps of determining key service nodes in one embodiment;
FIG. 4 is a diagram of a service invocation hierarchy in one embodiment;
FIG. 5 is a schematic flow diagram illustrating the construction of a node service classification tree in one embodiment;
FIG. 6 is a diagram of a service classification tree in one embodiment;
FIG. 7 is a flow diagram illustrating the determination of the minimum number of free threads for each candidate task in one embodiment;
FIG. 8 is a flowchart illustrating the determination of the minimum number of free threads for each candidate task according to another embodiment;
FIG. 9 is a flow diagram illustrating the updating of a classification tree for a service node in one embodiment;
FIG. 10 is a flowchart illustrating the process of determining the concurrency number of the target task according to the comparison result in one embodiment;
FIG. 11 is a block diagram showing the construction of a task scheduler in one embodiment;
FIG. 12 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The task scheduling method provided by the embodiments of the present application can be applied to the application environment shown in fig. 1, in which the terminal 102 communicates with the server 104 via a network. The server 104 is the server responsible for task scheduling in the distributed system, and the terminal 102 communicates with the server 104 to acquire the tasks that have undergone fault-tolerant processing (i.e., the candidate tasks). The terminal 102 acquires all candidate tasks in the distributed system and the task concurrency number of each candidate task, acquires the minimum number of idle threads of the key service nodes corresponding to each candidate task, and adjusts the task concurrency number of each candidate task according to the minimum number of idle threads of its key service nodes to obtain the target task concurrency number of each candidate task; the target task concurrency number is the number of instances of the candidate task that each corresponding key service node can execute. To distinguish it from the server 104, the terminal 102 may be referred to as a function scheduler. The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smart phone, tablet computer, Internet-of-Things device, or portable wearable device. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
With the development of the Internet and cloud computing, distributed architectures are widely used. To meet the demands of rapid service development, an application is split into multiple micro-services for better decoupling and composition, so a single service request may involve the invocation of a large number of services. The relationship between services and node servers can be: 1) one node server may provide multiple services, and calling relationships may exist between different services on the same node server; 2) one service may be provided jointly by multiple node servers, that is, when the service is called, the multiple node servers corresponding to it all need to be called. The call chain of a service request is formed by determining, for each service invoked by the request, the node servers that provide it; the calling relationships among these node servers constitute the call chain of the service request.
When a user initiates a service request to the distributed system through a client, the resources of a node server accessed by the user may become blocked due to the complexity and uncertainty of the network environment. A blocked node server can make the corresponding micro-service unavailable and may even bring down the whole service system. Circuit breaking is triggered when an individual service in the distributed system is unavailable and responses time out: calls to the faulty service are temporarily stopped and are resumed when the service recovers. Service degradation means that when the overall load of the distributed system exceeds a set threshold, some services are strategically delayed, suspended, or partially suspended while the normal operation of the system's core services is guaranteed. Service throttling limits the number of concurrently issued service requests and the number of requests processed per unit time. However, degradation, throttling, and circuit breaking are all in-process or after-the-fact fault-tolerance mechanisms; for a candidate task that is restarted, the resource occupation of the service nodes on its call chain cannot be accurately estimated, so the re-executed candidate task may still be blocked.
In one embodiment, as shown in fig. 2, a task scheduling method is provided, which is described by taking the method as an example applied to the terminal 102 in fig. 1, and includes the following steps:
step 200, acquiring all candidate tasks in the distributed system and task concurrency quantity of each candidate task.
Wherein the candidate task represents a task in the distributed system that has currently been executed with the fault tolerance mechanism. That is, for service degradation, a candidate task refers to a task that is processed to delay use, pause use, or partially pause use of a service; for service flow limitation, the candidate task refers to a task after concurrent number reduction processing; for fusing, a candidate task refers to a task that is called again when the service is restored.
Generally, an ordinary batch-processing task in a distributed system differs from a real-time online task in that it is executed at fixed time intervals and calls different node servers than the real-time online tasks; such a batch task usually has no timeliness requirement, does not need fault-tolerant processing, and can be scheduled and executed by the fault-tolerant processing module (i.e., the server 104) of the conventional distributed system. The candidate tasks in the embodiments of the present application, however, are tasks that have already undergone fault-tolerant processing and may involve certain key node servers, so they differ from ordinary batch tasks and need to be rescheduled by the terminal 102. Specifically, the terminal 102 is connected to the fault-tolerant processing module of the conventional distributed system, obtains each candidate task and its concurrency number from it, and then performs unified rescheduling of the candidate tasks.
The terminal 102 schedules the candidate tasks uniformly as follows: the acquired candidate tasks are sorted according to a preset priority, and in that order it is judged, for each preset execution time point (or execution time period), whether the node servers of the distributed system can smoothly execute each candidate task at its corresponding task concurrency number. For example, suppose the candidate tasks received by the terminal 102 from the fault-tolerant processing module are candidate task 1 (task concurrency number 4), candidate task 2 (task concurrency number 2), and candidate task 3 (task concurrency number 4), and that sorting by priority yields the order: candidate task 1, candidate task 2, candidate task 3. Candidate task 1, ranked first, is taken as the current candidate task, and it is first judged whether the distributed system can execute 4 instances of candidate task 1 at the current time point. If so, candidate task 2 is taken as the next current candidate task and it is judged whether the distributed system can execute 2 instances of candidate task 2; if not, it is judged how many instances of candidate task 2 the distributed system can execute and the concurrency number of candidate task 2 is adjusted accordingly. If the distributed system cannot execute 4 instances of candidate task 1 at the current time point, candidate task 1 remains the current candidate task and its concurrency number is adjusted; only after the concurrency number of candidate task 1 has been scheduled is candidate task 2 taken as the new current candidate task. In this way all candidate tasks are scheduled one by one; candidate task 3 is scheduled in the same manner as candidate task 2 and is not described again. The current time point of this estimation step is earlier than the execution time point (execution time period); by pre-evaluating and scheduling each candidate task in advance, blocking of the distributed system when the candidate tasks are executed at the execution time point (execution time period) is avoided.
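As an illustration of this priority-ordered pre-scheduling loop, the following sketch (in Python, with hypothetical helper names; can_execute and max_executable stand in for the idle-thread estimation against the key service nodes described in later steps and are not part of the patent) shows the adjustment flow:

```python
# Sketch of the priority-ordered pre-scheduling loop described above.
# can_execute / max_executable are hypothetical helpers standing in for the
# idle-thread estimation against key service nodes (see the later steps).

def schedule_candidates(candidates, can_execute, max_executable):
    """candidates: list of (task_id, concurrency), already sorted by priority."""
    plan = []
    for task_id, concurrency in candidates:
        if can_execute(task_id, concurrency):
            # The distributed system supports the full concurrency number.
            plan.append((task_id, concurrency))
        else:
            # Limit the concurrency to what the key service nodes can absorb.
            plan.append((task_id, max_executable(task_id)))
    return plan

# Example matching the text: task 1 (4), task 2 (2), task 3 (4), in priority order:
# schedule_candidates([("t1", 4), ("t2", 2), ("t3", 4)], can_execute, max_executable)
```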
In the embodiments of the present application, when scheduling the candidate tasks, the terminal 102 selects a time node at which the real-time invocation of online (customer-facing) services is relatively stable, with no sudden surges or drops, so that executing a candidate task does not coincide with a sudden surge of real-time online traffic that would block the node servers of the service.
For each candidate task, its task concurrency number represents the number of clients concurrently accessing/requesting the same node server for that candidate task. The embodiments of the present application use this concurrency number to judge whether the node servers corresponding to the candidate task can execute it, and hence whether its concurrency number needs to be limited (reduced). The concurrent processing capability of a node server is the maximum number of requests it can process per unit time (generally 1 second) and is determined by the number of threads of the node server; specifically, one thread of a node server can process only one concurrent request at a time.
Step 202, obtaining the minimum number of idle threads of the key service node corresponding to each candidate task.
For each candidate task, the key service node of the candidate task refers to a key node of all nodes called by executing the candidate task, the key service node may be preset, and the number of the key service nodes is at least one. Each key service node corresponds to an idle thread number, and the idle thread number represents the number of threads which are not occupied by the node server; the minimum idle thread number of the key service node corresponding to each candidate task is the minimum value of the idle thread numbers of all the key service nodes corresponding to the candidate task.
Specifically, the terminal 102 communicates with each key service node to obtain its number of idle threads, and may cache these values locally. The number of idle threads of a key service node can be obtained in either of two ways: 1) directly taking the number of unoccupied threads of the accessed node server as its number of idle threads, i.e., idle threads = total threads of the node server - occupied threads; or 2) obtaining the number of unoccupied threads of the node server and subtracting a reserved thread count, i.e., idle threads = total threads - occupied threads - reserved threads, where the reserved thread count is the number of threads the node server is expected to need for processing real-time customer-facing services. In this embodiment, the number of idle threads of each key service node is obtained in way 2), so that the thread resources required by online (real-time customer-facing) services are reserved and those services are not affected when the candidate task is executed.
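Expressed as a small sketch (function and variable names are illustrative, not taken from the patent), the two ways of computing a node's idle thread count are:

```python
def idle_threads_simple(total, occupied):
    # Way 1): idle threads = total threads - occupied threads
    return total - occupied

def idle_threads_with_reserve(total, occupied, reserved):
    # Way 2): idle threads = total threads - occupied threads - reserved threads,
    # where 'reserved' is kept aside for real-time online (customer-facing) traffic.
    return total - occupied - reserved

# e.g. a node with 100 threads, 60 occupied and 20 reserved for online traffic
# has 20 threads available for rescheduled candidate tasks under way 2).
```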
And 204, adjusting the task concurrency number corresponding to the candidate task according to the minimum idle thread number of each key service node to obtain the target task concurrency number of each candidate task.
The target task concurrency number is the number of instances of the candidate task that each corresponding key service node can execute. For any candidate task, taking the current candidate task as an example: if the minimum number of idle threads of its key service nodes is greater than or equal to its task concurrency number, each key service node can execute the current candidate task at the current concurrency number, and the task concurrency number does not need to be adjusted. If the minimum number of idle threads of its key service nodes is smaller than its task concurrency number, the key service nodes cannot execute the current candidate task at the current concurrency number; in that case the task concurrency number is adjusted to the target task concurrency number so that each key service node can execute the candidate task at the target task concurrency number, after which the current candidate task can be scheduled again.
In the above task scheduling method, the task concurrency number of each candidate task is adjusted according to the minimum number of idle threads of its key service nodes, realizing the scheduling and planning of all candidate tasks. After the adjustment, for any candidate task, each corresponding key service node can execute the candidate task at the target task concurrency number; this achieves planning in advance by estimating the thread-resource occupation of the nodes and effectively reduces the probability that the key service nodes block while the candidate tasks are executed.
When the execution of a candidate task is estimated in advance, analyzing every service node on its call chain may be inefficient and impractical when the number of service nodes is large. In actual task execution, the service node that causes blocking is usually a very frequently called node on the call chain that easily hits a thread-resource bottleneck (or crashes easily). Therefore, without losing task scheduling accuracy, the embodiments of the present application analyze only the key nodes on the call chain, which improves analysis efficiency. Accordingly, in an embodiment, before step 202 of obtaining the minimum number of idle threads of the key service nodes corresponding to each candidate task, the task scheduling method further includes step 201. As shown in fig. 3, step 201 includes: acquiring a service invocation hierarchical relationship, where the service invocation hierarchical relationship includes the service node call chain of each task, and determining the service nodes carrying a preset node identification in the service node call chains of the tasks as the key service nodes.
The node identification marks a service node whose probability of blocking during task scheduling is greater than a preset probability value, and it can be calibrated manually. When calibrating key service nodes, an administrator can, based on experience, either mark key service nodes one by one at the node level, or select some key services at the service level and mark the nodes corresponding to those key services, which then serve as the key service nodes. Specifically, the service invocation hierarchical relationship is stored in a registry; the terminal 102 communicates with the registry to obtain it, and the relationship can be rebuilt and initialized in memory when the application version is updated. When selecting key services, the administrator can analyze the operation and maintenance platform of the actual production environment and calibrate (select) each key service in the service invocation hierarchical relationship, so the key services can be changed and adjusted, and the nodes called by a key service can be changed in the registry.
In the registry, the service invocation hierarchical relationship may be stored in a manner that: (1) storing the call chain of each task independently; (2) uniformly storing the call chains of all tasks in a table mode to generate a service call hierarchical table; (3) and uniformly storing the call chains of all tasks in a graph mode to generate a service call hierarchical graph.
Next, manner (3) above is described by way of example. As shown in fig. 4, one node can provide multiple services; for ease of representation, each service in fig. 4 is shown as being provided by only one node, although in practice each service may be provided jointly by multiple nodes.
In fig. 4, the services that node 0 can provide include service w11, service w12, service w13, and service w14; the services that node 1 can provide include service w21, service w22, service w23, and service w24; the services that node 2 can provide include service w31, service w32, service w33, service w34, and service w35; the services that node 3 can provide include service w41, service w42, service w43, and service w44.
The call chain of task s11, starting from node 0, is: starting from service w11, calling service w21, service w32, and service w43 in sequence. The call chain of task s11', starting from node 0, is: starting from service w11, calling service w24, service w35, and service w44 in sequence. The call chain of task s12, starting from node 0, is: starting from service w12, calling service w23, service w32, and service w43 in sequence. The call chain of task s13, starting from node 0, is: starting from service w13, calling service w22, service w33, and service w45 in sequence. The call chain of task s13', starting from node 0, is: starting from service w13, calling service w24, service w35, and service w44 in sequence. The call chain of task s14, starting from node 0, is: starting from service w14, calling service w34 and service w43 in sequence.
Following the example of manner (3), the call chain of task s11 is: service w11, service w21, service w32, and service w43. The key services identified in the figure are service w23, service w32, and service w43 (the black areas in the figure); the node corresponding to service w23 is node 1, the node corresponding to service w32 is node 2, and the node corresponding to service w43 is node 3, so node 1, node 2, and node 3 are key service nodes. Since only service w32 and service w43 lie on the call chain of task s11, the key service nodes of task s11 are node 2 and node 3. By analogy, the key service nodes corresponding to every task in the invocation hierarchy graph can be obtained; the tasks in the invocation hierarchy graph may or may not include the candidate tasks, and may also include all tasks historically executed by the distributed system.
After the key services are identified, the key services on each task's call chain and the key service nodes corresponding to them can be represented in storage manner (2), as shown in Table 1 below:
TABLE 1
Task | Key services  | Key service nodes
s11  | w32, w43      | 2, 3
s12  | w23, w32, w43 | 1, 2, 3
s13  | None          | None
s14  | w43           | 3
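As an illustrative sketch (not the patent's implementation), the key services and key service nodes of each task in Table 1 could be derived from the call chains of Fig. 4 and the set of marked key services as follows; the data literals mirror the figure and table:

```python
# Key services marked by the administrator and the node that provides each one.
KEY_SERVICES = {"w23": 1, "w32": 2, "w43": 3}

# Call chains from Fig. 4 (task -> services it invokes, in order).
CALL_CHAINS = {
    "s11": ["w11", "w21", "w32", "w43"],
    "s12": ["w12", "w23", "w32", "w43"],
    "s13": ["w13", "w22", "w33", "w45"],
    "s14": ["w14", "w34", "w43"],
}

def key_nodes_per_task(call_chains, key_services):
    """Return, for each task, the key services on its chain and their nodes."""
    result = {}
    for task, chain in call_chains.items():
        services = [s for s in chain if s in key_services]
        nodes = sorted({key_services[s] for s in services})
        result[task] = (services, nodes)
    return result

# key_nodes_per_task(CALL_CHAINS, KEY_SERVICES)
# -> {'s11': (['w32', 'w43'], [2, 3]), 's12': (['w23', 'w32', 'w43'], [1, 2, 3]),
#     's13': ([], []), 's14': (['w43'], [3])}, matching Table 1.
```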
Since a key service is a very frequently called service on the call chain that easily hits a thread-resource bottleneck, the key service node corresponding to it is likewise a frequently called node that easily hits a resource bottleneck (or crashes easily). Therefore, when judging whether the current candidate task can be executed, it is only necessary to measure whether the idle threads of the key service nodes to be called by the current candidate task can execute it. Hence, before judging whether the current candidate task can be executed, the number of idle threads of each key service node corresponding to the current candidate task is acquired first, and the minimum number of idle threads of those key service nodes is then determined from the acquired values.
Specifically, the minimum number of idle threads corresponding to the current candidate task can be determined from the acquired idle thread counts of its key service nodes in two ways. Comparison mode 1: directly compare the idle thread counts of all key service nodes corresponding to the current candidate task and take the minimum value as the minimum number of idle threads. Comparison mode 2: divide the key service nodes corresponding to all tasks in the service invocation hierarchical relationship into several node intervals and determine the idle thread count of each node interval, the real-time idle thread count of a node interval being the minimum of the real-time idle thread counts of the nodes in that interval; then, when the current candidate task is obtained, determine which node interval(s) its key service nodes match, and directly use the minimum idle thread count of the matched node interval(s) as the minimum number of idle threads of the key service nodes corresponding to the current candidate task.
Specifically, the comparison method 1 is a real-time comparison method, and is suitable for a case where the number of candidate tasks is small, or the number of key service nodes corresponding to the current candidate task is small, or the key service nodes corresponding to the current candidate task are distributed more dispersedly. The comparison mode 2 is suitable for the function scheduler to schedule a plurality of candidate tasks under the condition that the guest service is relatively stable in real time in the distributed system, or the condition that the number of key service nodes corresponding to the current candidate task is large and the current candidate task is relatively distributed and concentrated.
To combine the advantages of the two comparison modes, the embodiments of the present application store the real-time idle-thread data of the key service nodes of all tasks in the service invocation hierarchical relationship in a hierarchical (layered) manner: the bottom layer holds the real-time idle thread counts at the service-node level, and the middle and/or top layers hold the real-time idle thread counts at the node-interval level derived from the bottom layer. With hierarchical storage, whichever comparison mode applies to the current candidate task, the hierarchical storage model can be used, and it can be updated periodically so that it also applies to the next candidate task, improving the reuse rate of the model. In addition, when the real-time customer-facing traffic of the distributed system is stable, the real-time idle-thread data of each service node is stable, so when the hierarchical storage model evaluates the next candidate task, only the real-time idle-thread data of the service nodes and node intervals associated with the previous candidate task needs to be updated, which improves the reuse rate of the data in the model.
Therefore, in an embodiment, as shown in fig. 5, in step 202, obtaining the minimum number of idle threads of the key service node corresponding to each candidate task includes:
step 2021, building a service node classification tree according to each key service node in the service calling hierarchical relationship.
The service node classification tree comprises node intervals corresponding to all key service nodes and the real-time idle thread number of each node interval.
Step 2022, determining the minimum number of idle threads of the key service node corresponding to each candidate task according to the service node classification tree.
The node intervals can be divided in two ways: division manner 1), in which intervals follow the numbering continuity of the nodes; or division manner 2), in which intervals are divided with the key service as the aggregation unit. Division manner 1) places a higher requirement on the continuity of the nodes, while division manner 2) keeps a tighter correspondence between a key service and its key service nodes.
Specifically, consider one case under division manner 2): suppose the key services already stored in the service invocation hierarchical relationship include key service w1, whose key service nodes are {1, 2, 4}, and key service w2, whose key service nodes are {3, 5, 6, 8}; and suppose the key service w3 corresponding to the current candidate task has key service nodes {3, 5, 6, 7}. If matching is done at the key-service level, key service w3 does not match key service w2, yet key services w2 and w3 both occupy nodes 3, 5, and 6, so the usage of the key service nodes corresponding to the current candidate task may not be matched and estimated accurately.
Therefore, as shown in fig. 6, the embodiments of the present application divide the node intervals using division manner 1) and build a classified storage model using the idea of a classification tree, obtaining a service node classification tree. In a classification tree, the items of the data set are the leaf nodes, groups of leaf nodes form the internal nodes of the tree, and the root node is the set of all leaf nodes. Suppose the key service nodes stored in the service invocation hierarchical relationship are node 1, node 2, node 3, node 4, and node 5, and that they are consecutive. The root of the service node classification tree is then the node interval [1, 5]; the first layer of child nodes under [1, 5] consists of the node intervals [1, 3] and [4, 5]; the second layer consists of the node intervals [1, 2], [3, 3], [4, 4], and [5, 5]; and the third layer consists of the node intervals [1, 1] and [2, 2]. Suppose the real-time idle-thread data of node interval [1, 5] forms the resource array {3, 5, 8, 2, 90}, i.e., the real-time idle thread count of node 1 is 3, that of node 2 is 5, that of node 3 is 8, and so on. The minimum idle thread count of node interval [1, 5] is then 2, and 2 is the real-time idle thread count of node interval [1, 5]. Referring again to fig. 6, each node interval in the service node classification tree carries a parameter [x, y]: z, where x and y are the endpoints of the node interval and z is the minimum of the resource array over that interval, i.e., the real-time idle-thread data of the node interval.
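A minimal sketch of such a classification tree, built over the resource array {3, 5, 8, 2, 90}, is given below. It assumes a simple array-based interval-tree layout; this is an illustration of the described structure, not the patent's implementation:

```python
class ServiceNodeTree:
    """Interval tree over consecutive key service nodes; each tree node stores
    [lo, hi]: z, where z is the minimum real-time idle thread count of the interval."""

    def __init__(self, idle_threads):
        # idle_threads[i] is the real-time idle thread count of key service node i+1.
        self.n = len(idle_threads)
        self.min_idle = [0] * (4 * self.n)
        self.lazy = [None] * (4 * self.n)   # delay (Lazy) marks, see the update step below
        self._build(1, 1, self.n, idle_threads)

    def _build(self, pos, lo, hi, idle):
        if lo == hi:
            self.min_idle[pos] = idle[lo - 1]
            return
        mid = (lo + hi) // 2
        self._build(2 * pos, lo, mid, idle)
        self._build(2 * pos + 1, mid + 1, hi, idle)
        # Each interval stores the minimum of its two sub-intervals.
        self.min_idle[pos] = min(self.min_idle[2 * pos], self.min_idle[2 * pos + 1])

# tree = ServiceNodeTree([3, 5, 8, 2, 90])
# Splitting [1, 5] at the midpoint gives children [1, 3] and [4, 5], as in Fig. 6,
# and the root interval [1, 5] stores z = 2.
```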
In view of the foregoing comparison method 1, in an embodiment, as shown in fig. 7, in step 2022, determining the minimum number of idle threads of the key service node corresponding to each candidate task according to the service node classification tree includes:
0221, inquiring the number of real-time idle threads of the key service node corresponding to each candidate task from the service node classification tree.
In the above example, the real-time idle thread number of the key service node corresponding to each candidate task queried from the service node classification tree is the real-time idle thread data of the nodes [1, 1], [2, 2], [3, 3], [4, 4] and [5, 5] queried at the bottom layer in the service node classification tree.
0222, for any candidate task, determining the minimum value of the real-time idle thread number of the key service node corresponding to the candidate task as the minimum idle thread number of the key service node corresponding to the candidate task.
In the above example, the real-time idle thread data of the nodes [1, 1], [2, 2], [3, 3], [4, 4] and [5, 5] are compared, and the minimum value of each real-time idle thread data is 2, so that the minimum idle thread number of the key service node corresponding to the current candidate task is 2.
As for the comparison method 2, as shown in fig. 8, in step 2022, the determining the minimum idle thread number of the key service node corresponding to each candidate task according to the service node classification tree includes:
0223, for any candidate task, inquiring a target node interval to which a key service node corresponding to the candidate task belongs from the service node classification tree.
In comparison mode 2, the node interval formed by the key service nodes of the current candidate task is determined first; suppose it is [1, 4]. Continuing with the service node classification tree of the previous example: because the root interval [1, 5] is not contained in [1, 4], its left and right sub-intervals are queried downward, and the node intervals [1, 3] and [4, 4] are found to match the interval [1, 4] of the current candidate task. Node intervals [1, 3] and [4, 4] are therefore the target node intervals to which the key service nodes of the current candidate task belong.
0224, determining the minimum value of the real-time idle thread number of the target node interval as the minimum idle thread number of the key service node corresponding to the candidate task.
In the above example, the real-time idle-thread data of target node interval [1, 3] is 3 and that of target node interval [4, 4] is 2. Comparing the two, the minimum value is 2, so 2 is taken as the minimum number of idle threads of the key service nodes corresponding to the current candidate task.
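Continuing the illustrative sketch above, the interval query decomposes the node interval [1, 4] of the current candidate task against the tree and takes the minimum over the covered target node intervals (again an assumption-level sketch using the ServiceNodeTree built earlier; delay marks from later updates are ignored here for brevity):

```python
def query_min(tree, pos, lo, hi, qlo, qhi):
    """Minimum real-time idle thread count over key service nodes [qlo, qhi]."""
    if qlo <= lo and hi <= qhi:
        # Node interval fully covered, e.g. [1, 3] and [4, 4] for a query of [1, 4].
        return tree.min_idle[pos]
    mid = (lo + hi) // 2
    best = float("inf")
    if qlo <= mid:
        best = min(best, query_min(tree, 2 * pos, lo, mid, qlo, qhi))
    if qhi > mid:
        best = min(best, query_min(tree, 2 * pos + 1, mid + 1, hi, qlo, qhi))
    return best

# query_min(tree, 1, 1, tree.n, 1, 4)  ->  min(3, 2) == 2, matching the example.
```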
In the embodiment of the application, the minimum value in the real-time idle thread data of the target node interval is directly called as the minimum idle thread number of each key service node corresponding to the current candidate task, so that the scheduling speed of a single candidate task is improved, and the efficiency of the functional scheduler for scheduling a plurality of candidate tasks is improved by multiplexing the classified storage model and part of data in the classified storage model.
After the current candidate task is scheduled and before the next candidate task is scheduled, the classification tree of the service node needs to be updated, in an embodiment, as shown in fig. 9, after the task concurrency number of the corresponding candidate task is adjusted to obtain the target task concurrency number of each candidate task, the task scheduling method further includes step 206: and aiming at any candidate task, updating the real-time idle thread number of a target node interval corresponding to the candidate task in the service node classification tree according to the target task concurrency number of the candidate task, and marking the updated target node interval.
After obtaining the target task concurrency number of the current candidate task, the terminal 102 may communicate with the portal service of the service processing cluster in the distributed system, so as to directly schedule the current candidate task to start execution through the terminal 102, or return the generated target task concurrency number to the server 104.
Before the next candidate task is evaluated, the target node intervals in the service node classification tree are updated according to the target task concurrency number of the current candidate task, together with the node intervals associated with them. Continuing the previous example, the node interval formed by the key service nodes of the current candidate task is [1, 4], and its target node intervals are [1, 3] and [4, 4]. The z value (i.e., the real-time idle-thread data) of target node interval [1, 3] is adjusted to the target task concurrency number, and the z value of target node interval [4, 4] is adjusted to the target task concurrency number; the z value of the root node interval [1, 5] associated with the target node intervals is likewise adjusted to the target task concurrency number. After the current candidate task has finished executing, the real-time idle thread counts of the target node intervals are reset, realizing the reverse update.
As shown in fig. 6, a delay mark indicates that a target node interval, and the node intervals associated with it, have been marked; the mark "Lazy = target task concurrency number of the current candidate task" may be used. After the current candidate task has finished executing, the delay marks of the target node intervals are deleted. If the query for the minimum number of idle threads of the next candidate task touches a node of a target node interval of the current candidate task, the delay mark is pushed down to each node of that target node interval.
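A sketch of this delay-marked update, continuing the same illustrative tree (the "Lazy = target task concurrency number" mark of Fig. 6 is modelled simply as a stored value, and the reverse update after execution is indicated in a comment; this follows the description above, not the patent's code):

```python
def apply_candidate(tree, pos, lo, hi, qlo, qhi, target_concurrency):
    """Mark the target node intervals of the scheduled candidate task: set their z
    to the target task concurrency number and record a delay (Lazy) mark; the
    associated ancestor intervals are refreshed on the way back up."""
    if qlo <= lo and hi <= qhi:
        tree.min_idle[pos] = target_concurrency
        tree.lazy[pos] = target_concurrency      # Lazy = target task concurrency number
        return
    mid = (lo + hi) // 2
    if qlo <= mid:
        apply_candidate(tree, 2 * pos, lo, mid, qlo, qhi, target_concurrency)
    if qhi > mid:
        apply_candidate(tree, 2 * pos + 1, mid + 1, hi, qlo, qhi, target_concurrency)
    tree.min_idle[pos] = min(tree.min_idle[2 * pos], tree.min_idle[2 * pos + 1])

# apply_candidate(tree, 1, 1, tree.n, 1, 4, target_concurrency=2) marks [1, 3] and
# [4, 4] and refreshes the root [1, 5]. After the current candidate task finishes,
# the marks are cleared and the affected intervals are reset (the reverse update).
```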
In an embodiment, as shown in fig. 10, in step 204, adjusting the task concurrency number of the corresponding candidate task according to the minimum idle thread number of each key service node, to obtain the target task concurrency number of each candidate task, includes:
step 2041, comparing the task concurrency number of each candidate task with the minimum idle thread number of the key service node corresponding to each candidate task to obtain a comparison result of each candidate task.
For any candidate task, such as the current candidate task, the comparison result represents the size relationship between the task concurrency number of the current candidate task and the minimum idle thread number of the key service node corresponding to the current candidate task.
Step 2042, the task concurrency number corresponding to each candidate task is adjusted according to the comparison result of each candidate task, and the target task concurrency number of each candidate task is obtained.
The method for adjusting the task concurrency number of the corresponding candidate tasks according to the comparison result of each candidate task to obtain the target task concurrency number of each candidate task comprises the following steps: aiming at any candidate task, if the comparison result of the candidate task is that the task concurrency number of the candidate task is larger than the minimum idle thread number of the key service node corresponding to the candidate task, the task concurrency number of the candidate task is adjusted to be the same as the minimum idle thread number; the minimum idle thread number is the target task concurrency number of the candidate tasks. In the embodiment of the application, for any candidate task, for example, a current candidate task, when the current candidate task has a comparison result that the task concurrency number of the current candidate task is greater than the minimum idle thread number of the key service node corresponding to the current candidate task, the concurrency number of the current candidate task is reduced by adopting a current limiting idea, so that non-blocking scheduling is realized.
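The adjustment rule itself can be summarized in one line; the following hedged sketch reuses the illustrative names of the earlier examples:

```python
def target_concurrency(task_concurrency, min_idle_threads):
    # If the requested concurrency exceeds the minimum idle thread count of the
    # task's key service nodes, throttle it down; otherwise keep it unchanged.
    return min(task_concurrency, min_idle_threads)
```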
In summary, the terminal 102 acquires all candidate tasks in the distributed system and the task concurrency number of each candidate task from the server 104, and schedules the candidate tasks one by one before the timed/delayed execution time. When any candidate task is scheduled, it is taken as the current candidate task, and the key service nodes corresponding to it are determined according to the service invocation hierarchical relationship acquired from the registry. A service node classification tree is generated from the key service nodes corresponding to all tasks in the service invocation hierarchical relationship; the tree contains the node intervals of the key service nodes and the real-time idle thread count of each node interval. The tree is queried according to the key service nodes of the current candidate task, and the minimum number of idle threads of those key service nodes is determined by node-interval comparison or by node comparison.

If the minimum number of idle threads of the key service nodes of the current candidate task is greater than or equal to its task concurrency number, the key service nodes can execute the current candidate task at the current concurrency number and no adjustment is needed. If it is smaller, the key service nodes cannot execute the current candidate task at the current concurrency number, and the task concurrency number is adjusted to the target task concurrency number so that each key service node can execute the candidate task at the target task concurrency number.

Adjusting the task concurrency number of each candidate task according to the minimum number of idle threads of its key service nodes realizes the scheduling and planning of all candidate tasks. After the adjustment, for any candidate task, each corresponding key service node can execute the candidate task at the target task concurrency number, achieving the effect of planning in advance by estimating the thread-resource occupation of the nodes and effectively reducing the probability that the key service nodes block while the candidate tasks are executed. Moreover, by building the service node classification tree, the minimum value of the target node intervals can be used directly as the minimum number of idle threads of the key service nodes of the current candidate task, which speeds up the scheduling of a single candidate task; reusing the classified storage model and part of its data improves the efficiency with which the terminal 102 schedules multiple candidate tasks.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the illustrated order and may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a part of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the present application further provides a task scheduling apparatus for implementing the above task scheduling method. The implementation scheme provided by the apparatus for solving the problem is similar to that described for the method above, so for specific limitations in one or more embodiments of the task scheduling apparatus provided below, reference may be made to the limitations on the task scheduling method above, and details are not repeated here.
In one embodiment, as shown in fig. 11, there is provided a task scheduling apparatus 100 including: a first obtaining module 110, a second obtaining module 120, and an adjusting module 130, wherein:
a first obtaining module 110, configured to obtain all candidate tasks and the task concurrency number of each candidate task in the distributed system; the candidate task represents a task which is currently executed with a fault tolerance mechanism in the distributed system;
a second obtaining module 120, configured to obtain a minimum idle thread number of the key service node corresponding to each candidate task;
the adjusting module 130 is configured to adjust the task concurrency number of the corresponding candidate task according to the minimum idle thread number of each key service node, so as to obtain a target task concurrency number of each candidate task; the target task concurrency number is the number of tasks which can be executed by each key service node and correspond to the candidate tasks.
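For illustration only, the cooperation of these three modules can be sketched as a small Python class; the injected mappings stand in for whatever data sources the first and second obtaining modules actually consult, and every name and value is hypothetical.

```python
class TaskSchedulingDevice:
    """Toy analogue of apparatus 100: one method per module."""

    def __init__(self, candidate_concurrency, key_node_idle_threads):
        # First obtaining module's data: {candidate task: configured concurrency}.
        self.candidate_concurrency = candidate_concurrency
        # Second obtaining module's data: {candidate task: idle threads per key node}.
        self.key_node_idle_threads = key_node_idle_threads

    def target_concurrency(self, task):
        """Adjusting module: clamp the task's concurrency to the minimum
        idle thread count among its key service nodes."""
        min_idle = min(self.key_node_idle_threads[task])
        return min(self.candidate_concurrency[task], min_idle)

device = TaskSchedulingDevice({"reconciliation": 6}, {"reconciliation": [9, 4, 7]})
print(device.target_concurrency("reconciliation"))   # -> 4
```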
In one embodiment, the apparatus further comprises a third obtaining module configured to:
acquiring a service calling hierarchical relationship, wherein the service calling hierarchical relationship comprises a service node calling chain of each task;
determining service nodes with preset node identifiers in service node calling chains of all tasks in the service calling hierarchical relationship as key service nodes; the node identification is used for identifying the service node with the task scheduling blocking probability being greater than the preset probability value.
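A minimal sketch of this selection step is given below; the dictionary layout of the service node calling chains and the "key_flag" field (standing in for the preset node identifier) are assumptions made purely for illustration.

```python
def select_key_nodes(call_chains):
    """Collect the service nodes that carry the preset node identifier,
    i.e. nodes previously marked as having a task scheduling blocking
    probability above the preset probability value."""
    return {node["name"]
            for chain in call_chains.values()   # one calling chain per task
            for node in chain
            if node.get("key_flag")}            # 'key_flag' stands in for the identifier

# Hypothetical service calling hierarchical relationship.
chains = {
    "task-1": [{"name": "gateway"}, {"name": "ledger", "key_flag": True}],
    "task-2": [{"name": "ledger", "key_flag": True}, {"name": "notify"}],
}
print(select_key_nodes(chains))   # -> {'ledger'}
```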
In one embodiment, the second obtaining module 120 includes:
the construction module is used for constructing a service node classification tree according to each key service node in the service calling hierarchical relationship; the service node classification tree comprises node intervals corresponding to all key service nodes and the real-time idle thread number of each node interval;
and the matching module is used for determining the minimum idle thread number of the key service node corresponding to each candidate task according to the service node classification tree.
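One way to picture the classification tree, under the simplifying assumption that a node interval is just a named group of key service nodes together with their real-time idle thread counts, is the following sketch; the grouping rule and all data are hypothetical.

```python
from collections import defaultdict

def build_classification_tree(key_nodes, idle_threads, interval_of):
    """Group key service nodes into node intervals and keep each node's
    real-time idle thread count inside its interval."""
    tree = defaultdict(dict)
    for node in key_nodes:
        tree[interval_of(node)][node] = idle_threads[node]
    return dict(tree)

# Hypothetical monitoring data; here intervals are simply derived from a name prefix.
idle = {"pay-db": 3, "pay-cache": 12, "risk-engine": 7}
tree = build_classification_tree(idle.keys(), idle, interval_of=lambda n: n.split("-")[0])
print(tree)   # {'pay': {'pay-db': 3, 'pay-cache': 12}, 'risk': {'risk-engine': 7}}
```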
In one embodiment, the matching module is further configured to query, from the service node classification tree, a real-time idle thread number of a key service node corresponding to each candidate task;
and aiming at any candidate task, determining the minimum value in the real-time idle thread number of the key service node corresponding to the candidate task as the minimum idle thread number of the key service node corresponding to the candidate task.
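Using the tree representation sketched above, the node-by-node comparison could look as follows; this is an illustrative reading of the embodiment, not its literal implementation.

```python
def min_idle_by_node(task_key_nodes, tree):
    """Node-comparison mode: read the real-time idle thread count of each of
    the task's key service nodes from the classification tree and take the
    minimum value."""
    return min(count
               for interval in tree.values()
               for node, count in interval.items()
               if node in task_key_nodes)

tree = {"pay": {"pay-db": 3, "pay-cache": 12}, "risk": {"risk-engine": 7}}
print(min_idle_by_node({"pay-cache", "risk-engine"}, tree))   # -> 7
```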
In one embodiment, the matching module is further configured to, for any candidate task, query a target node interval to which a key service node corresponding to the candidate task belongs from the service node classification tree;
and determining the minimum value of the real-time idle thread number of the target node interval as the minimum idle thread number of the key service node corresponding to the candidate task.
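The interval-based variant trades a little precision for speed: instead of visiting every key service node, it reads the minimum of the whole target node interval. A hypothetical sketch, using the same tree layout as above:

```python
def min_idle_by_interval(task_key_nodes, tree):
    """Interval-comparison mode: locate the target node intervals that contain
    the task's key service nodes and return the smallest interval-level
    idle thread count."""
    return min(min(interval.values())
               for interval in tree.values()
               if task_key_nodes & interval.keys())

tree = {"pay": {"pay-db": 3, "pay-cache": 12}, "risk": {"risk-engine": 7}}
# The whole "pay" interval is consulted, so its minimum (3) is returned even
# though "pay-cache" itself has 12 idle threads.
print(min_idle_by_interval({"pay-cache"}, tree))   # -> 3
```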
In one embodiment, the apparatus further comprises:
and the updating module is used for updating the real-time idle thread number of the target node interval corresponding to the candidate task in the service node classification tree according to the target task concurrency number of the candidate task aiming at any candidate task and marking the updated target node interval.
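How the update module's bookkeeping might look is sketched below; deducting the target task concurrency number from every node of the interval and recording the interval in a "marks" set are assumptions made for illustration, since the embodiment only states that the interval's real-time idle thread number is updated and the interval is marked.

```python
def reserve_interval_budget(tree, interval_name, target_concurrency, marks):
    """Deduct a planned task's target concurrency from the real-time idle
    thread counts of its target node interval and mark the interval as
    updated, so that later candidate tasks see the reduced budget."""
    for node in tree[interval_name]:
        tree[interval_name][node] = max(0, tree[interval_name][node] - target_concurrency)
    marks.add(interval_name)

tree = {"pay": {"pay-db": 3, "pay-cache": 12}}
marks = set()
reserve_interval_budget(tree, "pay", 2, marks)
print(tree, marks)   # {'pay': {'pay-db': 1, 'pay-cache': 10}} {'pay'}
```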
In one embodiment, the adjusting module 130 further comprises:
the result generation module is used for comparing the task concurrency number of each candidate task with the minimum idle thread number of the key service node corresponding to each candidate task to obtain a comparison result of each candidate task;
and the adjusting module is used for adjusting the task concurrency number corresponding to the candidate task according to the comparison result of each candidate task to obtain the target task concurrency number of each candidate task.
In one embodiment, the adjusting module is further configured to, for any candidate task, adjust the concurrency number of the candidate task to the same number as the minimum number of idle threads if the comparison result of the candidate task is that the concurrency number of the candidate task is greater than the minimum number of idle threads of the key service node corresponding to the candidate task; the minimum idle thread number is the target task concurrency number of the candidate tasks.
Each module in the task scheduling apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware form in, or independent of, a processor in the computer device, or stored in software form in a memory of the computer device, so that the processor can invoke them and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal and whose internal structure may be as shown in fig. 12. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected through a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication may be implemented through WIFI, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a task scheduling method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, or mouse.
Those skilled in the art will appreciate that the structure shown in fig. 12 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring all candidate tasks and the task concurrency number of each candidate task in a distributed system; the candidate task represents a task which is currently executed with a fault tolerance mechanism in the distributed system;
acquiring the minimum idle thread number of the key service node corresponding to each candidate task;
according to the minimum idle thread number of each key service node, adjusting the task concurrency number corresponding to the candidate task to obtain the target task concurrency number of each candidate task; the target task concurrency number is the number of tasks which can be executed by each key service node and correspond to the candidate tasks.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring a service calling hierarchical relationship, wherein the service calling hierarchical relationship comprises a service node calling chain of each task; determining service nodes with preset node identifications in service node calling chains of all tasks in the service calling hierarchical relationship as key service nodes; the node identification is used for identifying the service node with the task scheduling blocking probability being greater than the preset probability value.
In one embodiment, the processor, when executing the computer program, further performs the steps of: constructing a service node classification tree according to each key service node in the service calling hierarchical relationship; the service node classification tree comprises node intervals corresponding to all key service nodes and the real-time idle thread number of each node interval; and determining the minimum idle thread number of the key service node corresponding to each candidate task according to the service node classification tree.
In one embodiment, the processor, when executing the computer program, further performs the steps of: inquiring the real-time idle thread number of the key service node corresponding to each candidate task from the service node classification tree; and aiming at any candidate task, determining the minimum value in the real-time idle thread number of the key service node corresponding to the candidate task as the minimum idle thread number of the key service node corresponding to the candidate task.
In one embodiment, the processor, when executing the computer program, further performs the steps of: for any candidate task, inquiring a target node interval to which a key service node corresponding to the candidate task belongs from a service node classification tree; and determining the minimum value of the real-time idle thread number of the target node interval as the minimum idle thread number of the key service node corresponding to the candidate task.
In one embodiment, the processor when executing the computer program further performs the steps of: and aiming at any candidate task, updating the real-time idle thread number of a target node interval corresponding to the candidate task in the service node classification tree according to the target task concurrency number of the candidate task, and marking the updated target node interval.
In one embodiment, the processor, when executing the computer program, further performs the steps of: comparing the task concurrency number of each candidate task with the minimum idle thread number of the key service node corresponding to each candidate task to obtain a comparison result of each candidate task; and adjusting the task concurrency number corresponding to the candidate task according to the comparison result of each candidate task to obtain the target task concurrency number of each candidate task.
In one embodiment, the processor, when executing the computer program, further performs the steps of: aiming at any candidate task, if the comparison result of the candidate task is that the task concurrency number of the candidate task is larger than the minimum idle thread number of the key service node corresponding to the candidate task, the task concurrency number of the candidate task is adjusted to be the same as the minimum idle thread number; and the minimum idle thread number is the target task concurrency number of the candidate tasks.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring all candidate tasks and the task concurrency number of each candidate task in a distributed system; the candidate task represents a task which is currently executed with a fault tolerance mechanism in the distributed system;
acquiring the minimum idle thread number of the key service node corresponding to each candidate task;
according to the minimum idle thread number of each key service node, adjusting the task concurrency number of the corresponding candidate task to obtain the target task concurrency number of each candidate task; the target task concurrency number is the number of tasks which can be executed by each key service node and correspond to the candidate tasks.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring a service calling hierarchical relationship, wherein the service calling hierarchical relationship comprises a service node calling chain of each task; determining service nodes with preset node identifications in service node calling chains of all tasks in the service calling hierarchical relationship as key service nodes; the node identification is used for identifying the service node with the task scheduling blocking probability being greater than the preset probability value.
In one embodiment, the computer program when executed by the processor further performs the steps of: constructing a service node classification tree according to each key service node in the service calling hierarchical relationship; the service node classification tree comprises node intervals corresponding to all key service nodes and the real-time idle thread number of each node interval; and determining the minimum idle thread number of the key service node corresponding to each candidate task according to the service node classification tree.
In one embodiment, the computer program when executed by the processor further performs the steps of: inquiring the real-time idle thread number of the key service node corresponding to each candidate task from the service node classification tree; and aiming at any candidate task, determining the minimum value in the real-time idle thread number of the key service node corresponding to the candidate task as the minimum idle thread number of the key service node corresponding to the candidate task.
In one embodiment, the computer program when executed by the processor further performs the steps of: for any candidate task, inquiring a target node interval to which a key service node corresponding to the candidate task belongs from a service node classification tree; and determining the minimum value of the real-time idle thread number of the target node interval as the minimum idle thread number of the key service node corresponding to the candidate task.
In one embodiment, the computer program when executed by the processor further performs the steps of: and aiming at any candidate task, updating the real-time idle thread number of a target node interval corresponding to the candidate task in the service node classification tree according to the target task concurrency number of the candidate task, and marking the updated target node interval.
In one embodiment, the computer program when executed by the processor further performs the steps of: comparing the task concurrency number of each candidate task with the minimum idle thread number of the key service node corresponding to each candidate task to obtain a comparison result of each candidate task; and adjusting the task concurrency number corresponding to the candidate task according to the comparison result of each candidate task to obtain the target task concurrency number of each candidate task.
In one embodiment, the computer program when executed by the processor further performs the steps of: aiming at any candidate task, if the comparison result of the candidate task is that the task concurrency number of the candidate task is larger than the minimum idle thread number of the key service node corresponding to the candidate task, the task concurrency number of the candidate task is adjusted to be the same as the minimum idle thread number; the minimum idle thread number is the target task concurrency number of the candidate tasks.
In one embodiment, a computer program product is provided, comprising a computer program which when executed by a processor performs the steps of:
acquiring all candidate tasks and the task concurrency number of each candidate task in a distributed system; the candidate task represents a task which is currently executed with a fault tolerance mechanism in the distributed system;
acquiring the minimum idle thread number of the key service node corresponding to each candidate task;
according to the minimum idle thread number of each key service node, adjusting the task concurrency number corresponding to the candidate task to obtain the target task concurrency number of each candidate task; the target task concurrency number is the number of tasks which can be executed by each key service node and correspond to the candidate tasks.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring a service calling hierarchical relationship, wherein the service calling hierarchical relationship comprises a service node calling chain of each task; determining service nodes with preset node identifications in service node calling chains of all tasks in the service calling hierarchical relationship as key service nodes; the node identification is used for identifying the service node with the task scheduling blocking probability being greater than the preset probability value.
In one embodiment, the computer program when executed by the processor further performs the steps of: constructing a service node classification tree according to each key service node in the service calling hierarchical relationship; the service node classification tree comprises node intervals corresponding to all key service nodes and the real-time idle thread number of each node interval; and determining the minimum idle thread number of the key service node corresponding to each candidate task according to the service node classification tree.
In one embodiment, the computer program when executed by the processor further performs the steps of: inquiring the real-time idle thread number of the key service node corresponding to each candidate task from the service node classification tree; and aiming at any candidate task, determining the minimum value in the real-time idle thread number of the key service node corresponding to the candidate task as the minimum idle thread number of the key service node corresponding to the candidate task.
In one embodiment, the computer program when executed by the processor further performs the steps of: for any candidate task, inquiring a target node interval to which a key service node corresponding to the candidate task belongs from a service node classification tree; and determining the minimum value of the real-time idle thread number of the target node interval as the minimum idle thread number of the key service node corresponding to the candidate task.
In one embodiment, the computer program when executed by the processor further performs the steps of: and aiming at any candidate task, updating the real-time idle thread number of a target node interval corresponding to the candidate task in the service node classification tree according to the target task concurrency number of the candidate task, and marking the updated target node interval.
In one embodiment, the computer program when executed by the processor further performs the steps of: comparing the task concurrency number of each candidate task with the minimum idle thread number of the key service node corresponding to each candidate task to obtain a comparison result of each candidate task; and adjusting the task concurrency number corresponding to the candidate task according to the comparison result of each candidate task to obtain the target task concurrency number of each candidate task.
In one embodiment, the computer program when executed by the processor further performs the steps of: aiming at any candidate task, if the comparison result of the candidate task is that the task concurrency number of the candidate task is larger than the minimum idle thread number of the key service node corresponding to the candidate task, the task concurrency number of the candidate task is adjusted to be the same as the minimum idle thread number; the minimum idle thread number is the target task concurrency number of the candidate tasks.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (12)

1. A method for task scheduling, the method comprising:
acquiring all candidate tasks and the task concurrency number of each candidate task in a distributed system; the candidate task represents a task in the distributed system that is currently executed with a fault tolerance mechanism;
acquiring the minimum idle thread number of the key service node corresponding to each candidate task;
according to the minimum idle thread number of each key service node, adjusting the task concurrency number of the corresponding candidate task to obtain the target task concurrency number of each candidate task; and the target task concurrency number is the number of tasks which can be executed by each key service node and corresponds to the candidate task.
2. The method of claim 1, wherein before obtaining the minimum number of idle threads of the key service node corresponding to each of the candidate tasks, the method further comprises:
acquiring a service calling hierarchical relationship, wherein the service calling hierarchical relationship comprises a service node calling chain of each task;
determining service nodes with preset node identifiers in service node calling chains of all tasks in the service calling hierarchical relationship as key service nodes; the node identification is used for identifying the service node with the task scheduling blocking probability being greater than the preset probability value.
3. The method of claim 2, wherein the obtaining the minimum number of idle threads of the key service node corresponding to each candidate task comprises:
constructing a service node classification tree according to each key service node in the service calling hierarchical relationship; the service node classification tree comprises node intervals corresponding to all the key service nodes and the real-time idle thread number of each node interval;
and determining the minimum idle thread number of the key service node corresponding to each candidate task according to the service node classification tree.
4. The method of claim 3, wherein determining the minimum number of idle threads of the key service node corresponding to each candidate task according to the service node classification tree comprises:
inquiring the real-time idle thread number of the key service node corresponding to each candidate task from the service node classification tree;
and aiming at any candidate task, determining the minimum value in the real-time idle thread number of the key service node corresponding to the candidate task as the minimum idle thread number of the key service node corresponding to the candidate task.
5. The method of claim 3, wherein determining a minimum number of free threads for a key service node corresponding to each of the candidate tasks according to the service node classification tree comprises:
for any candidate task, inquiring a target node interval to which a key service node corresponding to the candidate task belongs from the service node classification tree;
and determining the minimum value of the real-time idle thread number of the target node interval as the minimum idle thread number of the key service node corresponding to the candidate task.
6. The method according to any one of claims 3 to 5, wherein after adjusting the task concurrency number of the corresponding candidate tasks to obtain the target task concurrency number of each candidate task, the method further comprises:
and aiming at any candidate task, updating the real-time idle thread number of a target node interval corresponding to the candidate task in the service node classification tree according to the target task concurrency number of the candidate task, and marking the updated target node interval.
7. The method according to any one of claims 1 to 5, wherein adjusting the task concurrency number of the corresponding candidate task according to the minimum idle thread number of each key service node to obtain the target task concurrency number of each candidate task comprises:
comparing the task concurrency number of each candidate task with the minimum idle thread number of the key service node corresponding to each candidate task to obtain a comparison result of each candidate task;
and adjusting the task concurrency number corresponding to the candidate task according to the comparison result of each candidate task to obtain the target task concurrency number of each candidate task.
8. The method according to claim 7, wherein the adjusting the task concurrency number corresponding to each candidate task according to the comparison result of each candidate task to obtain the target task concurrency number of each candidate task comprises:
aiming at any candidate task, if the comparison result of the candidate task is that the task concurrency number of the candidate task is larger than the minimum idle thread number of the key service node corresponding to the candidate task, the task concurrency number of the candidate task is adjusted to be the same as the minimum idle thread number; and the minimum idle thread number is the target task concurrency number of the candidate tasks.
9. A task scheduling apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring all candidate tasks and the task concurrency number of each candidate task in the distributed system; the candidate task represents a task in the distributed system that is currently executed with a fault tolerance mechanism;
the second obtaining module is used for obtaining the minimum idle thread number of the key service node corresponding to each candidate task;
the adjusting module is used for adjusting the task concurrency number corresponding to the candidate task according to the minimum idle thread number of each key service node to obtain the target task concurrency number of each candidate task; and the target task concurrency number is the number of tasks which can be executed by each key service node and corresponds to the candidate task.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
12. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 8 when executed by a processor.
CN202210757280.5A 2022-06-30 2022-06-30 Task scheduling method, device, computer equipment, storage medium and program product Pending CN115016915A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210757280.5A CN115016915A (en) 2022-06-30 2022-06-30 Task scheduling method, device, computer equipment, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210757280.5A CN115016915A (en) 2022-06-30 2022-06-30 Task scheduling method, device, computer equipment, storage medium and program product

Publications (1)

Publication Number Publication Date
CN115016915A true CN115016915A (en) 2022-09-06

Family

ID=83078888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210757280.5A Pending CN115016915A (en) 2022-06-30 2022-06-30 Task scheduling method, device, computer equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN115016915A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117472594A (en) * 2023-12-27 2024-01-30 中诚华隆计算机技术有限公司 Processor task execution method based on subtask characteristics

Similar Documents

Publication Publication Date Title
US11442790B2 (en) Resource scheduling system, method and server for ensuring high availability of services
US20170293865A1 (en) Real-time updates to item recommendation models based on matrix factorization
CN112910945B (en) Request link tracking method and service request processing method
US10505863B1 (en) Multi-framework distributed computation
CN103713935B (en) Method and device for managing Hadoop cluster resources in online manner
CN111191221A (en) Method and device for configuring authority resources and computer readable storage medium
CN110335009A (en) Report form generation method, device, computer equipment and storage medium
CN111190753A (en) Distributed task processing method and device, storage medium and computer equipment
CN109196807A (en) The method of network node and operation network node to carry out resource dissemination
CN111222821A (en) Goods supplementing method and device, computer equipment and storage medium
CN115016915A (en) Task scheduling method, device, computer equipment, storage medium and program product
US20210034574A1 (en) Systems and methods for verifying performance of a modification request in a database system
Wang et al. Concept drift-based runtime reliability anomaly detection for edge services adaptation
CN112347394A (en) Method and device for acquiring webpage information, computer equipment and storage medium
US20180336252A1 (en) Summarization of Large Histograms
CN115118612A (en) Resource quota management method and device, computer equipment and storage medium
Lin et al. A multi-centric model of resource and capability management in cloud simulation
CN115102784B (en) Rights information management method, device, computer equipment and storage medium
CN111813842B (en) Data processing method, device, system, equipment and storage medium
CN116882648A (en) Account resource allocation method, device, computer equipment and storage medium
CN117453759B (en) Service data processing method, device, computer equipment and storage medium
US20240177035A1 (en) Quantum system view via gateway mechanisms
US20220374729A1 (en) Assessing entity performance using machine learning
US20230281500A1 (en) Managing access to quantum services in quantum computing devices
US20240179102A1 (en) Systems and methods for managing message queuing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination