CN113535363A - Task calling method and device, electronic equipment and storage medium - Google Patents

Task calling method and device, electronic equipment and storage medium

Info

Publication number
CN113535363A
CN113535363A (application number CN202110857492.6A)
Authority
CN
China
Prior art keywords
task
resource
node
tasks
node cluster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110857492.6A
Other languages
Chinese (zh)
Inventor
孙振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Duxiaoman Youyang Technology Co ltd
Original Assignee
Chongqing Duxiaoman Youyang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Duxiaoman Youyang Technology Co ltd
Priority to CN202110857492.6A
Publication of CN113535363A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/25 Integrating or interfacing systems involving database management systems
    • G06F 16/254 Extract, transform and load [ETL] procedures, e.g. ETL data flows in data warehouses
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The embodiment of the application provides a task calling method, a task calling device, electronic equipment and a storage medium, wherein the method comprises the following steps: acquiring a task description file, where the task description file includes the dependency relationships of N tasks and the resource amount required to execute each of the N tasks, and N is a positive integer; obtaining a directed acyclic graph describing the execution order of the N tasks according to the dependency relationships of the N tasks; acquiring a resource configuration file, where the resource configuration file includes the resource information of a node cluster; and allocating the N tasks to the nodes in the node cluster for execution according to the directed acyclic graph, the resource amount required to execute each of the N tasks, and the resource information of the node cluster. In other words, during task calling, the resource management of standalone scheduling is merged into the topological sorting process of the directed acyclic graph, so that the reliability and efficiency of task calling are improved.

Description

Task calling method and device, electronic equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a task calling method and device, electronic equipment and a storage medium.
Background
ETL (Extract-Transform-Load) describes the process of extracting (Extract), transforming (Transform), and loading (Load) data from a source to a destination. An ETL task calling system is used to find a suitable server node on which to execute an ETL task, and this process of finding a suitable server node for an ETL task is called scheduling.
A common ETL task calling method is timed, sharded task scheduling, which splits a task into a number of small tasks according to user-defined rules and deploys the small tasks on the nodes of a distributed cluster for execution. Specifically, a task is triggered to execute when the execution time set for it arrives.
As can be seen from the above, this timed, sharded scheduling method depends on the configured execution time, pays no attention to the dependency relationships between tasks, and allocates resources unevenly, which results in poor task execution stability.
Disclosure of Invention
The embodiment of the application provides a task calling method and device, an electronic device and a storage medium, and is used for improving the execution stability of a task.
In a first aspect, an embodiment of the present application provides a task invoking method, including:
acquiring a task description file and a resource configuration file, wherein the task description file comprises the dependency relationship of N tasks and the resource amount required by executing each task in the N tasks, the resource configuration file comprises the resource information of a node cluster, and N is a positive integer;
obtaining a directed acyclic graph describing the execution sequence of the N tasks according to the dependency relationship of the N tasks;
and distributing the N tasks to each node in the node cluster to execute according to the directed acyclic graph, the resource amount required by executing each task in the N tasks and the resource information of the node cluster.
In a second aspect, an embodiment of the present application provides a task invoking device, including:
a first obtaining unit, configured to obtain a task description file, where the task description file includes the dependency relationships of N tasks and the resource amount required to execute each of the N tasks, and N is a positive integer;
the processing unit is used for obtaining a directed acyclic graph describing the execution sequence of the N tasks according to the dependency relationship of the N tasks;
a second obtaining unit, configured to obtain a resource configuration file, where the resource configuration file includes resource information of a node cluster;
and the calling unit is used for distributing the N tasks to each node in the node cluster to execute according to the directed acyclic graph, the resource amount required by executing each task in the N tasks and the resource information of the node cluster.
In some embodiments, the invoking unit is specifically configured to: write at least one first task into a resource allocation queue, where the at least one first task written into the resource allocation queue for the first time is the at least one task with an in-degree of 0 in the directed acyclic graph; extract a first task from the resource allocation queue, and query whether the remaining resource amount of the node cluster satisfies the resource amount required to execute the first task; when the query shows that the remaining resource amount of the node cluster satisfies the resource amount required to execute the first task in the resource allocation queue, determine a first node from the node cluster and write the first node and the first task into a task execution queue, where the first task is the task that is first in the execution order in the resource allocation queue; and traverse at least one downstream task of the first task in the directed acyclic graph, take the at least one downstream task as a new at least one first task, and repeat the above steps until each task in the directed acyclic graph has been allocated to a node in the node cluster for execution.
In some embodiments, the invoking unit is further configured to configure a sleep time for a first task in the resource allocation queue when it is queried that the remaining resource amount of the node cluster does not satisfy the resource amount required for executing the first task; and writing the first task into the resource allocation queue again when the first task configuration sleep time is up.
In some embodiments, the task description file further includes a type of each task in the N tasks in terms of resource usage, and the invoking unit is specifically configured to obtain the type of the first task in terms of resource usage from the task description file; and acquiring a first node matched with the type of the first task on the resource use from the node cluster according to the type of the first task on the resource use.
In some embodiments, the invoking unit is specifically configured to obtain, from the node cluster, at least one node that matches the type of the first task in terms of resource usage according to the type of the first task in terms of resource usage; and selecting the first node from the at least one node according to a preset screening strategy.
Optionally, the preset screening policy includes any one of a worst adaptation policy, a first adaptation policy, a next adaptation policy, and a best adaptation policy.
In some embodiments, the invoking unit is further configured to check whether a first task in the resource allocation queue is timed out before querying whether a remaining resource amount of the node cluster satisfies a resource amount required for executing the first task; and inquiring whether the residual resource quantity of the node cluster meets the resource quantity required for executing the first task or not when the first task is checked not to be overtime.
In some embodiments, the invoking unit is specifically configured to, when it is determined that a sum of resource amounts required for executing tasks in the same level in the directed acyclic graph is less than or equal to a remaining resource amount of the node cluster, and/or when it is determined that a resource amount required for executing any task in the directed acyclic graph is less than or equal to a remaining resource amount of a node with the largest remaining resources in the node cluster, allocate the N tasks to each node in the node cluster for execution according to the directed acyclic graph, a resource amount required for executing each task in the N tasks, and resource information of the node cluster.
In some embodiments, the first obtaining unit is further configured to check whether a closed loop exists in the directed acyclic graph; if the directed acyclic graph is checked to have a closed loop, generating first indication information, wherein the first indication information is used for indicating a user to modify the dependency relationship of the N tasks; and if the directed acyclic graph is checked to have no closed loop, acquiring a resource configuration file.
In some embodiments, at least one of the N nodes includes a work process and an agent process, where the work process is configured to execute a scheduled task, and the agent process is configured to report resource usage information of the node where the work process is located.
In a third aspect, an embodiment of the present application further provides an electronic device, including a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program to implement the task calling method according to any one of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, which includes computer instructions, and when the instructions are executed by a computer, the computer is enabled to implement the task calling method according to any one of the first aspect.
In a fifth aspect, the present application provides a computer program product, where the program product includes a computer program, where the computer program is stored in a readable storage medium, and the computer program can be read by at least one processor of a computer from the readable storage medium, and the at least one processor executes the computer program to make the computer implement the task calling method according to any one of the first aspect.
According to the task calling method, the task calling device, the electronic equipment and the storage medium, a task description file is obtained, where the task description file includes the dependency relationships of N tasks and the resource amount required to execute each of the N tasks, N being a positive integer; a directed acyclic graph describing the execution order of the N tasks is obtained according to the dependency relationships of the N tasks; a resource configuration file is acquired, where the resource configuration file includes the resource information of a node cluster; and the N tasks are allocated to the nodes in the node cluster for execution according to the directed acyclic graph, the resource amount required to execute each of the N tasks, and the resource information of the node cluster. In other words, during task calling, the resource management of standalone scheduling is merged into the topological sorting process of the directed acyclic graph, so that the reliability and efficiency of task calling are improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic diagram of a task invoking system architecture provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a task invoking method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a task invoking method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a task invoking method according to an embodiment of the present application;
FIG. 5 is a diagram illustrating an example of a task invocation system according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a task invoking device according to an embodiment of the present application
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be understood that, in the present embodiments, "B corresponding to A" means that B is associated with A. In one implementation, B may be determined from A. It should also be understood that determining B from A does not mean determining B from A alone; B may be determined from A and/or other information.
In the description of the present application, "plurality" means two or more than two unless otherwise specified.
In addition, in order to facilitate clear description of technical solutions of the embodiments of the present application, in the embodiments of the present application, terms such as "first" and "second" are used to distinguish the same items or similar items having substantially the same functions and actions. Those skilled in the art will appreciate that the terms "first," "second," etc. do not denote any order or quantity, nor do the terms "first," "second," etc. denote any order or importance.
In order to facilitate understanding of the embodiments of the present application, the related concepts related to the embodiments of the present application are first briefly described as follows:
monomer scheduling: one node in the cluster runs a scheduling program, and the node has the authority of accessing other nodes, can collect resource information, state information and other information of each node and then manages the information in a unified manner. And matching according to the task submitted by the user and the requirement of the resource, wherein the finally matched result is the node for executing the task.
Directed Acyclic Graphs (DAGs) refer to a directed graph without loops in mathematics, particularly graph theory and computer science.
Degree of entry: refers to the sum of the number of times a certain point in the directed graph is taken as the end point of the edge in the graph.
Output degree: refers to the sum of the number of times a certain point in the directed graph is taken as the starting point of the edge in the graph.
Kahn algorithm: a classical algorithm for ordering nodes of a directed acyclic graph. And on the premise of not damaging the node sequence, the DAG is pulled into a chain.
ETL: is an abbreviation of english Extract-Transform-Load, and is used to describe the process of extracting (Extract), converting (Transform), and loading (Load) data from the source end to the destination end.
Task: the tasks in the cross-bottom book mainly refer to related tasks of extracting (extract), converting (transform) and loading (load) data.
Resource: resources in the book of intersection are mainly divided into two categories:
(1) scheduling resources inside the cluster, including but not limited to CPU, GPU, memory, hard disk storage;
(2) scheduling cluster external resources including, but not limited to, query-per-second (QPS) quotas for dependent hypertext Transfer Protocol (HTTP) services, pressure limits for relational database management system (MySQL) cluster services.
QPS: the number of completed requests can be processed for 1 second for one HTTP service.
Fig. 1 is a schematic diagram of a task invocation system architecture according to an embodiment of the present application, and as shown in fig. 1, the task invocation system architecture includes a Master (Master) node and a plurality of Slave (Slave) nodes.
The Master node runs a scheduling process that is responsible for resource management and for matching tasks with resources.
The Slave nodes comprise Node 1, Node 2, ..., Node N, each of which reports its node state to the Master node.
In some embodiments, the Master node includes a Cluster State module and a Scheduling Logic module.
The Cluster State module is used for managing the states of the nodes in the cluster, such as their resources, and for transmitting the resource states of the nodes to the Scheduling Logic module.
The Scheduling Logic module matches the tasks with the resources and sends the tasks to the matched nodes according to the matching result.
The technical solutions of the embodiments of the present application are described in detail below with reference to some embodiments. The following several embodiments may be combined with each other and may not be described in detail in some embodiments for the same or similar concepts or processes.
Fig. 2 is a schematic flowchart of a task invoking method according to an embodiment of the present application, and as shown in fig. 2, the task invoking method according to the embodiment includes:
s201, acquiring a task description file.
The execution subject of the embodiment of the present application may be understood as a device having a task calling function, for example, a task calling device. In the distributed system, the task invoking device is a node for task invocation in the distributed system, such as the master node shown in fig. 1, or a component in the node for task invocation, such as a processor in the node.
The following embodiment takes an execution subject as a task calling node as an example for explanation.
The task description file comprises the dependency relationship of the N tasks and the resource quantity required by executing each task of the N tasks, wherein N is a positive integer.
In one example, the amount of resources required by a task includes the amount of resources that executing the task needs to occupy, or a fluctuation range of that amount, such as 10G to 20G of memory required to execute a certain task.
In one example, the task description file also includes a dependency description of the task on the resource, e.g., a task is primarily dependent on the CPU.
In one example, the task description file also includes a classification of the task on resource usage, such as compute intensive (e.g., executing the task relies primarily on a CPU or GPU), IO (Input/Output) intensive, and so on.
In one example, the task description file further includes the trigger timing of each task, where the trigger timings include, but are not limited to, the following:
Timing 1: if the triggering of the task depends on the completion of one or more other tasks, the globally unique name or id of each upstream task it depends on needs to be described explicitly.
Timing 2: if the triggering of the task depends on time, the execution time and execution period set for the timed task need to be described explicitly.
In one example, the task description file also includes task base attributes. Optionally, the task basic attribute includes at least one of: parameters of the task, the number of retries of the task execution failure, the timeout time of the task, and the like.
In some embodiments, the task description file is written by a user, such as a data engineer.
Optionally, the user writes a specific execution logic of the task in addition to writing the task description file.
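For illustration only, a task description file of this kind might look as follows once parsed; the application does not fix a concrete file format, and every field name below is an assumption rather than something defined in the text.

```python
# Hypothetical parsed task description; field names are illustrative assumptions.
task_description = {
    "tasks": [
        {
            "id": "extract_orders",                    # globally unique name/id
            "depends_on": [],                          # no upstream tasks: in-degree 0
            "resources": {"memory_gb": (10, 20)},      # required amount, here a range
            "resource_type": "io_intensive",           # classification on resource usage
            "retries": 3,                              # retry count on execution failure
            "timeout_s": 3600,                         # task timeout
        },
        {
            "id": "transform_orders",
            "depends_on": ["extract_orders"],          # triggered by upstream completion
            "resources": {"cpu_cores": 8, "memory_gb": (16, 16)},
            "resource_type": "compute_intensive",
            "retries": 1,
            "timeout_s": 1800,
        },
    ]
}
```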
S202, obtaining a directed acyclic graph describing the execution sequence of the N tasks according to the dependency relationship of the N tasks.
Following the above step, after the task description file is obtained, it is parsed to obtain the dependency relationships of the N tasks, and the directed acyclic graph is then drawn according to those dependency relationships, yielding a directed acyclic graph that describes the execution order of the N tasks.
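A minimal sketch of this step, assuming the dependencies have already been parsed into a mapping from each task to its upstream tasks; it builds the adjacency list and in-degree table on which the later topological scheduling relies. The helper name and data shapes are assumptions, not part of the application.

```python
from collections import defaultdict

def build_dag(dependencies):
    """dependencies: mapping task_id -> list of upstream task_ids it depends on."""
    downstream = defaultdict(list)            # edge u -> v means v depends on u
    in_degree = {task: 0 for task in dependencies}
    for task_id, upstream_ids in dependencies.items():
        for up in upstream_ids:
            downstream[up].append(task_id)
            in_degree[task_id] += 1
    return downstream, in_degree

# Example: transform depends on extract, load depends on transform.
downstream, in_degree = build_dag({
    "extract": [],
    "transform": ["extract"],
    "load": ["transform"],
})
print(dict(downstream))   # {'extract': ['transform'], 'transform': ['load']}
print(in_degree)          # {'extract': 0, 'transform': 1, 'load': 1}
```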
S203, acquiring a resource configuration file.
The resource configuration file comprises resource information of the node cluster.
In one example, the resource information of the node cluster includes the QPS quota of an HTTP service, Spark resources, and the CPU, GPU, memory and storage space of each node. The node CPU, GPU, memory and storage space are resources virtualized from the resources owned by the nodes, while the QPS resources of the HTTP service and the Spark resources can be shared by all the nodes in the node cluster. It should be noted that the resource configuration file contains the virtualized initial resource information, and this initial resource information changes as tasks are executed.
In one example, the resource profile also includes the amount of redundancy that the virtualized resource needs to reserve in order to be able to serve properly.
For example, if the 128G memory of a certain node of the cluster is virtualized in units of 10G, 12 units of memory can be virtualized; 2 units of redundancy are then set aside, and the remaining 10 units of memory serve as the upper limit that can be used for this resource and also as the early-warning line for its use. A warning is raised once actual usage exceeds it.
In one example, the resource profile also includes other dependent virtualized resources.
Optionally, the resource configuration file is written by a user, for example, by an operation and maintenance engineer.
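A small sketch of the virtualization arithmetic in the memory example above (128G split into 10G units with 2 units of redundancy); the function and its return fields are assumptions made only for illustration.

```python
def virtualize(total_gb, unit_gb, redundancy_units):
    """Split a physical resource into units and reserve redundancy units."""
    units = total_gb // unit_gb               # 128 // 10 = 12 virtualized units
    usable_units = units - redundancy_units   # 12 - 2 = 10: upper limit / warning line
    return {"units": units, "usable_units": usable_units, "unit_gb": unit_gb}

print(virtualize(128, 10, 2))  # {'units': 12, 'usable_units': 10, 'unit_gb': 10}
```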
In some embodiments, in order to ensure the validity of the generated directed acyclic graph, this embodiment further includes a process of checking the generated directed acyclic graph, which specifically includes the following steps 1 to 3:
step 1, checking whether a closed loop exists in the directed acyclic graph.
In some embodiments, Kahn's algorithm may be employed to check whether there is a closed loop in the directed acyclic graph.
Step 2, if the check shows that the directed acyclic graph has a closed loop, first indication information is generated, where the first indication information is used to instruct the user to modify the dependency relationships of the N tasks. After modifying the dependency relationships of the tasks, the user uploads the modified dependency relationships to the task calling node, the task calling node redraws the directed acyclic graph according to the modified dependency relationships of the N tasks, and the above steps are repeated until a directed acyclic graph meeting the requirements is drawn.
Step 3, if the check shows that the directed acyclic graph has no closed loop, the above S203 is executed to acquire the resource configuration file.
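A minimal sketch of the closed-loop check in step 1, using Kahn's algorithm as mentioned above: if topological sorting cannot consume every task, the tasks left over lie on a cycle. The data shapes match the build_dag sketch earlier and are assumptions.

```python
from collections import deque

def has_cycle(downstream, in_degree):
    """Kahn's algorithm: return True if the directed graph contains a closed loop."""
    in_degree = dict(in_degree)                              # work on a copy
    queue = deque(t for t, d in in_degree.items() if d == 0)
    visited = 0
    while queue:
        task = queue.popleft()
        visited += 1
        for nxt in downstream.get(task, []):
            in_degree[nxt] -= 1
            if in_degree[nxt] == 0:
                queue.append(nxt)
    return visited != len(in_degree)                         # leftover tasks => cycle

print(has_cycle({"a": ["b"], "b": ["a"]}, {"a": 1, "b": 1}))  # True: a -> b -> a
```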
And S204, distributing the N tasks to each node in the node cluster to execute according to the directed acyclic graph, the resource amount required by each task in the N tasks and the resource information of the node cluster.
Specifically, the N tasks are allocated to the nodes in the node cluster for execution according to the execution order of the tasks in the directed acyclic graph, the resource amount required to execute each of the N tasks, and the resource information of the node cluster. For example, suppose task 1 is executed before task 2 in the directed acyclic graph and the resources required to execute task 1 are greater than the resources required to execute task 2, but at the current time the resource amount of node1, the node with the most remaining resources in the node cluster, is smaller than the resource amount required to execute task 1 and greater than the resource amount required to execute task 2. In that case task 2 is allocated to node1 first, and node1 executes task 2. Task 1 has to wait until there is a node in the node cluster that satisfies its requirements, and is then assigned to that node for execution.
Therefore, in the task calling process, the execution sequence of the tasks and the resource use condition are considered, and the task calling efficiency is improved.
In some embodiments, before executing S204, the embodiments of the present application first determine whether the resource amount of the node cluster can satisfy execution of the N tasks, specifically in any of the following ways:
In the first mode, it is judged whether the sum of the resource amounts required to execute the tasks at the same level of the directed acyclic graph is less than or equal to the remaining resource amount of the node cluster.
In some embodiments, the directed acyclic graph can be understood as a tree structure comprising a plurality of levels, each level comprising at least one task. For each level of the directed acyclic graph, it is judged whether the sum of the resource amounts required to execute the tasks of that level is less than or equal to the remaining resource amount of the node cluster. For example, if a certain level of the directed acyclic graph includes 3 task nodes, the sum of the resource amounts required to execute those 3 tasks is a, and the remaining resource amount of the current node cluster is b, then when a is less than or equal to b it is determined that the node cluster can execute the 3 tasks, and the above S204 is executed. If a is greater than b, the node cluster cannot execute the 3 tasks, and the task calling process ends.
In the second mode, it is judged whether the resource amount required to execute any task in the directed acyclic graph is less than or equal to the remaining resource amount of the node with the most remaining resources in the node cluster. For example, consider any task in the directed acyclic graph, such as task 3, which mainly depends on the CPU and needs, say, 10G of CPU resources. It is checked whether the remaining CPU resources of the node with the most remaining CPU resources in the node cluster at this time are greater than or equal to 10G; if so, it is determined that this node can execute task 3, and the above S204 is executed. If the remaining CPU resources of that node are less than 10G, no node in the node cluster can execute task 3 at this time, and the task calling process ends.
In the third mode, it is judged both whether the sum of the resource amounts required to execute the tasks at the same level of the directed acyclic graph is less than or equal to the remaining resource amount of the node cluster, and whether the resource amount required to execute any task in the directed acyclic graph is less than or equal to the remaining resource amount of the node with the most remaining resources in the node cluster.
When it is determined that the sum of the resource amounts required to execute the tasks at the same level of the directed acyclic graph is less than or equal to the remaining resource amount of the node cluster, and/or when it is determined that the resource amount required to execute any task in the directed acyclic graph is less than or equal to the remaining resource amount of the node with the most remaining resources in the node cluster, the above S204 is executed, that is, the N tasks are allocated to the nodes in the node cluster for execution according to the directed acyclic graph, the resource amount required to execute each of the N tasks, and the resource information of the node cluster.
If it is determined that the sum of the resource amounts required to execute the tasks at the same level of the directed acyclic graph is greater than the remaining resource amount of the node cluster, and/or that the resource amount required to execute any task in the directed acyclic graph is greater than the remaining resource amount of the node with the most remaining resources in the node cluster, the node cluster cannot execute the N tasks, and the whole task scheduling process ends.
Therefore, in the topological sorting process of the directed acyclic graph, the embodiment of the application adds a real-time check on the resources that tasks depend on, which improves the effectiveness of task resource allocation and further improves the reliability and efficiency of task calling.
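A hedged sketch of the feasibility pre-check, combining the first and second modes as in the third mode above; it assumes a single scalar resource per task and per node, which is a simplification of the multi-dimensional resources the application describes.

```python
def cluster_can_run(levels, task_demand, node_remaining):
    """levels: list of task-id lists, one per DAG level;
    task_demand: task_id -> required resource amount;
    node_remaining: node_id -> remaining resource amount."""
    cluster_remaining = sum(node_remaining.values())
    largest_node = max(node_remaining.values())
    # Mode 1: every level must fit within the cluster's remaining resources.
    levels_ok = all(sum(task_demand[t] for t in level) <= cluster_remaining
                    for level in levels)
    # Mode 2: every single task must fit on the node with the most remaining resources.
    tasks_ok = all(demand <= largest_node for demand in task_demand.values())
    return levels_ok and tasks_ok

print(cluster_can_run(levels=[["t1"], ["t2", "t3"]],
                      task_demand={"t1": 8, "t2": 4, "t3": 4},
                      node_remaining={"node1": 10, "node2": 6}))  # True
```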
In some embodiments, when it is determined that the resource amount of the node cluster can satisfy the resources required for executing the N tasks, the N nodes are deployed, specifically including the following processes:
step A, deploying worker work processes at the cluster nodes and taking charge of executing the specific tasks after scheduling.
And step B, deploying an Agent process at the cluster node.
The functions of the Agent process include, but are not limited to, the following:
Function 1: the Agent process is responsible for periodically synchronizing the used resources and the remaining available resources of its node to the task scheduler, so as to update the intra-cluster resource information in the virtual resource pool.
Function 2: the Agent receives requests from the task scheduler and returns the used resources and the remaining available resources of its node to the task scheduler in real time, so as to update the intra-cluster resource information in the virtual resource pool.
As can be seen from the above, at least one node of the N nodes in the embodiment of the present application includes a work process and an agent process, where the work process is used to execute a scheduled task, and the agent process is used to report resource usage information of the node where the work process is located.
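A rough sketch of the Agent's periodic reporting (function 1 above). Both helpers are hypothetical placeholders standing in for real resource collection and for communication with the scheduler on the Master node; neither is an API defined by the application.

```python
import time

def collect_node_usage():
    # Placeholder: a real agent would read the node's CPU/GPU/memory/disk usage here.
    return {"used": {"cpu": 4, "memory_gb": 32}, "free": {"cpu": 12, "memory_gb": 96}}

def report_to_scheduler(payload):
    # Placeholder: a real agent would send this to the task scheduler on the Master node.
    print("report:", payload)

def agent_loop(interval_s=30, rounds=3):
    """Periodically synchronize used and remaining resources to the task scheduler."""
    for _ in range(rounds):                # bounded only so this sketch terminates
        report_to_scheduler(collect_node_usage())
        time.sleep(interval_s)
```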
According to the task calling method provided by the embodiment of the application, a task description file is obtained, which includes the dependency relationships of N tasks and the resource amount required to execute each of the N tasks, where N is a positive integer; a directed acyclic graph describing the execution order of the N tasks is obtained according to the dependency relationships of the N tasks; a resource configuration file is acquired, which includes the resource information of a node cluster; and the N tasks are allocated to the nodes in the node cluster for execution according to the directed acyclic graph, the resource amount required to execute each of the N tasks, and the resource information of the node cluster. In other words, during task calling, the resource management of standalone scheduling is merged into the topological sorting process of the directed acyclic graph, so that the reliability and efficiency of task calling are improved.
The following describes a specific implementation process of S204 in detail with reference to specific embodiments.
Fig. 3 is a schematic flowchart of a task invoking method according to an embodiment of the present application, where based on the embodiment, as shown in fig. 3, the step S204 includes:
s301, writing at least one first task into a resource allocation queue.
In some embodiments, the at least one first task written into the resource allocation queue for the first time is at least one task with an in-degree of 0 in the directed acyclic graph.
For example, task calling starts from the at least one task with an in-degree of 0 in the directed acyclic graph: first, the at least one task with an in-degree of 0 in the directed acyclic graph is written into the resource allocation queue, and then step S302 is executed to extract the first task, that is, the task that is first in the execution order, from the resource allocation queue and to query whether the remaining resource amount of the node cluster satisfies the resource amount required to execute the first task.
S302, extracting a first task from the resource allocation queue, and inquiring whether the residual resource amount of the node cluster meets the resource amount required by executing the first task.
Wherein the first task is a task with the first execution order in the resource allocation queue.
In some embodiments, before querying whether the remaining resource amount of the node cluster satisfies the resource amount required to execute the first task, it is checked whether the first task in the resource allocation queue has timed out; if it has timed out, the whole task scheduling process ends. If the check shows that the first task has not timed out, it is queried whether the remaining resource amount of the node cluster satisfies the resource amount required to execute the first task.
S303, when the query shows that the residual resource amount of the node cluster meets the resource amount required by executing the first task in the resource allocation queue, determining the first node from the node cluster, and writing the first node and the first task into the task execution queue.
In this step, the above-mentioned ways of determining the first node from the node cluster include, but are not limited to, the following:
in the first mode, the node with the largest current remaining resource amount in the node cluster is determined as the first node.
In a second mode, the task description file further includes the type of each of the N tasks in terms of resource usage. In this case, S303 includes:
S303-A1, obtaining the type of the first task in terms of resource usage from the task description file, such as compute-intensive or IO-intensive.
S303-A2, according to the type of the first task on the resource usage, obtaining the first node matched with the type of the first task on the resource usage from the node cluster.
In a possible implementation manner, S303-A2 includes: acquiring, from the node cluster, at least one node that matches the type of the first task in terms of resource usage, according to the type of the first task in terms of resource usage; and selecting the first node from the at least one node according to a preset screening strategy.
The embodiment of the present application does not limit the specific type of the preset screening policy.
Optionally, the preset screening policy includes any one of a worst adaptation policy, a first adaptation policy, a next adaptation policy, and a best adaptation policy.
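As an illustration, a sketch of one of these strategies (best adaptation, i.e. best fit) applied to candidate nodes that already match the task's resource-usage type; worst fit would take the maximum instead, and first fit simply takes the first feasible candidate. The data shapes are assumptions.

```python
def best_fit(candidates, demand):
    """candidates: list of (node_id, remaining_amount) pairs already filtered by type;
    pick the node that leaves the least slack after placing the task."""
    feasible = [(node, remaining) for node, remaining in candidates if remaining >= demand]
    if not feasible:
        return None
    return min(feasible, key=lambda pair: pair[1] - demand)[0]

print(best_fit([("node1", 12), ("node2", 64), ("node3", 16)], demand=10))  # node1
```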
S304, traversing at least one downstream task of the first task in the directed acyclic graph, and taking the at least one downstream task as a new at least one first task.
The steps S301 to S304 are then executed repeatedly: the new at least one first task is written into the resource allocation queue, a new first task is extracted from the current resource allocation queue, and it is queried whether the remaining resource amount of the node cluster satisfies the resource amount required to execute the new first task; if so, a new first node is determined from the node cluster, and the new first node and the new first task are written into the task execution queue. At least one downstream task of the new first task in the directed acyclic graph is then traversed and taken as the new at least one first task, and the above steps are repeated until each task in the directed acyclic graph has been allocated to a node in the node cluster for execution.
In some embodiments, the present application further comprises:
s305, when the inquired residual resource amount of the node cluster does not meet the resource amount required by executing the first task in the resource allocation queue, configuring sleep time for the first task.
S306, when the first task configuration sleep time is up, writing the first task into the resource allocation queue again.
For example, the CPU resource required for executing the first task is 10G, but the CPU remaining in the node with the largest amount of CPU remaining resources in the current node cluster is smaller than 10G, it is determined that the amount of remaining resources in the current node cluster does not satisfy the amount of resources required for executing the first task in the resource allocation queue, at this time, sleep time is configured for the first task, the sleep time is a preset value, and the specific size is determined according to actual needs, which is not limited in this embodiment. And when the sleep time of the first task configuration arrives, writing the first task into the resource allocation queue again to reallocate the nodes for the first task.
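Putting S301 to S306 together, a simplified sketch of the allocation loop under several assumptions: a single scalar resource, a trivial node-selection rule, no per-task timeout check, and resources that are only reserved and never released. It is meant to show the control flow, not to be a working scheduler.

```python
import time
from collections import deque

def schedule(downstream, in_degree, task_demand, node_remaining, sleep_s=1.0):
    """Topologically walk the DAG while checking cluster resources (S301 to S306)."""
    in_degree = dict(in_degree)
    alloc_queue = deque(t for t, d in in_degree.items() if d == 0)   # S301: in-degree 0
    execution_queue = []                                             # (node, task) pairs
    while alloc_queue:
        task = alloc_queue.popleft()                                 # S302: first in order
        node = next((n for n, r in node_remaining.items()
                     if r >= task_demand[task]), None)               # query remaining resources
        if node is None:                                             # S305/S306: sleep, then
            time.sleep(sleep_s)                                      # requeue the task
            alloc_queue.append(task)
            continue
        node_remaining[node] -= task_demand[task]                    # S303: reserve resources and
        execution_queue.append((node, task))                         # write to execution queue
        for nxt in downstream.get(task, []):                         # S304: downstream tasks become
            in_degree[nxt] -= 1                                      # new first tasks once all of
            if in_degree[nxt] == 0:                                  # their upstream tasks have
                alloc_queue.append(nxt)                              # been placed (Kahn fashion)
    return execution_queue
```

In the application itself, node selection goes through the type matching and screening strategies of S303-A1/A2, and a timed-out first task ends the scheduling of the whole graph (S405/S413); both are omitted from this sketch.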
In a specific embodiment, as shown in fig. 4, a task calling process related to the embodiment of the present application includes:
s401, the single directed acyclic graph starts scheduling.
S402, writing the at least one task with an in-degree of 0 in the directed acyclic graph into a resource allocation queue.
S403, determine whether the resource allocation queue is empty, if not, execute the following step S404, and if it is empty, execute the following step S413.
S404, extracting a first task from the resource allocation queue, wherein the first task is a task with the first execution sequence in the resource allocation queue.
S405 checks whether the first task is timed out, and if not, executes the following step S406, and if timed out, executes the following step S413.
S406, inquiring whether the remaining resource amount of the node cluster satisfies the resource amount required for executing the first task, if so, executing the following S409, and if not, executing the following steps S407 and S408.
And S407, configuring the sleep time for the first task.
And S408, when the first task configuration sleep time is up, writing the first task into the resource allocation queue again, returning to execute the S404, and re-extracting the first task from the resource allocation queue.
S409, obtaining the type of the first task on the resource use from the task description file.
S410, acquiring a first node matched with the type of the first task on the resource use from the node cluster according to the type of the first task on the resource use.
S411, writing the first node and the first task into a task execution queue.
S412, traversing at least one downstream task of the first task in the directed acyclic graph, taking the at least one downstream task as a new at least one first task, writing the new at least one first task into the resource allocation queue, and returning to execute the foregoing S404. And repeating the steps until each task in the directed acyclic graph is distributed to each node in the node cluster for execution.
And S413, finishing the scheduling of the single directed acyclic graph.
In the task calling method provided in this embodiment of the application, when the generated directed acyclic graph is scheduled, at least one task with an in-degree of 0 in the directed acyclic graph is first written into a resource allocation queue, and a first task is extracted from the resource allocation queue, where the first task is the task that is first in the execution order in the resource allocation queue. It is then queried whether the remaining resource amount of the node cluster satisfies the resource amount required to execute the first task. If not, a sleep time is configured for the first task; when the sleep time configured for the first task expires, the first task is written into the resource allocation queue again and extracted from it once more. If the query shows that the remaining resource amount of the node cluster satisfies the resource amount required to execute the first task, the type of the first task in terms of resource usage is obtained from the task description file, a first node matching that type is acquired from the node cluster according to the type of the first task in terms of resource usage, and the first task is allocated to the first node for execution. At least one downstream task of the first task in the directed acyclic graph is then traversed, taken as a new at least one first task, and written into the resource allocation queue, and the above steps are repeated until each task in the directed acyclic graph has been allocated to a node in the node cluster for execution. In other words, both the execution order of the tasks and resource management are taken into account in the task calling process, so the reliability and efficiency of task calling are improved.
In some embodiments, an embodiment of the present application further provides a task invoking system, as shown in fig. 5, where the task invoking system includes: the system comprises a task calling node, a task execution node pool and a virtualization resource pool.
In some embodiments, the task invocation nodes comprise a directed acyclic graph generator, a task invoker and an executor.
The directed acyclic graph generator is configured to obtain a task description file written by a data engineer, analyze the task description file to obtain a dependency relationship between N tasks, and generate a directed acyclic graph according to the dependency relationship between the N tasks, where a specific process may refer to S201 and S202. In some embodiments, the directed acyclic graph generator also checks the generated directed acyclic graph for the presence of closed loops, and the like.
The task invoker is used for invoking the directed acyclic graph generated by the directed acyclic graph generator, acquiring a resource configuration file written by an operation and maintenance engineer, analyzing the resource configuration file to obtain resource information of the node cluster, and storing the resource information of the node cluster in the virtualized resource pool.
In some embodiments, the directed acyclic graph generator writes the generated directed acyclic graph information into a meta-information table.
In one embodiment, the task invoker writes the virtualized resource information into a meta-information table.
And then, the task invoker allocates the N tasks to each node in the node cluster to execute according to the directed acyclic graph, the resource amount required by each task in the N tasks and the resource information of the node cluster.
In one example, the task invoker generates a task execution instruction, which is used for indicating a node executing the task, and sends the task execution instruction to the executor, so that the executor issues the task to the corresponding node.
In one possible implementation, the executor issues tasks to the nodes in the task execution node pool through an intermediary (e.g., an asynchronous queue). Specifically, the executor publishes the task to the intermediary's asynchronous queue, and a worker on a node meeting the condition can take the task, thereby achieving task parallelism.
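A compact sketch of this hand-off using Python's standard queue and threading modules. For simplicity it models one asynchronous queue per node rather than the single shared queue with conditional pickup that the application describes, and all names are assumptions.

```python
import queue
import threading

node_queues = {"node1": queue.Queue(), "node2": queue.Queue()}   # asynchronous queues

def executor_issue(assignments):
    """Executor side: publish each (node, task) pair from the task execution queue."""
    for node_id, task_id in assignments:
        node_queues[node_id].put(task_id)

def worker(node_id):
    """Worker process on a node: pull and execute the tasks issued to it."""
    q = node_queues[node_id]
    while True:
        task_id = q.get()
        print(f"{node_id} executing {task_id}")   # placeholder for the real task logic
        q.task_done()

for node_id in node_queues:
    threading.Thread(target=worker, args=(node_id,), daemon=True).start()

executor_issue([("node1", "extract_orders"), ("node2", "transform_orders")])
for q in node_queues.values():
    q.join()
```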
In one embodiment, the executor records the task execution related information in the execution task recording table.
Each node in the task execution node pool executes the tasks issued to it and feeds back the execution status of the tasks and the usage of the node's resources.
The task invoker updates the resources in the virtualized resource pool according to the resource usage of the nodes.
The preferred embodiments of the present application have been described in detail with reference to the accompanying drawings, however, the present application is not limited to the details of the above embodiments, and various simple modifications can be made to the technical solution of the present application within the technical idea of the present application, and these simple modifications are all within the protection scope of the present application. For example, the various features described in the foregoing detailed description may be combined in any suitable manner without contradiction, and various combinations that may be possible are not described in this application in order to avoid unnecessary repetition. For example, various embodiments of the present application may be arbitrarily combined with each other, and the same should be considered as the disclosure of the present application as long as the concept of the present application is not violated.
It should also be understood that, in the various method embodiments of the present application, the sequence numbers of the above-mentioned processes do not imply an execution sequence, and the execution sequence of the processes should be determined by their functions and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 2 to fig. 4 describe the task invoking method according to the embodiment of the present application, and on this basis, describe the task invoking device according to the embodiment of the present application.
Fig. 6 is a schematic structural diagram of a task invoking device according to an embodiment of the present application. The task invoking device 30 is configured to execute the technical solution of the above embodiment. As shown in fig. 6, the task invoking device 30 may include:
a first obtaining unit 31, configured to obtain a task description file, where the task description file includes dependency relationships of N tasks and a resource amount required to execute each task of the N tasks, and N is a positive integer;
the processing unit 32 is configured to obtain a directed acyclic graph describing an execution sequence of the N tasks according to the dependency relationship of the N tasks;
a second obtaining unit 33, configured to obtain a resource configuration file, where the resource configuration file includes resource information of a node cluster;
and the invoking unit 34 is configured to allocate, according to the directed acyclic graph, the resource amount required for executing each task of the N tasks and the resource information of the node cluster, the N tasks to each node of the node cluster to be executed.
In some embodiments, the invoking unit 34 is specifically configured to: write at least one first task into a resource allocation queue, where the at least one first task written into the resource allocation queue for the first time is the at least one task with an in-degree of 0 in the directed acyclic graph; extract the first task from the resource allocation queue, and query whether the remaining resource amount of the node cluster satisfies the resource amount required to execute the first task; when the query shows that the remaining resource amount of the node cluster satisfies the resource amount required to execute the first task in the resource allocation queue, determine a first node from the node cluster and write the first node and the first task into a task execution queue, where the first task is the task that is first in the execution order in the resource allocation queue; and traverse at least one downstream task of the first task in the directed acyclic graph, take the at least one downstream task as a new at least one first task, and repeat the above steps until each task in the directed acyclic graph has been allocated to a node in the node cluster for execution.
In some embodiments, the invoking unit 34 is further configured to configure a sleep time for the first task in the resource allocation queue when it is queried that the remaining resource amount of the node cluster does not satisfy the resource amount required for executing the first task; and writing the first task into the resource allocation queue again when the first task configuration sleep time is up.
In some embodiments, the task description file further includes a type of each task in the N tasks in terms of resource usage, and the invoking unit 34 is specifically configured to obtain the type of the first task in terms of resource usage from the task description file; and acquiring a first node matched with the type of the first task on the resource use from the node cluster according to the type of the first task on the resource use.
In some embodiments, the invoking unit 34 is specifically configured to obtain, from the node cluster, at least one node that matches the type of the first task in terms of resource usage according to the type of the first task in terms of resource usage; and selecting the first node from the at least one node according to a preset screening strategy.
Optionally, the preset screening policy includes any one of a worst adaptation policy, a first adaptation policy, a next adaptation policy, and a best adaptation policy.
In some embodiments, the invoking unit 34 is further configured to check whether the first task in the resource allocation queue is timed out before querying whether the remaining resource amount of the node cluster satisfies the resource amount required for executing the first task; and inquiring whether the residual resource quantity of the node cluster meets the resource quantity required for executing the first task or not when the first task is checked not to be overtime.
In some embodiments, the invoking unit 34 is specifically configured to, when it is determined that a sum of resource amounts required for executing tasks in the same level in the directed acyclic graph is less than or equal to a remaining resource amount of the node cluster, and/or when it is determined that a resource amount required for executing any task in the directed acyclic graph is less than or equal to a remaining resource amount of a node with the largest remaining resources in the node cluster, allocate the N tasks to each node in the node cluster for execution according to the directed acyclic graph, and the resource amount required for executing each task in the N tasks and the resource information of the node cluster.
In some embodiments, the first obtaining unit 33 is further configured to check whether a closed loop exists in the directed acyclic graph; if the directed acyclic graph is checked to have a closed loop, generating first indication information, wherein the first indication information is used for indicating a user to modify the dependency relationship of the N tasks; and if the directed acyclic graph is checked to have no closed loop, acquiring a resource configuration file.
In some embodiments, at least one of the N nodes includes a work process and an agent process, where the work process is configured to execute a scheduled task, and the agent process is configured to report resource usage information of the node where the work process is located.
It is to be understood that apparatus embodiments and method embodiments may correspond to one another and that similar descriptions may refer to method embodiments. To avoid repetition, further description is omitted here. Specifically, the apparatus 30 shown in fig. 6 may correspond to a corresponding main body in executing the method of the embodiment of the present application, and the foregoing and other operations and/or functions of each unit in the apparatus 30 are respectively for implementing corresponding flows in each method such as the method, and are not described herein again for brevity.
The apparatus and system of embodiments of the present application are described above in terms of functional units in conjunction with the following figures. It is to be understood that the functional units may be implemented in hardware, by instructions in software, or by a combination of hardware and software units. Specifically, the steps of the method embodiments in the present application may be implemented by integrated logic circuits of hardware in a processor and/or instructions in the form of software, and the steps of the method disclosed in conjunction with the embodiments in the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software units in the decoding processor. Alternatively, the software elements may reside in random access memory, flash memory, read only memory, programmable read only memory, electrically erasable programmable memory, registers, or other storage medium known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps in the above method embodiments in combination with hardware thereof.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device is configured to execute the task calling method in the foregoing embodiment, and refer to the description in the foregoing method embodiment specifically.
The electronic device 400 shown in fig. 7 comprises a memory 401, a processor 402, a communication interface 403. The memory 401, the processor 402 and the communication interface 403 are communicatively connected to each other. For example, the memory 401, the processor 402 and the communication interface 403 may be connected by a network connection. Alternatively, the electronic device 400 may further include a bus 404. The memory 401, the processor 402 and the communication interface 403 are communicatively connected to each other via a bus 404. Fig. 7 is an electronic device 400 in which a memory 401, a processor 402, and a communication interface 403 are communicatively connected to each other via a bus 404.
The memory 401 may be a Read Only Memory (ROM), a static storage device, a dynamic storage device, or a Random Access Memory (RAM). The memory 401 may store a program; when the program stored in the memory 401 is executed by the processor 402, the processor 402 and the communication interface 403 are used to perform the methods described above.
The processor 402 may be a general-purpose Central Processing Unit (CPU), a microprocessor, an Application Specific Integrated Circuit (ASIC), a Graphics Processing Unit (GPU), or one or more integrated circuits.
The processor 402 may also be an integrated circuit chip having signal processing capability. In implementation, the methods of the present application may be completed by integrated logic circuits of hardware in the processor 402 or by instructions in the form of software. The processor 402 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The software unit may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 401, and the processor 402 reads the information in the memory 401 and completes the methods of the embodiments of the present application in combination with its hardware.
The communication interface 403 uses a transceiver module, such as but not limited to a transceiver, to enable communication between the electronic device 400 and other devices or communication networks.
When the electronic device 400 includes the bus 404, as described above, the bus 404 may include a path for transferring information between the components of the electronic device 400 (for example, the memory 401, the processor 402, and the communication interface 403).
According to an aspect of the present application, there is provided a computer storage medium having a computer program stored thereon, which, when executed by a computer, enables the computer to perform the methods of the foregoing method embodiments. Correspondingly, the present application also provides a computer program product containing instructions that, when executed by a computer, cause the computer to execute the methods of the foregoing method embodiments.
According to another aspect of the application, a computer program product or computer program is provided, comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method of the above-described method embodiment.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form of a computer program product, in whole or in part. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or Digital Subscriber Line (DSL)) or a wireless manner (for example, infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a Solid State Disk (SSD)).

Claims (13)

1. A task calling method, characterized by comprising:
acquiring a task description file, wherein the task description file comprises the dependency relationship of N tasks and the resource amount required by executing each task in the N tasks, and N is a positive integer;
obtaining a directed acyclic graph describing the execution sequence of the N tasks according to the dependency relationship of the N tasks;
acquiring a resource configuration file, wherein the resource configuration file comprises resource information of a node cluster;
and distributing the N tasks to each node in the node cluster to execute according to the directed acyclic graph, the resource amount required by executing each task in the N tasks and the resource information of the node cluster.
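For orientation only, the following Python sketch strings together the four steps of claim 1. The file layouts and the caller-supplied `dispatch` routine are assumptions made for the example, not something specified by the application.

```python
import json

def invoke_tasks(task_desc_path, resource_conf_path, dispatch):
    """Assemble the four steps of claim 1; the file layouts are assumptions.

    The task description file is assumed to be JSON of the form
    {"tasks": {"a": {"cpu": 2, "mem": 4, "deps": []}, ...}} and the
    resource configuration file JSON of the form
    {"nodes": {"node-1": {"cpu": 16, "mem": 64}, ...}}.
    `dispatch` is a caller-supplied allocation routine (see claim 2).
    """
    # Step 1: read the dependency relationship and per-task resource demand.
    with open(task_desc_path) as f:
        tasks = json.load(f)["tasks"]
    dependencies = {name: spec["deps"] for name, spec in tasks.items()}
    demand = {name: {"cpu": spec["cpu"], "mem": spec["mem"]}
              for name, spec in tasks.items()}

    # Step 2: dependencies -> directed acyclic graph (downstream adjacency).
    dag = {name: [] for name in tasks}
    for name, deps in dependencies.items():
        for up in deps:
            dag[up].append(name)

    # Step 3: resource information of the node cluster.
    with open(resource_conf_path) as f:
        cluster = json.load(f)["nodes"]

    # Step 4: allocate the N tasks to cluster nodes according to the DAG,
    # the per-task resource demand, and the cluster resource information.
    dispatch(dag, demand, cluster)
```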
2. The method according to claim 1, wherein the allocating the N tasks to the nodes in the node cluster for execution according to the directed acyclic graph, the amount of resources required for executing each of the N tasks, and the resource information of the node cluster comprises:
writing at least one first task into a resource allocation queue, wherein the at least one first task that is written into the resource allocation queue for the first time is at least one task with an in-degree of 0 in the directed acyclic graph;
extracting a first task from the resource allocation queue, and querying whether the remaining resource amount of the node cluster meets the resource amount required for executing the first task;
when the query shows that the remaining resource amount of the node cluster meets the resource amount required for executing a first task in the resource allocation queue, determining a first node from the node cluster, and writing the first node and the first task into a task execution queue, wherein the first task is the task ranked first in execution order in the resource allocation queue;
and traversing at least one downstream task of the first task in the directed acyclic graph, taking the at least one downstream task as at least one new first task, and repeating the foregoing steps until every task in the directed acyclic graph has been allocated to a node in the node cluster for execution.
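A minimal Python sketch of the allocation loop in claim 2, assuming a single scalar resource and ignoring the asynchronous execution and completion of tasks; all names are hypothetical.

```python
from collections import deque

def allocate(dag, indegree, demand, nodes):
    """Kahn-style allocation sketch for claim 2 (single scalar resource).

    dag:      task -> list of downstream tasks
    indegree: task -> number of upstream tasks not yet allocated
    demand:   task -> resource amount required to execute the task
    nodes:    node -> remaining resource amount (mutated as tasks are placed)
    Returns (execution_queue, deferred): placed (node, task) pairs and tasks
    that could not be placed right now (claim 3 would sleep and retry these).
    """
    # Tasks with in-degree 0 are written into the resource allocation queue first.
    allocation_queue = deque(t for t, d in indegree.items() if d == 0)
    execution_queue, deferred = [], []

    while allocation_queue:
        task = allocation_queue.popleft()

        # Query whether the cluster's remaining resources satisfy this task.
        candidates = [n for n, free in nodes.items() if free >= demand[task]]
        if not candidates:
            deferred.append(task)   # claim 3: configure a sleep time, retry later
            continue

        # Determine a first node and write (node, task) into the execution queue.
        node = candidates[0]        # trivial placeholder choice; see claims 4-6
        nodes[node] -= demand[task]
        execution_queue.append((node, task))

        # Traverse downstream tasks; those whose upstream tasks are all
        # allocated become the new "first tasks" and re-enter the queue.
        for nxt in dag[task]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                allocation_queue.append(nxt)

    return execution_queue, deferred
```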
3. The method of claim 2, further comprising:
when it is found through the query that the remaining resource amount of the node cluster does not meet the resource amount required for executing a first task in the resource allocation queue, configuring a sleep time for the first task;
and writing the first task into the resource allocation queue again when the sleep time configured for the first task expires.
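One way to realise the sleep-and-retry behaviour of claim 3 is a timer that re-enqueues the task once its configured sleep time elapses. The sketch below uses a thread timer; `allocation_queue` and `lock` are illustrative names, not taken from the application.

```python
import threading

def defer_task(task, allocation_queue, sleep_time_s, lock):
    """Claim 3 sketch: configure a sleep time for the task, then re-enqueue it.

    `allocation_queue` is assumed to be a deque shared with the allocator,
    protected by `lock`; the real data structures are not specified here.
    """
    def requeue():
        with lock:
            allocation_queue.append(task)   # written into the queue again

    timer = threading.Timer(sleep_time_s, requeue)
    timer.daemon = True
    timer.start()
    return timer
```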
4. The method of claim 2, wherein the task description file further includes the type of each of the N tasks on resource use, and wherein the determining a first node from the node cluster comprises:
acquiring the type of the first task on resource use from the task description file;
and acquiring a first node matched with the type of the first task on the resource use from the node cluster according to the type of the first task on the resource use.
5. The method of claim 4, wherein obtaining the first node from the node cluster that matches the type of the first task in resource usage according to the type of the first task in resource usage comprises:
acquiring at least one node matched with the type of the first task on the resource use from the node cluster according to the type of the first task on the resource use;
and selecting the first node from the at least one node according to a preset screening strategy.
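Claims 4 and 5 first narrow the cluster down to nodes matching the task's resource-usage type and then apply a screening strategy to the matching nodes. The sketch below shows only the matching step, under an assumed node layout (the per-node type tags are an assumption, not something prescribed by the claims); the screening strategies of claim 6 are sketched next.

```python
def nodes_matching_type(task_type, demand, nodes):
    """Claims 4-5: filter the node cluster down to candidates for the first task.

    nodes: node name -> {"types": set of resource-usage types the node is
           provisioned for, "free": remaining resource amount}  (assumed layout)
    Returns the node names that match the task's resource-usage type and
    still hold enough free resources to execute it.
    """
    return [
        name for name, info in nodes.items()
        if task_type in info["types"] and info["free"] >= demand
    ]
```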
6. The method according to claim 5, wherein the preset filtering strategy comprises any one of a worst-fit strategy, a first-fit strategy, a next-fit strategy, and a best-fit strategy.
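The four screening strategies named in claim 6 differ only in which candidate node they pick from the matching set. A compact sketch follows; keeping the next-fit pointer in a closure, and rotating over the already-filtered candidates, are simplifications chosen for the example rather than details from the claim.

```python
def make_screener(strategy):
    """Return a function (candidates, free, demand) -> chosen node name.

    candidates: list of node names that already match the task's type and fit
    free:       node name -> remaining resource amount
    """
    last = {"index": -1}   # rotating pointer used only by next-fit

    def screen(candidates, free, demand):
        if not candidates:
            return None
        leftover = lambda n: free[n] - demand
        if strategy == "first_fit":   # first candidate that fits
            return candidates[0]
        if strategy == "best_fit":    # smallest leftover capacity
            return min(candidates, key=leftover)
        if strategy == "worst_fit":   # largest leftover capacity
            return max(candidates, key=leftover)
        if strategy == "next_fit":    # simplified: resume after the last pick
            last["index"] = (last["index"] + 1) % len(candidates)
            return candidates[last["index"]]
        raise ValueError(f"unknown screening strategy: {strategy}")

    return screen
```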
7. The method of claim 2, wherein before querying whether the remaining amount of resources of the node cluster satisfies the amount of resources required to perform the first task, the method further comprises:
checking whether a first task in the resource allocation queue has timed out;
and when it is checked that the first task has not timed out, querying whether the remaining resource amount of the node cluster meets the resource amount required for executing the first task.
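Claim 7's timeout check can be as simple as comparing how long a task has waited in the allocation queue against a deadline; the names and the source of the deadline below are assumptions for the sketch.

```python
import time

def pop_if_not_timed_out(allocation_queue, enqueued_at, timeout_s):
    """Claim 7 sketch: before querying cluster resources, skip tasks whose
    waiting time in the resource allocation queue exceeds `timeout_s`.

    allocation_queue: deque of task names
    enqueued_at:      task -> time.monotonic() timestamp recorded on enqueue
    """
    while allocation_queue:
        task = allocation_queue.popleft()
        if time.monotonic() - enqueued_at[task] > timeout_s:
            # Timed out: skip the resource query; a real scheduler would
            # mark the task as failed or alert the user here.
            continue
        return task   # not timed out -> proceed to the resource query
    return None
```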
8. The method according to claim 1, wherein the allocating the N tasks to the nodes in the node cluster for execution according to the directed acyclic graph, the amount of resources required for executing each of the N tasks, and the resource information of the node cluster comprises:
when it is determined that the sum of the resource amounts required for executing the tasks at the same level of the directed acyclic graph is less than or equal to the remaining resource amount of the node cluster, and/or when it is determined that the resource amount required for executing any task in the directed acyclic graph is less than or equal to the remaining resource amount of the node with the most remaining resources in the node cluster, allocating the N tasks to the nodes in the node cluster for execution according to the directed acyclic graph, the resource amount required for executing each of the N tasks, and the resource information of the node cluster.
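Claim 8 describes a feasibility pre-check before allocation takes place. The claim allows either condition or both ("and/or"); the sketch below checks both, with a single scalar resource and an assumed level-by-level view of the DAG.

```python
def allocation_is_feasible(levels, demand, nodes):
    """Claim 8 sketch (single scalar resource).

    levels: list of lists; each inner list holds the tasks of one DAG level
    demand: task -> resource amount required
    nodes:  node -> remaining resource amount
    """
    total_free = sum(nodes.values())
    biggest_node_free = max(nodes.values(), default=0)

    # Condition 1: every level's combined demand fits the cluster's remaining amount.
    levels_fit = all(sum(demand[t] for t in level) <= total_free
                     for level in levels)

    # Condition 2: no single task needs more than the roomiest node still offers.
    tasks_fit = all(demand[t] <= biggest_node_free
                    for level in levels for t in level)

    return levels_fit and tasks_fit
```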
9. The method of claim 1, wherein prior to obtaining the resource profile, the method further comprises:
checking whether a closed loop exists in the directed acyclic graph;
if a closed loop is detected in the directed acyclic graph, generating first indication information, wherein the first indication information is used for prompting a user to modify the dependency relationship of the N tasks;
and if no closed loop is detected in the directed acyclic graph, acquiring the resource configuration file.
10. The method of claim 1, wherein at least one node in the node cluster comprises a worker process and an agent process, wherein the worker process is used for executing the tasks scheduled to the node, and the agent process is used for reporting resource usage information of the node.
11. A task calling device, characterized by comprising:
a first obtaining unit, configured to obtain a task description file, wherein the task description file comprises a dependency relationship of N tasks and a resource amount required for executing each of the N tasks, and N is a positive integer;
the processing unit is used for obtaining a directed acyclic graph describing the execution sequence of the N tasks according to the dependency relationship of the N tasks;
a second obtaining unit, configured to obtain a resource configuration file, where the resource configuration file includes resource information of a node cluster;
and the calling unit is used for distributing the N tasks to each node in the node cluster to execute according to the directed acyclic graph, the resource amount required by executing each task in the N tasks and the resource information of the node cluster.
12. An electronic device, comprising: a memory and a processor;
the memory is used for storing a computer program;
the processor is adapted to execute the computer program to implement the method of any of claims 1 to 10.
13. A computer-readable storage medium, characterized in that the storage medium comprises computer instructions which, when executed by a computer, cause the computer to carry out the method according to any one of claims 1 to 10.
CN202110857492.6A 2021-07-28 2021-07-28 Task calling method and device, electronic equipment and storage medium Pending CN113535363A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110857492.6A CN113535363A (en) 2021-07-28 2021-07-28 Task calling method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110857492.6A CN113535363A (en) 2021-07-28 2021-07-28 Task calling method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113535363A true CN113535363A (en) 2021-10-22

Family

ID=78121199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110857492.6A Pending CN113535363A (en) 2021-07-28 2021-07-28 Task calling method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113535363A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113919989A (en) * 2021-10-29 2022-01-11 国信蓝桥教育科技(杭州)股份有限公司 Cloud resource configuration detection method and system
CN114741121A (en) * 2022-04-14 2022-07-12 哲库科技(北京)有限公司 Method and device for loading module and electronic equipment
CN114741121B (en) * 2022-04-14 2023-10-20 哲库科技(北京)有限公司 Method and device for loading module and electronic equipment
WO2023206635A1 (en) * 2022-04-29 2023-11-02 之江实验室 Job decomposition processing method for distributed computing
US11907693B2 (en) 2022-04-29 2024-02-20 Zhejiang Lab Job decomposition processing method for distributed computing
CN115114028A (en) * 2022-07-05 2022-09-27 南方电网科学研究院有限责任公司 Task allocation method and device for electric power simulation secondary control
CN116302381A (en) * 2022-09-08 2023-06-23 上海数禾信息科技有限公司 Parallel topology scheduling component and method, task scheduling method and task processing method
CN116302381B (en) * 2022-09-08 2024-02-06 上海数禾信息科技有限公司 Parallel topology scheduling component and method, task scheduling method and task processing method
CN115378999A (en) * 2022-10-26 2022-11-22 小米汽车科技有限公司 Service capacity adjusting method and device
WO2024093280A1 (en) * 2022-11-01 2024-05-10 华为技术有限公司 Task management method, apparatus and system, and communication device and storage medium
CN116909756A (en) * 2023-09-13 2023-10-20 中移(苏州)软件技术有限公司 Cross-cloud service method and device, electronic equipment and storage medium
CN116909756B (en) * 2023-09-13 2024-01-26 中移(苏州)软件技术有限公司 Cross-cloud service method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN113535363A (en) Task calling method and device, electronic equipment and storage medium
US8381230B2 (en) Message passing with queues and channels
CN110941481A (en) Resource scheduling method, device and system
CN114741207B (en) GPU resource scheduling method and system based on multi-dimensional combination parallelism
CN111352736A (en) Method and device for scheduling big data resources, server and storage medium
CN111324427B (en) Task scheduling method and device based on DSP
CN107070709B (en) NFV (network function virtualization) implementation method based on bottom NUMA (non uniform memory Access) perception
CN111177984B (en) Resource utilization of heterogeneous computing units in electronic design automation
WO2020125396A1 (en) Processing method and device for shared data and server
CN110362409A (en) Based on a plurality of types of resource allocation methods, device, equipment and storage medium
CN112559163A (en) Method and device for optimizing tensor calculation performance
US11954419B2 (en) Dynamic allocation of computing resources for electronic design automation operations
CN113010265A (en) Pod scheduling method, scheduler, memory plug-in and system
CN114327861A (en) Method, apparatus, system and storage medium for executing EDA task
Petrov et al. Adaptive performance model for dynamic scaling Apache Spark Streaming
US11816511B1 (en) Virtual partitioning of a shared message bus
CN112860387A (en) Distributed task scheduling method and device, computer equipment and storage medium
CN113886069A (en) Resource allocation method and device, electronic equipment and storage medium
CN112988383A (en) Resource allocation method, device, equipment and storage medium
CN113419839A (en) Resource scheduling method and device for multi-type jobs, electronic equipment and storage medium
EP4425892A1 (en) Resource operating method and apparatus, electronic device, and storage medium
CN116089089A (en) Resource management method and device
CN114741165A (en) Processing method of data processing platform, computer equipment and storage device
CN114675954A (en) Task scheduling method and device
CN114116790A (en) Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination