CN111522630B - Method and system for executing planned tasks based on batch dispatching center - Google Patents

Method and system for executing planned tasks based on batch dispatching center


Publication number
CN111522630B
Authority
CN
China
Prior art keywords
task
batch
tasks
execution
pod
Prior art date
Legal status
Active
Application number
CN202010359516.0A
Other languages
Chinese (zh)
Other versions
CN111522630A
Inventor
罗孟波
翁国海
Current Assignee
Beijing Jiangrongxin Technology Co ltd
Original Assignee
Beijing Jiangrongxin Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jiangrongxin Technology Co ltd
Priority to CN202010359516.0A
Publication of CN111522630A
Application granted
Publication of CN111522630B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a method and a system for executing planned tasks based on a batch dispatching center. The method comprises the following steps. Step S1: analyze the relationships among the batch tasks and connect the batch tasks in order to generate a task execution flow chart along a fixed direction. Step S2: execute all tasks of the task execution flow chart in sequence, following the order of the nodes of the flow chart. During task execution, the execution state of each batch task is marked in real time with different colors in the task execution flow chart, and the user is allowed to operate on each batch task to monitor the execution process. Each batch task corresponds to at least one task partition, each task partition pulls up a corresponding pod by calling the DFS application service, and the parameters of each pod are configured uniformly by the batch dispatching center so that each pod has its own independent resource space.

Description

Method and system for executing planned tasks based on batch dispatching center
Technical Field
The invention relates to a method and a system for executing planned tasks based on a batch dispatching center.
Background
The existing batch dispatching center application server has the following drawbacks when pulling up batch service application processes:
1. Batch execution has no complete execution plan flow and no unified module that manages the relationships and execution order among tasks, so before each batch execution it must be determined manually whether the batch can be executed.
2. The DFS (Data Flow Server) calls a local jar package to pull up the batch task process instead of a container pod; the process is started directly on a virtual machine or physical machine, so whether the process started successfully, whether it ran into errors, its startup parameters, its startup procedure and so on cannot be monitored in a friendly way. Resources are not isolated when batch application processes are started, so batches can preempt each other's resources, leading to uneven resource allocation.
3. The task chain in the execution plan cannot be executed with preloading: a subsequent task pulls up its container pod only after the previous task has executed successfully, so the subsequent task does not start immediately after the previous task succeeds and considerable time is spent pulling up the container pod and starting the batch application.
4. The DFS requires an independently deployed application server, which increases resource cost and maintenance cost; when the batch dispatching center executes tasks, it must first request the DFS application server, which then pulls up the batch execution task process, adding network requests from the dispatching center to the DFS application server.
Therefore, it is desirable to develop a method and system for executing planned tasks based on a batch dispatching center that overcomes the above drawbacks.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a method for executing planned tasks based on a batch dispatching center, wherein the method comprises:
step S1: analyzing the relationships among the batch tasks, connecting the batch tasks in order to generate a task execution flow chart along a fixed direction, wherein each task node is associated with one batch execution task, and displaying the task execution flow chart to the user;
step S2: executing all tasks of the task execution flow chart in sequence, following the order of the nodes of the task execution flow chart;
during task execution, marking the execution state of each batch task in real time with different colors in the task execution flow chart, and allowing the user to operate on each batch task to monitor the task execution process; and
wherein each batch task corresponds to at least one task partition, each task partition pulls up a corresponding pod by calling the DFS application service, and the parameters of each pod are configured uniformly by the batch dispatching center so that each pod has its own independent resource space.
In the above method for executing planned tasks, the operations a user may perform on each batch task to monitor the task execution progress include one or more of the following: viewing execution records, pausing execution, resuming (canceling a pause), and abnormal rerun.
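For illustration only, the following Java sketch shows one way these monitoring operations could be modeled. The class, enum and state names are assumptions of this sketch and do not come from the patent.

```java
// Hypothetical sketch of the user operations named above (view records, pause, resume,
// abnormal rerun); all identifiers are invented for illustration.
import java.util.ArrayList;
import java.util.List;

public class TaskMonitorSketch {

    enum TaskState { WAITING, RUNNING, PAUSED, FAILED, SUCCEEDED }

    enum UserOperation { VIEW_RECORDS, PAUSE, RESUME, RERUN_ABNORMAL }

    static class BatchTask {
        final String name;
        TaskState state = TaskState.WAITING;
        final List<String> executionRecords = new ArrayList<>();
        BatchTask(String name) { this.name = name; }
    }

    // Applies one user operation to a batch task and returns a short status message.
    static String apply(BatchTask task, UserOperation op) {
        switch (op) {
            case VIEW_RECORDS:
                return String.join("\n", task.executionRecords);
            case PAUSE:
                if (task.state == TaskState.RUNNING) task.state = TaskState.PAUSED;
                return task.name + " -> " + task.state;
            case RESUME:
                if (task.state == TaskState.PAUSED) task.state = TaskState.RUNNING;
                return task.name + " -> " + task.state;
            case RERUN_ABNORMAL:
                if (task.state == TaskState.FAILED) task.state = TaskState.WAITING; // re-queued
                return task.name + " -> " + task.state;
            default:
                return task.state.toString();
        }
    }

    public static void main(String[] args) {
        BatchTask t = new BatchTask("settle-accounts");
        t.state = TaskState.RUNNING;
        System.out.println(apply(t, UserOperation.PAUSE));   // settle-accounts -> PAUSED
        System.out.println(apply(t, UserOperation.RESUME));  // settle-accounts -> RUNNING
    }
}
```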
In the above method for executing planned tasks, step S2 includes:
step S21: starting an execution plan;
step S22: finding all task nodes pointed to by the start node and executing the tasks associated with those task nodes in sequence;
step S23: when the task associated with a task node has executed successfully, finding all next-level task nodes pointed to by that task node and executing the tasks associated with them in sequence;
step S24: and so on, until all tasks in the whole task execution flow chart have executed successfully.
In the above method for executing planned tasks, the DFS application service is integrated into the batch dispatching center, and when the batch dispatching center executes a task, the batch task execution process is pulled up by the DFS application service integrated into the batch dispatching center.
In the above method for executing planned tasks, when the task associated with a task node starts to execute, all next-level task nodes pointed to by that task node pull up the relevant pod containers for all of their task partitions to preload them; the corresponding pod applications are started but do not execute their tasks, and only when a pod application is monitored to be in an executable state does it start executing its task.
The invention also provides a system for executing planned tasks based on a batch dispatching center, which comprises:
a drawing unit, which analyzes the relationships among the batch tasks, connects the batch tasks in order to generate a task execution flow chart along a fixed direction, wherein each task node is associated with one batch execution task, and displays the task execution flow chart to the user;
a batch dispatching center, which executes all tasks of the task execution flow chart in sequence, following the order of the nodes of the task execution flow chart;
wherein, during task execution, the execution state of each batch task is marked in real time with different colors in the task execution flow chart, and the user is allowed to operate on each batch task to monitor the task execution process; and
each batch task corresponds to at least one task partition, each task partition pulls up a corresponding pod by calling the DFS application service, and the parameters of each pod are configured uniformly by the batch dispatching center so that each pod has its own independent resource space.
In the above system for executing planned tasks, the operations a user may perform on each batch task to monitor the task execution progress include one or more of the following: viewing execution records, pausing execution, resuming (canceling a pause), and abnormal rerun.
In the above system for executing planned tasks, the batch dispatching center starts an execution plan, finds all task nodes pointed to by the start node and executes the tasks associated with them in sequence; when the task associated with a task node has executed successfully, it finds all next-level task nodes pointed to by that task node and executes the tasks associated with them in sequence; and so on, until all tasks in the whole task execution flow chart have executed successfully.
In the above system for executing planned tasks, the DFS application service is integrated into the batch dispatching center, and when the batch dispatching center executes a task, the batch task execution process is pulled up by the DFS application service integrated into the batch dispatching center.
In the above system for executing planned tasks, when the task associated with a task node starts to execute, all next-level task nodes pointed to by that task node simultaneously pull up the relevant pod containers for all of their task partitions to preload them; the corresponding pod applications are started but do not execute their tasks, and only when a pod application is monitored to be in an executable state does it start executing its task.
Compared with the prior art, the invention has the following effects:
1. The relationships and execution order among the batch tasks are managed and configured uniformly, with the tasks arranged in the form of a task chain diagram, and the execution state and result of each task can be monitored through the task chain diagram.
2. Container pods are pulled up through the DFS. Each pod uses independent resources, such as memory and CPU; the pods do not affect one another, each uses only the resource space allocated to it and does not occupy the resources of other pods. By entering a pod, its detailed information can be viewed, such as the application startup parameters and the memory, CPU, execution time, progress, state and logs of the pod; the parameters of the container pod can be configured by the batch dispatching center and passed in as parameters.
3. When a task node is running, the subsequent tasks of that node pull up their container pods in advance and start the batch applications, then monitor the execution state of the partitions corresponding to the pods; as soon as the monitored state becomes runnable, the application's business program is executed immediately, because the intermediate steps of pulling up the container pod and starting the application were already completed while the previous task was still running.
4. The DFS application server code is integrated into the batch dispatching center server, so when the application is deployed only the batch dispatching center application server needs to be deployed; the DFS application server no longer needs to be deployed separately, which reduces resource cost and maintenance cost. When the batch dispatching center executes tasks, the batch task process can be pulled up and executed through the DFS application code integrated in the batch dispatching center, without requesting an intermediate DFS server to pull up the batch task process, avoiding redundant network requests between the batch dispatching center and the DFS.
Drawings
FIG. 1 is a flow chart of the method for executing planned tasks according to the present invention;
FIG. 2 is a flowchart illustrating the substeps of step S2 in FIG. 1;
FIG. 3 is a task execution flow chart;
FIG. 4 is a diagram of batch task execution after the DFS is integrated with the batch dispatching center;
FIG. 5 is a schematic diagram of the batch dispatching center pulling up a container pod through the DFS;
FIG. 6 is a diagram of the preloading logic architecture;
FIG. 7 is a logic diagram of plan execution, resume, and abnormal rerun;
FIG. 8 is a logic diagram of abnormal rerun.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As used herein, the terms "comprising," "including," "having," "containing," and the like are open-ended terms that mean including, but not limited to.
References to "a plurality" herein include "two" and "more than two".
In the Internet big data era, business in many fields is growing rapidly, and a single application server can no longer satisfy the batch-processing requirements for handling all kinds of data, files, messages and so on. For this reason, a separate batch application processing program is developed for each small module of business. To make the execution order of these batch application processing programs easy to manage and maintain, they are configured in the form of tasks and task partitions; the tasks are connected in series according to their execution order to form the complete task chain of an execution plan, which is configured and used in the form of a diagram. When the execution plan is executed, the tasks are executed in turn according to the order of the task chain. During execution, the planned task chain involves operations such as pulling up container pods, plan execution, resume, abnormal rerun and preloading.
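As a purely illustrative aid, the following Java sketch models the task and task-partition configuration and its chaining into an execution plan as just described. All identifiers (TaskPartition, ExecutionPlan, the parameter names) are invented for the sketch and are not taken from the patent.

```java
// Illustrative data model for configuring batch tasks as task + task-partition records
// and chaining them into an execution plan. Names are assumptions, not from the patent.
import java.util.List;
import java.util.Map;

public class ExecutionPlanConfigSketch {

    // One batch application partition; each partition later runs in its own pod.
    record TaskPartition(String partitionId, Map<String, String> podParameters) {}

    record BatchTask(String taskId, List<TaskPartition> partitions) {}

    // The execution plan chains tasks front-to-back: each entry lists the task ids
    // that may only start after the key task has finished successfully.
    record ExecutionPlan(List<BatchTask> tasks, Map<String, List<String>> successors) {}

    public static void main(String[] args) {
        BatchTask load = new BatchTask("load-files",
                List.of(new TaskPartition("load-files-0", Map.of("memory", "512Mi", "cpu", "500m"))));
        BatchTask settle = new BatchTask("settle-accounts",
                List.of(new TaskPartition("settle-accounts-0", Map.of("memory", "1Gi", "cpu", "1"))));

        ExecutionPlan plan = new ExecutionPlan(
                List.of(load, settle),
                Map.of("load-files", List.of("settle-accounts"))); // settle runs after load succeeds
        System.out.println(plan);
    }
}
```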
Specifically, referring to fig. 1, fig. 1 is a flow chart of the method for executing planned tasks according to the present invention. As shown in fig. 1, the method for executing planned tasks based on a batch dispatching center of the present invention comprises:
Step S1: analyzing the relationships among the batch tasks, connecting the batch tasks in order to generate a task execution flow chart along a fixed direction, wherein each task node is associated with one batch execution task, and displaying the task execution flow chart to the user. In this embodiment the task execution flow chart is drawn manually, but the present invention is not limited to this; in other embodiments the task execution flow chart may also be drawn automatically by software.
Referring to fig. 3, fig. 3 is a task execution flow chart. As shown in fig. 3, the planned task chain configuration of the present invention is essentially a task execution flow chart in which the relationships between batch tasks are connected in series, in a single direction from start to end, in the form of a chain diagram. The task execution flow chart comprises a start node, task nodes and an end node, and each task node is associated with a batch execution task.
In this embodiment, the task execution flow chart may further include sink nodes. A sink node is an empty task node that performs no task execution processing; it keeps the display tidy when the predecessor-successor relationships among multiple tasks in the task execution flow chart cross.
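The flow chart structure described above can be pictured with a minimal sketch such as the following, assuming a simple adjacency-list graph. The node types and all class names are illustrative assumptions only.

```java
// A minimal sketch of the task execution flow chart: a directed graph whose nodes are a
// start node, task nodes, optional empty "sink" nodes used purely for layout, and an end node.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TaskFlowChartSketch {

    enum NodeType { START, TASK, SINK, END }   // SINK carries no batch task

    record Node(String id, NodeType type) {}

    static class FlowChart {
        final Map<String, Node> nodes = new HashMap<>();
        final Map<String, List<String>> edges = new HashMap<>(); // fixed direction: node -> successors

        void addNode(Node n) { nodes.put(n.id(), n); edges.put(n.id(), new ArrayList<>()); }
        void connect(String from, String to) { edges.get(from).add(to); }
    }

    public static void main(String[] args) {
        FlowChart chart = new FlowChart();
        chart.addNode(new Node("start", NodeType.START));
        chart.addNode(new Node("load-files", NodeType.TASK));
        chart.addNode(new Node("sink-1", NodeType.SINK));     // empty node, only for tidy layout
        chart.addNode(new Node("settle-accounts", NodeType.TASK));
        chart.addNode(new Node("end", NodeType.END));
        chart.connect("start", "load-files");
        chart.connect("load-files", "sink-1");
        chart.connect("sink-1", "settle-accounts");
        chart.connect("settle-accounts", "end");
        System.out.println(chart.edges);
    }
}
```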
Step S2: executing all tasks of the task execution flow chart in sequence, following the order of the nodes of the task execution flow chart; during task execution, marking the execution state of each batch task in real time with different colors in the task execution flow chart, and allowing the user to operate on each batch task to monitor the task execution process; and
each batch task corresponds to at least one task partition, each task partition pulls up a corresponding pod by calling the DFS application service, and the parameters of each pod are configured uniformly by the batch dispatching center so that each pod has its own independent resource space.
Specifically, when the plan is executed, the start node finds all execution tasks it points to and they are executed in sequence; after a task node has executed successfully, the program automatically continues from that node, finding all execution tasks it points to and executing them in sequence, and so on; the plan has not executed successfully until all tasks in the whole task chain have executed successfully.
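By way of illustration only, the following Java sketch shows one possible form of this traversal, which steps S21 to S24 below formalize. It assumes tasks run synchronously and always succeed, and it does not handle join nodes with several predecessors; the method and variable names are invented, not taken from the patent.

```java
// Sketch of the task-chain traversal under simplifying assumptions; in the real dispatching
// center each task runs in pods and success is determined by monitoring them.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class PlanExecutionSketch {

    // successors maps a node to the next-level nodes it points to; "start" is the start node.
    static void executePlan(Map<String, List<String>> successors) {
        Deque<String> queue = new ArrayDeque<>(successors.getOrDefault("start", List.of())); // S22
        Set<String> done = new HashSet<>();
        while (!queue.isEmpty()) {
            String node = queue.poll();
            if (done.contains(node)) continue;
            boolean success = executeTask(node);          // run the batch task tied to this node
            if (!success) throw new IllegalStateException("task failed: " + node);
            done.add(node);                               // S23: only after success ...
            queue.addAll(successors.getOrDefault(node, List.of())); // ... enqueue next-level nodes
        }
        // S24: reaching this point means every task in the flow chart executed successfully
    }

    static boolean executeTask(String node) {
        System.out.println("executing " + node);
        return true;
    }

    public static void main(String[] args) {
        executePlan(Map.of(
                "start", List.of("load-files"),
                "load-files", List.of("settle-accounts"),
                "settle-accounts", List.of()));
    }
}
```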
Referring to fig. 2, fig. 2 is a flowchart illustrating the substeps of step S2 in fig. 1. As shown in fig. 2, step S2 includes:
step S21: starting an execution plan;
step S22: finding all task nodes pointed to by the start node and executing the tasks associated with those task nodes in sequence;
step S23: when the task associated with a task node has executed successfully, finding all next-level task nodes pointed to by that task node and executing the tasks associated with them in sequence;
step S24: and so on, until all tasks in the whole task execution flow chart have executed successfully.
When the task associated with a task node starts to execute, all next-level task nodes pointed to by that task node simultaneously pull up the relevant pod containers for all of their task partitions to preload them; the corresponding pod applications are started but do not execute their tasks, and only when a pod application is monitored to be in an executable state does it start executing its task. Specifically, a pod is equivalent to a virtual machine: a batch application can be started inside it, and that application executes the corresponding task execution partitions under the task node.
Referring to fig. 4-5, fig. 4 is a diagram of batch task execution after the DFS is integrated with the batch dispatching center, and fig. 5 is a schematic diagram of the batch dispatching center pulling up a container pod through the DFS server. As shown in fig. 4-5, the DFS application service is integrated into the batch dispatching center; when executing a task, the batch dispatching center pulls up the process that executes the batch task, specifically a pod in which the batch task process runs, through the DFS application service integrated into it. By integrating the DFS server into the batch dispatching center, the invention avoids deploying the DFS server separately; only the batch dispatching center application server needs to be started. When the batch dispatching center executes tasks, it does not need to request an intermediate DFS server to pull up the batch task execution process; it directly uses the DFS server code integrated in the batch dispatching center to pull up that process. Because the batch task process is pulled up by the batch dispatching center itself when it executes tasks, network requests to the DFS server are reduced, the efficiency of batch task execution is improved, data management is unified, and the related DFS server configuration is moved into the batch dispatching center.
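The following sketch illustrates, under assumed names, the integration point described above: the DFS application service becomes an in-process interface that the dispatching center calls directly, with no HTTP request to a separately deployed server. The interface and class names are assumptions for illustration only.

```java
// Sketch of the integration point: the DFS code runs inside the dispatching-center process,
// so pulling up a batch task pod is a local method call rather than a network request.
import java.util.Map;

public class IntegratedDfsSketch {

    // The service the dispatching center calls; before integration this sat behind its own server.
    interface DfsApplicationService {
        String pullUpPod(String partitionId, Map<String, String> podParameters); // returns the pod id
    }

    // Trivial in-process implementation standing in for the real DFS code.
    static class InProcessDfsService implements DfsApplicationService {
        @Override
        public String pullUpPod(String partitionId, Map<String, String> podParameters) {
            System.out.println("creating pod for " + partitionId + " with " + podParameters);
            return "pod-" + partitionId;
        }
    }

    public static void main(String[] args) {
        DfsApplicationService dfs = new InProcessDfsService();   // no network hop involved
        String podId = dfs.pullUpPod("settle-accounts-0", Map.of("cpu", "1", "memory", "1Gi"));
        System.out.println("started " + podId);
    }
}
```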
During execution, the batch dispatching center starts an execution plan manually or on a schedule, and the program executes tasks in the order of the task chain configured in the execution plan. Each task corresponds to one or more task partitions, and each task partition pulls up its corresponding container pod by calling the REST API (representational state transfer API) interface of the DFS during task execution. Each pod runs an independent application program, and the resources, application parameters and so on used by the pods differ according to the dispatching center's configuration. Every time the batch dispatching center pulls up a container pod, the pod ID is recorded in the table field associated with the task execution partition, and the batch dispatching center can view the details of the corresponding pod by viewing the task execution partition records.
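Building on the interface sketched above, the next sketch shows, with invented identifiers, how each task partition could pull up its own pod with the parameters supplied by the dispatching center and how the returned pod ID could be recorded against the partition for later inspection.

```java
// Sketch of the per-partition dispatch: one pod per task partition, parameters supplied by
// the dispatching center, pod id recorded per partition. Names are illustrative assumptions.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PartitionDispatchSketch {

    // Stand-in for the DFS application service integrated in the dispatching center.
    interface PodLauncher {
        String pullUpPod(String partitionId, Map<String, String> podParameters);
    }

    // Maps each task partition to the id of the pod that executes it, mirroring the
    // "table field" the description says is written when a pod is pulled up.
    static final Map<String, String> partitionPodIds = new HashMap<>();

    static void dispatchTask(String taskId, List<String> partitionIds,
                             Map<String, String> podParameters, PodLauncher launcher) {
        for (String partitionId : partitionIds) {
            String podId = launcher.pullUpPod(partitionId, podParameters); // one pod per partition
            partitionPodIds.put(partitionId, podId);                        // record for inspection
        }
    }

    public static void main(String[] args) {
        PodLauncher launcher = (partitionId, params) -> "pod-" + partitionId; // fake launcher
        dispatchTask("settle-accounts", List.of("settle-accounts-0", "settle-accounts-1"),
                Map.of("cpu", "1", "memory", "1Gi"), launcher);
        System.out.println(partitionPodIds);
    }
}
```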
Further, referring to fig. 6, fig. 6 is a diagram of the preloading logic architecture. As shown in fig. 6, the present invention also provides a preloading logic architecture: during task execution, the subsequent task nodes pointed to by a task node immediately pull up the relevant containers and start the applications for all of their task partitions, and when a pod application is monitored to be in an executable state, the monitoring is exited and the business program is executed.
Specifically, task preloading means that when a task node starts to run during the execution of the task chain, the subsequent task nodes pointed to by that node immediately pull up the relevant container pods for all of their task partitions and start the applications; the tasks started in this way are called preloaded tasks. A preloaded pod application does not immediately execute its business processing; instead it stays in a monitoring loop, and only when the pod application is monitored to be in an executable state does it exit the monitoring and execute the business program.
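A minimal sketch of this preloading behaviour follows, using a thread and an AtomicBoolean as stand-ins for the real pod application and the dispatching center's executable signal; none of the names come from the patent.

```java
// Sketch of preloading: the pod application for a successor task is started early, sits in a
// monitoring loop, and only runs its business program once the partition is marked executable.
import java.util.concurrent.atomic.AtomicBoolean;

public class PreloadSketch {

    // Flag the dispatching center flips once the predecessor task has finished successfully.
    static final AtomicBoolean executable = new AtomicBoolean(false);

    // What the preloaded pod application does after the container has been pulled up.
    static void preloadedPodMain() throws InterruptedException {
        System.out.println("pod started, application loaded, waiting for executable state ...");
        while (!executable.get()) {          // monitoring loop instead of running business logic
            Thread.sleep(100);
        }
        System.out.println("executable state observed, running business program");
    }

    public static void main(String[] args) throws InterruptedException {
        Thread pod = new Thread(() -> {
            try { preloadedPodMain(); } catch (InterruptedException ignored) { }
        });
        pod.start();                          // successor pod is pulled up while predecessor runs
        Thread.sleep(300);                    // predecessor task finishing its work
        executable.set(true);                 // dispatching center signals the partition is executable
        pod.join();
    }
}
```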
Further, referring to fig. 7-8, fig. 7 is a logic diagram of plan execution, resume and abnormal rerun, and fig. 8 is a logic diagram of abnormal rerun. As shown in fig. 7-8, in this embodiment the operations a user may perform on each batch task to monitor the task execution progress include one or more of the following: viewing execution records, pausing execution, resuming (canceling a pause), and abnormal rerun. Plan execution runs the tasks in the order configured by the task chain; resume and abnormal rerun are operated through the monitoring center, where resume applies to paused tasks and abnormal rerun applies to tasks whose batches ended abnormally.
Abnormal rerun can be divided into rerunning the execution plan, rerunning an execution task, and rerunning a task execution partition, but the invention is not limited to these.
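The three rerun granularities can be pictured with the following illustrative sketch; the enum and the routing logic are assumptions of the sketch, not details from the patent.

```java
// Sketch of routing an abnormal-rerun request to one of the three granularities named above.
public class AbnormalRerunSketch {

    enum RerunScope { EXECUTION_PLAN, EXECUTION_TASK, TASK_PARTITION }

    static void rerun(RerunScope scope, String id) {
        switch (scope) {
            case EXECUTION_PLAN -> System.out.println("re-running every failed task in plan " + id);
            case EXECUTION_TASK -> System.out.println("re-running all partitions of task " + id);
            case TASK_PARTITION -> System.out.println("re-running partition " + id + " in a new pod");
        }
    }

    public static void main(String[] args) {
        rerun(RerunScope.TASK_PARTITION, "settle-accounts-1");
    }
}
```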
The system for executing planned tasks based on a batch dispatching center of the present invention comprises:
a drawing unit, which analyzes the relationships among the batch tasks, connects the batch tasks in order to generate a task execution flow chart along a fixed direction, wherein each task node is associated with one batch execution task, and displays the task execution flow chart to the user;
a batch dispatching center, which executes all tasks of the task execution flow chart in sequence, following the order of the nodes of the task execution flow chart;
wherein, during task execution, the execution state of each batch task is marked in real time with different colors in the task execution flow chart, and the user is allowed to operate on each batch task to monitor the task execution process; and
each batch task corresponds to at least one task partition, each task partition pulls up a corresponding pod by calling the DFS application service, and the parameters of each pod are configured uniformly by the batch dispatching center so that each pod has its own independent resource space.
Further, the operations a user may perform on each batch task to monitor the task execution progress include one or more of the following: viewing execution records, pausing execution, resuming (canceling a pause), and abnormal rerun.
Furthermore, the batch dispatching center starts an execution plan, finds all task nodes pointed to by the start node and executes the tasks associated with them in sequence; when the task associated with a task node has executed successfully, it finds all next-level task nodes pointed to by that task node and executes the tasks associated with them in sequence; and so on, until all tasks in the whole task execution flow chart have executed successfully.
Still further, the DFS application service is integrated into the batch dispatching center, and when executing a task the batch dispatching center pulls up the batch task execution process through the DFS application service integrated into it.
Furthermore, when the task associated with a task node starts to execute, all next-level task nodes pointed to by that task node simultaneously pull up the relevant pod containers for all of their task partitions to preload them; the corresponding pod applications are started but do not execute their tasks, and only when a pod application is monitored to be in an executable state does it start executing its task.
In summary, the invention has the following advantages:
1. After the batch dispatching center integrates the DFS, the DFS server no longer needs to be deployed separately, reducing server resource and maintenance costs; when the batch dispatching center executes tasks, pulling up a task no longer requires accessing an intermediate DFS server, which avoids the corresponding network requests and improves the efficiency of batch task execution.
2. With preloading, while the previous task is still executing, the subsequent task already pulls up its container pod, starts the application and listens for the executable instruction sent by the dispatching center; as soon as the executable instruction is received, application processing starts immediately, shortening the time the subsequent task spends pulling up the container pod and starting the application.
3. The execution relationships among tasks are configured in the form of a task chain diagram, so the tasks of the whole execution plan are visually clear, easy to maintain and easy to manage, and the execution monitoring diagram of the whole execution plan is likewise easy to read.
4. The batch dispatching center executes batch tasks by pulling up container pods; pod process resources are independent and the pods do not preempt each other's resources, the resource sizes can be configured and passed to the batch dispatching center for use, the batch dispatching center can monitor the execution state of a pod to confirm the execution result of a task partition, and it can also enter the container pod, via its task execution partition records, to query detailed execution information.
5. Abnormal rerun can rerun the execution plan, rerun an execution task, or rerun a task execution partition.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (6)

1. A method for executing a planned task based on a batch dispatching center, characterized by comprising the following steps:
step S1: analyzing the relationships among the batch tasks, connecting the batch tasks in order to generate a task execution flow chart along a fixed direction, wherein each task node is associated with one batch execution task, and displaying the task execution flow chart to a user;
step S2: executing all tasks of the task execution flow chart in sequence, following the order of the nodes of the task execution flow chart;
during task execution, marking the execution state of each batch task in real time with different colors in the task execution flow chart, and allowing the user to operate on each batch task to monitor the task execution process; and
each batch task corresponds to at least one task partition, each task partition pulls up a corresponding pod by calling a DFS application service, and the parameters of each pod are configured uniformly by the batch dispatching center so that each pod has an independent resource space;
wherein, when the task associated with a task node starts to execute, all next-level task nodes pointed to by that task node simultaneously pull up the relevant pod containers for all of their task partitions to preload them, starting the corresponding pod applications without executing the related tasks, and only when a pod application is monitored to be in an executable state does it start executing the related task; when a task node is running, the subsequent tasks of that node pull up their container pods in advance and start the batch applications, and monitor the execution state of the partitions corresponding to the pods; as soon as the monitored state becomes runnable, the application's business program is executed immediately, the intermediate steps of pulling up the container pod and starting the application having been completed while the previous task was running; and
wherein the DFS application service is integrated into the batch dispatching center, and when the batch dispatching center executes a task, the batch task process is pulled up and executed through the DFS application service, i.e. the DFS application code, integrated into the batch dispatching center, without requesting the DFS again to pull up the batch task process.
2. The method for executing a planned task according to claim 1, wherein the operations a user may perform on each batch task to monitor the task execution progress comprise one or more of the following: viewing execution records, pausing execution, resuming (canceling a pause), and abnormal rerun.
3. The method for executing a planned task according to claim 1, wherein step S2 includes:
step S21: starting an execution plan;
step S22: finding all task nodes pointed to by the start node and executing the tasks associated with those task nodes in sequence;
step S23: when the task associated with a task node has executed successfully, finding all next-level task nodes pointed to by that task node and executing the tasks associated with them in sequence;
step S24: and so on, until all tasks in the whole task execution flow chart have executed successfully.
4. A system for executing a planned task based on a batch dispatching center, characterized by comprising:
a drawing unit, which analyzes the relationships among the batch tasks, connects the batch tasks in order to generate a task execution flow chart along a fixed direction, wherein each task node is associated with one batch execution task, and displays the task execution flow chart to a user;
a batch dispatching center, which executes all tasks of the task execution flow chart in sequence, following the order of the nodes of the task execution flow chart;
wherein, during task execution, the execution state of each batch task is marked in real time with different colors in the task execution flow chart, and the user is allowed to operate on each batch task to monitor the task execution process; and
each batch task corresponds to at least one task partition, each task partition pulls up a corresponding pod by calling a DFS application service, and the parameters of each pod are configured uniformly by the batch dispatching center so that each pod has an independent resource space;
wherein, when the task associated with a task node starts to execute, all next-level task nodes pointed to by that task node simultaneously pull up the relevant pod containers for all of their task partitions to preload them, starting the corresponding pod applications without executing the related tasks, and only when a pod application is monitored to be in an executable state does it start executing the related task; when a task node is running, the subsequent tasks of that node pull up their container pods in advance and start the batch applications, and monitor the execution state of the partitions corresponding to the pods; as soon as the monitored state becomes runnable, the application's business program is executed immediately, the intermediate steps of pulling up the container pod and starting the application having been completed while the previous task was running; and
wherein the DFS application service is integrated into the batch dispatching center, and when the batch dispatching center executes a task, the batch task process is pulled up and executed through the DFS application service, i.e. the DFS application code, integrated into the batch dispatching center, without requesting the DFS again to pull up the batch task process.
5. The system for executing a planned task according to claim 4, wherein the operations a user may perform on each batch task to monitor the task execution progress comprise one or more of the following: viewing execution records, pausing execution, resuming (canceling a pause), and abnormal rerun.
6. The system for executing a planned task according to claim 5, wherein the batch dispatching center starts an execution plan so that the start node finds all task nodes it points to and the tasks associated with those task nodes are executed in sequence; when the task associated with a task node has executed successfully, all next-level task nodes pointed to by that task node are found and the tasks associated with them are executed in sequence; and so on, until all tasks in the whole task execution flow chart have executed successfully.
CN202010359516.0A 2020-04-30 2020-04-30 Method and system for executing planned tasks based on batch dispatching center Active CN111522630B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010359516.0A CN111522630B (en) 2020-04-30 2020-04-30 Method and system for executing planned tasks based on batch dispatching center

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010359516.0A CN111522630B (en) 2020-04-30 2020-04-30 Method and system for executing planned tasks based on batch dispatching center

Publications (2)

Publication Number Publication Date
CN111522630A CN111522630A (en) 2020-08-11
CN111522630B true CN111522630B (en) 2021-04-06

Family

ID=71904973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010359516.0A Active CN111522630B (en) 2020-04-30 2020-04-30 Method and system for executing planned tasks based on batch dispatching center

Country Status (1)

Country Link
CN (1) CN111522630B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113553066A (en) * 2021-07-23 2021-10-26 国网江苏省电力有限公司 Intelligent task scheduling method based on flow configuration

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103617486A (en) * 2013-11-21 2014-03-05 中国电子科技集团公司第十五研究所 Method and system for conducting dynamic graphical monitoring on complex service processes
CN104679488A (en) * 2013-11-29 2015-06-03 亿阳信通股份有限公司 Flow path customized development platform and method
CN104731645A (en) * 2015-03-19 2015-06-24 蔡树彬 Task scheduling method and device and data downloading method and device
CN105653365A (en) * 2016-02-22 2016-06-08 青岛海尔智能家电科技有限公司 Task processing method and device
CN106227592A (en) * 2016-09-08 2016-12-14 腾讯数码(天津)有限公司 Task call method and task call device
CN108062243A (en) * 2016-11-08 2018-05-22 杭州海康威视数字技术股份有限公司 Generation method, task executing method and the device of executive plan
CN109725785A (en) * 2018-05-08 2019-05-07 中国平安人寿保险股份有限公司 Task execution situation method for tracing, device, equipment and readable storage medium storing program for executing
CN110222284A (en) * 2019-05-05 2019-09-10 福建天泉教育科技有限公司 Multi-page loading method and computer readable storage medium
CN110399208A (en) * 2019-07-15 2019-11-01 阿里巴巴集团控股有限公司 Methods of exhibiting, device and the equipment of distributed task dispatching topological diagram
CN110427252A (en) * 2019-06-18 2019-11-08 平安银行股份有限公司 Method for scheduling task, device and the storage medium of task based access control dependence

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101019209B1 (en) * 2007-04-25 2011-03-04 이화여자대학교 산학협력단 Device of automatically extracting Interface of Embedded Software and Method thereof
CN103098035B (en) * 2010-08-31 2016-04-27 日本电气株式会社 Storage system
CN103812939B (en) * 2014-02-17 2017-02-08 大连云动力科技有限公司 Big data storage system

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103617486A (en) * 2013-11-21 2014-03-05 中国电子科技集团公司第十五研究所 Method and system for conducting dynamic graphical monitoring on complex service processes
CN104679488A (en) * 2013-11-29 2015-06-03 亿阳信通股份有限公司 Flow path customized development platform and method
CN104731645A (en) * 2015-03-19 2015-06-24 蔡树彬 Task scheduling method and device and data downloading method and device
CN105653365A (en) * 2016-02-22 2016-06-08 青岛海尔智能家电科技有限公司 Task processing method and device
CN106227592A (en) * 2016-09-08 2016-12-14 腾讯数码(天津)有限公司 Task call method and task call device
CN108062243A (en) * 2016-11-08 2018-05-22 杭州海康威视数字技术股份有限公司 Generation method, task executing method and the device of executive plan
CN109725785A (en) * 2018-05-08 2019-05-07 中国平安人寿保险股份有限公司 Task execution situation method for tracing, device, equipment and readable storage medium storing program for executing
CN110222284A (en) * 2019-05-05 2019-09-10 福建天泉教育科技有限公司 Multi-page loading method and computer readable storage medium
CN110427252A (en) * 2019-06-18 2019-11-08 平安银行股份有限公司 Method for scheduling task, device and the storage medium of task based access control dependence
CN110399208A (en) * 2019-07-15 2019-11-01 阿里巴巴集团控股有限公司 Methods of exhibiting, device and the equipment of distributed task dispatching topological diagram

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Research and Implementation of a Lua-Based Mobile Internet Middleware System"; Cheng Jun; Wanfang Data; 2018-08-29; p. 24 *

Also Published As

Publication number Publication date
CN111522630A (en) 2020-08-11

Similar Documents

Publication Publication Date Title
CN112379995B (en) DAG-based unitized distributed scheduling system and method
CN111506412B (en) Airflow-based distributed asynchronous task construction and scheduling system and method
CN108733461B (en) Distributed task scheduling method and device
US7979858B2 (en) Systems and methods for executing a computer program that executes multiple processes in a multi-processor environment
US20170295062A1 (en) Method, device and system for configuring runtime environment
CN113569987A (en) Model training method and device
CN109656782A (en) Visual scheduling monitoring method, device and server
CN113312165B (en) Task processing method and device
CN111552556B (en) GPU cluster service management system and method
CN104780146A (en) Resource manage method and device
CN109343939A (en) A kind of distributed type assemblies and parallel computation method for scheduling task
CN107066339A (en) Distributed job manager and distributed job management method
CN112559159A (en) Task scheduling method based on distributed deployment
CN102420709A (en) Method and equipment for managing scheduling task based on task frame
CN113032125A (en) Job scheduling method, device, computer system and computer-readable storage medium
US20220182851A1 (en) Communication Method and Apparatus for Plurality of Administrative Domains
CN111522630B (en) Method and system for executing planned tasks based on batch dispatching center
CN113658351A (en) Product production method and device, electronic equipment and storage medium
CN106648871B (en) Resource management method and system
CN114189439A (en) Automatic capacity expansion method and device
CN113515356A (en) Lightweight distributed resource management and task scheduler and method
CN113220480A (en) Distributed data task cross-cloud scheduling system and method
US8402465B2 (en) System tool placement in a multiprocessor computer
CN110795223A (en) Cluster scheduling system and method for unified resource management
CN110764882A (en) Distributed management method, distributed management system and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant